Leaderboard
Popular Content
Showing content with the highest reputation on 04/25/2015 in all areas
-
Amazing People Compilation | Assassin's Creed FreeRunners In Real Life | The World's Best Parkour And FreeRunners
4 points
-
ø Product name: Steam CS full pack + games, Steam level 15-20-30
ø Payment method (PayPal, SMS, bank transfer): SMS / Paysafe / rechargeable code
ø Amount (the maximum price you are willing to pay for the product): 15-20-25
2 points
-
Name of the song: Way Way
Date of issuance of the song: 2012
Type of song: Rai
Link of song:
1 point
-
¤ In-game name: Bunny
¤ Age: 15
¤ Name: Adrian
¤ City: Darabani
¤ County: Botosani
¤ Country: Romania
¤ Favorite games: CS 1.6, CS:GO, Dota 2
¤ A short description of yourself: Nice design
¤ Where did you hear about CsBlackDevil: LOADING!
¤ Favorite server (community servers only!): ThunderZM
¤ A photo of yourself (if you already have one and want to post it): next time :p
1 point
-
Quantum computers won’t ever outperform today’s classical computers unless they can correct for errors that disrupt the fragile quantum states of their qubits. A team at Google has taken the next huge step toward making quantum computing practical by demonstrating the first system capable of correcting such errors.

Google’s breakthrough originated with the hiring of a quantum computing research group from the University of California, Santa Barbara in the autumn of 2014. The UCSB researchers had previously built a system of superconducting quantum circuits that performed with enough accuracy to make error correction a possibility. That earlier achievement paved the way for the researchers—many now employed at Google—to build a system that can correct the errors that naturally arise during quantum computing operations. Their work is detailed in the 4 March 2015 issue of the journal Nature.

“This is the first time natural errors arising from the qubit environment were corrected,” said Rami Barends, a quantum electronics engineer at Google. “It’s the first device that can correct its own errors.”

Quantum computers have the potential to perform many simultaneous calculations by relying upon quantum bits, or qubits, that can represent information as both 1 and 0 at the same time. That gives quantum computing a big edge over today’s classical computers, which rely on bits that can only represent either 1 or 0. But a huge challenge in building practical quantum computers involves preserving the fragile quantum states of qubits long enough to run calculations.

The solution that Google and UCSB have demonstrated is a quantum error-correction code that uses simple classical processing to correct the errors that arise during quantum computing operations. Such codes can’t directly detect errors in qubits without disrupting their fragile quantum states. But they get around that problem by relying on entanglement, a physics phenomenon that enables a single qubit to share its information with many other qubits through a quantum connection. The codes exploit entanglement with an architecture that includes “measurement” qubits entangled with neighboring “data” qubits.

The Google and UCSB team has been developing a specific quantum error-correction code called the “surface code.” They eventually hope to build a 2-D surface code architecture based on a checkerboard arrangement of qubits, in which the “white squares” would represent the data qubits that perform operations and the “black squares” would represent measurement qubits that can detect errors in neighboring qubits. For now, the researchers have been testing the surface code in a simplified “repetition code” architecture that involves a linear, 1-D array of qubits.

Their unprecedented demonstration of error correction used a repetition code architecture that included nine qubits. They tested the repetition code through the equivalent of 90,000 test runs to gather the necessary statistics about its performance. “This validates years and years of theory to show that error correction can be a practical possibility,” said Julian Kelly, a quantum electronics engineer at Google.

Just as importantly, the team’s demonstration showed that error correction rates actually improved when they increased the number of qubits in the system. That’s exciting news for quantum computing, because it shows that larger systems of qubits won’t necessarily collapse under a growing pile of errors. It means that larger quantum computing systems could be practical.
For instance, the team compared the error rate of a single physical qubit with the error rate of multiple qubits working together to perform logic operations. When they used the repetition code array of five qubits, the logical error rate was 2.7 times lower than the single-physical-qubit error rate. A larger array of nine qubits showed even more improvement, with a logical error rate 8.5 times lower than the single-physical-qubit error rate.

As Kelly explains: “One, we wanted to show a system where qubits are cooperating and outperforming any single qubit. That’s an important milestone. Even more exciting than that, when we go from five to nine qubits, it gets better. If you want to get a better error rate, you add more qubits.”

This first demonstration of error correction shows a clear path forward for scaling up the size of quantum computing systems. But on its own terms, it still falls well short of the error correction rate needed to make quantum computing practical, said Austin Fowler, a quantum electronics engineer at Google. The team would need to improve error correction rates by 5 to 10 times to make quantum computing truly practical. Still, the current quantum computing system managed to maintain coherence in the quantum states of its qubits despite all the uncorrected errors—a fact that left the researchers feeling optimistic about the future. Their work was supported with government funding from the Intelligence Advanced Research Projects Activity and Army Research Office grants.

The latest error correction demonstration would mainly apply to universal logic-gate quantum computers: systems that would represent super-fast versions of today’s classical “gate-model” computers. But Google has also invested in the alternative “quantum annealing” approach of the Canadian company D-Wave Systems. D-Wave’s quantum annealing machines have sacrificed some qubit coherence to scale up quickly in size to the 512-qubit D-Wave Two machine—a system that dwarfs most experimental quantum computing systems containing just several qubits. Google has turned to John Martinis, a professor of physics at the University of California, Santa Barbara, along with his former UCSB researchers now on Google’s payroll, to try building a more stabilized version of D-Wave’s quantum annealing machines.
1 point
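The scaling behavior described above, where adding qubits lowers the logical error rate, can be illustrated with a purely classical toy model of a repetition code: copy one bit onto several physical bits and decode by majority vote. This is only a sketch of the intuition, not the Google/UCSB experiment; the error probability and trial count below are invented for illustration.

```python
import random

def logical_error_rate(n_bits, p_physical, trials=100_000):
    """Monte Carlo estimate of the logical error rate of a classical
    repetition code: each of n_bits copies flips independently with
    probability p_physical, and the decoder takes a majority vote."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p_physical for _ in range(n_bits))
        if flips > n_bits // 2:      # majority of the copies were corrupted
            failures += 1
    return failures / trials

p = 0.2  # deliberately high per-bit error probability, for illustration only
for n in (1, 5, 9):
    print(f"{n} bit(s): logical error rate ~ {logical_error_rate(n, p):.4f}")
```

As long as the physical error rate stays below the code's threshold, going from one to five to nine copies drives the logical error rate down, which is the same qualitative trend as the 2.7x and 8.5x improvements reported for the five- and nine-qubit arrays (the real device, of course, detects quantum errors through entangled measurement qubits rather than a classical vote).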
-
Energy efficiency is (if you’ll pardon the pun) a hot topic. Foundries and semiconductor manufacturers now trumpet their power-saving initiatives with the same fervor they once reserved for clock speed and performance improvements. AMD is no exception to this trend, and the company has just published a new white paper that details the work it’s doing as part of its ’25×20′ project, which aims to increase performance per watt by 25x within five years.

If you’ve followed our discussions on microprocessor trends and general power innovation, much of what the paper lays out will be familiar. The paper steps through hUMA (Heterogeneous Unified Memory Access) and the overall advantages of HSA, as well as the slowing rate of power improvements delivered strictly by foundry process shrinks. The most interesting area for our purposes is the additional information AMD is offering around Adaptive Voltage and Frequency Scaling, or AVFS. Most of these improvements are specific to Carrizo — the Carrizo-L platform doesn’t implement them.

AVFS vs DVFS

There are two primary methods of conserving power in a microprocessor: the aforementioned AVFS, and Dynamic Voltage and Frequency Scaling, or DVFS. Both AMD and Intel have made use of DVFS for over a decade. DVFS uses what’s called open-loop scaling. In this type of system, the CPU vendor determines the optimal voltage for the chip based on the target application and frequency. DVFS is not calibrated to any specific chip — instead, Intel, AMD, and other vendors create a statistical model that predicts what voltage level a chip that’s already verified as good will need to operate at a given frequency.

DVFS is always designed to incorporate a significant amount of overhead. A CPU’s operating temperature will affect its voltage requirements, and since AMD and Intel don’t know whether any given SoC will be operating at 40°C or 80°C, they tweak the DVFS model to ensure a chip won’t destabilize. In practice, this means margins of 10-20% at any given point.
1 point
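The difference between the open-loop guard band described above and a closed-loop, AVFS-style adjustment can be sketched as a toy model. Everything below, including the voltage table, the 15% guard band, and the sensor-reported margins, is an invented illustration and not AMD's actual tables or algorithm.

```python
# Toy comparison of open-loop DVFS vs. closed-loop, AVFS-style scaling.
# All numbers are made up for illustration.

DVFS_TABLE = {          # frequency (GHz) -> nominal voltage (V) from a
    1.0: 0.80,          # statistical model of "known good" parts
    2.0: 0.95,
    3.0: 1.10,
}
GUARD_BAND = 0.15       # fixed open-loop margin (~15%) covering temperature,
                        # aging, and part-to-part variation

def dvfs_voltage(freq_ghz):
    """Open loop: the same padded voltage for every chip at a given frequency."""
    return DVFS_TABLE[freq_ghz] * (1 + GUARD_BAND)

def avfs_voltage(freq_ghz, measured_margin):
    """Closed loop: on-die sensors report how much timing margin this particular
    chip needs right now, so only that margin is added."""
    return DVFS_TABLE[freq_ghz] * (1 + measured_margin)

freq = 2.0
print(f"DVFS @ {freq} GHz: {dvfs_voltage(freq):.3f} V")
print(f"AVFS @ {freq} GHz (cool, fast silicon): {avfs_voltage(freq, 0.03):.3f} V")
print(f"AVFS @ {freq} GHz (hot, slow silicon):  {avfs_voltage(freq, 0.12):.3f} V")
```

Because dynamic power scales roughly with the square of the supply voltage, trimming even a few percent of guard band translates into a meaningful power saving, which is why per-chip, sensor-driven scaling is attractive.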
-
If you’re looking for faster WiFi performance, you want 802.11ac — it’s as simple as that. In essence, 802.11ac is a supercharged version of 802.11n (the current WiFi standard that your smartphone and laptop probably use), offering link speeds ranging from 433 megabits per second (Mbps) all the way up to several gigabits per second. To achieve speeds that are dozens of times faster than 802.11n, 802.11ac works exclusively in the 5GHz band, uses a ton of bandwidth (80 or 160MHz), operates in up to eight spatial streams (MIMO), and employs a technology called beamforming. For more details on what 802.11ac is, and how it will eventually replace wired gigabit Ethernet networking at home and in the office, read on.

How 802.11ac works

Years ago, 802.11n introduced some exciting technologies that brought massive speed boosts over 802.11b and g. 802.11ac does something similar compared with 802.11n. For example, whereas 802.11n supported four spatial streams (4×4 MIMO) and a channel width of 40MHz, 802.11ac can utilize eight spatial streams and has channels up to 80MHz wide — which can then be combined to make 160MHz channels. Even if everything else remained the same (and it doesn’t), this means 802.11ac has 8×160MHz of spectral bandwidth to play with, vs. 4×40MHz — a huge difference that allows it to squeeze vast amounts of data across the airwaves.

To boost throughput further, 802.11ac also introduces 256-QAM modulation (up from 64-QAM in 802.11n), which encodes eight bits per symbol instead of six by using 256 distinct combinations of amplitude and phase on the same carrier. In theory, that improves the spectral efficiency of 802.11ac by a third over 802.11n. Spectral efficiency is a measure of how well a given wireless protocol or multiplexing technique uses the bandwidth available to it. In the 5GHz band, where channels are fairly wide (20MHz+), spectral efficiency isn’t so important. In cellular bands, though, channels are often only 5MHz wide, which makes spectral efficiency very important.

802.11ac also introduces standardized beamforming (802.11n had it, but it wasn’t standardized, which made interoperability an issue). Beamforming is essentially transmitting radio signals in such a way that they’re directed at a specific device. This can increase overall throughput and make it more consistent, as well as reduce power consumption. Beamforming can be done with smart antennae that physically move to track the device, or by modulating the amplitude and phase of the signals so that they destructively interfere with each other, leaving just a narrow, not-interfered-with beam. 802.11ac uses this second method, which can be implemented by both routers and mobile devices. Finally, 802.11ac, like the 802.11 versions before it, is fully backwards compatible with 802.11n and 802.11g — so you can buy an 802.11ac router today, and it should work just fine with your older WiFi devices.

The range of 802.11ac

In theory, on the 5GHz band and using beamforming, 802.11ac should have the same or better range than 802.11n (without beamforming). The 5GHz band, because it penetrates walls and other obstacles less effectively, doesn’t have quite the same range as 2.4GHz (802.11b/g). But that’s the trade-off we have to make: there simply isn’t enough spectral bandwidth in the massively overused 2.4GHz band to allow for 802.11ac’s gigabit-level speeds. As long as your router is well-positioned, or you have multiple routers, it shouldn’t matter a huge amount.
As always, the more important factor will likely be the transmission power of your devices and the quality of their antennae.

How fast is 802.11ac?

And finally, the question everyone wants answered: just how fast is 802.11ac? As always, there are two answers: the theoretical max speed that can be achieved in the lab, and the practical maximum speed that you’ll most likely get at home in the real world, surrounded by lots of signal-attenuating obstacles.

The theoretical max speed of 802.11ac is eight 160MHz, 256-QAM spatial streams, each of which is capable of 866.7Mbps — a grand total of 6,933Mbps, or just shy of 7Gbps. That’s a transfer rate of roughly 870 megabytes per second — more than you can squeeze down a SATA 3 link. In the real world, thanks to channel contention, you probably won’t get more than two or three 160MHz streams, so the max speed comes down to somewhere between 1.7Gbps and 2.5Gbps. Compare this with 802.11n’s max theoretical speed, which is 600Mbps.

Top-performing routers today (April 2015) include the D-Link AC3200 Ultra Wi-Fi Router (DIR-890L/R), the Linksys Smart Wi-Fi Router AC 1900 (WRT1900AC), and the Trendnet AC1750 Dual-Band Wireless Router (TEW-812DRU), as our sister site PCMag reports. With these routers, you can certainly expect some impressive speeds from 802.11ac, but it still won’t replace your wired Gigabit Ethernet network just yet.

In Anandtech’s 2013 testing, a WD MyNet AC1300 802.11ac router (up to three streams) was paired with a range of 802.11ac devices that supported either one or two streams. The fastest data rate was achieved by a laptop with an Intel 7260 802.11ac wireless adapter, which used two streams to reach 364 megabits per second over a distance of just five feet (1.5m). At 20 feet (6m) and through a wall, the same laptop was still the fastest, but this time maxing out at 140Mbps. The listed max speed for the Intel 7260 is 867Mbps (2×433Mbps streams).

In situations where you don’t need the maximum performance and reliability of wired GigE, though, 802.11ac is very compelling indeed. Instead of cluttering up your living room by running an Ethernet cable to the home theater PC under your TV, 802.11ac now has enough bandwidth to wirelessly stream the highest-definition content to your HTPC. For all but the most demanding use cases, 802.11ac is a very viable alternative to Ethernet.

The future of 802.11ac

802.11ac will only get faster, too. As we mentioned earlier, the theoretical max speed of 802.11ac is just shy of 7Gbps — and while you’ll never hit that in a real-world scenario, we wouldn’t be surprised to see link speeds of 2Gbps or more in the next few years. At 2Gbps, you’ll get a transfer rate of 256MB/sec, and if that happens, Ethernet will serve less and less purpose. To reach such speeds, though, chipset and device makers will have to work out how to implement four or more 802.11ac streams, both in terms of software and hardware. We imagine Broadcom, Qualcomm, MediaTek, Marvell, and Intel are already well on their way to implementing four- and eight-stream 802.11ac solutions for integration in the latest routers, access points, and mobile devices — but until the 802.11ac spec is finalized, second-wave chipsets and devices are unlikely to emerge. A lot of work will have to be done by the chipset and device makers to ensure that advanced features, such as beamforming, comply with the standard and are interoperable with other 802.11ac devices.
1 point
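The headline figures quoted above come straight from the 802.11ac (VHT) link-rate arithmetic: data subcarriers × bits per symbol × coding rate ÷ symbol time, multiplied by the number of spatial streams. The short sketch below reproduces the 866.7Mbps per-stream figure and the ~7Gbps eight-stream total for a 160MHz channel, using the standard VHT parameters for that configuration (assuming the short guard interval).

```python
# Back-of-the-envelope 802.11ac (VHT) PHY rates for a 160 MHz channel,
# reproducing the per-stream and aggregate figures quoted above.

DATA_SUBCARRIERS = 468      # data subcarriers in a 160 MHz VHT channel
BITS_PER_SYMBOL  = 8        # 256-QAM carries 8 bits per subcarrier
CODING_RATE      = 5 / 6    # highest VHT coding rate (MCS 9)
SYMBOL_TIME_US   = 3.6      # 3.2 us OFDM symbol + 0.4 us short guard interval

def per_stream_rate_mbps() -> float:
    """PHY rate of a single spatial stream, in Mbit/s."""
    bits_per_ofdm_symbol = DATA_SUBCARRIERS * BITS_PER_SYMBOL * CODING_RATE
    return bits_per_ofdm_symbol / SYMBOL_TIME_US

for streams in (1, 2, 3, 8):
    rate = streams * per_stream_rate_mbps()
    print(f"{streams} stream(s): {rate:,.1f} Mbps (~{rate / 8:,.0f} MB/s)")
```

Running it gives roughly 866.7Mbps for one stream and about 6,933Mbps (around 870MB/s) for eight, matching the theoretical maximum discussed above; real-world rates are far lower because of protocol overhead, contention, and fewer usable streams.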
-
The Tokyo Electric Power Company (TEPCO) has been under intense scrutiny ever since the 2011 meltdown at the Fukushima Daiichi nuclear energy complex. Following an investigation by Japan’s Board of Audit, TEPCO has been told to upgrade its computer systems. That doesn’t sound particularly unusual, except that TEPCO operates more than 48,000 PCs all running Windows XP. Oh, and they’re connected to the Internet.

The Board of Audit is digging into TEPCO’s finances largely because the Japanese government wants the company to pay for ongoing cleanup efforts around Fukushima. That’s no surprise: the 2011 meltdown was the largest nuclear disaster since Chernobyl in 1986, and decommissioning the plant is expected to cost tens of billions of dollars and take 30-40 years.

No one is alleging that Windows XP was the cause of the disaster, of course. Power plant infrastructure runs on more robust embedded platforms, though TEPCO didn’t plan ahead very well in the case of Fukushima. The chain of events that led to the meltdowns has been thoroughly investigated, from the tsunami to the system failures that left the reactors without cooling. The heavy reliance on Windows XP could, however, be seen as more evidence of complacency within TEPCO.

Windows XP was released in 2001 and enjoyed update support from Microsoft for more than a decade, until support was finally cut off in 2014, and that only after several extensions due to the slow uptake of subsequent versions of Windows. A lack of security patches means XP systems will be vulnerable to any and all security flaws that are discovered going forward. This might not be a huge deal if the TEPCO computers weren’t connected to the Internet.

TEPCO was reportedly aware of how dated its systems were (it would be hard not to be), but had actively chosen to keep using XP until at least 2019 as a cost-saving measure. That means TEPCO workers would have been using 18-year-old software by the time it was upgraded. It is possible for businesses to pay Microsoft large sums of money for custom XP support, but obviously TEPCO was not doing that. The Board of Audit calls this out as not only catastrophically unsafe, but not even likely to result in cost savings. Supporting ancient operating systems like this only gets harder as hardware and software move on to more modern platforms.

TEPCO has reportedly agreed to make the upgrades. But really, it shouldn’t have taken a government audit to convince an operator of nuclear power plants that using outdated, insecure computers is a bad idea.
1 point
-
Quite a bit north of, say, an Amazon Fire Stick, the Intel Compute Stick is meant to be a computing device, not merely a media controller, although it can certainly serve that role. It's outfitted a bit more like what you'd expect from a smartphone. And for $149, with Windows 8.1 (the version with Bing, which is intended for low-cost hardware devices), it's a compelling thought.

The Compute Stick is powered by a Bay Trail processor (the Z3735F, essentially a quad-core Atom processor intended for tablets). It comes with 32 GB of eMMC storage and 2 GB of RAM. It has a USB port, HDMI of course (1.4a, using a standard connector), and a microUSB port that provides power, as well as a microSD slot. An Intel spokesperson said that it would eventually be able to draw power over HDMI. Its wireless capabilities are rather pedestrian, at 802.11b/g/n, and it also supports Bluetooth 4.0. There is a lower-cost Compute Stick running Ubuntu Linux, with 8 GB of storage and 1 GB of memory, for $89.

The company sees the Compute Stick as an entry-level computing device that can also be a powerful media player and consumption tool for consumers. For corporations, it can serve as a thin client device. While the first version uses Bay Trail, we'll soon also see it run on Cherry Trail, and, it was hinted, even Core M. The first Compute Sticks ship in March.
1 point
-
GTA Vice City, GTA San Andreas, PES 7, PES 10, PES 11, PES 12, PES 13, FIFA 14, PES 14, Dragon Ball Z: Budokai Tenkaichi 1, Dragon Ball Z: Budokai Tenkaichi 2, Dragon Ball Z: Budokai Tenkaichi 3, Dragon Ball Z: Budokai 3, Dragon Ball Z: Sagas, Dragon Ball Z: Infinite World, Resident Evil 4, Crash: Mind Over Mutant, Street Fighter, Mortal Kombat, God of War, Need for Speed Pursuit, Naruto: Ultimate Ninja 1, Naruto: Ultimate Ninja 2, Naruto: Ultimate Ninja 3, Naruto: Ultimate Ninja 4, Naruto: Ultimate Ninja 5, Naruto Chronicles, Gran Turismo 4, Spider-Man, Counter-Strike 1.6, Crazy Taxi
1 point
-
Pack collected by Freezmy. Use them with pleasure.
http://www17.zippyshare.com/v/39724541/file.html
1 point
About Us
CsBlackDevil Community [www.csblackdevil.com] is a virtual world founded on May 1, 2012, which continues to grow in the gaming world. CSBD has over 65,000 members and is continuously expanding, with users coming from different parts of the world.