Everything posted by DaNGeROuS KiLLeR
-
The downing of a Russian warplane by Turkey threatened to lead to a wholesale breach in the countries’ relations on Thursday, with the Kremlin preparing to sever economic ties and Turkish officials saying they had no reason to apologize. Prime Minister Dmitri A. Medvedev of Russia gave government officials two days to draw up a list of ways to curb economic links and investment projects. That included the possible shelving of a multibillion-dollar deal to build a gas pipeline through Turkey that President Vladimir V. Putin had trumpeted as a welcome alternative route for Russian gas exports to Europe. Mr. Putin and his Turkish counterpart, Recep Tayyip Erdogan, stoked the confrontation by hurling insults at each other and demanding redress. “We have still not heard any comprehensible apologies from the Turkish political leaders, or any offers to compensate for the damage caused, or promises to punish the criminals for their crime,” Mr. Putin said at the Kremlin. He reiterated Russia’s position that the warplane was brought down on Tuesday over Syria, not Turkey. “One gets the impression that the Turkish leaders are deliberately leading Russian-Turkish relations into a gridlock,” Mr. Putin said, adding later in the day: “Turkey was our friend, almost an ally, and it is a shame that this was destroyed in such a foolish manner.” The standoff between the two prideful leaders boded ill for the mission of President François Hollande of France, who met with Mr. Putin in Moscow on Thursday as part of his effort after the Paris attacks to cement an international coalition to confront the Islamic State. Moscow and Ankara had already been divided over the future of President Bashar al-Assad of Syria. Turkey insisted that he step aside, while Russia called Mr. Assad a central ally in the fight against the Islamic State. The downing of the Russian plane inflamed that rift. Mr. Erdogan maintained Thursday that Turkey was protecting its airspace from Russian incursions. “Faced with the same violation today, Turkey would give the same response,” he said. “It’s the country that carried out the violation which should question itself and take measures to prevent it from happening again, not the country that was subjected to a violation.” Later, Mr. Erdogan appeared to soften his remarks somewhat, telling France 24 television: “We might have been able to prevent this violation of our airspace differently.” During a news conference with Mr. Hollande late Thursday, Mr. Putin suggested that the United States, an ally of Turkey, was responsible for the fate of its warplane, since Moscow had passed on information about where and when its bombers would fly. “What did we give this information to the Americans for?” Mr. Putin asked, rhetorically, before adding: “We proceed from the assumption that it will never happen again. Otherwise we don’t need any such cooperation with any country.” Immediately after Turkey shot down the Russian warplane on Tuesday, senior officials in Moscow and Ankara vowed that they wanted to limit any larger conflict. Given that Turkey is a member of NATO, any military confrontation risks pulling in its Western allies. But the economic, geographic and historically competitive ties that bind the two faded empires are facing new strains. At the very least, the tension will hamper chances of resolving the bloody war in Syria.
The Turkish foreign minister, Mevlut Cavusoglu, said that while he had expressed regret over the episode in a telephone call on Wednesday to his Russian counterpart, Sergey V. Lavrov, there would be no apology. “We do not need to apologize on an occasion that we are right,” Mr. Cavusoglu said. Maria Zakharova, the spokeswoman for the Russian Foreign Ministry, objected to the failure of Turkish or NATO officials to offer condolences over the two Russian military men who died after the plane was shot down. She also demanded an explanation from Turkey about the death of the pilot, who was killed after he parachuted from the plane. It is believed he was shot by Turkmen insurgents who live along the border on the Syrian side and who are supported by Ankara. The insurgents have accused the Russian Air Force of hitting their positions especially hard after the downing, in areas distant from Islamic State strongholds. “We think it is justified to intensify our airstrikes,” Mr. Putin said Thursday, suggesting that Russia will respond to the episode with more such attacks rather than with a direct military challenge to Turkey. Even before any formal plans for economic sanctions were drawn up, Russia was already retaliating. Moscow has a long history of suddenly discovering faults with the goods and services of other nations when diplomatic relations sour. Hundreds of trucks bearing Turkish fruits, vegetables and other products were lining up at the Georgian border with Russia, Russian news media reported, as inspections slowed to a crawl and Russian officials suggested there might be a terrorist threat from the goods. “This is only natural in light of Turkey’s unpredictable actions,” Dmitri S. Peskov, the presidential spokesman, told reporters. In the Krasnodar region, a group of 39 Turkish businessmen attending an agriculture exhibition were detained for entering Russia on tourist rather than business visas — a common practice — and were slated for deportation, according to a report on the website of the Rossiyskaya Gazeta newspaper. Government officials announced that a special year of cultural exchanges planned for all of 2016 would be canceled. The biggest question about possible economic fallout hung over major energy projects, including the gas pipeline across the Black Sea and the construction of Turkey’s first nuclear power plant. Alexei Ulyukayev, the minister of economic development, said on Thursday that both the pipeline, known as the Turkish Stream, and the Akkuyu nuclear power plant project might be included on any sanctions list. Gazprom was expected to invest some $10 billion in the pipeline project. Russia had been seeking to build the $22 billion South Stream project to avoid sending gas across Ukraine, given its conflict with its neighbor, but balked at the sharing conditions set by the European Union. The Russian government warned against tourism to Turkey, and most major tour operators stopped selling vacation packages. Turkey is among the most popular destinations for the Russian middle class. Sanctions could be damaging for both countries, even if trade was down in 2015 from a year earlier. Russia was the biggest source of Turkish imports in 2014, some $25 billion or 10 percent of the total, according to an analysis by Renaissance Capital, much of it most likely natural gas. Turkey exported $6 billion worth of goods to Russia in 2014, 4 percent of all exports, and nearly 4.5 million Russians visited last year, according to the analysis.
Russia does not always use a calculator in making sanctions decisions. In 2014, when the West imposed economic sanctions for the Russian annexation of Crimea from Ukraine, the Kremlin responded by banning food from the West. That caused a surge in prices for Russian consumers. Some Russian commentators mocked the prospect of sanctions against Turkey in response to the warplane downing. “Russia’s response to a loss of a military jet, to an actual declaration of war, involved a ban on chicken imports and a ban on its tourists going on vacation to Turkey,” Arkady Babchenko, a Russian journalist, wrote on his Facebook page. “That’s the whole set of tools this ‘energy superpower’ was able to set forth to project geopolitical influence when it came to real matters.” But the Ottoman Empire, Turkey’s ancestor, was a bloody rival of the Russian Empire, and the confrontation over the warplane mostly evoked a patriotic response across social media. “The Turkish people have never been our friends — artful, cunning and hypocritical,” wrote one man on Facebook, while another vowed that “I will not go to Turkey or buy Turkish products.” Other Russians lashed out directly at the man who was clashing with their president. “Erdogan completely lost the sense of reality — no good will come of it — not for him, not for Turkey,” wrote Igor Korotchenko, a military analyst, on Twitter.
-
A laptop is a personal computer that can be easily carried and used in a variety of locations. Many laptops are designed to have all of the functionality of a desktop computer, which means they can generally run the same software and open the same types of files. However, some laptops, such as netbooks, sacrifice some functionality in order to be even more portable. How is a laptop different from a desktop? Because laptops are designed for portability, there are some important differences between them and desktop computers. A laptop has an all-in-one design, with a built-in monitor, keyboard, touchpad (which replaces the mouse), and speakers. This means it is fully functional, even when there are no peripherals attached to it. A laptop is quicker to set up, and there are fewer cables to get in the way. Some newer laptops even have touchscreens, so you may not even need to use a keyboard or mouse. There also is the option of connecting a regular mouse, larger monitor, and other peripherals. This basically turns your laptop into a desktop computer, with one main difference: You can easily disconnect the peripherals and take the laptop with you wherever you go. Here are the main differences you can expect with a laptop. Touchpad: A touchpad—also called a trackpad—is a touch-sensitive pad that lets you control the pointer by making a drawing motion with your finger. Many touchpads now include multi-touch gestures, which allow you to perform specific tasks by making gestures with more than one finger. For example, a pinch gesture is often used to zoom in or out. Battery: Every laptop has a battery, which allows you to use the laptop when it's not plugged in. Whenever you plug in the laptop, the battery recharges. Another benefit of having a battery is that it can provide backup power to the laptop if the power goes out. AC adapter: A laptop usually has a specialized power cable called an AC adapter, which is designed to be used with that specific type of laptop. Some of these cables use magnetic MagSafe connectors that will safely pull out if someone trips over the power cable. This helps to prevent damage to the cable and the laptop. Ports: Most laptops have the same types of ports desktop computers have (such as USB), although they usually have fewer ports to save space. However, some ports may be different, and you may need an adapter in order to use them. For example, the monitor port is often a Mini DisplayPort, which is a smaller version of the normal DisplayPort. Because some ports have a similar appearance, you may need to look at your manual to determine what types of ports your laptop has.
-
Not long after the first personal computers started entering people’s homes, Intel fell victim to a nasty kind of memory error. The company, which had commercialized the very first dynamic random-access memory (DRAM) chip in 1971 with a 1,024-bit device, was continuing to increase data densities. A few years later, Intel’s then cutting-edge 16-kilobit DRAM chips were sometimes storing bits differently from the way they were written. Indeed, they were making these mistakes at an alarmingly high rate. The cause was ultimately traced to the ceramic packaging for these DRAM devices. Trace amounts of radioactive material that had gotten into the chip packaging were emitting alpha particles and corrupting the data. Once uncovered, this problem was easy enough to fix. But DRAM errors haven’t disappeared. As a computer user, you’re probably familiar with what can result: the infamous blue screen of death. In the middle of an important project, your machine crashes or applications grind to a halt. While there can be many reasons for such annoying glitches—including program bugs, clashing software packages, and malware—DRAM errors can also be the culprit. For personal-computer users, such episodes are mostly just an annoyance. But for large-scale commercial operators, reliability issues are becoming the limiting factor in the creation and design of their systems. Big Internet companies like Amazon, Facebook, and Google keep up with the growing demand for their services through massive parallelism, with their data centers routinely housing tens of thousands of individual computers, many of which might be working to serve just one end user. Supercomputer facilities are about as big and, if anything, run their equipment even more intensively. In computing systems built on such huge scales, even low-probability failures take place relatively frequently. If an individual computer can be expected to crash, say, three times a year, in a data center with 10,000 computers, there will be nearly 100 crashes a day. Our group at the University of Toronto has been investigating ways to prevent that. We started with the simple premise that before we could hope to make these computers work more reliably, we needed to fully understand how real systems fail. While it didn’t surprise us that DRAM errors are a big part of the problem, exactly how those memory chips were malfunctioning proved a great surprise.
[Illustration: Flavors of Goofs] Most often DRAM errors repeatedly affect the same address—or perhaps just the same row or column—rather than being isolated occurrences.
To probe how these devices sometimes fail, we collected field data from a variety of systems, including a dozen high-performance computing clusters at Argonne National Laboratory, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory. We also obtained information from operators of large data centers, including companies such as Google, about their experiences. We made two key observations: First, although most personal-computer users blame system failures on software problems (quirks of the operating system, browser, and so forth) or maybe on malware infections, hardware was the main culprit. At Los Alamos, for instance, more than 60 percent of machine outages came from hardware issues. Digging further, we found that the most common hardware problem was faulty DRAM. This meshes with the experience of people operating big data centers, DRAM modules being among the most frequently replaced components.
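A quick back-of-the-envelope check of the crash-rate arithmetic quoted above (the three-crashes-per-machine-per-year rate is the article's illustrative figure, not a measurement):

```python
# Three crashes per machine per year, spread across a 10,000-machine data center.
crashes_per_year = 3 * 10_000
print(crashes_per_year / 365)  # about 82 crashes every single day, the order of magnitude quoted above
```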
An individual memory cell in a modern DRAM chip is made up of one capacitor and one transistor. Charging or discharging that capacitor stores one bit of information. Unlike static RAM, which keeps its data intact as long as power is applied to it, DRAM loses information, because the electric charge used to record the bits in its memory cells slowly leaks away. So circuitry must refresh the charge state of each of these memory cells many times each second—hence the appellation “dynamic.” Although having to include refresh circuitry complicates the construction of memory modules, the advantage of DRAM is that the capacitors for the memory cells can be made exceedingly small, which allows for billions of bits on a single chip. A DRAM error arises when one or more bits that are written in one way end up being read in another way. Most consumer-grade computers offer no protection against such problems, but servers typically use what is called an error-correcting code (ECC) in their DRAM. The basic strategy is that by storing more bits than are needed to hold the data, the chip can detect and possibly even correct memory errors, as long as not too many bits are flipped simultaneously. But errors that are too severe can still cause machines to crash. Although the general problem is well known, what stunned us when analyzing data from Lawrence Livermore, Argonne, and Google (and a few other sites later on) was how common these errors are. Between 12 percent and 45 percent of machines at Google experience at least one DRAM error per year. This is orders of magnitude more frequent than earlier estimates had suggested. And even though the machines at Google all employ various forms of ECC, between 0.2 percent and 4 percent of them succumb to uncorrectable DRAM errors each year, causing them to shut down unexpectedly. How can a problem whose existence has been known for more than three decades still be so poorly understood? Several reasons. First off, although these failures occur more often than anybody would like, they’re still rare enough to require very large data sets to obtain statistically significant frequency estimates: You have to study many thousands of machines for years. As a result, most reliability estimates for DRAM are produced in the lab—for example, by shooting particle beams at a device to simulate the effects of cosmic rays, which were long thought to be the main cause of these errors. The idea was that a high-energy cosmic ray would hit a gas molecule in the atmosphere, giving rise to a zoo of other particles, some of which could in turn cause bit errors when they hit a DRAM chip. Another reason it’s so hard to get data on memory failures “in the wild” is that manufacturers and companies running large systems are very reluctant to share information with the public about hardware failures. It’s just too sensitive. DRAM errors can usually be divided into two broad categories: soft errors and hard errors. Soft errors occur when the physical device is perfectly functional but some transient form of interference—say, a particle spawned by a cosmic ray—corrupts the stored data. Hard errors reflect problems with the physical device itself, where, for example, a specific bit in DRAM is permanently stuck at 0 or 1. The prevailing wisdom is that soft errors are much more common than hard errors. So nearly all previous research on this topic focused on soft errors. Curiously, before we began our investigations, there were no field studies available even to check this assumption. 
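To make the extra-bits idea behind ECC concrete, here is a minimal sketch of a textbook Hamming(7,4) single-error-correcting code. It only illustrates the principle; real server memory uses wider codes (commonly single-error-correct, double-error-detect over 64-bit words), and nothing below is the scheme of any particular DRAM module.

```python
# Hamming(7,4): 4 data bits are stored with 3 parity bits, so any single flipped
# bit in the 7-bit codeword can be located and corrected.

def encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4              # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4              # parity over positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4              # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def decode(code):
    s1 = code[0] ^ code[2] ^ code[4] ^ code[6]   # recheck parity group 1
    s2 = code[1] ^ code[2] ^ code[5] ^ code[6]   # recheck parity group 2
    s4 = code[3] ^ code[4] ^ code[5] ^ code[6]   # recheck parity group 4
    pos = s1 + 2 * s2 + 4 * s4                   # syndrome = 1-based position of the bad bit, 0 if clean
    if pos:
        code[pos - 1] ^= 1                       # flip the bad bit back
    return (code[2], code[4], code[5], code[6]), pos

word = encode(1, 0, 1, 1)
word[5] ^= 1                        # simulate a single flipped bit
data, bad_pos = decode(word)
print(data, bad_pos)                # (1, 0, 1, 1) 6 -> data recovered, error located
```

If two bits flip at once, the syndrome points at the wrong position, which is the small-scale version of why errors that flip more bits than the code can handle still bring machines down.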
[Illustration: In the Crosshairs] Once a DRAM error occurs, the likelihood of other errors in the same row or column of memory cells increases. The likelihood of errors even grows for nearby rows or columns, as seen in this “heat map” of error correlations.
To test this widely held belief—indeed, to answer many open questions about memory errors—we examined large-scale computing systems with a wide variety of workloads, DRAM technologies, and protection mechanisms. Specifically, we obtained access to a subset of Google’s data centers, two generations of IBM’s Blue Gene supercomputer (one at Lawrence Livermore and one at Argonne), as well as the largest high-performance cluster in Canada, housed at the SciNet supercomputer center at the University of Toronto. Most of these facilities were already logging the relevant data, although we needed to work with SciNet’s operators on that. In total, we analyzed more than 300 terabyte-years of DRAM usage. There was some unquestionably good news. For one, high temperatures don’t degrade memory as much as people had thought. This is valuable to know: By letting machines run somewhat hotter than usual, big data centers can save on cooling costs and also cut down on associated carbon emissions. One of the most important things we discovered was that a small minority of the machines caused a large majority of the errors. That is, the errors tended to hit the same memory modules time and again. Indeed, we calculated that the probability of having a DRAM error nearly doubles after a machine has had just two such errors. This was startling, given the prevailing assumption that soft errors are the dominant failure mode. After all, if most errors come from random events such as cosmic rays, each DRAM memory chip should have an equal chance of being struck, leading to a roughly uniform distribution of errors throughout the monitored systems. But that wasn’t happening. To understand more about our results, you need to know that a DRAM device is organized into several memory banks, each of which can be thought of as a 2-D array of memory cells laid out in neat rows and columns. A particular cell inside an array can be identified by two numbers: a row index and a column index. We found that more than half the memory banks suffering errors were failing repeatedly at the same row and column address—that is, at the location of one iffy cell. Another significant fraction of the errors cropped up in the same row or same column each time, although the exact address varied. It’s unlikely, of course, that the same location on a device would be hit twice by an errant nuclear particle. Extremely unlikely. This is clear evidence that, contrary to prevailing notions, hard DRAM errors are more common than soft ones. What are we to make of the unexpected prevalence of hard errors? The bad news is that hard errors are permanent. The good news is that they are easy to work around. If errors take place repeatedly in the same memory address, you can just blacklist that address. And you can do that well before the computer crashes. Remember, the only errors that really matter are the ones that flip too many bits for ECC to correct. So errors that corrupt fewer bits could be used as an early warning that drastic measures should be taken before a crash occurs. Our investigation showed that this strategy could indeed work well: More than half of the catastrophic multibit errors followed earlier errors that were less severe and thus correctable.
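That early-warning observation suggests a very simple mechanism. The sketch below is an illustration of the idea rather than any monitoring code from the study: it counts ECC-corrected errors per cell address and flags an address as a likely hard fault once it repeats.

```python
from collections import Counter

error_counts = Counter()   # (bank, row, column) -> corrected errors seen so far
suspect_cells = set()

def report_corrected_error(bank, row, column, repeat_threshold=2):
    """Log one ECC-corrected error and flag the cell once it repeats."""
    addr = (bank, row, column)
    error_counts[addr] += 1
    # Two hits at the same cell are far more likely a stuck bit than two cosmic rays.
    if error_counts[addr] >= repeat_threshold:
        suspect_cells.add(addr)
    return addr in suspect_cells

# Hypothetical event stream pulled from an error log:
for event in [(0, 41, 7), (3, 12, 99), (0, 41, 7)]:
    if report_corrected_error(*event):
        print("likely hard fault, candidate for blacklisting:", event)
```

In practice the flagged addresses would feed whatever blacklisting or page-retirement machinery the system supports, which is where the next part of the article picks up.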
So with the proper mechanisms in place, most crash-inducing DRAM glitches could be prevented. Indeed, rather than just relying on ECC or other expensive kinds of correction hardware, computer operating systems themselves could help to protect against memory errors.
[Illustration: Rows and Columns] This image shows the internal makeup of a typical DRAM chip, in this case Micron Technology’s MT4C1024, which stores 2^20 bits of information (1 mebibit).
Operating systems typically divide the computer’s main memory into areas known as pages (usually 4 kilobytes in size). And from the operating system’s point of view, the majority of errors come from a very small fraction of the available pages. Indeed, more than 85 percent of the errors come from just 0.0001 percent of all the pages. By removing those problematic pages from use, the operating system could prevent most of the risk of errors without giving up much memory capacity. Although others have previously proposed this very tactic, known as page retirement, it’s seldom used. None of the organizations we worked with for our DRAM study employed it. One reason might be that people just didn’t realize how often DRAM errors tend to crop up in the same page of memory. After our research brought that fact to light, Facebook adopted page retirement in its data centers. But it’s still not widely used, perhaps because there’s still some confusion about what sort of page-retirement schemes would work best. To help clarify that issue, we investigated five simple policies. Some were as conservative as “Retire a page when you see the first error on it.” Others involve trying to prevent row or column errors by retiring the entire row or column as soon as one memory cell starts to show problems. We used the data we collected to see how well such policies might protect real computer systems, and we found that almost 90 percent of the memory-access errors could have been prevented by sacrificing less than 1 megabyte of memory per computer—a tiny fraction of what is typically installed. Sure, a technician could replace the entire memory module when it starts to have errors. But it would probably have only a few bad cells. Page retirement could isolate the regions of memory prone to errors without sacrificing the other parts of an otherwise functional module. Indeed, applying sensible page-retirement policies in large data centers and supercomputing facilities would not only prevent the majority of machine crashes, it would also save the owners money. The same applies to consumer gadgets. With the growing ubiquity of smartphones, tablets, wearable electronics, and other high-tech gear, the number of devices that use DRAM memory is skyrocketing. And as DRAM technology advances, these devices will contain ever-larger quantities of memory, much of which is soldered straight onto the system board. A hard error in such DRAM would normally require replacing the entire gizmo. So having the software retire problematic parts of memory in such an environment would be especially valuable. Had we accepted the received wisdom that cosmic rays cause most DRAM errors, we would never have started looking at how these chips perform under real-world conditions. We would have continued to believe that distant events far off in the galaxy were the ultimate cause of many blue screens and server crashes, never realizing that the memory errors behind them usually stem from underlying problems in the way these chips are constructed.
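As a rough feel for how page retirement trades memory for reliability, here is a toy replay of the most conservative policy mentioned above, retiring a page on its first error. The event log and numbers are invented for illustration; the study evaluated such policies against the collected field data.

```python
PAGE_SIZE_KB = 4   # typical operating-system page size, as noted above

def score_first_error_policy(events):
    """events: list of (page_number, correctable) tuples in time order."""
    retired = set()
    prevented = uncorrectable = 0
    for page, correctable in events:
        if not correctable:
            uncorrectable += 1
            if page in retired:
                prevented += 1       # the page had already been pulled from use
        if page not in retired:
            retired.add(page)        # retire on the very first error, however mild
    return prevented, uncorrectable, len(retired) * PAGE_SIZE_KB

# Hypothetical log: page 17 throws correctable errors before a fatal one.
log = [(17, True), (902, True), (17, True), (17, False), (44, False)]
print(score_first_error_policy(log))  # (1, 2, 12): 1 of 2 crashes avoided for 12 KB of retired memory
```

Scaled up to the field data, policies of this kind are what let almost 90 percent of the errors be avoided for less than a megabyte of retired memory per machine.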
In that sense, DRAM chips are a little like people: Their faults are not so much in their stars as in themselves. And like so many people, they can function perfectly well once they compensate for a few small flaws. This article originally appeared in print as “Battling Borked Bits”
-
Given that it is basically a big iPad with a better keyboard option and stylus support, the iPad Pro has been getting decent reviews. It can be classified as a first-generation product -- and as such, it does far better than most. However, there have been a huge number of companies in Apple's space that have gone under or stalled when they switched from focusing exclusively on the consumer and tried to do IT. In fact, I can't really point to any firm that has done that well. So, the question is, will this pivot change Apple for the worse? I don't think so. Apple may have figured out how to sidestep this problem. I'll share my thoughts on that and close with my product of the week: the Qualcomm Snapdragon 820, which heralds the biggest change in smartphones this decade. The Tech Curses Along with many other analysts, I believe there are two curses that are alive and well in the tech industry. One is the curse of the palatial new headquarters. For some time, we watched company after company put up a palatial new corporate office only to fail shortly thereafter. The computer history museum is actually in one of those buildings, which was built for Silicon Graphics. Poor Novell never even got its headquarters built -- it announced its plans, just before imploding. Apple's flying saucer is about as palatial as we've ever seen. The other -- and more pertinent to this piece -- is the flip from consumer focus to corporate focus. Back in the 1980s, the most powerful PC company wasn't Apple or Microsoft. It was Commodore, and it dominated right up until it decided to create a business-focused machine. Apple's biggest failure in that decade was the Lisa, which largely was business-focused. After having the most successful consumer-focused product in Windows 95, Microsoft decided to force-feed its enterprise product, Windows NT, which hadn't been selling that well, to the same audience. It went from nearly 100 percent share to, well, a shadow of that. One of the efforts Steve Jobs killed when he came back to Apple was its business-focused PC effort (granted, along with PDAs, printers, cameras, etc.). With both curses, I think the core problem is focus. In building a palatial headquarters, executives shift focus from product to jockeying for who will get the best office, office appointments and other perks. When shifting to the enterprise, the focus moves from features and functions the user wants to compliance, volume pricing games, and trying not to change things. For the enterprise, excitement is a bad thing. What has made me wonder about the sanity of executives who make this move is that users can be motivated to buy new and interesting things as often as every 12 months, while enterprise buyers would prefer a replacement cycle longer than five years -- often much longer. Oh, and if users want something like an iPad or Windows 95, they'll drag it in over IT's objections. If IT wants something that the users don't want -- like terminals or thin clients -- it generally doesn't sell well. From this, the "smart" technology companies decide, hey let's focus on IT and let the users pound sand, and the result pretty much sucks. I'll bet you can't tell I'm tired of watching people relearn that lesson. What Makes Apple Different Apple is doing a number of really smart things in this case. First, Tim Cook has given up his Mac and is carrying a Windows PC. No, just kidding, he is carrying an iPad Pro. I've often felt the CEO should become a personal advocate for a company's business-oriented products. 
CEOs who do so can ensure that their company's products meet the needs of the user, because they are users -- and their needs won't be ignored. It makes them into real believers in the product, not just extremely well-paid shills. Cook's hands-on experience should ensure the product remains focused on the user, even as the effort pivots to address IT's needs. Second, Apple is not building an enterprise channel -- it is using Cisco's and IBM's channels to move the product. That keeps Apple focused on the user while letting its distribution partners focus on making the enterprise IT shops happy. The only problem with this is that at least some of the enterprise requirements do have to make it back into the product over time to ensure the offering's success -- particularly in areas like security and management. That doesn't seem to be happening yet. However, given history, I think it is far better that some of the enterprise needs be unmet than it would be if Apple started missing user requirements. The iPad Pro There are three initial problems with the iPad Pro, though none of them are deal killers. I expect most will be addressed in the next product if Apple listens. It doesn't have a touchpad on the keyboard. Taking your hands off the keys to touch the screen is annoying when you are moving fast, and given that Apple hasn't put touchscreens on its PCs, this omission seems particularly strange. The lack of a USB port means certain required accessories, like some types of secure memory, USB RSA tokens, and lots of printers can't be used with the device, which will be a problem for a lot of shops where people are trying to use them as laptops. Finally, the feel of the stylus isn't right yet, suggesting Apple did this itself and didn't buy the tip from one of the firms developing in this space for over a decade. On this last, there is a chance that some users actually might find the Apple feel better -- for example, those who don't go back and forth between the iPad and paper, or those who just started with tablets and never developed paper muscle memory. The iPad Pro is not bad for a first shot, but how quickly Apple steps up and addresses these shortcomings will tell us how strong its relationship is with Cisco and IBM, and whether these are true partnerships or partnerships in name only. Wrapping Up The use of Cisco and IBM coupled with Tim Cook's personal engagement with the iPad Pro should ensure the product's success. It really will depend on just how much Cisco and IBM influence future products; it needs to be enough to hit IT minimums, but not enough to break the connection with users -- and the latter is more important than the former. While there are some initial issues with the product, it likely will go down as one of the best first-generation offerings from any company, and that is a strong indication of its future success. In the end, products like the iPad Pro, the Surface Pro, and the Surface Book showcase a renewed focus on creating devices users really can love, and that push the envelope on portability and battery life. For those of us who live off products like this, there is nothing bad about that. Last week I got my updated briefing on the Qualcomm Snapdragon 820, which will show up in phones and a few tablets, robots, drones and cars starting in 2016. It just started shipping.
Qualcomm Snapdragon 820 Processor Typically, we see a 5 percent performance improvement year over year in smartphones, and once in a decade we get a huge jump due to a big technology change -- and this is that part. Qualcomm is forecasting a 30 percent performance jump in processing power with this generation and a 2x or better performance jump in network data speeds. In addition, we'll get better sound (both separation and volume), better pictures (fewer artifacts, better color saturation, less ghosting, better color accuracy), better security (both biometric and antimalware), better charging (long-distance wireless resonance charging and faster charging), and longer battery life. Increasingly, we are living off our phones, and this will make a lot of the things that really are annoying -- like remembering passwords (or getting the damn fingerprint reader to work right), running out of power (or worrying about running out of power), and pictures that suck -- be things of the past. For folks who replace their phones next year with one that uses this part, you'll see a near day-and-night difference if the OEM turns on most of these features, though unfortunately, they often don't. This is only for top-of-the-line phones. If you buy on a budget, 2017 may be the better year for you. Because the Snapdragon 820 will be like putting a supercharger -- I'm a big fan of superchargers -- on your phone, it is my product of the week.
-
[BATTLE ] Mr.Sebby / Levan [Winner Levan]
DaNGeROuS KiLLeR replied to Julian-'s topic in GFX Battles
V2. Effects + Text + Brushes. -
What is your favorite class zombie? (Zombie Plague)
DaNGeROuS KiLLeR replied to Loading's topic in Off Topic
My 2 favorite classes! Regenerator: because its HP keeps refilling! Hunter: because of his jumps! -
Enjoy
-
Best Fails Compilation || FailArmy
DaNGeROuS KiLLeR replied to DaNGeROuS KiLLeR's topic in Weekly funniest things ツ
Enjoy -
Wanna win with the ladies? Against conventional wisdom, it may help to add some garlic to your diet. Consuming a hearty dose of garlic can make men's body odors significantly more attractive to women, according to a new series of studies published in the research journal Appetite. In one of three studies, researchers from Charles University in Prague had half of a group of men eat a cheese sandwich slathered with 12 grams of garlic (about six cloves), while the other half ate a plain cheese sandwich. They were instructed to limit other effects on their body odor, such as refraining from using deodorant or smoking. All the men then wore cotton pads in their armpits to collect sweat and body odor for 12 hours. A week later, the men each ate the opposite sandwich and repeated the experiment. Both sets of their pads were then presented to women, who smelled them and scored the pads on appeal. The pads collected after the men had eaten the hefty dose of garlic were rated to have more "attractive," "pleasant" and "masculine" body odors, according to the study. Their odors were also said to be "less intense." Mmm. These men had consumed 12 grams of garlic, the amount in about 17 garlic breadsticks. And there's a faster way to get the kick: A third study found that store-bought garlic capsules had mostly the same effect. Overall, the 12 grams of fresh garlic consumption took men from a 2.9 to a 3.1 on an attractiveness scale of one to seven. While it doesn't seem like a huge difference, it's big enough for scientists to consider it statistically significant, Psychology Today reports. But as mentioned, a man will have to consume a lot of garlic for that bump in attractiveness. The researchers say these positive results could be due to garlic's antioxidant properties, which have been shown to enhance body odor overall. Or it could be due to the full gamut of garlic health perks -- including cardiovascular and antibacterial benefits -- which women may have learned to sniff out in a healthy mate. Either way, it's one more excuse to eat garlic chicken tonight. Bon appétit!
-
Belgium will remain under the highest level of alert for another week due to an ongoing terrorism threat, but schools and the underground train system will reopen from Wednesday, Belgian Prime Minister Charles Michel said. "The crisis centre decided to maintain the alert level four, which means the threat remains serious and imminent," Michel told a press conference, adding the threat level will be reviewed again next Monday. "We want to thank the people for their calm and understanding," he added. The army and police will continue to be deployed in force and the country will reduce the number of events with large crowds, for fear of a repeat of the Paris gun and suicide bomb attacks on November 13, Michel said. But he added his government was trying to bring the country "back to normal as quickly as possible" while working with the security services. It decided to reopen schools and the underground metro from Wednesday. "For schools, that means that in the coming hours, we will guarantee a level of security everywhere on the country's territory," the prime minister said. "As for the metro, the aim is to reopen the metro gradually, but starting on Wednesday." The rest of the country will remain on alert level three, which means an attack is considered possible and the threat credible. Belgium has been locked down since Saturday with armed police and troops patrolling quiet streets.
-
Weather-related disasters in the past two decades have killed more than 600,000 people and inflicted economic losses estimated at trillions of dollars, the United Nations said on Monday, warning that the frequency and impact of such events was set to rise. The figures were released before a United Nations-backed climate meeting, starting next Monday in Paris, at which more than 120 national leaders will try to rein in greenhouse gas emissions and slow the rise in global temperatures. According to the report from the United Nations Office for Disaster Risk Reduction, the United States has suffered the highest number of weather-related disasters in the past two decades, but China and India have been the most severely affected, enduring floods that had an impact on billions of people. As well as killing hundreds of thousands, weather-related disasters wounded 4.1 billion others and inflicted economic costs well in excess of $1.9 trillion over the two decades, the report found. The United Nations office recorded an average of 335 weather-related disasters every year over the two decades, double the level in the previous 10 years. The report counted events that had killed 10 or more people, affected more than 1,000 and generated appeals for external assistance. “Predictions of more extreme weather in the future almost certainly means that we will witness a continued upward trend in weather-related disasters in the decades ahead,” the report said. In a foreword to the findings, Margareta Wahlstrom, the head of the disaster reduction office, said the findings “underline why it is so important that a new climate change agreement emerges” from the summit meeting in Paris. Citing the rising temperature of the oceans and melting glaciers as two central drivers of extreme weather, Ms. Wahlstrom said that agreement on reducing greenhouse gas emissions would help reduce the huge damage and losses inflicted by disasters linked to climate. The connection between extreme weather and climate change is not always clear. There is strong evidence that the warming climate is creating more frequent and intense heat waves, causing heavier rainstorms, worsening coastal flooding and intensifying some droughts, but for many other types of weather occurrences, the linkage is less clear. Floods accounted for close to half of all the weather-related disasters, affecting 2.3 billion people, mostly in Asia, the report found. Storms had taken the heaviest toll of lives, however, causing about 242,000 recorded deaths, including 138,000 killed by Cyclone Nargis, which struck Myanmar in 2008. Droughts, most acute in Africa, had affected more than a billion people in the past two decades, leading not only to hunger, malnutrition and disease but also to widespread agricultural failure that resulted in long-lasting underdevelopment, the report said. Heat waves had killed 148,000, mostly in Europe, and wildfires had emerged as another climate-related risk, according to the report. About 38 major wildfires in the United States were estimated to have affected more than 108,000 people and caused recorded losses of over $11 billion — numbers the report said were sure to rise when fires that were raging after August 2015, the cutoff point for data, were taken into account. The figure of $1.9 trillion for the worldwide cost of the disasters was drawn up for the United Nations by the Center for Research on the Epidemiology of Disasters, based in Belgium.
The center said that figure was a minimum, however, as data was available for only a little more than a third of the recorded disasters.
-
Welcome To CsbackDevil Enjoy Your Stay Have Fun
-
Welcome To CsbackDevil Enjoy Your Stay Have Fun
-
Bye. Good luck with your future. Hope to see you back soon!
-
Windows 10 makes it easy to customize the look and feel of your desktop. To access the Personalization settings, right-click anywhere on the desktop, then select Personalize from the drop-down menu. The Personalization settings will appear. To change the font size: If you have difficulty seeing the text on your computer, you can increase the font size. Increasing the font size will also increase the size of icons and other items on your desktop. Open the Settings app, then select System. The Display options will appear. Use the slider to select the desired item size. Note that a larger size may interfere with the way some items appear on the screen. Click Apply to save your changes. You may then need to restart your computer for these changes to take effect. To adjust ClearType settings: ClearType allows you to fine-tune how the text on your computer looks, which helps improve readability. From the Display settings, select Advanced display settings. Choose ClearType text below Related Settings. The ClearType dialog box will appear. Follow the instructions, choosing the text that appears best to you.
-
We asked readers to rate their Internet Service Provider based on price, performance, reliability and support. Here are the results for one of the country's oldest communications companies: AT&T. Last month, we reached out to our community, asking readers to tell us what they thought about their Internet service providers (ISPs) in a survey rating price, performance, reliability and customer service. We have the results, and it's time to reveal your ISP ratings with our Tom's Hardware ISP Review! Our survey garnered over 3100 results, with 271 votes reviewing AT&T. We arrived at our scores by calculating an average from the total score for each individual ISP and category using a one- to five-star rating, rounding to the nearest ¼ star. However, we also provide the mathematical average of each ISP’s survey results, for the sake of comparison later. It may become a very close race to see which ISP provides the best service. The first company we're looking at is a mainstay in the communications industry with more than 130 years in the business. That makes it older than any of the other companies we are reviewing. That impressive claim is true...technically. However, the AT&T we know today isn't the same organization now as it was more than a century ago. History The American Telephone and Telegraph Co. began as the Bell Patent Association, a legal entity created in 1874 with the goal of protecting the patent rights of the telephone system's inventor, Alexander Graham Bell. The company was formalized in 1877 and dubbed the Bell Telephone Co. Five years later, a project known as "AT&T Long Lines," the first of its kind, was commissioned to create a nationwide communication network with a viable cost structure. The project was incorporated into a new company in New York state in 1885. AT&T developed and maintained a monopoly on phone services in the United States and Canada throughout most of the 20th century by buying up small communications companies and making a pact with the government to maintain that monopoly status legally. That's not the greatest (or fairest) way of doing business, and in 1984, the massive communications giant was broken up into seven regional companies, referred to as the "Baby Bells." Between 1996 and 2006, a company called SBC Communications (which itself was originally Southwestern Bell Corp., one of the seven companies created from the break-up) acquired four of the seven regional Baby Bell companies, and reincorporated as AT&T Inc. in 2005. When AT&T was considered a monopoly, the government had to step in and break it into seven companies. But rejoining five of them under a new name seems to be fine for now. The company thrives today, boasting an impressive 12.2 million "U-verse" high-speed Internet customers, warranting its spot in our review. Technology Although AT&T is starting to offer fiber-optic Internet service in some regions, it is not widely available yet, and the majority of AT&T's customer base (and our surveyed readers) seem to be using its affordable U-Verse DSL options, offering varying plans with speeds from 1.5 Mb/s to 45 Mb/s.
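For readers curious how the quarter-star rounding described earlier works out in practice, here is a small sketch; the vote values are made up for illustration and are not responses from the actual survey.

```python
def quarter_star_average(votes):
    """Average 1-5 star votes, rounded to the nearest quarter star."""
    mean = sum(votes) / len(votes)
    return round(mean * 4) / 4

reliability_votes = [4, 3, 5, 4, 2, 4, 3]        # hypothetical ratings for one ISP and category
print(quarter_star_average(reliability_votes))   # 3.5 (the raw mean is roughly 3.57)
```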
-
Valve this week rolled out the hardware phase of its road map from bedrooms and basements to living rooms and lounge areas with its long-awaited video game controller, PC link box and Steam boxes. Valve's road map to the living room began with Big Picture mode in 2011, continued with in-home streaming in 2014, and now includes the new Steam hardware. The Linux-based Big Picture mode formatted the Steam digital distribution platform for living room TVs, offering an experience similar to Xbox Live and allowing Steam to stream video games from a workhorse of a host PC to a client device attached to a TV. Valve's new Steam Link serves as that client device, as the set-top-box can relay in real time the video game content a WiFi-connected gaming rig is crunching in another room. The company's Steam boxes -- primarily a philosophy for low-profile, console-like PC -- can sub for Steam Link or stand alone by playing video games locally. The Steam Controller, with its concave trackpads, is meant to cap the living room experience with gamepad elements designed for the needs of PC-specific games -- the ones that require the oh-so-accurate keyboard and mouse combination. With all the pieces in place, Valve's days of reckoning have begun. Its plan for a better alternative to consoles now awaits adoption, and that will depend on the amount of seams end users see, according to Christine Arrington, senior analyst of games at IHS Technology. "As an alternative to consoles, the whole suite of hardware from Steam, including controllers, Steam Machines and HTC Vive, will all have to be highly integrated and work seamlessly," she told TechNewsWorld. "It is too early to tell whether Steam and its partners will be able to achieve that." The Reaction Gamers knew what they were getting when they ordered the Steam Link, which has its performance directly tied to the health and strength of a local network. In-home streaming allowed anyone with a second PC, even a laptop, to give the feature a go. Gamers knew what they were getting when they ordered a Steam-branded machine: a gaming PC in a microATX case. Despite Valve's limited circulation of the Steam Controller, many people were unsure about what they were getting. So far, the reaction has been mixed. After forcing himself to game exclusively with the Steam Controller, Engadget's Sean Buckley had a breakthrough. "At some point, something clicked -- I had more control over the games I was playing than I ever did with a traditional gamepad," he wrote. "My thumb's quick flicks and the subtle aiming motions I employed to the controller's gyro sensor felt natural and nuanced. I felt more immersed, I realized, and I was having more fun." For VentureBeat's Jeff Grubb, there's a lot to like about Valve's gamepad. However, the Steam Controller just seems too exotic or bizarre for him to embrace it fully. "I am not compatible with all of this change, and I think you'll need time to grow accustomed to it as well," he wrote. "I tried all kinds of big games -- Skyrim, The Witcher, Far Cry, and Fallout 4 -- and I don't see how I could ever get this pad to work in a way that I would find acceptable." The Waypoint Though some of the steam from Valve's new machines may have seeped out as Windows 10 rolled in, the company has pushed forward with its plans to push Steam OS and its console-like Steam boxes. 
A Steam box is what was known as a home theater PC, a low-profile computer meant to serve up movies and music to living rooms before Netflix and Spotify erased the blackboard and drew up their own equations. Valve has offered a list of best practices for third parties looking to build Steam machines, but anyone can piece together a computer and rightfully call it a Steam box. That's part of what has been making Steam machines a hard sell for consumers. "I think it is likely that Steam will realize their ambitions with a small, enthusiastic base of PC gamers," IHS Technology's Arrington said. "The initial reactions seem to be that those gamers who build their own rigs will continue to do so." While Valve may struggle to convince PC enthusiasts to put down the thermal paste and micro screwdrivers, the price of the branded Steam boxes also could keep the console-only crowd from coming over. While noting that Valve has the community and library of games to support a solid living room experience, gamers so far have been unconvinced about moving away from their traditional consoles, according to Mike Schramm, head of the qualitative analyst team at EEDAR. "If Valve can leverage its dedicated users and convince them to try a console experience, it may see some success," he told TechNewsWorld. "But it will likely have a hard time convincing console gamers to step away from the Xbox One and the PlayStation 4." The Road Ahead After a soft launch to fulfill a handful of advanced orders last month, Valve's partners released three families of Steam-branded machines that come preloaded with Steam OS. Alienware offered four Steam Machines, relatives of its Windows-based Alpha. The machines -- priced at US$449, $549, $649 and $749 -- leverage custom Nvidia GeForce GTX graphics cards, and each tier offers a step up over the previous one in GPU, RAM and hard drive space. Syber Gaming's family of Steam Machine has three members that are priced at $499, $729 and $1,419. From the low end to the flagship, Syber's machines are spec'd in clearly defined tiers that see processing power and storage stepped up from bottom to top. Zotac's lone offering, which includes a $50 Steam controller, had $100 sliced off its $999 price tag a day after launch. The premium box includes midrange CPU and storage, with an i5 and 8 GB of RAM, but goes all in on GPU with a beastly Nvidia GeForce GTX 960. The latest game consoles from Microsoft and Sony are going for $350, and Nintendo's Wii U is expected to sell for $250 this holiday season. The price points of the pioneering Steam Machines seem like barriers to "converting custom builders," noted IHS Technology's Arrington. That significantly narrows Valve's target market. "Gamers who want a simple experience are typically console gamers," she said, "so there is that small niche of people who want to play PC games, don't want the complexity of PC gaming, and are willing to spend the money for the simple experience."
-
V1. Effect + Text.
-
Best Fails Compilation || FailArmy
DaNGeROuS KiLLeR replied to DaNGeROuS KiLLeR's topic in Weekly funniest things ツ
Enjoy -
Hyundai is recalling about 305,000 of its 2011-12 Sonata models because the automatic transmission could possibly be moved out of park without pressing the brake pedal, allowing the car to roll or drive away, according to a report from the automaker posted this week on the National Highway Traffic Safety Administration’s website. Hyundai attributed the problem to the stop lamp switch. It is the third time since 2009 that Hyundai has recalled vehicles for a malfunctioning brake switch. In this recall, Hyundai said along with the ability to move the transmission out of park without pressing the brake, the brake lights might remain illuminated after the brake pedal was released. It said it was unaware of any accidents or injuries related to the problem. Hyundai told federal regulators that a switch could become stuck, but the automaker was still trying to figure out why. Hyundai said it became aware of the issue because of an unusual number of warranty claims. The automaker has had a series of problems with brake switches. In 2013, Kia and its parent company, Hyundai, recalled about 1.7 million vehicles, including the 2011 Sonata. Hyundai said the switch malfunction could cause problems such as a driver’s not being able to shut off the cruise control by touching the brake. In 2009, Hyundai recalled about 533,000 vehicles for similar problems with the brake light switch.
-
GFX -Battle AXE-, REVAN, Hunk [ Winner Hunk.-™ ]
DaNGeROuS KiLLeR replied to Ragnarok's topic in GFX Battles
V2. Effect + Text + Border + Brushes. -
[Battle] Mr.Sebby and Levan [ Winner RenzO ]
DaNGeROuS KiLLeR replied to RenzO's topic in GFX Battles
V1. Effect + Text. -
Welcome To CsBackDevil Enjoy Your Stay Have Fun