Popular Content
Showing content with the highest reputation on 11/25/2015 in all areas
-
A laptop is a personal computer that can be easily carried and used in a variety of locations. Many laptops are designed to have all of the functionality of a desktop computer, which means they can generally run the same software and open the same types of files. However, some laptops, such as netbooks, sacrifice some functionality in order to be even more portable. Watch the video to learn about the basic parts of a laptop computer.

How is a laptop different from a desktop? Because laptops are designed for portability, there are some important differences between them and desktop computers. A laptop has an all-in-one design, with a built-in monitor, keyboard, touchpad (which replaces the mouse), and speakers. This means it is fully functional even when no peripherals are attached to it. A laptop is quicker to set up, and there are fewer cables to get in the way. Some newer laptops even have touchscreens, so you may not need to use a keyboard or mouse at all.

There is also the option of connecting a regular mouse, larger monitor, and other peripherals. This basically turns your laptop into a desktop computer, with one main difference: you can easily disconnect the peripherals and take the laptop with you wherever you go. Here are the main differences you can expect with a laptop.

Touchpad: A touchpad (also called a trackpad) is a touch-sensitive pad that lets you control the pointer by making a drawing motion with your finger. Many touchpads now include multi-touch gestures, which let you perform specific tasks by making gestures with more than one finger. For example, a pinch gesture is often used to zoom in or out.

Battery: Every laptop has a battery, which allows you to use the laptop when it's not plugged in. Whenever you plug in the laptop, the battery recharges. Another benefit of the battery is that it can provide backup power to the laptop if the power goes out.

AC adapter: A laptop usually has a specialized power cable called an AC adapter, which is designed to be used with that specific type of laptop. Some of these cables use magnetic MagSafe connectors that pull out safely if someone trips over the power cable, which helps prevent damage to both the cable and the laptop.

Ports: Most laptops have the same types of ports desktop computers have (such as USB), although they usually have fewer ports to save space. Some ports may also be different, and you may need an adapter in order to use them. For example, the monitor port is often a Mini DisplayPort, a smaller version of the standard DisplayPort. Because some ports look similar, you may need to check your manual to determine what types of ports your laptop has.

1 point
-
Not long after the first personal computers started entering people’s homes, Intel fell victim to a nasty kind of memory error. The company, which had commercialized the very first dynamic random-access memory (DRAM) chip in 1971 with a 1,024-bit device, was continuing to increase data densities. A few years later, Intel’s then cutting-edge 16-kilobit DRAM chips were sometimes storing bits differently from the way they were written. Indeed, they were making these mistakes at an alarmingly high rate. The cause was ultimately traced to the ceramic packaging for these DRAM devices. Trace amounts of radioactive material that had gotten into the chip packaging were emitting alpha particles and corrupting the data.

Once uncovered, this problem was easy enough to fix. But DRAM errors haven’t disappeared. As a computer user, you’re probably familiar with what can result: the infamous blue screen of death. In the middle of an important project, your machine crashes or applications grind to a halt. While there can be many reasons for such annoying glitches—including program bugs, clashing software packages, and malware—DRAM errors can also be the culprit.

For personal-computer users, such episodes are mostly just an annoyance. But for large-scale commercial operators, reliability issues are becoming the limiting factor in the creation and design of their systems. Big Internet companies like Amazon, Facebook, and Google keep up with the growing demand for their services through massive parallelism, with their data centers routinely housing tens of thousands of individual computers, many of which might be working to serve just one end user. Supercomputer facilities are about as big and, if anything, run their equipment even more intensively. In computing systems built on such huge scales, even low-probability failures take place relatively frequently. If an individual computer can be expected to crash, say, three times a year, then a data center with 10,000 computers will see nearly 100 crashes a day.

Our group at the University of Toronto has been investigating ways to prevent that. We started with the simple premise that before we could hope to make these computers work more reliably, we needed to fully understand how real systems fail. While it didn’t surprise us that DRAM errors are a big part of the problem, exactly how those memory chips were malfunctioning proved a great surprise.

Flavors of Goofs: Most often DRAM errors repeatedly affect the same address—or perhaps just the same row or column—rather than being isolated occurrences.

To probe how these devices sometimes fail, we collected field data from a variety of systems, including a dozen high-performance computing clusters at Argonne National Laboratory, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory. We also obtained information from operators of large data centers, including companies such as Google, about their experiences. We made two key observations: First, although most personal-computer users blame system failures on software problems (quirks of the operating system, browser, and so forth) or maybe on malware infections, hardware was the main culprit. At Los Alamos, for instance, more than 60 percent of machine outages came from hardware issues. Digging further, we found that the most common hardware problem was faulty DRAM. This meshes with the experience of people operating big data centers, DRAM modules being among the most frequently replaced components.
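That scale argument is just arithmetic, and it is easy to check; a minimal sketch, using the article's illustrative figures:

```python
# Back-of-the-envelope check: rare per-machine crashes become
# routine at data-center scale.
crashes_per_machine_per_year = 3
machines = 10_000

crashes_per_day = crashes_per_machine_per_year * machines / 365
print(f"{crashes_per_day:.0f} crashes per day")  # ~82, i.e. "nearly 100"
```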
An individual memory cell in a modern DRAM chip is made up of one capacitor and one transistor. Charging or discharging that capacitor stores one bit of information. Unlike static RAM, which keeps its data intact as long as power is applied to it, DRAM loses information, because the electric charge used to record the bits in its memory cells slowly leaks away. So circuitry must refresh the charge state of each of these memory cells many times each second—hence the appellation “dynamic.” Although having to include refresh circuitry complicates the construction of memory modules, the advantage of DRAM is that the capacitors for the memory cells can be made exceedingly small, which allows for billions of bits on a single chip.

A DRAM error arises when one or more bits that are written in one way end up being read in another way. Most consumer-grade computers offer no protection against such problems, but servers typically use what is called an error-correcting code (ECC) in their DRAM. The basic strategy is that by storing more bits than are needed to hold the data, the chip can detect and possibly even correct memory errors, as long as not too many bits are flipped simultaneously. But errors that are too severe can still cause machines to crash.

Although the general problem is well known, what stunned us when analyzing data from Lawrence Livermore, Argonne, and Google (and a few other sites later on) was how common these errors are. Between 12 percent and 45 percent of machines at Google experience at least one DRAM error per year. This is orders of magnitude more frequent than earlier estimates had suggested. And even though the machines at Google all employ various forms of ECC, between 0.2 percent and 4 percent of them succumb to uncorrectable DRAM errors each year, causing them to shut down unexpectedly.

How can a problem whose existence has been known for more than three decades still be so poorly understood? Several reasons. First off, although these failures occur more often than anybody would like, they’re still rare enough to require very large data sets to obtain statistically significant frequency estimates: You have to study many thousands of machines for years. As a result, most reliability estimates for DRAM are produced in the lab—for example, by shooting particle beams at a device to simulate the effects of cosmic rays, which were long thought to be the main cause of these errors. The idea was that a high-energy cosmic ray would hit a gas molecule in the atmosphere, giving rise to a zoo of other particles, some of which could in turn cause bit errors when they hit a DRAM chip. Another reason it’s so hard to get data on memory failures “in the wild” is that manufacturers and companies running large systems are very reluctant to share information with the public about hardware failures. It’s just too sensitive.

DRAM errors can usually be divided into two broad categories: soft errors and hard errors. Soft errors occur when the physical device is perfectly functional but some transient form of interference—say, a particle spawned by a cosmic ray—corrupts the stored data. Hard errors reflect problems with the physical device itself, where, for example, a specific bit in DRAM is permanently stuck at 0 or 1. The prevailing wisdom is that soft errors are much more common than hard errors. So nearly all previous research on this topic focused on soft errors. Curiously, before we began our investigations, there were no field studies available even to check this assumption.
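To make the ECC idea concrete, here is a minimal sketch of a Hamming(7,4) code, the textbook ancestor of the single-error-correct, double-error-detect (SECDED) codes that server DRAM typically uses. This is an illustration only: real ECC DIMMs protect 64-bit words with 8 check bits, and the function names here are invented for the example.

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits; any single
# flipped bit can be located and corrected.

def encode(d):
    """d: list of 4 data bits -> 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4            # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4            # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """c: 7-bit codeword -> (corrected data, error position or 0)."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # recheck p1
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # recheck p2
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # recheck p3
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else 1-based error position
    c = c[:]
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the bad bit back
    return [c[2], c[4], c[5], c[6]], syndrome

word = [1, 0, 1, 1]
stored = encode(word)
stored[4] ^= 1                       # simulate one flipped DRAM bit
data, pos = decode(stored)
assert data == word                  # single-bit error corrected
print("corrected an error at codeword position", pos)
```

Flip any one of the seven stored bits and the decoder recovers the original data; flip two at once and the syndrome points at the wrong position, which is exactly why sufficiently severe multibit errors can still crash ECC-protected machines.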
In the Crosshairs: Once a DRAM error occurs, the likelihood of other errors in the same row or column of memory cells increases. The likelihood of errors even grows for nearby rows or columns, as seen in this “heat map” of error correlations.

To test this widely held belief—indeed, to answer many open questions about memory errors—we examined large-scale computing systems with a wide variety of workloads, DRAM technologies, and protection mechanisms. Specifically, we obtained access to a subset of Google’s data centers, two generations of IBM’s Blue Gene supercomputer (one at Lawrence Livermore and one at Argonne), as well as the largest high-performance cluster in Canada, housed at the SciNet supercomputer center at the University of Toronto. Most of these facilities were already logging the relevant data, although we needed to work with SciNet’s operators on that. In total, we analyzed more than 300 terabyte-years of DRAM usage.

There was some unquestionably good news. For one, high temperatures don’t degrade memory as much as people had thought. This is valuable to know: By letting machines run somewhat hotter than usual, big data centers can save on cooling costs and also cut down on associated carbon emissions.

One of the most important things we discovered was that a small minority of the machines caused a large majority of the errors. That is, the errors tended to hit the same memory modules time and again. Indeed, we calculated that the probability of having a DRAM error nearly doubles after a machine has had just two such errors. This was startling, given the prevailing assumption that soft errors are the dominant failure mode. After all, if most errors come from random events such as cosmic rays, each DRAM memory chip should have an equal chance of being struck, leading to a roughly uniform distribution of errors throughout the monitored systems. But that wasn’t happening.

To understand more about our results, you need to know that a DRAM device is organized into several memory banks, each of which can be thought of as a 2-D array of memory cells laid out in neat rows and columns. A particular cell inside an array can be identified by two numbers: a row index and a column index. We found that more than half the memory banks suffering errors were failing repeatedly at the same row and column address—that is, at the location of one iffy cell. Another significant fraction of the errors cropped up in the same row or same column each time, although the exact address varied.

It’s unlikely, of course, that the same location on a device would be hit twice by an errant nuclear particle. Extremely unlikely. This is clear evidence that, contrary to prevailing notions, hard DRAM errors are more common than soft ones.

What are we to make of the unexpected prevalence of hard errors? The bad news is that hard errors are permanent. The good news is that they are easy to work around. If errors take place repeatedly in the same memory address, you can just blacklist that address. And you can do that well before the computer crashes. Remember, the only errors that really matter are the ones that flip too many bits for ECC to correct. So errors that corrupt fewer bits could be used as an early warning that drastic measures should be taken before a crash occurs. Our investigation showed that this strategy could indeed work well: More than half of the catastrophic multibit errors followed earlier errors that were less severe and thus correctable.
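This repeat-offender analysis fits in a few lines of code. A sketch, assuming the error log has already been reduced to (machine, bank, row, column) tuples; the node names and addresses below are made up for illustration, and real site logs differ:

```python
from collections import Counter

# Toy error log: each entry is (machine, bank, row, column).
error_log = [
    ("node17", 2, 4091, 512),
    ("node17", 2, 4091, 512),   # same cell again -> hard-error suspect
    ("node03", 0, 128, 77),
    ("node17", 2, 4091, 900),   # same row, different column
]

per_cell = Counter(error_log)
cell_suspects = [addr for addr, n in per_cell.items() if n > 1]

per_row = Counter((m, b, r) for m, b, r, _ in error_log)
row_suspects = [row for row, n in per_row.items() if n > 1]

print("repeating cells:", cell_suspects)   # blacklist candidates
print("repeating rows: ", row_suspects)
```

A cosmic-ray strike hitting the exact same cell twice is vanishingly unlikely, so any address that repeats in a log like this is almost certainly a hard error and a candidate for blacklisting before it escalates.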
So with the proper mechanisms in place, most crash-inducing DRAM glitches could be prevented. Indeed, rather than just relying on ECC or other expensive kinds of correction hardware, computer operating systems themselves could help to protect against memory errors.

Rows and Columns: This image shows the internal makeup of a typical DRAM chip, in this case Micron Technology’s MT4C1024, which stores 2^20 bits of information (1 mebibit).

Operating systems typically divide the computer’s main memory into areas known as pages (usually 4 kilobytes in size). And from the operating system’s point of view, the majority of errors come from a very small fraction of the available pages. Indeed, more than 85 percent of the errors come from just 0.0001 percent of all the pages. By removing those problematic pages from use, the operating system could prevent most of the risk of errors without giving up much memory capacity.

Although others have previously proposed this very tactic, known as page retirement, it’s seldom used. None of the organizations we worked with for our DRAM study employed it. One reason might be that people just didn’t realize how often DRAM errors tend to crop up in the same page of memory. After our research brought that fact to light, Facebook adopted page retirement in its data centers. But it’s still not widely used, perhaps because there’s still some confusion about what sort of page-retirement schemes would work best.

To help clarify that issue, we investigated five simple policies. Some were as conservative as “Retire a page when you see the first error on it.” Others involve trying to prevent row or column errors by retiring the entire row or column as soon as one memory cell starts to show problems. We used the data we collected to see how well such policies might protect real computer systems, and we found that almost 90 percent of the memory-access errors could have been prevented by sacrificing less than 1 megabyte of memory per computer—a tiny fraction of what is typically installed.

Sure, a technician could replace the entire memory module when it starts to have errors. But it would probably have only a few bad cells. Page retirement could isolate the regions of memory prone to errors without sacrificing the other parts of an otherwise functional module. Indeed, applying sensible page-retirement policies in large data centers and supercomputing facilities would not only prevent the majority of machine crashes, it would also save the owners money.

The same applies to consumer gadgets. With the growing ubiquity of smartphones, tablets, wearable electronics, and other high-tech gear, the number of devices that use DRAM memory is skyrocketing. And as DRAM technology advances, these devices will contain ever-larger quantities of memory, much of which is soldered straight onto the system board. A hard error in such DRAM would normally require replacing the entire gizmo. So having the software retire problematic parts of memory in such an environment would be especially valuable.

Had we accepted the received wisdom that cosmic rays cause most DRAM errors, we would never have started looking at how these chips perform under real-world conditions. We would have continued to believe that distant events far off in the galaxy were the ultimate cause of many blue screens and server crashes, never realizing that the memory errors behind them usually stem from underlying problems in the way these chips are constructed.
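As an illustration of the simplest policy above ("retire a page when you see the first error on it"), here is a toy replay over a made-up error trace; the addresses are assumptions for the example, not the study's data:

```python
PAGE_SIZE = 4096  # the usual 4-kilobyte OS page mentioned above

def replay_first_error_policy(trace):
    """trace: faulting physical addresses in time order.
    Returns (# of later errors avoided, bytes of memory given up)."""
    retired = set()
    avoided = 0
    for addr in trace:
        page = addr // PAGE_SIZE
        if page in retired:
            avoided += 1       # this error would never have been seen
        else:
            retired.add(page)  # first error on this page: retire it
    return avoided, len(retired) * PAGE_SIZE

trace = [0x10000040, 0x10000048, 0x10000040, 0x7FFF2000, 0x10000FFC]
avoided, cost = replay_first_error_policy(trace)
print(f"{avoided} of {len(trace)} errors avoided, {cost} bytes retired")
```

Because errors cluster on a handful of pages, replaying policies like this against real error logs is how one arrives at estimates like the study's figure of almost 90 percent of errors prevented for less than 1 megabyte of retired memory per machine.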
In that sense, DRAM chips are a little like people: Their faults are not so much in their stars as in themselves. And like so many people, they can function perfectly well once they compensate for a few small flaws.

This article originally appeared in print as “Battling Borked Bits.”

1 point
-
Given that it is basically a big iPad with a better keyboard option and stylus support, the iPad Pro has been getting decent reviews. It can be classified as a first-generation product -- and as such, it does far better than most. However, there have been a huge number of companies in Apple's space that have gone under or stalled when they switched from focusing exclusively on the consumer and tried to do IT. In fact, I can't really point to any firm that has done that well.

So, the question is, will this pivot change Apple for the worse? I don't think so. Apple may have figured out how to sidestep this problem. I'll share my thoughts on that and close with my product of the week: the Qualcomm Snapdragon 820, which heralds the biggest change in smartphones this decade.

The Tech Curses

Along with many other analysts, I believe there are two curses that are alive and well in the tech industry. One is the curse of the palatial new headquarters. For some time, we watched company after company put up a palatial new corporate office only to fail shortly thereafter. The Computer History Museum is actually in one of those buildings, which was built for Silicon Graphics. Poor Novell never even got its headquarters built -- it announced its plans just before imploding. Apple's flying saucer is about as palatial as we've ever seen.

The other -- and more pertinent to this piece -- is the flip from consumer focus to corporate focus. Back in the 1980s, the most powerful PC company wasn't Apple or Microsoft. It was Commodore, and it dominated right up until it decided to create a business-focused machine. Apple's biggest failure in that decade was the Lisa, which was largely business-focused. After having the most successful consumer-focused product in Windows 95, Microsoft decided to force-feed its enterprise product, Windows NT, which hadn't been selling that well, to the same audience. It went from nearly 100 percent share to, well, a shadow of that. One of the efforts Steve Jobs killed when he came back to Apple was its business-focused PC effort (granted, along with PDAs, printers, cameras, etc.).

With both curses, I think the core problem is focus. In building a palatial headquarters, executives shift focus from product to jockeying for who will get the best office, office appointments, and other perks. When shifting to the enterprise, the focus moves from the features and functions users want to compliance, volume-pricing games, and trying not to change things. For the enterprise, excitement is a bad thing.

What has made me wonder about the sanity of executives who make this move is that users can be motivated to buy new and interesting things as often as every 12 months, while enterprise buyers would prefer a replacement cycle longer than five years -- often much longer. Oh, and if users want something like an iPad or Windows 95, they'll drag it in over IT's objections. If IT wants something that users don't want -- like terminals or thin clients -- it generally doesn't sell well. From this, the "smart" technology companies decide, hey, let's focus on IT and let the users pound sand, and the result pretty much sucks. I'll bet you can't tell I'm tired of watching people relearn that lesson.

What Makes Apple Different

Apple is doing a number of really smart things in this case. First, Tim Cook has given up his Mac and is carrying a Windows PC. No, just kidding -- he is carrying an iPad Pro. I've often felt the CEO should become a personal advocate for a company's business-oriented products.
CEOs who do so can ensure that their company's products meet the needs of the user, because they are users -- and their needs won't be ignored. It makes them real believers in the product, not just extremely well-paid shills. Cook's hands-on experience should ensure the product remains focused on the user, even as the effort pivots to address IT's needs.

Second, Apple is not building an enterprise channel -- it is using Cisco's and IBM's channels to move the product. That keeps Apple focused on the user while letting its distribution partners focus on making the enterprise IT shops happy. The only problem with this is that at least some of the enterprise requirements do have to make it back into the product over time to ensure the offering's success -- particularly in areas like security and management. That doesn't seem to be happening yet. However, given history, I think it is far better that some of the enterprise needs go unmet than it would be if Apple started missing user requirements.

The iPad Pro

There are three initial problems with the iPad Pro, though none of them are deal killers. I expect most will be addressed in the next product if Apple listens.

It doesn't have a touchpad on the keyboard. Taking your hands off the keys to touch the screen is annoying when you are moving fast, and given that Apple hasn't put touchscreens on its PCs, this omission seems particularly strange. The lack of a USB port means certain required accessories -- like some types of secure memory, USB RSA tokens, and lots of printers -- can't be used with the device, which will be a problem for a lot of shops where people are trying to use these tablets as laptops. Finally, the feel of the stylus isn't right yet, suggesting Apple did this itself and didn't buy the tip from one of the firms that have been developing in this space for over a decade.

On this last point, there is a chance that some users actually might find the Apple feel better -- for example, those who don't go back and forth between the iPad and paper, or those who started with tablets and never developed paper muscle memory.

The iPad Pro is not bad for a first shot, but how quickly Apple steps up and addresses these shortcomings will tell us how strong its relationship is with Cisco and IBM, and whether these are true partnerships or partnerships in name only.

Wrapping Up

The use of Cisco and IBM, coupled with Tim Cook's personal engagement with the iPad Pro, should ensure the product's success. Much really will depend on just how much Cisco and IBM influence future products; it needs to be enough to hit IT minimums, but not enough to break the connection with users -- and the latter is more important than the former. While there are some initial issues with the product, it likely will go down as one of the best first-generation offerings from any company, and that is a strong indication of its future success.

In the end, products like the iPad Pro, the Surface Pro, and the Surface Book showcase a renewed focus on creating devices users really can love, and that push the envelope on portability and battery life. For those of us who live off products like this, there is nothing bad about that.

Last week I got my updated briefing on the Qualcomm Snapdragon 820, which will show up in phones and a few tablets, robots, drones, and cars starting in 2016. It just started shipping.
Qualcomm Snapdragon 820 Processor

Typically, we see a 5 percent performance improvement year over year in smartphones, and once in a decade we get a huge jump due to a big technology change -- this is that change. Qualcomm is forecasting a 30 percent jump in processing power with this generation and a 2x or better jump in network data speeds. In addition, we'll get better sound (both separation and volume), better pictures (fewer artifacts, better color saturation, less ghosting, better color accuracy), better security (both biometric and antimalware), better charging (long-distance wireless resonance charging and faster charging), and longer battery life.

Increasingly, we are living off our phones, and this will make a lot of the things that really are annoying -- like remembering passwords (or getting the damn fingerprint reader to work right), running out of power (or worrying about running out of power), and pictures that suck -- things of the past. Folks who replace their phones next year with one that uses this part will see a near day-and-night difference if the OEM turns on most of these features, though unfortunately, OEMs often don't. This is only for top-of-the-line phones; if you buy on a budget, 2017 may be the better year for you.

Because the Snapdragon 820 will be like putting a supercharger -- I'm a big fan of superchargers -- on your phone, it is my product of the week.

1 point
-
Developer: Bungie
Platforms: PlayStation 4 | PlayStation 3 | Xbox One | Xbox 360
Genre: FPS
Mode: Multiplayer
Release Date: September 15, 2015

Description: The Taken King expansion for Destiny brings a new story and numerous new gameplay elements. The Taken King's storyline centers on Oryx, who has arrived in our solar system aboard a massive Hive Dreadnaught ship, seeking revenge for the killing of his son Crota. The new campaign in The Taken King takes players both to the Dreadnaught and to Phobos, one of the moons of Mars, which previously appeared only as maps in Destiny's competitive multiplayer. Each character class receives a new subclass, along with new weapons, new armor, and fresh gear from the Legendary, Exotic, Faction, and even Taken categories. There are also new Strike missions, plus a new Raid in which players get the chance to face Oryx, The Taken King, himself. The competitive Crucible multiplayer in turn receives new maps, as well as two additional game modes -- Rift and Mayhem. The Taken King requires Destiny and the expansions The Dark Below and House of Wolves in order to run.

Trailer

System Requirements

Minimum:
CPU: Intel Pentium Dual-Core G640 2.8 GHz
RAM: 2 GB
OS: Windows Vista / 7 / 8, 32- and 64-bit
Video Card: Nvidia GeForce 7600 GT (256 MB) or ATI Radeon HD 3450 (256 MB, full-height card)
DirectX: 11
Sound Card: Yes
Free Disk Space: 7 GB

Recommended:
CPU: Intel Core 2 Duo E4600 2.4 GHz
RAM: 4 GB
OS: Windows 7 / 8, 64-bit
Video Card: ATI Radeon HD 4550 (512 MB) / NVIDIA EVGA GeForce 9800 GT (512 MB)
DirectX: 11
Sound Card: Yes
Free Disk Space: 10 GB

1 point
-
M2H and Blackmill Games launched the action-strategy title Verdun in April of this year, and it is now available at half price on Steam. For the next 10 hours you can download the game for €10.75, and the Verdun 4 Pack is also on sale, at €27.49.

1 point
-
In the UK, the retail version of Star Wars Battlefront for PlayStation 4 outsold the Xbox One and PC versions combined by almost 50 percent, according to GFK Chart-Track. PlayStation 4 accounts for 59% of total retail sales, Xbox One for 39%, and PC for 2%. Total sales have not yet been announced, but EA has stated that the game is the 4th best-selling title of 2015, surpassed only by FIFA 16, Call of Duty: Black Ops 3, and Fallout 4. Battlefront also had the most financially successful launch in the Star Wars franchise, selling over 117% more copies than Star Wars: The Force Unleashed, itself a success at the time of its launch. In its launch week, Star Wars Battlefront's sales surpassed those of Batman: Arkham Knight and The Witcher 3: Wild Hunt over the same interval. Star Wars Battlefront is available now on PlayStation 4, Xbox One, and PC.

1 point
-
Arrowhead Game Studios, known for the original Magicka, Gauntlet, and Showdown Effect, has announced the release date for Helldivers, which can be compared to Magicka but with a military sci-fi theme. The game will support singleplayer and co-op for up to four players, will feature procedurally generated missions within a nonlinear campaign, and will have friendly fire. Helldivers launches on Steam on December 7 at a price of €19.99, and its store page is already live. The system requirements have also been announced, and they are hilariously low given how easy on the eyes the game's graphics are.

Minimum requirements:
OS: Windows Vista / Windows 7
Processor: 2.4 GHz Dual Core
Memory: 4 GB RAM
Graphics: 512 MB NVIDIA GeForce 9800 / ATI Radeon HD 2600 XT
DirectX: Version 10
Hard Drive: 7 GB

Recommended requirements:
OS: Windows Vista / Windows 7
Processor: 2.4 GHz Dual Core
Memory: 4 GB RAM
Graphics: 1 GB NVIDIA 460 / AMD Radeon 5870
DirectX: Version 10
Hard Drive: 7 GB

1 point
-
Requirements:

Minimum:
OS: Windows Vista/7/8/10
Processor: AMD Athlon 64 X2 2.6 GHz / Intel Core 2 Quad 2.6 GHz
Memory: 4 GB RAM
Graphics: Radeon HD 4670 (512 MB) / GeForce GT 430 (1024 MB)
DirectX: Version 10
Hard Drive: 6 GB available space

Recommended:
OS: Windows Vista/7/8/10
Processor: AMD Athlon II X4 3.1 GHz / Intel Core i5 3 GHz
Memory: 6 GB RAM
Graphics: Radeon R7 260X (2 GB) / GeForce GTX 550 Ti (1 GB)
DirectX: Version 10
Hard Drive: 6 GB available space

Game Name: Hard West
Price: $15.99 USD (reduced from $19.99)
Store Link: HERE
Offer Ends: 25 November
Trailer:

1 point
-
Name: Insurgency: Modern Infantry Combat
Genre: Action
Platform: PC / Mac
Release Date: January 22, 2014
Developer: New World Interactive
Distributor: New World Interactive

Description

A team-based war game with a strong cooperative component, in which we defend our strongholds and attack our opponents' supply sources. The title, centered on its online portion, uses the Source game engine for its visuals.

Gameplay

The game is primarily a team-based multiplayer online shooter focused on tactical, objective-based gameplay. The player can join one of two teams: the US Marines or their adversaries, the Insurgents. Teams are structured around two squads, for a total of 16 players per team. Within this team structure are limited player classes, such as the Rifleman, Support Gunner, Engineer, or Marksman. The game has a pseudo-realistic portrayal of the weaponry used. There is no on-screen crosshair, and players must use the iron sights of the game's weapon model to aim accurately. Shooting "from the hip" is still possible; however, the free-aim system makes this difficult. Weapons are also deadly, with most rifles capable of taking out players with one or two shots to the torso. Depending on their class, players can also use fragmentation grenades, smoke grenades, and RPGs. Maps are generally focused on urban warfare, though there are suburban and outdoor settings as well.

Trailer

System Requirements

OS: Windows 7 (32- and 64-bit) / Vista / XP
Processor: Pentium 4 (3 GHz or higher)
RAM: 1 GB
Graphics: DirectX 9 support
Internet connection

1 point