
Destroid™

Members
  • Posts: 605
  • Joined
  • Last visited

Everything posted by Destroid™

  1. Spy, send me a PM!
  2. Welcome back! Have fun!
  3. Welcome to the CsBlackDevil community! Have fun!
  4. Winner: Revan. v1 - 0 votes, v2 - 9 votes. Congratulations, Revan!
  5. v2: text, sharpening, effects
  6. Welcome to CSBD!
  7. Welcome to CsBlackDevil! Have fun.
  8. Name of the opponent: Revan
     Theme of work:
     Type of work (signature, banner, avatar, userbar, logo, large piece): Signature
     Size: 400x250
     Text: Scorpion
     Watermark: CsBlackDevil
     Working time: 24h
     Note: Just me and Revan
  9. v1: text, blur, brush
  10. Need For Speed Carbon
  11. 1. Gran Turismo (1997)
      2. Gran Turismo 2 (1999)
      3. Gran Turismo 3: A-Spec (2001)
      4. Gran Turismo Concept (2002 Tokyo-Seoul, 2002 Tokyo-Geneva, 2001 Tokyo) (2002)
      5. Gran Turismo 4: Prologue (2003)
      6. Gran Turismo 4 (2004)
      7. Gran Turismo HD (2006)
      8. Gran Turismo 5 Prologue (2007)
      9. Gran Turismo (2009)
      10. Gran Turismo 5 (2010)
      11. Gran Turismo 6 (2013)
      12. Gran Turismo 7 (2014)
  12. 1. James Bond 007 (1983)
      2. A View to a Kill (1985)
      3. The Living Daylights (1987)
      4. Live and Let Die (1988)
      5. 007: Licence to Kill (1989)
      6. The Spy Who Loved Me (1990)
      7. James Bond: The Stealth Affair (1990)
      8. James Bond Jr. (1992)
      9. James Bond: The Duel (1992)
      10. GoldenEye 007 (N64) (1997)
      11. 007: Tomorrow Never Dies (1999)
      12. 007: The World Is Not Enough (2000)
      13. 007 Racing (2000)
      14. 007: Agent Under Fire (2001)
      15. 007: Nightfire (2002)
      16. 007: Everything or Nothing (2003)
      17. GoldenEye: Rogue Agent (2004)
      18. 007: From Russia with Love (2005)
      19. GoldenEye: Source (2005)
      20. Quantum of Solace (2008)
      21. James Bond 007: Blood Stone (2010)
      22. GoldenEye 007: Reloaded (2011)
      23. 007 Legends (2012)
  13. 1999 - Driver
      2000 - Driver 2
      2004 - Driv3r
      2006 - Driver: Parallel Lines
      2007 - Driver 76
      2011 - Driver: San Francisco
      2011 - Driver: Renegade 3D
  14. Welcome to the CsBlackDevil community. Have fun!
  15. The Video Computer System (VCS), a ROM-cartridge-based console later renamed the Atari 2600, was released in 1977 by Atari. Nine games were designed and released for the holiday season. While the console had a slow start, its port of the arcade game Space Invaders would become the first "killer app" and quadruple the console's sales.[12] Soon after, the Atari 2600 quickly became the most popular of all the early consoles prior to the North American video game crash of 1983. Notably, the VCS did this with only an 8-bit 6507 CPU,[13] 128 bytes (i.e. 0.125 KB) of RAM, and at most 4 KB of ROM in each "Game Program" cartridge.

      The ColecoVision, an even more powerful machine, appeared in 1982. With its port of the arcade game Donkey Kong included as a pack-in, sales for this console also took off. However, the presence of three major consoles in the marketplace and a glut of poor-quality games began to overcrowd retail shelves and erode consumers' interest in video games. Within a year, this overcrowded market would crash.

      The Intellivision was introduced by Mattel in 1980. Though chronologically part of what is called the "8-bit era", the Intellivision had a unique processor with instructions that were 10 bits wide (allowing more instruction variety and potential speed) and registers 16 bits wide. The system, which featured graphics superior to the older Atari 2600's, rocketed to popularity.

      In 1979, Activision was created by disgruntled former Atari programmers "who realized that the games they had anonymously programmed on their $20K salaries were responsible for 60 percent of the company's $100 million in cartridge sales for one year". It was the first third-party developer of video games. By 1982, approximately 8 million American homes owned a video game console, and the home video game industry was generating annual revenue of $3.8 billion, nearly half the $8 billion in quarters generated by the arcade video game industry at the time.
  16. Bushnell and Dabney founded Atari, Inc. in 1972, before releasing their next game: Pong. Pong was the first arcade video game with widespread success. The game is loosely based on table tennis: a ball is "served" from the center of the court, and as the ball moves toward their side of the court, each player must maneuver their paddle to hit the ball back to their opponent. Allan Alcorn created Pong as a training exercise assigned to him by Atari co-founder Nolan Bushnell. Bushnell based the idea on an electronic ping-pong game included in the Magnavox Odyssey, which later resulted in a lawsuit against Atari. Surprised by the quality of Alcorn's work, Bushnell and Dabney decided to manufacture the game. Atari sold over 19,000 Pong machines, spawning many imitators.

      Another significant game was Gun Fight, an on-foot, multidirectional shooter designed by Tomohiro Nishikado and released by Taito in 1975. It depicted game characters, game violence, and human-to-human combat, controlled using dual-stick controls. The original Japanese version was based on discrete logic, which Dave Nutting adapted for Midway's American release using the Intel 8080, making it the first video game to use a microprocessor. This later inspired original creator Nishikado to use a microprocessor for his 1978 blockbuster hit, Space Invaders.
  17. With DirectX 12 coming soon with Windows 10, VR technology ramping up from multiple vendors, and the Vulkan API already debuted, it's an exceedingly interesting time to be in PC gaming. AMD's GCN architecture is three years old at this point, but certain features baked into the chips at launch (and expanded with Hawaii in 2013) are only now coming into their own, thanks to the improvements ushered in by next-generation APIs.

      One of the critical technologies underpinning this argument is the Asynchronous Command Engines (ACEs) that are part of every GCN-class video card. The original HD 7900 family had two ACEs per GPU, while AMD's Hawaii-class hardware bumped that even further, to eight. AMD's Hawaii, Kaveri, and at least the PS4 have eight ACEs; the Xbox One may be limited to just two, but it does retain the capability. AMD's Graphics Core Next (GCN) GPUs are capable of asynchronous execution to some degree, as are Nvidia GPUs based on the GTX 900 "Maxwell" family. Previous Nvidia cards like Kepler, and even the GTX Titan, were not.

      What's an Asynchronous Command Engine?

      The ACE units inside AMD's GCN architecture are designed for flexibility. The chart below explains the difference: instead of being forced to execute a single queue in a pre-determined order, even when it makes no sense to do so, tasks from different queues can be scheduled and completed independently. This gives the GPU some limited ability to execute tasks out of order. If the GPU knows that a time-sensitive operation needing only 10ns of compute time is in the queue alongside a long memory copy that isn't particularly time-sensitive but will take 100,000ns, it can pull the short task, complete it, and then run the longer operation.

      [Image: Asynchronous vs. synchronous threading]

      The point of using ACEs is that they allow the GPU to process and execute multiple command streams in parallel. In DirectX 11, this capability wasn't really accessible: the API was heavily abstracted, and multiple developers have told us that multi-threading support in DX11 was essentially broken from day one. As a result, there's been no real way to tell the graphics card to handle graphics and compute in the same workload.

      [Image: GPU pipelines in DX11 vs. DX12]

      AMD's original GCN hardware may have debuted with just two ACEs, but AMD claims that it added six ACE units to Hawaii as part of a forward-looking plan, knowing that the hardware would one day be useful. That's precisely the sort of thing you'd expect a company to say, but there's some objective evidence that Team Red is being honest. Back when GCN and Nvidia's Kepler were going head to head, it quickly became apparent that while the two companies were neck and neck in gaming, AMD's GCN was far more powerful than Nvidia's GK104 and GK110 in many GPGPU workloads. The comparison was particularly lopsided in cryptocurrency mining, where AMD cards were able to shred Nvidia hardware thanks to a more powerful compute engine and support for some SHA-1 functions in hardware.

      When AMD built Kaveri and the SoCs for the PS4 and Xbox One, it included eight ACEs in the first two (the Xbox One may have just two). The thinking behind that move was that adding more asynchronous compute capability would allow programmers to use the GPU's computational horsepower more effectively. Physics and certain other types of in-game calculations, including some of the work that's done in virtual reality simulation, can be handled in the background.

      [Image: Asynchronous shader performance in a simulated demo]

      AMD's argument is that with DX12 (and Mantle / Vulkan), developers can finally use these engines to their full potential. In the image above, the top pipeline is the DX11 method of doing things, in which work is mostly handled serially; the bottom image is the DX12 methodology. Whether programmers will take advantage of these specific AMD capabilities is an open question, but the fact that both the PS4 and Xbox One have ACE hardware to work with suggests that they may. If developers are writing code to execute on GCN hardware already, moving that support over to DX12 and Windows 10 is no big deal (a minimal sketch of the two-queue programming model follows this post). A few PS4 titles and just one PC game use asynchronous shaders now, but that could change. Right now, AMD has only released information on the PS4's use of asynchronous shaders, but that doesn't mean the Xbox One can't use them; it's possible that the DX12 API push Microsoft is planning for that console will add the capability.

      AMD is also pushing ACEs as a major feature of its LiquidVR platform: a fundamental capability that it claims will give Radeon cards an edge over their Nvidia counterparts. We'll need to see final hardware and drivers before drawing any such conclusions, of course, but the compute capabilities of the company's cards are well established. It's worth noting that while AMD did have an advantage in this area over Kepler, which had only one compute and one graphics pipeline, Maxwell has one graphics pipeline and 32 compute pipes, compared to just eight ACEs on AMD hardware. Whether this affects performance in shipping titles is something we'll only be able to answer once DX12 games that specifically use these features are on the market.

      The question, from the end-user perspective, boils down to which company will offer better performance (or a better price/performance ratio) under the next-generation DX12 API. It's far too early to make a determination on that front: recent 3DMark DX12 benchmarks put AMD's R9 290X out in front of Nvidia's GTX 980, while Star Swarm results from earlier this year reversed that result. What is clear is that DX12 and Vulkan are reinventing 3D APIs and, by extension, game development in ways we haven't seen in years. The new capabilities of these frameworks are set to improve everything from multi-GPU configurations to VR displays. Toss in features like 4K monitors and FreeSync / G-Sync support, and it's an exciting time for the PC gaming industry.
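To make the asynchronous-shader model concrete, here is a minimal sketch of how a DX12 application can put graphics and compute work on separate hardware queues. This is an illustration, not AMD's or Microsoft's sample code: the function name and its parameters (the command lists, fence, and fence value) are hypothetical placeholders, and real code would also check the returned HRESULTs.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: submit graphics and compute command lists to two separate queues
// so hardware with multiple command processors (e.g. GCN's ACEs) can
// overlap them, instead of serializing everything behind a single queue.
void SubmitAsyncWork(ID3D12Device* device,
                     ID3D12CommandList* const* gfxLists, UINT numGfx,
                     ID3D12CommandList* const* computeLists, UINT numCompute,
                     ID3D12Fence* fence, UINT64 fenceValue)
{
    ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;

    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics (+ compute/copy)
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only queue
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    // The two submissions are independent as far as the API is concerned;
    // whether they actually run concurrently is up to the GPU's scheduler.
    computeQueue->ExecuteCommandLists(numCompute, computeLists);
    gfxQueue->ExecuteCommandLists(numGfx, gfxLists);

    // Synchronize only at the point where graphics consumes the compute
    // result, rather than after every batch as a DX11-style driver would.
    computeQueue->Signal(fence, fenceValue);
    gfxQueue->Wait(fence, fenceValue);
}
```

The design point is the last two calls: the fence expresses the one real dependency between the streams, and everything submitted before it is free to overlap.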
  18. Even if you pay attention to the CPU industry, Atmel isn't likely to be a company you're familiar with. But its low-power processors could change the way we interact with devices and the burgeoning Internet of Things. Founded in 1984, the company focuses on embedded computing, microcontrollers, and automotive processors: precisely the kind of hardware that powers the equipment we interact with on a daily basis, without our ever realizing it contains a microprocessor or three. Atmel is making waves at present with its new Atmel | SMART SAM L21 family of processors, which draw so little power they can reportedly run for decades and be powered by energy harvested from body motion.

      First, the basics: the L21 family is based on ARM's Cortex-M0+ microprocessor series. The M0+ is an embedded chip and a fairly modest one; it's an optimized version of the Cortex-M0, with one fewer pipeline stage to reduce power consumption, plus a few features borrowed from the more capable Cortex-M3 and M4 families. What sets the Atmel SAM L21 family apart is that it has been designed to use ridiculously little power: just 35 microamps per MHz when active, and 200 nanoamps when in sleep mode. With power consumption that low, an Atmel L21 core that didn't wake up very often could conceivably run for decades off a battery (a rough back-of-the-envelope calculation follows this post). Even more interestingly, Atmel claims the microcontroller can be powered simply by human energy capture.

      "Atmel is committed to providing the industry's lowest power technologies for the rapidly growing IoT market and beyond for battery-powered devices," said Reza Kazerounian, senior vice president and general manager of the company's microcontroller business unit. "Developers for IoT edge nodes are no longer just interested in expanding the life of a battery to one year, but are looking for technologies that will increase the life of a battery to a decade or longer. Doing just that, the new 32-bit MCU platform in the Atmel | SMART family integrating our proprietary picoPower technologies are the perfect MCUs for IoT edge nodes."

      Atmel isn't revealing which process technology the L21 core uses, possibly because these types of processors tend to be built on older nodes and focus on minimum cost rather than top-notch performance. Instead of relying on a cutting-edge 14nm or 16nm process, the company has emphasized sophisticated power-gating methods that aren't much different from what we've seen companies like Intel and AMD adopt. Each area of the chip is designed to be power gated, and the core aggressively shuts off segments of the die that aren't in use. In larger chips, we've seen this approach adopted to avoid blowing power budgets and to ensure that mobile battery life is maximized when the CPU is doing relatively simple tasks.

      The Cortex-M0+ isn't powerful enough to run even a device like a smartwatch today. But the fact that Atmel adopted such sophisticated power gating shows how technologies adopted to preserve battery life at the high end of the market trickle down into much cheaper, simpler parts. The ability to charge electronics via human power is an old dream, limited as much by battery technology as by circuit design. Simply advancing microcontroller design won't solve all those problems, but it does simplify one key technological challenge.
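As a sanity check on the "decades" claim, here is a small sketch that turns the quoted figures (35 µA/MHz active, 200 nA asleep) into an estimated battery life. The clock speed, duty cycle, and battery capacity are assumptions chosen purely for illustration, not Atmel's numbers.

```cpp
#include <cstdio>

int main() {
    // Quoted figures from the article:
    const double active_uA_per_MHz = 35.0;    // active current per MHz
    const double sleep_uA          = 0.2;     // 200 nA sleep current
    // Assumptions for illustration only:
    const double clock_MHz   = 12.0;          // assumed operating clock
    const double duty        = 0.001;         // assumed 0.1% of time awake
    const double battery_mAh = 220.0;         // assumed coin-cell capacity

    const double avg_uA = duty * (active_uA_per_MHz * clock_MHz)
                        + (1.0 - duty) * sleep_uA;
    const double hours  = (battery_mAh * 1000.0) / avg_uA;  // mAh -> uAh
    std::printf("average draw: %.2f uA, estimated life: %.0f years\n",
                avg_uA, hours / (24.0 * 365.0));
    return 0;
}
```

With those assumptions the average draw works out to roughly 0.6 µA, or about 40 years on paper; in practice the cell's own self-discharge would run out first, which is exactly why the energy-harvesting angle matters.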
  19. Intel has launched its new high-end PCI Express SSD, the SSD 750. The new drive will be offered in two sizes, 400GB for $389 and 1.2TB for $1,029, and it's designed to bring a number of enterprise features down to the client market. Intel solid state drives have a reputation for quality, but they also tend to carry much higher prices than other solutions, so one major question will be whether the SSD can live up to its price tag. The 750 uses 20nm NAND from Intel and Micron; the two companies have announced their own 3D NAND efforts, but that technology is still under development.

      These new drives bring two new features to the client side: NVMe support and the SFF-8639 connection bracket. Here, a bit of explanation may be in order. NVMe (Non-Volatile Memory Express) is a communication standard that replaces AHCI and is designed explicitly for solid state drives, as opposed to being meant primarily for spinning disks. The performance benefits of NVMe are something we've covered before; the interface has a number of improvements that will boost drive performance in the long run.

      The other new feature of the 750 family is the SFF-8639 adapter hooked up to a 2.5-inch SSD. Why an adapter? Because the SATA Express standard doesn't offer enough bandwidth. SATA Express tops out at two lanes of PCIe, and while that's still an improvement over the SATA 6G standard, it's not enough for truly high-end drives. The SFF-8639 connector bridges the gap between SATA Express and PCI Express drives with an x4 connector or above. It offers four lanes of PCIe connectivity to drives in a standard 2.5-inch form factor, and it's expected to become popular in desktops as a high-end interface going forward. Your motherboard will still need to support the SATA Express standard in order to use the new 4GB/s solution. If the motherboard in question backhauls the connection over PCIe 2.0 instead of 3.0, you're limited to half speed: 2GB/s, not 4GB/s (the lane math behind those figures is sketched after this post).

      How does the new drive perform? Anandtech has a comprehensive review of the new SSD's performance in client workloads, and the results are truly impressive. Performance between Intel's 1.2TB SSD 750 (that's the full PCIe-card version) and the Samsung SM951, which uses the older AHCI standard instead of NVMe, was quite close overall, with the Intel drive offering significantly better performance in some areas (typically write performance and some latency tests) while Samsung took the lead in others. Tellingly, however, the Intel and Samsung drives were both often 2-3x faster than the fastest desktop-class hardware currently available, like the Samsung 850 Pro.

      [Image: Latency is one area where the Intel 750 shines]

      These gains demonstrate how faster interfaces can still drive higher-performing SSDs, even when NAND flash itself isn't getting any faster. If anything, the advent of TLC (triple-level cells) and lower process nodes, which require more ECC, may have made NAND slightly slower; memory controller and bandwidth improvements still have the ability to yield great gains. That said, support for the SFF-8639 connector and SATA Express is all quite early. Right now, the only way to get full PCIe 3.0 bandwidth is to drop a PCIe card into a slot normally reserved for a graphics processor. This could limit future scalability, unless you buy a Haswell-E system, which offers 40 lanes of PCIe 3.0. These features should become more standard in future platforms from Intel and AMD, though neither company has offered a hard-and-fast roadmap for future integration.

      The other thing to be aware of is that while current top-end SSDs are much faster than the drives from 2010-2011, and future drives promise to be 2-3x faster than the mainstream products of today, there are definite diminishing returns to adding higher-end storage. The great benefit of moving to an SSD (and to be clear, even a modest modern SSD is much faster than a hard drive) is that it feels like a huge improvement, because it accelerates many common tasks by a factor of 10-100x compared to a spinning disk. If you've ever run a virus scan and played a game simultaneously, you're familiar with how something as simple as a background file-scanning process can send performance cratering; thanks to SSDs, that's not an issue. Once you move from HDDs to SSDs, though, the relative difference drops dramatically. Slashing system copy times from 8 minutes to 2 minutes is still a huge gain, but it doesn't feel the same as cutting them from, say, 32 minutes to eight. Drives are going to continue to accelerate, and those of you who work with huge datasets or copy a great deal of information will absolutely benefit, but the gains are going to be less visible if you aren't a relatively heavy user to start with.
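The 4GB/s-versus-2GB/s figures above are simple lane arithmetic. Here is a tiny sketch using the commonly cited effective per-lane rates (roughly 985 MB/s for PCIe 3.0 after 128b/130b encoding, and 500 MB/s for PCIe 2.0 after 8b/10b encoding):

```cpp
#include <cstdio>

int main() {
    // Approximate effective throughput per lane, after encoding overhead.
    const double pcie2_GBps = 0.500;   // 5 GT/s with 8b/10b encoding
    const double pcie3_GBps = 0.985;   // 8 GT/s with 128b/130b encoding
    const int lanes = 4;               // SFF-8639 carries x4 PCIe

    std::printf("x4 PCIe 3.0: ~%.1f GB/s\n", lanes * pcie3_GBps);  // ~3.9
    std::printf("x4 PCIe 2.0: ~%.1f GB/s\n", lanes * pcie2_GBps);  // ~2.0
    return 0;
}
```

Which is why backhauling the same x4 connector over PCIe 2.0 halves the ceiling.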
  20. Now that Windows 10 development is in full swing, with the new Spartan browser and new Technical Preview builds appearing on a regular basis, let's take a step back for a moment and address one of the most confusing things about the next version of Windows. When Microsoft announced its newest operating system, the surprise was not that it was coming, but that Windows would be skipping 9 and heading straight to 10. When asked about Windows 10's name, Microsoft never gave a clear answer. So why, exactly, is Windows 10 getting the nod instead of 9?

      Version numbers, schmersion numbers

      You may remember that between Windows 3 and Windows 7, versions of Windows were designated by a name rather than a sequential number: 95, 98, NT, Me, 2000, Vista, and so on. When Microsoft announced Windows 7, there was actually a similar amount of disbelief; after a series of named versions of Windows, it seemed odd to jump back to numbers.

      [Image: Windows 8.1 - actually version 6.3, build 9600]

      There's also the fact that the name of each Windows release doesn't actually match the real version number. For example, Windows 8.1 is actually version 6.3 of Windows, and Windows 10 is version 6.4. The last time the release name actually matched the version number was the enterprise-focused Windows NT 4.0, released back in 1996. Windows 2000, which was called NT 5.0 during development, was actually version 5.0. Windows XP was version 5.1. Windows Vista was 6.0, Windows 7 was 6.1, Windows 8 was 6.2, and Windows 8.1 is version 6.3. Windows RT, which only ran Metro apps, was a new and separate beast, but it still sat on top of the core Windows NT kernel. It's dead now. Technically, modern versions of Windows are still based on the Vista kernel and code base, including Windows 10, which is actually Windows 6.4. There will be some confusion if (or when) we eventually reach internal version 7.0, but we'll cross that bridge when we get there.

      Alternative theories for skipping Windows 9

      First, an ExtremeTech reader called Benny sent an email to say that the number 9 is considered unlucky in Japan. Microsoft has a big enough presence in Japan that it may have skipped Windows 9 to avoid any weirdness or ill will. Benny says that Trend Micro, a Japanese company, did the same thing a few years ago when it skipped version 9 of its antivirus software.

      Second, someone purporting to be a Microsoft developer posted this comment on Reddit: "As dumb and yet amazing as this sounds, it is actually quite feasible that there are still a lot of legacy Desktop apps that use this method (or something similar) to check for Windows 95 or 98." The method in question matches the OS name against the prefix "Windows 9" (a reconstruction of the idea follows this post). Bear in mind that this is just an example piece of code: some developers will check for the OS name ("Windows..."), some will check for the version number (as discussed in the previous section of this story), and some may use other methods entirely to find out what OS the app is running on.
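The snippet from the Reddit comment didn't survive the page scrape, so here is a reconstruction of the idea it described, not the original code: a lazy prefix match on the OS name that would treat a hypothetical "Windows 9" exactly like Windows 95/98.

```cpp
#include <cstring>

// Reconstruction of the legacy-app pitfall: a prefix check on the OS name.
// "Windows 95" and "Windows 98" both match, but so would "Windows 9".
bool IsWindows9x(const char* osName) {
    return std::strncmp(osName, "Windows 9", 9) == 0;
}
```

If enough shipping applications contain a check like this, naming the new OS "Windows 9" would send them down their Windows 95/98 code paths, which is the theory's whole argument.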
  21. Just weeks after Samsung introduced five new curved monitors to the international crowd, Samsung Electronics America revealed this week that four of those panels will be heading to North America in May 2015. These include the 31.5-inch SE590C Series, the 27-inch SE591C Series, and the SE510C Series, which arrives in 23.6-inch and 27-inch flavors.

      For starters, the SE590C Series monitor is the only one in the batch with a 3000R curvature (a 3,000 mm curve radius); all of the others sport a gentler 4000R curvature. All of these monitors have a resolution of 1920 x 1080, a response time of 4 ms, a refresh rate of 60 Hz, and support for 16.7 million colors. Other shared traits include a 16:9 aspect ratio, 178-degree viewing angles, HDMI input, and a headphone jack.

      As for individual specifications, the two SE510C Series monitors have a typical brightness of 250 cd/m2, a static contrast ratio of 3,000:1, and a VGA port. Both come in black and sport a curved T-shaped stand. Samsung proprietary tech includes MagicBright, MagicUpscale, Game Mode, Flicker Free technology, and more. This panel does not include speakers.

      The SE590C Series curved monitor has a glossy black and metallic finish. The panel also includes a typical brightness of 350 cd/m2, a static contrast ratio of 5,000:1, one DisplayPort jack, two 5-watt speakers, two HDMI ports, a headphone jack, and a curved T-shaped stand. VGA, DVI, and dual-link DVI jacks are not included. Note that this is the only panel in the group that doesn't include Samsung's MagicUpscale technology.

      Finally, we have the SE591C Series monitor. This panel has a typical brightness of 350 cd/m2, a static contrast ratio of 3,000:1, one VGA port, one DisplayPort jack, one audio port, and a headphone jack. Also included are two 5-watt speakers, a curved T-shaped stand, and Samsung's proprietary Game Mode, Flicker Free tech, and so on. This panel ships in a glossy white finish.

      "We've seen a lot of excitement around curved displays, first with our TV line and again with the SE790C curved monitor we introduced at CES 2015. We're excited to be expanding our curved monitor line to give consumers more options," said Dave Das, Senior Vice President, Samsung Electronics America.

      Samsung's new monitors will be made available sometime next month. The SE510C Series will cost $299.99 and $379.99 for the 23.6-inch and 27-inch monitors respectively. The SE591C Series monitor will cost $399.99, and the SE590C Series monitor will cost a heftier $599.99.
  22. Need For Speed: Carbon
      Need For Speed: Most Wanted (2005)
      Need For Speed: Underground 1 & 2
      Need For Speed: ProStreet
      Need For Speed: Undercover
      Call of Cthulhu: Dark Corners of the Earth
      Call of Duty 1
      Call of Duty 2
      Call of Duty: Modern Warfare
      Call of Duty: Modern Warfare 2
      Assassin's Creed
      Assassin's Creed 2
      Assassin's Creed: Rogue
      Counter-Strike 1.6
      Counter-Strike: Source
      Counter-Strike Nexon: Zombies
      Counter-Strike: Global Offensive
      Grand Theft Auto 3
      Grand Theft Auto: San Andreas
      Grand Theft Auto 4
      Driv3r
      Driver: San Francisco
      Half-Life
      Half-Life 2
  23. v2: text, border, blur.
  24. Winner: Dev! Dev (v1) - 8 votes, me (v2) - 2 votes.