Everything posted by Derouiche™
-
welcome to csbd
-
welcome to csbd
-
[Battle - ŦŘǤŦ DΛSŦIN-™ vs InfiNiTy-] [winner DASTIN]
Derouiche™ replied to Mr.FijiWiJi's topic in GFX Battles
ŦŘǤŦ DΛSŦIN-™ wins -
Last week, Intel announced its new Apollo Lake low-cost series of Pentium, Celeron, and Atom products due to launch in the back half of 2016. Goldmont, the CPU core that powers Apollo Lake, is the next iteration of the Silvermont architecture that replaced Intel's original Bonnell architecture, which powered Atom from 2008 to 2012.

2-in-1s have been the PC industry's sole bright spot in recent years, and while they haven't grown quickly enough to offset the general decline in desktop and laptop sales, they have picked up steam in retail at a reasonable clip. Intel's low-cost Apollo Lake platform is meant to accelerate that transition by pushing a variety of ultra thin-and-light devices. What's interesting is that Intel is explicitly targeting "Cloudbooks," like Google's Chromebook, and calling them out as an important segment in the $169 – $269 price band.

The entire point of Apollo Lake is to cut developer costs compared with previous generations, a fact made obvious by the slide below: if you add up all the cost savings Intel is promising on this slide, it comes to $7.45. This should tell you something about just how thin PC profit margins are at the bottom end of the market: not only did Intel feel it was important to break these savings out by sector, but the total comes to roughly the price of a combo meal at a fast food chain. On the other hand, if you're selling a device for $169, $7.45 is 4.4% of your total, and as we've seen, PC manufacturers often have net profit margins in the 3-4% range.

What's inside Goldmont?

Intel isn't talking at all about what kind of performance Goldmont will offer, and that's a bit concerning. Prior to the launch of Silvermont / Bay Trail, Intel was more than happy to talk up the new chip's core counts, performance, and anticipated per-core efficiency increases. Bay Trail offered a huge performance jump over Atom, competed well against AMD's Kabini, and gave Intel a foothold in tablets and low-cost 2-in-1s. Cherry Trail, which debuted last year in Microsoft's Surface 3, took the Silvermont architecture to 14nm (Airmont), but didn't really offer much in the way of battery life improvements. GPU performance increased significantly, but battery life was a wash.

This highlights a problem with modern devices that Intel may have a hard time fixing. The rush to include high-end displays at every price point can be good for consumers, but it can also shove OEMs toward devices that really shouldn't carry such ridiculous resolutions. These resolutions require higher backlight power, and there's a baseline per-pixel power consumption rate on top of that. In short, it's easy for laptops to soak up all the power improvements Intel can deliver by equipping them with more demanding displays. This creates a feedback loop in which Intel builds higher-end GPUs (that can drive the higher-resolution displays), which require more power, which eats away at the CPU's power budget.

At the same time, however, it's hard not to see Goldmont as critical if Intel has any aspirations left for the tablet or smartphone market. Silvermont was a good chip: it hit all of Intel's internal targets and offered a much better x86 tablet experience than Clover Trail ever could. Ultimately, however, it mostly anchored low-end Android and x86 tablets, and was outperformed by some of ARM's higher-end offerings (Apple's own cores remain in a class by themselves).
Goldmont could change that if Intel builds a core with substantially better IPC, but it has to do so at a price point that makes sense and at a net cost that allows it to compete against Qualcomm, Samsung, and other ARM vendors. The company’s silence on the base architecture is concerning. The GPU specs should be excellent, but GPU alone won’t win Intel mobile business.
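To make the margin arithmetic above concrete, here is a minimal back-of-the-envelope sketch in Python. Nothing in it is new data; it simply re-runs the $7.45 savings, $169 price point, and 3-4% OEM net margins already quoted in the post, used purely as illustrative inputs.

```python
# Back-of-the-envelope check of the Apollo Lake cost-savings math.
# All figures come from the post above; nothing here is official Intel data.

bom_savings = 7.45      # total per-unit savings Intel is promising (USD)
device_price = 169.00   # bottom of the targeted $169 - $269 "Cloudbook" band (USD)

share = bom_savings / device_price
print(f"Savings as a share of a ${device_price:.0f} device: {share:.1%}")
# -> about 4.4%, matching the figure in the post

# Compare against the 3-4% net margins low-end PC OEMs often report
for margin in (0.03, 0.04):
    per_unit_profit = device_price * margin
    print(f"At a {margin:.0%} net margin: ${per_unit_profit:.2f} profit per unit "
          f"vs. ${bom_savings:.2f} in promised savings")
```

As the output shows, the promised savings are larger than the entire per-unit profit an OEM typically earns on a device at this price, which is why a $7.45 reduction matters at all.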
-
[Battle - ŦŘǤŦ DΛSŦIN-™ vs InfiNiTy-] [winner DASTIN]
Derouiche™ replied to Mr.FijiWiJi's topic in GFX Battles
accept -
Nintendo insider Emily Rogers has debunked a few of the rumors recently circulating about the NX, stating the console won't be significantly more powerful than the PlayStation 4 and Xbox One, but will be good enough to represent a step forward compared with the Wii U. The latest rumors claimed the Nintendo NX would be equipped with a custom AMD 14nm Polaris-like GPU and Vulkan support. The GPU was said to be "2x as powerful as the OG PS4" and to "be close to the PS4K rumored specs". "Here is what multiple sources close to Nintendo are telling me about 10k's hardware rumors: The gimmick is made up. GPU is wrong. Power level is wrong." "The specs on NX are good, but a lot of the information being shared in this thread is incorrect," is instead the latest report from Rogers. "I was told that NX has good specs, but the info in this thread on the GPU and power level is just not correct. Sorry to burst everyone's hype." Whatever it is, the Nintendo NX is still a mystery, and in terms of specs we're not even convinced the Japanese platform holder is going to fully unveil it at E3 2016. The wait to learn what's behind this new console could be even longer.
-
-
[BATTLE GFX] ŦŘǤŦ DΛSŦIN-™ vs GIOVΛNNY- [WINNER GIOVANNY]
Derouiche™ replied to Derouiche™'s topic in GFX Battles
GIOVANNY WINS -
[BATTLE GFX] ŦŘǤŦ DΛSŦIN-™ vs GIOVΛNNY- [WINNER GIOVANNY]
Derouiche™ replied to Derouiche™'s topic in GFX Battles
V1 V2 -
Name of the opponent: Battle DASTIN VS GIOVANNY
Theme of work:
Type of work (signature, banner, avatar, Userbar, logo, Large Piece):
Size: 150,250
*Text:
Watermark: csbd/csblackdevil
Working time: 3/2 hours
-
[Battle] WereN. vs Hellwalks vs DASTIN [Winner ŦŘǤŦ DΛSŦIN-™]
Derouiche™ replied to Hellwalks's topic in GFX Battles
Battle over. V1 [DASTIN] is the winner -
Battle [The Ga[M]er vs FANTASSY] [Winner The GaMer]
Derouiche™ replied to The Ga[M]er.'s topic in GFX Battles
V2 SIMPLE blur/text/border -
Toyota sees big benefits from "guardian angel" autonomous driving: the driver is always in control until he or she messes up and a crash looms, and then the car takes over. Toyota will continue working on traditional, fully autonomous (Google- and Tesla-style) vehicles as well. But as-needed assistive driving has more promise for the near future because of the challenge of switching quickly from fully autonomous to driver-back-in-control. Even a 10-second switchover might be nine seconds too long if the car can't handle a dangerous situation.

Gill Pratt, CEO of Toyota Research Institute, said, "In the same way that antilock braking and emergency braking work, there is a virtual driver that is trying to make sure you don't have an accident by temporarily taking control from you." Pratt was speaking at an Nvidia industry conference last week in San Jose. There Toyota announced the guardian-angel project, reiterated its long-term commitment to fully autonomous driving, and said it will use three R&D facilities in the US, including a new facility in Ann Arbor, Michigan.

Odds don't favor zero-accident self-driving for a huge automaker

As Toyota sees it, the math doesn't hold up for the world's largest automaker to head straight to the highest level of self-driving. Toyota builds 10 million cars and trucks a year. Imagine they each drive 10,000 miles a year; over the life of those vehicles, that's on the order of 1 trillion (1,000,000,000,000) miles, perhaps 10 trillion miles counting all Toyotas on the road. Even with just a small fraction driving autonomously, Pratt envisions an "existential crisis" for Toyota and for the reputation of autonomous driving if only a handful of the self-driving vehicles suffered a component failure, or made a bad judgment that led to an accident, as the Google car did in thinking a Silicon Valley city bus would yield the right of way. Pratt has said Toyota would want a trillion miles of test driving to be confident a fully autonomous car worked. (Even in a Prius, that would be something like 20 billion gallons of gasoline.)

Toyota's cautious and practical island-hopping path to full autonomy has at least two intermediate steps:

Driver assistance technology. It's on Toyota and Lexus vehicles today: stop-and-go adaptive cruise control, blind spot detection (and lane change prevention), lane departure warning and lane keep assist, forward collision warning and braking when ACC is off, city / pedestrian detection and braking, and rear cross traffic alert and braking. All this costs less than $2,000.

Guardian angel assistance. The driver would always be in control (although with all the driver assists engaged on an interstate, he or she might not be fully alert) and the car's sensors would watch for a sudden dangerous situation. The system would determine whether the proper course is full-pedal emergency braking (something ACC doesn't do), evasive steering into the next lane, or possibly using the breakdown lane. The guardian angel might also pull in DSRC (dedicated short range communication) signals from cars just ahead to get advance notice of icy roads or sudden panic braking. It would take control only temporarily and then, when the danger passes, the driver would resume command.

Three US research facilities

Toyota Research Institute comprises two sites in the US, with a third being established in Ann Arbor, Michigan, in close proximity to the University of Michigan. The TRI-ANN facility will have a staff of 50. It will specialize in fully autonomous or chauffeured driving, artificial intelligence, robotics, and materials science. It's close to Mcity, a people-free cityscape for testing self-driving cars without accidentally running down real pedestrians and baby carriages. The TRI facility in Palo Alto, California (TRI-PAL) is near Stanford University; it will be the facility working on guardian angel driving. The TRI facility in Cambridge, Massachusetts (TRI-CAM) is close to MIT and will work on simulation and deep learning.

Other Toyota facilities include a simulator in Japan near Mount Fuji that can move in three dimensions, much like an aircraft simulator, to give test drivers a better feel for road conditions. The simulator will also be used to test guardian angel applications. According to MIT Technology Review, Toyota will attempt to reduce the energy consumption of the car's sensors and processors down from multiple kilowatts using neuromorphic chips, an architecture that processes data in parallel; typical computers process data sequentially.
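The fleet-mileage figures behind Pratt's argument are easier to follow with the arithmetic written out. This is only a rough sketch: the annual production and per-car mileage come from the post, while the 10-year service life and the roughly 100 million Toyotas on the road worldwide are illustrative assumptions added here, not numbers from the article.

```python
# Rough sketch of the fleet-mileage arithmetic behind Pratt's argument.
# Annual production (10M vehicles) and per-car mileage (10K miles/year)
# come from the post; the 10-year service life and ~100M Toyotas on the
# road worldwide are assumptions used only for illustration.

cars_built_per_year = 10_000_000
miles_per_car_per_year = 10_000
service_life_years = 10          # assumption
toyotas_on_road = 100_000_000    # assumption

# Miles driven each year by a single model year's worth of production
annual_miles = cars_built_per_year * miles_per_car_per_year
print(f"{annual_miles:,} miles per year")              # 100,000,000,000 (100 billion)

# Over the assumed service life, that one model year approaches a trillion miles
print(f"{annual_miles * service_life_years:,} miles")  # 1,000,000,000,000 (1 trillion)

# Counting every Toyota on the road over the same decade
fleet_decade_miles = toyotas_on_road * miles_per_car_per_year * service_life_years
print(f"{fleet_decade_miles:,} miles")                 # 10,000,000,000,000 (10 trillion)
```

Even a tiny failure rate multiplied across numbers this large is the "existential crisis" scenario Pratt describes, which is why Toyota wants something like a trillion miles of validation before trusting full autonomy.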
-
Running out of disk space on your PS4 with those monthly PlayStation Plus releases? Maybe those long load times are slowly eating away at your sanity. That tiny, slow drive that comes standard with the PS4 leads to nothing but heartbreak, but you do have options at your disposal. It's easy to swap out the default hard drive for something much better, but what about all the cool stuff already on your drive? Today, we'll walk through the process of backing up your files and how you can upgrade your console with little more than a screwdriver, a new internal drive, and an external backup. And if you're worried about losing your copy of P.T., this process will keep your game installations safe even if you suffer a drive failure.

Begin the backup process

To get the ball rolling, you'll need to plug in your external drive over USB. It needs to be formatted using FAT32 or exFAT, and if you want to back up a full drive, the external drive should have at least the same capacity as the internal drive. Launch the Settings app from the PS4's main menu. Select the System option, go down to Back Up and Restore, and then go into the Back Up PS4 sub-menu. At this point, you may be notified that your trophies can't be backed up. Of course, trophies sync over PSN, so that's not a problem. If you're sure that everything is already synced, just select "OK." If you want to make sure that all of your trophies are properly synced, you should back out, launch the Trophies app, and make sure that everything is copacetic before continuing.

Decide what gets backed up

Once you get to this menu, you'll likely have to wait a few minutes while the total data usage is tallied. Once it's done scrutinizing your drives, you'll see how much space is being taken up by captures, saved data, settings, and installed applications. On the right-hand side, you'll see how much space you'll have left on the external drive after the backup is finished. See that checkbox next to the applications section? Unchecking it will skip the backup process for your game installations. Since almost all apps and games can be downloaded again from PSN at any point, you can skip this part of the backup if you'd like. However, it's possible that some titles will eventually become inaccessible (like P.T.), so you're better off safe than sorry. After you've chosen what you want to back up, hit the "Next" button.

Make the backup

From here, you'll be able to give the backup a descriptive name. When you're ready to proceed, press the "Back Up" button. You'll be greeted with a screen that says "Preparing to back up," and you'll see a progress bar. Once finished, the screen will go blank, and your PS4 will reboot. After a small wait, you'll be told that you're in the process of backing up your data, and that you shouldn't turn off your PS4. This part of the process will almost certainly take the most time. Depending on the size of your backup and the speed of your external drive, it could even take hours to finish. Once it's done backing up your data, the screen will say "Backup complete. The PS4 will restart." Make sure your controller is on by pressing the "PS" button on your DualShock 4, and then select "OK" on the screen. Once the reboot is finished, you're all set. You can use your PS4 as normal, or follow the rest of this guide to upgrade your drive and restore your data.

Buy a replacement drive

If you're going to bother swapping out your hard drive, you should definitely pick a replacement that's both faster and higher capacity. When I bought my PS4 at launch, I snagged this 1TB 7200RPM drive from HGST. It's not the biggest or fastest, but it's a nice step up from the default 500GB 5400RPM drive, and it's quite affordable. You can go with an SSHD or SSD if you're willing to spend the extra money, but the performance improvements will vary wildly depending on which games you play and how you use your PS4. Just make sure you buy a 2.5-inch SATA II-compatible drive that's 9.5mm thick or smaller, and everything should work out fine. If you want to use a bigger drive on your PS4, the Nyko Data Bank is an affordable way to make that happen. There have been some issues in the past with drives with a capacity over 2TB, but the latest firmware seems to have solved the issue. Your mileage may vary.

Open your PS4

Once you have your drive ready to go, power down your PS4 and unplug everything. Move it over to a large open surface, and slide off the shiny part of the PS4's case. Set it aside.

Replace the drive

With the top of the case off, you'll see where the drive sits right away. With a Phillips screwdriver, remove the single screw holding the drive in place. Slide out the tray, swap the drives, replace the tray, and secure the screw. Slap the case on once more, and plug everything back in.

Initialize

When you boot up the PS4 with a new drive, you'll need to initialize it. Button through the menu, and follow the on-screen prompts. If everything goes as planned, the PS4 should boot to the main menu once the initialization is finished. Now you can log into your PSN account again.

Navigate to the restore menu

When you're ready to restore from your external drive, launch the Settings app from the PS4 main menu. Go to System, and then button through to the Back Up and Restore menu. Now, go to Restore PS4.

Restore from a backup

From here, select which backup you want, and then press the "Restore" button. Depending on the drive and the size of the backup, this process could take a very long time to complete. Follow the prompts, don't turn off your PS4 during the process, and have patience. Once everything is finished, everything will be back where it belongs.

Mission accomplished

Finally, your new hard drive is in place, and you have room to keep your library installed. And if you're looking to give your drive a workout, check out some of the free-to-play titles available on the PS4. This is the perfect opportunity to expand your horizons without having to uninstall your favorite games.
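If you want to sanity-check the capacity requirement from the backup step before you start, here is a minimal sketch you could run on a PC with the external drive attached. The drive path and planned backup size below are hypothetical placeholders, and the PS4's own backup menu also reports the remaining space, as described above; this is just an optional pre-flight check.

```python
# Optional pre-flight check: confirm the external drive has at least as much
# free space as the data you plan to back up from the PS4.
# EXTERNAL_DRIVE and PLANNED_BACKUP_BYTES are hypothetical placeholders;
# adjust them for your own mount point and drive size.

import shutil

EXTERNAL_DRIVE = "E:\\"              # e.g. a drive letter on Windows, or a mount path elsewhere
PLANNED_BACKUP_BYTES = 500 * 10**9   # e.g. a full 500 GB internal drive

usage = shutil.disk_usage(EXTERNAL_DRIVE)
print(f"External drive: {usage.free / 10**9:.1f} GB free "
      f"of {usage.total / 10**9:.1f} GB total")

if usage.free >= PLANNED_BACKUP_BYTES:
    print("Enough free space for the backup.")
else:
    shortfall = (PLANNED_BACKUP_BYTES - usage.free) / 10**9
    print(f"Not enough space: short by about {shortfall:.1f} GB.")
```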
-
[BATTLE] d3v0uTT™ vs WereN. [Winner d3v0uTT™]
Derouiche™ replied to Adrian P.'s topic in GFX Battles
V1 effect -
I win, good luck next time TC
-
[Battle] CaNaByS* vs WereN. [Winner CaNabyS*]
Derouiche™ replied to Adrian P.'s topic in GFX Battles
V2 text/C4D -
In principle, communicating with light is much, much easier than communicating with electricity. We've been doing it for much longer, in technologies ranging from signal fires to fiber-optic networks, because photons have the capacity to move data far more quickly than electrons. Yet light also has many frustrating problems that electrons don't, problems that have kept light from displacing electricity on the nanometer scales of modern computing. For a long time, the major impediment to a photonic revolution in computing, and an exponential increase in computer speed, has been a sort of zero-sum game between three major players: size, power, and heat.

The thing about light is that by atomic standards it's really very large. In general, the smallest useful wavelength of light for computing has been in the infrared range, around 1000 nm, while improvements in silicon transistors have seen them reach and even pass the 10 nm threshold. Lithography has come up with incredibly clever and complex ways of diffracting light to etch silicon wafers with details smaller than the wavelength of the light doing the etching (pretty incredible stuff), but that's child's play compared to the sorts of super-fast, super-complex communication that we would require inside a modern computer processor. Existing techniques for bending light waves just won't do the job.

To get around the size problem and make light useful on the scales we require for next-gen computer performance, engineers have turned to something called "surface plasmons." These are essentially electrons that have been excited so that they dance along the surface of a material, exploiting quantum weirdness to behave and travel more like a photon than an electron. It's a bit of a halfway point between electricity and light, using many of light's behaviors but staying physically confined to a much, much smaller space right at the surface of the wire. If created on a regular copper wire, these surface plasmons can travel much faster than a normal electron in the same medium, and even closely approach the speed of light.

The speed at which we can communicate over a distance matters more when we have more distance over which to communicate, so the first assumed computing application for photonics is in the relatively long-distance communication between processor cores. Right now, copper wire connects these super-fast components to allow them to work together, but the communication between cores is starting to lag further and further behind the speed of any one of those cores individually. So, if we want to utilize all the potential power of, say, a 64-core processor, we'll need to keep those cores coordinated with something much faster than electrons moving through copper wire; something as fast as light would be good.

The problem when you switch from light waves to surface plasmons, though, is that plasmons very quickly lose their power; they move very fast, but tend to peter out long before they reach their destination. To get them to maintain enough of their power all the way from source to destination, engineers can "pump" the wire into an active plasmonic component, essentially expending a bit of energy to keep the wire in a state where the surface plasmons won't lose a ton of energy as they travel. But that creates its own problem: heat.

Surface plasmons solve the wavelength problem, and active plasmonics solves the surface plasmon power problem, but now we've got to keep all these actively pumped components from overheating due to all the excess energy we're adding. This has been a tough problem to crack, and it's led to the assumption that any photonic computing system would need to be either cooled with some super-advanced cooling system or made of some exotic wiring material that's much better at maintaining surface plasmon signals without significant help. Both areas of research are well underway, but a recent study from the Moscow Institute of Physics and Technology (MIPT) has shown that with a good enough regimen of existing cooling technologies, actively pumped copper wire could give us both the plasmon-slipperiness and the heat dissipation we need to realistically run a consumer device. That means that as conventional computer architecture gets more complex and adds more processing cores, we may actually see the associated speed increase we'd want and expect.

Of course, the idea of photonic computing goes beyond just maintaining coordination between processing cores made of electronic transistors. Not only is it very time- and energy-inefficient to be switching your signals back and forth between photons and electrons, but so-called optical transistors could have much higher bandwidth than electronic ones. It will require a number of additional breakthroughs, but research is underway, like a recent study looking for an affordable material that could do accurate, thin-film polarization of light signals. Graphene and carbon nanotubes have a ton of possible utility for optical computing, since they could transport surface plasmons and make the advantages of photonics work at the nano-scale.

A real optical computer is much further out than a hybrid, which would use optical tech to coordinate conventional electronic cores. Once created, though, a fully optical computer could possibly allow us to restart Moore's Law. It won't hold a candle to some future, comprehensive quantum computer, but until we get such a thing, an optical computer is one of our best bets to restart exponential growth in computing power.
-
The Oculus Rift is facing serious shipping issues: day-one preorder customers have been told their hardware may not ship until late May or the end of June. That's the news reported by multiple individuals following a shipping update from Oculus on April 12. New orders placed as of this writing still show an August ship date, but whether the company can meet it is anyone's guess.

RoadtoVR has screenshots from individuals who ordered headsets within 30 minutes of preorders opening who are now being told to expect their hardware by early June, while a customer who preordered within two hours has been told to expect hardware between June 10-23. If the vast majority of Oculus' preorder sales were reserved in the first few hours, the company can probably clear its backlog relatively quickly and meet its new August ship date for current orders. If the majority of orders weren't front-loaded, it could take additional months for the company to clear its backlog.

Oculus hasn't commented on its problems beyond noting that it was facing an "unexpected component shortage" in late March, but these problems would be easier to understand if the company were still a scrappy Kickstarter project rather than a self-described industry titan that wants to upend PC gaming, reinvent how we interact with computers, and is backed by Facebook's cash. Facebook has designed its own servers for at least three years and contracts with Chinese and Taiwanese ODMs (original design manufacturers) to build them.

It's not clear if these delays will affect Kickstarter backers. It seems that at least some KS backers are getting devices from a different allocation pool than other customers, but Oculus' remarks on this have been somewhat contradictory, and apparently some shipping estimates have been revised or removed as the company tweaks its schedule. Issues like this are one reason we advised against pre-ordering an Oculus Rift: not because we thought the hardware would be bad, but because Oculus has no experience coordinating the launch of a major product and there's no guarantee of when hardware will actually be available. The company's Terms of Service have also raised eyebrows on Capitol Hill, with Senator Al Franken requesting the company provide additional detail on its privacy policies and practices by mid-May.

What about the Vive?

HTC is still claiming that all Vive preorders will be fulfilled in the same month shown on the order confirmation email. While that's still open to considerable interpretation, the company isn't backing down from these promises. Orders supposedly began shipping on April 5, and the company claims to be using a first-in, first-out order fulfillment policy. Oculus has captured much of the mindshare around VR, but if Vive can keep to its shipping schedule, that could quickly change. Given how new VR is, there's not much of an incumbent advantage for Oculus to exploit. If Oculus' ship date continues to slip, impatient customers may transfer their preorders to the Vive or choose to wait and see which alternative VR solutions prove viable (possibly at lower price tags).
-
New Nintendo NX rumors leak every day now, and we'd better get used to it until the Japanese platform holder comes out and states what's true and what's not about the still-mysterious console. Today we have a couple more pieces of information pointing at Wii U ports and hardware capabilities. A NeoGAF user claims to have learned from multiple sources, which he describes as pretty reliable, that the Nintendo NX is going to be more powerful than the PlayStation 4, and that games like Super Mario Maker and Splatoon, apart from the already heavily rumored Zelda U and Super Smash Bros., are launching on NX as well. It seems Nintendo is currently working out how not to lose all the existing user-created Mario Maker levels, and will then be ready to make a public announcement, presumably in time for the NX's official presentation. Dev kits are also said to be out there already, but NDAs are pretty strict and developers aren't likely to risk anything at this point. Of course this is a rumor, again, so take it with the usual dose of skepticism; all the info above could turn out to be fake.
-
accept
-
V1, the text and brush on the side are perfect
-
Look at the video, it will help. Good luck
-
V1 C4D/BRUSHES/EFFECTS
-
[Battle] WereN. vs Hellwalks vs DASTIN [Winner ŦŘǤŦ DΛSŦIN-™]
Derouiche™ replied to Hellwalks's topic in GFX Battles
Can I join? :v