Everything posted by XAMI
-
What just happened? There are few events PC gamers look forward to more than Steam's annual Summer Sales. In addition to offering ridiculous savings, the Sales tend to include fun meta-events, activities, minigames, and rewards for users to sink their teeth into. This year's Steam Summer Sale has a "Grand Prix" racing theme, and it just kicked off today.

Once you log in to the client and visit the store page (which you may have trouble doing -- Steam's servers are getting slammed at the moment), you'll see a giant animated banner at the top, which will periodically show a few cars racing by. If you click on the banner, you'll get all of the details and rules you'll need to know about this year's Summer Sale.

We'll discuss the events of 2019's Summer Sale in a moment, but first, we'll briefly go over a few of the more notable games that are discounted. Assassin's Creed: Odyssey and Monster Hunter World are both available for $29.99 (50 percent off), while the newly-released Devil May Cry 5 can be had for $39.59 (34 percent off). If simulation games are more your speed, Two Point Hospital is now available for $17.49, significantly less than its ordinary $34.99 price tag. In short, just about any game you could want is available at a discount, and probably a steep one.

Moving on to the Sale's theme, Steam is letting users join one of five racing teams this year: Team Pig, Team Hare, Team Cockatiel, Team Tortoise, and Team Corgi -- naturally, I joined the last one. By purchasing various discounted games during this year's Sale, you'll expand your team's "Boost Meter" capacity. To fill the meter itself, though, you won't need to spend a dime -- just complete the event's various Quests and earn achievements in your games. Some quests include killing a rare monster in Starbound, opening up a container in Stardew Valley, or capturing a settlement in Total War: Warhammer 2. Once you've filled your team's Boost Meter, you can, well, boost.
Boosting your team gives you extra distance over competing teams, while also adding a few Nitro Boost Points, which provide your team with a seemingly permanent speed multiplier. As previously stated, Valve is known for giving its players rewards for participating in these meta-events, and this year's Grand Prix Sale is no different.

The more you aid your team, the more Grand Prix tokens you'll earn -- these can be spent at the Pit Stop for rewards like chat emoticons, profile backgrounds, and "more." If that's not enough, random members of the teams in first, second, and third place will get the top item on their Steam Wishlists for free. Once the Grand Prix ends, random members of the "overall" winning teams will be given "up to" three of the top items from their Wishlists.

Steam's 2019 Summer Sale will run from today (June 25) to July 9, so you should have plenty of time to take advantage of all it has to offer.
-
I'm working every day, 8-10 hours a day, and I come home tired. On top of that, I'm studying (virtual university) with big assignments to finish in limited time, and I can still set aside some small amount of time for the forum and TS3 TO GIVE REAL HELP, not to listen to bullshit like "follow me please". Be serious!
-
Through the looking glass: More security is always welcome, although in reality it's only going to marginally slow down dedicated hackers and make them come up with new attack vectors. Phishing and account theft will continue to be a problem, and odds are some Instagram victims will still turn to white-hat hackers for help when Instagram can't help them quickly enough.

Instagram is trialing new security techniques that could make it easier for users to regain access to accounts that have been hacked, and harder for nefarious individuals to get their hands on accounts to begin with. In an e-mailed statement to Motherboard, an Instagram spokesperson said they heard from the community that existing security measures aren't enough and that people are struggling to regain access to hacked accounts.

As Motherboard outlines, after repeatedly entering an incorrect password or clicking the "Need more help" option, Instagram will prompt you to enter the e-mail address or phone number linked to your account, or the ones used when you signed up for Instagram. The service will then send a six-digit code that'll allow you to regain access to your account.

What happens if the hacker also has access to your e-mail account and control of your phone number? "When you re-gain access to your account, we will take additional measures to ensure a hacker cannot use codes sent to your e-mail address [or] phone number to access your account from a different device."

Taking it a step further, the Instagram spokesperson said the company will ensure that usernames are safe "for a period of time after any account changes" so as to thwart someone from taking ownership of one immediately after a hack. This feature is currently only available to Android users but is rolling out on iOS, we're told.

Shawn K. (2019, June 17). Instagram is testing new ways to recover accounts stolen by hackers.
Retrieved from https://www.techspot.com/news/80544-instagram-testing-new-ways-recover-accounts-stolen-hackers.html
-
Welcome
-
In brief: Samsung isn't the only company whose foldable phone has been delayed. Huawei's rival Mate X device is also being pushed back, as the company wants to spend more time making sure the hardware is faultless.

Following the disastrous Galaxy Fold launch, which saw the handset delayed from its April launch with no new release date in sight, Huawei wants to avoid a similar situation occurring with the Mate X. The Chinese firm told CNBC and The Wall Street Journal that the phone's July launch date has now been pushed back to September to ensure the folding mechanism works as intended and that apps are compatible with the device when it's opened out. "We don't want to launch a product to destroy our reputation," a spokesperson for Huawei told CNBC.

Tech reviewers found that the gap between the Galaxy Fold's display and body at the hinge was allowing debris to enter, causing damage to the display. Additionally, some people were removing the screen membrane after mistaking it for a regular screen protector. The phone's launch was subsequently postponed, and over 45 days later, we've only just heard that a new release date will be announced in the "coming weeks."

While the Mate X has a different, arguably better out-folding design that turns the single 8-inch display into a smartphone, Huawei still wants to ensure it ships with no Galaxy Fold-like problems. With Huawei currently on a US entity list preventing American firms from doing business with it without a license, there are questions over whether the Mate X will ship with Android. A spokesperson told CNBC it would have Google's OS because the handset was launched before the ban, though The Wall Street Journal says this isn't set in stone and the Mate X might ship with Huawei's own operating system. Earlier this week, it was reported that Huawei had indefinitely postponed the launch of a new MateBook laptop due to its US troubles.

Rob T. (2019, June 14).
Huawei delays launch of Mate X to avoid Galaxy Fold-style issues. Retrieved from https://www.techspot.com/news/80520-huawei-delays-launch-mate-x-avoid-any-galaxy.html
-
As game streaming services continue to make waves in the tech industry, many companies are starting to lean into the concept more heavily. Steam, for example, has just decided to overhaul its "In-Home Streaming" service, which allowed you to stream games from your main PC to other devices in your house. Now, the service has been rebranded "Steam Remote Play," and it's "experimentally" available to the general public starting today.

Whereas before you were limited to streaming your games from your PC to other devices on the same home network, that restriction has now been lifted. As Valve puts it, your Steam clients can now stream games to each other "wherever they are," as long as you have a decent network connection on both sides (the company didn't offer specifics) and are reasonably "close" to a Steam datacenter. Even with those restrictions in mind, this is a pretty nice update for those who tend to travel frequently.

It should be pointed out that Steam Remote Play is not the same thing as Google's Stadia cloud gaming service. Your host system and the machine you'll be streaming to both need to be switched on and paired. Additionally, you can only play games you actually own and have installed on your host system; you aren't playing the games from Valve's servers.

To try out Remote Play, all you'll need to do is log into your Steam account from a non-primary computer and open up the "Remote Play" settings menu. You'll be given an option to add and pair the device with your main machine. There are also "Client options" that let you tweak things like resolution and audio quality.

Cohen C. (2019, June 14). Steam's 'Remote Play' feature lets you access your game library away from home. Retrieved from https://www.techspot.com/news/80525-steam-remote-play-feature-you-access-game-library.html
-
Why do some people keep using fake accounts to give themselves "likes" and comments in topics? 🤷‍♂️
-
Highly anticipated: On Sunday, CD Projekt Red showed up at Microsoft's pre-E3 conference to tease Cyberpunk 2077 once again. In addition to finally giving us a release date for the game, the studio revealed that Keanu Reeves will appear in-game as an important NPC. However, that was only the start -- far more information has surfaced about the game now, and we've done our best to summarize it below. First, some brief background for those who have never heard of this game before: Cyberpunk 2077 is a true first-person, open-world RPG set in a sprawling, California-based futuristic metropolis known as Night City. Players will create their own character at the start (complete with gender, attribute, and appearance customization) and progress through the game's many main missions and side quests in whatever way they choose; not unlike an Immersive Sim like Deus Ex or Dishonored. You can play as a sneaky hacking-centric Netrunner, a gadget-loving Techie, a battle-ready Solo, or a mixture of all three. Moving on to actual E3 details, let's start with one from Nvidia. The company confirmed in a blog post today that Cyberpunk 2077 will feature full real-time ray tracing support via RTX technology. This isn't a surprise given CDPR's long relationship with Nvidia, but it's still nice to hear that more titles will support the GPU maker's expensive hardware. If you've been following TechSpot for a while, you may recall that one of our main complaints regarding Nvidia's latest GPUs is the lack of games that feature robust (and performance-friendly) RTX support. Moving on to gameplay information, we've been given quite a bit to chew on thanks to a breakdown written by IGN after they saw a private, behind-closed-doors demo. Unlike last year's public and private Cyberpunk 2077 gameplay videos, which emphasized full-fledged combat and location variety, CD Projekt Red decided to treat journalists and influencers to a more focused experience this time around. 
The new demo is set in a single portion of Night City known as "Pacifica." Though the district was once an ambitious tourist destination, it's now fallen into disrepair -- the police have abandoned it, and gangs run the show. As a result, Pacifica is easily the most dangerous district of Night City, so regular citizens don't tend to visit unless they have no other choice. Perhaps as a result of the danger Pacifica poses to the unprepared, CDPR opted to go for a less-explosive gameplay approach in the new demo. Thanks to that choice, we know that the game will feature a full stealth system: you can use "Netrunning" (hacking) skills to remotely distract, incapacitate, and otherwise disrupt enemies, or you can get in close for brutal (lethal or non-lethal) stealth takedowns with a variety of melee weapons (some implants, some not) and even your own two fists. On a similar note, CDPR revealed that you can complete the entire game without killing a single soul. Though this means non-lethal playthroughs are possible, they will be quite difficult, according to CDPR's Miles Tost. Night City is a dangerous place, after all, and enemies won't necessarily live by the same code you do. Regardless, expect the game to react to the way you play and understand that a non-lethal approach will not always lead to a "good" outcome. Stealth aside, CDPR elaborated further on the RPG systems of Cyberpunk 2077, including the "Lifepath" mechanics that have been carried over from the game's pen-and-paper source material, Cyberpunk 2020. The Lifepath system lets you create your character's backstory, choosing things like their childhood hero, where they came from (were they a street kid or a former Corporate employee?), and various other important details. These choices don't just exist for the fun of it: they have a real impact on dialogue and missions as a whole. 
If you decide to cooperate with a gang at some point, for example, it may be easier for you to earn their trust if you can empathize with them based on your own troubled past.

In terms of other RPG systems, we now have a much better idea of how Cyberpunk 2077's progression systems will work. There's the "Street Cred" leveling system, which unlocks shops, missions, and other features as it goes up, and throughout the game you'll earn "perk points" that you can spend on a number of skill categories. These categories include blades, rifles, handguns, assassination, engineering, hacking, and more. Though we don't know precisely what all of the available perks are, we do know that the interface resembles a "motherboard," with various perks laid out along cables "going out from the center." Perks will let you improve your effectiveness with various weapon types, unlock new abilities, or simply allow you to equip new weapons.

Of course, given that Cyberpunk 2077 is set in the future, implants and "Cyberware" will be a major focus of the game. As we saw in last year's public demo, players will be able to visit legal or illegal "Ripperdocs" throughout Night City to get cybernetic eyes, arms, legs, and even deadly weapons installed into their bodies. Be careful, though -- each piece of Cyberware has a "humanity cost" associated with it (we don't know what that affects just yet, however).

The RPG systems of 2077 bleed over into its dialogue mechanics as well. Your perk choices, Lifepath, attributes, and overall specialization will unlock new dialogue options throughout the game. A skilled Techie, for example, might be able to "talk shop" with an engineer, while a Netrunner will be able to better keep up with a fast-talking hacker. Furthermore, thanks to Cyberpunk's "Interactive Scene System," you can move around freely during discussions, looking at and commenting on various objects in the environment to prompt new responses from NPCs.
Apparently, NPCs will even react to the clothes on your back in some cases -- a nice touch, particularly if the final game features even half as many clothing options as CDPR claims it will.

A major focus of this year's private Cyberpunk demo was hacking. As stated before, CDPR went for a quieter approach this time, so journalists got to learn quite a bit more about how the game's Netrunning system works. You can jack into computer systems to find additional information about your mission or NPCs (think Deus Ex), upload viruses into enemies to cause them to commit suicide, and hijack cameras for a bit of Watch Dogs-like remote hacking.

Naturally, with the immense size of Night City firmly in mind, Cyberpunk 2077 will feature various vehicles to make traversing the landscape a bit faster. Last year, we saw the player drive a "Quadra" supercar, but this time, journalists got to see a "Yaiba Kusanagi" motorcycle in action. Unfortunately, IGN says the driving looked "fairly arcadey" and "sort of stiff," echoing two major complaints many people had regarding 2018's public gameplay demo. On the bright side, there is an in-game radio that players can adjust based on their music preferences, and you'll be able to swap between first and third person while driving (but only while driving).

We've covered a lot of ground so far, but E3 is far from over. Expect us to come back and update this article with new information about Cyberpunk 2077 as it gets revealed. If you're already sold on the game based on what you've heard so far, feel free to drop by CD Projekt Red's official website to pre-order the Standard or Collector's Edition for your platform of choice.

Cohen Coberly. (2019, June 11). Cyberpunk 2077: Everything we've learned at E3. Retrieved from https://www.techspot.com/news/80475-cyberpunk-2077-everything-weve-learned-e3-far.html
-
What just happened? Google just confirmed the Pixel 4 and the square housing of its rear-facing cameras. While we don't know what the internal specs are, we can be fairly certain that the camera will be top notch as usual.

Well, that was... unexpected. With all the Pixel 4 leaks going around, Google has just outright confirmed that the Pixel 4 exists and what it looks like from the outside. Just yesterday, we reported on the leaked Pixel 4 renders that were making the rounds; Google's photo confirms that those renders were actually correct. The Pixel 4 will indeed move from a single rear-facing camera to a multi-camera setup, with the entire array sitting in a square enclosure similar to the rumored camera bump of the new iPhones.

That said, the picture that Google tweeted out doesn't show the front of the device, so we don't know exactly what it will look like. However, since the rear-mounted fingerprint sensor is missing, we can surmise that Google is either using a side button as the sensor or an in-display fingerprint sensor. With the proliferation of in-display fingerprint readers as of late from the likes of Samsung and OnePlus, I'd be fairly certain of the latter.

Internally, the Pixel 4 will likely have the requisite high-end specs: a Snapdragon 855 processor, 6 to 8GB of RAM, and at least 64GB of storage. Pixels usually don't offer storage expansion via microSD. Ars Technica reports that Google's Project Soli may be supported on the Pixel 4. Project Soli is basically a tiny chip that uses radar to detect hand motions above the device. It will be interesting to see if those air gestures will actually be useful, unlike Samsung's attempt with Air View on the Galaxy S4. Regardless, Google is clearly trying to go next level with the camera and is teasing a lot more for the future.

David M. (2019, June 12). Google just confirmed the Pixel 4. Retrieved from https://www.techspot.com/news/80493-google-confirmed-pixel-4.html
-
<05:59:50> "#ahmad AFK" is now known as "#ahmad I Love Mr.Love"
-
A little kid who just knows how to criticize without any foundation. As far as I can see, your answer here is nothing smart; be careful with your words, you are not talking to your family. ----- Now, @Mr. Goofy? ♔♔♔, have you found the solution to the problem yet? I suggest maybe changing the password, just in case. Likewise, if the problem keeps occurring, send me a PM and I'll try to dig into the problem more deeply.
-
You're playing the latest Call of Mario: Deathduty Battleyard on your perfect gaming PC. You're looking at a beautiful 4K ultra widescreen monitor, admiring the glorious scenery and intricate detail. Ever wondered just how those graphics got there? Curious about what the game made your PC do to make them? Welcome to our 101 in 3D game rendering: a beginner's guide to how one basic frame of gaming goodness is made.

Each year, hundreds of new games are released around the globe -- some are designed for mobile phones, some for consoles, some for PCs. The range of formats and genres covered is just as comprehensive, but there is one type that is possibly explored by game developers more than any other kind: 3D. The first ever of its ilk is somewhat open to debate, and a quick scan of the Guinness World Records database produces various answers. We could pick Knight Lore by Ultimate, launched in 1984, as a worthy starter, but the images created in that game were, strictly speaking, 2D -- no part of the information used is ever truly 3-dimensional.

So if we're going to understand how a 3D game of today makes its images, we need a different starting example: Winning Run by Namco, from around 1988. It was perhaps the first of its kind to work out everything in 3 dimensions from the start, using techniques that aren't a million miles away from what's going on now. Of course, any game over 30 years old isn't going to truly be the same as, say, Codemasters' F1 2018, but the basic scheme of doing it all isn't vastly different.

In this article, we'll walk through the process a 3D game takes to produce a basic image for a monitor or TV to display. We'll start with the end result and ask ourselves: "what am I looking at?" From there, we'll analyze each step performed to get that picture we see. Along the way, we'll cover neat things like vertices and pixels, textures and passes, buffers and shading, as well as software and instructions.
We'll also take a look at where the graphics card fits into all of this and why it's needed. With this 101, you'll look at your games and PC in a new light, and appreciate those graphics with a little more admiration.

Aspects of a frame: pixels and colors

Let's fire up a 3D game, so we have something to start with, and for no reason other than it's probably the most meme-worthy game of all time, we'll use Crytek's 2007 release Crysis. In the image below, we're looking at a camera shot of the monitor displaying the game. This picture is typically called a frame, but what exactly is it that we're looking at? Well, by using a camera with a macro lens, rather than an in-game screenshot, we can do a spot of CSI: TechSpot and demand someone enhances it! Unfortunately, screen glare and background lighting are getting in the way of the image detail, but if we enhance it just a bit more...

We can see that the frame on the monitor is made up of a grid of individually colored elements, and if we look really closely, the blocks themselves are built out of 3 smaller bits. Each triplet is called a pixel (short for picture element), and the majority of monitors paint them using three colors: red, green, and blue (aka RGB). For every new frame displayed by the monitor, a list of thousands, if not millions, of RGB values needs to be worked out and stored in a portion of memory that the monitor can access. Such blocks of memory are called buffers, so naturally the monitor is given the contents of something known as a frame buffer.

The building blocks needed: models and textures

The fundamental building blocks of any 3D game are the visual assets that will populate the world to be rendered. Movies, TV shows, theatre productions and the like all need actors, costumes, props, backdrops, lights - the list is pretty big. 3D games are no different, and everything seen in a generated frame will have been designed by artists and modellers.
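To put a rough number on the frame buffer described earlier, here's a back-of-the-envelope sketch. It assumes the common case of 8 bits (1 byte) per color channel, which isn't universal -- HDR formats use more:

```python
# Size up one frame's worth of RGB values, assuming 1 byte per channel.
def frame_buffer_bytes(width, height, channels=3, bytes_per_channel=1):
    """Return the size in bytes of one frame of pixel data."""
    return width * height * channels * bytes_per_channel

# A 4K frame: 3840 x 2160 pixels, each holding an R, G, and B value.
size = frame_buffer_bytes(3840, 2160)
print(f"{size / (1024 ** 2):.1f} MiB per frame")  # ~23.7 MiB
```

That's over 8 million pixels' worth of values to work out, many times per second -- which is why the rest of this article's pipeline exists.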
To help visualise this, let's go old-school and take a look at a model from id Software's Quake II. Launched over 20 years ago, Quake II was a technological tour de force, although it's fair to say that, like any 3D game two decades old, the models look somewhat blocky. But this allows us to more easily see what this asset is made from.

In the first image, we can see that the chunky fella is built out of connected triangles - the corners of each one are called vertices (or vertex, in the singular). Each vertex acts as a point in space, so it will have at least 3 numbers to describe it, namely its x, y, z coordinates. However, a 3D game needs more than this, and every vertex will have some additional values, such as the color of the vertex, the direction it's facing in (yes, points can't actually face anywhere... just roll with it!), how shiny it is, whether it is translucent or not, and so on.

One specific set of values that vertices always have is to do with texture maps. A texture map is a picture of the 'clothes' the model has to wear, but since it is a flat image, it has to contain a view for every possible direction we may end up looking at the model from. In our Quake II example, we can see that it takes a pretty basic approach: front, back, and sides (of the arms). A modern 3D game will actually have multiple texture maps for its models, each packed full of detail, with no wasted blank space in them; some of the maps won't look like materials or features at all, but instead provide information about how light will bounce off the surface. Each vertex will have a set of coordinates in the model's associated texture map, so that the texture can be 'stitched' to the vertex - this means that if the vertex is ever moved, the texture moves with it. So in a 3D rendered world, everything seen will start as a collection of vertices and texture maps.
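The per-vertex values just described can be pictured as a simple record. This is purely an illustrative sketch -- the field names are ours, and real engines pack this data far more tightly into binary buffers:

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    """One point of a model, with the extra attributes described above."""
    position: tuple  # (x, y, z) -- where the point sits in space
    normal: tuple    # (nx, ny, nz) -- the direction the point "faces"
    color: tuple     # (r, g, b)
    uv: tuple        # (u, v) -- coordinates into the texture map

# One corner of a model: white, facing straight up, a quarter of the
# way across the texture and three-quarters of the way down it.
v = Vertex(position=(1.0, 0.5, -2.0),
           normal=(0.0, 1.0, 0.0),
           color=(255, 255, 255),
           uv=(0.25, 0.75))
```

The `uv` pair is the 'stitching' mentioned above: however the position changes, the same spot on the texture stays attached to this vertex.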
They are collated into memory buffers that link together -- a vertex buffer contains the information about the vertices; an index buffer tells us how the vertices connect to form shapes; a resource buffer contains the textures and portions of memory set aside to be used later in the rendering process; and a command buffer holds the list of instructions for what to do with it all. Together, this forms the required framework that will be used to create the final grid of colored pixels.

For some games, this can be a huge amount of data, because it would be far too slow to recreate the buffers for every new frame. Games either store all of the information needed to form the entire world that could potentially be viewed in the buffers, or store enough to cover a wide range of views and then update it as required. For example, a racing game like F1 2018 will have everything in one large collection of buffers, whereas an open world game, such as Bethesda's Skyrim, will move data in and out of the buffers as the camera moves across the world.

Setting out the scene: The vertex stage

With all the visual information to hand, a game will then commence the process of getting it visually displayed. To begin with, the scene starts in a default position, with models, lights, etc., all positioned in a basic manner. This would be frame 'zero' -- the starting point of the graphics, and it often isn't displayed, just processed to get things going. To help demonstrate what is going on with the first stage of the rendering process, we'll use an online tool at the Real-Time Rendering website. Let's open up with a very basic 'game': one cuboid on the ground. This particular shape contains 8 vertices, each one described via a list of numbers, and between them they make a model comprising 12 triangles. One triangle, or even one whole object, is known as a primitive. As these primitives are moved, rotated, and scaled, the numbers are run through a sequence of math operations and update accordingly.
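The cuboid makes the vertex/index buffer split concrete: 8 corners stored once, and 12 triangles that refer to them by index rather than repeating the coordinates. A minimal sketch (the actual coordinates here are our own stand-ins for a unit cube):

```python
# Vertex buffer: the 8 corners of a unit cube, each stored exactly once.
vertex_buffer = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),  # back-face corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),  # front-face corners
]

# Index buffer: 12 triangles, each a triplet of indices into vertex_buffer.
# Two triangles per face, six faces in total.
index_buffer = [
    (0, 1, 2), (0, 2, 3),  # back
    (4, 6, 5), (4, 7, 6),  # front
    (0, 4, 5), (0, 5, 1),  # bottom
    (3, 2, 6), (3, 6, 7),  # top
    (0, 3, 7), (0, 7, 4),  # left
    (1, 5, 6), (1, 6, 2),  # right
]
```

Sharing corners this way is the whole point of an index buffer: each vertex is touched by several triangles, but its data only has to be stored (and transformed) once.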
Note that the model's point numbers haven't changed, just the values that indicate where it is in the world. Covering the math involved is beyond the scope of this 101, but the important part of this process is that it's all about moving everything to where it needs to be first. Then, it's time for a spot of coloring.

Let's use a different model, with more than 10 times the number of vertices the previous cuboid had. The most basic type of color processing takes the color of each vertex and then calculates how the color of the surface changes between them; this is known as interpolation. Having more vertices in a model not only helps to make a more realistic asset, but it also produces better results with the color interpolation.

In this stage of the rendering sequence, the effect of lights in the scene can be explored in detail; for example, how the model's materials reflect the light can be introduced. Such calculations need to take into account the position and direction of the camera viewing the world, as well as the position and direction of the lights. There is a whole array of different math techniques that can be employed here; some simple, some very complicated. In the above image, we can see that the process on the right produces nicer looking and more realistic results, but, not surprisingly, it takes longer to work out.

It's worth noting at this point that we're looking at objects with a low number of vertices compared to a cutting-edge 3D game. Go back a bit in this article and look carefully at the image of Crysis: there are over a million triangles in that one scene alone. We can get a visual sense of how many triangles are being pushed around in a modern game by using Unigine's Valley benchmark. Every object in this image is modelled by vertices connected together, so they make primitives consisting of triangles. The benchmark allows us to run a wireframe mode that makes the program render the edges of each triangle with a bright white line.
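Backing up to the interpolation step for a moment: in its simplest form, between just two vertices, it reduces to a linear blend of the two colors. A toy sketch (real hardware interpolates across a whole triangle, weighted by all three corners):

```python
def lerp_color(c0, c1, t):
    """Linearly interpolate between two RGB colors; t runs from 0.0 to 1.0."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))

red, blue = (255, 0, 0), (0, 0, 255)
print(lerp_color(red, blue, 0.5))  # the midpoint: (128, 0, 128), a purple
```

Every point on the surface between a red vertex and a blue vertex gets some shade along this red-to-purple-to-blue ramp, which is why more vertices give smoother color transitions.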
The trees, plants, rocks, ground, mountains -- all of them are built out of triangles, and every single one of them has been calculated for its position, direction, and color, all taking into account the position of the light source and the position and direction of the camera. All of the changes done to the vertices have to be fed back to the game, so that it knows where everything is for the next frame to be rendered; this is done by updating the vertex buffer. Astonishingly though, this isn't the hard part of the rendering process, and with the right hardware, it's all finished in just a few thousandths of a second! Onto the next stage.

Losing a dimension: Rasterization

After all the vertices have been worked through and our 3D scene is finalised in terms of where everything is supposed to be, the rendering process moves onto a very significant stage. Up to now, the game has been truly 3 dimensional, but the final frame isn't - that means a sequence of changes must take place to convert the viewed world from a 3D space containing thousands of connected points into a 2D canvas of separate colored pixels. For most games, this process involves at least two steps: screen space projection and rasterization.

Using the web rendering tool again, we can force it to show how the world volume is initially turned into a flat image. The position of the camera, viewing the 3D scene, is at the far left; the lines extended from this point create what is called a frustum (kind of like a pyramid on its side), and everything within the frustum could potentially appear in the final frame. A little way into the frustum is the viewport - this is essentially what the monitor will show, and a whole stack of math is used to project everything within the frustum onto the viewport, from the perspective of the camera. Even though the graphics on the viewport appear 2D, the data within is still actually 3D, and this information is then used to work out which primitives will be visible or will overlap.
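That "whole stack of math" ultimately boils down to dividing by depth: points farther from the camera shrink toward the center of the view. A stripped-down sketch of projecting one camera-space point onto the viewport, ignoring the field-of-view and aspect-ratio factors a real 4x4 projection matrix would include (the `focal` parameter is our stand-in for those):

```python
def project(point, viewport_w, viewport_h, focal=1.0):
    """Project a 3D camera-space point onto a 2D viewport (in pixels)."""
    x, y, z = point
    if z <= 0:
        return None  # behind the camera -- this point gets culled
    # The perspective divide: depth shrinks everything toward the center.
    ndc_x, ndc_y = focal * x / z, focal * y / z
    # Map from [-1, 1] normalized coordinates to pixel coordinates.
    px = (ndc_x + 1) * 0.5 * viewport_w
    py = (1 - ndc_y) * 0.5 * viewport_h  # screen y grows downward
    return (px, py)

# A point straight ahead of the camera lands dead-center on the screen.
print(project((0.0, 0.0, 5.0), 1920, 1080))  # (960.0, 540.0)
```

Note the output still carries the depth information implicitly (via the divide), which is exactly why the projected data can still be used to sort which primitives sit in front of which.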
This can be surprisingly hard to do, because a primitive might cast a shadow in the game that can be seen even if the primitive itself can't. The removal of primitives is called culling, and it can make a significant difference to how quickly the whole frame is rendered. Once this has all been done - sorting the visible and non-visible primitives, binning triangles that lie outside of the frustum, and so on -- the last stage of 3D is closed down and the frame becomes fully 2D through rasterization.

The above image shows a very simple example of a frame containing one primitive. The grid that the frame's pixels make is compared to the edges of the shape underneath, and where they overlap, a pixel is marked for processing. Granted, the end result in the example shown doesn't look much like the original triangle, but that's because we're not using enough pixels. This has resulted in a problem called aliasing, although there are plenty of ways of dealing with it. This is why changing the resolution (the total number of pixels used in the frame) of a game has such a big impact on how it looks: not only do the pixels better represent the shape of the primitives, but it also reduces the impact of the unwanted aliasing. Once this part of the rendering sequence is done, it's on to the big one: the final coloring of all the pixels in the frame.

Bring in the lights: The pixel stage

So now we come to the most challenging of all the steps in the rendering chain. Years ago, this was nothing more than the wrapping of the model's clothes (aka the textures) onto the objects in the world, using the information in the pixels (originally from the vertices). The problem here is that while the textures and the frame are all 2D, the world to which they were attached has been twisted, moved, and reshaped in the vertex stage. Yet more math is employed to account for this, but the results can generate some weird problems.
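Returning to rasterization for a moment, the "compare the pixel grid to the shape's edges" step can be sketched with edge functions -- the same half-plane tests real rasterizers use. This toy version checks every pixel center of a tiny 8x8 grid against one triangle:

```python
def edge(a, b, p):
    """Signed-area test: non-negative if p lies to one side of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(tri, width, height):
    """Mark every pixel whose center falls inside the triangle."""
    a, b, c = tri
    covered = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)  # sample at the pixel's center
            # Inside if the center is on the same side of all three edges.
            if edge(a, b, p) >= 0 and edge(b, c, p) >= 0 and edge(c, a, p) >= 0:
                covered.append((x, y))
    return covered

# A right triangle covering half the grid: 36 of the 64 centers land inside.
pixels = rasterize(((0, 0), (8, 0), (0, 8)), 8, 8)
```

The blocky result of this test on so few pixels is exactly the aliasing described above; raise `width` and `height` and the marked pixels trace the triangle's edges far more faithfully.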
In this image, a simple checkerboard texture map is being applied to a flat surface that stretches off into the distance. The result is a jarring mess, with aliasing rearing its ugly head again. The solution involves smaller versions of the texture maps (known as mipmaps), the repeated use of data taken from these textures (called filtering), and even more math to bring it all together. The effect of this is quite pronounced. This used to be really hard work for any game to do, but that's no longer the case, because the liberal use of other visual effects, such as reflections and shadows, means that the processing of the textures just becomes a relatively small part of the pixel processing stage. Playing games at higher resolutions also generates a higher workload in the rasterization and pixel stages of the rendering process, but has relatively little impact on the vertex stage. Although the initial coloring due to lights is done in the vertex stage, fancier lighting effects can also be employed here. In the above image, we can no longer easily see the color changes between the triangles, giving us the impression that this is a smooth, seamless object. In this particular example, the sphere is actually made up of the same number of triangles that we saw in the green sphere earlier in this article, but the pixel coloring routine gives the impression that it has considerably more triangles. In lots of games, the pixel stage needs to be run a few times. For example, a mirror or lake surface reflecting the world, as it looks from the camera, needs to have the world rendered to begin with. Each run-through is called a pass, and one frame can easily involve 4 or more passes to produce the final image. Sometimes the vertex stage needs to be done again, too, to redraw the world from a different perspective and use that view as part of the scene viewed by the game player.
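The mipmap trick mentioned above is simple enough to sketch: each level of the chain is the previous one shrunk by averaging 2x2 blocks of texels. A minimal version using grayscale values (the texture and names are illustrative):

```python
# Build a mipmap chain by repeatedly averaging 2x2 texel blocks.
# Grayscale values 0-255; the 4x4 checkerboard is just an example.

def downsample(tex):
    h, w = len(tex), len(tex[0])
    return [[(tex[2*y][2*x] + tex[2*y][2*x+1] +
              tex[2*y+1][2*x] + tex[2*y+1][2*x+1]) / 4.0
             for x in range(w // 2)]
            for y in range(h // 2)]

def build_mipmaps(tex):
    chain = [tex]
    while len(chain[-1]) > 1:          # stop at the 1x1 level
        chain.append(downsample(chain[-1]))
    return chain

checker = [[(x + y) % 2 * 255 for x in range(4)] for y in range(4)]
mips = build_mipmaps(checker)
print(len(mips), mips[-1])  # 3 levels; the 1x1 level is [[127.5]]
```

The distant checkerboard collapsing to a uniform grey at the coarsest level is exactly why sampling from mipmaps kills the shimmering mess seen in the image above.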
Redrawing the scene like this requires the use of render targets -- buffers that act as the final store for the frame but can be used as textures in another pass. To get a deeper understanding of the potential complexity of the pixel stage, read Adrian Courrèges' frame analysis of Doom 2016 and marvel at the incredible number of steps required to make a single frame in that game. All of this work on the frame needs to be saved to a buffer, whether as a finished result or a temporary store, and in general, a game will have at least two buffers on the go for the final view: one will be "work in progress" and the other is either waiting for the monitor to access it or is in the process of being displayed. There always needs to be a frame buffer available to render into, so once they're all full, an action needs to take place to move things along and start a fresh buffer. The last part in signing off a frame is a simple command (e.g. present), and with this, the final frame buffers are swapped about, the monitor gets the last frame rendered, and the next one can be started. In this image, from Ubisoft's Assassin's Creed Odyssey, we are looking at the contents of a finished frame buffer. Think of it like a spreadsheet, with rows and columns of cells containing nothing more than a number. These values are sent to the monitor or TV in the form of an electric signal, and the color of the screen's pixels is altered to the required values. Because we can't do CSI: TechSpot with our eyes, we see a flat, continuous picture, but our brains interpret it as having depth, i.e. 3D. One frame of gaming goodness, but with so much going on behind the scenes (pardon the pun), it's worth having a look at how programmers handle it all.

Managing the process: APIs and instructions

Figuring out how to make a game perform and manage all of this work (the math, vertices, textures, lights, buffers, you name it...) is a mammoth task.
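The buffer-swapping dance described above (render into one buffer while another is displayed, then present) can be sketched like this; the class and method names are illustrative, not any real API:

```python
# Minimal sketch of double buffering: draw into the back buffer
# while the front buffer is on screen, then swap with present().

class SwapChain:
    def __init__(self):
        self.front = "frame 0"  # what the monitor is showing
        self.back = None        # work in progress

    def render(self, frame):
        self.back = frame       # draw the new frame off-screen

    def present(self):
        # The finished frame goes on screen; the old front buffer
        # becomes scratch space for the next frame.
        self.front, self.back = self.back, self.front

chain = SwapChain()
chain.render("frame 1")
chain.present()
print(chain.front)  # frame 1
```

Real swap chains often use three or more buffers (and vertical sync complicates when the swap happens), but the principle is the same.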
Fortunately, there is help in the form of what is called an application programming interface, or API for short. APIs for rendering reduce the overall complexity by offering structures, rules, and libraries of code that allow programmers to use simplified instructions independent of the hardware involved. Pick any 3D game released in the past 3 years for the PC, and it will have been created using one of three famous APIs: Direct3D, OpenGL, or Vulkan. There are others, especially in the mobile scene, but we'll stick with these ones for this article. While there are differences in terms of the wording of instructions and operations (e.g. a block of code to process pixels in DirectX is called a pixel shader; in Vulkan, it's called a fragment shader), the end result of the rendered frame isn't, or rather shouldn't be, different. Where there will be a difference comes down to what hardware is used to do all the rendering. This is because the instructions issued using the API need to be translated for the hardware to perform; this is handled by the device's drivers, and hardware manufacturers have to dedicate lots of resources and time to ensuring the drivers do the conversion as quickly and correctly as possible. Let's use an earlier beta version of Croteam's 2014 game The Talos Principle to demonstrate this, as it supports the 3 APIs we've mentioned. To amplify the differences that the combination of drivers and interfaces can sometimes produce, we ran the standard built-in benchmark on maximum visual settings at a resolution of 1080p. The PC used ran at default clocks and sported an Intel Core i7-9700K, an Nvidia Titan X (Pascal), and 32 GB of DDR4 RAM.
DirectX 9 = 188.4 fps average
DirectX 11 = 202.3 fps average
OpenGL = 87.9 fps average
Vulkan = 189.4 fps average

A full analysis of the implications behind these figures isn't within the aim of this article, and they certainly do not mean that one API is 'better' than another (this was a beta version, don't forget), so we'll leave matters with the remark that programming for different APIs presents various challenges and, for the moment, there will always be some variation in performance. Generally speaking, game developers will choose the API they are most experienced in working with and optimize their code on that basis. Sometimes the word engine is used to describe the rendering code, but technically an engine is the full package that handles all of the aspects of a game, not just its graphics. Creating a complete program from scratch to render a 3D game is no simple thing, which is why so many games today license full systems from other developers (e.g. Unreal Engine); you can get a sense of the scale by viewing the open source engine for id Software's Quake and browsing through the gl_draw.c file. This single file contains the instructions for various rendering operations performed in the game, and represents just a small part of the whole engine. Quake is over 20 years old, and the entire game (including all of the assets, sounds, music, etc.) is 55 MB in size; for contrast, Ubisoft's Far Cry 5 keeps just the shaders used by the game in a file that's 62 MB in size.

Time is everything: Using the right hardware

Everything that we have described so far can be calculated and processed by the CPU of any computer system; modern x86-64 processors easily support all of the math required and have dedicated parts for such things. However, doing this work to render one frame involves a lot of repetitive calculations and demands a significant amount of parallel processing. CPUs aren't ultimately designed for this, as they're far too general-purpose by design.
Specialised chips for this kind of work are, of course, called GPUs (graphics processing units), and they are built to do the math needed by the likes of DirectX, OpenGL, and Vulkan very quickly and hugely in parallel. One way of demonstrating this is by using a benchmark that allows us to render a frame using a CPU and then using specialised hardware. We'll use V-Ray NEXT by Chaos Group; this tool actually does ray tracing rather than the rendering we've been looking at in this article, but much of the number crunching requires similar hardware. To gain a sense of the difference between what a CPU can do and what the right, custom-designed hardware can achieve, we ran the V-Ray GPU benchmark in 3 modes: CPU only, GPU only, and then CPU+GPU together. The results are markedly different:

CPU only test = 53 mpaths
GPU only test = 251 mpaths
CPU+GPU test = 299 mpaths

We can ignore the units of measurement in this benchmark, as a 5x difference in output is no trivial matter. But this isn't a very game-like test, so let's try something else and go a bit old-school with Futuremark's 3DMark03. Running the simple Wings of Fury test, we can force it to do all of the vertex shaders (i.e. all of the routines done to move and color triangles) using the CPU. The outcome shouldn't really come as a surprise, but nevertheless, it's far more pronounced than what we saw in the V-Ray test:

CPU vertex shaders = 77 fps on average
GPU vertex shaders = 1580 fps on average

With the CPU handling all of the vertex calculations, each frame was taking 13 milliseconds on average to be rendered and displayed; pushing that math onto the GPU drops this time right down to 0.6 milliseconds. In other words, it was more than 20 times faster. The difference is even more remarkable if we try the most complex test in the benchmark, Mother Nature. With CPU-processed vertex shaders, the average result was a paltry 3.1 fps!
Bring in the GPU and the average frame rate rises to 1388 fps: nearly 450 times quicker. Now don't forget that 3DMark03 is 16 years old, and the test only processed the vertices on the CPU -- rasterization and the pixel stage were still done via the GPU. What would it be like if the benchmark was modern and the whole lot was done in software? Let's try Unigine's Valley benchmark tool again -- it's relatively new, and the graphics it processes are very much like those seen in games such as Ubisoft's Far Cry 5; it also provides a full software-based renderer, in addition to the standard DirectX 11 GPU route. The results don't need much of an analysis: running the lowest quality version of the DirectX 11 test on the GPU gave an average result of 196 frames per second. The software version? A couple of crashes aside, the mighty test PC ground out an average of 0.1 frames per second -- almost two thousand times slower. The reason for such a difference lies in the math and data format that 3D rendering uses. In a CPU, it is the floating point units (FPUs) within each core that perform the calculations; the test PC's i7-9700K has 8 cores, each with two FPUs. While the units in the Titan X are different in design, they can both do the same fundamental math on the same data format. This particular GPU has over 3500 units to do a comparable calculation, and even though they're not clocked anywhere near as high as the CPU (1.5 GHz vs 4.7 GHz), the GPU outperforms the central processor through sheer unit count. While a Titan X isn't a mainstream graphics card, even a budget model would outperform any CPU, which is why all 3D games and APIs are designed for dedicated, specialized hardware. Feel free to download V-Ray, 3DMark, or any Unigine benchmark and test your own system -- post the results in the forum, so we can see just how well designed GPUs are for rendering graphics in games.
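The reason GPUs scale like this is that the vertex and pixel stages apply the same math to huge numbers of independent items. A small sketch of that data-parallel pattern, using a 2D rotation as a stand-in for a full matrix transform (the names are illustrative):

```python
import math

def rotate(vertex, angle):
    """Rotate a 2D vertex about the origin -- a stand-in for the
    4x4 matrix transform a real vertex shader would apply."""
    x, y = vertex
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c)

vertices = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
# Each vertex is independent of the others, so a GPU can run one
# of these per shader unit in parallel instead of looping.
rotated = [rotate(v, math.pi / 2) for v in vertices]
print([(round(x, 3), round(y, 3)) for x, y in rotated])
```

With three vertices the loop is instant; with the millions per frame in a modern game, thousands of units each doing one small, identical calculation is what wins.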
Some final words on our 101

This was a short run-through of how one frame in a 3D game is created, from dots in space to colored pixels on a monitor. At its most fundamental level, the whole process is nothing more than working with numbers, because that's all computers do anyway. However, a great deal has been left out of this article to keep it focused on the basics (we'll likely follow up later with deeper dives into how computer graphics are made). We didn't include any of the actual math used, such as the Euclidean linear algebra, trigonometry, and differential calculus performed by vertex and pixel shaders; we glossed over how textures are processed through statistical sampling, and left aside cool visual effects like screen space ambient occlusion, ray tracing de-noising, high dynamic range imaging, and temporal anti-aliasing. But when you next fire up a round of Call of Mario: Deathduty Battleyard, we hope that not only will you see the graphics with a new sense of wonder, but you'll be itching to find out more. Nick E. (2019, June 03). 3D Game Rendering 101. Retrieved from https://www.techspot.com/article/1851-3d-game-rendering-explained/
-
Nice try!
<10:30:00> "Mr.SekA": bro see
<10:30:01> "Mr.SekA": <15:19:45> "Mr.SekA": hello mr.love
<15:25:59> "Mr.Love": hello
<15:19:45> "Mr.SekA": i want back bro in ts3 and csblackdevil
<15:26:22> "Mr.Love": welcome seka i remmber you
<15:19:45> "Mr.SekA": bro i want helper here
<15:26:39> "Mr.Love": go talk hnk add him helper
<10:30:31> "Mr.SekA": give me helper mr.love talk me go hnk add him helper
-
Why it matters: The data collection scandal that struck Facebook last year is having a wide-reaching impact on the tech industry as a whole, and in a positive way. People are more conscious about protecting their personal data, and that is forcing companies to rethink the level of access that developers have to customer information. Google last year announced Project Strobe, a root-and-branch review of third-party developer access to Android device and Google account data. The search giant has already implemented multiple new policies and is now turning its attention to Chrome. Moving forward, Google is requiring that extensions only request access to the data needed to implement their features. In the event that more than one permission could be used to implement a feature, the dev will need to use the permission that accesses the least amount of data. Developers have always been encouraged to follow this behavior, said Google fellow and VP of engineering Ben Smith, but now it is a requirement for all extensions. Google is also mandating that extensions which handle user-provided content and personal communications post a privacy policy and handle data in a secure fashion. This is an expansion of existing policies that require extensions dealing in personal and sensitive user data to do the same. Browser extensions have become a growing attack vector for phishing and social engineering. As Atif Mushtaq, CEO of cybersecurity firm SlashNext, highlights, many attacks are born out of legitimate extensions that are later updated with malicious code. In one case Mushtaq recounts, an extension's new developer converted it to adware a couple of days after purchasing it, sending out an update that requested two intrusive permissions to access data the extension didn't need or have any reason to use. Thanks to changes brought about by Project Strobe, it should be harder for extensions – both legitimate ones and those that go rogue – to collect such personal data. Shawn K. (2019, May 31).
New Google policies make Chrome extensions more trustworthy. Retrieved from https://www.techspot.com/news/80321-new-google-policies-make-chrome-extensions-more-trustworthy.html
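As an aside, the "least amount of data" rule usually plays out in an extension's manifest file, where permissions are declared up front. Below is a hypothetical manifest fragment (the extension name is made up) requesting the narrow activeTab permission, which only grants temporary access to the page the user is currently interacting with, rather than the much broader tabs permission:

```json
{
  "name": "Example Highlighter (hypothetical)",
  "version": "1.0",
  "manifest_version": 2,
  "permissions": [
    "activeTab"
  ]
}
```

Under the new policy, an extension that could work with activeTab shouldn't request tabs (which exposes the URL and title of every open tab) just because it's more convenient for the developer.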
-
Tech gadgets come in all different shapes, sizes and price points. It’s a common misconception that cost is directly related to how useful a device is. This article focuses on supporting accessories – devices that improve the use of other gadgets, enhance your quality of life or minimize inconveniences. In some instances, they’re outright essential. Best yet, it proves that you don’t have to spend a fortune to score some seriously helpful tech as nearly everything featured here can be had for under $50. JBL Clip 3 Portable Bluetooth Speaker ($40) Music is exponentially more enjoyable when shared with others. Whether it’s an impromptu pool party or a planned gathering to celebrate that new promotion, music can help set the mood. JBL’s Clip 3 portable Bluetooth speaker is perfect for a pool party thanks to its IPX7 rating. It offers 10 hours of playtime on a single charge and even doubles as a speakerphone for your mobile device. The integrated carabiner makes it easy to attach to a backpack or belt loop so you won’t forget it. At $39.95, it’s also quite affordable. Logitech Ultimate Ears 500 Noise-isolating Earphones ($37) Few things are more enjoyable than putting in a quality pair of headphones and getting lost in your favorite playlist. Most solutions that ship with modern smartphones offer subpar sound quality or worse, a poor fit. The latter is especially true with earbuds as it can be next to impossible to find something that is comfortable for extended listening sessions. My top recommendation in this category is the Logitech Ultimate Ears 500 noise-isolating earphones. They feature a 90-degree 3.5mm jack, a tangle-resistant flat ribbon cable and come with five sizes of soft silicone ear cushions and a set of excellent foam tips. I’ve personally been using the Ultimate Ears 500 as my sole set of earphones for close to seven years and have nothing but great things to say about them. At $36.99, you simply can’t go wrong here. 
Soundcore Liberty Neo Earbuds by Anker ($45) If being physically tethered isn't your cup of tea (or perhaps your device doesn't offer a 3.5mm headphone jack), the Soundcore Liberty Neo earbuds by Anker are a solid wireless alternative. Currently just $44.99 at Amazon, these wireless buds offer 3.5 hours of playtime from a single charge and an extra nine hours of juice when charged with the carrying case. They carry an IPX5 rating against liquids, use graphene-enhanced drivers, support hands-free calling and pair using Bluetooth 5 technology. Tile Pro Hardware Tracker ($26) If you're the type to constantly misplace your keys, luggage or other gadgets, the Tile Pro could really make your life a lot less stressful and add peace of mind. Offered in your choice of black or white color schemes, it packs a 1-year replaceable battery with a range of 300 feet and is said to be three times as loud as earlier models. A two-pack of Tile's Pro tracker in white and black with replaceable batteries is available from Amazon for only $46.95; a single one is $26. Anker PowerCore 20100 Power Bank ($50) Removable batteries were once the common cure for low battery anxiety but now that manufacturers have moved away from that feature, the next best option is a portable power bank. Anker's PowerCore 20100 offers a massive 20100mAh battery with 4.8A output that can recharge an iPhone XR or Galaxy S10 more than four times. It features dual outputs so you can charge two devices simultaneously and comes backed by an 18-month warranty – yours for just $49.99. Amazon Echo Dot ($40+) Amazon's unassuming Echo debuted in late 2014 as an invite-only purchase. It quickly became apparent, however, that Echo wouldn't flounder like the Fire Phone did before it. Before long, we saw a whole industry built around the concept that Echo helped popularize.
Now in its third iteration, Amazon's Echo Dot is the most affordable standalone Alexa device on the market at less than $50 -- if you're not in a rush, Echo Dots are usually discounted throughout the year and can be had for as little as $30 standalone, or even less when bundled with other electronics. Google Home Mini ($49) Smart speakers are in the homes of millions but they're not all Amazon devices. The Google Home Mini featuring the Google Assistant is an excellent alternative to the Echo Dot (some would even say it's better). At $49, you really can't go wrong either way... assuming of course that you aren't averse to using a smart speaker due to privacy concerns. Seagate Portable 1TB External Hard Drive ($45) Work with computers for any meaningful period of time and you’re bound to experience data loss, either through hardware failure, accidental deletion, corruption, malware, physical theft, natural disaster or otherwise. Cloud providers, streaming solutions and social media may minimize the impact of a data loss event but they’re no substitute for a multi-tiered plan that involves an on-site, physical backup. Seagate's 1TB external hard drive with USB 3.0 connectivity is an ideal companion to help protect against data loss. Of all the products featured here today, it is arguably the most important. Everything else is intended to improve usability or make life less of a hassle but with the Seagate drive, you’re protecting against actual loss. Once a memory is gone, it’s gone. Don’t let it come to that. The 1TB drive starts at $45, $60 for 2TB and less than $100 for 4TB. Anker Soundcore 2 Bluetooth Speaker ($40) Should portability not be the most important aspect when shopping for a Bluetooth speaker, Anker's Soundcore 2 is worth a look. It provides 12W stereo sound over Bluetooth 5 with an IPX7 water resistant rating and 24 hours of playtime for only $40. 
It's still small enough to toss in a backpack, mind you, but has a slightly bigger footprint than some of the other smaller devices on the market. Roku Premiere+ 4K HDR Streaming Player ($49) Set-top boxes are so commonplace these days that their functionality now comes baked into many new televisions. While these built-in "smart TVs" usually suffice, some are outright crap. What's more, plenty of older televisions persist that lack streaming smarts. To get exactly what you desire out of your media experience, you'll need to throw down for a standalone device that best suits your needs, like the Roku Premiere+. This particular Roku streamer features 4K and HDR support and even includes a microphone for voice search and control. You also get a free 30-day trial of Sling with cloud DVR, a $40+ value. Amazon Fire TV Stick 4K with Alexa Voice Remote ($40+) Amazon's Fire TV is one of the better streaming media platforms on the market. The dongle format is especially attractive for those with wall-mounted televisions or who simply don't want another box consuming space on their entertainment stand. For $50 (usually discounted with proper timing, $40 as of writing), you get a 4K-enabled dongle that grants access to nearly every major streaming platform as well as Amazon's Alexa-powered voice remote. HDR10+ and Dolby Atmos are accessible with select Prime Video titles, and the new Alexa remote is great for controlling all the basics, including power and volume, from a single tiny remote. Samsung 256GB USB 3.1 Flash Drive ($50) Flash drives may not be as ubiquitous as they once were but they're still incredibly handy to have in a pinch. Samsung's MUF-256AB/AM FIT Plus 256GB drive is especially attractive given its sub-$50 price point, USB 3.1 interface, five-year limited warranty and keychain-ready footprint. Everyday use aside, the drive could also make for a great backup solution if you don't have a ton of mission-critical data that needs safeguarding.
Philips Hue White A19 LED Smart Bulb Starter Kit ($48) Philips is a leader in smart home lighting and their white A19 LED bulbs are a great place to start, especially if you don't care about colored lighting. This starter kit includes two 60W-equivalent bulbs that are compatible with Amazon Alexa, Apple HomeKit and Google Assistant. You also get the requisite Philips Hue Bridge that can control up to 50 Hue lights, all of which is backed by a three-year warranty. Wyze Cam Pan 1080p Smart Home Camera ($38) Quality, affordable camera technology is having an impact outside of the mobile sector. Case in point is the Wyze Cam Pan, a 1080p smart home camera that affords decent image quality, night vision and two-way audio. It's compatible with Amazon's Alexa and IFTTT and although it is marketed as an indoor camera, many are using them as outdoor security cams by mounting them in weatherproof cases. At less than $40 each, this is a great way to add some peace of mind to your home security setup. Logitech MX Anywhere 2S Wireless Mouse ($50) Top PC peripheral maker Logitech has dabbled in a variety of product lines over the years but is arguably best known for turning out some of the best mice ever created. The Logitech MX Anywhere 2S is one such example. This wireless pointer features a 4000 DPI Darkfield sensor that tracks across multiple surfaces, a rechargeable battery that's good for up to 70 days per charge, hyper-fast scrolling and the ability to control multiple computers. Raspberry Pi 3 Model B+ ($35) The Raspberry Pi made coding fun and accessible for beginners when it was introduced in 2012 but it didn't take long for enthusiasts to latch on to the single-board computer. Several years in, the hobby board is more popular than ever and still hasn't lost its edge - or its attractive price point. For around $35, you can pick up the latest Raspberry Pi 3 Model B+ featuring a quad-core processor clocked at 1.4GHz, 1GB of RAM and dual-band Wi-Fi.
What you do with it from that point is largely limited only by your creativity. Microsoft Xbox Wireless Controller ($49) There's no substitute for greatness and if it's gamepads you are discussing, Microsoft's Xbox One controller is one of the best modern options on the market. Its familiar look and feel, wireless connectivity and respectable design make it a natural choice for PC gaming and at under $50 with a bundled wireless adapter, it's hard to go wrong with this keyboard and mouse alternative. Shawn K. (2019, May 29). Hardware Essentials for $50 or Less. Retrieved from https://www.techspot.com/article/1846-50-dollar-tech-essentials/
-
WTF?! Who knew that an obsolete netbook loaded with malware could be worth so much? As a PC repair technician, I've been dealing in "rare art" for well over a decade and had no idea. SMH. Take something ordinary, slap the term "art" on it and people will pay a fortune. Or at least, that's the lesson garnered from this story. The Persistence of Chaos is an art project commissioned by cybersecurity firm Deep Instinct. Get this – it's literally a 10-year-old Samsung NC10 netbook loaded with some of the world's most malicious pieces of software that recently sold for more than $1.3 million at auction. The piece was created by contemporary Internet artist Guo O Dong to "make physical the abstract threats posed by the digital world." It contains six pieces of malware – ILOVEYOU, MyDoom, SoBig, WannaCry, DarkTequila and BlackEnergy – which have collectively caused financial damage totaling $95 billion. Dong told The Verge that we have this fantasy that things that happen in computers can't actually affect us, but that this is absurd. "Weaponized viruses that affect power grids or public infrastructure can cause direct harm," Dong added. The laptop is "isolated and airgapped" to help prevent the malware from spreading. Engadget likens it to a grenade – so long as you don't pull the pin out (or in this case, connect it to Wi-Fi or plug a drive into it), it should be safe. Shawn K. (2019, May 28). Someone paid more than $1.3 million for a netbook full of malware. Retrieved from https://www.techspot.com/news/80259-someone-paid-more-than-13-million-netbook-full.html
-
Why it matters: Microsoft's inability to deliver solid updates as of late has led to eroded faith in the operating system -- especially when the updates break security features. Many Windows 10 users are beginning to avoid updating at all to protect both their PCs and their sanity, which is precisely the opposite effect cumulative updates are supposed to have. One of Microsoft's more anticipated features coming to Windows 10 with the May 2019 (1903) update was the Windows Sandbox. Unfortunately, Microsoft admitted the recent update has left the new security feature unable to launch for many users. If that wasn't enough, Windows Defender Application Guard is also down. Windows Sandbox was announced last year as a feature for Windows 10 Pro and Enterprise users, allowing them to run a virtualized, sandboxed environment. This virtualization is designed to run less like a virtual machine and more like a separate application. There's no need to install any third-party VM software, as is the case with setting up a traditional virtual machine. Windows Defender Application Guard is an enterprise-specific feature, allowing untrusted sites to be opened in a Windows Hyper-V container. This is designed to protect enterprise machines and data while employees use the network and internet. However, Microsoft has acknowledged that both features are failing to launch with the latest KB4497936 update. In support documents, Microsoft explains a potential workaround by creating new registry keys using admin credentials, then restarting the host. Outside of that, Microsoft is working on a more permanent resolution, estimated to be available in late June. The update also provided some protections against the newest speculative execution side-channel vulnerabilities, known collectively as MDS. It's also important to note that the KB4497936 update was only delivered to the preview rings of the Windows Insider program.
That said, Microsoft hasn't exactly hit the mark with previous updates. Hopefully Microsoft can iron out the kinks, or this will be another in a long string of problematic updates. Eric H. (2019, May 28). Windows 10 Insider update botched Windows Sandbox and Application Guard. Retrieved from https://www.techspot.com/news/80267-latest-windows-10-update-botched-windows-sandbox-application.html
-
Something to look forward to: Nvidia has used Computex to announce its Nvidia Studio initiative, which will see the release of several RTX-powered laptops and Studio driver updates designed to boost the performance of creative applications. There will be seventeen new "RTX Studio" laptops from seven manufacturers at launch, all featuring RTX cards ranging from the RTX 2060 up to the professional Quadro RTX 5000. Nvidia says they will offer "desktop-class performance" for users on the move and perform seven times faster than an equivalent MacBook Pro with 32GB of RAM and an AMD Pro Vega 20 GPU. The first of these laptops come from Razer, in the form of upgraded versions of its Blade 15 Advanced and Blade 17 Pro machines. The Blade 15 Studio Edition and Blade Pro 17 Studio Edition both feature a Quadro RTX 5000, 32GB of RAM, and 1TB of storage, and come in mercury white. The 15-inch model gets an OLED touchscreen and an Intel Core i7-9750H CPU, while the 17-inch variant boasts a 120Hz 4K display (not OLED) and a Core i9-9880H CPU. No word yet on price or exactly when they'll be available. Acer is also updating its professional-focused ConceptD line, adding an RTX 5000 option to the ConceptD 7. The rest of the manufacturers are Asus, Dell, Gigabyte, HP, and MSI. Pricing starts at $1,599 and the first laptops will arrive in June. The laptops come with Nvidia's new Studio Drivers, formerly called Creator Ready drivers, which offer the best possible performance and reliability when working with creative apps. This is achieved by testing them extensively against multiple revisions of popular applications from the likes of Adobe, Autodesk, Avid, Blackmagic Design, Maxon, Unity, and Epic. Nvidia also carries out multi-app testing to help professionals who jump from one application to another as part of their workflow. The Nvidia Studio drivers -- the latest version is available today -- aren't restricted to RTX Studio laptops.
Providing you have a computer or laptop with an RTX card that meets the Nvidia Studio system requirements, it will be possible to switch between Game Ready and Nvidia Studio drivers. Rob T. (2019, May 27). Nvidia announces RTX Studio laptops featuring Quadro RTX graphics. Retrieved from https://www.techspot.com/news/80247-nvidia-announces-rtx-studio-laptops-featuring-quadro-rtx.html