Search the Community
Showing results for tags 'gpu'.
-
A few days ago AMD introduced the RX 5500, a 1080p gaming graphics card. This new card, based on the Navi 14 silicon, is intended to replace the Polaris 20-based RX 580. The new AMD solution still has no official price but aims to be highly affordable. We now know that XFX will launch an RX 5500 THICC II graphics card. The XFX heatsink is reminiscent of the grilles of American cars of the 60s and 70s, but the original THICC design had a major cooling problem according to Gamers Nexus. That design appears to have been slightly modified and improved here: the area between the backplate and the front plate has been reworked. The XFX RX 5500 THICC II is shown. The graphics card has been leaked by Videocardz, which obtained stock images of it. The design is similar to that of the RX 5700 model, with a small modification. The card has an 8-pin PCIe connector and carries two fans on its heatsink. The trim, like the fans, is black. Copper heatpipes are visible, and the XFX logo in the center of each fan has a coppery touch. The backplate has a design that complements the front and, of course, is also black. AMD has endowed this silicon with 1,408 Stream Processors and a Boost frequency of 1,845 MHz. We do not know whether this card will carry a factory overclock, though that is plausible, since the AMD reference model makes do with a single large central fan. We assume that in the coming days or weeks we will see the full specifications as well as the price.
-
For some generations now, NVIDIA has used the same desktop GPUs in laptops, although with some differences to reduce power consumption. In essence they are the same GPU, but NVIDIA's drivers continue to distinguish between desktop and laptop graphics. Why do they do it, and what are the differences? In this article we explain everything. What is the difference between a laptop and a desktop GPU? First, it should be clarified that, some time ago, laptop GPUs and NVIDIA's desktop GPUs were completely different; in fact, the laptop parts carried a distinctive "M" at the end of their names for easier differentiation. For a long time now, however, the GPU has been essentially the same, with some attributes trimmed in order to reduce power consumption. Thus, for example, an RTX 2080 in its desktop reference model has 2,944 shader units and 8 GB of GDDR6 memory on a 256-bit interface running at 14 Gbps effective, delivering a bandwidth of 448 GB/s. The notebook RTX 2080 has exactly the same configuration, but its clocks are slightly lower: around 9% slower at base and 7% slower at Boost. Performance is also slightly lower on the notebook model, but instead of consuming 215 W it draws only 150 W. We are not speaking here of the Max-Q variants, which have much lower power limits (90 W in this case) and much lower operating speeds, so their performance also drops sharply. If the GPU is the same, why are the NVIDIA drivers different? On the NVIDIA driver download website, when choosing the graphics model for which we want to download drivers, the normal models are clearly separated from the notebook ones. And there is one difference we can see immediately: size. The desktop drivers, on the right of the image, occupy 570.94 MB compared to 523.27 MB for the notebook version.
This size difference exists because the laptop drivers have a lower level of customization, although the firmware and the driver itself are exactly the same, as are the supported technologies because, remember, the GPU is the same. Why, if the driver is the same, do they differ in size? If you have read this far, you are surely asking precisely that: why the different level of customization? This is because, through the "NVIDIA Notebook Driver" program, the company guarantees correct operation only for the reference model (we are speaking here only of laptops). However, most laptop manufacturers tweak these GPUs to offer higher performance, lower consumption, or lower temperatures, so their behavior may differ and may not work perfectly with NVIDIA's generic driver.
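The 448 GB/s bandwidth figure quoted above follows directly from the interface width and the effective data rate; a minimal sketch of the arithmetic (the function name is ours, just for illustration):

```python
# Peak memory bandwidth in GB/s: bus width in bits, divided by 8 to get
# bytes per transfer, times the effective data rate in Gbps.
def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

# RTX 2080 figures cited in the article: 256-bit interface at 14 Gbps.
print(memory_bandwidth_gbs(256, 14))  # 448.0
```

This is why the desktop and notebook RTX 2080 share the same bandwidth: the memory configuration is untouched, and only the core clocks and power limit differ.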
-
To date, NVIDIA's 'SUPER' designation was reserved for RTX 20 Series graphics cards. It seems the company will finally extend the denomination to the GTX 16 Series: the company's simplest graphics cards, without RT Cores and Tensor Cores, will receive at least one SUPER version. During the last weeks there has been a lot of talk about a GTX 1660 SUPER, which has just been made official. This new NVIDIA GTX 1660 SUPER will be the most powerful card in the company's mid-range. There will be no reference model; the custom models from board partners will arrive directly. They may even come at lower prices than the model they replace, to combat the AMD RX 5500. NVIDIA GTX 1660 SUPER is confirmed. Zotac is the first manufacturer to have these cards ready. The company will have two models: the standard model and an AMP model. The second, with a better cooling system, will carry a factory overclock and therefore offer better performance. Another notable novelty is that this card will not ship with GDDR5 memory but with GDDR6. The memory change may not seem important, but it brings extra performance: the added bandwidth improves the card's final performance. The frequencies have not yet been confirmed; they will be revealed shortly before launch or when NVIDIA officially presents the card. The card has an 8-pin PCIe connector and completely lacks an NVLink port. As for performance, we should expect 10-15% more than the GTX 1660 with fairly similar power consumption. We assume that in the coming days NVIDIA will reveal the launch date and Zotac the launch prices of its two models.
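The GDDR5-to-GDDR6 switch is worth quantifying. A rough sketch, assuming the commonly reported data rates of 8 Gbps for the GTX 1660's GDDR5 and 14 Gbps for the SUPER's GDDR6 on the same 192-bit bus (neither figure is confirmed in the article):

```python
# Bandwidth in GB/s = bus width in bits / 8 * effective data rate in Gbps.
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

gddr5 = bandwidth_gbs(192, 8)    # assumed GTX 1660 (GDDR5) configuration
gddr6 = bandwidth_gbs(192, 14)   # assumed GTX 1660 SUPER (GDDR6) configuration

# Relative bandwidth gain from the memory swap alone.
print(gddr5, gddr6, f"{gddr6 / gddr5 - 1:.0%}")  # 192.0 336.0 75%
```

If those data rates hold, the memory swap alone raises bandwidth by 75%, which helps explain the expected 10-15% real-world gain over the GTX 1660 despite otherwise similar silicon.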
-
Manufacturers of custom graphics card models work to offer interesting solutions. There is a lot on offer, and everyone is looking for something that differentiates them from the competition and invites the user to buy their product. A few weeks ago XFX introduced the RX 5700 XT THICC II. The company has now taken another step and presented the XFX RX 5700 XT THICC III Ultra. This new graphics card differs from the previous model in that it has three fans: the THICC II, as the name implies, has only two, while the THICC III adds a third. The heatsink block is also a bit thicker because the GPU carries a factory overclock. XFX introduces the RX 5700 XT THICC III Ultra. As with the THICC II, this model is completely free of RGB and the aluminum fin cover is completely black. The backplate merges elegantly with the front bezel, and the back has a grille that recalls the most classic American muscle cars. Regarding the specifications, the card will run at a base frequency of 1,810 MHz. The 'Game' mode in this case sits at 1,835 MHz, and the Boost mode has been pushed to 2,025 MHz. That is a significant overclock, hence the bigger heatsink and the extra fan. The 8 GB of GDDR6 continue to operate at 14 Gbps; on a 256-bit memory interface they offer a bandwidth of 448 GB/s. This THICC III Ultra model has a thickness of 2.7 PCIe slots (therefore occupying 3 effective PCIe slots). The edges of the backplate and the front bezel carry a striking silver border. Two 90 mm fans and a 100 mm center fan have been integrated. The cooling system features ZeroDB fan technology, so the fans stop completely at low temperatures.
-
There is a lot of expectation around the new Intel graphics. During IDC 2019 (Intel Developer Conference), held in Tokyo, benchmarks of these graphics were shown. The table compares the current UHD 620 integrated into 9th Generation Core processors with the Iris Plus integrated into Ice Lake. The performance jump between Gen 9.5 (UHD 620) and Gen11 (Iris Plus) is significant, though it would still fall short of the AMD RX Vega graphics integrated into the Ryzen APUs. It is true that Intel graphics have always been regarded as good for video playback and little else; with these new graphics you can game, if only modestly. Major performance leap for Intel iGPUs. Most games that previously did not reach 30 FPS at 1080p now exceed that frame rate. The biggest jump is in CS:GO, which would go from around 45-50 FPS to 75-80 FPS. Integrated graphics have always been best suited to esports titles, which are graphically quite light. Kenichiro Yasu, an Intel director, commented that Gen12 graphics double the performance of Gen11. That would be a huge jump: according to the figures shown, CS:GO, for example, could comfortably reach 150 FPS, and running most games at 1080p above 60 FPS would become feasible. But of course, for that to happen we must wait for Intel Tiger Lake, which will arrive, possibly, at the end of 2020. Another interesting fact is that Intel Xe will have support for ray tracing in 2020, while AMD currently has no announced plans to support this technology. Although it is said that the PlayStation 5 and Project Scarlett will allow 4K @ 60 FPS gaming with ray tracing, as far as we have learned they will be based on the RX 5500.
-
Graphics enthusiasts are salivating over news of AMD's pending Navi graphics cards, and new test results from a mysterious AMD GPU listed in the CompuBench database have spurred a new round of speculation. Several media outlets have reported that the GPU is an AMD Navi engineering sample, but on closer inspection this sample likely isn't of the Navi variety. The new graphics card posts poor compute results compared to the Vega 64 and Vega 56, but its graphics performance isn't far behind the Vega 64, and it even beats the Vega 56 in some tests. As Navi is expected to be a low-end or mid-range GPU, on the surface this seems like a good sign that the GPU is indeed Navi. Eagle-eyed Redditors, however, noticed that the Vega 56 fell far behind both the alleged Navi GPU and the Vega 64 in CompuBench's ocean surface simulation benchmark. That benchmark is very memory-bandwidth heavy; the Vega 56 has only about 85 percent of the bandwidth of the Vega 64, and not even half that of the Radeon VII. The alleged Navi GPU's performance may therefore come down to memory bandwidth rather than graphical prowess, which is the first indication that this may not be a Navi GPU at all, but a Vega 20 GPU instead. The "66AF" device ID, thought to belong to Navi, is actually registered in the Linux AMDGPU drivers as "Vega 20," making the Navi conclusion even more suspect. Also, comparing this 66AF:F1 GPU to the Radeon VII on CompuBench, nearly everything in the OpenGL information is identical. A notable difference is that the Radeon VII has an additional tag under "GL_EXTENSIONS" called "GL_AMD_gpu_shader_half_float2," which may be the tag that reflects the Radeon VII's reduced floating-point performance compared to other Vega 20 GPUs, like the Radeon Instinct MI60. While it is hard to tell exactly what this GPU is, if Linux's driver IDs can be trusted, it doesn't appear to be Navi.
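The "about 85 percent" and "not even half" claims can be checked against the published HBM2 bandwidth figures for these cards (roughly 410 GB/s for the Vega 56, 484 GB/s for the Vega 64, and 1,024 GB/s for the Radeon VII; these numbers are ours, not from the benchmark listing):

```python
# Published peak HBM2 bandwidth figures, in GB/s (approximate).
bandwidth = {"Vega 56": 410, "Vega 64": 484, "Radeon VII": 1024}

# Ratios the article's bandwidth argument rests on.
ratio_64 = bandwidth["Vega 56"] / bandwidth["Vega 64"]
ratio_vii = bandwidth["Vega 56"] / bandwidth["Radeon VII"]

print(f"Vega 56 vs Vega 64:    {ratio_64:.0%}")   # ~85%
print(f"Vega 56 vs Radeon VII: {ratio_vii:.0%}")  # ~40%, under half
```

Both ratios line up with the article's argument: a bandwidth-bound benchmark would naturally place the Vega 56 well behind the other two cards regardless of shader throughput.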
Even if the GPU is from the Navi lineup, it is hard to glean useful performance data and GPU specifications given the nature of the CompuBench benchmark. For now, it appears more likely that this is just another Vega 20 GPU, perhaps even a new Radeon Pro WX series model. Article by tomshardware.