
AMD Radeon RX 5700 XT Review



After months of waiting and speculation, AMD's Radeon RX 5700 XT is finally here, joining the ranks of the best graphics cards. Sporting a new RDNA architecture that boosts performance while reducing power requirements, and with a price drop to keep AMD competitive with Nvidia's new RTX 2060 Super that launched last week, expectations from the faithful have been high. This is AMD's first new GPU architecture since GCN (Graphics Core Next) came out all the way back in 2012. Our primary purpose today is to show how it performs, check out the new features, and determine how it stacks up to the competition—both from Nvidia and from AMD's existing portfolio.

I've previously discussed AMD's new Navi / RDNA architecture in detail, so if you want to find out exactly what makes the Radeon RX 5700 XT tick, that's a good place to start. The short summary is that AMD has focused on improving efficiency, reworked the Compute Units (CUs) to boost utilization and performance, and shifted to TSMC's 7nm process technology. Here's a quick look at the specs, comparing the new RX 5700 models with several of AMD's previous generation GPUs:

AMD GPU specs comparison

All three RX 5700 models use the same Navi 10 GPU, with the 5700 XT cards sporting fully enabled chips while the RX 5700 disables four CUs and 256 cores. Clockspeeds also vary by model, but the GDDR6 memory in all cases runs at 14Gbps (14 GT/s). The new Navi 10 GPU ends up with a die size slightly larger than the previous Polaris GPUs, with only a few extra CUs. However, each CU has been reworked with the new RDNA architecture, and the result, as we'll see shortly, is that even with fewer cores and CUs the RX 5700 XT easily outperforms the Vega 64—and it does so while using substantially less power.

I mentioned this previously, but AMD's addition of a new 'Game Clock' muddies the waters, since AMD reports maximum theoretical performance using the boost clock rather than the game clock. The game clock is a relatively conservative estimate of the actual in-game clockspeeds the RX 5700 family will see, meaning games will usually run at higher clockspeeds than the game clock suggests. That's the same approach Nvidia takes with its boost clock, while AMD's boost clock is (sort of) the maximum clock the GPU will see—though there's still some 'silicon lottery' luck involved, and some GPUs may exceed the stated boost clock.

Anyway, don't get too hung up on comparing GFLOPS, as there are architectural differences, and in the end it's gaming performance that matters. For example, the RX 5700 XT has theoretical performance of 9754 GFLOPS while the Vega 64 has 12665 GFLOPS. But the Navi / RDNA architectural changes more than make up for that deficit—just as Nvidia's RTX 2060 Super, with its 7181 GFLOPS, ends up delivering far better results in most games than the raw numbers would suggest. Basically, GFLOPS comparisons across architectures usually end up as apples and oranges.
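For reference, those theoretical numbers come straight from the core counts and boost clocks: shader cores, times two FP32 operations per clock (a fused multiply-add), times the clock speed. Here's a minimal sketch in Python, using the core counts and boost clocks from the spec comparison above:

```python
# Theoretical FP32 throughput: shader cores x 2 ops per clock (FMA) x clock.
def gflops(cores, clock_mhz):
    return cores * 2 * clock_mhz / 1000  # cores * ops * MHz -> GFLOPS

cards = {
    "RX 5700 XT":     (2560, 1905),  # (shader cores, boost clock in MHz)
    "RX Vega 64":     (4096, 1546),
    "RTX 2060 Super": (2176, 1650),
}
for name, (cores, clock) in cards.items():
    print(f"{name}: {gflops(cores, clock):.0f} GFLOPS")
# -> RX 5700 XT: 9754, RX Vega 64: 12665, RTX 2060 Super: 7181
```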

AMD Radeon RX 5700 XT rendering

AMD provided reference models of the RX 5700 XT and RX 5700 (but not the 50th anniversary limited edition) for testing. While the circuit boards, memory, and GPU are the same, there are some design differences in the cooling shroud. Specifically, the RX 5700 XT has a grooved shroud with a curved section that has prompted some "dented" jokes. The idea is that the indent provides additional airflow, particularly in confined spaces—like if you're running CrossFire. I'm not sure how much it really matters, and whether the curve reads as a design flourish or a weird manufacturing defect is mostly a matter of taste. Some will like it, some won't, and there will inevitably be custom cards from AMD's partners that ditch the blower cooler in favor of two or three fans.

AMD says the new cooling is better than on its previous generation cards, though I'm not fully convinced. It's not extremely loud—a far cry from the cooling on the Vega cards—but that probably has more to do with the lower power requirements than with any massive change in the cooler design. Idle noise and thermals tend to be about the same for most GPUs, but playing a game is a different matter. Compared to the RTX 2060 Super, the RX 5700 XT runs just as quietly, but it uses about 25W more power at the outlet and runs about 11C hotter. It's not the end of the world, but objectively Nvidia wins this comparison.

Speaking of Nvidia, it muddied the waters last week with its launch of the RTX 2060 Super and RTX 2070 Super—models with more cores than the non-Super variants, plus 2GB extra VRAM for the 2060 Super. AMD initially targeted pricing and performance that would put the RX 5700 XT ahead of the RTX 2070, but the RTX 2070 Super changed the GPU landscape. Rather than going after the 2070 Super, AMD responded by announcing new pricing for the RX 5700 family, dropping the price on the 5700 XT by $50 and matching Nvidia's price on the RTX 2060 Super. It was a smart move, because while $449 would have been difficult to justify, $399 is a good fit for the 5700 XT.

The reduction in price was also absolutely necessary (and perhaps even premeditated). I'll look at the value proposition later, but $400 is less of a mental barrier than $450 or $500. With competition between AMD and Nvidia heating up, anyone looking to buy a new graphics card will benefit. The lack of a viable RTX competitor last year is arguably a major factor in the high launch prices for the RTX line. Now that Navi is here, Nvidia has dropped the price you'll pay for relatively similar performance—e.g., the 2060 Super is nearly the same performance as the 2070, and the 2070 Super lands relatively close to the 2080.

Just because pricing is the same on the RX 5700 XT and RTX 2060 Super doesn't mean performance, features, and API support are the same. AMD has opted not to support ray tracing (either DirectX Raytracing, aka DXR, or Vulkan-RT) with its Navi GPUs. There's no hardware-level RT acceleration and no Tensor processing clusters for AI and machine learning. And just as critically, even though it's possible to support DXR via drivers and shader calculations, AMD isn't doing that either (at least not yet). If you want an AMD GPU with hardware DXR support, you'll need to wait for Navi 20, which will also feature in the next generation Xbox and PlayStation consoles.

Rendering of AMD Navi 10 GPU used in RX 5700 XT

Let's get to the testing, where we're using our standard GPU test bed—full specs are to the right. The overclocked Core i7-8700K running at 5.0GHz helps ensure the CPU isn't a bottleneck, along with DDR4-3200 CL14 memory and fast SSD storage for the same reason. CPU bottlenecks are less of a concern with midrange and budget GPUs, but for the RX 5700 and Nvidia's RTX cards, CPU performance can be a factor, particularly at 1080p.

We've benchmarked using the latest drivers available at the time of testing, including retesting older GPUs to ensure our results are up to date. All Nvidia GPUs were tested with the 430.86 drivers, except for the two Super cards, which used 431.16. For AMD, we used the 19.6.2 drivers on the previous generation GPUs, with new drivers for the RX 5700 cards. Ideally, we'd retest all GPUs with the latest drivers, but that requires about one day per GPU. Sadly, there simply isn't enough time to do that, especially when new drivers arrive every few weeks.

The selection of games we're using for GPU testing has been updated, and we don't run DXR or DLSS for any of the benchmarks. That allows for meaningful comparisons between the various GPUs, since AMD has no support for DXR at present. The 11 games we're using consist of a fairly even mix of AMD and Nvidia promoted titles—The Division 2, Far Cry 5, Strange Brigade, and Total War: Warhammer 2 sport AMD branding, while Assassin's Creed Odyssey, Metro Exodus, and Shadow of the Tomb Raider are promoted by Nvidia. DirectX 12 is utilized in most cases where available, with the exception of Total War: Warhammer 2, where the "DX12 Beta" performance is particularly weak on Nvidia GPUs. We also checked Vulkan performance (in Strange Brigade) and found that the DX12 implementation is currently a bit faster, so we stuck with that.

Each card is tested at four settings: 1080p medium (or equivalent) and 1080p/1440p/4K ultra (unless otherwise noted). Every setting is tested multiple times to ensure consistency, and we use the best score. Minimum fps is calculated by summing all frametimes above the 97th percentile and dividing by the number of those frames, so it's an "average minimum fps" rather than an absolute minimum. That makes it a reasonable representation of the lower end of the performance scale, rather than looking only at the single worst framerate from a benchmark run.
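To make that calculation concrete, here's a minimal sketch in Python (using NumPy, with hypothetical frametime data) of how an "average minimum fps" of this sort can be computed from per-frame render times:

```python
import numpy as np

def average_minimum_fps(frametimes_ms):
    # Average the slowest ~3% of frames (frametimes above the 97th
    # percentile), then convert that mean frametime back to fps.
    times = np.asarray(frametimes_ms, dtype=float)
    cutoff = np.percentile(times, 97)   # 97th-percentile frametime (ms)
    slowest = times[times >= cutoff]    # the worst ~3% of frames
    return 1000.0 / slowest.mean()      # mean frametime (ms) -> fps

# Hypothetical run: mostly 60 fps (16.7ms frames) with a few slow frames.
frametimes = [16.7] * 97 + [33.3, 40.0, 50.0]
print(round(average_minimum_fps(frametimes), 1))  # -> 24.3
```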

Here are the results, starting with 1080p. You might think that's not the primary target for a $400 graphics card, but if you're hoping to max out the capabilities of a 144Hz monitor, 1080p makes that easier than 1440p.
