
[Hardware] Nvidia RTX 4070 vs AMD RX 6950 XT: There can be only one winner


Blackfire


Nvidia has rolled out its $600 killer, the GeForce RTX 4070. And wouldn't you know it, prices of AMD's last-gen Radeon RX 6950 XT have tumbled to just over $600. Which immediately raises the question: which is the best graphics card for your roughly 600 bucks?

We'll skip over the question of where the hell the rest of AMD's RX 7000-series graphics cards are, because right now its strategy seems to be relying on its last-gen options to take the fight to Nvidia outside of the $999 RX 7900 XTX.

This, then, is a classic contest: a last-gen GPU with traditional enthusiast specs, including a big memory bus and loads of VRAM, versus a young upstart with more advanced technology and features but hailing from lower down the model line.

[Jeremy's take]

Just enough. That's Nvidia's MO when it comes to graphics memory. And, I'm sorry, but 'just enough' isn't actually good enough for $600. Especially when 'just enough' applies today and can't be relied on tomorrow.

AMD's Radeon RX 6950 XT, you see, is a proper enthusiast-spec card. You get a 256-bit memory bus and 16GB of VRAM, and that means plenty of bandwidth and indeed sufficient VRAM for the latest games.
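For a rough sense of the bandwidth gap, peak memory bandwidth is just bus width times the memory's effective data rate. A minimal sketch, plugging in the published memory speeds for these two cards (18Gbps GDDR6 for the 6950 XT, 21Gbps GDDR6X for the 4070, neither of which is quoted in the piece itself):

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) * data rate
# in Gbps. The memory speeds are the published specs, an assumption on
# my part rather than figures taken from this article.
def bandwidth_gbs(bus_bits: int, data_rate_gbps: float) -> float:
    return bus_bits / 8 * data_rate_gbps

print(bandwidth_gbs(256, 18.0))  # RX 6950 XT: 576.0 GB/s
print(bandwidth_gbs(192, 21.0))  # RTX 4070:   504.0 GB/s
```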

You could argue that the increased cache of newer GPUs like the RTX 4070 makes memory bandwidth less critical. I'm sure Dave will. And it's true. But the problem with the RTX 4070's stingy 192-bit bus isn't bandwidth, it's the limit it puts on how much graphics memory can be attached.


Long story short and without getting into the technicalities, a 192-bit bus means Nvidia had to go for 12GB of VRAM. 16GB isn't an option with a 192-bit bus. And for better or worse, 12GB is marginal when it comes to the most demanding current games.

That is only going to get worse. Sure, if you're only planning to keep your new card for a year, 12GB versus 16GB may not prove much of an issue. But I'd say within around 18 months to two years, 12GB is going to be a real barrier.
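For the curious, the technicality is simple arithmetic. GDDR6/GDDR6X hangs one memory chip off each 32-bit slice of the bus, and (short of exotic clamshell layouts) chips come in 1GB and 2GB densities. That chip-level detail is background knowledge rather than anything from the article, so treat this as a rough sketch:

```python
# VRAM capacities a given bus width allows, assuming one memory chip
# per 32-bit channel and today's 1GB/2GB GDDR6/GDDR6X chip densities.
# (Assumed background detail, not a figure from the article itself.)
def vram_options_gb(bus_bits: int, densities_gb=(1, 2)) -> list[int]:
    channels = bus_bits // 32  # one chip per 32-bit slice of the bus
    return [channels * d for d in densities_gb]

print(vram_options_gb(192))  # [6, 12] -> 12GB is the RTX 4070's ceiling
print(vram_options_gb(256))  # [8, 16] -> how the RX 6950 XT reaches 16GB
```

So 16GB simply doesn't divide evenly across a 192-bit bus; the next step up from 12GB would mean doubling to 24GB.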


Already, numerous games can exceed 12GB depending on the settings used, including The Last of Us, Resident Evil 4, Forspoken and a fair few others. That roster will only grow.

The real irony is that turning on ray tracing, which the RTX 4070 is inherently better at, only makes matters worse. It tends to bump up VRAM usage significantly, making it more likely that the RTX 4070 runs into problems as more and more games use ray tracing in future.


Of course, I haven't even mentioned the fact that the 6950 XT is around 15% faster than the RTX 4070 at pure raster rendering, even when running out of VRAM isn't an issue.

All of which isn't to say that I don't value the RTX 4070's feature set. In my view, DLSS scaling is definitely that little bit sharper and more effective than AMD's FSR. And frame generation in DLSS 3 is a nice feature, even if it's absolutely not the same thing as actually rendering more frames and will do nothing to help the RTX 4070 in those instances when it runs out of VRAM.

But the point is that I do think the RTX 4070 has a better feature set. The problem is that it's been unnecessarily hobbled by Nvidia's stinginess. With 16GB of VRAM, I'd happily overlook the RTX 4070's raster performance deficit. But I ain't paying $600 for a card that's already flirting with obsolescence.

     

     

[Dave's take]

     

Part of it is going to be a bit of pride, but I don't think I could reasonably spend $600 on an RX 6950 XT over an RTX 4070. Even as I'm typing that, it doesn't seem right, considering the AMD Radeon card was a $1,100 GPU at launch. So, is it just an innate struggle on my part to drop such a significant amount of money on a last-gen piece of tech?

I'm going to say, no.

If you're looking at straight raster performance, the AMD card has a fairly significant edge, especially for two cards that now cost the same amount of cash, with the red team dropping its pants on pricing for its once-flagship GPU. And a few years ago that would have absolutely been enough for me to say 'stuff the RTX 4070, imma get me some AMD goodness.'


But we no longer live in a world where the old rules of graphics card performance apply. Native resolution and rasterised rendering aren't the only things to consider anymore. Since Nvidia dropped real-time ray tracing as a genuine thing in 2018, it's gained ever more traction in the intervening years.

Sure, at the outset it was only in a handful of games, and such a performance hog that it felt more a proof-of-concept feature than anything people would actually use on a day-to-day basis. And Nvidia's own data bears this out, with it reporting that only 37% of RTX 20-series gamers enabled the feature back in 2018.


Even fewer used the emergent upscaling magic of DLSS: only 26% of gamers who could enable it did.

Both technologies have significantly improved, and so have the GPUs which run them, making ray tracing, this top end of the graphical feature stack, actually playable on a wide range of Nvidia cards.


And that name-dropping of the green team there is deliberate, because however much more prevalent ray tracing has become in the gaming consciousness, and in actually released games, AMD is still not great at the ol' tracing of rays. Though it's improved in the RDNA 3 architecture, any game that uses even a smidge of ray tracing effects puts the RX 6950 XT behind the RTX 4070 in our benchmarks.

Jeremy's argument that the difference between 16GB and 12GB of VRAM is going to make the real difference going forward is fine, but I'm yet to be convinced that it's going to be a real battle outside of some very specific instances.

Then there's upscaling, which lowers the demands on VRAM, and the extra L2 cache Nvidia's slapped onto its Ada GPUs means it has the bandwidth, too. While AMD's FidelityFX Super Resolution is an impressive upscaler in its own right, the company is still playing catchup with DLSS, and even more so now that Nvidia has dropped the RTX 40-series' not-so-secret weapon: Frame Generation.
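On that first point, a quick illustration of why upscaling eases the VRAM and bandwidth pressure: DLSS and FSR render internally at a fraction of the output resolution and scale the result up. A minimal sketch using the commonly documented quality-mode scale factors, which are an assumption on my part rather than numbers from this piece:

```python
# Internal render resolution under upscaling. The scale factors are the
# commonly documented DLSS/FSR presets (assumed, not from the article).
SCALE = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}

def internal_res(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    s = SCALE[mode]
    return round(out_w * s), round(out_h * s)

w, h = internal_res(3840, 2160, "Quality")  # 4K output, Quality preset
print(w, h, f"~{w * h / (3840 * 2160):.0%} of native pixels")
# -> 2560 1440 ~44% of native pixels
```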

AMD's got its own frame interpolation feature reportedly incoming, but Nvidia's Frame Generation is here, and already gifting RTX 40-series gamers free frame rate upgrades with no visible fidelity hit and super smooth gaming even at 4K.

It's not enabled in every game, and needs to be coded in by the devs, but it's being baked into game engines now and will certainly become more widely used. I don't want to find myself with an RX 6950 XT, proud of its old-school raster performance, when I really want to be running some shiny new lighting effects at 4K in a new game.

Granted, there aren't many new games these days, but I still think I'd find it hard taking a step back in GPU generational terms. Especially given the performance gap between the RTX 4070 and RX 6950 XT could well end up getting wider over time.


Source: https://www.pcgamer.com/nvidia-rtx-4070-vs-amd-rx-6950-xt/

     

     

     

     


     
