AMD Radeon RX 6800, 6800XT review: The 1440p GPU beasts you’ve been craving

As 2020 draws to a close, the matter of “which new $500+ GPU is better” has become moot for most prospective PC gamers. Nvidia had a go at it starting in September with the impressive RTX 3000 series, offering on-paper value that blew its RTX 2000 series straight into obsolescence. But so far, its three RTX 3000 models have suffered from a mix of low supply and savvy scalpers scooping up such scant inventory.

This changes the tenor of any conversation about AMD’s new RDNA 2 line of GPUs. In a more stable supply-and-demand universe, I’d be careful to warn buyers about the pros and cons of $500+ GPUs made by either manufacturer this year—each GPU has its own clear victories. That’s already good news for AMD’s two new cards going on sale this week, since it has been years since the “Red Team” has been this competitive with Nvidia.

With no clear indication that AMD will handle Radeon RX 6800 ($579) and Radeon RX 6800XT ($649) supplies any better than Nvidia and its RTX 3000 series, the verdict is a bit wacky. Your best option for the rest of 2020, honestly, is whatever you can actually purchase at a reasonable retail price. These cards are duking it out enough to give either side something in the way of future-proofed gaming performance at high-but-fair prices. If you’re incredibly eager to upgrade to this tier and see anything in stock this year, outside of the worst resellers, close your eyes and buy.

Should both sides be sold out for the rest of the year, of course, then picking through the differences may adjust how you set up on-sale notifications for GPUs in the next few months. In that case, AMD may charm you. Across the board, Red Team’s cards have speed to spare, with the $649 RX 6800XT in particular contending well against, if not outright surpassing, the $699 RTX 3080. And either new AMD card might be your no-question champ if you’re running VRAM-intensive workloads and gaming at 1440p—and if you are unmoved by ray tracing.

I’m sorry, how much L3 cache?!

Spec | AMD Radeon RX 6800XT | AMD Radeon RX 6800 | Nvidia GeForce RTX 3080 FE | Nvidia GeForce RTX 3070 FE | Nvidia GeForce RTX 2080 Ti FE
--- | --- | --- | --- | --- | ---
Stream Processors | 4,608 | 3,840 | n/a | n/a | n/a
CUDA Cores | n/a | n/a | 8,704 | 5,888 | 4,352
Texture Units | 288 | 240 | 272 | 184 | 272
ROPs | 128 | 96 | 96 | 96 | 88
Tensor Cores | n/a | n/a | 272 (3rd-gen) | 184 (3rd-gen) | 544
Ray Accelerators | 72 | 60 | n/a | n/a | n/a
RT Cores | n/a | n/a | 68 (2nd-gen) | 46 (2nd-gen) | 68
Core Clock | 1,825MHz | 1,700MHz | 1,440MHz | 1,500MHz | 1,350MHz
Boost Clock | 2,250MHz | 2,105MHz | 1,710MHz | 1,725MHz | 1,635MHz
Memory Bus Width | 256-bit | 256-bit | 320-bit | 256-bit | 352-bit
Memory Speed | 16Gbps | 16Gbps | 19Gbps | 14Gbps | 14Gbps
Memory Bandwidth | 512GB/s | 512GB/s | 760GB/s | 448GB/s | 616GB/s
Memory Size | 16GB GDDR6 | 16GB GDDR6 | 10GB GDDR6X | 8GB GDDR6 | 11GB GDDR6
TDP | 300W | 250W | 320W | 220W | 260W
MSRP at launch | $649 | $579 | $699 | $499 | $1,199

The breakdown between Nvidia’s RTX 3000 series and AMD’s RDNA 2 series sees each side emphasize certain specs. VRAM is the most obvious differentiator, with AMD getting a 16GB pool of GDDR6 VRAM into users’ hands at every tier of this year’s line. Nvidia made its own funky wager: less of the same-specced VRAM (8GB) in its $499 RTX 3070, and a little more (10GB) in its $699 RTX 3080, though the latter is bumped to a blistering GDDR6X configuration.

Otherwise, the spec showdown between AMD and Nvidia comes down to compute units and clock speeds. AMD’s RX 6000-series cards boost past 2.1GHz, well ahead of the 1.7GHz range on the RTX 3000 series, while their stream processor counts pale next to Nvidia’s CUDA core counts. That’s not an apples-to-apples comparison, I admit, but both figures feed directly into each card’s theoretical teraflop rating.
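If you want to see where those teraflop figures come from, the arithmetic is simple: each shader unit can retire one fused multiply-add (two floating-point operations) per clock, so peak FP32 throughput is roughly two times the shader count times the boost clock. The quick Python sketch below runs that math on the spec table above; treat it as a back-of-the-envelope theoretical peak, not a predictor of in-game performance.

```python
# Back-of-the-envelope FP32 peaks from the spec table above. Each shader
# (stream processor or CUDA core) retires one fused multiply-add per clock,
# i.e. two floating-point operations: 2 * shaders * GHz gives GFLOPS,
# and dividing by 1000 gives TFLOPS.

def peak_tflops(shaders: int, boost_ghz: float) -> float:
    return 2 * shaders * boost_ghz / 1000.0

cards = {
    "RX 6800XT": (4608, 2.250),
    "RX 6800":   (3840, 2.105),
    "RTX 3080":  (8704, 1.710),
    "RTX 3070":  (5888, 1.725),
}

for name, (shaders, clock) in cards.items():
    print(f"{name}: ~{peak_tflops(shaders, clock):.1f} TFLOPS")
# RX 6800XT ~20.7, RX 6800 ~16.2, RTX 3080 ~29.8, RTX 3070 ~20.3
```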

Like the RTX 2000 series before it, Nvidia’s RTX 3000 cards include two types of proprietary cores: one pool dedicated to ray tracing, and the other dedicated to the mathematical heavy lifting behind Nvidia’s machine-learning models. AMD says that it has added its own “ray accelerators” to the RX 6000 series’ boards (one per compute unit). In terms of ray tracing-specific computation, AMD describes them as able to calculate “up to four ray/box intersections or one ray/triangle intersection every clock.” This is similar to how Nvidia’s RT cores work, though the RTX 3000 series’ second-generation cores additionally handle the interpolation of triangle positions over time (used to efficiently render ray-traced motion blur) and double the prior generation’s ray/triangle intersection rate.
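For anyone unfamiliar with the jargon, a “ray/box intersection” is the test a GPU repeats millions of times per frame while walking a bounding-volume hierarchy to narrow down which triangles a ray might actually hit. The snippet below is a minimal, software-only sketch of the classic “slab” test, included purely to illustrate the operation these units run in fixed-function hardware; it is not AMD’s or Nvidia’s implementation.

```python
# A minimal software sketch of the ray/box ("slab") intersection test that
# ray accelerators and RT cores evaluate in fixed-function hardware while
# traversing a bounding-volume hierarchy. Illustrative only; assumes nonzero
# direction components for brevity.

def ray_hits_box(origin, direction, box_min, box_max):
    """True if the ray origin + t*direction (t >= 0) crosses the box."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        t1, t2 = (lo - o) / d, (hi - o) / d  # entry/exit along this axis
        t_near = max(t_near, min(t1, t2))    # latest entry across axes
        t_far = min(t_far, max(t1, t2))      # earliest exit across axes
    return t_near <= t_far

unit_box = ((0, 0, 0), (1, 1, 1))
print(ray_hits_box((-2, 0.3, 0.3), (1.0, 0.1, 0.05), *unit_box))  # True: hit
print(ray_hits_box((-2, 5.0, 0.3), (1.0, 0.1, 0.05), *unit_box))  # False: miss
```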

AMD currently has no answer to Nvidia’s tensor cores—and their effectiveness at intelligently upscaling a moving, lower-res image. Instead, AMD reps hinted loudly at something called FidelityFX Super Resolution coming to its latest GPUs at some undetermined point in the future. For now, nothing in the company’s press materials clarifies exactly how this system will work, how it might differ from AMD’s existing FidelityFX sharpening tools, or what in the RDNA 2 architecture Super Resolution will leverage.

Weirdly, AMD offered few specifics about one of the more incredible spec figures on its RX 6000-series GPUs, one unmatched by Nvidia: a whopping 128MB of L3 cache (surpassing the already staggering 64MB of L3 cache found on various AMD Ryzen CPUs). Exactly how current or future games will leverage that high-bandwidth chunk of cache, particularly alongside measures like tile-based or deferred rendering in a game’s juggling of GPU and CPU resources, remains unclear.
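Still, the principle behind a big on-die cache is easy to model: every memory request served from cache is a request the 256-bit GDDR6 bus never has to field. The sketch below uses a hypothetical hit rate and on-die bandwidth (figures AMD’s press materials don’t pin down for gaming workloads) just to show how effective bandwidth scales once a cache starts absorbing most of the traffic.

```python
# A simplified model of how a large on-die cache amplifies effective memory
# bandwidth. The VRAM bandwidth comes from the spec table; the cache bandwidth
# and hit rate are hypothetical placeholders, not AMD-published figures.

VRAM_BW_GBS = 512     # RX 6800-series 256-bit GDDR6, per the spec table
CACHE_BW_GBS = 2000   # hypothetical on-die cache bandwidth
HIT_RATE = 0.60       # hypothetical fraction of requests served from cache

# Sustainable request bandwidth is capped by whichever pool saturates first:
# VRAM must absorb the misses, the cache must absorb the hits.
effective_bw = min(VRAM_BW_GBS / (1 - HIT_RATE), CACHE_BW_GBS / HIT_RATE)

print(f"Effective bandwidth: ~{effective_bw:.0f} GB/s "
      f"({effective_bw / VRAM_BW_GBS:.1f}x the raw GDDR6 number)")
# ~1280 GB/s, or 2.5x, under these made-up assumptions
```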

A tale of two Creeds

And the result: AMD is zigging while Nvidia is zagging, and each ultimately arrives at the blistering 3D game performance it advertises. The test results I’ve seen comparing each company’s latest generation of cards are certainly fascinating—and they’re mostly good news for AMD, though not an across-the-board victory.

When pressed about that giant chunk of L3 cache, AMD representatives pointed to how well the RX 6000 series manages game performance at 1440p resolution, a testing scenario that is typically CPU-limited—meaning different GPUs’ 1440p results tend to show milder performance differences. But all of that sweet, sweet L3 cache (which AMD dubs “Infinity Cache”) can alleviate CPU bottlenecks in lower-resolution scenarios, AMD says, so I ran a few 1440p tests. Indeed, they paid off in AMD’s favor—though, again, not as an across-the-board victory.

One interesting comparison comes from the two latest Assassin’s Creed games, each a PC pulverizer in its own right. Both games include similar benchmark sequences, which see a camera fly through and over a populated village and thus push a mix of CPU-intensive tasks (particularly managing crowds of city denizens) and the usual GPU-intensive range of particle effects, large view distances, level-of-detail computation, and light-and-shadow rendering.

The newer game, this month’s Assassin’s Creed Valhalla, sees AMD’s two new cards compete quite well against Nvidia’s comparable models, with the $650 RX 6800XT surpassing the $700 RTX 3080 by a few frames per second. Turn the resolution down to 1440p, however, and AMD screams ahead of the RTX 3080—18 percent more for the RX 6800XT, and 2.6 percent more for the cheaper RX 6800.

Rewind two years, meanwhile, and Assassin’s Creed Odyssey tells a slightly different story. At 4K, the RX 6800XT still holds its own by barely beating the RTX 3080. But at 1440p, the numbers break down a little more traditionally—with the $499 RTX 3070 matching the $650 RX 6800XT, and the RTX 3080 pulling ahead of AMD’s 6800XT by 9.3 percent.

I present both of those side by side because AMD’s ability to blaze at 1440p varies. Gears 5 and Red Dead Redemption 2 are utterly impressive examples of AMD’s newest cards kicking 1440p butt, while The Witcher 3 missed that memo, with its test results, again, lining up with the cards’ price points.

It’s a shame about ray (tracing)

At its worst, the RX 6800XT falls slightly beneath the RTX 3080, but usually within the performance-per-dollar range you’d expect from something $50 cheaper. The same equation works for the $579 RX 6800, whose gains over the $499 RTX 3070, at their worst, are a bit scant for an $80 premium.

The trouble, unsurprisingly, comes from the very things Nvidia has been banking on all along: its RTX-specific cores.

As of press time, some of my favorite DirectX Ray Tracing (DXR) games to test simply do not work with RDNA 2 GPUs, perhaps due to those games rejecting any GPU IDs outside of Nvidia’s list. I look forward to testing and comparing some of those games in the future, since Shadow of the Tomb Raider leverages a hearty ray-traced shadow pipeline, while Quake II RTX leans on ray tracing for its very handsome, and very demanding, global illumination model.

In the meantime, the most ray tracing-intensive game I currently have, Minecraft RTX Beta, spells out the pound-for-pound difference between the two GPU flavors’ takes on ray tracing—in more ways than one. On a sheer ray-tracing level, the RTX 3080 more than doubles the RX 6800XT’s performance across the board, while the RTX 3070 exceeds the base RX 6800’s ray-tracing performance by 50 percent in one case and 74 percent in another. (Though nobody’s saying exactly why, I have a sneaking suspicion that Nvidia’s updated RT cores, refined after two years of seeing RT performance in the wild, drive Nvidia’s lead to some extent here.)

But that gap only grows wider when the game leverages DLSS, an Nvidia-exclusive upscaling technology that turns Minecraft RTX Beta‘s 1440p version into a near-identical 4K signal. Remember, intelligent upscaling matters when the rendering and translation of rays of light are also in the mix, since those rays’ inherent resolution affects what they look like when they’re reflected. “Image sharpening” tools, used to cheaply fine-tune a 3D image, can’t account for this. Thus, with Nvidia’s latest DLSS version enabled, RTX cards blow further past anything AMD can currently muster.
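The pixel math explains why the gap widens. A 4K frame contains 2.25 times as many pixels as a 1440p frame, and in a heavily ray-traced game every one of those pixels spawns rays, so rendering at 1440p and letting DLSS reconstruct the 4K output cuts the ray and shading workload by roughly that factor before the comparatively cheap upscale. A quick sketch of that arithmetic, ignoring per-game and per-quality-mode variation:

```python
# Rough pixel math behind DLSS's appeal in ray-traced games: render at 1440p,
# reconstruct to a 4K output, and you shade (and trace rays for) far fewer
# pixels than at native 4K. Simplified; real ray counts per pixel vary.

native_4k = 3840 * 2160    # 8,294,400 pixels
render_qhd = 2560 * 1440   # 3,686,400 pixels

print(f"Native 4K pixels:    {native_4k:,}")
print(f"1440p render pixels: {render_qhd:,}")
print(f"Workload ratio:      ~{native_4k / render_qhd:.2f}x fewer pixels to trace")
# ~2.25x fewer pixels shaded and traced before the upscale
```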

The performance gap narrows somewhat when testing Control, our pick for 2019’s game of the year, in its own 4K ray-tracing benchmark—which is unsurprising, given that it uses less of a global-illumination system and more of a selective-yet-punishing series of RT perks. Even so, Control is among the best games to exhibit how impressive DLSS 2.1’s implementation looks as an alternative to TAA, complete with 4K upscales of fine details like signs and posters while rendering at only 1440p. With DLSS turned on, AMD once again eats Nvidia’s ray-tracing dust. Watch Dogs Legion does even more to narrow the gap between Nvidia and AMD, with that game proving particularly tricky for the RTX 3070 to handle… until it once again uses DLSS to jump ahead.

Ray tracing may not be your cup of tea, and PC gaming will continue to let players punt the feature in favor of more frames. But AMD has dedicated significant silicon to the feature in this generation, and that, combined with RT features in every next-gen console this year, means we should expect more ray-tracing support going forward. Raw performance in Watch Dogs Legion is a sign of efficient rendering working in AMD’s favor in future games, but until AMD pulls that FidelityFX Super Resolution card out of its sleeve, it’s lagging behind in a feature that comes with its latest GPUs, whether you want it or not.

A quick word on VRS, latency

The usual gamut of 3DMark benchmarks.

Two other parts of AMD’s RDNA 2 presentation to the press sounded very similar to features Nvidia has previously advertised: variable rate shading (VRS) and reduced latency. The former will likely pick up significant steam as more next-gen console games take advantage of it; it speeds up rendering by intelligently deciding which parts of a moving image can be shaded at reduced detail in ways users likely won’t notice. The latter schedules CPU workloads to overlap a bit more with GPU workloads, instead of waiting an extra frame for a CPU computation to complete before a frame can be drawn.
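To make the VRS idea concrete, here’s a minimal sketch of the kind of per-tile decision such a system makes: flat, low-contrast tiles (sky, smooth walls) get one shading result reused across a block of pixels, while busy tiles keep full-rate shading. Real engines use richer heuristics (motion vectors, frame-to-frame luminance deltas) and the GPU applies the rates in hardware; this is only the decision logic in miniature, with made-up tile data.

```python
# An illustrative, heavily simplified variable rate shading heuristic: pick a
# coarse 2x2 shading rate for tiles with little luminance variation and keep
# full 1x1 shading for detailed tiles. The tile data below is made up.

from statistics import pvariance

def pick_shading_rate(tile_luma, threshold=4.0):
    """Return '2x2' (one shade shared by four pixels) for flat tiles, else '1x1'."""
    return "2x2" if pvariance(tile_luma) < threshold else "1x1"

sky_tile = [200, 201, 200, 202, 201, 200, 201, 202]  # nearly uniform
foliage_tile = [40, 180, 95, 220, 12, 160, 70, 240]  # high contrast

print(pick_shading_rate(sky_tile))      # 2x2 -> roughly a quarter of the shader work
print(pick_shading_rate(foliage_tile))  # 1x1 -> full detail preserved
```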

For now, very few PC games have emphasized VRS in their 3D engines, but as a part of DirectX 12 Ultimate, the feature will likely soon roll out in more games—and, quite honestly, not really stand out between either AMD or Nvidia. And on the input latency front, I’m admittedly a freak about responsive gameplay and frame rates, but even I struggle to perceive the impact of AMD’s supposed tweaks when playing twitchy games that easily run in the 70-80fps range on both AMD and Nvidia hardware. AMD suggested I test the feature while playing something closer to 30fps, in order to perceive the AMD latency difference, but I’ll be honest: I’m not sure “twitchy game,” “high-performing AMD GPU,” and “only 30fps” will land in the same gaming sentence any time soon.

Should you consider building an entirely new system, AMD has quite the cheeky sales pitch: Smart Access Memory. AMD insists that, should your motherboard, CPU, and GPU all line up, you can expand the usual I/O window between CPU and GPU, which AMD says is traditionally limited to roughly 256MB of mapped memory. Smart Access Memory opens that window up to “all” of the GPU’s memory, AMD says, but I don’t have a Ryzen 5000-series CPU or an AMD 500-series motherboard to pair with either of the new AMD GPUs.
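As a rough mental model (assuming the feature behaves like a resizable CPU-to-VRAM mapping window, which is how AMD’s description reads), the win is that the CPU no longer shuttles large uploads through a 256MB window in pieces. The numbers below are illustrative; the 2GB asset is a hypothetical example, not an AMD figure.

```python
# A simplified model of why expanding the CPU's view of VRAM can help: with a
# ~256MB window, large uploads get staged in chunks; with the whole 16GB pool
# mapped, the CPU can address any VRAM location directly. The asset size is a
# hypothetical example.

import math

APERTURE_MB = 256         # traditional mapped window
FULL_VRAM_MB = 16 * 1024  # RX 6800-series VRAM pool
asset_mb = 2048           # hypothetical streaming upload

chunks_legacy = math.ceil(asset_mb / APERTURE_MB)
chunks_mapped = math.ceil(asset_mb / FULL_VRAM_MB)

print(f"Legacy 256MB window: {chunks_legacy} staged copies")  # 8
print(f"Fully mapped VRAM:   {chunks_mapped} direct pass")    # 1
```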

Lacking that hardware, I’m instead left looking at charts provided to the press that claim performance jumps anywhere from 0 to 15 percent, as estimated across 10 games running primarily via the DirectX 12 API, along with two in Vulkan. In nearly every example given by AMD, that performance boost shrinks as in-game resolution goes up, and while it’s hard to determine an “average” boost across various games and resolutions, it seems to land somewhere in the 6 to 8 percent range—not a shabby performance kick if you’re already drooling over AMD’s latest CPUs, but also not necessarily as blazing as “access to all of the GPU’s memory” might lead you to believe.
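For what it’s worth, here’s the kind of arithmetic behind that “somewhere in the 6 to 8 percent” estimate: take per-game uplifts spanning AMD’s claimed 0-to-15-percent spread and average them geometrically. The per-game percentages below are hypothetical placeholders, not values from AMD’s charts.

```python
# Averaging hypothetical per-game uplifts with a geometric mean, the usual way
# to aggregate relative performance across a benchmark suite. These numbers
# are placeholders spanning a 0-15 percent spread, not AMD's actual data.

from math import prod

claimed_uplifts_pct = [0, 2, 5, 6, 7, 8, 9, 11, 13, 15]  # hypothetical
speedups = [1 + p / 100 for p in claimed_uplifts_pct]

geo_mean = prod(speedups) ** (1 / len(speedups))
print(f"Aggregate uplift: ~{(geo_mean - 1) * 100:.1f} percent")  # ~7.5 percent
```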

More VRAM, more expectations?

Though AMD advertises a built-in feature called “Rage Mode” for its RX 6800XT, meant to ramp up fans and emphasize boost clocks across the board while gaming, I found this mode didn’t affect performance. On the other hand, both the 6800 and 6800XT can be tweaked with either manual or automatic touches to things like core clocks, memory clocks, and voltage. In my testing, I found I could carve an additional 0-2 frames per second across the board by toggling the automatic VRAM overclock setting for both cards, though AMD makes you click through a somewhat scary agreement before its built-in tools open up for such tweaking. (Rage Mode doesn’t require such a click-through, which might be why it has so little impact on perf.)

AMD’s other avenue to guaranteeing the future of its RX 6000 series is its generous VRAM pool, which currently clocks in at 16GB of GDDR6 memory for every GPU. As of press time, debates have raged about whether a massive VRAM pool is a difference maker for next-gen 3D games and applications, particularly since Nvidia’s comparable 2020 cards have opted for lower amounts.

As an example, Gears 5, which launched in 2019 with an optional 4K texture pack, tells users that it needs no less than 8GB of VRAM to toggle those higher-res textures. The $499 RTX 3070 barely clears that threshold, but admittedly, it doesn’t struggle with Gears 5 as a result. Its 4K benchmark results are right there with the RX 6800’s—better than its lower price would suggest.

Does that mean buying a GPU in the 8GB to 10GB VRAM range is a fine bet in the short term? Honestly, we’re not sure. Nvidia has talked quite a bit about its RTX I/O system, designed to supercharge data throughput in ways that remove CPU bottlenecks and drive more crucial GPU data directly to where it’s supposed to go. That doesn’t work exactly the same as AMD’s Infinity Cache feature, but both relieve pressure on dumping data into VRAM… though we’ve yet to see this functionality in PC gaming (even though it’s marked as an officially supported offshoot of DirectX 12 Ultimate’s DirectStorage API).

If your PC’s on-board solid-state memory can in any way work as figurative RAM, particularly when clocked at PCI-e 4.0 rates—much like the PS5 and Xbox Series are doing—then that takes VRAM anxiety out of the equation somewhat. So, too, do tricks like VRS, which reduce the burden of exactly how much texture data is being streamed onto your screen at any given moment. And there’s an argument to be made about the RTX 3080 having much faster VRAM than AMD’s latest rivals—though the above test results don’t show that pool of GDDR6X memory giving that card a clear lead.

In the meantime, current high-end PC games report certain amounts of VRAM “usage,” but typical system utilities only report the allocation a game requests from a GPU, as opposed to an actual MB-by-MB breakdown of what is and isn’t being used. Gears 5, thankfully, is an exception to that rule, as the reporting tool inside its benchmark shows a maximum of 5.75GB of VRAM being used by its 4K mode, well short of its stated 8GB requirement. Otherwise, using a community-made injection tool like Special K, we can see real VRAM demands generally falling beneath 6GB in the 4K-minded games currently on the market.

While content creation apps have become more GPU intensive over the years, and typically require large amounts of VRAM to function, I don’t have any of those in my usual testing suite. But with news of DaVinci Resolve Studio releasing a free beta of its 17.0 version this week, I ran the numbers on both RTX 3080 and RX 6800XT by using Puget Systems’ benchmark suite. My preliminary results lean in RTX 3080’s favor, with a 9.9-percent higher “overall” score in its 4K content benchmark. That deserves a few grains of salt, considering this is a test of beta software that could drastically change, but I wanted to offer that as a data point outside of 3D game rendering.

AMD has been reading your comments

When all else fails, of course, more VRAM outside the purview of any API-development requirements is good baseline stuff for however you want to use your GPU (or for however indie developers brute-force their games into performant territory). We may very well run into GPU testing scenarios in the next year or two showing that 8 or 10GB of VRAM could leave 4K gamers stuck min-maxing their settings menus to reach performance parity with AMD’s suite of 16GB cards. So far, however, that hasn’t been the case.

Even so, AMD isn’t charging users a particular premium to enjoy a hefty pool of VRAM on top of everything else the RX 6000 series offers. It’s not a ray-tracing champ by any stretch, and its 4K capabilities vary on a game-by-game basis, but I’ve seen a lot of people say they don’t care about either of those PC gaming scenarios now or in the near future. AMD was clearly reading those comment sections when putting the RX 6000 series together, and the results on both of this week’s new cards are good enough for competitive 4K gaming and overkill 1440p performance.

Now, it just remains to be seen how many of the comment sections about card availability Red Team has read.

Listing image by Sam Machkovech