The RTX 3080 is the most powerful GPU we've ever seen at this price point. But, we've said that before. And in fact we say it every time there's a new graphics card generation. So what makes this one so darn special? I mean, the hype surrounding it was so huge that even we got caught up in it. But as we all know, hype disappoints and absolute hype disappoints absolutely.
While you read over our test platform, let me get this off my chest. NVIDIA, man, you guys really went out of your way to set yourselves up here, didn't you? Like, I don't get it. Spoiler alert: you're shipping the fastest graphics card on the market at half the price of the previous king. Why not just leave it at that? Why push your luck with claims like "twice as fast as the RTX 2080"? Because it's not.
To be very clear, I'm not saying, you know, boo, 3080, biggest disappointment ever, because right out of the gate at 1440p we're looking at a performance lead over not just the price-comparable last-gen 2080 but even the 2080 Ti, and that's with RTX disabled. I mean, this is a freakin' awesome leap in raw horsepower. Overall, we saw anywhere from a 20 to 75% uplift over the 2080, with the biggest benefits coming in DirectX 12 and Vulkan titles. So then, the only reason I'm disappointed at all is because NVIDIA told me to expect double the performance of the 2080, which, as it turned out, was anywhere from a small embellishment to a fish-this-big whopper. But wait, okay, because perhaps there's more to this. Could our lackluster CS:GO performance numbers hold the key? Look how CPU-bound this game ends up, even at 1440p. So let's kick it up a notch to 4K, where we're gonna leave out our RX 5700 XT so we can test out DLSS, NVIDIA's Deep Learning Super Sampling technology. Is that 41 frames per second in Microsoft Flight Simulator? At 4K Ultra, no less? Not only that, but the minimum of 34 FPS is perfectly playable in a game that's more focused on visuals than on fast movement, which means we can definitively say yes, it can run Flight Sim 2020. Like, damn.
Now, in general, we're still not seeing that 2x performance bump, with results closer to 75%, but we're also still outpacing our RTX 2080 Ti by anywhere from 10 to 30%. And at 4K, CS:GO really lets the 3080 flex its big gains in memory bandwidth, courtesy of its 10 gigs of GDDR6X memory. That is a very small drop in performance compared to 1440p.
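For context, that bandwidth jump comes from the wider, faster memory interface rather than the capacity itself. Here's a rough back-of-the-envelope sketch using NVIDIA's published specs (320-bit bus at 19 Gbps GDDR6X for the 3080 versus 256-bit at 14 Gbps GDDR6 for the 2080); treat it as a ballpark, not a benchmark:

```python
# Rough memory bandwidth estimate from published specs:
# bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

rtx_3080 = bandwidth_gb_s(320, 19.0)  # GDDR6X -> ~760 GB/s
rtx_2080 = bandwidth_gb_s(256, 14.0)  # GDDR6  -> ~448 GB/s

print(f"RTX 3080: {rtx_3080:.0f} GB/s")
print(f"RTX 2080: {rtx_2080:.0f} GB/s")
print(f"Uplift:   {rtx_3080 / rtx_2080 - 1:.0%}")  # roughly +70%
```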
So, all right then, while it may not be exactly what NVIDIA promised us, the 3080 is still a gaming monster. Maybe productivity will give us a clearer understanding of what they meant by that whole double-the-performance thing. And yeah, okay, that's double, and well ahead of the RTX 2080 Ti. In fact, in Blender, the RTX 3080's CUDA score is so good that it rivals, excuse me, scratch that, beats the RTX 2080's RTX-optimized OptiX render time. Like, excuse me, pardon? And don't even get me started on the 3080's OptiX times. This is game-changing for students and prosumers and even professionals looking to build affordable and powerful 3D rendering stations. SPECviewperf does bring us back down to earth somewhat, with performance that's more in line with our gaming numbers, which is a bummer, but at least it gives us a hint as to where NVIDIA may have been focusing when they came up with that double figure. Interestingly though, team red pulled off a couple of wins here too. So while the 3080 tops the charts in most cases, it's not all-powerful, and AMD's Big Navi announcement next month could extend some of those leads.
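By the way, if you want to replicate that CUDA-versus-OptiX Blender comparison at home, a minimal sketch looks something like this. It assumes a Blender build of this era on your PATH, where Cycles accepts `--cycles-device CUDA` or `OPTIX` after the `--` separator, and `scene.blend` is a placeholder for whatever benchmark scene you use:

```python
import subprocess
import time

# Time a single-frame Cycles render on each GPU backend.
for device in ("CUDA", "OPTIX"):
    start = time.perf_counter()
    subprocess.run(
        ["blender", "-b", "scene.blend", "-f", "1",
         "--", "--cycles-device", device],
        check=True,
    )
    print(f"{device}: {time.perf_counter() - start:.1f} s")
```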
Part of the rationale for choosing a Ryzen-based GPU test bench was so that we could test PCI Express Gen 3 versus Gen 4, starting with our review of the RX 5700 XT. Unfortunately, we had some issues to work out, so that'll have to wait for another day.
For now, let's look at power draw. Using NVIDIA's PCAT tool, we captured this data while running SPECviewperf on our NVIDIA cards, and we can see that the RTX 3080 is sitting roughly 100 watts higher than the RTX 2080 Ti, impressively hitting a lofty maximum above 350 watts. That is a lot of juice, ladies and gentlemen. And if the trend continues, it definitely explains why NVIDIA felt they needed this new 12-pin connector.
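If you end up with per-sample power logs like these yourself, summarizing them is trivial. Here's a minimal sketch; the filename and the column names ("timestamp_ms", "total_w") are placeholders, not PCAT's actual export format, so adjust them to whatever your capture tool writes out:

```python
import csv

def summarize_power(path: str) -> None:
    # Read one wattage sample per row and report average and peak draw.
    with open(path, newline="") as f:
        watts = [float(row["total_w"]) for row in csv.DictReader(f)]
    avg = sum(watts) / len(watts)
    print(f"samples: {len(watts)}, average: {avg:.1f} W, peak: {max(watts):.1f} W")

summarize_power("rtx3080_specviewperf.csv")  # hypothetical log file
```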
This power draw translates into a significant increase in heat output, as you might expect. But it turns out the new Founders Edition cooler design is nothing short of incredible. In spite of drawing 100 watts more, our 3080 maintained lower temperatures in a closed chassis than our 2080 Ti and boosted to roughly two gigahertz under full load throughout our testing. I mean, hey, it's unconventional, you know, blowing air up here and sucking air up through there, but gosh darn it, it works. At least for the GPU itself.
One small detail NVIDIA glossed over in their marketing is that all of this heat has to go somewhere. And when we dug a little deeper, well, by Jove, we found it. As it turns out, dumping heat directly onto your CPU and RAM makes them run hotter. We even made this little graph where you can see system thermals climbing by 10 degrees in a gentle arc across the board, indicating that it's not just SPECviewperf putting a load on these components, or we'd be seeing a spiky graph similar to our CPU and GPU temperatures.
There is good news, though: when we ran the same test with a Founders Edition RTX 2080 Ti, we actually got even worse results, indicating that the more efficient system airflow NVIDIA boasted about for their cooler really does extend to the rest of the system as well. So good job, guys.
And there's more to the story here than performance and power alone. NVIDIA has become laser-focused on system latency as a way to market their products, even going as far as to seed press with the internal tools they use to measure it. And as it turns out, that's for good reason. More frames per second does result in smoother animations, but since most gamers would probably say 100 FPS already looks butter smooth, it's clear that the smoother-animation benefit has hit a wall of diminishing returns. What doesn't seem to have a limit yet, however, is the effect of higher frame rates on a gamer's actual performance.
Last year, we demonstrated a benefit in competitive play all the way up to 240 Hertz. So then if the reason for this improvement in gaming acumen isn't smoother animations, well what is it? It's the responsiveness of the system. The more often the image updates on screen, the less delay a gamer's gonna feel between the movement of their mouse and the corresponding crosshair, making it easier to track down opponents.
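The arithmetic behind that is simple enough to sketch: frame time is just the reciprocal of frame rate, and it bounds how stale the image on screen can be, with steeply diminishing returns as FPS climbs.

```python
# Frame time in milliseconds for a few common frame rates. The gap
# between successive frames is one component of total input-to-photon
# latency (the render queue, game logic, and display add their own).
for fps in (60, 100, 144, 240, 360):
    print(f"{fps:>3} FPS -> {1000 / fps:5.2f} ms between frames")

# 60 FPS -> 16.67 ms, 240 FPS -> 4.17 ms: going from 60 to 240 trims
# roughly 12.5 ms per frame, while 240 to 360 trims barely 1.4 ms.
```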
Well, NVIDIA Reflex, which is launching today as well, takes this research NVIDIA's done and goes a step further than even the ultra low latency mode that's been available in both NVIDIA's and AMD's drivers since last year: NVIDIA has worked with developers to reduce rendering latency directly in the game itself. In a CPU-bound scenario, it's not exactly a big deal; you're getting a very slight improvement in response time with a bit more consistency. We're not sure what that "+ Boost" option is all about, but in Fortnite anyway, it doesn't seem to do much, at least at high frame rates. Other games' implementations may be better. NVIDIA says that Reflex works best under GPU-bound loads anyway, though, and in that scenario, latency goes from 90 to 120 milliseconds down to around 45 to 80 with Reflex enabled. That's huge. One of the biggest problems with a low frame rate is low responsiveness, so the small sacrifice of one to two FPS ends up being a really good trade.
The best part is that this feature works all the way back to 900 series Maxwell GPUs. Great stuff.
Also revolutionary is RTX IO, which uses the same DirectStorage technology that Microsoft is integrating into the Xbox Series X and Series S to let the GPU stream and decompress data directly from an NVMe SSD using the CUDA cores of an RTX 2000 series card or newer. We can't test this feature because there are no games on the market that support it yet, but we would expect gamers to see similar benefits to what Sony showed off with their analogous technology, with improvements to everything from in-game texture resolution to even level design, eliminating things like the unnecessary elevator rides that hide asset loading.
But wait, there's more. What we also can't test are the reports that all Ampere-based cards support SR-IOV at a hardware level. For the uninitiated, SR-IOV allows a single GPU to be shared between multiple virtual machines. Never mind two gamers, one GPU; with performance like this, you could run a whole freaking LAN center off a single Threadripper 3990X and, like, four of these.
I mean, this would be a monumental shift from NVIDIA, considering that they've locked GeForce cards out of virtual machines since basically forever. Or at least it would be: NVIDIA clarified with us that the reports of their change of heart were sadly overstated.
Sorry gang, bad news on the GeForce SR-IOV front. NVIDIA got back to me this afternoon and let me know that they erred on answering my SR-IOV question. SR-IOV support is not coming to GeForce cards. The hardware supports it, but it won't be enabled in the GeForce software (cont)
— Ryan Smith (@RyanSmithAT) September 12, 2020
And as it turns out, just because the hardware can support the feature, it doesn't mean that it's enabled. And sure enough, when we fired up our 3080 in Linux, it didn't report any SR-IOV capabilities. NVIDIA does say, though, that the final decision on whether to enable it has yet to be made, so it's still possible they will. Maybe if enough people tweet at them, Jensen will get the hint.
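For the curious, a quick way to check for yourself on Linux is to look for the sriov_totalvfs attribute the kernel exposes under sysfs for SR-IOV-capable PCIe devices. A minimal sketch (it just walks every PCI device, so the GPU's bus address will vary per system):

```python
from pathlib import Path

# List PCIe devices that advertise SR-IOV by checking for the kernel's
# sriov_totalvfs attribute; devices without the capability lack the file.
for dev in Path("/sys/bus/pci/devices").iterdir():
    vfs = dev / "sriov_totalvfs"
    if vfs.exists():
        print(f"{dev.name}: up to {vfs.read_text().strip()} virtual functions")
```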
In summary then, just like last time, we have no way of testing some of the headline features and no time to test others, like NVIDIA Broadcast, which we'll be sure to do a follow-up review on. But unlike last time, the raw performance and pricing are a compelling enough story on their own, even if NVIDIA overplayed their hand a little. And unlike last time, the big feature, you know, the one that's right in the name, the hardware support for real-time ray tracing, has had a couple of years to mature, with a much longer list of supported games to go along with graphics cards that are now actually powerful enough to turn the feature on without tanking the frame rate. To be clear, it's still not the kind of thing that I would personally buy a new card over. But as far as added bonuses go, ray-traced Minecraft is pretty sweet.