This week’s Digital Foundry Direct comes straight from Las Vegas, where Oliver Mackenzie and Alex Battaglia share their impressions of the technologies revealed on the show floor – kicking off with an overview of the Nvidia RTX 50-series reveal keynote. It’s a claim in that presentation that I’m going to tackle in this week’s blog. Is the upcoming RTX 5070 really offering RTX 4090 ‘performance’? Based on established parlance, the answer is – of course – no, but the reasons behind the claim are straightforward enough and a sign of the era to follow: big leaps in frame-rate via hardware alone are diminishing and, like it or not, the future is biased more towards software, with machine learning taking a leading role.
To tackle Nvidia’s RTX 5070 claim head-on, the notion of performance parity is based entirely on the implementation of DLSS 4 multi frame generation, where the new $549 GPU is presumably tested against an RTX 4090 running at the same resolution and settings but only using single frame generation. By all quantifiable metrics, the claim holds no water. Even running on a more modern architecture and process node, the RTX 5070’s 6,144 CUDA cores are no match for the RTX 4090’s 16,384 – with a similar disparity in RT and tensor cores. The 384-bit memory interface of the RTX 4090 gives way to a 192-bit bus on the RTX 5070. And while the RTX 5070 has faster memory, it only has 12GB of it, up against the 24GB of the outgoing card.
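To put some rough numbers on that memory deficit, here’s a quick back-of-the-envelope bandwidth calculation. The per-pin data rates are my assumptions for illustration (roughly 28Gbps GDDR7 for the 5070, 21Gbps GDDR6X for the 4090), not figures from Nvidia’s presentation:

```python
# Rough peak memory bandwidth from bus width and per-pin data rate.
# Data rates are assumptions for illustration, not quoted specs.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bytes) * per-pin data rate."""
    return (bus_width_bits / 8) * data_rate_gbps

rtx_5070 = bandwidth_gbs(192, 28.0)   # ~672 GB/s
rtx_4090 = bandwidth_gbs(384, 21.0)   # ~1008 GB/s
print(f"RTX 5070: ~{rtx_5070:.0f} GB/s, RTX 4090: ~{rtx_4090:.0f} GB/s")
print(f"RTX 4090 advantage: ~{rtx_4090 / rtx_5070:.2f}x")
```

Even granting the 5070 faster memory per pin, the narrower bus leaves it with roughly two-thirds of the 4090’s peak bandwidth on these assumed figures.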
Put simply, without DLSS 4, the RTX 5070 will not match the RTX 4090 – unless we’re talking about relatively simple games running to the limit of your display’s refresh rate (v-synced or capped). On the face of it, Nvidia’s claims are ludicrous – and yet hands-on reports from CES like this one from PCGamesN are complimentary, detailing a Marvel Rivals demo in which a 5070 with frame generation delivered much higher frame-rates than a 4090, though apparently parity is more common. My guess is that Marvel Rivals is CPU-limited in this demo, which is where frame generation delivers its biggest multipliers to frame-rate, but even so, DLSS 4 is leaving a positive impression and the frame-rate claim is borne out, albeit with a big AI asterisk attached.
- 0:00:00 Introduction
- 0:01:47 Nvidia CES demos – RTX Mega Geometry
- 0:14:25 RTX Neural Materials
- 0:21:24 RTX Neural Faces and RTX Hair
- 0:31:37 ReStir Path Tracing + Mega Geometry
- 0:36:49 Black State with DLSS 4
- 0:42:47 Alan Wake 2 with DLSS 4
- 0:46:01 Reflex 2 in The Finals
- 0:53:22 AMD at CES: AI denoiser demo, Lenovo Legion Go handhelds
- 1:03:51 Razer at CES: Laptop Cooling Pad, new Razer Blade
- 1:11:30 Asus and Intel at CES
- 1:17:29 CES displays: Mini-LED, Micro-LED, OLED + monitor sins!
- 1:30:07 Supporter Q1: Will Switch 2 support DLSS 4?
- 1:32:22 Supporter Q2: Did you see the Switch 2 mockups at CES?
- 1:33:56 Supporter Q3: Could you test DLSS against an ultra high-res “ground truth” image?
- 1:37:52 Supporter Q4: Why would a developer use Nvidia-developed rendering techniques over their UE5 equivalents?
- 1:40:38 Supporter Q5: Will multi frame gen solve game stutters?
- 1:42:05 Supporter Q6: Will multi frame gen make VRR obsolete?
- 1:44:27 Supporter Q7: Is Sony regretting sticking with AMD for their console business?
- 1:49:49 Supporter Q8: What do you think of the FF7 Rebirth PC specs?
- 1:52:37 Supporter Q9: What’s the craziest thing you’ve seen on the show floor at CES?
I would bet good money that Nvidia made the 5070/4090 comparison knowing that the idea would come under heavy scrutiny. The firm clearly believes that all the testing to come will be a net positive for its claims, even when negative points of the 5070 experience are brought to the fore. And at a brutal SEO-level, the more 5070/4090 comparisons that come along, the heavier the algorithmic boost the original claim will receive.
So, having spent some time with DLSS 4, I can project how these comparisons will play out. First of all, in games that really do use more than 12GB of VRAM, the limitations of the RTX 5070 will result in either degraded visuals or reduced frame-rates. Indiana Jones and the Great Circle will be an interesting test case. In other games, there may well be parity with the RTX 4090 in terms of frame-rate, but input lag will be higher and there may be more frame generation artefacts. The higher the base frame-rate being fed into frame generation, the less noticeable the hit to input lag will be, while the resulting output frame-rate may be so fast as to make the frame-gen artefacts less noticeable – but the RTX 4090 should still look better and play better.
Marvel Rivals may be interesting to test as a best case scenario, but it’ll be ‘full RT’ path tracing games like Alan Wake 2 and Cyberpunk 2077 that really put the 5070/4090 head-to-head through its paces – and ultimately, this comparison is key. While it’s possible to enjoy path tracing even on 4070-class hardware without frame-gen, the full bells-and-whistles experience is perceived to be 4090-only and the promise is that this level of ‘performance’ is now available for a $549 product.
Increased latency is the key limitation of frame generation that DLSS 4 does not completely crack. The way the technology works is simple: a standard frame is rendered, then the next one is buffered before display (which is where the added latency comes from), and frame generation calculates the interpolated frame shown between the two standard rendered frames. Nvidia mandates the use of its Reflex latency-reduction technology to mitigate the difference. DLSS 4 multi frame generation uses the same system with one difference: you can choose to insert one, two or three interpolated frames, and those extra frames are generated very, very quickly. In my tests, adding two more generated frames added circa 6.4ms of latency compared to the existing frame-gen tech.
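Here’s a toy model of how that interpolation pipeline trades latency for output frame-rate, a sketch built on my own assumptions rather than Nvidia’s actual implementation. The per-generated-frame cost is a hypothetical figure chosen so the 2x-to-4x delta lines up with the circa 6.4ms I measured; everything else is illustrative:

```python
# Toy model of interpolation-based frame generation (a sketch, not
# Nvidia's implementation). Assumes a rendered frame is held back until
# the next one exists, so the in-between frames can be computed.

def framegen_output(base_fps: float, multiplier: int, gen_cost_ms: float = 3.2):
    """Return (output fps, added latency in ms) for a given multiplier.

    gen_cost_ms is a hypothetical per-generated-frame cost, chosen only
    so that the 2x-to-4x difference matches the ~6.4ms measured delta.
    """
    render_ms = 1000.0 / base_fps
    generated = multiplier - 1           # interpolated frames per rendered frame
    output_fps = base_fps * multiplier   # idealised: ignores frame-gen overhead
    added_latency_ms = render_ms + generated * gen_cost_ms
    return output_fps, added_latency_ms

for mult in (2, 3, 4):
    fps, lag = framegen_output(base_fps=60, multiplier=mult)
    print(f"{mult}x: ~{fps:.0f}fps output, ~{lag:.1f}ms added latency")
```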
At this point, I should pause and correct some misunderstandings I noticed in the YouTube comments on the DLSS 4 video we put out last week, embedded below. Latency metrics were confused with frame-time, so the idea that Cyberpunk 2077 had 50ms of lag was likened to a 20fps frame-rate. The truth is that every game has different levels of lag even when targeting the same performance level – as Tom Morgan demonstrated with this PS4-era testing back in the day. Tom’s latency metrics revealed a bunch of games running locked at 60fps delivering very different input lag: Call of Duty: Infinite Warfare had a 47.5ms latency advantage (!) over Doom 2016 and a 37.3ms advantage over esports favourite Overwatch. Latency matters, but those differentials are somewhat higher than the lag added by frame generation in many scenarios. Frame-rate differences are clearly more noticeable, increased input lag less so. When did anyone complain about Doom or Overwatch lag?
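To spell out why 50ms of lag doesn’t mean 20fps: frame-time measures how often a new image arrives, while button-to-pixel latency sums every stage between your input and the display. The per-stage costs below are illustrative assumptions, purely to show how a locked 60fps game can still carry roughly 50ms of lag:

```python
# Frame-time and end-to-end input latency are different measurements.
# The per-stage costs here are illustrative assumptions, not measured data.

frame_time_ms = 1000 / 60              # ~16.7ms per frame at a locked 60fps
pipeline_stages_ms = {
    "input sampling / OS":  5.0,
    "game simulation":     16.7,
    "render + GPU queue":  16.7,
    "scanout / display":   10.0,
}
total_latency_ms = sum(pipeline_stages_ms.values())
print(f"Frame-time: {frame_time_ms:.1f}ms (still a locked 60fps)")
print(f"Button-to-pixel latency: ~{total_latency_ms:.1f}ms")
```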
I want to talk about the word ‘performance’ next. When we first unveiled DLSS 3, we talked about performance multipliers brought about by frame generation. Nvidia continues to do so. However, feedback from the community made me think about whether ‘performance’ was the most appropriate term. For the entire 3D era, increased frame-rates have been delivered hand-in-hand with reductions in input lag. It’s a performance boost in all areas. Is frame generation really a performance boost if latency is higher?
It was a good point, and then I thought on this further: if 2x frame-gen isn’t delivering a 2x boost to frame-rate, then the base frame-rate must have dropped, meaning we’re actually getting fewer standard rendered frames than we would with frame generation switched off. When the RTX 4090 review came around, I took the feedback onboard and now refer to the increased frame-rate as exactly that – an fps boost – but not a performance boost. It’s a distinction I make because performance isn’t just frame-rate, even if, strictly speaking, the output is still a measurement of the GPU’s ‘performance’ in different terms. So-called ‘fake’ frames? They’re only an issue when the artefacts are obvious and noticeable. Like latency, artefacting remains an issue, but eventually I expect both to be solved.
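To put some hypothetical numbers on that rendered-frames point (the figures below are made up for illustration, not taken from testing):

```python
# If 2x frame generation doesn't double the output frame-rate, the base
# frame-rate has dropped - you're seeing fewer standard rendered frames.
# The figures are hypothetical, for illustration only.

def rendered_fps(output_fps: float, multiplier: int) -> float:
    """Standard rendered frames per second behind a frame-gen output."""
    return output_fps / multiplier

native_fps = 60          # hypothetical frame-rate with frame-gen off
fg_output_fps = 100      # hypothetical 2x frame-gen output frame-rate
print(f"Frame-gen off: {native_fps} rendered fps")
print(f"2x frame-gen at {fg_output_fps}fps output: "
      f"{rendered_fps(fg_output_fps, 2):.0f} rendered fps")
```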
And let’s be clear – Nvidia’s 5070/4090 claim is the firm laying the groundwork for a new paradigm that will be embraced by all vendors. That’s because it’s increasingly clear that the future of graphics technology will be skewed more towards AI generation and, yes, ray tracing than towards simply building a bigger GPU with more shaders. It has to be, simply because the cost of the latest semiconductor fabrication processes is only heading in one direction – and if you want more ‘performance’ in the traditional sense, you’ll need to pay more for it. Even then, per the Direct this week, you’ll find that many of the advances being made are purely driven by machine learning.
Finally, is this all just one big Nvidia scam – another refrain we often hear? Well, if the technologies discussed in this week’s Direct don’t sway you, I’ll leave you with this quote from Mark Cerny, working in partnership with AMD, of course: “The strategies that we had up through PS4 Pro or the like… I wouldn’t say it’s reached a limit, but mostly it’s about making the GPU bigger or memory faster. And so, as we look to the future, the improvements will be ray tracing. I think we’re going to see a lot happening there. And then everything we can get done with ML…”