At CES 2025 this week, Nvidia CEO Jensen Huang announced the company's latest generation of GPUs, the GeForce RTX 50 Series. Like its predecessors, the RTX 50 Series leans on AI-powered deep learning features to boost performance, and the latest version of DLSS, DLSS 4, goes further still: it uses AI to generate up to three frames for every traditionally rendered frame, inflating your frame rate. In fact, most of the series' new features are AI-powered.
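For the maths-inclined, here's a back-of-the-envelope sketch of what "up to three generated frames per rendered frame" means for the number on your fps counter. It assumes perfect frame pacing and ignores overhead, so the figures are illustrative rather than benchmarks:

```python
# Back-of-the-envelope maths for multi-frame generation.
# Assumes perfect pacing and zero overhead -- illustrative only.

def displayed_fps(rendered_fps: float, generated_per_rendered: int = 3) -> float:
    """Each rendered frame is followed by up to N AI-generated frames,
    so the displayed rate is (1 + N) times the rendered rate."""
    return rendered_fps * (1 + generated_per_rendered)

print(displayed_fps(30))  # 120.0 -- a 30 fps render displayed at 120 fps
print(displayed_fps(60))  # 240.0
```

The catch, of course, is that only one in four of those frames reflects actual game simulation, which is why the frame rate inflates without the game necessarily feeling four times smoother.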
Of course, that will surprise nobody. Nvidia has been at the forefront of AI, which is what’s given it a market capitalisation of over $3 trillion, surpassing Microsoft in June of 2024. As Huang said at the company’s keynote, “We used GeForce to enable AI, and now AI is revolutionising GeForce.”
However, the news hasn’t all been as predictable as this. Along with the GPUs, Huang announced a series of tools for developers called the Nvidia RTX Kit, a group of “neural rendering technologies” meant, broadly, to improve visuals in video games.
So, what’s in the kit? Firstly, there are RTX Neural Shaders. These shaders, according to a blog post on the company’s website, “bring small neural networks into programmable shaders, enabling a new era of graphics innovation. The applications of neural shading are vast, including radiance caching, texture compression, materials, radiance fields, and more.”
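Nvidia hasn't spelled out how these shaders work under the hood, but the rough idea behind "small neural networks into programmable shaders" can be sketched in a few lines. In this toy example (everything here is invented for illustration, and the weights are random stand-ins rather than anything trained), a tiny MLP is evaluated at a texture coordinate instead of sampling a stored texture, which is the gist of neural texture compression:

```python
import numpy as np

# Toy sketch of a "neural texel": evaluate a tiny trained network at
# (u, v) instead of sampling a stored texture. Weights are random
# stand-ins here; in practice they'd be trained to reproduce a texture.

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((2, 16)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((16, 3)), rng.standard_normal(3)

def neural_texel(u: float, v: float) -> np.ndarray:
    hidden = np.maximum(np.array([u, v]) @ W1 + b1, 0.0)  # ReLU layer
    return 1 / (1 + np.exp(-(hidden @ W2 + b2)))          # RGB in [0, 1]

print(neural_texel(0.25, 0.75))
```

The appeal is that a few kilobytes of network weights can stand in for megabytes of texture data, at the cost of running the network per sample.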
There’s also RTX Mega Geometry, which makes it “possible to ray trace up to 100x more triangles than today’s standard”, and Reflex 2’s new Frame Warp feature, which reduces latency “by updating the rendered game frame based on the latest mouse input right before it is sent to the display”.
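Again, Nvidia hasn't published implementation details, but Frame Warp as described sounds a lot like the "late reprojection" trick VR headsets have used for years: just before scan-out, nudge the already-rendered frame to match where the camera is pointing now. A toy sketch of that general idea, with all the names and numbers invented for illustration:

```python
import numpy as np

# Toy sketch of late reprojection: shift the finished frame by however
# far the camera has turned since rendering began, sampled right before
# the frame goes to the display. Edges wrap here purely for simplicity;
# a real implementation would fill them in properly.

def warp_frame(frame: np.ndarray, yaw_delta_px: int) -> np.ndarray:
    """Shift the frame horizontally by the camera's movement in pixels."""
    return np.roll(frame, -yaw_delta_px, axis=1)

rendered = np.zeros((1080, 1920, 3), dtype=np.uint8)
latest_input_delta = 12  # pixels the camera turned since render began
to_display = warp_frame(rendered, latest_input_delta)
```

The payoff is that your mouse input shows up on screen without waiting for a whole new frame to render.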
Nvidia is also introducing ACE autonomous game characters powered by generative AI, which “enables living, dynamic game worlds with companions that comprehend and support player goals, and enemies that adapt dynamically to player tactics.” We’ve already seen this at play with PUBG Ally, but it will also be used in the upcoming InZOI.
I don’t have high hopes for this.
I’m not going to lie and say I have more than a surface-level understanding of what most of this means in practice, because I’m not a developer. It means more detail, more realism, more light, and maybe smarter NPCs. Is this good? Is it helpful? That’s for actual developers to decide. Not my circus, not my monkeys. What I do know, however, is what faces look like, which is where the next part comes in.
Can’t Read My Poker Face
Perhaps the weirdest of the inclusions is RTX Neural Faces, which “takes a simple rasterized face plus 3D pose data as input and uses a real-time generative AI model to infer a more natural face”. The phrase “more natural” is… well, misleading. You can see it for yourself in the video embedded below.
It’s particularly strange that the section of the blog post detailing the Neural Face technology is titled “Crossing the Uncanny Valley of Rendering Digital Humans”, because it doesn’t cross it. It still looks wild. In the video, we see a character – the “Standard Face” – who looks like any character model you’d see in a modern triple-A video game. Her mouth moves kind of strangely as she talks.
Then a screen wipe transitions to the RTX Neural Face, which shows a completely different-looking character. She looks like she’s put on a full face of makeup, contour included. Her skin is more tanned, her cheekbones more defined, her lips plusher, her chin rounder. Her features seem to shift slightly around her face as she speaks and tilts her head. Basically, the filter yassified her. It’s like the “hire fans” Aloy meme, come to life.
I don’t know what to tell you, man, it looks weird as hell. It took a perfectly good character with distinctive features and gave her a different face and, arguably, a different race. I’m clearly not the only one who thinks so – just look at the replies under Nvidia’s tweet about the technology. Character design is an art, and the Neural Face tech is an Instagram filter that wipes that out and gives it that familiar AI-generated shifting effect. Boo.
All of this is in the pursuit of better performance, so we can squeeze better graphics out of bigger games and make everything look and feel more realistic. It’s hard to imagine someone who would want to look at characters like this talking at them for probably 50 hours, and yet the tech exists. God, I hope nobody uses this, it’s so ugly.