r/hardware Jan 20 '25

Video Review [der8auer] - RTX 5090 - Not Even Here and People are Already Disappointed

https://www.youtube.com/watch?v=EAceREYg-Qc
158 Upvotes

325 comments

54

u/BinaryJay Jan 20 '25 edited Jan 20 '25

Here's the thing. I don't care how well Time Spy runs. I want to see the difference in performance from the 4090 using the new transformer-model DLSS SR and RR. Nvidia essentially told DF that the new transformer model uses 4x the compute budget and that it was co-developed with Blackwell to run efficiently on Blackwell. They didn't come right out and say it's going to run badly on older RTX hardware, but it was heavily implied there would be a cost to it that Blackwell is uniquely equipped for.

If the new DLSS features make a huge difference in quality but don't run as well on older hardware, I think it would be a very valid and relevant comparison. Also, if I can turn on DLSS FG 3X or 4X without even noticing it compared to DLSS 3 FG, that's a big win for me, as most of my gaming is single-player these days and I have been generally pretty satisfied with FG so far.

So yeah, performance numbers in a benchmark are fine, and comparing some older games is fine, but the card is clearly much more powerful in other, less traditional ways that are going to affect how happy someone is with what is appearing on screen.

Anyways, it's not like anyone with a 4090 is going to be unhappy with what it's capable of over the next two years either but I think there is more nuance to this than just bar graphs.

46

u/kontis Jan 20 '25

This is exactly what Jensen was implying in interviews years ago: convince customers to buy new hardware because of new software (DLSS) instead of actual raw performance jump, because of the deaths of Dennard scaling and Moore's law.

11

u/Plank_With_A_Nail_In Jan 21 '25

But it is a raw performance jump just in a different area of compute.

4

u/latending Jan 21 '25

Frame gen isn't performance, it's frame smoothing with a latency penalty.

12

u/Strazdas1 Jan 21 '25

Tensor cores are performance. Frame gen is just utilizing tensor core performance; it's one of a multitude of things that use tensor cores.

8

u/latending Jan 21 '25

Frame gen originally didn't use tensor cores but the optical flow accelerator. Either way, it's objectively not a performance increase.

Take an extreme example: two frames, 5 seconds apart. You generate 1,000 fake frames between them. How's your performance looking?
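The extreme example above can be put into numbers. A quick sketch (illustrative only, not tied to any real implementation):

```python
# The extreme example: two real frames 5 s apart, with 1,000
# interpolated frames inserted between them.
real_frame_interval = 5.0   # seconds between real frames
generated = 1000            # fake frames generated in between

displayed_fps = (1 + generated) / real_frame_interval
real_fps = 1 / real_frame_interval

print(f"displayed: {displayed_fps:.0f} fps")  # 200 fps on screen
print(f"real:      {real_fps:.1f} fps")       # 0.2 fps of actual simulation
# Input is only sampled at the real frame rate, so the game still
# responds at 0.2 fps no matter what the FPS counter shows.
```

The gap between the two numbers is the whole argument: the counter says 200, the responsiveness says 0.2.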

1

u/Zarmazarma Jan 21 '25

Okay, let's walk the thread of replies back a bit, since I think the original point has been lost.

But it is a raw performance jump just in a different area of compute.

The 5090 does have a big, objective performance improvement over the 4090. It's just not in fp32. It's in int4/int8/bfloat16/other tensor core operations.

This statement had nothing to do with frame gen.

3

u/noiserr Jan 21 '25

It's just not in fp32. It's in int4/int8/bfloat16/other tensor core operations.

But that's just lowering the precision. You can do that on current cards and get better performance, since you decrease the required memory bandwidth.

I mean it's a nice feature for quantized LLMs as it does give you a bit more efficiency, but it comes at the cost of precision and it's not all that much faster despite the inflated TOPS number.
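The bandwidth point is easy to see in numbers. A rough sketch (the 7B parameter count is a hypothetical example, not from the thread):

```python
# Why lower precision helps quantized LLMs: the same weight tensor
# needs proportionally less memory, hence less bandwidth per token.
n_params = 7_000_000_000  # e.g. a 7B-parameter model (illustrative)

bytes_per_param = {"fp32": 4, "fp16/bf16": 2, "int8": 1, "int4": 0.5}
for fmt, b in bytes_per_param.items():
    print(f"{fmt:10s}: {n_params * b / 2**30:.1f} GiB")
# fp32 needs ~26 GiB; int4 fits the same weights in ~3.3 GiB,
# at the cost of precision -- the trade-off described above.
```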

-2

u/PointmanW Jan 21 '25 edited Jan 21 '25

All of that doesn't matter when, as far as my eyes can see, it's the same.

I tried running a game at 120 fps and compared it against 60→120 fps with frame gen; both look the same to me, so practically it's a performance gain. The input lag is so small that I can't feel it either.

Your example is absurd and has nothing to do with the reality of the tech. But if they could generate 1,000 frames in between at little cost to the base frame rate, and you had a monitor with a high enough refresh rate to display all those frames, then that too would practically be a performance boost.

-2

u/VenditatioDelendaEst Jan 21 '25

If the fake frames are warped and in-painted following my inputs, it looks like 200 FPS.

And that's a lot easier to do convincingly over a three frame gap than 1000.

-2

u/Plank_With_A_Nail_In Jan 21 '25

It not being the performance you want doesn't stop it being performance others want. 5090's will be sold to people who won't even use them to play games on.

-2

u/Plank_With_A_Nail_In Jan 21 '25

This hardware can and will be used for things that aren't gaming. The 5090 is going to be a monster home AI card.

It's not the card's fault you don't understand what people use PCs for other than gaming.

0

u/zxyzyxz Jan 21 '25

Not sure why anyone would get the 5090 over the Digits computer that Nvidia came out with, with 128 GB of unified memory, if AI is truly the user's primary use case.

11

u/PC-mania Jan 20 '25

I am also interested to see the difference in performance when the neural rendering features are used. The performance difference between 40-series vs 50-series with the upcoming Alan Wake 2 RTX Mega Geometry update and Half Life 2 RTX with Neural Radiance Cache should be very telling.

2

u/mac404 Jan 20 '25

Yeah, this will be interesting as well.

We have no idea when HL2 RTX will release unfortunately, but Nvidia did announce that NRC is getting added into Portal RTX soon at least.

8

u/[deleted] Jan 20 '25

[deleted]

17

u/BinaryJay Jan 20 '25

Yes, they said it'll be available for all RTX cards. What we don't know is how it will affect performance as you go back in time on the tensor hardware.

1

u/[deleted] Jan 21 '25

I mean... the top-tier Ada cards have more tensor core performance than the mid-to-low tier Blackwell cards anyway, right?

10

u/mac404 Jan 20 '25

Similarly, I am personally kind of baffled by how many people seem to care how much the raster uplift is for a 5090. That metric feels increasingly niche compared to Hybrid RT and especially "Full RT" performance (along with the practical impact of the other software features) if you're seriously considering spending that much money on a graphics card this year.

Related to the new transformer model, it is really hard to get a read on how it will play out in practice so far. It could be that the frame-time cost will be reasonable for most cards when upscaling to 1080p, for some when upscaling to 1440p, and for very few (outside of Blackwell) when upscaling to 4K. Or it could be that they don't want to announce the free image-quality boost for old cards too loudly when Blackwell isn't even out yet. Either way, I agree that the quality/performance tradeoff between generations will be very relevant if the quality is significantly better (which it seems to be).

1

u/[deleted] Jan 21 '25 edited 2d ago

[deleted]

1

u/ResponsibleJudge3172 Jan 21 '25

Not much, because RT and tensor hardware take ~10% of the die space.

-10

u/Jeep-Eep Jan 21 '25

I don't care about transformer models; I care about actual horsepower rather than filler.

5

u/DeCiWolf Jan 21 '25

Everything has been filler for a long time now, brother. Get with the times. GPUs use so many tricks that not a single frame is "real". Don't fall for the fake-frames rhetoric.

-14

u/Cynical_Cyanide Jan 20 '25

Likes fake frames

Oookay buddy. Regardless of whether it's single player or multiplayer, you're either playing a game where reaction times matter (90%+) like any shooter, MMORPG, hack n slash, beat em up, racing, RTS (micro), platformer etc etc ... or you're watching a psuedo cinematic movie. Truly slow games like puzzle games very rarely have high GPU perf requirements in the first place, but if they do - You're barely going to notice the additional fake frames anyway, there's very rarely enough fast movement to make a big difference. DLSS can be great, but extremely rarely so for FG, it's mostly just placebo.

Look, cards have had new features since the dawn of the industry. Whether it's PhysX, or RT or whatever. That's obviously a good thing, and more choice is more betterer, but comparing as close to apples to apples is not just traditional, it's the only fair and rigerous way to do it.

If Nvidia wants to trade actual performance for features, that doesn't mean those features bring more performance, it means that it lets you do different things with the performance you do have. And, it's not like previous generations of cards haven't brought out awesome new features and advancements WHILE ALSO pumping up perf and perf per watt. This time Nvidia releases pretty much just new features, and we're meant to defend that as equal progress? Nah. This ain't it, chief.

6

u/teutorix_aleria Jan 20 '25

platformer

Bro is sweating at Shovel Knight, tuning his PC to get 1 ms less input delay. This is the most capital-G Gamer take I've read on this sub in a while.

I genuinely feel like the majority of people who say this stuff don't own a GPU capable of frame gen and haven't tried it. I have an AMD GPU, and even AFMF2 works well at the driver level. DLSS FG is in another league; it's barely noticeable that it's on at all.

1

u/Strazdas1 Jan 21 '25

He also says input lag matters in RTS. Yeah, maybe for the top 100 SC players in the world. Not for anyone else.

4

u/ThatOnePerson Jan 21 '25

He also says input lag matters in RTS.

Knowing exactly where your mouse is is nice, but you can render a cursor separately from the 3D/generated frames and all that, and you'd probably never notice.

I know StarCraft 2 runs at something like 25 ticks/s, so inputs are already only processed every ~40 ms.
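The tick-rate arithmetic is easy to check. A back-of-the-envelope sketch, using the ~25 ticks/s figure from the comment (the game's real tick rate may differ):

```python
# Time between input samples at a given simulation tick rate.
tick_rate = 25                       # ticks per second (per the comment)
tick_interval_ms = 1000 / tick_rate  # milliseconds per tick

print(f"{tick_interval_ms:.0f} ms between input samples")  # 40 ms
# An input can land anywhere inside a tick interval, so the engine
# itself already adds up to ~40 ms of delay before rendering is
# even involved -- dwarfing a millisecond or two from frame gen.
```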

1

u/VenditatioDelendaEst Jan 21 '25

God, I hate games that don't use hardware cursors.

2

u/teutorix_aleria Jan 21 '25

Nah, we need to worry about 5 ms of latency in Stellaris when it's taking 80 ms+ to generate each frame late game. Really need those lightning reaction times.

1

u/Strazdas1 Jan 22 '25

To be fair, Stellaris is not an RTS, it's a TBS, and he didn't mention TBS.

1

u/teutorix_aleria Jan 22 '25

Stellaris is not turn based

1

u/Strazdas1 Jan 23 '25

It is. The turns just advance continuously. In the game logic, things happen every turn/in-game day.

1

u/teutorix_aleria Jan 23 '25

Every single game has a tick rate, by that logic counterstrike is a turn based game.

1

u/Strazdas1 Jan 23 '25

Counter-Strike runs continuous action and updates every tick. Stellaris does: act, pause, act, pause. Just like a TBS, just without having to manually issue turns.

-1

u/Cynical_Cyanide Jan 21 '25

Stellaris is CPU limited, you fool, lmao. It wouldn't matter either way.

0

u/teutorix_aleria Jan 21 '25

It's called a joke lmao

1

u/VenditatioDelendaEst Jan 21 '25

Probably not even for them. Surely those guys have enough muscle memory to supply their inputs feed forward.

1

u/Strazdas1 Jan 22 '25

Someone else mentioned StarCraft 2 runs at 25 ticks/second, so the input already has up to 40 ms of delay from the game's engine. So yeah, I guess that 1 ms extra won't matter even for the pros.

-1

u/Cynical_Cyanide Jan 21 '25

Lmao imagine needing frame gen to play shovel knight!

What would you rather have, another 500fps on top of the 300 you already have, or 1ms less delay? Sure the latter is a minor improvement, but at least it is one.
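The frame-time side of that trade can be sketched with the comment's own numbers (illustrative only):

```python
# How much frame time does piling more FPS on an already-high base save?
def frame_time_ms(fps):
    """Milliseconds spent per frame at a given frame rate."""
    return 1000 / fps

# The comment's scenario: 300 fps base, another 500 fps on top.
saved = frame_time_ms(300) - frame_time_ms(800)
print(f"{saved:.2f} ms shorter frame time")  # 2.08 ms
# Going from 300 to 800 fps shaves only ~2 ms per frame -- a real
# but tiny improvement, which is the diminishing-returns point here.
```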

1

u/Strazdas1 Jan 21 '25

I would say reaction time matters to the point where you factor input lag into it only in twitch shooters, which most people never play to begin with.

0

u/Cynical_Cyanide Jan 21 '25

Disagree. Twitch shooters might be the worst-case scenario, but in any shooting game you want to be able to react, and the faster the better.

1

u/Strazdas1 Jan 22 '25

Faster reaction is better, objectively. But how much more important is that than everything else?

1

u/Cynical_Cyanide Jan 22 '25

You phrase that like I'm suggesting a tradeoff for something better. I'm not. What would be ideal is simply more raw performance so that we can decide for ourselves what we want to do with it, but in lieu of that, faster real frames at a given quality level is better than loads of fake frames with reduced quality level for ... placebo most of the time.

1

u/Strazdas1 Jan 23 '25

You are suggesting a tradeoff: to get faster reactions here, you are losing motion clarity.

More raw performance is not going to happen. You may as well wish cows would start flying. We hit a physics wall there, and progress is going to be very slow now.

0

u/[deleted] Jan 21 '25

[deleted]

1

u/Strazdas1 Jan 22 '25

Most people do not play twitch shooters. While it's true that the top played Steam game is a twitch shooter, it has its audience, and people playing other games rarely mix in. Someone playing Stardew Valley (13th on the Steam charts) isn't going to go play Counter-Strike or PUBG. Note here that I specifically say twitch shooters, not all shooters. Most shooters are not twitch shooters and don't require immediate input reactions.

Literally 6 of the top 10 games on steam are shooters.

You meant to say 3.

1

u/[deleted] Jan 22 '25

[deleted]

1

u/Strazdas1 Jan 23 '25

Once again, I'm talking about twitch shooters specifically; only the first three on your list would qualify. GTA V/FiveM isn't even a shooter at all to begin with.

1

u/[deleted] Jan 23 '25

[deleted]

1

u/Strazdas1 Jan 23 '25

The primary focus of GTA V is driving to start with...

And I was pointing out that input lag matters a lot less in shooters that aren't of the twitch variety. Heck, GTA V uses P2P servers, so most likely the lag from networking would be so much worse that input lag won't matter at all.

1

u/[deleted] Jan 23 '25

[deleted]

-1

u/Plank_With_A_Nail_In Jan 21 '25

They aren't trading performance, though; the card is faster than any other consumer graphics card, plus it has these features on top.

No one is meant to defend anything; we are just trying to have a conversation, so please stop trying to shut it down.

Don't like it, don't buy it; that's the only option you have. Though something tells me you like it, you just can't afford it.

1

u/Cynical_Cyanide Jan 21 '25

Lmao ... It's not about whether the card is faster, it's about whether the card would be faster if they refocused their efforts. This entire post is about how the raw perf increase is disappointing, not that it isn't faster at all.