AMD’s RDNA4 GPU architecture (chipsandcheese.com)
131 points by rbanffy 16 hours ago | 21 comments
steelbrain 6 hours ago [-]
Went down the MI300A rabbit hole that was just casually mentioned in this post (https://chipsandcheese.com/p/inside-the-amd-radeon-instinct-...). What a fun chip! (and blog!)
erulabs 7 hours ago [-]
Lower power consumption on a desktop monitor is an interesting technical challenge but I do wonder “Cui bono?” - obviously I’d want my gaming machine to consume less power but I’m not sure I’ve ever considered mouse-idle monitor-on power consumption when considering eg AMD versus Nvidia for my gaming machine.

Don’t get me wrong, this is very interesting, AMD does great engineering, and I’m loath to throw shade on an engineering-focused company, but… is this going to convert to even a single additional purchase for AMD?

I’m a relatively (to myself) large AMD shareholder (colloquially: fanboy), and damn, I’d love to see more focus on hardware matmul acceleration rather than idle monitor power draw.

Luker88 4 hours ago [-]
Some people like leaving the PC on for light tasks even at night, and wasting too much power doing nothing is... well, wasteful. Imagine a home server that has the GPU for AI or multimedia stuff.

The same architecture will also be used in mobile, so depending on where this comes from (architecturally) it could mean more power savings there, too.

Besides, lower power also means lower cooling/noise on idle, and shorter cooldown times after a burst of work.

And since AMD is slowly moving toward the (perpetually just-around-the-corner) unified architecture, any gains there will also mean less idle power draw in other environments, like servers.

Nothing groundbreaking, sure, but I won't say no to all of that.

delusional 2 hours ago [-]
> Imagine a home server that has the GPU for AI or multimedia stuff.

I imagine you wouldn't attach a display to your home server. Would the display engine draw any power in that case?

ThatPlayer 1 hour ago [-]
Is a PiKVM considered a display? I've got one attached to my home server. Alongside the dedicated graphics card, it probably uses more power than usual server motherboard KVMs, but it's still cheaper and more accessible for home servers.
jayd16 7 hours ago [-]
Rumors have been floating around about some kind of PS6 portable or next gen steam deck with RDNA4 where power consumption matters.

There's also simply laptop longevity that would be nice.

kokada 3 hours ago [-]
I am not saying that this was the reason I bought it, but I recently purchased a Radeon 9070 and I was surprised by how little power this card uses at idle. I was seeing figures between 4 W and 10 W on Windows (sadly slightly more on Linux).

In general this generation of Radeon GPUs seems highly efficient. Radeon 9070 is a beast of a GPU.

dontlaugh 36 minutes ago [-]
They benchmark so well that I’m considering replacing my 3070 with a 9070xt.
makeitdouble 6 hours ago [-]
To hazard a guess, would that optimization also help push the envelope when one application needs all the power it can get while another monitor is just sitting idle?

Another angle I'm wondering about is longevity of the card. Not sure AMD would particularly care in the first place, but as a user, if the card didn't have to grind as much on the idle parts and thus lasted a year or two longer, that would be pretty valuable.

formerly_proven 4 hours ago [-]
Recent Nvidia generations also about doubled their idle power consumption. Those increases are probably actual baseline increases (i.e. they reduce the compute power budget), while prior RDNA generations would idle at around 80-100 W doing video playback or driving more than one monitor, which is more indicative of problematic power management.
sjnonweb 3 hours ago [-]
Power-efficient chips will result in more overall performance for the same amount of total power drawn. It's all about performance/watt.
sylware 4 hours ago [-]
Another area of AMD GPU R&D is the _userland_ _hardware_ [ring] buffers for near-direct userland programming of the hardware.

They started to experiment with that in Mesa and Linux ("user queues", as in "user hardware queues").

I don't know how they will work around the scarce VM IDs, but here we are talking near-zero driver. Obviously, they will have to simplify/clean up a lot of the 3D pipeline programming and be very sure of its robustness, basically to have it ready for "default" rendering/usage right away.

Userland will get from the kernel something along those lines: command/event hardware ring buffers, data DMA buffers, a memory page with the read/write pointers & doorbells for those ring buffers, and an event file descriptor for an event ring buffer. Basically, what the kernel currently has.

I wonder if it will provide some significant simplification over the current way, which is giving indirect command buffers to the kernel and dealing with 'sync objects'/barriers.
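
For illustration, a minimal sketch of what submission on such a user hardware queue could look like, assuming the kernel has already mapped the command ring, its read/write pointers, and a doorbell page into the process. The struct layout and names are made up, not the actual amdgpu user-queue uAPI:

    /* Hypothetical user-mode submission path. Everything below is mapped into
     * the process once at queue setup; the fast path never enters the kernel. */
    #include <stdint.h>

    struct user_queue {
        uint32_t          *ring;      /* mapped command ring buffer, in dwords */
        uint32_t           ring_size; /* number of dwords, power of two */
        volatile uint32_t *wptr;      /* write pointer, advanced by the CPU */
        volatile uint32_t *rptr;      /* read pointer, advanced by the GPU */
        volatile uint32_t *doorbell;  /* doorbell slot in the mapped doorbell page */
    };

    /* Copy a command packet into the ring and ring the doorbell: no ioctl,
     * no context switch on the submission path. */
    static int submit(struct user_queue *q, const uint32_t *pkt, uint32_t ndw)
    {
        uint32_t wptr  = *q->wptr;
        uint32_t space = q->ring_size - (wptr - *q->rptr);

        if (ndw > space)
            return -1;  /* ring full: caller has to wait for the GPU to catch up */

        for (uint32_t i = 0; i < ndw; i++)
            q->ring[(wptr + i) & (q->ring_size - 1)] = pkt[i];

        __atomic_store_n(q->wptr, wptr + ndw, __ATOMIC_RELEASE);
        *q->doorbell = wptr + ndw;  /* notify the GPU that new work is queued */
        return 0;
    }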

averne_ 2 hours ago [-]
The NVidia driver also has userland submission (in fact it does not support kernel-mode submission at all). I don't think it leads to a significant simplification of the userland code; basically a driver has to keep track of the same things it would've submitted to an ioctl. If anything, there are some subtleties that require careful consideration.

The major upside is removing the context switch on a submission. The idea is that an application only talks to the kernel for queue setup/teardown, everything else happens in userland.

formerly_proven 4 hours ago [-]
In terms of heat output the difference between an idling gaming PC from 10 years ago (~30-40 W) and one today (100+ W) is very noticeable in a room. Besides, even gaming PCs are likely idle or nearly idle a significant amount of time, and that's just power wasted. There are also commercial users of desktop GPUs, and there they are idle an even bigger percentage of the time.
DiabloD3 55 minutes ago [-]
Idling "gaming PCs" idle about 30-40w.

Your monitor configuration has always controlled idle power of a GPU (for about the past 15 years), and you need to be aware of what is "too much" for your GPU.

On RDNA4 and Series 50, anything more than the equivalent of a single 4K 120 Hz display kicks it out of super-idle, and it sits at around ~75 W.

daneel_w 25 minutes ago [-]
> Idling "gaming PCs" idle about 30-40w.

Hm, do they? I don't think any stationary PC I've had over the past 15 years has idled that low. They have all had modest(ish) specs, and the setups were tuned for balanced power consumption rather than performance. My current one idles at 50-55 W. There's a Ryzen 5 5600G and an Nvidia GTX 1650 in there. The rest of the components are unassuming in terms of wattage: a single NVMe SSD, a single 120mm fan running at half RPM, and 16 GiB of RAM (of course without RGB LED nonsense).

DiabloD3 21 minutes ago [-]
Series 16 cards have weird idle problems. Mine also exhibited that. They're literally Series 20s with no RTX cores at all, and their identical 20 counterparts didn't seem to have the same issue.

So, I assume it's Nvidia incompetence. It's my first and last Nvidia card in years; AMD treats users better.

adgjlsfhk1 7 hours ago [-]
The architecture is shared between desktop and mobile. This sounds 100% like something they did to give some dual-display laptop or handheld 3 hours of extra battery life by fixing something dumb.
syntaxing 11 hours ago [-]
More curious, does RDNA4 have native FP8 support?
krasin 10 hours ago [-]
I refer to the RDNA4 instruction set manual ([1]), page 90, Table 41. WMMA Instructions.

They support FP8/BF8 with F32 accumulate and also IU4 with I32 accumulate. The max matrix size is 16x16. For comparison, NVIDIA Blackwell GB200 supports matrices up to 256x32 for FP8 and 256x96 for NVFP4.

This matters for overall throughput: feeding a bigger matrix unit is actually cheaper in terms of memory bandwidth, since the number of FLOPs grows as O(n^2) when increasing the size of a systolic array, while the number of inputs/outputs grows only as O(n).

1. https://www.amd.com/content/dam/amd/en/documents/radeon-tech...

2. https://semianalysis.com/2025/06/23/nvidia-tensor-core-evolu...
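
To put rough numbers on that scaling argument, here is a throwaway calculation for square n x n x n tiles only; it is purely illustrative, since the real WMMA/tensor-core tiles are rectangular:

    /* FLOPs per operand element for a square n x n x n matmul tile:
     * compute grows as ~2*n^3 while operand traffic grows as ~3*n^2,
     * so arithmetic intensity scales linearly with tile size. */
    #include <stdio.h>

    static double flops_per_element(double n)
    {
        double flops    = 2.0 * n * n * n;  /* one multiply-accumulate = 2 FLOPs */
        double elements = 3.0 * n * n;      /* A and B read, C written, n*n each */
        return flops / elements;            /* = 2n/3 */
    }

    int main(void)
    {
        printf("n = 16:  %.1f FLOPs per element\n", flops_per_element(16));
        printf("n = 256: %.1f FLOPs per element\n", flops_per_element(256));
        return 0;
    }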

atq2119 9 minutes ago [-]
It's misleading to compare a desktop GPU against a data center GPU on these metrics. Blackwell data center tensor cores are different from Blackwell consumer tensor cores, and the same goes for the AMD side.

Also, the size of the native / atomic matrix fragment isn't relevant for memory bandwidth, because you can always build larger matrices out of multiple fragments in the register file. A single matrix fragment is read from memory once and used in multiple matmul instructions, which has the same effect on memory bandwidth as using a single larger matmul instruction.
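
To illustrate that reuse in plain C (sizes and names are just for illustration of the idea, not how a real shader is written): a 64x64 product built from 16x16 fragments, where each A fragment is pulled from memory once and then fed to several matmul instructions.

    /* Illustrative only: a 64x64 matmul built from 16x16 fragments (the native
     * WMMA tile size mentioned upthread). Each A fragment is copied out of
     * memory once and then reused against every B fragment in the same k-slice,
     * so a small native tile does not by itself multiply memory traffic.
     * Assumes the output matrix C starts zeroed. */
    #include <string.h>

    #define T 16   /* native matmul fragment size */
    #define N 64   /* full matrix size, a multiple of T */

    /* Stand-in for one hardware matmul instruction: acc += a * b. */
    static void mma_16x16(float acc[T][T], float a[T][T], float b[T][T])
    {
        for (int i = 0; i < T; i++)
            for (int k = 0; k < T; k++)
                for (int j = 0; j < T; j++)
                    acc[i][j] += a[i][k] * b[k][j];
    }

    void matmul_tiled(float C[N][N], float A[N][N], float B[N][N])
    {
        for (int ti = 0; ti < N; ti += T)
            for (int tk = 0; tk < N; tk += T) {
                float a_frag[T][T];  /* the "registers": loaded from memory once... */
                for (int i = 0; i < T; i++)
                    memcpy(a_frag[i], &A[ti + i][tk], sizeof a_frag[i]);

                for (int tj = 0; tj < N; tj += T) {  /* ...then used N/T times */
                    float b_frag[T][T], acc[T][T];
                    for (int k = 0; k < T; k++)
                        memcpy(b_frag[k], &B[tk + k][tj], sizeof b_frag[k]);
                    for (int i = 0; i < T; i++)
                        memcpy(acc[i], &C[ti + i][tj], sizeof acc[i]);

                    mma_16x16(acc, a_frag, b_frag);

                    for (int i = 0; i < T; i++)
                        memcpy(&C[ti + i][tj], acc[i], sizeof acc[i]);
                }
            }
    }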
