How Much Power Do GPUs Actually Consume?

Nailing down how much power individual PC components consume is surprisingly difficult. Measuring how much power a whole system draws at the wall is easy: a basic Kill-A-Watt meter doesn't cost much, and while it doesn't offer features like power consumption tracking over time, it'll still provide good bulk information if you record its readings manually for several minutes during a consistent workload.
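That manual approach amounts to averaging a handful of readings taken during a steady load. As a minimal sketch (the function name and sample values below are hypothetical, not from the article):

```python
def average_draw_watts(readings: list[float]) -> float:
    """Average of manual wall-meter readings taken during a steady workload."""
    return sum(readings) / len(readings)

# Hypothetical readings jotted down once a minute from a Kill-A-Watt:
samples = [312.0, 318.0, 315.0, 311.0, 319.0]
print(round(average_draw_watts(samples), 1))  # 315.0
```

The average is only meaningful if the workload stays consistent while you sample, which is why the manual method works for a benchmark loop but not for bursty everyday use.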

Tracking the exact power consumption of a given component, be it CPU, GPU, or something like RAM, requires a soldering iron and some skill in using it. Over at Tom’s Hardware, they’ve built themselves an extensive GPU testing rig and put a huge range of cards through their paces to answer one question: How much power do GPUs actually draw?

Like CPUs, GPUs carry a TDP rating. As with CPUs, that rating is best thought of as a measure of power dissipation (that is, how much heat the cooler needs to be able to deal with) rather than an exact power consumption figure. Neither Nvidia nor AMD guarantees that a 150W GPU will draw exactly 150W, for example.

There are software tools for reporting GPU power consumption, but apps like GPU-Z still depend on the GPU telling the application how much power it's using. Nvidia GPUs report total board power fairly accurately, but AMD GPUs report only the power drawn by the GPU core itself, not the rest of the board. This doesn't affect system-level measurements taken at the wall, but it does hide VRM and memory power consumption from an application like GPU-Z. THG's new testing rig gets around this problem rather nicely.
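To make the reporting gap concrete, here's a toy model (all numbers and the efficiency figure are hypothetical, not from the article) of how a core-only reading understates total board power once memory draw and VRM losses are included:

```python
def board_power(core_w: float, mem_w: float, vrm_efficiency: float) -> float:
    """Estimate total board power from core and memory draw.

    VRM losses are modeled as a single efficiency factor applied to the
    power delivered to the core and memory; real boards are messier.
    """
    delivered = core_w + mem_w
    return delivered / vrm_efficiency

# Hypothetical card: 180 W core, 30 W memory, 85%-efficient VRMs.
total = board_power(core_w=180.0, mem_w=30.0, vrm_efficiency=0.85)
print(round(total))  # core-only reporting would show 180 W, not ~247 W
```

In this sketch, a core-only reading misses more than a quarter of what the card actually pulls from the power supply, which is the gap a physical measurement rig closes.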


Graph and data by THG.

I’m limiting myself to Metro Exodus (THG has Furmark results as well, if you want to see that data), but what we see here is pretty interesting. Once you factor HBM / VRAM into the equation, AMD’s Vega 64 family was a remarkable power burner, while the RX 5700, which offers virtually identical performance, was an enormous improvement. The implication of this graph is that the RX 5700 draws roughly 55 percent of the power of the RX Vega 64, a nearly 2x performance-per-watt improvement.
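The arithmetic behind that "nearly 2x" claim is simple: at equal performance, performance per watt scales as the inverse of the power ratio. A minimal sketch (the 55 percent figure is from the graph; the function itself is just illustrative):

```python
def perf_per_watt_gain(power_ratio: float, perf_ratio: float = 1.0) -> float:
    """Relative performance-per-watt of a new card versus an old one.

    power_ratio: new card's power draw as a fraction of the old card's.
    perf_ratio:  new card's performance as a fraction of the old card's.
    """
    return perf_ratio / power_ratio

# RX 5700 at ~55% of the RX Vega 64's power, with ~identical performance:
gain = perf_per_watt_gain(power_ratio=0.55)
print(round(gain, 2))  # ~1.82x, i.e. "nearly 2x"
```

If the newer card were also slightly faster, `perf_ratio` would rise above 1.0 and push the gain past 2x.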

This kind of achievement casts the power efficiency gains AMD made with Navi last year in a new light. While it’s true that AMD’s 7nm parts are still competing against Nvidia’s 12nm hardware, and Ampere is expected out this year, AMD still made some very genuine improvements with RDNA. The company claims it’ll be able to duplicate that feat with RDNA2, so we’ll see how it all plays out.

There’s another interesting cluster between the Asus RTX 2060 Super and the GTX 1080 FE (169W – 180W), showing how performance and power consumption tend to cluster around a common point, even between product generations. The RX 590 and RX 5700 XT are another really interesting point of comparison, showing how much performance can sometimes be improved within the same power envelope.

If you’re already aware of the general shape of PC GPU power consumption, you won’t see a ton of surprising information here, but it’s always neat to see these things broken out in more granular detail.

Now Read:

  • Analyst: Nvidia Ampere Will Boost Performance, Slash Power Consumption by 50 Percent
  • How to Download the Nvidia Control Panel Without the Microsoft Store
  • Why TFLOPS Are Bad for Comparing PlayStation 5, Xbox Series X Performance
