Gaming News

Intel’s Arc GPUs Report Clock Speeds Differently Than Nvidia, AMD


A bit over a week ago, Intel officially launched its first Arc GPUs. The company is offering mobile Arc chips first, with desktop GPUs coming later this year. Although the launch answered many questions, it also raised several new ones. One of the most pertinent concerns Intel's advertised clock speeds: as it turns out, Intel is not following in the footsteps of its competitors in the way it lists them. This led to some confusion at launch, but Intel has since clarified the situation to ExtremeTech.

Right off the bat, the mobile Arc GPUs seem to have rather low clock speeds. Intel’s chart (below) just says “Graphics Clock” with no further explanation of whether it’s a base clock or a boost clock. This is not how Nvidia and AMD do things. AMD lists a “Game Frequency” and says it’s the clock speed one can expect while playing games. As an example, the Game Frequency of its RX 6800M is listed as 2,300MHz. Nvidia offers a range of clock speeds and calls it the GPU’s “Boost Clock,” meaning you should expect to see the GPU’s clock speed boost within that range under load. For the mobile RTX 3080 Ti, that’s between 1,125MHz and 1,590MHz. In comparison, Intel lists the graphics clock for its entry-level A350M as just 1,150MHz, which is low even for an entry-level GPU. So what gives, Chipzilla?

As it turns out, the clock speeds Intel lists are essentially floor values. According to Intel, they represent the lowest clock speed the GPU will achieve: in testing a large number of chips across a wide variety of applications, this was the worst-case figure the company saw. That means the actual clock speed could be much higher in certain applications, such as gaming. It could even reach 2GHz and beyond in some games, but drop much lower if the chip is thermally limited. Remember, these are mobile GPUs; Intel plans on sticking them in a wide variety of laptops, with an array of sizes and thermal solutions. The way we understand it, Intel is simply playing it safe. Instead of advertising a clock speed that the card might never hit in a certain laptop under certain conditions, it's low-balling the number. Interestingly, Intel's own product page for the entry-level A350M describes the listed clock speed as a "Base Clock," with no mention of a boost clock at all.

A visualization of the range of clocks possible. (Image: Intel)
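To make the difference concrete, the metrics described above can be sketched as simple aggregations over measured sustained clocks. The workload names and clock values below are hypothetical, purely for illustration (not real Arc measurements): Intel's "Graphics Clock" behaves like the minimum across all workloads, while AMD's "Game Frequency" is closer to a typical clock in gaming workloads.

```python
# Hypothetical sustained-clock measurements (MHz) for one GPU across
# several workloads -- illustrative values only, not real Arc data.
observed_clocks = {
    "game_a": 2050,
    "game_b": 1900,
    "video_encode": 1400,
    "compute_stress": 1150,  # thermally limited worst case
}

# Intel-style "Graphics Clock": the worst case seen across all workloads.
graphics_clock = min(observed_clocks.values())

# AMD-style "Game Frequency": the typical clock in gaming workloads only.
game_clocks = [mhz for name, mhz in observed_clocks.items()
               if name.startswith("game")]
game_frequency = sum(game_clocks) / len(game_clocks)

print(graphics_clock)   # 1150 -- the conservative advertised floor
print(game_frequency)   # 1975.0 -- much higher typical gaming clock
```

The same underlying hardware produces very different headline numbers depending on which aggregate the vendor chooses to publish, which is exactly the source of the confusion at launch.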

This is a somewhat surprising way to launch a family of GPUs. Companies usually provide launch numbers that paint their product in the best possible light. Apple recently got in trouble for this when a misleading chart implied its M1 Ultra chip was faster than the RTX 3090. Heck, graphics card companies are widely known for launching GPUs with vague performance numbers lacking any real-world grounding. Intel has taken the opposite approach.

On the one hand, we appreciate Intel's honesty. On the other, it would be helpful to know what the maximum clock could be in gaming, since this is a GPU after all. We all know "results may vary," so what's the harm in providing that information? That's why Nvidia publishes a broad range of clock speeds, and why AMD adds a disclaimer to its number: "'Game Frequency' is the expected GPU clock when running typical gaming applications, set to typical TGP (Total Graphics Power). Actual individual game clock results may vary."

Ultimately, what will matter the most is performance, not claimed frequency, but gamers will need to keep in mind that AMD, Nvidia, and now Intel measure some of these metrics in different ways and make different claims as a result.
