Forget buying a dedicated graphics card; pretty soon, you’ll be gaming without one. At least, that’s true if you’re part of the roughly 90% of people who still game at 1080p or lower. Recent advancements from both Intel and AMD mean their integrated GPUs are about to tear up the low-end graphics card market.

Why Are iGPUs So Slow in the First Place?

There are two reasons: memory and die size.

The memory part is easy to understand: faster memory means better performance. iGPUs don’t get the benefit of fast dedicated memory technologies like GDDR6 or HBM2, though; instead, they have to share system RAM with the rest of the computer. This is mostly because putting dedicated memory on the package is expensive, and iGPUs are usually targeted at budget gamers. This isn’t changing anytime soon, at least not from what we know now, but improved memory controllers that support faster RAM should help next-gen iGPU performance.
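
To put numbers on that gap, peak memory bandwidth is just the transfer rate times the bus width. Here’s a quick back-of-the-envelope sketch; the DDR4 and GDDR6 configurations below are typical examples, not any specific product:

```python
def peak_bandwidth_gbs(transfer_mt_s, bus_width_bits):
    """Peak bandwidth in GB/s: megatransfers/s x bus width in bits / 8 bits-per-byte."""
    return transfer_mt_s * bus_width_bits / 8 / 1000

# Dual-channel DDR4-3200 system RAM: two 64-bit channels, shared with the CPU
ddr4 = peak_bandwidth_gbs(3200, 128)    # 51.2 GB/s

# A typical GDDR6 card: 14 Gbps per pin on a 256-bit bus, all for the GPU
gddr6 = peak_bandwidth_gbs(14_000, 256)  # 448.0 GB/s

print(f"DDR4-3200 dual-channel: {ddr4:.1f} GB/s")
print(f"GDDR6 256-bit @ 14 Gbps: {gddr6:.1f} GB/s")
```

Dual-channel DDR4-3200 tops out around 51 GB/s, which the iGPU has to split with the CPU, while even a midrange GDDR6 card gets nearly nine times that bandwidth all to itself.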

The second reason, die size, is what’s changing in 2019. GPU dies are big—way bigger than CPU dies. Big dies are bad business for silicon manufacturing, and it comes down to the defect rate: a larger die has a higher chance of containing a defect, and a single defect can mean the whole chip is toast.

You can see in the (hypothetical) example below that doubling the die size results in a much lower yield, because each defect now ruins twice as much silicon. Depending on where the defects land, they can render an entire CPU worthless. This example isn’t exaggerated for effect; depending on the CPU, the integrated graphics can take up nearly half the die.
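
The yield effect is often approximated with a simple Poisson model: the chance that a die has zero defects falls off exponentially with its area. A minimal sketch, using a made-up defect density purely for illustration:

```python
import math

def poisson_yield(die_area_cm2, defect_density_per_cm2):
    """Poisson yield model: the fraction of dies expected to have zero defects."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

d0 = 0.5  # hypothetical defects per square centimeter

small_die = poisson_yield(1.0, d0)  # ~0.61: about 61% of 1 cm^2 dies are good
big_die = poisson_yield(2.0, d0)    # ~0.37: doubling the area drops yield to ~37%

print(f"1 cm^2 die yield: {small_die:.0%}")
print(f"2 cm^2 die yield: {big_die:.0%}")
```

Doubling the die area at the same defect density drops the yield from roughly 61% to 37% in this example, which is exactly why huge monolithic dies are such a costly proposition.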


Die space comes at a very high premium, so it’s hard to justify investing a ton of it in a much better iGPU when that area could go to other things, like more CPU cores. It’s not that the tech isn’t there; if Intel or AMD wanted to make a chip that was 90% GPU, they could, but the yields of such a monolithic design would be so low that it wouldn’t be worth it.

Enter: Chiplets

Intel and AMD have shown their cards, and they’re pretty similar. With the newest process nodes having higher defect rates than normal, both Chipzilla and the Red Team have opted to cut their dies up and glue them back together in post. They’re each doing it a little differently, but in both cases, this means that the die size problem is no longer really a problem, as they can make the chip in smaller, cheaper pieces, and then reassemble them when it’s packaged into the actual CPU.

In Intel’s case, this looks to be mostly a cost-saving measure. Chiplets don’t seem to change Intel’s architecture much; they just let the company choose which node to manufacture each part of the CPU on. However, Intel does have plans for expanding the iGPU: the upcoming Gen11 graphics have “64 enhanced execution units, more than double previous Intel Gen9 graphics (24 EUs), designed to break the 1 TFLOPS barrier”. A single TFLOP isn’t really that much (the Vega 11 graphics in the Ryzen 2400G manage about 1.7 TFLOPS), but Intel’s iGPUs have notoriously lagged behind AMD’s, so any amount of catching up is a good thing.
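
That 1 TFLOPS figure follows from simple arithmetic: each Intel EU contains 8 FP32 ALUs, and a fused multiply-add counts as two floating-point operations per clock. A rough sketch below; the clock speeds are assumed for illustration, since Intel hasn’t published final Gen11 clocks:

```python
def gpu_tflops(execution_units, fp32_lanes_per_eu, clock_ghz, flops_per_lane=2):
    """Peak FP32 throughput in TFLOPS; flops_per_lane=2 counts a fused multiply-add as two ops."""
    return execution_units * fp32_lanes_per_eu * flops_per_lane * clock_ghz / 1000

gen11 = gpu_tflops(64, 8, 1.0)   # 1.024 TFLOPS at an assumed 1.0 GHz
gen9 = gpu_tflops(24, 8, 1.15)   # ~0.44 TFLOPS at an assumed 1.15 GHz

print(f"Gen11 (64 EUs): {gen11:.2f} TFLOPS")
print(f"Gen9 (24 EUs): {gen9:.2f} TFLOPS")
```

At around 1 GHz, 64 EUs land just over the 1 TFLOPS mark, which lines up with Intel’s claim.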

Ryzen APUs Could Kill the Market

AMD owns Radeon, the second-largest GPU manufacturer, and uses Radeon graphics in their Ryzen APUs. Their upcoming tech bodes very well for them, especially with 7nm improvements around the corner. Their upcoming Ryzen chips are rumored to use chiplets, but differently from Intel: the chiplets are entirely separate dies linked over AMD’s multipurpose “Infinity Fabric” interconnect, which allows more modularity than Intel’s design (at the cost of slightly increased latency). They’ve already used chiplets to great effect with their 64-core Epyc CPUs, announced in early November.

According to some recent leaks, AMD’s upcoming Zen 2 lineup includes the 3300G, a chip with one eight-core CPU chiplet and one Navi 20 chiplet (Navi being their upcoming graphics architecture). If the leaks prove true, this single chip could replace entry-level graphics cards. The 2400G, with its 11 Vega compute units, already gets playable frame rates in most games at 1080p, and the 3300G reportedly has almost twice as many compute units, on a newer, faster architecture to boot.

This isn’t just conjecture; it makes a lot of sense. The way AMD’s design is laid out lets them connect pretty much any number of chiplets, the only limiting factors being power and space on the package. They’ll almost certainly use two chiplets per CPU, and all they’d have to do to make the best iGPU in the world is replace one of those chiplets with a GPU. They’ve got a good reason to do so, too: it would be game-changing not just for PC gaming but for consoles, since AMD makes the APUs for the Xbox One and PS4 lineups.

They could even put some faster graphics memory on the die as a sort of L4 cache, but they’ll more likely use system RAM again and hope for an improved memory controller on third-gen Ryzen products.

Whatever happens, both the Blue and Red Teams will have a lot more space to work with on their dies, which should lead to meaningfully better products somewhere in the lineup. But who knows? Maybe they’ll both just pack in as many CPU cores as they can and try to keep Moore’s Law alive a bit longer.

Anthony Heddings
Anthony Heddings is the resident cloud engineer for LifeSavvy Media, a technical writer, programmer, and an expert at Amazon's AWS platform. He's written hundreds of articles for How-To Geek and CloudSavvy IT that have been read millions of times.