If you've ever tried to get a vintage computer game up and running on a modern system, you've likely been shocked at how fast the game ran. Why do old games run out of control on modern hardware?

Earlier today we showed you how to run older software on modern computers; today's question and answer session is a nice complement that digs into why some older software (specifically games) never seems to work right when you run it on modern hardware.

Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

The Question

SuperUser reader TreyK wants to know why old computer games run crazy fast on new hardware:

I've got a few old programs I pulled off an early 90s-era Windows computer and tried to run them on a relatively modern computer. Interestingly enough, they ran at a blazingly fast speed – no, not the 60-frames-per-second kind of fast, rather the oh-my-god-the-character-is-walking-at-the-speed-of-sound kind of fast. I would press an arrow key and the character's sprite would zip across the screen much faster than normal. Time progression in the game was happening much faster than it should. There are even programs made to slow down your CPU so that these games are actually playable.

I've heard that this is related to the game depending on CPU cycles, or something like that. My questions are:

  • Why do older games do this, and how did they get away with it?
  • How do newer games not do this and run independently of the CPU frequency?

So what's the story? Why exactly do the sprites in old games blaze across the screen so fast the game becomes unplayable?

The Answer

SuperUser contributor JourneymanGeek breaks it down:

I believe they assumed the system clock would run at a specific rate and tied their internal timers to that clock rate. Most of these games probably ran on DOS in real mode (with complete, direct hardware access), and assumed you were running a 4.77 MHz system (if I recall correctly) for PCs, or whatever standard processor that model shipped with for other systems like the Amiga.
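
To make that concrete, here is a minimal C sketch of the kind of hand-calibrated delay loop a DOS-era game might have used – everything in it (the DELAY_ITERATIONS constant, the tick size) is illustrative rather than taken from any real game. The loop count is tuned for one specific CPU; run the same binary on a chip that executes instructions ten times faster and every "tick" finishes ten times sooner:

    /* Sketch of a speed-dependent game loop: pacing comes from a
     * busy-wait loop hand-tuned for one CPU (say, a 4.77 MHz 8088).
     * On a faster processor the wait finishes sooner, so the whole
     * game speeds up proportionally. DELAY_ITERATIONS is made up. */
    #include <stdio.h>

    #define DELAY_ITERATIONS 100000L  /* "one tick", tuned by hand */

    static void busy_wait(void)
    {
        volatile long i;  /* volatile keeps the compiler from deleting the loop */
        for (i = 0; i < DELAY_ITERATIONS; i++)
            ;             /* burn CPU cycles and nothing else */
    }

    int main(void)
    {
        int x;
        for (x = 0; x < 10; x++) {
            printf("sprite at x=%d\n", x);  /* stands in for drawing */
            busy_wait();                    /* pace the game by wasting time */
        }
        return 0;
    }

Many games skipped even that and simply ran one game tick per pass through the main loop – which is exactly the shortcut described next.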

They took clever shortcuts based on those assumptions, including saving a tiny bit of resources by not writing internal timing loops inside the program. They also took up as much processor power as they could – which was a decent idea in the days of slow, often passively cooled chips!

Initially, one way to get around differing processor speeds was the good old Turbo button (which slowed your system down). Modern applications run in protected mode and the OS tends to manage resources – in many cases it wouldn't allow a DOS application (which runs in NTVDM on a 32-bit system anyway) to use up all of the processor. In short, OSes have gotten smarter, as have APIs.
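
The other half of the fix lives in the games themselves, which answers the second half of the question: instead of counting on a known CPU speed, modern engines measure how much real time elapsed since the last frame and scale all movement by it ("delta time"). Here's a minimal C sketch of the idea – SPRITE_SPEED is an illustrative constant, and the standard clock() is used only for brevity (a real engine would use a high-resolution timer):

    /* Frame-rate-independent movement: the sprite covers the same
     * distance per second whether the loop runs 60 or 60,000 times
     * a second, because each step is scaled by elapsed time. */
    #include <stdio.h>
    #include <time.h>

    #define SPRITE_SPEED 100.0  /* pixels per second (illustrative) */

    int main(void)
    {
        double x = 0.0;
        clock_t prev = clock();

        while (x < 500.0) {
            clock_t now = clock();
            double dt = (double)(now - prev) / CLOCKS_PER_SEC;  /* seconds since last frame */
            prev = now;

            x += SPRITE_SPEED * dt;  /* same speed on any CPU */
        }
        printf("sprite reached x=%.1f after about 5 seconds\n", x);
        return 0;
    }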

Heavily based on this guide on Oldskool PC where logic and memory failed me – it's a great read, and probably goes into more depth on the "why".

Stuff like CPUkiller uses up as many resources as possible to "slow" down your system, which is inefficient. You'd be better off using DOSBox to manage the clock speed your application sees.
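
DOSBox does this by emulating a CPU at a configurable number of instructions per millisecond ("cycles"), which you can pin in its configuration file or adjust on the fly with Ctrl+F11 and Ctrl+F12. A sketch of the relevant dosbox.conf section – the value 3000 is just an illustrative starting point you would tune per game:

    [cpu]
    core=normal
    cputype=auto
    # Execute a fixed number of emulated instructions per millisecond
    # instead of running as fast as the host machine allows.
    cycles=fixed 3000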

If you're curious about how the actual code was implemented in early computer games (and why they adapt so poorly to modern systems without being sandboxed in some sort of emulation program), we'd also suggest checking out this lengthy but interesting breakdown of the process in another SuperUser answer.


Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.