Ever sit down to watch a trailer for the latest video game, only to find yourself out of the chair and dancing with excitement by the end of it? “The graphics look so good, and did you see that explosion? It was like I was actually there!”
Unfortunately, over the past few years we’ve learned that expectations rarely match reality in the world of game trailers. But why is that? How do developers make a game look so good for three minutes at a time, only to have it fall flat once the full game makes it onto shelves?
“In-Game” vs. “In-Engine” vs. CGI Trailers
In 2005, the Killzone 2 trailer debuted at E3, featuring graphics unlike anything anyone had ever seen before (console or otherwise). The animations and character models were so fluid they looked like they were ripped straight from a computer-generated movie. Used as advertising fodder to show off the increased graphical capabilities of the PS3, the trailer was posted and reposted by every gaming news outlet in the country, and heralded as the launching point for “gaming’s second renaissance”.
Of course, it didn’t take long for the press to dissect the trailer bit by bit. As more actual in-game screenshots leaked out over the next few months, journalists and gamers alike started to wonder whether the trailer they were shown at E3 was really telling the whole story. It turned out Guerrilla (the developer of Killzone) had used a technique known as “in-engine rendering”, which allowed the studio to add extra lighting elements, new animations, and other alterations to clean up the final product.
There are a few different ways game developers can create a trailer. Full CGI trailers, like the Overwatch trailer above, are made completely separate from the game engine. These usually include Pixar-esque cinematics that involve characters in the story fighting some kind of battle or having a lot of dialogue. Even though CGI trailers are a divisive promotional tool in the gaming community, they’re also commonly accepted as part of the advertising blitz necessary to get a game to sell by the time it’s on shelves.
“In-engine” trailers, like that Killzone trailer in 2005 (or the Total War: Warhammer trailer above), are a bit different. When you make an in-engine trailer, it works similarly to the pre-rendered CGI model, except that the 3D artists are animating characters using only the game’s engine to create a static cutscene. You may also see these referred to as “pre-rendered” trailers.
It’s easy to make in-engine shots look good because you can fine-tune exactly how many resources the engine devotes to any given element. An artist can push more graphical fidelity to a character’s face while blurring the background, or spend processing power on animations instead of running the characters’ artificial intelligence. They can also add custom animations or other cinematic effects that you wouldn’t see in-game, even if they require more processing power than a normal gaming PC could handle. That’s why everything looks so flawless.
Finally, in-game trailers take place inside the actual environment of the game. In theory, this implies they’re recording someone actually playing the game as a “what you see is what you get” demonstration. When a company decides to release “in-game” footage for an upcoming release, it all starts with picking out the part of the game they want to show off most. Once the route for the player is planned and choreographed, a developer will run through the segment on a development PC, and record their movements as they progress through the map.
Why “In-Game” Doesn’t Always Mean What It Should
That’s not the whole story, though. In-game footage can still be altered. By carefully tweaking settings like how a particular shot is exposed, developers can make sure their “in-game” footage looks its absolute best by the time the trailer is released, even if it uses features not available to ordinary players, or requires processing power no gaming PC could deliver.
Sometimes, the case could be made that what we see in these trailers is what the company wanted the final game to look like: a vision of what it could have been with endless resources and time at its disposal. In the case of The Division back in 2013, Ubisoft showed off a graphically rich, dense game filled with gorgeous textures lining a living, breathing world. Now that the beta is out in 2016, three years later, testers everywhere are reporting how little the game they’re playing resembles the experience from that first trailer.
Many jump to the conclusion that the developer is misleading them. But it could also be a sign of developers with big ideas being forced to accept the reality of working on limited hardware with finite budgets, downgrading graphics or gameplay elements so the game can run without crashing every few seconds.
For now, only vague laws prevent companies from slapping the “in-game footage” tag on recordings of gameplay that have been edited since they were originally captured. After all, even pre-rendered cutscenes are technically “in the game”, so they get to be referred to as “gameplay”. The issue is that developers will often spend months laboring over how to make one section of their game look as good as possible for the trailer, while ignoring the fact that those same resources probably could have been better spent improving the performance of the title as a whole.
There’s no established international body that dictates how game companies promote their products, so until more concrete false-advertising rules define what developers can call “in-game” as opposed to “in-engine”, the problem will only get worse.