I was thinking today about the future of game development and graphics. The year I graduated from high school (1992), two major game genres were created: Dune 2 created the real-time strategy genre, and Wolfenstein 3D created the first-person shooter.
Here’s a sample of the graphics. They’re not that impressive, but they were for the time.
Dune 2 gameplay (1992):
Wolfenstein gameplay (1992):
Doom (1993):
Compare that to the latest games:
All of this makes me wonder: where will things go in the future? We jumped from Wolfenstein 3D (1992) to this in 17 years. Now add another 20 years of technology improvements – where will we be then?
I think we’re approaching the limits of how much graphics can be improved. Even if we quadruple the number of polygons rendered every second, it just doesn’t result in huge image-quality benefits like it used to. The major issue now is getting artists to create all that content – the geometry, textures, animations, etc. Some of the areas where I think we’ll be seeing improvement:
Graphics and Expansive Worlds:
– We’re getting to the top of what can be improved as far as graphics go. A number of games already boast expansive game-worlds measured in hundreds of square miles. It seems like it won’t be long before there are good libraries of realistic 3d models that game developers can drop into games. There’s still a big market for a unique look and feel (see Brutal Legend, Team Fortress, Plants vs Zombies), which will still be done by armies of artists. And there’s also lots of work to be done in level design.
Physics:
– Work on physics systems has come a long way in the last decade. Games like Crysis started showing off the fact that their physics models can handle thousands of barrel collisions at the same time. Collision detection, cloth simulation, and fluid dynamics are definitely getting there. The Havok physics engine allows developers to drop a physics system into a game, so physics is becoming standardized. The technology behind damaging 3d objects isn’t quite there yet. (For example, the Halo Warthog never shows any signs of damage.) There are some racing games that try to simulate car collision damage, but it ends up being a one-off system that isn’t standardized across game engines.
One racing game that simulates damage to vehicles:
Even in the case where games do simulate damage, it’s done in predefined ways – for example, they might have “after a car accident” cars, but they don’t have “hit by a grenade” or “shot by a gun” damage unless they explicitly create that damage.
Collisions between humanoids get even more complicated, which is why combat in games is mostly a bunch of scripted moves. Ever notice how in games like “World of Warcraft” a character’s weapon seems to move through the enemy? That’s because calculating the actual collision effects is too complicated. Realistically, it would include factors like the weight of the weapon, the character’s strength, the weight distribution of the enemy being hit, his muscle tension, etc. Systems like Euphoria (below) at least try to handle that kind of stuff.
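At the lowest level, collision detection comes down to cheap geometric intersection tests run many times per frame; engines layer broad-phase and narrow-phase checks on top of primitives like this. Here’s a minimal sketch (not any particular engine’s API) of a sphere–sphere overlap test:

```python
def spheres_collide(c1, r1, c2, r2):
    """True if two spheres overlap.

    c1, c2 are (x, y, z) centers; r1, r2 are radii. Comparing squared
    distance to the squared radius sum avoids a square root per test,
    which matters when thousands of barrels are colliding every frame.
    """
    dx, dy, dz = c1[0] - c2[0], c1[1] - c2[1], c1[2] - c2[2]
    return dx * dx + dy * dy + dz * dz <= (r1 + r2) ** 2
```

Real engines wrap complex meshes in simple bounding volumes like this one, and only run expensive per-triangle checks when the cheap test passes.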
Artificial Intelligence:
AI covers a whole bunch of different things. There is so much work to be done on AI that we’ll still be doing it 20 years from now. It would be nice if there were “AI engines” the way there are 3d engines now, but it seems that every game has to reinvent the wheel when it comes to AI. This is partly because “AI” is really a blanket term covering a whole bunch of different things, and partly because the AI has to work well within the game rules (whatever they may be). Some types of AI systems:
– Character AI: embodied, realistic actors with personality, knowledge, desires, etc. They should be able to handle conversations. Right now, this is generally handled with predefined conversations where game designers offer players a limited set of responses. Ideally, you would want a very open conversation, but that takes enormous amounts of work. That system is also vulnerable to problems if the game plot changes. (For example, if a character is removed from the game, you won’t want anyone to make reference to that person in any conversations.)
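The predefined-conversation approach amounts to a branching data structure; the node names and lines below are invented for illustration:

```python
# A toy dialogue tree: each node holds the NPC's line and the limited
# set of player responses, each mapping to the next node (None = end).
DIALOGUE = {
    "start": {
        "npc": "Welcome, stranger. What brings you here?",
        "choices": {"Ask about the war": "war", "Leave": None},
    },
    "war": {
        "npc": "The war took everything from us.",
        "choices": {"Leave": None},
    },
}

def advance(node_id, choice):
    """Return the next node id after the player picks a response."""
    return DIALOGUE[node_id]["choices"][choice]
```

The brittleness described above is visible even at this scale: cut a character from the plot and someone has to hand-audit every node that references them.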
Recently, I watched a video for “The Witcher 2”. They were boasting about the advanced AI, but I didn’t think it was that impressive. For example, in this scene, the witcher casts a spell. Game characters react, but they don’t react realistically:
For a really believable set of characters, they should’ve had a whole variety of factors influencing their reaction. First: is magic common in the game world? If magic is uncommon, they should be astonished and fearful. If magic is common, the spell should be less surprising. In this particular scene, it seems the characters could’ve been killed if the magic was targeted at them – which should make them nervous. The AI system should also figure out whether the game-characters were aware of who cast the spell. If the spell-casting was obvious and within their visual field, they should react to the player. Depending on their culture, religious beliefs and the exact situation, they might react in a hundred different ways, ranging from “what the heck just happened?” to “bow down and worship the player as a god” to “kill the witch” to “run away before he kills us all” to “thank god we have a powerful magic man like you on our side”. The reaction should also depend on the particular situation. For example, if the player-character says, “I’m a powerful wizard, give me your money” (and then casts a spell to intimidate the low-level game-characters), they should react by giving him what he wants. But if the player says, “I will help you defeat the evil lord …” and then casts a spell, they will be impressed. Character AI gets very complicated, but then, it’s trying to simulate human reactions – which *is* a very complicated thing. Game developers can script these kinds of reactions, but it’s a lot of work, and players will always do something you didn’t think of. It would be nice if developers could drop in game-characters that react realistically without the developers needing to script reactions to a hundred different things the player might do.
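The factors above (visibility, how common magic is, whether the NPC was threatened, whose side the caster is on) boil down to a decision table. A hypothetical sketch, with every rule invented for illustration:

```python
def npc_reaction(saw_caster, magic_common, threatened, caster_is_ally):
    """Pick an NPC's reaction to witnessing a spell.

    All of these rules are made up; a real system would also weigh
    culture, religion, and the specifics of the situation.
    """
    if not saw_caster:
        return "confused"      # "what the heck just happened?"
    if threatened:
        return "comply"        # intimidation works on low-level characters
    if caster_is_ally:
        return "impressed"     # "thank god he's on our side"
    return "wary" if magic_common else "fearful"
```

Even this crude table produces different behavior than the all-purpose flinch in the video; the hard part is filling it in for a world’s worth of cultures and situations.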
– Strategic AI: decision-making (deciding what to build, where and when to attack, expectations of success, forming alliances, starting wars, making trades). It also needs a knowledge-system: beliefs about what other players know and expect. It needs to anticipate other players’ actions and determine intentions (is the other player sending that transport over to invade my territory, or is he merely bumbling around on the map?). Ideally, the strategic AI could learn, so if players figure out clever ways to consistently beat the AI, it would react to stop that exploit – like a real person would.
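One common way to structure this kind of decision-making is utility scoring: every candidate action gets a score from the current game state, and the AI picks the highest. A toy sketch, with all the scoring rules invented:

```python
# A tiny utility-based strategic AI. Every weight below is made up.
def score_attack(state):
    return state["army"] - state["enemy_army"]          # strength advantage

def score_build(state):
    return state["resources"] / 100.0                   # spend surplus

def score_trade(state):
    return 5.0 if state["resources"] < 50 else 0.0      # trade when poor

ACTIONS = {"attack": score_attack, "build": score_build, "trade": score_trade}

def choose_action(state):
    """Return the name of the highest-utility action for this game state."""
    return max(ACTIONS, key=lambda name: ACTIONS[name](state))
```

The appeal of this structure is that “learning” can mean just adjusting the weights over time – say, penalizing attack routes the player keeps ambushing.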
– AI voice-generation (intonation, timing, realism) – this should include an AI personality and mental state, since those things influence speech. Very difficult, probably not possible without a full-scale AI.
– Voice comprehension / text-comprehension – I’m not talking about speech-to-text technology, but actual comprehension. This is very difficult, probably not possible without a full-scale AI.
– AI/Body/Physics interaction (sports games, Force Unleashed). One irritation of mine is how games don’t handle simple things like foot-planting. Instead, in games like World Of Warcraft, characters can spin around in a circle without moving their feet. (See foot-planting in Madden ’09.) At least the character-movement and collision systems are starting to be handled by systems like Euphoria. I’ve been impressed by some of the things I’ve heard about the Euphoria engine. For example, game-characters can see if you are throwing something at them, and put their hands up in front of their face in an attempt to protect their head. All this kind of stuff will be standard.
– Pathfinding AI. Pathfinding is all about finding the best path from point A to point B. To function realistically, it should include character knowledge and the AI’s mental map: does the game-character know the area or not? The pathway also has to take into account body type (humanoid, rodent, etc) and athleticism (can he jump, crawl?). I’ve seen some middleware engines that handle this kind of navigation. It would be good to have a generalized pathfinding system that works in 3d worlds, allowing game-characters to intelligently find their way around even if the player disrupts the game-world (by destroying walls, blocking doorways, etc). Here’s a demo of one system:
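Under the hood, most pathfinding middleware builds on A* search over a navigation graph, and re-running the search when the player knocks down a wall is what makes dynamic worlds work. A toy grid version (Manhattan-distance heuristic; the grid encoding is invented, with 1 meaning blocked):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2-D grid; grid[y][x] == 1 means blocked.

    Returns the length of the shortest path in steps, or None if the
    goal is unreachable (e.g. the player has blocked every doorway).
    """
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # heuristic
    open_set = [(h(start), 0, start)]       # (f-score, g-score, cell)
    best_g = {start: 0}
    while open_set:
        _, g, (x, y) = heapq.heappop(open_set)
        if (x, y) == goal:
            return g
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                ng = g + 1
                if ng < best_g.get((nx, ny), float("inf")):
                    best_g[(nx, ny)] = ng
                    heapq.heappush(open_set, (ng + h((nx, ny)), ng, (nx, ny)))
    return None
```

The hard parts the middleware adds on top are exactly what’s described above: building the graph from arbitrary 3d geometry, rebuilding it when the world changes, and filtering edges by body type and athleticism.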
Tools:
– We can look forward to better/faster tools for creating stuff (AIs, game-world terrain, etc). Instead of, say, placing every plant on a landscape by hand, the tools will let creators grow game-worlds organically based on predefined settings (vegetation types, geography, etc) and then go through and edit areas to put in the things the game needs. Right now, these kinds of tools exist within individual game editors, but they should become standardized so they can be used across games and game-engines.
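The “grow the world from settings” idea can be as simple as deterministic procedural placement: the designer stores a seed and a density instead of thousands of hand-placed plant positions. A minimal sketch (all parameters invented):

```python
import random

def scatter_vegetation(width, height, density, seed=42):
    """Procedurally place plants on a width x height region.

    A fixed seed makes placement deterministic, so the same world can
    be regenerated identically from just the settings; density is
    plants per unit of area.
    """
    rng = random.Random(seed)
    count = int(width * height * density)
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(count)]
```

A real tool would then let the designer override specific areas by hand – clearing a spot for a quest hut, say – while the rest of the world stays generated.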
– We’ll eventually have access to 3d objects that contain a wide variety of animations and behaviors. For example, a truck that contains a suspension system and gets damaged in realistic ways from weapons and collisions. Or animals packaged in a standard format that contains not only the 3d model and animations, but also behaviors and intelligent reactions to sensory information. For example, deer and birds have behaviors like eating and wandering. They get startled easily and run away in response to nearby people and sounds. This gets more complicated because the animal would have to perceive the environment around it and react to it – for example, if it wants to run away, it looks for its escape paths. It can’t run up a cliff or through a fence, so the pathfinding AI has to integrate with this system. Its behavior system might cause it to avoid confined spaces and prefer staying near vegetation where it is hidden. It would be very nice if game developers could just drop various animals into a game world and have them react intelligently to it.
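Ambient-animal behavior like this is usually modeled as a finite-state machine driven by a few sensory inputs; the pathfinding and perception systems feed it, and it decides what the animal is trying to do. A toy sketch with invented states and thresholds:

```python
def update_deer(state, player_distance, noise):
    """One tick of a deer's behavior FSM.

    state: current behavior ("graze", "wander", or "flee");
    player_distance: distance to the nearest person;
    noise: loudness from 0.0 to 1.0. All thresholds are made up.
    """
    if player_distance < 10.0 or noise > 0.8:
        return "flee"                       # startled: threat close or loud sound
    if state == "flee":
        # Keep fleeing until the threat is comfortably far away.
        return "flee" if player_distance < 30.0 else "wander"
    # Calm states alternate between grazing and wandering.
    return "wander" if state == "graze" else "graze"
```

The drop-in animal described above would bundle an FSM like this with its model, animations, and the perception queries it needs, so the game engine only has to supply the sensory inputs.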