In her introductory book Complexity: A Guided Tour, Melanie Mitchell distills how the rules of computational complexity can be applied to any system by anyone interested in understanding the behavior of that system and the network of which it is a part: “The deep ideas of computation are intimately related to the deep ideas of life and intelligence.” She defines three rules regarding computational complexity, all of which can be applied to video games, which are themselves complex systems that “exhibit non-trivial emergent and self-organizing behaviors”:
- The collective actions of vast numbers of components give rise to complex and changing patterns of behavior;
- Systems produce information and signals from both their internal and external environments;
- Systems adapt, changing their behavior to improve their chances of survival or success.
Video games are interactive, and most incorporate text, visuals, and sound organized by code that does different things based on the actions of the player(s). Taken separately, these elements are not a game; combined, they immerse the player in an interactive experience created through all of the parts working together. For those games that use any kind of artificial intelligence and player feedback, the patterns of play and output change even more, creating more varied experiences and greater environmental depth. For a game such as Pong, there are three visible moving parts: two paddles and a ball. The play is simple, yet by integrating speed and geometry, every game of Pong is subtly different, and is always a challenge because of the infinite variations born of angles of attack. In a game such as Undertale, previous player decisions and actions work together to create unanticipated scenarios with characters every time one plays. All of the coded elements work together to record a history of play while using that history to tailor a new player experience as the game advances, and as the game is replayed by the same player over time.
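Pong’s endless variation comes from a simple geometric rule: the rebound angle depends on where the ball strikes the paddle. A minimal sketch of that idea (the constants, function name, and units are illustrative assumptions, not Pong’s actual code):

```python
PADDLE_HEIGHT = 40  # hypothetical units; the arcade original differs

def bounce_angle(ball_y, paddle_y, max_angle=60.0):
    """Return a rebound angle (degrees) based on where the ball
    strikes the paddle: center hits rebound flat, edge hits rebound
    steeply, so every rally plays out slightly differently."""
    offset = (ball_y - paddle_y) / (PADDLE_HEIGHT / 2)  # -1 .. 1
    offset = max(-1.0, min(1.0, offset))
    return offset * max_angle

# Three nearby contact points produce three distinct trajectories.
for ball_y in (18, 20, 22):
    print(bounce_angle(ball_y, paddle_y=20))
```

Because the contact point is itself the product of the previous rebound, tiny differences compound over a rally, which is where the “infinite variations” come from.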
To the second point, video games rely on their internal code to do things when activated, but they typically wait for user input to do anything else. Basic information on how to run comes from the game’s internal environment, but without any information provided by the player, nothing new happens, and no outcome is produced. With an arcade game such as Pac-Man, the game plays various demo scenes on a loop until the player deposits money or a token. That interaction starts the true gameplay, but even then, the player need not do anything for the game to unleash its ghosts into the maze. Eventually the player will be caught and a life lost. If the player takes control of the eponymous hero, the ghosts still attack, but at least the player has a fighting chance. The game uses external player data to fight back, something it cannot do on its own.
The final point is most relevant to video games programmed with artificial intelligence. The best games learn with the player and adapt to player behavior so that the computer can win and protect its assets. Perhaps the most rudimentary example is basic computer chess. The program knows all of the characteristics of all of the pieces on the board, and has most likely been programmed with a suite of moves and counter-moves based on player action and probability/statistics. The game’s goal is not to lose to the player. In more contemporary games such as Half-Life (1998), the game adapts its strategy based on player behavior, determining how to move enemy soldiers or aliens to outflank, disable, or kill the player’s character. Most first-person shooters contain a difficulty setting that can make the game’s AI more aggressive, or “smarter.” MMOs and RPGs have similar AI programmed to behave in complex ways, always in reaction to a player’s movements. At times, a player might feel like the algorithms are being fought instead of the mobs (game-controlled enemies), and that by understanding how these algorithms work, they can be used against the game to secure a victory for the player.
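The chess program’s “suite of moves and counter-moves” is, at bottom, game-tree search: look ahead, assume the opponent plays their best reply, and pick the move that survives it. A minimal sketch of the same idea on a far smaller game than chess (Nim: players alternately take 1–3 sticks, and whoever takes the last stick wins; the function name and scoring are my own invention, not any engine’s API):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(sticks):
    """Return (score, move) for the side to move: score is +1 if
    that side can force a win from this position, -1 otherwise."""
    best = (-1, 1)  # assume a loss until a winning move is found
    for take in (1, 2, 3):
        if take > sticks:
            break
        if take == sticks:
            return (1, take)  # taking the last stick wins outright
        # Our score is the negation of the opponent's best reply.
        opp_score, _ = best_move(sticks - take)
        if -opp_score > best[0]:
            best = (-opp_score, take)
    return best

print(best_move(10))  # (1, 2): take 2, leaving a losing position of 8
```

A chess engine follows the same shape, just with an astronomically larger tree and a heuristic evaluation instead of exact win/loss scores.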
Mathematician John Holland adds to Mitchell’s characteristics of complexity and how complex systems behave:
- Complex systems organize themselves into patterns;
- Complex systems include chaotic behavior. Small changes in initial conditions produce large later changes;
- Complex systems exhibit fat-tailed behavior: rare events occur more often than would be predicted by bell-curve distribution.
Any player who has spent a few hours learning a game comes to understand the game’s quirks and behaviors, how to operate within its rules (the rules of play and interaction vary from game to game). Many good games teach the player these rules through interactive play, where the player learns how to succeed through interaction with on-screen elements. In the apocalyptic zombie thriller The Last of Us, different zombies behave in different ways. Their movements and actions self-organize, waiting for the player-activated trigger to occur: an errant noise, the beam of a flashlight, the proximity between player and monster. As the action unfolds, the zombies interact with some predictability, yet have enough random behavior and aggression to keep the game both challenging and emotionally engaging.
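The self-organized idling and player-activated triggers described above can be caricatured as a tiny finite-state machine. Everything here (class name, trigger thresholds, the chance of losing the trail) is an illustrative assumption, not The Last of Us’s actual AI:

```python
import random

class Zombie:
    """A toy enemy: it patrols in a self-organized pattern until a
    player-generated trigger flips it into aggression."""

    def __init__(self):
        self.state = "patrol"

    def update(self, noise_level, flashlight_on, distance):
        if self.state == "patrol":
            # Player-activated triggers: an errant noise, a flashlight
            # beam, or proximity between player and monster.
            if noise_level > 0.5 or flashlight_on or distance < 3.0:
                self.state = "aggro"
        elif self.state == "aggro":
            # A dash of randomness keeps pursuit from feeling scripted.
            if random.random() < 0.1:
                self.state = "patrol"  # loses the trail

z = Zombie()
z.update(noise_level=0.8, flashlight_on=False, distance=10.0)
print(z.state)  # an errant noise flips patrol -> aggro
```

The predictability lives in the state transitions; the emotional engagement comes from the random element layered on top.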
Most games also depend on chaos to create interesting scenarios that reward repeat play. Casual games such as Bejeweled and Candy Crush Saga rely on a player’s opening moves to solve a puzzle, the earlier decisions affecting how future arrangements of jewels or candy appear and behave, sometimes making a level easy to solve, but more often making it impossible to complete. The levels play differently every time, and chaos is introduced into the complex system through player decision-making as well as the random introduction of pieces into the game as jewels are collected and candy crushed.
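Holland’s “small changes in initial conditions produce large later changes” can be demonstrated with the logistic map, a standard textbook model of chaos that stands in here for a game’s evolving board state. Two opening states differing by one part in a million diverge to a macroscopic gap within a few dozen iterations, just as two nearly identical opening moves in a match-three game lead to completely different boards:

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a classic chaotic system."""
    return r * x * (1 - x)

# Two initial states differing by one part in a million.
a, b = 0.400000, 0.400001
max_gap = 0.0
for _ in range(40):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap)  # the tiny initial difference blows up to a large gap
```

The amplification is exponential: each step roughly doubles the gap until it saturates, which is why memorizing an “optimal” opening only helps for the first few moves.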
Rarity occurs more often than not in games that purposely employ a randomized reward system for player behavior. In Diablo II, players find magic items more frequently than their “common” counterparts. Pokémon Go players regularly find rare pocket monsters in the wild to collect, although which rare collectible characters appear remains random, keeping the game interesting. Compare that to real-world archaeology, where the bulk of finds can be pottery, bone, terracotta, charcoal, etc., the “pretties” having already been removed by previous ancient people, by animals, by looters. In games, players can expect to find rare things based on the algorithms introduced by the developer. Because players know this in advance, they continue to play to earn those rewards.
Above, the term “computational complexity” is used as the main type of complexity that appears in video games, these being computer-built environments relying on a coded rule-structure. Computational complexity assigns levels of complexity/difficulty to different collections of problems. In video games, computation is what a complex system does with information in order to succeed or adapt in its environment. Mitchell, writing about computational complexity, reminds us that “no individual component of the system can perceive or communicate the ‘big picture’ of the state of the system.” If it did, the game would be complicated and not complex. When large groups of components come together to operate as a collective, however, the system behaves as a single unit.
When considering the question of machine-created culture, complexity plays a large role in how and where and when artifacts or built environments appear. In the case of video games, structures are not the only things that can be considered built environments. Everything in a game is built, sometimes coded directly, but more often than not appearing based on sets of rules (algorithms) that describe the parameters of something so that the game can make either random or calculated decisions on how it should appear. Mitchell defines algorithms as “series of steps by which an input is transformed into an output.” This can include things that are to be interpreted by the player as “natural,” e.g., grass and trees. In some games, such as Minecraft, landscapes are not built by the game until the player arrives, the interaction directing the appearance of the landscape, the game computing millions of decisions at lightning speed. Complex systems are non-linear in this way, especially with open worlds. The player’s appearance and movement marks what author David Byrne calls the “local bifurcation point.” That point determines where and when the complex behavior will begin to emerge.
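Minecraft-style lazy world-building can be sketched as a deterministic function from (seed, chunk coordinates) to terrain: nothing exists until the player’s arrival asks for it, yet the same place always regenerates identically. The hashing scheme below is an illustrative stand-in for Minecraft’s actual noise functions, and all the names are my own:

```python
import hashlib

WORLD_SEED = 42  # assumption: any fixed world seed

def terrain_height(chunk_x, chunk_z):
    """Deterministically derive a terrain height for a chunk from the
    world seed, so the 'same' world regenerates on demand."""
    key = f"{WORLD_SEED}:{chunk_x}:{chunk_z}".encode()
    digest = hashlib.sha256(key).digest()
    return digest[0] % 64  # a height in 0..63

world = {}  # chunks materialize only when visited

def visit(chunk_x, chunk_z):
    """The player's arrival is the 'local bifurcation point': only
    now is the landscape computed into being."""
    if (chunk_x, chunk_z) not in world:
        world[(chunk_x, chunk_z)] = terrain_height(chunk_x, chunk_z)
    return world[(chunk_x, chunk_z)]

print(visit(0, 0) == visit(0, 0))  # True: revisiting regenerates identically
```

The input (player movement) is transformed into an output (landscape) exactly in Mitchell’s sense of an algorithm, and the world is as large as the player’s exploration makes it.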
So how does this complexity work? With earlier, arguably simpler games such as Super Mario Bros., there is no real complexity to be found within the game. The game always plays the same way, and players can memorize maps and enemy behaviors to create the phenomenon of a speed-run, or playing a game as fast as possible without errors while maximizing point-totals. Later games, however, began to incorporate subtle differences to maps (seen as early as the text-prototype that would later become Adventure), to non-player characters (NPCs), to quests, to artifacts/treasure. These subtle differences are all governed by their own sets of rules that are then placed within the framework of the game. Because these rules work together, they create complexity, which in turn can drive future complex behavior that emerges from the initial contact with more elementary pieces of the game. As the player progresses, past choices of actions, travel, and gear combine to dictate future play. Complexity creates a hierarchy of more complex forms, and these hierarchies evolve over time.
To begin to analyze the complex systems that comprise most video games, we must find recurring patterns in a system’s changing configurations. Laws govern how an initial state can change; these laws are the rules/algorithms established by the game’s code. These games are adaptive systems and use adaptive agents to move the game along, everything from small events such as critters running around to create ambience in World of Warcraft to major events such as boss-fights. As archaeogamers, we must discover the mechanisms that generate data and describe the adaptive interactions of large numbers of agents. Diversity results from continuing adaptation. The actions of one agent depend on the actions of others. In the boss-fight example, what happens to the main monster in the fight can determine what attacks it uses to defend itself, as well as whether it spawns “adds,” secondary monsters to assist it in its act of self-preservation. WoW’s aggro mechanics are famous because of their complexity in governing the movements and attacks of sometimes dozens of mobs/enemies as they interact with each other and with anywhere from 1–40 players in a single event.
One can apply to games Holland’s three levels of activity of these adaptive agents that appear in computationally complex adaptive systems:
- Performance (moment-by-moment capability);
- Credit-assignment (rating usefulness of available capabilities);
- Rule-discovery (generating new capabilities).
A mob is at first in the moment, activated by player agency. In a more complex scenario (such as a boss-fight), the game analyzes the usefulness of the mob (strengths and weaknesses) to create a strategy, at which point the mob is deployed into the fray. The mob then “learns” from its successes and failures based on trial-and-error, attempting to inflict as much damage on a player as possible while in turn preserving itself. It discovers the rules governing the current situation while observing player action, and in turn responds in a more intelligent (or at least different) way. The behavior of a complex adaptive system is always generated by the adaptive interactivity of its components. The seeming randomness of what happens within a chaotic event like the boss-fight is, as Mitchell puts it, “balanced with determination.” The mobs follow the rules, but at the same time have a freedom of movement in which to execute those rules. The player introduces randomness into the scenario, making it different every time. Game developers know that players introduce random behavior, and the game waits for player interaction in order to react to the interference with an otherwise stable system at rest. Players upset the equilibrium of the game, and the algorithms of combat respond to restore the game to its original state of rest.
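Holland’s three levels can be caricatured in a few lines: a mob performs moment-by-moment, assigns credit to the attacks that worked, and thereby discovers which rule to favor next. Everything here (attack names, damage values, the learning rate) is an invented toy, not any game’s AI:

```python
import random

class AdaptiveMob:
    def __init__(self):
        # Performance: the moment-by-moment capabilities available.
        self.attack_scores = {"claw": 1.0, "spit": 1.0, "summon_adds": 1.0}

    def choose_attack(self, rng=random):
        # Favor attacks whose usefulness rating is currently high.
        names = list(self.attack_scores)
        weights = list(self.attack_scores.values())
        return rng.choices(names, weights=weights, k=1)[0]

    def learn(self, attack, damage_dealt):
        # Credit-assignment: rate the usefulness of what was tried.
        self.attack_scores[attack] += 0.1 * damage_dealt

mob = AdaptiveMob()
for _ in range(100):
    attack = mob.choose_attack()
    # Rule-discovery by trial and error: this toy player happens to
    # be weak against "spit", and the mob gradually finds that out.
    damage = 5.0 if attack == "spit" else 1.0
    mob.learn(attack, damage)

print(mob.attack_scores)  # the effective attack accumulates credit
```

Determinism (the fixed learning rule) and randomness (the weighted choice, and the player’s behavior feeding back as damage) are balanced in exactly the way the boss-fight description above suggests.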
With complexity and algorithms comes the idea of emergence: a new behavior appears based on how rules work together and are acted upon by an agent, which, in the case of video games, is the player. The more lines of code there are in a program, the greater the chance for complex behavior to emerge.
Byrne describes three results of emergent behavior in complex systems:
- Equilibric system: stays as it is;
- Close-to-equilibric system: moves back to a stable condition when disturbed;
- Far-from-equilibric system: can change radically.
All video games fall into these three classes. Games such as Frogger are equilibric systems: the player completes the jumping puzzles as the indifferent traffic passes by. Pitfall players swing on vines and leapfrog over snoozing alligators that could not care less about a player’s success or failure. Games in the Tomb Raider series, the Doom series, and other shooters are close-to-equilibric: the player’s presence disturbs the mobs enough to agitate them, inducing them to fight. If the player dies, the mobs return to their initial routines as if nothing had ever happened. Far-from-equilibric games include any game with a “permadeath” mode (e.g., Dwarf Fortress), or games that change forever based on player action (e.g., Undertale).
Emergence exists at the border of chaos and order, and is more likely to occur in systems with a high number of connections (not parts). We see this in video games just as we do in real-world cultures. Most video games are rule-based systems, which, even though they can be complex, are restricted by the rules imposed upon them by the developer. With MMOs and with “real” life, we see a more general complexity, which uses rules more as guidelines than absolutes. I can manipulate real-world objects in ways that I cannot in games. In the real world, I can use a plate as a flying disc (maybe once), but in a game, I am restricted to using a plate as a plate.
Complexity gives rise to emergent behavior in both the real and virtual worlds; the more connections there are to exploit and explore, as Ian Hodder wrote in 2012, the more opportunities there are for things to go wrong or happen in unexpected ways. This was made clear in an earlier post about glitches. Emergent behavior in video games, however, can be created based on environment, a kind of digital landscape archaeology. These features are what Byrne calls “attractors.” Certain combinations of elements are more successful at yielding emergent behavior. I can more easily find ammunition by shooting bad guys in Medal of Honor than I can by scouring the ground and shelves. I can find more settlements situated by potable, flowing water than I can inland.
When thinking about the archaeology in a game that is created by a game’s complexity, Kevin Schut writes, “this mimicry (or humanity) can approach something like the behavior of actual humans and can produce unanticipated and unique cultural interactions, as evidenced by all the bugs and exploits gamers discover.” The virtual approaches the real. We see this happen every time we play, as we try to define the rules governing complexity and emergent behavior. It is no different from attempting to understand how things worked in antiquity. We can hypothesize why and how emergence occurs, but as Mitchell says, “we are waiting for the right concepts and mathematics to be formulated to describe the many forms of complexity we see in nature.” There is not yet an equation that will predict when emergent behavior will happen. Experience can help with the prediction, but to date, this cannot be quantified.
—Andrew Reinhard, Archaeogaming