“Every object of design sets a trap by presenting a problem in the form of what appears to be its solution. It is the spoon that determines that we should transport soup to the mouth.”
—Vilém Flusser (design philosopher)
The human cyborg dates back to the invention/discovery of the simple machines: lever, wheel, inclined plane, pulley. One could argue that tools define humans. We are rarely without a tool in at least one of our hands: a smartphone, a pencil, a trowel, a spoon. These things we make and use comprise material culture. Our stuff defines us. Excavate a modern house or an ancient tell and you will find things left behind. These things, examined singly and then taken together, tell stories. They are the stories of us. It might be late this evening as I write this, but I cannot think of a single human story that does not have at least one thing in it that is manipulated in some way.
Human-thing interaction is constant, and with today’s technology, it truly never stops. In fact, the technology we created persists without us. A pencil remains a pencil even when no one is using it. Its pencil-ness persists, designed with purpose, although that purpose could be writing/erasing answers to math homework, or possibly killing someone à la John Wick. The things we make have primary and secondary purposes, and a design can lend itself to uses the designer never considered. These alternate uses emerge from the complexity of the design, even as the design guides intuitive use.
Because humans depend on the things they make/use, we take a subservient role to them. I rely on my car to get me to the train station, and the train to get me to the office on time. I rely on my phone’s alarm to wake me up in the morning. I rely on coffee, which is dependent on climate, labor, manufacturing, transportation, and commerce among other things. The train relies on its engineer, on track maintenance, on signals. If I didn’t know any better, I’d say that we’re answering to our own technology. We serve the technology we created and then purchased. We made roads, then cars, then traffic lights. We let the lights tell us what we can and cannot do, stopped by the invisible force of a rule/law. We get impatient when the light takes too long to change, yet most of us will sit and endure the wait. We complain about our digital technologies, too, our tools of communication. We bring this pain on ourselves. As the Buddhist saying goes, “all possessions lead to suffering.”
So do we really control the machines, or do the machines control us? We take raw materials to create something, and by doing so invest that thing with rules of usage. As the use of a thing spreads like a virus among users, the rules become shared practice, and then part of our culture. We surrender our control to our things, even though we made them in the first place. But this is a fallacy.
People rely on manufacturing for instant access to any thing they need. Someone can design a spoon and create a prototype by hand. If the prototype succeeds, its design schematic is created electronically to tell machines how to make the spoon. We see this on a smaller scale with 3D printing. Humans make the design and the rules for production; the machine takes it from there. Did the human make the spoon, or just the design of the spoon? This is procedural generation in the real world.
So what about video games? These are created by people. Some games contain their own ready-made culture with lore attached, cooked up by writers. These games contain things, and the design of these things ties them to various cultures in a game. Nearly every game is a product of deist world-building and significant attention to detail. The designer is in complete control. In games containing procedurally generated content, the designer writes algorithms containing rules of manufacture that the game then executes. The designer might make a wood-grain texture and a metal texture. The designer might make a long handle or a short handle. The designer then instructs the game to create a spoon with a texture and a handle, but does not define which to use. So the software “chooses” these. The resulting spoons are similar, but different. Add enough textures and styles, and the spoons begin to show complex variety and types.
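The combinatorial logic described above can be sketched in a few lines of Python. This is a hypothetical illustration, not code from any actual game: the component pools and the `generate_spoon` function are my own stand-ins for assets and rules a designer might author. The designer supplies the parts and the rule; the machine makes the choice.

```python
import random

# Hypothetical pools of designer-made parts (assumptions for illustration).
TEXTURES = ["wood-grain", "metal", "bone", "ceramic"]
HANDLES = ["long", "short", "curved", "ornate"]
BOWLS = ["shallow", "deep", "slotted"]

def generate_spoon(rng: random.Random) -> dict:
    """Combine designer-authored parts under a designer-authored rule.
    The program, not the designer, picks the actual combination."""
    return {
        "texture": rng.choice(TEXTURES),
        "handle": rng.choice(HANDLES),
        "bowl": rng.choice(BOWLS),
    }

rng = random.Random(42)  # seeding makes the machine's "choices" reproducible
for spoon in (generate_spoon(rng) for _ in range(3)):
    print(f"{spoon['texture']} spoon, {spoon['handle']} handle, "
          f"{spoon['bowl']} bowl")
```

With four textures, four handles, and three bowls, this toy rule already yields 48 distinct spoons; each new part pool multiplies the variety, which is why small sets of designer assets can produce the "similar, but different" abundance described above.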
What if you took this to an extreme? What if you made a game in which everything is procedurally generated? What if that game included the ability to create never-before-seen cultures based on the rules set by the designer? This is already well underway with Mark Johnson’s Ultima Ratio Regum. Are these really new cultures emerging from the code? Are these machine-made, or made by the designer? Regardless of the answer, the rules dictate creation and use, forcing players to abide by what their machines tell them. The games influence our culture directly, unless we are engaged in counterplay. Even then, that counter-culture is influenced by a game and its rules. We are still subjecting ourselves to what the game expects players to do. If this happens to one player and makes the player respond in a certain way, it is machine-created behavior. But what if enough players behave in a similar way as dictated by the rules of a game? I’m calling this machine-created culture (MCC). Our culture is also (at least in part) defined by how we use our things, yet the things have their own rules that dictate use, and ultimately form the shape of the culture we have. We delegate the task of creating things to automation. Our culture ceases to be our own invention and becomes instead machine-made.
I’m interested to see where this thinking leads me (if anywhere). Machine-created BEHAVIOR is easy to see. It happens all the time. The console crashes, so I submit an error report and then re-log to assess the damage to my previous saves. If enough players play a game that encourages the behaviors of looting, or of cooperation, and then share their experiences with one another (how to exploit a game mechanic, how to create and read a loot table, how to glitch the game for gear), we see a turn from behavior to culture. It’s people-made, but also machine-made. But it’s ultimately the technology and rules within the game that determine behavior (both human and machine) and ultimately a culture of gameplay.
More interesting to me than the above is what happens when hardware executes the rules of software and unintended things happen as a gamespace is made. The hardware isn’t conscious (yet) of what it’s done. It just did what it was told. Yet at the same time it has done something in following the rules that the original maker (and the player) never expected or intended. The conjunction of hardware, software, maker, and player makes a new thing, and that entanglement proceeds as the world of the game unfolds. How does this interaction/agency create a new in-game culture, and a new extra-game culture? Is it really “culture”, or is this something new, something archaeological and anthropological in a virtual space that requires new vocabulary? I suspect I will spend the rest of my life refining and rewriting the concept of MCC. I had to start somewhere.
The ultimate test will be a game that creates its own cultures, cultures that then interact with each other and evolve, all independently of human intervention. What will that culture (or those cultures) look like? And will humans be able to recognize it? How many examples of shared behavior must exist before culture emerges from that shared behavior? At the speed of computing, how many iterations of culture will appear in a second, and how many will there be after a minute? It’s evolution made infinite yet crammed into a very small period of time. The human scale is made behemoth and slow as the generations within MCC unspool at the speed of light.
—Andrew Reinhard, Archaeogaming