So how can we fix or, with more modest ambition, improve moral choices in games? My goal would be to encourage players to treat moral decisions like any other game decision, instead of as an opportunity to slide a progress bar. Choice gets more interesting when different options each have something to recommend them. There’s value in the player weighing long-term against short-term advantage, considering how to handle a situation in more than one way, or, better yet, in more than the obvious, habitual, or direct way.
Improve What We’ve Got
Let’s start with the least radical of changes to the current ecology. Keep the existing binary moral system. Accept that players will always identify the good moral choice. Fortunately, the ethical failures of men and women in the real world aren’t failures of knowledge either. They are failures of will. We fail because we give in to temptation. If you want a reference, look up the seven deadly sins.
If evil choices embody selfishness, then why aren’t we manipulating the player’s greed, ambition, and attachment to his character? Consider Bioshock, the recent example advertising a philosophy of “rational self-interest.” Harvest Little Sisters instead of saving them, and get a bigger reward. Or do you? Bioshock’s designers execute a sad retreat by compensating the good player with gifts from the Little Sisters. The selfish choice doesn’t actually pay off. So instead of involving the player in this test of the human spirit, the designers have rigged the game against Rapture’s ethics.
Put it in microeconomic terms: there is some reward that would tempt my character into an evil act, even when I’m playing a character that’s nominally good. If not game money, then a sufficiently awesome weapon, piece of armor, new superpower, or the like. At some level of reward, I’m going to consider, and probably even commit, an evil act. Once that’s established, to paraphrase the Shaw quote, all that’s left for designers is to haggle with the player over the price. Evil choices should be tempting, not advertisements that you can roleplay a deranged, socially maladjusted psychopath.
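The haggling above can be sketched as a toy model. This is purely illustrative: the function, thresholds, and reward values are all hypothetical, not taken from any shipped game. The point is that even a “good” player has a price, and the designer’s job is to set rewards near it.

```python
# Toy model of "haggling over the price" of an evil act.
# All names and numbers here are hypothetical illustrations.

def accepts_temptation(reward_value: float, moral_threshold: float) -> bool:
    """The player commits the evil act once the reward beats their threshold."""
    return reward_value > moral_threshold

# Even a nominally good player has *some* price.
good_player_threshold = 500.0

offers = {
    "pocket change": 50.0,        # too cheap: no temptation, no choice
    "rare armor": 400.0,          # close to the threshold: interesting
    "unique superpower": 800.0,   # over the threshold: the player bites
}

for name, value in offers.items():
    tempted = accepts_temptation(value, good_player_threshold)
    print(f"{name}: {'tempted' if tempted else 'resists'}")
```

A design takeaway from the sketch: rewards far below the threshold produce no real decision, and rewards far above it produce no real resistance; the interesting choices cluster around the player’s price.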
Conversely, stop portraying every benevolent decision as low cost and high reward. That’s well and good if you want to construct Disney-esque fairytale universes, but how about using those M ratings to show off a world where goodwill isn’t always so profitable?
The next step? Get rid of the morality meter. Let the player make choices in the moment, without the need to reinforce rigid, inhuman caricatures. Without the meter overhead or the incentive (see below), the player is more likely to make choices based on circumstances, not on intentions set before the game loaded. Then let the dice fall where they may. Consequences follow as the game needs, preferably with some sense of balance to allow for choices on either side. AI characters can react appropriately to what they see the player do, or what they’ve heard of him doing.
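One way to read “react to what they see or hear”: replace the global meter with per-NPC knowledge of specific deeds. A minimal sketch, with all class, method, and deed names being hypothetical:

```python
# Sketch: NPCs react to witnessed or rumored deeds, not to a global meter.
# All names here are hypothetical illustrations.

class NPC:
    def __init__(self, name: str):
        self.name = name
        self.known_deeds: list[str] = []  # only what this NPC saw or heard

    def witness(self, deed: str) -> None:
        self.known_deeds.append(deed)

    def hear_rumor_from(self, other: "NPC") -> None:
        # Gossip spreads knowledge of specific deeds, not a number.
        for deed in other.known_deeds:
            if deed not in self.known_deeds:
                self.known_deeds.append(deed)

    def attitude(self) -> str:
        # React to circumstances this NPC actually knows about.
        if "robbed the shrine" in self.known_deeds:
            return "hostile"
        if "saved the village" in self.known_deeds:
            return "friendly"
        return "neutral"

guard = NPC("guard")
innkeeper = NPC("innkeeper")
guard.witness("robbed the shrine")
innkeeper.hear_rumor_from(guard)
print(innkeeper.attitude())
```

Note what the design buys: an NPC who never saw or heard of the crime stays neutral, so the same deed produces different reactions in different places, without any meter to monitor.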
This means that advancement systems need to sit in ignorance of the player’s moral choices. Abandon the unlocking of abilities based on the good-evil meter. This means that even your Light-side Jedi can purchase the Force lightning ability. Or the equivalent. Is that so bad? Obviously, if you’re developing your content and systems together for a new IP, this is easier than retrofitting it into an existing one. Though in the case of Star Wars, I’m sure someone in the expanded universe has managed the trick.
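The decoupling argued for above reduces to one rule: the ability shop checks only what the player has earned, never the meter. A hypothetical sketch (ability names and costs are invented for illustration):

```python
# Sketch: advancement ignores moral choices entirely.
# The point is the *absence* of an alignment parameter.

ABILITY_COSTS = {
    "force_lightning": 30,
    "heal": 20,
}

def can_purchase(ability: str, skill_points: int) -> bool:
    # No alignment check: a Light-side character may buy Force lightning.
    return skill_points >= ABILITY_COSTS[ability]

print(can_purchase("force_lightning", 35))
```

The meaningful design choice is what the function signature leaves out: with no alignment argument, the advancement system cannot reward or punish past moral decisions even by accident.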
At times, games have shown a desire to innovate and deviate from comic-book archetypes of good and evil. For example, Mass Effect draws its lines between paragon and renegade. The idea, I think, was to question the means we accept. What are you willing to do in the name of a greater good? The paragon always does the right thing, but the renegade does whatever it takes to get the job done. Put that way, it doesn’t sound so bad, does it? Sadly, the writing in Mass Effect quickly falls back into a familiar pattern: the polite, self-sacrificing good guy versus the self-centered asshole.
Morality can be more interesting than that. Bioware’s paragons and renegades want to see justice done (another opportunity for some debate) and generally save the galaxy. We don’t need comic book villainy to imagine one of these characters doing questionable things in the name of a greater good. Lies, deceit? No problem. A heavy-handed approach? Okay. A willingness to resort to violence? Sure. Torture? Hmm.
Now let’s steal some ideas from a freshman ethics course. Would you be willing to assassinate the chancellor of Germany in 1933? You would? How about when he’s spouting his creed in beer halls a decade earlier? Earlier still, when he’s a failed teenage artist? An infant with five siblings? How about his father, before his birth? Or, consider another angle: how much “collateral damage” are you willing to accept? Blow up his poor family’s house? The city of Braunau am Inn? How about every person in Austria-Hungary? The accounting of lives saved comes out ahead, you know…
On the Shoulders of Giants
We don’t have to invent the gameplay out of nothing. Applied ethics has done some of the heavy lifting of imagining our scenarios for us. In essence, the question of how far you’re willing to go exposes two kinds of ethics: utilitarianism and a form of deontology. Do the ends justify the means? Sometimes, maybe? Having a basic understanding of different ethical systems doesn’t mean that you have to preach to your players.
The strengths and weaknesses of different ethical systems create opportunities. Philosophy and ethics texts are full of imagined scenarios juxtaposing the choices of one ethical system against another. As game designers, we don’t have to rely on a text’s description or judgment. We can put situations in front of the player and let him work them out. Will the player respect cultural norms? Will he or she bend or break the law in the service of something greater? When does the player stop making exceptions? Each ethical system places distinct value on results, intents, cultures, or the persons doing the acting.
Leaving our college textbooks behind, even if you want to limit your ethics to D&D stereotypes, play with the other alignment axis. Set the rule of law and the needs of society (law) against personal liberty and individual rights (chaos). The world around us today seems made for these sorts of questions, as the interests of the state compete with the rights of the individual.
Is this a problem worth investing in, after all this diatribe? This paradigm of grossly obvious choices has been around in games for a while now, and it seems more popular than ever. Would complexity or depth actually go over well among our players? Do they want more challenging choices?
It’s reasonable to ask whether moral choices are loved for what they represent: a fantasy morality. In these games, it’s easy to make the right moral choice. We don’t personally have to sacrifice anything. We get praised and rewarded for our benevolence. What could be easier?