Original Author: Luke Dicken
I’ve been doing some very simplistic hacking around in Unity lately – really basic stuff, mostly to build a visualisation system for my research. In the process, I made a little maze, “borrowed” some assets from a Unity tutorial, and set it up so that you could control one of the characters with the arrow keys. What I’m working towards is using Automated Planning to assume control of the character, and then getting into the special sauce that I’ve been working on for the past few years to make it more awesome – but more on that another day. Anyhow, I decided to show it to my mom as it was, and she seemed to get a real kick out of it, possibly because it looks a lot shinier than whiteboards full of equations and diagrams.
The next day I got an email talking about how she really enjoyed killing the wizard at the end of the maze. I raised my eyebrows quite high – the assets I’d taken from the tutorials were a robot and a battery, and the aim was to collect the battery and return to the start, which disappeared as soon as the robot model collided with it (like I said – simplistic).
This is something that we often find in the gamedev world – the player has a tendency to bring their own perspective, bias and, frankly, baggage in with them and that can lead to wacky situations and weird unintended narratives being shoehorned by the player into the game, especially when things are not transparent to them.
Last year at the Paris Game AI Conference, the University of Minnesota’s Baylor Wetzel presented an incredibly thought-provoking session titled “Inside Your Players’ Mind with Playtesting” in which he laid out some experiments he’d been running with his students. The setup was a simple turn-based fantasy strategy game for two players. Each had a selection of different units in various quantities, and each turn chose one of their opponent’s units to attack, the aim being to eliminate the opposing army and thus kill the other player. Each game pitted one human against one AI, and after the game concluded the human player was asked to fill in a survey: to rate the AI’s difficulty, fun and realism, and to guess at the rules driving its decision process.
Wetzel’s evaluation was based on a 1-5 grading system and a sample of only 13 participants, which is perhaps not as fine-grained as the detailed analysis in the presentation really demands, so I’ll gloss over the numbers very quickly: the results showed a correlation between players’ perceptions of difficulty and realism, but none between difficulty and fun, or between realism and fun. The most interesting thing about the survey results, though, wasn’t the numerical data; it was the narratives the participants constructed to justify the AI players’ actions when guessing at the decision making. One of the best examples was given in the slides:
“The AI has a hatred of dragons and always attacks them first. He’ll attack strong units but when he sees a dragon he can’t help himself and immediately goes for them.”
Of course, it would be perfectly possible to create a logic system that obeyed this kind of narrative device, but the behaviour the player had actually observed was significantly simpler: the AI attacked the unit with the longest name first.
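To illustrate just how little machinery that rule needs, here is a minimal sketch of a "longest name first" targeting rule. The unit names and function name are my own illustrations, not taken from Wetzel's talk:

```python
def choose_target(enemy_units):
    """Pick the enemy unit whose name is longest.

    This is the entire 'decision process' - no hatred of dragons,
    no threat assessment, just string length.
    """
    return max(enemy_units, key=len)

# With names like these, the dragon happens to have the longest
# name, so it always gets attacked first - which is exactly the
# behaviour a player might rationalise as a grudge against dragons.
army = ["Dragon", "Orc", "Elf", "Troll"]
print(choose_target(army))  # prints "Dragon"
```

A player watching this AI has no way to distinguish it from a far more elaborate system, which is rather the point of the experiment.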
In many ways this ties in with a chat I had recently with Mitu Khandaker of the University of Portsmouth for her “Gambrian Explosion” column over at Game Set Watch. The moral choice system we discussed was incredibly diverse in breadth, covering the full spectrum of potential moral alignments for the player. In practice, however, players tended to polarise fully light-side or dark-side, so the majority of that diversity was never experienced.
The point is that the nature of the system isn’t important – what really matters is how the player perceives the system, because ultimately that is what drives popularity and sales. You can get away with a dumb system masquerading as something more, provided nobody notices. Equally, you can have a very sophisticated system, built on a range of features and advanced algorithms, but if that same behaviour could be approximated by something much simpler, it’s possible that people will assume you’ve done the simple thing and never appreciate the subtlety of your implementation.
It’s a balancing act to judge how smart to make your algorithms and how much quick hacking you can get away with, but the main thing I’d like you to take away from this post is that sometimes less is more, and sometimes more is less. Play into that, and make sure the player sees your work at its best – or better, if possible.
Oh and seriously, don’t show your mom half-baked things you’re working on. It never ends well.