There are no Casual Games…

Original Author: Savas Ziplies

…but Casual Gaming (and Casual Gamers)! Wait, wait… Before you stop reading and ask yourself why I would make such an offensive proposition, please hear me out.

My Origin
I have a very tense relation to the terms Casual and Core Games, founded on three things: 1) I am getting older and my best “Gamer” times are over ^^’, 2) I develop Browser Games and 3) I develop in Java! A very bad combination to take into a Game Developer discussion… trust me!
I have been a Gamer for over two decades now, started with Pong and have played a high percentage of every mentionable game ever made. Studying Computer Science and developing Games was a reasonable step. I like what I am doing now, I am good at what I do (yes, I am ^^), and I am proud of what I have achieved so far. But nowadays if I mention that I develop Games, I get asked:

  • “Oh, very cool. What Games?”
  • “Browser MMOs”
  • “Ahhh…*pause*… Casual Games!”

Even while swallowing the bitter taste of that sentence I somehow feel undervalued for being part of one of the largest Browser Games around: over four years old, still growing and established well before Facebook. A massive simultaneous multiplayer game, relying heavily on PvP, time intensive and based on a very technical story. All features that are normally associated with so-called “Core Games”. But if a Browser Game features such elements, why is the term “Browser Game” instantly equated with “Casual Game”? And why in general does “Casual Game” lead to the idea of “not a real game”?
In my specific case Browser and Casual are not the only evil terms. The dialog above often continues as follows:

  • “Why do you think I develop a Casual Game?”
  • “It’s in the Browser… probably Flash.”
  • “I am Java Developer.”
  • “I think you said you develop Games. How can you develop Games in Java?”

But that is another story I will cover in another post ^^’.

My Problem
Another thing that led me to my “new” thinking was the evolution of my own gaming habits. As mentioned, I am a Gamer, a Core Gamer you would say, who has played everything up to the latest Bulletstorm demos… but my actual gaming sessions have changed!

I am part of the working community now. Most of the day I am sitting at my work desk, and when I get home I have commitments to take care of or just want some peace. Nevertheless, I play my Angry Birds like the Birds outside my Windows (couldn’t find a transition to my iPad here ^^).
So, every evening I really have to decide what I do and IF I play. And even if I play, the time a playing session takes has shrunk a lot. For example, I played Plants vs. Zombies as well as Dead Space. I played Mirror’s Edge as well as Angry Birds. All four games would be categorized into Casual and Core Games, but the way I played them somehow did not fit the definition. I played Plants vs. Zombies for hours straight, but Dead Space in 15-20 minute chunks until the end (and not only because it was scary). Mirror’s Edge I played through in one session, but Angry Birds just 15 minutes some evenings.
Now, faced with all this classification that somehow does not fit my overall love for every type of game that “entertains” me, I asked myself: Am I doing something wrong? Or is the classification not practicable?

The Definition
With that many inconsistencies in my general understanding of Browser and, furthermore, Casual Games, I tried to find a conclusive definition. During that search I noticed that I had never read so many different ways of defining one thing, especially as most definitions come down to the attitudes of the writer. Because of that, let’s start with a “not so ideal” example from the Urban Dictionary:

Casual games are any kind of game that is over hyped and over rated or just the exactly same thing as a previous version that was over hyped and over rated, these games are known by gamers as “crap” because even with all the perfect scores the games still have mediocre graphics and shitty plots that casual gamers think are good. Usually the only thing that makes a casual game not-total shit is the multiplayer; otherwise these games would get ratings lower than dirt. With shitty graphics and a generally horrible campaign mode, the halo series is the indisputable king of casual games.

But all jokes aside, here is a more serious definition, from the Casual Games SIG from 2005/2006:

The term “casual games” is used to describe games that are easy to learn, utilize simple controls and aspire to forgiving gameplay. Without a doubt, the term “casual games” is sometimes an awkward and ill-fitting term – perhaps best described as games for everyone. Additionally, the term “casual” doesn’t accurately depict that these games can be quite addictive, often delivering hours of entertainment similar to that provided by more traditional console games. To be sure, there is nothing “casual” about the level of loyalty, commitment and enjoyment displayed by many avid casual game players – just as there is nothing “casual” about the market opportunity and market demand for these games.

That is an interesting definition. Let’s have a look at some more. Wikipedia describes:

Most casual games have similar basic features:

  • Extremely simple gameplay, like a puzzle game that can be played entirely using a one-button mouse or cellphone keypad
  • Allowing gameplay in short bursts, during work breaks or, in the case of portable and cell phone games, on public transportation
  • The ability to quickly reach a final stage, or continuous play with no need to save the game
  • Some variant on a “try before you buy” business model or an advertising-based model

About.com extends the definition with specifics about the price point and the platforms:

  • Style Of Play: Casual Games are now considered “games for everyone” – with a special emphasis on whether your mom can play them.
  • Distribution: Casual Games are frequently distributed with a “Try Before You Buy” model, where a person can play for an hour for free and then decide whether or not to purchase. This model grew out of the Shareware distribution model.
  • Price: Casual Games are usually sold for $19.95.
  • Platforms: Casual Games can now be found on Cell Phones and Consoles such as the Xbox 360 via the Xbox Live system.

and

Casual games are most often played via a Flash or Java based platform on a PC, but are now appearing in larger quantities on video game consoles and mobile phones.

The definitions often come with a timeframe of around the millennium or 2001.

An Interlude
Let’s move away a little from the term “Casual Games” and the definitions given and have a look at the last sentence: the year! If we look at what happened and was released around the time that is somehow “defined” as the origin of the term, we find things like Java 1.3, which introduced the HotSpot VM and built the foundation for Java ME (J2ME at that time), which in turn brought gaming to ordinary phones in a big way.
This interlude is important to understand how Games opened up to a larger community (yes, long before the Wii), away from the nerdy PC hardware geeks that “pimped” their autoexec.bat to play games. Today those geeks make up a large majority of the people defining and mostly complaining about “Casual Games” (no offense).

The Ambiguity
If we sum up the definitions the following list could be seen as a general understanding of Casual Games:

  • Easy to learn/simple gameplay
  • Simple controls
  • Forgiving Gameplay/quickly reach a final stage
  • Gameplay in Short Bursts
  • Games for Everyone
  • Up to $20
  • Try before you Buy
  • Flash and Java Games on the PC side / DLGames on Xbox Live, PSN, etc.
  • Since 2000/2001

This list looks pretty decent, doesn’t it? As you can guess from the headline, the list is not as decent as I hoped it to be. Let me refer to a handful of games that should fit these rules and are called casual, but that do not really allow a distinct identification of what a casual game should be.

Let’s start off with Lara Croft and the Guardian of Light. A franchise that may have brought many women to gaming, featuring intense 3D platforming and 3rd Person Shooting gameplay. With GoL it became a DLG with a strict isometric perspective. It’s on PC and Consoles, downloadable, costs under $20 and has (in my opinion) simple controls to master the finely placed action and puzzles. Now, are Tomb Raider and Lara Croft becoming casual? Is it just that game? Or does Lara Croft not count?
Another example would be every Wii Game. Nearly every gaming site and every “Core Gamer” defines a Wii Game as a Casual Game. Why? Because your Family got into “your hemisphere”?

In general, if we just take some of the bullet points, the definitions describe things that nearly every game, no matter if Casual or Core, wants to achieve nowadays, or that are simply gaming tradition:

  • Try before you Buy

Demos, Shareware, … Nothing new to the experienced Gamer and Games in general.

  • Gameplay in Short Bursts

Actually, this is something that has popped up more and more since the advent of consoles. PC users are used to saving games, being able to use up space on their hard disc. For console gamers this was no natural thing, so developers very often used stages with manual and automatic save points that were not placed too far apart, so as not to enrage the player on death. I mentioned Dead Space and my very tight gaming sessions playing through it. This was only possible because of the very “controlled” stages and their save points that I could reach in the given time frame.

  • Forgiving Gameplay/quickly reach a final stage

This as well is something that First-Person Shooters especially provide nowadays. “Old” Gamers remember a time when it was a necessity to know where the next HealthPack was. Today, we rely on a regenerative system, often presented with the argument of being more accessible to more gamers (“games for everyone”). Becoming casual? And regarding the second part, I could get heretical now, but games such as Modern Warfare do not really provide that much gaming time to the user anymore. 5-6 hours is sometimes normal.
The problem is that gameplay matching the given definitions is far older than the term. This is why gameplay elements can hardly be used to define the games themselves. What is left are technical definitions, prices and hardware to describe the so-called “Casual Games”, and these distinctions blur more and more.
So, with all this ambiguity coming from the point of defining the Game, wouldn’t it be better to define the interaction?

Classic Classification
We tend to define things based on their surroundings and on the “object” using the “subject” (“People Playing a Game” in our case), because that is what we visually perceive. And as it is easy for us to define unknown things from what we know, we fold the Browser into our experience of Casual Games: the Browser was never a dedicated environment for games, but a place where so many people, not only gamers, do so many things. Therefore, it is very easy for “Core Gamers” to define games such as Plants vs. Zombies as Casual Games, as their Moms or Dads are playing them.
The problem with the classification and the according definitions of Casual Games is that they try to define rigid constraints that these games must fit into. In a time where it becomes harder and harder to “just” define the Genre of a game (e.g. Puzzle-Survival-Horror Adventure-Games), it is even harder to define an umbrella term for games in general. But my personal strongest point regarding the definition of “Casual Games” is that most of the people that play “Casual Games” do not even know that these are “Casual Games” (or did your Mother or Sister ever talk about Casual Games when playing Wii or DS?).
The classification is normally made by “Core Gamers”, Developers or Game Editors that (in many cases) want to separate themselves from these “unappreciated” games. But as we saw, the definitions normally used to describe Casual Games do not fit the real world anymore, especially as these games have evolved over the last years, away from the most simplistic Flash Games to some of the best gaming experiences of the last decade (e.g. Limbo and more).
What is required is to classify not only the Games but the interaction, the gaming. For gameplay we have genres. Now we need a new graduation for Facebook, Flash, Indie and everything else that has evolved our gaming experience (and will in the future). As to what this New Classification could be, I can give you no answer. That needs a long discussion and a broad overview of everything gaming has to offer nowadays.
But what all Gamers need to do is be open-minded to new possibilities and not argue with the term “Casual Game” anymore, especially those that call themselves Core Gamers. I think we all do not want to hear another: “Epic Mickey is a Casual Game. It’s on the Wii!”

My Conclusion
My intention was to make a polemic assertion, present it with my experience and many questions, and conclude with my own way of thinking. If you were looking for THE definition of Casual Gaming, this post does not deliver it. It just brings up some things that do not work out in our current scheme of games classification. With the ever growing number of releases that qualify under our current definition of Casual Games, we should quickly start thinking about a new way of filtering, one fitting all modern characteristics such as Steam and all the other new ways of developing, presenting and distributing games that challenge the “old way” of games development.
I started off by arguing that there are no “Casual Games” but only “Casual Gaming”, and I stand by this even though I offer no new definition, because such a broad definition of games cannot be made while the gamers that count are so broad and different themselves. I admit that I only presented arguments for my theory, but as long as it is possible to oppugn the current definition this easily, it is in our hands to discuss and craft better definitions for our most beloved games… which are changing pretty quickly right now!

Also published on INsanityDesign.com

The Art of the 48 Hour Game

Original Author: Nick Darnell

The idea isn’t new: a few people get together one weekend with a common goal, make a game in 48 hours. I have game jams on the mind because the Global Game Jam is this weekend, so I thought I might do a post on hosting and participating in a successful Jam.

I’ve been party to five game jams over the last few years and I’ve made some observations of what works really well and what doesn’t.

1) The Organizer

Every Jam needs an organizer, the person who aligns everyone and makes sure there’s a Jam Site and that everything is ready to go long before anyone else shows up. There’s a fair amount of work that goes into a Jam, especially if you require internet access. A lot of businesses that might like or be open to hosting it tend to nix it because of the work, risk or liability issues. Depending on the size of the Jam, the organizer also needs to recruit helpers: people who have experience with Jams and can help out those who are new to the experience, and possibly to the technology primarily being used in the Jam. Also, don’t forget to set up a source control server for your participants. Otherwise how will they collaborate?

2) Pick a Framework

We tend to use XNA for our Game Jams. While I’m sure you could use UDK or Unity for a Game Jam, those frameworks have a much higher barrier to entry. If you are unfamiliar with the technology, you’ll spend a lot of time chasing down tutorials and you just don’t have that kind of time in a jam. Doing an XNA game in a weekend for someone who has never touched XNA is far more achievable. But you should ask your participants; heck, I almost want to host a Jam focused around Little Big Planet 2.

3) The Schedule

Typically a Jam runs Friday to Sunday, starting on Friday around 5-6 o’clock and ending at approximately the same time on Sunday to do presentations. We’ve found that the first night is usually not the time to start burning the candle at both ends; most people are already tired from work. You should recommend that the jammers get into groups, talk about the game, go get some food and then do a little work on it. Go home and get plenty of rest, then come in Saturday bright and early, ready to write a ton of code for 12+ hours.

4) Simple and Creative Theme

Your theme should be creative and simple: something that allows for a lot of creativity but doesn’t completely open the participants up to making whatever game they want. The Global Game Jam themes tend to actually be my least favorite. The group organizing it tends to pick themes that are kind of vague, and actually a little hard to turn into concrete ideas for games. Last year’s theme was a little better than their first: “Deception – and you had to incorporate a net, a set or a pet”. The year before that it was “As long as we have each other, we’ll never run out of problems”. Personally, I find these kinds of themes a little annoying because they tend to turn into a challenge of “how do I morph my game idea into this vague theme”, which I don’t think is what you want.

You should really try and give your jammers something concrete to build from. For example, the local jam we do in Raleigh, NC has had themes like “Triangles” and “Blocks”. My personal favorite was Madlibs: everyone comes to the jam with 5 nouns, 5 adjectives and 5 verbs. We then entered them into a simple website which randomized everyone’s words into game titles in the form of “Adjective Noun Verb”, so you might see titles like “Meaty Nun Flyer”, “Musical Dragon Twirler” and “Pixelated Martini Roller”, to name a very small number of the ones generated. I like this method a lot because a title like “Meaty Nun Flyer” is evocative, immediately conjuring up possible game mechanics and a mental image, something I find very important when I’m trying to crank out a game in 48 hours.
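A generator like that is only a few lines of code. Here is a purely hypothetical C++ sketch of the “Adjective Noun Verb” trick (the actual site’s code was never published, so everything here is invented):

#include <cstdlib>
#include <ctime>
#include <iostream>

int main()
{
    // A few of the collected words (everyone brings 5 of each at the real jam).
    const char *adjectives[] = { "Meaty", "Musical", "Pixelated" };
    const char *nouns[]      = { "Nun", "Dragon", "Martini" };
    const char *verbs[]      = { "Flyer", "Twirler", "Roller" };

    std::srand((unsigned)std::time(0));

    // Print five random "Adjective Noun Verb" titles.
    for (int i = 0; i < 5; ++i)
        std::cout << adjectives[std::rand() % 3] << " "
                  << nouns[std::rand() % 3] << " "
                  << verbs[std::rand() % 3] << "\n";
    return 0;
}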

5) The Grouping

After the theme has been announced you should give everyone 15 minutes or so to come up with an elevator pitch for their game (if you are doing something like Madlibs you skip this step). Then have each person with an idea stand up and pitch it, and have someone write down the title and a 3-word description. After everyone finishes comes the least fun part, the puppy-killing phase: you need to reduce the number of game ideas to the ones people actually want to work on.

We tend to do this by first getting a general consensus on what people would even be interested in working on. Just go down each game title, remind them what it was and then have them raise their hand if they would want to work on it. People can vote multiple times in this phase. You should cut anything from the running that doesn’t have at least 3-4 people interested in it (numbers vary based on the size of the jam).

After you’ve narrowed the list you should have the final grouping phase. Start by going down the remaining titles and asking people to vote once, picking the title they want to work on even if no one else wanted to work on it. People can skip this phase if they are unsure. This will give you the list of games that are going to be made. From here, anyone left undecided can just join the group they are most interested in.

6) Forget

Forget everything you ever learned about software engineering. It’s actually nice to remove yourself from the mindset of someone writing production code 50+ hours a week. Copy-paste code, make everything public. You’ve got 48 hours; it’s really not the time to worry about overhead, performance or maintenance. Always take the quick and dirty path to getting the game done; you simply don’t have time to constantly refactor systems during this process.

7) Gameplay First

Get the mechanics and controls of the game working before anything else. Aim to get the game done by Sunday at noon, leaving the next 4-5 hours for the random bits like improving graphics (maybe adding a neat shader effect) and adding a title screen – all of the polish items, basically. It also means you’ll be having fun on the last day instead of stressing over your game only being half done.

8) The End

Just some final thoughts on the subject. If you’re a student, you should come to game jams. Industry folks do participate and it’s a good way to make contacts. If not, at the very least you’ll have something cool to show a potential employer. If you’re already in the game industry, you should participate too. The experience can be very invigorating: a return to a simpler time when games and projects didn’t stretch on for several months or years. It’s nice to be able to sit back and feel like you’ve accomplished something whole, instead of merely a part, in such a short time.

Cross posted from my personal blog.

Managing Decoupling

Original Author: Niklas Frykholm

The only way of staying sane while writing a large complex software system is to regard it as a collection of smaller, simpler systems. And this is only possible if the systems are properly decoupled.
Ideally, each system should be completely isolated. The effect system should be the only system manipulating effects and it shouldn’t do anything else. It should have its own update() call just for updating effects. No other system should care how the effects are stored in memory or what parts of the update happen on the CPU, SPU or GPU. A new programmer wanting to understand the system should only have to look at the files in the effect_system directory. It should be possible to optimize, rewrite or drop the entire system without affecting any other code.
Of course, complete isolation is not possible. If anything interesting is going to happen, different systems will at some point have to talk to one another, whether we like it or not.
The main challenge in keeping an engine “healthy” is to keep the systems as decoupled as possible while still allowing the necessary interactions to take place. If a system is properly decoupled, adding features is simple. Want a wind effect in your particle system? Just write it. It’s just code. It shouldn’t take more than a day. But if you are working in a tightly coupled project, such seemingly simple changes can stretch out into nightmarish day-long debugging marathons.
If you ever get the feeling that you would prefer to test an idea out in a simple toy project rather than in “the real engine”, that’s a clear sign that you have too much coupling.
Sometimes, engines start out decoupled, but then as deadlines approach and features are requested that don’t fit the well-designed APIs, programmers get tempted to open back doors between systems and introduce couplings that shouldn’t really be there. Slowly, through this “coupling creep” the quality of the code deteriorates and the engine becomes less and less pleasant to work with.
Still, programmers cannot lock themselves in their ivory towers. “That feature doesn’t fit my API,” is never an acceptable answer to give a budding artist. Instead, we need to find ways of handling the challenges of coupling without destroying our engines. Here are four quick ideas to begin with:
1. Be wary of “frameworks”.
By a “framework” I mean any kind of system that requires all your other code to conform to a specific world view. For example, a scripting system that requires you to add a specific set of macro tags to all your class declarations.
Other common culprits are:
  • Root classes that every object must inherit from
  • RTTI/reflection systems
  • Serialization systems
  • Reference counting systems
Such global systems introduce a coupling across the entire engine. They rudely enforce certain design choices on all subsystems, design choices which might not be appropriate for them. Sometimes the consequences are serious. A badly thought out reference system may prevent subsystems from multithreading. A less than stellar serialization system can make linear loading impossible.
Often, the motivation given for such global systems is that they increase maintainability. With a global serialization system, we just have to make changes at a single place. So refactoring is much easier, it is claimed.
But in practice, the reverse is often true. After a while, the global system has infested so much of the code base that making any significant change to it is virtually impossible. There are just too many things that would have to be changed, all at the same time.
You would be much better off if each system just defined its own save() and load() functions.
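As a minimal sketch of what that means (Stream is a stand-in for whatever file abstraction your engine already has; all names here are invented):

struct Stream;

class EffectSystem {
public:
    // Each system picks its own format and owns its own versioning.
    void save(Stream &s) const;
    void load(Stream &s);
};

class SoundSystem {
public:
    // Completely independent of EffectSystem's format -- changing one
    // cannot break the other, and neither needs a global framework.
    void save(Stream &s) const;
    void load(Stream &s);
};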
2. Use high level systems to mediate between low level systems.
Instead of directly coupling low level systems, use a high level system to shuffle data between them. For example, handling footstep sounds might involve the animation system, the sound system and the material system. But none of these systems should know about the others.
So instead of directly coupling them, let the gameplay system handle their interactions. Since the gameplay system knows about all three systems, it can poll the animation system for events defined in the animation data, sample the ground material from the material system and then ask the sound system to play the appropriate sound.
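In code, that mediation might look like the sketch below. All the types and method names are invented stand-ins (the real interfaces aren’t shown here); the shape is the point – the gameplay glue sees all three systems, and none of them sees the others.

#include <cstring>

struct Vec3 { float x, y, z; };
struct AnimationEvent { const char *name; Vec3 position; };

// Invented stand-in interfaces for the three engine systems.
class AnimationSystem {
public:
    int num_events(unsigned character) const;
    const AnimationEvent &event(unsigned character, int i) const;
};
class MaterialSystem {
public:
    unsigned material_at(const Vec3 &p) const;
};
class SoundSystem {
public:
    void play(unsigned sound, const Vec3 &at);
};

unsigned footstep_sound_for(unsigned material);  // gameplay-side lookup

// Gameplay-level glue: poll animation events, sample the ground material,
// then trigger the matching sound.
void update_footsteps(unsigned character, AnimationSystem &anim,
                      MaterialSystem &materials, SoundSystem &sounds)
{
    for (int i = 0; i < anim.num_events(character); ++i) {
        const AnimationEvent &e = anim.event(character, i);
        if (strcmp(e.name, "footstep") != 0)
            continue;
        unsigned ground = materials.material_at(e.position);
        sounds.play(footstep_sound_for(ground), e.position);
    }
}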
Make sure that you have a clear separation between this messy gameplay layer, that can poke around in all other systems, and your clean engine code that is isolated and decoupled. Otherwise there is always a risk that the mess propagates downwards and infects your clean systems.
In the BitSquid Tech we put the messy stuff either in Lua or in Flow (our visual scripting tool, similar to Unreal’s Kismet). The language barrier acts as a firewall, preventing the spread of the messiness.
3. Duplicating code is sometimes OK!
Avoiding duplicated code is one of the fundamentals of software design. Entities should not be needlessly multiplied. But there are instances when you are better off breaking this rule.
I’m not advocating copy-paste-programming or writing complicated algorithms twice. I’m saying that sometimes people can get a little overzealous with their code reuse. Code sharing has a price that is not always recognized, in that it increases system coupling. Sometimes a little judiciously applied code duplication can be a better solution.
A typical example is the String class (or std::string if you are thusly inclined). In some projects you see the String class used almost everywhere: if something is a string, it should use the String class, the reasoning seems to be. But many systems that handle strings do not need all the features you find in your typical String class: locales, find_first_of(), etc. They are fine with just a const char *, strcmp() and maybe one custom written (potentially duplicated) three-line function. So why not use that? The code will be much simpler and easier to move to SPUs.
Another culprit is FixedArray<int, 5> a. Sure, if you write int a[5] instead you will have to duplicate the code for bounds checking if you want that. But your code can be understood and compiled without fixed_array.h and template instantiation.
And if you have any method that takes a const Vector<T> &v as argument you should probably take const T *begin, const T *end instead. Now you don’t need the vector.h header, and the caller is not forced to use a particular Vector class for storage.
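For illustration, here is a sketch of the decoupled version (sum is an invented example function):

// Coupled version: T sum(const Vector<T> &v) forces vector.h on everyone.
// Decoupled version: any contiguous storage works, no container header needed.
template <typename T>
T sum(const T *begin, const T *end)
{
    T total = T();
    for (const T *p = begin; p != end; ++p)
        total += *p;
    return total;
}

// The caller decides on storage -- a plain stack array works fine:
int scores[3] = { 1, 2, 3 };
int total = sum(scores, scores + 3);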
A final example: I just wrote a patching tool that manipulates our bundles (aka pak-files). That tool duplicates the code for parsing the bundle headers, which is already in the engine. Why? Well, the tool is written in C# and the engine in C++, but in this case that is kind of beside the point. The point is that sharing that code would have been a significant effort.
First, it would have had to be broken out into a separate library, together with the related parts of the engine. Then, since the tool requires some functionality that the engine doesn’t (to parse bundles with foreign endianness) I would have to add a special function for the tool, and probably a #define TOOL_COMPILE since I don’t want that function in the regular builds. This means I need a special build configuration for the tool. And the engine code would forever be dirtied with the TOOL_COMPILE flag. And I wouldn’t be able to rearrange the engine code as I wanted in the future, since that might break the tool compile.
In contrast, rewriting the code for parsing the headers was only 10 minutes of work. It just reads a vector of string hashes. It’s not rocket science. Sure, if I ever decide to change the bundle format, I might have to spend another 10 minutes rewriting that code. I think I can live with that.
Writing code is not the problem. The messy, complicated couplings that prevent you from writing code are the problem.
4. Use IDs to refer to external objects.
At some point one of your systems will have to refer to objects belonging to another system. For example, the gameplay layer may have to move an effect around or change its parameters.
I find that the most decoupled way of doing that is by using an ID. Let’s consider the alternatives.
Effect *, shared_ptr<Effect>
A direct pointer is no good, because it will become invalid if the target object is deleted, and the effect system should have full control over when and how its objects are deleted. A standard shared_ptr won’t work for the same reason: it puts the lifetime of Effect objects outside the control of the effect system.
Weak_ptr<Effect>, handle<Effect>
By this I mean some kind of reference-counted, indirect pointer to the object. This is better, but still too strongly coupled for my taste. The indirect pointer will be accessed both by the external system (for dereferencing and changing the reference count) and by the effect system (for deleting the Effect object or moving it in memory). This has the potential for creating threading problems.
Also, this construct kind of implies that external systems can dereference and use the Effect whenever they want to. Perhaps the effect system only allows that when its update() loop is not running and want to assert() that. Or perhaps the effect system doesn’t want to allow direct access to its objects at all, but instead double buffer all changes.
So, in order to allow the effect system to freely reorganize its data and processing in any way it likes, I use IDs to identify objects externally. The IDs are just integers uniquely identifying an object, which the user can throw away when she is done with them. They don’t have to be “released” like a weak_ptr, which removes a point of interaction between the systems. It also means that the IDs are PODs. We can copy and move them freely in memory, juggle them in Lua and DMA them back and forth to our heart’s content. All of this would be a lot more complicated if we had to keep reference counts.
In the system we need a fast way of mapping IDs back to objects. Note that std::map<unsigned, Object *> is not a fast way! But there are a number of possibilities. The simplest is to just use a fixed size array with object pointers:
Object *lookup[MAX_OBJECTS];
If your system has a maximum of 4096 objects, use 12 bits from the key to store an index into this array and the remaining 20 bits as a unique identifier (i.e., to detect the case when the original object has been deleted and a new object has been created at the same index). If you need lots of objects, you can go to a 64 bit ID.
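A minimal sketch of that scheme, under the 4096-object assumption above (all names invented):

static const unsigned INDEX_BITS  = 12;
static const unsigned MAX_OBJECTS = 1 << INDEX_BITS;   // 4096
static const unsigned INDEX_MASK  = MAX_OBJECTS - 1;

struct Object;
Object *lookup[MAX_OBJECTS];
unsigned generation[MAX_OBJECTS];   // bumped every time a slot is reused

// Build an ID from a slot: low 12 bits index, high 20 bits generation.
unsigned make_id(unsigned index)
{
    return (generation[index] << INDEX_BITS) | index;
}

// Resolve an ID, returning 0 for stale IDs whose slot has been reused.
Object *resolve(unsigned id)
{
    unsigned index = id & INDEX_MASK;
    if (generation[index] != (id >> INDEX_BITS))
        return 0;
    return lookup[index];
}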
That’s it for today, but this post really just scratches the surface of decoupling. There are a lot of other interesting techniques to look at, such as events, callbacks and “duck typing”. Maybe something for a future entry…

Cats have nine lives

Original Author: Paul Evans

“Oh no I died again!” – not something you generally get to say in real life is it? Yet gamers readily accept death as a temporary failure state – just a blip towards achieving their goal. Death has been abstracted in games in many varied ways over the years and in some games abstracted away completely. It has been used to provoke emotion, to punish, to teach, to up the stakes or just to increase a score.

But the cat came back the very next day…

Arcades with video games and pinball machines are quite rare now in the UK (these days they are full of fruit machines) but are still a good starting point for looking into players’ lives in games. Lives in Pinball are represented by identical silver balls – a limited resource per game. A player may start with three balls and might even win extra balls, but eventually the game will end with a score. Score is used as a measure of success across all of your lives, with each finite reincarnation another chance at increasing the net accomplishment.

PacMan actually shrivels up into nothing and makes a pitiful whining noise as his little spherical body implodes at the touch of Inky, Blinky, Pinky or Clyde… but another identical PacMan takes his place. These games are over thirty years old, but the themes of credits, lives and scores still permeate modern games in all sorts of disguises.

This kind of death and credit system is useful to monetize failure. Want to see more of the game? Well okay then… you have ten seconds to cough up the money and we will let you see some more. If you do not come up with the money then it’s back to the start for you! Progression therefore will cost you, with mastery of skills and patterns only earned after costly failure. Is it worth antagonizing a player with this kind of death in games that are not pay-per-play?

Games like Sonic teach via death. I cannot believe these games were designed so that a brand new player could get through the entire game without learning through a fatal kind of trial and error. Punishment for death can go beyond the arcades, where you can always bribe your way through a section, because in some games you pay in time instead. After dying a few times in a row the punishment goes from a pained animation to having to restart an entire area – taking away hard-fought progress from the player. While some players thrive on this type of challenge, I would argue it can be demotivating. Only a certain kind of player continues on to conquer it; others will just walk away in frustration.

Games with lives and credits often also keep score, either in terms of points or fastest times. They promote competition through leaderboards, with players trying to outdo each other for bragging rights.

If it bleeds we can kill it (apart from if it is the player)

There is a snowy mountain path continuing to the NORTH. To the SOUTH a noisy group of villagers carrying flaming pitchforks are closing in on your location. They all seem kind of angry. Well apart from the springer spaniel that is wagging his tail furiously. He seems to think this is the best walk ever.

> NORTH

You tripped over something and fell off the cliff; a whooshing sound fills your ears, followed by a very nasty crunching noise. Breathing heavily and looking up, you see the villagers standing where you were, laughing. One shouts down to your broken body “enjoy your trip?” Another takes your INFRARED SUNGLASSES that fell from your pocket before your tumble and puts them on. “OOOh INFRARED SUNGLASSES, I’ve always wanted a pair of these! I’m seeing red… oh look at that INVISIBLE TRIP WIRE across the path!” Rolling your eyes, you promptly expire.

THE END.

The lack of death (but the presence of peril) can make for a very accessible adventure game.

In Fable III you get knocked down, but you get up again. There is a price though: you get scarred up and you could lose some experience towards your next guild seal. This cosmetic toll is unbearable to some – I have known people to quit to the dashboard to avoid the auto save kicking in and saving their character in a knocked-out state. Of course others play well but get knocked out on purpose… because scars are cool and perhaps they like the villagers making negative comments about their looks (just another reason to take the safety off).

The player in these games has one life that is constantly threatened but never ends. It gives a player license to experiment without fear of a sticky end. Games where the player cannot die often focus on telling a story rather than keeping score. They might also have to fight against the perception of being “easy”.

Ouch that really stings

There is a breed of games where death is both an inconvenience and a common occurrence. Gone are the limits on lives and credits. A player can retry many different things in different ways without fear of being sent back to an arbitrary check point a designer decided would be a good place to start again.

Some games simply rewind time to before that death happened. No penalty really, unless the backwards noise really grates on you.

Crackdown and Bioshock have the concept of clones… so just like PacMan and the hero’s ship in Space Invaders, there is always a new vessel for a player to jump into. Though in Crackdown and Bioshock the effects of the previous clone are still present in the world. You literally are a player clone army, and in Crackdown II there is actually an achievement awarded for finding all the ways an agent can die. Death then can be its own reward.

Demon’s Souls adds a community aspect to death. Death can come completely out of the blue but blood stains can be used to communicate to other wary players what is in store for them – death is a form of shared learning experience.

Limbo sees the little boy avatar you control dying over and over again with very little game penalty. Actually, I take that back: the deaths are really quite disturbing and stomach-churning, though I’m sure some of you reading this just think they are funny! Heck, it is probably a good way of acclimatizing to that kind of horror. Each death is over so quickly and so little ground is lost that it hardly costs any player time at all to die.

These games attempt to limit the frustration of death by placing the player close to the point of failure. They dust off the player and say “have another go, try something different”. Many of these games want you to complete them and know that some of the challenges they offer are tough.

No you really are dead

Starting over after a permanent death is a bit like Groundhog Day: the entire world is unaware that all this has happened before (and all this will happen again). The player though has hopefully gleaned some knowledge to progress past that sticky point, so perhaps after the xth attempt it will actually happen differently!

The downside of this is that the game does not know you played through a section either, so if there are any sections of non-interactive play then a player is doomed to sit through them again and again. As a player I really, really do hate times where I am forced to sit through sections of the game deemed so essential that I must watch them before interacting again. Perhaps there is a happy compromise: a separate save file that indicates whether a player has experienced that oh-so-important content once, even if they revert to a previous save?

GTA and Red Dead Redemption do sometimes have fresh dialogue for retrying a totally failed mission from a checkpoint. GTA does exact a penalty for death (or at least lack of health) – the player does not automatically revert to a save but instead ends up in a hospital with the contents of their previously bottomless pockets emptied.

Red Dead Redemption did rely on teaching me that I cannot disarm people in story mission duels by killing me off and reverting me to a save / checkpoint. I really would have preferred that disarmed opponents who had to die killed themselves, rather than the game gimping the disarm mechanic and killing me in duels. Being quite a persistent gamer I believed them to just be formidable opponents, so I tried one particular duel over and over again for an hour before I caught on (sigh).

If a player veers away from where the story teller wants them to go, they are killed off and put back somewhere they can make a different choice. It does become frustrating if that choice is not clear, or if a player sees what they are supposed to do but lacks the skill to do it. Perhaps worse still, when the player remembers something from several hours ago that would change things now. Arguably save slots offer redemption for mistakes… at the cost of perhaps many hours of player time. Multiple save slots also potentially endanger the weight of player decisions.

Mr. Resetti would disagree!

You wear yellow, they wear red

Supporting cast members are often as expendable as Star Trek’s famous red shirts.

The Sims eventually die if you play with them. Sure you could just switch characters, keep reverting to a save or allow them to continually imbibe from the water cooler of youth but you are always able to let time take its course and eventually death will come to collect them. Then they are gone. Ok, well you might get a ghost hanging around but they are not remotely the same.

Mass Effect II rewards good leadership and truly knowing your own team with extra story and different outcomes. There comes a point where deep knowledge of the supporting cast’s strengths and weaknesses can save their lives.

Having a player care about their supporting cast is terrible when it feels unfair (why won’t this healing potion work on them this time… it worked the other thousand times?!). When done right, though, it gives a player extra emotional investment in not only the avatar they control but in more of the world they are experiencing.

That’s all folks!

I would be very interested in hearing about how you think player death may evolve in the future, more examples of the above and your own personal experiences in the comments. Many argue games are getting easier – some would say they are becoming more accessible. A few designers have designed away player death completely in their games.

Geometry Wars understands that smashing down a button lots of times should just let the player try again (a touch many gamers I am sure appreciate). Other games use checkpoints and saves to snatch failed player actions out of the air as if they never happened, or return a carbon copy of that character back to the world, sometimes none the wiser of the predecessor’s fate.

You can follow me on twitter: @paulecoyote.

Thanks

Thanks to @daredevildave.

Also many thanks to my lovely wife for proofreading this several times before I published it.

Everything in this article is Paul’s opinion alone and does not necessarily reflect his employer’s views, nor constitute a legal relationship. Copyright ©2011, Paul Evans.

Windows 7 mobile – a rant.

Original Author: Jake Simpson

So the Windows 7 phone launched recently, to much fanfare but not huge initial sales – reports were as low as 40k sales on launch day. I dare say they are larger than that, but still, MS has to be disappointed with those kinds of numbers at the outset.

However, to give it its fair due, having played with a friend’s Win7 phone for a bit, this really is a nice piece of kit. The UI is finally where it needs to be and the whole experience is pretty much on a par with the iPhone, even if functionally it is not – lack of cut and paste, small irritations like that. Overall it’s definitely a worthy competitor, no question.

But for lots of mobile devs, on closer examination, there is a hidden gotcha that absolutely astounded this author upon discovery.

All development on the Windows 7 mobile platform has to be done in XNA / C#. The Windows 7 mobile experience is basically a platform where you build CLR assemblies of what is essentially pseudo asm code, which is then recompiled at run time into native platform binary code.

What that difficult-to-parse sentence means, at root, is that all the mountains of C, C++ and potentially Objective C code that already exist for apps/games on other platforms cannot be ported across to the Windows 7 mobile platform. You have to rebuild from scratch. If you are a developer, you begin to see where I am going with this.

So why is MS doing this? From thinking and asking around I’ve come to the conclusion that the reasoning behind this is manyfold, to wit:

1) You get automatic access to all the Xbox Live APIs that already exist in the XNA environment for the Xbox 360. This isn’t to be sneezed at – high score tables, achievements and so on, plus the entire infrastructure is there to expand on. There’s a lot there, and it also gives rise to the idea of linking a mobile implementation with an Xbox 360 implementation – sharing of data or gameplay.

2) MS gets to hide the metal. Any developer who was around in the early 90’s – or indeed is mainly a PC developer today – will know the pain of compatibility lab testing, where you have to deal with many strains of video card drivers, sound cards, memory and CPU combinations and so on. Even today, when DirectX has pretty much taken over as the graphical API of choice on the PC, there are still lots of little gotchas that developers have to take into account to handle the widely diverse abilities of all the PCs out there. Smart phones are heading in this direction – traditional handsets have already been there for years. It’s not unusual to have over 50 different SKUs of a Java-based game for different handsets – and now smart phones (particularly Android) are heading the same way. If you can’t control the hardware you run on (as Apple does) then you have to support everything, and that’s time consuming and painful for the developer.

So Microsoft’s answer to this is to basically create a run time compiler that takes your generic code and recompiles it for the specific hardware when you run the application – the idea being that the compiler is specific for whatever hardware you are using and will create optimal code for that platform.

Traditionally a C++ compiler takes your high level C++ text code and makes actual binary assembly out of it – code that is native to the target you are compiling for. On a PC, it makes x86 assembly language. On a Playstation 3, it’ll make PowerPC assembly code. The code generated is ready to run in the native language of the chip it’ll be running on. In the JIT world of CLR assemblies, which is what the C# compiler generates, this is not so. It generates ‘generic’ code – code that can never run on ANY platform until it has been recompiled to target one. It’s an intermediate step: take the high level text, crunch it down to generic assembly language, and that generic object file (called an Assembly) can then be compiled “just in time” to run on the target platform when you ask to actually run the application.
In this situation a developer never really needs to know anything deeply specific about the hardware (some assumptions are necessary, but that’s in the spec for the platform anyway – you will *always* have X memory and some degree of hardware graphical acceleration). This frees developers from having to worry about targets – they don’t care if this is a phone, an MS XNA-driven tablet or an XNA-enabled Internet Fridge – and lets them concentrate on the game instead, while giving MS the ability to push this platform onto more varied physical devices without worrying about the developers’ code running on it.

3) Exclusives. As Sony no doubt thought with the PS3, the more different and specific your platform is, the more specific and exclusive developers will be on it. If you can’t easily write code that ports to other platforms, then you tend to stay on whichever platform you are most comfortable with, has the best tools, cuts the best deals in terms of royalties OR has the biggest install base.

4) Tools. Pretty much all the development tools for XNA exist now, are in constant use and will be familiar to those who’ve done any XNA (or to a certain degree, C++) coding for the Xbox 360. There’s familiarity in them there hills!

5) Run Time expansion. This one’s a bit technical – but as it was explained to me, XNA’s Just In Time compiler system will actually page bits of the code in and out of memory as they are used – so effectively you can have a much larger executable than there is memory in the target device and XNA will just ‘handle it’.

6) Competition for Android. MS is less worried about Apple in this scenario simply because Apple is both a hardware and a software company. MS is purely a software company, so it must provide a platform that can run in multiple different hardware configurations. But so is Google, and frankly they’ve stolen a march on Microsoft here – there are Android phones all over the place, Android tablets appearing and MS has almost nothing in the market to match. So they need to develop a platform similar to Android to at least compete – and I dare say they see XNA as that.

So yeah, it’s understandable why MS went this way. But there are some definite cons to the whole situation too.

Firstly, the idea that the JIT compiler will produce decent code is fairly unrealistic. It’ll produce workable code that’ll run, but optimised for a specific platform? It’s taken ten years to produce decent compilers for x86, an architecture that has been around far longer. The idea that several JIT compilers will produce optimal code for their specific platforms is fairly unlikely. And because compilation occurs at run time, when developers have no tools to go in deep and see what’s going on, it’s that much harder to really optimise. How do you optimise pseudo assembly code when you have no idea what its final implementation is actually going to look like?

Secondly, the Run Time Expansion, where XNA compiles just the bits of your program that are heavily used and pages out parts that aren’t – well, it’s a cute idea, but really it’s a hangover from when mobile devices were far less capable than they are now. Apple’s iOS doesn’t have this feature and I’ve not seen anyone complaining about it. As a justification for XNA: no, not really there.

Thirdly – and most importantly – the idea of exclusives, and thereby the implied push on developers to only use the MS-approved dev environment and language, is the biggest “excuse me??” point. Many mobile smart phone developers develop in C++ purely in order to be able to port their stuff across platforms easily. Android operates in Java, but drops down into C++ relatively easily. iOS can be developed in Objective C and/or C / C++ – you can mix and match to your heart’s content. So my big old C++ codebase that runs Zipzap on iOS is almost instantly portable to the PC, Android, Mac, hell, even the PS3 if I want to spend the time doing that.

But Windows 7? Nope. I have to either start from scratch, or use one of those C++ to C# converters and cross my fingers that a) the code runs (unlikely, given the complexity and usage of some external template libraries I have) and b) it runs well, which is also frankly unlikely.

So basically Microsoft has pushed the onus of redevelopment onto my shoulders – I cannot reuse code but must code specifically to their platform. Now this is kind of what Apple has already done with XCode and Objective C, and lots of developers have accepted that – but this isn’t an apples to apples (ha! See what I did there?) comparison. There’s a phrase well known in game dev circles – Be First or Be Best. Microsoft, in this case, is neither. They are competing head on with a VERY established and VERY popular platform (iOS), and so requiring their developers to jump through very time consuming extra hoops to develop for them is a problem. You only get to push practices like that on developers if you are the best platform (and unfortunately Win7 phone sales denote that, market share wise, they are decidedly not) or the first – and obviously they are not that here either. It’s an unfortunate fact that most of the very small 2 or 3 man shops who have made the iPhone App Store what it is will look at the Win7 phone, think “Yeah, I’d like to port iFart to that!”, then discover they would have to re-engineer from scratch and just say “Screw it”.

The lack of C++ support on Win7 mobile phones is damning and, while a gutsy move on Microsoft’s part, is ultimately going to result in it losing to Apple’s juggernaut. And what’s worse is that it’s only this way because Microsoft has decided it has to be. There is no reason why a Win7 phone could not have a C++ compiler targeting it – games would be faster, and yes, although it does mean that developers may have to deal with differing target abilities, they have to deal with that on Android anyway, so scalable code is the order of the day regardless. Microsoft just has to recognise that this is desirable, as so many of the developers they ultimately depend on have already pointed out. It’s also worth pointing out that the entire infrastructure of XNA is designed NOT to allow direct native code to run on the target platform. XNA doesn’t even like managed C++ code very much (where C++ code is reduced to the same set of CLR assemblies as C# code), because the CLR code that is generated specifically and explicitly does not allow some of the features of C++, precisely to stop a C++ application from taking total control of the device.

This is definitely a David and Goliath moment, the Win7 phone vs the giant that is the iPhone / iPad. But Microsoft is making it into a David and Goliath moment where you need to go get David a latte, and it had BETTER be non-fat milk, with whipped cream and chocolate bits, or he’s not even going to show up for the fight.

To be clear – I like this new platform and I want it to be successful. I think it’s well made, responsive and the UI is great. In an iPhone-less world, this would rule the roost. However, this is not an iPhone-less world, and MS seems hell-bent on making life as difficult as it can be for developers; I fear this will be the downfall of this platform.

Jake Simpson

Tips and Tricks for Debugging Optimized Code

Original Author: Max Burke

Everything’s fine, everything’s dandy, your game is humming along at 60fps, when suddenly (oh no!) it crashes! You attach a debugger but, oh no, this isn’t right… this isn’t right at all! If you’re lucky you might have some (albeit wrong) source code information, but more than likely you’re lost in a sea of assembly.

Don’t panic! Any sudden movement will startle the game, forcing it to attack. Well, not really, but before you abort and start up that debug build, take this opportunity to poke around and see what you can first find out.

Regardless of what platform you’re developing for you’ll want to do some reading about its ABI. The ABI (application binary interface) describes how the stack is laid out, how and what individual registers are used for, how parameters are passed to functions, etc. Many are available free with a bit of digging, for example Microsoft’s x86 ABI can be found on MSDN, and the 64-bit PowerPC Linux ABI is also freely available.

If you’re feeling rldicls, if you’re feeling that your lfsux, and just want to packuswd and head somewhere warm, you should download the assembly/architecture reference manuals for the processors you are developing for.

For the big-ticket consoles you’ll want to check out IBM’s 64-bit PowerPC programming environment manual. Sony has some great documentation posted for CellBE development on their public CellBE site. For x86/x64, you’ll want part 1 and part 2 of Intel’s architecture manuals. ARM’s Infocenter website has all you’ll need for ARM devices like the Nintendo DS and nearly every smartphone under the sun.

I should briefly touch on one basic idea behind modern compilers. On the journey from somewhat legible source code to head-scratching assembly, the compiler breaks code up into segments called basic blocks: contiguous segments of code that have exactly one entry point and exactly one exit point. Many optimizations are limited to basic blocks, and being able to figure out which assembly blocks are associated with which sections of code is hugely valuable when debugging.
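To make that concrete, here is a small invented function with rough basic block boundaries marked in comments. Exact boundaries vary by compiler, but they always fall at branches and branch targets.

int clamp_sum(const int *values, int count, int limit)
{
    int sum = 0;                        // block 1: entry, loop setup
    for (int i = 0; i < count; ++i) {   // block 2: loop condition test
        sum += values[i];               // block 3: body, ends at the branch
        if (sum > limit)
            return limit;               // block 4: early exit
    }                                   // (the i++ increment is its own block)
    return sum;                         // block 5: normal exit
}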

So, now you have your crash in the debugger and your architecture manuals by your side; let’s look at what actually went wrong. Most good debuggers will indicate what caused this mess — if you’ve hit an access violation/segmentation fault/htab miss, chances are you’re reading from or writing to a bad memory address. If things just look off, if your debugger has become lost and is displaying things that look like nonsense, you might have branched to nowhere as a result of a corrupted function pointer or vtable entry.

Now that we know what the problem is, let’s figure out how we came to be in this mess in the first place. Perhaps our code spilled off the end of an array and trampled over some adjacent data; perhaps some header files changed and you forgot to rebuild the accompanying library (not like I’m bitter…). If you have a stomp-detecting allocator now would be a good time to turn it on. Virtual memory access protection can be a huge help – setting blocks of memory to read-only or write-only depending on what their intended use is can help flush out errant accesses when they happen rather than millions of instructions later.
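
Here’s a minimal, POSIX-flavoured sketch of the idea; `mprotect` is the relevant call there, and `VirtualProtect` plays the same role on Windows. In a real engine you’d hang this off your allocator rather than calling `mmap` directly:

```cpp
// Catch stray writes the moment they happen by locking a page down.
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    const size_t pageSize = sysconf(_SC_PAGESIZE);

    // Grab one page straight from the OS so it is page-aligned,
    // which mprotect requires.
    void* page = mmap(nullptr, pageSize, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }

    // ... fill the block with the data that keeps getting stomped ...

    // Lock it down: any write from here on faults immediately, right
    // at the guilty instruction, not millions of instructions later.
    mprotect(page, pageSize, PROT_READ);

    // *(int*)page = 42;  // <- would crash here, caught in the act

    munmap(page, pageSize);
    return 0;
}
```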

If it was an access violation, look to see what address it was attempting to load from or store to. Is it zero (or near zero, in the case of accessing fields of a structure via a null pointer)? Different address ranges often have their own purposes; did it happen perhaps near stack memory? Heap memory? Was the function attempting an operation requiring cacheable memory on non-cached memory, like atomic updates of video memory? Sometimes a floating point number will end up in the wrong place and you’ll see values like 0x3f800000 in a general purpose register.
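
If you want to confirm a suspicion like that, reinterpret the bits yourself – 0x3f800000 is exactly 1.0f:

```cpp
// Quick check for "is this mystery register value really a float?"
#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    uint32_t raw = 0x3f800000;        // suspicious value seen in a GPR
    float f;
    std::memcpy(&f, &raw, sizeof f);  // bit-for-bit reinterpretation
    std::printf("0x%08x as a float is %f\n",
                static_cast<unsigned>(raw), f);  // prints 1.000000
    return 0;
}
```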

Did the crash happen in the prologue of a function? If so, check the ABI to find out how parameters are passed, look at them, and see if they make sense. Does that first parameter really look like a pointer to the structure the function was expecting? If you view the data in a watch window does it look right? I’ve had GPUs in some cases run all over my data, writing values that were nonsensical and yet followed a pattern.
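
A quick, hypothetical sanity check along the lines of what you’d eyeball in the watch window (the cutoff constant is made up; substitute whatever your platform reserves for the null page and its surroundings):

```cpp
// Does this parameter even plausibly point at the structure the
// function expects? Purely illustrative heuristics.
#include <cstdint>

bool LooksLikeValidPointer(const void* p)
{
    const uintptr_t addr = reinterpret_cast<uintptr_t>(p);
    if (addr == 0)                  return false; // null
    if (addr < 0x10000)             return false; // near-null: a field
                                                  // access through null
    if (addr % alignof(void*) != 0) return false; // misaligned
    return true; // plausible -- now go inspect the pointed-to data
}
```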

Did the crash happen in the middle of a loop? Look for registers that are used as loop counters. Sometimes these are a bit hard to spot because of the reordering that happens during optimization, but a good way to identify them is to look for the code that checks loop end conditions. Usually some comparison and/or conditional branch pair will point to which registers are used for indexing.
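
For example, the disassembly of a simple loop usually bottoms out in a pattern like the one sketched in the comments below (x86-64 flavoured and purely illustrative – your compiler’s output will differ):

```cpp
// Spotting the loop counter via the compare/branch pair.
float SumArray(const float* data, int count)
{
    float sum = 0.0f;
    for (int i = 0; i < count; ++i)  // i most likely lives in a register
        sum += data[i];
    return sum;
    // Typical optimized loop body in the disassembly:
    //   addss  xmm0, [rdi + rax*4]  ; sum += data[i]
    //   inc    rax                  ; ++i
    //   cmp    rax, rsi             ; i < count ?
    //   jl     .loop                ; branch back if so
    // The cmp/jl pair fingers rax as the loop counter.
}
```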

If you crawl back up the call stack you’ll find that your variable watch window has become mostly useless. When optimizing, compilers try to push values into registers and keep them there as long as possible, and in many cases variable watch information is tied to a stack slot. There’s a good chance that the value you’re looking for exists in one of the callee-saved/non-volatile registers, a group of registers sectioned off by the ABI that says they can only be modified by a downstream function if they are first saved to the stack at known offsets. Because the save slots are specified by the ABI the debugger can determine their values precisely as you move from stack frame to stack frame. The x86 platform throws a wrench into the works because of its register envy, so it uses the stack quite often even in optimized builds.
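
As a sketch of why that works: under the System V AMD64 ABI the callee-saved set is rbx, rbp and r12–r15. If the compiler parks a local in one of those, any callee that wants the register must spill it to a known stack slot first, which is exactly where the debugger goes looking. The function below is illustrative only; `ProcessEntity` is an assumed external call:

```cpp
int ProcessEntity(int id);  // defined elsewhere; the call is the point

int ProcessAll(const int* ids, int count)
{
    int failures = 0;  // plausibly kept in, say, rbx across the calls
    for (int i = 0; i < count; ++i)
        if (ProcessEntity(ids[i]) != 0)  // each call forces callees to
            ++failures;                  // save/restore rbx if they use it
    return failures;   // so 'failures' stays recoverable frame to frame
}
```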

Debugging optimized code is a bit of an art form and will require patience, but with the right tools and knowledge it becomes much easier. If you have any tips that you’d like to add, or if you’d like me to expound on anything I’ve touched on here, please chime in below!

The importance of controls

Original Author: Adam Kidd

In making your first game as an Indie developer there is a lot to think about; you have to make the graphics, the sound effects and music; perhaps you want to include artificial intelligence, physics, particle effects or any number of cool things that will make your game unique and theoretically awesome. If all of the sound, graphics and features are the soul of your game, the controls are the door that lets the player in to experience that world in all its phantasmagorical delight.

With this in mind, it is surprising how even many big-budget games spend far too little time on their controls. It only takes a quick look at the initial success of Konami’s Pro Evolution Soccer (PES) and the comparative failure of Electronic Arts’ FIFA games just a few years ago to see how much of a difference controls can make. Although FIFA had some of the best sound effects and commentary, and its graphics were arguably better than those in PES, it quickly lost out to the more responsive and better-organised controls in Konami’s games. It took several years before EA realised this, and once they made the positive move to improve their control system, they quickly caught up and have been on an even or better footing against Konami since.

As an Indie developer, particularly if you’re targeting the web as a platform, you’ll find a whole horde of competitors who clearly didn’t think about their controls. So how can you make sure that the controls you use in your game beat theirs?

I think there are four key areas to think about when designing your controls, which combine into the ironic acronym C.R.A.P. That’s right, make your controls CRAP and they’ll be better than your competitors’.

So what does CRAP stand for?

Consistent

Your controls should be consistent. This might seem like an obvious statement, but consistency doesn’t just mean that your jump button should always jump; it means that when you press the jump button, it always jumps the same way. Whether you always jump the same distance or, as in platformers like Super Mario Bros., the longer you hold the button the higher you jump (up to a maximum), the way the computer or console reacts to your input is always the same. An example of this being done the wrong way is Mortal Kombat: Armageddon for the Nintendo Wii. To perform special moves, gestures have to be made with the WiiMote, but either the tracking system in the controller or the code that listens to it just isn’t able to consistently differentiate between the gestures for different special moves. This leads to a very frustrating experience where you try to do one thing and, often after multiple attempts, you end up losing because the controls just didn’t work as they were supposed to.
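
For what it’s worth, here’s a minimal sketch of a Mario-style variable-height jump that stays consistent: the same press always gives the same kick, and holding adds lift only up to a fixed cap. All names and constants (`Player`, `kJumpImpulse` and friends) are illustrative, not from any particular engine:

```cpp
// Deterministic variable-height jump: identical input, identical arc.
struct Player { float velocityY = 0.0f; float holdTime = 0.0f; bool airborne = false; };

const float kJumpImpulse = 8.0f;   // initial upward kick, every time
const float kHoldBoost   = 20.0f;  // extra lift per second while held
const float kMaxHoldTime = 0.25f;  // cap: holding longer changes nothing
const float kGravity     = -30.0f;

void UpdateJump(Player& p, bool jumpPressed, bool jumpHeld, float dt)
{
    if (jumpPressed && !p.airborne) {   // same kick on every press
        p.velocityY = kJumpImpulse;
        p.holdTime  = 0.0f;
        p.airborne  = true;
    }
    if (jumpHeld && p.airborne && p.holdTime < kMaxHoldTime) {
        p.velocityY += kHoldBoost * dt; // deterministic extra lift
        p.holdTime  += dt;
    }
    p.velocityY += kGravity * dt;       // gravity always applies
    // (landing/ground detection omitted for brevity)
}
```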

Following this same train of thought, your controls should also be consistent with your successful competitors’. If someone else has made a game similar to your idea, and it has been successful, chances are it has excellent controls, so why not use similar controls for your game? Anyone who has played your competitor’s game will immediately feel at home when playing yours, as they already know the controls! As with any rule, there are exceptions: if you have played your competitors’ games and found that something doesn’t seem right, or could be improved in their controls, then by all means go for it; it will probably make your game even better. Just be careful not to make arbitrary changes to something that has already been shown to work. It’s often unwise to try to fix something that isn’t broken.

Responsive

There is nothing more frustrating when playing a game than controls that feel unresponsive. When you’re playing a game and you press a button, you expect something to happen not in half a second but RIGHT NOW! There should be no noticeable lag between using the controls and the appropriate reaction in the game. This becomes all the more important with fast-paced or time-sensitive games like first-person shooters, around which an entire business has been built of making controllers with faster response times to give the players that buy them ever-so-slight advantages over those without. If the game itself is the cause of lag between input and action, the devoted players that you want playing your game – the ones who are likely to spend their hard-earned money on the downloadable content you create – will find another game to play instead.
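
One simple way to keep the game itself out of the latency equation is to sample input at the top of the frame and act on it before simulating and rendering, rather than queuing actions for some later frame. A minimal sketch, with `PollInput` and friends standing in as placeholders for whatever your platform provides:

```cpp
// Single-frame input latency: sample, react, then simulate and draw.
struct InputState { bool firePressed; };

InputState PollInput()     { return {false}; }  // stub: read devices here
void FireWeapon()          { /* spawn projectile */ }
void SimulateWorld(float)  { /* physics, AI, ... */ }
void Render()              { /* draw the frame */ }

void RunFrame(float dt)
{
    const InputState input = PollInput();  // freshest possible state
    if (input.firePressed)
        FireWeapon();       // the reaction lands in THIS frame's render
    SimulateWorld(dt);
    Render();               // player sees the result immediately
}
```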

Adoptable

When using all the different controls in your game, the player should be able to easily switch from one control to another. The controls a player will need to use the most should always be the ones in the positions on the controller that are easiest to reach, and for this reason should usually be grouped together in a way that the player can simply and easily play the game without any awkward movements. The most important thing when trying to implement this in your game is to think about how the player will need to hold their hand to press all the buttons. If one of the buttons seems awkward, can you move it elsewhere, or if necessary even remove it? Can the player easily adopt a comfortable position when playing your game for extended periods of time?

Progressive

Finally, the controls should be progressive. When a player first picks up your game it should be easy for them to start playing it at a basic level. They should be able to learn the controls required to get through the easiest levels of your game without looking at a manual or going through lengthy and tedious tutorial levels. The bottom line is to lower the entry barrier so that more people can have fun playing your game. After they’ve played for a while, you can introduce further controls that change the game in ways that make it more challenging for the player but give them greater rewards, and give the player a feeling of accomplishment.

Counter-Strike is an excellent example of progressive controls: a new player starts off buying weaponry with their mouse, then moving and shooting with a combination of the mouse and keyboard. As the player gets better they may start purchasing weapons with keyboard shortcuts, which means they can leave the starting area faster and gain a time advantage over other players, and as they play more they become better at aiming their mouse at the opponent’s head and chest, making it easier to defeat their opponent. They become nimbler at pressing the keys and getting into cover when under attack by the enemy, allowing them to survive longer. Eventually the players who put more time in will be able to make more accurate shots faster while avoiding their opponents’ bullets, and their time spent is continuously rewarded by reaching the top ranks of the scoreboard after each round they play.

Next time: An introductory tutorial to haXe game development in FlashDevelop. (It’ll be much better than this!)


How I fell in love with mocap…

Original Author: Mike Jungbluth

This week marks my fifth year as a game developer. And like any anniversary, I can’t help but reminisce about times gone by. Add to that the fact I only started animating with the computer a little time before, and I am the definition of sentimental at this moment. But with this being my first post, I figured you might indulge my trip down memory lane as a perfect introduction to my background and way of thinking. So pull up a chair… er, pull your chair closer to your digital firebox and settle in for my technological soap opera.

My fear and distrust of technology started immediately. I went to school for hand-drawn, “traditional” animation, forsaking the computer as an impure method of crafting such a magical art. 2d animation versus 3d animation was (is?) all the rage while I was in school, as many believed the influx of 3d KILLED 2d, and that to even THINK about trying computer animation was to sacrifice your warm, organic heart for a cold, lifeless toaster.

Though once you graduate, the reality of just wanting to animate instead of working at Radio Shack quickly tempers such hostility towards method. So by the time I finally realized learning how to animate in a 3d medium was necessary to find work, I just kept telling myself I had to make peace with it. But then a funny thing happened. I fell in LOVE with computer animation. I was no longer held back by my drawing ability or the time it took to process each drawing to finally see my character come to life. I could quickly add subtlety to facial expressions and held moments that breathed a sense of life into my creation more than I ever felt I could before. And ultimately, it was the same thing I was doing when I was slaving over the drawing board. I was creating movement and life by my own hands, just with different tools. And by the time I was putting a reel together to find work, adding any of my hand-drawn animation didn’t even enter my mind.

So fast forward through the job hunt, the job interviews, my first studio, learning how game animation varied from film, shipping my first game, then leaving my first studio for my second, and you will find me excitedly animating The Incredible Hulk. Every day I got to go to work creating over-the-top animations of a giant green monster smashing other brightly colored creatures. And after two years of hand-keying animations in games, I was getting pretty comfortable with my skills. Sure, I still had a lot to learn, but I was as adept with a mouse and keyboard as I ever was with a pencil and paper. But of course, comfort and technology are always at odds. It was getting towards the end of the project, and there was still the little issue of 30 minutes of cinematics that needed to be done. And as much as every animator at the studio would have LOVED to hand-key every one of those 30 minutes with the attention and love they deserved, the deadline meant that wasn’t going to happen. So it was decided by those leading the project that we were going to use mocap.

And there it was. The first shot in the next animation war I was about to be drafted into.

I had heard other animators’ war stories about their experiences with motion capture. Most complained about how constrictive it was, having a key on every frame and no easy way to adjust timing without losing the subtlety of the motion. But the biggest issue was that the animation wasn’t yours. It was the director’s intent with an actor’s movement. By the time the animator got it, they were just meant to be a faceless drone that smoothed it all out, added some finger movement and copied the facial animation from the video of the actor. And that is the opposite of what an animator wants to do. We got into the field to create performances and feelings of emotion that are real to anyone who sees them. We are the actors, we are the directors. We are not the janitors of other people’s movements and performances.

So, with all those thoughts running through my mind, I was given my first cinematic with mocap. And instantly, all those horror stories came true. Without a mocap studio of our own, it had to be done offsite, many states away. That meant that only a couple of people could go to interact with and direct the actors. Not having the tools to process the data or the experience to clean up much of it, everything was first outsourced to another studio. But as with any outsourcing, what you get back isn’t going to go right into the game, so even when the cinematics were delivered, we still had to touch them up and get them into the engine. Our custom rigs and tools were not made with mocap in mind, so our workflow took a real hit. It was a nightmare, and every day I swore that I would never work with mocap. Ever. Again. The war of keyframed vs mocap was on, and I felt stronger about it than I had about 2d vs 3d. Probably because this directly affected how the rest of my career was going to play out. Mocap was a dirty skinjob!

But again, life has a way of finding ways to challenge those convictions. I was on the search for a new studio to hang my hat, and the one I was most interested in was heavily invested in mocap. To the point they had their own motion capture space. And while I was initially being brought on to do keyframed animation, I knew I would have to deal with mocap. But be it hubris or just general excitement for the studio, I accepted. And when it came time to work with mocap data, I braced for the worst. Then a surprising thing happened. What I found was that it wasn’t as bad. Mind you, it still sucked. I was having to clean up someone just talking in front of a table, and neither the performance, movement nor direction were ones I felt any form of connection to. But at least this time there were tools in place, and I knew if something wasn’t working, it could be recaptured quickly, or I could talk about the intent of the performance with the director or actor. It also fit the characters and the world better than in a superhero game. So both visually and workflow-wise it made sense. So I dealt with it, as the majority of my time was still keyframing creatures and just generally awesome stuff. I was comfortable and content, so of course mocap had to find a new method of attack.

As that project was wrapping up, I was moved to another game with another team and lead animator. And quickly my day-to-day work was ONLY dealing with mocap data. And what was worse, it was just taking files from an Excel list, with no interaction between the director or actor whatsoever. Sure, the tools were better than what I had on Hulk, but this felt just as soul-crushing. My connection to the characters was non-existent, and I am sure that was felt in every asset I checked in. Mocap had fired back, and it was a critical shot. So, when I again switched teams and leads, I felt like I was being medevaced to safety. And now I was part of a resistance to fight back! When my new lead said he didn’t care if we used mocap or keyframe, just as long as we got the work done to the proper standard, I gave mocap the middle finger. I was going to do nothing but keyframe, and my time with mocap was all but a distant memory.

But as is always the case, deadlines started to loom. And the need for mocap to get everything done became apparent. But this time, it came with a caveat. If we were going to use it, each animator would be able to direct the actors. So even if we couldn’t animate it, we could still get the performance we wanted. While I figured this was mostly just a consolation prize after having my precious keyframe taken away, I was on board to try my hand at directing, if for nothing other than the experience. Plus, directing actually sounded like it might be kinda fun. There was a glimmer of hope.

The first time I directed mocap, it was awkward. I wasn’t sure where I should stand, who I should talk to, how physically involved I should get with the actors, how MUCH direction I should give them, and when I was asking too much of them. I actually left that first session with a new level of dread for all things mocap. I had all but written off any sense of fun with the mocap process. But then I was delivered the data to be cleaned up. And at that moment I realized what I had always hated about mocap: the lack of ownership. Sure, the motions weren’t 100% dictated by me, but the intent was. The performance was what I would have done had I hand-keyed it, and now I just got to go in and push the timing and the poses. And that was enough to keep me going back to each session, willing to invest myself more and more into the mocap experience. By the end of the project, I was looking forward to directing the actors more than anything else I was involved with.

As that project wrapped, and we started to do R&D on the next, it was decided the art style was going to be less realistic and more stylized. You know, the type of thing animators dream about. We had visions of keyframe animation and cupcakes dancing in our heads. But I had a nagging feeling that we would be missing out on an opportunity if we didn’t use mocap. The reality is, as games require more animation, the ability to keyframe all of it becomes harder and harder. And in all honesty, keying idles, turns and the sort isn’t the most fun. So I decided to work with the mocap department to try and come up with a method to integrate it with a more keyframe sensibility. The rest of the animators on the team weren’t too excited about the prospect, but since it was just R&D, and there hadn’t been much success before, I don’t think they worried much that anything would come of it.

Before we captured anything, I sat down to identify the biggest issues the team had with mocap. At this point, I was done fighting the war and wanted to come up with a treaty.

  1. The team did not like MotionBuilder, which was the de facto mocap program. They were comfortable with 3ds Max Character Studio and wanted that as the only program we had to use.
  2. Quick delivery of the solved data was needed so that the animators could jump right into the editing process.
  3. Proof of concept was needed that something stylized could be created, in essence not making it look like mocap, faster than just hand-keying it.
  4. Complete ownership had to be felt with the final product. We needed to be able to say: this is my animation from beginning to end.

The first two points were largely out of my hands. We had a mocap department that handled all the tech, and up until this point solving quickly to Character Studio biped had been an issue. But thankfully, when they looked into it again, they found that with newer versions a lot of the kinks had been worked out, and they could deliver as quickly as they had in previous pipelines. Problems 1 and 2 were quickly taken care of. That left 3 and 4 squarely on my shoulders. But this was the fun part, and was ultimately what I had wanted all along.

The first step was to get to know the character, like we would with anything we animated. But the important part was getting the actor in on the research soon after I had an idea of what I was looking for. As soon as I had some reference pictures, video, keyframe tests, concept art and the model, I would sit down with them as I was scheduling the shoot. This gave them time to understand the character and start thinking about what was going to be needed. This also got us both talking about and explaining the purpose of each animation before anything was even recorded. By the time we did get into the studio space to record, we were both aware of what I wanted out of the character, and how we were going to do it. This allowed us to get past the functional part of the animation and drill down to the personality, timing and emotion of the action. You know, the stuff us animators go gaga over. This also opened up both myself as the director and the actor to experiment with different methods of expression in the performance. If the character is blind and meant to stumble around, we would have the actor keep their eyes closed, and put some cushions and mats throughout the space for them to bump into unknowingly. If they are meant to be a violent, loud character, getting them to scream violently would come more easily if they weren’t worried about just trying to remember the basics. And having spent all that time getting into character meant we were both comfortable physically interacting with one another when it came time for posing. And getting that emotion out of the actor comes through in the data. The way they hold themselves, and whether they are giving it their all or holding back, can be seen when you put it onto your character in the game. And getting that performance out of the actor is incredibly rewarding as a director, because you are crafting a performance. And THAT is the true sense of ownership we had yearned for.

After that, editing the mocap to get something stylized was just a matter of playing around. I first keyframed the action I wanted to get a time estimate that I could compare the mocap against. With that in place, I dove into the editing process. It was at this point I realized that essentially what I was doing was capturing video reference of what I wanted to animate, but now I had the ability to manipulate that reference freely and quickly. And while I am in no way a proponent of Character Studio or biped (Maya custom rigs FTW!), I found my familiarity with the tools made for quick and easy mocap editing that was FUN! It became my sloppy joe moment. See, I don’t like ketchup or mustard on their own, but when you mix them together with some magic, it becomes a delicious sloppy joe. And in the case of Max and mocap being ketchup and mustard, ownership was that magic ingredient.

And here is the moment I fell in love with mocap.

Keyframe Vs. Mocap Test

Sure, there was nothing revolutionary in the final product or the work method. But it was enough that at the end of the process, I was excited about the animation, and it proved that adding mocap to the workflow could personally pay off. I wasn’t saying we should use mocap for everything, just that we could benefit from using it where it excels: idles, walks, runs, transitions. It would give us a base that we could quickly push to fit our needs and save us time, allowing us to really throw ourselves into the more elaborate, over-the-top animations we all love. The moment I truly realized it was successful was when, a couple of days later, a couple of the animators most against its use were in the mocap studio, by their own choice! The treaty had worked! Humans and Cylons were working together in harmony!

It took me five years to really believe what I had said to make peace with my conscience when I first started down this career path: 2d, 3d, keyframe or mocap doesn’t really matter. They are just tools, and you don’t always get to choose which one you use. What you do get to choose is your involvement with each, because that is where you will find your happy place.

Week 1: Round-up

Original Author: Mike Acton

We’ve reached the end of the first week here at #AltDevBlogADay. It’s been really exciting to see such a great group of developers get together and share their experiences, ideas and feelings about the process. We’ve had a few bumps along the way (we’re still working out the site theme, hosting, RSS feed, and messaging issues) but overall it’s been a really fantastic experience so far!
We have eight days left in our first cycle (15 days, then we wrap back around.) I simply can’t wait to see what everyone else has up their sleeves to share.
Remember, if you’re interested in participating with us – hit me up on twitter (@mike_acton) or shoot me an email (macton@gmail.com) – We welcome devs from all walks of life. So far, we have programmers, artists, animators, designers, bizdev and audio peeps queued up to share (big studio, small indie shops and everything in between.) So definitely don’t be afraid to join us! If you have an interest in posting more regularly and you want a group of friendly (virtual) faces to help you do that, we’re here to help.
If you missed any of the posts this week, here’s a round-up of links to all the awesome posts we’ve seen so far (hopefully not missing any!):
And of course, everyone loves reading the comments – so please, join in on the conversation! Either here, or on Twitter.
Mike.

Research tastes better when served with source

Original Author: Jonathan Adamczewski

I’ve been reading a range of computing-related research in recent times and there’s one thing in particular that bugs me: research presented without (or with insufficient) source code.

On several occasions I’ve attempted to implement a neat algorithm that I’ve read about and have been confounded: written explanations often confuse more than they enlighten, and pseudo-code (where present) invariably glosses over critical implementation details, disguising the complexity of the implementation and/or run-time.

(And, in my limited experience, seeking assistance from the author via email rarely elicits a helpful response. I’d like to hope that that is not the norm, but I fear that it is…)

The inclusion of source code with published research provides the means for others to more easily reproduce, test and validate the assertions from a paper, particularly exposing flawed assumptions and bugs.

More importantly, it should make it easier for others to build upon the research. The implementation reveals a great deal about research – its scope, shortcomings and opportunities for improvement – in ways that are often not exposed by the text alone. It provides a platform that can be built upon directly.

Standing on the shoulders of giants is easier if we can get a hand up from those already doing so, rather than having to re-engineer the same stepladder that got them there.

Even if done poorly, the availability of source code should still result in an improvement over the current state of affairs – I’d rather have poorly written source code than a poorly written explanation of the same.

Space restrictions for submitted papers certainly do not encourage the inclusion of source code. To be fair, it makes little sense for reams of source code to be published in paper form.

Realistically, it makes little sense to expect that recent or future research will be read printed on paper. Ever. The common 2-column A4 format for publication is terrible for reading on-screen and not particularly pleasant even when printed. Continuing to prepare research for publication in this legacy style ensures that papers remain hard to read, while also preventing easy inclusion of more effective ways of presenting information – not just source code, but larger, more detailed diagrams, interactive systems and more useful, intuitive navigation (to name a few).

The issue of intellectual property of course needs consideration, but if the technique can be described publicly it is not unreasonable to expect that an implementation should be no less freely available – even if both are encumbered by patent or other restrictions.

For researchers, my exhortation is to publish your source – it can only make your research more relevant and useful.

Jonathan Adamczewski is a graduate researcher at the University of Tasmania. Follow him on brnz.org/hbr