Pick Up That Can: The Problem With Interactive Objects

Original Author: Norm Nazaroff

This article has also been posted on my personal blog, Beyond the Norm.

I’m sure that most of you have already seen the recently released Deus Ex 3 gameplay trailer. One of the game elements highlighted (/rimshot) in the footage is Eidos’ method of calling out interactive items in the game world: a bold, bright yellow outline and highlight over anything you can interact with that’s more or less in the direction the player is looking.

Some of the fan reactions to the trailer have been quite surprising, with a few gamers going so far as to call themselves “outraged.” They don’t like having information in their face, and what’s more they seem to find the assertion that they need it vaguely condescending. Some folks have even brought out the dreaded “immersion breaking” phrase. All this fuss over some simple object highlighting! Clearly, most modern games need some way of pointing out what is and isn’t relevant to help streamline gameplay, so what’s the big deal with Deus Ex’s method?

Framing the issue

First, let’s note that there are actually two different problems to address when it comes to interactive objects in modern games:

  • Showing the player objects that can be used, picked up or manipulated.
  • Indicating to the player which objects are actually important at this moment.

The concepts are similar but, importantly, the second problem is really a subset of the first. Further, the degree to which these two things differ (that is, how much of the former problem the latter encompasses) can vary a great deal between games. We can indicate the general range of this with the Internet’s favorite tool: a two-axis graph.

A two-axis graph showing how four games relate to each other in terms of interactivity and level clutter.

I’ve taken the liberty of inserting what I feel are four representative examples, one for each quadrant. I should emphasize that this graph does not assume any sort of value judgments. No quadrant is any better or worse than the others, it’s just a simplified way of quantifying our options.

Fleshing out the axes

Valve Software’s Portal gets categorized as both austere and interactive. Excepting the latter third of the game, there isn’t a lot of stuff sitting around in the Aperture Science testing grounds. This is not a game that focuses on props, but what is included is almost all interactive: auto-turrets that will attack you and can be knocked over; cubes to pick up and move around; giant red buttons to press; bouncing energy spheres to re-direct. We have Spartan environments combined with highly interactive game objects.

Mirror's Edge environment screenshot showing its focus.

Note the clean, uncluttered areas and bold primary colors.

Mirror’s Edge sits at the intersection of austere and static. Though not nearly as focused as Portal, the level design of Mirror’s Edge is still beautifully direct. There are some props around to give the world flavor – piles of boxes on pallets; wooden planks to indicate good jump points; the occasional planter box or advertisement – but very few of these objects are meant to be interacted with. You’ll sometimes find a wheel that needs to be rotated or an enemy weapon that can be picked up, but that’s pretty much the extent of what you need to manipulate in the game.

Uncharted 2 environment screenshot showing dozens of props

The detail work in this shot is breathtaking.

Sticking with static but moving along the horizontal axis toward cluttered we have Uncharted 2. This game has some of the most striking environments in this generation (not to mention being one of my personal favorites!) and they’re filled to the brim with stuff. In particular, the Nepal level is practically overflowing with props: burned out cars; piles of rubble and trash; orphaned furniture and appliances; plastic crates and other detritus of a city in conflict. For the most part, though, these things are there purely for purposes of immersion. It’s fairly rare that you need to actually manipulate part of the environment, and these are mostly limited to either cinematic moments (such as bits of train car that start to break as you climb on them) or navigation elements like ropes and doors.

The final quadrant of our graph is represented by Half Life 2, a game renowned for being one of the first to introduce general physics objects as a core mechanic in a mainstream shooter. There is stuff all over the place in Half Life 2, and almost all of it can be picked up and manipulated with the iconic Gravity Gun. In fact, many of the game’s puzzles involve using the various physics objects to solve simple lever or weight problems (and this has been increasingly true in the episodes). As a result, Half Life 2 is both cluttered and interactive.

The problem of consistency

Now that we’ve broken down our examples, it’s very interesting to note that the two games at the top of our interactive axis do almost nothing to indicate to the player what can or can’t be manipulated. The Gravity Gun in Half Life 2 does have a subtle animation that activates when an object can be picked up, but this is pretty much the extent of their signaling mechanism. There’s no indication at all if you’re just picking stuff up with your hands.

Both of the games at the static end of our graph, on the other hand, do attempt to indicate when objects are interactive. Mirror’s Edge applies the same red highlight treatment that it uses to suggest efficient climbing routes, while Uncharted 2 has two different approaches depending on the item in question. For things that are meant to be picked up – which are primarily guns and grenades – it plays a flashing effect on the item itself. For environmental objects such as statues or carts that can be pushed, the game generates a HUD prompt when the player is sufficiently close to the object to initiate interaction.

Why would games that have fewer interactive objects feel the need to highlight them? The answer is actually fairly obvious: because these objects are the exception rather than the rule, it’s necessary to counteract the player’s expectation that items in the environment are there primarily for aesthetic reasons. In essence, these games need to momentarily break the immersion they’ve crafted in order to make certain that the player understands what needs to be done.

The problem of fidelity

It’s worth going back to the concept of immersion as it applies to interactive objects. Games like Uncharted 2 fill their environments with interesting props precisely because it makes the world feel more alive, more lived in. These objects typically don’t do anything, but they don’t really need to. From a development standpoint, it’s much easier (not to mention more efficient) to make great looking stuff to fill out the world when you don’t have to find a use for all of it or spend valuable CPU time handling its physics.

Furthermore, the high quality of the environment and the desire for seamless immersion creates pressure to make the objects that are interactive blend in as well as possible. This is, of course, exactly the opposite of what’s easiest from a game design perspective, and it isn’t a new problem. Back in the days of yore, adventure games found themselves in a similarly difficult situation. As hardware improved, background art got increasingly lavish and detailed and, as a result, it became important for interactive items to meld well with these more immersive scenes. One of the side effects of this progression can be found in the phrase “pixel hunt,” a derogatory term that came to be associated with many later adventure games.

Because the worlds were so detailed, filtering objects that were important to the game from the ones that were important for reasons of aesthetics became a matter of hunting around the scene looking for spots that would give you a “use” cursor. This was not a particularly fun mechanic, and the problem contributed to the eventual decline of the genre. More modern takes on adventure games offer various aids to reduce the issue, with many offering player abilities that cause all interactive items to flash or highlight briefly.

Jumping back to our modern game examples, yours truly once spent several minutes trapped in a tiny room in Uncharted 2 simply because I didn’t anticipate that a statue could be moved. There are dozens, if not hundreds, of statues scattered throughout Drake’s adventure, and seldom are they interactive objects. I ended up resorting to the 3D equivalent of pixel hunting, in which I systematically walked around the room looking for a HUD prompt to appear and indicate what it was I needed to do.

Bringing it on home

Let’s get back to our original topic: Deus Ex 3’s object highlighting scheme. We know that the problem it’s trying to solve is real, and that similar techniques have been employed in other games. Given that, why are people so unhappy about this particular example?

The crux of the matter is this: no matter what, any attempt to break interactive objects out of the environment results in a momentary loss of immersion. Even when it takes on a much more subtle form – such as exploding barrels that are all red, or electrical switch boxes that all happen to look exactly the same – the presence of these cues reminds us that our character exists in a world of limited interactions. The result of Deus Ex’s extensive, always-on highlighting is to constantly remind the player that no matter how alive and immersive the world feels, most of it isn’t actually interactive.

Deus Ex 3 object highlighting screenshot

Tactical options available.

Of course, different sorts of games require different solutions. Slower paced games (such as adventure games) have more freedom to let the player dink around and discover interactivity, whereas fast-paced games with lots of time pressure situations (such as shooters) have to be more explicit. Some settings are more restrictive than others, as well. A game set in ancient Rome doesn’t have many hooks to integrate something like Deus Ex’s system into the narrative, whereas a more futuristic, sci-fi setting like Crysis 2 is less restrictive. In fact, it’s almost certain that Eidos was hoping to leverage the cyberpunk themes in Deus Ex to make it easier for players to accept their augmented-reality approach.

The ideal solution might be creating a world in which everything that should be interactive is interactive, but in reality that often isn’t practical. It’s easy enough to conceive of a game where most of the props can be picked up – just look at Half Life 2 – but what about one where all the doors open? All the vehicles can be driven? All the TVs turned on? Grand Theft Auto 4 probably comes closest, but it’s not clear to me that more linear experiences would be significantly improved by these additions.

I think the most important takeaway is this: the amount of player feedback you need to give is inversely proportional to the number of interactive objects in your game. That is, the more interactive your game is, the less you need to worry about getting the player’s attention, because you have already created the expectation that things are interactive. If interactivity is less frequent (or less important) in your game, you need to do something to remind the player that items in the environment sometimes need to be manipulated to succeed. It could be that Deus Ex 3’s scheme goes a little too far given its level of interactivity, in which case the best option is simply to scale it back until the proverbial Goldilocks is just right.

Introduction to .NET Interoperability

Original Author: Promit Roy

To kick off my AltDevBlogADay career, I figured I would start with an area in which I have a disturbing amount of experience: .NET interoperability with native code. Several years ago, I initiated a project called SlimDX, which exposes the entirety of the DirectX APIs (and several others) to managed code. It was a long trip down the rabbit hole from there, resulting in probably one of the biggest interop projects outside MS itself at over 200K lines. I’ve worked on a lot of interop projects in the meantime, and after attending tools sessions at two GDCs, it’s become evident that the knowledge of how to build bridges between native code and C# is not widespread or well documented.

Today, I’ll just cover the basic options available and their pros and cons.

Platform Invoke

Platform Invoke — P/Invoke for short — is the built-in .NET mechanism for calling C APIs. Any exported function in any shared library can be invoked. The runtime also includes a large number of options for converting types back and forth, a process called marshaling. P/Invoke was primarily designed to support the Windows API, so it can tackle a wide range of C-style calls, but it also includes a lot of mysterious names and types that only make sense from the perspective of a Windows developer. All the same, it’s a highly flexible system that is also infuriatingly subtle and awkward to use for complex cases. You can wrap practically any function a sane C API could throw at you, but getting the marshaling options correct for the data you’re transferring can be tricky. It’s often easier to simply write private conversion functions yourself than to figure out how to trick the framework into doing the right thing.

Normally, you have to write P/Invoke wrapping code yourself, though there is a tool called SWIG that parses C code to generate a wrapper for you. It can even handle C++, by creating an intermediate C API.

The performance of P/Invoke functions is mediocre. The runtime goes to substantial lengths to prevent the native code from damaging the managed runtime, which includes all kinds of checks on stacks, memory, exceptions, and so on. That overhead adds up quickly, so if you’re using native code for performance, it’s critical that your API provides large batch processing functionality. It may even be beneficial to rewrite smaller functions in .NET.
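To make the batch-processing point concrete, here is a sketch of the native side of such an API. These function names are purely illustrative (not from SlimDX or any real library); the point is the shape: a per-element export pays the fixed managed/native transition cost on every call, while a batch export amortizes it over a whole array.

```cpp
#include <cstddef>

// Hypothetical per-element export: when called from C# via [DllImport],
// every single value pays the full P/Invoke transition overhead.
extern "C" float TransformOne(float x) {
    return x * 2.0f + 1.0f;
}

// Hypothetical batch export: one transition covers the entire array,
// so the fixed P/Invoke cost is amortized across `count` elements.
extern "C" void TransformBatch(const float* in, float* out, size_t count) {
    for (size_t i = 0; i < count; ++i)
        out[i] = in[i] * 2.0f + 1.0f;
}
```

On the C# side, the batch version would be declared once with an array parameter, letting the marshaler pin the arrays and make a single crossing.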

There’s also another problem, which is that the shared library to be loaded is named explicitly at compile time. This makes dynamic loading a lot more difficult, and breaks .NET’s normal x86/x64-agnostic arrangement.

COM Runtime Callable Wrapper

I won’t dwell on this, because most of you are in games and that means you’re probably not writing COM interfaces. COM exposes a nice restricted version of an interface-based API, along with a dedicated definition language to describe an API that is compatible with C++ without actively exposing its quirks to other languages. .NET can automatically generate wrappers of COM objects, which is moderately helpful if you already have a COM interface. The actual generated wrapper is not pleasant, but it does work.


C++/CLI

This is the jackhammer. It’s Microsoft’s follow-up to the now long-deprecated Managed C++, and it allows you to compile a single binary containing both native and managed code. SlimDX and several other major interop projects use C++/CLI, since it allows you to express a .NET API while doing anything you want with native code underneath. Microsoft’s tag line when this monster was released was IJW, “It Just Works”, which is some kind of cruel joke.

C++/CLI stands in a limbo world, blending managed and native code and stirring the complexities of both together into a single pot. At the same time, the two worlds mix about as well as water and oil. You can compile nearly any native code perfectly well in C++/CLI, but that alone won’t help, because native interfaces are not managed interfaces. No, instead you have to write all new interfaces, structs, enums, and classes to mirror the native world, and there is absolutely no support whatsoever for letting the compiler do this automatically. In short, you’re left to create decorators for your entire project, write all of your marshaling manually, and be mindful of all the differing rules on both sides of the divide. (Memory and threading in particular can be dangerous.) Let your imagination go wild about what happens when the time comes to refactor the native interfaces.

By all rights, it should be possible to extend SWIG to generate C++/CLI code, assuming you want to do that. This is a pending area of research for me and I would be curious to hear if anyone else has looked into it. However, if you’re dealing with something like middleware that has a very stable public interface (DirectX in SlimDX’s case), then C++/CLI is a very powerful and effective (but dangerous) tool.

For the most part, C++/CLI suffers the same drawbacks as P/Invoke, except that the range of interfaces you can call is dramatically expanded. Anything C++ can interact with will work. Since you’re writing everything yourself, there’s also much more of an opportunity to exercise control over exactly how and when the expensive transitions between managed and native code occur. (Inadvertently giving you another thing to worry about.)

Interprocess Communication

This is actually an option a lot of people seem to forget about, and it’s probably the best choice for most game toolchains. There are any number of possible ways to implement it, but it’s fairly straightforward to build a simple socket-based messaging system, and you can share the message structure definitions by using SWIG to generate them. If you get ambitious, there are plenty of more sophisticated techniques: named pipes, remote procedure calls, shared memory, etc. The real drawback here is that it’s difficult to retrofit to an existing system. If you’re going to drive your interop with IPC, it’s generally best to design with that in mind from the ground up, especially since it’s typically a lot more expensive than in-process function calls.
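As a sketch of what those shared message definitions might look like, here is about the simplest framing that works over a socket: a length-prefixed header followed by the payload. The type names and message IDs below are invented for illustration, not from any particular toolchain.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical message types; in practice these definitions would be the
// part you share between the native server and the .NET client.
enum MsgType : uint16_t { kPing = 1, kReloadAsset = 2 };

struct MsgHeader {
    uint32_t payloadBytes; // length prefix tells the reader how much to recv()
    uint16_t type;         // which message follows
};

// Pack a header plus payload into one contiguous buffer, ready for send().
std::vector<uint8_t> Frame(MsgType type, const void* payload, uint32_t bytes) {
    MsgHeader h{bytes, static_cast<uint16_t>(type)};
    std::vector<uint8_t> buf(sizeof h + bytes);
    std::memcpy(buf.data(), &h, sizeof h);
    std::memcpy(buf.data() + sizeof h, payload, bytes);
    return buf;
}
```

The receiving side reads `sizeof(MsgHeader)` bytes, then exactly `payloadBytes` more, which keeps message boundaries intact over a stream socket.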

This probably won’t be a useful approach if you want to use specific libraries as part of a C# application. If your system is amenable to it though, it can be much less work to write a native code IPC server for a .NET client, which exposes a high level interface to your engine. There are dividends to be paid in terms of modularity, and frankly in terms of being somewhat independent from .NET when something new comes along.

What Next?

So there you have it, four basic methods of getting native and managed code to talk to each other, and I’ve made all of them sound awful. This was only a high level overview though, and simply giving up now just isn’t the game developer way. I am curious to hear from you guys, though: what do you want to know? For my next post, I would like to dig into detail about the mechanics of using one of these interop methods cleanly and safely. I’ve alluded to a lot of rules, and a lot of experience (some of it bad!) in handling the problems inherent in this collision of worlds. My question is, where would you like me to start?

Game Design In The Kitchen

Original Author: Christian Arca

This post is inspired by Swery’s (www.twitter.com/swery65) GDC talk “Design Is In The Coffee” which was a very insightful talk and inspired me beyond belief. For that Swery, I thank you.

Cooking is one of my favorite hobbies. Recently, I’ve been doing a lot more of it. In the past, I have gone on “cooking streaks” where I would cook non-stop for a week and a half and then hit a wall. But this current streak has lasted a little over two months. One of the most surprising, yet blatantly obvious, discoveries is that I have gotten better. A lot better. Granted, I’ve always fancied myself able to execute delicious meals, but this is different now. What I’m cooking now isn’t based on making a slight change to a recipe I found. It’s based on experience, flavor profiles, and trial and error. As I thought about this the other day while experimenting with a new frittata recipe, I realized I had found game design in the kitchen.

A lot of these lessons are basic, but if there’s anything I’ve learned over the past couple of years, it’s that sometimes it’s the most basic things which escape us. Here are the game design lessons I have learned in the kitchen and would like to share with you.

Work With The Best Product

When you’re cooking, working with fresh, top grade product is essential. The flavors that you will get from cooking market fresh wild salmon are not the same you would get from cooking with canned salmon. If you want the very best dish you’ll use the very best ingredients. Quite simple.

Similarly, in game design you work with the best game mechanics. They are your ingredients. You don’t just throw any cards, dice, action points, and bidding together with the hopes of producing a good game. You choose a card mechanic, analyze it, make sure it’s the best card mechanic you can use, and if it isn’t, you refine it until it is. Just like one sour ingredient can ruin a dish, a game mechanic that isn’t the best in its natural form can ruin a game. Take for example mini-games which require monotonous repetition of a quick-time event. These mechanics eat nothing but corn and hormone injections, and while they do without a doubt do the trick, they are not the best product. Whereas a mini-game that is a resource management simulation tying into the very core of the progression and flow of the main storyline? Well, that’s grass-fed, thoroughly massaged Kobe beef, baby.

Cook What You Know

Although I might have wanted to cook a venison, cherry, cocoa nib, and eucalyptus dish when I first started cooking, I knew that the attempt would be futile. I’d lack the knowledge in technique, execution, and who knows what else (I still lack the knowledge to even come close to making this dish) to even attempt it. So I started with what I already knew. Seared citrus scallops, broiled chicken with mango salsa: things I had cooked before and was comfortable with. Rather than ending up with a sad-looking, inedible dish, I had tasty, enjoyable-looking dishes.

You can’t expect to go from having zero experience with first-person-shooter level design to creating one of the most polished FPS levels anyone has ever seen. You have to work up to it. So you start with what you know about FPS levels and you work your way up from there. The level is at first small and might only feature a giant open area with spawn points that are equal distances apart, but it’s a level and you can play in it, dissect it, and decide what you’d like to improve on. It’s not a complex, massive level that you can’t pick apart, much like the venison dish I’d love to make. Or maybe you are a great RPG designer, great with skill trees, but want to make an FPS. So you apply what you know about RPGs and skill trees and arrive at a shooter that is really an RPG in first-person view, where the majority of the weapons are projectile-based RPG spells or attacks. By designing what you know, you set yourself up for small, incremental improvements.

Re-invent The Traditional

Everyone loves a traditional Reuben sandwich or taco. It’s delicious and has been enjoyed by many for quite some time. Working with traditional dishes lets us experiment and perhaps create a deconstructed Reuben sandwich which tastes just as good as a traditional one but allows us to explore different techniques and makes it more playful. Maybe you’re curious about fusing flavors together so you try a duck confit taco for a french Mexican twist. Once again, it’s all about playing off traditional dishes to make something new and exciting.

In game design we see this all the time, but often enough we don’t go back to the original, traditional game mechanic. Rather than asking how Bejeweled Blitz adapted the match-three mechanic, the question should be: how can I adapt the match-three mechanic into something new and interesting? Bejeweled Blitz is just one example, but there are many others. Spy Party, at heart, is Guess Who, which in its simplest form is a hint-based guessing game. Thinking about the traditional mechanics upon which successful games are built allows for an objective look at a mechanic without restricting ourselves by only considering it within the context of the game.

Experiment With Your “Flavor” Profiles

Once you’re able to execute traditional dishes and understand what products need to be combined to form a composed dish, you start understanding how these products work together and how they create a flavor profile. With this basic knowledge, you can start experimenting and compose dishes from ingredients which result in more complex flavor profiles. Experimenting with how to combine ingredients to create new savory, sweet, or bitter flavor profiles (or for the more ambitious – experimenting by combining flavor profiles) is when you start to explore new territory and come across a successful new dish.

A game’s flavor profile is much more about “feel,” though it could also perhaps be classified as genre. Regardless, the next step is to take the tried and tested game mechanics which one has executed flawlessly time and time again and combine them with other mechanics that work well together to create a great flavor profile. Perhaps you decide to pair an FPS with action points, or maybe you’d like to use the match-three mechanic to progress through a skill tree. By understanding the mechanics on their own, we better know how to pair them with other game mechanics which we might not have thought would be fun when combined. There are interesting flavor profiles for games yet to be made.

Make More

The most obvious thing I found out about cooking is that the more I did it, the better I got at it. Knife skills, flavor profiling, temperature control, consistency, texture: all the elements which encompass cooking greatly improved. I’m not going to let this cooking streak end.

Much like my cooking, I need to design more. In fact – I don’t think any of us can say that we design enough. With that said – we must make more. Board games, card games, office games, trivia games, any games – we need to make more. Maybe we’re stuck on a 2 year project or maybe we’re on a 3-month development cycle. Whatever our situation is, we must find the time to make more games. By making more games we’ll become better designers – or hope to at least.

I’ve shared what I found in my kitchen – now it’s time for you to share what you make in yours. Maybe we’ll find ourselves competing in an Iron Game Designer or Top Game Designer in the future, but until then, we’ll have to make do with sharing amongst ourselves and creating amazing games.

Special thanks to Merci (@merci), Robin (@robinyang), and David CZ (@czarneckid) for their comments, time, effort and love they contributed to this post.

Putting The ‘You’ Back Into Unicode

Original Author: Garett Bass

Today I will be sharing a public domain C++ implementation I developed for iterating, measuring, comparing, transcoding, and validating UTF-8, UTF-16, and UTF-32 strings. But first, a very brief overview of what I’m working toward, and a bit about Unicode. Then a few implementation details, and finally the obligatory download link.

Inspired by projects I’ve worked on professionally, I’ve developed a plan for a fairly general purpose gameplay data editor I’d like to build as a hobby project. Though I’ve done all of this before in C#, I’m very keen to craft this tool in C++. To some that may sound masochistic, but for all its dark, dusty corners, I’m still rather fond of C++. I’m looking forward to taking back the reins of explicit memory management, and trying out some DOD on a project where I already have a good grasp on the requirements.

This project involves a lot of text manipulation, and I already know that some of my data lies outside the ASCII range for localization purposes. My tool will be reading and writing XML and JSON, so I need to know at least the absolute minimum every software developer must know about Unicode.

UTF? More Like Double-U.T.F!

The good old ASCII character set was developed in the 1960s based on telegraphic codes, and was only large enough to represent symbols commonly found in Roman-based languages. A single 7-bit ASCII character could represent 128 unique alphanumeric symbols, which severely limited its practical application to international text. This becomes less surprising once you discover that ASCII is short for the American Standard Code for Information Interchange (With Other Americans Only, No Doubt). I’m not sure why they didn’t just call it USCII. Of course it wasn’t long before the international community began to take advantage of that underutilized 8th bit, developing the various permutations of THEMSCII.

In the late 80s, Joe Becker of Xerox and Mark Davis of Apple proposed the creation of a universal character set to facilitate multilingual computing. Initially a 16-bit character system, Unicode eventually expanded to a 21-bit codepoint space.

The Undiscovered Bard


Twenty-one bits makes for an awfully wide character, or “codepoint” in the Unicode parlance. A specific codepoint is denoted by a hexadecimal number prefixed with U+. For example, U+0058 is the codepoint for “LATIN CAPITAL LETTER X”, and U+1F46F is the codepoint for “WOMAN WITH BUNNY EARS”. Clearly the Unicode Consortium is a very serious bunch.

Fortunately, we rarely need all 21 bits. The majority of codepoints in common use are in the range from U+0000 to U+FFFF, also known as the Basic Multilingual Plane (or BMP). These fit comfortably into the 16-bit character system originally envisioned by Becker and Davis. Still, if most of your data is in the ASCII range, it seems terribly wasteful to use two bytes where you could be using only one.

In order to accommodate this broad range of values, a variety of encodings have been developed to map a particular codepoint to a sequence of bytes. The three most common of these encodings are UTF-32, UTF-16, and UTF-8, where UTF stands for Unicode Transformation Format, and the numeric designation is the bit width of the individual “code units”. Each code unit is a fragment of a complete codepoint, and each encoding specifies how a sequence of code units must be combined to represent a specific codepoint.
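The codepoint-to-code-unit relationship can be summarized in a few lines. This is a sketch of my own (the helper names are not from the library below), showing how many code units each encoding needs for a given codepoint:

```cpp
#include <cstdint>

// How many code units each UTF encoding needs for one codepoint.
// UTF-8: one byte per 7/11/16/21-bit range boundary.
int Utf8Units(uint32_t cp) {
    return cp <= 0x7F ? 1 : cp <= 0x7FF ? 2 : cp <= 0xFFFF ? 3 : 4;
}

// UTF-16: one unit inside the BMP, two (a surrogate pair) above it.
int Utf16Units(uint32_t cp) { return cp <= 0xFFFF ? 1 : 2; }

// UTF-32: always a one-to-one mapping.
int Utf32Units(uint32_t) { return 1; }
```

So “LATIN CAPITAL LETTER X” (U+0058) costs one unit in every encoding, while “WOMAN WITH BUNNY EARS” (U+1F46F) costs four UTF-8 bytes, two UTF-16 units, and one UTF-32 unit.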

The simplest encoding is UTF-32, which uses a single 32-bit code unit to represent each codepoint. Due to this one-to-one mapping, UTF-32 is ideal for comparing the relative values of two codepoints. Unfortunately, this format is very space-inefficient, wasting at least 11 bits per 21-bit codepoint, and more than 16 bits in the most common cases. Since each code unit is a contiguous four-byte block, UTF-32 also complicates communication between machines of differing endianness.

The middle road is UTF-16, which uses either one or two 16-bit code units. In many cases, this encoding is ideal for runtime representation of text. In most common cases, where the majority of codepoints are within the BMP, UTF-16 is much more space efficient than UTF-32. Still, the two-byte code units of UTF-16 also introduce endianness issues if used as a data-exchange format.
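The one-or-two-unit split works via surrogate pairs: a codepoint above the BMP is biased down by 0x10000, and the remaining 20 bits are divided across two reserved 16-bit ranges. A minimal sketch (the function name is mine, not from the library presented later):

```cpp
#include <cstdint>

// Encode one codepoint as UTF-16. Returns the number of code units
// written to out (1 for BMP codepoints, 2 for a surrogate pair).
int EncodeUtf16(uint32_t cp, uint16_t out[2]) {
    if (cp <= 0xFFFF) {                      // BMP: single code unit
        out[0] = static_cast<uint16_t>(cp);
        return 1;
    }
    cp -= 0x10000;                           // 20 significant bits remain
    out[0] = static_cast<uint16_t>(0xD800 + (cp >> 10));   // high surrogate
    out[1] = static_cast<uint16_t>(0xDC00 + (cp & 0x3FF)); // low surrogate
    return 2;
}
```

For example, U+1F46F becomes the pair 0xD83D 0xDC6F.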

According to Wikipedia, UTF-8 is “the de facto standard encoding for interchange of Unicode text”. This is likely a result of two key advantages. First, since each code unit is a single byte, UTF-8 is an endian-agnostic format. Second, the first 128 Unicode characters fit in a single UTF-8 byte, and map directly to ASCII characters in the same range. This means that any valid ASCII file is also a valid UTF-8 file, providing some legacy support. However, UTF-8 is a variable width encoding, requiring between one and four bytes to represent a unique codepoint. As a result, it is the most complex of the three formats, requiring more computation to reconstruct a full 21-bit codepoint.
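To give a feel for that extra computation, here is a minimal decode sketch of my own (no validation of malformed sequences, unlike the real library code) that reconstructs a codepoint from its UTF-8 bytes:

```cpp
#include <cstdint>

// Decode one codepoint from a UTF-8 sequence, advancing p past it.
// Assumes well-formed input; a production decoder must also validate.
uint32_t DecodeUtf8(const uint8_t*& p) {
    uint32_t c = *p++;
    if (c < 0x80) return c;                        // 0xxxxxxx: ASCII fast path
    int extra = (c >= 0xF0) ? 3 : (c >= 0xE0) ? 2 : 1; // trailing byte count
    c &= (0x3F >> extra);                          // payload bits of the lead byte
    while (extra--) c = (c << 6) | (*p++ & 0x3F);  // append 6 bits per trailing byte
    return c;
}
```

The lead byte announces the sequence length, and each trailing 10xxxxxx byte contributes six more bits, so the four-byte sequence F0 9F 91 AF reassembles into U+1F46F.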

Signed vs Unsigned Character Types

So, let’s take a look at some code implementing the kinds of transformations illustrated above.

char8& Transcode1 (const uchar32*& in, uchar8*& out) {
      uchar8& result(*out);
      const uchar32 c(*in++);
      if (c <= 0x007Fu) {
          // U+0000..U+007F
          // 00000000000000xxxxxxx = 0xxxxxxx
          *out++ = (uchar8)c;
          return (char8&)result;
      }
      if (c <= 0x07FFu) {
          // U+0080..U+07FF
          // 0000000000yyyyyxxxxxx = 110yyyyy 10xxxxxx
          *out++ = L2Flag + (uchar8)((c >> 6));
          *out++ = LNFlag + (uchar8)((c     ) & LNBits);
          return (char8&)result;
      }
      if (c <= 0xFFFFu) {
          // U+0800..U+D7FF, U+E000..U+FFFF
          // 00000zzzzyyyyyyxxxxxx = 1110zzzz 10yyyyyy 10xxxxxx
          *out++ = L3Flag + (uchar8)((c >> 12));
          *out++ = LNFlag + (uchar8)((c >>  6) & LNBits);
          *out++ = LNFlag + (uchar8)((c      ) & LNBits);
          return (char8&)result;
      }
      // U+10000..U+10FFFF
      // uuuzzzzzzyyyyyyxxxxxx = 11110uuu 10zzzzzz 10yyyyyy 10xxxxxx
      *out++ = L4Flag + (uchar8)((c >> 18));
      *out++ = LNFlag + (uchar8)((c >> 12) & LNBits);
      *out++ = LNFlag + (uchar8)((c >>  6) & LNBits);
      *out++ = LNFlag + (uchar8)((c      ) & LNBits);
      return (char8&)result;
}

Here I take a pointer to an input array of uchar32, and an output array of uchar8. I then increment each pointer as I consume enough bytes and produce enough bytes to transcode a single codepoint. All of the core algorithms are like this, performing an operation on a single codepoint, which makes it easy to write the template Transcode() function that processes an entire string.

However, the astute observer will notice that this algorithm requires unsigned character types, which could be rather inconvenient on some of our favorite platforms. Sadly, the C standard doesn’t specify whether char is a signed or unsigned type. But, with a little template magic, and the help of some good inlining, the compiler can generate all of the relevant algorithms for us, so that we can happily make use of native char and wchar_t literals.
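The core of that trick can be a trait that maps each character type to an unsigned counterpart of the same size. This is only a sketch of the idea, not the actual templates from this library (in C++11, std::make_unsigned does the same job):

```cpp
// Map a character type to an unsigned type of the same size so that the
// shift-and-mask code always sees non-negative values. (Sketch only; the
// real templates would also cover wchar_t and the fixed-width types.)
template <typename T> struct MakeUnsigned;
template <> struct MakeUnsigned<char>          { typedef unsigned char Type; };
template <> struct MakeUnsigned<signed char>   { typedef unsigned char Type; };
template <> struct MakeUnsigned<unsigned char> { typedef unsigned char Type; };

// Algorithms cast through the trait before doing bit math, so a lead byte
// like 0xC3 never shows up as a negative char.
template <typename C>
unsigned int ByteValue (C c) {
    typedef typename MakeUnsigned<C>::Type U;
    return (unsigned int)(U)c;
}
```

With this in place, the transcoding algorithms can be written once against the unsigned types and instantiated for the native character types.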

What’s In The Box?

Ok, here’s the public interface:

class Unicode {
  public: // static methods
      static int Compare1 (const L*& lhs, const R*& rhs);
      static int Compare (const L* lhs, const R* rhs);
      static uint Measure1 (const In*& in, const Out* out);
      static uint Measure1 (const In*& in);
      static uint Measure (const In* in, const Out* out);
      static uint Measure (const In* in);
      static In& Next (const In*& ptr);
      static In& Previous (const In*& ptr);
      static uchar32 Transcode1 (const In*& in);
      static Out& Transcode1 (const In*& in, Out*& out);
      static void Transcode (const In* in, Out* out);
      static Error Validate1 (const In*& in);
      static Error Validate (const In* in, const In*& err);
      static Error Validate (const In* in);
};
All of the template arguments should be char8, char16, or char32, all of which I’ve defined for you with some more template magic. Particularly handy is that wchar_t maps to either char16 or char32 as needed, so native string literals should just work. If you have your own fixed-width types by the same name, you may have to do a bit more work, because I rely on my templates to help with the signed/unsigned cast.
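One way to get that wchar_t mapping is to select a type based on sizeof(wchar_t). The selection template below is my own sketch; the actual char16/char32 definitions in the library may differ:

```cpp
// Pick a fixed-width character type based on sizeof(wchar_t), so wchar_t
// literals route to the 16-bit path on Windows and the 32-bit path on
// most Unix systems. (Sketch; illustrative names only.)
template <int Size> struct WCharMap;  // no default: unexpected sizes fail to compile
template <> struct WCharMap<2> { typedef unsigned short Type; };
template <> struct WCharMap<4> { typedef unsigned int   Type; };

typedef WCharMap<sizeof(wchar_t)>::Type WCharEquivalent;
```

Leaving the primary template undefined means a platform with an unexpected wchar_t size produces a compile error rather than silently picking the wrong code path.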

These are all inline definitions, and I hope the names are pretty obvious, because I haven’t taken the time to comment each method. With the exception of the Validate() methods, none of the other methods attempt any validation, so garbage in, garbage out. This seemed pretty reasonable to me, but YMMV.

Testes, Testes, 1..2..3?

It wouldn’t do much good to write all this code without some reasonable test coverage. I wrote tests to exercise all of the above functions across UTF-8, UTF-16, and UTF-32 input and output for each of the following cases:

  • ASCII “hello world”
  • Endpoints for 1-byte UTF-8
  • Endpoints for 2-byte UTF-8
  • Endpoints for 3-byte UTF-8
  • Endpoints for 4-byte UTF-8
  • Examples of Unicode Encoding Forms (Table 3-4, The Unicode Standard, Version 6.0, pg 92)

I included the test harness with the project. It relies on UnitTest++, for which I include a pre-built MSVC static library. I’ve run the tests under both Visual Studio 2010 Professional, and CodeLite v2.9.0.4684.

TODO: I really ought to test against MultiByteToWideChar()


I used a very lightly customized build of premake4 to create the project files, but I expect the standard build would do nearly as well. I really love premake. It has never been easier to spin up a new C++ project without having to tinker with every last setting.

It is getting late here, so I’m just going to host the code right here for the moment, but I will try to put it up on github soon.

How is constant formed?

Original Author: Jonathan Adamczewski

A couple of days ago, Neil Henning asked a question:

SPU gurus of twitter unite, want a vector unsigned int with {1, 2, 3, 4} in each slot, without putting it in as elf constant, any ideas?

There were a few different responses, including a great article on ways to build constants on the SPU. Neil’s question got me thinking about how numerical constants are handled on various architectures at the machine level, so I did some investigating.

In this post, I talk about one of the simplest operations a computer can do: putting a small constant number into a register, which — while simple — gives an insight into the architecture and raises plenty of questions. I also describe a method for learning to read and understand assembly language.

I look at the way of doing this on x86/x86_64, PowerPC, SPU and ARM architectures because those are the architectures for which I have working toolchains.

Let the compiler be your guide

My method for learning assembly programming goes something like this:

  1. compile some higher-level code
  2. look at the assembly language generated by the compiler
  3. try to understand it (using a suitable reference)
  4. change the code and/or the compiler options
  5. goto 1.

When you’re starting out, it’s a slow process — there are lots of subtle details that won’t be readily apparent. Like many things, though, persist and you’ll get better at it. Also, ask the internet.

The process that I go through here uses the compiler as a teacher. Realistically, the compiler doesn’t know everything there is to know about assembly programming or instruction sets, and there will come a time when you know more than it does and must defeat it in hand-to-hand combat. For now, we learn.

Learn you a ‘ssembly

So here’s a program fragment written in that a-little-higher-than-assembly-level language, C:

int three() {
      return 3;
}
A function (three) that returns a constant value (three). I’ve saved those three lines to a file called const.c. Really simple. I can then issue the command

    gcc const.c -O -S -o-

which runs GCC, with the arguments:

  • const.c — the name of the source file
  • -O —  to perform optimisation (usually makes the generated code simpler and easier to understand)
  • -S — to stop after compiling – do not assemble or link (causes the compiler to output assembly)
  • -o- — to write the output (assembly) to stdout (straight to the terminal, not to disk)

(If you know nothing about assembly programming, it’s probably a good idea to start with a simpler architecture — for me, my first was SPU, which I think is one of the easier architectures to start with. x86/x86_64 is not a good place to start)

To start, let’s look at what my x86_64-targeting GCC produces:

(note that I’ve added one extra option to the command line above: -masm=intel, which tells GCC to output the easier to understand Intel assembly syntax instead GCC’s default AT&T syntax)

          .file   "const.c"
          .intel_syntax noprefix
          .text
  .globl three
          .type   three, @function
  three:
          mov     eax, 3
          ret
  .LFE0:
          .size   three, .-three
          .ident  "GCC: (Gentoo 4.5.2 p1.0, pie-0.4.5) 4.5.2"
          .section        .note.GNU-stack,"",@progbits

That’s a lot of … stuff … for a simple one-line function. Most of the output is information for the assembler that isn’t all that important for what we’re interested in.

How I’m deciding what is important:

  • Words followed by a : (colon) are labels, e.g. three: and .LFE0:. Any label that is referenced somewhere is interesting; the rest are not.
  • Lines that start with a . (period) and are not labels (e.g. .globl, .size) aren’t interesting to us — they’re there for the benefit of the assembler.

Throw away the uninteresting pieces and what’s left?

  three:
          mov    eax, 3
          ret

Hopefully, you can see some connection between this and the C program above — there’s a three: label, the number 3 is there, and ret is probably short for return…

So how do we work out what it really means?

Grab a copy of the two Intel Instruction Set Reference PDFs, open the second one and look for ret. There are 12 pages describing this instruction (actually mnemonic, not instruction — more on that in a moment), but the first of those is enough to see that it tells the processor to return to the calling procedure — so it is our ‘return’. (How it knows where to return to is a question for another time)

Regarding the other line, we see a mnemonic (mov) and two operands (eax and 3). To be clear about terms, operands are those things that the instruction operates upon, and (from the Instruction Set Reference):

A mnemonic is a reserved name for a class of instruction opcodes which have the same function

This means that mov does not represent one particular instruction but instead indicates that the assembler should generate an appropriate instruction for the mnemonic and operands provided. You can find mov (Move) in the Instruction Set Reference. It is used to move data from one place to another.

The operands for mov here are eax and 3 — one is a register name, the other is a literal value. In this case, eax is the register that is used to store the function’s return value (I’m not going to say more about x86 registers for now, except to link to a descriptive image). The 3 is the number being put in the eax register.

To summarise:

mov eax, 3 moves the value 3 into the eax register.

We learned a thing!

Equal but different

There’s a lot in common between assembly languages for different architectures — once you know the basics of the structure (labels, instructions, assembler directives) it’s just a matter of working out the details. Just.

Here’s what GCC generates for a simple immediate load for some other architectures:

PowerPC
        # load the immediate value 3 into general purpose register 3
          li 3,3

Immediate value of 3 into register 3? Which is the register, and which is the immediate? Unlike x86, registers are specified by number, not name, so you need to know a bit about the architecture to be able to interpret the assembly language. Fortunately the operand order is the same as for x86 Intel syntax: destination, source. You could think of li 3,3 as something like register(3) = 3.

If you read through the PowerPC Architecture Book (in three volumes), you’ll see (if you can stay awake) that there is actually no li instruction. In this case, li is an extended mnemonic — a shorthand form of an instruction intended to make assembler language programs “easier to write and easier to understand.” Under the hood, li is actually addi (Add Immediate); the details of that are also something for another time.

SPU
        # immediate load the value 3 into register 3
          il $3,3

You can find il in the SPU Instruction Set Architecture document. Again, the operands are ordered as destination, source — in this case, the register is specified with a $ prefix, making it slightly easier to differentiate between the two.

As an extra bonus, the il instruction puts the number three in a register four times — four 32-bit values stored in a 128-bit register.

Hopefully you’re noticing some patterns by now: the destination and source operand ordering, the slight notational variations between architectures to keep you on your toes, the seemingly random and inexplicable mnemonic names. This is assembly — it’s great!

ARM
        @ move the value 3 into register r0
          mov r0, #3

You’ll find mov in the ARM Architecture Reference Manual — as was the case for x86, it’s a mnemonic that covers a class of instruction opcodes. To make it even more interesting, the manual lists five different bit-sequences that may be used to encode the Move Immediate instruction — ARM is weird like that (yet again, something for another time).

There are minor differences: constants are prefixed with #, and the register is called r0. And comments start with @, which makes me uncomfortable…

Summing up

Four out of four instruction set architectures examined can be used to load a value into a register! Send money now!

Hopefully, I’ve managed to convey the basics of reading and starting to dissect and understand the assembly language generated by your compiler. There’s obviously a lot more to it than loading data into a register, but we see even from this trivial operation that there’s a lot in common between assembly languages across different architectures (at least, between those presented here). This doesn’t answer the question about how to construct the constant as requested by Neil, or even why that process would be more difficult than the single instructions described above. I hope to answer (or get closer to answering) these questions (and more!) in later episodes.

(Please let me know if you found this useful, informative, dull, too simplistic, containing too many assumptions, is too narrow or verbose. Leave a comment below, or message (and follow) me on twitter. Thanks :D)

[Photos by Karen Adamczewski]

Designing First Time User Experiences for Social Games

Original Author: Mitch Zamara

Disclaimer: The following are my own opinions, and observations, and do not represent those of my employer, either past or present. Nothing in this document represents confidential or privileged information. Use this information at your own risk.

After spending the last 2 years working in social games, I’ve had a chance to play through a very large number of social games, from the biggest games on the platform to many smaller ones. One thing that I’ve noticed across a large number of these games is their ability to borrow mechanics, viral flows, concepts, systems, and a large number of other things. However, the one thing that I rarely see properly emulated is the first time user experience (FTUE). The FTUE is arguably (in my opinion) the most important part of the game that you will build. It’s the only thing, aside from your loading screen, that all installed players will see. The number of players who make it to session 2, session 3, etc. will determine the ability of your app to “hold water”, as those players remain and bring their friends to the application. I wanted to share 5 observations/tips/lessons that will hopefully help fellow social game developers in creating successful FTUEs.

Design, Test, and Iterate; Early, and Often!
The second you have your core loop implemented in your game, you should start designing and testing your first time user experience. This will not be something you just do once and are done. (And if you do, you’re doing it wrong!) Play through it, have everyone on your team play through it, and have everyone submit notes and feedback. Isolate the good feedback, update your tutorial, and test again! Keep doing this until the very day you launch. Hell, after you’ve launched you should keep tweaking and optimizing your FTUE.

Limit the Number of Steps
While it’s important to guide your players through the first steps of your game, teaching them the very basics, it’s also important to limit the length of your guided tutorial. Empower your players to use the mechanics and methods you teach them, and never pull control from them for too long. Look at the successful games on the platform that have a guided tutorial, and look at how many steps they have. The best games are able to capture the core loop in a handful of steps, and transition out of ‘tutorial mode’ quickly. The player should have enough knowledge by now to complete basic tasks and objectives (like quests) until you start to unfold the subsequent mechanics and systems of your game.

Don’t Expose Too Many Features
As the space has evolved over the last couple of years, the number of features, systems, and mechanics included in each game has risen significantly since the early days of games like Parking Wars and Mouse Hunt. New games have complex economic systems, character customization, and a wide range of other features that seem simple, but together can easily overwhelm an average player. That’s why you should limit what features are exposed to players. If you have multiple resource types, then front-load the player with the secondary resources, and teach them how to make the primary resource. Once they get that down, show them how to earn the secondary resources. Focus on the features that are the most eye-catching, enjoyable, and fun, and delay the rest of the supporting features until later sessions. Players won’t feel overwhelmed, and they’ll be more likely to pick up the complexity of your game if it is served to them in small, digestible bites.

Present Enough Viral Opportunities
This topic extends the prior point about your app needing to ‘hold water’. In addition to retaining the players who install your app, those same players need to become your means of reaching new players! The way to achieve this is by presenting enough viral opportunities. A great example of this is the FTUE deconstruction of CityVille done by Kevin Rose. In the first 3 levels of the game (less than the length of the first session) the player is asked to invite friends, send gifts, and post to their wall a total of 9 times! While this may seem like a lot of opportunities to ‘spam your friends’, almost all of these scenarios feel like natural opportunities to share information with your friends. Rarely do any of them feel forced upon the player in any way. If you do implement these viral opportunities, it’s critical to make them feel genuine and inviting for the player, or they’ll feel pressured and never want to share their experience with friends.

Measure and Track Your Results
It should really go without saying, but unless you have hard numbers to back it up, your instincts about how successful your FTUE really is are likely wrong. Measuring your install funnel and identifying what % of players make it through each step/quest in your first session is extremely critical. If you have hooks established for every single guided click action, you can quickly determine where the sticking points are for your new players, and how effective your adjustments are. If you’re tracking all your quest start and quest completion points, you’ll also be able to tell which quests players are getting stuck on. If you’re tracking each quest task, you’ll know which tasks are too hard or complicated. Put significant effort towards tuning, modifying, and maximizing your FTUE conversion (players who become regular players) and you will stand the best chance at retaining players and growing your application.

Hopefully these tips will be of some use to those of you working in this new and exciting space. Please feel free to follow me on Twitter (@mzamara) or leave a comment if you like what you’ve read!

Choreographing Behavior: 2 examples

Original Author: Rune Vendler 

In this article, I am going to describe two older games that feature behavior systems (or whatever they should be called) that I like and find interesting. Both systems address the gray area where AI meets scripting, and offer solutions tailored to their respective technologies and game design. There are of course many more details to these games than I can cover here, so bear with me when I simplify or leave out details here and there. (I originally had a third example too, from a more modern game made by my company, but I did not get legal clearance at this time to post it.)

#1 Doomdark’s invasion plan

In Lords of Midnight (Mike Singleton, 1984), an epic battle for control over the land of Midnight is waged. In the southern half of the country, the Lords of the Free are holed up in their keeps and citadels. In the northern half of Midnight, the evil Witchking Doomdark has amassed a huge army and is now starting his invasion of the Free. Starting with a few key characters under your control, your task is to unite the Lords, defend the south, and eventually defeat Doomdark.

Lords of Midnight, in all its 8-bit glory

Lords of Midnight was a cross between an adventure game, a role-playing game, and a strategy game, offering multiple ways to be completed, and many ways to play it. It is probably best remembered for its “Landscaping” graphics technology which built static world views out of billboards, but in this article, I am going to take a look at something else: how Doomdark’s armies move south, conquering the fortresses on their way, and ending up at the citadel of Xajorkith for the final siege.

The land of Midnight

(Map originally from Crash magazine, found on the internet)

The Doomguard

Doomdark has at his disposal 128 regiments, the Doomguard. Each regiment starts at a specific hardcoded map location and with a specific activity already assigned to it. There are four types of activities:

  1. Follow a specific character – these regiments move towards the current locations of specific characters in the game.
  2. Wander – these regiments just wander around.
  3. March to location – these regiments march towards a particular map position as long as that location has its “interesting” flag set. If it’s cleared (e.g. all key characters leave that location), they will stop in place. (If the flag is set again, they will continue their movement towards the location.)
  4. March to waypoint – these regiments make up the majority and are the fun ones that I am going to talk about.

In addition to their specified activity, all regiments are able to react to something happening close by: if they spot an enemy within a certain distance, they will temporarily suspend their goal to engage them, and so on.


So, when the game starts, the regiments of type 4 start marching toward their respective target waypoints, reacting to any Free armies they pass along the way. Most waypoints are locations of Free keeps or citadels, so there is likely to be a battle once they get there. What happens once the battle is over, and the regiment has reached the waypoint?

Each waypoint holds indices to two other waypoints (112 in total, IIRC), and the regiment randomly picks one of the two and starts heading towards that new waypoint. That’s pretty much it. This graph of waypoints is the essence of the invasion plan. It was created by hand and hardcoded into the game. I did a quick plot of it, and here’s what it looks like:

The black blobs are the waypoint nodes, and the arrows indicate the two other waypoints that each node connects to. Notice that a waypoint can have one or two of these indices pointing back to itself (shown by a number inside the waypoint indicating if one or two of the indices loop back). This can be used to “park” regiments at strategic locations once captured. Here is the waypoint data for Doomdark’s home citadel:

Waypoint #6, name=”The Citadel Of Ushgarak”, position=”29,8”, connections=”6,14”

So, an army arriving at Ushgarak has a 50% chance of staying for another day (if it picks the first connection), and a 50% chance of heading for waypoint 14. That’s pretty much it! The behavior of the regiments is a function of their base AI and this graph. The random choice of next waypoint guarantees that the game will be subtly different on every playthrough. Hack the waypoint data, though, and you can create a very different invasion!
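The whole mechanism fits in a few lines. Here is a sketch of the data and the arrival step as described above (struct and field names are my own illustration, not the game’s actual tables):

```cpp
#include <cstdlib>

// A waypoint: a map position plus the indices of its two possible
// successors. An index that points back at the waypoint itself lets a
// regiment "park" there for a day at a time.
struct Waypoint {
    int x, y;
    int next[2];  // indices into the waypoint table
};

// On arrival, a regiment flips a coin and starts marching towards one of
// the two connected waypoints (possibly staying put for another day).
int PickNextWaypoint (const Waypoint* table, int current) {
    return table[current].next[std::rand() & 1];
}
```

Run against the Ushgarak entry above (connections 6 and 14), the result is either 6 (stay another day) or 14, each with 50% probability.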

#2 Tactics in SWOS

Sensible World of Soccer (Sensible Software, 1994) is a much celebrated football/soccer game that has seen many revisions and remakes over the years. While it is very action-focused and fast-paced, it also manages to have a tactical side, and a key component of this is the team tactics. Players choose a tactic for their team, e.g. a 4-4-2 formation, and this tactic defines how the different characters will position themselves and move around the field. Here’s how it works:

The soccer pitch, with the tactics areas overlaid

The pitch is divided into 5*7 = 35 areas, and for each field player on the team (10 in total), the tactic contains a table for where he should be when the ball is in each of these areas, i.e. “if the ball is in area N, this player should be at position X,Y”. So a tactics file is 10 players * 35 entries for each player. We can move the players around the field with this algorithm:

  1. Figure out which area the ball is in
  2. For each player,
    1. Look up his target position for when the ball is in that area
    2. Move him towards that position
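The two steps above really are the whole algorithm. Here is a sketch in C++ (the 5*7 grid comes from the description; the pitch dimensions and all names are my own illustration):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

const int   kCols = 5, kRows = 7;               // 5*7 = 35 ball areas
const float kPitchW = 100.0f, kPitchH = 140.0f; // arbitrary pitch units

// One player's tactic: a target position for each of the 35 ball areas.
struct PlayerTactic { Vec2 target[kCols * kRows]; };

// Step 1: which of the 35 areas is the ball in?
int BallArea (Vec2 ball) {
    int cx = (int)(ball.x * kCols / kPitchW);
    int cy = (int)(ball.y * kRows / kPitchH);
    if (cx < 0) cx = 0; else if (cx >= kCols) cx = kCols - 1;
    if (cy < 0) cy = 0; else if (cy >= kRows) cy = kRows - 1;
    return cy * kCols + cx;
}

// Step 2: move the player towards his target for that area.
void StepPlayer (Vec2& pos, const PlayerTactic& tac, Vec2 ball, float speed) {
    Vec2 t = tac.target[BallArea(ball)];
    float dx = t.x - pos.x, dy = t.y - pos.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len <= speed) { pos = t; return; }  // close enough: snap to target
    pos.x += speed * dx / len;
    pos.y += speed * dy / len;
}
```

Bilinear sampling of the four surrounding areas (as in the demo video) removes the jump in the target position when the ball crosses an area boundary.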

This can then be supplemented with some AI that overrides the player’s actions when he is close to the ball (e.g. seek to control it), in possession of the ball (e.g. either move towards goal, shoot at goal, or pass to another player), or close to an opposing player who has the ball (e.g. tackle).


I put together a quick demo to show the system in action. Here’s a YouTube video showing it:

Code by me, all graphics found on the internet

Here’s what you’ll see:

  • A ball – this is moved around by me with the keyboard.
  • Field players – they will run towards their target positions, found with the algorithm above and indicated by a white blob connected to them by a line. (Often, a player will be on top of their blob, in which case you can only see the player.) The red team’s home is at the bottom, the black team at the top.

In sequence,

  1. A single player being driven around by the ball.
  2. The same player, but with bilinear sampling of the tactic data, so his target position moves smoothly instead of jumping when the ball changes area. (I keep filtering on for the rest of the video.)
  3. A full team moving around
  4. Two teams both moving around.

Both teams play the 4-4-2 tactic, so when the ball is around the middle of the field, you can see the symmetry. (Apologies for this, it might have been more fun with varied formations.)

The code driving the players around is not much more than 20 lines. Creating a tactic in the SWOS editor is certainly a bit of work, but look on the internet, and you’ll find lots of people creating custom tactics.

Why do I like these systems?

I think I like these two systems because:

1. They use very little abstraction.

We can probably all agree that unnecessary abstraction is a bad thing, but it’s a lot harder to agree on what exactly meets that criterion, and thus interesting to see how different developers have approached the matter. Both solutions are fairly specific: they solve only the problem they need to, within the domain at hand. To me, they very much come across as systems focused on their end result rather than the potential AI exercise.

2. They are simple

I think both systems are examples of combinations of simple code with clever (but equally simple) data. You can type up a test-implementation in no time, and because of the low level of abstraction (leading to very limited hidden state), you can get pretty well defined results with a minimum of debugging.

3. Their real power is in their data

A lot of the perceived intelligence and complex behavior resides in the data, so to speak, not in the algorithm itself. This is great, because it allows us to play with the data and observe the differing results. This ability to choreograph the behavior through “painting the data” is a great feature of the systems, as it’s one way to get around the friction that often occurs when you need to script some part of an (in principle) autonomous system.

4. They are fun to work with

Perhaps not a particularly highbrow point, but they are systems that it’s fun to think about, expand on, code up and play around with. 🙂

I like looking at these old games. There is often an element of nostalgia involved when looking back at classic games written on simpler technology, and perhaps a tendency to consider the technical (and design) solutions of those games “hacks” when viewed in light of our processing power today. There is a lot of good stuff in there, though; these games can often be applauded for their effective and economical approaches, and it can be worth spending some time taking your favourite classics apart!

Time and being Memorable

Original Author: Pete Collier


Thoughts on the Interview Process in the Game Industry

Original Author: Wolfgang Engel

The interview process in the game industry follows fairly common standards. Here are a few thoughts on the process of hiring a person. The knowledge covered on this page comes partly from my own experience, mostly on the employer’s side, and partly from anecdotal evidence. Consider this text an invitation to share your own thoughts and advice below.

Let’s start by dividing the interview process into three stages:

  1. Pre-Production – everything that happens before the interview
  2. Production – the actual interview
  3. Post-Production / Probation


Let’s assume the interested parties found each other via an ad or the recommendation of a friend. Now a first conversation happens between some senior people on the employer’s side and the person who wants to join them. Although an ad might have stated what kind of knowledge and tasks the job requires, it is important at this stage for both sides to find out as much as possible about each other and the challenges ahead.

There needs to be a well-defined job description on which all expectations can build. The industry uses standard terminology to describe certain task groups, like AI Programmer, Graphics Programmer, Lead Programmer, Technical Director, Producer, Technical Producer, Animator, Lead Animator, Designer, Lead Designer, Design Director, etc. It is good practice to stick with those terms; they represent a common level of organization.

After a relaxed round of first conversations, it is a good time to ask for second opinions about the potential employer or employee. It is common practice to find out how an employee was perceived at previous jobs by asking him for references or calling up people at the previous employer. To evaluate a potential employer, there are websites and communities that track how often a company lays off people and under which conditions, as well as first-hand descriptions of the working conditions. Some companies even publish press releases with this kind of data to please their investors. Some company websites show pictures of the working environment and quotes from the people who work there.

If everything goes well, an interview is set up. In most cases that means the interviewee flies in. Booking coach is probably the norm for interviews, but some people are oversized, so it is good practice to ask if they require an aisle seat. More senior people can expect business-class flights, especially for long-distance trips. Most companies have policies that describe who is allowed to fly business, first class, or coach. If there is only the coach option, it is good practice to mention those policies to senior candidates.

Picking a hotel for an interviewee can be a critical part of the interview process. A cheap hotel without proper heating during a cold spell makes even a die-hard candidate regret coming by. Even if the position is not very senior, small things like a friendly message placed on the table are a good way to make people feel comfortable, and hotels usually offer this service for free. The seniority of the person being hired should be reflected in the way he is accommodated. Companies typically have long-term relationships with hotels, and those relationships can be used to welcome a potential interviewee.


As in all parts of life, behavior patterns reveal the level of social skill and the overall vibe of both the people in the company and the interviewee.

It goes without saying that people need to be polite and friendly: offer water, make a guest feel comfortable. A candidate applying for a job that requires interacting with many people in the company needs to be vocal and able to spread a vibe that fits the company.

It is a mistake to start an interview by interrogating the interviewee about whether he really worked on the games he claims to have worked on. That kind of investigation should have happened before the interview.

The same is true for opening with questions about why so many people were laid off at the potential employer.

Being polite also means being interested in responses. If someone goes through the effort of expressing an opinion, acknowledging it, even with just a friendly nod, is a social skill common in most cultures. Cutting off responses sits at the other end of the spectrum.

The interview is also a bad time to behave arrogantly or cockily; you may well see this person again.

A good starting point for an interview is to ask candidates whether they can show off some of their previous work. Most senior people will show up with something that backs up their claims, such as screenshots, PowerPoint slides, or even source code or demos.

Giving the (certainly nervous) interviewee a chance to talk about an area where he or she feels comfortable will be appreciated by both sides.

For the interviewer, now is also not a good time to show off superior knowledge. Throwing in a few short, entertaining stories that let the interviewee smile, laugh, and contribute something is a better way to get to know someone. After all, a huge part of the interview process is finding out whether someone fits into an existing team. A creative team in the game industry is a completely different story from, for example, a sales team; the human factor plays a central role because the team has to create something together. A salesperson is out in the field alone, comes back with a number, and relies on a customer relationship that amounts to only a few hours of face-to-face time. A creative team, by contrast, stays together for years and has to overcome everything that comes up when humans live together in a small space. A complex social network defines the relationships between these people, and it is important to keep the team running through all the constantly changing love/hate (and in-between) relationships on board. People on the team may even be dealing with difficult personal relationships, so you end up with the mixture of chaos and randomness typical of family or close-friend scenarios.

In that context I like the following quote from the book “Team Leadership in the Game Industry”: “As will be seen, a major cause of people leaving a company is the perceived poor quality of their supervisors and senior management. The game business is a talent-based industry - the stronger and deeper your talent is, the better your chances are of creating a great game. It is very difficult, in any hiring environment, to build the right mix of cross-disciplinary talent who function as a team at a high level; indeed, most companies never manage it. Once you get talented individuals on board, it’s critical not to lose them. Finding and nurturing competent leaders who have the trust of the team will generate more retention than any addition of pool tables, movie nights, or verbal commitments to the value of ‘quality of life’.”

Apart from soft skills, hard skills are also required. For a junior position, there are usually prepared programming tests. If the candidate doesn’t know the right answer, it is good practice to move on to the next question; an interview is not the time to start teaching a topic. Senior people, on the other hand, are usually hired to extend the knowledge available in the company, so they need a chance to show what they can add.
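To illustrate the level such junior tests usually aim at, here is a minimal sketch of one classic question; the specific exercise and function name are my own example, not one taken from any particular studio’s test:

```python
def is_power_of_two(n: int) -> bool:
    """Classic junior-level interview question: is n a power of two?

    Uses the bit trick that a positive power of two has exactly one
    bit set, so n & (n - 1) clears that bit and yields zero.
    """
    return n > 0 and (n & (n - 1)) == 0


# Walking through edge cases, as an interviewer might:
assert is_power_of_two(1) and is_power_of_two(64)
assert not is_power_of_two(0) and not is_power_of_two(6)
```

The point is usually less the bit trick itself than watching how a candidate reasons about edge cases such as zero and negative numbers.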

On the Director / Manager level, it is good practice not to let future employees interview their future boss. That could damage a future working relationship up front and doesn’t help the team-building process; it is like starting a conversation about religion or politics - someone will be unhappy at the end. On this level, it is much more important to interface with the rest of the leads to “feel the vibe” and to define the interfaces that will be used to interact in the future.

An interviewee needs to appear vocal and self-motivated. There should be no harm in admitting that the answer to a question is unknown; there should be other opportunities to show knowledge, or it is simply not the right job. Bringing a notebook to demonstrate previous work is a great way to prove what an interviewee can bring to the table. For a programmer, nothing speaks louder than a demo, source code, and an explanation of how it was done. If no one wants to see a demonstration, maybe it is time to consider applying for a different job.

In general, if either party feels that the job is not appropriate or the candidate is not suitable, stop the interview and apologize. Don’t waste time on a job interview that won’t lead anywhere.

After a successful day of interviews usually comes the part where the details of the future contract are negotiated. It is good practice to defer the salary question and sleep on it for a night. Otherwise, the interviewee is expected to open the negotiations with a base salary. A good way to start is to state what you earned before, what you want to earn in the future, and why you think your contribution to the company will be worth it. In general, most people have a rough idea of what is paid elsewhere; as far as I know, the Game Developer salary survey is cut off at $200,000.

Post Production

If all the negotiations go well, a contract will be sent out to the future employee. Losing people by sending out the contract too late happens more often than one would think.

There does not seem to be an official probation period in the US, but it is good practice - similar to European countries - to observe the new hire closely for the first 3 to 6 months. That is a good time frame to verify whether the employee and the company are a good match; working with someone for 3 to 6 months is the best way to find out if he or she fits into a company. A new hire should likewise be prepared to wait out this period to see whether the new job is a good fit.