Agile – Specialisations Still Matter

Original Author: Lee Winder

I recently read a post titled ‘Scrum Prohibits all Specializations’. The part that struck me was the following:

I understand that Scrum has been applied mainly to software products and that the elimination of “specialties” means that the database programmer, UI programmer and QA engineer should all be able to perform each other’s roles equally. This is valid.

 

Now, I’m concerning myself only with the technical side of an agile team, but I’ve seen this raised in a number of different agile circles. In those cases there seems to be an impression that you could swap a database, physics or audio developer with any other specialisation, such as UI, animation or graphics, and an agile team should be able to roll up their sleeves and perform the different roles with the same outcome.

To me, this is emphasised in how the product backlog is often used: a priority- and risk-ordered document that doesn’t take into account the skill set of the team that’ll be working on the final product.

Processes such as pair programming, constant refactoring and code reviews (to name but a few) seem to be seen as ways to communicate not only intent and project information but also skillset and ability across an entire discipline.

 

So What Do Specialists Bring?

But we have specialist developers for a reason. They are great at what they do, they understand the area in which they work and they know how to get the best results in the shortest amount of time. They have a passion for the area they focus on, which usually means they’ll go a step further to research their area and keep up with developments in a way other developers may not have the time or the understanding to do.

Spreading your talent thin and assuming that people can fill each other’s shoes leads to the following issues:

  • You are not respecting the knowledge, skill, experience and passion that a specialist can bring to their work and as a result not respecting the developer themselves
  • You’re reducing the impact these people can have on a team and it’s often the experienced specialists that inspire younger members of the team into an area they are interested in
  • The ability of those specialists to learn more about their area and pass that onto others is drastically reduced.
  • The ability for the team to push their development boundaries will be indirectly reduced as everyone on the team aims for the ‘generalist’ role to fit in

 

What About Pair Programming?

Now I’m a massive fan of the various agile techniques out there. Pair programming is an excellent mentoring, development and training tool but it won’t allow one developer to fit into the shoes of another. True, they will have a better understanding of the tools, pipeline and systems being developed which will allow them to fill in, but it won’t transfer the amount of background experience the specialist has.

The same goes for code reviews, constant refactoring and feature discussions. It spreads the knowledge which reduces the risk to the project should the specialist not be around when needed, but the core experience and drive that made the specialist who they are simply cannot be replaced by dropping in a new developer.

 

But Everyone Does A Bit Of Something Every Once In A While?

Of course, sometimes people do need to jump into another developer’s shoes (illness, staff turnover, hit by a bus, etc.) but this is not the same as expecting people to be able to fulfil each other’s roles equally. We can take steps to decrease the impact this will have on the team using the processes mentioned above, but it will not allow those specialists to be interchanged as the project continues development.

We need specialists in any development field because it’s these people that can push their respective fields in directions we might not even be able to imagine. By treating them as interchangeable we might be gaining flexibility to schedule our staff, but we’re losing something far more important and vital to a development team and the products they are creating.

As I said to someone (in 140 characters or less, of course) when it was pointed out that people have done this, and that even the author of the original post has done it (see the comments):

I’m sure he has done it, and I’ve done similar, but it doesn’t mean we did both with the skill of an expert in either.

 


A question in regards to storytelling

Original Author: Chad Moore

Last week I attended an “Acting for Animators” training session from Ed Hooks. Mr. Hooks has been offering this class since around the time PDI was developing the film “Antz”. Ed is an actor by profession and was brought in to PDI to teach their animators acting principles. He’s written two books on the subject and has since evolved the class to address animation in video games to a certain extent. This is a very informative class for anyone involved with animation and, I think, anyone charged with telling stories. I’m not a Game Designer, but I feel Game Designers would benefit greatly from the course.

Sorry I don’t mean to sound like a commercial for the class, but that’s the background needed for my question.

Ed, like all good actors, speaks of empathy a lot. The difference between empathy and sympathy is important to note.

  • Sympathy is when you feel sorry for someone.
  • Empathy is when you identify with another person’s feelings.

To feel sympathy or empathy you must observe the actions of another person. You can’t feel sympathy or empathy for yourself. Sure, you can feel sorry for yourself, but that is an emotional reaction to your own situation.

If you’re still with me, let’s apply this to games. The player controls the character in a game, they are making the choices and not observing them. Therefore, the player cannot have empathy towards the character he/she controls.

I think that is a barrier to great storytelling in games. We’ve evolved ways to try and get around this; the escort quest and cutscene cinematics are primary examples. Escort quests can get a little tiresome, however, save, IMHO, Ico. I really was emotionally engaged with protecting that character for the duration of the game. Cutscenes are, well, cutscenes. There are many pros and cons; too many for one short article.

My (two part) question is this… How much is this barrier blocking immersion, fun and ultimately empathy? Can you point out examples that show how this is overcome?


Trouble with Triangles

Original Author: Sam Martin

In my previous post, I wrote about the navigation system in Black & White 2. My headline summary would be that computational geometry is both a powerful and complicated tool; you should therefore seriously consider it, but approach with some caution. Identifying which problems are safe to tackle and which are still research problems is not obvious. Some simple-sounding extensions to well-known algorithms can quickly take you into no-man’s land. But the payoff for a geometric approach can be significant. Picking the right problem together with the right algorithm can produce highly efficient and compact solutions. In this post, I’ll cover a few areas to avoid.

For the sake of brevity, and ignoring many details, the navigation systems we built at Intrepid and Lionhead looked broadly similar to other mesh-based navigation systems, such as PathEngine. There was a mesh that described the navigable space, and actors found paths and traversed them. I would still prefer this approach of using a navigable mesh if I were to return to navigation tomorrow. The common alternatives of storing only blocking geometry or using a graph of path nodes may be simpler, but we got significant additional mileage out of having an explicit, query-able representation of the free space, so I believe it’s worth the extra complexity. There was a typical list of design criteria for each version of the navigation system, but the most important one for this discussion was that both of them should be “robust”: both in the sense that they never shattered the player’s immersion with daft-looking AI, and in the sense that all game code is mission critical and therefore cannot be too memory or CPU hungry, and should never, ever crash. Good navigation should be invisible.

Accuracy didn’t appear in the design criteria, but the term “accurate” in the context of computational geometry is frequently another way of saying “functional”. Inaccurate geometric algorithms don’t just return fuzzy results, they tend to crash. This is an inherent issue with the computational approach. In most geometric algorithms, the decision-making aspects place assumptions on the state of the geometric data for correct and efficient operation. If a triangle is flipped due to an inaccurately computed position, an algorithm that depends upon the winding order to operate will fail. In general, topological changes in your geometry will cause algorithms to fail, and inaccuracy in computing vertices can give rise to topological changes. So even if your application can tolerate a reasonable degree of numerical inaccuracy, for many computational geometry algorithms to work in a game context, the requirement to be robust also implies the need for accuracy.
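To make the winding-order point concrete, here is the textbook 2D orientation test (a sketch for illustration; the names are mine, and this is not code from our system). The sign of a cross product classifies the winding of triangle (a, b, c), and a rounding error that pushes a near-zero result to the wrong side of zero silently flips the reported winding: exactly the kind of topological inconsistency described above.

```cpp
// Classifies the winding of triangle (a, b, c) via the sign of the
// 2D cross product of (b - a) and (c - a):
//   > 0  counter-clockwise, < 0  clockwise, == 0  collinear.
// With floating point, near-collinear inputs can be rounded onto the
// wrong side of zero, silently flipping the reported winding.
double Orient2D(double ax, double ay,
                double bx, double by,
                double cx, double cy)
{
  return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}
```

Any algorithm that trusts this sign to be consistent across its input will misbehave the moment rounding disagrees with the true geometry.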

Floating point arithmetic was not designed for the kind of accuracy computational geometry requires. It is non-associative ((a*b)*c != a*(b*c)), tends to be affected by optimisations, and the accuracy of a given computation is impractically hard to predict ahead of time. There is an interesting paper by Jonathan Shewchuk, “Robust Adaptive Floating-Point Geometric Predicates”, which describes an approach to making floating point computation robust for certain geometric calculations, but it’s non-trivial and not completely portable, and I would therefore hesitate before recommending it. Otherwise, there are 3 rough (non-exhaustive) categories of solutions:

  1. Adopt some form of ‘infinite precision’ representation;
  2. Restrict your calculations to a particular integer range;
  3. Avoid scenarios where you need to answer this question.

It should be clear that the most preferable solution is to avoid situations where accuracy becomes a stability issue, but if you adopt this approach too strictly you effectively rule out the majority of computational geometry routines and the efficiency they bring.

Some routines are not inherently problematic. Convex hull or triangulation algorithms can be made completely robust by working with integer coordinates (and may even be fine with floats for some uses). The most troubling problems involve accurately computing intersection points, which may be required as an inherent part of an algorithm (e.g. constructive solid geometry (CSG), a.k.a. boolean geometry operations), but also as a pre-process to remove overlaps before another, simpler algorithm can accept the input. For example, most 2D triangulation algorithms require non-overlapping line segments as input, so even if they are easy to make accurate, they may just have pushed the problem further up the pipeline.

The reason intersection points are a problem is that even if you start with your vertices at integer coordinates, an intersection point of two lines will not in general be an integer. If you fail to represent the intersection point accurately you may introduce a topological change. The remaining solutions therefore boil down to whether you try to accurately represent the intersection, or whether you force a topological change in a way you can predict and manage.
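As a small illustration (the layout and names are mine, not from our codebase): writing the segments as p + t·r and q + u·s, the intersection parameter t is a ratio of two cross products, so even with integer endpoints it is rarely a whole number.

```cpp
// Integer 2D points and the cross product used by the standard
// parametric segment-intersection formula.
struct Pt { long long x, y; };

long long Cross(Pt a, Pt b) { return a.x * b.y - a.y * b.x; }

Pt Sub(Pt a, Pt b) { Pt d = { a.x - b.x, a.y - b.y }; return d; }

// For segments p->p2 and q->q2, returns t = num/den where the
// intersection point is p + t * (p2 - p). den == 0 means parallel.
bool IntersectionParam(Pt p, Pt p2, Pt q, Pt q2,
                       long long* num, long long* den)
{
  Pt r = Sub(p2, p);
  Pt s = Sub(q2, q);
  *den = Cross(r, s);
  *num = Cross(Sub(q, p), s);
  return *den != 0;
}
```

For the segments (0,0) to (3,1) and (0,1) to (3,0), this gives t = -3/-6 = 1/2, placing the intersection at (1.5, 0.5): off the integer grid, so representing it exactly requires either rational coordinates or a controlled rounding step.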

I’m not a big fan of ‘infinite precision’ representations for use in games. They are extremely convenient in some settings, but they are expensive and are not necessarily a fix for robust operation by themselves. For such a representation to be robust, you need to be able to guarantee that the amount of precision you require is bounded. If your algorithm generates new vertices as it goes along, and then loops to generate further vertices based on the previously generated ones, you may find you require an ever-increasing amount of precision. If you can bound the amount of precision you require, then there are a few papers in the games industry press that discuss workable solutions, such as “Using Vector Fractions for Exact Geometry” by Thomas Young in Game Programming Gems 3, for example.

For Black & White 2, the player effectively had the ability to draw on the navigation mesh. They would place buildings and draw roads, arrange dynamic objects such as rocks and trees, and cast landscape-altering miracles. All of these operations triggered CSG-like modifications to the underlying mesh. We would resolve all line intersections, re-triangulate the affected areas of the map, flood-fill regions with navigation codes, and update the concurrently navigating villagers and armies. The result was an efficient solution, particularly compared to an early multi-res grid approach, but it was a complex system even by recent standards.

To make our approach robust, we had opted for a 16-bit 2D integer representation and “snapped” all intersections onto this grid as they occurred. The snapping was based on “Snap Rounding”, which is a means of intersecting a set of line segments in a way that can handle the topological changes that the rounding will introduce. The snapping kept a tight bound on our precision, even after continued modification by the player, and allowed us to make guarantees about the accuracy – and therefore robustness – of the system. The 16-bit resolution turned out to be more than sufficient, even for the large terrain in a God-game. An attractive side effect of the approach was that we could reduce the resolution without affecting the robustness of the code, which also gave us a simple means of stress-testing corner cases.
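For illustration only, the rounding half of that process might look like the sketch below (this is my simplification, not the shipped code; real snap rounding must also re-route any segment that passes through the “hot pixel” surrounding a snapped vertex):

```cpp
#include <algorithm>
#include <cmath>
#include <stdint.h>

// Snap a rational coordinate num/den (e.g. an intersection coordinate
// expressed exactly as a fraction) to the nearest point on the 16-bit
// grid, clamping to the representable range. A double is precise enough
// here because the target grid is only 16 bits wide.
int16_t SnapToGrid(long long num, long long den)
{
  const long long v =
      std::llround(static_cast<double>(num) / static_cast<double>(den));
  const long long clamped =
      std::min<long long>(32767, std::max<long long>(-32768, v));
  return static_cast<int16_t>(clamped);
}
```

The important property is that every output coordinate is guaranteed to lie on the grid, so downstream code never sees an unrepresentable vertex.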

However, alongside the importance of accuracy for correct operation, the second great Achilles’ heel of computational geometry is that it can be hard to compose. Joining algorithms together in a pipeline is not hard, but combining algorithms together can involve developing a new algorithm. Snap rounding by itself is fiddly, but not a major challenge to a good programmer. Delaunay triangulation by itself is also fiddly but will not induce excessive panic. An incremental form of Delaunay triangulation that uses snap rounding to resolve intersections and also maintains application-generated triangle attributes, global connectivity information and concurrent searches is – unsurprisingly – greater than the sum of its parts, and at least a man-year of effort.

The result worked very well, met the (typically) ambitious game design, and I expect it was more efficient than the simpler alternatives. But game development is a tough business. I think you need a very special case to justify this kind of effort, even if the compromise is hard to take.

 


Quasi compile-time string hashing

Original Author: Stefan Reinalter


One scenario that is quite common in all game engines is having to look-up some resource (texture, shader parameter, material, script, etc.) based on a string, with e.g. code like the following:

Texture* tex = textureLibrary->FindTexture("my_texture");
ShaderParameter param = shader->FindParameter("matrixWVP");

For today, we are not so much concerned about how the key-value pairs are stored, and how the look-up is done internally (read this excellent post instead), but rather how to get rid of any hashing done at run-time. You might be surprised at how good C++ compilers have become when it comes to evaluating expressions known at compile-time, so let us put that to good use.

Assuming all our textures are stored in a table using some kind of (Hash, Texture) key-value pair (be it in a std::map, a simple array, or something more advanced), the code for finding a texture would vaguely look as follows:

Texture* TextureLibrary::FindTexture(const char* name)
{
  const unsigned int hash = CalculateHash(name);
  // lookup the texture in some container using 'hash'
}

So even for constant strings, CalculateHash() will still be called, and the hash has to be generated each and every time this function is called with a constant string. This is something we would like to get rid of.

The first step is to ensure that each and every class dealing with hashes makes use of the same functionality, while still retaining the same syntax for users of those classes. So instead of taking a const char* as argument, we could define a simple class holding nothing more than a hash internally, which will be passed to the function by value instead:

class StringHash
{
private:
  unsigned int m_hash;
};

Texture* TextureLibrary::FindTexture(StringHash hash)
{
  // no more hash calculations in here
  // lookup the texture in some container using 'hash'
}

This way, our hashed string class can tackle the problem of distinguishing between constant and non-constant strings, and we can be sure that no hash calculations take place inside the function. But still, how do we make sure that constant strings are not re-hashed every time?

A simple hash function

Let’s use the FNV-1a hash (http://isthe.com/chongo/tech/comp/fnv/) with the following implementation:

unsigned int CalculateFNV(const char* str)
{
  const size_t length = strlen(str) + 1;
  unsigned int hash = 2166136261u;
  for (size_t i = 0; i < length; ++i)
  {
    hash ^= *str++;
    hash *= 16777619u;
  }

  return hash;
}

Looking at the above FNV-1a hash implementation, let’s try to figure out the resulting hash values for known strings (don’t forget about the null terminator):

  • "" = (2166136261u ^ 0) * 16777619u
  • "a" = (((2166136261u ^ 'a') * 16777619u) ^ 0) * 16777619u
  • "ab" = (((((2166136261u ^ 'a') * 16777619u) ^ 'b') * 16777619u) ^ 0) * 16777619u

The algorithm’s offset and prime are compile-time constants (I used 2166136261u and 16777619u in the example above), so these expressions really are constant.
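As a quick sanity check (a throwaway snippet, not part of the final class), we can confirm that the unrolled constant expression for a short string produces exactly the same value as the run-time loop:

```cpp
#include <string.h>

// The run-time FNV-1a implementation from above, repeated here so the
// snippet is self-contained; it hashes the string including its null
// terminator.
unsigned int CalculateFNV(const char* str)
{
  const size_t length = strlen(str) + 1;
  unsigned int hash = 2166136261u;
  for (size_t i = 0; i < length; ++i)
  {
    hash ^= *str++;
    hash *= 16777619u;
  }

  return hash;
}
```

For example, CalculateFNV("a") equals (((2166136261u ^ 'a') * 16777619u) ^ 0) * 16777619u: one round for 'a' and one for the terminator, exactly as unrolled above.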

Helping the compiler

All we need to do is give the compiler some help, which can be achieved by providing concrete implementations for strings of different lengths. Let’s put those into our StringHash class:

class StringHash
{
public:
  ME_INLINE StringHash(const char (&str)[1])
    : m_hash((2166136261u ^ str[0]) * 16777619u)
  {
  }

  ME_INLINE StringHash(const char (&str)[2])
    : m_hash((((2166136261u ^ str[0]) * 16777619u) ^ str[1]) * 16777619u)
  {
  }

  // other implementations omitted
};

In case you’re not familiar with the syntax (and admittedly it’s one of the more awkward ones in C++), the constructors take references to const-char-arrays of sizes 1, 2, and so on. Providing different implementations for strings of different length tremendously helps the compiler/optimizer to fold the constant expressions into the final hash. In addition, we force each constructor to be inlined using ME_INLINE, which is a platform-independent variant of e.g. __forceinline (Visual Studio 2010), passing an additional hint to the optimizer.

Spotting the pattern

Many of you may have already spotted the (offset^constant)*prime pattern in the above examples, and indeed we can easily implement our constructors by utilizing the preprocessor:

#define PREFIX(n, data)   ((
#define POSTFIX(n, data)  ^ str[n]) * 16777619u)

#define ME_STRING_HASH_CONSTRUCTOR(n)                                             \
  ME_INLINE StringHash(const char (&str)[n])                                      \
    : m_hash(ME_PP_REPEAT(n, PREFIX, ~) 2166136261u ME_PP_REPEAT(n, POSTFIX, ~))  \
  {                                                                               \
  }
 
   
 
class StringHash
{
public:
  ME_STRING_HASH_CONSTRUCTOR(1)
  ME_STRING_HASH_CONSTRUCTOR(2)
  ME_STRING_HASH_CONSTRUCTOR(3)
  ME_STRING_HASH_CONSTRUCTOR(4)
  ME_STRING_HASH_CONSTRUCTOR(5)
  ME_STRING_HASH_CONSTRUCTOR(6)
  ME_STRING_HASH_CONSTRUCTOR(7)
  ME_STRING_HASH_CONSTRUCTOR(8)

  // other constructors omitted
};

Did it work?

With this in place, let’s take a look at the assembly code generated by the compiler to see whether our trick really worked. We will use the following simple example for that:

printf("Hash test: %d", StringHash("").GetHash());
printf("Hash test: %d", StringHash("test").GetHash());
printf("Hash test: %d", StringHash("aLongerTest").GetHash());
printf("Hash test: %d", StringHash("aVeryLongTestWhichStillWorks").GetHash());

The resulting assembly code (Visual Studio 2010) looks like this:

printf("Hash test: %d", StringHash("").GetHash());
  01311436  push        50C5D1Fh
  0131143B  push        offset string "Hash test: %d" (13341ECh)
  01311440  call        printf (1328DD0h)
printf("Hash test: %d", StringHash("test").GetHash());
  01311445  push        0AA234B7Fh
  0131144A  push        offset string "Hash test: %d" (13341ECh)
  0131144F  call        printf (1328DD0h)
printf("Hash test: %d", StringHash("aLongerTest").GetHash());
  01311454  push        444D1A47h
  01311459  push        offset string "Hash test: %d" (13341ECh)
  0131145E  call        printf (1328DD0h)
printf("Hash test: %d", StringHash("aVeryLongTestWhichStillWorks").GetHash());
  01311463  push        6D9D8B39h
  01311468  push        offset string "Hash test: %d" (13341ECh)
  0131146D  call        printf (1328DD0h)

As can be seen from the generated instructions (push 50C5D1Fh, push 0AA234B7Fh, push 444D1A47h and push 6D9D8B39h), the compiler/optimizer was indeed able to fold every string into its corresponding hash at compile-time, completely eliminating all traces of any StringHash constructor.

Finishing touches

All is well for constant strings, but what about non-constant ones? Of course we might want to use some kind of non-hardcoded string (e.g. a std::string, or strings coming from a file) every now and then, and need to provide a constructor for those as well:

class StringHash
{
public:
  StringHash(const char* str)
    : m_hash(CalculateFNV(str))
  {
  }

  ME_STRING_HASH_CONSTRUCTOR(1)
  ME_STRING_HASH_CONSTRUCTOR(2)
  ME_STRING_HASH_CONSTRUCTOR(3)
  ME_STRING_HASH_CONSTRUCTOR(4)

  // other constructors omitted
};

Now we’re in trouble. C++ overload resolution dictates that const char (&str)[N] is as good a match for any of the constructors as const char*, because every array decays into a pointer automatically. This means that our class no longer works, because every constructor call is ambiguous:

// error: constructor overload resolution was ambiguous
printf("Hash test: %d", StringHash("test").GetHash());

What we need to do is make the overload resolution process jump through another hoop for const char* arguments, so that constant strings are considered first, and non-constant ones second. This can easily be achieved with the following trick:

class StringHash
{
public:
  struct ConstCharWrapper
  {
    inline ConstCharWrapper(const char* str) : m_str(str) {}
    const char* m_str;
  };

  StringHash(ConstCharWrapper str)
    : m_hash(CalculateFNV(str.m_str))
  {
  }

  ME_STRING_HASH_CONSTRUCTOR(1)
  ME_STRING_HASH_CONSTRUCTOR(2)
  ME_STRING_HASH_CONSTRUCTOR(3)
  ME_STRING_HASH_CONSTRUCTOR(4)

  // other constructors omitted
};

By making the constructor take a ConstCharWrapper instead of a const char*, we have altered the outcome of the overload resolution process. All constructors taking a reference to an array are now considered better matches by the compiler, because the constructor taking a ConstCharWrapper has to go through one implicit conversion, making the other constructors win in the case of constant strings. Similarly, non-constant strings can only be converted to a ConstCharWrapper implicitly, again disambiguating overload resolution.

Note that in order for this to work the ConstCharWrapper constructor is non-explicit on purpose.

Conclusion

Using the StringHash class introduced above, you can turn run-time hashing of constant strings into quasi compile-time constants without having to change calling code. I say “quasi” because the results cannot be used as true compile-time constants (e.g. in a switch-statement, or as a non-type template argument), but instead rely on the compiler/optimizer. It should be pointed out that this works in Visual Studio, as well as on all major console platforms (Xbox 360, PS3, Wii).

In addition, employing this trick reduces the size of the executable, because the constant strings are not used anymore and will therefore never end up in the read-only section of the executable, resulting in maybe a few extra KB of memory to use. This could also be beneficial for other applications making use of the above, e.g. compile-time string encryption, because the source strings are nowhere to be found.

 

Update: As suggested in the comments, you don’t need to use the preprocessor in order to define different constructors, but can use template meta-programming for that.

One such possible template solution is the following:

template <unsigned int N, unsigned int I>
struct FnvHash
{
  ME_INLINE static unsigned int Hash(const char (&str)[N])
  {
    return (FnvHash<N, I-1>::Hash(str) ^ str[I-1]) * 16777619u;
  }
};

template <unsigned int N>
struct FnvHash<N, 1>
{
  ME_INLINE static unsigned int Hash(const char (&str)[N])
  {
    return (2166136261u ^ str[0]) * 16777619u;
  }
};

class StringHash
{
public:
  template <unsigned int N>
  ME_INLINE StringHash(const char (&str)[N])
    : m_hash(FnvHash<N, N>::Hash(str))
  {
  }
};

Instead of writing several different constructors, we only need one templated constructor for constant strings now. This constructor in turn simply calls the Hash() function of the class template FnvHash. As can be seen, the FnvHash::Hash function just calls itself recursively, working its way from the end of the string to the beginning. The partial template specialization FnvHash<N, 1> serves as a mechanism to end the recursion.

The resulting assembly generated is the same, though it should be noted that the template solution probably is more taxing for the compiler, and depending on the compiler you use, you might have to alter options like inline recursion depth in order to make it work for longer constant strings.
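As a further aside (my own sketch, going beyond the approaches above): on compilers that support C++11’s constexpr, the same fold can be written as a recursive constexpr function. This even removes the “quasi” qualifier, because the result is a genuine compile-time constant, usable in a switch-statement or as a non-type template argument:

```cpp
// Hypothetical C++11 variant (not part of the original StringHash class):
// a recursive constexpr function that mirrors CalculateFNV, hashing the
// characters and then the null terminator.
constexpr unsigned int FnvHashConstexpr(const char* str,
                                        unsigned int hash = 2166136261u)
{
  return *str != '\0'
      ? FnvHashConstexpr(str + 1,
                         (hash ^ static_cast<unsigned int>(*str)) * 16777619u)
      : (hash ^ 0u) * 16777619u;  // final round consumes the null terminator
}

// A genuine constant expression, so it can even be verified at compile time:
static_assert(FnvHashConstexpr("") == (2166136261u ^ 0u) * 16777619u,
              "FNV-1a of the empty string");
```

The same caveats about compiler support apply, of course: this needs a C++11 compiler, whereas the preprocessor and template versions work on older toolchains.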

Game design, B-minus, should try harder.

Original Author: Andrew Hague

A few weeks ago, a friend of mine and I were talking about common clichés in games and we started listing all those funny little things that seem to make their way into lots of the video games we play.  Use any search engine to find them.  However, he thought there was more to it and we shouldn’t be forcing players to do all those dull things, simply because all games do them.  Designers of games should try a bit harder.  He sent me an email of all the lazy design elements in games and I thought I’d post it on our company forum.  It got some responses, good but mostly bad, yet it has driven him on to send me another email explaining his thinking.  Rather than post it on the company forum, I have his permission to post it up here.  So here it is, unedited.


 

A while ago, I had a talk with Dr. Andrew Hague (who is posting this item for me) about game elements I’d seen so often in the last 35 years (that’s not a misprint, I’m 55 and played my first game, a text-based Lunar Module Lander sim, on a mainframe teletype in 1976), that at least FTTB I’d like never to see them again.

Partly for amusement, I wrote up the elements as a hit list of “things it should be possible to design games without”.  It wasn’t intended to refer to any particular genre, although due to age and health problems I don’t have the reactions for some, and this post is about single-player only. I’ve certainly tried most genres on most platforms over the years, and technically I’m very happy with current-gen hardware.

When Andrew posted the list on the discussion board at the games company he works for, it was not popular with his colleagues, being variously described as “obnoxious”, a “tirade”, and an objection to any game I happened not to like, whereas in fact I’ve liked many games with these features in the past; I’m just sick of the sight of them now.

So this post attempts to explain why I’m objecting rather than just what to. I think other than cliché (Zombies, Nazis, boss battles, mazes, crates…), most of the list derived from two main issues:

ONE: Insufficient compaction of rote

Consider the ‘Eagle Tower’ in the original Assassin’s Creed (neglecting the metastory justifications): Altair starts without a map but if he climbs to high points he can memorise the surrounding area and gradually stitch one together from the views. You/he add a view as follows:

Arrive at the base of a tower.

Hold down High Profile/Free-Run/forward (occasionally shifting to left or right) to climb the tower

Shuffle around at the top to crouch on a ‘perch’.

Press Vision to watch a helicopter shot circle the tower and be told there’s a map update.

There is a shortcut ‘Leap of Faith’ for getting down, but even that requires the player to orient the character exactly first, rather than assume that he’d only jump off a vantage point several hundred feet up in the direction of a soft landing…

…there are variations: towers with odd mount points, or a guard to kill at the top, but fundamentally that’s it. The whole thing could be replaced by a street seller selling a map segment at a stall on the base of the tower, one selling a map of the whole city on arrival, or a design that admits an Assassins’ Guild would likely have decent maps to begin with. There are many of these towers in the game, so repeating the mechanical exercise for each one could absorb a couple of hours of playing time, without generating more than moments of actual gameplay.

Other examples of shorter, but similar, rote sequences common in other games:

— breaking a vase/opening a box to get at a pickup; having to holster weapons to collect it

— having to pick up every object in a pile separately

— moving a box to reach up to a broken ladder

— door opening pantomimes (keycard slots, locks to pick, windlasses to turn, …)

— having to travel to a ‘shop’ each time you want (need) to update inventory

The limiting case is what I call ‘Samuel L. Jackson’ gaming, where you absolutely, positively, have to kill every last MF in the room (and break every breakable object) because one of them will be carrying something (or concealing a switch) without which you can’t go on. Even worse, fancy randomised death animations result in the something being dropped over a cliff, or under the tracks of the tank you’re supposed to use it on — Nolan North’s ad-lib “what kind of a ****wit design forces a restart for doing the right thing?” didn’t make the final edit…

Now I know, for example, that for most of the 90s, platformers from the original Prince of Persia to its 3D successors would expect players to safety-walk to an edge, hop back for the running jump, and then hold down the Grab key for the entire following ledge shuffle, but at the time they were seeing something new and cool in exchange. No longer. This game’s innovative custom code is the next game’s middleware subroutine, but the associated mechanical button-pressing often doesn’t diminish in the same ratio as the code to invoke it does.

To quote a review that was posted as I was putting the finishing touches to this, “There’s nothing wrong in decoding a lock with a cryptographic sequencer, or blasting through a weak wall with explosive gel, or even opening up a shutter door with a quick blast of your remote electric charge device. But when you have to do all three in a row for the umpteenth time, you start to think that perhaps a simple door knob would have sufficed.” wHOLeY Repititious, Batman…

Stuff gets old. Zipf’s Law operates. In games, not so much. This needs fixing.

TWO: Hair-shirt attitudes to problem setting

There are probably theses out there on how choices that were originally random, or worse (e.g., the QWERTY keyboard), become so established people forget the alternatives and act as if they’re the only possible answer. Here are some examples I see as gaming choices-too-often-mistreated-as-axioms:

(a) any solo game with anything resembling a player avatar must have a storyline, no matter how thin, to justify the gameplay. Find the treasure; reassemble the amulet; defeat the wizard; rescue the princess; …

(b) the storyline must be monolithic, as in a novel or play or film, rather than an anthology of related short stories, or a music hall/vaudeville programme, or standalone episodes of a TV show, or just a list of one-line jokes

(c) the player has to experience the storyline serially as if it were a live performance, despite it actually being a pre-recorded one which in other similar media has chaptering, skip, and search functions.

(d) the player is constantly tested on whether they’re paying sufficient attention, and barred from the rest of the content till they get a pass score if not, despite having already paid for the entire performance.

Finish level N or you can’t play level N+1; complete the fetch quest; beat the boss; solve combinatoric inventory puzzles by exhaustion; or you can’t access the rest of your property, that you’ve paid for.

Is it any wonder most games don’t get finished any more? Would the designers of such games accept a CD or DVD or even a media download in which they couldn’t select, skip, and fast-forward over the bits that didn’t interest them? If so, why do they expect players to work through their dull or unpleasant bits when they won’t even sit for other people’s?

It may be that the whole idea of ‘winning’ at solo games has historically been overplayed because the preponderance of testosterone in early generations of both players and designers led to the idea the game was a contest between them. No-one ‘beats’ a book’s author by reading it, a film’s director by sitting through it, or a composer by listening to their music — although I admit there are exceptions that prove the rule in all three categories — hi, Jean-Luc 😉

If you want to peek at hidden cards at Solitaire, you’re free to do so, the game mechanics don’t stop you, and anyway you can’t cheat when the only person affected is yourself. As the computer gaming demographic widens towards the card playing one, perhaps the range of difficulty levels should open up at that end too?

If a player can’t/doesn’t want to solve a puzzle or find a pickup, why should they have to look for a walkthrough? (Jonathan Blow’s argument might have been valid had he given his game away as art, but once he took people’s money it’s grubby commerce, and the customer is always right…) If they can’t/don’t want to beat a boss, why should they have to download a save, assuming the platform deigns to let them? Why don’t the controller’s media trick play mappings work in cutscenes, when, apart from menu-to-skip, most buttons don’t do anything at all?

Shortening playing times or offering DLC add-ons isn’t the answer; that’s just including a tiny number of skips at point-of-purchase, instead of systematically offering them at all points of play. Dumbing down the basic content isn’t either; that just alienates the hardcore instead of the casual. Selling skip codes as DLC is just criminal, and I wouldn’t touch any game that tried it (and didn’t touch those that have).

So why not just ditch the Nietzschean “that which does not kill me makes me stronger” attitude to game roadblocks, by simply extending the range of difficulty levels to include the same kind of skips for content as cutscenes? In many cases, players will actually play (and enjoy, and come back for more of) more of your games by skipping the hardest material, and actually getting to the end, than they do currently when they trade-in at the first barrier they can’t get over…

Postscript

I now think my original list should have been titled “things it should be possible to design games without forcing the player to do”…

—–END

 

 

Exposing Social Gaming’s Hidden Lever

Original Author: Betable Blog

In our last post, we began exploring the overlap between social games and gambling. This time: the slot machine.

 

[Image: slot machine banner]

See if this sounds familiar to you:

To play the game, you put currency into the machine. You then pull the lever and wait for the result. When the result is presented, you are rewarded with a cacophony of exciting sounds, attention-grabbing images, and some form of currency. Oftentimes, this winning helps you progress towards a larger goal. You also have the opportunity with each play to win a rare prize of significantly higher value than the value of the currency you contributed to play the game.

That’s a slot machine, right? Wrong. It’s the basic action loop of FarmVille.

Here is the description again, but this time, with specific details:

To plant a crop, you must first spend resources on the seeds. You then plant the seeds and must wait for them to grow. When you harvest the crop, you are rewarded with a cacophony of exciting sounds, attention-grabbing images, and some resources. Oftentimes, these resources help you progress towards a larger goal. You also have the opportunity with each play to win a rare prize of significantly higher value than the seeds that you purchased.

That sounds more like FarmVille now, doesn’t it?

People have often argued that Zynga’s games lack gameplay depth, but make up for it in addictive, accessible mechanics. Jeff Tseng, the co-founder and CEO of Kontagent, made much the same point in a Forbes article. So what exactly is the psychology of gambling? How did Zynga leverage gambling mechanics to build a massive gaming empire?

The Random Reward Schedule

Zynga’s success has much to do with their skillfully executed manipulation of the human brain. One such method is known as the random reward schedule. A researcher at the Wolfson Brain Imaging Centre in Cambridge used fMRI brain scanning to measure patterns of brain activity while subjects played a game that involved gambling. He found a reliable pattern of brain activity when humans receive money as a reward for winning. It should be no surprise that the region of the brain (the striatum) that responds most to gambling also responds to natural reinforcing stimuli like food and sex.
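The mechanism is easy to see in code. Below is a minimal sketch of a random (variable-ratio) reward schedule; the payout values and the 2% jackpot chance are invented for illustration, not taken from any real slot machine or from FarmVille.

```python
import random

def harvest(rng, rare_chance=0.02):
    """One pull of the lever / one crop harvested.

    Pays a small, predictable reward every time, plus an
    occasional rare prize on a random (variable-ratio) schedule.
    """
    reward = 10                      # the routine payout: coins, XP, crops
    if rng.random() < rare_chance:   # the rare prize that keeps players playing
        reward += 500
    return reward

# A seeded generator makes the session repeatable.
rng = random.Random(42)
total = sum(harvest(rng) for _ in range(1000))
```

The player cannot predict which pull pays the jackpot, only that some pull eventually will; that uncertainty, rather than the average payout, is what the reward loop leans on.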

This is nothing new, as numerous industries such as advertising have relied on the allure of sex and money for years. Zynga has simply managed to successfully leverage these same psychological cues to unlock a massive new market of people that had never considered themselves video gamers.

Paradoxical as it seems, many of Zynga’s players never considered themselves gamers at all. “We’re making mass-market entertainment everyone can play,” says Brian Reynolds, Zynga’s chief game designer.

 

[Image: Zynga characters]

Zynga combines mass appeal, addictive gambling mechanics, and an aggressive viral marketing strategy to achieve incredible growth. Their stylish, highly approachable games help them avoid the stigma of gambling while appealing to precisely the audiences that are the most avid gamblers. Zynga’s core paying audience looks a lot like the core audience of slot machine users.

However, to turn this into a multi-billion dollar empire, Zynga had to convince the millions of players who had been gambling with play money to put real money into the game. The key to doing this was to build compelling virtual equipment or enhancements for players to use in-game. Social gaming players know that they are spending real money on a game, but they do so because of the perceived value of the virtual goods they are acquiring in the game. In this way, gambling and social game players are similar: both know that they are losing money as they play, but they do so because the perceived value and enjoyment is worth the expense. To quote the University of Cambridge’s article on the psychology of gambling, “at its heart, gambling is a rather paradoxical behaviour because it is widely known that ‘the house always wins’.”

 

[Image: virtual goods market growth]

This is one way to explain what people have been struggling to understand for years: why do more and more people pay real money for virtual goods that have no tangible value? In a way, these users are not unlike the millions of players per year that go to play slot machines until they are out of chips. They are not playing to win or even to hit the Jackpot – they are playing for the thrill of the game. The money they set aside to play is simply the cost of a game they enjoy.

This comparison is striking when put in the context of virtual currency in social games. Players who purchase virtual currency often spend it quickly because once it crosses over, it isn’t seen as money any more. The converted currency simply becomes a tool for playing the game, which is almost word-for-word how professional poker players describe money in the real world.

 

[Image: “Zynga is a gambling company masquerading as a gaming company”]

Zynga proudly states that they are a gaming company, and they are seeking a $15-20 billion valuation in their pending IPO.

No matter the volume of Zynga Poker chips a player earns, or FarmVille resources a player accumulates, their real money has been exchanged for virtual currency, just like any other cash-for-goods transaction. The biggest thing that unequivocally separates social gaming from gambling is that players have no ability to tangibly recoup the money put into the game. Giving players the ability to win back their investment of time and money in real-money rewards would quite literally be a game changer.

The Specter of Doubt

Original Author: Heather M Decker-Davis

I recently endured a horribly stressful deadline and near the final hour, I found myself face to face with a crippling phantom of sorts. It’s something that’s neatly tucked away within us all, waiting quietly for a moment of weakness: self-doubt.

This troublesome specter is most likely to rear its ugly head when there’s a lot riding on what you’re doing and you’re running out of time. The higher the stakes, the lower the blows self-doubt will inflict. The trick is identifying it. This may sound silly, but it’s not always apparent! You might get stuck in the rut of making small, meaningless changes and not being happy with anything. You might outright utter the words, “I’m never going to finish this,” without even making the obvious connection. There are countless symptoms, but they all generally have stalling out and depression in common.

However, if you can identify the problem, you can certainly regain control.

Realize that almost everyone, across every discipline, occasionally feels less-than-confident about their work, and the effect is amplified when you’re under a lot of pressure. Take a deep breath and realize there’s nothing inherently wrong with you. You’re human.

If you’re feeling overwhelmed, identify small victories, like finishing one section of something, scripting one small feature, completing a minor asset. Each small stepping stone is progress. You might make a list of these smaller parts and check them off. This keeps you focused on dividing and conquering.

Force yourself to stop over-analyzing details. Make general passes with your work to block in your base requirements, move on to the next critical aspect, and if you have time, go back in and refine. Nudging something back and forth endlessly or changing a value by a hundredth of a unit fifty times straight is often more time-consuming than it’s worth. Ask yourself if anyone is really going to notice before obsessing.

Don’t beat yourself up over time management during the last stretch. Even if you could have scheduled more efficiently or worked miracles early on, analyzing your shortcomings the day or so before your deadline doesn’t help you finish the task at hand. Shelve your criticism for a retrospective and redirect your attention to the present until your work is done.

Lastly, don’t give up. Don’t stop. Do take normal breaks for sanity, of course, but don’t quit altogether!  If you stare at your screen, lamenting that it won’t get done, then it certainly won’t! Set your sights on the goal. It’s better to meet that deadline with your best efforts than have nothing to show at all. At the very least, you can rest assured that you gave it your all.


Native Client for Dummies

Original Author: Erwin Coumans

Today’s Chrome web browser update means you can develop, sell, download and play 3D games written in C++ right from the Chrome Web Store, without a plug-in.

This is a quick walk-through on how to develop a basic 3D Native Client app. You can try out my test app from the Chrome Web Store here.

Alternatively you can visit http://bulletphysics.org/nacl but in that case you need to enable Native Client by typing about://flags in the Chrome address bar. Chrome Web Store Native Client apps don’t require flags.

Either way, make sure your Chrome About Box shows version 15 or later.

Some features of the simple test app:

  • OpenGL ES2 3D rendering with textures
  • Mouse interaction
  • Load binary files using a file chooser
  • Use of middleware: Bullet Physics, libjpeg

Download the SDK

You can download the Native Client SDK and all tutorial and build system files in a single self-contained zip file here (64 MB). It should work out-of-the-box under Windows. This tutorial also builds under Linux or Mac OSX but that is left as an exercise for the reader (or leave a comment for help).

Setup your build system

Just unzip the zip file (64 MB) in a folder without spaces in the name and you are all set to go under Windows.

The Native Client SDK comes with a gcc-based compiler toolchain. You need to tell your build system the location of the C compiler (CC), C++ compiler (CXX) and linker (AR). The zip file above includes make.exe and Makefiles that work under Windows out-of-the-box. There is no need to install Cygwin or MinGW. I used premake to generate the Makefiles.

Add some salt and pepper

An API called Pepper is used to provide hooks between Native Client and web browser services such as HTML5 and 3D. This test application was a port from a simple Win32 application. I used the Tumbler NaCl example as a starting point. The original version is located in build\nacl\nacl_sdk\pepper_15\examples\tumbler. In order to ‘port’ it to Native Client in a web page, the following changes were made. Most of the source code is shared and portable, to make debugging easier.

  • Program entry point: The Windows application used WinMain as a program entry point. For Native Client the entry point is the constructor of your module.
  • All of the OpenGL ES initialization of matrices, shaders, textures, vertex and index buffers are the same, except for creating the context.
  • Keyboard and mouse input. Instead of a Windows event you register for keyboard and/or mouse events and implement the event callback.
  • Loading files from the local file system is not possible for obvious security reasons, so there is no ‘fopen’ command. You can let the user pick a file using the browser’s built-in File Chooser, load the file in memory and use JavaScript to pass the file to the Native Client module. Binary files can be base64 encoded by the browser, and decoded by your Native Client code. This works well in practice. In the future you can call the File Chooser directly from NaCl C++ code.
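The round trip in that last bullet (JavaScript base64-encodes the file, the module decodes it back to bytes) is easy to sketch. In the real app the encoding happens in the browser and the decoding in the C++ module, but the transformation itself is the same; Python is used here purely for illustration, and the sample bytes are made up.

```python
import base64

# Browser side: the File Chooser hands JavaScript the raw file bytes,
# which get base64-encoded so they can travel as a plain string message.
file_bytes = bytes([0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A])  # made-up binary header
message = base64.b64encode(file_bytes).decode("ascii")

# Native Client side: decode the message back into the original bytes.
received = base64.b64decode(message)
assert received == file_bytes
```

Base64 inflates the payload by about a third, which is the price paid for moving binary data through a string-only channel.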

Build the executable

Click on build/nacl.bat and wait. Here is the content of the batch file:

set NACL_TARGET_PLATFORM=pepper_15
set NACL_SDK_ROOT=%CD%\nacl\nacl_sdk

premake4 --with-nacl gmake
cd nacl

set AR=%NACL_SDK_ROOT%\%NACL_TARGET_PLATFORM%\toolchain\win_x86_newlib\bin\x86_64-nacl-ar.exe
set CC=%NACL_SDK_ROOT%\%NACL_TARGET_PLATFORM%\toolchain\win_x86_newlib\bin\x86_64-nacl-gcc.exe
set CXX=%NACL_SDK_ROOT%\%NACL_TARGET_PLATFORM%\toolchain\win_x86_newlib\bin\x86_64-nacl-g++.exe

set config=release32
make

set config=release64
make

This builds two Native Client executables, 32-bit x86 and 64-bit x86-64; they are located in build\nacl\nginx-1.1.2\html\NativeClientTumbler.exe and build\nacl\nginx-1.1.2\html\NativeClientTumbler_64.exe.

The web page knows how to load this module using a manifest file called tumbler.nmf:

{
  "program": {
    "x86-64": {"url": "NativeClientTumbler_x64.exe"},
    "x86-32": {"url": "NativeClientTumbler.exe"}
  }
}

The index.html web page can load the Native Client code using this tumbler.nmf file. Check out tumbler.js for details.

Test the application

Click on build/nacl_runtest.bat and confirm that the nginx web server is allowed to wait for connections.

Native Client applications hosted in the Chrome Web Store should run without special flags or settings. In order to run NaCl modules in web pages outside of the Store you need to enable Native Client manually. Just type about://flags in the Chrome address bar and enable the Native Client setting.

Now open Chrome and type http://localhost/index.html and if all goes well you can play your app.

Debug the application

You can use ‘printf’ debugging via the JavaScript console, but in most cases it is easier to maintain a Windows version and perform debugging under Visual Studio. You can compile the tutorial under Visual Studio by clicking on the build/vs2008.bat file and opening the GLES2_BulletBlendReader_Angle project. This project emulates OpenGL ES2 under Windows using a DirectX wrapper called ANGLE.

Deploy to the Chrome Web Store

You can make your C++ Native Client module available for free or for sale in the Chrome Web Store. A one-time registration fee of just $5 is charged, probably to stop spam. You have to create a manifest.json that provides the information about your application. This includes the name, description and where the web page and assets are located. Here is the manifest used for this tutorial:

{
  "name": "Bullet Physics NaCl Test",
  "version": "0.8",
  "description": "This Native Client module runs a C++ Bullet Physics simulation.",
  "app": {
    "launch": {
      "local_path": "index.html"
    }
  },
  "icons": {
    "16": "bullet_icon16.png",
    "128": "bullet_icon128.png"
  }
}

Notes

Most of the Google NaCl SDK relies on having Python installed, and I wanted this tutorial to work without external dependencies such as Python, Cygwin or MinGW etc. I used the following replacements:

  • Python was used for the scons build system. I replaced scons with Makefiles, because I think Makefiles are more standard and simpler to adapt to other build systems. I used premake4 to generate the Makefiles to keep it self-contained.
  • Python was also used to download and update the Native Client SDK. To avoid this, the zip file already contains the Native Client SDK and the build system knows its location. If you have Python installed you can still run this updater; it is located under build\nacl\nacl_sdk\update_naclsdk.bat in the zip file.
  • Finally, Python was used for the web server. I replaced this with the tiny but powerful nginx web server.

I recommend checking out the links below if you are interested in learning more.

Enjoy!

Useful Links

http://gonacl.com (it has some links to other middleware that support Native Client such as FMOD, Ogre and Unity)

http://code.google.com/chrome/nativeclient

https://groups.google.com/group/native-client-discuss

http://code.google.com/chrome/webstore/articles/apps_vs_extensions.html

Mathematics for Game Developers

Original Author: John-McKenna


Part 0: Fundamentals

Last time I made the great mistake of leaping straight in at numbers, forgetting some of the important background.  So today, instead of the integers I promised, I’m going to cover sets.

Sets are ubiquitous in mathematics.  Many definitions are of the form “a whatsit is a set of doodads with this additional structure”, and as we glimpsed last time, they can also be used as the basic atoms out of which everything is built.

Definition

Fortunately, they are simple things, as long as we keep the discussion fairly informal.  A set can be regarded as a collection of things.  A set is a thing, so sets can contain other sets.

Given a particular set and a particular thing, the thing is either a member of the set or it is not.  The symbol ∈ denotes membership: x ∈ S means that x is a member of the set S, and x ∉ S means that it is not.

But as Bertrand Russell was preparing his book Principles of Mathematics for publication (in around 1902, I believe), he discovered a problem with this definition.  There is nothing stopping us from defining a set which contains all sets that do not contain themselves.  If this set is a member of itself, then it is not a member of itself.  And if it is not a member of itself, then it is.
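Written out symbolically (standard notation, not spelled out in the original post), Russell's set and its contradiction are:

```latex
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R
```

Either answer to “is R a member of itself?” immediately forces the other, which is why the naive definition of a set had to be replaced.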

This paradox was a serious problem.  Russell delayed publication by a year while he tried to find a way around it, eventually deciding to publish anyway with a warning, and his work on a theory of types (a not particularly satisfactory attempt at a solution) as an appendix.

By the mid-1920s, work by a number of mathematicians (most significantly Zermelo and Fraenkel) finally produced a solid foundation.  The Zermelo-Fraenkel axioms, as they are known, are rather technical, and require more background than I have the time or patience to provide here.  But the basic idea is relatively simple: start with the empty set, and then repeatedly build new sets out of the ones you already have.  At each step, you can only use sets that had been built in previous steps, so you can never have sets that contain themselves, and Russell’s Paradox is avoided.
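The “build new sets out of the ones you already have” picture has a standard formulation, the cumulative hierarchy; the post doesn't spell it out, but it looks like this (here \(\mathcal{P}\) is the power set and \(\lambda\) a limit ordinal):

```latex
V_0 = \varnothing, \qquad
V_{\alpha+1} = \mathcal{P}(V_\alpha), \qquad
V_\lambda = \bigcup_{\alpha < \lambda} V_\alpha
```

Since every set appears at some stage and can only contain sets from earlier stages, no set can contain itself, and Russell's problematic set never gets built.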

I intend these posts to stay relatively informal, so naive set theory, as it is called, should be good enough for us.

And next time… probably no integers, sorry.  I have to introduce you to relations first.

Along came a spider…

Original Author: Kyle-Kulyk

Like that title?  I came up with it myself.  It was either that or something about a tangled web.  I think I made the right decision.

My last blog entry on #AltDevBlogADay was about my road to becoming a game developer (although, according to my wife, I shouldn’t be calling myself that until I at least make a dollar off a game we create).  I thought I would take this opportunity to share a bit of the history behind what will be Itzy Interactive’s first release.  It’s a mobile game titled Itzy, and we’re excitedly working toward an early November launch.

School Daze

After my breakup with the brokerage industry, I ended up back in school full-time at 35, studying programming and enduring the chatter of a class full of 18-20 year olds (apparently, manga isn’t that new thing the kids are into.  Oh, they’re into it, they’ve just been into it for a while).  As part of the new program I was enrolled in, after the first semester we were able to specialize our course selection.  I rolled the dice, crossed myself and chose game programming against my better judgement, knowing I’d hate myself if I didn’t at least try.

During the summer break after that first year our instructors asked us to give some thought to game ideas we could put into practice for the next year.  Being an older gamer, I thought back to the games that had resonated with me growing up and decided to take inspiration from the games of my youth.  I created a proposal for a spider-themed game based loosely on the Taito arcade title Qix.

For those not familiar with Qix, the objective of the game was to claim a rectangular playing area by drawing lines across the screen with your player; each time you closed off a section, the enclosed area would automatically fill in with a solid color.  Drawing a line left the player exposed.  If an enemy touched the line or player while drawing, it would kill the player character and abort the line.  As a kid, I loved this game and I’m not quite sure why.  There was something strangely rewarding about something as simple as drawing a line across the screen and seeing a huge area fill in.

My idea was to take the same principle, but apply it to a spider and create environments to fill that fit that theme.  A spider isn’t going to spin webs across a blank rectangle; it’s going to spin across tree branches or over your garden shed.  And if we’re going to follow a spider across multiple environments, would it kill us to tell a bit of a story along the way?

Artistically I wanted to keep the colors limited – using dark colors and shades of grey as we’d seen in such games as Limbo or Pixeljunk Eden.  The web fuel that keeps Itzy spinning would be collected by eating a multitude of brightly coloured, alien fireflies that would also influence the color of his webbing, and using these two ideas, the fireflies and the webs, we would bring color to a bleak landscape.  As it turned out, rainbow webbing looked terrible, but my core ideas had found root.

What’s in a name?

So, Itzy, as a concept, was born.  I wanted to keep the game casual and light-hearted and I wanted to keep the controls and gameplay straightforward so the player could easily sit back and enjoy spinning giant webs across multiple environments.  For the life of me though, I couldn’t think up a name for my spider hero.  I had intended from the start to name the game simply after its title character, but what to name it?  I also wanted the character to be gender-neutral if possible – but Pat the Spider?  Drew the Spider?

Frustrated, and a little hungry, I turned to Facebook.  I asked my friends and family to come up with a name for a cute spider.  The wife of an old university friend suggested Itzy, and I felt it was perfect.  It was cute, easy to remember, and played off the whole “Itsy, bitsy spider” nursery rhyme.  I had my character’s name.  Later on, it also stuck with us as a company name after our initial, Canadian-themed studio names such as Angry Beaver Studios were ruined by Urban Dictionary (thanks for nothing, Urban Dictionary).  Itzy, again, seemed catchy, cute and memorable, and Itzy Interactive came into being.

To breathe life into my character, I enlisted the help of my six-year-old niece who graciously agreed to give up an afternoon to spend with her uncle saying “Ok, again.  Now again.  A little higher.  Not that high.  Ok, that’s great.  …and once more.”

I’m still happy with the results and I’m especially fond of the sound she came up with when asked “What sound does a happy spider make?”

Apparently, it’s “Weew!”

Growing Pains

My team at school spent a little over a month on the first level as part of a school project, and Itzy started to take shape.  After graduation, Itzy Interactive began working on Itzy in earnest and we faced down a myriad of problems.  Our game was slated for the phones, so Itzy needed pathfinding to move to where the player pointed on the screen.  With our limited understanding of game programming, our pathfinding didn’t seem able to search for paths “up”, only on the ground.  We overcame this problem by rotating the entire level when Itzy approached the base of a climbable object, so that the climbable object became the “ground” we were scanning.  That opened up a world of physics-related and performance issues, not to mention the transition between ground and tree while the world rotated was, to put it mildly, awkward.  We also struggled with making the webs and dynamically creating the meshes that fill in Itzy’s web shapes.  Then there were the terrible initial performance issues on both mobile platforms.  No one likes to see a game run at 0.7 fps in testing.

All these problems were overcome in the following six months, and as we tightened up the mechanics of the game we were able to switch focus to the gameplay itself.  Building webs didn’t pose as much of a challenge as we had hoped, so we introduced enemies and big fireflies that, when stuck, can ruin existing webs if not eaten in a timely manner.  Itzy’s enemies force Itzy to use powers of camouflage to continue web building after the danger has passed.  This led one programmer’s father-in-law to exclaim while playing, “Give me a way to kill these buggers!” and a power-up system was born.

The game, Itzy, hardly resembles the game that ended up on that proposal document during the summer of 2010, but that’s a good thing.  Through the efforts of talented artists and programmers, many of whom worked as volunteers, Itzy has changed and I’m proud of the product we’ve created.  That pride I feel is more rewarding than anything I accomplished in my decade slinging mutual funds and placing trades.  I just hope it adds up to monetary compensation as well as a feeling of pride (pride, while great, doesn’t pay our mortgages very well) so we can continue to do this, and I can meet my wife’s criteria for being able to call myself a “game developer”.

Check out our Facebook page and give us a like.