From the Codeface, a Bug Fixing Story

Original Author: Thaddaeus Frogley 

It was March, 2009, and I was working on a high-profile, multi-platform psychological horror game. We were “in QA”, Milestone 12, and my code was holding up well. I had been at 0 bugs for most of the week, and had spent most of my time testing and helping whoever I could with debugging whatever problems were bouncing around the team.

One particularly tricky bug from that time stuck in my memory.

It was Wednesday when I got the email from the project’s technical lead.

Our memory profiling system had been spitting out a warning on one of our target platforms that indicated a memory trample. Both the technical lead, and the programmer who implemented the system were snowed under, so I was asked to take a look and find out what the problem was.

Now, at this stage there were two possibilities: either (a) there was a memory trample, but for whatever reason we could only see it on one platform, or (b) the memory profiler code was not working correctly on that platform.

My first step was to put a breakpoint on the line of code that spits out the warnings and run the code in a debugger on the target platform.

Execution stops, and I examine the code in some more detail. The code that fires the warning works as follows: When a pointer to memory is about to be freed it is passed to the memory profiler, which calls a “GetMemSize” function to determine the size of the allocation, then fetches a value from the last 4 bytes of the block and checks if it is the expected value. If the value read isn’t as expected a warning is printed.
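The trailing-guard scheme described here can be sketched roughly as follows. Everything in this snippet is invented for illustration (the names kGuardValue, ProfiledAlloc and GuardIntactOnFree are mine, not the project's); the real profiler hooked the allocator and derived the block size via GetMemSize rather than passing it around.

```cpp
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Illustrative guard value; real systems pick a distinctive bit pattern.
static const std::uint32_t kGuardValue = 0xDEADBEEFu;

// Allocate 'size' usable bytes plus four extra, and stamp the guard value
// into the last four bytes of the block.
void* ProfiledAlloc(std::size_t size)
{
    unsigned char* block =
        static_cast<unsigned char*>(std::malloc(size + sizeof(kGuardValue)));
    if (block)
        std::memcpy(block + size, &kGuardValue, sizeof(kGuardValue));
    return block;
}

// On free, re-read the last four bytes and report whether they still hold
// the expected value. Returning false here is what fires the "memory
// trample" warning; note that if the size we compute for the block is
// wrong, we read the guard from the wrong address and it looks trampled.
bool GuardIntactOnFree(void* ptr, std::size_t size)
{
    std::uint32_t tail = 0;
    std::memcpy(&tail, static_cast<unsigned char*>(ptr) + size, sizeof(tail));
    std::free(ptr);
    return tail == kGuardValue;
}
```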

The callstack does not make me happy. The code freeing the memory is my code – it’s my generic array class (like std::vector, but awesome). This is code I’ve brought with me and has been in production use on shipping projects for years. It is unlikely, in my opinion, that this code is broken, but not impossible – so I keep looking.

I look at the code using the array – is it doing anything untoward? The array class has assertions on the element access to trap bounds errors, but the iterators do not. The code using the array class does not (that I can see) use iterators. It uses: push_back, pop_back and operator[]. All calls with debug guards to trap misuse.

So I take a step back. The profiling code itself is quite new, so I look at it in more detail. I check where the trapped value is supposed to be set, and find something that might be a lead – a complex nest of preprocessor conditional compilation.

Our memory manager is layered – Micro-Allocator, Turbo-Allocator and then the system (vendor/OS) allocator. The memory profiler is currently turned off for allocations that go via the micro and turbo allocators, so maybe there is a mistake in the way the #ifdefs have been structured that means the value is being checked, but not set up at all.

This seems like a good lead at first, and I follow it for a while, but it turns out that the #ifdefs are all set up correctly. The allocation in question is going via the system allocator, and the profiler is inserting its pad value at the end correctly.

Back to square one.

The problem is happening at start up, which is deterministic, and the OS uses a fixed memory model that means allocations always end up at the exact same address run after run. Time for some hardware breakpoints.

I run the game again, find out the address of the value the system is complaining about, and set up a breakpoint-on-write, then restart the run.

I filter my way through the breakpoint hits, checking whether whatever is writing there should be, and find nothing. Nothing relevant at all. Nothing writes to that byte. It is unused. The value is not touched.

What on earth?

So I backtrack. Again. Double check my numbers. Did I have the right address? Yes. Breakpoint the set-up function. Breakpoint the creation of the array. Single-step through lots of code. The allocator is setting up the boundary value – but not in the same place as it is being fetched from!

So, a moment of clarity – it isn’t the value at the end of the array getting stomped, but the address the code is getting for the end of the array that is wrong!

Perhaps it’s the value at the start of the array getting stomped?

New hardware breakpoint, at the start of the array this time. Run. Watch where we stop. The breakpoint hits in system malloc and system free. It makes sense. The callstack looks right. Hits in malloc, then free, malloc, free, malloc, free.

Nothing that shouldn’t touch that memory was touching it. I see a bunch of mallocs and frees and then the system falls over.

What on earth is going on?

And then I notice it. The system that is using the array is an XML parser. There isn’t one array – there are loads of them.

This is the mind boggling part: The moment the call to free stamps on the size value of array ‘A’ was not the moment that the memory owned by ‘A’ was freed. It was when some other array was freeing memory.

So: we have two blocks of memory allocated by the operating system. At the start of each one, we have a 4-byte int for the total size of the block. At the end of each one we have the guard value. We know the location of the guard value by checking the size of the memory block, but when we free ‘B’ the size of ‘A’ changes.

Nice.

Let’s have a look at GetMemSize. For the platform in question GetMemSize(ptr) returns ptr[-1]-5.

Note the vendor in question does not provide a memstat/memsize function so this is the result of deduction and reverse engineering.

And it’s wrong. The version of the function for another platform by the same vendor has a hack of a workaround that does:

    if (memSize % 2 != 0) memSize = memSize + 1;

Oh Hello.

So, it seems like the system memory manager uses those first 4 bytes for more than just the size. After all, the memory allocations are all 4-byte aligned, so the bottom 2 bits of the size aren’t really needed, and it seems like they are used for something else, perhaps to do with the availability of neighboring blocks for use with realloc?

Who knows.

But the fix now is clear: mask off the bottom bit, and we are good.

    UInt32 memSize = (Ptr[-1]-4)&~1;

Perhaps I should’ve masked off the bottom 2 bits?

Who knows, it all seemed to work after that. And the game shipped in December without major bugs.

This story is a rewrite of one I originally posted here:

Allocation standards

Original Author: Darren-Vine

When we started developing Elephant Memory Manager we had to be sure we covered the standards.  C malloc/free (and realloc) and C++ new/delete have subtle variations in how they deal with various allocation conditions.

When it comes to implementing your own allocator, using the default system allocator or any other option, it is good to know the quirks and pitfalls of each.  This way you can quickly resolve issues that rely on the standards – no matter how silly they may seem*.

Where this becomes important is when you redirect calls to malloc and new to your own allocation routines.  Because each acts slightly differently, some libraries can fail where they once worked fine.  Sometimes this can be painful to track down.

Malloc and New

C++ new should always return a valid address.  When it can’t, the standard dictates that it should throw an exception.  Most games don’t use exceptions.  Some platforms don’t even support them correctly.  As a fallback, most often new will return NULL instead.  The nice thing with new is, assuming you have the memory, you can forego any error checking.  You know the pointer is valid and that it has allocated the required size.  It is not possible to allocate 0 bytes.

C malloc is slightly different.  Malloc returns NULL when an error occurs.  Unlike new, no error is immediately raised, and checking for a valid pointer is recommended.  Generally in a game, an assert does the job.  Where malloc gets complicated is with malloc(0).  The standard dictates that it can return a valid pointer OR a NULL pointer.  The valid memory pointer is often an allocation of around 16 bytes as well, so 0-byte allocations actually consume memory!  NULL is fairly counterproductive if you think about it, as it generally signifies an error, often out of memory.  A malloc(0) pointer still has to be removed by calling free as well.  Thus free(malloc(0)) will always function regardless of the malloc return.

Why would you want a malloc of 0 bytes?  Good question, and one I have no answer for.  What I can tell you, however, is that quite a few pieces of middleware try to allocate 0 bytes and expect a valid pointer returned.  Here we recommend that you return some valid pointer.  Returning NULL will have you scratching your head when one of the more common pieces of middleware out there crashes in no man’s land.

In my experience malloc(0) always returns a valid pointer on all platforms.  It has been this way for the last 10 years, but that shouldn’t be relied on.  This may be because most use DLMalloc or a variation of it for their core allocation.  The valid pointer should not be accessed either, according to the standard.  This makes sense, as some allocators return a special address for this situation.
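The quirk above is easy to demonstrate.  This tiny sketch (the function name is mine, not any library's) shows that whichever of the two standard-permitted results malloc(0) produces, handing it straight to free is well defined:

```cpp
#include <cstdlib>

// malloc(0) may return either NULL or a unique valid pointer; the
// standard allows both. Either way, free() must accept the result.
bool MallocZeroRoundTrip(bool* gotValidPointer)
{
    void* p = std::malloc(0);           // implementation-defined result
    *gotValidPointer = (p != nullptr);  // many allocators hand back ~16 bytes
    std::free(p);                       // free(NULL) is a no-op, so this is
                                        // safe in both cases
    return true;                        // reaching here means no crash
}
```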

Free and Delete

In both standards these functions agree with one another.  free(NULL) and delete NULL will do nothing.  Most people implement each free/delete like so:

if (pPointer)
{
    delete pPointer;
    pPointer = NULL;
}

The thing is that the conditional is totally redundant.  You are just making work for yourself.  Where I think this arose from is a poor understanding of the standards in custom replacements.  This meant the conditional was needed to avoid anything nasty happening.

So do you still need SAFE_DELETE?  When just using delete/free I would argue that you don’t.  At least you do not need the conditional test but the clearing of the pointer is always helpful.  I would say that is one for your coding standards.  This may not be safe practice for ->Release() style functionality, where a form of SAFE_DELETE may still be required.
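Following that reasoning, a minimal SAFE_DELETE replacement might look like this sketch (the name SafeDelete is illustrative): no conditional test, since delete on a null pointer is defined to do nothing, but the pointer is still cleared afterwards as the coding-standards point suggests.

```cpp
// Deleting a null pointer is a no-op per the C++ standard, so no 'if'
// is needed; clearing the pointer guards against dangling reuse and
// makes an accidental second call harmless.
template <typename T>
void SafeDelete(T*& p)
{
    delete p;      // safe even when p is already null
    p = nullptr;   // leave no dangling pointer behind
}
```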

Realloc

Realloc is fairly simple but one that catches people out.  realloc(NULL, size) functions exactly like malloc.  It will return a valid pointer.  realloc(ptr, size) should resize the allocation, though it may relocate the pointer.  realloc(ptr, 0) works just like free.  In all error conditions the standard says that the original pointer should remain untouched.

With realloc(malloc(0), 0) here NULL or a valid pointer could be returned from malloc.  If NULL is returned from malloc then you should get a NULL out of realloc.  If a valid pointer is returned then it should call free and return NULL.  That is the standard as far as we have been able to ascertain.  This means that regardless of the malloc(0) return you should always get NULL out.  Note that I say should!
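The guaranteed parts of that contract can be exercised directly.  This sketch deliberately leaves out realloc(ptr, 0), since, as noted above, its behaviour varies by implementation and the standards have shifted on it over the years; the function name is mine:

```cpp
#include <cstdlib>
#include <cstring>

// Exercises the portable parts of realloc's contract:
// realloc(NULL, n) acts like malloc(n), and growing an allocation may
// relocate it but must preserve the old contents.
bool ReallocContractHolds()
{
    // realloc(NULL, size) behaves exactly like malloc(size).
    char* p = static_cast<char*>(std::realloc(nullptr, 8));
    if (!p) return false;
    std::memcpy(p, "abcdefg", 8);   // 7 chars + terminator

    // Growing the block may move it, but the first 8 bytes must survive.
    p = static_cast<char*>(std::realloc(p, 64));
    bool preserved = (p != nullptr) && (std::strcmp(p, "abcdefg") == 0);

    std::free(p);
    return preserved;
}
```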

How did we implement it?

I think malloc(0) is a bit silly*.  The C++ new standard change would seem to agree.  We allow it, if and only if you specify that is what you want.  Otherwise we raise an error immediately.  We allow this on a heap-by-heap basis.  When it is allowed we allocate 16 bytes and return it as a valid address.  This works with all known and tested libraries across many systems.  Returning NULL does not.

DLMalloc returns a valid pointer that consumes 16 bytes of memory each time for an allocation of 0 bytes as well.  The standard system-provided allocators all return valid pointers as well, to our knowledge.

We also raise an error on NULL frees.  Again this is customisable, but why would you ever want to try to free a NULL pointer?  Generally if this happens an error has occurred further up the chain.  Here, actually breaking away from the standard can aid debugging.  Every allocation normally has a matching deallocation, so it doesn’t, in most situations, make sense to silently allow this condition.

When it comes to running out of memory a NULL return should raise an error in a lot of situations.  Again customisation is the key.  If you are streaming data you may very well depend on allocations failing.  This way you know you need to make room or perform some other function to free memory.  In this situation the standards can cause issues.

Summary

I remember being totally shocked when I found out malloc could allocate 0 bytes.  That was 12 years ago.  How you deal with it can cause problems further down the road.  I suspect this guy bumped into the same issue and that prompted him to ask the question.

When implementing your own versions of these functions it is good to follow the standards already laid down.  It can save a lot of work when you need to integrate with other components.  In other situations the standards may be a hindrance.  Unless you know them though you can’t plan for them.

It’s entirely possible I have some of the standards wrong here.  Just thought I would slip that disclaimer in.  Realloc changed subtly from C89 to C99 for instance.  What I have described here has worked with everything we have tested so I don’t think you can go far wrong.

* These are my opinions on the standards.

 


The two worlds – a project retrospective

Original Author: Rob Ashton

I’m finally back in the world of games development after a long sojourn selling my soul for cold, hard cash in the world of Active Directory and corporate software. I’ve managed to pick up development of my multiplayer WebGL game and push it to some form of completion too (it’ll be up later this week at an unspecified location, pending load tests).

[image: the project’s commit history]

That’s the commit history for the project, which started off as an exercise in trying to do some TDD with some basic WebGL code and somehow ended up as a game of hovercraft, neon-blue grids, and explosions (I’m not good with assets, so making everything glow seems like the cheapest way to make it pretty).

[image]

What has been interesting throughout this project (for me) has been the at times subconscious attempts to carry on ‘doing things the way I do it in most of my day to day software’, and finding out that some of the rules don’t apply in this particular space – and some surprising ones do. (or at least, in this specific game in this specific space).

First to go, TDD

Ah yes, TDD – this is a principle I’m beginning to lose my love for in both worlds for a multitude of reasons, but when I started this project I thought I’d apply this way of working on this side of the fence. I created a walking skeleton of the basic technology I was to be using (JavaScript, a browser, WebGL, a server with NodeJS) and started to write tests for each feature as I wanted to add them.

This started falling over pretty quickly as most of the first features required fundamental integration tasks to be carried out before writing the higher level code (manipulating the DOM, playing with WebGL, loading shaders, connecting to servers) – each of these things requiring a tiny amount of code applied in just the right order in just the right way for the particular feature. As far as feedback cycles go, it was faster to hit ctrl-f5 in a browser and watch the actual code do what it was meant to do or fail pitifully than it was to try automating the running and assertions on those problems.

In essence, these things could have been solved by heading off into a blank project and doing a formal spike, before coming back to the main project and laying out the tests for them to help with the design. It would have given little value however, as they were already-solved problems and the code would change little if I designed it from the perspective of testability or otherwise. I decided to treat the entire project as one giant spike into this domain and just roll with the punches.

Next up, the behaviour itself. This is where the formal TDD approach probably would have been a bit beneficial, but I was already feeling headstrong about my initial progress and wanted to just start spitting out features. This led me down a bit of a dark alley, as the code I produced wasn’t written or designed for testability and relied too heavily on a heap of external components being present, making it too expensive to automate the running of the behaviour. Funny how that happens accidentally, even though I supposedly know what a de-coupled system looks like.

I actually sorted that out eventually, arriving at a good, clean, testable design when solving a different problem entirely – making it possible to write tests that looked like this:

test("Hovercrafts can acquire targets when only one target is around", function() {
    var context = GivenAnEmptyScene()
        .WithAHovercraftCalled("one")
        .WithAHovercraftCalled("two")
        .WhereNotPointingAtEachOther("one", "two")
        .Build();

    var targettingCraft = context.FindCraft("one");
    var targettedCraft = context.FindCraft("two");

    targettingCraft.AimAt(targettedCraft);

    ok(targettingCraft.RaisedTargetAcquiredEventForCraftWithId("two"));
});

Or something along those lines anyway – note how I said “made it possible”. I found that the biggest-value tests I could write were the ones describing complex interactions between entities in the scene (while ignoring external components), but because the system is relatively small, because I’m the only developer, and because I was changing everything all of the time to make it more fun to play, they simply weren’t worth writing.

If I had been part of a team or the system was larger and more complex I think it would have been worth it – and the lesson on how to arrive at this point was a valuable one, I’ll be writing tests for my next game at about this level and not worrying about the fluff.

The big difference here, I guess, is not one of games development but of mentality, because I’m unused to working on projects where I’m the sole owner and won’t have to ship the code off to another developer in six months. Apart from the core integration code, I think there could be a lot of value in writing tests around gameplay interactions in most games development – in this, I don’t think our two worlds differ all that much.

Messaging

Push, don’t pull – that’s the mantra in most of the software worlds I work on – send a command, forget about it, raise an event, forget about it – deal with your own problems, maintain your own consistency, and let everybody else do the same.

I messed this up big time. It turns out that dealing with messaging in an interactive simulation across multiple clients and a server can throw up some pretty interesting issues at times; at one point I was cursing my decision to go with a push-based architecture because it was becoming such a nightmare to deal with.

Now, other than duplication of data (and there certainly is some – memory is not my bottleneck), the biggest issue was just trying to work out which event had come from where, why it arrived in the order it had, and why it had made something happen that shouldn’t have happened.

My mistake was to consider this system like any other piece of business software I have worked on – when most of the software I have worked on in the business world has been of minimal behavioural complexity. Going back to my above statement on “maintain your own consistency, and let everybody else do likewise”: I wasn’t paying anywhere near enough attention to this and ended up with inconsistency everywhere. I was capturing events at the wrong level, I was dealing with state at the wrong level – I was… well yeah, doing it wrong.

I spent a few days tidying up this mess, and formalising heavily between the different types of messages – commands being external inputs to the system telling it to do something and events being things that have happened, and anything that goes over the wire or between entities are going to be one of these things.

I switched a lot of the code so that entities that raised events also listened to those events to update their own state, and every other unit or subsystem did likewise on receiving those events, effectively turning each event-listener into a de-normalised view around its specific area of functionality.

Turns out I had, by accident, ended up with a system not unlike this: [image]

 

Then, when bootstrapping entities into the world, I would attach different components to them depending on whether they were on the client or on the server, supposedly making it easier to control who had permission to do what.

Computer says no.

Turns out that this was a recipe for disaster. Going back to the part where I talk about “maintain your own consistency, and let everybody else do likewise” – this was completely counter to that goal. The receivers between the two worlds got fatter and fatter, pulled more of the logic out of the world and the entities that were driving the game, and made it once again hard to work out what was going on. We have a name for this in our general software world: an “anaemic domain model”.

Going back to the messaging aspect above, it ended up being much easier to push everything into the world (including score management etc) and have them as entities that could raise events and handle other entities’ events, and therefore contain their logic and guard their state more appropriately. To do this in a ‘safe’ manner, I’d just have two methods on an entity:

  • raiseEvent
  • raiseServerEvent

And the entity component would be responsible for determining what it considered to be a special event that only the server could raise – the rest of the logic would be completely identical. As I’ve already suggested, having entities listen to their own events to react to change is a part of this – once raiseServerEvent is called on the server and the event automatically proxied through to the client, the code on the client carries on as if it was itself that raised the event.

This model looks something like this:

[image]

And it is suddenly much easier to test and maintain – not to mention that now I have the possibility of running offline games by flicking a flag that allows server events to be raised locally. Perhaps a rename of the method to “raiseProtectedEvent” is in order. :)
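The two-method scheme can be sketched roughly like this – in C++ rather than the game’s JavaScript, and with invented names throughout. An entity listens to its own events to update state, and “server” events only fire where the authoritative flag is set (which is also what makes the offline mode a one-flag flick):

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Illustrative sketch: ordinary events are open to everyone, while
// protected "server" events only fire where we are authoritative. On a
// real client they would instead arrive proxied from the server.
class Entity
{
public:
    explicit Entity(bool isServerAuthoritative)
        : authoritative(isServerAuthoritative) {}

    void on(const std::string& event, std::function<void()> handler)
    {
        handlers[event].push_back(handler);
    }

    // Anyone may raise ordinary events.
    void raiseEvent(const std::string& event)
    {
        for (auto& h : handlers[event]) h();
    }

    // Protected events fire only on the authoritative side; the rest of
    // the entity's logic stays identical on client and server.
    bool raiseServerEvent(const std::string& event)
    {
        if (!authoritative) return false;
        raiseEvent(event);
        return true;
    }

private:
    bool authoritative;
    std::map<std::string, std::vector<std::function<void()>>> handlers;
};
```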

Next steps

My next steps are to go through the 10 issues written down on a pad of paper next to my desk, wait for the domain name to propagate and push it out onto the internet on its own domain name (without publicising it too much as my client code isn’t too optimal and more than 30 players will probably kill most browsers, oops). I’m pleased with the learning achieved with this project and it is a real joy to see that my time building software that isn’t games hasn’t been completely wasted (lots of principles carry across it seems, although it pays to be more mindful of your state).

I have some profiling to do too, I seem to spend 5% of my time in the garbage collector and 15% rendering particles and a whole lot of other things that really shouldn’t be sucking that much time. This will be fun and no doubt result in yet another blog entry about JavaScript performance.

And then onwards to the next game, I have some ideas about what challenges I want to set for myself in this space and a game that I want to build that will set them for me.

The Craft of Game Systems, Part 2

Original Author: Daniel Achterman

Welcome back! My last article introduced my take on the craft of game system design. I identified the goals I believe designers should strive for, and broke down the component parts of systems. This article covers the process I use for designing powerful and flexible systems for games like RPGs and strategy games, and offers tips for each step.

Recap

Once again, the goals for system design are based on helping you design a consistent system that lets you create game content efficiently. A system should be:

  • Comprehensible: The parts of the system and how they interact should be understandable.
  • Consistent: Rules and content should function the same in all areas of your game.
  • Predictable: Designers should be able to predict how the system will behave in new circumstances.
  • Extensible: It should be easy to extend the system with new rules and types of content.
  • Elegant: Systems should strive to create rich situations from simple components.

The three components of game systems are parameters, rules, and content. The most intimidating part of designing a game system is getting started. I recommend breaking the task into the following steps:

  1. Choose the parameters that your game uses.
  2. Design rules that are no more complex than necessary to implement your game vision.
  3. Define the progressions for how parameters change throughout the game.
  4. Design content types that are as complex and interesting as you can manage.
  5. Add new parameters, systems, and content types in layers as needed.

Let’s walk through each of those steps in more detail.

Choosing Game Parameters

The golden rule of choosing what parameters to put in your game is to focus on parameters that impact the interesting choices players make. This is why it’s so important to start by clarifying your gameplay – it informs the parameters your game should have. If your RPG has different combat styles, make statistics to allow players to define how their character fights. Starcraft has two resource types because when to start mining gas and how heavily is an interesting choice that players make.

  • Start with character statistics. Players view games through the lens of their characters. All other content revolves around how it affects characters, such as items, skill trees, and research bonuses.
  • Start with a small number of basic stats. Create only enough stats for players to define themselves within your intended gameplay. Lots of stats are not necessary for interesting gameplay.
  • Consolidate parameters that don’t matter. Don’t include something like “Physical Defense” and “Magic Defense” if players aren’t meaningfully manipulating them to specialize their character or defeat content. “Defense” will suffice.
  • Be transparent. Ideally, players should be able to figure out what a value does and how it’s determined from its name alone. “Armor” and “Strength” are transparent stats to today’s players. “Attack Power” and “THAC0” are not.

The Legend of Zelda is a great example of a game with minimal, transparent parameter design. Its only character statistic is “health”, which is represented by the simple visual of hearts.


A classic example of minimal, transparent parameter design.

Making Simple Rules

Rules use the values of parameters to determine what happens in your game. To keep your system comprehensible and predictable, design the simplest possible rules that are sufficient to implement your game vision. For instance, many games don’t have damage reduction from armor at all. In D&D, armor simply reduces your chance to get hit. The formula for damage reduction from armor in World of Warcraft is much more complex, but it’s necessary to make high armor values scale correctly in the end game.

  • Keep formulas basic. Formulas should be more multiplication and addition and less exponents and logarithms. This makes it easy for your players to understand how things work, and makes it easy for you to model how your game works.
  • Don’t use a single parameter for too many things. If “Strength” affects both attack damage and health, it becomes more difficult to comprehend its value in different situations, and it will make it more difficult to balance your game, because adjusting one stat will have repercussions in multiple places.

Good rule design is crucial for creating consistent game systems. It’s a difficult challenge to create rules that serve all the needs of your game and are still simple and elegant.
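To make the trade-off concrete, here is a small sketch of two hypothetical damage rules. Both formulas are invented for illustration, not taken from any particular game: flat subtraction is trivially predictable but hits a wall at high defense, while percentage mitigation costs a little transparency in exchange for smoother scaling.

```cpp
// Flat subtraction: easy to reason about, but once defense exceeds
// attack the result collapses to the floor value.
int DamageFlat(int attack, int defense)
{
    int dmg = attack - defense;
    return dmg > 1 ? dmg : 1;   // always deal at least 1 damage
}

// Percentage mitigation: defense / (defense + 100) of the damage is
// absorbed, so defense keeps mattering at high values instead of
// zeroing damage out entirely.
int DamageMitigated(int attack, int defense)
{
    return attack * 100 / (defense + 100);
}
```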

Creating Parameter Progressions

Parameter progressions refer to stat values that tend to change over the course of a game. For instance, in RPGs, characters’ statistics increase and they get gear with higher damage and armor ratings as they level up. In strategy games, advanced structures often have increased benefits and costs.

Progressions are crucial to game balance. For a game to be balanced, related values should change at similar rates, like a player’s power in battle and the strength of the monsters he fights. High-level things like “player power” are often aggregates of a number of parameters, like armor, weapon damage, and character stats. Understanding how the progressions of those parameters interact is key to making a game system predictable and extensible.

  • Define the rate that you expect stats to increase as players progress. Some stats may increase naturally as players level up (health, core stats) and others may increase as a result of getting better equipment (armor, weapon damage).
  • Use the simplest necessary progressions and formulas. Values can increase linearly, quadratically, exponentially, or in some horrifying combination. Use the simplest progression that does what you need.
  • Make related values change at similar rates. For instance, monster power and player power should increase at similar rates, as should monster power and monster XP value. Don’t make one linear and one exponential, or your game will be unbalanced.
  • Avoid switch statements. If weapon damage increases at one rate from levels 1-10 and a different rate from levels 11-30, it’ll be more difficult to make other content match. Resist the desire to make special case solutions, and focus on consistent systems.

Types of Progressions

There are many kinds of value progressions with different properties. The most common are linear, polynomial / triangular, and exponential.

Linear progressions are the simplest type of progression. Their formula structure is Ax + B, causing the value to increase at a constant rate. They’re simple and easy to understand, and I try to use them wherever they’re appropriate. One downside is that as x increases, the difference between terms becomes less significant. When you have 10 armor, +5 armor is awesome. When you have 500 armor, +5 armor barely matters.
A linear progression

The most common type of polynomial progression is the quadratic progression, which has the structure Ax^2 + Bx + C. The rate of change increases as x increases, which gives it many uses in games. It shows up commonly because the product of two linear progressions is a quadratic progression. So, if the amount of gold monsters drop increases linearly with level, and the number of monsters players need to kill to level up also increases linearly with level, then the amount of gold players earn from monsters each level increases quadratically.
A polynomial progression

Exponential progressions have the structure C * A^x. They grow slowly at first, then extremely rapidly. They can be difficult to use because they explode so quickly, but they have some interesting mathematical properties that make them a great fit for certain systems, such as experience progressions.

An exponential progression
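The three shapes are easy to compare side by side. The coefficients below are arbitrary illustrative picks (A, B, and C chosen just to show the curves), not tuning advice:

```cpp
// Linear: Ax + B with A = 5, B = 10 - constant growth per level.
int Linear(int x)    { return 5 * x + 10; }

// Quadratic: Ax^2 + Bx + C with A = 2, B = 5, C = 0 - growth accelerates.
int Quadratic(int x) { return 2 * x * x + 5 * x; }

// Exponential: C * A^x with C = 100, A = 2 - slow start, then explosion.
long long Exponential(int x)
{
    long long value = 100;       // C
    for (int i = 0; i < x; ++i)
        value *= 2;              // multiply by A each level
    return value;
}
```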

Creating Interesting, Manageable Content

Content is where the magic happens. Content includes everything from the weapons that players can find and use to the skills and powers they can learn. The golden rule for content is to design content that is as complex and interesting as you can manage.

Content can be simple or complex. For instance, an item bonus that increases the wielder’s chance to get a critical hit by 1% is simple. Its value is immediately obvious to the player and is always the same. An item bonus that increases the wielder’s attack speed by 50% for 5 seconds after he gets a critical hit is complex. Its value is not immediately obvious, and its value is conditional on the character’s chance to land a critical hit.

  • Start simply and add content types iteratively. Don’t start with your amazing idea for a blacksmithing system that lets players reassemble weapons from their component parts. You’ll just be creating complication at the time you most need simplicity. As you build your game and clarify its needs, add new content types, systems, and parameters, and define how they interact with the basics you have.
  • Ensure you can roughly calculate the value of your simple content. For instance, what is the value of 2 points of strength, or a 1% increase in crit chance? Those are both examples of simple content, and being able to estimate the value of something will simplify tuning and balancing.
  • Decide what options players will have for modifying various statistics. In Diablo II, numerous items affect stats like strength and dexterity, but bonuses to skill levels only appear on a few types of items.
  • Don’t allow players to overspecialize. Highly specialized characters can trivialize aspects of your game, so don’t give players the ability to do it excessively. In World of Warcraft, all epic items have bonuses to core statistics like agility and stamina, but there are almost no other sources of those statistics, preventing players from stacking them.
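As an illustration of valuing simple content: assuming (purely for illustration) that a critical hit deals double damage, average damage per hit can be computed directly, which turns “1% crit chance” into a number you can compare against other bonuses:

```cpp
#include <cassert>
#include <cmath>

// Assumed rule for this sketch: critical hits deal 2x damage.
double averageDamage(double baseDamage, double critChance)
{
    return baseDamage * (1.0 - critChance)   // normal hits
         + baseDamage * 2.0 * critChance;    // critical hits
}
```

On a 100-damage attack, 5% crit chance yields an average of 105 damage, so under this rule each 1% of crit chance is worth about 1% more average damage.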
It’s easy to want to design iteratively, but often very hard to do. Maybe your team’s programmers and artists are busy making the engine and concepting, and you have plenty of time to write a massive design document that details your entire game. You must resist! Don’t design too far ahead of your game. Find a way to prototype your design, whether in a primitive version of the engine or Flash or XNA or with cards, and put that prototype in front of playtesters. It will pay off down the road.

Content Complexity and the Cost of Balance

The simpler your content is, the more you’ll be able to use math and exact processes to balance your game. The more complex or conditional it is, the harder it is to calculate its value mathematically, and the more playtesting and intuitive hands-on tweaking you’ll have to do to balance it.

Complex or conditional content can offer players deeper, more interesting choices. If your team has the resources and time to dedicate to balance, make your content interestingly diverse and complex. If it’s just you, be aware of your limitations and keep your content’s complexity in check.

Final Word: References

I got an e-mail from a reader asking for suggestions for more articles or books about the craft of game systems. I’d like to offer a couple links and put out a call for more:

It’s my opinion that Mark Rosewater’s The 10 Principles of Good Design is well worth reading.

Ian Schrieber’s Game Balance Concepts course (gamebalanceconcepts.wordpress.com).

If you have an article to recommend, please add it to the comments, e-mail me, or send me a message on Twitter (@DanielAchterman).

Roger Dickey’s Tactics for Game Monetization

Original Author: Betable Blog

Roger Dickey is the creator of Mafia Wars, one of the most successful social games of all time, and was the GM of FishVille at Zynga before becoming an international product advisor in their Japan and China divisions. He recently spoke at our San Francisco Game Monetization meetup.

Engagement is the heart of your game

Roger Dickey started off his presentation outlining the three R’s of social games:

Reach, Retention, and Revenue.

  • Reach is how many people your game touches, both through gameplay and impressions in viral channels such as Facebook and Twitter.
  • Retention is how many people keep playing your game over time.
  • Revenue is money, of course.

Roger argues that if you drew these three in a diagram, engagement would sit in the center. He compares engagement to the heart of your game because “it is effectively pumping blood to every other part of your game”. Engaged users will help you reach more people by sharing the game with their friends, they will retain longer, and they will be more willing to pay.

“Fun Pain”

One of Roger’s most interesting points was that “fun pain” was the key to social games’ success. Think about how a player needed to click each square to plant or harvest their crops in Farmville. This is a perfect example of “fun pain”, something that is simultaneously entertaining and a little bit annoying. This also gave Zynga the opportunity to upsell the player on pain-reducing items, such as a tractor that clicked four fields at once. These items were extremely popular among players, even though they only existed because it was painful to play the game in the first place!

Grind vs. Spam vs. Pay

Social games apply the same principle to special items, which are frequently built from a combination of parts. Typically, you can earn the parts for a special item in three ways:

  1. Grind for them over a long period of time
  2. Spam your friends to have them send you the pieces you need
  3. Pay for the parts that you are missing

Players almost always start with Grind or Spam to kick off their pursuit of the item. However, as the player grows weary of grinding and doesn’t see the response he was hoping for from his friends, he is left with a partially completed item and no use for the parts. Now the player is willing to pay to complete the item.

Estimating your game’s monetization

A simple, rough formula for estimating how much a casual social game could earn breaks out as follows:

  • An energy mechanic was worth $0.03 ARPU
  • A decorative factor was worth $0.02 ARPU
  • Competitive gameplay was worth $0.05 ARPU
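Taken literally, the rule of thumb is just additive. A trivial sketch (the dollar figures are Roger’s rough numbers from the talk, not measured data):

```cpp
#include <cassert>
#include <cmath>

// Additive ARPU estimate from the rule-of-thumb values above.
double estimatedArpu(bool hasEnergy, bool hasDecoration, bool hasCompetition)
{
    double arpu = 0.0;
    if (hasEnergy)      arpu += 0.03;  // energy mechanic
    if (hasDecoration)  arpu += 0.02;  // decorative factor
    if (hasCompetition) arpu += 0.05;  // competitive gameplay
    return arpu;
}
```

A game with all three mechanics would be estimated at roughly $0.10 ARPU.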

This is a pretty high-level estimation that raises the question…

What do people pay for?

Roger found that people pay for the following in social games:

Identity expression

Players will pay for anything that is socially surfaced in the game, because they’re presenting their farm, city, or avatar to their friends.

Vanity

This drives demand for exclusive items: people will pay more when there is only a limited number of an item available, or to get things before other players.

Fun

Items that make the game more convenient and tip the “fun pain” scale more towards fun are worth a lot to a wide variety of players.

Exclusive features

Having certain features or aspects of gameplay only become available for a fee can be an effective monetization model, as we see often with freemium software.

Competition

Hardcore players, especially males, will pay to get a competitive advantage against their opponents.

Social value

Your players want their friends to play, and by helping them they increase their friends’ chances of sticking with the game. Once power becomes social, it becomes much more valuable, and people pay for it.

Chance

Roger found that random chance was a huge incentive for people to buy. We wrote a blog post highlighting the power of the Mystery Box that covers this in more detail.

Stat Progress

Players will pay for progress or temporary power in a game, especially a competitive one.

Story

Surprisingly, people pay very well to advance the story. They feel a sense of progress when completing quests and will pay to overcome roadblocks in that progression.

Measure everything

Roger was adamant about launching with metrics already in place. Furthermore, he pushed for game studios to record every single click or action that players take in the game. This requires reams of data but is well worth it. Many of the insights that Roger gained about player behavior came from data that he didn’t even know he needed when he built his metrics systems.

Cohorts

Breaking your users into cohorts is an incredibly important tactic for monetizing your game. Roger’s recommended cohorts go far beyond the typical hardcore vs. casual groupings. He recommended using the following:

  • Play Frequency – how often people play
  • Socialness – how viral or willing to share they are
  • Spending profile – how often they pay for in-game items
  • Lifetime – how long they have been playing, and how long they will play

Mine the theme space

When working at Zynga as GM for FishVille, Roger Dickey and his team spent a good amount of time doing what he calls “mining the theme space”. They read books on fish and put up fish posters around the office to make sure that they were immersed in the world they were creating. That way, when it came time for game design meetings, the team was usually full of new ideas for gameplay and content.

Master plan for monetization

Lastly, Roger emphasized the need for a master plan for game monetization. This means including monetization from the very onset of the design’s inception. To this end, he offered up a number of tips and strategies from his game monetization toolkit:

Negative reinforcement

Your obligation to your creations is a real driver of engagement. You don’t want your fish to die and float to the top of your tank, looking ugly and showing your neglect for all your friends to see.

Fairness

It is worth noting that players care much less about payer vs. non-payer fairness than a game designer would think.

Consumables

When working to launch a competing game in Japan, Roger studied Kaido Royale extensively. In this Mafia Wars-style game, you needed to buy not only a gun but also the bullets to fire it, and this consumable system helped the game monetize extremely well.

Energy

“If you give somebody a huge bucket of candy, they’re gonna love the candy for a week, and then never want any again. If you give them 10 pieces a day, they’ll keep coming back for years”.

Premium decorations

“Farmville was at one point mostly a canvas for people to decorate on.”

Territory expansion

In any game where the long term goal is to build, territory expansion is a big part of the game.

Seasonal content

A necessary evil that helps the game retain users by keeping it interesting and dynamic.

Content grab bags

When buying multiple items at once, players simultaneously feel like they’re getting a deal and that they’re buying something more substantial than a virtual gun.

Sponsorship

This is a decent way to increase revenues, just don’t let sponsored content “go all Myspace and take over the whole game.”

Free currency

This doesn’t monetize well, but low level players will purchase free currency to advance faster or complete quests.

Collection completion

This means mechanics like ‘do this 10 times and master it’ in Mafia Wars. “It’s kind of funny sitting there as a game designer and being like ‘our game is already kind of mundane… what if we make everyone do things 10 times?’ Well, it can work.”

First time buyer incentive

To convert players from free to paid, a first-time buyer incentive gives players who haven’t yet purchased a ‘deal’ that gets them over that crucial first-purchase hurdle.

Wagering

The ability to wager on the outcome of your game could be a game changer. Betable is the first platform that makes it possible for game developers to implement this in their games.


Roger Dickey’s presentation gave us a ton of insight into game monetization and the psychology behind social game mechanics. A big thanks goes out to Roger for sharing his strategies with our San Francisco Game Monetization meetup.

Review Scores Are Bad! Let’s Fix Them!

Original Author: Andrew Meade

Earlier this week, Forrest Smith challenged me to talk about something completely original.

Usually I have a pretty good idea of what I’m going to write the week before my next post, making it just a matter of taking what I’ve written in my head and sending it out to you fine folks. With just recently moving, starting a new job, and school in full force, I’m afraid I can’t switch gears that rapidly, but my next post will rise to Mr. Smith’s call for originality. I’m nothing if not easily goaded into doing things for the challenge, so this week, Mr. Smith, this post isn’t for you.

But for everyone else – HELLO! Let’s talk Review Scores.

Lately reviews have come under serious fire – partly because some reviewers aren’t reviewing with the impartiality that they should, partly because some developers felt that they deserved better, and partly because the entire system is broken. I’m not here to talk about the how, what, and why – I’m here to talk about how we can fix it.

Before we do this, let’s look at irrefutable proof that the scoring system is invalid and useless in its current state to make sure that nobody can come around and yell at us for reinventing the wheel.

Here are the criteria for a 10/10 score in Game Informer:

Outstanding. A truly elite title that is nearly perfect in every way. This score is given out rarely and indicates a game that cannot be missed.

Already I’m having problems. “Nearly perfect?” Traditionally a 10/10 or “Perfect Ten” is used to grade something that has no flaws or imperfections – hence the perceived rarity.

Now let’s look at the most recent 10/10 that Game Informer handed out. It was for The Legend of Zelda: Skyward Sword, and the review was mostly glowing. However, about 10% of the entire review was negative feedback. Let’s look.

Despite my love for it, I can recognize a few elements of this latest Zelda adventure that some gamers are going to dislike. The much-vaunted Skyloft proves to be a fascinating starting locale with tons of sidequests and secrets to discover, but flying to different floating islands takes a bit of time. It’s much faster and generally less annoying than Wind Waker’s sailing, but there were times where the pull of my next objective was so strong that I would have gladly accepted a fast warp to that location.

The vast, open Hyrule Field is replaced by tinier, more disconnected, and more puzzle-centric ground areas leading up to dungeons. Although Skyward Sword is lengthy – my first playthrough took just over 40 hours – the physical size of the game world is smaller than Twilight Princess. As such, the game occasionally tasks you with backtracking through areas you’ve already completed while on fetch quests, but it usually changes the environment in interesting ways or throws out new challenges.

Interesting review, eh? It seems to me that the reviewer was extremely hesitant to say anything bad about the game – mostly retracting negative statements with counters to invalidate the bad. Bottom line, it doesn’t read like a 10/10 to me – maybe a 9.9/10, but a 10/10? No way.

And here we come to the beauty of this entire argument. Somebody, somewhere, is going to read this and say that I’m being way too critical of the review, that the reviewer was splitting hairs to make it look like it wasn’t lip service, etc. Someone else will read this and say that I am being too generous with my amended score, and suggest maybe a 9.5/10.

You see what I did there? I just made you paint yourself into my corner of right, the tiny little area where I’m infallible and totally rocking an awesome argument. It’s ok to relax – enjoy your stay in Andrew’s Corner of Right – there’s an honor-bar with macadamia nuts and tiny bottles of Southern Comfort, and it’s totally comped…kind of like a free GDC drinking binge.

Ok, so now that I have proven that review scores are 100% subjective and broken, let’s move on to fixing them. And yes, I know that this was information that 99.9% of you already knew, but it’s always important to back up your argument on the Internet!

Here is what I propose. We write a manifesto that we circulate to friends and coworkers. Much like a GDD, this will be a comprehensive document detailing how to fix the problem. Maybe I’m feeling inspired from all the OWS stuff going on, but let’s start a revolution, man!

I’m going to start up a few categories and add in a handful of bullet points to get the doc started, and then you guys take it and evolve it. This is a community document, not just some ranting post where I talk about how awesome I am (although I am pretty awesome if you get me on a good day).

Reviewers

  • Reviewers should not seek revenue from publishers – this includes banners and page backgrounds – this is vital to journalistic integrity.
  • Reviewers should make their reviewer Gamertag known at the beginning of every review, so players can look them up and see just how far into the game they got.
  • Reviewers should keep their reviews 100% player-centric. Don’t talk like developers, and don’t use flashy terminology. Give the reader enough information so that they can judge whether or not the game is something they want to look into, not a dissertation on the state of gaming that screams “I COULDA BEEN A GAME DESIGNER!”
  • Don’t forget that people worked hard on the game. Don’t be a douche.

Scores

  • Scores are banished from the land. This includes stars, numbers, thumbs, little joystick icons, etc. Banished.
  • Reviewers may use a “Game of the Month” system, or something similar to that. These top picks will let the player know what games went over best without arbitrary scores.

Terminology

  • For the love of all that is good and holy, stop saying things like “The controls were floaty”. What does that mean? I mean, we know what it means, but the average player may not equate it to the proper context. For instance, you want the flying in Skyward Sword to feel floaty, but you don’t want gunplay in MW3 to feel floaty.
  • Use clear terminology that tells the player in no uncertain terms what you are saying. Be specific.

Players

  • Don’t razz reviewers for having differing opinions. Their job is to absorb the entire piece and summarize its worth to the individual reading it. You don’t like it, by all means make a thoughtful rebuke, but please don’t go “OMG U R THE SUCKZ”. It makes us all look bad.
  • Don’t take reviewers as gospel. Pierce the veil and make your own decision, because that’s what reviewers are there for – to help your decision, not give you a decision.

Developers

  • Take reviews for what they are – an opinion. Without review scores, there should be no need to freak out over getting an 8 or 9 anymore.

Seems like a pretty decent start, eh? Our medium is a fantastic one. We deserve a little better when it comes to reviews. We deserve it, the players deserve it, and the reviewers… well…they are better than what they have been churning out recently. At least I hope they are.

Anyways, let’s evolve this. Keep adding ideas to it – maybe we can get something going. Maybe we can inspire an intrepid journalist to start the first great review rag of our time. Or maybe I just typed out 1300 words for nothing.

 

So You Wanna Be In Charge?

Original Author: Tim Borrelli

“It is a well known fact that those people who most want to rule people are, ipso facto, those least suited to do it.” – Douglas Adams, Hitchhiker’s Guide To The Galaxy

This blog was cross-posted here.

OK, animators. I know the deal. You guys want to be the next Brad Bird, the next John Lasseter, the next Jennifer Yuh Nelson, or the next Glen Keane.

Well, don’t we all. But here’s the problem. Instead of concentrating on the craft that you so passionately want to blaze a trail in, some of you are concentrating on figuring out how to become the next amazing director. While some of you are going about this endeavor in a respectful and proper manner, some of you are doing it wrong.

In fact, by my completely unscientific count, 99% of you are doing it right, and 1% of you are doing it wrong. It’s not just animators, either. It’s just about every discipline in this and other industries. So to the rest of you, when you read “animator,” fill it in with your specialization!

The 99%, Part 1: The 85%

I’ve interviewed many of you. I’ve asked many of you the stereotypical “what do you want out of your animation career while working here” while trying to fill entry- to senior-level positions. Almost all of you have responded with “learn game animation, contribute the best I can and be a part of a great team.”

You guys and gals are the easy hires. If you’ve got a positive attitude, a killer demo reel and the position is open, you’ve got a good chance of being picked up. If the position is filled, your reel will stay in the pile for future positions.

You’re going to work hard at improving your craft, no matter what your experience level. You’ll soak up as much information as you can, learn to give and take constructive criticism and build positive relationships with many of your fellow animators in the trenches. You are going to excel in animation and be the people that everyone wants on their team.

You probably don’t need to read on, but I bet you will, because you want to learn.

The 99%, Part 2: The 14%

Some of you want it all, and want it now. When asked what your short-term goals are, you answer “To be an animation supervisor!” When asked why, you answer “Because I have great ideas!” or “That’s how I feel I can make the biggest impact!” That kind of ambition is great when expressed and executed respectfully, and many of you understand that.

You guys and gals are a little harder to hire. Your enthusiasm to excel may be off-putting to some potential employers. Others may see a little bit of themselves in you and want to give you a chance. In some cases, the position you are interviewing for just won’t match your goals, and in others it will.

Out of your smaller group, many of you realize that you aren’t going to just be handed the responsibility you want. You work your way through the ranks, learning like sponges, waiting patiently for the opportunity to prove your ability to lead your peers. You step up in team critiques. You seek out opportunities to speak at conferences or start podcasts or conversations to talk about the future of your craft while embracing its past. You may not all excel at animation as well as the 85%, but many of those 85% respect your leadership.

You should probably read on, I promise it’ll help you achieve your goals.

The 1%

Then there are the rest of you. You have a sense of entitlement that is mind boggling. Not only do you want it all, and want it now, but you proclaim you have great ideas and scoff at those who disagree with them. You routinely criticize other disciplines, proclaiming you could do their job better than them. Whether or not your beliefs are true, you have little respect for the path already laid (and those who laid it) and even less patience for earning a shot to prove you can improve that path.

You guys and gals are usually excellent at manipulating people to your side, which can be misconstrued as leadership. Hence, you tend to talk yourselves into getting hired and promoted. Some of you will force people out of the way to move up, others will simply move on to the next studio when a higher position opens up.

There are a few of you who can animate to the level of the 85% or 14%, but not many. You’ve spent more time trying to LOOK good and less time trying to BE good – good at animation, good at teamwork, good at being a leader. You seek praise, not feedback. You are quick to blame, but quicker to take credit for a job well done. You are on a team, but not always a team player.

You probably won’t read on, even though you should.

The 100%

No matter what your goals are with your animation career, and no matter what group you fall into, there are some things you all need to learn. Some of you will listen, some of you won’t, and that’s fine. These things will work themselves out over time. This isn’t so much a checklist of what to do as it is a guideline on how to behave in a professional and social environment.

More importantly, this is my priority list of what I’d like to see in a person looking to become an animation lead, supervisor, or director. Most of this stuff is fairly common knowledge, but unfortunately not common practice:

  1. When giving critique, DON’T give it the way you would want to be given it. Not everyone will respond the same way to feedback. First, learn how to constructively critique (saying something sucks, doesn’t “feel right”, or “you’ll know it when you see it” isn’t constructive). Second, learn how each member of your team most effectively responds to critique. Easy? No, but being a lead isn’t easy.
  • On that note, if you are talking about work done on another studio’s project, publicly calling their work “horrible” or “terrible,” or saying “it sucks,” is fairly poor form, even more so if you offer no opinion on how to improve it. Who’s going to want you directing them if that’s how you treat the hard work of people you don’t even know?
  • No matter how good you are, or think you are, there is always someone better. That someone might be on your team. Be OK with that, and look at it as an opportunity to improve YOUR craft. Besides, if you want to be in charge, you won’t be animating as much (if at all)  anymore. May as well get used to it.
  • When someone compliments something you worked on, your first response should be to credit your team. This applies to in-person as well as in interviews, via email, on Facebook/Twitter, etc. Whether you believe it or not, at least ACT like the thing being complimented was a team effort. Chances are it was anyway, and everyone knows it.
    • Conversely, if you make a mistake, own up to it. Being in charge also means taking the brunt of the blame when things go wrong, and if you can’t even admit when YOU made a mistake, few will trust you to stand up for them as their leader.
  • If you don’t like how a teammate or your current lead does things, talk to them about it. Don’t harbor ill feelings or talk about them behind their backs, since the people you talk to will eventually assume you treat them in the same manner. Besides, throwing your team under the bus is not a sustainable career tactic.
  • Self-promotion is important, but don’t do it selfishly or immaturely. You weren’t the only one who worked on your project, after all, so don’t misrepresent the rest of your team by acting like a child. Promote your game, yes, but leave the interviews to the PR and community team to set up. If they want you to be a part of them, they’ll ask.
  • Above all else, treat teammates like human beings, not like resources. If their work is suffering, maybe something in their personal life is distracting them that you can try to help them work through professionally. If someone has the potential to exceed your abilities, encourage them instead of setting them up for failure. If someone pays you a compliment on your work, pass it on to the others who had an impact on that work. If you feel the need to shout to the rooftops about how awesome YOUR work was on a game, remember there are teammates who don’t seek that recognition who worked just as hard as you did, if not more.

Doing it Right

If you want to be a director, supervisor or lead, you’ll need to learn to let go. Let go of being the one who animated the cool thing. Let go of believing that you are the only one who can act out those moves from that passion of yours. Let go of trying to be the best animator on the team.

Instead, learn to direct the cool thing and allow the animator you directed to shine when it comes out well. Learn to trust and direct your mocap actors. Accept that you aren’t in the spotlight on your team anymore.

You are no longer the producer of content, you are the ENABLER of those who DO produce the content. It’s not an easy transition to make, especially if you were the go-to guy for a long time. However, if you follow the guidelines above, and find that the transition is right for you, it’s definitely a rewarding one.


Policy-based design in C++

Original Author: Stefan Reinalter

One problem which often arises during programming is how to build a base set of functionality which can be extended by the user, while still being modular enough to make it easy to replace only certain parts of an implementation without having to resort to copy & paste techniques. I guess every one of us has faced this problem at least once, and come up with different solutions. There is a powerful and elegant technique called policy-based design for solving this kind of problem, which is what I want to show today by applying it to a mechanism common in game development: logging.

The problem

Let us assume that we want to add logging facilities to our game, and for that purpose we build a simple base class called Logger, which can be extended (or completely replaced) simply by deriving from it and overriding a virtual function. Note that we are not concerned with how log messages are dispatched to the different logger implementations, but rather with how new logger classes with completely different functionality are implemented.

A simple base class for loggers might look like the following:

class Logger
{
public:
  virtual ~Logger(void) {}

  virtual void Log(size_t channel, size_t type, size_t verbosity, const SourceInfo& sourceInfo, const char* format, va_list args) = 0;
};

As you can see, it’s nothing more than a very simple base class with one virtual function. The arguments are the log channel (e.g. “TextureManager”, “SoundEngine”, “Memory”, etc.), the type of log (e.g. INFO, WARNING, ERROR, FATAL), the verbosity level, a wrapper around source-code information called SourceInfo (file name, function name, line number, etc.), and last but not least the message itself in terms of a format string and a variable number of arguments. Nothing spectacular so far.

One possible logger implementation could be the following:

class IdeLogger : public Logger
{
public:
  virtual void Log(/*arguments omitted for brevity*/)
  {
    // format message
    // do additional filtering based on channel, verbosity, etc.
    // output to IDE/debugger
  }
};

The IdeLogger outputs all log messages to e.g. the MSVC output window by using OutputDebugString(). In order to do that, it formats the message in a certain way, applies additional filtering based on the channel, verbosity, etc., and finally outputs the message. We might want another logger which logs to the console, and one which writes into a file, so we could simply add two additional classes called ConsoleLogger and FileLogger, which both derive from the Logger base class.

Sooner or later, this is where we run into problems, depending on what we want:

  • A logger which writes to the console, but only filters based on the channel, not the verbosity level.
  • A logger which writes into a file, but doesn’t filter any messages, because they are a useful tool for post-mortem debugging.
  • Slightly different formatting in one of the existing loggers, e.g. I like being able to click on log messages in Visual Studio’s output window because they are formatted like this:

    “C:/MyFilename.cpp(20): [TextureManager] (WARNING) Whatever.”

  • A logger sending messages over a TCP socket, without having to copy existing code for formatting/filtering.
  • Many more such features…

A deceptively simple solution

One solution which is sometimes applied to such problems is to put certain parts of an implementation into several base classes, and build a deep hierarchy of classes by using multiple inheritance. In essence, you end up with classes like ConsoleLoggerWithVerbosityFilter, FileLoggerWithoutFilter and TcpLoggerWithExtendedFormatting which multiply inherit from concrete implementations of base classes like ILogFilter, ILogDestination and ILogFormat. Personally, I tend to favor flat hierarchies without any coupling over deep hierarchies where leaf classes are sometimes affected by changes to some base class.

Furthermore, judging from experience, you often need to add a bunch of virtual functions to such hierarchies only to make some leaf class work, or end up copy-pasting existing code because a seemingly innocent change might break some of the existing implementations. That is why I try to stay away from such solutions – they might work for certain problems, but I’ve often seen them break during the development of a product.

    Anatomy of the problem at hand

    Let us take another look at the problem, this time from a different angle, leaving out implementation details like base classes, inheritance, and so on.

    What we essentially want is a mechanism which makes it easy to define new loggers based on existing functionality, without copying any code and without having to write a new logger implementation each and every time. In short, we want a mechanism where parts of the implementation can be assembled by writing only a few lines of code.

    By splitting the logger’s responsibilities into smaller pieces, we can hopefully find some orthogonal functionality along the way. A logger essentially:

    • filters the incoming message based on certain criteria, then
    • formats the message in a certain way, and finally
    • writes the formatted message to a certain output.

    If you think about it, these aspects are completely orthogonal to each other, which means that you can exchange the algorithm for filtering messages with any other without having to touch either the formatting or writing stage, and vice versa.

    What we would now like is some mechanism for exchanging those aspects with very little code. Such a mechanism can be built using templates, as in the following example:

    template <class FilterPolicy, class FormatPolicy, class WritePolicy>
    class LoggerImpl : public Logger
    {
    public:
      virtual void Log(/* arguments omitted for brevity */)
      {
        // pseudo code-ish...
        if (m_filter.Filter(certain criteria))
        {
          m_formatter.Format(some buffer, criteria);
          m_writer.Write(buffer);
        }
      }

    private:
      FilterPolicy m_filter;
      FormatPolicy m_formatter;
      WritePolicy m_writer;
    };

    The above is a very generic logger which passes on the tasks of filtering, formatting and writing messages to certain policies, which are handed down to the implementation via template parameters. Each policy does only a very small amount of work, but by combining them you can come up with several different logger implementations with only a single line of code, by means of a simple typedef, as shown later.

    Example policies

    Before we can discuss the pros/cons of this approach, let us quickly identify what some of the policies might look like in order to gain a better understanding of how such a system works:

    Filter policies

    struct NoFilterPolicy
    {
      bool Filter(const Criteria& criteria)
      {
        // no filter at all
        return true;
      }
    };

    struct VerbosityFilterPolicy
    {
      bool Filter(const Criteria& criteria)
      {
        // filter based on verbosity
      }
    };

    struct ChannelFilterPolicy
    {
      bool Filter(const Criteria& criteria)
      {
        // filter based on channel
      }
    };

    As you can see, each policy takes care of filtering messages based on certain criteria, nothing more.

    Format policies

    struct SimpleFormatPolicy
    {
      void Format(Buffer& buffer, const Criteria& criteria)
      {
        // simple format, e.g. "[TextureManager] the log message"
      }
    };

    struct ExtendedFormatPolicy
    {
      void Format(Buffer& buffer, const Criteria& criteria)
      {
        // extended format, e.g. "filename.cpp(10): [TextureManager] (INFO) the log message"
      }
    };

    Writer policies

    struct IdeWriterPolicy
    {
      void Write(const Buffer& buffer)
      {
        // output to the IDE
      }
    };

    struct ConsoleWriterPolicy
    {
      void Write(const Buffer& buffer)
      {
        // output to the console
      }
    };

    struct FileWriterPolicy
    {
      void Write(const Buffer& buffer)
      {
        // write into a file
      }
    };

    Discussion

    By dissecting the problem into different aspects, we can now implement very small functions/structs called policies, which can be assembled together in any way we wish by using just a single line of code. Some examples:

    typedef LoggerImpl<NoFilterPolicy, ExtendedFormatPolicy, IdeWriterPolicy> IdeLogger;
    typedef LoggerImpl<VerbosityFilterPolicy, SimpleFormatPolicy, ConsoleWriterPolicy> ConsoleLogger;
    typedef LoggerImpl<NoFilterPolicy, SimpleFormatPolicy, FileWriterPolicy> FileLogger;
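    To see the whole pipeline in motion, here is a minimal, self-contained sketch of the pattern. Note that it simplifies the version above: the Logger base class is dropped, Criteria and Buffer are hypothetical stand-in types invented for illustration, and a StringWriterPolicy writes into an in-memory string purely so the result can be inspected.

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-ins for the Criteria and Buffer types used above.
struct Criteria
{
    int verbosity;            // 0 = error, 1 = warning, 2 = info
    std::string channel;
    std::string message;
};

using Buffer = std::string;

// Filter policy: only lets errors and warnings through.
struct VerbosityFilterPolicy
{
    bool Filter(const Criteria& criteria) const
    {
        return criteria.verbosity <= 1;
    }
};

// Format policy: simple "[Channel] message" formatting.
struct SimpleFormatPolicy
{
    void Format(Buffer& buffer, const Criteria& criteria) const
    {
        buffer = "[" + criteria.channel + "] " + criteria.message;
    }
};

// Write policy: appends to a string so the output is observable in a test.
struct StringWriterPolicy
{
    void Write(const Buffer& buffer) { output += buffer + "\n"; }
    std::string output;
};

// Simplified LoggerImpl (no Logger base class, concrete Log signature).
template <class FilterPolicy, class FormatPolicy, class WritePolicy>
class LoggerImpl
{
public:
    void Log(const Criteria& criteria)
    {
        if (m_filter.Filter(criteria))
        {
            Buffer buffer;
            m_formatter.Format(buffer, criteria);
            m_writer.Write(buffer);
        }
    }

    const std::string& Output() const { return m_writer.output; }

private:
    FilterPolicy m_filter;
    FormatPolicy m_formatter;
    WritePolicy m_writer;
};

using TestLogger = LoggerImpl<VerbosityFilterPolicy, SimpleFormatPolicy, StringWriterPolicy>;
```

    A different writer policy (console, file, TCP) would slot into the same typedef without touching the filter or the formatter.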

    Advantages

    • With only 3 filter policies, 2 format policies, and 3 writer policies we are able to come up with 3*2*3 = 18 different implementations, just by using a simple typedef.
    • Each part of an implementation can be replaced separately, so if you e.g. add a TcpWriterPolicy you can combine it with any other filter or format policy, without having to resort to copy-pasting.
    • Pieces of policies can be assembled in any way we want.
    • New policies can be built from existing policies simply by combining them.
    • If any combination of policies does not do exactly what you want, you can still implement your own logger by deriving from the Logger base class, without being forced into a certain inheritance structure beyond the single virtual function in the base class.
    • Simple, flat hierarchies, no multiple inheritance used.
    • One virtual function call instead of several ones (multiple inheritance of several interface classes).
    • Unit testing becomes a lot easier because each policy has exactly one responsibility.
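    To illustrate the unit-testing point: a policy can be exercised completely on its own. The sketch below tests a format policy in isolation; Buffer and Criteria are hypothetical stand-ins for the types used in the article, and the policy body is filled in with one plausible implementation.

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-ins for the article's Buffer and Criteria types.
using Buffer = std::string;

struct Criteria
{
    std::string channel;
    std::string message;
};

// The policy under test: "[Channel] message" formatting, nothing else.
struct SimpleFormatPolicy
{
    void Format(Buffer& buffer, const Criteria& criteria) const
    {
        buffer = "[" + criteria.channel + "] " + criteria.message;
    }
};

// The policy is exercised directly; no logger, filter or writer is
// involved, so the test covers exactly one responsibility.
inline Buffer FormatMessage(const std::string& channel, const std::string& message)
{
    SimpleFormatPolicy formatter;
    Buffer buffer;
    formatter.Format(buffer, Criteria{channel, message});
    return buffer;
}
```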

    An example of how to build new policies out of existing ones:

    template <class Policy1, class Policy2>
    struct CompositeFilterPolicy
    {
      bool Filter(const Criteria& criteria)
      {
        return (m_policy1.Filter(criteria) && m_policy2.Filter(criteria));
      }

    private:
      Policy1 m_policy1;
      Policy2 m_policy2;
    };

    typedef LoggerImpl<CompositeFilterPolicy<VerbosityFilterPolicy, ChannelFilterPolicy>, ExtendedFormatPolicy, IdeWriterPolicy> FilteredIdeLogger;
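    The composite can likewise be tried out in isolation. The following sketch fills in hypothetical stand-in policies (a fixed verbosity cap and a single allowed channel, both invented for illustration) so the AND-combination becomes concrete and testable.

```cpp
#include <cassert>

// Hypothetical stand-in for the article's Criteria type.
struct Criteria
{
    int verbosity; // 0 = error, 1 = warning, 2 = info
    int channel;   // numeric channel id, for illustration only
};

// Stand-in policy: errors and warnings only.
struct VerbosityFilterPolicy
{
    bool Filter(const Criteria& c) const { return c.verbosity <= 1; }
};

// Stand-in policy: a single allowed channel.
struct ChannelFilterPolicy
{
    bool Filter(const Criteria& c) const { return c.channel == 0; }
};

// A message passes the composite only if it passes both child policies.
template <class Policy1, class Policy2>
struct CompositeFilterPolicy
{
    bool Filter(const Criteria& criteria) const
    {
        return m_policy1.Filter(criteria) && m_policy2.Filter(criteria);
    }

private:
    Policy1 m_policy1;
    Policy2 m_policy2;
};

using BothFilters = CompositeFilterPolicy<VerbosityFilterPolicy, ChannelFilterPolicy>;

inline bool Passes(int verbosity, int channel)
{
    BothFilters filter;
    return filter.Filter(Criteria{verbosity, channel});
}
```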

    Drawbacks

    • Template parameters are part of the class name, leading to longer class names, in turn making the code harder to debug if you’re not used to it.
    • Increased compile times if you’re not careful. I would recommend using explicit template instantiation for required logger implementations.
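    For the compile-time point, the usual remedy looks roughly like the following. This is a stripped-down sketch, not the article's actual classes: the policy bodies are placeholders and the Logger base class is omitted. The idea is that the header carries an explicit instantiation declaration (extern template, available since C++11) while exactly one .cpp file carries the matching definition, so the template is instantiated once instead of in every translation unit.

```cpp
#include <cassert>

// Hypothetical, stripped-down stand-ins for the article's policies; the
// names mirror the article but the bodies are placeholders.
struct NoFilterPolicy { int value = 1; };
struct SimpleFormatPolicy {};
struct FileWriterPolicy {};

// A simplified LoggerImpl (no Logger base class, trivial Log body) so the
// sketch stays self-contained.
template <class FilterPolicy, class FormatPolicy, class WritePolicy>
class LoggerImpl
{
public:
    int Log() { return m_filter.value; } // placeholder body

private:
    FilterPolicy m_filter;
    FormatPolicy m_formatter;
    WritePolicy m_writer;
};

typedef LoggerImpl<NoFilterPolicy, SimpleFormatPolicy, FileWriterPolicy> FileLogger;

// Header side: an explicit instantiation *declaration* tells every
// translation unit not to instantiate FileLogger implicitly.
extern template class LoggerImpl<NoFilterPolicy, SimpleFormatPolicy, FileWriterPolicy>;

// .cpp side: the explicit instantiation *definition* instantiates it
// exactly once, paying the compile-time cost in a single place.
template class LoggerImpl<NoFilterPolicy, SimpleFormatPolicy, FileWriterPolicy>;
```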

    Conclusion

    Policy-based design is a powerful tool which can be used in many situations, not just the one shown here. It can lead to extensible and modular classes if applied correctly, but it isn’t a silver bullet that can solve our architectural problems forever, so it shouldn’t be applied blindly.


    Game Thinking

    Original Author: Martin Pichlmair

    I recently attended a talk by a gamification proponent who presented a fragmented and ill-structured theory of what gamification can bring to a product. He arrived at the right conclusions, but due to the long and winding road he took to get there, the audience was generally unwilling to accept the fine finale of his talk. In the end, he dismissed gamification as the surface-scratching marketing tool that it currently is, proposing a focus on “game thinking” instead. Because he failed to come up with a convincing definition of that phrase, I thought I should step in and deliver one. Mine is based on “design thinking”[1], a term popular in design theory. I’m quite familiar with design theory because it was the foundation of all our research at the university department I researched, taught and worked at before entering the exciting world of the games industry.

    Design as a Process

    It is an engineer’s dream to structure the design process of a product into discrete steps leading from idea to finished product. Many disciplines of engineering, be it software engineering, bridge building, or product engineering, share this dream. Take a problem and solve it by dividing it into sub-problems that can be tackled in succession. The steps are: generate an idea, make a plan, follow the plan, ship. At worst, the product is mistaken for the process, with “the plan being organized so as to make the structure of the design process reflect the structure of the sub-components of the resulting design product.”[2], which is then called the product-process symmetry.

    Yes, the waterfall model is a famous result of this thinking. But there’s a reason why most game studios utilize agile methods instead of the waterfall model. Games do not lend themselves to this style of strict planning. This is especially true if they are innovative, but even games with a marginal degree of innovation might have development challenges that are difficult to take into account in advance. The same is true for product engineering. I’m not very familiar with bridge building, so I won’t talk about that. In design, iteration became the new paradigm, and with it came a plethora of new design and development methods – from rapid prototyping and pair programming to interaction sketching.

    Wicked Problems

    The core of games is interactivity. Games are rules systems that only flourish upon interaction. The dynamics and aesthetics of play are what we design them for. In design thinking, the notion of “wicked problems” was introduced to describe those mostly user-centric problems that are so hard to solve in a step-wise problem-solving manner. Rittel and Webber described wicked problems in 1973 as problems that meet the following conditions:

    1. There is no definitive formulation of a wicked problem.
    2. Wicked problems have no stopping rule.
    3. Solutions to wicked problems are not true-or-false but good-or-bad.
    4. There is no immediate and no ultimate test of a solution to a wicked problem.
    5. Every implemented solution to a wicked problem has consequences.
    6. Wicked problems do not have a well-described set of potential solutions.
    7. Every wicked problem is essentially unique.
    8. Every wicked problem can be considered a symptom of another problem.
    9. The causes of a wicked problem can be explained in numerous ways.
    10. The planner (designer) has no right to be wrong.[3]

    Think of a game mechanic and you can see how nicely it fits this description. I’ll give an example. We’re currently testing out different single-player mechanics in Chasing Aurora. One of the design challenges we’re facing is developing physics-based 2D flight for a bird-man. Since flight is your main means of traversing a level and the challenges are platformer-ish, the player has to have a lot of control over movement. On the other hand, a lot of the challenges are built on wind-streams that affect your movement. We’re iterating the flight movement component again and again and again and again to strike the perfect balance between control and being at the mercy of the elements. Now let’s look at the above list. (1) The description of the problem is informal and incomplete. (2) We will not know when we have built the perfect control scheme; we will just bounce against the fluffy invisible wall of diminishing returns, because there is no absolutely (3) right solution to this problem. (4) There isn’t even a formal test we could apply to check whether our solution is perfect. All we have are unreliable human game testers. (5) If we implement a different solution, it affects the whole gameplay. (6) There is no point in describing all possible control schemes for character movement; we just need to make it (7) feel deep, fresh and perfectly fitting to the game as a whole. (8) Reminder: The whole game can be regarded as a wicked problem. Point (9) is impossible to tackle without venturing into philosophy. I leave that as an exercise for the reader. And finally, (10) the game will simply not fulfill the player if we fail.[4]

    The reasons why wicked problems are so prevalent in the game development process are many, but the most obvious are:

    • Game development is bound by countless constraints: Genre, technology, player expectation and physiological limits, to name a few. Constraints call for creative solutions.
    • Interactivity is at the core of games: Nearly every aspect of a game manifests as a dynamic system bound to user interaction. Interactive components always need to be tested with players.
    • All systems and parts of systems are connected: Health and health packs, health packs and lifting strength, lifting strength and inventory display, inventory display and button mapping, button mapping and device drivers, device drivers and graphics cards, graphics cards and shader versions, shader versions and post-processing FX, post-processing FX and particle systems, particle systems and healing magic, healing magic and health. Solutions to one problem affect a host of other problems and their solutions.
    • Innovation needs experimental setups: Every innovation is a risk. Despite the ongoing cloning and the prevalence of straight genre works, there’s a huge amount of innovation in the field.

    The standard game design and development problem solving toolkit consists of tools to overcome wicked problems: Scrum, prototyping, MDA, game testing from pre-alpha/internal to beta, interaction sketching, even patching and the rampant board game obsession. Tackling wicked problems is key in game design. And we’re doing it every day.

    We’ve Already Solved These Problems, Let’s Tell The Others

    Game development never fell into the product-process symmetry trap in the first place. We’ve solved a lot of design problems in this industry that other, similarly structured areas are still struggling with. And if you look at the current trend of gamification and how horribly designed most applications of gamification are, it is clear that the challenge is not only bringing game mechanics to other areas but also revising company structures in other industries so that they are able to tackle problems as game design tasks. Hand over the keys instead of opening the doors. To truly gamify a service, the processes in the company that runs the service have to be adapted. New processes, tools and methods have to be introduced. Just adding a points-based rank system does not do the trick. While restructuring the company, the whole process of user interaction has to be restructured, too. I am no friend of gamification, but if you do it, you had better do it whole-heartedly. If you add achievements to a web service, make them as supportive of your design goals as possible (after all, achievements play an important support role in modern gaming): a prize for risking something, a tool to foster changing the style of play, a reward for exploration and experimentation, a means of comparing your progress to other players’ progress. Awarding points for arbitrary interaction is not gamification and reduces the whole concept to a marketing trend. Structuring interaction dynamics via game mechanics can make products better.

    If gamification were about introducing game/design thinking to new areas, it would not feel so much like a marketing stunt. I firmly believe that a lot of products – and their design processes – could profit from the agile design methods we use in game development*. And a lot of interactivity can be rendered more satisfying when it is game-designed.

    [1] Lawson, B. (1980): How Designers Think. Third and revised Edition, Architectural Press, Oxford, 1997.

    [2] Gedenryd, H. (1998): How designers work. Ph.D. dissertation, Cognitive Studies Department, Lund University, Sweden.

    [3] Rittel, H. & Webber, M. (1973): Dilemmas in a General Theory of Planning. Policy Sciences, Vol. 4, Elsevier Scientific Publishing Company, Amsterdam.

    * If you look at the most successful recent product launches, like the iPhone, and think about how many iterations they went through before being released, you can see design thinking in action. Also, I think what sets Apple apart from other companies is mostly the fact that they do not release every iteration of a product to the market. It’s not that they make better hardware than Samsung. Samsung just releases crap mobiles in addition to their great and well-thought-out products.

    ** … and it will not sell, and not have a decent metascore, either.


    Open-mindedness 101

    Original Author: Julien Delavennat

    Hi everyone n_n

    I initially wanted to post about education. But I have too much stuff to say on that topic for a short article. Another time maybe.

    I’m on a pretty tight schedule at the moment, so this is yet another small off-topic article. Well, nothing is ever off-topic for designers.

    Anyway, today I want to talk about Open-mindedness, because it is a basis for understanding beauty, and what we can present our audience with.

    TL;DR:

    • Open-mindedness is a quality that makes you want to like the things that you don’t like.
    • Hating things is easy. Liking them is much harder, but much more rewarding. You have some life experience so you’re probably already aware of this n__n
    • It helps you evolve and expand your comfort zone, and enables you to enjoy everything you’re not familiar with.
    • Whatever the trolls say, the masses aren’t retarded. If you find that something other people like is bad, it’s not necessarily a problem with its quality; it’s probably more a question of open-mindedness and of understanding what people enjoy and why it’s relevant to their needs.

    Recently I realized most of the people I know have the same way of imposing their tastes as I had when I was 13. That is, if something wasn’t aesthetically nice to me, I would know why, and say how it could be improved to please me more. That’s what everybody does, right? You like certain things, and if something isn’t within that range, you say you don’t really like it, don’t you? Well, that’s one issue I want to address: taste isn’t innate, it changes with time.

    If you don’t like something, that doesn’t mean this thing can’t please you, it only means you’re not liking it. See the difference? What happened to me is, a long time ago, I would listen to Soilwork’s first two albums (melodic death metal), hear the screaming vocals, and think “Why would anyone listen to that?”, “Why would anyone even make that?”.

    I spent plenty of time listening to music on YouTube, and on every single song, one of the top comments would be “THIS IS AMAZING. IS THE BEST. THIS IS THE BEST SONG EVER.”. … Why? I didn’t find the songs that amazing, so why did these people find them so great? One line of thinking that can be found a lot in trolling is: “I don’t like this. It’s terrible. And everybody who likes this is obviously a moron”.

    Long ago I thought the same: “The masses are retarded”. So one thing I could have thought would have been “These people find those songs amazing because they don’t have any taste. I have taste.” But the sheer amount of videos with this kind of comment made me think: “There’s got to be something to like in there, and I’m just not getting it.”

    Paradigm shift indeed. Things stopped being ugly. Suddenly, it was me that just wasn’t able to like things. I wanted to understand what people found so amazing about all the things I found average. And find out I did. Now I listen to Soilwork’s ChainHeart Machine album and think “This is some pretty awesome quality music”. Soilwork is even one of my favorite bands now (Sworn to a Great Divide is one of the best albums ever, trust me :D).

    Taste evolves. The troll line of thinking, “I don’t like this therefore it’s terrible”, is like the line of thinking that says “skill is innate”. Heh, everybody here knows it isn’t. Just because you think something isn’t worth your time doesn’t mean it can’t be. It can be. If you try. The paradigm shift I was talking about earlier is the following: “It’s not the song that’s bad. It’s me, because I can’t appreciate it.”

    Being able to like things is a quality. It’s called open-mindedness. Non-open-minded people are often convinced they are right, and think others don’t have anything to teach them. That’s why they troll the things they don’t like. They don’t want to admit that they are the ones not making an effort. Open-mindedness is wanting to like things.

    Like the people who know that skill is acquired, open-minded people know that you have to want to learn. You have to want to like the things you don’t. That is, expanding your comfort zone.

     

    Next time I’ll try talking a bit about why the comfort zone is important to understand to be able to make art, and how it is linked to artistic risk.

    And to anything subject to judgment really. Like video games.

    Anyway, have a nice day/night and see you next time n_n