Mastering the compulsion loop

Original Author: Matt Yaney


Last year I was speaking with a potential publisher of one of my company’s projects when they asked me if my group was familiar with the Compulsion Loop. You know, the Compulsion Loop talked about by Zynga and a stated big part of their secret sauce to success. I let them know we were very comfortable with the compulsion loop and would do it justice. They were surprised and reassured by our confidence in such new methods.

A few days later I was looking at an RFP we were considering responding to when I saw the compulsion loop mentioned again. Since then, it has shown up in a variety of media postings and grabbed a considerable amount of attention.

The amazing thing about the compulsion loop is that it isn’t new. The theory was presented to the scientific community in 1957, a year before Tennis for Two was played on an oscilloscope. The theory is commonly known as mastering the compulsion loop.

3…2…1… planned!

Original Author: Savas Ziplies

No matter what we do, whether we are Agile or falling down the waterfall… whether we are senior or junior… whether the project is big or small… for nearly everything we do we have to define tasks and estimations to plan the days, weeks and months to come. Again, no matter what, this (especially the first) planning is in most cases (and from personal experience, “most” means 90%) pretty far from what is really required in the end. The other 10% split into 1) the ones that planned well but not 100% correctly, maybe using “proven” methodologies such as PERT or just estimating +30%, and 2) the ones where the planning perfectly fit the development (again, in my experience, normally 1%–3%).

So, what you could say is to just “do it” like the 1%–3% did. That would normally be the way to go if their approach worked out. The thing is, from everything I have seen in project planning over the years: it just worked because of luck!

I think it is fair to say that I learned project planning pretty much from the practical side, always failing at what I learned theoretically. No matter how much time I spent planning big projects, setting up tasks, goals, milestones, reviews, reworks, … it never got into this 1%–3% frame.

Even with a more agile-driven approach, small sprints, good daily tasks, weekly reviews and time-consuming remodelling of the plan: if I sum up what had to be reworked every single week, I was as far off as with the initial waterfall plan. All goals were achieved and “somehow” it worked out, but it is disappointing for the one who planned to see his estimations being more a guideline than a workplan.

Based on that experience I started thinking: What are the reasons for such divergence? What am I planning wrong? What do I have to change to fit the developer’s needs? And that is what struck me: The Developer!

…to be busy!

With all the IT projects I have worked on, most of the time is consumed by the developers, the engineers, the architects of the (mostly) software projects. Of course Game Design, Art, etc. have to be taken into account, but they often run parallel to what goes wrong more often: the actual development or implementation! (No question, thinking lean, everybody should care about downtimes because of unfinished output/input.)

As a developer myself who has to plan for others, estimate work, think about production, milestones, etc., none of the “theoretical” methodologies ever really worked out for me; they just took my time. And in most cases this time is very limited. Estimations have to be given instantly to evaluate feasibility; plans have to be set up initially to have a higher-level model to work and further estimate on. So time is of the essence not only in the plan itself but also in the time to create it. And if I have to rework the plan all the time anyway (real life), I do not want to spend too much time in that phase (no time for building up charts with optimistic, pessimistic and realistic plans…).

…should be enough!

By coincidence, Jake Simpson gave a pretty good impression of this wonderful land where everything works out. It is known as Should Be Land. This is normally the land where the estimations come from, too: from developers who should estimate their tasks, who should give an idea of how long each of them could take, to make a plan that also has to tie in with other departments (lean everywhere). If such an estimation fails because “36 hours should be enough!”, more often than not others that depend on you are delayed, too.

Inexperienced developers, juniors and fresh “hackers” from the backyard especially tend to underestimate the requirements in correlation with others: planning interfaces, building adapters to dock onto other systems and so on. Nevertheless, seniors aren’t better in general. People who “program” stuff normally just plan the programming time… and they do not want to plan too much time, as a developer is often assessed on his CpH (code per hour) output and not on his quality of code, reusability, extensibility or tests. The result is, in many cases, optimistic estimations with little or no time to even plan what you are going to develop.

…am no developer!

Another often misleading planning element is that (many) project managers, scrum masters, Gantt junkies, … do not have the best development background. Therefore, the estimations given are taken as fixed. Experienced managers add 30% on top and plan that in. This is unfortunate, as even the best estimation cannot be rescued by a simple time addition if essential prerequisites for good development are missing.

One of Two of Three

Instead of complicated methodologies, or just adding 30%–50% of time to an initial estimation, I split the work into the three tasks I want to see as output from a developer: the implementation (or coding, hacking, programming, refactoring, …), the planning and the tests!

  • The development is the actual implementation of the task. It may be the creation of a user system, achievements, a tool, crafting, … whatever comes to mind.
  • The planning is the structuring of work, the evaluation of patterns, architecture and interfaces to follow during development, and it precedes the development accordingly.
  • The testing is not a QA process but the personal testing of code, the writing of (unit) tests, maybe even playing the created feature, and it follows the development.

Now, instead of adding a specific amount to a given estimation, I add tasks to the estimation. My input is the implementation estimation from a developer. Based on that, I add two thirds of it as planning and one third of that as testing, resulting in the three tasks of implementation, planning and testing with a weight of 1/3 of 2/3 of 3/3. For example, if an estimation is 9 hours, I add a task for planning with 6 hours and a task for testing with 2 hours.
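As a sketch, the base-3 split can be derived like this (the nesting of the fractions, where testing is taken as a third of the planning task, is my reading of the rule; it reproduces the 9/6/2 example, and the function name is made up for illustration):

```python
def split_estimate(implementation_hours, base=3):
    """Derive planning and testing tasks from an implementation estimate.

    With the default base of 3: planning is 2/3 of the implementation
    estimate, and testing is 1/3 of the planning estimate.
    """
    planning = implementation_hours * (base - 1) / base
    testing = planning / base
    return {"implementation": implementation_hours,
            "planning": planning,
            "testing": testing}

print(split_estimate(9))
# {'implementation': 9, 'planning': 6.0, 'testing': 2.0}
```

The point is that the derived numbers are cheap to compute in your head, so the split costs nothing at estimation time.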

Yes, the result is a very keen estimation, but the important part for me is that it covers mandatory tasks that are often forgotten, and it is also able to compensate for possible misjudgement, unforeseen circumstances, … as the package is given as one. The creation of these tasks reminds the developer what he “should” do, and the derived estimations compensate for possible problems as well as fitting the real necessity of the other tasks (at least in my experience).

The tasks are important because normally you do not start hacking instantly. Evaluating existing code and interfaces and elaborating what architecture or pattern to use is often more practical, and a necessity in general, before starting to implement (think something through before starting to program). Already knowing what the result should be helps the implementation. And the testing part may be the coder’s worst nightmare, but again it is a requirement.

The most important point for me is: it’s easy! I can derive it in my head, I have a most-likely accurate estimation (the future may prove me wrong ^^) and I won’t forget the importance of planning and testing.

If you follow different approaches, the weighting can also be adapted, either by mixing tasks or by changing the base weight. For example, if you follow a test-first approach you can switch the planning and testing tasks, as the testing in TDD also partly compensates for planning. Or you can change the base to 4 and plan 1/4 of 3/4 of 4/4, meaning for our example: implement for 8 hours, test-first for 6 hours and plan for 2 hours (bear with me, I selected estimations that are easy to calculate).

What base to use depends on personal experience, the project and, most importantly, gut feeling. For me, a third for general estimations and a fifth (1/5 of 2/5 of 5/5) for more specific tasks has paid off. But in general, split into my three main tasks, I instantly have an estimation ready that fits at least my real world.

…should work!

Please keep in mind this has no theoretically proven background; it is just my experience over years of experimenting with different approaches and using the methodologies given in the literature. Everything depends on your environment and personal likes and dislikes. It “should” work for other setups, too. I have used it in several personal standalone and live project estimations, and at least for now it has fit best.

In my environment, with the time given and the amount of work to do, this approach works. It is never really off track, it reminds people about planning and testing besides the actual hacking, and it helps me easily keep track of developments without spending too much time on overblown concepts that do not fit my personal habits or the “real” developer.

Of course, there are also drawbacks, such as too little or too much planning. If you split up e.g. user stories into tasks such as “build a divideByZero() function”, “create a class”, “write the SQL statement for querying all users”, … you will end up with unnecessary tasks because of their simplicity. In such cases, the user story “should be” the unit to estimate and divide onto the tasks, or you reduce the base and introduce a zero/x task.

Therefore, this may not be the 100% 1%–3% approach, but it fits me best and therefore leads me into that frame more often. The important thing is the variance that fuels this approach… and that can make it work for you, too!


Teaching the Toaster to Feel Love [Part 2]

Original Author: Nick Darnell


Welcome to part two of my series of posts on teaching a toaster to feel love, or introduction to machine learning for the rest of us.

If you missed part one, you can read it here.

Last time I thought that in part two I would be able to start posting some code.  However, after writing a good chunk of the code I wanted to include, I realized it was way too much to cover in one post.  So I’m going to use this post to introduce some function calls and classes, but focus mostly on the process.


A fairly popular SVM library is LibSVM, which you can download here.  There are several variations of that library in many languages.  It’s what I will be referencing in this and in coming posts.


The first and most important step when using an SVM is to figure out what constitutes a good feature vector.  If you remember from part one, a feature is just a floating-point value with importance to the programmer, but just another number in a dimension to the SVM.  The feature vector is the collection of these features, constituting a single example.

The features that your feature vector is composed of are the real secret sauce; everything else in the process is more or less mechanical.  Sadly, that also means I can’t tell you what to put into your feature vector, because everyone’s problem needs a different vector.  However, there are lots of tactics that apply in all cases that you should remember.


You need to remember to normalize your feature data.  I don’t mean just normalizing the feature vector like you would a float4; I mean each feature should be normalized either to 0..1 or to –1..1 (not both), for two important reasons: numerical stability and weighting.

Numerical Stability – We don’t want to get into situations where we might be hitting up against the 32-bit limit with very large numbers during the testing process.

Weighting – We also don’t want one dimension to greatly outweigh any other dimension accidentally.  Because an SVM is attempting to find the hyperplane with the greatest margin that divides the data, having one dimension in the feature vector run from 0..1000 while everything else runs from 0..1 would mean that almost all your data would be classified based on that one feature: the SVM might find a hyperplane that gives it a margin of 500 in that one dimension, while every other feature’s margin added together doesn’t even approach 500.  So that feature, more than any of the others, would be what determines classification.
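A minimal sketch of per-feature rescaling (a pure-Python min–max normalization to 0..1, not part of LibSVM; the data is made up to show the 0..1000 vs. 0..1 case):

```python
def normalize_features(vectors):
    """Rescale each feature (column) across all examples to the 0..1 range."""
    mins = [min(col) for col in zip(*vectors)]
    maxs = [max(col) for col in zip(*vectors)]
    normalized = []
    for vec in vectors:
        row = []
        for value, lo, hi in zip(vec, mins, maxs):
            # Guard against constant features, which would divide by zero.
            row.append((value - lo) / (hi - lo) if hi > lo else 0.0)
        normalized.append(row)
    return normalized

# One feature runs 0..1000, the other 0..1; after normalization both
# contribute on the same scale.
data = [[0.0, 0.0], [500.0, 0.5], [1000.0, 1.0]]
print(normalize_features(data))  # [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
```

In practice you compute the mins and maxs once from the training set and re-use them to scale every vector you classify later, so that training and runtime data live in the same range.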


Not all the data you may want to include in a feature vector exists naturally as a number.  Sometimes you need to transform a category into a feature.  Your first thought might be to divide the category number by the number of categories and voilà, a floating point number.  However, this is not the best approach.  The better approach is to represent the categories like you would as bit fields, dedicating one dimension to each category.  So if there were 3 possible categories, you’d represent the first category with 3 features, 0, 0, 1, the second category as 0, 1, 0, and the third as 1, 0, 0.
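Sketched as a small helper (which dimension maps to which category is arbitrary, as long as you keep it consistent across every vector):

```python
def one_hot(category_index, num_categories):
    """Expand a category into num_categories features: one 1.0, the rest 0.0."""
    features = [0.0] * num_categories
    features[category_index] = 1.0
    return features

# Three possible categories become three dedicated dimensions.
print(one_hot(0, 3))  # [1.0, 0.0, 0.0]
print(one_hot(2, 3))  # [0.0, 0.0, 1.0]
```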


It’s important to remember that a machine doesn’t have insight.  If I gave you position and time data, you could tell me other information like velocity and acceleration.  However an SVM doesn’t know what the data is, so it can’t derive any sort of insightful observations about it.  So it’s important to remember to rephrase the same data in different ways that tell the machine new things about the same data.


The order of the features does not influence the process; however, you need to keep your ordering consistent.


Your feature vectors should all have the same number of features.


The second step in the SVM process is training the SVM model using the feature vectors you’ve built, containing all the useful features you think will help distinguish one thing from another.


To train an SVM you need two sets of data: the training set and the validation set.  While you can use just a training dataset, it’s not advisable, as it can lead to a problem known as ‘overfitting’, which is just a fancy way of saying you’ve created an SVM that ONLY recognizes the training data and doesn’t handle variation very well.

The most common and easiest training methodology for beginners with an SVM is known as a grid search.  The process is pretty simple: to determine the best support vectors, we iterate over a series of magic-number parameter combinations (Cost (C) and Gamma) until we find the pair that seeds the division of our data best.  The kernel we’ll be using to divide our data is the Radial Basis Function (RBF) kernel.  You could devise any kind of kernel you wanted to divide the data, but there’s a bunch of existing kernel functions people tend to re-use because they work in a wide range of situations.

The Cost (C) parameter represents the penalty for misclassification in the training process.  The higher the cost, the more rigid/strict the trained model will be for the training data.  Meaning that while you may see fewer misclassified items (for data that closely resembles the training set), the model will be much more likely to ‘overfit’ the problem.

Gamma is a parameter that’s used directly in the RBF kernel function.

The first step in the training process is to create an svm_problem object, which holds all the training data in the form of svm_nodes.  Then we define our svm_parameter, which represents the kernel type and all the other parameters used in the training and creation of our svm_model.

To determine the best parameters in the grid search we’ll use the svm_cross_validation method that’s built into LibSVM, which allows us to take the ground-truth training data and see how well it would be classified given the model resulting from the selected Gamma and C parameters.

Now, you don’t have to vary JUST the C and Gamma parameters; you could try different kernel functions.  But the more things you vary in the grid search, the longer the training process is going to take; depending on the size of your dataset, that may be hours.  It all depends on how many different combinations of parameters you are willing to try.

Side Note: Should you ever run into a training-time problem, I highly recommend looking into ways of running the training process in a distributed environment.  The training process for an SVM is VERY amenable to being parallelized.  There are also some GPU (CUDA) implementations of SVMs that I’ve read about but not yet tried.  You can always use one SVM implementation for finding the best parameters and another for runtime usage.

Once you’ve determined the best C and Gamma based on the misclassifications from your svm_cross_validation grid search, you’ll just need to call svm_train to generate the final model, which you can save out and re-use at runtime to classify the datasets you want the SVM to identify for you.
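The grid search itself is just a nested loop over candidate parameters. In this sketch, `cross_validate_score` is a stand-in for a call into LibSVM’s svm_cross_validation (which hands back predicted labels you would score against the ground truth); the exponential grids are a typical starting point, not mandated values:

```python
import itertools

def grid_search(c_values, gamma_values, cross_validate_score):
    """Try every (C, gamma) pair, keeping the one with the best
    cross-validation score."""
    best_score, best_pair = None, None
    for c, gamma in itertools.product(c_values, gamma_values):
        score = cross_validate_score(c, gamma)
        if best_score is None or score > best_score:
            best_score, best_pair = score, (c, gamma)
    return best_pair

# Exponentially spaced grids are the usual starting point for C and gamma.
c_grid = [2.0 ** e for e in range(-5, 16, 2)]
gamma_grid = [2.0 ** e for e in range(-15, 4, 2)]
```

With LibSVM’s bindings, the scoring callback would build an svm_parameter with the RBF kernel and the candidate C and gamma, run svm_cross_validation on your svm_problem, and count how many predicted labels match the ground truth; the winning pair then goes to svm_train for the final model.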

Once you have your model, you’ll want to run your validation dataset against it to see how well it does on data that wasn’t used to train it.  If you find that the model performs very poorly, you’ve over-fit to the training data.  So you may want to save the N best C and Gamma parameter pairs you found, generate N final svm_models, and pick the one that performs best on the validation dataset.

At the very least, having a validation dataset allows you to make sure there are no regressions in the unit tests of your code.

Next Time

Next time I should be able to walk you through the code more easily, just referencing the concepts introduced in this post.

Cross-posted to my personal blog.

Should Be Land

Original Author: Jake Simpson


Have you guys ever noticed that game developers tend to live in this mythical land of ‘Should Be’? It’s quite a fascinating place, where things happen the way people think they should, rather than the way they actually do.

Let’s face it, lots of developers live in a world where everything works the way they think it should – based on personal experience and ‘logic’ (which, strangely enough, tends to reflect their own personal belief set, because everything they think is right, ergo it’s logical, can’t you see that?) – and if it doesn’t, well, it should, and I’m going to behave as though it does, to force everyone else into my way of thinking.

The problem with being smart is that the smarter you are, the easier it is to justify whatever quirk / approach / idea / belief-in-the-world-at-large you happen to have. “But of course people will buy this! Who wouldn’t want a zombie unicorn simulator?”

The reality is, of course, that quite a lot of creative people would rather live in the world-as-they-think-it-should-be than in the real world because the real world adheres to rules that either don’t suit them or are weighted against them.

Now, I don’t know about you, but it seems to me that the people who have the most success in the world are those who really understand how it is, in all its screwed-up beauty. The reality is that it’s not what you know, it’s who you know. The more you subscribe to the idea that “it shouldn’t matter”, well, you are right, but you are going to be right all by yourself in the corner over there, getting more and more bitter about the way the world is and protesting to anyone who listens that “this is all bullshit and it shouldn’t be like that”.

If the world does operate according to “it’s who you know” (and I think we all have ample evidence that it does), then isn’t the smart move to get out there and meet people? Get out to GDC and hang out in the W bar and just talk to people. Stop complaining that it’s not the way it should be and get with the fact that reality is bogus and sometimes you have to do shit you’d rather not even though it’s bullshit.

If you have code that doesn’t work, but should, you don’t just proclaim “well, it should work” and push it out into the world (well, unless you are Microsoft, obviously :) ). No, you figure out what’s broken in your APIs and either fix it or find a way around it, and generally work with what we have to get where we need to go. But when it comes to speeding, well, we all do 5 mph above the speed limit because “the limits are bullshit anyway, right?”.

There are many game designers who seem to operate in Should Be Land – who feel that people shouldn’t mind their convoluted UI, because “They need those options, right?”, or that this control mechanism “Totally works, and people will love it” while everyone else is just shaking their heads going “What is this guy smoking?”.

Should Be Land permeates quite a lot of things, and to be sure, while I am making light of it, in some circumstances it’s a good thing. You cannot change the world until you start treating the world the way you want it to be. Gay rights would not have come as far as they have without the gay community just deciding that the world should treat them a certain way and expecting that out of everyone.

The real problems come when your version of Should Be Land comes into conflict with reality, because 99 times out of 100 you’ll lose. Should Be Land regarding things that really don’t matter (the band Chicago is the BEST BAND IN THE WORLD!!!!!!!) generally paints you as a harmless eccentric, or even charming. At worst it brands you as slightly delusional, but still, nothing to worry too much about. But when it collides with either the law of the land or popular perception of the type that economically affects you, then it starts getting a little more serious.

Speed limits Should Be 10 mph faster than they are, but living in that reality means speeding tickets, regardless of how bullshit that may be. People should have a sense of humor about that dirty joke, but some people do not, regardless of how much they should, and living in that reality just means you’ll be having some uncomfortable meetings with HR. The STL should be a good way to write C++ code, but the reality is that lots of people don’t really know how to use it or what’s going on behind the scenes, so most of the time it’ll screw you more than it will help. People Should know the implications of what they do before they do it, but most of the time they won’t, and you’ll be the one picking up the pieces. And believing that your design is a gift from god and everyone is going to love it usually means relegation to the budget bin 2 weeks after launch.

Every Should Be Land assumption needs to be treated to a heavy dose of reality (and we ALL have our Should Be Land issues, even if you don’t think you do – in fact, particularly if you don’t think you do), and we all need to start recognising when it’s occurring.

Rule of thumb? If you say something and everyone disagrees, you probably want to re-evaluate. That can be really painful if it’s something you totally believe, but generally (and I say generally, there are times when you are right and no one else is) re-evaluation is required. Of course, the more invested someone is in whatever belief it is, the harder this is to even see, let alone do, but regardless, if you don’t even try then it’s probably safe to say that problems are coming your way. And worse still, if you make a proclamation and no one challenges it or has anything to say about it, even when you ask, then there’s definitely a problem, but no one wants to talk to you about it because they are scared of the way you’ll react.

The ability to see the world as it actually is – which I think most people can do – and then deal with that reality – which I think quite a lot of people have issues with – is one that will only help. Being right is great, but often being right is not what’s required. Being effective is what’s required and the two are not equal.

Being right all by yourself in the corner sucks. It Shouldn’t – people should appreciate you being right all by yourself – but it does, so get with the program and stop living in Should Be Land. Everyone else will thank you for it.

No Comment

Original Author: Francesco Carucci 

When I interview a new C++ programmer, my favorite question is not about const-correctness or virtual tables, but: “Do you comment your code?”. I still ask about virtual tables, but asking about code comments always springs some interesting discussions and gives me good insight into the level of care the aspiring programmer puts into crafting his code, to make it not only correct and fast but also easy to work with. I have witnessed puzzled faces wondering if it was a trick question (yes, it is), and one unlucky candidate pushed his luck to a very bold “which good programmer doesn’t comment his code?!”. Me.

I don’t know if not commenting my code makes me a good programmer, but it surely forces me to go a long way to make it so my code is as clean and as self-documenting as I can possibly make it.

But I lied: it’s not true that I never write any comments. A more precise principle, which I adhere to as much as possible, is:

  • Comment only what can’t be expressed in code

Which is very little in my experience.

More precisely, my aversion to comments stems from the DRY principle:

  • Don’t Repeat Yourself

Intentions in code should be expressed once and only once: code comments are very often a blatant violation of the DRY principle, which usually leads to several nasty problems and inefficiencies. For one, whenever there is duplication in code of any form, changes to that bit of code must be made at least twice. Comments are no exception: to be remotely useful they must be kept in sync with the code they describe, but, we programmers being very creative and intelligent people (and lazy), it’s guaranteed we will sometimes forget to update comments when we modify, fix or update a bunch of code. A wrong comment is not only useless, it’s actually harmful. In a sea of green, what I’m interested in is the code: that’s where the meaning is, and that’s where the bug is if I’m looking for one. Comments are noise, and often wrong. If the code is so complicated that it can’t be understood without comments, my suggestion is to rewrite it and make it clear; our fellow workmates will thank us for the extra effort and love.

So when it comes to comments, I follow one guideline very strictly:

  • Whenever you feel the need to write a comment, write a method instead

I’m a sucker for code that reads like English: it’s self-explanatory and tells me right there everything I really need to know about itself, in a nice executable form, with no duplication.
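An illustration of the guideline (sketched in Python for brevity; the order-shipping example and all its names are made up):

```python
def send(order):
    # Stand-in for whatever actually ships the order.
    print("shipping order", order["id"])

# Before: a comment explains what the condition means.
def ship_order_commented(order):
    # an order is shippable when it is paid and every item is in stock
    if order["paid"] and all(item["stock"] > 0 for item in order["items"]):
        send(order)

# After: the comment becomes a method, and the call site reads like English.
def is_shippable(order):
    return order["paid"] and all(item["stock"] > 0 for item in order["items"])

def ship_order(order):
    if is_shippable(order):
        send(order)
```

The extracted method also becomes a small, reusable unit of behaviour in its own right, which is exactly the factoring effect described below.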

An interesting consequence of applying this guideline extensively is that code tends to nicely divide itself in small units of behaviour, one per method, that are easy to reuse, further reducing the overall amount of code (that will inevitably spawn a bug at some point). No Comment is for me a powerful incentive to factorize code.

But there’s never a free lunch, and very rarely a perfect solution or methodology: I’ve found that my style of layering abstractions over abstractions, keeping methods short and easily understandable, makes reading code difficult for a certain class of programmers who need to see the inner details of an algorithm in order to understand it fully: my style slows them down, since they have to jump back and forth to get the full picture, including details. These programmers are often very bright, and you want to put them in the most comfortable position to weave their magic.

It’s an issue in my style that I haven’t fully resolved, and I’m very open to suggestions here: I believe it’s a matter of finding the sweet spot between abstractions and details, and it probably depends on the team and the actual people we have to work with. At the end of the day the goal is the best productivity possible for the whole team.

Design Cheat Codes

Original Author: Claire Blackshaw

Disclaimer: I’m not claiming games which use these mechanics are in any way lesser. I’m merely highlighting these as easy ways to sauce up a game. Think of it like adding ketchup / mayo / vinegar to your chips.

Having worked on several projects which were technically limited, or constrained by some unbreakable conditions, I’ve come to appreciate these quick and simple ways to sauce up your game. Note that not all sauces work with all games.

Make it Kinematic

Movement is fun, controlling a kinetic object is juicy. Give items in your game mass and direct control.

Compare these three situations.

  1. The player selects a unit, then selects a destination, then confirms the selection. The unit then moves along a path.
  2. The player directly controls the unit with key controls.
  3. The player controls the unit’s acceleration, and the unit has mass and inertia.

I guarantee that if you prototype those three cases in a sandbox and give 10 people controllers, case 3 will get the most play time, followed by 2, then 1. The simple truth is players like the feeling of weight and kineticism. A simpler real-world example is to give a child 3 objects: a light foam ball, a baseball and a jelly ball.
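To make the difference concrete, here is a minimal one-dimensional sketch of case 3 (the class name and numbers are mine; tune mass and damping to taste):

```python
class InertialUnit:
    """Case 3 from the list above: the player steers acceleration,
    while mass and damping supply the feeling of weight."""

    def __init__(self, mass=2.0, damping=0.9):
        self.mass = mass          # heavier units respond more slowly
        self.damping = damping    # <1.0 bleeds speed off each frame
        self.velocity = 0.0
        self.position = 0.0

    def update(self, input_force, dt):
        acceleration = input_force / self.mass   # F = m * a
        self.velocity = (self.velocity + acceleration * dt) * self.damping
        self.position += self.velocity * dt

unit = InertialUnit()
for _ in range(10):               # hold the stick for ten frames...
    unit.update(10.0, 1 / 60)
for _ in range(10):               # ...then release: the unit coasts to a stop
    unit.update(0.0, 1 / 60)
```

Case 2 is the same loop with velocity set directly from input; prototyping both side by side makes the difference in feel obvious.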

Secondary Motion

Do all player actions cause secondary actions? Menu buttons should have not only hover and click states but an animation or action. When the player jumps, shoots or activates something in game, they cause an action; ensure that secondary actions happen around that action. For example, if a gun fires, add some smoke or a bullet trail. A jump can have a nice animation on the avatar, with dust at push-off and landing. An activated button can press in, then light up; the simple act of separating these makes the player more satisfied.

Particles and Shaders

Yes, we have all seen particles, bloom and cheap shader effects done to death. The reason? They work! Do you have a well-balanced aesthetic in a gorgeous game? Then do not heap them on top; they will bury your work. But feeling a bit lacklustre? Well, then particles and a few cheap effects like rim lighting will help.


Does your game need physics? No? That’s great news: you can use physics to add secondary motion and sparkles. You don’t need to worry about syncing it up over a network, or about minor glitches, because all it’s doing is making things look better.

Leaderboards & Awards

Will they make your game better? Probably not. Will they mean players play longer, or are more likely to make a purchase, challenge friends or talk about your game… YES!

Read this article about implementing leaderboards (thanks to David Czarnecki).

Brighter Colours

Well, yes, we like shiny things. Unless you’re already producing a high-quality, well-polished art style, going for brighter, bolder colours with clean lines will almost always increase your eyeball-to-player conversion.

Also, bright, bold, friendly cartoon colours have the broadest reach and appeal.

Remove Options

Okay, this one is not so much sauce as just general good advice. I’m adding it here because it’s easy to do and almost always helps. Make a list of all your features, options, abilities etc. Measure their usage in play tests. Remove the bottom third of the options. A simple thing, and it feels like you’re subtracting, which you are, but you’re adding value.

Local Multiplayer

Okay, if you planned it from day 1 and are confident you can deliver online multiplayer, then do it. It’s a great option, but it’s not something you add to a project as sauce. Local multiplayer, however, has none of those headaches. Whether hot-seat, simultaneous or asynchronous, it’s all good.

If your AI is flaky or your balance is weak, the meta-game that evolves from multiplayer can often save you.


These are just some of the cheap sauces you can add to a game to spice it up. They are not a substitute for a quality product which is well designed; they are good, trusted spices to spike the dish with. Don’t believe me? Look at PopCap, Zynga, Miniclip and all the other casual games performing well.


Original Author: Eddie Cameron


There’s one question that we as devs don’t bother asking, don’t know the answer to, or choose to ignore: “Why should my game be made?”. All of us have a stake in the success and advancement of the industry. All of us have a passion for games, because we’re definitely not in this for the money (as I type this I’m at the glamorous day job that supports my very unprofitable indie dev career). So why don’t we ask why?

Many mainstream games are often criticised for being pretty generic. This can partly be blamed on marketing departments, terrified that the game they have to sell won’t have any fancy bullet points. But, no matter how tempting it may be, they can’t take all the blame. It’s easy to lose track of a game’s idea once dev mode kicks in, and to finish a game that may be well made but misses the mark on being something worth remembering. These are our Call of Duties and Medal of Honours (I know they get a lot of shit, but that’s the price of their super-high profile). Both are very well presented, but beyond the obvious “Because it’s fun” or, even worse, “Because it sells”, why were these games made? We can do “fun” and we can make profitable games, so why can’t more games prove they can do more?

However, it’s easy for me to pick on the mega-studios when they are so distant from my one- or two-man projects. Indie devs need to ask the question too. We don’t have shareholders, and usually no bosses to please, so what’s our excuse? Countless Flash platformers which, if you’re lucky, might have a cute gimmick. What’s the point? Having different spaceship sprites for a top-down shooter isn’t enough reason to make it. Even some of the more overt ‘art games’ (a ridiculous term) seem guilty of losing purpose sometime during development, falling into the trap of art-ness without expression.

While working on a game beyond the concept stage, whether as an artist, designer or programmer, we should keep that question in the back of our minds. Not only does this mean that assets/design work will be aware of the game’s goal, but it will give advance warning if the goal needs to evolve along with the game. We don’t need a separate genre and audience for Art games, but we do need to put authorial ‘meaning’ into more games. Simply through the self awareness that comes with questioning ourselves (and a little bit of talent I suppose) we can keep themes/messages/purpose alive through aesthetics and mechanics.

This doesn’t, however, mean that developers need an agenda to force through a game. In fact, that often leads to clumsy political posturing (Homefront/Modern Warfare). But a rough idea or theme for a game can crystallise over development into something robust and well understood, both by players and the team.

This may all sound pretty vague and waffly, and it is. Kinda. So how do you go about asking why a game should be made? We don’t need a ‘mission statement’ type directive, to be ignored by everyone actually working on the game. So nothing as general (and dull) as “This game will advance the medium and create a positive experience for the player”. Instead, let it evolve naturally from the concept. As an example (from the Global Game Jam theme of extinction), say a team wanted to do a game about the death of the dinosaurs. What reasons could the team work towards? Maybe the game should educate children about the events, or it should be a satire about pollution, or maybe it should offer the one true first-person dinosaur simulation. (These ideas are free to any lucky takers!)
This team decides to make a pollution satire. Keeping this in mind, mechanics are developed that revolve around playing dino-president, trying to keep as many creatures alive as possible while the sky dims with dust. Artists paint bleak landscapes of dying life. Coders code. And it was good. At some evaluation point, the team again asks why their game should be made. With the production as it stands, they decide to throw away the dinosaur theme. The new purpose they decide on is “To satirise the current political system by showing a near-future enviro-apocalypse”. Some assets and mechanics have to be canned, but because all team members had a clear purpose to what they were making, most still fit the new theme. In time, the team releases a best-seller, makes millions, and saves the world.

So. Why don’t more developers keep such a simple question at the back of their minds? Many may not see the point; they have a rough idea (WWII FPS) or gameplay gimmick (you can BEND TIME) and as long as new stuff fits into that and works, it’s in. Others may ask too far into development, where it simply isn’t feasible to change the game too much. Worse, some may just realise they no longer have an answer for the game they’ve already poured so much time into.

A seemingly simple question can have a rather large effect on a finished game. For those claiming that they already ask why: have a pat on the back, but make sure to listen to the answer throughout development.

Note: This is cross-posted to my personal blog.

Rhetoric for Engineers

Original Author: Chris Hargrove

Rhetoric for Engineers

Today I’m going to talk about rhetoric, a.k.a. the “art of persuasion”.  If you’re an engineer/programmer, this is probably something you don’t normally think about; in fact the very idea of the subject may leave a nasty taste in your mouth.  If so, then you are exactly the kind of engineer I’m talking to.

Whether you like it or not, the art of persuasion affects you.  It affects interaction with your peers, it affects the lasting impression your work has on others, it affects your career growth (especially if you’re thinking about leadership or management at all in your future), all kinds of stuff.  So it’s worth talking about.

Now I’m not going to sit here and go into long tutorials about the Western philosophy of rhetoric, diving into logos, ethos, and pathos and all of that.  From an engineering point of view, all of those issues are “implementation details”; you can read about them at your leisure.  Instead, my goal here is to get you interested in the subject in the first place, and that starts with learning to respect the subject.

Respect for rhetoric doesn’t come lightly to many engineers.  It’s usually seen as the domain of salespeople, marketing, and lawyers, all groups that engineers usually barely tolerate and sometimes outright despise.  There is a philosophical chasm that needs to be crossed in order for these two sides to see eye to eye, but it can be crossed.

To set the stage, I’m going to bring up a little book by Neal Stephenson called “Anathem”.

Pop your head up

Original Author: Paul Evans

There are times when people just want to get their head down and get their stuff done.   As a coder myself, I know the importance of having a stretch of uninterrupted time to finish a thought when coding.  Switching tasks abruptly can ruin the rhythm and flow of the creative process – it can take time to regain that momentum once you have been disturbed.

Constant interruption slowing down completing your tasks might cause you to seek solitude.  Solitude is useful at times and a good producer / manager will shield you, as a creator, from interruptions or interference when what you have to finish is more critical than anything that you could be interrupted for.  The danger is that prolonged isolation causes segregation – information becomes filtered from what you need to know to what someone else thinks you need to know.  Communication happens on the side of the listener, but a listener has no chance of communicating if they are being shielded from what is being said.

The funny part of all this is that a person can be in both situations at once.  They can be told too much and expected to be reactionary to one type of thing and not find out about something else that would help them be more effective.  This kind of dysfunction can occur internally in a team, but is perhaps more likely to occur between different teams where information exchange is naturally more guarded.

To counteract too many interruptions, try learning the power of saying no – do not take on more than what you can do.  Make sure if you are being constantly interrupted that this is accounted for in your estimates.  Let the person know that if you do x, that will mean y will have to wait.

If you are feeling left out of the loop, make sure you take time to talk to people.  Build some time into your day to keep in touch – instead of waiting at your desk for a long build to complete, go interact with someone.  Studios and companies are defined by their people and their interactions; without people you just have some office furniture in an empty building.  If you work for a big company, try introducing yourself to different people during lunch now and again.  If you are a programmer that always hangs out with programmers, why not try to make a connection with an artist?  Try to dispel any “them and us” feeling if you start to sense it.

Connecting with your colleagues can be a very rewarding endeavor – you begin to be able to introduce people to each other who can solve each other’s problems.  It enables you to help more people, because even if you cannot help someone yourself, you know someone who might.  Connecting people can mean effort is not wasted or duplicated, and helps work to be coordinated.  You can get a better understanding of how the company works as a whole and, through these small social actions, make things better.

I have heard it said that many important conversations happen during smoking breaks.  Relationships are sometimes forged during drunken nights out.  Personally I don’t smoke and tend not to drink very much… but you do not have to smoke or drink to excess to join in these things now and again.  Making new connections deserves some effort – people who seem to have many strong connections worked hard at it.

For small studios and companies it makes sense to make room in your calendar for local and industry events.  The gaming conferences I have been to have had some amazing lectures.  A person who just went to those and did not try and socialize missed out on an opportunity though.  For smaller companies in particular it is vital to have trusted peers you can share ideas with and swap tips; so many important things happen as a side effect of the main event.

At the end of Mike and Matt’s recent #AltDevBlogADay podcast (March 17th) they stated an aim for #AltDevBlogADay: to be like a perpetual game conference.  Although the web can never be a substitute for physically meeting people, it is a worthy goal, and participating in it does feel a bit like a conference.  If you are interested in game development and want to try to engage with more professionals, try a guest post on this site.  Get your voice out there and start some discussions about something that is important to you – it could make a positive difference to your career!

Collaboration and Merging

Original Author: Niklas Frykholm

(This has also been posted to the BitSquid blog. We are looking for a tools programmer.)

Games are huge collaborative efforts, but usually they are not developed that way. Mostly, assets can only be worked on by one person at a time and need to be locked in version control to prevent conflicting changes. This can be a real time sink, especially for level design, but all assets would benefit from more collaborative workflows. As tool developers, it is time we start thinking seriously about how to support that.

Recently I faced this issue while doing some work on our localization tools. (Localization is interesting in this context because it involves collaboration over long distances — a game studio in one country and a translation shop in another.) In the process I had a small epiphany: the key to collaboration is merging. When data merges nicely, collaborative work is easy. If you can’t merge changes it is really hard to do collaboration well, no matter what methods you use.

Why databases aren’t a magic solution

A central database can act as backend storage for a collaborative effort. But that, by itself, does not solve all issues of synchronization and collaboration.

Consider this: if you are going to use a database as your only synchronization mechanism then all clients will have to run in lockstep with the database. If you change something, you have to verify with the database that the change hasn’t been invalidated by something done by somebody else, perform the change as a single transaction and then wait for the database to acknowledge it before continuing. Every time you change something, you will have to wait for this round trip to the database and the responsiveness of your program is now completely at its mercy.

Web applications have faced this issue for a long time and they all use the same solution. Instead of synchronizing every little change with the database, they gather up their changes and send them to the database asynchronously. This change alone is what has made “web 2.0” applications competitive with desktop software.

But once you start talking to the database asynchronously, you have already entered “merge territory”. You send your updates to the server, they arrive at some later point, potentially after changes made by other users. When you get a reply back from the server you may already have made other, potentially conflicting, changes to your local data. Both at the server and in the clients, changes made by different users must be merged.

So you need merging. But you don’t necessarily need a database. If your merges are robust you can just use an ordinary version control system as the backend instead of a database. Or you can work completely disconnected and send your changes as patch files. The technology you use for the backend storage doesn’t matter that much, it is the ability to merge that is crucial.

A merge-based solution has another nice property that you don’t get with a “lockstep database”: the possibility of keeping a local changeset and only submitting it to others when it is “done”. This is of course crucial for code (imagine keeping all your source files in constantly mutating Google Documents). But I think it applies to other assets as well. You don’t want half-finished, broken assets all over your levels. An update/commit workflow is useful here as well.

Making assets mergable

If you have tried to merge assets in regular version control systems you will know that they usually don’t do so well. The merge tool can mess up the JSON/XML structure, mangle the file in other ways or just plain fail (because of a merge conflict). All of these problems arise because the merge tool treats the data as “source code” — a line-oriented text document with no additional structure. The reason for this is of course historic, version control systems emerged as a way of managing source code and then grew into other areas.

The irony of this is that source code is one of the hardest things to merge. It has complicated syntax and even more complicated semantics. Source code is so hard to merge that even humans, with all their intelligence, find it taxing. In contrast, most assets are easy to merge, at least conceptually.

Take localization, for instance. The localization data is just a bunch of strings with translations for different languages. If one person has made a bunch of German translations, another person has made some Swedish translations and a third person has added some new source strings, we can merge all that without a hitch. The only time we have any problem at all is if two people have provided different translations for the same string in the same language. We can solve such standoffs by just picking the most recent value. (Optionally, we could notify the user that this happened by highlighting the string in the tool.)
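A merge along those lines can be sketched in a few lines of Python. To be clear, the data layout (a dict keyed by string and language) and the timestamp-based tie-breaking are my own assumptions for illustration, not the format of the actual localization tool:

```python
# Sketch: each side is a dict mapping (string_id, language) to a
# (translation, timestamp) pair. Non-conflicting entries pass straight
# through; for conflicts we keep the most recent value, as described
# above, and record the key so the tool could highlight it.

def merge_translations(a, b):
    merged = dict(a)
    conflicts = []
    for key, (text, ts) in b.items():
        if key in merged and merged[key][0] != text:
            conflicts.append(key)          # flag for the user
            if ts > merged[key][1]:
                merged[key] = (text, ts)   # most recent value wins
        else:
            merged[key] = (text, ts)
    return merged, conflicts

german  = {("hello", "de"): ("Hallo", 100)}
swedish = {("hello", "sv"): ("Hej", 101),
           ("hello", "de"): ("Guten Tag", 102)}  # conflicting entry

merged, conflicts = merge_translations(german, swedish)
```

The German and Swedish additions merge cleanly; only the duplicated German entry shows up as a conflict, resolved in favour of the newer translation.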

Many other assets have a similar structure. They can be described as “objects-with-properties”. For example, in a level asset the objects are the entities placed in the level and their properties are position, rotation, color, etc. All data that has this structure is easy to merge, because there are essentially just three types of operations you can perform on it: create an object, destroy an object and change a property of an object. All these operations are easy to merge. Again, the only problem is if two different users have changed the same property of the same object.

So when we try to merge assets using regular merge tools we are doing something rather silly. We are taking something that is conceptually very easy to merge, completely ignoring that and trying to merge it using rather complex algorithms that were designed for something completely different, something that is conceptually very hard to merge. Silly, when you think about it.

The solution to this sad state of affairs is of course to write custom merge tools that take advantage of the fact that assets are very easy to merge. Tools that understand the objects-with-properties model and know how to merge that.

A first step might be to write a merge program that understands XML or JSON files (the program in the link has some performance issues — I will deal with that in my next available time slot) and can interpret them as objects-with-properties.

This only goes half the way though, because you will need some kind of extra markup in the file for the tool to understand it as a set of objects-with-properties. For example, you probably need some kind of id field to mark object identity. Otherwise you can’t tell if a user has changed some properties of an old object or deleted the old object and created a new one. And that matters when you do the merge.

Instead of adding this extra markup, which can be a bit fragile, I think it is better to explicitly represent your data as objects-with-properties. I’ve blogged about this before, but since then I feel my thoughts on the subject have clarified and I’ve also had the opportunity to try it out in practice (with the localization tool). Such a representation could have the following key elements.

  • The data consists of a set of objects-with-properties.
  • Each object is identified by a GUID.
  • Each property is identified by a string.
  • The property value can be null, a bool, a double, a vector3, a quaternion, a string, a data blob, a GUID or a set of GUIDs.
  • The data has a root object with GUID 0.
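One possible in-memory representation of this model is sketched below. The class and method names are my own, not those of the actual BitSquid tools; note how the only mutating operations are exactly the three easy-to-merge ones (create, destroy, change a property):

```python
import uuid
from dataclasses import dataclass, field

ROOT = uuid.UUID(int=0)  # the root object has GUID 0

@dataclass
class Document:
    # Maps object GUID -> {property name -> value}. Values may be None,
    # bool, double, vector3, quaternion, string, blob, GUID or a set of
    # GUIDs -- sets rather than arrays, so merges stay simple.
    objects: dict = field(default_factory=lambda: {ROOT: {}})

    def create(self):
        guid = uuid.uuid4()  # fresh GUIDs never collide between users
        self.objects[guid] = {}
        return guid

    def destroy(self, guid):
        del self.objects[guid]

    def change_key(self, guid, key, value):
        self.objects[guid][key] = value

    def add_to_set(self, guid, key, member):
        self.objects[guid].setdefault(key, set()).add(member)

doc = Document()
player = doc.create()
doc.change_key(player, "entity-type", "player")
doc.add_to_set(ROOT, "entities", player)
```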

We use a GUID to identify the object, since that means the ids of objects created by different users won’t collide. GUID values are used to make links between objects. Note that we don’t allow arrays, only sets. That is because array operations (move object from 5th place to 3rd place) are hard to merge. Set operations (insert object, remove object) are easy to merge.

Here is what a change set for creating a player entity in a level might look like using this model. (I have shortened the GUIDs to 2 bytes to make the example more readable.)

create #f341
change_key #f341 "entity-type" "player"
change_key #f341 "position" vector3(0,0,0)
add_to_set #0000 "entities" #f341

Note that the root object (which represents the level) has a property “entities” that contains the set of all entities in the level.

To merge two such change sets, you could just append one to the other. You could even use the change set itself as your data format, if you don’t want to use a database backend (that is actually what I did for the localization tool).
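Treating the change set itself as the data format might look like the following sketch (the tuple encoding and function names are my assumptions; the operation names follow the example above):

```python
# Sketch: a change set is an ordered list of operations. Merging two
# change sets is just list concatenation; "applying" one replays the
# operations over a dict of objects-with-properties.

def apply_changes(ops):
    objects = {"#0000": {}}  # the root object
    for op in ops:
        kind, args = op[0], op[1:]
        if kind == "create":
            objects[args[0]] = {}
        elif kind == "destroy":
            del objects[args[0]]
        elif kind == "change_key":
            guid, key, value = args
            objects[guid][key] = value  # last writer wins on conflicts
        elif kind == "add_to_set":
            guid, key, member = args
            objects[guid].setdefault(key, set()).add(member)
    return objects

mine = [("create", "#f341"),
        ("change_key", "#f341", "entity-type", "player"),
        ("add_to_set", "#0000", "entities", "#f341")]
theirs = [("create", "#a7b2"),
          ("add_to_set", "#0000", "entities", "#a7b2")]

merged = apply_changes(mine + theirs)  # merging is just appending
```

Because object creation uses fresh GUIDs and the "entities" property is a set, the two users' changes combine without any conflict resolution at all.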

I think most assets can be represented in the objects-with-properties model and it is a rather powerful way of making sure that they are mergable and collaboration-friendly. I will write all the new BitSquid tools with the object-with-properties model in mind and retrofit it into our older tools.