Networking, APIs and C preprocessor abuse

Original Author: Ben Carter

This may sound a bit niche, but I thought that I’d talk a little bit about a simple design for a packet-based networking interface that we came up with on a project a while back, since it’s both a relatively elegant solution in its own right and a demonstration of some tricks for bending the C++ preprocessor into doing your dirty work for you. So, without further ado…

Basically, the problem this sets out to solve is simple – we have two machines on a network, and we want to provide an API for the user to send data between them (in practice there might be more than two, but extending things is simple once you have the basics working). Since a lot of different bits of random code will be doing this (particularly as this system is intended to handle communication for debugging purposes as well as “real” game code), we want to make the API as simple as possible, and keep the potential for errors as small as possible. “Simple” often equates to “small”, so what’s the minimal set of functionality we can get away with?

Well, what the end user wants to do are two things – send data, and receive it. For our purposes we’ll assume that the data in question is some sort of fixed-size structure (variable size things are a fairly easy extension for later). Since we’re in C++-land, it makes sense that we’d represent that as a class (or struct if you prefer). So the user wants to put some data into a class, and then send it across the network to the other machine, where they will do something to receive it. So our API will need send() and receive() calls…

…Or will it? Whilst at first glance receive() makes sense as a function for the user to call, in practice that can be a bit of a PITA. It opens up two big problems – firstly, on the user end, they have to poll every frame (or suitable time period) to see if data has arrived and deal with it. Secondly, and even more awkwardly, it opens up API design questions like “what happens to data that no-one is trying to receive?” and “what happens if the receive call is expecting a different type of data from what actually arrived?”. So maybe we should turn the receive side on its head, and say that the receive function is something the user implements, and it gets automatically called when data arrives. There are some timing-dependency implications to that (if, for example, a packet is received half-way through the code doing some other processing), but a little bit of common sense with scheduling and mutexes can generally resolve them. (And, as timing problems go, the “what if my receive function has a race condition against the processing code?” problem is far less scary and hard to deal with than the “what if data packet A has a race condition against data packet B?” problem, IMHO, which is where you seem to end up if you go down the road of having user-triggered receive calls.)

So, we’ve got our data class:

class PlayerPosition
{
public:
    float m_x, m_y;
};

And what we want, ideally, to be able to do with it is something like this:

PlayerPosition update_packet;
update_packet.m_x = g_player_x;
update_packet.m_y = g_player_y;
update_packet.send();

…and at the other end:

void PlayerPosition::receive()
{
    g_player_x = this->m_x;
    g_player_y = this->m_y;
}

That’s about the minimal amount of code we could possibly use to achieve the results we want, and since it reads pretty cleanly and looks sane, let’s aim for that. It is worth noting that one more small API requirement has crept in here, though – in the send() example, we’re sending data which is on the stack. Hence, it will be necessary for the send() function itself to copy to another buffer if it wants to do the actual processing on another thread or similar… that’s a little inefficient in some cases, but to be honest the alternative (forcing the user to ensure that the data passed exists until the send() processing is complete) is so painful to actually use that it doesn’t seem practical. (For what it’s worth, in our system we implemented a two-tier solution to this – small packets get copied to a buffer and send() returns immediately, whilst large ones stall the fiber calling send() until the data has been flushed).

So, what can we do to implement this? Well, the first thing that springs out is that somewhere between the declaration and actual usage, the PlayerPosition class magically acquired a send() function. That seems like a pretty simple thing to implement, so let’s start there – we can inject the code we want if we add a macro to the class:

class PlayerPosition
{
    PACKET_CLASS(PlayerPosition);

public:
    float m_x, m_y;
};

This is a trick I’m very fond of – you can achieve an awful lot by sticking a single macro at the top of a class with the name as a parameter, and as a bonus unlike a lot of macro hackery it doesn’t actually look too bad aesthetically either. With this in place, the macro would look something like this (the trailing “private:” is there to ensure that we don’t leave the “default” scope for the class in the wrong state after being used):

#define PACKET_CLASS(classname) \
public: \
    void send() \
    { \
        g_NetworkManager->send(this, sizeof(classname)); \
    } \
private:

I’m presupposing here that the network manager has some way of sending raw data in packet form – that’s out of the scope of the current discussion so we’ll just hand-wave past it. So this looks like it should work… we’ve added a send() function to our packet class which sends the contents over the network. All is well!

…All is well, except for the fact that the other end has no way of knowing what the data we just sent actually was. The user code wants us to call the correct receive() function for the type of data that was sent, so without that we’re pretty much snookered. What’s needed is a unique identifier that both systems can use to hook up the types correctly. The problem, though, is that we don’t have anything except the class name to work with, and whilst in theory we could send that across the network as some sort of variable-length byte stream, that’s going to be pretty inefficient all-round. So we want to assign a more compact identifier to our packets.

This is where we pull out another thing in our C++ bag of tricks – static initialisers. If you have a global static object with a constructor attached, the C++ runtime will execute it before main() gets called. Now, normally putting any significant code in these is a recipe for disaster (as other subsystems will not have had a chance to initialise yet, and the ordering they are called in is essentially random), but we can use this to construct a list of packet types.

First off, since the packets themselves are neither global nor static, we need a class that is for our constructor to hang off. Something like this should do the job:

class PacketClassInfo
{
public:
    PacketClassInfo()
    {
        m_next = s_first_packet_class;
        s_first_packet_class = this;
    }

    PacketClassInfo* m_next;
    static PacketClassInfo* s_first_packet_class;
};

As you can see, all the constructor does is to add this instance of PacketClassInfo to an internal linked-list. We can then iterate through this at our leisure once the system has started up properly. To actually make one of these objects for each packet class, we need to expand our macro a little too:

#define PACKET_CLASS(classname) \
public: \
    static PacketClassInfo s_packet_class_info; \
    void send() \
    { \
        g_NetworkManager->send(this, sizeof(classname)); \
    } \
private:

…this declares our static class, but in order to actually construct it we need an instance in a CPP file somewhere, rather than in the header. That gets a bit awkward – ideally, it would be best if we could avoid having more than one declaration necessary for a packet class, but without using tricks like #including the header file twice (with different macro declarations), C++ doesn’t provide any mechanism for us to do that. So we’ll bite the bullet and add this macro:

#define IMPLEMENT_PACKET_CLASS(classname) \
    PacketClassInfo classname::s_packet_class_info;

Which is simply placed in an appropriate CPP file to add the packet class static data (we’ll be making more use of this macro for other purposes later, too).

This gives us our linked-list of packet classes, so now we can think about the ID numbers themselves. Since we’ve handily added this static PacketClassInfo class, the logical place to put them would be in there, so let’s add one:

class PacketClassInfo
{
public:
    PacketClassInfo()
    {
        m_next = s_first_packet_class;
        s_first_packet_class = this;
        m_packet_id = 0xFFFFFFFF;
    }

    u32 m_packet_id;
    PacketClassInfo* m_next;
    static PacketClassInfo* s_first_packet_class;
};

As you can see, I’ve initialised the ID to 0xFFFFFFFF – this gives us a guard value we can test against in the send() code to check that we aren’t trying to send something before the packet initialisation has happened. And next up is that initialisation itself – a simple enough task given what we have now:

void InitPackets()
{
    u32 current_id = 0;

    PacketClassInfo* current_packet_class = PacketClassInfo::s_first_packet_class;

    while(current_packet_class)
    {
        current_packet_class->m_packet_id = current_id++;
        current_packet_class = current_packet_class->m_next;
    }
}

As you can see, in this case we’re simply assigning indices to each packet type, starting from zero. This is fine if you know that you will always be communicating between two copies of the same executable, but if there is a chance they are different the indices will probably not match up (as they are dependent on static initialisation order, which can change even with a simple re-link). Better schemes for real use are to sort the packets into name order first, or even better to use a hash of the name itself as the ID. Implementation of those is left as an exercise for the reader, though. And so, with ID in hand, it becomes trivial to modify the macro so that send() passes this through to the communications system:

#define PACKET_CLASS(classname) \
public: \
    static PacketClassInfo s_packet_class_info; \
    void send() \
    { \
        g_NetworkManager->send(s_packet_class_info.m_packet_id, this, sizeof(classname)); \
    } \
private:

…And with that, we’re done with the implementation of send() – (over) half of the system is complete! So, onto the receiving side. We’ve got one basic problem to solve here – how do we take a packet ID number and turn it into a call to the appropriate receive function?

One approach to this would be to search through the linked-list of packet classes to match the ID number, but efficiency-wise that isn’t particularly fantastic, so let’s make use of the fact that our ID numbers are linear indexes and build an array to map ID back into the appropriate PacketClassInfo pointer instead (you may note that if you choose to use a hashed value for the ID instead, a different approach will be needed here). All of this can be done quite conveniently inside our InitPackets() function:

#define MAX_PACKET_TYPES 64
PacketClassInfo* g_packet_class_info[MAX_PACKET_TYPES];

void InitPackets()
{
    u32 current_id = 0;

    PacketClassInfo* current_packet_class = PacketClassInfo::s_first_packet_class;

    while(current_packet_class)
    {
        ASSERT(current_id < MAX_PACKET_TYPES);
        g_packet_class_info[current_id] = current_packet_class;
        current_packet_class->m_packet_id = current_id++;
        current_packet_class = current_packet_class->m_next;
    }
}

In this instance I’ve simply specified a hard-coded maximum number of classes – obviously a dynamic array or similar may be more appropriate in the real world (or not – there is a pretty good argument that in simple cases like this the extra thinking/coding time of “doing it right” outweighs any practical advantage from saving a few bytes and avoiding the occasional need to bump the maximum number).
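As an aside, the hashed-ID scheme suggested earlier could be sketched like this. This is my own illustration – the function name and the FNV-1a constants are not part of the original system – but it shows how a stable 32-bit ID can be derived from the class name, so that two differently-linked executables still agree on IDs:

```cpp
#include <cstdint>

// Hypothetical helper: hash a packet class name to a stable 32-bit ID
// using FNV-1a. Both ends compute the same ID from the same name,
// independent of static-initialisation order.
inline uint32_t PacketNameHash(const char* name)
{
    uint32_t hash = 2166136261u;                 // FNV-1a offset basis
    for (const char* p = name; *p; ++p)
    {
        hash ^= static_cast<unsigned char>(*p);  // mix in one byte
        hash *= 16777619u;                       // FNV-1a prime
    }
    return hash;
}
```

The PACKET_CLASS macro could then pass PacketNameHash(#classname) through to the PacketClassInfo constructor instead of relying on InitPackets() to hand out sequential indices (with a startup-time check for the unlikely event of a hash collision).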

So, the next problem on the list is this – how do we get from a PacketClassInfo pointer to an actual call into the receive function? The simplest answer is to add another member to PacketClassInfo – this time a function pointer:

typedef void PacketReceiveFunction(void* packet_data);

class PacketClassInfo
{
public:
    PacketClassInfo(PacketReceiveFunction* receive_function)
    {
        m_receive_function = receive_function;
        m_next = s_first_packet_class;
        s_first_packet_class = this;
        m_packet_id = 0xFFFFFFFF;
    }

    u32 m_packet_id;
    PacketClassInfo* m_next;
    PacketReceiveFunction* m_receive_function;
    static PacketClassInfo* s_first_packet_class;
};

This demonstrates one very useful trick when employing static helper classes – passing data as a parameter to the constructor allows you to put together all sorts of helpful information inside the macro and then save it off for later use. A great use of this (left as an exercise for the reader) is to use the preprocessor’s stringize operator (#) to store the name of the packet class as a char* for debugging purposes. So, with the new PacketClassInfo, our macros become:

#define PACKET_CLASS(classname) \
public: \
    static PacketClassInfo s_packet_class_info; \
    void send() \
    { \
        g_NetworkManager->send(s_packet_class_info.m_packet_id, this, sizeof(classname)); \
    } \
    static void ReceiveFunctionStub(void* received_data); \
private:

#define IMPLEMENT_PACKET_CLASS(classname) \
    PacketClassInfo classname::s_packet_class_info(classname::ReceiveFunctionStub); \
    void classname::ReceiveFunctionStub(void* received_data) \
    { \
    }

As you can see, the receive function is pointing to ReceiveFunctionStub(), which is currently empty. The reason for this stub function’s existence is that our actual receive() function is a member function, which cannot be (easily) called via the void* pointer we have to the packet data. So, instead, we call into the stub, and let it do the dirty work of casting to the right type and making the call for us. Adding in that dirty work gives us:

#define PACKET_CLASS(classname) \
public: \
    static PacketClassInfo s_packet_class_info; \
    void send() \
    { \
        g_NetworkManager->send(s_packet_class_info.m_packet_id, this, sizeof(classname)); \
    } \
    static void ReceiveFunctionStub(void* received_data); \
    void receive(); \
private:

#define IMPLEMENT_PACKET_CLASS(classname) \
    PacketClassInfo classname::s_packet_class_info(classname::ReceiveFunctionStub); \
    void classname::ReceiveFunctionStub(void* received_data) \
    { \
        ((classname*) received_data)->receive(); \
    }

ReceiveFunctionStub() acts as a trampoline to call the real receive() function, which we’ve also added a prototype for in the header (in the spirit of reducing the workload on the user as far as possible). Now all they have to do is implement that function in their code, as in our original desired API design. It is worth noting that whilst in this example we have made receive() a member function of the packet class, depending on the surrounding code design this may not be the easiest for the user – my personal preference is to make receive() a delegate on the packet, allowing the handler to live in another object/system, but that requires quite a lot of support code unless you already have a delegate implementation in your engine.
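The class-name-for-debugging trick mentioned above (left as an exercise) hinges on the single-# stringize operator, which turns a macro argument into a string literal. Here is a minimal hedged sketch using a made-up DebugNameInfo helper rather than the article’s PacketClassInfo:

```cpp
#include <cstring>

// Made-up stand-in for PacketClassInfo, storing just the class name.
struct DebugNameInfo
{
    explicit DebugNameInfo(const char* name) : m_name(name) {}
    const char* m_name;
};

// #classname stringizes the argument into "classname"; classname##_name_info
// pastes tokens together to build a unique global object name per class.
#define DECLARE_DEBUG_NAME(classname) \
    DebugNameInfo classname##_name_info(#classname);

DECLARE_DEBUG_NAME(PlayerPosition)
```

In the real system the string would instead be passed as an extra constructor parameter inside PACKET_CLASS, ready to be printed whenever a packet misbehaves.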

At any rate, now the final piece of the puzzle is very trivial indeed – gluing together the list of packet classes and the stub function to give a generic packet receive handler the networking system can call:

void ReceiveDispatcher(u32 packet_id, void* received_data)
{
    ASSERT(packet_id < MAX_PACKET_TYPES);
    ASSERT(g_packet_class_info[packet_id]);
    g_packet_class_info[packet_id]->m_receive_function(received_data);
}

After a couple of simple checks to make sure we actually have a packet class matching the ID requested, this function just calls into the correct stub with the data and lets it get on with it.
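To tie everything together, here is a self-contained loopback sketch of the whole system. This is my own consolidation of the snippets above, with one deliberate change: since the NetworkManager is out of scope, send() dispatches straight back through ReceiveDispatcher, which is enough to exercise registration, ID assignment and the trampoline end-to-end:

```cpp
#include <cassert>
#include <cstdint>

typedef uint32_t u32;

// Receive functions take a raw pointer to the packet data.
typedef void PacketReceiveFunction(void* packet_data);

class PacketClassInfo
{
public:
    explicit PacketClassInfo(PacketReceiveFunction* receive_function)
    {
        m_receive_function = receive_function;
        m_next = s_first_packet_class;
        s_first_packet_class = this;
        m_packet_id = 0xFFFFFFFF;   // guard value until InitPackets() runs
    }

    u32 m_packet_id;
    PacketClassInfo* m_next;
    PacketReceiveFunction* m_receive_function;
    static PacketClassInfo* s_first_packet_class;
};
PacketClassInfo* PacketClassInfo::s_first_packet_class = 0;

#define MAX_PACKET_TYPES 64
PacketClassInfo* g_packet_class_info[MAX_PACKET_TYPES];

void ReceiveDispatcher(u32 packet_id, void* received_data);

// As in the article, but send() loops straight back to the dispatcher
// instead of calling a real network layer.
#define PACKET_CLASS(classname) \
    public: \
        static PacketClassInfo s_packet_class_info; \
        void send() \
        { \
            ReceiveDispatcher(s_packet_class_info.m_packet_id, this); \
        } \
        static void ReceiveFunctionStub(void* received_data); \
        void receive(); \
    private:

#define IMPLEMENT_PACKET_CLASS(classname) \
    PacketClassInfo classname::s_packet_class_info(classname::ReceiveFunctionStub); \
    void classname::ReceiveFunctionStub(void* received_data) \
    { \
        ((classname*) received_data)->receive(); \
    }

void InitPackets()
{
    u32 current_id = 0;
    for (PacketClassInfo* p = PacketClassInfo::s_first_packet_class; p; p = p->m_next)
    {
        assert(current_id < MAX_PACKET_TYPES);
        g_packet_class_info[current_id] = p;
        p->m_packet_id = current_id++;
    }
}

void ReceiveDispatcher(u32 packet_id, void* received_data)
{
    assert(packet_id < MAX_PACKET_TYPES);
    assert(g_packet_class_info[packet_id]);
    g_packet_class_info[packet_id]->m_receive_function(received_data);
}

// The example packet class from the article:
float g_player_x = 0.0f, g_player_y = 0.0f;

class PlayerPosition
{
    PACKET_CLASS(PlayerPosition);
public:
    float m_x, m_y;
};

IMPLEMENT_PACKET_CLASS(PlayerPosition)

void PlayerPosition::receive()
{
    g_player_x = m_x;
    g_player_y = m_y;
}
```

Calling InitPackets() once at startup and then update_packet.send() drives the data all the way round to PlayerPosition::receive(), exactly as in the desired API at the top of the article.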

So putting it all together, what does this all get us? Well, the API is about as close to the simplest model as reasonably possible, with just a couple of macro invocations added to get everything going. The packet ID system is completely hidden from the user, which eliminates any possible errors with mis-casting data pointers and suchlike, and adding a new packet doesn’t take any significant effort (and can be done entirely in the code for the system using the packet – no global enums or switch statements to maintain). Common errors such as duplicate packet names will be caught at compile-time, avoiding some potentially painful debugging situations. Finally, the code is pretty efficient – both send() and receive() have a very small (and constant) overhead over whatever the network layer is already doing.

Aside from the networking itself, hopefully reading this will have sparked some ideas about how a little bit of meta-programming can really streamline the design of many APIs and introduce a lot more user-friendliness without bringing lots of arcane incantations into the code itself.

Many thanks to Yamamoto-san, my partner-in-crime on networking systems design and bottomless wellspring of obscure C++ knowledge!

Even Error-Distribution: Rules for Designing Graphics Sub-Systems (Part III)

Original Author: Wolfgang Engel

The design rule of “Even Error-Distribution” is very common in everything we do as graphics/engine programmers. Compared to the “No Look-up Tables” and “Screen-Space” rules, it is probably easier to agree on this principle in general. The idea is that whatever technique you implement, you always present the observer with a consistent “error” level. The word error here describes the difference between what we consider the real-world experience and the visual experience in a game. Obvious examples are toon and hatch shading, where we do not even try to render anything that resembles the real world, but rather something that is considered beautiful. More complex examples are the penumbras of shadow maps, ambient occlusion, or a real-time global illumination approach that has a rather low granularity.

The idea behind this design rule is that whatever you do, do it consistently, and hope that the user will adjust to the error and stop noticing it after a while. Because the error is evenly distributed throughout your whole game, it is more easily tolerated by the user.

To look at it from a different perspective: at Confetti we target most of the available gaming platforms. We can render very similar geometry and textures on different platforms – for example iOS/Android with OpenGL ES 2.0, Windows with DirectX 11, or the Xbox 360 with its Direct3D. For iOS/Android you want to pick different lighting and shadowing techniques than for the higher-end platforms. For shadows it might be stencil shadow volumes on low-end platforms and shadow maps on high-end platforms. Those two shadowing techniques have very different performance and visual characteristics. The “error” resulting from stencil shadow volumes is that the shadows are – by default – very sharp and pronounced, while shadow maps on the higher-end platforms can be softer and more like real-life shadows.

A user who watches the same game running on those platforms will adjust to the “even” error of each of those shadowing techniques, as long as they do not change on the fly. If you mixed the sharp and the soft shadows, users would complain that the shadow quality changes. If you provide only one or the other, there is a high chance that people will just get used to the shadow appearance.

Similar ideas apply to all the graphics programming techniques we use. Light mapping might be a viable option on low-end platforms and provide pixel-perfect lighting; a dynamic solution replacing those light maps might have a higher error level and not be pixel-perfect. As long as the lower-quality version always looks consistent, there is a high chance that users won’t complain. If we changed the quality level in-game, we would probably be faced with reviews saying that the quality is changing.

Following this idea, one can exclude techniques that change the error level on the fly during game play. There were certainly a lot of shadow map techniques in the past that had different quality levels based on the angle between the camera and the sun. Although in many cases they looked better than the competing techniques, users perceived the cases when their quality was lowest as a problem.

Any technique based on re-projection, where the quality of shadows, ambient occlusion or global illumination changes while the user watches a scene, would violate the “Even Error-Distribution” rule.

A game that mixes light maps holding shadow and/or light data with dynamic lights and/or regular shadow maps faces the challenge of making sure that there is no visible difference between the light and shadow quality. Quite often the light-mapped data looks better than the dynamic techniques, and the experience is inconsistent. Evenly distributing the error into the light-map data would improve the user experience, because players can adjust better to an even error distribution. The same is true for any form of megatexture approach.

A common problem of mixing light-mapped and dynamically generated light and shadow data is that in many cases dynamic objects like cars or characters do not receive the light-mapped data. Users seem to have adjusted to the difference in quality here, because it is consistent.


Stop it, Norway.

Original Author: Adam Rademacher

Anders Behring Breivik wrote about World of Warcraft and Call of Duty 4: Modern Warfare in his manifesto, before he decided to kill 77 people in Norway on 7/22.  Anders Behring Breivik is a raving lunatic who believes that violence is the best way to share his radical and esoteric viewpoints.  And because of this madman and his mention of games, at least two major retailers in Norway are pulling games off their shelves.  This is an absurd mockery of civility and a futile effort that only harms the good people of Norway.

Our industry has had more than its share of bad publicity, and I’ll be the first to admit that some of it is our fault, especially in the realm of marketing.  But then there’s an awful lot of it that isn’t our fault and that, quite honestly, we don’t deserve at all.  The recent tragedy in Norway, and the tragedies that followed it involving major retailers pulling violent video games from their shelves, is just appalling.  It’s the latest in a long history of pseudocultural knee-jerk reactions against games following violent actions and events.  Sooner or later, as a cultural institution, we need to stand up and say enough is enough.

Games are not evil.

Violence has existed since the dawn of human history.  Video games have not.  The prevalence of violent video games is rooted deeply in the origins and marketing of video games to attract young male consumers.  Pound for pound, though, we’re also not a particularly imaginative lot.  There’s a lot of creativity that goes into a video game, but it also swirls around a central point of attraction: Conflict.  It’s in conflict that our creativity breaks down; most games tend to fall back on violence as that oldest and noblest of human conflict.  The important thing to note here is that video games are derivative of humans, not the other way around.

[Image: Mario next to a turtle – “Holy crap this game is morbid”]

Humans are not shaped by their media.

I grew up playing Super Mario Bros on the NES, but I never felt it necessary to jump on a turtle.  People are not born empty shells of personality and later “filled in” by the media they consume.  Particularly notable, life-changing events (death, divorce, abuse, etc.) shape a person’s viewpoint because the world has changed forever, but after you’ve killed 100 boars in World of Warcraft, the world is still the same at the end of the day.  There is absolutely no legitimate science to substantiate the claims that violent media causes extremist or even moderate violent activity in people that weren’t already prone to such violence.  People have a long and glorified history of atrocities, long before violent books were printed or violent movies shown, or violent games played.

We deserve some respect.

So now we’re down to the core problem: as an industry, we’re not well understood, and that’s partially our fault.  There’s very little communication between our industry and the people who aren’t involved in it as gamers or developers.  We don’t have the $200 million budgets of movies, or the hundreds of millions of sales of books, but we create the most technologically advanced and immersive entertainment experiences ever conceived, played around the world.

The answer is in communication.

As I stated above, part of it is our fault for completely lacking communication with the ‘outside world’.  This responsibility doesn’t just fall on our socially acceptable prized ponies; it runs the gamut from E to AO.  Universities should be conducting research on the positive effects of games, and we should be reaching out to them for that.  The news should be reporting on major game releases the same way it does movie releases.  We should have major critics doing real critical review of the value of games as a whole.  And this is something we need to work on together.

 

[Stay tuned next time for VFX and U part 3.]


KHAAAAAAAAAAAAAAAAN (academy)

Original Author: Jaymin Kessler

It’s the best website you’ve never heard of.

 

Originally I had planned to do an update on TREMBLE, my SPU loop pipeliner and optimizer. I added some cool new features like multi-implementation odd/even pipeline balancing macros and a GUI for people to play around with.  However, since I am pretty tied up with SIGGRAPH presentation issues at the moment, I am going to take the easy way out and write a short article on something very near and dear to my heart.

 

We all know some programmer who knows nothing about math.  If past experiences are indicators, we know him extremely well, because he is us.  Most programmers I have known (myself included) just didn’t take a lot of math in school, or took some math class and three seconds after it ended instantly forgot all the meaningless formulas they had memorized.  The way it’s taught in school, math just didn’t seem that interesting or relevant to programming.  Unless you’re Bob, Alice, or some dude talking about global illumination and raymarching fractals, you can actually get quite far in the industry without knowing how to factor polynomials, convert log bases, or do algebraic simplification.  However, just because you can get by without it doesn’t mean you should strive to continue your ignorant ways, or miss out on the benefits of training yourself to think about things in alternate ways.

 

Enter Khan Academy. I don’t know whether to call it a video site, a school, a system, or a way of life.  At its core, it’s one person making thousands and thousands of videos on subjects like biology, chemistry, cosmology, physics, economics, math, and programming.  The thing that sets the videos apart from other educational videos is Khan’s focus on intuition and on understanding the why more than the how.  Anyone can give you a formula to memorize, but Khan gives you the understanding of why the formula is what it is, so that if you ever forget it you can reason out what it should be.  This gives Khan Academy videos that sense of wonder that only comes from a really cool concept suddenly snapping into place.


Furthermore, there is a really cool practice system in place that ties into the videos.  It’s a Google-Maps-style navigable tree of exercises you can make your way down, and a sidebar that suggests exercises for you to try based on videos you have watched.  If you have trouble answering a question, it provides a link to the relevant video so you can brush up without breaking your question-answering streak.  It even has a little scratchpad area you can scribble notes on off to the side.  There is even a coaching section where teachers can add students and students can add coaches.  That way teachers can track their students’ progress and adjust accordingly when in the actual classroom.


And of course, nearest and dearest to our hearts is the metagame.  You get points for watching videos and doing exercises, and there is even a trophy system (called “badges,” because trophies would be too PS3-like?) where you are given various awards for doing things like watching a certain number of videos, answering a consecutive number of questions correctly, and other fun stuff.


I’ll now close with a list of stuff I love about Khan Academy:

0) If there is ever something you’ve been curious about or wanted to learn, this is the way to go! Everything is explained in a very easy-to-understand way, and if there is some prerequisite concept you need in order to understand a video, you can go watch that video as well.

1) Sometimes people lose concentration in class. If you nodded off for a second and missed some important piece of information, it’s embarrassing to ask the teacher to go back and repeat it. With Khan Academy, you can just back up a little and rewatch, or pause while thinking about what’s being said to make sure you understand.

2) Khan is human. He makes up examples on the fly, screws things up, makes mistakes, and later on has to make correction videos. It’s really an amazing example of someone who is really smart and really good at math, and yet isn’t perfect.  It makes you feel less discouraged when you, yourself, screw up.

3) The metagame is kinda addictive. If you look at the screenshots above, I have badges for doing addition and subtraction exercises.  It’s not because I was unsure of how to add negative numbers, but rather that I was trying to catch up to @kalin_t’s score.

4) Even if there were no practice exercises, badges, or anything else, the intuition you get from the videos is priceless.  It is really empowering to understand the why instead of just memorizing the how, and that feeling you get when everything snaps into place is just indescribably cool!

 

So, that’s it. Get your ass to www.khanacademy.org and start sucking less (or use your newfound knowledge of cosmology to impress girls at parties!)

 

PS… don’t judge me for the small number of watched videos listed in my profile.  I watch them every day at lunch and I’m not signed in 90% of the time 🙂


Hack-A-Thon

Original Author: David Czarnecki

TL;DR

In a week, we’re running an internal Hack-A-Thon at the studio. It’ll be the third one we’ve run. For 24 hours, everyone in the company participates and can work on whatever project they’d like, as long as it’s somewhat (for a very relaxed definition of somewhat) related to the work that we do. It’s a great time.

HACKITY HACK HACK A ROO

Officially the Hack-A-Thon starts at 3 PM on a Thursday and finishes at 3 PM on Friday. Past Hack-A-Thon projects have included:

– Migrating the software on our continuous integration server

– iPhone application for interfacing to our platform services

– Unity video game where the studio was modeled and the game could be used as a testbed for sending game data to our platform services

– Video conferencing setup

– Internal project theme song, rap and artwork

– Open source testing framework

Before past Hack-A-Thons, I’ve gone home, hit the gym, grabbed dinner, showered and tried to nap. I’ve always been the first one to start before midnight and then it’s straight on ’till morning. Usually I can make it until around 7 AM when I have to take a 20 min nap, followed by breakfast, and then it’s back to coding until the wrap session.

Here are some tips for making sure your Hack-A-Thon is successful:

1. Be inclusive – Everyone in the organization should participate. Encourage teams to be formed well in advance of the Hack-A-Thon. Let production work with development. Let development work with HR. However people want to form teams or work solo, let it happen.

2. Be open – No secret projects. Make sure everyone’s ideas are known ahead of the Hack-A-Thon. Even if someone wants to work on their project alone, it’s more exciting for everyone when they know someone might be rewriting a core piece of your game engine or simply getting video conferencing working flawlessly in your conference room. If you can involve the local development community, do that too. You never know the talent you might be able to attract by opening your doors for a day.

3. Be free – Be very lax in what people are allowed to work on. Projects should be somehow related to the work you’re doing, but they don’t need to be production-ready at the end of the Hack-A-Thon. Let people explore.

4. Be mindful – It’s the game industry. There are milestones. There are launches. There are conferences. Make sure you plan your Hack-A-Thon on a day that doesn’t preclude half your staff from participating because of a conflict.

5. Be focused – Shut down the tab to GMail or don’t open Outlook. Encourage teams to talk in person. Try and be as focused and productive as you can be in the allotted time. Unless absolutely necessary, e.g. our data center servers are actually on fire, focus on your Hack-A-Thon project and not your regular work.

6. Be scoped – Try and choose a Hack-A-Thon project you can actually scope to the allotted time. It’s more fun when you can demonstrate a near complete project at the end of your Hack-A-Thon.

7. Be closing – Provide a wrap-up session where people get 5 minutes or so at the end of the Hack-A-Thon to demo their project to everyone. And clap at the end of each demo. Even if people don’t complete 100% of what they wanted to get done, applaud the effort.

8. Be publicizing – Talk about the Hack-A-Thon on your company’s Twitter, Facebook or blog. Hint at some awesome upcoming projects that are spawned from the Hack-A-Thon or link to open source projects.

FIN

At our upcoming Hack-A-Thon, I’m going to be working on rewriting the internals of our open source leaderboard code. More specifically, I want to change the API to be more readable and self-documenting, as well as to take advantage of transactions to get consistent snapshots of leaderboard data. After that I’m going to re-run the performance metrics to see how transactions affect the leaderboard data retrieval. Next up will be updating the public documentation. Finally I’ll release a new library of the leaderboard code. And then I’m going to do all of that for the PHP, Java, and Scala ports. That’s the plan at least.

24 hours of coding.

m/ m/

You can find more hilarity over on my Twitter account,  @CzarneckiD.

A Different Allocator Model

Original Author: Lee Winder

Quite a few years back I started developing a custom STL implementation, which has eventually been adopted and used throughout our company. One of the most important things to get right when developing any kind of generic container library is the way memory is allocated. It needs to be lightweight, highly flexible and above all easy to understand, so people are willing to experiment with it.

But STL allocators have a bad reputation, and for good reason. They are complex, hard to understand and have some interesting behaviour that seems designed to confuse. As a result, I needed to look at how we were going to provide a custom allocation system that was both easy to use and simple to understand, without restricting what people could do with it.

 

A Bit of History

A while back Bitsquid published a post entitled Custom Memory Allocation in C++. This excellent post covered how the BitSquid engine used an allocator model to perform memory allocation throughout their entire engine.

But the FTL requires a slightly different approach so I won’t be treading over already covered ground here.

FTL allocators are well hidden inside containers; the only control you have is specifying the allocator type, which means its use, and how and when objects are allocated and created, is completely fixed, with only the allocator itself being able to affect the outcome. Because of this the allocator behaviour needs to be as customisable as possible without requiring any changes to the container itself.

When the FTL was originally started, it was quite small scale and only used by a couple of developers, so allocators were not a priority. The flexibility wasn’t needed, so in-container malloc and free allowed us to concentrate on container creation, but obviously this wasn’t going to last.

The following just describes the various approaches we took, why we dismissed them and what we eventually ended up with.

 

The Initial Approach

Allocators should have minimal overhead. What we didn’t want to do was increase the size of every container dramatically when we rolled out customisable allocators. As a result, we initially used an approach taken by a couple of vendors and defined the allocator specification using only static members – removing the need for an internal allocator object.

// Completely made up code showing the use of a static based allocator
template <typename T, typename Alloc>
class vector
{
  void push_back(void)
  {
    void* new_memory = Alloc::allocate( sizeof(T) );
    T* new_object = new(new_memory) T;
  }
};

I knew this would limit the flexibility of the allocators, but it had minimal overhead (especially using default template parameters) and wouldn’t affect those containers already being used. And besides, programmers are a creative bunch and I wanted to see what people did with this before resigning it to the scrap heap.

But while programmers were able to work around the main limitation of not having allocator state per container, they were having to jump through hoops to get the behaviour they wanted. This made it less likely that other programmers would feel confident enough writing their own allocators, and their ability to really customise their behaviour was too limited.

So we ended up adding allocator state on a per container basis, making it much easier for developers to do what they wanted though it did add at least 4 bytes per container. But I felt that flexibility and simplicity were much more important.

// Completely made up code showing the use of an instanced allocator
template <typename T, typename Alloc>
class vector
{
  Alloc m_allocator;

  void push_back(void)
  {
    void* new_memory = m_allocator.allocate( sizeof(T) );
    T* new_object = new(new_memory) T;
  }
};

 

Allocator Specification

The allocator specification is complicated. While I’m sure there are good reasons for some of the decisions, I wanted ours to be much simpler. So removing the type information (since our allocators work on raw memory and nothing else), removing the rebind(!) and the exception handling (which we don’t use), we ended up with the following.

typedef ftl::pair<void*, size_t> allocated_memory;

class Alloc
{
public:
  allocated_memory allocate(const size_t alloc_size);
  void deallocate(void* ptr);
};

And for the basic allocator, that’s it. It doesn’t require anything else to work, but it does have one interesting aspect.

typedef ftl::pair<void*, size_t> allocated_memory;

As we can have all sorts of allocator types, what an allocator returns might not be exactly what you asked for. If it can’t allocate all the memory you requested, that’s fine; it simply returns allocated_memory(nullptr, 0). But it can also return more than was requested (for example, fixed size block allocators will do this). This return type simply allows the allocator to return not only the allocated memory, but also how much was allocated, which allows calling objects to take advantage of this additional memory if possible.

Most of the time this isn’t queried and only what was asked for is given, but it offers an additional level of information which might allow better memory usage in some containers.

So a container will most likely end up with something like the following when adding and removing elements.

// Creating a new object in a container
allocated_memory new_memory = m_allocator.allocate( sizeof(T) );
if (new_memory.first)
  T* new_object = new(new_memory.first) T;

// Losing the object now we're done with it
old_object->~T();
m_allocator.deallocate(old_object);

This is fine and gives us a very simple entry point for allocators. But by forcing the use of placement new and the destructor call in the container itself, we are limiting what an allocator can do. While allocators are required to return raw memory, that doesn’t mean they have to work with raw memory internally. Some allocators might pre-create the objects before returning them so creation is front-loaded, and the forced placement new could mean we’re overwriting an object that has already been created.

 

Construction Functions

As a result, we want developers to be able to override not only the memory allocation, but also the object creation. 99% of allocators won’t need to do this, so we don’t want to add additional requirements to the allocator specification; instead we can create non-member, non-friend functions, specialised on the allocator, which will do the creation for us.

template <typename TConstruct, typename TAlloc>
TConstruct* construct(TAlloc& allocator);

template <typename TConstruct, typename TAlloc>
TConstruct* construct(TAlloc& allocator, const TConstruct& obj);

template <typename TConstruct, typename TAlloc>
TConstruct* construct(void* allocated_mem);

template <typename TConstruct, typename TAlloc>
TConstruct* construct(void* allocated_mem, const TConstruct& obj);

template <typename TConstruct, typename TAlloc>
void destroy(TConstruct* ptr);

template <typename TConstruct, typename TAlloc>
void destroy(TConstruct* ptr, TAlloc& allocator);

So our point of allocation/creation now becomes something much simpler and much more powerful.

// Creating a new object in a container
T* new_object = alloc::construct<T>(m_alloc);

// Lose our existing object and return it back to the allocator
alloc::destroy(old_object, m_alloc);

The default version of construct performs the allocation and placement new within the construct function, but should the allocator need something more (or less), simply overloading the function on the allocator type allows complete control over both memory allocation and object creation. The same goes for the destroy function and the automatic call of the object’s destructor.
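As a rough illustration of that default path, here is a sketch of what the unspecialised construct and destroy might do. The malloc_allocator, the contents of the alloc namespace and the null handling are assumptions for the example, not the FTL’s actual implementation.

```cpp
#include <cassert>
#include <cstdlib>
#include <new>
#include <utility>

// A trivial allocator matching the two-function specification, with
// std::pair standing in for ftl::pair. Purely illustrative.
class malloc_allocator
{
public:
    std::pair<void*, std::size_t> allocate(const std::size_t alloc_size)
    {
        void* memory = std::malloc(alloc_size);
        return std::pair<void*, std::size_t>(memory, memory ? alloc_size : 0);
    }

    void deallocate(void* ptr)
    {
        std::free(ptr);
    }
};

namespace alloc
{
    // Default construct: grab raw memory, then placement-new the object.
    // Overloading on a specific allocator type replaces this behaviour.
    template <typename TConstruct, typename TAlloc>
    TConstruct* construct(TAlloc& allocator)
    {
        std::pair<void*, std::size_t> memory = allocator.allocate(sizeof(TConstruct));
        if (!memory.first)
            return nullptr;
        return new (memory.first) TConstruct;
    }

    // Default destroy: run the destructor, then hand the memory back.
    template <typename TConstruct, typename TAlloc>
    void destroy(TConstruct* ptr, TAlloc& allocator)
    {
        if (!ptr)
            return;
        ptr->~TConstruct();
        allocator.deallocate(ptr);
    }
}
```

An allocator that pools pre-constructed objects would overload construct and destroy for its own type, and the containers calling through these functions would never know the difference.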

 

No Base Allocator

One thing I didn’t add to the allocators was a base allocator interface. The allocators are created within the container, with their type defined at the point of the container’s declaration. The addition of a base interface (and the associated virtual functions) would have increased the size overhead of the allocator, which is something I wanted to avoid and which I thought didn’t add enough to warrant the cost. I was less worried about the overhead of calling a virtual function, as that would be insignificant compared to the overhead of everything else that was going on.

 

Conclusion

So by introducing an allocator model that is much simpler (only 2 required functions), with more extensible customisation should an allocator need it, developers have complete control over all the memory allocation and, importantly, the object creation itself. Due to its initial simplicity, developers have no problem creating new allocators that improve both memory and container use in a project, and can start to experiment with better and more efficient allocators quickly.

 



Yes…If

Original Author: Jameson Durall

I recently read a book on Walt Disney Imagineering, which focuses on how Walt Disney created the parks and how the Imagineering group came into existence, along with their practices.  The first part of the book talks about their philosophy and says that no one could tell Walt no…if you did, he would find someone who would say yes.  This morphed into the saying that you can never say “No, because”; you can only say “Yes…If”.

 

This is dear to my heart because it strikes home with an issue that Game Designers often deal with.  When working on a new Design idea, it’s not uncommon to get the response of “that’s not possible” from the other disciplines that you are working with. I have too much respect for the abilities of every person around me to let anyone tell me “No” or that it “Isn’t Possible”.  

It’s important that everyone learns to say “Yes…If” and then figure out exactly what it’s going to take to make this happen.  It may be the case that what the Game Designer is asking for will take a programmer two years working non stop to accomplish…but that’s fine.  If the feature is important enough, we can then evaluate what we need to do to make that happen.

Early in the development of HERE, we expected this to be a long shot at best but wanted to see what kind of cost we were looking at.  The response we got was that it would take X time just for the programmer to look into it deeply and see what the costs would be to us.  This was actually good info for us because we could then evaluate if we were willing to spend that cost and how it would impact development.  The response could easily have been that it would be too hard or not worth it, but this attitude and willingness to investigate the issue thoroughly allowed Design to make a decision.

 

That result wouldn’t have been possible if the initial question of co-op had been shrugged off entirely because of perceived difficulty.

Part of my job as a Game Designer is to think ahead to what kinds of gameplay we need to be accomplishing two or three years down the road and this often means we’re thinking of exciting new things.  I need everyone on the same page and most importantly with a positive attitude so we can all work toward making great games for years to come.  The ideas we come up with are not always going to be home runs, but we need the freedom to explore anything we think could evolve our gameplay into something special.

Now let me be clear, I’m not saying I expect the rest of the team to bow to Game Designers and provide their every wish.  I prefer true collaboration from all disciplines when working on new things and when there is hesitation from anyone it’s best to talk it out.  Part of the problem is often that the Game Designer isn’t exactly clear in the messaging of what they are looking for and too much is left to interpretation by the listener.  This is a tough thing especially when delving into radical new areas that the Game Designer doesn’t yet fully understand himself.

Designers need what I like to call “Freedom to Fail” during development so we can try new things without feeling like they all have to be a success…or even possible for that matter.  This not only means the Designer needs to be free to try anything, but all supporting staff need to be on board with new ideas and thinking how can we accomplish this instead of being concerned with how hard it is to achieve.

In the end, anything is possible…if you’re willing to assign the time and resources to it.  I would even argue that if you’re not scared to death about something new for your upcoming game then you might not be pushing yourself hard enough toward making something truly unique.


Path Finding For Princes

Original Author: Rod-Mathisen


Once upon a time, there was a Prince who wanted to rescue his fair Maiden from the Evil Wizard. Now our gallant Prince was raring to go a-damsel-rescuing, but alas, there were many paths to the Evil Wizard’s lair – how could he know which to take?

Well, our plucky wise royal just looked to the sky, for there he did see A*, to guide his way [ouch!].

For those not wincing I should add that A*, pronounced a-star, is the name of a path finding algorithm. Yes it is a terrible, terrible, pun. Sorry.

Path Finding For Princes

Well hello again dear Readers, I have bad news to report. Perhaps unsurprisingly, since I last wrote, I have utterly failed to do any sort of planning, scoping or other rational action to ensure I deliver my baby before my wife delivers hers. I have, however, had a lot of fun playing with path finding instead, so I thought I’d share it here.

Now, path finding is an old problem, largely considered solved, and therefore probably not that interesting to the various industry vets who read/write here. So to those people, I recommend you just look at the pictures; you’ll probably be appalled by the uneducated rubbish I’ll be writing down.

To those of you who are completely unfamiliar with path finding algorithms, come closer my gullible friends. The principle is very simple:

  1. Build a list of Places-We-Know-How-To-Get-To (let’s just call them places)
  2. Add more places to this list by picking one place we already know, and seeing if it leads to any new places.
  3. Stop when we find a place that leads to our destination.

How good or otherwise the method is largely depends on how we choose which place to consider for new leads from the ones we already know. This magical part is defined by the algorithm used, of which there are many, but one of the best is A*, hence my awful joke at the start. I highly recommend checking out Amit’s excellent A* pages for much better descriptions than I could give.

All I will say about A* is that, firstly, it is a bit clever about picking the next place to consider, using both the distance it took to get to that place and a guess of how far it is from there to the destination. Secondly, it keeps track of places it has considered, so if it happens to re-consider a place, it will forget the worst way of getting there. Together these mean it will generally find a good path, relatively quickly.
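Those two ideas – ordering the open list by distance-so-far plus a guess, and remembering the cheapest known way to each place – can be sketched in a few dozen lines. This is plain C++ with a made-up character-grid map (‘#’ for walls), not the game’s own code.

```cpp
#include <cassert>
#include <cstdlib>
#include <functional>
#include <map>
#include <queue>
#include <string>
#include <tuple>
#include <vector>

// Minimal A* over a character grid: '#' is a wall, '.' is walkable,
// movement is 4-way. Returns the length of the best path found, or -1
// if the destination is unreachable.
int astar_path_length(const std::vector<std::string>& grid,
                      int start_x, int start_y, int dest_x, int dest_y)
{
    const int width = (int)grid[0].size();
    const int height = (int)grid.size();

    // The "guess of how far it is from there to the destination".
    auto heuristic = [&](int x, int y) {
        return std::abs(x - dest_x) + std::abs(y - dest_y);
    };

    // Open list, smallest (cost so far + guess) first.
    typedef std::tuple<int, int, int, int> Entry; // (estimate, cost, x, y)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry> > open;

    // Cheapest known cost to every place considered so far; when a place
    // is re-considered, the worst way of getting there is forgotten.
    std::map<std::pair<int, int>, int> best;

    open.push(Entry(heuristic(start_x, start_y), 0, start_x, start_y));
    best[std::make_pair(start_x, start_y)] = 0;

    const int step_x[4] = { 1, -1, 0, 0 };
    const int step_y[4] = { 0, 0, 1, -1 };

    while (!open.empty())
    {
        Entry current = open.top();
        open.pop();
        const int cost = std::get<1>(current);
        const int x = std::get<2>(current);
        const int y = std::get<3>(current);

        if (x == dest_x && y == dest_y)
            return cost; // destination reached along the best route found

        if (cost > best[std::make_pair(x, y)])
            continue; // stale entry; a cheaper route here already exists

        for (int i = 0; i < 4; ++i)
        {
            const int nx = x + step_x[i];
            const int ny = y + step_y[i];
            if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                continue;
            if (grid[ny][nx] == '#')
                continue;

            const int new_cost = cost + 1;
            std::map<std::pair<int, int>, int>::iterator it =
                best.find(std::make_pair(nx, ny));
            if (it == best.end() || new_cost < it->second)
            {
                best[std::make_pair(nx, ny)] = new_cost;
                open.push(Entry(new_cost + heuristic(nx, ny), new_cost, nx, ny));
            }
        }
    }
    return -1; // no route to the Wizard's lair
}
```

The Manhattan-distance guess never overestimates on a 4-way grid, which is what lets A* stop as soon as the destination comes off the open list.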

To the rest of you, well, read on I guess.

The Scene

On stage is my baby, or at least the tender embryo of my iPhone game that is slowly gestating. It is to be a top-down, turn based strategy type affair, which I will say more about in future posts. For now, all you need to know is that it will require “things” to navigate a set of relatively random, 2d maps. Breaking that down:

1. Things navigating will not be player controlled, but will move relatively slowly and with instant turning, so no need to worry about turning circles or the like. Their size will vary, but let’s ignore that for today.

2. Navigate – things will have to figure out how to move from point A to point B, preferably along the shortest path, avoiding all walls and obstacles.

3. Relatively random – the maps and therefore the walls and obstacles are procedurally generated on demand, meaning that no manual input is allowed in making them navigable.

4. A set of maps – I’ll be using lazy loading/creation in game time so there can’t be a 1 minute pause while a map is analysed to create a navigation mesh or place waypoints.

5. Turn based – only one thing is navigating at a time, so the actual navigation part does not have to be that quick.

The Scenery

So, my baby already has a few tricks up its sleeve. One of the first things I implemented was random dungeon generation. If you look over to the right you’ll see how this works. It’s very easy, has lots of limitations, but does give you a random map.

The nice thing is that each of the walls you remove effectively becomes a link between two areas. I tend to call these “exits” because when you are in an area, the exits are your path to other areas. Furthermore, if you ensure each area has a list of the exits available, then path finding becomes very easy.

So with relatively little jiggery-pokery using this, I had something that would use the exits and areas to find the best path across pretty much any map. Hooray. At this point I did the only sensible thing: got cocky and went utterly out of my depth. If only I’d listened to Han Solo…

The Acts

In a fit of enthusiasm, I thought that whilst having an infinite random dungeon is quite cool, it would be far cooler to allow the dungeon to have exits to the outside world so that you could move between dungeons. In fact the random generation could be making oddly shaped houses rather than dungeons – et voila, a random village!

In fact adding exits to the outside was easy: just add them to an area, but leave the other side of the connection attached to nothing – the outside. The issue was how to navigate outside. You see, whilst “inside” consists of lots of areas we know the connections between, “outside” is just one giant amorphous blob. Sure, we could link up every exit that runs from an area to nothing, but what if we are trying to navigate from the outside to the outside – say from one side of a dungeon-cum-house to the other? How would it know whether to go around as opposed to through?

Part 1: Polygonisation

The classical solution to this problem is to divide up the rest of the space into polygons, then replace the “walls” between them with exits. Almost as if we had added lots of interconnected room areas that happened to cover the entire map.

I’ve mocked this up on the right. Note that for this and the following diagrams, green things are supposed to denote places we know how to get to, while red denotes places we have considered. The route chosen will therefore always be along a red path of places considered.

In this case I hand crafted the polygons in a pretty haphazard way; nonetheless they give a reasonable route with very few places being considered. The problem is, how would this polygonisation (yes, I made that word up) be done automatically?

Perhaps unsurprisingly, there are a bunch of algorithms to do this. Unsurprising because this type of navigational mesh appears to be used in lots of games, and I’m sure they don’t hand craft them. However, these algorithms appeared, to me, to be (a) quite hard, and (b) quite processor intensive. They didn’t look like the sort of operation that could be done on the fly by an iPhone. Which, now I write it, makes me think I really ought to try it – a future post perhaps…

So what else could be tried?

Part 2: The Grid

Another common solution is to forget the original area/exit based navigation and use a grid.

In this scenario a grid is overlaid over the entire space and each point of the grid (the corners of the squares) is considered a node, I could have used the centre of the squares, but I prefer this way. So, let’s call these points “grid nodes”.

Movement then is done between adjacent grid nodes, either horizontally, vertically or diagonally, but critically, only one grid square at a time. Where the line between two grid nodes crosses a wall, it is considered blocked and cannot be used.
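That blocked-link rule is basically a segment intersection check between the step and each wall. Here is a sketch of one way to do it, with a made-up Point/Segment representation (walls as plain line segments); the real game’s data structures will of course differ.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative geometry types, made up for this sketch.
struct Point { double x, y; };
struct Segment { Point a, b; };

// Signed area of the triangle (p, q, r): positive when r lies to the
// left of the directed line from p to q.
static double cross(const Point& p, const Point& q, const Point& r)
{
    return (q.x - p.x) * (r.y - p.y) - (q.y - p.y) * (r.x - p.x);
}

// True if the two segments properly cross (collinear touches are
// ignored in this sketch, which is enough to demonstrate the idea).
static bool segments_cross(const Segment& s, const Segment& t)
{
    const double d1 = cross(s.a, s.b, t.a);
    const double d2 = cross(s.a, s.b, t.b);
    const double d3 = cross(t.a, t.b, s.a);
    const double d4 = cross(t.a, t.b, s.b);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}

// A step between two neighbouring grid nodes is blocked if the line
// between them crosses any wall.
bool link_blocked(const Point& from, const Point& to,
                  const std::vector<Segment>& walls)
{
    const Segment step = { from, to };
    for (std::size_t i = 0; i < walls.size(); ++i)
        if (segments_cross(step, walls[i]))
            return true;
    return false;
}
```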

The result is an algorithm that works a lot like the animated Wikipedia image above. It doesn’t mind about inside or outside, but it needs to check a lot more places and suffers a bit with rooms that are not aligned with the grid.

Comparing the number of places considered inside areas to those checked by the mesh method, you can see why meshes are so popular; they cover more map with a lot less checking.

That said, it does generate effective routes. No, the real issue with this method is the number of nodes that must firstly be generated and secondly kept in memory. Again, I didn’t fully implement this method; it just felt wrong to lose the efficient indoors navigation. A logical reason to progress if ever there was one!

So what next? Well, of course, it’s time for…

The Finale – A Monstrous Hybrid

At last, the point of this post: what did I actually do? Well in truth, I pretty much just mashed the grid and areas together into an awkward hybrid monstrosity. It worked as follows:

  • Whilst we are considering points inside an area, we will use the area’s exits as our places-to-consider.
  • Whilst outside, we will generate a grid on the fly, calculate which grid nodes are accessible and use these as our places-to-consider.

Of course nothing is ever easy; in this case the difficulty is the transitions between inside and outside. The problem being that without further assistance a grid node will never go inside, whilst an exit will never go outside.

The solution is to build special cases for both situations:

When we are outside and considering a grid node, if one of the neighbouring grid nodes we are trying to reach is inside an area, we do an extra couple of tests. First we find out if that area has any exits to the outside – i.e. exits that connect the area to nothing. If it does, we then test to see if our current grid node can get directly to that exit. If it can, the exit is included in our open list of places we could visit, thereby allowing us to go inside.

Conversely, when we are inside, and are considering an area which has an exit to the outside – connected to nothing – we will identify all the neighbouring grid nodes to that exit. All the grid nodes that are not blocked will be added to the places we could visit, thereby allowing us to go outside. I’ve illustrated this below.

The following animation was supposed to show how this would work in full; unfortunately I was a bit scuppered by the animated-gif-uploader limiting the number of frames to 10, so I had to skip a lot of stages. Hopefully it still shows the principle. Some points to note:

  • The crosses denote grid points that could become grid nodes as required, when they are called into play they become coloured:
    • The green crosses are places we can get to in one step (our open list in A* terminology)
    • The red crosses are places we have already considered (our closed list in A*)
  • The spokes are to show the neighbours of a grid point being considered:
    • The green spokes indicate a grid point we could navigate to
    • The red spokes indicate a grid point we can’t navigate to
  • The final image shows the route after a little smoothing… clearly very useful in this circumstance and surprisingly easily done

Happily Ever After

So there you have it, the results of all my amateurish strivings – I have reinvented the wheel as a square, but at least our Prince can get to the Evil Wizard’s Lair; I feel strangely proud. Time will tell if this method survives play-testing, but even if it doesn’t I have learned a lot implementing it. As ever, the beauty of amateur iPhone development.

But enough of this frivolity! Now, I really am going to knuckle down and get a plan together for this game I’ve pledged to deliver before 2011 is out.

Honestly.

The Importance of Vision

Original Author: Rob-Galanakis

Every ambitious creative endeavor has at its helm a single individual who is responsible for providing the vision for its development. In games, we have Art Directors in charge of the aesthetic, Technical Directors in charge of the technology decisions, and Creative Directors in charge of the overall game. Their chief responsibility is to guide the creation of a project that achieves their vision. The most successful directors are able to articulate a clear vision to the team, get buy-in on its merits, and motivate the team to execute with excellence. A project without a director’s vision is uninspired and unsuccessful.

It is no surprise, then, that even though we talk about tools and pipeline as its own niche- and even acknowledging it as its own niche is a big step- we have such uninspired and unsuccessful tools and pipeline at so many places in the industry. We seem to have a mild deficiency of vision in our small community of tools programmers and tech artists, and an absolute famine of vision and representation at the director level.

This situation is unfortunate and understandable, but underlies all tools problems at any studio. Fixing it is the vital component in fixing the broken tools cultures many people report. Without anyone articulating a vision, without anyone to be a seed and bastion of culture and ideas, we are doomed to not just repeat the tools mistakes of yesterday, but to be hopelessly blind towards their causes and solutions.

Where does this lack of vision come from? What can we do to resolve it?


The lack of vision stems from the team structures most studios have. Who is responsible for tools as a whole, tools as a concept, at your studio? Usually, no one and everyone. We have Tech Art Directors with clever teams that often lack the programming skills or relationships to build large, studio-wide toolsets. We have Lead Tools Programmers that are too far removed from, or have never experienced, actual content development. We have Lead Artists that design tools and processes for their team that do not take into account other teams or pipelines and are uninspired technically.

There is no one who understands how every content creator works, who also has the technical understanding and abilities to design sophisticated technologies and ideas. No one who understands how content and data flow from concept art and pen and paper into our art and design tools, into the game and onto the release disk.

Without this person, what sort of tools and pipelines would you expect? If there were no Art Director, someone with final say and responsibility for a cohesive art style across the entire game, how different would characters and environments look in a single game? If there were no Creative Director who had final say over design, how many incohesive features would our games have? If there were no Technical Director to organize the programming team, how many ways would our programming teams come up with to solve the same problems?

So how come with Tools and Pipeline we don’t think the same way? There is no Tools Director, so we end up with disparate tools and workflows that fail to leverage each other or provide a cohesive experience. The norm for tools is the kind of situation we find in studios with weak leadership at the Director level: a mess.  We need a person who understands how everyone at the studio works, who will take ownership of it and provide a vision for improving it.


No longer can this vital role be left to a hodgepodge of other people. Your Art/Technical/Creative Directors, your Lead Programmers/Artists/Designers, can no longer be the people expected to provide the vision for a studio’s Tools and Pipeline.

The person who fills this role needs to be someone with enough experience creating art that they can embed with Artists. Someone who can program well enough to have the title of Programmer. Someone flexible enough that they can deal with the needs of Designers. Someone charismatic enough that they can fight and win the battle against the inevitable skepticism, fear, and opposition a change like this would bring.

These people are few and far between, and every one of them I know is happily employed. We’re asking for a unique set of passions and skills, a set that isn’t common, especially in the games industry (who gets into games to write tools?!). We need to start training our tools developers (tech artists, tools programmers) to aspire to have these passions and skills.

This won’t happen magically. Unless our studios can promise that these aspirations will be fulfilled, few people will bother, and I cannot blame them. Many studios have made the commitment to having killer tools. Almost as many have failed. And almost as many as that have failed to recognize the lack of a cohesive vision as a primary factor.

It isn’t surprising that resources get moved away from tools development, that schedules cannot be stuck to, that these positions cannot attract senior developers. Without a cohesive tools vision, how are resources supposed to be properly allocated? Resources become a fragile compromise between competing departments, rather than being brokered by a separate party without allegiances. How is a schedule supposed to be followed, when the people doing the work are not the ones who feel the repercussions? And it is no surprise that it is difficult to attract senior talent with the strong programming skills necessary to develop great tools. If there is no career path (and, let’s face it, most studios have no career path for tools developers), they’re going to go into game programming, or the general software industry (which is, for the most part, some form of tools development in a different environment).


Not every studio has these problems (I know because I’ve argued with you about this). And I dare say that studios that don’t have these problems are simply lucky. I suspect that such studios are in a fragile situation, and taking away a key player or two would destroy the precarious dynamic that keeps these problems at bay. If you are at a studio without these problems, ask yourself this: is your setup one that you can describe, export, advocate for, reproduce? How would you do it, without saying “just hire better people”? It is this “coincidence as a solution” that propagates the problems at less lucky studios.

Let’s create real solutions.

We need to create roles and departments that can provide studios with a cohesive tools vision. We need to fill these director-level roles with uniquely qualified individuals who are experienced in art and design, and are excellent programmers. We need to mature our views on tools as an industry, and start looking for concrete solutions for our endemic tools issues rather than relying on chance.

We’re not going to find these people or do these things overnight. We need to, first, decide on this path as our goal. Not just you, but your studio’s management, and there’s no helpful formula I can give to convince them. Just nonstop advocacy, education, and reflection.

Then, start discussing what the application of these ideas would mean at your studio. And who is going to fill these key roles? There are likely people already at your studio who just need a little bit of training. Put your tech artists on your programming teams for a bit, or put your programmers to work on game design or art. See how quickly you’ll find someone with the unique set of skills for a Tools Director position.

We need people who understand how people work and content flows across a project.  We need people who are able to guide its formulation/improvement/reconsideration.  This is vision.  And the lack of vision in tools development is a deadly disease we must remedy if we are to improve the state of our tools across the industry.


Performance anxiety

Original Author: Kyle-Kulyk

My team is currently entering the home stretch, our final month, before completion of our first title, Itzy (try the demo, plug plug plug)!

It’s at this point that I was suddenly struck by near crippling performance anxiety.  What if the game flat-out doesn’t work?  Then what?

Now, I need to step back and explain.  I am proud of my team and proud of the work that we’ve done; however, mobile games are something new to us.  We each have our personal areas of expertise.  I have the pleasure of working with some amazing programmers, but in the game space we remain untested.  There are certainly some different factors at play here.  Despite the decade of programming experience we currently possess, moving a 3D spider around, eating glowing, animated alien fireflies and creating web meshes on the fly for our character to traverse is a different task from linking to a database, searching through files and updating information based on specific search criteria while producing user-based reports via the internet.

Put simply, we don’t know what we’re doing, but like a rhino in a minefield we’re charging forward.

However as I sat at my workstation the other day I was suddenly distracted by the Nattering Elf of Doubt, or Neod as I’ve come to name him.  Neod hopped up on my shoulder as I was trying to work and exclaimed “Sure and begorra…”

Neod sounds like a leprechaun.  I’m not entirely sure what that’s about.

“Sure and begorra!  Do ya not see those verts, lad?  Sure it runs fine on your PC – but how can you be sure tis not all arseways when you put it on ya ‘droid?  You daft?”

As much as I don’t want to admit it, that little chattering imp, sounding strangely like a cross between Bono on helium and the regular voice in my head going on about “Videogames?  They’re gonna cut off your power!  Go back to selling mutual funds!”, had a point.  We’re competent programmers and designers, but we really have no idea at this stage if our game will actually run half decent on the iPhone/Android platforms.  I’ve read so many conflicting comments about vert budgets, size constraints and texture limits that I really don’t know where we stand in the performance department.

At this stage in development our touch screen controls have not been implemented, so even if I did create a build for the iPhone/Android platforms – my spider character, Itzy, would just sit there – staring at me lazily, waiting for input that will never come and asking “Forget about something there, champ?”

We’re attempting to remain conservative and mindful of the target platforms, but for all our talents we really don’t know how it’ll end up.  So I’m left with the option of forging ahead with the rest of the team and hoping that everything will work itself out in the end, and that we won’t blow past our deadline so quickly I’m left staring out the car window saying “Was that the deadline we just passed?  I can’t tell.  Shouldn’t there have been a sign?  I didn’t see the sign…”

I guess that’s just the fun ride between “no idea” and “ok, we sorta get it now”.  It would just be a more pleasant journey without these damn elves.