So they made you a lead; now what? (Part 2)

Original Author: Oliver Franzke

The first part of this article took a closer look at why people with outstanding art, design or programming skills sometimes struggle or even fail as team leads. Part one also identified the core values of leadership: trust, direction and support.

The goal of this part is to provide newly minted leads with practical advice on how to get started in their new role, and it also describes different ways to develop the necessary leadership skills.

Learning leadership skills

Now that we have a better understanding of what leadership is (and isn’t), it’s time to look at different ways of developing leadership skills. Despite the claims of some books and websites, there is no easy five-step program that will make you the best team lead in 30 days. As with most soft skills, it is important to identify what works for you and then improve your strategies over time. Thankfully there are different ways to find your (unique) leadership style.

The best way to develop your skills is by learning them directly from a mentor you respect for his or her leadership abilities. This person doesn’t necessarily have to be your supervisor, but ideally it should be someone in the studio where you work. Leadership depends on the organizational structure of a company, and it is therefore much harder for someone from the outside to offer practical advice.

Make sure to meet on a regular basis (at least once a month) in order to discuss your progress. A great mentor will be able to suggest different strategies to experiment with and can help you figure out what does and doesn’t work. These meetings also give you the opportunity to learn from his or her career by asking questions like these:

  • How would you approach this situation?
  • What is leadership?
  • Which leader do you look up to and why?
  • How did you learn your leadership skills?
  • What challenges did you face and how did you overcome them?

But even if you aren’t fortunate enough to have access to a mentor you can (and should) still learn from other game developers by observing how they interact with people and how they approach and overcome challenges. The trick is to identify and assimilate effective leadership strategies from colleagues in your company or from developers in other studios.

While mentoring is certainly the most effective way to develop your leadership skills you can also learn a lot by reading books, articles and blog posts about the topic. It’s difficult to find good material that is tailored to the games industry, but thankfully most of the general advice also applies in the context of games. The following two books helped me to learn more about leadership:

  • “Team Leadership in the Games Industry” by Seth Spaulding takes a closer look at the typical responsibilities of a team lead. The book also covers topics like the different organizational structures of game studios and how to deal with difficult situations.
  • “How to Lead” by Jo Owen explores what leadership is and why it’s hard to come up with a simple definition. Even though the book is aimed at leads in the business world it contains a lot of practical tips that apply to the games industry as well.

Talks and round-table discussions are another great way to learn from experienced leaders. If you are fortunate enough to visit GDC (or other conferences) keep your eyes open for sessions about leadership. It’s a great way to connect with fellow game developers and has the advantage that you can get advice on how to overcome some of the challenges you might be facing at the moment.

But even if you can’t make it to conferences there are quite a few recorded presentations available online. I highly recommend the following two talks:

  • “Concrete Practices to be a Better Leader” by Brian Sharp is a fantastic presentation about various ways to improve your leadership skills. This talk is very inspirational and contains lots of helpful techniques that can be used right away.
  • “You’re Responsible” by Mike Acton is essentially a gigantic round-table discussion about the responsibilities of a team lead. As usual Mike does a great job offering practical advice along the way.

Lastly, there are a lot of talks about leadership outside of the games industry available on the internet (just search for ‘leadership’ on YouTube). Personally I find some of these presentations quite interesting, since they help me to develop a broader understanding of leadership by offering different ways to look at the role. For example, the TED playlist “How leaders inspire” discusses leadership styles in the context of the business world, the military, college sports and even symphonic orchestras. In typical TED fashion the talks don’t contain a lot of practical advice, but they are interesting nonetheless.

Leadership starter kit

So you’ve just been promoted (or hired) and the title of your new role now contains the word ‘lead’. First of all, congratulations and well done! This is an exciting step in your career, but it’s important to realize that your day-to-day job will be quite different from what it used to be and that you’ll have to learn a lot of new skills.

I would like to help you get started in your new role by offering some specific and practical advice that I found useful during this transitional period. My hope is that this ‘starter kit’ will get you going while you investigate additional ways to develop your leadership skills (see the section above). The remainder of this section will therefore cover the following topics:

  • One-on-one meetings
  • Delegation
  • Responsibility
  • Mike Acton’s quick start guide

As a lead your main responsibility is to support your team, so that they can achieve the current set of goals. For that it’s crucial that you get to know the members of your team quite well, which means you should have answers to questions like these:

  • What is she good at?
  • What is he struggling with?
  • Where does she want to be in a year?
  • Is he invested in the project or would he prefer to work on something else?
  • Are there people in the company she doesn’t want to work with?
  • Does he feel properly informed about what is going on with the project / company?

You might not get sincere replies to these questions unless people are comfortable enough with you to trust you with honest answers. Sincere feedback is absolutely critical for the success of your team, though, especially in difficult times. I would therefore argue that developing mutual trust between you and your team should be your main priority.

Building trust takes a lot of time and effort and an essential part of this process is to have a private chat with each member of your team on a regular basis (at least once a month). These one-on-one meetings can take place in a meeting room or even a nearby coffee shop. The important thing is that both of you feel comfortable having an open and honest conversation, so make sure to pick the location accordingly.

These meetings don’t necessarily have to be long. If there is nothing to talk about then you might be done after 10 minutes. At other times it may take an hour (or more) to discuss a difficult situation. Make sure to avoid possible distractions (e.g. mobile phone) during these meetings, so you can give the other person your full attention.

One-on-one meetings raise morale, because the team will realize that they can rely on you to keep them in the loop and to represent their concerns and interests. Personally I find that these conversations help me to do my job better, since I’m much more likely to hear about a (potential) problem when the team feels comfortable telling me about it.

At this point you might be concerned that these meetings take time away from your ‘actual job’, but that’s not true, because they are your job now. Whether you like it or not, you’ll probably spend more time in meetings and less time contributing directly to the current project. Depending on the size of your company it’s safe to assume that leadership and management will take up between 20% and 50% of your time. This means that you won’t be able to take on the same amount of production tasks as before, so you’ll have to learn how to delegate work. I know from personal experience that this can be a tough lesson to learn in the beginning.

In addition to balancing your own workload, delegation is also about helping your team to develop new skills and improve existing ones. Just because you can complete a task more efficiently than anyone else on your team doesn’t necessarily mean that you are the best choice for that particular task. Try to take the professional interests of the individual members of your team into account as much as possible when assigning tasks, because people will be more motivated to work on something they are passionate about.

Beyond these practical considerations it is important to note that delegation also has an impact on the mutual trust between you and your team. By routinely taking on ‘tough’ tasks yourself you indicate that you don’t trust your teammates to do a good job, which will ruin morale very quickly. Keep in mind that your colleagues are trained professionals just like yourself, so treat them that way!

Seeing your entire team work together and produce great results is very empowering, and it is your job to make that happen, even if nobody tells you so explicitly. In an ideal world it would be obvious what your company expects from you, but in reality that will probably not be the case. It is important to understand that while you have more influence over the direction of the project, your team and even the company, you also have more responsibilities now.

First and foremost you are responsible for the success (or failure) of your team and any problem preventing success should be fixed right away. This could be as simple as making sure that your team has the necessary hardware and software, but it could also involve negotiations with another department in order to resolve a conflict of interest.

One responsibility that is often overlooked by new leads is the professional development of the team. It is your job to make sure that the people on your team get opportunities to improve their skill sets. In order to do that you’ll first have to identify the short- and long-term career goals of each team member. In addition to delegating work with the right amount of challenge (as described above) it is also important to provide general career mentorship.

A video game is a complicated piece of software and making one isn’t easy. Mistakes happen and your team might cause a problem that affects another department or even the production schedule. This can be a difficult situation especially when other people are upset and emotions run high. I know it’s easier said than done, but don’t let the stress get the best of you. Rather than identifying and blaming a team member for the mistake you should accept the responsibility and figure out a way to fix the problem. You can still analyze what happened after the dust has settled, so that this issue can be prevented in the future.

It is very unfortunate that a lot of newly minted team leads have to identify additional responsibilities themselves. Thankfully some companies are the exception to the rule. At Insomniac Games, for example, new leads have access to a ‘quick start guide’ that helps them to get adjusted to their new role. This helpful document is publicly available and was written by Mike Acton, who has been doing an exceptional job educating the games industry about leadership. I highly recommend that you read the guide.

Leadership is hard (but not impossible)

Truth be told, becoming a great team lead isn’t easy. In fact it might be one of the toughest challenges you’ll face in your career. The good news is that you are obviously interested in leadership (why else would you have read all this?) and want to learn more about how to become a good lead. In other words, you are doing great so far!

I hope you found this article helpful and that it’ll make your transition into your new role a bit easier.

Good luck and thank you for reading!

PS: Whether you just got promoted or have been leading a team for a long time, I would love to hear from you, so please feel free to leave a comment.

PPS: I would like to thank everybody who helped me with this article. You guys rock!

Custom Vector Allocation

Original Author: Thomas Young

(First posted to, number 6 in a series of posts about Vectors and Vector based containers.)

A few posts back I talked about the idea of ‘rolling your own’ STL-style vector class, based on my experiences with this at PathEngine.

In that original post and these two follow-ups I talked about the general approach and also some specific performance tweaks that actually helped in practice for our vector use cases.

I haven’t talked about custom memory allocation yet, however. This is something that’s been cited in a number of places as a key reason for switching away from std::vector so I’ll come back now and look at the approach we took for this (which is pretty simple, but nonstandard, and also pre C++11), and assess some of the implications of using this kind of non-standard approach.

I approach this from the point of view of a custom vector implementation, but I’ll be talking about some issues with memory customisation that also apply more generally.

Why custom allocation?

In many situations it’s fine for vectors (and other containers) to just use the same default memory allocation method as the rest of your code, and this is definitely the simplest approach.

(The example vector code I posted previously used malloc() and free(), but works equally well with global operator new and delete.)

But vectors can do a lot of memory allocation, and memory allocation can be expensive, and it’s not uncommon for memory allocation operations to turn up in profiling as the most significant cost of vector based code. Custom memory allocation approaches can help resolve this.

And some other good reasons for hooking into and customising allocations can be the need to avoid memory fragmentation or to track memory statistics.

For these reasons generalised memory customisation is an important customer requirement for our SDK code in general, and then by extension for the vector containers used by this code.

Custom allocation in std::vector

The STL provides a mechanism for hooking into the container allocation calls (such as vector buffer allocations) through allocators, with vector constructors accepting an allocator argument for this purpose.

I won’t attempt a general introduction to STL allocators, but there’s a load of material about this on the web. See, for example, this article on Dr Dobbs, which includes some example use cases for allocators. (Bear in mind that this is pre C++11, however. I didn’t see any similarly targeted overview posts for using allocators post C++11.)

A non-standard approach

We actually added the possibility to customise memory allocation in our vectors some time after switching to a custom vector implementation. (This was around mid-2012. Before that PathEngine’s memory customisation hooks worked by overriding global new and delete, and required dll linkage if you wanted to manage PathEngine memory allocations separately from allocations in the main game code.)

We’ve generally tried to keep our custom vector as similar as possible to std::vector, in order to avoid issues with unexpected behaviour (since a lot of people know how std::vector works), and to ensure that code can be easily switched between std::vector and our custom vector. When it came to memory allocation, however, we chose a significantly different (and definitely non-standard) approach, because in practice a lot of vector code doesn’t actually use allocators (or else just sets allocators in a constructor), because we already had a custom vector class in place, and because I just don’t like STL allocators!

Other game developers

A lot of other game developers have a similar opinion of STL allocators, and for many this is actually then also a key factor in a decision to switch to custom container classes.

For example, issues with the design of STL allocators are quoted as one of the main reasons for the creation of the EASTL, a set of STL replacement classes, by Electronic Arts. From the EASTL paper:

Among game developers the most fundamental weakness is the std allocator design, and it is this weakness that was the largest contributing factor to the creation of EASTL.

And I’ve heard similar things from other developers. For example, in this blog post about the Bitsquid approach to allocators Niklas Frykholm says:

If it weren’t for the allocator interface I could almost use STL. Almost.

Let’s have a look at some of the reasons for this distaste!

Problems with STL allocators

We’ll look at the situation prior to C++11, first of all, and the historical basis for switching to an alternative mechanism.

A lot of problems with STL allocators come out of confusion in the initial design. According to Alexander Stepanov (primary designer and implementer of the STL) the custom allocator mechanism was invented to deal with a specific issue with Intel memory architecture. (Do you remember near and far pointers? If not, consider yourself lucky I guess!) From this interview with Alexander:

Question: How did allocators come into STL? What do you think of them?

Answer: I invented allocators to deal with Intel’s memory architecture. They are not such a bad ideas in theory – having a layer that encapsulates all memory stuff: pointers, references, ptrdiff_t, size_t. Unfortunately they cannot work in practice.

And it seems like this original design intention was also only partially executed. From the wikipedia entry for allocators:

They were originally intended as a means to make the library more flexible and independent of the underlying memory model, allowing programmers to utilize custom pointer and reference types with the library. However, in the process of adopting STL into the C++ standard, the C++ standardization committee realized that a complete abstraction of the memory model would incur unacceptable performance penalties. To remedy this, the requirements of allocators were made more restrictive. As a result, the level of customization provided by allocators is more limited than was originally envisioned by Stepanov.

and, further down:

While Stepanov had originally intended allocators to completely encapsulate the memory model, the standards committee realized that this approach would lead to unacceptable efficiency degradations. To remedy this, additional wording was added to the allocator requirements. In particular, container implementations may assume that the allocator’s type definitions for pointers and related integral types are equivalent to those provided by the default allocator, and that all instances of a given allocator type always compare equal, effectively contradicting the original design goals for allocators and limiting the usefulness of allocators that carry state.

Some of the key problems with STL allocators (historically) are then:

  • Unnecessary complexity, with some boilerplate required for features that are not actually used
  • A limitation that allocators cannot have internal state (‘all instances of a given allocator type are required to be interchangeable and always compare equal to each other’)
  • The fact that the allocator type is included in the container type (with changes to the allocator type changing the type of the container)

There are some changes to this situation with C++11, as we’ll see below, but this certainly helps explain why a lot of people have chosen to avoid the STL allocator mechanism, historically!

Virtual allocator interface

So we decided to avoid STL allocators, and use a non-standard approach.

The approach we use is based on a virtual allocator interface, and avoids the need to specify allocator type as a template parameter.

This is quite similar to the setup for allocators in the BitSquid engine, as described by Niklas here (as linked above, it’s probably worth reading that post if you didn’t see this already, as I’ll try to avoid repeating the various points he discussed there).

A basic allocator interface can then be defined as follows:

class iAllocator
{
public:
    virtual ~iAllocator() {}
    virtual void* allocate(tUnsigned32 size) = 0;
    virtual void deallocate(void* ptr) = 0;

// helper
    template <class T> void
    allocate_Array(tUnsigned32 arraySize, T*& result)
    {
        result = static_cast<T*>(allocate(sizeof(T) * arraySize));
    }
};
The allocate_Array() method is for convenience; concrete allocator objects just need to implement allocate() and deallocate().

We can store a pointer to iAllocator in our vector, and replace the direct calls to malloc() and free() with virtual function calls, as follows:

    T*
    allocate(size_type size)
    {
        T* allocated;
        _allocator->allocate_Array(size, allocated);
        return allocated;
    }
    void
    reallocate(size_type newCapacity)
    {
        T* newData;
        _allocator->allocate_Array(newCapacity, newData);
        copyRange(_data, _data + _size, newData);
        deleteRange(_data, _data + _size);
        _allocator->deallocate(_data);
        _data = newData;
        _capacity = newCapacity;
    }

These virtual function calls potentially add some overhead to allocation and deallocation. It’s worth being quite careful about this kind of virtual function call overhead, but in practice it seems that the overhead is not significant here. Virtual function call overhead is often all about cache misses and, perhaps because there are often just a small number of allocator instances active, with allocations tending to be grouped by allocator, this just isn’t such an issue here.

We use a simple raw pointer for the allocator reference. Maybe a smart pointer type could be used (for better modern C++ style and to increase safety), but we usually want to control allocator lifetime quite explicitly, so we’re basically just careful about this.

Allocators can be passed in to each vector constructor, or if omitted will default to a ‘global allocator’ (which adds a bit of extra linkage to our vector header):

    cVector(size_type size, const T& fillWith,
        iAllocator& allocator = GlobalAllocator())
    {
        _data = 0;
        _allocator = &allocator;
        _size = size;
        _capacity = size;
        if(size)
        {
            _allocator->allocate_Array(_capacity, _data);
            constructRange(_data, _data + size, fillWith);
        }
    }

Here’s an example concrete allocator implementation:

class cMallocAllocator : public iAllocator
{
public:
    void* allocate(tUnsigned32 size)
    {
        assert(size);
        return malloc(static_cast<size_t>(size));
    }
    void deallocate(void* ptr)
    {
        free(ptr);
    }
};
(Note that you normally can call malloc() with zero size, but this is something that we disallow for PathEngine allocators.)

And this can be passed in to vector construction as follows:

    cMallocAllocator allocator;
    cVector<int> v(10, 0, allocator);

Swapping vectors

That’s pretty much it, but there’s one tricky case to look out for.

Specifically, what should happen in our vector swap() method? Let’s take a small diversion to see why there might be a problem.

Consider some code that takes a non-const reference to vector, and ‘swaps a vector out’ as a way of returning a set of values in the vector without the need to heap allocate the vector object itself:

class cVectorBuilder
{
    cVector<int> _v;
public:
    //.... construction and other building methods
    void takeResult(cVector<int>& result); // swaps _v into result
};

So this code doesn’t care about allocators, and just wants to work with a vector of a given type. And maybe there is some other code that uses this, as follows:

void BuildData(/*some input params*/, cVector<int>& result)
{
  //.... construct a cVectorBuilder and call a bunch of build methods
}

Now there’s no indication that there’s going to be a swap() involved, but the result vector will end up using the global allocator, and this can potentially cause some surprises in the calling code:

   cVector<int> v(someSpecialAllocator);
   BuildData(/*input params*/, v);
   // lost our allocator assignment!
   // v now uses the global allocator

Nobody’s really doing anything wrong here (although this isn’t really the modern C++ way to do things). This is really a fundamental problem arising from the possibility to swap vectors with different allocators, and there are other situations where this can come up.

You can find some discussion about the possibilities for implementing vector swap with ‘unequal allocators’ here. We basically choose option 1, which is to simply declare it illegal to call swap with vectors with different allocators. So we just add an assert in our vector swap method that the two allocator pointers are equal.
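To make that concrete, here’s a minimal self-contained sketch of the swap-with-assert approach. The `iAllocator`/`cVector` definitions below are stripped-down stand-ins for the earlier snippets (using plain `unsigned` instead of `tUnsigned32`), not PathEngine’s actual code:

```cpp
#include <cassert>
#include <cstdlib>
#include <utility>

// Stripped-down stand-ins for the earlier snippets, for illustration only.
class iAllocator
{
public:
    virtual ~iAllocator() {}
    virtual void* allocate(unsigned size) = 0;
    virtual void deallocate(void* ptr) = 0;
};

class cMallocAllocator : public iAllocator
{
public:
    void* allocate(unsigned size) { return std::malloc(size); }
    void deallocate(void* ptr) { std::free(ptr); }
};

template <class T>
class cVector
{
    iAllocator* _allocator;
    T* _data;
    unsigned _size;
public:
    cVector(unsigned size, iAllocator& allocator) :
        _allocator(&allocator),
        _data(static_cast<T*>(allocator.allocate(size * sizeof(T)))),
        _size(size)
    {
    }
    ~cVector() { _allocator->deallocate(_data); }
    unsigned size() const { return _size; }

    void swap(cVector& rhs)
    {
        // option 1: calling swap with unequal allocators is simply illegal
        assert(_allocator == rhs._allocator);
        std::swap(_data, rhs._data);
        std::swap(_size, rhs._size);
    }
};
```

Two vectors constructed against the same allocator instance swap freely; mixing allocator instances trips the assert in a debug build, which is exactly the early warning we want.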

In our case this works out fine: this situation doesn’t come up much in practice, the cases where it does are caught directly by the assertion, and it’s generally straightforward to modify the relevant code paths to resolve the issue.

Comparison with std::vector: is this necessary or better?

Ok, so I’ve outlined the approach we take for custom allocation in our vector class.

This all works out quite nicely for us. It’s straightforward to implement and to use, and consistent with the custom allocators we use more generally in PathEngine. And we already had our custom vector in place when we came to implement this, so this wasn’t part of the decision about whether or not to switch to a custom vector implementation. But it’s interesting, nevertheless, to compare this approach with the standard allocator mechanism provided by std::vector.

My original ‘roll-your-own vector’ blog post was quite controversial. There were a lot of responses strongly against the idea of implementing a custom vector, but a lot of other responses (often from the game development industry side) saying something like ‘yes, we do that, but we do some detail differently’, and I know that this kind of customisation is not uncommon in the industry.

These two different viewpoints make it worthwhile, I think, to explore this question in a bit more detail.

I already discussed the potential pitfalls of switching to a custom vector implementation in the original ‘roll-your-own vector’ blog post, so let’s look at the potential benefits of switching to a custom allocator mechanism.

Broadly speaking, this comes down to three key points:

  • Interface complexity
  • Stateful allocator support
  • Possibilities for further customisation and memory optimisation

Interface complexity

If we look at an example allocator implementation for each setup we can see that there’s a significant difference in the amount of code required. The following code is taken from my previous post, and was used to fill allocated memory with non zero values, to check for zero initialisation:

// STL allocator version
template <class T>
class cNonZeroedAllocator
{
public:
    typedef T value_type;
    typedef value_type* pointer;
    typedef const value_type* const_pointer;
    typedef value_type& reference;
    typedef const value_type& const_reference;
    typedef std::size_t size_type;
    typedef std::ptrdiff_t difference_type;

    template <class tTarget>
    struct rebind
    {
        typedef cNonZeroedAllocator<tTarget> other;
    };

    cNonZeroedAllocator() {}
    ~cNonZeroedAllocator() {}
    template <class T2>
    cNonZeroedAllocator(cNonZeroedAllocator<T2> const&) {}

    pointer address(reference ref)
    {
        return &ref;
    }
    const_pointer address(const_reference ref)
    {
        return &ref;
    }

    pointer allocate(size_type count, const void* = 0)
    {
        size_type byteSize = count * sizeof(T);
        void* result = malloc(byteSize);
        signed char* asCharPtr;
        asCharPtr = reinterpret_cast<signed char*>(result);
        for(size_type i = 0; i != byteSize; ++i)
            asCharPtr[i] = -1;
        return reinterpret_cast<pointer>(result);
    }
    void deallocate(pointer ptr, size_type)
    {
        free(ptr);
    }

    size_type max_size() const
    {
        return 0xffffffffUL / sizeof(T);
    }
    void construct(pointer ptr, const T& t)
    {
        new(ptr) T(t);
    }
    void destroy(pointer ptr)
    {
        ptr->~T();
    }

    template <class T2> bool
    operator==(cNonZeroedAllocator<T2> const&) const
    {
        return true;
    }
    template <class T2> bool
    operator!=(cNonZeroedAllocator<T2> const&) const
    {
        return false;
    }
};
But with our custom allocator interface this can now be implemented as follows:

// custom allocator version
class cNonZeroedAllocator : public iAllocator
{
public:
    void* allocate(tUnsigned32 size)
    {
        void* result = malloc(static_cast<size_t>(size));
        signed char* asCharPtr;
        asCharPtr = reinterpret_cast<signed char*>(result);
        for(tUnsigned32 i = 0; i != size; ++i)
            asCharPtr[i] = -1;
        return result;
    }
    void deallocate(void* ptr)
    {
        free(ptr);
    }
};
As we saw previously, a lot of the code in the STL allocator version relates to obsolete design decisions and is unlikely to actually be used in practice. The custom allocator interface also completely abstracts out the concept of constructed object type, working only in terms of actual memory sizes and pointers, which seems more natural whilst doing everything we need for the allocator use cases in PathEngine.

For me this is one advantage of the custom allocation setup, then, although probably not something that would by itself justify switching to a custom vector.

If you use allocators that depend on customisation of the other parts of the STL allocator interface (other than for data alignment) please let me know in the comments thread. I’m quite interested to hear about this! (There’s some discussion about data alignment customisation below.)

Stateful allocator requirement

Stateful allocator support is a specific customer requirement for PathEngine.

Clients need to be able to set custom allocation hooks and have all allocations made by the SDK (including vector buffer allocations) routed to custom client-side allocation code. Furthermore, multiple allocation hooks can be supplied, with the actual allocation strategy selected depending on the actual local execution context.

It’s not feasible to supply allocation context to all of our vector based code as a template parameter, and so we need our vector objects to support stateful allocators.

Stateful allocators with the virtual allocator interface

Stateful allocators are straightforward with our custom allocator setup. Vectors can be assigned different concrete allocator implementations and these concrete allocator implementations can include internal state, without code that works on the vectors needing to know anything about these details.
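As an illustration, here’s a sketch of a concrete allocator with internal state, in this case a simple live-allocation counter (the class name and counting logic are my own, and the interface uses plain `unsigned` as a stand-in for `tUnsigned32`):

```cpp
#include <cassert>
#include <cstdlib>

// The virtual allocator interface from earlier in the article
// (with unsigned standing in for tUnsigned32).
class iAllocator
{
public:
    virtual ~iAllocator() {}
    virtual void* allocate(unsigned size) = 0;
    virtual void deallocate(void* ptr) = 0;
};

// A concrete allocator with internal state: it counts live allocations.
// (Illustrative only; a real tracking allocator would also record sizes.)
class cCountingAllocator : public iAllocator
{
    int _liveAllocations;
public:
    cCountingAllocator() : _liveAllocations(0) {}
    void* allocate(unsigned size)
    {
        ++_liveAllocations;
        return std::malloc(size);
    }
    void deallocate(void* ptr)
    {
        if(ptr)
            --_liveAllocations;
        std::free(ptr);
    }
    int liveAllocations() const { return _liveAllocations; }
};
```

Vectors pointing at different cCountingAllocator instances then behave independently at runtime, with per-instance statistics, and without any change to the vector type.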

Stateful allocators with the STL

As discussed earlier, internal allocator state is something that was specifically forbidden by the original STL allocator specification. This is something that has been revisited in C++11, however, and stateful allocators are now explicitly supported, but it also looks like it’s possible to use stateful allocators in practice with many pre-C++11 compile environments.
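For reference, a minimal C++11 stateful allocator that works with std::vector can be sketched as follows (the `TaggedAllocator` name and the tag mechanism are purely illustrative, not any real API):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// A minimal C++11 stateful allocator: each instance carries a 'tag'
// identifying its backing heap. Purely illustrative.
template <class T>
class TaggedAllocator
{
public:
    using value_type = T;

    explicit TaggedAllocator(int tag) : _tag(tag) {}
    template <class U>
    TaggedAllocator(const TaggedAllocator<U>& rhs) : _tag(rhs.tag()) {}

    T* allocate(std::size_t n)
    {
        return static_cast<T*>(std::malloc(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t)
    {
        std::free(p);
    }
    int tag() const { return _tag; }

private:
    int _tag;
};

// C++11 uses equality to decide whether memory allocated through one
// instance can be freed through another; equal tags mean yes here.
template <class T, class U>
bool operator==(const TaggedAllocator<T>& a, const TaggedAllocator<U>& b)
{
    return a.tag() == b.tag();
}
template <class T, class U>
bool operator!=(const TaggedAllocator<T>& a, const TaggedAllocator<U>& b)
{
    return !(a == b);
}
```

Compared with the pre-C++11 boilerplate shown later in this post, the minimal allocator requirements are much lighter: value_type, allocate(), deallocate(), a converting constructor and the equality operators.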

The reasons for disallowing stateful allocators relate to two specific problem situations:

  • Splicing nodes between linked lists with different allocation strategies
  • Swapping vectors with different allocation strategies

C++11 addresses these issues with allocator traits, which specify what to do with allocators in problem cases, with stateful allocators then explicitly supported. This stackoverflow answer discusses what happens, specifically, with C++11, in the vector swap case.

With PathEngine we want to be able to support clients with different compilation environments, and it’s an advantage not to require C++11 support. But according to this stackoverflow answer, you can also actually get away with using stateful allocators in most cases, without explicit C++11 support, as long as you avoid these problem cases.

Since we already prohibit the vector problem case (swap with unequal allocators), that means that we probably can actually implement our stateful allocator requirement with std::vector and STL allocators in practice, without requiring C++11 support.

There’s just one proviso, with or without C++11 support, due to allowances for legacy compiler behaviour in allocator traits. Specifically, it doesn’t look like we can get the same assertion behaviour in vector swap. If propagate_on_container_swap::value is set to false for either allocator then the result is ‘undefined behaviour’, so this could just swap the allocators silently, and we’d have to be quite careful about these kinds of problem cases!
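
To make this concrete, here's a sketch of a minimal stateful STL allocator, written in the reduced C++11 form (a pre-C++11 environment additionally wants the full set of typedefs and a rebind member). The cHeap and cHeapAllocator names are illustrative, not from any real codebase:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <vector>

// Illustrative heap object; the allocator below carries a pointer to one
// of these as its internal state.
struct cHeap
{
    std::size_t totalAllocated;
    cHeap() : totalAllocated(0) {}
};

// Minimal stateful allocator in the C++11 style.
template <class T>
struct cHeapAllocator
{
    typedef T value_type;
    cHeap* _heap;

    explicit cHeapAllocator(cHeap* heap) : _heap(heap) {}
    template <class U>
    cHeapAllocator(const cHeapAllocator<U>& rhs) : _heap(rhs._heap) {}

    T* allocate(std::size_t n)
    {
        _heap->totalAllocated += n * sizeof(T);
        return static_cast<T*>(malloc(n * sizeof(T)));
    }
    void deallocate(T* ptr, std::size_t)
    {
        free(ptr);
    }
};
// equality compares the state: allocators are interchangeable only if
// they reference the same heap
template <class T, class U>
bool operator==(const cHeapAllocator<T>& a, const cHeapAllocator<U>& b)
{ return a._heap == b._heap; }
template <class T, class U>
bool operator!=(const cHeapAllocator<T>& a, const cHeapAllocator<U>& b)
{ return !(a == b); }
```

Two vectors constructed with allocators referencing different cHeap objects are exactly the 'unequal allocators' problem case, so swap between them is what has to be avoided.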

Building on stateful allocators to address other issues

If you can use stateful allocators with the STL then this changes things a bit. A lot of things become possible just by adding suitable internal state to standard STL allocator implementations. But you can also now use this allocator internal state as a kind of bootstrap to work around other issues with STL allocators.

The trick is to wrap up the same kind of virtual allocator interface setup we use in PathEngine in an STL allocator wrapper class. You could do this (for example) by putting a pointer to our iAllocator interface inside an STL allocator class (as internal state), and then forwarding the actual allocation and deallocation calls as virtual function calls through this pointer.
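
A sketch of that wrapper idea might look something like this. The iAllocator shape shown is an assumption for the purposes of the example, and the wrapper is written in the minimal C++11 allocator form for brevity:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <vector>

// Assumed minimal shape of the iAllocator interface.
class iAllocator
{
public:
    virtual ~iAllocator() {}
    virtual void* allocate(std::size_t size) = 0;
    virtual void deallocate(void* ptr) = 0;
};

// STL allocator wrapper: the only state is a pointer to an iAllocator,
// and the actual allocation calls are forwarded as virtual calls.
template <class T>
struct cForwardingAllocator
{
    typedef T value_type;
    iAllocator* _target;

    explicit cForwardingAllocator(iAllocator* target) : _target(target) {}
    template <class U>
    cForwardingAllocator(const cForwardingAllocator<U>& rhs)
        : _target(rhs._target) {}

    T* allocate(std::size_t n)
    {
        return static_cast<T*>(_target->allocate(n * sizeof(T)));
    }
    void deallocate(T* ptr, std::size_t)
    {
        _target->deallocate(ptr);
    }
};
template <class T, class U>
bool operator==(const cForwardingAllocator<T>& a, const cForwardingAllocator<U>& b)
{ return a._target == b._target; }
template <class T, class U>
bool operator!=(const cForwardingAllocator<T>& a, const cForwardingAllocator<U>& b)
{ return !(a == b); }

// One concrete implementation behind the interface.
class cMallocBacked : public iAllocator
{
public:
    void* allocate(std::size_t size) { return malloc(size); }
    void deallocate(void* ptr) { free(ptr); }
};
```

Concrete allocator implementations then only implement the two iAllocator methods, and std::vector can be used unchanged with any of them.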

So, at the cost of another layer of complexity (which can be mostly hidden from the main application code), it should now be possible to:

  • remove unnecessary boilerplate from concrete allocator implementations (since these now just implement iAllocator), and
  • use different concrete allocator types without changing the actual vector type.

Although I’m still not keen on STL allocators, and prefer the direct simplicity of our custom allocator setup to covering up the mess of the STL allocator interface in this way, I have to admit that this effectively cancels out two of the key benefits of our custom allocator setup. Let’s move on to the third point, then!

Refer to the Bloomberg allocator model for one example of this kind of setup in practice (and see also this presentation about Bloomberg allocators in the context of C++11 allocator changes).

Memory optimisation

The other potential benefit of custom allocation over STL allocators is basically the possibility to mess around with the allocation interface.

With STL allocators we’re restricted to using the allocate() and deallocate() methods exactly as defined in the original allocator specification. But with our custom allocator we’re basically free to mess with these method definitions (in consultation with our clients!), or to add additional methods, and generally change the interface to better suit our clients’ needs.

There is some discussion of this issue in this proposal for improving STL allocators, which talks about ways in which the memory allocation interface provided by STL allocators can be sub-optimal.

Some customisations implemented in the Bitsquid allocators are:

  • an ‘align’ parameter for the allocation method, and
  • a query for the size of allocated blocks

PathEngine allocators don’t include either of these customisations, although this is stuff that we can add quite easily if required by our clients. Our allocator does include the following extra methods:

    virtual void*
    expand(
            void* oldPtr,
            tUnsigned32 oldSize,
            tUnsigned32 oldSize_Used,
            tUnsigned32 newSize
            ) = 0;

// helper
    template <class T> void
    expand_Array(
            T*& ptr,
            tUnsigned32 oldArraySize,
            tUnsigned32 oldArraySize_Used,
            tUnsigned32 newArraySize
            )
    {
        ptr = static_cast<T*>(expand(
            ptr,
            sizeof(T) * oldArraySize,
            sizeof(T) * oldArraySize_Used,
            sizeof(T) * newArraySize
            ));
    }

What this does, essentially, is provide a way for concrete allocator classes to use the realloc() system call, or similar memory allocation functionality in a custom heap, if this is desired.

As before, the expand_Array() method is there for convenience, and concrete classes only need to implement the expand() method. This takes a pointer to an existing memory block, and can either add space to the end of this existing block (if possible), or allocate a larger block somewhere else and move existing data to that new location (based on the oldSize_Used parameter).

Implementing expand()

A couple of example implementations for expand() are as follows:

// in cMallocAllocator, using realloc()
    void*
    expand(
        void* oldPtr,
        tUnsigned32 oldSize,
        tUnsigned32 oldSize_Used,
        tUnsigned32 newSize
        )
    {
        assert(oldSize_Used <= oldSize);
        assert(newSize > oldSize);
        return realloc(oldPtr, static_cast<size_t>(newSize));
    }

// as allocate and move
    void*
    expand(
        void* oldPtr,
        tUnsigned32 oldSize,
        tUnsigned32 oldSize_Used,
        tUnsigned32 newSize
        )
    {
        assert(oldSize_Used <= oldSize);
        assert(newSize > oldSize);
        void* newPtr = allocate(newSize);
        memcpy(newPtr, oldPtr, static_cast<size_t>(oldSize_Used));
        deallocate(oldPtr);
        return newPtr;
    }

So this can either call through directly to something like realloc(), or emulate realloc() with a sequence of allocation, memory copy and deallocation operations.

Benchmarking with realloc()

With this expand() method included in our allocator it’s pretty straightforward to update our custom vector to use realloc(), and it’s easy to see how this can potentially optimise memory use, but does this actually make a difference in practice?
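
For reference, here's a heavily simplified sketch of what the growth path looks like when a vector uses realloc() style expansion. This is illustrative only, not PathEngine's actual cVector, and the tUnsigned32 typedef is an assumption for the sketch:

```cpp
#include <cassert>
#include <cstdlib>

typedef unsigned long tUnsigned32; // assumption for this sketch

// Minimal int buffer whose growth path goes through realloc() instead
// of a separate allocate + copy + deallocate sequence.
class cGrowableBuffer
{
    int* _data;
    tUnsigned32 _capacity; // in elements
    tUnsigned32 _size;     // in elements
public:
    cGrowableBuffer() : _data(0), _capacity(0), _size(0) {}
    ~cGrowableBuffer() { free(_data); }
    void push_back(int value)
    {
        if(_size == _capacity)
        {
            tUnsigned32 newCapacity = _capacity ? _capacity * 2 : 8;
            // realloc() plays the role of expand(): it can grow the
            // block in place when possible, avoiding a memory copy
            _data = static_cast<int*>(
                realloc(_data, newCapacity * sizeof(int)));
            _capacity = newCapacity;
        }
        _data[_size++] = value;
    }
    tUnsigned32 size() const { return _size; }
    int operator[](tUnsigned32 i) const { return _data[i]; }
};
```

The interesting question is then how often the heap actually manages to grow the block in place, which is exactly what the benchmarks below poke at.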

I tried some benchmarking and it turns out that this depends very much on the actual memory heap implementation in use.

I tested this first of all with the following simple benchmark:

template <class tVector> static void
PushBackBenchmark(tVector& target)
{
    const int pattern[] = {0,1,2,3,4,5,6,7};
    const int patternLength = sizeof(pattern) / sizeof(*pattern);
    const int iterations = 10000000;
    tSigned32 patternI = 0;
    for(tSigned32 i = 0; i != iterations; ++i)
    {
        target.push_back(pattern[patternI]);
        ++patternI;
        if(patternI == patternLength)
        {
            patternI = 0;
        }
    }
}

(Wrapped up in some code for timing over a bunch of iterations, with result checking to avoid the push_back being optimised out.)

This is obviously very far from a real usage situation, but the results were quite interesting:

OS        container type             time
Linux     std::vector                0.0579 seconds
Linux     cVector without realloc    0.0280 seconds
Linux     cVector with realloc       0.0236 seconds
Windows   std::vector                0.0583 seconds
Windows   cVector without realloc    0.0367 seconds
Windows   cVector with realloc       0.0367 seconds

So the first thing that stands out from these results is that using realloc() doesn’t make any significant difference on Windows. I double checked this, and while expand() is definitely avoiding memory copies a significant proportion of the time, this is either not significant in the timings, or the memory copy savings are being outweighed by some extra cost in the realloc() call. Maybe realloc() is implemented badly on Windows, or maybe the Windows memory heap is optimised for more common allocation scenarios at the expense of realloc(), I don’t know. A quick Google search shows that other people have seen similar issues.

Apart from that it looks like realloc() can make a significant performance difference, on some platforms (or depending on the memory heap being used). I did some extra testing, and it looks like we’re getting diminishing returns after some of the other performance tweaks we made in our custom vector, specifically the tweaks to increase capacity after the first push_back, and the capacity multiplier tweak. With these tweaks backed out:

OS        container type                        time
Linux     cVector without realloc, no tweaks    0.0532 seconds
Linux     cVector with realloc, no tweaks       0.0235 seconds

So, for this specific benchmark, using realloc() is very significant, and even avoids the need for those other performance tweaks.

Slightly more involved benchmark

The benchmark above is really basic, however, and certainly isn’t a good general benchmark for vector memory use. In fact, with realloc(), there is only actually ever one single allocation made, which is then naturally free to expand through the available memory space!

A similar benchmark is discussed in this stackoverflow question, and in that case the benefits seemed to reduce significantly with more than one vector in use. I hacked the benchmark a bit to see what this does for us:

template <class tVector> static void
PushBackBenchmark_TwoVectors(tVector& target1, tVector& target2)
{
    const int pattern[] = {0,1,2,3,4,5,6,7};
    const int patternLength = sizeof(pattern) / sizeof(*pattern);
    const int iterations = 10000000;
    tSigned32 patternI = 0;
    for(tSigned32 i = 0; i != iterations; ++i)
    {
        target1.push_back(pattern[patternI]);
        target2.push_back(pattern[patternI]);
        ++patternI;
        if(patternI == patternLength)
        {
            patternI = 0;
        }
    }
}

template <class tVector> static void
PushBackBenchmark_ThreeVectors(tVector& target1, tVector& target2, tVector& target3)
{
    const int pattern[] = {0,1,2,3,4,5,6,7};
    const int patternLength = sizeof(pattern) / sizeof(*pattern);
    const int iterations = 10000000;
    tSigned32 patternI = 0;
    for(tSigned32 i = 0; i != iterations; ++i)
    {
        target1.push_back(pattern[patternI]);
        target2.push_back(pattern[patternI]);
        target3.push_back(pattern[patternI]);
        ++patternI;
        if(patternI == patternLength)
        {
            patternI = 0;
        }
    }
}

With PushBackBenchmark_TwoVectors():

OS        container type             time
Linux     std::vector                0.0860 seconds
Linux     cVector without realloc    0.0721 seconds
Linux     cVector with realloc       0.0495 seconds

With PushBackBenchmark_ThreeVectors():

OS        container type             time
Linux     std::vector                0.1291 seconds
Linux     cVector without realloc    0.0856 seconds
Linux     cVector with realloc       0.0618 seconds

That’s kind of unexpected.

If we think about what’s going to happen with the vector buffer allocations in this benchmark, on the assumption of sequential allocations into a simple contiguous memory region, it seems like the separate vector allocations in the modified benchmark versions should actually prevent each other from expanding. And I expected that to reduce the benefits of using realloc. But the speedup is actually a lot more significant for these benchmark versions.

I stepped through the benchmark and the vector buffer allocations are being placed sequentially in a single contiguous memory region, and do initially prevent each other from expanding, but after a while the ‘hole’ at the start of the memory region gets large enough to be reused, and then reallocation becomes possible, and somehow turns out to be an even more significant benefit. Maybe these benchmark versions pushed the memory use into a new segment and incurred some kind of segment setup costs?

With virtual memory, the multiple layers of memory allocation in modern operating systems, and the different approaches to heap implementation, this all works out as quite a complicated issue. But it does seem fairly clear, at least, that using realloc() can potentially make a significant difference to vector performance, in at least some cases!

Realloc() in PathEngine

Those are all still very arbitrary benchmarks, though, and it’s interesting to see how much difference this actually makes for some real use cases. So I had a look at what difference the realloc() support makes for the vector use in PathEngine.

I tried our standard set of SDK benchmarks (with common queries in some ‘normal’ situations), both with and without realloc() support, and compared the timings for these two cases. It turns out that for this set of benchmarks, using realloc() doesn’t make a significant difference to the benchmark timings. There are some slight improvements in some timings, but nothing very noticeable.

The queries in these benchmarks have already had quite a lot of attention for performance optimisation, of course, and there are a bunch of other performance optimisations already in the SDK that are designed to avoid the need for vector capacity increases in these situations (reuse of vectors for runtime queries, for example). Nevertheless, if we’re asking whether custom allocation with realloc() is ‘necessary or better’ in the specific case of PathEngine vector use (and these specific benchmarks), the answer appears to be that no, this doesn’t really make any concrete difference!

Memory customisation and STL allocators

As I’ve said above, this kind of customisation of the allocator interface (to add stuff like realloc() support) is something that we can’t do with the standard allocator setup (even with C++11).

For completeness it’s worth noting the approach suggested by Alexandrescu in this article where he shows how you can effectively shoehorn stuff like realloc() calls into STL allocators.

But this still depends on using some custom container code to detect special allocator types, and won’t work with std::vector.


This has ended up a lot longer than I originally intended so I’ll go ahead and wrap up here!

To conclude:

  • It’s not so hard to implement your own allocator setup, and integrate this with a custom vector (I hope this post gives you a good idea about what can be involved in this)
  • There are ways to do similar things with the STL, however, and overall this wouldn’t really work out as a strong argument for switching to a custom vector in our case
  • A custom allocator setup will let you do some funky things with memory allocation, if your memory heap will dance the dance, but it’s not always clear that this will translate into actual concrete performance benefits

A couple of things I haven’t talked about:

Memory fragmentation: custom memory interfaces can also be important for avoiding memory fragmentation, and this can be an important issue. We don’t have a system in place for actually measuring memory fragmentation, though, and I’d be interested to hear how other people in the industry actually quantify or benchmark this.

Memory relocation: the concept of ‘relocatable allocators’ is quite interesting, I think, although this has more significant implications for higher level vector based code, and requires moving further away from standard vector usage. This is something I’ll maybe talk about in more depth later on..


So they made you a lead; now what? (Part 1)

Original Author: Oliver Franzke

What to do after you get promoted into a leadership position should be a trivial question to answer, but in my experience the opposite is true. In fact sometimes it seems to me that leadership is some kind of taboo topic in the games industry. Making games is supposed to be creative and fun and people would rather not talk about a ‘boring’ topic like leadership, but everyone who has had a bad supervisor at some point will agree that lack of leadership skills can be incredibly harmful to team morale and therefore to the game development process. That’s why, when I was first promoted into a leadership position, I set myself the goal to be just like the awesome supervisors I had in the past. But what made these people a great boss? I had no idea, but I assumed I would figure it out myself along the way. Looking back at it now I have to admit I was quite naïve.

After learning more about the theory and practice of leadership I realized that I was unprepared for this role and I’m not the only one with this experience. Before I started writing this article I talked to several leads (or ex-leads) and none of them had ever received any kind of leadership training. Some people were lucky enough to have a mentor, but even that doesn’t seem to be the standard. To me the most troubling fact is that none of the leads were ever told what was expected of them in their new role.

Given how important this role is you would think that game studios would invest some time and money to train their leads, but that doesn’t seem to be the case. The optimistic interpretation is that the companies trust their employees enough to quickly pick up the required skills themselves. The pessimistic interpretation on the other hand is that management simply doesn’t care or know any better. The real reason is probably located somewhere in between these extremes, but it doesn’t change the fact that most new leaders are simply thrown in at the deep end.

For example when I was first promoted into a leadership role I really had no clue what I was doing or what I was supposed to do. I was a good programmer and a responsible team player (which is why I was promoted I guess) and I figured I should simply continue coding until some kind of big revelation would turn me into an awesome team lead. Obviously I never had this magical epiphany and after a while I realized I should probably start investigating leadership in a more methodical way.

My goal for this two-part article is to share some of the lessons I learned myself while adjusting to my role as a lead programmer. If you were recently promoted into a leadership position hopefully you’ll find some of the content in this post helpful. If you had different experiences or have additional advice you’d like to share, then please leave a comment or contact me directly.

I want to emphasize the fact that leadership isn’t magic nor do you have to be born for it. Leadership is simply a set of skills that can be learned and in my experience it’s worth the time investment!

What is leadership anyway?

At the heart of a leadership position are people skills which make this role different from a regular production job. Being a great programmer, designer or artist doesn’t necessarily mean you are also an awesome team lead. In fact your production skills are merely the foundation on which you’ll have to build your leadership role.

But what exactly are these necessary people skills and what makes an effective team lead? Depending on who you talk to you’ll get different answers, but I think that the core values of leadership are about developing trust, setting directions and supporting the team in order to make the best possible product (e.g. game, tool, engine) with the given resources and constraints.

In order to be an effective lead you’ll first have to earn your colleagues’ trust. If your team feels like they can’t come to you with questions, problems or suggestions, then you (and the company) have a big problem. Gaining the trust of your team doesn’t happen automatically and requires a lot of effort. You can find some practical advice on how to work on this in the ‘leadership starter kit’ in part 2 of this article.

Similarly if your supervisor (e.g. project lead) doesn’t trust you, then he or she will probably manage around you which is a bad situation for everyone involved. In my experience transparency is crucial when managing up especially when things don’t go as planned. Let your supervisor know if there is a problem and take responsibility by working on a solution.

Making games is complicated and it would be unrealistic to assume that there won’t be problems along the way. Dealing with difficult situations is much easier if everyone on your team is on the same page about what has to get done. Setting a clear direction for your team is therefore a crucial part of your role.

A great mission statement is concise so that it’s easy to remember and explain. For an environment art team this could be “We want to create a photorealistic setting for our game” whereas a tools lead might come up with “Every change to the level should be visible right away”. Of course it is important that your team’s direction is aligned with the vision of the project, because creating a photorealistic environment for a game with a painterly art style doesn’t make sense.

In addition to defining a clear direction for your team one of your main responsibilities as a lead is to provide support for your team, so that they can be successful. This might seem very obvious, but the shift from being accountable only for your own work to being responsible for the success of a group of people can be a hard lesson to learn in the beginning.

Almost all leads I talked to mentioned that they were surprised by how little time they had for their ‘actual job’ after being promoted. It is essential to realize that the support of your team is your actual job now, which means that you’ll have to balance your workload differently. Some practical advice for this specific issue can be found in the second part of this article in the ‘leadership starter kit’.

Support can be provided in many different ways: Discussing the advantages and disadvantages of a proposed solution to a problem is one example. Being a mentor and helping the individual team members with their career progression is another form of support. A third example is to make sure that the team has everything it needs (e.g. dev-kits, access to documentation, tools …) to achieve the goals.

As a lead you might also have to support your team by letting someone know that his or her work doesn’t meet your expectations. A conversation like this isn’t easy, but it is important to let the person know that there is a problem and to offer advice and assistance to resolve the situation.

What leadership isn’t

In order to avoid misconceptions and common mistakes it can be quite useful to define what leadership (in the games industry) is not. This topic is somewhat shrouded in mystery and there are many incorrect or outdated assumptions.

For example I thought for the longest time that leadership and management are the same thing. This is not the case though and when I talked to other leads about what they dislike about their role I found that most aspects mentioned were in fact related to management rather than to leadership. Of course it would be unrealistic to assume that you will be able to avoid management tasks altogether, but getting help from a producer can reduce the amount of administrative work significantly.

Another misconception, often popularized by movies, is that you have to demonstrate your power as a leader by barking out orders all day. This might work well in the army, but making video games requires collaboration and creativity, and an authoritarian leadership style has no place in this environment. An inspired team is a productive team, and autonomy is crucial for high morale.

Equally as bad is to ignore the team by using a hands-off leadership approach. This mistake is quite common since most team leads started their career with a production job. It can be tough for a new lead to accept the changed responsibilities, but in my opinion this is one of the most important lessons to learn. Rather than contributing to the production directly your primary responsibility is to support your team. Having time for design, art or programming in addition to that is great, but the team should always come first.

As a lead you are responsible for your team, which means that you’ll also have to deal with complications and it’s inevitable that things will go wrong during the production of a game. Your team might introduce a new crash bug or maybe you run into an unexpected problem that causes the milestone to slip. Whatever the issue may be you are responsible for what your team does and playing the blame game is the worst thing you can do, because it’ll ruin trust and team morale. Instead of shifting your responsibility to a team member you should concentrate on figuring out how to solve the problem.

End of Part 1

The second part of this article focuses on practical advice for newly minted team leads and it also discusses effective strategies to develop leadership skills, so please check it out once it is online (very soon).

I hope you enjoyed this post and thank you for reading.

PS.: Whether you just got promoted or have been leading a team for a long time I would love to hear from you, so please feel free to leave a comment.

PPS: I would like to thank everybody who helped me with this article. You guys rock!

Data flow in a property centric game engine

Original Author: Fredrik Alströmer


I stopped taking on contracts about a year ago to focus on building my own indie game. I decided to build my own engine, and I know, I know, indies shouldn’t build their own engine, but let’s ignore reason for now and instead focus on something else. I wanted to build a property centric engine, using composition for pretty much everything rather than inheritance.

So what does it mean when you say you have a property centric model? In contrast to using inheritance, a game object might no longer be a renderable, physics-simulated, camera-trackable, weapon-carrying, and enemy-discoverable object. Instead it has the corresponding properties.

So what? Potato, pot-ah-to, right?

Well, not really. When a player is playing the game, they see distinct objects, perhaps a couple of monsters, wielding broadswords, strolling down Sunrise Lane or, who knows, maybe a heavyset man in a suit and a hard-hat blocking off the George Washington Bridge. Basically, what the player sees is this:

Fruit Salad

However, the advantage of having a property centric engine, is that instead of dealing with it this way, we have the option of organizing our things somewhat differently. What if we instead dealt with our entities like this?

Sorted Fruit Salad

That is, we have monsters, we have broadswords, and we have Sunrise Lane. (As a side note, these images make me think of scatter/gather I/O, am I alone here?) Of course, the analogy is greatly simplified, and — as I hinted at above — we’d have a great number of different properties of which only a few are directly related to actual visible objects and their shape. I like this layout and the way it allows us to deal with all instances of a property at once.

The bridge

So how do we create the bridge between the engine’s sorted and ordered view of the world, and the player’s view? An object is bound to have dependent properties, for example, a rendering geometry property which depends on the physical simulation property, so we need to somehow combine them to give the illusion of being independent objects, rather than independent properties.

Data exchange

My research into a property centric design was driven by a fascination for data oriented design, and specifically the “where there’s one there are many” mantra. I got obsessed with arrays of raw data, of lean structures. I wanted this to be the core of my engine.

Each property is handled by a separate manager, which takes care of both memory allocation and ‘ticking’ or updating the properties each frame, and it’s free to move data around and sort it as it sees fit. The data itself is dumb, straight arrays of floats or perhaps structured in groups of vectors or quaternions, or similar smaller groups of data which is generally accessed together.
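
As a small illustration of this manager setup, here's a hypothetical property manager that updates every instance of a 'linear motion' property in one pass over plain float arrays. The names and layout are my own for this sketch, not the engine's actual code:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Property manager in the structure-of-arrays style: dumb parallel
// float arrays, one manager that owns the memory and ticks every
// instance of the property in a single pass.
class cLinearMotionManager
{
    std::vector<float> _position; // 3 floats per instance (x, y, z)
    std::vector<float> _velocity; // 3 floats per instance
public:
    int addInstance(float x, float y, float z,
                    float vx, float vy, float vz)
    {
        int index = static_cast<int>(_position.size() / 3);
        _position.push_back(x); _position.push_back(y); _position.push_back(z);
        _velocity.push_back(vx); _velocity.push_back(vy); _velocity.push_back(vz);
        return index;
    }
    // tick every instance at once -- no per-object virtual update call
    void tick(float dt)
    {
        for(std::size_t i = 0; i != _position.size(); ++i)
            _position[i] += _velocity[i] * dt;
    }
    float positionX(int index) const { return _position[index * 3]; }
};
```

Because the manager owns the arrays, it's free to reorder or compact them between ticks, which is exactly why raw pointers into the data become a problem (see the handle discussion below).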

Raw data in this structure-of-arrays layout can’t really do inheritance (in the traditional OOP sense) even if I wanted to, so the property centric model became the natural choice.

Read and write data

With arrays of raw data, the easiest solution for fetching information from, or passing information to, a different component is to simply read or write the data directly using pointers. This also has the advantage that there can be no immediate side-effects, as we’re not calling any functions, and we’re certainly not calling any virtual functions. This approach has several drawbacks, but what it lacks in flexibility it makes up for in simplicity, so we just need to be aware of the limitations and work accordingly. I’ve often found that having to adapt to a simple mechanism has a tendency to make a design more robust too, so there’s also that aspect.

We’re effectively passing messages back and forth, quite similar to a data bus. We don’t need to know who receives the data or who sent it; all we care about is that it’s formatted correctly and that we know how to slap a destination address on it, which is a nice characteristic.

Semi-standardized memory blocks

If we define a set of standard memory blocks, and try to use these as often as possible in our properties, chances are we don’t need to worry about data formatting very often. If you look closely, you’ll notice that the data you typically want to pass around probably already uses a small defined set of data types. For example, I have a type called f3 which is simply an array of three floats (not an altogether uncommon type when dealing with three dimensional space), and I use fixed time steps interpolating between the previous and the current step of the simulation, thus the most common type in my engine is f3[2]. I also place the ‘current’ value first so I can use the same reference to read both f3 and f3[2], where I need it. I guess this could technically be considered a very naive implementation of (multiple) inheritance, but let’s not go there. It does allow us to interact with objects we know very little about though, so I guess you could call it something similar to polymorphism if you wanted to, and you’re an ad-man with affinity for buzz-words.
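
A sketch of the f3 / f3[2] convention described above, with an interpolation helper of the kind a fixed-time-step renderer would use. The helper function itself is illustrative, not the engine's actual code:

```cpp
#include <cassert>

// f3 is the array-of-three-floats type from the text. f3[2] holds the
// current and previous simulation step, current first, so a pointer to
// the pair can also be read as a plain f3.
typedef float f3[3];

// interpolate between previous (steps[1]) and current (steps[0]) step
// for rendering, alpha in [0, 1]
void interpolate(const f3 steps[2], float alpha, f3 out)
{
    for(int i = 0; i != 3; ++i)
        out[i] = steps[1][i] + (steps[0][i] - steps[1][i]) * alpha;
}
```

Placing the current value first is what makes the 'read an f3[2] as an f3' trick work: both reads start at the same address.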

We still need to know the ‘interface’ or data layout of our components. Assuming we’re using standardized data-blocks, this can be extracted out of the manager logic, and into a wiring phase which is done by a higher level game object. The game object code creates the properties, passing references to the appropriate data blocks as input, and all managers can remain completely oblivious to how they’re connected.
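
A minimal sketch of that wiring phase: the game object code connects a hypothetical render-side property to a physics property's data block, and neither side knows anything about the other. All names here are invented for the example:

```cpp
#include <cassert>

typedef float f3[3];

// A property whose data block is owned and updated by one manager.
struct cPhysicsProperty
{
    f3 origin; // updated by the physics manager each tick
};

// A property that consumes someone else's data block as input.
class cRenderProperty
{
    const f3* _origin; // wired input; owned elsewhere
public:
    cRenderProperty() : _origin(0) {}
    // the wiring step: performed once by higher level game object code
    void wireOriginInput(const f3* origin) { _origin = origin; }
    float originX() const { return (*_origin)[0]; }
};
```

After wiring, no per-frame calls are needed to keep the two in sync: the render property reads whatever the physics manager last wrote.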

Wiring it up

It was important to me that each manager would be free to reorganize or sort its data arrays as it saw fit, so I couldn’t use raw pointers without forcing every manager to keep track of who’s referencing which of the properties it’s managing (in order to keep them up to date on the new address). As the number of references can be pretty arbitrary depending on where a specific property is used, I chose to insert an indirection instead, i.e. a lookup table.

Furthermore, I elected to go with a handle scheme instead of straight up pointers into the lookup table. The lookup table still stores pointers into the property data though. Using handles for the lookup has a couple of advantages: the first thing that comes to mind is that we can eliminate the problem of dangling pointers by keeping a version counter in the handle, which is nice. Second, I cut my handles to 32 bits, which is half the size of a pointer on a 64 bit system. And third, I reserve 8 bits of the handle for offsets into the data being pointed to, which lets me store one pointer per property structure, while still allowing a handle to ‘point’ somewhere within that structure. This is useful when I’m storing, for example, both origin and orientation together, and for a particular case I’m only interested in orientation. The offset handling is hidden in the lookup table functions, so as long as we set it correctly during wiring, the user of the handle doesn’t need to worry about it, giving us a bit of flexibility and reducing the sheer number of properties that otherwise would’ve needed to be registered and kept in sync. The observant reader might notice that this limits the maximum size of the data structure to 256 bytes, but as mentioned earlier, I want these to be as small as possible and only contain the data which is generally accessed together. So really, 256 bytes ought to be enough for everybody…
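
Here's a sketch of such a handle scheme. The exact bit split used below (16 index bits, 8 version bits, 8 offset bits) is my assumption for illustration; the text only fixes the 32-bit handle size and the 8 offset bits:

```cpp
#include <cassert>

typedef unsigned int tHandle; // 32 bits

// Pack index, version and byte offset into one 32-bit handle.
inline tHandle MakeHandle(unsigned index, unsigned version, unsigned offset)
{
    assert(offset < 256);       // 8 offset bits (from the text)
    assert(version < 256);      // 8 version bits (assumed split)
    assert(index < (1u << 16)); // 16 index bits (assumed split)
    return (index << 16) | (version << 8) | offset;
}
inline unsigned HandleIndex(tHandle h)   { return h >> 16; }
inline unsigned HandleVersion(tHandle h) { return (h >> 8) & 0xff; }
inline unsigned HandleOffset(tHandle h)  { return h & 0xff; }

// Resolving a handle: the lookup table stores one pointer per property
// structure, and the offset lets the handle 'point' inside it. A stale
// version means the slot was reused, so the handle is dangling.
inline void* ResolveHandle(
    tHandle h, char** lookupTable, const unsigned char* versions)
{
    if(versions[HandleIndex(h)] != HandleVersion(h))
        return 0; // dangling handle detected
    return lookupTable[HandleIndex(h)] + HandleOffset(h);
}
```

When a manager moves or reuses a slot, it only has to update the one lookup table pointer (and bump the version), and every outstanding handle stays valid or fails safely.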

I don’t have a graphical UI with little squiggly lines representing wires connecting the input of some property to the output of another; but in my mind that’s what’s going on: I arbitrarily connect data ‘slots’ to one another and the property managers are none the wiser. As an example, this is how I picture the wiring when setting up the player camera.

Player camera wiring

The ID of the network entity to track is provided by the server; the camera logic doesn’t know what kind of object we’re tracking, it’s only given the origin property. During gameplay, it’s not even a server object directly, but a client side prediction property. The rest of the wiring applies an offset (the transformation) to the entity origin before feeding it into a PID controller, controlling a linear momentum property. The output is fed to rendering views as well as ground synthesis (only generate ground mesh where we can actually see it) and directional lighting (the shadow map rendering needs to follow the camera around). Each of these boxes represents a separate property manager which, when called, updates all the instances of that property, e.g. the PID manager updates all PID controller instances; it doesn’t matter if they’re being used to control the player camera or to move UI elements around.

Where’d the game logic go?

You’ll notice I haven’t really talked about game logic. It hasn’t magically disappeared just because the game engine has a particular architecture; it’s still there. The player will still see compound game objects on screen, such as a soldier, and you will need something to keep track of the relevant properties. I still have a soldier game object. The difference is that there’s no soldier-update (or -tick, or -think, whatever you want to call it); there’s pretty much only a create and a destroy for the client and server side respectively. The create function initializes the appropriate properties; after that the property managers take over and deal with the frame-to-frame work. I’m sure you can think of other functions you’d want for special game logic, but the important notion is that there’s no frame-by-frame update function.
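A hedged sketch of what that leaves for the game object code; the manager functions and names here are invented for illustration, not taken from the engine:

```cpp
#include <cassert>
#include <cstdint>

using Handle = std::uint32_t;   // opaque property handle

// Stand-ins for the real property managers (illustrative only).
static Handle g_next = 1;
Handle create_server_object_property()    { return g_next++; }
Handle create_prediction_property(Handle) { return g_next++; }
Handle create_redraw_property(Handle)     { return g_next++; }

struct Soldier {
    Handle serverObject;  // authoritative origin from the server
    Handle predicted;     // client-side prediction property
    Handle redraw;        // rendering property wired to the prediction
};

// The only game-logic entry point besides destroy: wire the properties
// together and hand over to the managers. No per-frame soldier_update().
Soldier soldier_create() {
    Soldier s;
    s.serverObject = create_server_object_property();
    s.predicted    = create_prediction_property(s.serverObject);
    s.redraw       = create_redraw_property(s.predicted);
    return s;
}
```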

Actually, I lied. I have a soldier-update function. But it is a specialized soldier property which handles animations on the client side, with dependencies on the server object property, if the object starts moving it’ll trigger walk cycle animations, do some simple forward kinematics to aim the gun in the right direction, and so on. It does not move the soldier around on screen, that’s a redraw property wired to a server object property via a client-side prediction property.

Wrap up

I find this design has a certain charm: it keeps the implementation of each property manager focused on doing a single thing, and doing it efficiently.

Additionally, if you focus on keeping your data in a raw format like this, you’ll end up with very lean data, because it makes you think about what data you actually need and how to organize it. You won’t end up with objects where, alongside your couple of vectors’ worth of data, you have a virtual table pointer, references to a couple of engine subsystems, and so on and so forth. Don’t underestimate how much objects can balloon thanks to a couple of references, especially if you’re on a 64-bit architecture and you’re creating maybe a million instances.
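To make the bloat concrete, here is a hypothetical comparison; on a typical 64-bit compiler the object-style version picks up a vtable pointer, padding, and subsystem references that can more than double the per-instance footprint:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Lean, raw property data: just the values that are accessed together.
struct LeanTransform {
    Vec3 origin;
    Vec3 velocity;
};

// A typical object-flavored version of the same data (hypothetical): the
// same payload plus a vtable pointer and a few engine references.
struct FatObject {
    virtual ~FatObject() {}
    Vec3  origin;
    Vec3  velocity;
    void* renderer;
    void* physics;
    void* audio;
};
```

At a million instances, every extra pointer-sized member costs roughly 8 MB on a 64-bit target before you even account for alignment padding.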

Two problems stand out as particularly affected by this design choice: concurrent access to the lookup tables, and concurrent access to shared data if we run updates of separate property managers simultaneously. Note that neither stops us from dividing the list of properties to update over several threads. Related to this is ordering: if we run property manager updates sequentially, and cannot wait for the next frame to let a value propagate, we’ll need to be clever about the order in which we run our updates.

All in all, it’s a neat set up, and it really appeals to how my brain works. So should I have built my own engine? Probably not. It’s been a cool ride though, and I’ve learned a tremendous amount. It hasn’t been without its share of dark moments, but perhaps that’s a topic for a different post.

Feel free to let me know what you think, either in the comments below or poke me on Twitter, I’d love to hear it.

Post scriptum – In the works

Concurrency and multi-threading are close to my heart, and so is having a setup that works reliably in parallel without slapping a lock on everything (mind you, that approach to concurrency will bite you in the behind soon enough). I have not yet gotten around to implementing this part of the system, but this is what I have planned.

I’ve elected to go for a semi-static directed acyclic graph (DAG) model, where property updates are carried out in a breadth-first manner. Each property can only reference the layers before it, thus all properties in a layer can be updated simultaneously without risking interference with other properties. I don’t want the ordering to be fixed at compile time; rather, I want it to sort itself appropriately during wiring. This solves both ordering (properties depending on other properties) and concurrent access.

To achieve this, I’ll reserve a few more bits in the handle to denote the ‘graph depth’ of the referenced property. Thus when I create a property, passing the appropriate dependencies to it, it’ll examine all handles and set its own depth to the maximum plus one. I’ll keep all instances of the property sorted by graph depth and, during update, process a single depth at a time. During each frame, I’ll step through each populated layer of the graph and fire off multithreaded jobs for each property type, synchronizing after each layer. As each layer only depends on the layers before it, there will only be concurrent reads, which is fine. In the PID-controlled camera example above, the feedback to the momentum property would have to be queued and applied at the end of the frame (thus explicitly delaying the input by one frame). This doesn’t change the current behavior, though: depending on how the updates are ordered, one of the properties is already reading one-frame-old data.
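The depth rule itself is simple enough to sketch; this is my reading of the scheme, not code from the engine:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Planned depth rule: a property's graph depth is the maximum depth of its
// dependencies plus one. Each layer then only reads from earlier layers,
// so a whole layer can be updated in parallel.
std::uint32_t graphDepth(const std::vector<std::uint32_t>& depDepths) {
    if (depDepths.empty())
        return 0;  // no dependencies: the property lives in layer 0
    return *std::max_element(depDepths.begin(), depDepths.end()) + 1;
}
```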

This implies I cannot let a property change depth without rebuilding the graph, which is non-trivial as we do not keep track of who depends on us, only whom we depend on. It doesn’t stop me from rewiring the dependencies of a property, but it does stop me from having it depend on something in its own layer or something further down the line.

Crafting Madness #1: Volufaketric Fog in Asylum

Original Author: Francisco Tufró

If you’ve been following Asylum’s development, you already know that the game’s graphics are based on pre-rendered textures projected on top of a cube with inverted normals. Since Gouraud shading is avoided on these faces, you get the illusion of a panoramic view. Using this technique instead of 3D rendering allows the game to have great-looking graphics without the need for costly (in development and computational time) real-time rendering techniques. A clever trade-off if you ask me.

The downside of this technique is that you lose all the benefits of dynamic lighting and depth provided by the third dimension, and with it, a lot of realism. I had been thinking that with some shader-level magic we could re-create some stuff needed for a few visual effects we wanted to implement. This post is about one of those visual effects…

A horror game without fog? No way!

This was basically the feeling of the whole team. We needed to have animated fog, and it had to be realistic; it couldn’t be just an overlay on top of each cube’s face, as that wouldn’t look good enough.

Pablo and I started discussing the idea of using a Z-depth mask exported from 3D Studio Max to simulate depth in each face.
To illustrate, Pablo exported an image where he defined white as far and black as near, with all the distances in between as shades of grey. You can see an example here:


The idea is that I could use the depth information inside a shader to control the fog’s opacity. To give you an idea, if fog is represented by a white square the result would look something like this:


This was the first test we did, and it already looked promising!
The second step was to use a cloud texture instead of a white square, which ended up looking fine. But if we were going to use a cloudy fog, we couldn’t use a static one; it needed to be animated.
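The blend itself is simple. As a one-channel sketch of the math (this is the idea, not Asylum’s actual shader code):

```cpp
#include <algorithm>
#include <cassert>

// One-channel compositing sketch, all values in [0, 1]: the Z-depth mask
// scales the fog sample, which is then summed on top of the scene color.
float fogPixel(float sceneColor, float fogSample, float depthMask) {
    // white (1.0) in the mask means far away, i.e. full fog; black means none
    return std::min(1.0f, sceneColor + fogSample * depthMask);
}
```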

Screen Shot 2014-07-10 at 4.22.05 PM

Let’s move!

I’ve done some post-processing effects in the past, so I was already familiar with using framebuffers as textures. This approach worked like a charm.

In Unity we solved this using Render Textures in a few steps (sorry, but this is a Pro-Only feature):

Create a fog particle system

The particles move really slowly and have their alpha change gradually over their lifespan; they also rotate a little bit over time.
We added a layer called “Fog” and selected it in the particle system.

Screen Shot 2014-07-10 at 5.32.21 PM


Create a camera

We created another camera (without the main camera tag), positioned it to look at the particle system, and limited its Culling Mask to ‘Fog’ only.
The background of the camera was set to black, too; this way we were able to sum the fog texture on top of the original one.

Screen Shot 2014-07-10 at 4.30.40 PM

Create and assign a render texture

We created a Render Texture (in the project browser: Create->Render Texture) and assigned it as the Render Texture of the camera.
Then we assigned it as the fog texture sent to our custom shader and…

Enjoy fog

In this video you can see the result of the technique: it added a really nice-looking, animated fog that is consistent in terms of depth.


This was the first experiment in a set of ideas we have about re-creating 3D visual effects in our 2D projected space in Asylum.
Using a texture with Z-depth as an opacity mask for an animated fog texture proved to work really well, and the results looked good enough.
In the future we’re planning to use similar techniques to simulate dust particles and dynamic lights. We’ve also started to evaluate using normals to recreate specular highlights and other interesting stuff like that. We’ll see how far we’ll be able to take this!

Is there any interesting idea that this post has brought to your mind? What do you think about the technique? Any optimizations that come to mind? It would be great to read some comments!

The Indie Content Problem

Original Author: Alistair Doulin

Now that we’re wrapping up work on Battle Group 2 we’ve begun planning out our next major project. I’ve briefly spoken about this previously and today I’m going to share some further discussions that have come out of our planning. The main theme revolves around creating enough content for a game with a small development team. With three main developers (a programmer, a designer and an artist) and a project timeframe of 12 months we need to make smart decisions about how we will create enough content for our game. I see the same problem crop up with a lot of other indie friends and I thought I’d give my thoughts on the subject.

The Problem – Not Enough Content

The underlying problem is the creation of enough quality content to keep players engaged for a set period of time. For Battle Group 2 this was a handful of hours; for our next project, however, we are aiming for something people can play for months without running out of content. The problem is that a small team is limited in what it can produce in a given period of time. For us, that means 3 (hu)man-years of work. So what are our options to solve this problem?

Solution 1 – Reduce Scope

The first solution is to reduce the scope of the game. Instead of providing x months of content for the player, cut this back to weeks or hours. This is the usual advice I give to game developers when they are concerned with the amount of time/budget required to develop their game. It’s a common trap to overscope a project and have the development go on for years, ending with the project abandoned entirely or released as something that doesn’t live up to the original vision of the game. Reducing scope has the advantage of focusing the design back on the core “5 minutes of fun” and making sure the game being built is the tightest play experience possible.

Solution 2 – Change Design

The second solution is to change the design of the game to cater to limited resources. From a business point of view, this can involve changing the monetization of the product. Free-to-play games often require a large amount of content to keep people engaged and playing, and therefore draw out more money. Switching to a paid model allows developers to “get paid” up front and focus on quality over quantity. The game is then more about providing an enjoyable experience than about keeping people playing and extracting as much money for as long as possible. From a game design perspective, this involves changing the underlying design of the game to cater to reduced resources. The difficult part of this is keeping the original vision of the game at the same time.

Solution 3 – Roadblocks

The current trend for free-to-play games (Boom Beach, Candy Crush) is to stop the player from racing through the content by placing artificial blocks on their progress. Players continually run into roadblocks that require them to wait, ask a friend for help or pay cash. This is not something we want to do for our future projects. While it has become the norm for a particular set of games, I’m glad to see it hasn’t made its way into more mainstream games outside of the F2P mobile domain.

Solution 4 – Procedural Content

The solution I am leaning towards for our future project is to use procedural content generation for the majority of our content. This changes the problem from one of time/resources to one of solving complex problems and tweaking algorithms to make quality content. This in itself can sometimes be as time-consuming as simply creating the content, and therefore needs to be handled carefully. The major advantage of this solution is that it frees the team up to make the building blocks of the game and have players explore the space in whatever direction they enjoy. One risk of this approach is creating content that all feels the same. Players quickly see through procedural generation when all that changes are simple stats or superficial details. However, games that are built on procedural content from their core (Minecraft, No Man’s Sky) can give deep experiences that allow almost unlimited play time.

Our Decision

We are in the middle of making this decision for our next project at the moment. We have not decided on the best option and this blog post is a way for me to think through our options as clearly as possible. Have you encountered a similar problem and what was your solution? Are there any other solutions you would suggest?

What Is In a Name?

Original Author: Niklas Frykholm

Today I’d like to revisit one of the most basic questions when designing a resource system for a game engine:

How should resources refer to other resources?

It seems like a simple, almost trivial question. Yet, as we shall see, no matter what solution we choose, there are hidden pitfalls along the way.

To give some context to the question, let’s assume we have a pretty typical project setup. We’ll assume that our game project consists of a number of individual resources stored in a disk hierarchy that is checked into source control.

There are three basic ways of referring to resources that I can think of:

  • By path
  • By GUID
  • By “name”

By Path

texture = "textures/flowers/rose"

This is the most straightforward approach. To refer to a particular resource you just specify the path to that resource.

A word of warning: If you use paths as references I would recommend that you don’t accept ridiculous things such as “./././models../texturesFLOWers/////rose” even though your OS may think that is a perfectly valid path. Doing that will just lead to lots of headaches later when trying to determine if two paths refer to the same resource. Only use a canonical path format, from the root of the project, so that the path to the same resource is always the same identical string (and can be hashed).
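A sketch of what such canonicalization might look like; lower-casing is an assumption here, since whether paths are case-insensitive is a per-project decision:

```cpp
#include <cassert>
#include <cctype>
#include <sstream>
#include <string>
#include <vector>

// Reduce a project-relative path to one canonical spelling, so that equal
// resources always produce the same string (and the same hash).
std::string canonical(const std::string& path) {
    std::vector<std::string> parts;
    std::stringstream ss(path);
    std::string seg;
    while (std::getline(ss, seg, '/')) {
        if (seg.empty() || seg == ".") continue;  // collapse "//" and "./"
        if (seg == "..") {                        // resolve parent references
            if (!parts.empty()) parts.pop_back();
            continue;
        }
        for (char& c : seg)                       // assumed: case-insensitive paths
            c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
        parts.push_back(seg);
    }
    std::string out;
    for (std::size_t i = 0; i < parts.size(); ++i) {
        if (i) out += '/';
        out += parts[i];
    }
    return out;
}
```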

Path references run into problems when you want to rename a resource:

textures/flowers/rose -> textures/flowers/less-sweet-rose

Suddenly, all the references that used to point to the rose no longer work and your game will break.

There are two possible ways around this:

Redirect
You can do what HTML does and use a redirect. I.e., when you move rose, you put a little placeholder there that notifies anyone who is interested that this file is now called less-sweet-rose. Anyone looking for rose will know by the redirect to go looking in the new place.

There are three problems with this. First, the disk gets littered with these placeholder files. Second, if you at some point in the future want to create a new resource called rose, you are out of luck, because that name is now forever occupied by the placeholder. Third, with a lot of redirects it can be hard to determine when two things refer to the same resource.

Renaming tool
You can use a renaming tool that understands all your file formats, so that when you change the path of a resource, the tool can find all the references to that path and update them to point to the new location. Such a tool can be quite complicated to write, depending on how standardized your file formats are. It can also be very slow to run, since it potentially has to parse all the files in your project to find out which other resources might refer to yours. To get decent performance, you have to keep an up-to-date cache of the referencing information so that you don’t have to read it every time.

Another problem with this approach can occur in distributed workflows. If one user renames a resource while another creates references to it, the references will break when the changes are merged. (Note that using redirects avoids this problem.)

Both these methods require renames to be done with a tool. If you just change the file name on disk, without going through the tool, the references will break.


By GUID

The problems with renaming can be fixed by using GUIDs instead of paths. With this approach, each resource specifies a GUID that uniquely identifies it:

guid = "a54abf2e-d4a1-4f21-a0e5-8b2837b3b0e6"

And other resources refer to it by using this unique identifier:

texture = "a54abf2e-d4a1-4f21-a0e5-8b2837b3b0e6"

In the compile step, we create an index that maps from GUIDs to compiled resources that we can use to look things up by GUID.
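Such an index could be as simple as a string map; the compiled paths below are made up for illustration:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Compile-step index sketch: a flat map from GUID to the location of the
// compiled resource. Keys and values here are illustrative.
using GuidIndex = std::unordered_map<std::string, std::string>;

// Look a compiled resource up by GUID; an empty string means a broken link.
std::string lookup(const GuidIndex& index, const std::string& guid) {
    auto it = index.find(guid);
    return it == index.end() ? std::string() : it->second;
}
```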

Now files can be freely moved around on disk and the references will always be intact. There is not even a need for a special tool, everything will work automatically. But unfortunately there are still lots of bad things that can happen:

  • If a file is copied on disk, there will be two files with the same GUID, creating a conflict that needs to be resolved somehow (with a special tool?)
  • Lots of file formats that we might want to use for our resources (.png, .wav, .mp4, etc) don’t have any self-evident place where we can store the GUID. So the GUID must be stored in a metadata file next to the original file. This means extra files on disk and potential problems if the files are not kept in sync.
  • Referring to resources from other resources is not enough. We also need some way of referring to resources from code, and writing:

    texture = "a54abf2e-d4a1-4f21-a0e5-8b2837b3b0e6"

    is not very readable.

  • If a resource is deleted on disk, the references will break. The same happens if someone forgets to check in all the required resources. This will happen no matter what reference system we use, but with GUIDs everything is worse, because the references:
    texture = "a54abf2e-d4a1-4f21-a0e5-8b2837b3b0e6"

    are completely unreadable. So if/when something breaks, we don’t have any clue what the user meant. Was that resource meant to be a rose, a portrait, a lolcat or something else?

In summary, the big problem is that GUIDs are unreadable and when they break there is no clue to what went wrong.

By “Name”

Perhaps we can fix the unreadability of GUIDs by using human readable names instead. So instead of a GUID we would put in the file:

name = "garden-rose"

And the reference would be:

texture = "garden-rose"

To me, this approach doesn’t have any advantages over using paths. Sure, we can move and rename files freely on disk, but if we want to change the name of the resource, we run into the same problems as before. Also, it is pretty confusing that a resource has both a name and a file name, and that the two can be different.

By Path and GUID?

Could we get the best of both worlds by combining a path and a GUID?

I.e., the references would look like:

texture = {
	path = "textures/flower/rose"
	guid = "a54abf2e-d4a1-4f21-a0e5-8b2837b3b0e6"
}

The GUID would make sure that file renames and moves were handled properly. The path would give us the contextual information we need if the GUID link breaks. We would also use the path to refer to resources from code.

This still has the issue with needing a metadata file to specify the GUID. Duplicate GUIDs can also be an issue.

And also, if you move a file, the paths in the references will be incorrect unless you run a tool similar to the one discussed above to update all the paths.


In the Bitsquid engine we refer to resources by path. Frustrating as that can be sometimes, to me it still seems like the best option. The big problem with GUIDs is that they are non-transparent and unreadable, making it much harder to fix stuff when things go wrong. This also makes file merging harder.

Using a (GUID, path) combination is attractive in some ways, but it also adds a lot of complexity to the system. I really don’t like adding complexity. I only want to do it when it is absolutely necessary. And the (GUID, path) combination doesn’t feel like a perfect solution to me. It would also require us to come up with a new scheme for handling localization and platform specific resources. Currently we do that with extensions on the file name, so a reference to textures/flowers/rose may open textures/flowers/ if you are using French localization. If we switched to GUIDs we would have to come up with a new system for this.

We already have a tool (the Dependency Checker) that understands references and can handle renames by patching references. So it seems to me that the best strategy going forward is to keep using paths as references and just add caching of reference information to the tool so that it is quicker to use.

This has also been posted to The Bitsquid blog.

Analyzing free-to-play games development: the Game / Team Clock

Original Author: Raul Aliaga Diaz


There’s currently a lot of information out there about how to approach the free-to-play business model and its implications for the game development process. It is becoming increasingly complicated to make sense of all this information, however: how it applies to successful games in particular, how knowing and doing all this prevents a game from failing, and how we can make it work for our own game. In this post, I’ll explore a framework to evaluate free-to-play development execution. The framework is visualized by arranging your game’s needs and your team’s strengths on a clock-shaped chart to assess your game’s viability, opportunities and risks.

Figure 01: An Example of the Game/Team Clock.


By taking a user centered point of view for inspiration, we can think about the questions players implicitly ask themselves at each step of their lifecycle as “users” of our game. The key to this framework is putting yourself in the position of a player of your game, and asking yourself 12 questions from that perspective:

  • What is the game about? The first step is figuring out a clear, unique, compelling way to describe a game to quickly grab players’ attention and interest.
  • Where and how do I get to know about this? There are many venues in which people can get to know new things: TV, websites, social networks, etc. This is why it is so important to know where people are spending their time.
  • Do I have access to any of the platforms on which it is available? In an era with so many different gaming-enabled devices, the platform on which a game is available shapes not only the development challenges but also the potential number of players to reach, and their expectations.
  • Do I understand and like the gameplay? Once you can play a game, you start familiarizing yourself with it and decide whether you understand, and like, the game or not. I’m stressing the difference between understanding and liking, because they’re both important when we compete for people’s leisure time.
  • Do I like the art and content? Games that look and feel nice do get noticed and hook people. This encompasses art, the story, the sounds and music, etc.
  • Does it suit my lifestyle patterns to play? Some games are enjoyable at any moment in which you have 5 minutes. Others require a heavier investment of time and scheduling, such as PC free-to-play games. But if a game fits into your gaming patterns (every other weeknight and some longer time on weekends, for example), then you’re more inclined to feel that you can commit to including the game into your busy life.
  • Is it not a burden? This is not asked right in the beginning, but it is increasingly present in the game’s interactions. Is it sending too many push notifications? Am I getting lots of prompts to pay before even trying to play? Is the game getting too frustrating? etc.
  • Do I want to keep playing this game? After the initial glow of the game and once you understand and like what it is about, does it make sense to continue playing? Are you getting somewhere? Is there a meaningful goal to achieve? In other words: Are you getting increasingly better engagement?
  • Do other cool people or friends of mine play it too? To play is inherently social, and it’s very rewarding to know when your friends enjoy the games you like too, or when you get to interact with other people based solely on the experiences a game enables.
  • Do I take joy in identifying myself as a player of this? This element is subtle. Are you grateful for the good times the game has given you? Do you tell (and encourage!) other people to play this game? Do you start thinking about why you like it and is it becoming a little piece of your current identity? This is a way to essentially figure out whether your game’s brand is resonating with the player or not.
  • Does this game have clear value for me? Next to identifying yourself as a player of a game, this must offer a clear value for you as a player. It’s the cherry on top of making the game part of your lifestyle. The game gives you something beyond fun. It’s a hobby, it reminds you of your childhood, you have enjoyed it so much that you want to see more of it and seek ways to support it, you tell other people that you play this game for a reason that is crystal clear for you.
  • Is it worth paying for this? After all this, having an answer for all the previous questions, you consider the trade off: Is it valuable for you to pay in the game for the things it has to offer you?

This is roughly the path of questions people implicitly ask and answer to themselves from first learning about a game to then deciding to pay for it.

This is radically different from non-free-to-play games, in which questions 4 to 11 have to be answered before playing the whole game at length. This made the game development process so distant from people’s feedback that studios usually needed to rely on expensive focus groups, risky leaps of faith and existing brands or IPs.

Moreover, given the platform choice for some games, as with console games, many questions become unnecessary to address because the platform comes with a set of expectations and assumptions. For example, on the latest consoles you expect things such as highly polished graphics and long experiences to justify spending $60.

These questions can be mapped to concerns you need to take care of when developing and launching a game:

Figure 02: Questions and their areas of concern.

Let’s venture some definitions for each one of these concerns:

  • Unique Value Proposition: It’s the short, clear, attractive, unique description that permeates your whole game.
  • Visibility: The aggregated set of outlets in which people can find out about your Unique Value Proposition, visible where they spend time.
  • Platform: The place/device where people play your game.
  • Gameplay: The core experience of your game and its evolution.
  • Content: The elements that shape your game’s core experience and enable people to perceive it through their senses.
  • Lifestyle affordance: The way all previous elements fit on the (ever diminishing) leisure time of people’s busy, constrained lives.
  • UX Affordance: The way all previous elements not only fit, but are made more accessible, convenient, and affordable.
  • Meta Gameplay: The motivation for the longer run.
  • Community: The social aspects of your game and the part of the experience that happens largely outside your game application.
  • Identification: The positive encapsulation of all previous elements for each player.
  • Value accessibility: The clear realization of the reasons why YOU play THIS game (and not others).
  • Monetization: Whether you’re willing to pay for the previous reasons or not.

Additionally, we can map each of these topics to key areas of the Team’s Expertise, and which areas carry the biggest effort on each topic (note: your mileage may vary):

Figure 03: Concerns and their key driver Expertise.

You start with a strategy to make a game, coming either from business or creative inspiration. Then you make sure Marketing puts the game on people’s minds, bringing them together to play a game designed and developed by the tech, art and design teams, with additional effort contributed by teams including Product, UX, Live Operations, Community Management and Analytics to operate the game as a service (even if a light one). Finally, the whole pursuit is encompassed as a Business.

Of course, for each team, considering your proper definitions of the 12 topics and your particular game, the matching can be different, and all the Teams areas of Expertise are involved to different degrees on each Player Concern:

Figure 04: Player Concern / Team Expertise Relationship Matrix.

Entering: The Game/Team Clock

Now, we can map these Player Concerns and Team Expertise areas to regions on a clock for better visualization:

Figure 05: The Game Clock.

For your particular game, each “hour” will demand more or less effort to make it successful. Conversely, depending on the particular shape of your team, you can score competence and expertise in lesser or higher degrees on each of these Concerns. The key is to be conscious about your game’s requirements for success, to be honest about your team’s execution reality, and to make a match between your game and your team, that has the lowest possible gap.

We can even place the game companies that focus on each Team Expertise area on different parts of the clock as well:

Figure 06: The Team Clock and the types of teams they gather.

Successful teams play to their strengths. Since we now have plenty of platforms and the possibility of using quantifiable insights through metrics and analytics, a lot of new game companies are created focused only on the early life cycle, considering aspects such as Monetization, the Unique Value Proposition, Marketing/Distribution and the Platform, usually under the assumption that the next phase is “just” the game part.

Indie Game Developers start right from the Platform, Gameplay and Content aspects before exploring the other areas, because their strength lies in the game development part, assuming that the Unique Value Proposition, Visibility and Monetization will come naturally from a great game.

The subsequent region is where we find the divergence and conflict between approaches, because from 6 to 9 we can find scaling, growing startups together with successful indie game developers. So which approach can get you there? It will depend on your particular project and how well it aligns with your team’s strengths.

No matter how much effort you put into the early part of the cycle, if your game is not good enough, it will not take off. No matter how much marketing, monetization, analytics, etc. you throw at your game, without great gameplay and high-quality content you will only continue to fool yourself with hope until you run out of money. On the other hand, a unique, highly polished game has no chance to survive if it sits on a small platform, if it’s complicated or a burden to play, or if it blindly pursues an audience without correcting course to cater to the people who actually like the game.

In general, we tend to fixate on evaluating the approach of scaling startups or successful indies, disregarding the complementary elements that took them from 6 to 9, and hopefully onto the holy path to 12. Startup founders learn all about the business elements of the ones making it, disregarding the importance of having a great game and a great team to execute. Indie developers try to emulate successful indies that already have strong games that have become brands in themselves, and don’t pay attention to how the successful indie started, and which mix of factors made them successful beyond having a great game.

Finally, in the region from 9 to 12 we have the “Unicorns”, the games and developers that are very, very successful: King’s Candy Crush, Supercell’s Clash of Clans and Hay Day, and Riot’s League of Legends. Among the indies we can consider Mojang’s Minecraft, together with NimbleBit’s Tiny Tower and their subsequent games, among others.

The way to use this framework is to assess your project’s needs and your team’s strengths, scoring each User Concern and Area of Expertise to look for potential gaps.

Figure 07: Example Game/Product Needs and Team Expertise.

When we have slightly higher competence in something our game requires, we have a competitive advantage; conversely, lacking the expertise to address the concerns a game or product needs to be successful poses Execution Risk. Considerably higher expertise than a Game/Product need calls for is untapped potential, and a good alignment is Execution Feasibility.

Figure 08: Game/Product Needs versus Team’s Expertise comparison.

It’s important to note that even though this framework is conceived for free-to-play games, it can be extended to premium games as well, to study how to make the most of your team’s key areas of expertise, and how your work fits into the larger scope of other teams and their games. This is, for example, the reasoning behind including Mojang as an example of a successful indie game developer: it doesn’t operate Minecraft as a service (not heavily, at least), but it has a brand so strong that its operations can be roughly described as “continue catering to your (large and established) audience”.


This framework is a practical way to ask questions about your game and your team, so you can make the decisions that effectively contribute to your game’s success. The questions, the Player Concerns and the importance of each element may vary greatly among platforms, game genres and your team’s motivations. My hope is that this can be an effective starting point.

(Originally published at Mobile Dev Memo and my blog in June 2014.)

A Midnight Catharsis

Original Author: Jorge Rodriguez

I can’t sleep.

In three hours I have to wake up, catch a 4 am bus to an airport to make a 5:30 am flight to Brussels to continue my extended European vacation, but I know that I’m only going to be able to sleep tonight after writing this. The cause of my insomnia is that I just read these words:

Subject: *fart*

Post by mranime » May 11th, 2014, 2:35 am

hear that? that’s the sound of yet another failed attempt by vino of making a mod. the only mods he has to his name that have ever seen a release or more than 15 players is one that he shoveled a 6 foot deep grave and made sure no one ever played it again. he even managed to fuck that up because it took him literally 4 years to dig the grave by ruining every single aspect of the game. i guess he was busy ruining his marriage too, then

sorry i just wanted to add some activity to this forum since no one has posted in like 2 weeks

[Some editorial context: Vino is me. The mod he’s talking about that got “shoveled into a 6 foot deep grave” was called The Specialists, a popular game I worked on ten years ago, and then put to bed after building a big final version. Or maybe he’s talking about Calamity Fuse, which I killed following some internal team friction, after working on it for years. Or maybe he’s talking about my current project, Double Action, which I’ve been working on for more than 3 years. I was married briefly and then divorced when I was very young, and I was dumb enough to talk about it on the internet. Nobody has posted in two weeks because the project is on temporary hiatus during my European vacation.]

Okay. It’s just an Internet troll, right? Ban him. Go to sleep. Catch your bus. Get on your plane. Have your vacation, then go release Double Action and move on with your life. Why should you let an Internet troll affect you?

But somehow it’s not quite that easy. I get this sort of thing fairly often, actually. I’m not sure if it’s the same few people repeating the same troll over and over (the messages do seem to have similarities) or different people every time, but every month or two I get something like this. Every single time it gets me down. I’ve had to ban people from the forums, block them on IRC and Twitter, and filter out their emails from my inbox. But I always try to stay available to anybody who wants to get in contact with me, and so it’s always easy for them to find a new way to send me a hateful message. I can try to avoid them, but once contact is made, once my eyes meet even the subject of the message, I can’t draw myself away. I know what’s in the message, I know I shouldn’t look, but my morbid self-flagellating curiosity brings my eyes all the way to the end.

At first they were crushing and I had no idea how to handle them. It’s not that I’m thin-skinned; I’ve certainly fielded much more qualitative and substantial criticism and not taken it personally. But this isn’t that. I’m a shitty game designer, they say. I ruined their favorite game, I’ll never design a game that they’ll like, I’m Hitler (I’m not exaggerating there, I have been compared to Hitler) and I ran my own career into the ground. As it turns out, when I’ve spent years making something (usually for free) for no other reason than that I want someone to be entertained, I can only take the volleys so many times before the arrows start to reach their mark. After a while I listened to some others in the industry who deal with the same thing, specifically Ben Kuchera, talk about how they deal with the trolls, and I learned to avoid comment sections. I normally crave any feedback about anything I make, if only because I need to know that someone somewhere liked it even just a little bit. But there is no quality feedback in comment sections, only people proving Godwin right.

I learned to recast these people in my mind. They’re not playing the same game as I am. They’re not trying to help me. They don’t have any design insights. They haven’t put any deep thinking into the game’s mechanics. They don’t hope for me to succeed or look forward to release day. Their input is less useful to me than the starry-eyed play testers who earnestly suggest that I should add rocket launchers that shoot exploding watermelons. I live and breathe game development; he stands in the audience hurling tomatoes, not because the show is bad, but because he wants to see smashed fruit on my face. Nothing he says matters. It can’t help me, it isn’t substantive, it isn’t even well intended, and I should ignore it wholesale.

But it still hurts. Why does it hurt so much? It’s not like this person even cares about me or my project. He even said so in his post, he’s only posting because things are quiet and he wants to stir the hornet’s nest. Like any good gamer, he wants to push the lever and be stimulated by a response. He’s using my emotions as his own personal Skinner box for his casual amusement. Why should it hurt at all when he doesn’t care a whistle about me or my game?

Or does he? Why would he be posting at all unless he does care, at least a little bit? Someone who truly doesn’t care would never post at all. He wants to hurt me. His jabs are aimed for the soft parts. He knows that every motion he makes twists a knife. He points at my failures, my divorce and my lackluster backlog, in hopes that it will strike a vein. He may even know how difficult it is to continue work on the game when words like his weigh on my mind. This person, whoever he is, thinks that if I hadn’t somehow fucked up then today he could be playing a game he likes instead of trolling a game he doesn’t like. Let me explain.

I mentioned before that I was involved in a game called The Specialists. This was before Xbox Live, before the iPhone, before Half-Life 2 was released, and before Jonathan Blow and 2D Boy made “indie games” a thing. I was one of two programmers on that game, or “mod” as it was called in those days. I was brought on after the original programmer lost interest. TS was many things to many people. To some people (like me) it was a stylish slow-mo action shooter inspired by John Woo and The Matrix. To others it was an imaginative role playing framework. To still others it was a competitive test of skills and the boundaries of the game’s mechanics. To the original creator it was a hobby that got more popular than he ever imagined it would. It was a lot of different games that meant a lot of different things to a lot of different people.

Ten years later I’m now working on Double Action, which on the surface is the same sort of action shooter as TS was, but Double Action is only one type of game. It’s about style, and nothing else. It says it right on the tin, if you go to the website. It’s about jumping out of windows guns blazing, explosions exploding and paper money flying everywhere, just for fun. It’s actually a pretty shallow game, I admit, but it’s also earnest. It’s not about competition, or role playing, or twitch skills. I can’t make those other games because they’re not the side of TS that I appreciated. I’m making the game that’s in my head, and I’m fucking proud of it. I struggled for a long time with the vision of Double Action, with what kind of game it should be, but now the game has finally found a track and it’s doing a good job of being coherent, fun, and true to itself.

But it’s still not the game that the writer of that post wants it to be. He wants a different game, aesthetically similar, but mechanically fundamentally different. He doesn’t see the game I’m trying to create, which is fun and engaging and makes play testers scream in delight and yell “HOLY SHIT DID YOU SEE MY DOVES!” He only sees the game I didn’t make, the one he wanted but can’t have because I fucked up and went and made the game about something else. And so he says the meanest things he can to me in order to vent his frustration, or something.

That’s why it’s personal, and that’s why it hurts.

So I have a post of my own for mranime.

Dear mranime,

Double Action is a good game. When I get home from Europe, I am going to finish what little work remains and release it. Maybe it will be successful, and maybe it won’t, either way it doesn’t matter. What matters is that I’m proud of it. My metric for success isn’t whether you are pleased, but whether I was true to what the game wanted to be. I was.

So please, go away.

Love, Vino

(PS I know it’s you, demu.)

Now I can sleep.

Building an Engine Plugin System

Original Author: Niklas Frykholm

A plugin system is a useful way to allow developers to extend the capabilities of a game engine. Of course, an engine can also be extended by directly modifying the source code, but there are several drawbacks with that approach:

  • Changing the code requires you to recompile the engine. Anyone who wants to modify the engine must have the full source code, access to all the libraries and the build environment set up correctly.

  • Every time you pull changes from upstream you will have to merge your changes with the incoming patches. Over time, this adds up to a significant chunk of work.

  • Since you work directly in the source code, instead of against a published API, refactoring of engine systems might force you to rewrite your code from scratch.

  • There is no easy way to share the modifications you have made with other people.

A plugin system solves all these issues. Plugins can be distributed as compiled DLLs. They are easily shared and you can install them by just putting them in the engine’s plugin folder. Since the plugins use an explicit API, they will continue to work with new versions of the engine (unless backwards compatibility is explicitly broken).

Of course, the plugin API can never cover everything, so there will always be things you can do by modifying the engine that you can’t do through the plugin API. Nevertheless, it is a good complement.

A Tale of Two APIs

When building a plugin system, there are actually two APIs that you need to think about.

The first, and most obvious one, is the API that the plugin exposes to the engine: a set of exported functions that the engine will call at predefined times. For a very basic system, it could look something like this:

__declspec(dllexport) void init();
__declspec(dllexport) void update(float dt);
__declspec(dllexport) void shutdown();

The other API, which usually is a lot more complicated, is the API that the engine exposes to the plugin.

In order to be useful, the plugin will want to call on the engine to do stuff. This can be things like spawning a unit, playing a sound, rendering some meshes, etc. The engine needs to provide some way for plugins to call on these services.

There are a number of ways of doing this. One common solution is to put all the shared functionality in a common DLL and then link both the engine application and the plugin against this DLL.


The drawback of this approach is that the more functionality that the plugins need access to, the more must go in the shared DLL. Eventually you end up with most of the engine in the shared DLL, which is pretty far from the clean and simple APIs that we strive for.

This creates a very strong coupling between the engine and the plugins. Every time we want to modify something in the engine, we will probably have to modify the shared DLL and thus likely break all of the plugins.

As anyone who has read my previous articles knows, I really don’t like these kinds of strong couplings. They prevent you from rewriting and refactoring your systems and thus eventually cause your code to stagnate.

Another approach is to let the engine’s scripting language (in our case Lua) work as the engine’s API. With this approach, any time a plugin wanted the engine to do something it would use a Lua call.

For lots of applications I think this can be a really good solution, but in our case it doesn’t seem like a perfect fit. First, the plugins will need access to a lot of stuff that is more “low level” than what you can access from Lua. And I’m not too keen on exposing all of the engine’s innards to Lua. Second, since both the plugins and the engine are written in C++, marshalling all the calls between them through Lua seems both overly complicated and inefficient.

I prefer to have an interface that is minimalistic, data-oriented and C-based (because of C++ ABI compatibility issues and also because of… well… C++).

Interface Querying

Instead of linking the plugin against a DLL that provides the engine API, we can send the engine API to the plugin when we initialize it. Something like this (a simplified example):


typedef struct World World;

typedef struct EngineApi {
	void (*spawn_unit)(World *world, const char *name, float pos[3]);
} EngineApi;


#include "plugin_api.h"

__declspec(dllexport) void init(EngineApi *api);
__declspec(dllexport) void update(float dt);
__declspec(dllexport) void shutdown();

This is pretty good. The plugin developer does not need to link against anything, just include the header file plugin_api.h, and then she can call the functions in the EngineApi struct to tell the engine to do stuff.

The only thing that is missing is versioning support.

At some point in the future we will probably want to modify the EngineApi. Perhaps we discover that we want to add a rotation argument to spawn_unit(), or something else.

We can achieve this by introducing versioning in the system. Instead of sending the engine API directly to the plugin, we send the plugin a function that lets it query for a specific version of the engine API.

With this approach, we can also break the API up into smaller submodules that can be queried for individually. This gives us a cleaner organization.


#define WORLD_API_ID    0
#define LUA_API_ID      1

typedef struct World World;

typedef struct WorldApi_v0 {
	void (*spawn_unit)(World *world, const char *name, float pos[3]);
} WorldApi_v0;

typedef struct WorldApi_v1 {
	void (*spawn_unit)(World *world, const char *name, float pos[3], float rot[4]);
} WorldApi_v1;

typedef struct lua_State lua_State;
typedef int (*lua_CFunction) (lua_State *L);

typedef struct LuaApi_v0 {
	void (*add_module_function)(const char *module, const char *name, lua_CFunction f);
} LuaApi_v0;

typedef void *(*GetApiFunction)(unsigned api, unsigned version);

When the engine initializes the plugin, it passes along get_engine_api(), which the plugin can use to get hold of the different engine APIs.

The plugin will typically set up the APIs in the init() function:

static WorldApi_v1 *_world_api = nullptr;
static LuaApi_v0 *_lua_api = nullptr;

void init(GetApiFunction get_engine_api)
{
	_world_api = (WorldApi_v1 *)get_engine_api(WORLD_API_ID, 1);
	_lua_api = (LuaApi_v0 *)get_engine_api(LUA_API_ID, 0);
}

Later, the plugin can use these APIs:

_world_api->spawn_unit(world, "player", pos, rot);

If we need to make a breaking change to an API, we can just introduce a new version of that API. As long as get_engine_api() can still return the old API version when asked for it, all existing plugins will continue to work.
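To make that concrete, here is a sketch of engine-side dispatch that serves both WorldApi_v0 and WorldApi_v1 from the same implementation. The engine_spawn_unit() stub and the identity-rotation adapter below are invented for this example; the point is that the old version can be kept alive as a thin wrapper around the new one:

```c
#include <stdio.h>

/* Declarations mirroring the plugin_api.h sketch above. */
#define WORLD_API_ID 0

typedef struct World World;

typedef struct WorldApi_v0 {
	void (*spawn_unit)(World *world, const char *name, float pos[3]);
} WorldApi_v0;

typedef struct WorldApi_v1 {
	void (*spawn_unit)(World *world, const char *name, float pos[3], float rot[4]);
} WorldApi_v1;

/* Hypothetical engine-internal implementation (a stub for illustration). */
static void engine_spawn_unit(World *world, const char *name, float pos[3], float rot[4])
{
	(void)world; (void)rot;
	printf("spawn %s at (%g, %g, %g)\n", name, pos[0], pos[1], pos[2]);
}

/* v0 adapter: forward to the v1 implementation with an identity rotation. */
static void spawn_unit_v0(World *world, const char *name, float pos[3])
{
	float identity_rot[4] = {0.0f, 0.0f, 0.0f, 1.0f};
	engine_spawn_unit(world, name, pos, identity_rot);
}

/* Serve both API versions: old plugins get v0, new plugins get v1. */
void *get_engine_api(unsigned api, unsigned version)
{
	if (api == WORLD_API_ID && version == 0) {
		static WorldApi_v0 v0 = {spawn_unit_v0};
		return &v0;
	}
	if (api == WORLD_API_ID && version == 1) {
		static WorldApi_v1 v1 = {engine_spawn_unit};
		return &v1;
	}
	return 0;
}
```

Old plugins keep requesting version 0 and get sensible default behavior, while new plugins request version 1 and gain the rotation parameter.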

With this querying system in place for the engine, it makes sense to use the same approach for the plugin as well. That is, instead of exposing individual functions init(), update(), etc., the plugin just exposes a single function get_plugin_api(), which the engine can use in the same way to query APIs from the plugin.


#define PLUGIN_API_ID 2

typedef struct PluginApi_v0 {
	void (*init)(GetApiFunction get_engine_api);
} PluginApi_v0;


__declspec(dllexport) void *get_plugin_api(unsigned api, unsigned version);

Since we now have versioning on the plugin API as well, this means we can modify it (add new required functions, etc) without breaking existing plugins.
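For instance, a later engine version could probe the plugin from newest version to oldest. The sketch below is hypothetical throughout: PluginApi_v1 and its on_reload() member are invented for illustration, and it assumes the convention that v1 is a strict extension of v0 (same leading members), so a v1 struct can safely be used through a v0 pointer:

```c
#include <stddef.h>

typedef void *(*GetApiFunction)(unsigned api, unsigned version);

#define PLUGIN_API_ID 2

typedef struct PluginApi_v0 {
	void (*init)(GetApiFunction get_engine_api);
} PluginApi_v0;

/* Hypothetical v1: keeps v0's members first, then appends a reload hook. */
typedef struct PluginApi_v1 {
	void (*init)(GetApiFunction get_engine_api);
	void (*on_reload)(void);
} PluginApi_v1;

static void sample_init(GetApiFunction get_engine_api) { (void)get_engine_api; }

/* A sample plugin that only implements version 0 of the plugin API. */
static void *sample_get_plugin_api(unsigned api, unsigned version)
{
	if (api == PLUGIN_API_ID && version == 0) {
		static PluginApi_v0 v0 = {sample_init};
		return &v0;
	}
	return NULL;
}

/* Engine side: ask for the newest version first, fall back to older ones,
 * so plugins compiled against either header keep working. */
static const PluginApi_v0 *negotiate_plugin_api(GetApiFunction get_plugin_api, unsigned *version_out)
{
	void *api = get_plugin_api(PLUGIN_API_ID, 1);
	if (api) {
		*version_out = 1;
		return (const PluginApi_v0 *)api;
	}
	api = get_plugin_api(PLUGIN_API_ID, 0);
	if (api) {
		*version_out = 0;
		return (const PluginApi_v0 *)api;
	}
	return NULL;
}
```

With this negotiation in place, the engine can enable newer features (like the hypothetical reload hook) only when the plugin reports the newer version.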

Putting It All Together

Putting all this together, here is a complete (but very small) example of a plugin that exposes a new function to the Lua layer of the engine:


#define PLUGIN_API_ID       0
#define LUA_API_ID          1

typedef void *(*GetApiFunction)(unsigned api, unsigned version);

typedef struct PluginApi_v0 {
	void (*init)(GetApiFunction get_engine_api);
} PluginApi_v0;

typedef struct lua_State lua_State;
typedef int (*lua_CFunction) (lua_State *L);

typedef struct LuaApi_v0 {
	void (*add_module_function)(const char *module, const char *name, lua_CFunction f);
	double (*to_number)(lua_State *L, int idx);
	void (*push_number)(lua_State *L, double number);
} LuaApi_v0;


#include "plugin_api.h"

LuaApi_v0 *_lua;

static int test(lua_State *L)
{
	double a = _lua->to_number(L, 1);
	double b = _lua->to_number(L, 2);
	_lua->push_number(L, a + b);
	return 1;
}

static void init(GetApiFunction get_engine_api)
{
	_lua = (LuaApi_v0 *)get_engine_api(LUA_API_ID, 0);
	if (_lua)
		_lua->add_module_function("Plugin", "test", test);
}

__declspec(dllexport) void *get_plugin_api(unsigned api, unsigned version)
{
	if (api == PLUGIN_API_ID && version == 0) {
		static PluginApi_v0 plugin_api;
		plugin_api.init = init;
		return &plugin_api;
	}
	return 0;
}


// Initialized elsewhere.
LuaEnvironment *_env = 0;

void add_module_function(const char *module, const char *name, lua_CFunction f)
{
	_env->add_module_function(module, name, f);
}

void *get_engine_api(unsigned api, unsigned version)
{
	if (api == LUA_API_ID && version == 0 && _env) {
		static LuaApi_v0 lua;
		lua.add_module_function = add_module_function;
		lua.to_number = lua_tonumber;
		lua.push_number = lua_pushnumber;
		return &lua;
	}
	return 0;
}

void load_plugin(const char *path)
{
	HMODULE plugin_module = LoadLibrary(path);
	if (!plugin_module) return;
	GetApiFunction get_plugin_api = (GetApiFunction)GetProcAddress(plugin_module, "get_plugin_api");
	if (!get_plugin_api) return;
	PluginApi_v0 *plugin = (PluginApi_v0 *)get_plugin_api(PLUGIN_API_ID, 0);
	if (!plugin) return;
	plugin->init(get_engine_api);
}

This has also been posted to The Bitsquid blog.