What they DON’T tell you about being a game developer

Original Author: Alex Norton

So I’m in an interesting position. Malevolence, while not my first game by a long shot, is my first RELEASED game, and I’ve been lucky enough to have it gather a lot of attention (for an indie title) early on in its creation. From what people tell me, this does not normally happen. Normally, a developer will strike gold only after testing the waters with a few titles first, or after having a hand in other game development, such as working for a AAA company.

Because of this unique perspective of having a relatively successful title (despite its not yet being released) on my first ever attempt, I haven’t yet developed the pessimism that often comes with being an experienced indie game developer. This led me to write this piece, which goes into all of the things that they DON’T tell you about being a game developer. If you want the short and sweet version, feel free to skip to the end.

STAGE 1 – DELUSIONS

—————————

Going through university, I had the same delusions as most people: that I would get my qualifications, build a portfolio and get a job at a AAA game company. Shortly thereafter, fame and riches would ensue and I would live happily ever after, making games that I love, and having everything right with the world.

I finished university to find that all of my hard work would get me on a “consideration” list for a baseline, entry-level QA job which would mostly consist of me being locked in a cubicle for 70+ hours a week doing some of the most repetitive, soul-destroying work known to man.

[Image: a good example of what it is to be a tester, provided by Penny Arcade.]

A Formal Language for Data Definitions

Original Author: Niklas Frykholm

Lately, I’ve started to think again about the irritating problem that there is no formal language for describing binary data layouts (at least none that I know of). So when people attempt to describe a file format or a network protocol they have to resort to vague and nondescript things like:

Each section in the file starts with a header with the format:

  4 bytes			header identifier
  2 bytes			header length
  0--20 bytes		extra data in header

  The extra data is described below.

As anyone who has tried to decipher such descriptions can testify, they are not always clear-cut, which leads to a lot of unnecessary work when trying to coax data out of a document.

It is even worse when I create my own data formats (for our engine’s runtime data). I would like to document those formats in a clear and unambiguous way, so that others can understand them. But since I have no standardized way of doing that, I too have to resort to ad-hoc methods.

This whole thing reminds me of the state of mathematics before formal algebraic notation was introduced, when you had to write things like: “the sum of the squares of these two numbers equals the square of the previous number” instead of simply a² + b² = c². Formal notation can bring a lot of benefits (just look at what it has done for mathematics, music, and chess).

For data layouts, a formal definition language would allow us to write a tool that could open any binary file (that we had a data definition for) and display its content in a human-readable way:

height = 128
width = 128
comment = "A funny cat animation"
frames = [
	{display_time = 0.1 image_data = [100 120 25 ...]}
	...
]

The tool could even allow us to edit the readable data and save it back out as a binary file.

A formal language would also allow debuggers to display more useful information. By writing data definition files, we could make the debugger understand all our types and display them nicely. And it would be a lot cleaner than the hackery that is autoexp.dat.

Just to toss something out there, here’s an idea of what a data definition might look like:

typedef uint32_t StringHash;

struct Light
{
	StringHash	name;
	Vector3		color;
	float		falloff_start;
	float		falloff_end;
};

struct Level
{
	uint32_t version;
	uint32_t num_lights;
	uoffset32_t light_data_offset;

	...

light_data_offset:
	Light lights[num_lights];
};

This is a C-inspired approach, with some additions. Array lengths can be parametrized on earlier data in the file, and labels can be used to generate offsets to different sections in the file.

I’m still tossing around ideas in my head about what the best way would be to make a language like this a reality. Some of the things I’m thinking about are:

Use Case

I don’t think it would do much good to just define a language. I want to couple it with something that makes it immediately useful. First, for my own motivation. Second, to provide a “reality check” to make sure that the choices I make for the language are the right ones. And third, as a reference implementation for anyone else who might want to make use of the language.

My current idea is to write a binary-to-JSON converter. I.e., a program that given a data definition file can automatically convert back and forth between a binary and a JSON-representation of that same data.
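To make that concrete, here is a hypothetical sketch (in C++, and assuming a packed, little-endian layout) of the kind of unpacking code such a converter could generate from the Light definition above, printing one record as JSON. None of this is a real tool; it just illustrates the mechanical translation:

#include <cstdint>
#include <cstdio>
#include <cstring>

struct Vector3 { float x, y, z; };

int main()
{
    // A fake binary blob standing in for the contents of a file.
    unsigned char data[4 + sizeof(Vector3) + 4 + 4] = {0};
    const unsigned char *p = data;

    // Unpack the fields in the order the data definition lists them.
    uint32_t name;        std::memcpy(&name, p, 4);              p += 4;
    Vector3 color;        std::memcpy(&color, p, sizeof(color)); p += sizeof(color);
    float falloff_start;  std::memcpy(&falloff_start, p, 4);     p += 4;
    float falloff_end;    std::memcpy(&falloff_end, p, 4);

    // Emit the record as JSON.
    std::printf("{\"name\": %u, \"color\": [%g, %g, %g], "
                "\"falloff_start\": %g, \"falloff_end\": %g}\n",
                name, color.x, color.y, color.z, falloff_start, falloff_end);
    return 0;
}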

Syntax

The syntax in the example is very C-like. The advantage of that is that the tool will automatically understand C structs if you just paste them into the data definition file, which reduces the work required to set up a file.

The disadvantage is that it can be confusing with a language that is very similar to C, but not exactly C. It is easy to make mistakes. Also, C++ (we probably want some kind of template support) is quite tricky to parse. If we want to add our own enhancements on top of that, we might just make a horrible mess.

So maybe it would be better to go for something completely different. Something Lisp-like perhaps. (Because: Yay, Lisp! But also: Ugh, Lisp.)

I’m still not 100% decided, but I’m leaning towards a restricted variant of C. Something that retains the basic syntactic elements, but is easier to parse.

Completeness

Should this system be able to describe any possible binary format out there?

Completeness would be nice, of course. It is kind of annoying to have gone through all the trouble of defining a language and creating the tools and still not be able to handle all forms of binary data.

On the other hand, there are a lot of different formats out there and some of them have a complexity that is borderline insane. The only way to be able to describe everything is to have a data definition language that is Turing complete and procedural (in other words, a detailed list of the instructions required to pack and unpack the data).

But if we go down that route, we haven’t really raised the abstraction level. In that case, why even bother creating a new language? The format description could just be a list of the C instructions needed to unpack the data. That doesn’t feel like a step forward.

Perhaps some middle ground could be found. Maybe we could make a language that was simple and readable for “normal” data, but still had the power to express more esoteric constructs. One approach would be to regard the “declarative statements” as syntactic sugar in a procedural language. With this approach, the declaration:

struct LightCollection
{
	unsigned num_lights;
	LightData lights[num_lights];
};

would just be syntactic sugar for:

function unpack_light_collection(stream)
	local res = {}
	res.num_lights = unpack_unsigned(stream)
	res.lights = {}
	for i=1,res.num_lights do
		res.lights[i] = unpack_light_data(stream)
	end
	return res
end

This would allow the declarative syntax to be used in most places, but we could drop out to full-featured Turing complete code whenever needed.

This has also been posted to The Bitsquid blog.

Playing with my kids helps me make better games

Original Author: Kyle-Kulyk

A funny thing happened to me smack in the middle of my transition from the brokerage industry to the games industry. People tell you how everything changes when you become a parent. Friends of mine tried to explain the feeling, their eyes taking on a bit of a faraway look as if they were describing an unnatural love of unicorns or some sort of mythical being, while I smiled and said “Oh yeah. Oh yeah.” I often joked that agents would slip into parents’ houses at night and pump them full of endorphins while they slept, because it was the only way to describe the wonder I saw in those faces at the arrival of those little, pooping, screaming, sleep-deprivation units. “Everything changes,” they’d tell me, and I’d nod without a shred of comprehension. Then, after years of difficulties, it finally happened to my wife and me, and I got it. I understood why so many people I knew couldn’t really put the experience into words, aside from the fact that everything changes and that it’s wonderful. I don’t even bother to describe the experience to people without children now, other than to offer a genuine smile and say “Hopefully, you’ll understand one day.”

I was never around children from the time I left home until nearly 20 years later when I had kids of my own. When I was faced with other people’s children, I often found the experience awkward and a bit uncomfortable. I had no idea how to relate to kids of any age or how to interact with them. Now with children of my own I can hardly remember a time where I didn’t know how to play with children, and in return my kids have opened my eyes to why we find certain things “fun”. I hope I can describe this idea in a way that could prove useful to aspiring developers.

Playing video games in my twenties and thirties, I think I lost some of the understanding of why I found games fun to play when I was a kid. Video games to me were about roleplaying or they were about competition, and if you had asked me even three years ago why video games were fun, I probably would have described some combination of those two factors. But over the years I’d forgotten something. Perhaps not forgotten so much as overlooked. While roleplay and competition can be factors in why games are appealing long term, I think what makes video games fun is much more fundamental to the way we learn. Watching my children grow and play has helped me remember what drew me to video games as a child and what still keeps me coming back now. It has to do with learning, and the feeling of accomplishment when you finally master a challenging game.

From a very early age, babies love patterns. Nothing quite locks an infant’s gaze like faces and patterns. As they get older it doesn’t stop. We find patterns all around us all the time, even when confronted with something that doesn’t seemingly have a pattern. We see shapes in clouds, and we instantly look for some sort of familiar arrangement in a jumble of letters or numbers. I watched my son stare at a wooden puzzle, then progress to dumping the pieces and creating chaos, only to then restore order. He would continue to play in this manner until eventually it was no longer challenging to solve that particular puzzle, and suddenly that toy was forgotten for good (or until his little sister picked up a piece). He moves on to the next challenge, and that’s his day, with the exception of naps and meal time.

To me, right there I see two fundamental pieces of what keeps us coming back to a good video game. One factor is some sort of pattern-recognition mechanic, and the other is a challenge. When I look at the video games I enjoyed as a kid and the ones I enjoy now, they all have, at their core, some sort of pattern-recognition element, and they all have increasing levels of difficulty. I’d play until I either mastered the game and it became too easy, or until the difficulty became such that I grew frustrated and no longer found the experience entertaining. I see the same behaviours in the way my toddler plays. It’s fun unless the task is too difficult, and it’s fun until the task becomes too easy.

When I was a kid I remember spending quite a bit of time on Space Ace, among other games at my local arcade. Space Ace was a cartoon, laserdisc-based game along the lines of Dragon’s Lair. A series of events would play out on the screen, and a visual cue would signal the move to make, with the timing becoming more challenging as the game progressed. Mastering a game like this in a time before strategy guides and the internet took trial and error, a good memory and a pocketful of quarters, and I loved that game. That was, until I beat it. Shortly after I memorized the patterns, I moved on to the next game, only occasionally popping in a quarter to feel important when throngs of kids would gather to watch “that kid who can beat Space Ace” start a new game.

Whether it’s the timing involved in arcade fighting games or the strategy in an online shooter, when you break it down, video games are all about recognizing patterns and using them within the confines of the game’s rules. It’s an understanding of game development that, in retrospect, I feel I implemented poorly in the first game my team released, in our effort to appeal to a wider audience. Each level of the game was unique, but the challenge of the game, the pattern required to win, didn’t vary enough. Looking back at the testing, our players enjoyed the game, but the question we didn’t ask was “for how long will they enjoy it?” It’s a choice we made in the interest of appealing to a broader base, but I think this choice didn’t do us any favours, and by the time we realized this and updated the title with different ways to play, our window of opportunity had already closed. It’s something that seems so basic a notion in hindsight, but hopefully by bringing this up I can encourage other new developers to take a look at their product differently.

Playing games with a two-and-a-half-year-old also helped me rethink control schemes. My son loves to pick up a controller and ask “Sack-boy, Daddy?”, but a Playstation 3 controller and LittleBigPlanet are a bit beyond him currently. However, I sat him down with Angry Birds – Star Wars, and within seconds he was flinging birds at piggies and loving it. The same goes for playing “Digit Chase” on the Playstation Vita, a quick demo that has users tap numbers on the screen in sequence. There’s something undeniably intuitive about touch-screen input, as illustrated by how quickly children take to it, but often mobile developers try to shoehorn controller-type controls into their mobile games. I’m not saying there’s anything wrong with modern game controllers, but controls need to be intuitive. That doesn’t mean they have to be toddler-approved simple; I just think the basic controls should be straightforward. This was a lesson we learned developing our first game, reaffirmed by watching my son play. Just because you have a variety of ways to control your game doesn’t mean you should throw everything in because you can. It’s tempting to do. I know, because I did it.

I can thank the time I spend playing with my little guy for bringing me back to the basics and helping me understand why we find games fun. It’s not about simplifying the games themselves; it’s recognizing that, under everything, we’re always searching for patterns and looking to challenge ourselves, because that’s how we learn. It’s not about making controls dead simple, but it couldn’t hurt to imagine a scenario where your game is being played by a gamer who’s never gamed before. Will your controls confuse, or will they help the player become comfortable before becoming challenged? It’s easy to lose focus on basic game-play mechanics underneath everything else that makes up modern gaming, especially for experienced gamers. Watching children play and learn helped me realize this, and I look forward to gaming with both my kids for years to come, and to what they have to teach me.

A Simple System to Patch your Game Content

Original Author: Colt McAnlis

This article explains why it’s important to have your own patching system, and describes how to implement a simple patching system modeled after the Quake3 file-based patching process.

Introduction

Since the rise of PC games in the early 90s, game developers have needed ways to quickly issue fixes, updated builds, and new content to existing users – hence the rise of ‘game patch’ systems. Over time, this method of updating games has made its way from PCs to consoles, and is now trickling into mobile development. It may take some effort to build and use a patching system for your game content, but once you’ve got such a system up and running, it’s a very powerful tool for your development studio.

 

Why have your own patching system?

For modern game developers, the most popular avenue to sell games is through one of many digital distribution services like Google Play, Steam, XBLA, and the Chrome Web Store. Besides marketing games to their users, these distribution services generally handle the lion’s share of transferring game content to customers on developers’ behalf.

For games that need to update frequently, however, the content hosting process from such distribution services can be problematic. For example, some of the services can introduce significant costs in patch creation, or delays in issuing updated builds to users.  For multiplayer games, delays can let client builds get out of sync with server builds, with no way of triggering an update directly.

One way to work around such hurdles is to embrace the ability to make your content available outside of distribution services. In general, building your game technology on top of your own patching system gives you a great deal of control as a developer, and provides options that give your company and products the flexibility to move between platforms.

 

Reach your users directly

In addition to solving the problems listed above, a patching system gives you the opportunity to reach your users directly. Nowadays every game platform is constantly connected to the interwebs, and keeping a long tail of customers happy means constantly listening to the community, fixing their issues, and furnishing new content to them. A patching system lets you market to existing users with new content, as well as news, updates, and notices relating to your game.

So, be your name Buxum or Bixby or Bray, your mountain of users is waiting, patch them to happiness, and be on your way!

 

Patching system overview

Patching systems generally have 3 components:

  1. A build server that generates builds and patches (this server resides with the developer)
  2. A content server from which to distribute builds and patches
  3. A user client that can detect differences between the local and server versions of a game, retrieve assets, and update the local version

These are, at their core, the three pillars of a patching system. You can create fancier versions once you start getting into details, but such details tend to be game-specific and are beyond the scope of this article.

 

A simple patch-aware file system

The Quake3 source code contains an elementary example of a successful patching system. This simple system allows a patch to append new archives to the file system.  The new archives contain all the assets that differ from previous patches/builds. Another way to describe this system is that new archives are overlaid on top of existing archives. When an asset is to be read in from disk, the file system traverses the archives and selects the newest version of the asset.

In this simple system, over time a user who installed the original game would have multiple archive files from each progressive patch, each archive containing an updated set of assets. In contrast, a user who acquired the game much later in its life span would not have a plethora of archive files on their disk, but rather a collapsed archive representing the proper state of the world as of the time of their installation.

The Quake3 model is hard to beat for simplicity, and offers a good starting point to address more complex topics as your patching system gets more sophisticated. The example patching system that we will implement is thus based on the Quake3 model, and has the following rules:

  1. The majority of the content is archived.
  2. Content in newer archives takes precedence over content in older archives.
    • Archived content is generally not patched, but rather replaced entirely.
  3. We ignore binary patching altogether and instead include loose assets that are replaced entirely.

To restate, we assume that as far as assets go, you’ll have the lion’s share in a small number of archives, and that new content will be shipped out in the form of additional archives, the contents of which will wholly replace older content.

We thus assume that there will be a series of archive files on disk.  At load time, we open the archives and merge their file lists into a global dictionary. When it’s time to read an asset, we consult the dictionary to determine what the newest version of the asset is, and which archive to pull the asset from.
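A minimal sketch of that lookup structure, with hypothetical names (this is the idea, not Quake3’s actual code): archives are processed oldest to newest, so later entries overwrite earlier ones, and a lookup always lands on the newest archive holding an asset. Reading each archive’s directory from disk is stubbed out as input data:

#include <map>
#include <string>
#include <utility>
#include <vector>

struct ArchiveEntry {
    int archive_index;   // which archive file holds the newest version
    size_t offset;       // where the asset lives inside that archive
};

using FileDictionary = std::map<std::string, ArchiveEntry>;

// `archives` is ordered oldest first; each element lists (path, offset) pairs.
FileDictionary build_dictionary(
    const std::vector<std::vector<std::pair<std::string, size_t>>> &archives)
{
    FileDictionary dict;
    for (int i = 0; i < (int)archives.size(); ++i)
        for (const auto &entry : archives[i])
            dict[entry.first] = ArchiveEntry{i, entry.second};  // newer wins
    return dict;
}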

 

Updating your build system for patching

Build systems are a bit like sacred rituals – each company tends to have its own flavor and guards its process heavily. I’m not going to cover the concepts of a build system (or tell you how to write one); rather, I assume that you’ve got that covered.

To generate a patch for a build, your build system needs to generate a list of the files that are different from the prior build (for example, between build 299 and 300, 27 textures may have been updated, 3 models may have been deleted, and the rendering DLL may have been updated). Once you have the ability to generate this type of delta list, you need to combine the information into a patch definition, which is described below.

For our example patching system, any content that has changed or that has been added is included in the archive for a new patch. Finding new files is generally easy: Simply compare the file name listings between two folders to find what didn’t exist before. Finding existing files that have been modified can be trickier. For instance, simply testing the last-modified time of files may not work because of how your build system touches content. The fool-proof approach is to use a brute-force comparison between all the binary data in two build folders.

The ease with which your build system can compute these types of file-set differences depends greatly on the language and tools of the build system. For example, if your build system is driven by C++, a binary data compare of a 40GB build would be a gnarly and less-than-ideal process. In contrast, if your build system is driven by Python, you can simply use filecmp.dircmp, which gives you all the proper delta data between files in two directories.

Figure 2 below shows an example of build deltas.  Blue indicates modified files, red indicates deleted files, and green indicates new files.  In our example patching system, the new and modified files are included in archives, shown on the right.

Creating a patch definition

Once your build system can calculate what’s changed between two builds, the next step is to merge that data into some form that the client can consume. Patch definitions are used to list the key changes in builds over time, so that we can minimally update the client to the latest build.

We need a few pieces of information in each patch definition to help guide a client to the required actions to update itself. Here are a few examples of the type of information that should go into a patch definition (a minimal sketch of such a definition follows the list):

  • build number – What build is this? A simple integer is easiest to track.
  • target region – If you distribute your game internationally, there may be restrictions on the types of patches/content you can ship to a specific region.
  • files to add – This list should include new archives, as well as specific loose files that need to be added to the local build.
  • files to remove – At times, in the course of builds, you’ll succeed in completely replacing an old build, or there may be some security/privacy risk with old data existing on the client disk. Having the ability to remove files from disk in these situations is useful.
  • files to binary update – For files that need direct, in-place binary patching, this can be a list of tuples: the content to be patched and the patch file to use.
  • patch importance – Is this patch required before the user can play? Or can it be streamed in the background?
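Here is that sketch, as a plain struct. Field names and types are illustrative only, not a prescribed format:

#include <string>
#include <vector>

struct BinaryPatch {
    std::string target;      // file to be patched in place
    std::string patch_file;  // delta to apply to it
};

struct PatchDefinition {
    unsigned build_number;
    std::string target_region;
    std::vector<std::string> files_to_add;     // new archives and loose files
    std::vector<std::string> files_to_remove;
    std::vector<BinaryPatch> files_to_binary_update;
    bool required_before_play;                 // or stream in the background?
};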

Determining what files to download

Once you can create per-build patch definitions, the next step is to allow the client to consume this information. The process is generally as follows:

  1. The client queries the patch server, sending the local version and other metadata.
  2. The patch server responds with some form of file information.
  3. The client processes the information and begins requesting new patch data to update the local copy.

There are two primary ways (with lots of variants) to make the determination of what files to download – at the client level or at the server level.  Your choice of implementation  is highly dependent on the engineering resources that are available to you. The simplest approach in terms of server technology is to keep a text-based manifest file on the server that lists all the patches, their versions etc. This entire manifest file is passed to the client upon request, and the client is responsible for building up the series of file requests to update the local copy.
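A sketch of the client side of that manifest approach, under the assumption of a trivial one-entry-per-line text format (“build-number file”), which the article does not prescribe; the function name is made up, and networking and error handling are omitted:

#include <sstream>
#include <string>
#include <vector>

// Given the downloaded manifest text and the local build number,
// list every patch file newer than what the client already has.
std::vector<std::string> files_to_request(const std::string &manifest,
                                          unsigned local_build)
{
    std::vector<std::string> requests;
    std::istringstream lines(manifest);
    unsigned build;
    std::string file;
    while (lines >> build >> file)
        if (build > local_build)    // only fetch patches we don't have yet
            requests.push_back(file);
    return requests;
}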

While simple to implement, this approach quickly runs into limitations. Significantly, this approach requires some advanced logic built into the client to parse the manifest file and generate the request list properly. If a large content-shift occurs (for example, you change the manifest file format), the client will likely need special processing to handle the changes, and may require a patch of the client itself before it’s able to update the content.

A much more complex but scalable solution is to keep all the version data on the patch server, listed as entries in a database. The client provides some simple metadata about the state of the local data (easily encodable in a URL) to the server.  The server then computes the proper series of actions the client needs to perform in order to update itself, and transfers an ‘update script’ to the client for direct execution.

The main benefit of this server-based approach is that the computation of what the client needs to do in order to update the local state is all handled on the server. Thus, as the update logic changes, the client can remain oblivious to those changes and simply react accordingly. It also means the client can store less of the data needed for the update process (for instance, the client may only need to store its region and build number). The server can store the rest of the information needed to complete the update process, as well as provide the client with more advanced functionality, like grouping multiple patches into a single update action.

Applying patches

Once the client has a clear roadmap of what’s required to update the local build, the next step is to actually update the data. For our simple system of downloading new archives, updating content is easy – we download the bits and write them to disk. Done. Let’s get tacos.

Eventually you will encounter a situation where you need to update the game client itself. This can be tricky if the game is running. To solve this problem, most PC games distribute a separate application that checks for patches and updates the local state, including the executable code. Typically these applications are easiest to generate as standalone applications that can patch and then launch the game itself.

For embedded environments, applying patches is a bit trickier.  For example, on consoles the base data is shipped on DVD, so you generally have to write patches to the hard drive and check for content there first. Mobile platforms have a whole separate set of requirements that I won’t get into. Thankfully, most of those platforms contain APIs to help out with this process of applying patches, which makes things a bit easier.

Determining what files to delete

Over time a client can accumulate many patches, and in some cases, it may have lots of data that is no longer needed.  For example, if all of the files in a given patch have been replaced by newer versions in subsequent patches, then the files in the original patch will never be used and are not needed. To keep the user’s machine from consuming disk space unnecessarily, it’s helpful to identify such instances and allow a patch to delete files from disk.

An archive that’s no longer needed is one whose assets are entirely replaced by newer archives. To test for this, your build system needs to be modified such that a target patch-archive has the ability to query newer builds to determine if it’s relevant any more; depending on how your build system is set up, adding this processing can be easy, or months of man-work, so make sure you take full stock of your system before trudging down the path of enlightenment.
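As a sketch of that relevance test (hypothetical names, with simple asset-name lists standing in for real archive directories): an archive is obsolete once every asset it contains also appears in some newer archive.

#include <set>
#include <string>
#include <vector>

bool is_obsolete(const std::vector<std::string> &archive,
                 const std::vector<std::vector<std::string>> &newer_archives)
{
    std::set<std::string> newer;   // union of all assets in newer archives
    for (const auto &a : newer_archives)
        newer.insert(a.begin(), a.end());
    for (const auto &asset : archive)
        if (!newer.count(asset))
            return false;          // something still only lives here
    return true;                   // everything is superseded; safe to delete
}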

Notes

Binary-level file patches

If you do a search for ‘patching game content’, your results will include numerous articles that describe how to minimally modify the on-disk contents of a file at a binary level. Typically this is done by computing the difference for a file and shipping only the difference to the client, resulting in a smaller transfer and a faster patching process. Unfortunately, most of the research on patch generation revolves around how to patch executable files (see e.g. Courgette, bsdiff, and DCA). Very little (if any) research has been focused on patching binary assets like textures, models, and sounds.

Consequently, the patching system described in this article focuses less on the traditional notion of a ‘patch’ and more on a process that allows us to distribute specific assets to clients so that we can easily update the clients to the latest build. With modern compression technologies this process can result in the transfer of fairly small files to users, and the mental overhead of maintaining this type of system is significantly lower. If, however, you’re one of the brave souls who needs to do the more traditional version of patching, here are some quick notes.

As a starting point, I suggest taking a look at XDelta, which is a fairly straightforward command-line tool: you can run a simple command to create a patch, and another command to apply a patch to a file. The app is open-source, so you can build it into a custom part of your client executable. I haven’t seen XDelta produce amazing compression results, but it does the job fairly well in terms of patching. It’s also worth noting that XDelta doesn’t produce very small deltas for archive files, mainly because the predictive look-ahead model that it uses does not fare well with the contiguous file data laid out in archive files. You’ll quickly find that simply running XDelta on two archive files won’t produce the net savings that you’re looking to gain. As such, it may be more beneficial to generate patch files for each of your assets, and archive/compress those for transfer to the client.
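For reference, creating and applying a patch with xdelta3 looks roughly like this (flag details vary between versions, so check the documentation of the version you ship):

  xdelta3 -e -s old_file new_file patch_file    (encode: create a patch)
  xdelta3 -d -s old_file patch_file new_file    (decode: apply a patch)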

If you’d like to roll your own patching system for arbitrary binary content, be warned that this is a tricky problem.  While it’s easy to come up with a naive solution, you’ll soon realize that there can be multiple patch points to a single file, which usually throws simple solutions out the window. You’ll also realize that the delta between two files may not be linear – it may include deletions, insertions, and replacements, which is difficult to track.  Make sure you’ve got manager approval before going down this route 😉

 

File processing and restricted systems

Newer operating systems place restrictions on what content can be executed, mostly requiring that applications be signed before they can run. Once an application is signed, the system can detect any change to the application, whether the change is introduced accidentally or by malicious code. How to sign your assets and properly distribute/patch them to the client on a given platform is left as an exercise to you. Each OS seems to have its own policies on that process, and your actions will depend on the systems to which you ship your game.

 

Conclusion

It takes some effort to implement a patching system, but the benefits are well worth it. A patching system gives you increased flexibility and control over your game, lets you provide a better user experience, and gives you the opportunity to market new content to your users.

 

Source code

My github account has some very simple source code that implements the patching system described above.  (Be warned, it uses Python.)  The code includes a mock build system, a content server, and a client.  You can use this system with the following commands:

  1. python build/gen_patch.py
  2. python server/httpd.py
  3. python client/client.py

Command (1) generates a patch from two builds.
Command (2) starts a server from which to distribute patches.
Command (3) calculates differences in the local version (assuming the client already has the initial build) and pulls down a delta patch from the content server.

Starting a new game project? Ask the hard questions first

Original Author: Raul Aliaga Diaz

We have all been there. You wanted to start a new game project, and possibly have been dreaming of the possibilities for a long time, crafting stories, drawing sketches, imagining the dazzling effects on that particular epic moment of the game… then you start to talk to some friends about it, they give you feedback, and even might join you in the crazy journey of actually doing something about it.
Fast forward some weeks or months, and you’ve been pulling too many all-nighters, eating lots of junk food and having heated discussions. You might even have a playable prototype, several character models, animations, a carefully crafted storyline, a website with a logo and everything, but… it just doesn’t feel right. It’s not coming together, and everyone involved with the project is afraid to say something. What happened? What went wrong? How did such an awesome idea become this huge mess?

Usually all game projects emerge from a simple statement that quickly pushes the mind to imagine the possibilities. Depending on your particular tastes, background and peers, these statements can be things like: “Ace Attorney, but solving medical cases, like House M.D.!” (Ace House™), “Wario meets Braid!” (Wraid™), “Starcraft but casual, on a phone!” (Casualcraft™). These ideas can be just fine as starting points, but somewhere down the line the hardest question is: Is this game something worth doing?

When you work at a game studio and a new idea arises, that’s the first question it faces. And depending on the studio’s strengths, business strategy and past experiences, the definition of “worth” is very, very specific. It usually involves a quick set of constraints such as time, budget, platforms, audience, and team, among others. So for a particular studio that has developed Hidden Object Games and has done work for hire creating art, characters and stories for several other games, an idea like Ace House™ can be a very good fit, something they can quickly prototype and pitch to a publisher with convincing arguments to move it forward. However, in the case of a studio focused solely on casual puzzle games that has just one multi-purpose artist/designer and two programmers, it can be rather unfeasible, much more so if all but one of them says: “What’s Ace Attorney? What’s House M.D.?”

Ok, you might say, “But I’m doing this on my own, so I can fly as free as I want!” That’s not entirely true. If you want to gather a team behind an idea, all of the team members must agree that the project is worth doing, and even if you do it on your own, you must answer the question for yourself. Having fewer limitations can positively set you free, but take that freedom to find out your personal definition of worth, not to waste months on something that goes nowhere. Unless you can, like, literally burn money.

“Why is the project worth doing?” is the hardest question, and the one that must be answered with the most sincere honesty by everyone involved. The tricky part is that the answer is widely different for the many people working on a game project outside of their regular job or studies. It can be to start learning about game development, to improve a particular set of skills, to start an indie game studio, to beef up a portfolio, etc. It is O.K. to have different goals, but they all must map to a mutually agreed level of time commitment, priorities and vision. But even if you figure this out, there are still other issues.

All creative projects can be formulated as a set of risks or uncertainties, and the problem with video game development (given its highly multidisciplinary nature) is that it is very easy to miss tackling the key uncertainties, and to start working on the “easy” parts instead.

So, for example, for the Ace House™ project, it can be lots of fun to start imagining characters: doctors, nurses, patients and whatnot. There are plenty of T.V. series about medical drama to draw inspiration from, and you can almost surely have a good time developing these characters, writing about them, or doing concept art of medical staff in the Ace Attorney style. But what about the game? How do you precisely translate the mechanics from Ace Attorney to a medical drama? How is this different from a mere re-skin project? Which mechanics can be taken away? What mechanic can be unique given a medical setting? How can you ensure that Capcom won’t sue you? Are there any medic-like games already? How can we blend them? Is it possible? Is this fun at all? Is “fun” a reasonable expectation, or should the experience be designed differently?

Let’s talk about Wraid™ now. If Konami pulled off “Parodius” as a parody of “Gradius”, how cool would it be to do a parody of Braid using the characters from the Wario Ware franchise? Here you have a starting point for lots of laughs: remembering playing Braid, and putting Wario, Mona, Jimmy T. and the rest of the characters in the game, with wacky backgrounds, special effects and everything. But is this reasonable? Let’s start with the fact that Konami owns the IP of Gradius, so they can do whatever they want with it. Can you get away with making a parody of both Nintendo’s and Jonathan Blow’s IPs? Sure, sure, the possibilities can be awesome, but let’s face it: it is not going to happen. What can be a valuable spin-off, though? What if Wario Ware games had a time-manipulation mechanic? What if you took Wario’s mini games and shaped them around an art style and setting akin to Braid? (Professor Layton? Anyone?) How can you take the “parody” concept to the next level and just make “references” to lots of IP while the game is something completely new in itself?

What about Casualcraft™? Starcraft can be said to have roughly two levels of enjoyment: as an e-sport, and whatever pleasure other people draw from it. If we want to make it casual, it should not be an e-sport, should it? If you’re a Starcraft fan and have experience doing stuff for smartphones, you might think “This should be easy, I can make a prototype quickly”, and given that a mouse interface can be reasonably translated to touch, you start coding, and have a lot of fun implementing gameplay features that pump all your OOP knowledge and creative juices to the roof. But… what exactly does “Casual Starcraft” mean? How can a strategy game be casual? What is the specific thing, different from the e-sport experience, that we want to bring to a phone? Is it the graphics? Is it the unit building and leveling? Is it playing with friends? Which one of those should we aim for? Can it still be an RTS? What about asynchronous gameplay? Can this be played without a keyboard? Can it still be fast? Would it fit on a phone? People that play on a phone: would they play this game?

So, all these are tricky and uncomfortable questions, but they are meant to identify the sources of risk and figure out a way to address them. Maybe the ideas I presented here are plain bad, sure, but they are only for illustrative purposes. Since I started working in games, I’ve seen countless ideas from enthusiasts that are not really too far away from these examples anyway. The usual patterns I’ve seen are:

Not identifying the core valuable innovation, and failing to simplify the rest: It is hard to innovate, and much harder to do several innovations at once. Also, people have trouble learning about a game with too many simultaneous innovations and can quickly get lost, rendering your game as something they simply “don’t get”. The key is to identify the core innovation or value of your idea, the one single thing that, if done right, can make your game shine, and then adjust all the rest to known formulas. And by “key” innovation I mean something important, critical, not stuff like “I won’t use hearts as a health meter but rainbows!”. That can be cute, but it’s not necessarily a “key innovation”.

Putting known techniques and tools over the idea’s requirements: “I only do 3D modeling so it has to be 3D”, “I know how to use Unity so it has to be done in Unity”, “I only know RPG Maker so let’s make an RPG”. It is perfectly O.K. to stick to what you feel comfortable doing, but then choose a different idea. A game way too heavy on 3D might be awesome, but completely out of scope for a side project. Unity can be a great engine, but if all the other team members can work together in Flash on a game that everyone agrees will live primarily on the web, it can’t hurt to learn Flash. RPG Maker is a great piece of software, but if you can’t really add new mechanics and will concentrate only on creating a story, why not just develop a story then? A comic book project is much more suitable. Why would anyone play your particular game when everyone that is into RPGs surely has at least two awesome ones that they still can’t find the time to play? Instead of crippling the value or feasibility of your idea to fit your skills and resources, change the idea to something that fits.

Obsessing over a particular area of the game (tech, story, etc.): This usually happens when the true reason to do the project is to learn. You’re learning how to code graphic effects, or how to effectively use Design Patterns to code gameplay, a new texturing technique, vehicle and machine modeling, a story communicated through game assets and no words, etc. You can gain huge experience and knowledge doing this. But then it’s not a game meant to be shipped; it is a learning project, or an excuse to fulfill something you feel passionate about.

Failing to define constraints: The romantic idea of developing a game until “it feels right”. If Blizzard or Valve can do it, why can’t you? Well, because at some point you’ll want to see something done and not feel that your time has gone to waste. The dirty little secret is that constraints almost always spur creativity rather than hinder it. So choose a set of constraints to start with, at least a time frame and something you would like to see done at particular milestones: key concept, prototype, expanded prototype, game.

Refusing to change the idea: This is usually a sign of failing to recognize sunk costs. “I’ve spent so much time on this idea, I must continue until I’m done!”. The ugly truth is that if you’re having serious doubts, those will still be there and will make you feel miserable until you address them, and the sooner you act, the better. It can be that all the time you spent is effectively not wasted, but only if you frame it as your learning source to do the right things.

So if you’re starting a new game project, or are in the middle of one, try asking the tough questions: Do you know why it is worth doing? Do all the people involved agree on that? Are you making satisfying progress?

Are you sure there isn’t a question about your project you are afraid to ask, because you fear it might render your idea unfeasible, worthless or messy?

Don’t be frightened, go ahead. If it goes wrong, you will learn, you will improve and the next idea will get to be shaped much better.

Speaker Highlight – Tim Stobo

Original Author: AltDevConf

Today’s Speaker Highlight is Tim Stobo of Kennedy Miller Mitchell Interactive Productions. Tim’s session is called “I Can’t Believe I Do This For A Living: The Generalist Game Designer”.

Development at KMMIP is built around the concept of game designers as ‘vision holders’. They work across disciplines within the team and act as evangelists for the game. All designers are generalists, tasked with level and systems design and individually responsible for multiple hours of game content.

In his first job, working as a designer on L.A. Noire, Tim saw his responsibilities grow throughout the development process. The different decisions and skills required on any single day gave Tim a unique insight into AAA development and the many aspects that make up the craft of game design. In this talk, Tim looks at a day in the life of a generalist game designer, without a technical background, working in the AAA space.

If you would like more information about Tim, here is his bio:

Tim is a game designer with five years’ professional experience. After studying screenwriting at the University of Technology, Sydney, he went on to work for Team Bondi, helping design levels and mechanics for the multi-million-selling detective action-adventure L.A. Noire. Tim stayed with Team Bondi after its closure and rebirth as KMMIP, where he continues to work on their next ground-breaking title.

The AltDev Student Summit is shaping up to be an absolutely brilliant event. You can read more about it at the conference mini-site, and we really hope that you will join us November 10th and 11th to listen to these excellent speakers.

Bitsquid Foundation Library

Original Author: Niklas Frykholm

Today I want to talk a bit about the Bitsquid Foundation Library that we recently released on Bitbucket (under the permissive MIT license).

It’s a minimalistic “foundation” library with things like memory management and collection classes. The idea is to have something that can be used as a reasonable starting-off point for other open source projects.

The library makes some interesting design choices that touch on topics I have already talked about on this blog and that I think are worth elaborating on a bit further. It also serves as an example of how these techniques can be used in “real world” code.

Separation of data and code

The foundation library implements the idea of separating data definitions from function implementations that I talked about in this article.

Data is stored in structs with public members (prefixed with an underscore to indicate that you should not mess with them unless you know what you are doing) that are found in *_types.h files. Functions that operate on the data are written outside of the struct, in separate *.h files (and organized into namespaces).

For example, the data definition for the dynamic Array<T> class is found in collection_types.h:

template<typename T> struct Array
{
    Array(Allocator &a);
    ~Array();
    Array(const Array &other);
    Array &operator=(const Array &other);

    T &operator[](uint32_t i);
    const T &operator[](uint32_t i) const;

    Allocator *_allocator;
    uint32_t _size;
    uint32_t _capacity;
    T *_data;
};

The struct contains only the data used by the array and the operators which C++ forces us to implement as member functions.

The implementations of these functions, as well as the declarations and definitions of all the other functions that operate on arrays, are found in the array.h file. It contains things like:

namespace array
{
    template<typename T>
    inline uint32_t size(const Array<T> &a)
    {return a._size;}

    template<typename T>
    inline bool any(const Array<T> &a)
    {return a._size != 0;}

    template<typename T>
    inline bool empty(const Array<T> &a)
    {return a._size == 0;}
}

template <typename T>
inline Array<T>::Array(Allocator &allocator) :
    _allocator(&allocator), _size(0), _capacity(0), _data(0)
{}

This way of arranging data and code serves two purposes.

First, it improves compile times by reducing header inclusion. Header files that want to make use of arrays only need to include collection_types.h, which just contains a few struct definitions. They don’t have to drag in array.h, with all its inline code.

Headers including other headers indiscriminately because they need their types is what leads to exploding compile times. By only including the minimal thing we need (the type definitions), compile times are minimized.

Second, and more importantly, this design allows the collection types to be freely extended. Is there anything you miss in the array interface? Perhaps you would like shift() and unshift() methods? Or binary_search()?

No problem. If you want them you can just add them, and you don’t even need to modify array.h. Just create your own file, array_extensions.h or whatever, and add some new functions to the array namespace that manipulate the data in the Array<T> struct. The functions you create will be just as good as the functions I have created.
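For instance, a sketch of such an extension (array_extensions.h and index_of are made-up names, not part of the library):

// array_extensions.h -- a user-written extension, on equal footing with
// the library's own functions because it only touches the public data
// members of Array<T>.
#include "collection_types.h"

namespace array
{
    // Returns the index of the first occurrence of `item`,
    // or the size of the array if the item is not found.
    template<typename T>
    uint32_t index_of(const Array<T> &a, const T &item)
    {
        uint32_t i = 0;
        while (i < a._size && !(a._data[i] == item))
            ++i;
        return i;
    }
}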

Note that this isn’t true for traditional class designs, where you have first-class citizens (methods) and second-class citizens (external functions).

The foundation library has some interesting examples of this. For example, the string_stream functions don’t operate on any special StringStream class, they just directly use an Array<char>. Also, the hash and multi_hash interfaces both work on the same underlying Hash<T> struct.

I believe that this design leads to simpler, more orthogonal code that is easier to extend and reuse.

Memory management

The library implements the allocator system mentioned in this article. There is an abstract Allocator interface, and implementations of that interface can provide different allocation strategies (e.g. ArenaAllocator, HeapAllocator, SlotAllocator, etc).

Since I want to keep the library platform independent, I haven’t implemented a PageAllocator. Instead, the MallocAllocator is used as the lowest allocator level. If you want to, you can easily add a PageAllocator for your target platform.

For the same reason, I haven’t added any critical section locking to the allocators, so they aren’t thread safe. (I’m thinking about adding an interface for that though, so that you can plug in a critical section implementation if needed.)

The system for temporary allocations is kind of interesting and deserves a bit of further explanation.

Most games have a need for temporary memory. For example, you might need some temporary memory to hold the result of a computation until it is done, or to allow a function to return an array of results.

Allocating such memory using the ordinary allocation system (i.e., malloc) puts a lot of unnecessary stress on the allocators. It can also create fragmentation, when long lived allocations that need to stay resident in memory are mixed with short lived temporary allocations.

The foundation library has two allocators for dealing with such temporary allocations, the ScratchAllocator and the TempAllocator.

The ScratchAllocator services allocation requests using a fixed size ring buffer. An allocate pointer advances through the buffer as memory is allocated, and a corresponding free pointer advances as memory is freed. Memory can thus be allocated and deallocated with simple pointer arithmetic. No calls need to be made to the underlying memory management system.

If the scratch buffer is exhausted (the allocate pointer wraps around and catches up with the free pointer), the ScratchAllocator will revert to using the ordinary MallocAllocator to service requests. So it won’t crash or run out of memory. But it will run slower, so try to avoid this by making sure that your scratch buffer is large enough.

If you forget to free something allocated with the ScratchAllocator, or if you accidentally mix in a long-lived allocation among the short-lived ones, that allocation will block the free pointer from advancing, which will eventually exhaust your scratch buffer, so keep an eye out for such situations.
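To make the mechanics concrete, here is a stripped-down sketch of the ring-buffer scheme. It is not the library’s actual implementation: the names are made up, alignment handling is reduced to rounding slot sizes up to 4 bytes, and it asserts where the real ScratchAllocator would fall back to the MallocAllocator:

#include <cassert>
#include <cstdint>
#include <cstdlib>

class RingScratch
{
    // Each slot starts with a 4-byte header holding the slot size;
    // the high bit marks the slot as freed.
    enum : uint32_t { FREED = 0x80000000u };

    char *_begin, *_end, *_allocate, *_free;

    static uint32_t &header(char *slot) { return *(uint32_t *)slot; }

    // Is p inside the live region between the free and allocate pointers?
    bool in_use(const char *p) const {
        if (_free == _allocate) return false;               // buffer empty
        if (_allocate > _free) return p >= _free && p < _allocate;
        return p >= _free || p < _allocate;                 // live region wraps
    }

public:
    explicit RingScratch(uint32_t size)                     // size: multiple of 4
        : _begin((char *)std::malloc(size)), _end(_begin + size),
          _allocate(_begin), _free(_begin) { assert(size % 4 == 0); }
    ~RingScratch() { std::free(_begin); }

    void *allocate(uint32_t size) {
        // Round up so slots (and thus headers) stay 4-byte aligned.
        uint32_t total = (uint32_t)(sizeof(uint32_t) + size + 3) & ~3u;
        assert(total <= (uint32_t)(_end - _begin) && "larger than buffer");
        if (_allocate == _end) {                            // exactly at the end
            _allocate = _begin;
        } else if (_allocate + total > _end) {              // tail too small:
            header(_allocate) = FREED | (uint32_t)(_end - _allocate);
            _allocate = _begin;                             // mark it freed, wrap
        }
        assert(!in_use(_allocate + total) && "scratch buffer exhausted");
        header(_allocate) = total;
        void *p = _allocate + sizeof(uint32_t);
        _allocate += total;
        return p;
    }

    void deallocate(void *p) {
        char *slot = (char *)p - sizeof(uint32_t);
        header(slot) |= FREED;                              // mark slot freed...
        while (_free != _allocate) {                        // ...then advance the
            uint32_t h = header(_free);                     // free pointer over
            if (!(h & FREED)) break;                        // any freed slots
            _free += h & ~FREED;
            if (_free == _end) _free = _begin;
        }
    }
};

The important property is visible in allocate() and deallocate(): both are just pointer arithmetic, with no call into the underlying memory system.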

TempAllocator<BYTES> is a scoped allocator that automatically frees all its allocated memory when it is destroyed (meaning you don’t have to explicitly call deallocate(), you can just let the allocator fall out of scope). This means you can use it everywhere where you need a little extra memory in a function scope:

void test()
{
    TempAllocator1024 ta;
    Array<char> message(ta);
    ...
}

The BYTES argument to TempAllocator<BYTES> specifies how much stack space the allocator should reserve. The TempAllocator contains a char buffer[BYTES] that gets allocated on the stack together with the TempAllocator.

Allocation requests are first serviced from the stack buffer, then (if the stack buffer is exhausted) from the ScratchAllocator.

This means that TempAllocator gives you an allocator that can be used by all collection classes and will use the fastest allocation method possible (local stack memory, followed by scratch buffer memory, followed by malloc() if all else fails).

Minimalistic collection types

The collection classes in the library are distinctly anti-STL. Some of the important differences:

  • They use the allocation system described above (taking an Allocator as argument to the constructor). They can thus be used sensibly with different allocators (unlike STL types).

  • They use the data/function separation also described above, which means that the headers are cheap to include, and that you can extend them with your own functionality.

  • They use a minimalistic design. They assume that the stored data consists of plain-old-data objects (PODs). Constructors and destructors are not called for the stored objects and they are moved with raw memmove() operations rather than with copy constructors.

This simplifies the code and improves the performance (calling constructors and destructors is not free). It also saves us the headache of dealing with storing objects that must be constructed with Allocators.

Personally I like this minimalistic approach. If I want to keep non-POD data in a collection, I prefer to store it as pointers anyway, so I have control over when and how the data is constructed and destroyed. I don’t like those things happening “behind my back”. You may disagree of course, but in that case you are probably happy to use STL (or boost).

Another example of choosing minimalism is the Hash<T> class. The hash uses a fixed key type which is a uint64_t. If you want to use a key that doesn’t fit into 64 bits, you have to hash it yourself before using it to access the data.
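For example, a string key can be reduced to the expected uint64_t with any reasonable 64-bit hash before lookup. Here is a minimal FNV-1a sketch (my choice for brevity; the library does not prescribe a hash function):

#include <cstdint>

// 64-bit FNV-1a: hash each byte of the string into a 64-bit state.
uint64_t hash_string(const char *s)
{
    uint64_t h = 14695981039346656037ull;   // FNV offset basis
    while (*s) {
        h ^= (uint8_t)*s++;
        h *= 1099511628211ull;              // FNV prime
    }
    return h;
}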

And more?

I’m planning to add some basic math code to the library, but haven’t gotten around to it yet.

Is there anything else you’d like to see in a library like this?

This has also been posted to the Bitsquid blog.