Fixing the DirectX D3D debug layer

Original Author: Richard Munn

If you’ve ever done any serious Direct3D 10/11 work, you’ll know how invaluable the Direct3D debug layer can be when you’re having problems. With it running you get a stream of informational messages, warnings and errors telling you where you’re going wrong.

Previously this was invaluable when tracking down bugs in a DirectX-based Autodesk Maya plugin. Lately, however, the debug messages would not show up in Visual Studio, and it was not clear why. Many things were reinstalled, and older versions were tried, but to no avail.

Eventually investigation turned up a blog post on MSDN that seems to explain what has happened. Basically, DirectX 11.1 has come along and changed how things work, so the debug layer installed by the old DirectX SDK is no longer functional (DirectX is now part of the Platform SDK). I presume DX11.1 ended up installed as part of a service pack, a KB patch, or when Visual Studio 2012 was installed. This means that the DirectX Control Panel (“dxcpl.exe”) installed by the DirectX SDK is hanging on like a vestigial organ, not performing any useful function.

Further digging uncovered that there were multiple versions of the DirectX Control Panel on the computer. The ones installed as part of the old (legacy) DirectX SDK come in 32-bit and 64-bit versions and are linked in the start menu under Microsoft DirectX SDK (June 2010); newer versions lurk deep within the Windows folder.

[Screenshots: the old DXSDK control panel (left) and the newer “11.x” control panel (right)]

The original DXSDK versions (in 32-bit and 64-bit flavours) look like the screenshot on the left. The screenshot on the right shows the versions found in C:\Windows\SysWOW64 and C:\Windows\System32, which don’t appear to have start menu shortcuts. Note that it says 11.x rather than 11, and has a few newer options, but also only offers the D3D tab.

The “11.x” versions seem to share their settings data with the older DXSDK versions, so it’s easy to get the two confused. However, it appears that once DX11.1 is installed, only the latter versions in the Windows folder will actually enable the debug layer and display output in the debugger.

Looking at a new machine that’s never had the DXSDK installed, it does have the Windows versions of the tool available.

In summary: don’t use the DirectX Control Panel shortcut, run C:\Windows\SysWOW64\dxcpl.exe instead.  On new machines, don’t install the DirectX SDK, just the Platform SDK, and you (hopefully) won’t get in such a tangle!

The Jones On Fire Postmortem: A Game In Five Acts

Original Author: Megan

Author’s Note: I should mention that Michael Nielsen was with me on Jones On Fire from the very beginning – he did the music for it during Blaze Jam, and stayed on from there. Nathan Madsen was brought on shortly after, to round out the music and handle sound effects. Folmer was brought on toward the end, to handle promotional art, icons, that kind of thing. So, when you see a “we”, that’s the “we” I’m referring to.

Also, this is my first #AltDevBlog – it seemed appropriate that my first act should be to sum up how I got to where I am.

I hate writing endings.

Writing this effectively means putting an end to Jones On Fire, which remains something of a bummer. It was meant to be a strong launching pad for our studio, the game on which we’d make our name. Ironically, it managed to become just that despite everything. But we’re getting ahead of ourselves.

I write this now because our next game, Hot Tin Roof, is doing fantastically on Kickstarter (go take a look – and thanks!)

… you looked? Excellent! Yes, kitties in fedoras, no joke. Noir metroidvania. It’ll be awesome!

In any case, now seems to be the right time to write an ending to Jones On Fire. The Hot Tin Roof Kickstarter is ending, but that ending is itself a beginning, and one can’t start a new thing without ending the old thing first, so, here we go.

The Life And Times Of Jones On Fire, Act 1: “What About A Runner?”

It’s spring in Colorado, April 2012, and I have a problem. It’s been 6 months since I was laid off after LEGO Universe’s cancellation (I was the senior graphics programmer on it), and I do not have a game with which to save my fledgling indie studio. We have under our belts a too-ambitious metroidvania prototype, a promising but failed Kickstarter for a racer called Gravitaz, and a try at a casual Facebook game that didn’t congeal. What’s more, I just had to let my artist partner go for lack of anything left with which to pay him, meaning it’s now, more or less, just me: a programmer with no ability to create art on her own.

Panic? Psyah, of course I didn’t panic. I was far too terrified to be panicked.

But then, a vision. “Well, mobile seems like a great place for indies these days,” I thought to myself, and so I did some research. “I’d like to do a platformer of some kind, but that would be too big, too risky, and I couldn’t monetize it as F2P… so what about a runner!” I decided. “It’s like a platformer, but faster to make, and one would probably do pretty well! They’re popular!”

Little did I know it was at that precise moment when Jones On Fire failed, at least in financial terms. But more on that later.

The Life And Times Of Jones On Fire, Act 2: “I Swear This Will Only Take 6 Months”

It did not take 6 months. It took 8. I didn’t even know what game I was making until about 3 months in.

Colorado was having one of the worst wildfire seasons on record, so Dave Calabrese of Cerulean Games threw together a charity game jam to try and help out. Called “Blaze Jam,” it was to materially benefit those affected by the wildfires. I kicked in my support, we had some fun, and we raised around $2k. Not bad. In the process, I met Marc Wilhelm, Dave Calabrese, and a bunch of other people that turned out to be really good people to know.

More importantly, I drew a cat. I had a runner codebase with 3 months of development behind it by that point, so I figured I’d make some kind of runner, and I started by drawing a firefighter on the whiteboard. I used a boxy LEGO-inspired style I’d wanted to experiment with, and it worked, even via my stupid hands. I followed that up with a cat in the same style, and it worked too, so I ran with it. I was also out of time, and needed to finish the game, so instead of a broad range of animals to rescue, I ended up just making cats.

The second I added meow effects that played when kitties were rescued, I knew I had something special.

I kept toying with it after the jam. I got it running on an iPhone (Unity 3D is amazing), and I heard from my testers that their kids would abscond with their mobile devices and never give them back. Jones had legs, it seemed. I finished the game out in that style, never substantially changing it from the base game I’d created in the jam, and I fleshed the world out the only way I know how – with quirky, sardonic writing and characters.

It bears mentioning at this point that my mother is both a technical editor and a novelist. Similarly, my grandmother was a grant writer and general badass of the written word, and some of my earliest memories are of WordPerfect on her old 80186. So yes, I’m a writer too, with a quirky and sardonic sense of humor, so wouldn’t you know it, that’s what I write. But anyways, firefighters.

The Life And Times Of Jones On Fire, Act 3: “Of Cratering Harder Than The Moon”

Jones On Fire released in the same week as Sonic Dash, Temple Run OZ, and Outland Games – all runners, all with higher production values and/or attached to bigger IPs than I could ever compete with in a billion years. Also released that week was Block Fortress, an indie mobile take on mixing Minecraft with tower defense, that went on to deservedly get the single best feature slot there is on the App Store.

Needless to say, Jones On Fire did not get a good featuring. It did get featured, but since Apple had just overhauled their featuring slots, the featuring we got is what you’d call a “weak” featuring. It was way down the list, and below the fold for iPhones (which means you had to scroll to the end, click a button, and THEN you saw it). For most mobile gamers, this means it was as good as invisible.

Still, we had a featuring, so I was excited, and I thought we had a chance… except that we had another problem. The game was monetizing terribly. I was getting an order of magnitude fewer players than it would take to support a F2P game, even during the first day, which is the strongest day you’ll ever really see. I think I made $200 that day, and it dropped to something like $20 the next day, and kept dropping. Which basically meant all my friends came in, bought some stuff, and then regular gamers were barely buying anything. I’m good with numbers and had done a ton of research, and I could see how it would play out from there – the metrics showed it was a F2P failure, day one.

At this point, I had two choices. I could have kept it as F2P, limping along for months, as I improved the monetization. I could have spent ages analyzing my Flurry data, optimizing funnels, and generally wanting to drive an icepick through my skull as I tried to squeeze blood from a stone. If I’d had a decent body of players, I could have done something with it, but again, we’re talking about download numbers an order of magnitude under what it would take to float F2P. Certainly way below what it would take to pay myself during all of those months of optimization.

My other choice was to flip it to Premium, $1.99, to try and take advantage of my remaining featuring. It was risky, but then I could at least walk away if it kept sinking, instead of getting dragged under with it.

It was a long weekend, and I didn’t sleep much. Eventually, I chose option 2. I stripped all the IAP and flipped to $1.99 so fast that I didn’t have a chance to telegraph the change to any of the press that had come out in support of the game on Monday, which left a few understandably confused. I did my best to message out the whys of the change and try to make sure no one was too upset, and I imagine some still had ruffled feathers… but what could I do? I had to make a choice, so, I made it.

The Life And Times Of Jones On Fire, Act 4: “Wherein The Programmer Learns To Drink”

Matters didn’t improve from there.

Make no mistake, we were a strong critical success. We were on Kotaku (three times no less), we had a glowing review on Touch Arcade, and we got a much-coveted 5/5 review on iFanzine – it was fantastic. Every single one of them gave us high marks for charm, polish and general style.

Two things held us back. The first is that mobile gamers, on the whole, simply don’t care about reviews. The gamer gamers do, but it turns out that mobile successes are instead driven by the masses, which means store placement is the Alpha and Omega, and we just didn’t gel with those audiences.

The second and most important thing, and much of the reason we didn’t gel with the casual audience, was that choice, that terrible choice, made 8 months prior. Do you remember? It was simply this: “let’s make a runner, it’s less risky.”

Every single review dinged us for being a derivative runner. We released against 3 other runners, all more polished, all from huge studios. Multiple runners had released every single week prior, and more were released every single week after. We were a quirky, strange game in a genre dominated by polish and big budgets / big IPs. It’d be like releasing an indie FPS and expecting it to compete with Halo. We got flattened. We cratered so hard, the moon felt bad for us.

The same story replayed with every new platform we added. We expanded to GooglePlay, to Amazon, and to a smattering of smaller Android storefronts. We got, literally, every single featuring that matters – we were the Amazon Free App of the Day, we were featured strongly by GooglePlay, and we partnered with an iOS/Android Free App of the Day program (multiple times!). The game even showed at E3, thanks to the Samsung “100% Indie” promotion.

It slowly became clear that the poor sales I’d initially blamed on poor placement in the App Store were simply because I’d made the wrong game. We made a runner long after the runner genre was targetable by anything shy of a $100k+ budget. If we’d made something more original, if we hadn’t played it as safe, maybe we would have done better… though not necessarily. While we were failing miserably with Jones On Fire, I believe Hackeycat was suffering a similar fate, and derivative it most definitely was not. So let’s say it was the lack of risk taking mixed with pure, dumb luck.

All told, we drove over 200,000 installs, mostly through FAAD promotions, which made us about $7k. The game, in terms of “the cost of a roof and rice and beans for the development months” + cash expenses, cost about $20k to make.

So clearly, it was a financial failure, though not as big a one as it looked initially. What was a gut-wrenching $700 on the App Store in the first month quickly became $4,000 thanks to that strong GooglePlay featuring (Ko Kim, you are amazing, thank you so much for helping get us featured). That then grew, over time, to about $7,000, and it continues to drip up thanks to a mixture of mostly GooglePlay sales and Admob-driven income. Nothing really worth mentioning, but it is still making money, which is kind of cool.

But it was not a failure. Not even remotely. It was a roaring success.

The Life And Times Of Jones On Fire, Act 5: “The Silver Lining”

Finances aren’t everything. Yes, I blew my savings on a game that ultimately didn’t make its money back. Yes, it kind of soured me on mobile in general, and left me bitter for a few months. It did not, however, kill my studio.

After Jones On Fire, I had visibility. I had quite a few people press-side who knew who we were, and respected the style we put into our games. I also had a decent following on Twitter, around 1100 by that point, and it continued to tick up. Most importantly, everyone really wanted to know what we’d make next – they viewed Jones On Fire as a flawed but charming game, and a strong opening act for a studio. People have paid $100k+ for that kind of awareness, and all I paid was $20k, mostly in sweat equity.

I also had that minor but real income from it. Don’t think about the lost $20k, that’s sunk costs, just think about the $7k gained. Between that, and the remainder of my savings, I had about 4 months of dev time and some budget for contracting et al. Not enough to make a full game, but enough to take a solid stab at one. That meant Kickstarter, and this time, I knew better than to play it safe, especially there.

So, I did the logical thing. I doubled down on kitties.

I took everything that worked in Jones On Fire, the characters and style basically, and threw away the rest. Then I went into a cave and bluesky’ed up a prototype of a physical, interactive revolver – no real reason, it just seemed cool at the time, and I’d been inspired by a game called Receiver (which featured a fully interactive automatic pistol). From that came detectives, noir. Mix back in the bits of Jones On Fire, let bake for 1 month, and out comes the next entry in the Emma Jones saga. I knew it demanded more than programmer art, so I sat down with Folmer to come up with an appropriate evolution of our style, and then I emailed some old coworkers to see if they were game to make something big. They were – largely because the critical success of Jones On Fire showed them I was for real. Thus was born Hot Tin Roof: something quirky, something fun, something unique.

Most importantly, Hot Tin Roof was something I could be proud of. I knew I didn’t want to blow what might have been my last 4 months as an indie working on something safe. So I went all out, and figured if nothing else, when I looked back on what I’d done, I could think “damn, I really went out swinging!”. As it happens, that swing connected, and now here I am, sitting on a successful Kickstarter to the tune of $21,781 and rising. Hot Tin Roof: The Cat That Wore A Fedora will happen, we’ll survive through to next year, and we’re doing well enough on Greenlight that we should easily hit Steam too. All told, we have an extremely good chance of a successful launch next year, and if you’re on Steam, that means real, business-sustaining levels of income.

And that’s how you build an indie studio on a foundation of quirk, blocks and kitties.

Scripted Network Debugging

Original Author: Niklas Frykholm

Debugging network problems is horrible. Everything is asynchronous. Messages can get lost, scrambled or delivered out-of-order. The system is full of third-party black boxes: external transport layers (PSN, Steam, etc), routers, firewalls and ineptly written packet intercepting anti-virus programs. (I’ve got my eye on you!)

Reproducing a problem requires setting up multiple machines that are all kept in sync with any changes you make to the code and the data in order to try to fix the problem. It might also require roping in multiple players to actually sit down and play the game on all those machines. This can make a simple bug turn into a multi-day or even a multi-week problem.

Here are some quick tips for making network debugging easier:

  • Have a single place for disabling timeouts. Few things are as frustrating as looking at a problem in the debugger, almost finding the solution and then having the entire game shut down because the server flagged your machine as unresponsive while you were broken into the debugger. Having a single place where you can disable all such timeouts makes the debugger a lot more useful for solving network problems.

  • Attach Visual Studio to multiple processes. Not everybody is aware of this, but you can actually attach the Visual Studio debugger to multiple processes simultaneously. So you can start a network session with eight players and then attach your debugger to all of them. This can be used to follow messages and code flow between different network nodes.

  • Make sure you can start multiple nodes on the same machine (using different ports). This allows you to debug many network issues locally, without running between different machines in the office or gathering a stack of laptops on your desk. Of course this doesn’t work if you are targeting consoles or Steam, since you can’t use multiple Steam accounts simultaneously on the same machine. (Please fix, k thx bye!)

  • Have a way to log network traffic. We have a switch that allows us to log all network traffic (both incoming and outgoing) to a file. That file can be parsed by a custom GUI program that understands our network protocol, letting us see all the messages to and from each node, when they were sent and when they were received. This allows us to diagnose many network issues after the fact. We also have a replay functionality, where we can replay such a saved log to the network layer and get it to behave exactly as it did in the recording session.

But today I’m going to focus on a different part of network debugging: scripted tests.

The idea is that instead of running around manually to a lot of different machines, copying executables and data, booting the game, jumping into menus, etc, etc, we write a little Ruby script that does all that for us:

  • Distribute executables

  • Distribute project data

  • Start the game

  • Set up a multi-player session

  • Play the game

I recently had to debug a network issue with a low reproduction rate. With the script I was able to set up and run 500 sample matches in just a few hours and reproduce the bug. Doing that by hand wouldn’t even have been possible.

Let’s look at each of the tasks above and see how we can accomplish them:

Distribute executables

This could be done by the script, but to simplify things as much as possible, I just use a Bittorrent Sync folder for this. I’ve shared the tool-chain directory on my development machine (the tool-chain contains the tools and executables for all platforms) and connected all the other machines to that directory. That way, whenever I build a new executable it is automatically distributed to all the nodes.

I have a nodes-config.rb file for defining the nodes, where I specify the tool-chain directory used by each node:

  LOCAL = Node.new(
  	:exec => LocalExec.new,
  	:toolchain => 'c:\work\toolchain')
  MSI = Node.new(
  	:ip => '',
  	:toolchain => 'd:\toolchain',
  	:exec => PsExec.new(:name => 'bitsquid-msi', :user => 'bitsquid', :password => ask_password('bitsquid-msi')))
  MACBOOK = Node.new(
  	:ip => '',
  	:toolchain => 'c:\toolchain',
  	:exec => PsExec.new(:name => 'bitsquidmacbook', :user => 'bitsquid', :password => ask_password('bitsquidmacbook')))
  NODES = [LOCAL, MSI, MACBOOK]

Distribute project data

Since the Bitsquid engine can be run in file server mode I don’t actually need to distribute the project data. All I have to do is start a file server on my development machine and then tell all the network nodes to pull their data from that file server. I do that by starting the engine with the arguments:

-host -project samples/network

The nodes will pull the data for the project samples/network from the file server at the specified IP, and all get the latest data.

Start the game

On the local machine I can start the game directly with a system() call. To start the game on the remote machines I use PsExec. The relevant source code in the script looks like this:

  require_relative 'console'

  # Class that can launch executables on the local machine.
  class LocalExec
  	def launch(arg)
  		system("start #{arg}")
  	end
  end

  # Class used for executables launched by other means.
  class ExternalExec
  	def launch(arg)
  	end
  end

  # Class used for executables launched on remote machines with psexec.
  class PsExec
  	def initialize(args)
  		@name = args[:name]
  		@user = args[:user]
  		@password = args[:password]
  	end

  	def launch(arg)
  		system("psexec \\\\#{@name} -i -d -u #{@user} -p #{@password} #{arg}")
  	end
  end

  # Class that represents a node in the network test.
  class Node
  	# Initializes the node from hash data
  	#
  	# :ip => ''
  	#  	The IP address of the node.
  	# :toolchain
  	#  	Path to the toolchain folder on the node machine.
  	# :exec =>
  	#  	Class for executing programs (LocalExec, ExternalExec, PsExec)
  	# :port => 64000
  	#  	Port that the node should use.
  	def initialize(args)
  		@ip = args[:ip] || ''
  		@toolchain = args[:toolchain]
  		@exec = args[:exec] || LocalExec.new
  		@port = args[:port] || 64000
  	end

  	# Starts the project on the remote node and returns a console connection for talking to it.
  	def start_project(arg)
  		@exec.launch "#{exe_path} -port #{@port} #{arg}"
  		return Console.new(@ip, @port)
  	end

  	def exe_path
  		return @toolchain + '\engine\win32\bitsquid_win32_dev.exe'
  	end
  end

Each node specifies its own method for launching the game with the :exec parameter, and that method is used by start_project() to launch the game.

Additional execution methods could be added – for example, for launching the game on PS3s and X360s.
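The execution classes all share the same small interface (an optional initialize plus launch), so adding a new target is just another tiny class. As an illustration only – SshExec, the host/user values and the ssh invocation here are my own invention, not part of the Bitsquid script – a remote Unix-style launcher might look like:

```ruby
# Hypothetical launcher that starts the game on a remote Unix-like
# machine over ssh, mirroring the LocalExec/PsExec interface.
class SshExec
	def initialize(args)
		@host = args[:host]
		@user = args[:user]
	end

	# Build the shell command without running it, so it can be inspected.
	def command(arg)
		"ssh #{@user}@#{@host} '#{arg}'"
	end

	def launch(arg)
		system(command(arg))
	end
end

exec = SshExec.new(:host => 'devkit-1', :user => 'bitsquid')
puts exec.command('run_game -port 64000')
```

A node would then simply pass `:exec => SshExec.new(...)` in its configuration, and start_project() would work unchanged.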

Set up a multi-player session

To get the game to do what we want once it has started we use the in-game console.

All Bitsquid games act as TCP/IP servers when running in development mode. By connecting to the server port of a running game we can send Lua script commands to that game. The Ruby code for doing that is mercifully short:

  require 'socket'

  # Class that handles console communication with a running bitsquid executable.
  class Console
  	JSON = 0

  	# Create a new console connection to specified host and port.
  	def initialize(host, port)
  		@socket = TCPSocket.new(host, port)
  	end

  	# Send the specified JSON-encoded string to the target.
  	def send(json)
  		msg = [JSON, json.length].pack("NN") + json
  		@socket.write(msg)
  	end

  	# Send the specified lua script to be executed on the target.
  	def send_script(lua)
  		lua = lua.gsub('"', '\"')
  		send("{type: \"script\", script: \"#{lua}\"}")
  	end
  end

  # Handles multiple console connections
  class Consoles
  	attr_reader :consoles

  	def initialize(arg)
  		@consoles = arg.respond_to?(:each) ? arg : [arg]
  	end

  	def send_script(lua)
  		@consoles.each do |c| c.send_script(lua) end
  	end
  end

Node.start_project() returns a Console object that can be used to talk with the newly created network node. Since all the gameplay code for Bitsquid games is written in Lua, setting up a multi-player game is just a matter of sending the right Lua commands over that connection.
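As an aside, the wire format used by Console.send() is simple enough to poke at from plain Ruby: a 4-byte big-endian message type, a 4-byte big-endian payload length, then the JSON text itself. A self-contained round-trip sketch (the frame/unframe helper names are mine, for illustration):

```ruby
# A console message is: 4-byte big-endian type, 4-byte big-endian
# payload length, then the payload bytes themselves.
JSON_TYPE = 0  # the constant the Console class calls JSON

def frame(json)
	[JSON_TYPE, json.length].pack("NN") + json
end

def unframe(msg)
	type, length = msg.unpack("NN")  # "NN" reads the first 8 bytes
	[type, msg[8, length]]
end

payload = '{type: "script", script: "print(42)"}'
type, body = unframe(frame(payload))
puts type              # prints 0
puts body == payload   # prints true
```

This also makes it easy to write small diagnostic tools that speak to a running game without pulling in the full script.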

Those commands will be game specific. In the network sample where I implemented this, there is a global variable called force_menu_choice which, when set, will force a selection in the in-game menus. We can use this to set up a network game:

  require_relative 'nodes-config'

  consoles = NODES.collect do |n| n.start_project("-host -project samples/network") end
  server = consoles[0]
  clients = Consoles.new(consoles[1..-1])
  all = Consoles.new(consoles)

  puts "Waiting for exes to launch..."
  sleep(30) # exact delay is a judgement call; give the games time to boot

  puts "Launching steam..."
  all.send_script %q{force_menu_choice = "Steam Game"}
  server.send_script %q{force_menu_choice = "Create Lobby"}
  clients.send_script %q{force_menu_choice = "Find Lobby"}
  clients.send_script %q{force_menu_choice = "Niklas Test Lobby"}
  server.send_script %q{force_menu_choice = "Start Game"}

This will start a Steam Lobby on the server, all the clients will search for and join this lobby and then the server will start the game.

Play the game

Playing the game is again just a question of sending the right script commands to expose the bugs you are interested in. In my case, I just tested spawning some network synchronized boxes:

  server.send_script %q{
  	local self = Sample.screen
  	local camera_pos = Unit.world_position(self.world_screen.camera_unit, 0)
  	local camera_forward = Quaternion.forward(Unit.world_rotation(self.world_screen.camera_unit, 0))
  	local box_unit = World.spawn_unit(self.world_screen.world, "units/box/box", camera_pos)
  	local box_id = GameSession.create_game_object(self.game_session, "box", {position=camera_pos})
  	self.my_boxes[box_id] = box_unit
  	Actor.set_velocity(Unit.actor(box_unit, 0), camera_forward*20)
  }

  clients.send_script %q{
  	local self = Sample.screen
  	local camera_pos = Unit.world_position(self.world_screen.camera_unit, 0)
  	local camera_forward = Quaternion.forward(Unit.world_rotation(self.world_screen.camera_unit, 0))
  	local box_unit = World.spawn_unit(self.world_screen.world, "units/box/box", camera_pos)
  	local box_id = GameSession.create_game_object(self.game_session, "box", {position=camera_pos})
  	self.my_boxes[box_id] = box_unit
  	Actor.set_velocity(Unit.actor(box_unit, 0), camera_forward*20)
  }

And that is really all. I also added some similar code for shutting down the gameplay session and returning to the main menu so that I could loop the test.
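The shape of that outer loop can be sketched in a few lines. This is an illustration only – RecordingConsole is a stub standing in for the real Consoles object, and the menu-choice strings and iteration count are placeholders, not the actual game's commands:

```ruby
# Sketch of a soak-test loop: run the session setup/teardown sequence
# many times to flush out a low-reproduction-rate bug. The console is
# a stub here so the sketch is self-contained; in the real script it
# would be the Consoles object connected to the running games.
class RecordingConsole
	attr_reader :sent

	def initialize
		@sent = []
	end

	def send_script(lua)
		@sent << lua
	end
end

def run_iterations(console, count)
	count.times do
		console.send_script %q{force_menu_choice = "Start Game"}  # placeholder commands
		console.send_script %q{force_menu_choice = "Leave Game"}
	end
end

all = RecordingConsole.new
run_iterations(all, 500)
puts all.sent.length   # prints 1000
```

In the real run, each iteration would also wait for the session to come up and tear down before looping, and any engine assert or hang shows up in the attached debuggers.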

And 500 iterations later, running on the three machines on my desk, the bug was reproduced.

This has also been posted to The Bitsquid blog.

Horrible Hansoft

Original Author: Rob-Galanakis

I’ve used Hansoft for 5 years across 2 jobs. Over that time, I’ve had more arguments about Hansoft than I wish to remember. In this post, I’m going to present a dialogue between Socrates, a software developer, and Phonion, a Project Manager (Producer, Product Owner, Product Manager, whatever), about the use of Hansoft. I’m not going to introduce Hansoft, because it isn’t worth learning about if you aren’t already subjected to it.

My purpose is to reduce the number of people using Hansoft, and thus reduce total suffering in the world. This post is for two types of people:

  1. Project managers who use Hansoft and force their studio to use Hansoft. I’m guessing most people in this category don’t hate Hansoft (though I don’t think there exists a solitary human being who likes Hansoft), but they defend and proliferate its use.
  2. Developers who use Hansoft. Everyone in this category hates Hansoft. Send this post to someone from category #1 so you can stop using Hansoft.

The scene here is not hard to imagine: Phonion sidles up to Socrates, right as Socrates has put his head down to start working.

Phonion: Hello Socrates. I noticed you have not been burning down your hours in Hansoft.

Socrates: Correct, I did not. My team hates using Hansoft so we’ve been tracking our sprint work on a physical wall instead. Why is it important that I burn down my hours in Hansoft?

Phonion: It’s important that we know the status of your team’s sprint and whether you are tracking to complete it or not.

Socrates: Why? If my team is not tracking to complete its sprint, there is nothing you can do. You cannot call in more resources in the middle of the sprint. You cannot cancel the sprint. We cannot move stories out of or into the sprint. You can do literally nothing with the sprint burndown information during the sprint. We have the information in front of us and it works much better than having it in Hansoft, which we never go into.

Phonion: OK, whatever. But we need your team’s stories and tasks in Hansoft so we can see what’s done and not done and how the release is burning down.

Socrates: What does putting these things into Hansoft and then updating them every sprint tell you about the release?

Phonion: It tells us whether all the stories needed to complete the release are going to be done in time.

Socrates: Does it? If points were burning down perfectly, it would mean every team is getting everything done. You do not need Hansoft to tell you that. If things are not getting done, a high-level release burndown does not give you any information. You still need a way of knowing what did and did not get done, you need to reprioritize, you need to figure out how to get things done reliably. The only way to work with this information effectively is face-to-face communication: you are not going to reprioritize stories without talking to the team, or solely via email, are you?

Phonion: Of course not. Anyway, we really want our charts and reports.

Socrates: Sure, every sprint I will give you the number of points committed to and completed for the sprint and release. I will even put it into Hansoft for you automatically if you give me the documentation for the API.

Phonion: We don’t have access to its API. It requires a license fee. It does have an SDK, but no one’s been able to figure out how to use the SDK. Someone took a look at it and it is extremely confusing. Anyway, that’s not how Hansoft works; it can’t just take that number and input it with everything else.

Socrates: Bummer. The fact that management software is completely cut off from extension and integration makes no sense to me. If you figure out a way to easily write integrations for Hansoft, tell me, but why should Hansoft’s shortcomings force my team to switch how we manage ourselves?

Phonion: Because we use Hansoft to manage the project.

Socrates: Right now it sounds like Hansoft is using you, not the other way around. The people on my team hate using Hansoft. It’s a large, fat client that takes up space in the system tray and hogs resources. It is very slow to start, meaning we can’t just jump in and out. The UI is too complex for anyone to remember, so we always need to struggle with where to find things. Would you disregard the unpleasantness of using Hansoft and how much your developers dislike it?

If I say “yes”, it makes me the type of boss who does not trust the judgement and opinions of others. But I know the feeling of bad software, and while Hansoft isn’t the iOS of user experience, it just takes some getting used to.

Do you want your developers to spend time learning how to use project management software, or actually developing? Other project management options simply involve moving a card on a wall, or clicking and dragging, or something much more intuitive. Why can’t we use a solution that simple?

A project this size requires more sophisticated tools and there will naturally be added complexity. We must sacrifice simplicity for the greater good.

In fact, using Hansoft is not just an awful user experience, but it actively encourages the wrong behavior. We want to be agile, which includes getting better with breaking down and estimating stories. With Hansoft, I never look at the sprint burndown. I never look at what’s inprogress or in the backlog, I never want to break down stories into smaller stories and tasks, I never want to rebid or groom the backlog or anything I’m supposed to do in order to become more effective at agile. As someone who wants to always do better, this is painful. And certainly lack of improvement does not contribute towards the greater good; in fact, it’s the worst thing we can do to harm it! So by forcing Hansoft on us against our will, you are saying the work of the producers and PMs is more important than the work of the developers. Is that what you think?

Of course not! To show you how important we think you are, we will assign a producer to your team to do the Hansoft work.

Absolutely not! You are committing teamicide by injecting someone like that. They will never jell and will be an always-present and uncomfortable fungus. Furthermore, isn’t the creation of busywork positions a big red flag? You are allowing your project management software to make personnel decisions. If every team repeatedly delivered on time and at quality and was continually improving – if they were truly Agile – would it matter what software was used to track progress?

If I said “yes”, that would make me a micro-managing boss who likes to meddle for now reason. It would also do nothing to improve the situation, and just turn it into a top-down decision that I’m sure would come up later anyway and hurt morale in the meantime. So I will say “probably not.”

So, clearly, project management decisions – like what software to use – should be made to make developers and teams more effective. If many developers and teams outright reject Hansoft and say it is actually making them less effective and not growing, is it likely Hansoft is making them more effective? Likewise, is there a way for people to grow and improve without experimenting?

Again, if I said “yes”, that would make me sound like a traditional “I know better than you” manager, and I’d be forcing my decision down people’s throats. So, I guess not. But project management really likes some features of Hansoft, and we cannot just give everything up.


Do you consider yourself an equal party to development in terms of the value of the work you do?

I want to say “Of course, we value each others’ work.” But in actuality, I know that Project Management is a type of Lean waste. It is necessary for the project to function, but it should be reduced and eliminated. So by definition, no, I am not an equal party.

But nor are your needs invalid! What I would do is evaluate your actions and decisions by answering, “am I assisting the creators of the value-added work?” High-level charts and information can help you – but you should develop this as you need it, for a clear purpose, and without adding more waste into the system. You should only develop these things when you can convince the developers and team members that they need them. Ideally the team should ask for it themselves, or better yet just do it themselves. So how can we put things on a better path?

I guess the first thing is to pick something people like- or ask them what they want to use. Then we can migrate over to that.

What about people that don’t want to agree? People that want to use different software (free, of course), or physical boards?

Well, we should figure out what works best for them. If we cannot convince them an alternative is better, maybe it isn’t better. I’m sure they’d be willing to test something out, though, and we can see how it works. And truth be told, it may be fine for some teams to be off on their own, where there’s no benefit in moving them along with everyone else.

The Karaoke Rules of Game Development

Original Author: Ben Serviss

Karaoke singers

The act of going to a karaoke bar is filled with more dramatic tension than you might think. The sheer number of questions in play at any given time is intense: What will the next person sing? Will they do a good job? Will they be ungodly terrible? Does this place have songs I want to sing? Will they call me next? Will the crowd like my song? What are the happy hour specials again?

Once the next person starts singing, the crowd quickly makes a collective decision. Great song but poor singer? The DJ turns down the mic volume and the crowd takes over vocal duties. Amazing singer? Watch as the crowd transforms into a rapt concert audience, complete with mini meet-and-greet with the performer afterwards. Terrible everything? Everyone takes the next few minutes to refill their drinks and chat with friends.

Yet the more negative outcomes can easily be avoided – as long as singers adhere to the three rarely seen, but inherent rules of successfully picking songs for karaoke:

Rule 1: Pick a song that you enjoy

Rule 2: Pick a song that you can sing

Rule 3: Pick a song that entertains others

Of course, the act of creation that takes place during a karaoke performance is not unique to the singing. Similar to any other creative undertaking, how you choose your song (or your project) greatly affects the outcome of the endeavor. Unlike other creative work, karaoke performances are over in minutes, as opposed to the hours, days, weeks or even years other creative work can take.

Naturally, this includes video game development. And whether your project is on track or troubled, investigating your current game through the lens of karaoke rules might yield hidden insights about what to do next.

Rule 1: Pick a song that you enjoy

This is a tricky place to start. Often in game development (and in the real world), you will need to work on projects that don’t fire up your enthusiasm. For practical reasons, it’s necessary to continue working on these projects to pay the bills, care for loved ones and maintain your savings.

Photo: CruiseCritic

But over time, your distaste for the games you work on will begin to creep into the end user experience, lowering the enjoyment others get from playing them – much as a karaoke performance of a song you don’t really like may start off with faked enthusiasm that tapers off to a noticeable “when will this song end?” clock-watching energy.

In the meantime, you can adhere to this rule (and recover much-needed enthusiasm for your day job) by working on games you truly like in your spare time. Start a creative project that gives you true joy, and it will make it easier to get through the ones you aren’t as passionate about.

Sing the song you love.

Rule 2: Pick a song that you can sing

Maybe your goal is to work on a super ambitious game, or even something just outside of your current abilities. You might certainly get there with time and practice, but for want of skills, it just isn’t feasible for now.

In karaoke, this is the equivalent of picking a song outside of your vocal range. You may desperately want to sing Whitney Houston’s I Will Always Love You, but if you can’t hit those notes, it just isn’t going to happen today.

Instead, select projects that are within your current ability range. Doing this decreases the odds that they’ll drag on for months or years past your expected finish date, and the simple act of practicing will increase your ability to pick a more challenging project for next time.

Sing the song you can sing.

Rule 3: Pick a song that entertains others

So you’re working on a game that you like, and that is well within your abilities, but nobody likes it. Every time you have someone play it, the feedback is universally negative. Maybe it’s an incredibly niche gameplay design, maybe it’s built on inside jokes that only you and your friends share – for whatever reason, others don’t find it entertaining.

kid playing video game

In karaoke, the equivalent would be picking the average Tom Waits song. Even if you sing it true to the original, Waits’ gravelly voice and aggressive delivery are probably not the best choice for a bar setting. Yet this also depends on context – the latest Katy Perry song might be fine for a college pub, but will probably go over miserably at a biker dive bar. Whatever the context, the song you pick must entertain others.

So what do you do if nobody likes the game you’re working on? You have two options: Work on something else that is more palatable to the audience you’re speaking to as you continue to work on the game on the side, or carry on with your game and start searching for its actual audience. Of course, the more niche and unconventional your game is, the harder your audience may be to locate, but in the age of the internet, the odds are high that they’ll be out there somewhere.

Sing the song others want to hear.

Whether you’re starting up a new project, knee-deep in development or trying to troubleshoot a problematic production, reflecting on these three points can help steer you to a positive experience for you, your players, and your future games.

The Morality of Library Sound Design

Original Author: Ariel Gross

Some sound designers have a serious hair up their butts when it comes to using commercial sound libraries for sound design.

If you’re a sound designer, especially a budding one just starting to claw your way into the industry, you may be in for some pretty severe judgments if it is found out that you’re using sound libraries in a way that offends other sound designers. If you’re just starting out, you could make one wrong move with the way that you use sound libraries, and it’s curtains for you! Your reputation could be sullied far and wide! And you may not even be doing anything wrong, at least technically speaking.

For those of you that aren’t familiar with sound libraries, a sound library, at least in the terms that I’m using for this article, consists of a bunch of sound effects, musical elements, voice lines, or even more broadly speaking, audio files of some sort, that can be purchased or licensed from a vendor for use in your project. An example sound effect vendor that many will be familiar with is Sound Ideas, but there are a bunch of vendors out there providing this service. Some are big companies, some are just one person who wants to sell their field recordings.

The point of this article is to elaborate on the perception issues held by many sound designers out there with how these libraries are used. The aspiring game audio designer should be aware of these perception issues, and then they should use their own brains to determine how they want to approach their work with libraries. I’ll list a handful of common methods for sound design using libraries, and then I’ll provide my own opinion for a little extra context.

The Worst Possible Thing You Could Ever Do In Your Whole Life

Let’s start off with The Worst Possible Thing You Could Ever Do In Your Whole Life, also known as a “straight library rip.”

This practice involves buying a sound library, or even a single sound from an à la carte sound library service, and then implementing that sound as-is into your game without any manipulation.

Even though this is often completely within your rights according to the terms of service of the purchase you’ve made, this is considered by many sound designers to be a cardinal sin of sound design. If you’re aspiring to be a gainfully-employed sound designer, be very careful with this method. Many sound designers have memorized large volumes of sound effects from commonly used libraries. If one of those sound designers is playing your game or reviewing your demo materials and notices a sound being used as-is from a library, they may want to smite you, and potentially smite your reputation.

My personal opinion: This is a valid practice as long as it is within your rights according to the terms of service that you agreed to when buying the sounds. You’re probably not going to grow very much if you approach your work this way, and if you use this method a lot, you may ask yourself what it really is that you want to be when you grow up, because it may not be “sound designer.” I believe this method is excusable, maybe even smart and practical, when you’re overwhelmed and you’re trying to get a bunch of unimportant sounds off your plate so that you can focus more time on sounds that will have a greater impact on your player’s experience. Personally, I avoid this method at all costs, not because I think it’s morally wrong, but because I find it boring and unenjoyable.

Straight Library Rip + Effects And Other Manipulation

This is the next step up from The Worst Possible Thing You Could Ever Do In Your Whole Life, and even though it’s just marginally different, it will likely assuage most of the sound designers that are looking for reasons to be disgusted by other sound designers. But be warned — the less you manipulate the sound, the more likely that other sound designers will notice and will still want to slap you with their SPL meter.

This practice involves taking a sound from a commercial library, adding effects to or otherwise manipulating that sound, and then implementing the manipulated sound into the game. A similar approach involves implementing straight library rips into your game and adding effects at runtime, although for some reason, this still seems more frowned-upon than doing it outside of the box.

The bare minimum required to fulfill this practice tends to be applying a compressor or equalizer to the source sound, or changing the pitch of the source sound. If you’re just changing the volume (e.g. normalization), you’re probably still in cardinal sin territory.

The more you manipulate the sound to be different from the original sound, the more likely this practice will be acceptable to a larger swath of your peers in audio. If you’ve made the sound unrecognizable from the original source, you’re good to go.

My personal opinion: Once you’ve manipulated a sound, you’ve probably surpassed my threshold of giving a crap. Even if it’s just compression or equalization. This is probably a good point to mention – even though it should be obvious from my opinion on the cardinal sin – that I’m not super anal about this topic. But I would still suggest that you push yourself further into the next categories below. It will likely be good for your growth and learning, and if you have a sincere interest in sound design, you’ll probably find it more fun and rewarding.

Layering And Mixing Library Sounds (+ Effects And Other Manipulation)

This is a significant step up into what most sound designers would officially call proper sound design. If you’re doing this, you’ve likely surpassed the scowling-threshold of the many sound designers that I’ve spoken to on the subject.

This practice involves taking more than one library sound and mixing them together, possibly manipulating one or more of the library sounds in your DAW before rendering them out. This could also happen at runtime, which would involve you implementing multiple straight library rips into your game and mixing them together dynamically in your authoring tool / engine. Again, in terms of perception, many sound designers, for some reason, seem to prefer that this mixing happens outside the box.
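To make the layering idea concrete, here is a deliberately tiny, hedged sketch of mixing two library sounds into one new asset. The buffers, gains, and function name are all invented for illustration; a real DAW or engine works on full-rate audio, not four-sample lists:

```python
# Hypothetical sketch: layering two library sounds into one new asset.

def mix_layers(layers, gains):
    """Sum several mono sample buffers (lists of floats in [-1, 1]),
    each scaled by its own gain, then peak-normalize the result."""
    length = max(len(buf) for buf in layers)
    mixed = [0.0] * length
    for buf, gain in zip(layers, gains):
        for i, sample in enumerate(buf):
            mixed[i] += sample * gain
    peak = max(abs(s) for s in mixed) or 1.0   # avoid divide-by-zero on silence
    return [s / peak for s in mixed]

# e.g. a library "impact" layered under a longer library "debris" tail
impact = [0.9, 0.5, 0.1, 0.0]
debris = [0.0, 0.2, 0.4, 0.3, 0.2, 0.1]
combined = mix_layers([impact, debris], gains=[1.0, 0.6])
```

Even this trivial sum-and-normalize produces a waveform that exists in neither source library, which is the whole point of the practice.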

My personal opinion: As far as I’m concerned, if you take even two library files and mix them together, you’re designing a sound at that point. You’re making something new from something that already exists. If you add effects, all the better, as long as it ultimately serves your player’s experience. At many game developers, you will find that this is the bread and butter of sound design work.

Using Library Sounds To Sweeten Your Field Recordings

Some sound designers would call this the entry level to “true” sound design. This can also be reversed by saying that you use your field recordings to sweeten a library sound, although it’s often perceived by some sound designers as better to use the library sounds as a sweetener for a field recording. I think the difference between the two is whether or not an original sound is more prominent in the mix of the final render.

This practice involves mixing sound effects from commercial sound libraries with original field recordings that you, or others within your organization, have captured yourselves.

My personal opinion: I think this is a great practice because there is a tremendous amount of potential growth in this approach. There is a barrier to entry here: you’d need some recording gear, or at least know someone willing to loan you theirs, which is why I wouldn’t scowl at someone saying that they only used libraries for a project. By including your own source material, you’ve ensured that the sound is going to be fresh and new. But I would also say that there is a potential gotcha here: ultimately what I believe matters most is the player’s experience, and if someone insists on using a field recording in their sound design to the detriment of that experience, then that seems silly and prideful to me.

In The End, What Does It Really Matter?

The last thing I want to do is offer one final explanation of the purpose of this post, and maybe shed a little light on my own opinions.

The reason that I’m posting this is because I think it’s a shame when an unsuspecting, aspiring game audio designer gets lambasted for using straight library rips. It’s entirely possible that it never occurred to them that others would deem this practice as morally wrong, especially when it’s within their rights according to the terms associated with the use of the library. I don’t think it’s fair that people find themselves as the butts of jokes within certain cliques that could harm the future of their careers just because they’re ignorant.

But I would also ask those sound designers out there that would judge someone based on their sound design practices, what does it really matter? If a straight up rip from a library makes its way into the game and it evokes the response in our players that we’re after, what is the actual harm in that?

I’ll mention again, just to protect my reputation, that I cannot recall a time in my career where I have done a straight library rip, but I also wouldn’t necessarily be upset if a sound designer that I worked with used a library sound as-is, as long as it served its purpose.

I’m interested to know what others think. Leave a comment.

OpenGL ES 2: How does it draw?

Original Author: Adam Martin

UPDATED 24/09/13: Added some essential details to the class files at end of post, and corrected typos

Github project with full source from this article

There is also a standalone library on GitHub, with the code from the articles as a demo app. It uses the ‘latest’ version of the code, so the early articles are quite different – but it’s an easy starting point.

This is Part 2. Part 1 was an overview of where Apple’s GLKit helps with OpenGL ES setup and rendering.

NB: if you’re reading this on AltDevBlog, the code-formatter is currently broken on the server. Until the ADB server is fixed, I recommend reading this (identical) post over at, where the code-formatting is much better.

I’ve been using OpenGL ES 2 for less than a year, so if you see anything odd here, I’m probably wrong. It might be a mix-up with desktop OpenGL, or just a typo. Comment/ask if you’re not sure.

2D APIs, Windowing systems, Widgets, and Drawing

Windowing systems draw like this:

  1. The app uses a bunch of Widget classes (textboxes (UILabel/UITextView), buttons, images (UIImageView), etc)
  2. Each widget is implemented the same way: it draws colours onto a background canvas
  3. The canvas (UIView + CALayer) is a very simple class that provides a rectangular area of pixels, and gives you various ways of setting the colours of any/all of those pixels
  4. A window displays itself using one or more canvases
  5. When something “changes”, the windowing system finds the highest-level data to change, re-draws that, and sticks it on the screen. Usually: the canvas(es) for a small set of views

Under the hood, windowing systems draw like this:

  1. Each canvas saves its pixels on the GPU
  2. The OS and GPU keep sending those raw pixels at 60Hz onto your monitor/screen
  3. When a widget changes, the canvas DELETES its pixels, re-draws them on the CPU, uploads the new “saved” values to the GPU, and goes back to doing nothing

The core idea is: “if nothing has changed, do nothing”. The best way to slow down an app is to keep telling the OS/windowing system “this widget/canvas has changed, re-draw it” as fast as you can. Every time you do that, NOT ONLY do you have to re-draw it (CPU cost), BUT ALSO the CPU has to upload the saved pixels onto the GPU, so that the OS can draw it to the screen.
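The “if nothing has changed, do nothing” model can be sketched in a few lines. The class and method names here are illustrative only, not any real windowing API:

```python
# Sketch of a windowing system's dirty-flag model: a canvas only pays the
# CPU re-draw + GPU upload cost when something has explicitly invalidated it.

class Canvas:
    def __init__(self):
        self.dirty = True        # starts dirty: needs an initial draw
        self.uploads = 0         # how many times pixels went to the GPU

    def invalidate(self):
        """A widget changed; mark the canvas for re-drawing."""
        self.dirty = True

    def present(self):
        """Called at 60Hz by the OS."""
        if self.dirty:
            self.uploads += 1    # re-draw on CPU, upload saved pixels to GPU
            self.dirty = False
        # otherwise the GPU keeps scanning out the saved pixels for free

canvas = Canvas()
for frame in range(60):          # one second of refreshes: only one upload
    canvas.present()
canvas.invalidate()              # a widget changed once...
canvas.present()                 # ...so exactly one more upload happens
```

Sixty refreshes cost just one draw-and-upload; abusing `invalidate()` every frame is precisely the slowdown described above.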

OpenGL and Drawing

That sounds great. It leads to good app design. Clean OOP. etc.

But OpenGL does it differently.

Instead, OpenGL starts out by saying:

We’ll redraw everything, always, every single refresh of the monitor. If you change your code to re-draw something “every frame”, then with OpenGL … there is no change in performance, because we were doing that anyway.

(Desktop graphics chips usually have dedicated hardware for each different part of the OpenGL API. There’s no point in “not using” features frame-to-frame if the hardware is there and sitting idle. With mobile GPUs, some hardware is duplicated, some isn’t. Usually the stuff you most want is “shared” between the features you’re using, so just like normal CPU code: nothing is free. But it’s worth checking on a feature-by-feature basis, because sometimes it is exactly that: free)

When people say “it’s fast” this is partly what they mean: OpenGL is so blisteringly fast that every frame, at full speed, it can do the things you normally do “as sparingly as possible” in your widgets.

Multiple processors: CPUs vs GPUs … and Shaders

This worked really well in the early days of workstations, when the CPU was no faster than the motherboard, and everything in the computer ran at the same speed. But with modern computer hardware, the CPU normally runs many times faster than the rest of the system, and it’s a waste to “slow it down” to the speed of everything else – which is partly why windowing systems work the way they do.

With modern systems, we also have a “second CPU” – the GPU – which is also running very fast, and is also slowed down by the rest of the system. Current-gen phones have multiple CPU cores *and* multiple GPU cores. That’s a lot of processors you have to keep fed with data… It’s something you’ll try to take advantage of a lot. For instance, in Apple’s performance guide for iOS OpenGL ES, they give the example of having the CPU alternate between multiple rendering tasks to give the GPU time to work on the last lot:

Instead of this:

…do this:

OpenGL approaches this by running your code in multiple places at once, in parallel:

  1. A lot of your code runs on the CPU, like normal
  2. A lot of your code appears to run on the CPU, but is a facade: it’s really running on the GPU
  3. Some of your code runs on the GPU, and you have to “send” it there first
  4. Most of your code *could be* running on CPU or GPU, but it’s up to your hardware + hardware drivers to decide exactly where

Most “gl” functions that you call in your code don’t execute code themselves. Instead, they’re in item 2: they run on the CPU, but only for a tiny fraction of time, just long enough to signal to the GPU that *it* should do some work “on your behalf”, and to do that work “at some time in the future. When you’re ready. KTHNXBYE”.

The third item is Shaders (ES 2 only has Vertex Shaders and Fragment Shaders; GL ES 3 and above add more), written in GLSL (the GL Shading Language).

Of course, multi-threaded programming is more complex than single-threaded programming. There are many new and subtle ways that it can go wrong. It’s easy to accidentally destroy performance – or worse: destroy correctness, so that your app does something different from what your source code seems to be telling it to do.

Thankfully, OpenGL simplifies it a lot. In practice, you usually forget that you’re writing multi-threaded code – all the traditional stuff you’d worry about is taken care of for you. But it leads (finally) to OpenGL’s core paradigm: Draw Calls.

Draw calls (and Frames)

Combine multi-threaded code with parallel-processors, and combine that with facade code that pretends to be on CPU but actually runs on GPU .. and you have a recipe for source-code disasters.

The OpenGL API effectively solves this by organizing your code around a single recurring event: the Draw call.

(NB: not the “frame”. Frames (as in “Frames Per Second”) don’t exist. They’re something that 3D engines (written on top of OpenGL) create as an abstraction – but OpenGL doesn’t know about them and doesn’t care. This difference matters when you get into special effects, where you often want to blur the line between “frames”)

It’s a simple concept. Sooner or later, if you’re doing graphics, you’ll need “to draw something”. OpenGL ES can only draw 3 things: triangles, lines, and points. OpenGL provides many methods for different ways of doing this, but each of them starts with the word “draw”, hence they’re collectively known as “draw calls”.

When you execute a Draw call, the hardware could do anything. But conceptually it does this:

  1. The CPU sends a message to the GPU: “draw this (set of triangles, lines, or points)”
  2. The GPU gathers up *all* messages it’s had from the CPU (since the last Draw call)
  3. The GPU runs them all at once, together with the Draw call

Technically, OpenGL’s multiprocessor paradigm is “batching”: it takes all your commands and caches (but does not yet execute) them … until you give it the final “go!” command (a Draw call). It then runs them all in the order you sent them.

(understanding this matters a lot when it comes to performance, as we’ll soon see)
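Here is a toy sketch of that batching behaviour. `FakeGPU` and its methods are invented for illustration and bear no relation to the real OpenGL entry points:

```python
# Sketch of the batching paradigm: "gl"-style calls only record commands;
# nothing executes until the Draw call flushes the whole batch, in order.

class FakeGPU:
    def __init__(self):
        self.pending = []        # commands cached, not yet executed
        self.executed = []       # what has actually run, in order

    def set_state(self, name, value):
        """Facade call: returns immediately, just records the command."""
        self.pending.append(("set", name, value))

    def draw(self, triangles):
        """The 'go!' signal: run every cached command plus the draw itself."""
        self.pending.append(("draw", triangles))
        self.executed.extend(self.pending)
        self.pending = []

gpu = FakeGPU()
gpu.set_state("blend", True)       # nothing happens on the GPU yet...
gpu.set_state("depth_test", True)  # ...still nothing...
gpu.draw(triangles=12)             # ...now all three run, in order
```

Note that after `draw()` the pending queue is empty: the next batch starts from scratch, which is why per-draw-call state (below) has to be thought of as a bundle.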

Anatomy of a Draw call

A Draw call implicitly or explicitly contains:

  • A Scissor (masks part of the screen)
  • A Background Colour (wipe the screen before drawing)
  • A Depth Test (in 3D: if the object in the Draw call is “behind” what’s on screen already, don’t draw it)
  • A Stencil Test (Scissors 2.0: much more powerful, but much more complex)
  • Geometry (some triangles to draw!)
  • Draw settings (performance optimizations for how the triangles are stored)
  • A Blend function, usually for Alpha/transparency handling
  • Fog
  • Dithering on/off (very old feature for situations where you’re using small colour palettes)
  • Culling (automatically ignore “The far side” of 3D objects (the part that’s invisible to your camera))
  • Clipping (if something’s behind the camera: don’t waste time drawing it!)
  • Lighting/Colouring (in OpenGL ES 2: lighting and colouring a 3D object are *the same thing*. NB: in GL ES 1, they were different!)
  • Pixel output (something the monitor can display … or you can do special effects on)

NB: everything in this list is optional! If you don’t do ANY of them, OpenGL won’t complain – it won’t draw anything, of course, but it won’t error either.
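The list above can be restated as a plain data structure in which every field has a default, mirroring the point that each piece is optional. The field names are paraphrases of the list, not real OpenGL tokens:

```python
# Per-draw-call state as a data structure: leaving everything unset is
# legal (and draws nothing, without erroring), just as the NB says.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class DrawCallState:
    scissor: Optional[Tuple[int, int, int, int]] = None   # mask part of screen
    clear_color: Optional[Tuple[float, float, float, float]] = None
    depth_test: bool = False
    stencil_test: bool = False
    geometry: List[tuple] = field(default_factory=list)   # triangles to draw
    blend_func: Optional[str] = None                      # e.g. alpha blending
    fog: bool = False
    dithering: bool = False
    culling: bool = False                                 # skip back faces

empty_call = DrawCallState()   # no settings at all: no error, just no output
lit_call = DrawCallState(depth_test=True, geometry=[(0, 1, 2)])
```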

That’s a mix of simple data, complex data, and “massively complex data (the geometry) and source code (the lighting model)”.

OpenGL ES 1 had all of the above, but the “massively complex” bit was proving way too complex to design an API for, so the OpenGL committee adopted a whole new programming language for it, wrapped it up, and shoved it off to the side as Shaders.

In GL ES 2, all the above is still there, but half of “Geometry” and half of “Lighting/Colouring” have been *removed* : bits of OpenGL that provided them don’t exist any more, and instead you have to do them inside your shaders.

(the bits that stayed in OpenGL are “triangle/vertex data” (part of Geometry) and “textures” (part of Lighting/Colouring). This also explains why those two parts are two of the hardest bits of OpenGL ES 2 to get right: they’re mostly unchanged from many years ago. By contrast, Shaders had the benefit of being invented after people had long experience with OpenGL, and they were simplified and designed accordingly)

Apple’s EAGLContext and CAEAGLLayer

At some point, Apple has to interface between the cross-platform, hardware-independent OpenGL … and its specific language/platform/Operating System.

The EAGL* classes are where the messy stuff happens; they’ve been around since the earliest versions of iOS, and they’re pretty basic.

These days, they only handle two things of any interest to us:

  1. Allow multiple CPU threads to run independently, without needing any complex threading code (OpenGL doesn’t support multi-threading on the CPU)
  2. Load textures in the background, while continuing to RENDER TO SCREEN the current 3D scene (the hardware is capable of doing both at once)

In practice … all you need to remember is:

All OpenGL method calls will fail, crash, or do nothing … unless there is a valid EAGLContext object in memory AND you’ve called “setCurrentContext” on it


For fancy stuff later on, you might need to pre-create an EAGLContext, rather than create one on the fly

Later on, when we create a ViewController, we’ll insert the following code to cover both of these:

AltDevBlog’s source-code formatter is broken. For now, please use the other copy of this post.

Next: Part 3: Vertices, Shaders, and Geometry

The technical interview

Original Author: Angelo Pesce / C0de517e

This post will also appear (slightly lengthier) on my personal blog.


In my career I’ve had the pleasure of working with a few different studios, seeing a few others at work and talking with people from many more. Of course, after a while of this experience you start wondering what’s “best”, or at least what’s good and bad: the science of making great games.

Shockingly I still don’t really know.

I’ve seen great games made by twenty creatives having fun and good games made by two thousand slaves burning their lives away. Smart people coding in pretty oldschool C and equally smart people coding in “modern” C++, and both parties with reasonable reasons to do so. Wonderfully “engineered” practices shared by juniors and seniors alike, and code relying on hackers and their ability to work without any engineering. You get the picture…

Now, of course part of this is due to the fact that “great games” come in all sizes and genres, and there’s probably not much in common to how “great” (and successful) Little Big Planet is with how “great” (and successful) Call Of Duty is.

And of course in practice a given process works best only at a given scale (similar to how a given algorithm is best for a given problem size); something good for two people is not good for twenty or two hundred. The similarity to good programming is uncanny: over- and under-engineering are both problems, and we should apply fancier methods only when they actually save us time (and profile, always!). That said, all true, but even with all things being comparable, the ways to greatness seem to be many.

That’s not to say that there is no good and bad. Certainly there is bad. There certainly is a quality to the process of making games. And there are even things that are “universal”, that are never, or always, a good idea. But most of what makes a big difference, I’m persuaded, is relative. And it’s relative, I think, to the people you have.

Initially, the people you have are the ones you happen to have, companies are made, somehow… But then you keep (or steer towards) a given bias via hiring. And that’s why I’ll spend the rest of this post on interviewing, and my half-wrong opinions on how you should interview “good” people…

Interviewing: a bidirectional communication process

When I started drafting this article I was looking for a new job, so it was a period where I was interviewing more than usual and talking with many studios; I was reminded to finish it now as I helped interview people for a new team.

From my empirical survey of the state of the industry, I’d say the number one “offender” in the process is not realizing, or caring enough, that interviewing is bidirectional communication. What you ask and say is not only used by you to assess the candidate, but by the candidate to assess you and the job you’re offering. You’re not a university professor giving an exam; the goals of the interviewer and the interviewee are fundamentally the same: find a work relationship that makes both parties happy.

In fact I’d say that the situation is so bad, so often, that many people won’t ever experience a “good” interview – one where, closing the door, you really want to get the job, not because of the company name or its products (incredibly deceptive metrics to work by, for an engineer), but because you have a feeling the job and the people are really -right- for you.

So, how can we make this better?

Mistakes in the technical interview

1) Wasting time

It’s good to assume that smart people are busy, and that they are in demand. So, having to spend weeks on an interview test is often not a great idea.

Now, to be fair, longer tests are not necessarily bad, and not necessarily a waste of time. They can even be fun and truly informative; e.g. interesting mini-projects done with good communication could be great, and even trying out a candidate as a contractor might not be a bad idea. But, pragmatically, chances are that good candidates are busy and interviewing with many other companies, and won’t subject themselves to all this, so you might lose some great people to your lengthy test. It’s a compromise; be aware of that, and make your process as long as it needs to be, but no longer. Like good code, avoid waste.

2) Overused questions

Simple to google, simple to memorize, boring to answer over and over again. Overused, simple questions are bad in many ways. To experienced candidates, they signal that you’re not doing a great job of interviewing, that quality is not a priority, that you did not put much effort into crafting your interview. For “juniors”, they encourage memorization (or looking things up on the net) over reasoning.

It’s not hard to come up with original questions, and even slight variants of common problems are great. Why ask for the distance of a point to a line when you can ask about a sphere versus a line, or a capsule? It tests exactly the same knowledge, but applied instead of just recited.
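To make the “applied variant” idea concrete, here is a sketch of how the same point-to-segment distance underlies a sphere-versus-capsule overlap test (the function names and conventions are my own, purely illustrative):

```python
import math

def closest_point_on_segment(p, a, b):
    """Closest point to p on the segment a-b (points as 3-tuples)."""
    ab = tuple(b[i] - a[i] for i in range(3))
    ap = tuple(p[i] - a[i] for i in range(3))
    denom = sum(c * c for c in ab)  # squared segment length
    # Project p onto the segment's line, then clamp to [0, 1].
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    return tuple(a[i] + t * ab[i] for i in range(3))

def sphere_vs_capsule(center, radius, a, b, cap_radius):
    """A sphere overlaps a capsule iff the distance from the sphere's
    centre to the capsule's core segment is within the summed radii."""
    q = closest_point_on_segment(center, a, b)
    return math.dist(center, q) <= radius + cap_radius
```

The point is the one from the paragraph above: a candidate who genuinely understands point-to-line distance can derive this in minutes, while a memorized answer doesn’t get them there.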

A common side-effect of using such “dumb” questions is the necessity of strict timing: since your questions are too simple and easily googled, you counteract not by “fixing your bug” but by asking a load of them in quick succession to “defeat” Google.

3) Useless questions

This is an extension and aggravation of the former: often, overused questions are also worthless, not needed. Think of questions whose follow-up would already demonstrate the knowledge being probed, e.g. I ask you what a class is in C++, then I ask you to apply one in a practical context.

Now, it’s true that there is merit in making your candidate comfortable, but useless questions often come in written questionnaires, where there is less stress anyway, and often they don’t form a “difficulty ramp” but are just random.

Good questions sidestep the issue, though: you can, and should, design questions that are as easy as needed to answer trivially, yet deep enough for smart candidates to dive into and provide smarter solutions.

4) Worthless questions

Delving further into bad questions, the worst are the ones that are not just too simple, overused, boring or made useless by the structure of the test, but that are simply not good at all per se.

One example is the “IQ” kind: questions not even slightly related to the field, which some large organizations are infamously said to use (though I doubt they really do). Another is questions that test knowledge one really shouldn’t need for the job.

Try your best to keep your questions relevant. Large organizations can analyze the performance of the questions they ask, but most studios won’t interview enough people for the data to be really significant. A good approach could be to “eat your own dog food” and “submit” your questions to your employees to rate: ask how relevant they think the questions are, and how interesting, and fun…

Unfortunately, people often ask worthless questions in good faith, not realizing that’s what they are. At least that does communicate something, though; e.g. if you ask me about design patterns, I’ll know I wouldn’t like working for you.

5) Not tailoring to the skill level

Like good code, good questions are specialized. Not tailoring to the skill level is often associated with the previous mistakes: for the sake of generality, you tend to ask more questions and waste more time.

True, certain things have to be asked. And true, certain companies might hire people of mostly similar skill levels (e.g. smaller studios with mostly senior generalists), but that only means you won’t need many different questionnaires; the one you have will still be tailored to the people you’re looking for.

6) Pretending to interview

Asking what you’re supposed to ask. Saying what you’re supposed to say. Going through a checklist someone made somewhere, without understanding what you’re really doing. This often happens with standardized tests, when interviewers don’t really understand the purpose or depth of the questions.

This really hampers communication: it turns the interview from a real interaction into a standardized bureaucratic chore that leaves both parties with very little information.

Ideally, interview questions are coauthored by the interviewers, and every new interviewer has a chance to discuss the process, understand and even add or change it.

7) Not paying enough attention

Pretending redux… Pretending not only makes little sense as a test of the candidate, it also usually leaves the candidate with no impression of what working at your company will be like, what its real strengths are, how it operates.

Remember: just as you won’t accept a candidate who merely -says- they’re good at certain things, the same applies to you.

You have to show that your company is smart, you have to prove that you are smart, and you have to prove that you work in the ways you say you work! Asking the right questions is one strong signal that candidates (especially the smarter ones, the ones you want) will look for…

8) Over-engineered platforms

Lastly, there is the sin of over-engineered testing platforms. This probably doesn’t depend on you, but it can be so bad, and such a waste of time, that it will make people not want to apply to your company, or prioritize it below others that use friendlier processes.

If you need a person to jump through a hundred hoops, register for faulty online services, scan, copy and paste their entire resumé, and fill in page after page of forms about their education and so on, I really hope you’re getting a great return on all that neatly catalogued information, because you’re surely pissing off your applicants a LOT to gather it.

You’re also giving the impression of an overly bureaucratic company with a ton of management layers and an HR group that doesn’t know what technical people find fun.

Going further

Your interview is as short as possible. Your questions are simple but deep. They are relevant to the job and tailored to the skill level. You understand them, you know what kind of people your company needs, you’ve discussed interviewing with your peers, and you’re doing all this in a fun, informal process.

Great! You’re doing everything right, and even with just interviews you should be able to make an impression on your candidates, paint a picture of what’s valued at the company and how it works, and make a good effort at finding not just people you can use, but actual good fits: people who work the way you work, or who can learn to.

Can it go even further? I’m partial to openness, and collaboration. What can you show? Are you looking for someone that will hack through your code? Then maybe you could use some of said code in the process. Are you looking for someone that will need to work with the artists? Then you might want them in a room to brainstorm techniques to get a given visual result. The more you can incorporate of the job and of the company in the interview process, the better.

Many companies won’t do this because it’s complicated to disclose anything, even about past projects, even under NDA. That’s a shame and a problem that bogs such companies down, but even if you face such restrictions, can you imagine ways around them? Put in some effort, and remember that getting the right people is probably all it takes to make great games.

Good luck.

Accidental Journalism: Tracking Game Industry Layoffs with GameJobWatch

Original Author: Ben Serviss

As in any industry, layoffs in the games business are an accepted reality. Anyone working in development or production today is acutely aware of the tumultuous state of games as the dominance of consoles gives way to a touchscreen-driven order, as the costs to produce triple-A titles climb higher into the stratosphere, and as the first major console transition in seven years quickly approaches.

For most developers affected by layoffs, your options are few. You can suck it up and go indie. You can try to survive on freelance work. Worst case, you can leave the industry altogether and get a (shudder) real job.

Instead, three college friends-turned game developers chose a fourth path:

They decided to do something about it.

“I came up with the concept for the site after seeing a ton of friends lose their jobs at the beginning of this year. I was amazed by the efforts of people on Twitter (a movement that peaked with #38jobs) but noticed that with each new layoff, the last was somewhat overshadowed. We wanted to see if we could help address that problem by giving every studio layoff ongoing exposure.” –Holden Link

Holden Link, Austin Walterman and Cory Johnson met as undergrads at Georgia Tech, where they made a habit of working on game, web and interactive projects together. One was a 2D “fall forever” game called Blarf. Another was a rhythm game designed for guitar controllers called Audiball. As each one went on to work in the games industry, they stayed friends and kept up the collaborations.

Holden Link, Austin Walterman, Cory Johnson

Then in February of 2013, a friend got laid off from EA’s Danger Close studio, makers of the 2010 Medal of Honor reboot and 2012’s Medal of Honor: Warfighter. That hurt – but it was EA executive Frank Gibeau’s open letter breaking the news titled “Transition Is Our Friend” that provided the spark. The subject of layoffs dominated the group’s discussions until they felt compelled to take control of the topic before it faded yet again from the headlines.

The goal wasn’t to create a depressing reminder of the fragile state of the industry. Instead, the group aimed to present information regarding layoffs in a way that was blunt, yet also respectful, with the intent of giving recruiters and studios looking for people another way to source experienced talent.

After six months of part-time development, GameJobWatch launched this August.

The site keeps meticulous track of studio layoffs as they happen, with a total counter tracking all industry-wide layoffs for that year. Users can view layoffs by studio or by the date of their most recent layoff. Start-ups looking for developers in specific cities can easily skim the site to see if there have been any layoffs in that area. Recruiters can zero in on employees that have just been let go.

“The response has been, this is a thing that people feel needs to exist,” says Link.

“Accidental Journalism”

For a venture made to serve the public interest, the journalistic aspirations of GameJobWatch are far from intentional. “I think you could call it accidental journalism,” says Link. “I don’t think we knew that it would end up the way that it did when we started by any means.” Adds Johnson, “We saw the symptoms. There was something missing from a lack of reporting or data.” For the group, filling in the gaps was more of a bug fix than anything else.

And that’s when the data journalism angle of GameJobWatch hits you. Forward-thinking publications like The Texas Tribune have made a name for themselves by harvesting vast amounts of publicly accessible data for interactive journalism features, like tracking the average Texan’s life expectancy or visualizing the contributors to notable Super PACs. Used creatively, the data that GameJobWatch is collecting could lead to game-changing dynamics in the employer/employee relationship.


“A lot of the [site] feedback was not about the emotional impact of it, which was surprising, but about what we could do with this data,” says Link. “I almost had a problem thinking of people getting laid off as data when we started this, and started warming up to all of the good that could come out of this… if this can be a tool that in any way discourages the seasonal layoff culture then that would be incredible.”

Data is Power – And the Power is Yours

Imagine you’re working at a game company, and looking to make a lateral move to a new studio. You have three offers on the table. Imagine if you could look up each company’s layoff history to help make an informed decision on where to go next.

Or, say you’re a college student about to hit the workforce. You’re willing to move anywhere for your first break. What if you could search game companies’ layoff records by geographic region to see what part of the world makes the most sense to move to?

Or, what if you’re expecting your first child in six months, but according to your current company’s layoff history, you see that there’s an 87% chance you’ll be laid off in the next three months. Instead of waiting for the axe to fall, you find a new job, give your notice and prepare for the new arrival without having layoffs upend your entire life.

Once armed with a few years of data, the altruistic possibilities of GameJobWatch start to emerge. Acting as an impartial advocate for developers, the industry’s rockstars and ninjas might start to avoid companies that take a slash-and-burn approach to their workforce, leaving the worst companies to crumble under the weight of their own careless practices.

But these scenarios are far off in the future. For right now, what does the group hope to achieve with the site? What’s the goal?

According to Johnson, the focus is on helping people in the present. “I would love for a studio to never have another layoff, but if they do, I would love to place 50% of those people, or be able to place 100% of those people [at new jobs].”

Link has another opinion, true to the blunt nature of the site.

“I’d like it to shut down. I’d like it not to exist.”

Git Off My Lawn – Large and Unmergable Assets

Original Author: Lee Winder

I posted up the Git talk that Andrew Fray and I gave at Develop 2013, and mentioned I’d have a few follow-up posts going into more detail where I thought it was needed (since you often can’t get much from a slide deck, and no-one recorded the talk).

One of the most asked questions was how we handled large and (usually) unmergeable files, mostly art assets, but also other things like Excel spreadsheets for localisation. This was hinted at on slides 35 – 37, though such a large topic needs more than 3 slides to do it justice!

To start, it’s worth raising one of Git’s (or indeed any DVCS’s) major drawbacks: how it stores assets that cannot be merged. Instead of storing history as a collection of deltas (as it does with mergeable files), Git effectively stores every version as a whole file. If you have 100 versions of a 100MB file, your repository could be 10GB in size for that file alone (it’s not quite that clear-cut, but it illustrates the issue clearly enough).
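A back-of-envelope sketch of that arithmetic (the 1% average delta size is my illustrative assumption, not a measurement; real packfiles behave differently, and deltas are largely ineffective for already-compressed binary formats anyway):

```python
# Back-of-envelope for the repository-size problem described above.
FILE_MB = 100       # size of one unmergeable binary asset
REVISIONS = 100     # how many versions of it were committed

# Whole-file storage: every revision is kept in full.
whole_mb = FILE_MB * REVISIONS

# Delta storage, assuming (hypothetically) each revision changes ~1%
# of the file: one full copy plus 99 small deltas.
DELTA_RATIO = 0.01
delta_mb = FILE_MB + FILE_MB * DELTA_RATIO * (REVISIONS - 1)

print(f"whole-file history: {whole_mb} MB")   # 10,000 MB, i.e. ~10 GB
print(f"delta history:      {delta_mb:.0f} MB")
```

Two orders of magnitude of difference for a single asset, which is why binary-heavy history hurts so much more in a DVCS than text history does.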

While this is a drawback of DVCS in general it’s not necessarily a bad thing.

It’s how all SCM systems handle files that can’t be merged (some SCMs do the same with mergeable files too – imagine how large their repositories are), but the problem comes from Git’s requirement that a clone pulls down the whole repository and its history, rather than just the most recent version of all files. Suddenly you have massive repositories on everyone’s local drive, and pulling that across any connection can be a killer.

As an example, the following image shows how a single server/client relationship might work, where each client pulls down the most recent file, while the history of that file is stored on the server alone.

But in a DVCS, the whole repository is cloned on all clients, resulting in a lot of data being transferred and stored on every clone and pull.

Looking at Sonic Dash, we have some pretty large files (though nowhere near as large as they could be), most of them PSDs, along with smaller files like the Excel spreadsheets we use for localisation. Since none of these files are mergeable, and most of them are large, we couldn’t store them in Git without drastically altering our workflow. So we needed a process that let them be part of the general development flow without bringing all the problems mentioned above.

Looking at the tools available, and at what was already in use, it made sense to use Perforce as an intermediary. This would allow us to version our larger files without ballooning the size of our Git repository, but it did bring up some interesting workflow questions:

  • How do we use the files in Perforce without over-complicating our one-button build process?
  • With Git, we could have dozens of active branches, how do they map to single versioned assets in Perforce?
  • How do we deal with conflicts if multiple Git repositories need different versions of the Perforce files?


Solving the first point starts to solve the others. We made it a rule that only the Git repository is required to build the game, i.e. if you want to get latest, develop and build, you only need the Git repository; P4 is never a requirement. As a result, the majority of the team never even opened P4V during development.

This means the P4 repository is designed to only hold source assets, and we have a structured or automated process that converts assets from the P4 repository into the relevant asset in the Git repository.

As an example, we store our localisation files as binary Excel files, as that’s what’s expected by our localisation team, but since that’s not a mergeable format we store them in P4. We could write (or probably buy) an Excel parser for Unity, but that wouldn’t help, since we’d constantly run into merge conflicts when combining branches. So we have a one-click script (written in Ruby, if you’re interested) that converts the Excel sheet into per-platform, per-language XML files that are stored in the Git repository.
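The real script is Ruby and reads the binary Excel file, but the shape of the conversion is simple. Here is a minimal Python sketch of the same idea, using a CSV string as a stand-in for the spreadsheet and sorting keys so the output stays deterministic (all names here are illustrative, not from the actual pipeline):

```python
import csv
import io
import xml.etree.ElementTree as ET

def export_localisation(csv_text, language):
    """Convert one language column of a localisation sheet into a
    deterministic XML document. Sorting by key means the same sheet
    always produces the same file, which keeps diffs small and merges sane."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    root = ET.Element("strings", {"language": language})
    for row in sorted(rows, key=lambda r: r["key"]):
        entry = ET.SubElement(root, "string", {"id": row["key"]})
        entry.text = row[language]
    return ET.tostring(root, encoding="unicode")

sheet = "key,en,fr\ngame_over,Game Over,Partie terminée\nplay,Play,Jouer\n"
print(export_localisation(sheet, "fr"))
```

The deterministic ordering is the important property: it’s what makes the exported files merge cleanly across branches.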

These files are much more Git-friendly, since the exported XML is deterministic and easily merged. For any complex conflicts, whoever is resolving them can do so however they see fit, or just re-export the Excel file to get the latest version.

It also means that should a branch need a modified version of the converted asset, it can either be changed within the Git repository, or you can roll back to the version you want in P4 and export the file again. The version in P4 is always classed as the master version, so any conflicts when combining branches can be resolved by exporting the file from P4 again to make sure you’re up to date.

Along with this, we have some additional branch requirements that help with assets that might not be in Perforce (such as generating texture atlases from source textures), but that’s another topic I won’t go into yet.