Unlocking our potential

Original Author: Alex Moore

In my last article I looked at how content driven games are made, concluding that any sense of control over the story is, ultimately, an illusion. With this article I aim to open the door and look beyond our current limitations and ask instead: is it possible to make a content driven game that isn’t constrained by its creator, but instead crafted by the player?

(Mini warning: this post turned from a simple idea to a fairly epic 2500 words.  Get a cuppa and a chocolate biscuit before attempting to read.)

To answer that, we need to look at what the current constraints actually are. There are lots of small issues but they nearly all fall into one of two categories: cost or artistic desire.


It’s a sad fact but the cost of making AAA games has skyrocketed. On average, on the original PlayStation, a team of about 10 people could make a $60 game in about a year. For the PS2 that went up to more like 30–50 people for two years. For this generation of consoles there is no average – it’s as many people as you can get for as long as you can get them. 100 people taking 3 years is fairly normal, but teams of many more are also common. If the price of games reflected the increased man-year development time, they’d now cost about $1,800 each.


As a very broad stroke: all those pretty images take a lot of developing. The quality of the art in games is far higher than ever before, and the code bases that drive everything are far larger and more complex than we ever imagined they’d be.

The biggest single factor slowing us down, and thus costing us money, is our tools and pipelines. Turnaround for an idea has slowed to an almost archaic crawl because of the time it takes to create content in a AAA game right now. The knock-on effect is a detrimental hit to the quality of the games we are making, and people don’t want to pay $60 a pop anymore, let alone $1,800. There are two ways of fixing this: as an industry we need to invest in finding a different approach to creating content, or we need to develop better tools.  Or, preferably, both.

Different approaches to creating content

Once upon a time, coders created everything in the game.  Art was a means to an end.  Nowadays we have teams of artists sat at desks across the globe, all beavering away on the small details.  That nice crumbly wall in Uncharted 3 probably had three different artists working on it: one for the basic collision mesh, one for the render mesh and one creating the texture and normal map in the first place.  Throw in a lighting artist or two, and a particle effects guy to make the nice clouds of dust when it gets shot.  Oh, and a guy to make the impact sound effects.  And that’s before counting the coders required to make all these systems work together.

Back in 1993, a role-playing game called Dungeon Hack came out. It wasn’t unique by any stretch, but I mention it partly because of how movement systems used to work. The dungeon, as with most games of the time, existed on a grid. Each move took you to an adjacent square, assuming there wasn’t a wall there. The world was rendered in first person, but you could only look around in 90-degree increments and move one square at a time. (Completely unrelated to me writing this article, Legend of Grimrock, which uses this movement system in a modern game, has just been released.)

From a top down view, a dungeon looked a bit like this:

The main reason I mention Dungeon Hack is because of how the dungeons were actually created.  If you look again at the image above you’ll see that it’s possible to create the layout by using just four tiles, rotated into position:

By using a pre-defined set of tiles and an algorithm driven by a randomly generated number to lay them out, a unique dungeon was created every time you started a new game.  Rules were then layered on top to place items and enemies.  In theory, there were limitless combinations, and once the system for creating the content was up and running new levels could be generated almost instantly.  If user control was desired, an editor allowed players to make their own layouts.
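As a rough illustration of the idea, the whole scheme can be sketched in a few lines: a fixed tile set, a seeded random number generator, and a grid to lay them out on. The tile names, constants and function names here are my own invention, not from Dungeon Hack itself.

```javascript
// Hypothetical sketch of Dungeon Hack-style tile layout: a small set of
// pre-defined tiles plus a seeded random number generator yields a
// repeatable "unique" dungeon per seed.
const TILES = ["corridor", "corner", "t-junction", "dead-end"];

// Simple linear congruential generator so the same seed always
// reproduces the same dungeon (constants are a common LCG choice).
function makeRng(seed) {
  let state = seed >>> 0;
  return function () {
    state = (state * 1664525 + 1013904223) >>> 0;
    return state / 4294967296; // value in [0, 1)
  };
}

// Lay out a width x height grid by picking a tile and a rotation
// (0/90/180/270 degrees) for each cell.
function generateDungeon(width, height, seed) {
  const rand = makeRng(seed);
  const grid = [];
  for (let y = 0; y < height; y++) {
    const row = [];
    for (let x = 0; x < width; x++) {
      row.push({
        tile: TILES[Math.floor(rand() * TILES.length)],
        rotation: Math.floor(rand() * 4) * 90,
      });
    }
    grid.push(row);
  }
  return grid;
}
```

A real generator would of course add the connectivity rules mentioned above (no dead-ends butting into corridors, items and enemies placed afterwards), but the core loop really is this small.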

Tile based generation still exists in games like Civilisation, but it has proved very difficult to create modern 3D games with such a system, partly because of the demands on artistic quality and partly because of the desire for unique locations to visit.


Prefabs are a more modern way of generating content, and in essence work in a similar way to tiles, except they are stand-alone units rather than part of an interconnected network.  Humans are very good at spotting repetition, though, which reduces the effectiveness of this system. Houses fairly quickly get spotted as clones, and players tend to lose interest in discovering new places if they think they’ve already seen everything there is to offer.

Procedural Generation

This is based heavily on maths and simple data structures.  The best example of what can be achieved with this method is still the demo scene:

As with the randomly generated dungeons, there is a set of rules controlling the overall scope of the images here.  Similar methods can easily create large-scale environments.  There are limitations though, the main one being creating areas that flow well into one another.

On their own, then, none of these approaches really gives us the flexibility or quality that we desire.  But by combining the three methods into a toolset we could open up new ways of creating large-scale content, rather than just relying on adding more artists into the equation.

Character Creation

Every story needs characters, and in this regard there are already tools that allow us to create a lot of unique models quickly.

I’ve used this video before because it’s a great example of what can be created given the right motivation and time.  We can do something very similar for animations: starting with a base set, we can quickly add variety by using additive animations, such as a limp, or by letting physics code take control.

The tools are the game

Another way is to give players the ability to create content themselves.  Creating levels for Doom was the reason I got into designing videogames in the first place; more recently, Minecraft was, initially, a tool where players could create their own content.  There’s a huge demand for this type of game, where the editor is effectively the game: FarmVille and DrawSomething are other examples. They are very different types of game from the Mass Effects of the world, but it feels like there should be a way of learning from their methodologies to improve our content driven games too.

In his eight-part series he suggests that other gamers could provide the context for the story, a sentiment that I agree with but one that relies on playing online.  EVE Online does this to a degree, and some of the stories of conspiracy that come out of it are great reads.  There’s a flaw with this method though: even with relatively small player numbers (compared to World of Warcraft), each player feels like a very small fish in a very large pond.  Your influence on world events is minimal.  The reason people like playing the hero is the feeling of power and fantasy it gives them, and single player games give you this ability.

So, we can no doubt create worlds and characters much faster than we currently do, but that doesn’t solve the main issue I’m trying to tackle with this post: is there a way to create a story this way?

Story as a mechanic

Glance back up the page at the diagram of the dungeon for a moment.  There’s another way of showing its layout, and that is to instead show each square that requires a change in direction as a node, and the route through as connections:

If you read my previous article on content this should look familiar: this is, more or less, how we currently move through stories in content driven games.

In games today we take freedom of movement for granted: if we want to go and spend time looking at a blade of grass, we can. This is because movement systems have developed from the very simple set of rules of Dungeon Hack et al into a complex mechanic. An almighty array of things happens in the background to grant you that freedom of movement, from how to respond to the amount you’ve moved the analogue sticks through to how the scene behind that broken window renders.

Can we take the same steps with story? Can we create sets of rules that allow the player to drive the available story events at their will? Possibly even the freedom to create their own stories? It’s not going to be easy, and it’s not going to happen overnight, but I think we can.


First, we take the steps required to make content creation faster and cheaper.  Then we need to understand what a story actually is. It’s well documented that all stories fall into one of seven types, which seems like a good place to start:

Overcoming the Monster: The hero learns of a great evil and goes on a journey to destroy it. Star Wars qualifies. Braveheart. Jaws. Any movie with Nazis in it. Some of the Rocky movies. (Is it obvious I am a guy?)

Rags to Riches: A sad-sack beginning that leads to a happily ever after. A lot of Dickens’ stuff fits here. Disney princess movies. Harry Potter. Most every rom-com.

The Quest: Everybody loves a quest where the hero goes on a journey to find something, which can be a Lost Ark (literal or figurative), a body (Stand By Me), or even something unknown and unseen, which is known in Hollywood as a MacGuffin. Sometimes the hero brings his entourage, too. A lot of epics are Quest stories. Like The Goonies. Some of my favorite biblical stories are quests, like Abram and The Wise Men.

Voyage and Return: Like The Wizard of Oz, where Dorothy goes to a weird place with weird rules but ultimately returns home better off. I suppose I like Oz alright, but I’d rather give props to Back to the Future, because I’m of that ilk.

Comedies get their own category, too. For some reason, two people can’t be together, which creates all sorts of antics. They eventually figure it out, though. Again, most every rom-com ever, like When Harry Met Sally, or The Money Pit. (Note: you can make anything into a comedy. For example, Monty Python is a funny Quest movie, but the category here refers to a specific kind of plot, not just anything with humour.)

Tragedies are like riches to rags, where the villain gets it in the end. Macbeth and King Lear are classic examples. Or most slasher pictures, if you go for that sort of thing.

Rebirth is like a tragedy but where the hero realizes his error before it’s too late, like in It’s a Wonderful Life. Which makes me wonder, are there any slasher movies where the bad guy cleans up and catches a ray of sun at the end?


Lessons Learned from Training Interns

Original Author: Ted Spence

Back in 2007, I received a referral from a friend. He knew a student at UCSD who was eager to get some practical programming experience; and I rather enjoyed the idea of helping to launch a promising candidate’s career.

Well, frankly, I wasn’t any good at it at first. But through a bit of luck and a bit of perseverance, over the past few years I’ve trained and graduated a dozen interns, many of whom joined my software development team as positions opened up. It’s been an incredible experience, and I’ve been grateful to all of them for the opportunity to see them grow in talent and ability.

I have also heard from lots of people who have had bad internship experiences; and I’d like to pass on what I can tell you about making the experience a success for everyone.

What Benefits Can You Expect?

Having an internship program can really help your organization, as well as provide meaningful learning opportunities for the interns themselves. I believe the only good internship programs are ones that are mutually beneficial, providing guidance and technical skills for the intern and effective management experience for the company.

There’s a good reason that videogame companies attract interns: we do fun and challenging work; and we use skills that are complex and multidisciplinary. An intern in the videogame field has the potential to learn a lot in a very short time, especially if you involve them in the business and show them how their work contributes to a goal.

Interns know this too; and that’s why they’re willing to put up with low pay and low status in order to get started. But your company shouldn’t just take advantage of this desire – you should be prepared to provide value back to the intern.

Doing so helps to establish a company culture of caring rather than taking. A good internship program can provide a halo to your company: successful interns who enjoyed your program, even the ones who get fulltime positions elsewhere, will raise your company’s status as an employer. And designing this mentoring process helps to build your business’ ability to grow all employees, not just interns.

Your Responsibilities

Before you start, make sure you’re prepared to go through the hard work to have an intern. It isn’t all fun and games!

  • Your staff should be able to dedicate 1-3 hours per day to mentoring each intern separately; try dividing that time between two senior employees, one the lead and one the secondary mentor. The goal is to make the intern independent, but still give them opportunities to learn.
  • You should have a clearly defined introductory task for the intern; ideally a side project that doesn’t require them to learn tons of processes before they can get started. Look around – there are probably a few one-off ideas lying dusty on the shelf. When the intern has succeeded at their first task, you should gradually increase the complexity and interconnectedness of their work.
  • Your intern should have a computer and enough space to get quiet work done. You want an intern to balance time between asking questions and researching their project; not all interns can shut out the distractions. If the office is noisy in general, offer headphones. Interns don’t do well with telecommuting; they need in-person supervision and reinforcement.
  • You should be prepared to pay your intern. Although the US government has some rules that permit some internships to be unpaid, the rules are tricky and you’re best off not going that route. If you read the six criteria, they’re pretty vague; and your company can be on the receiving end of a serious lawsuit. Pick a wage and offer something.
  • You must be excited about your own work before you can bring on an intern. Don’t neglect this! Interns don’t work well unless they have an opportunity to see and share in your passion for your business.

When you’ve ensured that your company can support an intern, it’s time to begin searching.

Acquiring your First Intern

Internship ads are written differently than traditional job listings. Make it clear that the internship is for a fixed length of time; I prefer three months. This is necessary to be fair to the intern; you need them to know that, at the end of this time period, they will be successfully on their way to the fulltime career they want.

Focus on the rich skills you will teach, the excitement of your business, and explain how your uniquely professional work environment will best prepare the intern for a full career. Limit your “requirements” to general talents, basic knowledge, and motivation; and encourage the candidate to explain to you what sets them apart. For example:

Mobile Developer Internship – Foo Inc, the world’s leading developer of Bars, has a three month paid internship position available to help its mobile app development team writing web services and user interface code. During your internship, you’ll learn the most advanced techniques for mobile user interface design from industry heavyweights. Requirements: Familiarity with mobile phones and HTML, a sharp mind, attention to detail, and a commitment to getting the job done.

Next, let’s get this advertisement in the field. Try to identify five to ten targets for your job posting. Craigslist is one place to start; each advertisement costs $25 and, in my experience, generates a decent response. Don’t forget to post the internship on your website and advertise it on LinkedIn, Facebook, and Twitter. You can also contact the dean of a local university or community college and offer your internship to their students, but be prepared to have your company vetted before you can share listings directly with students.

With the posting in hand, it’s time to begin! Schedule all the ads to start on the same day, and keep up the momentum. Interns only have a brief moment of time when they can consider an internship, so respond quickly, ideally within 24 hours after they send their resume to you. I find it works best to speak to two or three candidates each day by phone, and interview in person about a half dozen top tier candidates.

When you move on to in-person interviews, the goal is to identify three things within about an hour:

  • Does the intern have enough basic knowledge to be useful in a three month window?
  • Can the intern listen effectively and ask good questions?
  • Does the intern have the motivation and drive to succeed?

When you’ve found a candidate that passes the test, make your pick fast; probably within the first week after interviews start. Let the intern know clearly that you will give them a great opportunity, and in return for their hard work you’ll give them a career.

Daily Experience

Once the intern starts, you should monitor them every day and provide the following:

  • A limited but useful amount of mentoring. You can’t have the intern asking questions every five minutes and sapping the team’s concentration; but neither can you have an intern firing aimlessly when a quick question would get them on target. Tell your intern to ask two really good questions about their project every day.
  • Tasks that can be completed. Find simple tasks and get the intern to succeed at those before moving onto more complex and risky tasks. Many interns just need to see the fruits of their labor in use to level up. Gradually increase the task complexity, but only after a task is fully seen through to completion!
  • Opportunities to shape their own unique career path. After each task completes, ask them what they liked about it and what they didn’t. Seize every chance to give the intern work they find interesting. I am reminded of Harry Truman’s phrase, “The best way to give advice to your children is to find out what they want and then advise them to do it.”

In return, you need to start communicating expectations to the intern. Since many interns come straight out of college or out of a different career path, you may want to explain to them the standards required by your business, such as punctuality, professionalism, office culture, dress codes, HR rules, or adjusting to the workload. Be patient with them, and give them time to cope, but also be clear: meeting the business rules is a condition of the internship.

As work progresses, find opportunities to compliment your intern and gradually expose them to more and more advanced projects. I like to provide about nine compliments for each criticism I give. If you fall far south of that line, perhaps either the intern or your motivational style needs to change. Constructive criticism is your friend; honest and straightforward feedback is a critical necessity for career development. If an intern doesn’t improve after a warning, it’s best to let them move on to a place they can be more successful.

Ending an Internship

At the end of three months – or whatever time frame you picked – you should release your intern. During the last few weeks of the time frame, try to give them general career counseling. Set up a few meetings where you can answer their questions about jobs in their chosen field.

If you are lucky enough to have a full-time position open that is suitable for the intern, encourage them to apply for the position and give them a proper hearing. Doing so professionally results in a clear transition from intern to employee.

If your business doesn’t have a position available, don’t fret. Hold your intern’s hand up high and celebrate their work. Give them a strong reference, and offer to hand-deliver their information to colleagues you may know at other companies. Take the intern out to lunch to celebrate the launch of a successful career – you’ve both earned it!

Game Engines for Indies

Original Author: Kyle-Kulyk

Itzy Interactive formed with mobile game development in mind, and multiplatform development was important to us as we set out to start our business.  We were looking for a “complete package” solution.  Bear in mind, I haven’t had the opportunity to work with all the engines mentioned, so some of my points are based on the opinions of other developers and fans on various forums, and there are certainly other engines available depending on the type of work you’re attempting.



Most are familiar with the Unreal Development Kit.  It’s a proven engine that’s been used in a tonne of AAA titles, but how does it fare for indie developers?  The first thing you’ll notice with UDK is the learning curve.  It’s steep.  Developers I’ve spoken to have all expressed this same sentiment, and my own experiences with UDK left me feeling that it seemed needlessly complicated.  I had taken a few courses using UDK in the past, and while practice makes perfect, even when I became more familiar with it I found I simply didn’t like using it compared to the alternatives.  The second strike is the need to learn UnrealScript.  It’s a fairly straightforward language, in my opinion; however, being confined to UnrealScript can take away valuable development time when you’re starting off.

UDK is capable of delivering high quality graphics out of the box but it seems geared towards First Person Shooters (much like CryEngine).  I’ve heard some complain about the difficulties involved in trying to bend UDK to other genres.  FPS games developed with UDK also have a tendency to end up feeling like Unreal Tournament clones.  UDK now supports iOS development in addition to Windows but don’t expect to port your projects over to anything else.

UDK is free for non-commercial use.  Plan on selling your game and you’ll need to fork over $100, with no royalties to worry about until you hit $50,000.  After that, expect to pay a 25% royalty, which, when you consider iOS development and the 30% Apple takes, can certainly add up.  UDK is a bit of a sacred cow for some in the development community, but for indie developers it’s big, unwieldy and limited in supported platforms.  If you’re looking to add your shooter to a saturated shooter market, UDK may work for you, but I wouldn’t recommend it for smaller teams of developers.

Notable title:  The Ball



Our first release on both platforms used the Unity engine.  To me it just seems the complete game solution for indie developers.  What helped sell us on Unity3d was the ease with which you could build your project, with a one-click solution to build for different platforms.  Unity3d supports Android, Web, iOS, Windows and Mac builds.  The option for console development exists as well; however, you must first jump through the hoops necessary to be recognized by the console makers, and then Unity will provide a license per title (similar to UDK).  They also recently added Flash support.  Aside from specific tweaking for things like the way each mobile platform handles its store kit, the amount of customization necessary to publish on one platform compared to another seemed minimal.  Programming for Unity3d was also a breeze, as Unity3d can handle C#, JavaScript and Boo.  One of my pet peeves is having to learn some obscure scripting language to use a product.

Unity3D also has a robust development community with excellent support from other users sharing scripts and tutorials.  As well, the Unity Asset store has some excellent plugins that can shave weeks off development time and most are reasonably priced.

Although a free license is available, anyone serious about game development will want to shell out for the Pro licensing to take advantage of more advanced features, from built-in pathfinding and physics to shadows, occlusion culling and the ability to strip out unrequired assets when creating your builds.  No other fees are required.  Unity Pro with the Android Pro and iOS Pro licences will set you back $4,500, but if you keep your eyes open it’s not uncommon to see the Pro licenses offered at 20% off.  This is still pretty steep for an indie developer starting out, but once you have these upfront costs out of the way, that’s it.  It’s free to try, and there are cheaper licenses available.  I would certainly recommend giving it a spin.

Notable title:  Battlestar Galactica Online



C4 Engine

The Torque3D engine was originally based on the Tribes 2 engine from over a decade ago and gives users access to the source code.   While many fondly remember Tribes 2, unfortunately the general consensus seems to be that Torque hasn’t been able to keep pace, with many complaining about an unchanged engine and broken tools.  Also, like UDK, Torque uses a non-standard scripting language – “Torquescript”.  Generally, Torque is serviceable, but most of its features are met with a resounding “meh” on the indie forums.

What Torque has going for it is some nice networking code and a low price, although be warned that Torque3d, Torque2d and Torque2dIOS are all separate programs with separate licenses.  Also, expect to shell out for pretty much everything, from basic tool packs and editors to genre framework packs.  You can easily end up paying hundreds extra for some basic features.  Android support appears non-existent.

Notable title:  Penny Arcade

Final Thoughts

By no means is this meant as a complete list of available solutions out there.  Certainly there are other options available with a few geared towards specific types of development and developer skill level but I hope that if you’re considering becoming an independent game developer and are looking for a more complete solution, these summaries will help start you on your way.

Software Rasterizer Part 1

Original Author: Simon Yeung



A software rasterizer can be used for occlusion culling; some games, such as Killzone 3, use this to cull objects.  So I decided to write one myself. The steps are: first transform vertices to homogenous coordinates, then clip the triangles to the viewport, and finally fill the triangles with interpolated parameters.  Note that the clipping process should be done in homogenous coordinates before the perspective division, otherwise a lot of extra work is needed to clip the triangles properly, and this post will explain why clipping should be done before the perspective division.

Points in Homogenous coordinates

In our usual Cartesian coordinate system, we can represent any point in 3D space in the form (X, Y, Z). In homogenous coordinates, a redundant component w is added, resulting in the form (x, y, z, w). Multiplying that 4-component vector by any constant (except zero) still represents the same point in homogenous coordinates. To convert a homogenous point back to our usual Cartesian coordinates, we scale the point so that the w component equals one:

(x, y, z, w) -> (x/w, y/w, z/w, 1) -> (X, Y, Z)

In the following figure, we consider the xw plane: a point (x, y, z, w) is transformed back to the usual Cartesian coordinates (X, Y, Z) by projecting onto the w=1 plane:

figure 1. projecting point to w=1 plane

The interesting case comes when the w component equals zero. Imagine w getting smaller and smaller, approaching zero: the coordinates of the point (x/w, y/w, z/w, 1) will get larger and larger. When w equals zero, we can represent a point at infinity.
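The conversion above is small enough to sketch directly (my own illustration, in the JavaScript the final rasterizer will use; function names are mine):

```javascript
// Project a homogeneous point [x, y, z, w] onto the w = 1 plane to get
// Cartesian [X, Y, Z]. When w is zero the point is "at infinity".
function toCartesian([x, y, z, w]) {
  if (w === 0) return [Infinity, Infinity, Infinity]; // point at infinity
  return [x / w, y / w, z / w];
}

// Multiplying by any non-zero constant represents the same point:
function scale([x, y, z, w], k) {
  return [k * x, k * y, k * z, k * w];
}
```

For example, (2, 4, 6, 2) and (10, 20, 30, 10) both project to the same Cartesian point (1, 2, 3).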

Line Segments in Homogenous coordinates

In homogenous coordinates, we can still represent a line segment between two points P0 = (x0, y0, z0, w0) and P1 = (x1, y1, z1, w1) in parametric form:

L= P0 + t * (P1-P0),   where t is within [0, 1]

This gives a line with the following shape:

figure 2. internal line segment

The projected line on the w=1 plane is called an internal line segment in the above case.

But what if P0 and P1 have coordinates where w0 < 0 and w1 > 0?

figure 3. external line segment

In this case, shown in the figure above, an external line segment is formed. This is because the homogenous line segment has the form L = P0 + t * (P1-P0); when moving the parameter from t=0 to t=1, since w0 < 0 and w1 > 0, there exists a point on the homogenous line where w=0. This point is at infinity when projected onto the w=1 plane, so the projected line segment joining P0 and P1 passes through the point at infinity, forming an external line segment.
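The w=0 crossing is easy to locate numerically: setting the w component of L(t) to zero gives t = -w0 / (w1 - w0). A small sketch (helper names are mine, not from the post):

```javascript
// Evaluate the homogeneous segment L(t) = P0 + t * (P1 - P0).
function lerp4(p0, p1, t) {
  return p0.map((c, i) => c + t * (p1[i] - c));
}

// Find the parameter t where the w component crosses zero, i.e. where
// the projected point shoots off to infinity and the projection splits
// into an external segment. Returns null for a purely internal segment.
function wZeroCrossing(p0, p1) {
  const w0 = p0[3], w1 = p1[3];
  if ((w0 < 0) === (w1 < 0)) return null; // w never changes sign
  return -w0 / (w1 - w0);
}
```

For P0 = (1, 0, 0, -1) and P1 = (1, 0, 0, 1) the crossing is at t = 0.5, where L(0.5) = (1, 0, 0, 0): exactly a point at infinity.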

The figure below shows how points are transformed before and after perspective projection and division by w:

figure 4. region mapping
The blue line shows the viewing frustum; nothing unusual happens for the region in front of the eye. The unusual part is the points behind the eye. After the perspective transformation and projection onto the w=1 plane, those points end up in front of the eye too. So a line segment with one point in front of the eye and the other behind it would be transformed into an external line segment after the perspective division.

Triangles in Homogenous coordinates
In the last section we saw that there are internal and external line segments after the perspective division; similarly, there are internal and external triangles. The internal triangles are the ones we usually see. An external triangle must be formed by 1 internal line segment and 2 external line segments:
figure 5. external triangle

In the above figure, the shaded area represents the external triangle formed by the points P0, P1 and P2. This kind of external triangle may appear after the perspective projection transform. And it happens in the real world too:

an external triangle in real world
the full triangle of the left photo

The left photo shows an external triangle with one of the triangle's vertices far behind the camera, while the right photo shows the full view of the triangle; the cross marks the position of the camera where the left photo was taken.

Triangles clipping

To avoid the case of external triangles, lines/triangles should be clipped in homogenous coordinates before dividing by the w component. The homogenous point (x, y, z, w) is tested against the following inequalities:

(-w <= x <= w) &&   —— inequality 1
(-w <= y <= w) &&   —— inequality 2
(-w <= z <= w) &&   —— inequality 3
(w > 0)    —— inequality 4

(The z clipping plane inequality is 0 <= z <= w in the case of D3D; it depends on how the normalized device coordinates are defined.) Clipping by inequalities 1, 2 and 3 will effectively clip all points with w < 0, because if w < 0, say w = -3, inequality 1 becomes:

3 <= x <= -3     =>     3 <= -3
which is impossible. But the point (0, 0, 0, 0) still satisfies the first 3 inequalities and can form external cases, so inequality 4 is added. Consider a homogenous line with one end at (0, 0, 0, 0); it equals:
L = (0, 0, 0, 0) + t * [ (x, y, z, w) – (0, 0, 0, 0) ] = t * (x, y, z, w)

which represents only a single point in homogenous coordinates. So a triangle (after being clipped by inequalities 1, 2 and 3) having one or two vertices with w=0 degenerates into a line or a point, which can be discarded. Hence, after clipping, no external triangles will be produced when dividing by the w component. When clipping a triangle against a plane, the result may be either 1 or 2 triangles, depending on whether 1 or 2 vertices lie outside the clipping plane:

figure 6. clipping internal triangles

The clipped triangles can then be passed to the next stage to be rasterized, for example by a half-space algorithm.
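As a rough illustration of the clipping step, here is a Sutherland–Hodgman style clip of a convex polygon against a single plane, working in homogeneous coordinates (i.e. before the divide by w). The names and the plane-as-4D-vector encoding are my own sketch, not the code from the upcoming post:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Vec4 { float x, y, z, w; };

// Signed distance of a homogeneous point from a clip plane, as a 4D dot
// product. E.g. the plane x <= w is encoded as (-1, 0, 0, 1), giving
// d = w - x, which is >= 0 for points inside.
float planeDistance(const Vec4& plane, const Vec4& p)
{
    return plane.x * p.x + plane.y * p.y + plane.z * p.z + plane.w * p.w;
}

Vec4 lerp(const Vec4& a, const Vec4& b, float t)
{
    return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y),
             a.z + t * (b.z - a.z), a.w + t * (b.w - a.w) };
}

// Sutherland–Hodgman clip of a convex polygon against one plane.
// A triangle comes out with 0, 3 or 4 vertices (i.e. 0, 1 or 2 triangles
// after re-triangulation), matching figure 6.
std::vector<Vec4> clipAgainstPlane(const std::vector<Vec4>& poly, const Vec4& plane)
{
    std::vector<Vec4> out;
    for (std::size_t i = 0; i < poly.size(); ++i)
    {
        const Vec4& a = poly[i];
        const Vec4& b = poly[(i + 1) % poly.size()];
        float da = planeDistance(plane, a);
        float db = planeDistance(plane, b);
        if (da >= 0.0f)
            out.push_back(a);                          // a is inside: keep it
        if ((da >= 0.0f) != (db >= 0.0f))              // edge crosses the plane:
            out.push_back(lerp(a, b, da / (da - db))); // keep the intersection
    }
    return out;
}
```

Running the full triangle through each of the clip planes in turn (including w > epsilon for some small epsilon) yields the clipped polygon, which can then be re-triangulated by fanning from its first vertex.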

Below is the clipping result of an external triangle with 1 vertex behind the camera.

clipping external triangle in software rasterizer

Below is another rasterized result:

rasterized duck model
reference of the duck model


In this post, the maths behind the clipping of triangles has been explained. Clipping should be done before projecting the homogeneous points onto the w = 1 plane, to avoid having to take special care of the external triangles. In the next post, I will talk about perspective interpolation, and the source code will be given there (written in JavaScript, drawing to an HTML canvas).

And lastly special thanks to Fabian Giesen for giving feedback during the draft of this post.






Technical Sound Design: An Interview with Damian Kastbauer

Original Author: Ariel Gross

I caught up with Damian Kastbauer, technical sound designer, in the sticky jungles of the Congo last week. He was questing for the fabled paisley hippopotamus. Legend says that when the paisley hippopotamus is kissed upon its patterned lips, the kisser is granted a treasure of immense value.

I found him hiding behind a giant leaf, wearing a giant leaf. He was pressing binoculars against his eyes, gazing towards a nearby algae-covered pool. I crouched next to him and started asking him questions.

Ariel Gross: So, Damian, what is it that you do? Aside from questing after fabled beasts of yore?

Damian Kastbauer: I’m a technical designer for games, which is a somewhat nebulous term. It’s defined pretty well by Rob Bridgett in this article. Essentially, I try to serve as a bridge between sound content and programming/engine-side integration in order to create systems for sound playback, or just plain getting sounds in the game working and sounding right.

AG: Nice. My opinion is that sound design in games is only as good as its implementation. What do you think about that?

DK: It’s often said that the best sound can end up sounding bad if it’s not implemented properly, so, a lot of the time I’m trying to let the sound do its thang… which is not always easy. Since I don’t create content, and I’m not a programmer by definition, I usually spend a lot of time building tools, streamlining pipelines, and strategizing audio features with the help of the programming team, or working with audio middleware tools to similarly create the smoothest integration possible.

Damian finally released the binoculars, which then fell and dangled around his neck, but kept his hands in place, and continued to look off towards the pool with his hand-noculars. It was an interesting technique that I had never seen used in the field before. He made a refocusing gesture and continued to peer out through his hand holes.

AG: So… do you find yourself being a middle man between Team Audio and Team Programming? That could be challenging, what with all the unrequited love and/or red-hot rage between those two disciplines.

DK: Definitely, I’m helping to bridge the gap between the two worlds. Whenever possible, I like to enable content designers to just “design in the box” all day from the DAW without having to worry about the sometimes-labyrinthine pipeline to get sound into the game. In this way, they can focus on what they’re really into and I can go to town with making sure it all works within the context of the game. It’s a balance that requires human interaction, but I’m pretty keen on that as well.

AG: Wow, you just said labyrinthine. I’m golf clapping right now. In my head. And in my heart. I don’t want to scare off the paisley hippo. You’re welcome. So, have you found any common issues between Team Audio and Team Programming among multiple developers, and do you take measures to help both sides see eye to eye?

DK: I think one of the missing or underdeveloped pieces of some studios is the direct communication between Team Audio and the rest of the teams. It’s common for audio to touch almost every corner of the pipeline and development process.

With Team Audio often sequestered within a sound-proof cave deep within the bowels of a studio, it can be hard to get the kind of happy accident or magic moment that can come from being out in the pit or lined up in a hallway.

AG: This is painfully true.

DK: One of the things that has always seemed right about my involvement with different game studios on-site is the necessity of being “on the floor” with the other disciplines… which has always been a positive in bridging that gap.

When I work remotely, I put a tremendous amount of effort into achieving a high level of communication between the different teams on a project. Whether it’s e-mail, IM, Skype, or occasional on-site visits during development, the communication between people is among the most important aspects of working on a project.

If I am outside the fence, having an on-site advocate for audio is a definite plus, and in most cases, it is key to making my role as a facilitator for sound actually happen.

Some foliage near the murky pool began to shake. Damian dropped his hand-noculars and placed one hand over his mouth to ensure complete silence. He placed his other hand over my mouth, too. I resisted the urge to lick his hand. A rabbit bounced from the foliage. We both sighed deeply.

AG: Miv viff fuhm coffm prbflrm…

Damian removed his hand from my mouth.

AG: Is this a common problem at studios you’ve worked with? That Team Audio doesn’t pop their heads up through the manhole enough to be part of the greater discussion?

DK: Whew, well, I think that as a matter of course it is hard to have visibility across the team in the same way as being out in an open floor plan, when part of your daily work requires you to be locked away in a padded room.

That is, people can’t SEE you and demand your attention by rolling up to your workstation. I mean, there’s a door… you have to turn the knob… like, maybe even knock… and then get blasted in the face by high decibel explosions in order to interface. We know how averse most people are to moving in the first place, let alone opening a door.

I chuckled.

DK: A little dry, mid-west sarcasm for you.

I chuckled more, leading to a whole-body belly laugh, leading to a maniacal, snorting cackle, ending finally with a very feminine giggle. Then we sat motionless for what felt like an eternity.

AG: Sorry. What you say is completely true. Before tracking you down in the Congo, I hadn’t left my seat in four years. Not even to use the restroom. I held it. What you saw at the GDC was my hologram. A hologram with mass. I digress. What is the number one reason why a studio hires a technical sound designer?

DK: The number one reason I am hired is to help ship games. Having shipped my fair share, I would like to say that I know all the tricks in the book, but this is absolutely not true.

What I hope that I bring to a project is an ability to work together with people to get things done, through thick and thin, regardless of the task at hand.

Sometimes I don’t even know what I’m going to be doing until I get involved and assess the situation… this can be true of the studios as well. They’re not sure what exactly needs to be done, but they need someone to help pull it off in the eleventh hour.

AG: You’re basically a hero that is brought in to save the day.

DK: If I can come in and take the burden off of content designers, help design complex playback systems for physics or procedural animations, help wrangle memory budgets or streaming look-ahead times, then that is one less thing for someone, who already has too much on their plate, to worry about.

For remote work, it can be at any time during the project; I work from the home studio, connected to a VPN and using source control, making local builds from day one.

When I do work on-site for an extended period of time, it’s usually toward the end of a project for a few months.

At this moment, the waters of the pool began to ripple as a large, paisley-patterned mound broke the surface of the water. Then, two paisley ears emerged, followed by blinking eyes. Damian fidgeted with excitement, but stayed put.

AG: You mentioned memory and streaming.  This is something that some people don’t realize. Someone on Team Audio is usually tasked with wrangling memory budgets and streaming bandwidth. It’s really techy work.

This isn’t always expertise that is considered when hiring an audio designer. I’ve found that studios tend to focus more on whether or not an applicant can make rad sounds, with less emphasis on the technical side of things.

There’s usually some requirement that says, “familiarity with middleware tools and implementation,” but I get the sense that the weighting at most companies is 80% creative aptitude and 20% technical prowess. Does that seem accurate to you? Or am I totally wrong? And ugly looking?

DK: I think the majority of people come to studios with a sound design background. Until recently, it has been almost non-existent to have someone come out of school with an education in the technical side of game audio.

It used to be that “audio implementer” was the job that someone new to the industry would take on their way to becoming a sound designer… or even a musician or composer for games… that was really the only way to learn the ropes.

When I got into games, it was clear to me that the technical side was the only thing that I wanted to do. Thankfully, in the past few years, it has become a niche that has benefited from a group of people whose interests are purely in the technical, which has helped to establish legitimacy for the specialization.

AG: We used an in-game editor to place ambient audio emitters on Saints Row 2. Sometimes the frame rate would tank while I was moving an ambient emitter around, and I’d fling the emitter object 100 miles away, sometimes into a different game entirely. I think a few of those emitters landed in Red Faction: Guerrilla.

DK: Hahaha… user interface physics lerping… TOOLS!

AG: Do you ever find yourself caught up in the mystery of missing sounds? Like, the audio team is sure that a sound should be playing, but it’s not. Is that something you run into a lot?

DK: Totally! Someone did a model swap but forgot to bring the audio hooks with it, or moved the entire level geometry across the level away from the audio emitters that were placed on a different layer in the editor… happens ALL the time, and the resources to figure it out are sometimes underestimated.

With all the different ways that people can approach game audio these days: mods, Unity, UDK, FMOD, Wwise, SDK documents, Max/MSP or pd, it’s becoming common to have fresh faces joining the team who already have a good handle on the tasks at hand and are ready for the challenge.

The fabled paisley beast began to slowly emerge from the algae-covered pool. Watching the water cascade off of its psychedelic hide was glorious. Swirls of color and waves of sweet smells washed over me. Granted, I had also been munching on some random red berries that I’d found on the ground.

AG: I know plenty of sound designers that don’t want to do the implementation. They look at it as a boring chore. A chore-bore. A borch. Anyway, I could see some Team Audios out there wanting to hire you on full time. Do you ever get this? Have you considered an in-house gig?

DK: I’ve worked remotely on a couple of projects for a longer duration (1-2 years) serving as a conduit for getting content in the game. I’ve worked a ton with Bay Area Sound as an outsource audio solution involving their audio and music content creation in conjunction with my management of the technical side in what has been a perfect solution for smaller developers and teams that need supplemental assistance.

I worked for five years in this capacity on the games for Telltale. I integrated content provided by Bay Area Sound and Harmony Machine into The Saboteur from Pandemic Studios for about a year to help augment their in-house audio team, which included building vehicle systems in Wwise to work with the simulation team who was responsible for the cars driving around in the open world.

Most recently I’ve been working again with Bay Area Sound to integrate content and tool the pipeline for a couple of MMOs using the Hero Engine in conjunction with FMOD.

AG: How about documentation? I have seen with mine own eyes the lack of documentation out there. Do you find yourself having to trudge through undocumented systems a lot? Do you end up documenting them yourself as part of your services?

DK: Documentation is always a tricky one. A lot of developers who have been iterating on in-house engines and tool sets carry with them a ton of institutional knowledge… usually within their brains. There’s usually a pretty intense time of education at the beginning of the project where I do my best to document… for my own sanity as well as anyone who might find themselves in a similar position in the future.

I recently worked with a company and provided an evaluation of their audio middleware integration in addition to recommendations for how to move forward with some key changes. In that case, it was documentation and justification for the different recommendations, as well as education in order to get everyone thinking about the possibilities.

AG: Do you find that most companies that you work with have an audio programmer on stand-by to compliment you? Compliment both in terms of assisting you in a complimentary fashion, and also to tell you that your shirt looks very nice tucked in.

DK: It’s definitely becoming more common for developers to recognize the need for audio programming support throughout the project. That said, I think there is still a false expectation that you can just throw the UI programmer at it for a couple of months and sew everything up.

The need for dedicated audio programmers is definitely growing as we continue to scale in meeting the demands in quality during the current generation.

So, I guess, having an audio programmer is necessary to compliment the technical side of sound design, especially a dedicated professional who is invested in sound. Their ability to bring not just programming, but a knowledge of audio to the table can only mean an increase in the quality of the audio and the quality of life for Team Audio.

That, and yeah… someone to tell me when my dashiki doesn’t match my socks.

AG: Last question, and then you can get back to your quest. What do you think will change from a tech audio standpoint with the next generation of consoles?

DK: I think the industry at large will further close the gap between pre-rendered and in-engine… across all disciplines, a continuation to move forward into real-time.

What this means for audio is building on the sample-based methodology that has firmly taken root in the current generation while using more procedural audio, synthesis, and DSP to modify or accent sample-based content dynamically at runtime.

Additionally, increased transparency and better communication in both directions between the game audio and audio engines will develop, which will in turn necessitate a further push towards enabling this functionality within usable tool sets. This accessibility will reach into every corner of the audio production pipeline, simultaneously exposing the technology while making it easier to adapt it to the interactive needs of the game.

Hopefully the new consoles will also make espresso.

AG: Espresso feature would be cool. I’m hoping that someday I can feed a slice of bologna directly into the console. It would recognize the meat as bologna and procedurally create a game called Bologna Quest, where a young adventurer named Ariel, clad in the finest bologna armour, would climb the mountain of… Damian?

Damian, recognizing that I had started my typical bologna-related sign-off, was already planting a kiss upon the lips of the sleeping paisley hippopotamus. Then, a flash of light, and they both vanished, never to be seen again.

C/C++ Low Level Curriculum Part 7: More Conditionals

Original Author: Alex Darby

Hello humans. Welcome to the 7th part of the C/C++ Low Level Curriculum series I’ve been writing. This post covers the conditional operator, and switch statements. As per usual I will be showing snippets of C++ code and throwing the corresponding x86 assembler at you (as produced by VS2010) to show you what your high level code is actually doing at the assembler level.

Disclaimer: in an ideal world I’d like to try to avoid assumed knowledge, but keeping up the level of detail in each post that this entails is, frankly, too much work. Consequently I will from now on point you at post 6 as a “how to” and then get on with it…

Here are the backlinks for preceding articles in the series (warning: it might take you a while, the first few are quite long):

  1. /2011/11/09/a-low-level-curriculum-for-c-and-c/
  2. /2011/11/24/c-c-low-level-curriculum-part-2-data-types/
  3. /2011/12/14/c-c-low-level-curriculum-part-3-the-stack/
  4. /2011/12/24/c-c-low-level-curriculum-part-4-more-stack/
  5. /2012/02/07/c-c-low-level-curriculum-part-5-even-more-stack/
  6. /2012/03/07/c-c-low-level-curriculum-part-6-conditionals/ [see near the top of this post for details on compiling & running the code snippets]

The conditional operator

I assume that everyone’s familiar with the conditional operator, also known as the “question mark”, or the ternary operator (“ternary” because it’s the only C/C++ operator that takes three operands).

If you’re not, here’s a link so you can catch up (I predict that you will be so stoked to find out about it that you will be over-using it within the week).

Personally I heartily approve of the conditional operator when used judiciously, but it’s not always great for source level debugging because it’s basically a single line if-else and can be hard to follow in the debugger (in fact I’ve heard of it being banned under the coding standards at more than one company, but there you are we can’t all be sane can we?).

Anyway, let’s have a quick look at it with some code:

#include "stdafx.h"

int main(int argc, char* argv[])
{
    // the line after this comment is logically equivalent to the following line of code:
    // int iLocal; if( argc > 2 ){ iLocal = 3; }else{ iLocal = 7; }
    int iLocal = (argc > 2) ? 3 : 7;
    return 0;
}

If you remember the assembler that a basic if-else generated in the last article, then the assembler generated here will probably bust your mind gaskets…


  1. I’ve deliberately left the function prologue and epilogue out of the asm below, and just left the assembler involved with the conditional assignment
  2. if your disassembly view doesn’t show the variable names, then you need to right click the window and check “Show Symbol Names”
     5:     int iLocal = (argc > 2) ? 3 : 7;
  01311249  xor         eax,eax
  0131124B  cmp         dword ptr [argc],2
  0131124F  setle       al
  01311252  lea         eax,[eax*4+3]
  01311259  mov         dword ptr [iLocal],eax

Clearly this is not very much like the code for the simple if-else that we looked at previously.

This is because there is trickery afoot and the compiler has chosen to do sneaky branchless code to implement the logic specified by the C++ code.

So, let’s examine it line by line:

  • line 1 – uses the xor instruction to set eax to 0. Anything XORed with itself is 0.
  • line 2 – as in the previous if examples this uses cmp to test the condition, setting flags in a special purpose CPU register based on the result of the comparison.
  • line 3 – this is a new one! The instruction setle (“set if less than or equal”) sets its operand to 1 if the 1st operand of the preceding cmp was less than or equal to the 2nd operand, and to 0 if it was greater. We’ve not seen the operand al before; it’s a legacy (8086) register name which now maps to the lowest byte of the eax register (if you’re a sensible person and are stepping through this code in your debugger with the register window open, you will see that this instruction causes the eax register to be set to 1 – also note that this only works because eax has already been set to 0).
  • line 4 – uses the load effective address instruction to do some sneaky maths that relies on the value of eax set by setle in line 3.
  • line 5 – moves the value from eax into the memory address storing the value of iLocal

That’s all fine, but how does it work?

Firstly, note that at the assembler level the comparative instruction setle is (as in the previous post’s examples) testing the opposite condition to the conditional specified in the C++ code.

This means that the eax register will be set to 0 in line 3 if argc is greater than 2, which in turn means that the eax*4+3 part of line 4 will evaluate to (0*4)+3 – i.e. 3.

Conversely, if argc is less than or equal to 2, the eax register will be set to 1 which in turn means that line 4 will evaluate to (1*4)+3 – i.e. 7.

So, as you can see, the assembler is doing the same branchless set of instructions regardless of the condition, but using the 0 or 1 result of the conditional instruction in the maths to cancel out or include one of the terms and give what I like to call a “mathematical if”. Clever.
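Written back out as C++, the compiler’s “mathematical if” looks something like this (a sketch of the idea, not literally what VS2010 emits – the function name is mine):

```cpp
#include <cassert>

// The compiler's trick from the disassembly, expressed in C++:
// (argc > 2) ? 3 : 7 becomes 4 * (argc <= 2) + 3.
// The comparison gives 1 when argc <= 2 (selecting 7) and 0 otherwise
// (selecting 3) -- exactly what setle al did.
int mathematicalIf(int argc)
{
    int isLessOrEqual = (argc <= 2); // bool converts to 0 or 1, like setle al
    return isLessOrEqual * 4 + 3;    // like lea eax,[eax*4+3]
}
```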

Incidentally this sort of branchless-but-still-conditional code has been a bit / lot of a hot topic over the last few years, especially on consoles, since their CPUs are particularly sensitive to branch mis-prediction.

Judicious use of the “branchless conditional” idiom is a tool that can be used to combat branch (mis-)prediction related performance issues – for an example of this, see the use of the fsel PPU instruction in this article by Igor Ostrovsky (who works for Microsoft).

The conditional operator (part deux)

So, clearly our above super-simple-sample resulted in the compiler generating clever assembler because of the constant values in it; interesting certainly, but not necessarily representative of most “real world” assembler.

Let’s see what happens if we use variables with the conditional operator…

#include "stdafx.h"

int main(int argc, char* argv[])
{
    int iOperandTwo = 3;
    int iOperandThree = 7;
    int iLocal = (argc > 2) ? iOperandTwo : iOperandThree;
    return 0;
}

And, here’s the relevant disassembly:

     5:     int iOperandTwo = 3;
  00CF1619  mov         dword ptr [iOperandTwo],3
       6:     int iOperandThree = 7;
  00CF1620  mov         dword ptr [iOperandThree],7
       7:     int iLocal = (argc > 2) ? iOperandTwo : iOperandThree;
  00CF1627  cmp         dword ptr [argc],2
  00CF162B  jle         main+25h (0CF1635h)
  00CF162D  mov         eax,dword ptr [iOperandTwo]
  00CF1630  mov         dword ptr [ebp-50h],eax
  00CF1633  jmp         main+2Bh (0CF163Bh)
  00CF1635  mov         ecx,dword ptr [iOperandThree]
  00CF1638  mov         dword ptr [ebp-50h],ecx
  00CF163B  mov         edx,dword ptr [ebp-50h]
  00CF163E  mov         dword ptr [iLocal],edx

Since the conditional operator is now assigning from variables we’d expect it to generate something that looks more like the sort of code we saw from the basic if-else we looked at last time, which it has.

We have the expected cmp followed by a conditional jump testing against the opposite of the conditional, then two blocks of assembler, the first of which (lines 7 to 9) unconditionally jumps over the second (lines 10 and 11) if it executes. So essentially it’s behaving more or less as expected; however, there’s clearly some interesting stuff happening in there:

  1. the two branches use different registers to store their intermediate values; the first uses eax, the second uses ecx
  2. both branches store their result to the same memory address in the Stack (see this post if you don’t know or can’t remember about Stack Frames) – i.e. [ebp-50h]
  3. the code that assigns the value to iLocal (lines 12 and 13) only exists once and is executed regardless of which branch was taken; it takes the value from [ebp-50h] and writes it into iLocal using a third register (edx)

The use of different registers for the different branches in step 1 looks like it might be significant but (according to several expert sources) this is apparently perfectly normal compiler behaviour and not anything to read into.

Steps 2 and 3 show that the code generated for the conditional operator (at least with VS2010) isn’t directly equivalent to the intuitively equivalent if-else statement:

// intuitively equivalent if-else of
// int iLocal = (argc > 2 ) ? iOperandTwo : iOperandThree;
int iLocal;
if( argc > 2 )
{
    iLocal = iOperandTwo;
}
else
{
    iLocal = iOperandThree;
}

Rather than choosing between one of two assignments like this if-else, the assembler generated for our use of the conditional operator does exactly what we told it to: choose one of two values (storing it temporarily in the Stack) and assign iLocal from it.

A few final notes on the ? operator:

  1. You can see that fewer lines of C++ code do not equate to less assembler
  2. It can be nested, but don’t do it! It’s hideous and will also be very hard to follow when source-level debugging
  3. Be very careful with operator precedence when using it. Use brackets to ensure it will resolve the way you intend.
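To illustrate point 3, here’s a quick sketch of how the precedence can bite (the function names are made up for the demonstration):

```cpp
#include <cassert>

// Without brackets, the "+ 1" binds to the third operand only:
// argc > 2 ? 3 : 7 + 1 parses as (argc > 2) ? 3 : (7 + 1),
// NOT as ((argc > 2) ? 3 : 7) + 1.
int withoutBrackets(int argc) { return argc > 2 ? 3 : 7 + 1; }
int withBrackets(int argc)    { return (argc > 2 ? 3 : 7) + 1; }
```

The two functions agree when argc <= 2 (both give 8) but disagree when argc > 2 (3 versus 4) – exactly the kind of bug that’s miserable to spot by eye.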

Switch Statements

The final type of conditional statement we’ll be looking at is the switch statement. Like the conditional operator, the switch statement is an often abused and maligned construct that you wouldn’t want to live without.

To be fair to the switch statement it’s not the fault of the switch statement that it’s possible for maniacs to write brittle and insane code using them.

An aside about switch statements

Where I have consistently found really horrific examples of switch statements is when an originally stateless synchronous system has been forced to become asynchronous and state driven under time pressure. This specific situation seems always to somehow spawn the kind of monolithic, hard to follow, difficult to change, architecturally brittle switch statements that have given the switch statement a bad rep over the years.

Code that has had network functionality retrofitted to it is (in my experience) an extremely common place to find problem switch statements. It’s always better to fix a system properly if it starts to look systemically broken than it is to soldier on regardless, and if it looks like you need to introduce a set of states into a system, then (in my experience) it’s architecturally more sensible to use polymorphic behaviour (e.g. a state class with one or more virtual functions) than a switch statement.
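For illustration, here’s a minimal sketch of what I mean by a state class (the class names are made up, and the Update functions return values purely so the sketch is testable):

```cpp
#include <cassert>

// Instead of switching on a state enum every update, each state is a
// class with a virtual Update(); changing state means swapping the object.
struct State
{
    virtual ~State() {}
    virtual int Update() = 0;
};

struct ConnectingState : State
{
    int Update() override { return 1; } // e.g. poll the pending connection
};

struct ConnectedState : State
{
    int Update() override { return 2; } // e.g. pump network messages
};

// The caller no longer needs a switch; dispatch happens via the vtable.
int tick(State& s) { return s.Update(); }
```

Adding a new state is then a new class rather than a new case in every switch that touches the system.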

Where were we?

Sorry, let’s get on and take a look at a switch statement…

#include "stdafx.h"

int main(int argc, char* argv[])
{
    int iLocal = 0;

    // n.b. no "break" in case 1 so we can
    // see what "fall through" looks like
    switch( argc )
    {
    case 1:
        iLocal = 6;
    case 3:
        iLocal = 7;
        break;
    case 5:
        iLocal = 8;
        break;
    default:
        iLocal = 9;
        break;
    }
    return 0;
}

And here’s the disassembly…

     9:     switch( argc )
  00C61620  mov         eax,dword ptr [argc]
  00C61623  mov         dword ptr [ebp-48h],eax
  00C61626  cmp         dword ptr [ebp-48h],1
  00C6162A  je          main+2Ah (0C6163Ah)
  00C6162C  cmp         dword ptr [ebp-48h],3
  00C61630  je          main+31h (0C61641h)
  00C61632  cmp         dword ptr [ebp-48h],5
  00C61636  je          main+3Ah (0C6164Ah)
  00C61638  jmp         main+43h (0C61653h)
      10:     {
      11:     case 1:
      12:         iLocal = 6;
  00C6163A  mov         dword ptr [iLocal],6
      13:     case 3:
      14:         iLocal = 7;
  00C61641  mov         dword ptr [iLocal],7
      15:         break;
  00C61648  jmp         main+4Ah (0C6165Ah)
      16:     case 5:
      17:         iLocal = 8;
  00C6164A  mov         dword ptr [iLocal],8
      18:         break;
  00C61651  jmp         main+4Ah (0C6165Ah)
      19:     default:
      20:         iLocal = 9;
  00C61653  mov         dword ptr [iLocal],9
      21:         break;
      22:     }

This is more or less exactly what you’d expect:

  • line 1 stores argc into the Stack at [ebp-48h]
  • the block from lines 2 to 9 implements the logic of the switch via a series of comparisons of this value against the constants specified in the case statements, with associated conditional jumps to the assembler generated by the code in the corresponding case statement
  • if none of the conditional jumps are triggered, the logic causes an unconditional jump to the default: case.
  • in particular, note that:
  1. wherever the break keyword is used this causes an unconditional jump past the end of the assembler generated by the switch
  2. the “drop through” from case 1: into case 3: in the high level code happens at the assembler level as a by-product of the organisation of the adjacent blocks of instructions generated for the switch by the compiler, and the lack of an unconditional jump at the end of the assembler for case 1:

If you look at the assembler from the sample if-else-if-else in the last article, you should be able to see that the assembler generated for this switch is (more or less) what would happen if we had written the switch as an if-else-if-else and then re-organised the assembler so all the logic was in one place at the top, and the assembler generated for each code block was left where it was.

So other than the fact that the switch statement is a very useful C/C++ language convenience for managing what would often otherwise be messy looking and error prone chains of if-else-if-else statements, based on this example it doesn’t appear to be doing anything which might offer a significant advantage at the assembler level – so why would I have claimed that the compiler might generate “pretty cool assembler” for a switch?

Before we assume we’ve seen it all, let’s try using a contiguous range of values for the constants in the cases of the switch. You know, just for fun – and for the sake of simplicity let’s start at 0.

#include "stdafx.h"

int main(int argc, char* argv[])
{
    int iLocal = 0;

    switch( argc )
    {
    case 0:
        iLocal = 4;
        break;
    case 1:
        iLocal = 5;
        break;
    case 2:
        iLocal = 6;
        break;
    case 3:
        iLocal = 7;
        break;
    }
    return 0;
}

And here’s the disassembly it generates…

Ok, so this time something more interesting is definitely going on – n.b. I’ve used a screenshot rather than just pasting the text because we need to look in a memory window to make sense of it.

So what exactly is it doing?

  • it moves argc into eax, then stores it into the Stack at [ebp-48h]
  • it then compares the value stored in the address [ebp-48h] with 3 (i.e. our maximum case constant)
  • if this value is greater than 3 then ja (jump above) on the next line will cause execution to jump to 8D1658h – the 1st instruction after the code generated by the case blocks, skipping the switch
  • if the value is less than or equal to 3 then the value is moved into ecx, and we then have an unconditional jump to … somewhere :-/

Ok, so that final unconditional jump has some syntax we’ve not yet seen for its address operand, and which clearly isn’t a constant:

jmp    dword ptr    (8D1664h)[ecx*4]

This says “jump to the location stored in the memory address at an offset of 4 times the value of ecx from the memory address 8D1664h“, so how is this implementing the logic of the C++ switch statement?

To answer this question we need to look in a memory window at the address 8D1664h (n.b. to open a memory window from the menu in VS2010 when debugging go Debug -> Windows -> Memory -> … and choose one of the memory windows. To set the address just copy and paste it from the disassembly into the “Address:” input box. You will also need to right click and choose “4-byte integer” and set the “Columns:” list box to 1 to have it look like the screenshot above).

So, if you cast your eyes up to the memory window on the left of the screenshot above, you will see that the top 4 rows are highlighted; these values start at address 8D1664h and are 4-byte integers (hence the ecx*4 in the operand) – which in this case are specifically pointers.

The instruction jmp dword ptr (8D1664h)[ecx*4] will jump to the value stored in the address:

  • 8D1664h + 0 = 8D1664h if the value in ecx is 0
  • 8D1664h + 4  = 8D1668h if the value of ecx is 1
  • 8D1664h + 8  = 8D166Ch if the value of ecx is 2
  • 8D1664h + Ch  = 8D1670h if the value of ecx is 3

So, the four highlighted rows make up a jump table – since our case constants’ range is from 0 to 3 it is an array of 4 pointers – with each element of the array pointing to the execution address of the case block matching its array index.

You can verify this by checking the addresses of the first instruction generated for each case against the 4 values stored in the array.
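To make the mechanism concrete, here’s a hand-written C++ analogue of what the compiler generated. This is illustrative only: the compiler’s table holds raw code addresses that it jumps to directly, so an array of function pointers is the closest portable equivalent.

```cpp
#include <cassert>

// The "code" for each case block, one function per case constant 0..3.
static int case0() { return 4; }
static int case1() { return 5; }
static int case2() { return 6; }
static int case3() { return 7; }

int switchViaJumpTable(int value)
{
    // The jump table: an array of 4 pointers, element N pointing at the
    // code for case N (cf. the 4 highlighted rows in the memory window).
    static int (*const table[4])() = { case0, case1, case2, case3 };

    // Casting to unsigned folds the < 0 and > 3 checks into a single
    // comparison, exactly like the lone cmp/ja pair in the disassembly.
    if (static_cast<unsigned>(value) > 3u)
        return 0;              // out of range: skip the switch entirely

    return table[value]();     // jmp dword ptr (table)[value*4]
}
```

Calling `switchViaJumpTable(2)` indexes straight to `case2` without testing the other constants, which is the whole point of the table.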

Maybe it’s just me, but I think this is some pretty cool assembler. It’s certainly more elegant than the assembler generated by the first switch we looked at, but what – if anything – is the advantage of this over the assembler that was generated for the previous switch statement?

In theory this jump table form reaches the code in constant time for all cases, whereas in the if-else-if-else form the time to reach the code corresponding to each case will be proportional to the number of previous cases in the switch statement.

You’re pretty unlikely to find that a switch statement is a performance bottleneck in your code (unless you’ve done something silly) but, all things being equal, the jump table approach uses fewer instructions to reach the correct case’s code, which is normally A Good Thing and – in theory – should make it faster on average.

One final note on switch statements: I am reliably informed that, in addition to the if-else-if-else-style linear search for resolving the correct case to execute, most modern compilers are also capable of generating a binary search over the case constants for switch statements with appropriate ranges of constant values.

Using a binary search rather than a linear search will improve average search time from linear to logarithmic (i.e. O(n) to O(log n)). However, in the average case a binary searched switch will still almost always take more instructions and branches to reach the correct case than a jump table switch.
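As a sketch of what that binary-search strategy might look like if written out by hand in C++ (the case constants here are made up for illustration – the compiler emits the equivalent compare tree directly in assembler):

```cpp
#include <cassert>

// Sketch: how a compiler might binary-search sparse case constants
// (10, 200, 3000, 40000) where a jump table would be wastefully large.
int sparseSwitch(int v)
{
    // First compare splits the sorted constants in half, so at most
    // ~log2(n) compares are needed instead of up to n.
    if (v < 3000) {
        if (v == 10)    return 1;   // case 10
        if (v == 200)   return 2;   // case 200
    } else {
        if (v == 3000)  return 3;   // case 3000
        if (v == 40000) return 4;   // case 40000
    }
    return 0;                       // default
}
```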

It’s also possible that the compiler might choose to use one or more of these methods in a single switch, though this would probably require a large number of cases in the switch and ranges of case constants with very specific properties so it’s not likely you will come across these very often.

A couple of final things to note about switch statements:

  1. the compiler should be able to generate a jump table regardless of the order of the case constants in your code (e.g. case 2: … case 1: … case 3: … should still work fine)
  2. having a range of case constants that starts at 0 makes the conditional code around a jump table simpler, as it removes the lower bounds check
  3. a jump table should get created as long as the case constants are numerous enough and/or closely packed enough for the compiler to decide it’s worthwhile, even if they’re not completely contiguous. Look at the disassembly if you want to check.


So, this concludes our look at conditionals, hopefully you’ve found it interesting and illuminating 😉

A final point to take away from our look at conditionals is that whilst the compiler could generate the same assembler for an if-else as for the conditional operator, it doesn’t. Similarly, it could generate the same assembler for an if-else-if-else as for a switch statement, but it doesn’t do that either.

In part this shows the limits of the compiler, but it also shows the importance of using the conditional appropriate to the purpose – the benefit being that your choice makes your intent clearer to human readers of your code.

We’ve now covered enough ground that you should be finding that you can apply the information I’ve given you to everyday programming problems such as debugging release code, or code you don’t have debugging information for.

The main things I’d like you to take away from our look at conditionals are all things that will help you when debugging without symbols:

  1. anytime you see cmp followed by a jxx to a nearby address in the disassembly you’re probably looking at code generated by a conditional statement in the C/C++ code
  2. if the address operand to the jump instruction is lower than the current instruction’s address (i.e. it’s jumping backwards) you’re most likely looking at a loop
  3. assembler generated from conditionals generally tests the opposite of the test being done in the C / C++ code
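Heuristic 3 is worth seeing side by side with the source. A minimal example, with the sort of assembler MSVC typically generates shown as comments (exact registers and addresses will vary):

```cpp
#include <cassert>

// The source tests x > 3, but the generated code tests the *opposite*
// (jle, "jump if less or equal") in order to jump around the body.
int branchExample(int x)
{
    int y = 0;
    if (x > 3)      //   cmp  dword ptr [x], 3
        y = 1;      //   jle  skip            ; taken when x <= 3
                    //   mov  dword ptr [y], 1
    return y;       // skip: ...
}
```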

By using these heuristics – looking at the values in the registers, at the values in the Stack that have been written by the assembler, and looking up your current address in the symbol file to tell you which function you’re in (if you’re not generating a symbol file for all your builds you should be – look in the documentation for your platform’s compiler toolchain to find out how) – you should be able to make an educated guess at which variables in the C/C++ code are likely to be causing the current issue. This will usually tell you why it crashed, or at least give you a lead so you can Sherlock Holmes your way to the root of the problem. It’s certainly a lot quicker than the ubiquitous insertion of many printf() calls…

Our next topic will be loops, which obviously also use conditional jumps (which is why we covered conditionals first…)

One final thing…

Thanks to Tony, Bruce, and Fabian for extra information, advice, and proof reading.

And, for those of you who like to go off and look for yourselves (hopefully most of you!), I’ve recently discovered this wiki book on x86 Assembler  

Past the Past Pastel Dreams

Original Author: Claire Blackshaw

Take a moment, think back, past the nostalgia and sepia dreams so we can consider old, forgotten mechanics. The thing we love about games is that they are complex and detailed but primarily, they are games. Systems of interaction and exploration within a created framework. All creative works evolve, compete, succeed and in some cases die out.

Though within these sepia dreams and old memories live viable mechanics which, when re-examined and explored anew, provide exciting areas of creativity.

Among the games of my youth three digital games stand tall, in order played: Hero’s Quest: So You Want to be a Hero[1], X-Com: Terror from the Deep and Shadowrun on Sega Megadrive. Now some of these games have seen reboots or attempts at reboots which quite frankly angered fans and often missed the point. Thankfully through Kickstarter for Shadowrun and Firaxis for X-Com these two titles are getting a modern, caring treatment and re-examination.

X-Com Squad Turns

Many people passed over the unique quality of team-based turns. There have been many other games in the tactical genre, but few have explored this idea of “I move all my units, then you move all your units”. Moving multiple units in a single turn requires a session of planning, which manifests as a massive investment in each turn.

As control is taken away from the player, breaking a golden rule to strengthen this mechanic, a high point of tension is created as the plan unfolds.

Of interest to modern designers is this cycle of investment and planning, and then the tension as control is taken away from the player. For a modern interpretation with a different angle I suggest looking at Frozen Synapse. The turns are simultaneous and you plan in 5-second increments, but once again control is taken away from the player as they watch the consequences of their plans.

Golden Rule Broken: Taking Control Away from the Player

Shadowrun Hacking

Shadowrun’s hacking was almost a fully fledged game within the game, but one with deep-seated roots in the core gameplay. So often “hacking” or another core ability is thrown in as a tangential mini-game, with limited or no interaction with the core gameplay other than a binary outcome of success or failure. This follows the premise of not creating a game mode shift for the player, and of avoiding a development investment in what is essentially a “second game”.

Older games were much bolder in this. In Shadowrun’s hacking you hunted down better hacking decks, with a whole subset of stats that could be upgraded. The camera shifted from isometric to third-person over the shoulder, with new UI and controls. You built contacts and went on missions to acquire that “better piece of software” or that “underground deck”. Your decker’s point of access, which related to hacking difficulty, was determined by their physical entry point into the system. Individual nodes on the hacking map related to camera systems or subsystems of the physical security system. Triggering the alert system, or disabling it, affected the real-world alarm systems.

That massive investment in an alternate game mode, layered on top of the primary mode, added real depth to the world and further fleshed out the game. Looking around, I was unable to find a good modern execution of this concept.

Golden Rule Broken: Avoid Gameplay Mode Shifts

Alternate Expression: Focus on a consistent experience.

Time Based Gameplay

In Quest for Glory many moments of interaction were determined by the time of day or the day of the week. This occasionally meant that as a player you were running around waiting for an event to happen, or cleaning out the stables to earn some coins and pass some time. While some modern games have integrated NPC timetables far more complex than their predecessors’, these have been made insignificant by removing their game-altering potential, turning them into minor points of flavour.

The depth of gameplay this added to the world was significant. You had to work around a real world. As a side note the fact the game required you to grind some monsters or chores like stable sweeping to earn your coin allowed you to effectively use your downtime. In Skyrim I can wake up a town blacksmith and purchase armour, removing all gameplay impact of the town schedule. This mechanic lives on in many modern games but in our fear of inconveniencing the player it has been neutered. I encourage designers to look at the gameplay affecting elements this play style offers.

Golden Rule Broken: Never Inconvenience the Player

Alternate Expression: Never waste the player’s time

Mixing it Up

Quest for Glory additionally mixed combat, role-playing, and adventure elements while providing multiple solution paths to many problems. Environmental storytelling, usage-based levelling and many other of its elements have survived into modern design lexicons.

X-Com famously mixed the tactical and geoscape layers into a very complex, interwoven whole. Though as Brendon Chung‘s GDC talk on his troubles with Atom Zombie Smasher highlighted, this is not a simple task. I’ve also already talked about the Shadowrun hacking element as another example. To find modern designers still bravely exploring the mixing of genres and mechanics, you have to look outside mainstream, metric-focused development. Though I worry that a lack of polish and budget in many cases restricts their ability to smooth out the seams and truly integrate genres.

Golden Rule Broken: Keep It Simple Stupid (KISS)


As many of these older mechanics show, breaking what we consider a golden rule today can sometimes be key to development of an interesting mechanic. These are just a few picked examples from these games. Many other elements exist in these older games which have been gathering dust.

Many of these older mechanics first came about due to technical limitations and were discarded along with those limitations. Our Golden Rules were not forged by the gaming gods but discovered through trial, error and exploration. Some of them may lead to an evolutionary dead end in design – an appendix, no longer needed. I encourage you to mine old games. Not for the IP, nostalgia, or history, but for the game. Uncover the hidden machinery of the past, the broken paths and discarded branches of game evolution, for old and interesting ideas that can be made new again.

[1] Later renamed to Quest for Glory: So you want to be a Hero to avoid confusion with another game, Hero Quest.

TDD for legacy code, graphics code, and legacy graphics code?

Original Author: Rob-Galanakis

We’re currently undergoing a push to do more ‘Agile testing’ at work. At CCP, we “do” Agile pretty well, but I don’t think we “code” Agile really well. A large part (the largest part?) of Agile coding and Agile testing is TDD/Unit Testing, of which I’m a huge fan if not an experienced practitioner.

I know how to do TDD for what I do; that is, side-by-side replacement of an internal legacy codebase in a high-level language. What I don’t have experience in is TDD for expansion and maintenance of a huge, high performance, very active, legacy codebase, and specifically the graphics components and the C++ bits.

So if you have experience in these sorts of things, I’d love to hear about it.

At this point I’m sticking my neck out as a TDD and unit testing advocate studio-wide, but am reluctant to evangelize too strongly outside of my areas of expertise. I don’t think it’d be fair to the very talented people in those areas, and I also don’t want to be wrong, even if I know I’m right 🙂 So I’d really like to hear about your experiences with TDD and unit testing in the games and graphics space, and on legacy codebases, because people are coming to me with questions and I don’t have good answers. I’d love to give them places to turn, articles to read, people to contact.

Thanks for any help.

Oxel: A Tool for Occluder Generation

Original Author: Nick Darnell

It’s been a while since I’ve done an update on my research into occluder generation.  If you need a refresher, take a look at:

Let’s start with the newest and most important information…

Update 4/13/2012

The project is now hosted on BitBucket,


Oxel 1.0.0 – Win32.zip



Oxel is a tool for generating occluders – primarily for use with the Hierarchical Z-Buffer method of occlusion culling.  Open an .obj file then go to Build > Voxelize to generate the proxy.  Try it out, let me know what you think.

There are some further improvements that have been made over the method that is laid out in the original Generating Occlusion Volumes post,

  • Retriangulation
  • Evaluating Occlusion-ness
  • Filtering Polygons


Retriangulation

I went back to the CSG article Sander van Rossen and Matthew Baranowski wrote.  I noticed near the end that they recommended retriangulating the final CSG mesh because their library doesn’t handle it.

The easiest method I found to do the retriangulation was to just use David Eberly’s Wild Magic math library which can do it, see here.  Before you perform the retriangulation you need to collect all the external/internal edge loops on each surface and then merge collinear edges before performing the retriangulation.

It was definitely worth doing: performing the retriangulation saved about 30% of the triangles.
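As an illustration of the collinear-edge merge step, here’s a minimal 2D sketch (Oxel works on 3D surface loops, and this is my own illustrative code, not Oxel’s). A vertex is dropped when the edges on either side of it are parallel, i.e. their cross product is (near) zero:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Vec2 { double x, y; };

// Remove vertices that sit on a straight run of edges in a closed loop,
// keeping only the real corners.
std::vector<Vec2> mergeCollinear(const std::vector<Vec2>& loop)
{
    std::vector<Vec2> out;
    const std::size_t n = loop.size();
    for (std::size_t i = 0; i < n; ++i) {
        const Vec2& prev = loop[(i + n - 1) % n];
        const Vec2& cur  = loop[i];
        const Vec2& next = loop[(i + 1) % n];
        // 2D cross product of (prev->cur) and (cur->next); zero means
        // the two edges are collinear and 'cur' is redundant.
        const double cross = (cur.x - prev.x) * (next.y - cur.y)
                           - (cur.y - prev.y) * (next.x - cur.x);
        if (cross > 1e-9 || cross < -1e-9)
            out.push_back(cur);
    }
    return out;
}
```

For example, a rectangle with an extra vertex in the middle of one edge comes back as just the four corners.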

Evaluating Occlusion-ness

When you’re generating occluder geometry you need to ensure that the volumes you’re adding are useful enough to pay the additional polygon tax they will incur.

Oxel achieves this by measuring the number of pixels written to the color buffer by rendering the original visual mesh into the stencil buffer, and then using the stencil buffer to clip a full screen quad while a “samples passed” hardware query is performed to record the number of pixels written.  This is performed from about 64 different camera angles – then when all the samples are summed together this defines the ground truth occlusion of the mesh.

Knowing that information we can evaluate every new box we plan to add to the final occlusion proxy geometry and ask: does this increase the coverage enough to make it worth it?  The default threshold is set to 3%.  If a new volume does not cover at least 3% of new silhouette pixels, we don’t include it in the final occluder mesh.
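As a sketch of that acceptance test (the names here are hypothetical and not Oxel’s actual API – groundTruthPixels is the summed silhouette pixel count from the camera-angle passes, newPixels is the count of previously uncovered silhouette pixels the candidate box adds):

```cpp
#include <cassert>

// Accept a candidate occluder box only if it covers at least 'threshold'
// (default 3%) of the mesh's ground-truth silhouette pixels.
bool acceptBox(long long newPixels, long long groundTruthPixels,
               double threshold = 0.03)
{
    if (groundTruthPixels <= 0)
        return false;                      // nothing to occlude
    const double coverageGain =
        static_cast<double>(newPixels) / groundTruthPixels;
    return coverageGain >= threshold;      // worth its polygon tax?
}
```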

Filtering Polygons

Not every game needs the occluders to be visible from all angles.  For example, most racing games will never have the cars jumping so high that the player sees the top of a building.  In those circumstances you may not want the occluders to bother having polygons on top.

(Comparison screenshots: with and without top polygons)

So the tool offers the ability to remove Top and Bottom polygons from the final occluder mesh after everything has finished being processed.

If you decide to filter out all top polygons they’re all removed.  For bottom surfaces this has to be handled slightly differently, see below –


Imagine a bridge structure or a large overhang on a building.  You wouldn’t want the bottom portion of the occluder to be removed from the overhang, since that would allow seeing into and through the occluder – because presumably you wouldn’t have double-sided rendering enabled.  So, to distinguish between bottom surfaces never meant to be seen by the player and bottom surfaces that may be important to higher-up portions of the structure, only bottom surfaces within 1 voxel’s height of the base of the mesh are removed.  (See picture above)

Work Continues

I’ve got some more ideas I need to test out but I wanted to go ahead and cut a version of where I’m at and let others take a look and give me some feedback.

One area I need to investigate next is a better way to generate the boxes.  Currently they are just expanded in all directions equally, but that’s not ideal.  I’m thinking about trying a parallel brute force method that would test all possible boxes at a specific origin point to find the best shaped box to generate from that point.

Oxel: A Tool for Occluder Generation @ nickdarnell.com

Floating-point complexities

Original Author: Bruce-Dawson

Binary floating-point math is complex and subtle. I’ve collected here a few of my favorite oddball facts about floating-point math, based on the articles so far in my floating-point series. The focus in this list is on float but the same concepts all apply to double.

These oddities don’t make floating-point math bad, and in many cases these oddities can be ignored. But when you try to simulate the infinite expanse of the real-number line with 32-bit or 64-bit numbers then there will inevitably be places where the abstraction breaks down, and it’s good to know about them.

Some of these facts are useful, and some of them are surprising. You get to decide which is which.

  • Adjacent floats (of the same sign) have adjacent integer representations, which makes generating the next (or all) floats trivial
  • FLT_MIN is not the smallest positive float (FLT_MIN is the smallest positive normalized float)
  • The smallest positive float is 8,388,608 times smaller than FLT_MIN
  • FLT_MAX is not the largest positive float (it’s the largest finite float, but the special value infinity is larger)
  • 0.1 cannot be exactly represented in a float
  • All floats can be exactly represented in decimal
  • Over a hundred decimal digits of mantissa are required to exactly show the value of some floats
  • 9 decimal digits of mantissa (plus sign and exponent) are sufficient to uniquely identify any float
  • The Visual C++ debugger displays floats with 8 mantissa digits
  • The integer representation of a float is a piecewise linear approximation of the base-2 logarithm of that float
  • You can calculate the base-2 log of an integer by assigning it to a float
  • Most float math gives inexact results due to rounding
  • The basic IEEE math operations guarantee perfect rounding
  • Subtraction of floats with similar values (f2 * 0.5 <= f1 <= f2 * 2.0) gives an exact result, with no rounding
  • Subtraction of floats with similar values can result in a loss of virtually all significant figures (even if the result is exact)
  • Minor rearrangements in a calculation can take it from catastrophic cancellation to 100% accurate
  • Storing elapsed game time in a float is a bad idea
  • Comparing floats requires care, especially around zero
  • sin(float(pi)) calculates a very accurate approximation to pi-float(pi)
  • From 2^24 to 2^31, an int32_t has more precision than a float – in that range an int32_t can hold every value that a float can hold, and millions more
  • pow(2.0f, -149) should calculate the smallest denormal float, but with VC++ it generates zero. pow(0.5f, 149) works.
  • IEEE float arithmetic guarantees that “if (x != y) return z / (x-y);” will never cause a divide by zero, but this guarantee only applies if denormals are supported
  • Denormals have horrible performance on some older hardware, which leads to some developers disabling them
  • If x is a floating-point number then “x == x” may return false – if x is a NaN
  • The same calculation can give inconsistent results depending on intermediate precision and compiler settings
  • Double rounding can lead to inaccurate results, even when doing something as simple as assigning a constant to a float
  • You can printf and scanf every positive float in less than fifteen minutes
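Several of these facts are easy to verify in a few lines of C++, assuming IEEE-754 floats with denormal support enabled:

```cpp
#include <cassert>
#include <cfloat>
#include <cstdint>
#include <cstring>

// Adjacent floats (of the same sign) have adjacent integer representations,
// so stepping to the next representable float is just an integer increment
// of the bit pattern. (This simple version assumes a positive, finite input.)
static float nextFloat(float f)
{
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);   // type-pun safely via memcpy
    ++bits;
    std::memcpy(&f, &bits, sizeof f);
    return f;
}

// The smallest positive (denormal) float is FLT_MIN / 2^23, i.e. 8,388,608
// times smaller than FLT_MIN. Non-zero only when denormals are supported.
static float smallestDenormal()
{
    return FLT_MIN / 8388608.0f;
}
```

For example, nextFloat(1.0f) - 1.0f equals FLT_EPSILON, halving smallestDenormal() rounds to zero because nothing smaller exists, and static_cast&lt;double&gt;(0.1f) != 0.1 confirms that 0.1 has no exact float representation.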

Do you know of some other surprising or useful aspects of floats? Respond in the comments.