Skin Shading in Unity3d

Original Author: Jon Moore

I’ve been away from AltDev for a while, a bit longer than I originally expected. After a period of crunch before IGF submission in late October, I went back to the comforts of getting a normal night’s sleep, something that working a normal game development job this past summer spoiled me into enjoying. Now the Christmas holiday has given me not only the time to finally begin blogging again, but also the chance to do a little side project in something I greatly enjoy: the rendering of human skin. It was also an excuse to give the latest Unity3D 3.5 beta a whirl:


This has been entirely a hobby project for me, because I don’t foresee applying it to any project I’m involved with. However, I’ve realized that Unity is rapidly approaching the point where someone may want to use a more sophisticated skin shading model than basic wrap lighting (an example of which is provided in the documentation). With this in mind, I thought it might be of interest to do a post pulling together some of the better resources I’ve found on the topic, as well as some of my more useful code, so that it might serve as a jumping-off point for a more serious endeavor.

The one issue that is always at the core of skin shading is Subsurface Scattering: light enters the skin, bounces around underneath the surface, and re-exits elsewhere. Simply using a Lambert model produces very harsh edges, because that scattering is what gives skin its softer appearance. Here’s what the built-in “Bumped Diffuse” shader looks like, the baseline I’m trying to improve:

Background: Texture Space Diffusion

No discussion of skin rendering starts without mentioning the NVIDIA Human Head demo, which has a detailed description of its implementation in GPU Gems 3. The technique relies upon Texture Space Diffusion (TSD), where the lighting for the mesh is rendered into a texture and then blurred by varying amounts (based on the scattering of light inside human skin). The results of those blurs are combined to form the actual diffuse lighting term. I actually played around with this technique in Unity around Christmas time last year, but it proved difficult given the nature of TSD and the fact that Unity did not, at the time, easily support some nice features like linear-space lighting.

There have been some very useful resources on skin shading published since GPU Gems 3. One notable approach performs the subsurface scattering calculations in screen space, which removes the per-mesh cost of TSD (a serious limitation for its use in an actual application). This seems to have garnered some adoption in game engines, but I understand that it is still a reasonably expensive technique (note: I’m less knowledgeable about Screen Space Subsurface Scattering, so take that comment with a grain of salt).

With regards to tools for doing skin rendering on your own time, a good quality head scan is publicly available, and Unity 3.5 has entered a public beta, which now features linear-space lighting and HDR rendering conveniently built into the engine.

The Latest Hotness: Pre-Integrated Skin Shading

I’ve been impressed with the quality of the articles in GPU Pro 2 since I picked it up this past summer; one of them is Eric Penner’s article detailing his technique “Pre-Integrated Skin Shading.” He also gave a talk describing it at SIGGRAPH, and the slides are available online. There are three main parts to it: scattering due to curvature, scattering on small details (i.e. the bump map), and scattering in the shadow falloff. I implemented the first two; no shadows yet for me, but I’ll do a follow-up post if I get around to it.

The basic motivation is to pre-calculate the diffuse falloff into a look-up texture. The key to making the effect look good is that instead of a 1-dimensional texture indexed by NdotL, it is a 2D texture that encompasses different falloffs for different thicknesses of geometry. This allows the falloff at the nose to differ from that on the forehead.

I’ve adapted the sample code that precomputes the lookup texture into a little editor wizard for Unity. Simply make a folder titled “Editor” in Unity’s assets and drop it in, and the editor will be extended (the function is added to the menu “GameObject/Generate Lookup Textures”). I’ve included the full script at the bottom of this blog post; it computes both the Beckmann texture for specular and a diffuse falloff texture that appears to be about the same as the one Penner shows in GPU Pro 2. Here’s what my lookup texture looked like when all was said and done; it is sampled with 1/d in the y axis and NdotL in the x axis:

An important note: if you activate linear rendering in Unity 3.5, know that the texture importer has an option to sample a texture in linear space. GPU Gems 3 has an entire article about the importance of being linear. After I turned linear rendering on in Unity, I noticed that the lookup texture had the effect of making the head look like it was still in gamma space; I then realized I needed to flip on that option in the importer. In fact, you may notice that the lookup texture in GPU Pro 2 looks different from the one in Penner’s SIGGRAPH slides; this is because the one in the slides is in linear space. The one in the book is not, which is also the case with the one pictured above.

The parameter 1/d is approximated with a calculation of curvature based on similar triangles and screen-space derivatives (there’s a great illustration of this in the slides). Here’s the snippet of GLSL from my shader, followed by the output of the curvature term on my model (easily tuned with a uniform parameter in the shader to offset the calculation):

// Calculate curvature
float curvature = clamp(length(fwidth(localSurface2World[2])), 0.0, 1.0) / (length(fwidth(position)) * _TuneCurvature);
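To build some intuition for why this ratio of derivatives approximates curvature, here’s a small Python sketch of my own (not from the article or the shader): it mimics fwidth with finite differences on a circle, a 2D slice of a sphere, where the true curvature is exactly 1/radius.

```python
import math

def estimate_curvature(radius, theta, h=1e-3):
    """Finite-difference stand-in for length(fwidth(normal)) /
    length(fwidth(position)) on a circle of the given radius
    (a 2D slice of a sphere). The true curvature is 1/radius."""
    def normal(t):
        return (math.cos(t), math.sin(t))            # unit surface normal
    def position(t):
        return (radius * math.cos(t), radius * math.sin(t))
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    dn = dist(normal(theta + h), normal(theta))      # ~ |fwidth(normal)|
    dp = dist(position(theta + h), position(theta))  # ~ |fwidth(position)|
    return dn / dp

for r in (0.5, 1.0, 4.0):
    print(r, estimate_curvature(r, 0.7))             # ratio comes out as 1/r
```

The normal changes at the same angular rate regardless of radius, while the position changes proportionally to the radius, so the ratio recovers 1/r; that is exactly the similar-triangles argument in Penner’s slides.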

This curvature calculation is actually a serious problem for implementing the technique in Unity, because ddx and ddy (fwidth(x) is just abs(ddx(x)) + abs(ddy(x))) are not supported by the ARB shader target, which means Unity can’t cross-compile the shader for OpenGL. Those of you familiar with Unity will know that Cg is the shading language of choice there, and it is then compiled for each platform. It’s a thorny issue that has stumped me in the past, especially because I do many of my side projects in OS X on my laptop. Thankfully, Tim Cooper has written about doing GLSL directly in Unity, which is the route I took.

(NOTE: Aras from Unity posted a suggestion down in the comments that might make my choice to use GLSL directly unnecessary. I will update the post properly after I try it out myself.)

Softening Finer Details

As I mentioned, I’m sharing my experience with trying out two of the three techniques detailed by Penner. The second part is the smoothing of small detail bumps. While the pre-integrated texture accounts for scattering in a broader sense, the fine details contained in the normal map are still much too harsh. Here’s what my model looks like with just the pre-integrated scattering:

“Too Crispy” is my favorite way of describing the problem. Penner addresses this by blending between a smoother normal map and the high-detail normal map. The high-detail map is still used for specular calculations, but for the diffuse you pretend that red, green, and blue all come from separate normals. This is very similar to the technique Hable describes being used during normal gameplay in Uncharted 2 in his SIGGRAPH 2010 slides. By treating the channels separately, the red channel can be made softer; Penner advises using the profile data from GPU Gems 3 to try to match real life.
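To make the per-channel idea concrete, here’s a sketch of the blend in Python rather than shader code. This is my own illustration: the blend weights are made-up placeholders, not Penner’s fitted values, and `n_smooth` stands in for the mip-biased normal discussed below.

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def blended_diffuse(n_detail, n_smooth, light_dir, blend=(0.85, 0.6, 0.5)):
    """Per-channel diffuse: each color channel gets its own normal,
    lerped between the detail normal and the smoother normal.
    Red leans hardest on the smooth normal, since red light scatters
    deepest. The blend weights here are illustrative placeholders."""
    out = []
    for t in blend:
        n = normalize(lerp(n_detail, n_smooth, t))
        out.append(max(0.0, sum(a * b for a, b in zip(n, light_dir))))
    return tuple(out)   # (r, g, b) diffuse intensities
```

With a grazing light and a detail normal tilted away from it, the red channel receives noticeably more light than blue, which is exactly the softening we are after on fine bumps.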

In order to avoid the additional memory cost of having two normal maps, Penner uses a second sampler for the normal map that is clamped to a lower mip level (as opposed to using the surface normal like Uncharted 2). However, Unity does not give easy access to samplers to my knowledge, so the only way to set up a sampler like that would be to actually duplicate the texture asset, which won’t save any memory. My solution was instead to apply a LOD bias to the texture lookup (an optional third parameter in tex2D/texture2D, in case you’re not familiar). Here’s what the same shot from above looks like with the blended normals applied using a mip bias of 3.0:


While many articles on skin shading focus almost entirely on the scattering of diffuse light, the GPU Gems 3 article also gives a good treatment of using a specular model better than Blinn-Phong. The authors chose the Kelemen/Szirmay-Kalos specular BRDF, which relies on a precomputed Beckmann lookup texture and takes Fresnel into account via Schlick’s approximation. Since I had played around with that model a year ago when I was experimenting with TSD in Unity, I simply rolled that code into Penner’s work. Here’s the resulting specular:
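For reference, here’s the shape of that specular term as I remember it from the GPU Gems 3 chapter, transcribed into Python with the Beckmann distribution evaluated directly instead of fetched from the lookup texture. Treat this as a sketch: the roughness `m` and intensity `rho_s` defaults are placeholder values I picked, not tuned numbers.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    l = math.sqrt(dot(v, v))
    return tuple(c / l for c in v)

def ph_beckmann(ndoth, m):
    # Beckmann distribution; in the shader this is what the lookup
    # texture encodes (stored as 0.5 * pow(p, 0.1), as in the wizard
    # script below, and decoded with pow(2 * tex, 10))
    ta = math.tan(math.acos(ndoth))
    return math.exp(-(ta * ta) / (m * m)) / (m * m * ndoth ** 4)

def fresnel_schlick(hdotv, f0=0.028):
    # Schlick's approximation; 0.028 is the commonly used F0 for skin
    return f0 + (1.0 - f0) * (1.0 - hdotv) ** 5

def ksk_specular(N, L, V, m=0.3, rho_s=0.18):
    """Kelemen/Szirmay-Kalos specular lobe for one light."""
    ndotl = dot(N, L)
    if ndotl <= 0.0:
        return 0.0
    h = tuple(l + v for l, v in zip(L, V))   # unnormalized half vector
    H = normalize(h)
    ndoth = min(max(dot(N, H), 1e-4), 1.0)
    spec = ph_beckmann(ndoth, m) * fresnel_schlick(dot(H, V)) / dot(h, h)
    return ndotl * rho_s * max(spec, 0.0)
```

Note the division by the squared length of the unnormalized half vector, which is the Kelemen/Szirmay-Kalos geometry approximation that makes this lobe so cheap compared to a full Cook-Torrance evaluation.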

A shot before applying specular:

And finally the specular and diffuse combined together:

Concluding Thoughts / Future Work

This has been a fun little side project for me, and I think it turned out decently well. I’ve made and fixed a few stupid mistakes along the way, and I wonder if I’ll find a few more yet; don’t be afraid to point out anything terrible if you spot it. My two main goals left are tighter integration with Unity (proper point/spot calculations that match the built-in Cg shaders) and shadow support. When it comes to shadows, I’m at a bit of a crossroads between trying to hook into Unity’s main shadow support vs. rolling some form of my own. I suspect rolling my own would be less useful in the effort to make the skin shader completely seamless with Unity, but might be less of a headache to accomplish. Either way, if I do make any drastic improvements, I promise I’ll do a follow-up post 🙂

Finally, if you’re interested in the IGF project, Dust, that I mentioned as the root cause of my hiatus from AltDev, you can check out the project on its website, and here’s the trailer if clicking a link is too much effort:


Source Code

As I promised, here is the source for my Unity script that generates my lookup textures:

// GenerateLookupTexturesWizard.cs
// Place this script in an Editor folder.
// Generated textures are written into the Editor folder as well.
using UnityEditor;
using UnityEngine;
using System.IO;

class GenerateLookupTexturesWizard : ScriptableWizard {
	public int width = 512;
	public int height = 512;
	public bool generateBeckmann = true;
	public bool generateDiffuseScattering = true;

	[MenuItem ("GameObject/Generate Lookup Textures")]
	static void CreateWizard () {
		ScriptableWizard.DisplayWizard<GenerateLookupTexturesWizard>("PreIntegrate Lookup Textures", "Create");
	}

	float PHBeckmann (float ndoth, float m) {
		float alpha = Mathf.Acos(ndoth);
		float ta = Mathf.Tan(alpha);
		return 1f / (m * m * Mathf.Pow(ndoth, 4f)) * Mathf.Exp(-(ta * ta) / (m * m));
	}

	Vector3 IntegrateDiffuseScatteringOnRing (float cosTheta, float skinRadius) {
		// Angle from lighting direction
		float theta = Mathf.Acos(cosTheta);
		Vector3 totalWeights = Vector3.zero;
		Vector3 totalLight = Vector3.zero;

		float a = -(Mathf.PI / 2.0f);
		const float inc = 0.05f;
		while (a <= (Mathf.PI / 2.0f)) {
			float sampleAngle = theta + a;
			float diffuse = Mathf.Clamp01(Mathf.Cos(sampleAngle));
			// Distance across the ring's chord
			float sampleDist = Mathf.Abs(2.0f * skinRadius * Mathf.Sin(a * 0.5f));
			// Profile weight
			Vector3 weights = Scatter(sampleDist);
			totalWeights += weights;
			totalLight += diffuse * weights;
			a += inc;
		}

		return new Vector3(totalLight.x / totalWeights.x,
			totalLight.y / totalWeights.y,
			totalLight.z / totalWeights.z);
	}

	float Gaussian (float v, float r) {
		return 1.0f / Mathf.Sqrt(2.0f * Mathf.PI * v) * Mathf.Exp(-(r * r) / (2f * v));
	}

	Vector3 Scatter (float r) {
		// Sum-of-Gaussians diffusion profile from GPU Gems 3,
		// "Advanced Techniques for Realistic Real-Time Skin Rendering",
		// originally fit to measured skin data
		return Gaussian(0.0064f * 1.414f, r) * new Vector3(0.233f, 0.455f, 0.649f)
			+ Gaussian(0.0484f * 1.414f, r) * new Vector3(0.100f, 0.336f, 0.344f)
			+ Gaussian(0.1870f * 1.414f, r) * new Vector3(0.118f, 0.198f, 0.000f)
			+ Gaussian(0.5670f * 1.414f, r) * new Vector3(0.113f, 0.007f, 0.007f)
			+ Gaussian(1.9900f * 1.414f, r) * new Vector3(0.358f, 0.004f, 0.00001f)
			+ Gaussian(7.4100f * 1.414f, r) * new Vector3(0.078f, 0.00001f, 0.00001f);
	}

	void OnWizardCreate () {
		// Beckmann texture for specular
		if (generateBeckmann) {
			Texture2D beckmann = new Texture2D(width, height, TextureFormat.ARGB32, false);
			for (int j = 0; j < height; ++j) {
				for (int i = 0; i < width; ++i) {
					// Half-texel offset keeps ndoth and m away from 0,
					// where PHBeckmann would divide by zero
					float val = 0.5f * Mathf.Pow(PHBeckmann((i + 0.5f) / width, (j + 0.5f) / height), 0.1f);
					beckmann.SetPixel(i, j, new Color(val, val, val, val));
				}
			}
			byte[] bytes = beckmann.EncodeToPNG();
			File.WriteAllBytes(Application.dataPath + "/Editor/BeckmannTexture.png", bytes);
		}

		// Diffuse scattering falloff
		if (generateDiffuseScattering) {
			Texture2D diffuseScattering = new Texture2D(width, height, TextureFormat.ARGB32, false);
			for (int j = 0; j < height; ++j) {
				for (int i = 0; i < width; ++i) {
					// Lookup by:
					// x: NdotL
					// y: 1 / r
					float y = 2.0f * 1f / ((j + 1) / (float)height);
					Vector3 val = IntegrateDiffuseScatteringOnRing(Mathf.Lerp(-1f, 1f, i / (float)width), y);
					diffuseScattering.SetPixel(i, j, new Color(val.x, val.y, val.z, 1f));
				}
			}
			byte[] bytes = diffuseScattering.EncodeToPNG();
			File.WriteAllBytes(Application.dataPath + "/Editor/DiffuseScatteringOnRing.png", bytes);
		}
	}

	void OnWizardUpdate () {
		helpString = "Press Create to calculate the textures. They are saved to the Editor folder.";
	}
}

I haven’t included the actual shader because it’s a little messy still and currently only supports the OpenGL path. The numerous resources that I’ve mentioned should guide you in the right direction though.

Leaderboard Benchmarking

Original Author: David Czarnecki


I recently got a new MacBook Pro and I wanted to do some benchmarking. Naturally I turned to the leaderboard library I maintain, which already had a base set of benchmarks whose results I could compare against on my new computer. You may also recall I’ve written about leaderboards before here on #AltDevBlogADay. In this blog post I’ll discuss two “knobs” I tweaked to get more performance out of the leaderboard library: the underlying interpreter and an underlying client library. While there was definitely a speed improvement when switching the interpreter to a more recent version, there was a more significant improvement when switching the client library.


The benchmarks were run on a 2.5 GHz Intel Core i7 MacBook Pro with 8 GB of memory under OS X Lion 10.7.2 using Redis 2.4.5. They measured the total amount of time to rank 10 million people in a leaderboard using sequential scores, total amount of time to rank 10 million people in a leaderboard using random scores and the average time to retrieve an arbitrary page from the leaderboard. The basic benchmarking code can be found in the README on the leaderboard project page.

Switching Ruby Interpreters

Below are the benchmark results from switching between different Ruby interpreters, Ruby 1.8.7, Ruby 1.9.2 and Ruby 1.9.3. Values are in seconds.

Ruby 1.8.7

Time to rank 10 million people in a leaderboard (sequential scores): 794.157420158386

Time to rank 10 million people in a leaderboard (random scores): 849.301838159561

Average time to retrieve an arbitrary page from the leaderboard (50,000 requests): 0.00165219999999999

Ruby 1.9.2

Time to rank 10 million people in a leaderboard (sequential scores): 656.387526

Time to rank 10 million people in a leaderboard (random scores): 748.955826

Average time to retrieve an arbitrary page from the leaderboard (50,000 requests): 0.0011321999999999895

Ruby 1.9.3

Time to rank 10 million people in a leaderboard (sequential scores): 651.057383

Time to rank 10 million people in a leaderboard (random scores): 719.157958

Average time to retrieve an arbitrary page from the leaderboard (50,000 requests): 0.001079199999999996


Ruby 1.8.7 to Ruby 1.9.3 (sequential scores): 18% improvement

Ruby 1.8.7 to Ruby 1.9.3 (random scores): 15% improvement

Ruby 1.8.7 to Ruby 1.9.3 (average page request): 34% improvement

Switching Redis Client Library

I decided not to benchmark Ruby 1.8.7 or Ruby 1.9.2 in tweaking the client driver “knob”.

Below are the benchmark results using Ruby 1.9.3 with the hiredis-rb client library. Values are in seconds.

Ruby 1.9.3 and hiredis

Time to rank 10 million people in a leaderboard (sequential scores): 472.544572

Time to rank 10 million people in a leaderboard (random scores): 549.911350

Average time to retrieve an arbitrary page from the leaderboard (50,000 requests): 0.0003803999999999928


Ruby 1.8.7 to Ruby 1.9.3/hiredis (sequential scores): 40% improvement

Ruby 1.8.7 to Ruby 1.9.3/hiredis (random scores): 35% improvement

Ruby 1.8.7 to Ruby 1.9.3/hiredis (average page request): 76% improvement

Ruby 1.9.3 to Ruby 1.9.3/hiredis (sequential scores): 27% improvement

Ruby 1.9.3 to Ruby 1.9.3/hiredis (random scores): 23% improvement

Ruby 1.9.3 to Ruby 1.9.3/hiredis (average page request): 64% improvement
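The improvement figures above (and in the previous section) can be reproduced directly from the raw timings. A quick Python check shows they follow from truncating, rather than rounding, the percentage improvement:

```python
# Raw timings in seconds, copied from the tables above:
# (sequential inserts, random inserts, average page request)
timings = {
    "1.8.7":         (794.157420158386, 849.301838159561, 0.00165219999999999),
    "1.9.3":         (651.057383, 719.157958, 0.001079199999999996),
    "1.9.3/hiredis": (472.544572, 549.911350, 0.0003803999999999928),
}

def improvement(old, new):
    # Percentage improvement, truncated to a whole percent,
    # which matches the figures quoted in the post
    return int(100 * (1 - new / old))

pairs = [("1.8.7", "1.9.3"), ("1.8.7", "1.9.3/hiredis"), ("1.9.3", "1.9.3/hiredis")]
for old_key, new_key in pairs:
    vals = [improvement(o, n) for o, n in zip(timings[old_key], timings[new_key])]
    print(old_key, "->", new_key, vals)
```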


It seems every aspect of video game technology is tweaked to maximize performance, whether it be frames per second or requests per second. Fiddling with very basic “knobs” here, I was able to get decent improvements in writing to and reading from a leaderboard of non-trivial size. Further speed improvements could potentially be realized by connecting to Redis over a Unix domain socket rather than TCP, as well as by using faster hardware optimized for a server environment versus an off-the-shelf laptop.

Thanks for indulging my leaderboard nerdery yet again. You can find more hilarity over on my Twitter account, @CzarneckiD.

The Craft of Game Systems: Practical Examples

Original Author: Daniel Achterman

My previous articles were about system design at a conceptual level, focusing on goals and best practices for system designers. This article gives an example of how to put those principles into practice. To demonstrate, I’ll walk through the character stat systems for one of the games I did system design for: Dungeon Siege 2. I’ll describe how its systems work, why we set things up the way we did, and how we did the math to translate our goals into content.

I’ve posted the part of our system spreadsheets where we tracked and compared these values. As you read, follow along with the numbers here:

Dungeon Siege 2 System Spreadsheet

Dungeon Siege 2 Character Statistic Progressions

As an RPG, characters in Dungeon Siege 2 increase in power as they level up. Their stats increase, they get more health, and they get better weapons and armor. In order to make content that is appropriately challenging for characters throughout the game, we needed to be able to predict how strong characters would be at each level. We decided that characters’ attributes and the base statistics of their weapons and armor would form their “base power level”, so we started by defining those values and progressions. This base power level would later be increased by additional factors like skills, powers, and random loot modifiers to give their total overall power.

All of these formulas have “Level” as a parameter. In the case of characters, this is their character level. In the case of items, it’s the level we expect characters to be when they are using the item. Item level restricts when the random treasure generator can drop it.

Damage per Second

We had a couple simple goals for base character damage per second:

  • The base damage per second formula should be the same for all classes. Dungeon Siege 2’s classes have different strengths and weaknesses, but they all start with the same DPS. This changes as players start specializing their roles by spending skill points, but it’s nice to have one base formula for everyone.
  • The base damage per second formula should be stupidly simple. It appears in countless other tuning calculations, like monster health and item bonus values, so it should be an easy formula to work with.

To keep the DPS formula simple, we made it increase linearly with level, the simplest kind of progression rate. It’s just this:

Base Damage Per Second = 8 + 2 * Level

You can see it is identical for all classes in rows 6, 16, and 27 of the spreadsheet. The “8” term represents how much damage a character does at the start of the game. The “2” term represents the rate his base DPS changes with each level.

To find a weapon or spell’s damage, we multiplied the expected base damage per second from the spreadsheet by the weapon’s time between attacks to get its damage per attack. For melee and ranged weapons, we then subtracted the expected damage bonus characters would get from statistics like Strength or Dexterity. Here’s an example of a level 10 two handed sword that swings once every 1.5 seconds:

Level 10 Sword Damage = Base DPS at Level 10 * Time per Swing – Strength Bonus

Level 10 Sword Damage = (8 + 2 * 10) * 1.5 – (0.2 * (10 + 3 * 10))

Level 10 Sword Damage = 34

So, a level 10 fighter wielding a sword that deals 34 damage per swing will deal the desired base DPS. Here’s a visual of how much of a fighter’s damage comes from his weapon vs. his strength at different weapon speeds:

If I had to do it again, I’d make the formula coefficients higher. Sometimes at low levels a player got a weapon or piece of armor that was one or two points better than his previous item, which just didn’t feel very good. There was nothing to stop us from making all the numbers in the game bigger, so we should have done it to make those early upgrades feel more exciting.
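Before moving on, the sword example above is easy to check in a few lines of Python (the 0.2 strength-to-damage coefficient and the fighter strength progression 10 + 3 * Level come from the spreadsheet):

```python
def base_dps(level):
    return 8 + 2 * level

def fighter_strength(level):
    return 10 + 3 * level   # from the spreadsheet's attribute rows

def weapon_damage(level, time_per_swing):
    """Per-swing damage so the wielder lands on the intended base DPS,
    after the 0.2 * Strength damage bonus is accounted for."""
    return base_dps(level) * time_per_swing - 0.2 * fighter_strength(level)

print(weapon_damage(10, 1.5))   # 34.0, matching the level 10 sword above
```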

Fighter Armor

The armor value progression is related to the game’s damage calculation formula. Some games have damage formulas so complicated that no one understands how they work; Dungeon Siege 2’s is pretty simple by industry standards:

Actual Damage = Attack Damage * (Expected Attacker DPS / Armor)

Actual Damage = Attack Damage * ((8 + 2 * Attacker Level) / Armor)

The “((8 + 2 * Attacker Level) / Armor)” term is essentially the percentage of attack damage the victim takes. We wanted that percentage to stay roughly constant throughout the game, and 50% sounded like a good target for durable fighter characters. Keeping the percentage constant allowed us to tweak difficulty by just changing monster health and damage, without having to factor in changing damage reduction.

In order to get the 50% ratio, base armor for fighters and monsters must be double base damage per second (see row 7 of the spreadsheet):

Armor = 2 * (8 + 2 * Level) = 16 + 4 * Level
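A quick check confirms that this armor progression pins the damage fraction at exactly 50% for an attacker of the same level, at any level:

```python
def base_dps(level):
    return 8 + 2 * level

def fighter_armor(level):
    return 2 * base_dps(level)   # i.e. 16 + 4 * Level

def damage_fraction(attacker_level, armor):
    # The fraction of attack damage the victim actually takes
    return base_dps(attacker_level) / armor

# Same-level attacker vs. fighter armor always yields 50%:
for level in (1, 10, 50):
    assert damage_fraction(level, fighter_armor(level)) == 0.5
```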

Ranger and Mage Armor

Say that a ranger has X% as much armor as a fighter. Our simple damage formula makes it easy to determine that he’ll take 1/X times as much damage from attacks as the fighter, so he’ll be able to withstand X% as much punishment before dying. For example, with 70% of a fighter’s armor, a ranger takes 1/0.7 ≈ 1.43 times as much damage per hit.

The defense values on ranger and mage armor were noticeably lower than the values of fighter armor so players could see the distinction between gear for different classes. We ended up making ranger armor defense values 70% of fighter armor and mage armor 55% of fighter armor. You can see these formulas in rows 17 and 28 of the spreadsheet.

Fighter Health

To figure out how much health characters have, we thought in terms of how long it would take to kill a character. Since base damage per second increases linearly with level, base health should also increase linearly with level, so that it takes about equally as long to kill a character at low and high levels.

Fighter health is the baseline that ranger and mage health were compared to, so we tackled it first. We began by multiplying the damage per second formula by different numbers to get a progression that looked right:

Health = 42.4 + 10.6 * Level

After playtesting a bit, we realized that low levels were pretty lethal, so we increased the starting health value to give low-level players more durability:

Health = 56 + 10.6 * Level

You can see the values in row 8 of the spreadsheet follow this progression, but the formula for it looks very different. One of the things Dungeon Siege 2 carried over from the first Dungeon Siege is that character health and mana are a function of their attributes – Strength, Dexterity, and Intelligence. Strength has the biggest impact on health, then Dexterity, then Intelligence. The health formula has this format:

Health = A * Strength + B * Dexterity + C * Intelligence

To find the actual values of A, B, and C, we needed to also consider ranger and mage health.

Ranger and Mage Health

We knew we wanted rangers and mages to have less health than fighters. As with armor, we decided to make their health a percentage of fighter health. Rangers should have 90% as much health as fighters of the same level, and mages should have 75% as much health.

This would have been easy if different classes had different health formulas, as they do in games like Diablo 2. But in Dungeon Siege 2, every class uses the same health formula. Character health is based on attributes, which grow at different rates for different classes. We had to derive health formula coefficients that would multiply their different attribute values to result in the expected health percentage for each class. Time for some math!

Here’s the health formula again:

Health = A * Strength + B * Dexterity + C * Intelligence

Rows 3-5, 13-15, and 24-26 of the spreadsheet show how attributes progress for each class. Replacing Strength, Dexterity, and Intelligence with their expected values for each class, we get the following three formulas:

Fighter Health = A * (10 + 3 * Level) + B * (10 + Level) + C * (10 + Level)

Ranger Health = A * (10 + Level) + B * (10 + 3 * Level) + C * (10 + Level)

Mage Health = A * (10 + Level) + B * (10 + Level) + C * (10 + 3 * Level)

Replacing “Fighter Health” with the desired health progression and replacing “Ranger Health” and “Mage Health” with the desired ratios of Fighter Health gives us the following:

56 + 10.6 * Level = A * (10 + 3 * Level) + B * (10 + Level) + C * (10 + Level)

0.9 * (56 + 10.6 * Level) = A * (10 + Level) + B * (10 + 3 * Level) + C * (10 + Level)

0.75 * (56 + 10.6 * Level) = A * (10 + Level) + B * (10 + Level) + C * (10 + 3 * Level)

In Dungeon Siege 2, all characters start at level 0 with the same amount of health. We can’t give fighters, rangers, and mages different health at level 0. The best we can do is give their rates of change the desired ratios. To focus on that, we can throw out all the constants in the formulas:

10.6 * Level = 3A * Level + B * Level + C * Level

9.5 * Level = A * Level + 3B * Level + C * Level

8.0 * Level = A * Level + B * Level + 3C * Level

Then divide both sides of the equations by “Level” to clean up the formulas:

10.6 = 3A + B + C

9.5 = A + 3B + C

8.0 = A + B + 3C

This is a system of simultaneous equations. This wikipedia page explains how to solve for values of A, B, and C, resulting in this final health formula:

Health = 2.5 * Strength + 1.95 * Dexterity + 1.15 * Intelligence

That’s the formula that appears in lines 8, 18, and 29 in the spreadsheet. Lines 20 and 31 and the chart show that ranger and mage health settle into the expected ratios of fighter health, so we know our math is right.

That works, but getting there was really complicated. This design also doubles up the functionality of attributes: we couldn’t increase a fighter’s strength to raise his melee damage without significantly increasing his health as well. If I were designing a game from scratch, I would do something like derive health directly from class levels, without involving attributes at all.
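The three-equation system above can also be solved without any linear algebra machinery by exploiting its symmetry: summing the equations gives 5(A + B + C) = 28.1, and subtracting (A + B + C) from each equation isolates 2A, 2B, and 2C. In Python:

```python
# Slopes of the desired per-level health gains, as simplified above
fighter, ranger, mage = 10.6, 9.5, 8.0

# The system:  3A +  B +  C = fighter
#               A + 3B +  C = ranger
#               A +  B + 3C = mage
# Summing all three gives 5 * (A + B + C) = fighter + ranger + mage,
# and subtracting (A + B + C) from each equation isolates 2A, 2B, 2C.
total = (fighter + ranger + mage) / 5.0   # A + B + C
A = (fighter - total) / 2.0
B = (ranger - total) / 2.0
C = (mage - total) / 2.0
print(round(A, 2), round(B, 2), round(C, 2))   # roughly 2.49, 1.94, 1.19
```

These sit within a few hundredths of the shipped 2.5 / 1.95 / 1.15 coefficients, which produce class slopes of 10.6, 9.5, and 7.9: close to, but not exactly, the 75% mage target, exactly the kind of "close enough" result the math tips below talk about.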

A Word on Math

Applying math to system design can take some practice. It can be difficult to look at some formulas and figure out how they behave in different situations, or how to get the result you want, especially if you’re just starting to do this kind of quantitative design work. Here are some tips for game system math:

  • Focus on setting up equations correctly. It doesn’t matter if your math is simple or complex, but it does need to be right. If I started with my intended base DPS progression but messed up the math to solve for a weapon’s damage value, then the reality of the game wouldn’t match my intended progression at all, and I’d get unexpected results and make bad conclusions.
  • The hardest part is keeping your units straight. If you know a value is “damage per second”, then multiplying by a time value will result in a damage value. Keep track of your units and don’t mix up inverses. For instance, frequency is not the same as time between attacks.
  • Don’t try to be too precise. Math in games is like math in physics – it’s a broad-strokes tool for approximating a complex system. I wasn’t able to get exactly the result I wanted with the ranger and mage health formulas, but I got close enough for my purposes. So long as your progressions are set up correctly, it’s okay if the reality of your game doesn’t match your ideal version completely.
  • Test your results by graphing them. If you’re not sure your math is right, check your work by graphing your result. Erroneous results will often be apparent, letting you know that you need to review your math to see what went wrong.
  • Practice, practice, practice. Look up formulas that other games use and try to determine why the designers set them up that way.

The Final Word: The First System Doesn’t Really Matter (Yet)

In the grand scheme of making a game, all of the above work on the first game system – every formula and value chosen – is completely arbitrary and doesn’t matter very much. Don’t obsess over it. Write some formulas down, plug some numbers in, and move on.


There are two reasons not to worry too much about the details for the first system in your game:

  • None of this is “tuning” yet. Making armor and weapon damage progression values isn’t tuning until you make the content that intersects them, such as monster health and damage. You can make that stuff match whatever you choose to do here.
  • There’s a huge range of values for game content that players will perceive as balanced. The vast majority of PvE players don’t engage with numbers in game systems in a deep way. So long as there aren’t obvious anomalies in your game, players are inclined to perceive a game as balanced.

The first system is often the hardest to design. There are so many ways you could set up your game content and make your rules work, and it’s easy to over-think it. The most important thing is for your systems to be comprehensible and workable so that you can avoid or fix any big irregularities in your content. Just choose simple rules and progressions and don’t worry about making every number perfect.

Usability Evaluation for Video Games (Part 1)

Original Author: Georgios Christou

A lot of the readers of #ADBAD are students, young developers and hobbyists, so I’ve decided to write down a few things about usability evaluation methods for games. I feel that very little is out there about how to evaluate the usability of a video game, and it seems that most people do this experientially, or not at all. It is also true that the interface is the only way a game designer/developer has a real dialogue with players, so if the user interface sucks, players are not going to play a game for long. Of course, there are exceptions (take Skyrim, for example) where an atrocious interface is beaten into submission by players because the game’s other features (storyline, progression, etc.) are so good. But let’s face it, that’s rare! I’m going to break this topic into a series of posts because of the sheer amount of information that exists about it. I hope you’ll find the series interesting!

Seven Stages of Action

To understand how to evaluate a video game, first one needs to understand users. More specifically, to understand how users work with a user interface, what they expect, and how they react to the unexpected. So first, let me present Donald Norman’s ‘Seven Stages of Action’ model which is described in his excellent book ‘The Design of Everyday Things’ (Norman, 2002). I highly suggest this book to all my students, as well as to all people who are involved with designing anything!

The Seven Stages of Action model by Don Norman (2002)

Norman proposes that when users interact with a user interface, they go through the following stages (Norman, 2002, p. 48):

  • Forming the goal
  • Forming the intention
  • Specifying an action
  • Executing the action
  • Perceiving the state of the world
  • Interpreting the state of the world
  • Evaluating the outcome

Gulf of Execution

The stages above describe how users conceive what they want to do, by forming a goal in their minds. They then form their intentions as decisions to apply some actions that will bring about the completion of their goal. Here lies the first pitfall in designing user interfaces: users know what actions they want to perform, but they do not know how to execute them. This is called the gulf of execution — a gulf that users must cross if they want to turn their intentions into a specific sequence of actions, allowed by the user interface given to them, that reaches their conceived goal. An example can be taken from text adventure games (for those who are “young” enough to have played them!), and the way these games handled interaction with their players. Once players entered the world of the game, they needed to know the commands with which to move around and execute actions. That’s why manuals were so important. Otherwise, the players just read a textual description of the starting room of the game and saw the prompt staring blankly back at them.

A Mind Forever Voyaging text adventure by Infocom

Thus, they needed external help to turn their intentions into action sequences that the game understood. This, together with a very small vocabulary that text adventures understood, created problems of immersion. And that is why point-and-click adventure games later on had such an impact. They gave players a more natural way to interact with the game world, without requiring them to know exactly how to phrase an action so that the game would understand what the players wanted to do.

Gulf of Evaluation

This whole process describes how users effect change on the game world. Once the change is effected, users require feedback to understand what they have actually done. Without feedback, there is no reason to go into the second part of the seven stages of action model. However, even the feedback that players receive presents problems. Once players have performed their action sequence, they perceive the new state of the world. And here is where the second pitfall lies. Between the perception of the state of the world and the interpretation of that perception lies the gulf of evaluation. This gulf is the ease (or the difficulty) with which users can correctly surmise what has happened in the world. Again I’ll use text adventures as the example. Players could type their commands, but until they actually told the game to ‘look around’ they did not know whether the change they had effected in the game state was the one that achieved their goal. This was because text adventure games (sometimes) did not provide a description of the new room the players entered until the players actually looked at the new room and its contents. Once the players looked around, they could evaluate the outcome of their actions and see how close they were to accomplishing their goal of moving to a target room in the game. Again, graphic adventures made the gulf of evaluation smaller by presenting graphical versions of the world, where the players could see their surroundings and realize that they had passed into a different state in the game world.

Screencap from King’s Quest IV graphic adventure by Sierra On-Line, Inc.


An understanding of the gulfs of execution and evaluation arms the game designer and the game developer with some important tools. The problems created by a gulf of execution are mostly due to actions that don’t map well to the game world. For example, ‘World of Warcraft’ players who started playing ‘Star Wars: The Old Republic’ were rewarded with a very familiar user interface and a control set that almost matched that of ‘World of Warcraft’ out of the box. Thus, mastering the interface is not a challenge that needs to be overcome by the player, who would probably prefer to master the gameplay challenges instead.

The concept of the gulf of execution meshes nicely with user interface standards. Although I am not a proponent of standardizing user interfaces, it is nice to have some knowledge transfer, at least of the controls, in games that belong to the same genre (think about the ‘WASD’ control keys, for example). Remember here that one of the goals when designing a video game is to challenge the players as they gain further mastery of the game, not of the user interface of the game. And if your user interface is one that is entirely new, then you should allow players to acclimate themselves and breach the gulf of execution before creating serious gameplay challenges.

On the gulf of evaluation side of things, there are quite a few ways to provide feedback that is not obtrusive. While it’s great to show players of, say, FPS games that they are getting shot in the back, there are subtler ways than printing it on the screen in front of them! Some games flash a red border at the players, and others provide not only visual but also audio feedback. However, again, the primary goal should be to make this feedback quickly understandable. Think about how many mods have been created for ‘World of Warcraft’ designed to make the life of healers easier during raid battles. Why is that? Because one of the primary difficulties in WoW’s interface was that it didn’t allow immediate understanding of the health of all the raid members. As the developers listened to their users, however, the interface became more and more pliable, although not entirely to the point that healers are throwing away their Whammys and their HealBots.


In conclusion, the gulfs of execution and evaluation are probably two of the most widely applied concepts in usability evaluation. So the next time you start your game design, maybe you’ll think about how to help your players cross those two gulfs more easily!

This blog post is the first in a series of posts about how to use usability evaluation principles for your games. If you like this, let me know so that I continue writing on the subject. You can also let me know if you are interested in a specific aspect of usability evaluation, and I’ll do my best to either explain, or to study up and let you know what I find.


Norman, D. (2002). The Design of Everyday Things. Basic Books.

Code Build Optimisation Part 4 – Incremental Linking and the Search for the Holy Grail.

Original Author: Andy Firth

So you’ve done your full rebuild, waited out your now much shorter build time while talking with colleagues over coffee… but wait… you forgot to make that one edit to test your function… there, done… you see your file building… it’s done and BAM: “Linking…” Time to go get more coffee… this baby is going to take a while.

Typically, on small programs, linking is measured in mere seconds (often less) and is largely unnoticeable. However, on larger projects with millions of lines of code, heavy template usage, and optimizations such as inline functions and weak references (__declspec(selectany)), your symbol count goes off the charts. Combine this with the probable use of libraries to compartmentalize your code and the likelihood that you’re compiling more than one target at a time (DLLs, tools, runtime), and the linker is likely doing a significant amount of the work for you AFTER compiling your single file. As an example, in our current code at Bungie, a single-file change in the code I usually work on can cost as much as 10 minutes in linking (many targets). If I opt to build a single target (say, the runtime), this is reduced to ~2 minutes. This is without any linker optimization such as Link Time Code Generation, which is known to heavily increase link time.

In my experience, most of my changes (and most of the changes my colleagues make) before doing a compile involve a single file. More often than not it’s simpler than that: a single line. So the iteration is something like:

  • make change
  • hit compile
    • 1-2 seconds of compile
    • 1-2 mins to link the single runtime target.
  • load game
  • test
  • repeat

Now, loading the game is always going to be a problem*, but the build itself is completely dominated by linking. If only that step could be faster.

Incremental Linking

Incremental linking relies on an .ilk file. The first time you link with /INCREMENTAL enabled, the linker generates a database of the symbols within the target executable and the object files within said executable that supplied them. On subsequent links, this file is used to cross-reference a changed object file (the single file you changed) against the final executable, allowing the linker to “patch” the symbols that differ with the new versions. The patch is achieved using a code thunk, so the resulting code can be slower; the recommendation is to do a full rebuild before doing any performance testing.

Incremental linking is on by default for link.exe. New projects generated with MSVC will, however, explicitly disable it for Release. I would assume that most if not all of your simple programs are using it in Debug. Sadly, the most common “working” case is the one that needs it least: small programs have short link times. In my experience, the more complex your project, the less likely incremental linking is working for you. I have spoken with many engineers over the years about incremental linking, and almost all believe it to be a flawed feature of MSVC. Sometimes it works, sometimes it doesn’t; and when it doesn’t work it actually increases link times, so for many engineers it isn’t worth wrangling with.

I experienced the same: sometimes it works and it’s fantastic; sometimes it simply doesn’t. Most of the time, though, the linker tells you the reason.

If your project uses any linker options that alter the symbols the linker emits, or that explicitly dictate the order in which those symbols are placed, then the linker will internally disable incremental linking (the MSDN docs explicitly mention this):

  • /ORDER (forcing COMDAT order): lays out functions and data in an explicitly specified order, which the incremental linker cannot preserve while patching an existing executable.
  • /OPT:ICF (identical COMDAT folding): folds COMDATs with identical bodies (common with heavy template usage) into a single copy, which changes the set of symbols that end up in the executable. /OPT:REF, which strips unreferenced symbols, disables incremental linking for the same reason.
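
As a quick command-line reference (these are real link.exe options; the object and output names here are made up for illustration):

```shell
# Incremental-friendly development link -- writes and reuses game.ilk:
link /INCREMENTAL /DEBUG /OUT:game.exe main.obj render.obj

# Any of these silently turn /INCREMENTAL back off:
link /INCREMENTAL /OPT:ICF /OUT:game.exe main.obj render.obj       # COMDAT folding
link /INCREMENTAL /OPT:REF /OUT:game.exe main.obj render.obj       # strip unreferenced symbols
link /INCREMENTAL /ORDER:@order.txt /OUT:game.exe main.obj render.obj  # forced ordering
```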

Concurrent World Editing, On the Cheap

Original Author: Jonathan Blow

A small team with limited manpower, we wanted a low-effort way to use our existing version control system (svn) to facilitate concurrent world editing for our open-world 3D game (i.e. we desire that multiple users can edit the world at the same time and merge their changes).  We developed a system wherein the data for each entity is stored as an individual text file.  This sounds crazy, but it works well: the use of individual files gives us robustness in the face of merge conflicts, and despite the large number of files, the game starts up quickly.  Entities are identified using integer serial numbers.  We have a scheme for allocating ranges of these numbers to individual users so there is no overlap (though this scheme could certainly be better).  The system seems now to be robust enough to last us through shipping the game, with seven people and one automated build process concurrently editing the world across 18 different computers.

We don’t know whether this method would work for AAA-sized teams, though it seems that it might with some straightforward extensions.  But for us the method works surprisingly well, given how low-tech it is.

About the Game

We have been developing a 3D open-world game called The Witness since late 2008.  For much of that time, development happened in a preproduction kind of way, where not many people were working on the game.  Recently, though, we went into more of a shipping mode, so the number of developers has grown: we’re up to about 10 or 11 people now (pretty big for a non-high-budget, independent game).  Not all of these people directly touch the game data, but about 7 do now, and this number will rise to 9 in the near future.

The design of the game requires the world to be spatially compact, relative to other open-world games, so it is far from Skyrim-esque in terms of area or number of entities.  At the time of this writing, there are 10,422 entities in the game world.  I expect this number to grow before the game is done, but probably not by more than 2x or 3x the current amount.

Because the world is not too big, and some game-level events require entities to coordinate across the length of the world, it makes sense to keep all the entities in one space; that is to say, there is no zoning of the world or partitioning of it into separate sets of data.  This makes the engine simpler, and it reduces the number of implementation details that we need to think about when editing the world.  It also introduces a bit of a challenge, because it requires us to support concurrent editing of the world without an explicit control structure governing who can edit what.  One such control structure might be this: imagine the world divided into a 10×10 grid, giving 100 different zones; anyone editing the world must lock whatever zones they want to edit, make their changes, check them in, then unlock those zones.  This would provide a very clear method of ordering world edits.  (Because of the locking mechanism, two people would not be able to change the same world entity at the same time, so there doesn’t have to be any resolution process to decide whose changes are authoritative.)  However, the straightforward implementation of such a control scheme would make for inconvenient workflow, as it is tedious to lock and unlock things.  Even with further implementation work and a well-streamlined UI, there would still be big inconveniences.  (I really want to finish building a particular puzzle right now, but a corner of that puzzle borders on a region that a different user has locked, and he is sick this week, so he probably won’t finish his work and unlock it until next week.  Or, if my network connection is flaky then I may not be able to edit; if I am off the net then I definitely cannot edit.)

Thus, rather than implementing some kind of manual locking, I found it preferable to envision a system that Just Magically Works.  Since we already used the Subversion source-control system to manage source code and binary source assets such as texture maps and meshes, it didn’t seem like too big of a stretch for Subversion to handle entity data as well.

The World Editor and Entity Data

Our world editor is built into the game; you press a key to open it at any time.  Here’s what it looks like:

The editor view is rendered in exactly the same way as the gameplay view, but you fly the camera around and select entities (I have selected the red metal door at the center of the scene, so it is surrounded by a violet wireframe).  In the upper right, I have a panel where I can edit all the relevant properties of this Door; the properties are organized into four tabs and here you can see only one, the “Common” properties.

These properties, which you can edit by typing into the fields of this Entity panel, are all that the game needs to instantiate and render this particular entity (apart from heavier-weight shared assets like meshes and texture maps).  If you serialize all these fields, then you can unserialize them and generate a duplicate of this Door.  That’s how the editor saves the world state: serialize every entity, and for each entity, serialize all fields.

Originally, the game saved all entities into one file in a binary format.  This is most efficient, CPU-performance-wise, and the natural thing that most performance-minded programmers would do first.  Unfortunately, though, svn can’t make sense of such a file, so if two people make edits, even to disjoint parts of the world, svn is likely to produce a conflict; once that happens, it’s basically impossible to resolve the situation without throwing away someone’s changes or taking drastic data recovery steps.

Our natural first step was to change this file from binary into a textual format.  Originally I had worries about the load-time performance of parsing a textual file, but this turned out not to be an issue.  I decided to use a custom file format, because the demands in this case are very simple, and we have maximum control over the format and can modify it to suit our tastes.

Many modern programmers seem to have some kind of knee-jerk inclination to use XML whenever a textual format is desired, but I think XML is one of the worst file formats ever created, and I have no idea why anyone uses it for anything at all, except that perhaps they drank the XML Kool-Aid or have XML Stockholm Syndrome.  Yes, there are XML-specific tools out there in the world, and whatever, but I didn’t see how any such tool would be of practical use to us.  One of our primary concerns is human-readability, so that people can make intelligent decisions when attempting to resolve revision control conflicts.  XML is not particularly readable, but we can modify our own format to be as readable as it needs to be in response to whatever real-world problems arise.  You’ll see a snippet below of the format we use.

Evolution of the Textual Format

The first version of our textual format was a straightforward adaptation of our binary format.  The gameplay-level code that saves and loads entities is the same for either format; the only difference is just a boolean parameter, choosing textual or binary, which is passed to the lower-level buffering code.  After the buffer is done being packed, it’s written to a file.

In our engine, entities are generally referenced using integer serial numbers (that act as handles).  These numbers are the same when serialized as they are at runtime.  Originally, in the binary file, the order in which the entities were written didn’t matter.  But if we want to handle concurrent edits, we want to make merging as easy as possible for the source control system, so we sort entities by serial number, writing them into the file in increasing order.  Thus we didn’t generate any spurious file changes due to haphazardly-changing serialization order.
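That deterministic write order can be sketched as follows (the types and function here are hypothetical illustrations, not The Witness’s actual code):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Hypothetical entity record: an integer serial number (the handle) plus
// its already-serialized textual body.
struct Entity {
    int id;
    std::string body;
};

// Serialize the whole world deterministically: sort by serial number first,
// so the same world always produces byte-identical output regardless of
// in-memory iteration order.
std::string serialize_world(std::vector<Entity> entities) {
    std::sort(entities.begin(), entities.end(),
              [](const Entity &a, const Entity &b) { return a.id < b.id; });
    std::string out;
    for (const Entity &e : entities) {
        out += std::to_string(e.id) + "\n" + e.body + "\n";
    }
    return out;
}
```

Because the output no longer depends on in-memory iteration order, two saves of the same world produce byte-identical files, and the source control system only ever sees real edits.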

In theory, Subversion could now handle simultaneous world edits and just merge them, the same way it does with our source code.  Unsurprisingly, though, this initial attempt was not sufficient.  Though the file was now textual, it contained a bare minimum of annotations and concessions to readability.  Though I knew I would eventually want to add more annotations, I didn’t want to go overboard if it wasn’t necessary, because these inflate the file size and the time required to parse the files (though I never measured by how much, so it’s likely the increases are negligible performance-wise and I was engaging in classic premature optimization).  As it happens, the kind of redundant information that increases human-readability also helps the source control system understand file changes unambiguously, so as we made the format more-readable, the number of conflicts fell.

Here’s what our file format currently looks like, for entity #6555 (the Door we had selected in the editor screenshot).

Door 72 6555
; position
    46.103531 : 42386a04
    108.079681 : 42d828cc
    8.694907 : 410b1e57
; orientation
    0.000000 : 0
    0.000000 : 0
    -0.069125 : bd8d9184
    0.997608 : 3f7f633d
; entity_flags
; entity_name
; group_id
; mount_parent_id
; mount_position
    0.035274 : 3d107ba5
    0.006069 : 3bc6e19c
    0.000038 : 381ffffc
; mount_orientation
    0.000000 : 0
    0.000000 : 0
    0.000027 : 37e5f004
    1.000000 : 3f800000

(The first line, “Door”, is the type name; 72 is the entity data version, which is used to load older versions of entities as their properties change over time; 6555 is the entity’s ID.  root_z and cluster_id (seen in the screenshot) are not saved into the file, because these properties have metadata that say they are only saved or loaded under certain conditions which are not true here.  For the sake of making the listing short, I’ve shown only the properties listed on that first tab in the screenshot.)
You’ll note that the floating-point numbers are written out a bit oddly, as in this first line under ‘position’:

    46.103531 : 42386a04

This is how we preserve the exact values of floating-point numbers despite saving and loading them in a text file.  If you do not take some kind of precaution, and use naive string formatting and parsing, your floating-point values are likely to change by an LSB or two, which can be troublesome.

The number on the left is a straightforward decimal representation of the value, written with a reasonable number of digits.  The : tells us that we have another representation to look at; the number on the right, which the : introduces, is the hexadecimal representation of the same number.  The IEEE 754 floating-point standard defines this representation very specifically, so we can use this hexadecimal representation to reproduce the number exactly, even across different hardware platforms.  The decimal representation is mostly just here to aid readability.  However, if we decide we want to hand-edit the file, we can just delete the : and everything after it, changing that first number to a 47.5 or something; seeing no :, the game will parse the decimal representation and use that instead.
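
A minimal sketch of this exact round-trip (a standalone illustration, not The Witness’s serializer; the function names are invented):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Write a float as "decimal : hex". The decimal is for human readers; the
// hex is the exact IEEE 754 bit pattern, so reloading is lossless.
void write_float(char *out, size_t out_size, float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof(bits));  // view the float's bits without aliasing issues
    snprintf(out, out_size, "%f : %x", f, (unsigned int)bits);
}

// Parse a value back. If a ':' and hex bits are present, use them for an
// exact reconstruction; otherwise (a hand-edited line with the ':' deleted)
// fall back to parsing the decimal representation.
float read_float(const char *text) {
    float decimal = 0.0f;
    unsigned int bits = 0;
    if (sscanf(text, "%f : %x", &decimal, &bits) == 2) {
        float f;
        uint32_t b = (uint32_t)bits;
        memcpy(&f, &b, sizeof(f));
        return f;  // bit-exact round-trip
    }
    sscanf(text, "%f", &decimal);
    return decimal;
}
```

The fallback branch is what makes hand-editing work: delete everything from the `:` onward, type `47.5`, and the loader simply takes the decimal value.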

There are other ways to solve this floating-point exactness problem; for example, you can implement an algorithm that reads and writes the decimal representation in a manner that carefully preserves the values.  This is complicated, but if you’re interested in that kind of thing, here’s what Chris Hecker says:

Everybody just uses David Gay’s code:

You need to get the flags set right on the compiler for it to work. /fp:precise at least, I also do #pragma optimize (“p”,on), and make sure you write a test program for it with your compiler settings.

(Twitchy, and look at how much code that is!  What we did is a couple of lines.) In a document on his web site, Charles Bloom discusses a scheme that was in use at Oddworld Inhabitants, similar to what we’re doing in The Witness but involving a comparison between the two representations:

… Plain text extraction of floats drifts because of the ASCII conversions; to prevent this, we simply store the binary version in the text file in addition to the human readable version; if they are reasonably close to each other, we use the binary version instead so that floats can be persisted exactly.

Subversion Confusion

After switching to this format, conflicts still happened.  I categorize conflicts into two types: “legitimate”, where two people edit the same entity in contradictory ways and the source control system doesn’t know whom to believe; and “illegitimate”, where Subversion was just getting confused.  We did have a number of legitimate conflicts for a while, though they were not the users’ fault, because our world editor tended to do things that are bad in a concurrent-editing environment.  (See the Logistics section at the end of this article for some examples.)  But even after we ironed these out, we found that Subversion was still getting confused.

Here’s an example of a common problem: suppose we start with a world containing data on entities A, B, and D, so our file contains: ABD.  You edit the file and delete entity B.  I edit the file and also delete entity B, but add another entity C.  (That we both deleted B may be a very common use-case, especially if this is due to some automatic editor function).  So, you want to check in a file that just says: ABD – B = AD, so the new result is AD.  I want to check in a file that says: ABD – B + C = ACD (the C goes in the middle, because remember, the entities are sorted!)  In theory this is an easy merge: the correct result is just ACD.  But Subversion would get confused in cases like this, requiring the user to hand-merge the file.

This is a disaster, because even with a visual merge tool such as the one that comes with TortoiseSVN, you don’t want non-engine-programmers trying to merge your entity data file.  Even if that file is extremely readable, at some point someone will make a mistake; the file will fail to parse, and then you have a data recovery problem that requires the attention of an engine programmer.  When this happens it kills the productivity of two people (the engine guy, because he has to merge the data, and also the guy who generated the conflict, because until it gets resolved his game is in an un-runnable state, and anyway, the engine guy has probably commandeered the other guy’s computer to fix this).

After we had this problem a few times too many, Ignacio suggested moving to a more structured file format like JSON, which, in theory, source-control systems would handle more solidly.  But I thought that this would only reduce the frequency of the problem, not solve anything fundamental; even if we achieved a situation of only generating legitimate conflicts, each conflict would still be a potential game-crippling disaster.

Splitting the File

It occurred to me, finally, that it could hardly get cleaner than to store each entity in its own file.  This initial thought might cause a performance-oriented programmer to recoil in horror; we have 10,422 entities right now, and it must be very slow to parse and load 10,000 text files every time the game starts up!

But I knew that we already had over 4,000 individual files laying around for assets like meshes and texture maps, currently loaded individually, the idea being that at ship time we would combine these into one package so as to avoid extra disk seeks and the like.  But on our development machines, these files still loaded quickly.  I figured that if we can already load 4,000 big files, why not try loading an extra 10,000 small files?  If loading all these files turned out to be slow, we could resort to this only when entity data has changed via a source control update; after loading all the text files, we can save out a single binary file containing all the entity data, which would be much faster to load, just like we used to do before we supported concurrent editing.

I tried this and it worked great.  So, now we have a data folder called ‘entities’, which holds one file per serialized entity.  The Door selected in the editor screenshot is stored in a file named after its entity ID, “6555.entity_text”, which contains the text you see in the listing above.

As mentioned, there are currently 10,422 entity files; the text files take up about 8MB of space, whereas the binary file is 1.4MB.  We’re not making any effort to compress either of these formats; they are very small compared to the rest of our data.



Benefits of Using Individual Files

Storing each entity in a separate file solved our problems with merging: both the illegitimate source control confusions as well as the problems due to hand-guided file merging.

The illegitimate conflicts go away because transactions become very clear.  If I delete an entity, and you delete an entity, then we both just issue a file delete command, and it’s very easy for the source control system to see that we agree on the action.  Furthermore, if we both delete B but I add C, because the change to C is happening in a completely separate file, there is no way for the system to get confused about what happened.

Legitimate conflicts do still happen, as multiple people edit the same entities, but these cases are much easier to handle than they were before.  The usual response is to either revert one whole entity or stomp it with your own.  Because we usually want to treat individual entities atomically — using either one person’s version of that entity or the other’s — individual files are a great match, because now the usual actions we want to perform are top-level source control commands (usually from the TortoiseSVN pulldown menu); before, this wasn’t true, and merging a single entity in an atomic way required careful use of the merge tool.  If an entity ever gets corrupted somehow, for whatever reason, we can just revert that one file, knowing the damage is very limited in scope; before we’ve reverted it, the worst that can happen is that that single entity will not load; it won’t prevent the rest of the entities in the world from loading.

It would be difficult to overemphasize the robustness gained.  I feel that these sentences do not quite convey the subtle magic; it feels a little like the state change when a material transitions from a liquid to a solid.

Sometimes someone will update their game data in a non-vigilant way: they don’t notice that a conflict happened, so they don’t resolve it, and then they run the game.  Before, this would have been a disaster; Subversion inserts annotations into text files to try to help humans manually resolve conflicts (an unfortunate throwback to programs like SCCS), which would have caused much of the world to fail to parse.  Now, though, the damage from these annotations is constrained to individual entities.  It’s also easier to detect: if there is a conflict in the file for our Door #6555, then in addition to writing these annotations into 6555.entity_text (making it unparseable by the game), Subversion will also write out files 6555.entity_text.mine and 6555.entity_text.theirs, which contain the two intact but conflicting versions of the file.  The game can detect these very straightforwardly, put an error message on the HUD, and even choose one to load by default.  It is all very clean.

This is one of those interesting cases where reality-in-practice is starkly different from computer-science-in-theory.  In theory, a directory is just a data structure organizing a bunch of black-box data, and we happen to view this organization in a way that looks like a list of files; we could just do the same thing interior to our own file format, and it would be no big deal.  In practice, due to the way our tools work, these two arrangements are tremendously different.  The individual files just feel solid and reliable to work with, whereas one big structured file is always going to be precarious.

Startup Procedure

When the game starts up, we need to load all the entities from disk.  In the simplest case, this is straightforward: we just iterate through every file in the entities directory and load it.  However, as I mentioned, we wanted to keep a binary version of the entities that is preferred by the game if it exists and is up to date; partly, this might help startup speed a little (we don’t have to parse all these strings), but mainly, that is the file format and code path we are going to use when the game ships, and we don’t want that code to rot in the meantime.  Using it every day is the best way to prevent it from rotting.

So: when the game first starts up, we check for the binary entity package.  If it’s not there, we load all the individual entities files, then immediately save out the binary package.  (This binary package is a private, local file; if we were to add it to source control, everyone would always conflict with everyone else.)  But suppose we start up and the binary package is already there (which it usually will be).  We need then to determine if any of the textual entities have changed since the binary package was built.  The most-general way to do this is just by looking at the file timestamps; if we were to rely on some other signal being written to the disk that says “hey, entity data has changed,” then we would only be allowed to modify the entity files using sanctioned and customized tools; that would be annoying and would cause all kinds of robustness issues.

In concrete terms: we want to know the timestamp of the newest .entity_text file, inclusive of the directory containing these files (if we just did a Subversion update that only deleted entities, then all the .entity_text timestamps will be old, but the directory timestamp will be new; we want to ensure we properly rebuild the binary file in this case!)  To find the newest timestamp, we just iterate through all the timestamps in a brute-force way:

bool os_get_newest_file_modification_time(char *input_dir_name, Gamelib_Int64 *file_time_return) {
    // We include the modification time of the directory as well.

    char *dir_name = mprintf("%s/*", input_dir_name);
    if (strlen(dir_name) >= MY_MAX_PATH - 5) return false;

    // best_time starts as the time for the directory (if it exists!)
    Gamelib_Int64 best_time = 0;
    bool success = get_file_modification_time(input_dir_name, &best_time);
    if (!success) {
        Log::print("Unable to get modification time for directory '%s'!\n", input_dir_name);
    }

    // Now consider the file timestamps.
    WIN32_FIND_DATA find_data;
    HANDLE handle = FindFirstFile(dir_name, &find_data);
    if (handle == INVALID_HANDLE_VALUE) return false;

    while (1) {
        if (find_data.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) {
            // Ignore subdirectories; we only care about the files.
        } else {
            Gamelib_Int64 file_time = to64(find_data.ftLastWriteTime);
            if (file_time > best_time) best_time = file_time;
        }

        BOOL success = FindNextFile(handle, &find_data);
        if (!success) break;
    }

    FindClose(handle);

    if (file_time_return) *file_time_return = best_time;

    return true;
}
bool get_file_modification_time(const char *name, Gamelib_Int64 *result_return) {
    // FILE_FLAG_BACKUP_SEMANTICS is required to open a handle to a directory.
    HANDLE h = CreateFile(name, GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE,
                          NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
    if (h == INVALID_HANDLE_VALUE) return false;

    FILETIME write_time;
    GetFileTime(h, NULL, NULL, &write_time);
    CloseHandle(h);

    if (result_return) *result_return = to64(write_time);
    return true;
}
You’ll notice that this is Windows-specific code, but we don’t have to worry about porting it, because we only run the world editor on Windows.  Fortunately, in Windows, the file timestamp is right there in the directory data, so we don’t have to open each file.  (At least in NTFS this is true, rather than being an API illusion; the timestamps are stored in the Master File Table [MFT].)

What’s the performance like?  I typically run the game on two different machines: a high-performance Tower PC (CPU: Intel Core i7-970, hard drive: OCZ Vertex 3 SSD, RAM: 12GB) and a gaming Laptop (CPU: Intel Core i7-2820QM, hard drive: Intel SSD 500 [SSDSC2MH250A2], RAM: 12GB).

The time spent in os_get_newest_file_modification_time is:  Tower: .0095 sec, +/- .0015 sec; Laptop: .039 sec, +/- .005 sec.  (Timings were computed by averaging six trials, after a discarded warm-up run.)  This is definitely one of those cases where I’m glad I didn’t prematurely optimize, as I would have expected this to be much slower!

We typically install SSDs in all our work machines, but even when people have home machines that use spinning-platter hard drives, startup times are not much slower (likely due to the MFT data staying in cache).

If for a large enough number of entities this timestamp-gathering process becomes too slow, we can always write some code to read the timestamps out of the MFT directly, avoiding all the API call overhead, as in this forum posting.  To date we’ve had no need to do this.

Some people have reported performance problems when Windows has more than 100,000 files in one directory, but this can be worked around: just make 10 directories, with IDs ending in each digit from 0-9 grouped into a common directory.  (Why use the least-significant “ending” digit, rather than the most-significant “starting” digit, which might be the impulsive choice?  Because if you do it that way, Benford’s Law says your entities are likely to be unevenly distributed across the directories.  Why not just hash them into a folder?  Because it’s useful to be able to manually navigate to the entity you want, without searching.  (However, partitioning by least-significant digit may be inconvenient if you want to manually edit a sequential range of entities for some reason, because that range will span many directories.  If this is an important use case, perhaps there’s a good compromise like using the tens digit or hundreds digit — though I have the itching feeling that a clearly superior solution exists.))
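In code, that routing is a one-liner (a hypothetical helper, not from our editor):

```cpp
#include <string>

// Hypothetical helper: route an entity to one of ten subdirectories by the
// least-significant digit of its ID. Trailing digits of serial numbers are
// uniformly distributed, whereas leading digits are not (Benford's Law),
// so this keeps the ten directories evenly sized.
std::string entity_subdirectory(int entity_id) {
    return "entities/" + std::to_string(entity_id % 10);
}
```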

As things stand in terms of editing the game world, we are probably not yet at the peak entity count, but there’s unlikely to be more than 3x the number of entities we have now.  Given how low the timing measurements are, it’s clear that performance will not be a problem for this system between now and when we ship the game.

Partitioning the ID Space

As I mentioned, in our engine, entity IDs are integer serial numbers.  We like this because they are easy to type (we have to type them in the editor sometimes) and at runtime, looking up an entity compiles to a fast indexing of a flat array, with a bounds check.

This introduces another problem for us to deal with in world editing, though: we need to ensure that two users, who are both creating new entities, do not both try to use the same number.  This would result in both users attempting to add completely incompatible .entity_text files with the same name.

One obvious and bulletproof solution is to have the editor connect to the network and atomically grab an ID from a central server each time one is needed.  But again, we don’t want to require users to be on the network in order to edit.

My solution was a somewhat lazy one: we allocate a range of IDs to each computer that edits the world.  These are kept in a file called “id_ranges”; at startup time, the game queries its machine’s MAC address, then looks in the file for the corresponding ID range; if not found, it claims a new range and adds a new line to the file:

 6000 10000 BC-AE-C5-66-28-E9 #Jonathan_office
10000 13000 BC-AE-C5-66-28-48 #Ignacio_office
13000 16000 00-1C-7B-75-00-D8 #Jonathan_laptop
16000 19000 BC-AE-C5-66-28-93 #Shannon_office
19000 22000 BC-AE-C5-56-42-51 #Jonathan_home
22000 25000 C0-3F-0E-AA-F3-F2 #icastano_home
25000 28000 F4-6D-04-1F-D4-36 #Shannon_home
28000 31000 Baker #Baker
31000 34000 BC-AE-C5-56-43-FE #Misc_PC_office
34000 37000 00-E0-81-B0-7A-A9 #eaa
37000 40000 00-21-00-A0-8F-E3 #Orsi
40000 43000 64-99-5D-FD-97-64 #Jonathan
43000 46000 08-86-3B-42-0B-8D #Misc
46000 49000 90-E6-BA-88-9B-B7 #andrewlackey
49000 52000 48-5B-39-8C-80-5E #eaa
52000 55000 50-E5-49-55-64-BF #facaelectrica
55000 58000 00-FF-D4-C7-15-A3 #facaelectrica
58000 70000 Clusters #Clusters
71000 75000 Overflow #Overflow
75000 78000 F4-6D-04-1F-CC-77 #Nacho
78000 81000 BC-AE-C5-74-7B-A3 #Thekla

On each line, everything after the # is just a comment telling us in a human-readable way whose machine is responsible for that range (it defaults to the Windows username, but we can hand-edit that if we want).

(“Baker” is for an automated process, so I made a special case that uses the string “Baker” instead of the MAC address, just to minimize the potential for the build machine to claim a new range without us noticing and failing to check that in; see below.  “Clusters” is for an editor operation that is also run in an automated process, but which can also be done manually by anyone; cluster generation removes all previous clusters from the world, so if clusters didn’t have their own space, you would end up frequently deleting entities in other peoples’ ranges; this seems like it might be a bad idea, though maybe, in reality, it’s fine and there would be no problem.  But as it stands we know that anything in the Clusters range is completely disposable, so if a conflict ever arises there we can just revert or delete those entities without a second thought.)
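In sketch form, the claiming step just scans the file for the highest end-of-range and takes the next block of IDs (this is a hypothetical condensation, not our actual code; the real file also carries the MAC address and comment on each line, but only the two leading numbers matter for allocation):

```cpp
#include <algorithm>
#include <sstream>
#include <string>

struct Id_Range { int first; int limit; };  // IDs in [first, limit)

// Hypothetical sketch: find the highest end-of-range in id_ranges,
// then claim the next 3000 IDs for this machine.
Id_Range claim_new_range(const std::string &id_ranges_contents) {
    int highest = 0;
    std::istringstream lines(id_ranges_contents);
    std::string line;
    while (std::getline(lines, line)) {
        std::istringstream fields(line);
        int first, limit;
        if (fields >> first >> limit) highest = std::max(highest, limit);
    }
    return Id_Range{highest, highest + 3000};
}
```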

This ID space is naturally going to have a lot of big holes in it, but before we ship we will renumber the entities so that we can pack them tightly into an array.  In the meantime, it is sparse but not pathologically so; we can still get away with using a linear array to index the entities, though it will use more memory now than at ship.

This id_ranges file is a bit of a lazy solution, and it causes a few annoyances.  The most annoying thing is that, any time someone runs the editor for the first time on a new machine, they need to immediately check in the id_ranges file to publish their allocation to the rest of the team; and they need to coordinate so that nobody else is trying to do the same thing.  Because the editor just allocates the next 3000 IDs starting with the highest ID in the file, if two people try to allocate IDs simultaneously, they will both claim the same range.  We could solve this by requiring the editor to hold a source control lock to modify the id_ranges file, but I just haven’t gotten around to it, because this hasn’t been a big-enough problem.  Furthermore, we could have the editor automatically commit the changes to the file; but currently, nothing is ever published to the rest of the team automatically — publishing always requires manual action, and this seems like it may be a good thing to preserve.

But, not having implemented these solutions, we have been taken by surprise a couple of times by interactions with a second lazy choice in this system: the fact that the file only records one MAC address per computer.

Most modern computers have multiple MAC addresses (for example: if your computer has a Wifi adapter and two Ethernet ports, it’s probably got at least 3 MAC addresses).  My lazy code just asks Windows to iterate the MAC addresses and reports the first one for use in the file.  Sometimes, though, the reported address will change, which causes someone’s machine to modify the id_ranges file when they didn’t expect it; we had this problem once when an audio contractor, who works offsite, installed some VPN software.  If that user is not vigilant about source control (which non-technical people often aren’t), it can be a while before this problem is discovered; meanwhile, that user has been creating entities that may conflict with someone else’s.

Most or all occurrences of this problem could be solved by recording all of the user’s MAC addresses in the file, instead of one; I just haven’t done so.

If we weren’t using integer serial numbers, we wouldn’t really need id_ranges at all.  For example, if we looked up entities at runtime by hashing unique string identifiers, then we would only need to ensure that each user has a unique string prefix.  We could take a slightly-overkill approach, where each entity ID is a somewhat long string starting with the user’s MAC address; or we could even generate a Windows GUID for each entity.  MAC addresses are 48 bits wide.  If you concatenate the MAC address with a 4-byte integer, that’s just 10 bytes per entity ID, which isn’t much (we currently use 4, already!).  Of course encoding these as strings and storing them in a hash table is going to require more space, but it isn’t ridiculous.  Looking things up in a hash table all the time is slower than indexing a flat array, so we are sticking with the way we do things for now; but this scheme would solve some problems and simplify the overall system, so it sounds like the kind of decision I would change my mind on later, as computers get even faster than they currently are.  So if you implement a scheme like this, you can ignore everything I said about id_ranges.  The biggest problem becomes making it easy to type longer entity IDs into the in-game editor.
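A sketch of what such an ID might look like (hypothetical; the exact encoding doesn’t matter much, so long as the MAC-address prefix guarantees cross-machine uniqueness without any central coordination):

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Hypothetical sketch of the string-ID alternative: 48 bits of MAC address
// plus a 32-bit per-machine counter -- 10 bytes of payload per ID, and no
// id_ranges file needed for uniqueness.
std::string make_entity_id(uint64_t mac48, uint32_t counter) {
    char buf[32];
    std::snprintf(buf, sizeof(buf), "%012llx-%08x",
                  (unsigned long long)(mac48 & 0xFFFFFFFFFFFFull), counter);
    return std::string(buf);
}
```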

Logistical Issues in Editor and File Format Design

In closing, I want to discuss changes we made to the editor and file format, in order to make them compatible with the paradigm of concurrent editing.

Serializing Arrays

Just about every game wants to serialize arrays into files.  When transitioning from a binary format, the most natural thing for a performance-minded programmer is to write the array length out first, then write each member of the array.  That way, when reading the file back, we know how much memory to allocate, and we avoid all the slow things one has to do when reading an array of unknown length.  However, encoding arrays in this way results in a textual format that always conflicts, because even when people are making compatible edits to an array, nobody is going to agree on what that array count should be; also, it becomes impossible to manually edit conflicts without typing the count in: if two people add elements to an array, the merged array will contain all the items, but neither version of the file will contain the correct count!

To solve this, of course, we go to the common text-file convention of using a syntactic designation that an array is now over, rather than pre-declaring how many elements there are.  This is obvious, but I mention it here because it is a relatively universal concern.

Automatic Changes

You will also want to minimize or work around cases where the editor automatically changes entity properties without user intervention, because often these will lead to conflicts.  A simple example from our game world: the sun is represented by a few entities that are instantiated in the world, a glowing disc and a couple of particle systems.  Every frame, the gameplay code positions these entities with respect to the current camera transform so that, when the player moves around the world, the sun will seem to be infinitely far away.  This happens in the editor too, because when editing we want to see the world exactly the way it will look in-game.  So, as soon as we enabled concurrent editing, everyone was checking in different coordinates for the sun (depending on where the camera happened to be when they saved!) and this caused a lot of conflicts.  My first solution was to add a hack that zeroes the position of sun entities when saving, but this interacted badly, upon loading, with the way particle systems interpolate their positions.  Instead, I added an entity flag that signifies that an entity’s pose doesn’t matter, with respect to thinking about whether it has changed or not, so we will only modify the sun’s .entity_text files if some more-important properties are modified.  (This article is already long, but I’ll say that in general, we detect differences between an entity’s current state and its state when it was loaded, and we only write out .entity_text files when properties have changed.)

Relative Transforms

In our engine, all entities have ‘position’ and ‘orientation’ properties, which always represent the entity’s frame in worldspace, flat and non-hierarchically.  If we want an entity to be attached to another entity’s transform, we set the properties ‘mount_parent_id’, ‘mount_position’, and ‘mount_orientation’ (you can see all these properties in the listing for Door #6555 above); once per frame, the game engine computes the entity’s new worldspace transform based on its mount information, properly handling long dependency chains.  Once we started using textual entity files, we found that entities’ worldspace coordinates would often differ by an LSB or two, even when their parents had not moved (differences especially happening, for example, between Debug and Release builds of the game).  Even without concurrent editing, it’s good to clean this up as a matter of basic hygiene, because otherwise you’re checking in spurious changes all the time, increasing load on the source control system and making it harder for a human to see useful information in the change list.  I chose to solve this problem by making the mount-update code more complex: it detects whether an entity has moved of its own accord, and only updates child entities’ worldspace positions if the parent entity really moved (or if the child entity really moved!).
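The heart of that check can be sketched as follows (a hypothetical helper; the real code tracks this per-entity against state recorded at load time):

```cpp
#include <cstring>

struct Vector3 { float x, y, z; };

// Hypothetical sketch of the guard: compare the current pose against the
// pose recorded at load (or at the last mount update) with exact, bitwise
// equality. No epsilon is used -- the goal is to skip rewrites when nothing
// changed at all, so LSB-level float drift between builds never propagates
// down the mount chain to child entities.
bool pose_really_changed(const Vector3 &current, const Vector3 &recorded) {
    return std::memcmp(&current, &recorded, sizeof(Vector3)) != 0;
}
```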

Terrain Editing

There is one piece of editing workflow that used to make sense before concurrent editing, but which now does not.  We have a terrain system wherein we can move control points around in worldspace, then select a terrain entity, which has a height-field mesh, and press a key to recompute that height field with respect to the new control points.  I originally made this recomputation happen only on manually-selected entities, rather than automatically on the entire world, for UI smoothness purposes: the recomputation takes some time.  But when you only recompute one block of terrain and not its neighbors, you get discontinuities in the terrain; the longer things go on, the more discontinuities you get.  Once in a while I would fix this by selecting all terrain entities in the world, recomputing them simultaneously.

The catch is that when terrain changes, it affects other entities.  We re-plant grass, so that the grass matches the new elevations; we change the positions of trees and rocks and other objects that are rooted to the terrain at a predetermined height.  Whenever some terrain is recomputed, we collect all grass/trees/rocks within a conservative bounding radius, as these are all the things that might possibly change.  When dealing with small groups of terrain blocks, this already is a bit of a recipe for trouble (it is not too hard to overlap someone else’s edits this way); but it also means that if I recompute the entire terrain to eliminate those discontinuities, I am in turn editing all grass, trees, and rocks in the world, and will inevitably conflict with anyone else who has moved any of them.  For now, we simply shy away from this kind of global operation; when you get a conflict in some entity that you probably didn’t care about, like grass, it is easy just to accept the remote version of the file.  Long-term, I expect some kind of heavier-weight solution would have to happen here, though I hope it does not involve much locking.

Issuing Source Control Commands

Our editor automatically issues source control commands when entities are created or destroyed, because it would be extremely tedious and error-prone to expect users to do this.  Initially we had some problems with source control commands unexpectedly failing, which quickly leads to chaos.  After some work, issuing source control commands seems to work with full consistency, but we are still a little wary of it; so we don’t, for example, want to issue the commands via a background process, adding more complexity to the system.  The problem with this is that svn commands are a bit slow.  Sometimes when I have been editing the world for a while, it can take 15 to 30 seconds to save my changes, which would have happened instantaneously without the svn commands.

We also have a tool compatibility constraint.  Users often issue svn commands using the TortoiseSVN UI, but the editor needs to issue command lines.  Tortoise does not come with command-line binaries, so we use SlikSVN for that.  The versions of these two packages must be kept in sync; we have run into client-side corruption problems when using packages that were compiled with different versions of svn.  This means that if a user gets uppity and upgrades one of these packages, there is the potential to break something.  This is made more annoying by the fact that Tortoise performs a software update check and badgers you if a newer version has been released, and there is just about always a newer version. (Though thankfully around svn 1.7 time, they changed this from a pop-up box to a less-obnoxious red text link in the commit dialog.)  (As of svn 1.7, Tortoise now supplies command-line tools.)


We’ve put together a system that lets us concurrently edit an open world, using source control to manage our world edits.  It works well enough for us to ship the game, so I am happy with it.  If the scheme ever fails in some way, or requires heavy modification, I will post an update!

(This article was crossposted from the development blog for The Witness.)

Yes, it has been done before….

Original Author: Jason Harris

I am well aware that many posts have been made regarding usability.  Unfortunately, no matter how much we talk about it, you can easily find bad examples of it everywhere.  I am currently using the WordPress Android app, and that is what prompted me to post on this subject.

The app itself is a great idea that can be very useful to many people.  When I found this app on the marketplace, I felt that I had found my saving grace to help my blogging career.



This was me when I found this app. Installation carried the typical android ease, but once I tried to log in things went downhill.


I literally spent about 4 hours trying to understand the log-in interface, and while there was some idiocy on my part, the integration of hosted and non-hosted web sites could be a lot easier.  For example, why is it not explained in plain language that you need to choose a separate option to log in if your website is not hosted on WordPress.com?


The same issue applies to our games. There are many times that I hear people say that they do not like to play a game because it is too cumbersome.

I hate to keep picking on free-to-play games, but Forsaken World is a perfect example.  The story lines and overall game design are excellent, but the experience, in my opinion, is completely ruined by the controls and UI.  Many gamers I talk to will only play games made by Perfect World because they are on a low income but still want their MMO fix.

Then there is the ill-fated Lair for the PS3.  This visually enthralling piece of art, with its awesome storyline, was doomed from the start.  Why, you may ask, if the game was so artfully inked and written, would it be doomed to failure?  Two words: Six Axis.  The overuse of a cumbersome and, at the time, unfamiliar controller gimmick is the reason this great game was such a failure.  While flying, the dragon tilts and turns according to how parallel the controller is relative to the gravitational pull beneath you.  (So if your room is slightly slanted, level with the floor is wrong.)  Then there are the tricky wrist movements and quick motions that must be made in order to complete actions in flight.  The only one I was ever able to do successfully and repeatedly was the “looping reverse”, where you swoop the controller down and up in a forward motion, and the dragon does a half loop-de-loop to quickly go back in the direction he just came from.  Overall, I firmly believe that if this game had more support from the non-Six Axis parts of the controller, the usability of this title would have been much better, and the sales much more fiscally productive…. not to mention that the players would have been overjoyed.  This game was almost the answer to every dragon fanatic’s prayers.  Too bad the usability team dropped the ball.


This is not the way it should be!  Great games deserve recognition.  To get that recognition, the players need to be able to enjoy them.  Constantly having to adjust camera angles and control maps does not allow for good game play.  Imagine that you are playing a 3rd person shooter.  You are online with your friends, using the game as a contest.  The person with the lowest kdr (kill to death ratio) at the end of three matches is the poor schmuck who has to buy dinner and beer for the whole crew at Usability Sucks Workshop ™.  You are using the standard shotgun and grenade load-out to get the most “bang” for your buck.  In the middle of the match, you realize you are having issues with the camera, because you cannot lock the camera to follow.  You also realize that the camera controls are mapped to keys that are too close to the movement and attack keys, so while fighting, your view (and targeting reticule) is jerked away from the target.  You also cannot remap these keys, because your team wanted to ensure everyone had the same (miserable) experience with your product.  In the end, because of the usability issues with this game, not to mention the horrible UI that gave you a migraine while playing (due to poor layout, color scheme and small font size), you end up having to pay the bill for the team’s party, but do so from your bed while nursing that migraine.

This, naturally, will lead to poor customer response and reviews of the game, and most likely, if shipped in this poor state, will cause Usability Sucks Workshop ™ to go out of business not long after launching their flagship title.  I mean, who wants to play or recommend a title that causes headaches and nausea?

As a group, the gaming industry needs to take a stand on usability. Bring in the most non-game savvy people available for play testing. If they can get the hang of it, then our job is done!



Please, support usability, and remember, bad games make baby Jesus cry. Don’t make Jesus cry on his birthday!

Happy Holidays!

Mathematics for Game Developers 6: Primes

Original Author: John-McKenna

Prime numbers

Today I’m going to be talking about Number Theory.  The subject of number theory is the positive integers 1, 2, 3, …, so (just for today) when I say “number”, those are the ones I mean.

Most people know what a prime number is, even the ones who run with terror at the thought of mathematics.  A prime is simply a number that has exactly two divisors – 1 and itself.  It is important to note that this definition excludes 1 from the primes, since it has only one divisor.

You can work out by hand that the first few primes are 2, 3, 5, 7, 11, 13, 17, 19, 23, 29.  If you keep going, it becomes apparent that they start out fairly common (nearly half of the numbers from 1 to 10), and become increasingly uncommon (from 21 to 30 there are only 2).  So an obvious question presents itself: do they eventually stop appearing altogether?

The answer is no.  The classic proof appears in Euclid’s Elements (Book IX, proposition 20).  A modern version of it goes as follows:

Take any finite set of prime numbers $latex p_1, p_2, p_3, \ldots, p_r$ and consider $latex n = p_1p_2p_3\ldots p_r + 1$.  This number is not divisible by any of the primes in the set, because they all leave a remainder of 1 on division.  But all positive integers greater than 1 are divisible by at least one prime (the number itself, if it happens to be prime).  So there must be a prime that is not in our set.  And we can repeat this process, adding the new primes to the set and then finding more, indefinitely.  That means there are infinitely many primes.

That’s a perfectly fine proof, but it isn’t the one I wanted to show you.  But if you’re going to understand that proof, I’m going to have to introduce you to a little group theory.


One thing that mathematicians like to do is to notice similarities between apparently different things, and formalise them.  Then we can use our knowledge of one thing to gain some insight into the others.  Group theory is one of those tools.  It seems needlessly abstract on a first encounter, but all it is doing is creating a language for talking about a certain set of properties that many different mathematical objects have in common.

One example of a group is the set of integers under addition.  I’ll use it to clarify the definition.

So what is a group?  It’s so simple you’ll wonder how the subject could be interesting.  A group is a set coupled with a binary operation that obeys certain rules.  A binary operation is something like addition or division: it takes two elements from the set (the order may or may not matter, depending on the operator) and gives you a third.

That’s the first rule: the set must be closed under the operator.  To make things concrete, let’s call the set $latex G$ and the operator $latex \circ$ (and, in a bit of an abuse of notation, I’ll call the group $latex G$ as well.  It’s generally clear from context whether the set or the group is meant).  Then if $latex f$ and $latex g$ are any elements of $latex G$, $latex f \circ g$ must also be an element of $latex G$.  For our example group, if we add any two integers we get another integer, so the integers are closed under addition.

The next rule is that there is a special element of $latex G$, called the identity and conventionally given the symbol $latex e$, which has the property that if $latex f$ is any element of $latex G$, then $latex e \circ f = f \circ e = f$.  Zero plays this role in our example group: $latex 0 + f = f + 0 = f$ for any integer $latex f$.

Next, each element $latex f$ must have an inverse $latex f^{-1}$ that satisfies $latex f \circ f^{-1} = f^{-1} \circ f = e$.  Although I’ve written this with the same notation as exponentiation, it’s an entirely different thing (although there is a connection, which I might get to at another time).  For the integers, the inverse of $latex f$ is $latex -f$.  -2 is the inverse of 2, and so on.

Finally, the operator must be associative.  That means if $latex f$, $latex g$, and $latex h$ are elements of the set, then $latex (f \circ g) \circ h = f \circ (g \circ h)$.  You already know that is true for addition on the integers (and, indeed, on any other numbers).  (1+2)+8 = 3+8 = 11, and 1+(2+8) = 1+10 = 11.

And that’s it.  These four rules leave enough unsaid that you can find groups everywhere, but provide just enough structure that it’s possible to say interesting things about groups in general.  Anything that we can prove from only these rules will apply to anything that is a group, and that’s a very powerful tool.

If this was a proper introduction to group theory, I’d describe a few more examples of groups; symmetries and permutations are the traditional ones, and group theory is a useful tool for analysing Rubik’s cube.  That can come another time.  Right now, there’s still a fair way to go before we’re ready for the proof.  But it would be useful to look at one more example.

Finite groups

Groups can be classified in a number of ways.  One of the most important is whether a group is finite or infinite.  This is just whether the set has a finite or infinite number of elements.  The integers under addition is an example of an infinite group.  The integers $latex 0, 1, 2, \ldots, n-1$ under addition modulo n form a finite group.  Addition modulo n is just like normal addition, only you subtract n if the result is above n-1.  Hours are a familiar example: if it’s 9 o’clock now, then in 6 hours it will be 3 o’clock: $latex 9 +_{12} 6 = 3$ (the subscript 12 on the + is to show that it is addition modulo 12).

I claimed that this is a group.  It’s easy to check:

  1. Yes, it is closed.
  2. 0 is the identity.
  3. The inverse of r is 12-r, or 0 if r = 0.
  4. Addition is still associative.

This group is called $latex \mathbb{Z}_{12}$.

A multiplicative group

One more example of a group.  If $latex p$ is a prime, then the numbers $latex 1, 2, 3, \ldots, p-1$ form a group under multiplication modulo $latex p$ (that’s just like modular addition, but you might have to subtract a multiple of $latex p$ from the result to get it back to the right range).  If you’re feeling brave, you might like to prove that this is a group.  Or you can just take my word for it.  The only tricky bit is proving that every element has an inverse.

For example, we could take $latex p = 5$.  Then the set $latex \{1,2,3,4\}$ with the operator $latex \times_{5}$ is a group.  1 is the identity, and the inverses are $latex 1^{-1} = 1$, $latex 2^{-1} = 3$, $latex 3^{-1} = 2$, $latex 4^{-1} = 4$.  This group is sometimes called $latex \mathbb{Z}^*_5$.

The order of an element

And one more definition.  Consider the group $latex \mathbb{Z}_{12}$ again.  What happens if we repeatedly add 2 to itself?  It goes around in a circle: 2, 4, 6, 8, 10, 0, 2, … What about 5?  That goes 5, 10, 3, 8, 1, 6, 11, 4, 9, 2, 7, 0, 5, … If the sequence reaches the identity, as both of these do, then the number of steps it takes to get there is called the order of the element.  The order of 2 is 6, and the order of 5 is 12.  If the sequence never reaches the identity, then that element is said to have infinite order.

In the group $latex \mathbb{Z}^*_5$, we find that $latex 1^1 = 1$, $latex 2^4 = 1$, $latex 3^4 = 1$, and $latex 4^2 = 1$, so the orders of 1, 2, 3, 4 are 1, 4, 4, 2 respectively (this time, $latex a^b$ means exponentiation modulo 5).

A property of the order of an element which will be useful in our proof is that every position at which the identity appears in the sequence is a multiple of the order.  In other words, if $latex a^n$ is the identity, then the order of $latex a$ divides $latex n$.

And another property, which is important enough to have been given the name Lagrange’s theorem, is that in a finite group, the order of each element is a divisor of the number of elements in the group.  You can see this in the previous examples: $latex \mathbb{Z}_{12}$ has 12 elements, and the orders we saw for 2 and 5 are 6 and 12, both divisors of 12.  And in $latex \mathbb{Z}^*_5$, with 4 elements, the orders 1, 4, 4, 2 are all divisors of 4.
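Lagrange’s theorem can be spot-checked for both example groups without writing out every sequence by hand.  A Python sketch (again my own helpers, defined directly from the "smallest power that reaches the identity" idea):

```python
# Order in Z_n (additive): smallest k >= 1 with k*a = 0 (mod n).
def additive_order(a, n):
    return next(k for k in range(1, n + 1) if (k * a) % n == 0)

# Order in Z*_p (multiplicative): smallest k >= 1 with a^k = 1 (mod p).
def multiplicative_order(a, p):
    return next(k for k in range(1, p) if pow(a, k, p) == 1)

# Lagrange's theorem: every order divides the size of the group.
assert all(12 % additive_order(a, 12) == 0 for a in range(12))
assert all(4 % multiplicative_order(a, 5) == 0 for a in [1, 2, 3, 4])

print([additive_order(a, 12) for a in range(12)])
# [1, 12, 6, 4, 3, 12, 2, 12, 3, 4, 6, 12]
```

Of course a check for two small groups is not a proof, just reassurance that the statement says what we think it says.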

The proofs of these properties are fairly simple, and maybe I’ll present them another time.  But for now…

The proof

I think we’re finally ready.

This will be a proof by contradiction.  We will assume that there are finitely many primes, then derive a contradiction from this.  That will mean the original assumption is false, and there are infinitely many primes.

So suppose that there are finitely many primes.  Then there must be a largest prime, which we will call $latex p$.  Now, consider the number $latex 2^p - 1$.  It will be divisible by at least one prime (possibly itself).  We choose any one of them and call it $latex q$.

Since $latex 2^p-1$ is divisible by $latex q$, $latex 2^p$ must give a remainder of 1 when divided by $latex q$.  Now, 1 is the identity of the group $latex \mathbb{Z}^*_q$ (and 2 is an element of it: $latex q$ divides the odd number $latex 2^p - 1$, so $latex q$ is odd and at least 3).  By the first property of orders above, the order of 2 in this group divides $latex p$.  And since $latex p$ is a prime, the order is either 1 or $latex p$; it isn’t 1, because 2 is not congruent to 1 modulo $latex q$, so it must be $latex p$.

Now we apply Lagrange’s theorem.  $latex \mathbb{Z}^*_q$ has $latex q-1$ elements, and one of those elements has order $latex p$.  That means $latex p$ is a divisor of $latex q-1$, so $latex q - 1 \ge p$, which means $latex q$ is greater than $latex p$.

But q is a prime, and we’ve just proved that it’s larger than the largest prime.  This is the contradiction we were after.  The original assumption must be false, and we conclude that there are infinitely many primes.
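You can watch the machinery of the proof turn over in a small concrete case.  Pretend for a moment that $latex p = 13$ were the largest prime; then $latex 2^{13} - 1 = 8191$, which happens to be prime itself, so $latex q = 8191$.  A Python sketch (my own illustration, not part of the proof):

```python
p = 13            # pretend this is the "largest prime"
m = 2**p - 1      # 8191, which happens to be prime itself

# q: a prime divisor of 2^p - 1 (the smallest divisor above 1 is prime).
q = next(d for d in range(2, m + 1) if m % d == 0)

# 2^p leaves remainder 1 modulo q, and the order of 2 in Z*_q is exactly p.
assert pow(2, p, q) == 1
order_of_2 = next(k for k in range(1, q) if pow(2, k, q) == 1)
print(q, order_of_2)  # 8191 13

# Lagrange: p divides q - 1, forcing q > p.
assert (q - 1) % order_of_2 == 0
assert q > p
```

The contradiction in the real proof is just this last line: a prime $latex q$ strictly larger than the supposedly largest prime $latex p$.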


Ah.  And now WordPress has decided that it doesn’t like me enough to show me a preview of this post.  Please forgive the inevitable errors.

It’s ready when it’s ready, dammit!

Original Author: Kyle-Kulyk

“…It’s a mobile game titled Itzy, and we’re excitedly working toward an early November launch.”

Ah.  One of the many self-imposed deadlines that made almost a sighing sound as they whooshed by.  I guess that’s something to be thankful for.  As an indie, no one is breathing down our necks to meet a deadline, pounding on our desks and demanding that we make games faster, especially not our ever-understanding and loving wives.  It’s a good thing too, because any progress goals we set ended up being absolutely meaningless.  The truth of the matter is, as we put the finishing touches on our first title, we really had no idea how long anything was going to take us.  We thought we did, but we didn’t.

I remember not that long ago (has it really been 8 months?) sitting down with the group, putting together a task list and estimating how long each task would take.  We then threw the entire list up on Accunote as we all started tracking our progress, making sure we weren’t stepping on each other’s toes (or scripts) and away we went!  We had roughly 100 tasks to complete before we could launch the game and I was optimistically shooting for a late Aug, early Sept launch.

Now, I know what you might be thinking.  Scope creep or laziness.  We’re indies, after all.  If we weren’t lazy, we’d be employed, so we must be lazy.  To you I say “Nay!”  We’ve been working our asses off, and neither of these ended up being our particular hurdle.  As dangerous as scope creep is, we didn’t fall victim to it.  We’re now 8 months into our game’s development, having completed 100 tasks on top of the 100 we originally thought were all we needed to finish the game.  I was asked after a presentation how our team handled scope creep.  Right from the word go we had a set vision for Itzy3D, a set list of gameplay functions and add-ons, and we never strayed from what we stated we would include.  I had often read “Watch for scope creep”, “Scope creep, like the rhythm, is gonna get you”, “Scope creep ate my baby”.  There was no way we were about to be hindered by scope creep before we ever started.  So we knew pretty much exactly what functionality was going into the game from the start.

So why are we now 8 months into the game with none of us seeing a paycheck yet?  What happened to our late Aug, early Sept… no, Oct… no, early Nov… surely-to-god-in-time-for-Christmas launch?  Simple.  We didn’t know how long anything would take.  Basic tasks and simple concepts proved to be much more involved than even the experienced programmers on our team originally estimated.  My advice to other indies concerning timelines would be this:  Unless you’ve done this before, don’t expect to have any inkling of how long development is going to take.  You won’t know what challenges are going to pop up until they’re staring you down.  That doesn’t mean you shouldn’t set goals and targets, but when you miss them, make sure you learn from the experience.

There’s nothing funny about this.

I hope everyone has a Merry Christmas and a Happy New Year.  Itzy Interactive will have a completed game for you shortly.  Maybe next month some time.  Probably by the end of February.  You know what… it’ll be done when it’s done.