## Tips for Managing Multiple Coordinate Systems

Original Author: James Podesta

Most programmers with any experience in 3D will have dealt with Cartesian coordinates in what we would call world space. Graphics programmers will certainly have had a lot of experience with homogeneous coordinates as well as view space and screen space. Anyone dealing with animation or collision will have spent a lot of time in local space. All of these are different ways that a single absolute position in space can be represented. However, they are all taken at different stages of a pipeline and have very specific uses. Where I find it can get quite complicated is when you have multiple co-ordinate systems at the same level, which only serve to show the same absolute co-ordinates from a different mathematical perspective.

My latest contract has me working in flight simulator territory. This introduces geocentric coordinates as well as lon/lat/alt co-ordinates on top of the usual world space at the core of the rendering pipeline. Geocentric co-ordinates are basically a typical Cartesian system (X,Y,Z), but where (0,0,0) represents the centre of the planet and the Y axis points to the North Pole. Orientation also has several representations in the application. There is a geocentric representation, which is the transform from the geocentric identity, as well as the render world space orientation, which is the more traditional “local to world” transform in 3D applications. All these transformations have the potential to make any code very confusing indeed, as I soon discovered coming onto the project. I spent the next few weeks researching the co-ordinate systems and coming up with some modifications to the code base to make it readable to myself and others.

## Understand the co-ordinate systems you will be working in

The first thing you need to do is actually understand how the co-ordinate system works. For positional systems, this means understanding exactly what (0,0,0) represents (assuming a three-coordinate system, of course). In some coordinate systems (0,0,0) might not represent a static position in space. Be aware of which direction is positive on each of the coordinates.

Some coordinates may represent curves through space. For instance, in lat/lon/alt coordinate systems, latitude lines curve around the earth. This is significant if you are comparing distances between points in one space to distances between the same points in another space.

Understand the scale of the coordinates. Hopefully most Cartesian systems will be in metres, but I’ve seen a few in centimetres. For lat/lon/alt, it is good to have a feel for how many metres a degree of longitude is. I see this as part of the classic mantra of “know your data”.
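To get that feel, a back-of-the-envelope helper is enough. The sketch below assumes a spherical Earth with a radius of roughly 6,371 km; real flight-simulator code would use an ellipsoidal datum such as WGS84, so treat the numbers as ballpark figures only.

```cpp
#include <cmath>

// Rough metres-per-degree of longitude at a given latitude, assuming a
// spherical Earth. One degree of longitude spans 1/360th of the full
// circumference at the equator and shrinks with cos(latitude).
double MetresPerDegreeLongitude( double latitudeDegrees )
{
    const double kEarthRadiusMetres = 6371000.0;
    const double kPi = 3.14159265358979323846;
    double circumference = 2.0 * kPi * kEarthRadiusMetres
                         * std::cos( latitudeDegrees * kPi / 180.0 );
    return circumference / 360.0;
}
```

At the equator this comes out at roughly 111 km per degree, and it halves by 60 degrees of latitude, which is exactly the kind of intuition worth having before debugging lat/lon maths.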

For orientation systems, understand exactly what it means to give one of your game objects an identity orientation in this coordinate system. Which direction will it face? I like to visualize the x, y, z vectors of a matrix at identity and see which direction each axis represents with respect to an identity-oriented game object. Understand which direction pitch, yaw and roll travel in. Understand what a cross product will give in this coordinate system. It only takes a few experiments to get data on all these, and it’s generally not safe to assume that the maths libraries will do exactly as you expect if you didn’t write them yourself.

## Understand what each coordinate system’s strengths are

Each coordinate system is there for a reason, so learn what that reason is. If there is NO reason, then you shouldn’t be using that coordinate system and complicating your code unnecessarily. For my project, all our coordinate systems were mandatory due to other software we were interfacing with.

Each coordinate system is particularly good at certain maths, and weak at others. Geocentric is a nice simple Cartesian system in metres, but the concept of “up” varies with your location, so heightmap patches are expensive to apply. Longitude/Latitude/Altitude is convenient for heightmaps, since “up” is represented by “altitude” and 0 is at the surface of the planet. Its weaknesses are that it is in degrees (instead of metres) and that the first two coordinates represent curved space, making distance and angle calculations very inaccurate when the points are far apart.

## Write simple explicit conversion routines for going between coordinate systems

Invent a unique string identifier for each coordinate system. This allows you to have consistent transformation functions between each space. While implicit transformations would be convenient and easy to implement, it is important for code readability to see clearly when data is transformed. For my project I have three systems I need to consistently transform between: Geocentric (Geo), Lon/Lat/Alt (LLA) and local OpenGL coordinates (Local). When I came onto the project, there was no consistent and easy way to transform between coordinates. Some transformations used methods of a class, others used static functions and others were a composite of functions. Worse still, the naming of the methods had no obvious link to the concept that you were trying to transform from one space to another.

I’ve now wrapped all the various transformation methods with simple consistent helper functions.

```
vector3   PositionGeoToLocal( const vector3 &vectorGeo );
vectorLLA PositionGeoToLLA( const vector3 &vectorGeo );
vector3   PositionLLAToGeo( const vectorLLA &vectorLLA );
vector3   PositionLLAToLocal( const vectorLLA &vectorLLA );
vector3   PositionLocalToGeo( const vector3 &vectorLocal );
vectorLLA PositionLocalToLLA( const vector3 &vectorLocal );

quaternion OrientationGeoToLocal( const quaternion &orientGeo );
quaternion OrientationLocalToGeo( const quaternion &orientLocal );
```
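For illustration, here is a minimal sketch of what the Geo/LLA pair might look like, assuming a spherical planet with the Y axis through the North Pole as described earlier. The `vector3` and `vectorLLA` types are stand-ins, and a production flight simulator would use an ellipsoidal datum such as WGS84 rather than this spherical approximation.

```cpp
#include <cmath>

struct vector3   { double x, y, z; };          // metres from planet centre
struct vectorLLA { double lon, lat, alt; };    // degrees, degrees, metres

const double kPlanetRadius = 6371000.0;        // spherical-planet assumption
const double kRadToDeg = 57.29577951308232;

vectorLLA PositionGeoToLLA( const vector3 &vectorGeo )
{
    // Distance from the planet centre gives altitude; direction gives lon/lat.
    double r = std::sqrt( vectorGeo.x * vectorGeo.x
                        + vectorGeo.y * vectorGeo.y
                        + vectorGeo.z * vectorGeo.z );
    vectorLLA out;
    out.lat = std::asin( vectorGeo.y / r ) * kRadToDeg;   // Y points at the North Pole
    out.lon = std::atan2( vectorGeo.z, vectorGeo.x ) * kRadToDeg;
    out.alt = r - kPlanetRadius;
    return out;
}

vector3 PositionLLAToGeo( const vectorLLA &vLLA )
{
    double r = kPlanetRadius + vLLA.alt;
    double latRad = vLLA.lat / kRadToDeg;
    double lonRad = vLLA.lon / kRadToDeg;
    vector3 out;
    out.x = r * std::cos( latRad ) * std::cos( lonRad );
    out.y = r * std::sin( latRad );
    out.z = r * std::cos( latRad ) * std::sin( lonRad );
    return out;
}
```

A round trip through both functions should return the original lon/lat/alt, which is also a cheap sanity test to keep around when you replace the spherical maths with a proper datum.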

## Use a variable naming convention for your coordinates

Since you may have different coordinate systems interleaved through a single block of maths, it can help with code readability to use a naming convention that identifies the coordinate system you are using. My preferred method is a small postfix on the variables, but a prefix would work just as well.

Having different classes for each representation helps the compiler spot any misuse of variables but it doesn’t help the clarity of the code.
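A sketch of how the two ideas combine, with hypothetical type and function names: distinct types let the compiler reject a geocentric position passed where a local one is expected, while the Geo/Local suffixes keep the call sites readable.

```cpp
#include <cmath>

// Distinct wrapper types per coordinate system. Mixing them up is now a
// compile error rather than a silent bug.
struct PositionGeo   { double x, y, z; };
struct PositionLocal { double x, y, z; };

PositionLocal PositionGeoToLocal( const PositionGeo &positionGeo )
{
    // Stand-in transform: a real implementation would subtract the local
    // origin and rotate into the render frame.
    return PositionLocal{ positionGeo.x, positionGeo.y, positionGeo.z };
}

double DistanceLocal( const PositionLocal &aLocal, const PositionLocal &bLocal )
{
    double dx = aLocal.x - bLocal.x;
    double dy = aLocal.y - bLocal.y;
    double dz = aLocal.z - bLocal.z;
    return std::sqrt( dx * dx + dy * dy + dz * dz );
}

// DistanceLocal( somePositionGeo, ... ) now fails to compile instead of
// quietly measuring a distance across two different spaces.
```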

## Converting Velocities Safely

Velocities and directions are a little trickier to convert, since they can depend on the position they are taken from. The simple trick here is to convert the current position, and also convert the current position plus the velocity. The difference between those two converted positions gives you the converted velocity.

```
vectorLLA positionLLA = PositionGeoToLLA( positionGeo );
vectorLLA velocityLLA = PositionGeoToLLA( positionGeo + velocityGeo ) - positionLLA;
```

It is important to realise here that applying the same velocity in LLA space will not give you the same results as applying it in a Cartesian coordinate system.

## Other interesting uses of curved space

Something I’ve done on a couple of projects now is to use a 2D coordinate system that represents curved space. When coordinates are transformed into their Cartesian counterpart for rendering, space is effectively warped, giving you a more interesting world to move in. For instance, a simple old-school 2D game can bend around 3D space without complicating the actual movement or physics maths, since those are done as simple x,y maths in curved space.

On my current game, Magelore, I use a curved coordinate system to add some noise to what would have been a very square-edged tile system. All game code is done in a simple x,y grid, but when transforming from grid space to Cartesian space, it all gets warped somewhat.
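As a toy illustration of the idea (not Magelore’s actual code), a grid-to-world transform might add a bounded offset on top of a plain tile scale, here faked with sine terms standing in for a real noise function:

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Toy grid-space to Cartesian-space warp: game logic sees clean grid
// coordinates, rendering sees a wobbled version. The sine terms are a
// stand-in for whatever noise function the game actually uses, and the
// wobble amplitude bounds how far a point can drift from its tile.
Vec2 GridToCartesian( double gridX, double gridY, double tileSize, double wobble )
{
    Vec2 out;
    out.x = gridX * tileSize + wobble * std::sin( gridY * 12.9898 );
    out.y = gridY * tileSize + wobble * std::sin( gridX * 78.233 );
    return out;
}
```

With `wobble` set to zero this degenerates to a plain tile grid, which makes it easy to toggle the effect off while debugging gameplay code.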

Original Author: Wolfgang Engel

A shadow system in a modern game needs to be able to mimic a wide range of shadows. The following text describes a shadow system that was used in the RawK® demo that is tailored to Intel’s Sandy Bridge chipset [RawK].

This demo prototypes the characteristics of an open-world game when it comes to indoor shadow rendering. In an open-world game where the viewer can go inside buildings as well as stay outside, there might be shadows for:

• Cloud shadows, most of the time clouds just projected down
• Self-shadowing for the main character or more characters: these are optional shadows with their own frustum that just cover character bodies close to the camera
• Shadows from a directional light such as the sun
• Shadows from point, spot and other light types

For the first three types of shadows one might consider a shadow collector that collects the shadow data of all three types in a screen-space texture, that is then filtered and applied to the scene.

Shadows from point, spot and other light types might be cached. Trading memory against the effort of updating shadow maps makes sense on some platforms. The following text will focus on shadows coming from ellipsoidal and point lights but similar thoughts apply for light types other than directional lights.

Developing a shadow system for those light types usually means facing the following challenges:

1. Rendering efficiently into a cube shadow map
2. Caching shadow maps to avoid redundant updates
3. Avoiding shadow acne without a hand-tuned bias
4. Softening the Penumbra

For point lights and similar light types, the favorite storage method is a cube texture map. Compared to its main competitor, the dual-paraboloid shadow map, it offers a more even error distribution. The hemispheric projection of dual-paraboloid shadow maps requires a high level of tessellation that might not be common in a game where normal maps mimic the finer details.

Rendering into a cube shadow map can be done in one draw call on a DirectX 10 (and above) capable graphics card with the help of the geometry shader. The performance of the geometry shader on some graphics cards is not as good as one would expect. In those cases it helps to move some of the calculations from the geometry shader into the vertex shader. The inner loop of a typical geometry shader used to render into a cube map might look like this:

```
// Loop over cube faces
[unroll]
for (int i = 0; i < 6; i++)
{
    // Translate the view projection matrix to the position of the light
    float4x4 pViewProjArray = viewProjArray[i];

    //
    // translate
    //
    // access the row HLSL[row][column]
    pViewProjArray[0].w += dot(pViewProjArray[0].xyz, -In[0].lightpos.xyz);
    pViewProjArray[1].w += dot(pViewProjArray[1].xyz, -In[0].lightpos.xyz);
    pViewProjArray[2].w += dot(pViewProjArray[2].xyz, -In[0].lightpos.xyz);
    pViewProjArray[3].w += dot(pViewProjArray[3].xyz, -In[0].lightpos.xyz);

    float4 pos[3];
    pos[0] = mul(pViewProjArray, float4(In[0].position.xyz, 1.0));
    pos[1] = mul(pViewProjArray, float4(In[1].position.xyz, 1.0));
    pos[2] = mul(pViewProjArray, float4(In[2].position.xyz, 1.0));

    // Use frustum culling to improve performance
    float4 t0 = saturate(pos[0].xyxy * float4(-1, -1, 1, 1) - pos[0].w);
    float4 t1 = saturate(pos[1].xyxy * float4(-1, -1, 1, 1) - pos[1].w);
    float4 t2 = saturate(pos[2].xyxy * float4(-1, -1, 1, 1) - pos[2].w);
    float4 t = t0 * t1 * t2;

    [branch]
    if (!any(t))
    {
        // Use backface culling to improve performance
        float2 d0 = pos[1].xy * pos[0].w - pos[0].xy * pos[1].w;
        float2 d1 = pos[2].xy * pos[0].w - pos[0].xy * pos[2].w;

        [branch]
        if (d1.x * d0.y > d0.x * d1.y || min(min(pos[0].w, pos[1].w), pos[2].w) < 0.0)
        {
            Out.face = i;

            [unroll]
            for (int k = 0; k < 3; k++)
            {
                Out.position = pos[k];
                Stream.Append(Out);
            }
            Stream.RestartStrip();
        }
    }
}
```

To relieve the workload of the geometry shader, the offset and transformation code was moved into the vertex shader:

```
[Vertex shader]

float4x4 viewProjArray[6];
float3 LightPos;

GsIn main(VsIn In)
{
    GsIn Out;

    float3 position = In.position - LightPos;

    [unroll]
    for (int i = 0; i < 3; ++i)
    {
        Out.position[i] = mul(viewProjArray[i * 2], float4(position.xyz, 1.0));
        Out.extraZ[i] = mul(viewProjArray[i * 2 + 1], float4(position.xyz, 1.0)).z;
    }

    return Out;
}

//------------------------------------------------------------------------------
[Geometry shader]

#define POSITIVE_X 0
#define NEGATIVE_X 1
#define POSITIVE_Y 2
#define NEGATIVE_Y 3
#define POSITIVE_Z 4
#define NEGATIVE_Z 5

float4 UnpackPositionForFace(GsIn data, int face)
{
    float4 res = data.position[face / 2];

    [flatten]
    if (face % 2)
    {
        res.w = -res.w;
        res.z = data.extraZ[face / 2];
        [flatten]
        if (face == NEGATIVE_Y)
            res.y = -res.y;
        else
            res.x = -res.x;
    }
    return res;
}

[maxvertexcount(18)]
void main(triangle GsIn In[3], inout TriangleStream<PsIn> Stream)
{
    PsIn Out;

    // Loop over cube faces
    [unroll]
    for (int i = 0; i < 6; i++)
    {
        float4 pos[3];
        pos[0] = UnpackPositionForFace(In[0], i);
        pos[1] = UnpackPositionForFace(In[1], i);
        pos[2] = UnpackPositionForFace(In[2], i);

        // Use frustum culling to improve performance
        float4 t0 = saturate(pos[0].xyxy * float4(-1, -1, 1, 1) - pos[0].w);
        float4 t1 = saturate(pos[1].xyxy * float4(-1, -1, 1, 1) - pos[1].w);
        float4 t2 = saturate(pos[2].xyxy * float4(-1, -1, 1, 1) - pos[2].w);
        float4 t = t0 * t1 * t2;

        [branch]
        if (!any(t))
        {
            // Use backface culling to improve performance
            float2 d0 = pos[1].xy * pos[0].w - pos[0].xy * pos[1].w;
            float2 d1 = pos[2].xy * pos[0].w - pos[0].xy * pos[2].w;

            [branch]
            if (d1.x * d0.y > d0.x * d1.y || min(min(pos[0].w, pos[1].w), pos[2].w) < 0.0)
            {
                Out.face = i;

                [unroll]
                for (int k = 0; k < 3; k++)
                {
                    Out.position = pos[k];
                    Stream.Append(Out);
                }
                Stream.RestartStrip();
            }
        }
    }
}
```

Cube shadow maps are not only useful to store point light shadows but shadows from other light types as well. For example shadows from ellipsoidal lights, where each of the directions has its own attenuation value, can be stored in cube maps as well.

Image 1: Ellipsoid Lighting

Image 2: 8 Ellipsoidal Light Shadow Maps

Image 3: Ellipsoid Lighting

Depending on the amount of memory that is available on the platform, caching 16-bit depth cube shadow maps might become an option. For example, integrated graphics chips usually share memory with the CPU and might have a higher amount of (though usually slower) memory available. Storing, for example, 100 256x256x6 16-bit cube shadow maps takes about 75 MB.
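The arithmetic behind that figure is simple; a helper like this (hypothetical, purely for illustration) makes the cache budget explicit:

```cpp
#include <cstddef>

// Bytes needed to cache a set of square cube shadow maps:
// numMaps maps * 6 faces * faceSize^2 texels * bytes per texel.
std::size_t CubeShadowCacheBytes( std::size_t numMaps,
                                  std::size_t faceSize,
                                  std::size_t bytesPerTexel )
{
    return numMaps * 6 * faceSize * faceSize * bytesPerTexel;
}
```

100 maps at 256x256x6 with 2 bytes per 16-bit depth texel is 78,643,200 bytes, i.e. exactly 75 MB.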

To find a good caching algorithm, the following parameters might be useful:

• Distance from shadow to camera
• Size of shadow on screen
• Is there a moving object in the area of the light / shadow?

Of those parameters and others, the question whether anything moves in the area of influence of the light / shadow is certainly the most important one. As long as nothing moves or changes in the area of the light, an update of the shadow map is not necessary and the shadow data can stay unaltered in memory.

Even if something is moving in the area of influence of the light, an update of the shadow map might not be necessary if the shadow is not easily visible from the point of view of the player. If the shadow is far away and it is hard to spot that an object is moving through the shadow, it would make sense to not update the map and to keep it cached.

The question whether a light with a very small visible shadow area on screen needs to be updated follows a similar logic.

If there is not enough memory available, caching might be restricted by distance, and maps are then moved in and out of the cache.

The classical shadow mapping algorithm generates a binary value based on a comparison. Because this comparison relies on hardware precision, it is prone to generate slight errors in edge cases.

In the case of a regular 2D shadow map, the usual solution is to introduce a shadow bias value. Commonly this value needs to be picked by the user, which makes it scene dependent. In the case of cube shadow maps that are attached to a moving light, there is no sensible way to pick a working value.

Approximating the binary comparison with an exponential function will lead to better overall results [Salvi].

Image 7: Exponential Shadow Mapping Function

```
float depth = tex2D(ShadowSampler, pos.xy).x;
shadow = saturate(2.0 - exp((pos.z - depth) * k));
```
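To see why this behaves well, here is a scalar C++ mock-up of the HLSL above (the HLSL is the authoritative version; `k` is a sharpness constant that would be tuned for the depth range in use). Receivers at or in front of the stored depth come out fully lit, receivers far behind it come out fully shadowed, and receivers just behind it fade smoothly instead of flickering on hardware precision:

```cpp
#include <algorithm>
#include <cmath>

// Scalar stand-in for: shadow = saturate(2.0 - exp((pos.z - depth) * k));
// The hard step(depth, receiverZ) comparison of classic shadow mapping is
// replaced by a saturated exponential, so the transition is continuous.
float ExponentialShadow( float receiverZ, float storedDepth, float k )
{
    float s = 2.0f - std::exp( ( receiverZ - storedDepth ) * k );
    return std::min( 1.0f, std::max( 0.0f, s ) ); // saturate: 1 = fully lit
}
```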

### Softening the Penumbra

There are many approaches that cover the softening of the penumbra. Certainly all the probability-based shadow filtering techniques that can leverage hardware filtering have a very good quality / performance ratio.

Screen-space filtering to achieve perceptually correct cube shadow maps is an area where game developers have only just started to do research. An implementation is described in [Engel].

Image 8: 16 Screen-Space Soft Point Light Shadows

Image 9: 32 Screen-Space Soft Point Light Shadows

### Future Development

Game developers try to move away from pre-calculated lighting and shadowing and any other pre-calculated data. The main reasons to do this are:

• it is hard to mimic a 24-hour cycle
• storing light or radiosity maps on disk or even on the DVD / Blu-ray requires a lot of memory
• streaming the light or radiosity maps from disc or hard drive through to the GPU consumes valuable memory bandwidth
• geometry with light maps or radiosity maps is no longer destructible (a reason to avoid any solution with pre-computed maps)
• while the environment is lit nicely, it is hard to light characters consistently with that environment

A shadow caching scheme might be one tool to remove pre-calculated data. Following the recent development in dynamic global illumination in the area of one-bounce lighting effects [Dachsbacher][DachsbacherSI][Kaplanyan], it is possible to store not only shadow data but also data for reflective shadow maps in cube maps. All the ideas mentioned above then apply to this approach. One question that remains is whether it is best to cache the data in cube shadow maps, or to use a memory area with higher density, like a Light Propagation Volume.

In any case temporal coherence can be used to improve shadows and global illumination data over time.

### Acknowledgements

I want to thank my business partner Peter Santoki for his help, feedback and encouragement while implementing the ideas covered above. I would also like to thank Tim Martin for his help in researching the general topic of cube shadow map rendering, and Igor Lobanchikov for the cube map optimization trick.

### References

[Engel] Wolfgang Engel, “Massive Screen-Space Soft Point Light Shadows”

[Dachsbacher] Carsten Dachsbacher, Marc Stamminger, “Reflective Shadow Maps”

[DachsbacherSI] Carsten Dachsbacher, Marc Stamminger, “Splatting Indirect Illumination”

[Kaplanyan] Anton Kaplanyan, Wolfgang Engel, Carsten Dachsbacher, “Diffuse Global Illumination with Temporally Coherent Light Propagation Volumes”, pp. 185-203, AK Peters, 2011

[Salvi] Marco Salvi’s website

## FPStress

Original Author: Steven Tovey

This post is going to be much shorter, as I’ve sadly been very pressed for time these last couple of weeks and as a result have been unable to finish the other (longer) post I had originally planned to the high standard that you all deserve. This post talks about measuring stuff in frames per second and why you shouldn’t do it.

Many people who haven’t got any experience in profiling and optimising a game will approach performance measurements in a frankly scary way. I’m talking of course about measuring performance in terms of frame rate. Of course it’s unfair to say the games industry never mentions ‘frame rate’, we do! We have target frame rates for our games, which are usually 60Hz or 30Hz (increasingly the latter), but for anything beyond this I think it’s fair to say that most sane people take the leap into a different, more logical and intuitive space. It’s called frame time and it’s measured in, you guessed it, units of time! Typical units of time used include ms, µs or even ns. The two have a relationship which allows you to convert between them, given by t=1/r.

So why am I concerned about this if it’s trivial to convert between them? Well, the reason is that measuring anything in terms of frame rate suffers from some big flaws which measuring in frame time does not, and it can lead to problems with your view of the situation:

1. Traditional frame rate measurements are often dependent on synchronisation with a hardware-driven event such as the vertical reset on your display device. If you’ve locked your application to this vertical sync to avoid frame tearing, then no matter how fast your application is going it will always appear to run at, say, 60fps. If your application is straddling the border of 60fps and locked to vsync, this can give rise to occasions where small fluctuations in the amount of time something takes make the game appear to run either twice as quickly or twice as slowly!

2. Frame rate can be very deceptive! To show what I mean let’s use an example that crops up time and time again among beginners, it goes as follows:

Coder A: “Check this shit out, my game is running at 2000fps! I’m such a ninja!”

Coder B: “Nice one chief, but you’re accidentally frustum culling the player there…”

Coder A: “Ah shit, okay… Let me fix that… Sorted!”
Coder A hits F5 and looks on truly horrified at the frame rate counter.

Coder B: “Heh! Not so ninja now are you? Running at 300fps!”
Coder A slopes off looking dejected.

What happened here? Apparently performance has dropped drastically by a whopping 1700 frames per second to a mere three hundred! However, if this pair of idiots worked in a set of sensible units they’d instantly see that it’s only taking around 2.8ms to render that player mesh. Small drops in frame rate in an application with an already low frame rate can point to big performance problems, whereas large drops in an application with a very big frame rate are usually not much to worry about.
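The conversion the pair should have used is one line. Note the numbers: 2000fps is 0.5ms and 300fps is about 3.33ms of frame time, so the player mesh costs less than 3ms:

```cpp
// t = 1/r from the text, expressed in milliseconds of frame time.
double FrameTimeMs( double framesPerSecond )
{
    return 1000.0 / framesPerSecond;
}
```

A drop from 2000fps to 300fps is a cost of roughly 2.8ms, while a drop from 60fps to 30fps, a mere 30 frames per second, is a whopping 16.7ms.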

3. And perhaps the most compelling reason, which should give any remaining doubters a firm nudge into reality: all the wonderful profiling tools out there work in time, not rate.

The Stig (among others, such as Hulk Hogan and Austin Powers) was reserved for anyone that broke the build. In honour of Bizarre, which sadly closed last Friday, I offer these final Stig-inspired words to bring this post to a close: next time you even contemplate telling your colleague that your code runs at a particular frame rate, remember that what you’re saying is more backwards than Jeremy Clarkson telling you that the Stig got the Ariel Atom 500 round the Top Gear track at 37.54993342 metres per second!

This post is reproduced over at my personal blog, which you can find here.

## WebGL – Part 2: In the beginning there was…

Original Author: Savas Ziplies

…a Triangle! In the previous part I presented WebGL, a new competitor in the graphics programming market. You got to know some history and parallel branches in the history of 3D graphics in the browser environment. Now, in this part you will see how to get a WebGL application into your browser.

### The beginning

Last time was historical; this time we will get practical. In this part we will write one HTML page with embedded JavaScript code to draw a simple triangle through WebGL into an HTML5 Canvas. For that we will see how to set up a canvas, create a WebGL context, embed shaders and create a draw loop to present our triangle. The shaders in particular are an important topic: WebGL is based on OpenGL ES 2.0, and OpenGL ES 2.0 does not support the fixed-function transformation and fragment pipeline of OpenGL ES 1.x. Therefore, everything has to be done through shaders.

Note: What I present is an overview and user guide of WebGL, not OpenGL (nor OpenGL ES Shading Language)! WebGL could be seen as a wrapper for OpenGL ES 2.0 in your browser (and in the current state programmable by JavaScript). Therefore, I will not explain every OpenGL command or speciality. What is pointed out is just how to get your OpenGL knowledge into your browser. General programming knowledge and some HTML/JavaScript expertise may help. If you need help with OpenGL please refer to the masses of great tutorials in the net, especially OpenGL ES 2.0 Programming Guide (the main inspiration for this).

### The Page

First of all we will set up our frame: the basic HTML file which we will use throughout this post. To write and develop HTML, JavaScript and therefore WebGL yourself, you are pretty open in what to use. You can use a normal text editor or rely on a web development environment, but there are no distinct WebGL IDEs.

For web development you can use Notepad++.

Everything shown here has been developed in Eclipse and Notepad++, and tested in Chrome 9. Chrome 9 is currently the only release browser with WebGL enabled, and it has very good Developer Tools. Both are recommended for JavaScript development, as debugging is kind of a pain in the a** (we will come to this in later parts).

If you have everything ready, have a look at the initial HTML scaffold:

```
 1  <!DOCTYPE html>
 2  <html>
 3  <head>
 4      <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
 5      <meta name="keywords" content="altdev,webgl,insanitydesign" />
 6      <meta name="author" content="INsanityDesign" />
 7
 8      <title>WebGL - Part 2: In the beginning there was...</title>
 9
10      <style type="text/css">
11          #canvas {
12              width: 640px;
13              height: 480px;
14          }
15      </style>
16      <script type="text/javascript" src="webgl-utils.js"></script>
17
18
19      <script type="text/javascript">
20          ...
21      </script>
22  </head>
23
24  <body>
25      <canvas id="canvas">
26      </canvas>
27  </body>
28  </html>
```

This is the scaffold we will use throughout this part to show how to embed WebGL in an HTML file. You can copy it and save it as e.g. webgl-part2.html and open it with your WebGL-enabled browser. At the moment there is nothing to see or do. We will gradually fill it up and start using WebGL to draw our triangle.

One thing you might notice immediately, if you have seen or written HTML pages before, is the first line: <!DOCTYPE html>! This is the reduced Doctype used to initialize an HTML5 page. You no longer have to write those long XHTML 1.0 Transitional Doctypes; just that! Afterwards comes pretty general HTML stuff and nothing very special for our test.

Between lines 11 and 14 we define, through CSS, the width and height of the canvas we want to draw into. For now and the following parts we will stick with a classic 640×480. There are several ways to define the width and height of the canvas; as it is an HTML element you can use everything that HTML allows. The WebGL drawing viewport can refer to this size later on, or increase or decrease the glViewport.

At line 16 you can see how to embed external JavaScript files into the page. We already embed one .js file in this case: webgl-utils.js! These utils provide some common methods for creating a WebGL context on a canvas. They are cross-browser compatible and actively maintained at Khronos, so it is unnecessary to rewrite them yourself. You can download the original file from the Khronos CVS. Download and save the file in the same location as your HTML file. I will rely on these in this and the following parts.

Lines 25 and 26 define the initial canvas which we will draw into. It is just a simple HTML element. We identify it by the id attribute “canvas”, but the naming is up to you (though you would have to change the other occurrences).

Now, the interesting things will happen between lines 19 and 21. This is where the magic will occur. In this showcase we will fill it part by part to get our triangle into the browser. All code shown in the following should be copied in here (or download the full file at the end).

### Global Definition

At first, we will add some global variables that we will use in the following WebGL application. In addition, we define our triangle vertices and the mandatory fragment and vertex shader (add this where the dots are).

```
//Some global variables
var gl;
var width;
var height;
var canvas;

//Our triangle
var vertsTriangle = new Float32Array([
     0.0,  0.7, 0.0, //Top
    -0.7, -0.7, 0.0, //Bottom Left
     0.7, -0.7, 0.0  //Bottom Right
]);
var vertsTriangleBuffer;

//Fragment Shader
var shaderFS = "precision highp float;\n void main() {\n gl_FragColor = vec4(1.0, 1.0, 0.2, 1.0);\n }\n";

//Vertex Shader
var shaderVS = "attribute vec4 position;\n void main() {\n gl_Position = position;\n }\n";

//The shader program
var shaderProgram;
```

We define handles for the created GL context we want to re-use throughout this page as well as the width and height of the canvas into which we want to draw. Do not forget the canvas itself, of course. Afterwards we set up the three vertices of our triangle in vertsTriangle.

You may have noticed that we do not type the width/height or the gl handle. All variables are just declared as (mutable) vars. As JavaScript is a dynamically typed language this is fine. The type is determined as soon as a value is assigned, but a variable is never strongly typed and can hold values of different types over its lifetime.

As JavaScript is required to deliver more and more performance inside the browser, an initiative started to define specific arrays to speed up applications. These Typed Arrays, such as the Float32Array used here, are still in a draft state, as is WebGL itself, but will probably be released and supported in parallel with it.

After the vertices array you can see the fragment and the vertex shader code. Here it is assigned as a String to a JavaScript variable. If you copied this out and removed the special symbols you would get a normal shader as you might know it. We explicitly mark the end of each line with \n. This looks odd, but it preserves the line structure of the shader source when it is compiled later on. Alternatively, you could also define the source as an array of lines:

```
var shaderFS = [
    "precision highp float;",
    "void main() {",
    "    gl_FragColor = vec4(1.0, 1.0, 0.2, 1.0);",
    "}"
].join("\n"); //join the lines back into one source string
```

Both approaches work, as long as the array form is joined back into a single string before it is handed to the shader compiler. It may seem pretty odd to write everything directly into these variables, but for now it’s enough as we only need one shader of each type. In a later part we will see how to load/reload specific shaders into your application.

The last thing we already pre-define is a variable for our later assigned shader program.

### Entry point

JavaScript in its browser environment has no specific entry point such as a classic main method. Therefore, we have to create an entry point ourselves, based on how the DOM of the web page loads. As we want to start our WebGL application as soon as the actual body of the page (with our canvas) has been loaded, we add a trigger to the body itself, docked onto the onload event of the body:

``` 24 <body onload="main();"> ```

With that we listen for the body to be loaded (by which point the canvas has been created in the DOM) and fire our main() method, where we start our part of the application:

```
/**
 * The main entry point
 */
function main() {
    //
    canvas = document.getElementById("canvas");
    gl = WebGLUtils.setupWebGL(canvas);
    //Couldn't setup GL
    if(!gl) {
        alert("No WebGL!");
        return;
    }

    //
    width = canvas.width;
    height = canvas.height;

    //
    if(!init()) {
        alert("Could not init!");
        return;
    }

    //
    draw();
}
```

In the first 8 lines we init our canvas and the WebGL context for it. First, we retrieve the canvas element from the DOM through its id “canvas”. Then we call a method from the downloaded WebGL Utils to create and enable the WebGL context on that canvas. Please have a look at part 1 to see what is done in that method. Basically, we use the generalized setupWebGL(canvas) method to be cross-browser compatible, as there is still no single standard method to init a WebGL context.

Afterwards we retrieve the width and height from the canvas (as defined in the CSS) to reuse these in our glViewport later on. Then we fire a general init() method to setup everything WebGL we need:

```
/**
 * Init our shaders, buffers and any additional setup
 */
function init() {
    //
    if(!initShaders()) {
        alert("Could not init shaders!");
        return false;
    }

    //
    if(!initBuffers()) {
        alert("Could not init buffers!");
        return false;
    }

    //
    gl.clearColor(0.0, 0.0, 0.0, 1.0);
    gl.viewport(0, 0, width, height);
    gl.clearDepth(1.0);

    //
    return true;
}
```

In general, the init method follows three steps:

• Init our shaders (load, compile and link them into the shader program)
• Init our buffers (our triangle buffer)
• Set up the clear color, depth and our viewport

```
/**
 * Init our shaders, load them, create the program and attach them
 */
function initShaders() {
    var fragmentShader = createShader(gl.FRAGMENT_SHADER, shaderFS);
    var vertexShader = createShader(gl.VERTEX_SHADER, shaderVS);

    shaderProgram = gl.createProgram();
    if(shaderProgram == null) {
        alert("No Shader Program!");
        return false;
    }

    gl.attachShader(shaderProgram, vertexShader);
    gl.attachShader(shaderProgram, fragmentShader);
    gl.linkProgram(shaderProgram);

    if(!gl.getProgramParameter(shaderProgram, gl.LINK_STATUS)) {
        alert("Could not link shader!");
        gl.deleteProgram(shaderProgram);
        return false;
    }

    gl.useProgram(shaderProgram);

    shaderProgram.position = gl.getAttribLocation(shaderProgram, "position");
    gl.enableVertexAttribArray(shaderProgram.position);

    return true;
}

/**
 * Create and compile a shader of the given type
 */
function createShader(shaderType, shaderSource) {
    // Create a shader
    var shader = gl.createShader(shaderType);
    if(shader == null) {
        return null;
    }

    gl.shaderSource(shader, shaderSource);
    gl.compileShader(shader);

    if(!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
        alert("Could not compile shader!");
        gl.deleteShader(shader);
        return null;
    }

    return shader;
}
```

At the beginning of initShaders we call our createShader method with the type of the shader and the according shader source variable. The called method createShader does exactly what the name suggests: It creates a shader based on the given type, compiles the given source and returns the created glShader.

Actually nothing special here, as it is just a helper method. If we were to hand over the shaderSource in a different format, we would probably extend this method to convert the given source to a common format. But for now this is fine.

One speciality you can find is the error handling: throughout the code you will find alerts followed by empty returns. As we have no specific exit() call in JavaScript or HTML (there is a possibility, but we will not use it later on), we alert the user, which fires a message box, and then simply return. Because these returns are caught at the higher levels, the result is a jump out of the application, leaving it in a manually “stopped” state. As we entered by ourselves through the main() method, we can jump out any time we want. An alert() may not be the nicest way to show errors and exit the application, but it at least gives feedback. Later we will see how to combine HTML entities, CSS and WebGL, and will use normal HTML functionality to fire error messages and, according to the error, lead the user to a solution or mail the error to the developer.

After we have loaded and created our shaders, we create our shader program. Again we check for an error that may have occurred. If no error happened, we attach both shader objects, link the program and, if nothing went wrong, use it.

If you know OpenGL you might say: still nothing that special here! … and this is intended. WebGL is meant as a low-level programming opportunity without too many specialities that stray from the original use of OpenGL. The opposite was tried with VRML and many other special plug-ins, but it never led to what was expected: commonly used 3D in the browser.

Nevertheless, one speciality about JavaScript is being used now: The dynamic prototyping. You can see at the end that we return “the index of the generic vertex attribute that is bound to that attribute variable” and we assign it to the shaderProgram as position.

```
shaderProgram.position = gl.getAttribLocation(shaderProgram, "position");
gl.enableVertexAttribArray(shaderProgram.position);
```

This is possible even though we haven’t predefined that variable in the shaderProgram object. We dynamically extend the object by assigning the position property without prior definition.

This should not be considered best practice, as there is no contract we can rely on, e.g. one defined through a specific class, structure or interface. In this simple case, as we do not need reusable and generic objects, we just assign the handle for later reuse in our own “knowledge space”. Regarding classes and interfaces for contracts: these are all possible in JavaScript, and we will come to that in a later part.

But now our shaders are (hopefully) all loaded and assigned so that we can continue setting up our triangle vertices buffer.

### Setup the Buffer

Setting up the buffer for our vertices is straightforward:

```
/**
 * Init our required buffers (in our case for our triangle)
 */
function initBuffers() {
    vertsTriangleBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, vertsTriangleBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, vertsTriangle, gl.STATIC_DRAW);

    return true;
}
```

Absolutely nothing special here! We create the buffer and assign it to our global variable, bind and set the according data. This will become more complicated in later parts, when we define structures, load models etc. But as we just want to draw one triangle this is enough.

### Let’s draw

What we really want to do after all this work now is to draw. Therefore, we define our draw() method:

```
/**
 * Our draw/render main loop method
 */
function draw() {
    // request draw to be called again for the next frame.
    window.requestAnimFrame(draw, canvas);

    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

    gl.bindBuffer(gl.ARRAY_BUFFER, vertsTriangleBuffer);
    gl.vertexAttribPointer(shaderProgram.position, 3, gl.FLOAT, false, 0, 0);

    gl.drawArrays(gl.TRIANGLES, 0, 3);
}
```

As you can see no immediate drawing here for this very simple example. This again relates to WebGL and its origin OpenGL ES 2.0: Besides no fixed-function pipeline there is no immediate drawing. We just clear, bind our buffer and draw!

But wait, how do we draw? I said we have no specific entry point and created one ourselves. If we have no specific entry point, we probably also have no specific draw() loop we can rely on. Again, we have to create one on our own. Here we will rely on a method from the WebGL Utils and the according best practice from Khronos.

At the end of our main method, after all initialization, we call our draw() method once:

```
    draw();
}
```

After the initial call, the first line of the draw method calls a utils method: requestAnimFrame(draw, canvas)! This cross-browser compatible method takes the method to call back and the canvas in which to draw. Inside is a timeout fallback that re-issues the call and keeps the loop running. Currently this is the recommended way to run a render loop, but note that it is not final; you can see the drawbacks immediately in the timeout and in the call placed directly in the draw method. Until there is a final call and the draft has been approved, we will stick with it, as it is cross-browser compatible and works fine.

Now, if you copied and set up everything correctly, you should see a single triangle rendered on the black canvas.

## #AltDevUnconference (Burbank)

Original Author: Mike Acton

Last night was a blast!

The first #AltDevUnconference was in Burbank, CA and it was great fun. We met new people. We talked. We exchanged ideas. We drank an average of three beers each. (I can tell you that exactly, since the bar tab was included in attendance.)

Nothing ever goes as planned

We did have some problems with the venue. They accidentally double-booked our room! Luckily, we were able to work out something with the management and create a space for ourselves down on the main floor. It was a little bit noisy with the UFC fight going on at the same time (or as it was also referred to, “sweaty man wrestling”) but I think we made the best of the situation and still had a great time. We adapted the plan quickly and eventually settled on some smaller rotating groups that were able to talk and share.

We definitely earned the “un” in “unconference” – But when Rachel Blum and I imagined this event, we really had no idea how it was going to work or what we were doing. (And speaking for myself, I’m still pretty sure I have no idea what I’m doing when it comes to planning something like this – I have a much healthier respect now for the people who put on the big conferences and still get them to run smoothly!)

What went down?

It became clear really quickly that my original plan for some rapid-fire group presentations wasn’t going to work in our new space, so we adapted. For most of the night, we went through rounds of group discussions on various topics. I have been working with my own team at Insomniac on similar exercises, so I cribbed pretty heavily from those. Some examples of the topics we discussed:

• What will games look like in 10 years?
• What’s the difference between art and advertisement? Where do games fit in to that?
• What are you afraid of and how do you use it to drive the creative process?
• What’s the ‘ideal’ user interface for games?
• Gamification: Does it really make everything better?
• Feminism and games: Do you really want to define your audience primarily by their sex?

There were a lot of really great ideas generated. Which led to a ton of really interesting tangent conversations.

Who was there?

We had a pretty good mix of developers from different specialties and experience levels. We probably had more programmers than any other discipline, but that is probably more to do with programmers being the group that Rachel and I interact with most often than with the content of the evening.

Dennis Crow was good enough to share some of his pictures from last night. (I’ll post more as we gather them up.)

The event went way over-budget. Largely due to the fact that we had to rent the space on the main floor on a fight-night. I ended up covering an extra few hundred dollars personally, but what really saved our asses was our sponsor. In retrospect, I can say that it simply would not have been possible without the sponsorship from Visceral Games. Thanks, guys! You really helped to make what could have been a disaster into a good night for everyone.

More?

I’m rushing out to catch my plane to San Francisco for GTC and GDC this week, but I wanted to get this quick update out before I took off. I’d like to gather up thoughts and stories from our attendees and share those with you later though! So stay tuned.

Mike.

## Redneck Cloud Computing

Original Author: Nick Darnell

Every now and then I wonder what decisions might be different if I had access to a cloud of machines dedicated to baking game assets or solving other highly parallelizable tasks. So I started looking into what options were available to someone wanting to distribute a ton of work over many machines. I found lots of options, but all of them suffered from one or more of these problems:

• Specialized language
• Specialized framework (tied to language)
• Specialized operating system
• New versions would require manual deployment

I wanted a solution that didn’t require a specialized framework or language, because chances are I would find something I wanted to distribute that I didn’t want to completely rewrite. No specialized OS either: I want to be able to slave any unused Windows machine in the office (dedicated farms are really expensive). And if I need to perform new operations or fix a bug, I don’t want to reinstall or deploy new versions to everyone’s machine.

So I decided to roll my own solution and share the design. I’ll share the finished version of the software once I’ve completed it.

Let’s start with a use case. I have an executable and a pile of data on disk that I would like to distribute over a ton of machines. I was able to quickly modify the existing program to add a command line option that lets it process a range of the data on disk instead of all of it. How do I distribute this work over multiple machines so that all the data on disk gets processed?
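That split of the data into per-machine ranges can be sketched in a few lines. The `--range` flag and the round-up chunking below are invented for illustration; the actual tool's option is not specified in the post:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Split item_count data items into roughly task_count command lines,
// each processing a contiguous --range=first,count slice (flag invented).
std::vector<std::string> make_range_tasks(int item_count, int task_count)
{
    std::vector<std::string> tasks;
    int per_task = (item_count + task_count - 1) / task_count; // round up
    for (int first = 0; first < item_count; first += per_task) {
        int count = std::min(per_task, item_count - first);
        tasks.push_back("bake.exe --range=" + std::to_string(first) +
                        "," + std::to_string(count));
    }
    return tasks;
}
```

Each resulting command line is one task in the sense used below.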

Each time we execute the program over some slice of the data, let’s call that a task. Each task will be processed by some machine in the cloud. To submit the tasks, we will need some common, cross-language mechanism of communication. I chose named pipes simply because they are easy to use on Windows.

To submit tasks to the named pipe, we could either write a simple reusable program that reads task descriptions from a file and submits them to the named pipe, or we could wrap the named pipe communication in a C++ library so that C++ and C# (P-Invoke) could both reuse the logic inside of many various tools (including the generic reusable program that reads task descriptions from files or std::in).
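As a sketch of the generic reusable side, here is what parsing task descriptions from a stream might look like. The one-task-per-line `exe|args|file;file` format is completely made up for this example; the post does not specify one:

```cpp
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical task description: executable, command line, data files.
struct Task {
    std::string exe;
    std::string args;
    std::vector<std::string> files;
};

// Read "exe|command line|file;file;..." lines from a file or std::cin.
std::vector<Task> read_tasks(std::istream &in)
{
    std::vector<Task> tasks;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream fields(line);
        Task t;
        std::getline(fields, t.exe, '|');
        std::getline(fields, t.args, '|');
        std::string file;
        while (std::getline(fields, file, ';'))
            t.files.push_back(file);
        if (!t.exe.empty())
            tasks.push_back(t);
    }
    return tasks;
}
```

The same parsed structure could then be handed to the pipe-writing library, whatever its actual interface ends up being.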

Each machine has a single server running on it. I chose to write the server in C#, but it’s a blackbox as far as your tasks are concerned so you could use something different if you wanted.

Once the server receives the task description via the named pipe, it looks at the list of servers it knows about. The servers are detected using a simple UDP broadcast.

Using .Net remoting I then connect to each one of these machines to see what it can offer me.

Now, each of these machines could have who-knows-what installed on them, and you’re about to transfer and run an exe that may have requirements, like .NET 4.0. So each task needs to contain a list of requirements. For now, I’ve got them scoped to three things: defined environment variables, registry keys/values and .NET version. You could probably drop the .NET version check if you always keep your server written against the latest version, making it a prerequisite for any machine on your network.
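The environment-variable part of such a requirements check might be sketched like this (the registry and .NET-version checks are Windows-specific and omitted; the function name is mine):

```cpp
#include <cstdlib>
#include <string>
#include <vector>

// A machine can accept a task only if every environment variable the
// task lists as required is actually defined there.
bool meets_env_requirements(const std::vector<std::string> &required_vars)
{
    for (const std::string &name : required_vars)
        if (std::getenv(name.c_str()) == nullptr)
            return false; // missing requirement: skip this machine
    return true;
}
```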

Now I can try to reserve a slot on the machine; if I fail to reserve one because of a race condition, I move on to the next server.
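The race-safe shape of that reservation can be sketched with a compare-exchange. The real system negotiates this over .NET remoting between machines, so the shared atomic below is only a stand-in for the idea:

```cpp
#include <atomic>

// Decrement the free-slot count only if no other submitter grabbed the
// slot between our read and our write; otherwise retry or give up.
bool try_reserve_slot(std::atomic<int> &free_slots)
{
    int n = free_slots.load();
    while (n > 0) {
        if (free_slots.compare_exchange_weak(n, n - 1))
            return true; // reserved a slot
        // n was reloaded with the current value; loop decides again
    }
    return false; // lost the race or no slots: move on to the next server
}
```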

Having reserved a slot (some machines that are idle or dedicated may have multiple slots), I need to transfer the executable and all the data files listed inside the task description that are claimed to be needed by the task.

I tell the remote server to then begin running the task in a new thread and I continue looking for empty slots in the cloud to submit tasks to.

After I discover that a task has finished, everything that differs in the remote folder where the task was dumped and subsequently executed is transferred back to the local machine that submitted the task.
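One plausible sketch of that "what changed" step: snapshot the task folder before and after the run (file name mapped to some fingerprint such as size or modification time) and keep anything new or different. The fingerprint choice here is mine, not part of the original design:

```cpp
#include <map>
#include <string>
#include <vector>

// Compare two folder snapshots (file name -> fingerprint) and return the
// files that appeared or changed -- the only ones worth sending back.
std::vector<std::string> changed_files(
    const std::map<std::string, long> &before,
    const std::map<std::string, long> &after)
{
    std::vector<std::string> changed;
    for (const auto &entry : after) {
        auto it = before.find(entry.first);
        if (it == before.end() || it->second != entry.second)
            changed.push_back(entry.first);
    }
    return changed;
}
```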

The named pipe that was used to submit one or more tasks remains open the whole time and is notified after each task is finished and the data successfully transferred back to the host machine.

So that’s the design in a nutshell. It’s not a solution designed to solve every distributed computing problem, but I like that it solves a very common pattern of problems I see fairly often in a non-intrusive manner.

I haven’t yet nailed down the best way to prevent the system from being abused by a malicious user. One idea I’ve been toying with is a SQL server holding the list of MD5 hashes of the executables that are approved for deployment.
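Once you have the approved hashes, the gate itself is trivial. In this sketch std::hash stands in for MD5 purely so the example is self-contained; a real check would run a cryptographic digest over the executable's bytes and query the SQL table:

```cpp
#include <functional>
#include <set>
#include <string>

// Refuse to run any executable whose hash is not on the approved list.
bool is_approved(const std::string &exe_bytes,
                 const std::set<size_t> &approved_hashes)
{
    return approved_hashes.count(std::hash<std::string>{}(exe_bytes)) != 0;
}
```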

Here’s a simple diagram to help explain the task submission process. Because who doesn’t love diagrams.

Also posted on my personal blog.

## Battle Plan: Shipping Someone’s Problem Child

Original Author: Claire Blackshaw

Getting it out There

In my relatively short time in the industry I seem to have attracted a few fixer-uppers, reboots, ports and all-round “oh, do you mind just finishing it off?” jobs. On some of the biggest of these I have been Lead Programmer or Lead Designer. Through mistakes and successes I now have a rough battle plan for getting these jobs done, which I thought I would share as my first post.

To clarify we are not referring to projects with a stable team and direction that you take over, or are promoted in charge of. We are talking about those deals where a wet deposit of mud is dropped on your desk, and you are told to find the diamond inside.

## 1. Run Away if you can

First off, run away! No, seriously, escape is your best option. These jobs are scary, messy, and hide ten problems for every one you can see. Unless there is a really, really good reason to do this, just leave it; it’s not worth your time.

Okay, you think it may be worth your time? First question: has someone signed some paperwork and committed you before a full in-depth review? If so, slap or knock out the foolish suit with no brains. Try to avoid commitments and deadlines at first; you need some time to inspect this project with your full team.

## 2. Investigate & Negotiate

Find the stakeholders, and if possible the developers originally involved, and follow the path that led to the current state. Treat all previous work on the project as if it was done by a genius savant who forgets his trousers: the people before you are likely to be intelligent and experienced, but assume they made silly mistakes.

Find out the stakeholders’ goals and what they want from the project. Whittle the list down as much as possible and kill any sacred cows you can. The more you are allowed to change and throw out, the more options you have.

Ask to get back to them and keep the lines of communication open. The most common cause of failure in a project of this kind is a lack of communication or vision. Understand what they want and keep them in the loop, because I guarantee that you only understand half of what they want, and they only understand a tenth.

## 3. Rip & Roar

Is there a playable build? Get some pizza, drinks and your team together. Play it, play it and play it some more. Understand the product, read the documentation, pore through the assets and then rip into them.

Give the team a chance to say what is terrible about the game, what should be burned with fire, and what just doesn’t make sense. Be sure to keep them on track with the stakeholder vision, but don’t stop them from throwing tomatoes at the goals; let them question it.

Write all this down in a public space, such as a whiteboard. It keeps everyone aware of the challenges that need addressing.

## 4. Analyse

Pick through the design, art and code. Often the design of a derailed project will have conflicts, and missing pieces of data. Get your design team to dig deep into the numbers, and systems.

The code review strategy depends on the size of the team. I suggest strike teams of 3-5 programmers with specialist areas, or general on a small team, to attack all the core systems. Compare it to the documentation, if there is any, but more importantly map it out yourselves. Don’t let them do any commits yet, just map out the maze.

I cannot speak for the art process, but generally it’s been a case of compiling a style bible and clarifying the style. Hopefully someone else can give more advice here 😉

Highlight areas of risk, and I can guarantee that for every one you find you missed three. You should now have replaced your rip-apart boards with risk boards.

## 5. Rebuild

Talk with the team about time-scales and build a plan for the rebuild. Leave yourself time to iterate, and for the risks you missed. This is pretty much a normal game development process, just on a very unstable base.

This is normal project management, and I don’t want to tell everyone how to suck eggs. The only real challenge, hopefully addressed by the previous two steps, is that team buy-in is much harder to get on a rework than on a fresh concept.

## 6. Set Expectations

Now is the time to go back to the stakeholders and set expectations. The worst mistake you can make, and one I’ve made, is to try to gloss over some risks.

Everyone should know the risks, the costs and what they are likely to get at the end of the project. I guarantee the costs are higher, and the risks are higher. The only reason to do this from a business perspective is to recover investment. Don’t be afraid to say it’s not worth it and kill it now.

Sometimes though the reason is because someone believes there is a diamond in that piece of mud they dropped on your desk. It’s a great feeling to find that diamond, and worth the work.

Also now would be the correct time to sign the piece of paper. 😉

Claire

Postscript:

Here are some general tips for you all, hope they help.

• Lines of Communication are so important in problem projects, be sure of them.
• Don’t pollute the project with blame; hence the rip-it sessions. They get all that negative air out there, then rub it out and replace it with plans.
• It’s tempting to throw out massive blocks of code, but a lot of time can be wasted. Instead I encourage decoupling efforts first so you can isolate problem areas.
• Some core design may need to be reworked, but encourage your designers to isolate risk areas first.
• Don’t be afraid to prototype! Yes I know the project has already started but prototypes give a stable clear vision of a problem and a possible solution.

## Managing Decoupling Part 3 – C++ Duck Typing

Original Author: Niklas Frykholm

(Also posted in the BitSquid blog.)

Some systems need to manipulate objects whose exact nature is not known. For example, a particle system has to manipulate particles that sometimes have mass, sometimes a full 3D rotation, sometimes only 2D rotation, etc. (A good particle system, anyway. A bad particle system could use the same struct for all particles in all effects, with fields called custom_1 and custom_2 used for different purposes in different effects. And it would be inefficient, inflexible and messy.)

Another example is a networking system tasked with synchronizing game objects between clients and servers. A very general such system might want to treat the objects as open JSON-like structs, with arbitrary fields and values:

``` { "score" : 100, "name" : "Player 1" } ```

We want to be able to handle such “general” or “open” objects in C++ in a nice way. Since we care about structure we don’t want the system to be strongly coupled to the layout of the objects it manages. And since we are performance junkies, we would like to do it in a way that doesn’t completely kill performance. I.e., we don’t want everything to inherit from a base class Object and define our JSON-like objects as:

``` typedef std::map<std::string,Object *> OpenStruct; ```

Generally speaking, there are three possible levels of flexibility with which we can work with objects and types in a programming language:

1. Exact typing – Only ducks are ducks

We require the object to be of a specific type. This is the typing method used in C and for classes without inheritance in C++.

2. Interface typing – If it says it’s a duck

We require the object to inherit from and implement a specific interface type. This is the typing method used by default in Java and C# and in C++ when inheritance and virtual methods are used. It is more flexible than the exact approach, but still introduces a coupling, because it forces the objects we manage to inherit a type defined by us.

Side rant: My general opinion is that while inheriting interfaces (abstract classes) is a valid and useful design tool, inheriting implementations is usually little more than a glorified “hack”, a way of patching parent classes by inserting custom code here and there. You almost always get a cleaner design when you build your objects with composition instead of with implementation inheritance.

3. Duck typing – If it quacks like a duck

We don’t care about the type of the object at all, as long as it has the fields and methods that we need. An example:

``` def integrate_position(o, dt): o.position = o.position + o.velocity * dt ```

This method integrates the position of the object o. It doesn’t care what the type of o is, as long as it has a “position” field and a “velocity” field.

Duck typing is the default in many “scripting” languages such as Ruby, Python, Lua and JavaScript. The reflection interface of Java and C# can also be used for duck typing, but unfortunately the code tends to become far less elegant than in the scripting languages:

```
o.GetType().GetProperty("Position").SetValue(o,
    o.GetType().GetProperty("Position").GetValue(o, null) +
    o.GetType().GetProperty("Velocity").GetValue(o, null) * dt,
    null)
```

What we want is some way of doing “duck typing” in C++.

Let’s look at inheritance and virtual functions first, since that is the standard way of “generalizing” code in C++. It is true that you could do general objects using the inheritance mechanism. You would create a class structure looking something like:

``` class Object {...}; class Int : public Object {...}; class Float : public Object{...}; ```

and then use dynamic_cast or perhaps your own hand-rolled RTTI system to determine an object’s class.

But there are a number of drawbacks with this approach. It is quite verbose. The inheritance-and-virtual-functions model requires objects to be handled through pointers, so they (probably) have to be heap allocated. This makes it tricky to get a good memory layout, and that hurts performance. Also, they are not PODs, so we will have to do extra work if we want to move them to a co-processor or save them to disk.

So I prefer something much simpler. A generic object is just a type enum followed by the data for the object:

To pass the object you just pass its pointer. To make a copy, you make a copy of the memory block. You can also write it straight to disk and read it back, send it over network or to an SPU for off-core processing.

To extract the data from the object you would do something like:

```
unsigned type = *(unsigned *)o;
if (type == FLOAT_TYPE) {
    float f = *(float *)(o + 4);
    // ... use f ...
}
```

You don’t really need that many different object types: bool, int, float, vector3, quaternion, string, array and dictionary is usually enough. You can build more complicated types as aggregates of those, just as you do in JSON.

For a dictionary object we just store the name/key and type of each object:

I tend to use a four byte value for the name/key and not care if it is an integer, float or a 32-bit string hash. As long as the data is queried with the same key that it was stored with, the right value will be returned. I only use this method for small structs, so the probability for a hash collision is close to zero and can be handled by “manual resolution”.

If we have many objects with the same “dictionary type” (i.e. the same set of fields, just different values) it makes sense to break out the definition of the type from the data itself to save space:

Here the offset field stores the offset of each field in the data block. Now we can efficiently store an array of such data objects with just one copy of the dictionary type information:

Note that the storage space (and thereby the cache and memory performance) is exactly the same as if we were using an array of regular C structs, even though we are using a completely open free form JSON-like struct. And extracting or changing data just requires a little pointer arithmetic and a cast.
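A minimal C++ sketch of that broken-out form (the original post illustrated these layouts with diagrams; the field and type names here are invented):

```cpp
#include <cstring>

// Shared "dictionary type" description: each field has a 4-byte key,
// a type enum and an offset into the per-object data block.
enum FieldType { FLOAT_TYPE, INT_TYPE };

struct DictField { unsigned key; FieldType type; unsigned offset; };

struct DictType {
    unsigned num_fields;
    const DictField *fields;
};

// Extracting a value is a key lookup plus pointer arithmetic and a cast.
// (memcpy instead of a raw cast to stay alignment- and aliasing-safe.)
float get_float(const DictType &type, const char *data, unsigned key)
{
    for (unsigned i = 0; i < type.num_fields; ++i)
        if (type.fields[i].key == key && type.fields[i].type == FLOAT_TYPE) {
            float f;
            std::memcpy(&f, data + type.fields[i].offset, sizeof(f));
            return f;
        }
    return 0.0f;
}
```

An array of particles sharing one DictType then stores just the packed data blocks, exactly like an array of plain C structs.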

This would be a good way of storing particles in a particle system. (Note: This is an array-of-structures approach; you can of course also use duck typing with a structure-of-arrays approach. I leave that as an exercise to the reader.)

If you are a graphics programmer, all of this should look pretty familiar. The “dictionary type description” is very much like a “vertex data description” and the “dictionary data” is awfully similar to “vertex data”. This should come as no big surprise. Vertex data is generic, flexible data that needs to be processed fast in parallel on in-order processing units. It is not strange that with the same design criteria we end up with a similar solution.

Moral and musings

It is OK to manipulate blocks of raw memory! Pointer arithmetic does not destroy your program! Type casts are not “dirty”! Let your freak flag fly!

Data-oriented-design and object-oriented design are not polar opposites. As this example shows a data-oriented design can in a sense be “more object-oriented” than a standard C++ virtual function design, i.e., more similar to how objects work in high level languages such as Ruby and Lua.

On the other hand, data-oriented-design and inheritance are enemies. Because designs based on base class pointers and virtual functions want objects to live individually allocated on the heap. Which means you cannot control the memory layout. Which is what DOD is all about. (Yes, you can probably do clever tricks with custom allocators and patching of vtables for moving or deserializing objects, but why bother, DOD is simpler.)

You could also store function pointers in these open structs. Then you would have something very similar to Ruby/Lua objects. This could probably be used for something great. This is left as an exercise to the reader.

## Handy Developer Tools: The Sandbox

Original Author: Max Burke

I have come to rely on having a sandbox environment quite heavily over the past couple of years. My sandbox allows me to quickly compile a test program for any of the platforms I need to target. It exposes all the tools I need in my environment, like dumpbin or objdump, to quickly pull apart code. Most importantly, my sandbox is completely decoupled from the rest of my development environment: it is not connected to my main project, because playing in it should yield instant results.

My sandbox includes a test build environment too, one that lets me quickly pull in other libraries should I need to whip up a program that depends on more than just the standard library. This is especially helpful when it comes to getting all the scaffolding in place for a test program that runs on the SPU, or any other scenario requiring complicated setup.

The sandbox is one of the most important tools I have for development. It allows me to rapidly run experiments to test any assumptions I have made. It allows me to answer questions that I have — something like “how exactly does -fstack-check work?” can be answered in a handful of lines of code that compile and disassemble instantaneously.

Do you have a sandbox? What does your sandbox have that helps you?

## Do or do not

Original Author: Paul Evans

Assertion is a valuable tool that many languages provide, but recently I have come to think it is used too much and in the wrong places. I put forward the idea that asserts should not be found anywhere in the main body of your code. Permanently burying asserts deep inside your code could end up causing more problems than they seek to solve. Asserts embedded in methods are useful while debugging, but should rarely be committed to source control.

I extend this thought to other defensive practices like checking for nulls indiscriminately – even where there never should be one. Sanitization of external data is prudent and necessary, but once things are safely inside your own codebase you have control of the expected range of values and should be able to trust that.

## What an assert does

An assert stops execution and flashes up a message when a condition fails. If a debugger is attached, it will, if it can, direct the programmer to the source where the assert fired. Great for the programmer trying to track something down; rubbish for anyone else, who has to skip the message (perhaps many times) to do what they are trying to do.

The disruption asserts cause to other programmers and disciplines has been mitigated in a few ways. These are some of the more common methods:

• Builds with asserts turned off
• Asserts with a concept of levels or categories that can be toggled
• Assert instance toggle – it will appear once, but as well as skip there is an option to ignore it from then on

## Crying wolf

Asserts that continually fire create too much noise; unless the game completely falls over, people start to not take any of them seriously. Just like compiler warnings, there is a point where many eyes just glaze over the number, and a number above zero becomes an accepted part of life… perhaps even a joke.

## Do or do not, there is no try

Asserts show a lack of confidence that an operation will succeed – so much so that a human being should be warned. It is a developer communicating with themselves and others that at some point something is in a state so dire that normal execution must be interrupted so someone can take a look. I do not believe having that kind of warning permanently buried and hidden is healthy. It is preferable to either formally test and handle the situation if it really is expected, or get to the real root of the problem and prevent that invalid state from ever occurring.

Sanitize the data once properly rather than repeatedly sanitizing the same data in the same way throughout the program just in case. Every conditional statement encountered carries a computational cost. A code block could have its own setup and destruction costs – perhaps more so if exception handling is being used. This may seem like premature optimization, but clear, concise code is easier to digest for man, compiler and machine. After all, the best kind of processing is the processing you can eliminate – look to improve your algorithm.
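One way to picture sanitizing once at the boundary, sketched in Python with made-up names (`read_player_name`, `greet`):

```python
def read_player_name(raw):
    """Boundary: validate external data exactly once on the way in."""
    name = raw.strip()[:16]          # trim whitespace, cap the length
    return name if name else "Player"  # never hand out an empty name

def greet(name):
    # No re-checking here: code past the boundary trusts the value
    # instead of defensively re-sanitizing it "just in case".
    return f"Welcome, {name}!"

print(greet(read_player_name("   Paul   ")))  # Welcome, Paul!
```

Everything downstream of `read_player_name` can rely on a non-empty, bounded string, so the repeated guard clauses disappear.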

Asserts in the middle of a method after a long conditional block could indicate that the method could be broken into smaller, more focused, testable methods. If possible, architect so that a method will always succeed in some measure, allowing callers to expect the same predictable outcome. If a function should always return a populated something, use a default something rather than null. For example, use a valid bright pink texture for missing data and log the error, rather than just returning null and forcing everything from that point on to check for nulls.
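The bright-pink-placeholder idea can be sketched as follows. `Texture`, `load_texture` and the backing store are illustrative stand-ins, not a real engine API:

```python
import logging

class Texture:
    def __init__(self, name, pixels):
        self.name = name
        self.pixels = pixels

# A 1x1 bright pink texture that is always safe to render.
MISSING_TEXTURE = Texture("missing", [(255, 0, 255)])

_loaded = {"grass": Texture("grass", [(30, 180, 60)])}

def load_texture(name):
    texture = _loaded.get(name)
    if texture is None:
        # Log once at the point of failure instead of returning None
        # and forcing every caller downstream to check for it.
        logging.error("texture '%s' not found, using placeholder", name)
        return MISSING_TEXTURE
    return texture
```

Callers always get something drawable; the glaring pink makes the missing asset obvious on screen while the log pinpoints it.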

A badly written assert may cause a side effect as part of testing the condition perhaps without the programmer even realizing it. When an assert like that is disabled it causes different behavior between types of builds.
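A contrived example of such a side effect, again in Python where `-O` removes asserts:

```python
def process_queue(queue):
    # BAD: popping inside the assert is a side effect. With asserts
    # enabled the item is consumed here; under "python -O" the whole
    # line vanishes and the queue keeps one extra element, so debug
    # and release builds diverge.
    assert queue.pop(0) is not None
    return len(queue)

print(process_queue([1, 2, 3]))  # 2 with asserts enabled, 3 under -O
```

The fix is to move the mutation out of the assert and test only the result.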

Worse is when an assert expects to be heeded in the same way as an exception. Ignored, or absent through compilation options, execution blunders on and state becomes corrupted, going on to cause unpredictable problems later. Perhaps that problem will even mask itself as an issue with a completely unrelated system. In the worst case corrupt data could be persisted, ruining a saved game.

Asserts encourage a codebase that constantly tries to correct itself rather than addressing the real problem. Methods no longer trust each other to fulfill their remit. Expectations of callers change from “I should only give it foo” to “I can give it foo, bar or nothing at all”. Those expectations can virally spread throughout the codebase causing more methods to second-guess each other and bloat.

## The truth is out there

Earlier I stated that asserts in the main body of code show a lack of confidence and are used to try to mitigate bad states. Used in the context of test functions, they instead serve to communicate an expectation of success. The tests can be run independently of the game or application itself and provided with a sample of runtime data to consume. Instead of the same assertions being made in multiple places against the same method, scattered across the codebase, an expectation for a specific scenario can be stated exactly once.

Asserted expectations can be gathered in a test suite to form a living document in code that provides examples of use that are easily found. Compile time can be quicker for coders because tests can be separated out into different projects – run only during the testing phase of a change and on the build machine.
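Stating the expectation once, as a test, might look like this minimal sketch (the `clamp` function is a made-up example):

```python
def clamp(value, low, high):
    """Pin value into the inclusive range [low, high]."""
    return max(low, min(high, value))

# The same expectations stated exactly once, as a test function,
# instead of defensive asserts scattered through clamp's callers.
def test_clamp():
    assert clamp(5, 0, 10) == 5     # in range: unchanged
    assert clamp(-3, 0, 10) == 0    # below range: pinned to low
    assert clamp(42, 0, 10) == 10   # above range: pinned to high

test_clamp()  # runs independently of the game itself
```

The test doubles as documentation: a new programmer can read `test_clamp` and see every supported scenario at a glance.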

## Conclusion

Asserts in the main body of your code clutter the logic and could be expressed more concisely elsewhere. Programming too defensively could blur the responsibility of what is being written. Do not always code “just in case”, otherwise callers will start to rely on the defensive parts. Either set the expectation of a method to always sanitize (and perhaps hint at that extra processing in the name) or fail as gracefully and quickly as possible. Many applications and games are closed systems with predictable points of failure; only the input crossing the boundaries should be considered for sanitization. Failed asserts should be a point of immediate failure – though I would argue there are better ways of indicating failure.

Asserts are very useful for debugging, but be wary of the costs of permanently committing them to the main body of your source code. Logging error messages and substituting inert values can be far less obtrusive than relying on asserts breaking execution and popping up message boxes. It is outrageous for a tool that is supposed to run unattended to stop everything waiting for input; just fail and exit.

Asserts do belong in source control when used to formally test methods and when they clearly express specifications. This kind of living documentation created by asserts is a form of cover fire. If you cannot separate asserts into formal test methods to create clear behavior specifications, try to gather them together close to where the bad state will first occur. Group them as close as possible to where an errant call could first appear on the stack, rather than scattering the expectations around deep in the code. Let the developer see everything that is expected at a glance. Asserts should help avoid detective work, after all!

There are many advantages to taking asserts scattered through your code and creating test suites from them – see my previous article Cover Fire for Coders.