*Original Author: Jaymin Kessler*

Disclaimer: I am not on the PixelJunk team, so if you liked the game I probably had zero to do with it! Conversely, if there were parts you disliked, I’m probably not the person to talk to :)

By now, everyone on the planet should have played PixelJunk Shooter 2, but in case the PSN outage stopped you from downloading it, it looks something like this:

One of the more interesting additions in the sequel was dark stages, something I think added a whole new dimension to the game. I won’t give an in-depth description of how the SPU lighting in those stages works, but basically we calculate a real-time dynamic distance field (also used for fluid particle collisions) and use it to get the length of light rays for occlusion and shadow casting. The lighting itself consists of three stages: get ray length, rasterize out, merge with other lights. The second stage was by far the slowest due to inefficient memory accesses, but I will save my ideas on that for another day. It’s the first stage I want to talk about today, but first we need some background.

**Distance Field: it slices, it dices, it makes thousands of julienne fries!**

Distance fields are one of those things that, to me at least, seem to have endless interesting uses. They are related to the skeleton transform, which I believe is the process by which girls become models. Let’s start off with a simple 2D world containing two objects: an oval and a square. You start with a “binary” image (that doesn’t have to be binary at all) where certain pixel values denote objects and others free space. The end result of the distance transform should be another image where each pixel’s value gives the distance to the closest object. As you get closer and closer to an object, the distance gets smaller and smaller, but what happens inside an object? Well, that depends. In an unsigned distance field, pixels that represent an object tend to be all zero, since the distance to the closest object (itself) is zero. In a signed distance field, the distances become zero at object edges and go negative as you move toward the center of an object. That’s actually quite useful when, for example, you want to know how far an object penetrates a wall and how much you need to back it up.
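To make the signed convention concrete, here is a brute-force sketch in Python. The function name and tiny-grid setup are mine, purely for illustration; it computes exact Euclidean distances, which is far too slow for real-time use but handy for sanity-checking faster approximations:

```python
import math

def brute_force_sdf(grid):
    """Exact signed distance field for a tiny binary grid, by brute force.
    grid[y][x] == 1 means object, 0 means free space. Outside pixels get
    the (positive) distance to the nearest object pixel; inside pixels get
    the negated distance to the nearest free pixel."""
    h, w = len(grid), len(grid[0])
    objs = [(x, y) for y in range(h) for x in range(w) if grid[y][x]]
    free = [(x, y) for y in range(h) for x in range(w) if not grid[y][x]]
    sdf = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if grid[y][x]:
                sdf[y][x] = -min(math.hypot(x - fx, y - fy) for fx, fy in free)
            else:
                sdf[y][x] = min(math.hypot(x - ox, y - oy) for ox, oy in objs)
    return sdf
```

(Note this particular convention puts -1 rather than 0 on a single-pixel object; the chamfer transform described next zeroes distances exactly at object edges instead.)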

There are many methods used to calculate them on CPUs, GPUs, and some mythical fake-sounding possibly theoretical machines like SIMD Hypercube Computers (Parallel Computation of the 3-D Euclidean Distance Transform on the SIMD Hypercube Computer), LARPBS (Fast and Scalable Algorithms for the Euclidean Distance Transform on the LARPBS), and EREW PRAM (Parallel Computation of the Euclidean Distance Transform on a Three-Dimensional Image Array). GPU algorithms tend to be multi-pass and slow to converge, while CPU algorithms tend to be very parallel-unfriendly, and the algorithms that are parallel tend to be for the weird architectures mentioned above.

I’ll now briefly go over the Chamfer signed distance transform. For a handy reference, be sure to check out the excellent “The Dead Reckoning Signed Distance Transform” by George J. Grevera. First (obviously) is initialization. Go through every pixel in your texture: if the pixel is inside an object, give it a value of -1.f, otherwise give it FLT_MAX. Then make one more pass looking for object edges, and assign those pixels a value of zero. The second pass is a forward sweep from left to right and top to bottom, using a sliding window that looks something like this:

√2 1 √2

1 C -

- - -

where C is the center of the window (and the pixel whose value we want to fill in). For each neighbor covered by the window, we take its distance value and add the corresponding offset (1 for the pixels directly above and to the left, √2 for the pixels to the upper left and upper right; pixels marked with a - are skipped). Out of those four values (together with C’s current value), take the minimum and make that the distance for C. You can see we are starting with known distances to objects and then propagating them outward. The final pass is almost identical, except we start at the bottom-right corner and sweep right to left, bottom to top. This time the window looks something like this:

- - -

- C 1

√2 1 √2

That’s it. By now you’ll have a lovely approximation of the distance from any pixel in your map to the closest object. If you check out Grevera’s paper, you can see the results of experimenting with different window sizes and distance offsets, and read about dead reckoning, which is useful for keeping track of true distances.
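The two sweeps can be sketched in a few lines of Python. This is an unsigned variant for brevity (objects initialized straight to zero, skipping the -1.f/edge-finding steps above), and all the names are mine:

```python
import math

INF = float("inf")
SQRT2 = math.sqrt(2.0)

def chamfer_distance(grid):
    """Two-pass chamfer distance transform (unsigned, for brevity).
    grid[y][x] == 1 marks object pixels; the result holds the
    approximate distance from each pixel to the nearest object."""
    h, w = len(grid), len(grid[0])
    # initialization: 0 on objects, "infinity" everywhere else
    d = [[0.0 if grid[y][x] else INF for x in range(w)] for y in range(h)]

    # forward sweep: left-to-right, top-to-bottom; the window covers the
    # left neighbor (cost 1) and the three pixels above (costs √2, 1, √2)
    for y in range(h):
        for x in range(w):
            for dx, dy, cost in ((-1, 0, 1.0), (-1, -1, SQRT2),
                                 (0, -1, 1.0), (1, -1, SQRT2)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    d[y][x] = min(d[y][x], d[ny][nx] + cost)

    # backward sweep: mirrored window, right-to-left, bottom-to-top
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            for dx, dy, cost in ((1, 0, 1.0), (1, 1, SQRT2),
                                 (0, 1, 1.0), (-1, 1, SQRT2)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    d[y][x] = min(d[y][x], d[ny][nx] + cost)
    return d
```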

**that one regret**

One fine day, Jerome (my boss) sent me a copy of a PDF he thought I’d be interested in. It was called “Rendering Worlds with Two Triangles with raytracing on the GPU in 4096 bytes” by Inigo Quilez (http://www.iquilezles.org/www/material/nvscene2008/rwwtt.pdf). It’s the paper that introduced me to raymarching, and it kicked off my obsession with procedurally generated, on-the-fly distance fields. The really obvious thing he mentions is that for any point in the distance field, it’s guaranteed that there won’t be any objects closer than the distance field value at that point. So if you’re marching along a ray, you no longer have to check every single pixel for occluders; you can just jump ahead by the distance to the closest object. It would have been absolutely perfect for the first pass of the Shooter 2 lighting system… if only I had actually used it! The only drawback is when a ray runs parallel to a nearby wall: because the closest object is always right next to you, you can’t jump ahead very far.
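The marching loop itself is tiny. A sketch in Python (the sampler callback, epsilon, and parameter names are my own invention, not anything from Quilez’s code):

```python
import math

def march_ray(sample, ox, oy, dx, dy, max_dist=100.0, eps=1e-3):
    """Sphere tracing along a ray from (ox, oy) in direction (dx, dy),
    which must be normalized. sample(x, y) returns the distance-field
    value at a point. Since nothing can be closer than that value, each
    iteration safely jumps ahead by it. Returns the distance travelled
    before hitting a surface, or max_dist if the ray escapes."""
    t = 0.0
    while t < max_dist:
        d = sample(ox + dx * t, oy + dy * t)
        if d < eps:      # on (or inside) a surface: hit
            return t
        t += d           # the big skip that makes raymarching fast
    return max_dist
```

For example, against an analytic circle of radius 1 centered at (5, 0), a ray from the origin along +x lands on the near edge at t = 4 in just one step, while the parallel-wall worst case degenerates into many tiny steps.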

The approach I took in Shooter 2 was slightly more… um… low level. I decided to calculate light ray length by loading between 4 and 16 pixels at a time into a vec_uchar16, and then checking in parallel for the earliest occluder, giving me the total ray length (see http://6cycles.maisonikkoku.com/6Cycles/6cycles/Entries/2010/4/17_n00b_tip__Psychic_computing.html). Of course, I was too busy unrolling, software pipelining, and micro-optimizing to care about the insane cost of loading sparse pixels along a ray and packing them into a vector. Actually, that’s not entirely accurate. I put a lot of work into coming up with an offline-generated table of ordered indices that would minimize the redundant loads and packing, but the overall cost of the first stage was still dominated by inefficient (some would say unnecessary and avoidable) data transforms. (Note: I experimented with ways to get around this, like http://6cycles.maisonikkoku.com/6Cycles/6cycles/Entries/2010/11/27_n00b_tip__Combining_writes.html, but none ended up shipping with Shooter 2.)
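The core idea — test a whole run of pixels along the ray at once and stop at the first occluder — can be sketched scalar-style in Python. The real thing packs each chunk into a vec_uchar16 and does a single SIMD compare per chunk; this is just the logic, with names of my choosing:

```python
def first_occluder(pixels, chunk=16):
    """Walk the ray's pixels in chunks. A chunk containing any nonzero
    pixel holds an occluder (the SPU version detects this with one
    16-wide compare, then locates the first set lane). Returns the index
    of the first occluding pixel, i.e. the ray length in pixels."""
    for base in range(0, len(pixels), chunk):
        block = pixels[base:base + chunk]
        if any(block):
            return base + next(i for i, p in enumerate(block) if p)
    return len(pixels)
```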

testing light occlusion against the oval and box defined by the distance field

So, as a joke, I decided to hack together a particle-free Shooter 2-like lighting demo on a platform far less powerful than the PS3, and the results were pretty amazing. Not only was I able to get a large number of lights going, but I was also able to add reflection and refraction, something I must admit would have looked insanely sexy with the Shooter fluid system :)

**There is no such thing in life as normal**

Even if you’re a Johnny Marr fan, you have to admit Morrissey has a point. The geometry for the objects that define my distance field doesn’t exist, and there are times I want the normals, for example when doing the reflection and refraction mentioned above. I thought back to basic calculus and remembered how to calculate gradients:

http://www.khanacademy.org/video/gradient-1?playlist=Calculus

http://www.khanacademy.org/video/gradient-of-a-scalar-field?playlist=Calculus
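In a discrete distance field, the gradient is easy to approximate with central differences, and since it points away from the nearest surface, normalizing it gives a usable normal. A minimal Python sketch (my naming; the sampling offset h is the “gradient distance” that gets widened later):

```python
import math

def field_normal(sample, x, y, h=1.0):
    """Central-difference gradient of the distance field, normalized.
    sample(x, y) returns the field value at a point; h is the sampling
    offset (widening it trades accuracy for smoothness on noisy fields)."""
    gx = sample(x + h, y) - sample(x - h, y)
    gy = sample(x, y + h) - sample(x, y - h)
    length = math.hypot(gx, gy)
    if length == 0.0:
        return 0.0, 0.0   # degenerate point: no well-defined normal
    return gx / length, gy / length
```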

Testing my newfound normals, I found something disturbing. When bouncing off the oval, there were certain points when the reflected ray would totally go nuts (see below where bouncing off two very close points gives two different results).

Interesting. I tried rendering some of the normals and suddenly the problem became clear

OK. The distance field itself is a low-resolution, noisy approximation of the true distance, and the normals are an approximation computed from the distance field, so I’d expect them to be crap, but we should be able to do better. I researched all kinds of interesting ways of improving the normals, things like edge curve fitting and bilinear filtering, but in the end a combination of blurring the distance field values and increasing the distance (from the current pixel) of the pixels used to get the gradient got me close enough while maintaining acceptable performance. Below are some of the things I tried and the results:
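The blur half of that fix is just a small box filter over the field, combined with passing a larger sampling offset to the gradient. A Python sketch (my names; border pixels are left untouched here for simplicity):

```python
def blur_field(d, passes=1):
    """3x3 box blur over a distance field stored as a list of rows; a
    cheap way to knock the quantization noise out of the field before
    taking gradients. Border pixels are left as-is in this sketch."""
    h, w = len(d), len(d[0])
    for _ in range(passes):
        out = [row[:] for row in d]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                out[y][x] = sum(d[y + j][x + i]
                                for j in (-1, 0, 1)
                                for i in (-1, 0, 1)) / 9.0
        d = out
    return d
```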

averaging the normals and increasing the gradient distance from 1 to 2

**Additional unrelated topic: moving raymarching into 3D**

One last thing: raymarching is an insanely cool technique with uses in dimensions other than two. It can also be used to do stuff in 3D! Since I’m not a graphics programmer and I suck at making stuff look good, I won’t waste too much time talking about the cute little guy I was able to make.

It took me about 15 minutes to get that first little demo up and running. I’m still experimenting with procedural, on-the-fly distance fields, and I might post again after a bit more math research. By the way, here is what it looks like when someone who knows what they are doing uses raymarching.