Omni-directional shadow mapping

Original Author: Bernat Muñoz

Introduction

If there’s one thing I like more than learning, it’s teaching, so I’ve tried in the past to give back as much knowledge as I’ve received from others: sometimes simply translating from English to Spanish, other times writing from my experiences on a concrete matter. So when blogs became trendy, I started writing about what I knew or was working on. But then I started to have less and less free time, and eventually stopped writing at all.

To keep this short, I plan to write about what I’m working on at home, mostly related to implementing different techniques in my graphics engine for PC. Some posts probably won’t be hard to grasp from a technical point of view, nor exclusive on the net, but I want to give a detailed enough explanation that lets you actually implement the technique, and know where to go from there. Also, I might get corrected if I assumed something that’s not true.

Another reason for writing about something I implemented just a few hours or days ago is that I can provide a working sample. I intend to create a very simple demo, containing as little as possible besides the described technique, so any doubts can be cleared up by just looking at the source.

Shadow mapping

When adding shadows to a graphics engine, most people start by implementing what I (and I guess most people) consider the easiest form of shadow mapping, which is spot light shadow mapping. This is quite logical, as it serves as a base for other techniques, and it’s quite straightforward to implement. The funny thing is that most people don’t actually add shadow mapping for other light types once this basic implementation is done. This might not be totally true of professional engines, but I’ve seen it happen a lot in hobby 3D engines. At least, that’s my experience; I would love to be corrected on this.

From an implementation point of view, there’s one major choice to make when dealing with omni-directional lights and their shadows: either use non-linear projections or multiple linear projections (see these slides from NVIDIA, page 43). I won’t talk about using non-linear projections, because I’ve never implemented omni-directional shadow maps with them, but you can gather some details from the NVIDIA slides linked. The good thing about multiple linear projections is that they’re a simple extension of spot light shadow mapping: the idea is to use several normal shadow maps to cover all directions (more details on this later).

So, wrapping up, I’m going to detail how to do omni-directional shadow mapping using multiple linear projections, assuming that you already know or have a working implementation of spot light shadow mapping. Let’s hope this is actually useful to someone :)

Omni-directional shadow mapping: general idea

Let’s first detail what we want to implement. Keeping in mind the technique we’re aiming at, we need to calculate several shadow maps that cover all directions. This means calculating 6 shadow maps, each one following an axis direction (±X, ±Y, ±Z), so we cover all the space that the light potentially illuminates and casts shadows into. In order to avoid any gaps or overlaps between shadow maps, we have to force their FOV to 90º. Visually, we want to mimic this setup:
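As an aside, the 90º constraint makes the per-face projection matrix trivial: with a square, 90º frustum, the X/Y scale factors are both 1. Here is a minimal Python sketch of such a matrix, using the standard OpenGL clip-space convention (the name perspective_90 is just for illustration, not engine code):

```python
def perspective_90(near, far):
    """Projection matrix (row-major) for a symmetric 90-degree FOV,
    aspect-ratio-1 frustum, i.e. the frustum each cubemap face needs.
    With fov = 90 degrees, tan(fov / 2) = 1, so the X/Y scales are 1."""
    a = -(far + near) / (far - near)       # Z row, scale term
    b = -2.0 * far * near / (far - near)   # Z row, translation term
    return [
        [1.0, 0.0,  0.0, 0.0],
        [0.0, 1.0,  0.0, 0.0],
        [0.0, 0.0,    a,   b],
        [0.0, 0.0, -1.0, 0.0],
    ]
```

All six faces share this same projection; only the view matrix changes per face.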

Then, we could just attach those textures to a cube map and, roughly, use the light -> pixel vector to make the shadow lookup. The problem is that, as detailed in the slides linked before, cubemaps and shadow textures don’t actually match what we need. The main problem is that the cubemap distance is radial, while a planar one is what we would like to use. That can be fixed: as we have a 90º projection, the depth value can be easily reconstructed from the light -> pixel vector.

Another problem with cubemaps is the lack of hardware support for Z comparison and PCF. That could be easily fixed by just doing the comparison and PCF taps ourselves, but it’s quite a pity losing those, performance wise. A better solution (from the performance point of view) is to lay out all the shadow maps in a single 2D texture, set up in a grid. Then we can use an indirection cubemap to map the light -> pixel vector to the 2D texture, and do the shadow computation in a regular 2D shadow texture, thus getting hardware Z comparison and PCF. This is known as Virtual Shadow Depth Cube Textures (VSDCT).

Omni-directional shadow mapping: generating the indirection texture and shadow maps

By now, you should have a pretty clear idea of the whole process. First of all, we need to generate the indirection cubemap. This is simply a cubemap that maps from the light->pixel vector to the 2D texture actually containing our calculated shadow maps. The simplest approach is using a small two-component cubemap (GL_RG16 in OpenGL). Then, for each face of this cubemap, just generate the corresponding 2D UVs: for example, with the 2D grid layout from the prior image, and assuming a 2D texture size of 1024×1024, the UVs for the +X face would go from (0,0) to (1024/3, 1024/2). That means that, if the indirection cubemap is 256×256, the +X face would contain (0,0) at pixel (0,0), and (1024/3, 1024/2) at pixel (255, 255). All the omni lights that use the same texture size can share the indirection texture, so you’ll want to generate it during loading/light setup.
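As an illustration, here is a CPU-side Python sketch of how one face of the indirection data could be filled. The 3×2 cell layout and the face-to-cell assignment in FACE_CELL are assumptions made for this example; only the mapping idea matters:

```python
# Hypothetical 3x2 grid layout: face name -> (column, row) in the atlas.
FACE_CELL = {"+X": (0, 0), "-X": (1, 0), "+Y": (2, 0),
             "-Y": (0, 1), "+Z": (1, 1), "-Z": (2, 1)}

def build_indirection_face(face, cube_size, atlas_w, atlas_h):
    """Return, for one cubemap face, a cube_size x cube_size grid of
    normalized (u, v) pairs pointing into that face's cell of the 2D
    shadow map atlas."""
    col, row = FACE_CELL[face]
    cell_w, cell_h = atlas_w / 3.0, atlas_h / 2.0
    texels = []
    for y in range(cube_size):
        row_vals = []
        for x in range(cube_size):
            # Spread texel (x, y) of the cube face across the whole cell.
            u = (col * cell_w + (x / (cube_size - 1)) * cell_w) / atlas_w
            v = (row * cell_h + (y / (cube_size - 1)) * cell_h) / atlas_h
            row_vals.append((u, v))
        texels.append(row_vals)
    return texels
```

In the real engine each (u, v) pair would be written into the corresponding texel of the GL_RG16 cubemap face.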

Then, whenever the shadow maps need to be updated, you should repeat the following process six times:

  1. Generate a camera pointing along one of the basis vectors, from the light position
  2. Set up the render target to render the shadow map to one of the positions of the grid
  3. Cull and render depth from that view

This is pretty straightforward: generating the views is a matter of generating a regular view along the axis you’re currently rendering. For example, if you use a radius as a property of your light, and you’re rendering the +X axis, that comes down to generating the view from light_position to light_position + vec(radius, 0, 0). Then, you need to set up your render target offset and size for your current axis, so you only render the part of the VSDCT you are updating.
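The loop above could be sketched like this in Python; the face ordering, the up vectors and the shadow_pass helper are illustrative assumptions (a real engine would feed the targets into its look-at and glViewport calls):

```python
# Hypothetical per-face camera setup: look direction and a valid "up"
# vector for each axis (up must not be parallel to the look direction).
FACE_CAMERAS = [
    ("+X", ( 1, 0, 0), (0, -1, 0)),
    ("-X", (-1, 0, 0), (0, -1, 0)),
    ("+Y", ( 0, 1, 0), (0, 0,  1)),
    ("-Y", ( 0,-1, 0), (0, 0, -1)),
    ("+Z", ( 0, 0, 1), (0, -1, 0)),
    ("-Z", ( 0, 0,-1), (0, -1, 0)),
]

def face_viewport(face_index, atlas_w, atlas_h):
    """Viewport rectangle (x, y, w, h) of one face inside the 3x2 grid."""
    col, row = face_index % 3, face_index // 3
    w, h = atlas_w // 3, atlas_h // 2
    return (col * w, row * h, w, h)

def shadow_pass(light_pos, radius, atlas_w, atlas_h):
    """Sketch of the per-light update loop: for each axis, build the
    look-at target and the viewport; the real engine would then cull
    and render depth into that region of the atlas."""
    passes = []
    for i, (name, direction, up) in enumerate(FACE_CAMERAS):
        target = tuple(p + radius * d for p, d in zip(light_pos, direction))
        passes.append((name, target, up, face_viewport(i, atlas_w, atlas_h)))
    return passes
```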

Omni-directional shadow mapping: shadowing

Now, you have all the elements, so the process to actually render the lights is the following:

  1. Bind the indirection cube map
  2. Bind the 2D shadow map grid
  3. Calculate the light->pixel vector (you’ll probably want to calculate it in the vertex shader, then interpolate and normalize it in the fragment shader)
  4. Look up the UV from the indirection cube map using the light->pixel vector, getting the UV of the corresponding shadow map among the 6 laid out on the 2D shadow map grid
  5. Calculate the eye space Z from the light->pixel vector
  6. Build a 3-component vector: XY from the indirection cube map lookup, Z from the value calculated in step 5
  7. Look up the 2D shadow map grid with the vector built in step 6, and get hardware comparison and PCF :)
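Steps 3 to 5 hinge on picking the dominant axis of the light->pixel vector, which is exactly what the hardware does when selecting a cubemap face. A small Python sketch of that selection, returning both the face and the largest magnitude (which step 5 needs):

```python
def dominant_face(v):
    """Which cubemap face a lookup with vector v hits: the face of the
    component with the largest absolute value. Returns (face, magnitude)."""
    ax, ay, az = abs(v[0]), abs(v[1]), abs(v[2])
    if ax >= ay and ax >= az:
        return ("+X" if v[0] > 0 else "-X", ax)
    if ay >= az:
        return ("+Y" if v[1] > 0 else "-Y", ay)
    return ("+Z" if v[2] > 0 else "-Z", az)
```

In the shader you never call this yourself; the indirection cubemap lookup does the face selection for you, and only the magnitude is computed explicitly for step 5.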

The only tricky part is step 5. In order to get all the shadows in one pass, and fast, we don’t pass the view matrices for each of the 6 axes, so we need to calculate that Z value from the light->pixel vector. To get the eye space Z (keeping in mind the 90º projection), we just select the component of the light->pixel vector with the largest magnitude; then we only need to transform it to clip space (by the projection matrix). We could, of course, just use the light projection matrix, but there’s an easier way: as we only need to project the Z value, we can derive it from the canonical projection matrix, ending up with the following computation (n and f being the near and far planes of the light’s projection):

Z = (f + n) / (f − n) − (2 · f · n) / ((f − n) · Zeye)

Where Zeye is the largest-magnitude component of the light->pixel vector, as explained earlier. The resulting Z, once remapped from [−1, 1] to [0, 1], can then be compared to the stored shadow map Z.
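Putting step 5 together in one place, here is a hedged Python sketch of the depth reconstruction (shadow_depth is a made-up name; near and far are the near and far planes of the light’s 90º projection):

```python
def shadow_depth(light_to_pixel, near, far):
    """Reconstruct the [0, 1] depth value the 90-degree shadow pass
    stored, using only the light->pixel vector: take the component
    with the largest magnitude as the eye-space distance, then push
    it through the Z row of the canonical projection matrix."""
    z_eye = max(abs(c) for c in light_to_pixel)
    z_ndc = (far + near) / (far - near) \
          - (2.0 * far * near) / ((far - near) * z_eye)
    return 0.5 * z_ndc + 0.5  # NDC [-1, 1] -> depth buffer [0, 1]
```

A point on the near plane maps to 0 and a point on the far plane maps to 1, matching what the depth buffer stored during the shadow pass.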

Omni-directional shadow mapping: issues

Now you know all that’s needed to implement omni-directional shadow mapping. The main issue with the described technique is the contact points between the different shadow maps, and how filtering affects them. The problem comes from lookups into the 2D shadow map grid actually grabbing depth information from one of the other 5 shadow maps, thus getting wrong results. If you do some extra filtering besides the hardware PCF, you can start getting really noticeable artifacts. Leaving a border of several pixels between the shadow maps on the 2D shadow map grid can hide some errors, but it’s not a perfect solution. You can also try scaling the UVs a bit depending on camera distance, or on the number of taps if that is variable in your engine; that can also hide the most visible errors, which might be enough depending on your situation.

Closing and what’s next

Well, this was a pretty simple article about new tech I wanted to add to my engine, together with a simple test bed that I plan to release with every post; this time that was impossible due to time constraints and me being a bit busier than expected this week. Let’s hope I actually finish a light framework so I’m able to provide a complete (and very simple) sample with every post, so you can get the explanation AND the source code in case I missed detailing something.

I’ll probably be working on more shadow-related techniques for the next articles; VSM, PSSM and ID shadow maps especially gather my interest, now that I’ve warmed up a bit :)