Why are these new shadow map techniques filterable!?
I’ve only been doing graphics stuff seriously for about a year. One of the first things I tackled was shadow mapping. I got a simple shadow map implementation going and then implemented Variance Shadow Maps (VSM). VSM is based on statistics, and the big deal is that you can filter your shadow map and get filtered shadows as a result. At the time, I didn’t understand why that was true; I had assumed it was some special property of the Chebyshev inequality, so I just implemented it and moved on. Yesterday I quickly tested Exponential Shadow Maps (ESM), and while looking at it, it finally struck me why these techniques are filterable:
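For reference, here’s a minimal sketch of what the VSM test looks like (the function and variable names here are mine, not from the paper): you store depth and depth squared in the shadow map, filtering gives you the two moments, and Chebyshev’s inequality turns them into an upper bound on how much light reaches the receiver.

    #include <algorithm>

    // Minimal VSM-style occlusion test (names are mine, not from the paper).
    // The filtered shadow map gives the first two moments of occluder depth:
    // m1 = E[z], m2 = E[z^2].
    float vsmVisibility(float m1, float m2, float receiverDepth)
    {
        if (receiverDepth <= m1)
            return 1.0f;                                // in front of the mean occluder: lit

        float variance = std::max(m2 - m1 * m1, 1e-4f); // clamp to dodge divide-by-zero
        float d = receiverDepth - m1;
        return variance / (variance + d * d);           // Chebyshev one-sided upper bound
    }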
These new shadow map techniques are smooth functions based on the occluder and receiver distances to the light!
Above are my Microsoft Paint-created graphs (I grabbed this style of graphing from Pat Wilson, heh). On the left is ESM; you can see that it smoothly drops from 1 (fully lit) to 0 (fully shadowed). So if you massage your shadow map and end up with values on the x-axis (which is occluder – receiver), there will be a band of grey values, basically from -5 to 0 in the graph above (by the way, this is not what e^(c*(o-r)) actually looks like, heh). On the right, standard shadow mapping is just a step function: the moment occluder – receiver dips even a little bit below zero, you’re completely shadowed.
This is also a reason why you don’t need to worry about shadow map bias as much with these new techniques. Because the shadow function isn’t all or nothing, if occluder – receiver is -.9999 you’re going to look basically lit, but in standard shadow mapping -.9999 is fully shadowed and you’ll get shadow acne.
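To make that concrete, here’s a rough side-by-side sketch of the two tests (the sharpness constant c and the clamp are my assumptions; a real ESM implementation filters exp(c * occluder) in the map and multiplies by exp(-c * receiver) at shading time):

    #include <algorithm>
    #include <cmath>

    // Standard shadow mapping: a hard step on (occluder - receiver).
    float standardShadowTest(float occluderDepth, float receiverDepth)
    {
        return (occluderDepth - receiverDepth >= 0.0f) ? 1.0f : 0.0f;
    }

    // ESM-style test: a smooth exponential falloff on the same quantity.
    // Larger c pushes the curve closer to the hard step.
    float esmShadowTest(float occluderDepth, float receiverDepth, float c = 80.0f)
    {
        float v = std::exp(c * (occluderDepth - receiverDepth));
        return std::min(v, 1.0f); // values >= 1 just mean "fully lit"
    }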
So you could draw any random function into a 1D texture and use that for shadow mapping! These other techniques are just ways of creating that function in a way that is fast and makes sense visually.
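As a toy illustration of that idea (entirely hypothetical, not how any of these papers actually do it): tabulate whatever falloff curve you like over occluder – receiver and look it up at shading time.

    #include <algorithm>
    #include <array>
    #include <cmath>

    // Toy illustration only: bake an arbitrary falloff over occluder - receiver
    // in [-5, 0] (the range sketched in the graphs above) into a 1D table.
    constexpr int kLutSize = 256;

    std::array<float, kLutSize> buildFalloffLut(float c = 2.0f)
    {
        std::array<float, kLutSize> lut{};
        for (int i = 0; i < kLutSize; ++i)
        {
            float x = -5.0f + 5.0f * i / (kLutSize - 1); // x = occluder - receiver
            lut[i] = std::exp(c * x);                    // could be any curve you can draw
        }
        return lut;
    }

    float lookupVisibility(const std::array<float, kLutSize>& lut,
                           float occluderDepth, float receiverDepth)
    {
        float x = occluderDepth - receiverDepth;
        if (x >= 0.0f)
            return 1.0f;                                 // in front of the occluder: lit
        float t = std::clamp((x + 5.0f) / 5.0f, 0.0f, 1.0f);
        return lut[static_cast<int>(t * (kLutSize - 1))];
    }

Marco’s comment below explains why you can’t actually get away with just any curve once filtering and moving lights enter the picture.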
The silly thing is that I knew from the get-go why standard shadow maps were not filterable; I just didn’t “invert” my thinking to figure out why these new methods were.
bzzt wrote:
hah, this is explained in this slidedeck:
http://developer.download.nvidia.com/presentations/2008/GDC/GDC08_SoftShadowMapping.pdf
Posted on 10-Sep-08 at 3:30 pm
Marco Salvi wrote:
Brian,
You should buy ShaderX6 and read my article about ESM; you’ll also find your own interpretation of it in there 🙂
It’s not true, though, that just any smooth monotonic function will do. Note that in the ESM case there are two ways of approximating a step function: the one you showed, and another one that goes from -inf to 1. While the first one suffers from light bleeding on non-planar receivers, the second one suffers from over-darkening. That’s the price one has to pay with approximations 😉
Even though it’s easy to replace the step function/occlusion test with another function that looks ‘good’, you are going to have all sorts of different problems when geometry or lights move. Moreover, a generic function doesn’t allow you to know (in the general case) what rules you should apply while filtering your shadows.
Blurring your map(s) like you can do with ESM or VSM is not magic; it only works for specific reasons. For example, I used exponentials because those are the only functions that can make light bleeding invariant under translations. It doesn’t sound like a big deal, but you don’t really want your shadows to get significantly darker or lighter just because your objects are moving around some light 😉
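(Spelling that out in symbols, with my own notation rather than anything from the article: write $o_i$ for the occluder depths under the filter kernel with weights $w_i$, $r$ for the receiver depth, and $t$ for a common shift of the whole scene along the light direction. The filtered ESM term is unchanged:)

$$\left(\sum_i w_i\, e^{c (o_i + t)}\right) e^{-c (r + t)} = e^{c t}\left(\sum_i w_i\, e^{c o_i}\right) e^{-c t}\, e^{-c r} = \left(\sum_i w_i\, e^{c o_i}\right) e^{-c r}$$

The translation factor $e^{ct}$ cancels, so whatever light bleeding the filtering introduces stays the same as the geometry moves; a generic falloff curve wouldn’t factor this way.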
VSM replaces the step function with a 2D -> 1D mapping; perhaps you can experiment with some 3D textures as well 🙂
Posted on 11-Sep-08 at 10:47 pm
bzzt wrote:
Woah! Thanks for commenting! I’ve got ShaderX6, and I’ll probably have to re-read your article a few more times before everything clicks for me. 😉
Posted on 12-Sep-08 at 12:08 am
Marco Salvi wrote:
This means I have to improve my article-writing skills 🙂
Posted on 12-Sep-08 at 9:46 am
bzzt wrote:
Hahaha, nah, I just need to slow down and work through the math. The last time I read through it, I was mostly concentrating on the log filtering, which is quite clear! It was a good call to list all of the transformations you made to go from w0*e^x + w1*e^y (on page 270) to the final filtering equation; it made it pretty clear how it works.
I just need to follow the rest of the math in the theory section as closely as I did the log filtering to get the rest of it. 😉
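(For anyone else following along, the log-space filtering trick being referred to boils down to an identity along these lines; this is my paraphrase, not a quote from the article. You factor out the largest exponent so the huge exponentials never have to be summed directly:)

$$\log\left(w_0 e^{x} + w_1 e^{y}\right) = x + \log\left(w_0 + w_1 e^{\,y - x}\right), \quad x \ge y$$

With $x$ chosen as the larger exponent, $e^{y-x} \le 1$, so nothing overflows, and the same factoring extends to any number of weighted samples.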
Posted on 12-Sep-08 at 10:02 am