Back to basics: Projection transform
While working on Parallel Split Shadow Maps, something that had been tripping me up was calculating the bounding box of a frustum in a light's clipspace. I need this bounding box in order to scale the projection matrix to look at the piece of the scene that I'm interested in. I've noticed that other people have struggled with this problem as well, so I thought I'd post what I found.
My problem: Given a frustum in world space, find the bounding box of that frustum in the light's clip space. My attack was this:
1. Transform each corner point of the frustum into light space.
2. Using the light's projection matrix, transform this point into clipspace.
3. Project the point by dividing by the point's w coordinate.
4. Finally, check against the current min/max points (standard bounding box construction).
These are the correct general steps (sketched in code below). The problem was that whenever a point that should have been one of the bounding box's min/max points went behind the light, it wasn't counted correctly anymore. What gives?
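To make the steps concrete, here's roughly what that loop looks like. This isn't the actual PSSM code: buildLightClipBounds, the frustumCorners array, and the matrix arguments are made up for illustration, and the Torque math calls (mulP, mul, setMin/setMax) are just the ones I'd reach for.

// Rough sketch of the naive loop described above (not the real PSSM code).
void buildLightClipBounds(const Point3F frustumCorners[8],
                          const MatrixF& worldToLight,
                          const MatrixF& lightProj,
                          Point3F& outMin, Point3F& outMax)
{
   outMin.set( F32_MAX,  F32_MAX,  F32_MAX);
   outMax.set(-F32_MAX, -F32_MAX, -F32_MAX);

   for (U32 i = 0; i < 8; i++)
   {
      // 1. World space -> light space.
      Point3F lightSpacePt;
      worldToLight.mulP(frustumCorners[i], &lightSpacePt);

      // 2. Light space -> clip space via the light's projection matrix.
      Point4F clipPt(lightSpacePt.x, lightSpacePt.y, lightSpacePt.z, 1.0f);
      lightProj.mul(clipPt);

      // 3. Perspective divide. This is the step that bites when w < 0.
      Point3F ndcPt(clipPt.x / clipPt.w, clipPt.y / clipPt.w, clipPt.z / clipPt.w);

      // 4. Standard min/max bounding box construction.
      outMin.setMin(ndcPt);
      outMax.setMax(ndcPt);
   }
}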
Quick recap of clipspace/projection:
The clip matrix for D3D looks like this:
| xZoom   0       0                                               0 |
| 0       yZoom   0                                               0 |
| 0       0       (farPlane + nearPlane)/(farPlane - nearPlane)   1 |
| 0       0       (nearPlane * farPlane)/(nearPlane - farPlane)   0 |
This basically allows for field of view (xZoom/yZoom) and near/far plane clipping. The goal of the matrix above is to scale the x, y, z & w coordinates of a point to the values needed to project it to 2d and to provide clipping information. If 0 <= z <= w, then the point is within the near and far planes. This also works for the x & y coordinates by comparing against -w and w.
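In code, that test on a raw clip-space point (before any divide) looks something like this. It's a hypothetical little helper, assuming the D3D depth range and a Torque-style Point4F:

// True if a clip-space point survives clipping under the D3D convention:
// 0 <= z <= w, and -w <= x <= w, -w <= y <= w.
bool insideClipVolume(const Point4F& p)
{
   return (p.z >= 0.0f) && (p.z <= p.w) &&
          (p.x >= -p.w) && (p.x <= p.w) &&
          (p.y >= -p.w) && (p.y <= p.w);
}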
For my problem, we can simplify this matrix down to this:
| 1 0 0 0 |
| 0 1 0 0 |
| 0 0 1 1 |
| 0 0 0 0 |
Which turns projection into just dividing x & y by z. This is the way most of us did 3d when we were little kids.
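In code, that old-school projection is nothing more than this (a toy sketch using Torque-style point types):

// With the simplified matrix above, w comes out equal to z,
// so projecting is just dividing x and y by z.
Point2F projectOldSchool(const Point3F& p)
{
   return Point2F(p.x / p.z, p.y / p.z);
}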
The issue was that my mental model of projection was wrong. I basically thought of it as a flattening operation. That is true when the points are all in front of the camera, but when points are behind the camera the x & y coordinates actually get mirrored, because the z (and therefore w) coordinate is negative! So if z = -1, it's going to flip x & y, duh!
In my case, I needed to divide by the absolute value of w to get the number I needed. I doubt this is an issue that will come up often for people, but I just thought I'd share what I learned from repeatedly smashing my head against this problem. ;)
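For completeness, the fix inside the bounding box loop sketched earlier was just to divide by the absolute value of w before the min/max check. Again, this is a sketch rather than the exact code (mFabs is the Torque fabs wrapper; plain fabsf would do the same):

// 3. (fixed) Perspective divide by |w| instead of w, so points behind
//    the light don't get their x & y mirrored before the min/max check.
F32 absW = mFabs(clipPt.w);
Point3F ndcPt(clipPt.x / absW, clipPt.y / absW, clipPt.z / absW);

outMin.setMin(ndcPt);
outMax.setMax(ndcPt);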
Notes: Here’s the slide-deck that triggered this in my head: http://www.terathon.com/gdc07_lengyel.ppt
Slide 7 has a great graphic: basically, the red area is the flipped coordinates.
Example:

In front of the camera:
untranslated:             -10.000000 100.000000 0.000000 1.000000
clipspace:                -10.000000 -0.000006 99.909988 100.000000
normalized device coords: -0.100000 -0.000000 0.999100

Behind the camera:
untranslated:             -10.000000 -100.000000 0.000000 1.000000
clipspace:                -10.000000 0.000006 -100.110016 -100.000000
normalized device coords: 0.100000 -0.000000 1.001100
You can see that behind the camera the x & y coordinates are flipped!
Here’s the code for the above little experiment:
void testProj(Point4F testPt)
{
   MatrixF proj(GFX->getProjectionMatrix());
   Con::printf("untranslated: %f %f %f %f", testPt.x, testPt.y, testPt.z, testPt.w);

   proj.mul(testPt);
   Con::printf("clipspace: %f %f %f %f", testPt.x, testPt.y, testPt.z, testPt.w);

   testPt.x /= testPt.w;
   testPt.y /= testPt.w;
   testPt.z /= testPt.w;
   Con::printf("normalized device coords: %f %f %f", testPt.x, testPt.y, testPt.z);
}
Point4F inFrontOfCamera(-10.0f, 100.0f, 0.0f, 1.0f);
Point4F behindOfCamera(-10.0f, -100.0f, 0.0f, 1.0f);

Con::printf("infront");
testProj(inFrontOfCamera);

Con::printf("behind");
testProj(behindOfCamera);