So you want to render transparent objects in your deferred renderer? Good luck. To quickly reiterate why transparency is difficult in deferred rendering… wait, no, go and have a look at John Chapman’s article about it. It’s a good read.
It really depends on what kind of transparent objects you have, but if you happen to have a simple scene, where you can afford to manually Z-sort your transparent objects by their center points, then the technique outlined below might suffice. It’s really simple and performs well. I came up with it for WebGL, but naturally, use it wherever you want; it’s pretty generic.
It also depends on how your deferred pipeline works (how you apply shading, lighting, and direct and indirect illumination), but let’s make some assumptions:
- You can divide your objects into two sets: opaque and transparent
- You can run your shaders on different sets of objects and output to different textures
- You have a normal/depth render pass that writes a fragment’s world-space normal to the RGB channels and its depth to the alpha channel of a float color texture attached to an FBO
- You have an albedo pass that outputs a fragment’s color and transparency
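Just to make the normal/depth assumption concrete, here is a minimal CPU-side sketch (plain Python, illustrative names, not actual shader code) of what each texel of that pass holds: world-space normal in RGB, depth in A.

```python
# Hypothetical sketch of the normal/depth pass output: one float RGBA texel
# per fragment, world-space normal in RGB and the fragment's depth in A.
def pack_normal_depth(normal, depth):
    """normal: (nx, ny, nz) world-space unit vector; depth: float."""
    nx, ny, nz = normal
    return (nx, ny, nz, depth)

def unpack_normal_depth(texel):
    """Recover the normal and depth from a stored RGBA texel."""
    return texel[:3], texel[3]

texel = pack_normal_depth((0.0, 1.0, 0.0), 4.2)
normal, depth = unpack_normal_depth(texel)
```

In the real pipeline this happens in the fragment shader, with the texture being a float color attachment of the FBO.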
First, this is the image that we’re aiming for:
Transparency in the back, in between, and in front, with an opaque object in front of a transparent one, and even two transparent objects overlapping: a full-featured example. To render this image, we first need to render normal/depth maps for opaque and transparent objects separately. It looks like this (note that normals and depth are in a single texture):
Opaque objects (normals)
Opaque objects (depth)
Transparent objects (depth)
Next we need albedo maps. These maps tell us the actual surface color/texture and, in the case of transparent objects, its alpha value. For the spheres in our example, the albedo is simple: we render it with depth testing and backface culling enabled, and voilà.
Opaque objects (albedo)
For the transparent objects though, it’s a bit more complicated. You can’t rely on depth testing here; instead, you need to manually depth-sort those objects. That’s because we’re going to use alpha blending, and the depth buffer alone won’t do. If you can precalculate the center points of those transparent objects, it’s easy. You just transform those points by the view matrix to get them into camera space, and sort them by their Z coordinate. You will use this sorted set to draw those objects back to front. Now the trick is to render with depth testing disabled and with the blending function set to GL_ONE, GL_ONE_MINUS_SRC_ALPHA. This means that the objects in front add to the color of the objects already drawn behind them. The result is this:
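The sorting and blending steps above can be sketched like this (plain Python; `view_z` and `back_to_front` are hypothetical names, and the blend function is the premultiplied-alpha “over” operator that GL_ONE, GL_ONE_MINUS_SRC_ALPHA implements):

```python
def view_z(view_matrix, point):
    # Transform a world-space center point by the view matrix and keep
    # only the camera-space Z. view_matrix is a row-major 4x4 list of lists.
    x, y, z = point
    row = view_matrix[2]  # the third row produces camera-space Z
    return row[0] * x + row[1] * y + row[2] * z + row[3]

def back_to_front(objects, view_matrix):
    # OpenGL camera space looks down -Z, so farther objects have a more
    # negative Z; ascending camera-space Z is back-to-front order.
    return sorted(objects, key=lambda obj: view_z(view_matrix, obj["center"]))

def blend_one_one_minus_src_alpha(src_rgba, dst_rgb):
    # GL_ONE, GL_ONE_MINUS_SRC_ALPHA with a premultiplied source color:
    # dst = src.rgb + (1 - src.a) * dst
    r, g, b, a = src_rgba
    return tuple(s + (1.0 - a) * d for s, d in zip((r, g, b), dst_rgb))
```

Note the premultiplication: for GL_ONE, GL_ONE_MINUS_SRC_ALPHA to behave like classic alpha blending, the shader has to output color already multiplied by alpha.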
As it turns out, you can’t use this image directly, because it contains pixels that should not be visible in the final image: the fragments that are occluded by (behind) the opaque objects. To fix this, we use the depth map of the opaque objects in this albedo shader, and we compare the depth of the current (semi-transparent) fragment with the depth stored at the same pixel in the opaque objects’ depth map. If the opaque object’s depth is lower than the depth of this semi-transparent one, the opaque object is occluding this fragment, and we discard it, or set it to zeros.
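That per-fragment test can be sketched in a few lines (plain Python standing in for the fragment shader logic; in GLSL this would be a texture fetch followed by `discard` or writing zeros):

```python
def resolve_transparent_fragment(trans_rgba, trans_depth, opaque_depth):
    # If the opaque surface at this pixel is closer to the camera
    # (lower stored depth), the transparent fragment is occluded:
    # output zeros, the equivalent of discarding it.
    if opaque_depth < trans_depth:
        return (0.0, 0.0, 0.0, 0.0)
    return trans_rgba
```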
Transparent objects (albedo & alpha)
To composite the final albedo map, we simply use the alpha value from the transparent objects’ albedo map to blend between transparent and opaque albedo maps.
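Assuming the transparent albedo map ended up premultiplied (which it does after the GL_ONE, GL_ONE_MINUS_SRC_ALPHA pass), the composite is the same “over” operation per pixel; a minimal sketch:

```python
def composite_albedo(trans_rgba, opaque_rgb):
    # Blend the (premultiplied) transparent albedo over the opaque one,
    # using the transparent layer's alpha: out = trans + (1 - a) * opaque.
    r, g, b, a = trans_rgba
    return tuple(t + (1.0 - a) * o for t, o in zip((r, g, b), opaque_rgb))
```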
If you want to apply some shading and lighting (Lambert, Phong, you name it), you will have to do something very similar to the above: render it separately for transparent and opaque objects and blend between the results, using the values from the depth maps and the alpha from the albedo maps. Have fun blending.