Blueprint Shader With Depth-Peeling in Unity

Shahriar Shahrabi
Nov 21, 2019


A blueprint look requires more than a mesh with outlines on its visible sides. Blueprints typically showcase the entire structure of the object, including its inner sides and hidden faces. To achieve this effect, I am going to implement the technique described in the GPU Gems chapter linked under Further Reading, in Unity. Meshes used in this article are from realities.io and Azad Balabanian

As usual, you can find the code on my Github: https://github.com/IRCSS/BluePrintShader

Top-Down Look

The GPU Gems article is very well written, so I won’t repeat the points stated there. A very short summary: we want to apply an outline effect not only to the visible edges, but also to the hidden edges that are occluded by the faces closest to the camera. To achieve that we need a technique called depth peeling, a multi-pass technique where each pass uses the depth values of the previous pass to cull and discard the fragments of the faces closest to the camera, making the faces behind them visible. Each pass reveals one layer of the object, which can then be rendered however we wish.
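To make that concrete, here is roughly what the peel test looks like in a fragment shader. This is a minimal sketch, not the repository’s actual code: the struct layout and the _PreviousLayerDepth texture name are my own assumptions, and the depth is assumed to live in the alpha channel, as described later in this article.

    struct v2f
    {
        float4 pos         : SV_POSITION;
        float4 screenPos   : TEXCOORD0; // from ComputeScreenPos in the vertex shader
        float3 worldNormal : TEXCOORD1;
        float  depth       : TEXCOORD2; // linear eye depth, written in the vertex shader
    };

    sampler2D _PreviousLayerDepth; // result of the previous peeling pass

    // Note: the pass should render with Cull Off, so inner and back faces survive.
    float4 frag (v2f i) : SV_Target
    {
        float2 screenUV = i.screenPos.xy / i.screenPos.w;

        // Depth of the layer that survived the previous pass
        // (cleared to 0 for the first pass, so nothing is peeled).
        float previousDepth = tex2D(_PreviousLayerDepth, screenUV).a;

        // Peel: discard everything at or in front of the previous layer. The small
        // bias avoids re-accepting the same surface due to precision.
        if (i.depth <= previousDepth + 0.0001) discard;

        return float4(normalize(i.worldNormal), i.depth);
    }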

Here are the concrete steps we need to take in Unity to implement this technique:

  1. A setup where we can render the mesh in multiple passes each frame and save the intermediate results in an array of render textures
  2. In each pass, render the depth and normal of each fragment to a render texture; the depth texture is used to peel away the mesh in the next pass
  3. Using an edge detection filter on the array of render textures in a screen-space pass, detect the visible and invisible edges of the mesh, and save these edge maps in another array of render textures
  4. Gather the edge maps of the different layers and composite them into one image.

Basic Setup

For stuff like this, I am a fan of using command buffers. They give you a lot of control over when the code is executed in Unity’s rendering pipeline, and they are easy to remove and re-add at any given time.
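Hooking one into a camera only takes a few lines. A minimal sketch, with the component name and camera event being my own choices; the point is just how the buffer is attached and removed:

    using UnityEngine;
    using UnityEngine.Rendering;

    public class BlueprintEffect : MonoBehaviour
    {
        CommandBuffer cmd;

        void OnEnable()
        {
            cmd = new CommandBuffer { name = "Blueprint Depth Peeling" };
            // Fill cmd with the peeling passes here (see the loop sketch below).
            GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterForwardOpaque, cmd);
        }

        void OnDisable()
        {
            GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterForwardOpaque, cmd);
            cmd.Release();
        }
    }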

If you want the first n layers of your mesh, you need n render textures, written to by n passes in a loop of sorts. You could also keep peeling the object until nothing is left, though I would avoid that for the sake of determinism. How many layers the object has is a function of its topology, but also of your viewing angle, so it is better to decide in advance how many layers you peel.
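A fixed-layer-count loop inside the same component could look like this. Again a sketch rather than the repository’s code: layerCount, peelMaterial, and targetRenderer are names I made up, and the render textures are created once up front, not per frame.

    const int layerCount = 4;
    RenderTexture[] layers;

    void CreateLayers()
    {
        layers = new RenderTexture[layerCount];
        for (int i = 0; i < layerCount; i++)
            layers[i] = new RenderTexture(Screen.width, Screen.height, 24,
                                          RenderTextureFormat.ARGBFloat);
    }

    void BuildPeelingBuffer(CommandBuffer cmd, Renderer targetRenderer, Material peelMaterial)
    {
        for (int i = 0; i < layerCount; i++)
        {
            cmd.SetRenderTarget(layers[i]);
            cmd.ClearRenderTarget(true, true, Color.clear); // alpha (depth) cleared to 0

            // Feed the previous layer into the peel test; a black texture on the
            // first pass means nothing gets discarded.
            cmd.SetGlobalTexture("_PreviousLayerDepth",
                i == 0 ? (RenderTargetIdentifier)Texture2D.blackTexture
                       : (RenderTargetIdentifier)layers[i - 1]);

            cmd.DrawRenderer(targetRenderer, peelMaterial);
        }
    }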

Your render textures can be any size you want. If you have fill rate issues, you can render to a texture only as big as the screen-space bounds of the mesh, though compositing it back onto a screen-space texture is a bit more work, so I am going to render the entire screen for each peeling pass. I will also render at the screen resolution, since I don’t want to deal with minification and magnification aliasing issues. This setup is not the most performant, but I will leave the optimization to whoever needs it.

There are a few different ways you could deal with the peeling and render textures. The bare minimum information you need is the normal and depth of each fragment. You could write each to a separate texture in a separate pass, which has several advantages. You can easily add more passes for more information: if you also want the albedo color of the fragment, you add an extra pass where you save the albedo. The main advantage, though, is that you can assign a different precision and format to each render texture. If you need more bit depth for your normals, you can have that without having to use the same precision for the depth texture. Last but not least are the properties you bind to each render texture and pass. For example, you might want to turn off hardware bilinear filtering for the depth, but not when you are rendering the color of the fragment. Or you might want the color render texture to be an sRGB render texture with a gamma of 2.2, your normals compressed differently, and your depth texture compressed logarithmically.

The main disadvantage of this method is that your number of passes is multiplied by how many intermediate properties you want to save (depth, normal, color, etc.). This can be overcome with G-buffers, binding several writable render targets to your shader in one pass, though it is still quite a few render textures to manage.
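For completeness, here is roughly how several targets with different formats could be bound in one pass. The texture names and formats are placeholders of mine, not something this project uses:

    // Inside the command buffer setup; each texture keeps its own format.
    var normalTex = new RenderTexture(Screen.width, Screen.height, 0,
                                      RenderTextureFormat.ARGB2101010);
    var depthTex  = new RenderTexture(Screen.width, Screen.height, 0,
                                      RenderTextureFormat.RFloat);
    var zBuffer   = new RenderTexture(Screen.width, Screen.height, 24,
                                      RenderTextureFormat.Depth);

    var targets = new RenderTargetIdentifier[] { normalTex, depthTex };
    cmd.SetRenderTarget(targets, zBuffer);
    // The fragment shader then writes to SV_Target0 and SV_Target1.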

I will keep it simple and render both normal and depth in one pass into one texture: the normal in red, green, and blue, and the depth in alpha. If you want to free up a channel, you can save only two of the normal’s components and reconstruct the third in the shader later from the x and y values. I am going to go for 32-bit floating point per channel for the render texture. This is a huge memory footprint, more than what you actually need. There will be artifacts from precision, compression, or filtering; for the sake of simplicity though, I won’t worry about any of those and hope for the best. If you are interested in getting the most out of this, have a look at the article about block compression linked below, or Real Time Rendering’s chapter on texture compression and formats. Filtering normals and depth also has its own dedicated techniques.
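The packing itself is a one-liner, and the reconstruction of a dropped z component follows from the normal being unit length. A sketch (note that the reconstruction only recovers the magnitude of z; the sign has to come from convention):

    // Pack: world-space normal in RGB, linear depth in A.
    float4 PackLayer(float3 worldNormal, float linearDepth)
    {
        return float4(normalize(worldNormal), linearDepth);
    }

    // Unpack a normal stored as xy only. Since |n| == 1,
    // z = sqrt(1 - x^2 - y^2), up to its sign.
    float3 ReconstructNormal(float2 xy)
    {
        return float3(xy, sqrt(saturate(1.0 - dot(xy, xy))));
    }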

Figure -1- depth peeling in action

Edge Detection Filter

Now that we have a setup to render and peel N layers of depth, it is time to construct the edges. There are several types of edges which can be drawn on an object. The most typical ones are silhouette edges, border edges, and creases (these three are popular in computer graphics; as a painter, I would argue an edge is any discontinuity in a visual property, for example a rapid change in hue, value, or saturation would be drawn as an edge in a sketch). Silhouette edges are marked by discontinuities in depth (possibly also in normal), border edges only in depth, and creases are most visible through rapid changes in normal.

An edge enhancement technique samples the neighboring pixels of each pixel and calculates the degree of discontinuity in depth and normal. Based on this, an edge map is constructed, which we can use for our rendering. Here are some of those edge maps.

Figure -2- Edge map for each depth adds up to the blueprint rendering, top left
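In shader terms, the filter for one layer can be as simple as comparing each pixel against its four neighbors. A sketch, using Unity’s built-in v2f_img image-effect struct (from UnityCG.cginc) and my own _LayerTex name for a peeled layer (normal in RGB, depth in A); the thresholds are arbitrary and need tuning per scene:

    sampler2D _LayerTex;        // one peeled layer: normal in RGB, depth in A
    float4 _LayerTex_TexelSize; // Unity fills this with (1/width, 1/height, w, h)

    float4 fragEdge (v2f_img i) : SV_Target
    {
        float4 center = tex2D(_LayerTex, i.uv);

        float2 offsets[4] = { float2(1, 0), float2(-1, 0),
                              float2(0, 1), float2(0, -1) };

        float depthDiff  = 0.0;
        float normalDiff = 0.0;
        for (int k = 0; k < 4; k++)
        {
            float4 n = tex2D(_LayerTex, i.uv + offsets[k] * _LayerTex_TexelSize.xy);
            depthDiff  += abs(center.a - n.a);          // depth discontinuity
            normalDiff += 1.0 - dot(center.rgb, n.rgb); // normal discontinuity
        }

        // Mark the pixel as an edge if either discontinuity passes its threshold.
        float edge = max(step(0.1, depthDiff), step(0.4, normalDiff));
        return edge.xxxx;
    }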

Having the edge maps allows us to style them however we want. I have added a separate screen-space pass which combines a forward-rendered image with the edge maps for the final renderings.
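One way such a composite pass could look, again as a sketch with my own texture names and four hard-coded layers; deeper layers are weighted down so the front silhouette stays dominant:

    sampler2D _MainTex;   // the forward-rendered frame
    sampler2D _EdgeTex0, _EdgeTex1, _EdgeTex2, _EdgeTex3;
    float4 _BlueprintColor;

    float4 fragComposite (v2f_img i) : SV_Target
    {
        float edges = tex2D(_EdgeTex0, i.uv).r
                    + tex2D(_EdgeTex1, i.uv).r * 0.75
                    + tex2D(_EdgeTex2, i.uv).r * 0.5
                    + tex2D(_EdgeTex3, i.uv).r * 0.25;

        float4 scene = tex2D(_MainTex, i.uv);
        return lerp(scene, _BlueprintColor, saturate(edges));
    }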

Figure -3- some renderings using the technique

Thanks for reading. You can follow me on Twitter: IRCSS

Further Reading

  1. GPU Gems 2 chapter on blueprint rendering and hand-drawn outlines: https://developer.nvidia.com/gpugems/GPUGems2/gpugems2_chapter15.html
  2. Block compression for textures: http://www.reedbeta.com/blog/understanding-bcn-texture-compression-formats/
  3. Edge enhancement technique: https://pdfs.semanticscholar.org/eb04/2c44d3f2d3daa81f5396a83633b901445ba2.pdf
