Custom Shadow Mapping in Unity

What Is Shadow Mapping and Why Should I Care?

Shadow mapping is very simple on paper. The technique is based on the idea that if an area is exposed to a light source, it can see the light source, and in turn the light source can see the area. Whatever the light source cannot see is occluded by an object the light source can see, and hence that area is in the shadow cast by that object. So what we need is a camera that records what the light source can see; using this, we can determine for each pixel whether it is seen by the light or lies in the shadow of another object. The implementation can get more challenging if you want to cover all the edge cases and problems of shadow mapping. Before going into the technical implementation, though, I would like to talk a bit about why shadow mapping is important.
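In shader terms, the whole idea boils down to a single comparison per fragment. A minimal HLSL sketch of that comparison (the function and parameter names here are illustrative, not Unity built-ins):

```hlsl
// Minimal shadow test: is this fragment farther from the light
// than the closest surface the light recorded at the same texel?
float SampleShadow(sampler2D lightDepthTex, float3 lightSpaceCoord)
{
    // lightSpaceCoord.xy: texel in the light's depth map, in [0, 1]
    // lightSpaceCoord.z : this fragment's depth as seen by the light
    float closestToLight = tex2D(lightDepthTex, lightSpaceCoord.xy).r;

    // Something sits between the light and this fragment -> shadowed.
    // (With a reversed Z buffer the comparison flips.)
    return closestToLight < lightSpaceCoord.z ? 0.0 : 1.0;
}
```

Everything else in the technique is bookkeeping to produce the two depth values this comparison needs.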

Shadow Mapping Implementation: A Top-Down Look

The goal is to know, for each fragment currently being rendered, whether it is occluded by another object in the scene in relation to the light source. For simplicity, I am going to assume there is only one directional light source in the scene. Step by step:

  1. A visible set needs to be determined for the directional light's camera.
  2. The depth of each fragment of the visible set is rendered to a buffer by the light camera; the depth is, of course, in relation to the light camera. This is also where depth maps of different resolutions are rendered into the different shadow cascades.
  3. The main camera renders the depth of each fragment it sees, in relation to itself.
  4. In the shadow-collecting phase, the world position of each fragment is first reconstructed from the main camera's depth map.
  5. Then, using the VP matrix of the light camera (projection * view), the fragment is transformed from world-space coordinates to the homogeneous clip-space coordinates of the light camera. In this coordinate system, x and y correspond to the fragment's position across the width and height of the light camera's texture, and z is the fragment's depth as seen by the light camera. We have now converted the fragment's coordinates into a system where we can compare its depth with an occluder's depth and determine whether the occluder is occluding the fragment.
  6. At this stage, it is time to unpack the depth value we rendered in step 2. If there are no shadow cascades, we can simply remap the homogeneous coordinates we calculated from [-1, 1] to [0, 1] and sample the light camera's depth texture. With shadow cascades, however, this is a bit different, since we first need to choose the correct cascade's texture to sample and calculate the UV coordinate for that specific texture.
  7. Once that is done, we sample the light camera's depth texture and compare the z value we calculated in step 5 with the depth value we just retrieved. If the depth value from the depth texture is smaller than our z value, the fragment is occluded and is in shadow. The logic here is that a smaller depth value means there exists a fragment, along the path from the fragment we are rendering to the light, which is closer to the light than the fragment we are rendering; hence it is occluded by whatever fragment the light is seeing. (Of course, you need to keep an eye on the reversed Z buffer and the like.)
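For the no-cascade case, steps 5 and 6 can be sketched in HLSL as follows. This is a sketch under assumptions: `_LightVP` is a name I made up for the light camera's projection * view matrix, uploaded from script as a global; exact depth conventions vary by platform.

```hlsl
// Steps 5-6 without cascades: world position -> light clip space -> UV + depth.
// _LightVP = lightProjection * lightView, uploaded from C# as a global matrix
// (the name is illustrative, not a Unity built-in).
float4x4 _LightVP;

float3 WorldToLightShadowCoord(float3 worldPos)
{
    float4 clip = mul(_LightVP, float4(worldPos, 1.0));
    clip.xyz /= clip.w;         // homogeneous divide (a no-op for orthographic lights)
    // Remap x and y from [-1, 1] to [0, 1] so they can be used as UVs.
    float2 uv = clip.xy * 0.5 + 0.5;
    return float3(uv, clip.z);  // z is the fragment's depth as seen by the light
}
```

With cascades, Unity's `unity_WorldToShadow` matrices fold this transform and remap into one matrix per cascade, which is what the collection shader below relies on.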

Implementation in Unity

I am not going to cover steps one and two in great detail; they require a few blog posts of their own. I refer interested readers to other sources. Especially for step two, giving this entry by CatLikeCoding a read wouldn't be a bad idea.

// Grab the shadow map after Unity renders it and blit it into our own texture.
CommandBuffer cb = new CommandBuffer();
RenderTargetIdentifier shadowmap = BuiltinRenderTextureType.CurrentActive;
m_ShadowmapCopy = new RenderTexture(1024, 1024, 16, RenderTextureFormat.ARGB32);
m_ShadowmapCopy.filterMode = FilterMode.Point;
// Sample the shadow map as raw depth instead of a hardware comparison.
cb.SetShadowSamplingMode(shadowmap, ShadowSamplingMode.RawDepth);
var id = new RenderTargetIdentifier(m_ShadowmapCopy);
cb.Blit(shadowmap, id);
// Expose the copy to all shaders under this name.
cb.SetGlobalTexture("m_ShadowmapCopy", id);
Light m_Light = this.GetComponent<Light>();
// Execute the command buffer right after the shadow map has been rendered.
m_Light.AddCommandBuffer(LightEvent.AfterShadowMap, cb);
// The camera's view-projection matrix; its inverse reconstructs world positions from depth.
Matrix4x4 viewProjection = YourCamera.projectionMatrix * YourCamera.worldToCameraMatrix;
// Pick the cascade this fragment falls into. vDepth is the fragment's view-space
// depth; _LightSplitsNear/_LightSplitsFar hold the cascade split distances.
float4 near = float4(vDepth >= _LightSplitsNear);
float4 far = float4(vDepth < _LightSplitsFar);
float4 weights = near * far; // exactly one component is 1, the rest are 0

// Transform the reconstructed world position (wsPos, from step 4) into each
// cascade's shadow space.
float3 shadowCoord0 = mul(unity_WorldToShadow[0], float4(wsPos, 1.0)).xyz;
float3 shadowCoord1 = mul(unity_WorldToShadow[1], float4(wsPos, 1.0)).xyz;
float3 shadowCoord2 = mul(unity_WorldToShadow[2], float4(wsPos, 1.0)).xyz;
float3 shadowCoord3 = mul(unity_WorldToShadow[3], float4(wsPos, 1.0)).xyz;
float3 coord =
    shadowCoord0 * weights.x + // case: cascade one
    shadowCoord1 * weights.y + // case: cascade two
    shadowCoord2 * weights.z + // case: cascade three
    shadowCoord3 * weights.w;  // case: cascade four
float shadowmask = tex2D(m_ShadowmapCopy, coord.xy).g;
// An occluder closer to the light than this fragment means shadow (mind the reversed Z buffer).
shadowmask = shadowmask < coord.z;
col *= max(0.3, shadowmask);

Further Readings, References and Thanks

This time I am not repeating all the links from the article here, since there are way too many. Here are just the two that helped me get this set up, and two GPU Gems articles that further show what you can do when you sample the light camera's depth pass.


