Interactive Volumetric Fog With Fluid Dynamics and Arbitrary Boundaries

Shahriar Shahrabi
Oct 16, 2021


A breakdown of making a realtime interactive fog that respects arbitrary boundaries, using fluid simulation and compute shaders in Unity 3D. In this article I go over an easy method to generate a mask for arbitrary boundaries, present two further methods to counter issues with terrain overhangs, and discuss minor improvements you can add to the system.

In this post, I won’t go over how to program the fluid simulation, since I have done that in depth in my previous post, Gentle Introduction to Realtime Fluid Simulation for Programmers and Technical Artists.

As usual, you can find the code on my GitHub: https://github.com/IRCSS/Compute-Shaders-Fluid-Dynamic-

Also, you can download or view the model used in the demo on my Sketchfab.

Rendering Volumetrics

To get the fog look going, the first thing we need is a way to render volumetric data. There are different ways of rendering fog, for example ray marching on the surface of a quad while looking up a height map generated by our fluid simulation engine. For this demo I decided to go with something simpler: stacking a series of transparent planes on top of each other, which gives the impression of a volume being rendered. I use the dye texture generated by the fluid engine as a transparency mask for my fog.

I covered this technique in one of my other blog posts, about volumetric grass, where I explain how to generate these planes in a geometry shader. For this VFX, I wanted to generate them on the CPU side. You can of course create a default plane in Unity, instance it a bunch of times, and map its dimensions to the area where the simulation is running. However, since I am already marking the simulation area with four empty game objects at the corners of the simulation square, I generate this mesh procedurally. This way the vertices in the vertex buffer exactly match my simulation space.

Here is the code which generates the mesh for the plane, and the UVs which match the simulation space in the compute shaders:

Figure -1- Code for generating a single plane mesh
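If you just want the gist of it, here is a minimal sketch of the idea; the corner and resolution parameters are placeholder names I made up for this sketch, not the exact API from the repo:

```csharp
using UnityEngine;

public static class SimulationPlaneMesh
{
    // Build a grid plane whose corners coincide with the four empties that
    // mark the simulation area. Corner and resolution parameters are
    // illustrative names, not the repo's exact API.
    public static Mesh Build(Vector3 c00, Vector3 c10, Vector3 c01, Vector3 c11,
                             int res = 2)
    {
        var vertices = new Vector3[res * res];
        var uvs      = new Vector2[res * res];
        var indices  = new int[(res - 1) * (res - 1) * 6];

        for (int y = 0; y < res; y++)
        for (int x = 0; x < res; x++)
        {
            // Bilinear interpolation between the corners, so vertices and
            // UVs exactly span the simulation space of the compute shaders.
            float u = x / (float)(res - 1);
            float v = y / (float)(res - 1);
            Vector3 bottom = Vector3.Lerp(c00, c10, u);
            Vector3 top    = Vector3.Lerp(c01, c11, u);
            vertices[y * res + x] = Vector3.Lerp(bottom, top, v);
            uvs[y * res + x]      = new Vector2(u, v);
        }

        int i = 0;
        for (int y = 0; y < res - 1; y++)
        for (int x = 0; x < res - 1; x++)
        {
            int v0 = y * res + x;
            indices[i++] = v0;     indices[i++] = v0 + res; indices[i++] = v0 + 1;
            indices[i++] = v0 + 1; indices[i++] = v0 + res; indices[i++] = v0 + res + 1;
        }

        var mesh = new Mesh { vertices = vertices, uv = uvs, triangles = indices };
        mesh.RecalculateNormals();
        return mesh;
    }
}
```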

Then I generate as many planes as I want in a for loop, assign materials to them and attach them to game objects. This produces as many draw calls as there are planes, which is not ideal in a more performance-critical setting. The reason I am doing it this way is to leave the sorting of the transparent layers, which is required for correct blending, to Unity's renderer. For a scenario (like ours) where the simulation will always be viewed from above, you can assume the correct render order is always top to bottom: you can put all your layers in one mesh, sort the index buffer accordingly, and draw everything in a single draw call.

Figure -2- Creating the stack of layers
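As a sketch, the stacking could look like the following; the material, layer count and spacing names are assumptions:

```csharp
using UnityEngine;

public static class FogLayers
{
    // Instance the plane mesh N times along the plane normal. Each layer is
    // its own GameObject so Unity's renderer sorts the transparent layers
    // for correct blending. Names here are assumed, not the repo's.
    public static void Create(Mesh planeMesh, Material fogMaterial, Vector3 planeNormal,
                              int layerCount = 16, float layerSpacing = 0.05f)
    {
        for (int i = 0; i < layerCount; i++)
        {
            var go = new GameObject("FogLayer_" + i);
            go.AddComponent<MeshFilter>().sharedMesh = planeMesh;
            go.AddComponent<MeshRenderer>().sharedMaterial = fogMaterial;
            // Offset each copy along the plane normal; one draw call per layer.
            go.transform.position = planeNormal.normalized * (layerSpacing * i);
        }
    }
}
```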

This is how it looks in the engine:

Figure -3- The stacks visualization

Creating an Obstacle Map

As I covered in the previous article, in the Boundaries and Arbitrary Boundaries section, you need to provide your fluid simulation engine with a mask of sorts, to let it know where the arbitrary boundaries are within your simulation area. An arbitrary boundary in this case is any object within your simulation which blocks the fluid. For example, the mountains should block and deflect the movement of the fog. If you want to find out how that is implemented, read the previous article, especially the code documentation posted in the boundaries section. The focus here is how to generate this mask automatically.

Figure -4- Comparing the Height of obstacles with simulation area height in camera space

The easiest way to do this is to use the depth information of our scene to find out which objects reach higher than our simulation plane. The assumption (see figure 4) is that any area which is higher than the simulation plane is an obstacle in the plane. This assumption holds for our case (not quite, but close enough), but might not hold in other scenarios, which I will address later in the article. So here is a summary of how to obtain a mask based on this method:

  1. Render the depth of the scene (all objects that should act as obstacles). The camera should be perpendicular to the simulation plane and use an orthographic projection (in our case, and the usual case, this is a top down view, where the camera looks down at the simulation).
  2. Take the texture produced in step 1 and check which pixels are closer to the camera than the simulation plane (and hence higher than it). Mark those pixels as white, which encodes a pixel covered by obstacles.
  3. After step 2, you are left with a black and white mask of all the obstacles, which you provide to the fluid engine.

Creating a Depth Map for the Obstacles

First things first, we need a way to know which objects in the scene can be blockers. I tag these in the scene, and a script collects all objects with this tag into an array. Next we need to render the depth of all these objects to a texture, from a top down view. For this I am using the Graphics.DrawMeshNow method, which draws the object immediately when the method is called. This is preferable for me, since I am calling the function only once in Start, and I want the map ready by the time Start has finished, in time for the first frame of rendering.

The first thing we need to do is create a “camera” that renders these objects. I put camera in quotation marks, since DrawMeshNow doesn't really take a camera. A camera is, at its core, only an abstraction which bundles a transformation matrix, a render target, and code for culling, sorting, render settings, etc. The transformation matrix is what we need to create.

If you are not comfortable with matrices, I suggest reading my Matrices for Tech Artists, a Cheat Sheet. As a very quick summary: a “camera” is a matrix which takes a 3D object and projects it onto a 2D surface, which overlaps with the image (texture) that the render produces. The matrix we need to create will act on the vertices of the obstacle meshes, which have their positions expressed in local space. These are the steps involved from local space all the way to clip space:

  1. Model (local space) to world space: local2World Matrix
  2. World space to Camera (local space of camera game object) space: View Matrix
  3. Camera space to Clip Space: Projection Matrix

To get our final MVP matrix, which we multiply with o.vertex in the vertex shader (this is essentially what the macro UnityObjectToClipPos does in Unity), we multiply all of these together:

MVP = Projection * View * local2World

The local2World matrix we can easily get from our obstacles' game objects, using gameObject.transform.localToWorldMatrix.

If we had an actual camera set up (which you could create and position correctly by hand, avoiding all the maths that is to come), we could get the View matrix through camera.transform.localToWorldMatrix.inverse.

If you also set this camera to be an orthographic camera, and adjust its size so that it perfectly overlaps the simulation area, you could also get the projection matrix from camera.projectionMatrix.

However, doing this by hand is more work than writing a function which creates a matrix that renders our simulation area with pixel perfect accuracy.

First we need to make a view matrix. For that, I do the following steps:

  1. I take the vectors which represent the edges of the simulation quad, and using the cross product find the normal of the simulation plane.
  2. Then I find the middle point of the simulation area, and add the normal times an offset to the mid point; this places the camera somewhere above the simulation space.
  3. For the forward vector of the camera we take the flipped normal of the plane (-normal), for the right vector one of the edges of the simulation quad (preferably the one along the x axis), and for up we take the cross product of the two.

We pack the forward, right and up axes plus the camera position into a four by four matrix, which gives us a camera that looks down at our simulation space and is positioned over the middle of the area. Where the camera sits along the normal of the simulation plane is actually irrelevant for an orthographic projection; I still place it some distance away for visual debugging reasons. If you want a slower explanation of constructing a view matrix from forward, right and up vectors, look at my Look At Transformation Matrix in Vertex Shader.

Figure -5- Constructing View Matrix
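For illustration, here is a rough sketch of that construction. Note it uses Unity's standard column convention (right, up, forward, position); my actual code uses a permuted axis convention, which I come back to below:

```csharp
using UnityEngine;

public static class ObstacleCamera
{
    // Build a view matrix looking down at the simulation quad. The corner
    // positions stand in for the four empty game objects marking the area;
    // all names here are illustrative, not the repo's exact API.
    public static Matrix4x4 BuildView(Vector3 c00, Vector3 c10, Vector3 c01,
                                      float camOffset = 10f)
    {
        Vector3 edgeX  = c10 - c00;                    // edge along the x axis
        Vector3 edgeY  = c01 - c00;
        // Flip the cross order if the normal comes out pointing downwards.
        Vector3 normal = Vector3.Cross(edgeY, edgeX).normalized;

        // Place the "camera" above the middle of the simulation quad. The
        // distance only matters for debugging, not for the ortho projection.
        Vector3 pos = c00 + 0.5f * edgeX + 0.5f * edgeY + normal * camOffset;

        Vector3 forward = -normal;                     // look down at the plane
        Vector3 right   = edgeX.normalized;
        Vector3 up      = Vector3.Cross(forward, right).normalized;

        // Pack the axes and position into the camera's local-to-world matrix;
        // the view matrix is its inverse.
        var camToWorld = Matrix4x4.identity;
        camToWorld.SetColumn(0, new Vector4(right.x,   right.y,   right.z,   0f));
        camToWorld.SetColumn(1, new Vector4(up.x,      up.y,      up.z,      0f));
        camToWorld.SetColumn(2, new Vector4(forward.x, forward.y, forward.z, 0f));
        camToWorld.SetColumn(3, new Vector4(pos.x,     pos.y,     pos.z,     1f));
        return camToWorld.inverse;
    }
}
```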

The next step is to construct a Projection Matrix. For an orthographic projection we have a much easier time than with a perspective one. After the model has gone through the View transformation, we have its coordinates expressed in the local space of the camera's gameobject. What we would like after the Projection Matrix is for the xyz components to be within the range -1 to 1, with the mid point of our simulation sitting at (0, 0, 0). This means the four corners would sit on permutations of 1 and -1. For the Z range, we want whatever sits on the far clip plane to be remapped to 1 and whatever sits on the near clip plane to -1; these values also need to be determined by us. To calculate them, I iterate through all the obstacles, find the min and max of their bounding boxes, and set the min and max in camera space as the near and far clip.

In the code below (figure 6), you can see the process described above: scaling xy so that the four simulation plane corners are remapped to the range -1 to 1, and z so that the highest mountain peak lands on -1 and the lowest base of the mesh on 1. On top of that I subtract the mid point, so that it is moved to the origin (0, 0, 0).

The last detail is a small permutation matrix, which I use since I construct my view matrix with x forward, y up and z right, while Unity uses x right, y up and z forward. This is also where I swap the up and right axes to match the camera rotation to the actual simulation space in the compute shaders.

Figure -6- Create Projection Matrix
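As a sketch, such an orthographic projection boils down to a scale plus a translation; the half extents and near/far values are assumed inputs, computed as described above:

```csharp
using UnityEngine;

public static class ObstacleProjection
{
    // Remap camera-space coordinates to the -1..1 clip cube: xy are scaled
    // by the simulation quad's half extents, z by the obstacles' min/max
    // depth in camera space. All parameters are assumed inputs.
    public static Matrix4x4 BuildOrtho(float halfWidth, float halfHeight,
                                       float near, float far)
    {
        var proj = Matrix4x4.identity;
        proj.m00 = 1f / halfWidth;                  // x -> -1..1
        proj.m11 = 1f / halfHeight;                 // y -> -1..1
        proj.m22 = 2f / (far - near);               // z -> -1..1
        proj.m23 = -(far + near) / (far - near);    // shift the mid depth to 0
        return proj;
    }
}
```

For a plain axis-aligned setup you could also use Unity's built-in Matrix4x4.Ortho, which produces a comparable matrix.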

Having this, the rest is simple: just iterate through the obstacles, create the MVP matrix for each, and write their depth to a render target. One small detail is that I remap the depth values from -1..1 to 0..1 in the shader, since that is how Unity expects them.

Figure -7- Render Obstacle Depths
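Sketched out, the whole one-off render could look like this; the material, render target and the _MVP property name are assumptions for the sketch, not the repo's actual names:

```csharp
using UnityEngine;

public static class ObstacleDepthRenderer
{
    // One-off depth render at startup. The material is assumed to use a
    // shader that transforms vertices by "_MVP" and writes the remapped
    // depth; property and parameter names are assumptions.
    public static void Render(MeshFilter[] obstacles, Material depthMaterial,
                              RenderTexture target,
                              Matrix4x4 view, Matrix4x4 projection)
    {
        var previous = RenderTexture.active;
        RenderTexture.active = target;
        GL.Clear(true, true, Color.black);

        foreach (var obstacle in obstacles)
        {
            // Rightmost matrix acts first: local -> world -> camera -> clip.
            Matrix4x4 mvp = projection * view * obstacle.transform.localToWorldMatrix;
            depthMaterial.SetMatrix("_MVP", mvp);
            depthMaterial.SetPass(0);
            Graphics.DrawMeshNow(obstacle.sharedMesh, Matrix4x4.identity);
        }

        RenderTexture.active = previous;
    }
}
```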

Your depth map should look something like this:

Figure -8- Depth Map

Creating an Obstacle Mask from the Depth Map

This will be a fast step. Once you have your depth map, calculate the depth of the simulation plane in projection space, and in a single pass over the depth map determine whether each pixel is above or below the simulation plane.

Figure -9- Construct a Mask From Depth Mask
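For illustration, here is that pass as a CPU-side sketch; the demo itself does this on the GPU, and planeDepth is assumed to be the simulation plane's depth in the same space the depth map was written in:

```csharp
using UnityEngine;

public static class ObstacleMask
{
    // CPU version of the mask pass, for illustration only. Assumes the
    // depth map is CPU-readable and uses the convention that smaller
    // values are closer to the top-down camera.
    public static Texture2D Build(Texture2D depthMap, float planeDepth)
    {
        var mask = new Texture2D(depthMap.width, depthMap.height,
                                 TextureFormat.R8, false);
        for (int y = 0; y < depthMap.height; y++)
        for (int x = 0; x < depthMap.width; x++)
        {
            // Anything closer to the camera than the simulation plane
            // reaches higher than the plane, so it is an obstacle.
            bool blocked = depthMap.GetPixel(x, y).r < planeDepth;
            mask.SetPixel(x, y, blocked ? Color.white : Color.black);
        }
        mask.Apply();
        return mask;
    }
}
```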

Your mask should look like this, and is what you provide to the simulation engine:

Figure -10- Obstacle Mask

Problem With Overhangs in Terrain

There is an issue with this setup. Look at figure 11:

Figure -11- Terrain with overhang

The red area will be wrongly marked as a blocked section by our algorithm, due to the overhang which blocks the camera's view of the space between the overhang and the simulation plane. This is not an uncommon setup, though terrains usually avoid it by design: they are based on height maps, and height maps only extend upwards and can't produce overhangs. My terrain is actually distorted in the XZ plane, so I do have slight overhangs, but not enough to justify implementing a more complicated algorithm. Nevertheless, I did spend some time thinking about this issue; here are some thoughts on how to solve it.

SDF Representation

This is probably the first thing that would come to mind for most. If you build a signed distance field representation of your simulation plane with respect to your obstacles, you can very easily tell whether something is inside or outside of the obstacles. Each pixel of this map points towards the closest obstacle in your simulation space. This is a bit of work to implement, and wouldn't be my first choice if I only wanted it for the arbitrary boundaries map. But once you have this information, you can construct a flow map (the cross product between the SDF direction and the normal of the simulation plane; the signed distance field itself is a field of scalars, but you can store a direction vector with an inside/outside sign in the w component instead). If you apply the Projection operator which I discussed in the last article, you get a divergence free flow map, which you can use to move things around, pan noise, etc. You could even get a fairly believable “fake” fluid behavior without actually simulating anything.
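As a tiny sketch of that flow map idea, assuming you have already extracted the SDF gradient per pixel:

```csharp
using UnityEngine;

public static class SdfFlow
{
    // The SDF gradient points away from the nearest obstacle; crossing it
    // with the plane normal yields a vector tangent to the obstacle's
    // contours, i.e. a flow that slides around obstacles instead of into them.
    public static Vector3 FlowAt(Vector3 sdfGradient, Vector3 planeNormal)
    {
        return Vector3.Cross(planeNormal, sdfGradient).normalized;
    }
}
```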

Count Up Count Down Method

The second method I can think of is inspired by the shadow volume technique used in some older games like Doom 3. The challenge there was to find out whether a pixel is within a volume or not, very similar to our case. I am not sure what a good name for a technique like this would be, but count up, count down explains it best.

What you would need to do is still render the scene top down with the objects marked as potential obstacles. This time however you render with Z test and culling off, meaning all faces are rendered regardless of whether they are facing the camera or not. Every time you encounter a camera facing pixel, increment the value of that pixel in a buffer; every time you get a pixel facing away from the camera, decrement it. Look at figure 12 for an explanation.

Figure -12- Count Up and Down Technique

For the center area, you will correctly calculate 0, meaning the section is not within an obstacle, whereas for the sections next to it you never decrement, so you get 1, and those sections will be correctly marked as boundaries.

You could maybe hack this together with alpha blending, or use the stencil buffer like shadow volumes do. The easiest way is probably binding a structured buffer with random write access to the pixel shader, which you use to count how many up facing and down facing surfaces have already been visited on that fragment.
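To make the counting rule concrete, here is a toy CPU version for a single pixel; the input array is hypothetical and stands for the fragments the top-down ray crosses above the simulation plane:

```csharp
public static class CountUpCountDown
{
    // Toy version of the counting rule for a single pixel. The input lists
    // the facing of every surface the top-down ray crosses above the
    // simulation plane: true = front face (+1), false = back face (-1).
    public static bool IsBlocked(bool[] frontFacingCrossings)
    {
        int count = 0;
        foreach (bool facesCamera in frontFacingCrossings)
            count += facesCamera ? 1 : -1;
        // Non-zero: the ray is still inside a closed mesh when it reaches
        // the simulation plane, so the pixel is a boundary.
        return count != 0;
    }
}
```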

Worth mentioning is that this technique only works with manifold, watertight topology. This is, however, also an issue with the SDF method, since there is no good way of extracting a volumetric representation of a mesh with non manifold edges.

Last Thoughts

If you would like to improve this setup, your best gain will probably come from adding finer detail to the fog based on noise. As mentioned in the Fluid Simulation post, your simulation can only have a limited resolution; smaller movements you can imitate with noise.

The smaller movements don't have to be very accurate, and there are a lot of techniques for adding that visual fidelity to your fog. I discussed this at length in VR Performance Challenges and Creating Realistic Fog in Unity. There I also mention blurring the screen behind the fog for added realism, which didn't fit the visual style of this demo, but is very much worth considering. You can see the result of those techniques in this video:

Lastly, just like in my Persian Garden demo scene, you can use the pressure buffer to adjust the thickness of the volumetric fog. This adds more variation to the fog along its upward axis and makes it look more 3D.

As usual, thanks for reading. You can follow me on my Twitter: IRCSS
