Raymarching in Unity

Shahriar Shahrabi
3 min read · Sep 20, 2019


Raymarching is a very useful technique. It is commonly used to render implicit surfaces, uniform or non-uniform participating media, and generally any surface whose intersection function is not easy to find. While pure raymarching doesn’t see as much use in real-time graphics and games as it does in the demoscene, it is still widely used together with rasterization to create some amazing effects. I am going to be experimenting with some raymarching in Unity, and as usual I will be posting the code and some higher-level explanations here.

You can get my Unity project here on Github: https://github.com/IRCSS/UnityRaymarching

Main scene of the project

Raymarching has already been explained in several resources, far better than any explanation I could give. That is why I would refer the interested reader to this article from Inigo on how to render height-field terrain. The reason I recommend this article specifically is that height maps are very easy to render and visualize. It is a simple case of: select a point in space, take its xz value, pass it through your height-map function, and check whether the height of the point is greater than the value of the height map. If yes, you are above the surface; if not, you are under it.

Other articles might get more complicated if you are just starting off: there are different optimizations for the marching, different ways to find an intersection with different primitives, and even different coordinate systems you might want to use for your raymarching. So for the first few times, keep it simple.

My goal this time was to combine rasterization with raymarching to construct a volume within a Unity scene. This setup makes it possible to have the majority of the scene rendered normally, while the raymarched volumes coexist with the rest.

To do this, I first render the backfaces of the meshes I will raymarch on to a render texture. This gives me the inner side of the mesh, provided the mesh is watertight. Then comes the main pass: first I calculate the depth of the front faces, and in the fragment shader of my main pass I start marching. The idea is that the front face is the starting point of the raymarch and the back face is the end point. Rendering the backfaces is done using command buffers.

For the surface function, I am using Gerstner waves, which I wrote for a different effect.

The lighting is a PBR model I took from here. It is a bit of overkill, but I have further plans for these surfaces anyway. This is the initial setup.

To get the normals, I am using the method described by Inigo in the article. However, since the height function differentiates cleanly, you could also use its first derivatives to calculate the normal analytically. That would be roughly a fourth of the current number of calculations in the getNormal function.

There are some known issues with this setup. The raymarching happens in the fragment shader of the polygons, so if a fragment is culled for any reason, the effect disappears with it. This can happen because of the ZTest or near-plane clipping, for example. You will see this if you get too close to the volumes, or if you align several volumes along the camera axis. To work around it, you might want to raymarch in screen space and render the front faces' depth to a render target as well.

I have documented the code quite well this time, so I will link it here, since it already contains explanations.
