Interactivity in grass Shader

Shahriar Shahrabi
Aug 5, 2019


A while back I did the snow shader below, and it was a very satisfying feeling. Now that the grass shader is looking good, I am very much looking forward to implementing something similar for the grass. As usual, you can have a look at the source code on its GitHub page: https://github.com/IRCSS/Interactive-Grass

I have also implemented very crude lighting for the grass shader, with triplanar mapping support. The code is more of a proof of concept, and there are quite a few open issues with it.

https://twitter.com/IRCSS/status/1083341018894876672

First, let's go over the basic idea. There are several ways to do this effect in games. If you are dealing with only a few actors that are supposed to interact with the geometry, you can pass their positions to the GPU and, for example, shrink the grass batches (cross trees) as the player gets closer to them; a method similar to this is used in Assassin's Creed Odyssey. If you are dealing with a large number of agents of various sizes and shapes, however, I found rendering their depth to be a very useful technique for making interactive surfaces.
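To make the first, position-passing variant concrete before moving on to the depth-based one, here is a minimal vertex-shader sketch of it. This is not the technique used in the repo; _ActorPositions and _ActorCount are made-up uniforms that a C# script would fill every frame.

```hlsl
// Sketch of the "pass the actor positions" approach. Uniform names are
// illustrative only and not taken from the Interactive-Grass repo.
float4 _ActorPositions[8];   // xyz = world position of an actor, w = interaction radius
int    _ActorCount;

// Squash a grass vertex towards its root when an actor is close to the blade.
float3 SquashNearActors(float3 vertexWorldPos, float3 bladeRootWorldPos)
{
    for (int i = 0; i < _ActorCount; i++)
    {
        float dist   = distance(bladeRootWorldPos.xz, _ActorPositions[i].xz);
        float squash = saturate(dist / max(_ActorPositions[i].w, 0.0001)); // 0 at the actor, 1 outside its radius
        vertexWorldPos.y = lerp(bladeRootWorldPos.y, vertexWorldPos.y, squash);
    }
    return vertexWorldPos;
}
```

This works fine for a handful of actors, but the uniform array is exactly why it scales poorly to many agents of arbitrary shape, which is where the depth-based method comes in.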

The idea is very similar to shadow mapping. You render the scene from above (or from below with backfaces, depending on the implementation) with a camera that only sees the objects that are supposed to interact with the surface. In this pass you render only the depth of these objects. Then you render the scene normally and sample the depth texture from before, which is projected onto the fragments being rendered. By comparing the depth of the surface with the prepass depth, you can determine whether the two surfaces are colliding.
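As a rough sketch of that comparison step, assuming an orthographic, top-down trace camera: _TraceDepth and _TraceMatrix are illustrative names, not necessarily the ones used in the repo, and the exact depth convention (and therefore the comparison direction and bias) depends on the platform and on whether the camera looks down or up.

```hlsl
// Shadow-mapping-style collision test against the prepass depth.
sampler2D _TraceDepth;   // depth of the interacting objects, rendered in the prepass
float4x4  _TraceMatrix;  // world space -> trace camera clip space

// Returns 1 when an interacting object reaches down to the grass at this point.
float SampleCollision(float3 worldPos)
{
    float4 traceClip  = mul(_TraceMatrix, float4(worldPos, 1.0));
    float2 uv         = traceClip.xy * 0.5 + 0.5; // orthographic: no perspective divide needed
    float  grassDepth = traceClip.z * 0.5 + 0.5;  // depth of the grass surface in the trace camera

    float objectDepth = tex2Dlod(_TraceDepth, float4(uv, 0, 0)).r;

    // Collision when the object is at least as deep as the grass surface;
    // the small bias fights acne, just like in shadow mapping.
    return step(grassDepth - 0.01, objectDepth);
}
```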

If you do not clear the buffer that saves the result of your collision detection, the traces remain, and you can have long-lasting footprints. There are easier ways to set this up. The advantage of this approach is that you can do actual collision detection for objects of various sizes and shapes on different heights and topologies. You can also serialize and localize these maps easily for a larger scene. The disadvantages are plenty. The problems surrounding shadow mapping are very well researched and documented, and they also apply here. A few examples: aliasing, peter panning or acne, finding an optimal resolution distribution, and the performance cost of things like PCF. For my mini game I am OK with all the disadvantages, mainly because the camera is not supposed to move (I hate aliasing).
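One way to get those persistent traces is to draw each frame's collision mask into a render target that is never cleared and let the blend state keep the strongest value ever written. This is only a sketch of that idea, not necessarily how the repo does it; the "Blend One One" / "BlendOp Max" state would live in the surrounding ShaderLab pass, and _FrameCollision is a made-up texture name.

```hlsl
// Accumulation pass: with Max blending into an uncleared target, old
// footprints survive between frames and are only ever deepened.
sampler2D _FrameCollision;   // this frame's collision mask

float4 frag(float2 uv : TEXCOORD0) : SV_Target
{
    // We only output this frame's contribution; the blend state keeps
    // whatever stronger trace was already stored at this texel.
    return tex2D(_FrameCollision, uv).rrrr;
}
```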

Back to grass. We give each sheep a simple quad which represents its bottom contact area, run it through the described method, and there it is: interactive grass. I know I am going over this topic way too fast, but it is a very messy architecture, one which requires more than a few paragraphs to explain.

First, let me address the seriously not physically based shading I used on this grass. Rendering grass realistically is complicated. It comes with all the challenges of rendering hair strands, plus the fact that each grass blade is big enough that you can't treat it as just a line on the screen. Whatever BRDF you use, it needs to fulfill a lot of criteria. You need a statistical distribution of normals that is visually not too busy, but still creates the illusion of individual blades of grass having different surfaces, plus the wind collectively moving batches of grass in a certain way. Another challenge is that the subsurface-scattered light within the blades is a much more saturated green than the specular highlights. I spent half a day looking at models for rendering fabrics, hoping to use a similar approximation with slight randomness on the surface for the grass, but realized very fast that it's a rabbit hole with no end. So I just went with the flow and stylized everything to make the best out of it.

Second, I need to address the architecture for this effect. It is way too much overhead, and it is a bit messy: all the passes and cameras you need to keep track of, not to mention that the code gets way more confusing once you start optimizing it for a bigger project. Maintaining it over a longer period of time for different needs is very tiresome. For my needs it's fine, but you need to weigh that for yours.

Last but not least, there are some known issues with the algorithm. Besides the incorrect lighting, I have made some assumptions about the setup which won't hold for every scenario. For example, I assume that the camera which records the traces left on the grass is oriented in a certain way. Another issue is that the shadow map itself is wrong: we are creating new geometry in the main pass which we are not creating in the shadow prepass. Also, the traces-pass texture is simply projected from above onto all geometry that shares this shader. There are better ways to set this up: either create a screen-space map in the prepass, like Unity does for its shadows, or let the objects do the collision detection themselves. I might come back to it later and fix those, but for now I need to prototype other parts of the art for the mini game I am making.

As usual, thanks for reading. You can follow me on Twitter at IRCSS. If you find any factually incorrect information in the post, please let me know. Here is some further reading for those interested in specific parts of the algorithm I used:

  1. Shadow mapping explained in detail (the article is a bit wordy but you will manage)
  2. Multi tapping in height maps to derive normal
  3. Triplanar mapping
  4. And the original paper which the algorithm comes from
