Optimized Detail Animation in Vertex Shader for Mobile in UE4

Shahriar Shahrabi
Realities.io
Aug 22, 2022


In this post I will cover a method for animating many small objects, each with its own pivot, in the vertex shader in a single draw call in Unreal Engine 4, based on work done for Puzzling Places.

Making a game for mobile comes with a series of challenges. One of these challenges is the limited processing power as well as the limited memory bandwidth. Mobile devices usually have a unified architecture where memory is shared between the GPU and CPU. Since both processors compete for the same resources, it is easy to put pressure on the input and output and cause a bottleneck. This means mobile projects are very sensitive to things like draw calls, texture size, overdraw, etc.

For our latest puzzle in Puzzling Places, we decided to add moving chimes to the side of the building. In a typical workflow, you could easily place each chime as its own actor, with a piece of code inside which rotates it around the pivot of the chime. The pivot would be the origin of the local space. However, that would force each chime to be its own draw call, with various state changes associated with it. Performance-wise, this is not feasible, considering we planned to have twenty-something of them.

The solution which I will be covering here is to animate the chimes in the vertex shader with one draw call. This poses two challenges to overcome: the first is to somehow have the position of the pivot per chime available in the vertex shader, and the second is to move the chimes individually around that pivot, independent of each other, within the same draw call/shader.

Encoding the Pivots

If you want to pass some information to your vertex/fragment shader, you usually have a few options. You can pass the information as a uniform, for example: a value set by the CPU which is valid for the duration of that draw call. This would work if we wanted to set something like the wind speed, since that value is the same for all the chimes. For the chime pivots themselves, however, we need a different value for different vertices. For this we can use the vertex attributes, which differ from vertex to vertex.

You can use any of the vertex attributes to encode the pivot position (normal, UVs, color, etc.). We went for UVs, since vertex color in Unreal is an 8-bit integer with 256 steps per channel, and I needed a float attribute for precision reasons.

For encoding the values themselves, you can use whatever 3D software you wish (including writing a Python tool or doing it in Unreal Engine itself). We used the geometry nodes in Blender.

In our case we have two types of chimes: the big ones and the smaller ones. All the big ones as a group are at the same height. Similarly, the small ones are also at the same height. So we don't actually need to encode the height of the rotation pivot for the chimes in the UVs, since that can be easily reconstructed in the shader later. What we do encode, in a separate attribute (vertex color), is the information needed to know which type of chime the shader is dealing with.

Figure -1- The different chime types

The entire mesh you see is rendered with the same shader. To make sure nothing else is animated, that single attribute channel is used by the shader to know which part of the mesh it is dealing with. Attribute values between 0 and 0.1 mean it is the normal mesh; between 0.1 and 0.2, it is one of the attachments (the monk, the tigers, etc., which use a separate, smaller texture); 0.2 to 0.3 is the small chime and 0.3 to 0.4 the big chime. We have even more case differentiation, because we animate the inside of the chime differently to the outside.

As long as these cases are mutually exclusive, you can encode up to 256 states in a single channel of the vertex color. Of course there is a cost associated with reading and decoding this information in the shader later, which for our use case of only a few states is negligible.
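
As a minimal sketch of what the decoding looks like in HLSL (the function and mask names are mine, but the ranges mirror the ones described above):

```hlsl
// Returns 1.0 when lo <= id < hi, otherwise 0.0.
// step(a, x) is 1.0 for x >= a, so the product acts as a logical AND.
float InRange(float id, float lo, float hi)
{
    return step(lo, id) * (1.0 - step(hi, id));
}

// Example masks for the ranges described above:
// float isAttachment = InRange(vertexColor.r, 0.1, 0.2);
// float isSmallChime = InRange(vertexColor.r, 0.2, 0.3);
// float isBigChime   = InRange(vertexColor.r, 0.3, 0.4);
```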

Figure -2- Vertex color is used to encode the different mesh areas

To encode the actual pivots themselves, you need to set the UVs of every vertex within a chime to the XY value of that chime's pivot. You can do this automatically. For example, you can unwrap using project-from-view on a top-down orthographic view, then scale each island to zero around its own bounding box, collapsing it onto a single point. Then, using geometry nodes/Python, you can use the camera projection settings to deduce the actual world position from the projected space the UVs are in. However, since we didn't have many chimes to deal with, we simply did it by hand. After projecting from a top-down view, for each chime we would simply read the vertex position of where the pivot should be and set the UVs of all vertices within that chime to that value.

The next step is where geometry nodes come in. To avoid precision problems, we normalize the UVs to the 0-1 range using the bounding information of the mesh. This process is later reversed in the shader to get the true local-space position of each pivot.
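
Expressed as shader-style math, the round trip looks something like this sketch (the bounds parameters are illustrative names; the actual numbers must match between Blender and the material):

```hlsl
// Encoding side (what the geometry nodes compute): map the pivot's
// local-space XY into the 0-1 UV range using the mesh bounds.
float2 NormalizePivot(float2 pivotXY, float2 boundsMin, float2 boundsSize)
{
    return (pivotXY - boundsMin) / boundsSize;
}

// Decoding side (what the material does later): the exact inverse.
float2 DenormalizePivot(float2 uv, float2 boundsMin, float2 boundsSize)
{
    return uv * boundsSize + boundsMin;
}
```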

Figure -3- Normalizing the pivots in the UVs in geometry nodes

Just for visualization context, the normalized space looks like this. In the image below, the red and green colors represent the XY values of the per-pixel position (not the pivot) of all the attachments in the normalized space.

Figure -4- Visualization of the normalized space

At the end, your UV map looks like this.

Figure -5- UV map where the pivots are encoded. Each yellow dot is the top-down projected pivot position of one chime, in the normalized space

Reconstruction of the Pivot in Engine

Now that we have the pivot information in the UVs, we can simply reconstruct it in engine. The whole shader is quite big, so I will only focus on the relevant parts.

Figure -6- The whole shader rendering the temple, chimes, and attachments

First of all, we need to do a case differentiation based on the vertex color to know which part of the mesh we are dealing with. Using simple step functions, you can mathematically build a logic gate which gives you a 0/1 (false/true) based on whether an input number is within a given range.

Figure -7- First image: the function used to determine whether the vertex color is within a certain range. Second image: constructing the different boolean cases for the different mesh parts. Using Min, Max, Multiply, 1-x, etc. you can combine these like OR and AND
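
On 0/1 masks, those node combinations correspond to something like this sketch:

```hlsl
// Boolean-style combinators on 0/1 masks.
float And(float a, float b) { return a * b; }       // or min(a, b)
float Or (float a, float b) { return max(a, b); }   // or saturate(a + b)
float Not(float a)          { return 1.0 - a; }
```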

Parallel to that, we need to read the normalized pivot position from the UVs and reverse the normalization we did in geometry nodes. The bounding information used for the normalization needs to be the same numbers used in Blender.

Figure -8- Reconstructing the pivots using the UVs, the normalization values, and the hard-coded height. The star section is the input from the case differentiation in figure -7-, determining which part of the mesh the shader is dealing with, to decide the correct height for the pivot

You can fully reconstruct the pivot using the information gained above, plus the height of the pivot deduced from the vertex color and a hard-coded value. The hard-coded value indicates the heights of the big and small chimes in local space.

Figure -9- Using Max and Multiply to emulate OR and IF functionality. This is the left side of the star section in figure -8-
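
Put together, the reconstruction amounts to something like the following sketch; the bounds and height constants are placeholders, not our actual values:

```hlsl
// Sketch: reconstruct the full pivot. XY comes from the denormalized UVs,
// Z from the chime type. The masks come from the vertex color decoding.
float3 ReconstructPivot(float2 uv, float isSmallChime, float isBigChime)
{
    const float2 BoundsMin        = float2(-10.0, -10.0); // placeholders,
    const float2 BoundsSize       = float2( 20.0,  20.0); // must match Blender
    const float  SmallChimeHeight = 2.5;                  // placeholder
    const float  BigChimeHeight   = 4.0;                  // placeholder

    float2 pivotXY = uv * BoundsSize + BoundsMin;
    // the masks are mutually exclusive, so max() acts as a select (IF/OR)
    float pivotZ = max(isSmallChime * SmallChimeHeight,
                       isBigChime   * BigChimeHeight);
    return float3(pivotXY, pivotZ);
}
```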

With this, the pivot part is done. You can now animate the chimes in the shader as if they were individual objects within a component/actor.

Animating the Chime

The core concept of animating the chime is a look-at transformation matrix. I have used this many times before, and also covered it a few times. I won't be repeating myself here, so if you are interested give these three blog posts a read: Look at Transformation Matrices; Matrices for Tech Artists, Cheat Sheet; and Non Affine Transformations.

What I will be covering is the Unreal-specific information. The Unreal material editor doesn't have native support for a matrix data type. This is one of those design decisions in Unreal which leaves you scratching your head. Building one is not hard though; you will need a function like this to do the vector-matrix multiplication.

Figure -10- Rotating a vector around a point by setting the axes of the space. The naming can be confusing: this is not an angle/point sort of function, it is a look-at function applied from the pivot. I should probably change the name. The Column XYZ inputs are the columns of the look-at matrix, the pivot is the origin of the space you want to apply the look-at matrix in, and the point is the position of your vertices in local space
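
In HLSL, as you might write it in a Custom node, the function boils down to something like this sketch (the parameter names mirror the node inputs in figure -10-):

```hlsl
// Rotate a point around a pivot with a rotation matrix given as three
// column vectors. Equivalent to: pivot + M * (pos - pivot).
float3 RotateAroundPivot(float3 pos, float3 pivot,
                         float3 columnX, float3 columnY, float3 columnZ)
{
    float3 p = pos - pivot;           // move the pivot to the origin
    float3 rotated = columnX * p.x    // manual matrix-vector product,
                   + columnY * p.y    // one column per component
                   + columnZ * p.z;
    return rotated + pivot;           // move back
}
```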

The rest of the look-at construction is straightforward. You build your coordinate system using a series of cross products, just as described in my Look At Matrix blog post. The important point is that the look-at vector should be aligned with the bottom vector of the chime's local space, since that is the direction the chimes point towards.

Figure -11- Constructing the axes of the coordinate system using cross products
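
A sketch of that construction, assuming the chime hangs along its local down axis (the helper axis is chosen so the cross products never degenerate for a downward-pointing look direction):

```hlsl
// Build an orthonormal basis whose "forward" axis points along lookDir.
// Since the chimes hang downwards, lookDir is roughly -Z; the Y axis is
// used as the helper so the cross products stay well-defined.
void LookAtBasis(float3 lookDir,
                 out float3 columnX, out float3 columnY, out float3 columnZ)
{
    columnZ = normalize(lookDir);                         // down/forward
    columnX = normalize(cross(float3(0, 1, 0), columnZ)); // right
    columnY = cross(columnZ, columnX);                    // completes the basis
}
```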

To determine the direction the chime should look at, I simply move a point in a circle below the pivot of the chime using a sine and time. This point is the look-at target which the chime is rotated towards. The wind strength and the chime type determine the radius of this circle. To add to the realism, each chime is offset a bit in time compared to the ones next to it. So not only does each chime have its own individual animation, the animation also traverses from one chime to the other, as the wind itself would.

Figure -12- Constructing the look-at vector itself. The star input is the pivot of each chime, fed into the sine to vary the animation
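
As a sketch, the target construction could look like this; deriving the phase from the pivot position is what makes the motion travel across neighboring chimes:

```hlsl
// A point circling below the pivot; the chime is rotated to look at it.
// The radius comes from the wind strength and chime type; the phase
// offsets each chime in time so the sway travels across the group.
float3 ChimeLookTarget(float3 pivot, float time, float radius)
{
    float phase = pivot.x + pivot.y;         // per-chime time offset
    float2 sway = float2(sin(time + phase),
                         cos(time + phase)) * radius;
    return pivot + float3(sway, -1.0);       // one unit below the pivot
}

// The look-at direction is then simply:
// float3 lookDir = normalize(ChimeLookTarget(pivot, time, radius) - pivot);
```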

There are some more details in the animation, such as the bending of the inner part of the bigger chime and varying the wind strength using an overarching sine, which I won't cover.

Figure -13- Smaller details that control the strength of the rotation

The last point to cover is applying the actual movement. In material editors you can usually only set a vertex displacement, not the actual vertex position. So as a last step, after transforming the positions using your look-at matrix, you need to subtract the unmoved position from the transformed one to get the delta vector to pass on as an offset. Worth mentioning is that if you want to rotate your actor around while keeping the animation the same, you should do all these calculations in local space and do a local-to-world transform on the offset vector at the end.

Figure -14- Final steps before applying the transformation. Input A is the mask from the vertex color, ensuring the animation is only applied to the relevant part of the mesh; B is the transformed vertex after the look-at rotation; and C is the local position of the untransformed vertex.
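
In code form, that last step is just this sketch (the local-to-world conversion corresponds to a TransformVector node from Local to World in the material editor):

```hlsl
// Mask the delta and output it as the World Position Offset input.
// The subtraction turns the rotated position into a displacement.
float3 ComputeVertexOffset(float mask, float3 rotatedLocalPos, float3 localPos)
{
    return (rotatedLocalPos - localPos) * mask; // zero where the mask is zero
}
```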

Improvements

The setup right now is optimized, but we can do better. For starters, I am using the vertex color but only one channel out of the 4 available in there, so I am using 4 times more memory than I need.

As I said before, mobile devices are sensitive to bandwidth pressure. Since vertex attributes exist for every single vertex, the extra unused memory in the vertex color scales linearly with the number of vertices you have, and can put unneeded pressure on the bandwidth as the vertices are processed by the vertex shader.

One solution would be to use the vertex color for the pivots. But as mentioned before, Blender/Unreal force an integer type on the vertex color of the mesh, which is not ideal for things like pivots due to the lack of precision. You can of course take 3 of the integer channels and encode them into a float yourself, but it is a bit of programming effort and there is a runtime cost associated with it.
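
For reference, a well-known packing scheme for this, sketched in HLSL (the encode side would run in your content tool, the decode side in the shader):

```hlsl
// Spread a 0-1 float across three 8-bit channels (content side).
float3 PackFloatToRGB(float v)
{
    float3 enc = frac(v * float3(1.0, 255.0, 65025.0));
    enc.xy -= enc.yz / 255.0;   // remove the carry between channels
    return enc;
}

// Reconstruct the float from the three channels (shader side).
float UnpackRGBToFloat(float3 rgb)
{
    return dot(rgb, float3(1.0, 1.0 / 255.0, 1.0 / 65025.0));
}
```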

A better solution would perhaps be to take a float4 attribute and use it for both the case differentiation and the pivot encoding. This would get rid of the UV + color combination, which wastes 8 attribute slots (4 for color + 2 for UV; if no other attributes are packed into the padding, it rounds up to 8 due to the architecture). A good attribute for this in Unreal is the normal. Unreal Engine forcefully passes the normal to your vertex shader, even if you have an unlit shader that is not using it. Head-scratching again, I know, especially for resource-sensitive projects like mobile VR. So if you encode the data into the normal, you will effectively have the same memory footprint as a mesh without all this info, while having access to the information you need.

The main problem with normals in Unreal is that some operations happen to them in the background in the vertex shader. We ended up not using any of these improvements, due to lack of time.

Thanks for reading, as usual you can get in touch on my Twitter: IRCSS for any questions. The blue temple was a project done within Realities IO and the work presented here was a cooperation between Daniel Kraft and me. The scan of the location itself was done by Aaron Cederberg.
