Mesh Texture Painting in Unity Using Shaders

Shahriar Shahrabi
9 min read · Oct 22, 2019


Let’s say you want to paint or render something directly into the texture of a mesh. In this article, I am going to cover how to implement that in Unity, using shaders and the GPU.

As usual, you can find the code on GitHub: https://github.com/IRCSS/TexturePaint

If you just want to try out the build, you can head over to this itch page: https://ircss.itch.io/paint-david

Summary

Rendering directly into the texture of a mesh has a lot of uses. For example, preprocessing effects such as rendering lightmaps or ambient occlusion. Another potential use is real-time painting on surfaces: effects like applying graffiti or letting the player splash paint onto surfaces. These effects are typically implemented using decals, but depending on your game you might want to save your changes in the mesh texture permanently.

A top-down look at the process goes like this. You need to set up a pass where your fragment shader renders into the UV space of your mesh texture instead of screen space. The rasterizer then creates triangles out of the vertex coordinates you provide it (now the UV positions) and effectively recreates the UV islands, into which you can render your new data. So the only setup required is to provide the rasterizer with the correct input, and the graphics pipeline takes care of figuring out, per fragment, the address (UV position) your fragment shader should write to.

This means your mesh’s surface needs to already be parametrized. Or in simpler terms, you need an unwrapped mesh. If you don’t have UVs, there are alternatives such as octree-based spatial texture maps on the GPU.

In our case, since meshes in games are almost always unwrapped, we render our effect in the UV space of the mesh. To paint in this space, you need to convert your mouse position from screen-space coordinates to the UV coordinates of your mesh (or vice versa). Then you can do signed distance calculations to know how close a fragment is to your mouse. Using the mouse input, the fragment shader can then paint a given color on fragments that are close enough to the mouse position.

Implementation

The first thing we need is to set up a pass in which the UV texture is reconstructed. To do this I use an already existing texture of the mesh to create a new render target with the same dimensions. We need a construct to run a shader on this render target every frame. For this I am using a CommandBuffer. All the command buffer does is run my shader every frame after the depth pass of the camera, and draw the mesh into the render texture I created:

cb      = new CommandBuffer();
cb.name = "TexturePainting";
// Render into the paint render texture instead of the screen
cb.SetRenderTarget(runTimeTexture);
// Draw the mesh with the material (m) running the UV-space shader
cb.DrawMesh(meshToDraw, Matrix4x4.identity, m);
// Execute the buffer every frame after the camera's depth pass
mainC.AddCommandBuffer(CameraEvent.AfterDepthTexture, cb);
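
For context, here is a minimal sketch of how this could be wired up in a MonoBehaviour. The field names runTimeTexture, meshToDraw, m and mainC come from the snippet above; the class name, the baseTexture field and the render texture format are assumptions for illustration, not necessarily how the repository does it.

using UnityEngine;
using UnityEngine.Rendering;

// Sketch: creates a paint render target matching the mesh's existing texture
// and registers the UV-space drawing pass on the main camera.
public class TexturePaintSetup : MonoBehaviour
{
    public Mesh meshToDraw;        // the unwrapped mesh we paint on
    public Texture2D baseTexture;  // existing texture, used only for its dimensions
    public Material m;             // material running the UV-space shader

    RenderTexture runTimeTexture;
    CommandBuffer cb;
    Camera mainC;

    void Start()
    {
        mainC = Camera.main;

        // New render target with the same dimensions as the original texture
        runTimeTexture = new RenderTexture(baseTexture.width, baseTexture.height, 0,
                                           RenderTextureFormat.ARGB32);
        runTimeTexture.Create();

        cb      = new CommandBuffer();
        cb.name = "TexturePainting";
        cb.SetRenderTarget(runTimeTexture);
        cb.DrawMesh(meshToDraw, Matrix4x4.identity, m);
        mainC.AddCommandBuffer(CameraEvent.AfterDepthTexture, cb);
    }

    void OnDestroy()
    {
        if (mainC != null && cb != null)
            mainC.RemoveCommandBuffer(CameraEvent.AfterDepthTexture, cb);
        if (runTimeTexture != null)
            runTimeTexture.Release();
    }
}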

Now to the shader side. Here is where the trick is: to reconstruct the UV texture of the mesh, instead of projecting your mesh into the clip space of the camera, you project it into the UV space of the texture. Just keep in mind that your rasterizer expects coordinates in the range -1 to 1, while your UVs start at 0, so first we need to remap from one range to the other. The rasterizer also needs a float4: the z component, which is the depth, can be 0 since we don’t have any depth in texture space. Very important: this means we assume our UVs have no overlapping triangles, so no tileable textures. The w component of the vector, which drives the perspective divide, should be set to 1 since this is an orthographic projection onto the UV space.

v2f vert (appdata v)
{
    v2f o;
    // Remap the UVs from [0,1] to the [-1,1] range the rasterizer expects
    float2 uvRemapped = v.uv.xy * 2. - 1.;
    o.vertex = float4(uvRemapped.xy, 0., 1.);
    o.uv     = v.uv.xy;
    return o;
}

This shader doesn’t need to do much at the moment. Make sure you have these render states set for the pass which draws into the texture:

 ZTest Off
ZWrite Off
Cull Off
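
For orientation, a full pass might look roughly like the sketch below. The shader name and the property block are placeholders, not the ones used in the repository, and the vertex shader here omits the V-axis flip we will add in a moment.

Shader "Unlit/TexturePaintUVSpace"
{
    Properties { _MainTex ("Texture", 2D) = "white" {} }
    SubShader
    {
        Pass
        {
            // Draw straight into the UV-space render target:
            // no depth test, no depth write, no culling.
            ZTest Off
            ZWrite Off
            Cull Off

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;

            struct appdata { float4 vertex : POSITION; float2 uv : TEXCOORD0; };
            struct v2f     { float4 vertex : SV_POSITION; float2 uv : TEXCOORD0; };

            v2f vert (appdata v)
            {
                v2f o;
                float2 uvRemapped = v.uv.xy * 2. - 1.;
                o.vertex = float4(uvRemapped.xy, 0., 1.);
                o.uv     = v.uv.xy;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // For now just copy the existing texture; painting comes later
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}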

So using this setup, we can already reconstruct the uv space. Here is the texture we will get out.

Figure -1- UV Space of the mesh

If we apply this to the albedo of our mesh, we get this:

Figure -2- flipped Y uvs

Well, that looks very wrong. This is obviously not our correct UV layout. So what went wrong? The reason becomes obvious if you look at Figure 1. You can see that the u (x) axis goes from left to right (good so far), but the v (y) axis goes from top to bottom! The reason behind this is the set of very confusing conventions which I still haven’t fully wrapped my head around. For each coordinate system, you can decide which direction x, y and z point in. You want to hold to some convention to keep your programmers sane, and OpenGL, Vulkan and DirectX each have their own (different) conventions. There is also variation from game engine to game engine, and from one 3D package to another. To make matters worse, in this case there are two coordinate systems of interest: how the UV coordinate system is set up in the render target, and how clip space and NDC (normalized device coordinates) are set up. My professional method of dealing with this is to flip things until they look right, so let’s just do that here too. Alternatively, you can sit down and properly think through which convention is being used and why.

float2 uvRemapped = v.uv.xy;
// Flip the V axis so the render target matches the texture's UV convention
uvRemapped.y = 1. - uvRemapped.y;
// Remap from [0,1] to the [-1,1] range the rasterizer expects
uvRemapped   = uvRemapped * 2. - 1.;
o.vertex     = float4(uvRemapped.xy, 0., 1.);

And we get this, which is correct:

Figure -3- corrected UV

Painting

Time to paint in the texture. The first problem we need to deal with is knowing where the mouse cursor is in world space. If you are writing a painting application for VR, this is easy: your controllers already have a world position associated with them. However, a mouse cursor exists in the screen space of your monitor. Applications like Blender, Meshmixer or Substance Painter usually give the user the option of choosing how the cursor is applied to the surface of the mesh. For our case we keep it simple: a ray is cast out from the screen position of the mouse, and wherever it hits the surface of the mesh is where our painting cursor is.

Before we get to that implementation, I would like to offer an alternative that works better in certain cases. The above method is implemented on the application side (CPU), and the world position of the mouse is simply passed to our shader. Since we already have Unity’s collider and ray system, we can simply use that. However, this method requires the mesh to have a collider, preferably a low-poly one. This becomes more important in applications where performance is critical.

The alternative would be to project the mouse position directly onto the triangle being rendered. Each fragment has a position in the space spanned by two basis vectors, its tangent and bitangent (part of which is the triangle being rendered). You can project the mouse onto this space by constructing a matrix A with the tangent and bitangent as its columns, and using the projection matrix of A to convert the mouse position into a form where you can do signed distance calculations, as in the sketch below. This has the advantage that, using this coordinate system, you can also project a decal texture to imitate a paint brush and the like. Also, you don’t need a collider for your mesh, and your performance cost won’t differ from object to object (it would differ based on your render texture resolution). A sister of this method would be to project the triangles of the UV islands into clip space and do the signed distance calculations in world space.
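
As a rough sketch of that idea (not part of the linked repository): if the world-space tangent and bitangent handed to the fragment shader are orthonormal, the projection through A reduces to two dot products, giving brush coordinates in the plane of the triangle.

// Sketch only: project the mouse position onto the fragment's tangent plane.
// Assumes worldTangent and worldBitangent are orthonormal; with a general
// basis you would build A = [t b] and use the full projection A(A^T A)^-1 A^T.
float2 ProjectOnTangentPlane(float3 mouseWorldPos, float3 fragWorldPos,
                             float3 worldTangent, float3 worldBitangent)
{
    float3 delta = mouseWorldPos - fragWorldPos;
    // Coordinates of the mouse in the (tangent, bitangent) basis
    return float2(dot(delta, worldTangent), dot(delta, worldBitangent));
}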

However, in my case I already have a low-poly collider for the mesh, and the application is not performance-intensive, so I will go down the CPU path.

I am not going to go over how I get the hit position on the mesh; it is standard ray casting. Once you have the mouse hit in world space, pass it on to the shader which renders in the UV space of your mesh.
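
A minimal sketch of that CPU side, assuming the _Mouse vector described further below; the class name, the cam and paintMaterial fields are placeholders for illustration, not necessarily what the repository uses.

using UnityEngine;

// Sketch: ray cast from the mouse into the scene and forward the hit point
// (plus the left mouse button state) to the UV-space painting shader.
public class MousePainter : MonoBehaviour
{
    public Camera cam;              // camera the player looks through
    public Material paintMaterial;  // material of the UV-space painting pass

    void Update()
    {
        Ray ray = cam.ScreenPointToRay(Input.mousePosition);

        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            float pressed = Input.GetMouseButton(0) ? 1f : 0f;
            // xyz = world position of the cursor, w = left mouse button state
            paintMaterial.SetVector("_Mouse",
                new Vector4(hit.point.x, hit.point.y, hit.point.z, pressed));
        }
    }
}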

The next step is to calculate the world position of each fragment rendered in the UV space of the mesh. This is trivial using the rasterizer’s interpolation: the vertex shader can calculate this value, and the fragment shader simply receives the interpolated result. This is possible because both world space and the UV texture space vary linearly in both the x and y directions. Pass the LocalToWorld matrix of the mesh to the shader, and calculate the world position of the fragment in one of the TEXCOORDs.

struct v2f
{
    float4 vertex   : SV_POSITION;
    float3 worldPos : TEXCOORD0;
    float2 uv       : TEXCOORD1;
};

v2f vert (appdata v)
{
    // ... rest of vertex shader
    // mesh_Object2World is the LocalToWorld matrix passed in from the C# side
    o.worldPos = mul(mesh_Object2World, v.vertex);
    // ... rest of vertex shader
    return o;
}

In the fragment shader, you can do the signed distance calculations. Again, I won’t go over it, since it is pretty much standard stuff. I am saving the left-mouse-pressed state in the w component of the _Mouse vector: this value is 1 when the mouse is pressed and 0 when it is not. The xyz components of the vector are the world coordinates of the mouse position.

fixed4 frag (v2f i) : SV_Target
{
    float4 col  = tex2D(_MainTex, i.uv);
    float  size = _BrushSize;
    float  soft = _BrushHardness;
    // Distance from this fragment to the brush cursor, in world space
    float  f    = distance(_Mouse.xyz, i.worldPos);
    f = 1. - smoothstep(size * soft, size, f);

    col = lerp(col, _BrushColor, f * _Mouse.w * _BrushOpacity);
    col = saturate(col);
    return col;
}

So there you go: you have a texture painting setup. Look at the code for more details of the implementation.

Mipmapping and Island Edges

If you go with the naive implementation I have suggested here, you are left with two bugs which need to be fixed. One of the two I have already fixed.

I will start off with the first one, island edges. As you render the faces of your mesh in its UV space, you are reconstructing the islands one triangle at a time. On the edges of an island, it can happen that, due to the rasterizer’s underestimation, a pixel which is actually inside the island is not covered. For these pixels no pixel shader is executed, and you are left with a crease.

Figure -4- Creases

The simplest solution would be to overestimate the rasterization coverage; however, that is not accessible in Unity. I won’t go into the exact implementation of how I solved this, but the basic idea is that every frame, after the paint texture has been updated, I run a shader over the entire texture and, using a filter and a pre-baked mask of the UV islands, extend the islands outwards.
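
The exact filter isn’t shown in the article, but a dilation pass along these lines is one way to do it: blit the paint texture into a second texture, and for texels outside the island mask copy a painted neighbour, so each island bleeds outwards by one texel per pass. The texture and property names here are placeholders, not the repository’s.

// Sketch of a one-texel island dilation pass (not the repository's exact filter).
// _IslandMask is a pre-baked texture that is 1 inside UV islands and 0 outside.
sampler2D _PaintTex;
sampler2D _IslandMask;
float4    _PaintTex_TexelSize; // xy = 1 / texture resolution

struct v2f { float4 vertex : SV_POSITION; float2 uv : TEXCOORD0; };

fixed4 frag (v2f i) : SV_Target
{
    float4 col = tex2D(_PaintTex, i.uv);

    // Texels inside an island keep their painted color untouched
    if (tex2D(_IslandMask, i.uv).r > 0.5)
        return col;

    // Outside the islands: take the first neighbour that lies inside one
    float2 offsets[4] = { float2(1, 0), float2(-1, 0), float2(0, 1), float2(0, -1) };
    for (int k = 0; k < 4; k++)
    {
        float2 uv = i.uv + offsets[k] * _PaintTex_TexelSize.xy;
        if (tex2D(_IslandMask, uv).r > 0.5)
            return tex2D(_PaintTex, uv);
    }
    return col;
}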

A better solution to this edge bleeding would perhaps be to create a mip chain for the paint texture. At the moment I am not using mip maps for this texture, since I am painting only into the texture itself and not into the entire mip chain. Although I haven’t implemented a solution for this, the two ways I would recommend, in case someone does implement it, are to either update the mip chain as you update the paint texture, or to reconstruct the mip chain every frame on the GPU using cheap filters.
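
If you go the regenerate-every-frame route, one sketch of it would be to let Unity rebuild the chain right after the painting draw call, assuming the render texture was created with useMipMap enabled and autoGenerateMips disabled so you control when the chain is refreshed.

// Sketch: after the painting draw call, rebuild the mip chain on the GPU.
// Assumes runTimeTexture was created with useMipMap = true and
// autoGenerateMips = false.
cb.SetRenderTarget(runTimeTexture);
cb.DrawMesh(meshToDraw, Matrix4x4.identity, m);
cb.GenerateMips(runTimeTexture);   // box-filtered mip regeneration each frame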

Here is another usage of the technique. The mesh belongs to my coworker Azad.

Hope you enjoyed reading. Please get in touch if there is any factually incorrect information in the article. You can contact me on Twitter: IRCSS.
