Texture Fixing in Blender Using Photoshop’s Content Aware Fill

Shahriar Shahrabi
5 min read · Apr 23, 2021

Content Aware Fill is a powerful tool. I would love to see the day Blender has something like it built in. Until then, we can use a small trick with projection mapping to bring the Content Aware Fill functionality of Photoshop into Blender. Here is a short breakdown of this technique.

A very common problem in photogrammetry is missing certain areas during your capture, which leaves some surfaces with missing or bad textures. The following dataset is an example:

Figure -1- Data set in RC, missing coverage for the marked area

As you can see, none of the cameras captured the area marked in red. Since this is a flat area, it should in theory be rather simple to create a texture for the missing part and get clean borders. Blender already has the clone tool, which does a nice job on a lot of artifacts, but for this particular case, Content Aware Fill plus some quick retouches in Photoshop can do a better job with fewer seams.

I am going to fix this missing part here as an example to demonstrate the technique.

Figure -2- Missing part to fix

To get started, bring your mesh into Blender and create geometry for the area you wish to fix.

Figure -3- Extruded mesh to create geometry on the area

Then unwrap the section so that there is dedicated texture space for those triangles.

Figure -4- Unwrapped extension
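If you prefer to script these two steps, here is a minimal bpy sketch of the extrude-and-unwrap. It assumes the border of the hole is already selected in Edit Mode; the translation value and unwrap margin are placeholders.

import bpy

# Assumes the patch border is selected on the active mesh.
bpy.ops.object.mode_set(mode='EDIT')

# Extrude the selection to create the patch geometry
# (the translation value is a placeholder; move it along your surface).
bpy.ops.mesh.extrude_region_move(
    TRANSFORM_OT_translate={"value": (0.0, 0.5, 0.0)}
)

# Unwrap the new, still-selected faces into their own UV island.
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.02)

bpy.ops.object.mode_set(mode='OBJECT')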

Then place a camera above the missing area. Position it so that a single render captures the missing area and its surroundings, facing the surface you want to fix head-on. In the camera settings, make sure the type is set to Orthographic, and set the Orthographic Scale to whatever you need to frame the area of interest.

Figure -6- Camera Settings
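If you would rather set this up in Python, here is a minimal sketch of those camera settings. The name, position, and scale values are assumptions; use whatever frames your patch.

import bpy

# Create an orthographic camera for rendering the patch area.
cam_data = bpy.data.cameras.new("FixCam")
cam_data.type = 'ORTHO'
cam_data.ortho_scale = 2.0  # world units covered by the image; a placeholder

cam_obj = bpy.data.objects.new("FixCam", cam_data)
bpy.context.collection.objects.link(cam_obj)

# Place it above the missing area; a camera looks down its local -Z axis,
# so with zero rotation it points straight down.
cam_obj.location = (0.0, 0.0, 3.0)  # placeholder position
cam_obj.rotation_euler = (0.0, 0.0, 0.0)
bpy.context.scene.camera = cam_obj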

Your camera placement should look something like this.

Figure -7- Camera placement

Render that area out to an image. I used an 8K image for this.

Figure -8- Missing area as a rendered image
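The render itself can be scripted too. Here is a minimal sketch that writes out an 8K still; the output path is a placeholder.

import bpy

# Render the patch area at 8K and save it to disk.
scene = bpy.context.scene
scene.render.resolution_x = 8192
scene.render.resolution_y = 8192
scene.render.image_settings.file_format = 'PNG'
scene.render.filepath = "//missing_area_render.png"  # relative to the .blend

bpy.ops.render.render(write_still=True)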

Now import this into Photoshop and fix the artifacts using Content Aware Fill and retouching.

Figure -9- Content Aware fill

Time to project this back onto the mesh. The idea is to use the texture coordinates of the camera we rendered from to project the image back onto the surface in the mesh's material. The general material setup looks like this:

Figure -10- Projecting back the fixed texture

As you can see, the image is being projected from the perspective of the ortho camera. It repeats to the left and right because the texture's wrap mode is set to repeat. The only tricky part of this whole technique is understanding how to map the object coordinates of the camera object to the camera-space coordinates the fixed texture lives in. The transformation is as follows:

Figure -11- Camera Object to Camera Space transformation

The reasoning behind this transformation is rather simple, because this is an orthographic camera. Unlike a perspective camera, an orthographic camera's object space differs from its clip space only by a uniform scaling and a translation. I will explain the above one node at a time.

The Texture Coordinate node references the camera object (in its Object field), so its Object output is the position of each fragment of our mesh in the object space of the camera. Since the camera's transform has a scale of (1, 1, 1), this space is simply a rotated/translated version of world coordinates. To get from this space to the camera's screen space, we first need to scale it so that the corners of the image land on normalized values. The formula for that is:

normalized_coordinates = object_coord / Orthographic_scale

Where Orthographic_scale is the float value we set in figure 6, in the camera settings. After this transformation, the very left side of your texture sits at -0.5 and the right side at 0.5 in normalized_coordinates. However, to sample the texture you need coordinates that run from 0 to 1, so we add 0.5 on all axes (Z doesn't matter).
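The whole chain from figure 11 can also be built in Python. This is a sketch under a few assumptions: a material named "Scan", a camera object named "FixCam" with an Orthographic Scale of 2.0, and the fixed image saved next to the .blend file; all of these names and values are placeholders.

import bpy

mat = bpy.data.materials["Scan"]
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# Object output = fragment position in the camera's object space.
tex_coord = nodes.new('ShaderNodeTexCoord')
tex_coord.object = bpy.data.objects["FixCam"]

# normalized_coordinates = object_coord / Orthographic_scale
divide = nodes.new('ShaderNodeVectorMath')
divide.operation = 'DIVIDE'
divide.inputs[1].default_value = (2.0, 2.0, 2.0)  # the camera's ortho scale

# Shift from the [-0.5, 0.5] range into the [0, 1] range textures expect.
add = nodes.new('ShaderNodeVectorMath')
add.operation = 'ADD'
add.inputs[1].default_value = (0.5, 0.5, 0.0)  # Z doesn't matter

# Sample the fixed texture with the projected coordinates.
fixed_tex = nodes.new('ShaderNodeTexImage')
fixed_tex.image = bpy.data.images.load("//missing_area_fixed.png")

links.new(tex_coord.outputs['Object'], divide.inputs[0])
links.new(divide.outputs['Vector'], add.inputs[0])
links.new(add.outputs['Vector'], fixed_tex.inputs['Vector'])

The divide and add nodes are exactly the normalized_coordinates formula from above, just expressed as Vector Math nodes.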

Now that we have our texture coordinates for the camera texture, we can project it onto the mesh. As you can see, however, the projection happens everywhere on the mesh, which is not what we want; we only want to project onto specific spots. To fix that, I created a third texture in Blender, which I am going to use as a mask to blend between these two textures. Once it's created, go ahead and paint the area of interest in Texture Paint mode.

Figure -12- mask texture for blending

Then, as mentioned above, use this texture to blend between the two different areas:

Figure -13- Blending code in material
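Scripted, the blend is just a MixRGB node driven by the mask. The node and image names here ("ScanTexture", "FixedTexture", "PatchMask") are assumptions for illustration.

import bpy

mat = bpy.data.materials["Scan"]
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# The painted mask: white where the projected fix should show.
mask_tex = nodes.new('ShaderNodeTexImage')
mask_tex.image = bpy.data.images["PatchMask"]

mix = nodes.new('ShaderNodeMixRGB')
mix.blend_type = 'MIX'

# Factor 0 = original scan texture, factor 1 = projected fix.
links.new(mask_tex.outputs['Color'], mix.inputs['Fac'])
links.new(nodes["ScanTexture"].outputs['Color'], mix.inputs['Color1'])
links.new(nodes["FixedTexture"].outputs['Color'], mix.inputs['Color2'])
links.new(mix.outputs['Color'], nodes["Principled BSDF"].inputs['Base Color'])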

The best part is that this setup is also somewhat non-destructive: you can go back to Photoshop, fix the texture further, and Blender will automatically apply the fixes when you reload the texture.

Figure -14- further retouching

As the last step, go ahead and use Blender's baking functionality to bake this combination of two textures and a mask down to a single texture. I won't be covering that here, since I already covered it in this article: https://bootcamp.uxdesign.cc/making-a-3d-model-out-of-a-watercolor-painting-6800821b6ee5
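If you want to script the bake as well, a minimal sketch with Cycles looks like this. It assumes the mesh is selected and active, and that the image node holding the bake target is the selected node in the material.

import bpy

# Bake the diffuse color only, without any lighting information.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.bake_type = 'DIFFUSE'
scene.render.bake.use_pass_direct = False
scene.render.bake.use_pass_indirect = False

bpy.ops.object.bake(type='DIFFUSE')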

That was it. I hope you found this article helpful. As usual, thanks for reading, and you can follow me on Twitter: IRCSS
