Material Scanner

I’m Niklas, and for the last one and a half years I’ve been developing a material scanner in my spare time. As this is my first blog post about the project, here is a quick rundown of what has happened during that time:

My current version is far from the first. There was a simple first prototype, which lasted a whole day before I ditched it for a great new plan. It turned out that this great new plan wasn’t as great as I had anticipated, so off I went building the third version of the material scanner. This one got me my first good results and served well as a proof of concept.

Then I might have strayed from my path a bit by working on other projects which I told myself were absolutely necessary for the next version of the material scanner, like building a CNC router and a spectrometer. In hindsight I could have just used the Datron CNC at the local maker space, but that didn’t have the same appeal as building a way worse version of it myself.

At this point I was rather happy with the general design and it was time to actually build the next prototype. So after building a CNC, building a spectrometer, machining/3D printing all necessary parts, designing custom PCBs and finally putting it all together, I ended up with the current version.

Why build a material scanner?

Now that you know how I got here, I should probably explain why I’m doing all of this:

The goal of this material scanner is to calculate the visual properties of a material. These properties then allow me to render images of the same material under arbitrary lighting conditions. A typical application would be computer games, for example.

To better understand this you might want to look up the bidirectional reflectance distribution function (BRDF). In a nutshell, this is a function describing how light is reflected from a surface. It depends on the direction of the incoming light and the direction of the outgoing light and returns the amount of reflected light. This enables us to calculate how a material would look when viewed from one position and illuminated from another.

Image from Wikipedia under the CC BY-SA 3.0 License
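To make this concrete, here is the simplest possible BRDF written as code: a function that takes the incoming and outgoing light directions and returns the fraction of reflected light. This is just an illustrative sketch of the concept (a Lambertian surface, which ignores both directions), not anything from my actual code:

```cpp
// A BRDF maps a pair of directions (incoming light, outgoing light) to the
// fraction of light that is reflected. Illustrative sketch only.
struct Vec3 { float x, y, z; };

constexpr float kPi = 3.14159265f;

// The simplest BRDF: a Lambertian (perfectly diffuse) surface reflects
// equally in all directions, so the result is a constant and both
// direction arguments are ignored.
float lambertian_brdf(const Vec3& /*incoming*/, const Vec3& /*outgoing*/, float albedo)
{
    return albedo / kPi;
}
```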

There are plenty of different BRDFs out there, all with their advantages and disadvantages, but the most commonly used is probably the Disney BRDF, often with all sorts of minor modifications. This is also the BRDF that I’m using, and the goal is to calculate all variables used in this function to represent the appearance of a material. This means I want to get the albedo, normal map, metalness, roughness and specularity of a material. A very good explanation of this can be found here: learnopengl.com/PBR/Theory

Image from learnopengl.com (by Joey de Vries) under the CC BY-NC 4.0 License
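Put differently, the output of a scan is one set of these parameters per pixel. As a rough sketch (the field names are mine, not my solver’s actual data layout), the result could be stored like this:

```cpp
// Per-pixel material parameters of the (Disney-style) BRDF that the scanner
// tries to recover. Field names are illustrative only.
struct MaterialSample {
    float albedo[3];   // base color (RGB)
    float normal[3];   // unit surface normal; per pixel this forms the normal map
    float metalness;   // 0 = dielectric, 1 = metal
    float roughness;   // microfacet roughness, 0 = mirror-like
    float specularity; // strength of the specular reflection for dielectrics
};
```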

How to build a material scanner?

Now the question arises: how do you go about measuring this? It’s done by taking many images of the material under different lighting. In my case each image corresponds to one of the light sources arranged around the material. This means that for each pixel we get multiple measurements of how much light is reflected towards the camera, one for each direction of incoming light. With this information it is possible to calculate the parameters describing the appearance of the material. Of course I wasn’t the first to have this idea; this technique was first proposed over 40 years ago (see: Photometric Stereo). Since then there has been a lot of research on this topic, and you can get a good overview by reading this survey on photometric stereo techniques.

Image from Wikipedia under the CC BY-SA 4.0 License
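To give an idea of how such a reconstruction works, here is the classic Lambertian case as a per-pixel least-squares problem: stack the light directions into a matrix, the measured brightnesses into a vector, and solve for the albedo-scaled normal. A minimal sketch (not my actual solver):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };

// Classic Lambertian photometric stereo for a single pixel. Given n light
// directions L_i (unit vectors) and n measured brightnesses I_i, the model
// I_i = dot(L_i, g) with g = albedo * N is solved in the least-squares sense
// via the normal equations (L^T L) g = L^T I. Illustrative sketch only.
bool solve_pixel(const std::vector<Vec3>& L, const std::vector<double>& I,
                 Vec3& normal, double& albedo)
{
    double A[3][3] = {}; // L^T L
    double b[3]    = {}; // L^T I
    for (std::size_t i = 0; i < L.size(); ++i) {
        const double l[3] = { L[i].x, L[i].y, L[i].z };
        for (int r = 0; r < 3; ++r) {
            for (int c = 0; c < 3; ++c) A[r][c] += l[r] * l[c];
            b[r] += l[r] * I[i];
        }
    }
    // Solve the 3x3 system with Cramer's rule.
    const double det =
        A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1]) -
        A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0]) +
        A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]);
    if (std::abs(det) < 1e-12) return false; // lights coplanar: degenerate setup

    double g[3];
    for (int r = 0; r < 3; ++r) {
        double M[3][3];
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                M[i][j] = (j == r) ? b[i] : A[i][j]; // column r replaced by b
        g[r] = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1]) -
                M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0]) +
                M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0])) / det;
    }
    albedo = std::sqrt(g[0] * g[0] + g[1] * g[1] + g[2] * g[2]);
    if (albedo < 1e-12) return false;
    normal = { g[0] / albedo, g[1] / albedo, g[2] / albedo };
    return true;
}
```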

The Details

Software implementation

I implemented pretty much all of the software for acquisition, solving and visualization myself. The technologies used are C and C++ for acquisition, C++ and CUDA for solving, and C++ with DirectX for visualizing the results.

Hardware implementation

For the material scanner I’m using 63 white LEDs and 8 color LEDs in combination with a 16MP monochrome camera. Additionally, a motor can rotate a linear polarizer in front of the camera lens, which allows separating the specular and diffuse reflections (more on this later). For switching the LEDs on and off I’ve designed some simple PCBs, which are basically just daisy-chained shift registers connected to MOSFETs. The material scanner is capable of taking 11 images per second, but this is mainly limited by my camera, an ASI 1600MM Pro that I normally use for astrophotography. In the future I’ll upgrade this camera to a model better suited for my use case.
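Since the driver boards are just daisy-chained shift registers feeding MOSFET gates, selecting a light boils down to clocking out one bit per LED and latching. A rough sketch of that pattern, where set_pin and pulse are hypothetical stand-ins for the real GPIO calls:

```cpp
// Hypothetical GPIO helpers, stand-ins for the microcontroller's real API.
void set_pin(int pin, bool level) { (void)pin; (void)level; /* write GPIO register */ }
void pulse(int pin) { set_pin(pin, true); set_pin(pin, false); }

constexpr int PIN_DATA  = 0; // serial data into the first shift register
constexpr int PIN_CLOCK = 1; // shift clock, shared by the whole chain
constexpr int PIN_LATCH = 2; // copies the shifted bits to the outputs

constexpr int NUM_LEDS = 71; // 63 white + 8 color LEDs

// Turn on exactly one LED: shift one bit per LED through the daisy-chained
// registers, then latch so all outputs (and thus MOSFET gates) switch at once.
void select_led(int index)
{
    for (int i = NUM_LEDS - 1; i >= 0; --i) {
        set_pin(PIN_DATA, i == index);
        pulse(PIN_CLOCK);
    }
    pulse(PIN_LATCH);
}
```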

Separating diffuse and specular reflection

There are two different types of reflection: diffuse reflection and specular reflection. Specular reflection is the one we learned about in school: light hits a mirror, and the mirror reflects it with the angle of reflection equal to the angle of incidence.

Diffuse reflection, however, is due to light entering the material, getting scattered around, and at some point exiting the material again. An idealized case of this is Lambertian reflection, where light enters and exits the surface at the same point and is reflected equally in all directions.

Image from Wikipedia under the CC BY-SA 3.0 License

An interesting difference between the two is that if you polarize light before it hits the surface, the specular reflection maintains that polarization while the diffusely reflected light becomes unpolarized. This means we can use two polarization filters, one in front of the light source and one in front of the camera: with the filters crossed, the specular reflection is blocked and only the diffuse part reaches the sensor; with them parallel, both parts pass. Being able to separate the diffuse and specular reflection simplifies the problem considerably.
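In code, once you have one exposure with the filters crossed (diffuse only) and one with them parallel (diffuse plus specular), recovering the specular part is essentially a per-pixel subtraction. A sketch under that assumption, ignoring the attenuation factors of the filters:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Separate the reflection components from a pair of exposures:
//  - 'crossed':  polarizers crossed, specular blocked -> diffuse only
//  - 'parallel': polarizers parallel, both components pass
// The specular part is then (roughly) the difference. Filter attenuation is
// ignored here; a real pipeline would calibrate for it.
std::vector<float> extract_specular(const std::vector<float>& parallel,
                                    const std::vector<float>& crossed)
{
    std::vector<float> specular(parallel.size());
    for (std::size_t i = 0; i < parallel.size(); ++i)
        specular[i] = std::max(0.0f, parallel[i] - crossed[i]);
    return specular;
}
```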

Solving the diffuse render equation

Let’s tackle the easier problem first. For simplicity we assume that our material exhibits Lambertian reflection and does not reflect any light specularly, which is close enough for most materials. This means that the function we need to solve looks like this:

$$ I_D = (L \cdot N) \, C \, I_L $$

where $I_D$ is the intensity of the diffusely reflected light, $L$ is the light vector, $N$ is the normal vector, $C$ is the color (i.e. the albedo) and $I_L$ is the intensity of the incoming light. In simpler terms: the intensity of one rendered pixel is the albedo multiplied by the cosine of the angle between the light and normal vectors, scaled by the intensity of the incoming light. We know $I_D$ (the brightness of one pixel in an image of our scan), $I_L$ and also $L$. Using this information we can find an optimal $C$ and $N$ with the method of our choice. I opted for an iterative algorithm, which is essentially just gradient descent with a few custom features added.
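As a rough sketch, the gradient-descent core for the diffuse case could look like this (single-channel albedo, light intensity folded into the light vectors, and none of the custom extras my solver adds):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static void normalize(Vec3& v) {
    float len = std::sqrt(dot(v, v));
    if (len > 0.0f) v = { v.x / len, v.y / len, v.z / len };
}

// Plain gradient descent on the diffuse model I_i = c * dot(N, L_i) for one
// pixel, minimizing the squared error over all light directions.
void fit_pixel(const std::vector<Vec3>& L, const std::vector<float>& I,
               Vec3& N, float& c, int iterations = 200, float step = 0.01f)
{
    for (int it = 0; it < iterations; ++it) {
        float grad_c = 0.0f;
        Vec3 grad_N = { 0.0f, 0.0f, 0.0f };
        for (std::size_t i = 0; i < L.size(); ++i) {
            float ndotl = dot(N, L[i]);
            if (ndotl <= 0.0f) continue;   // light below the surface contributes nothing
            float r = c * ndotl - I[i];    // residual of the squared-error loss
            grad_c   += 2.0f * r * ndotl;
            grad_N.x += 2.0f * r * c * L[i].x;
            grad_N.y += 2.0f * r * c * L[i].y;
            grad_N.z += 2.0f * r * c * L[i].z;
        }
        c   -= step * grad_c;
        N.x -= step * grad_N.x;
        N.y -= step * grad_N.y;
        N.z -= step * grad_N.z;
        normalize(N); // keep N a valid unit normal after each step
    }
}
```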

Solving the specular part of the render equation works in a similar manner, just with bigger and more complex equations.

Getting a color image

Those who have been paying close attention will have noticed that I’m using a monochrome camera, so all I get are grayscale images. The normal way to get a color image is to just use a color camera. Interestingly, color cameras don’t natively output a color image either, but rather have a color filter array (CFA) in front of their monochrome sensor. As a processing step, the values of neighboring pixels are then merged into a single colored one.

Image from Wikipedia under the CC BY-SA 3.0 License
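To illustrate that merging step, here is the crudest possible version, which simply collapses each 2x2 RGGB block into one RGB pixel (real demosaicing algorithms interpolate instead, to keep the full resolution):

```cpp
#include <cstdint>
#include <vector>

// Simplest possible "demosaic": collapse each 2x2 RGGB Bayer block into one
// RGB pixel, averaging the two green samples. The output has half the
// resolution in each dimension; illustrative sketch only.
void bin_rggb(const std::vector<uint16_t>& raw, int width, int height,
              std::vector<float>& rgb /* size (width/2)*(height/2)*3 */)
{
    rgb.assign((width / 2) * (height / 2) * 3, 0.0f);
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            float r  = raw[y * width + x];
            float g1 = raw[y * width + x + 1];
            float g2 = raw[(y + 1) * width + x];
            float b  = raw[(y + 1) * width + x + 1];
            float* out = &rgb[((y / 2) * (width / 2) + x / 2) * 3];
            out[0] = r;
            out[1] = 0.5f * (g1 + g2);
            out[2] = b;
        }
    }
}
```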

But such a CFA lowers the effective resolution and sensitivity of a camera and is generally not well suited for photometric stereo. So instead I’m using a monochrome camera paired with color LEDs. After taking an image for each LED, I have 8 color measurements per pixel. These need to be mapped to 3 values representing red, green and blue, which is done via a simple matrix multiplication. The matrix is calculated beforehand by a calibration routine that takes images of a color checker and finds the optimal mapping matrix.
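The mapping itself is tiny: per pixel it is just a 3x8 matrix times an 8-vector. A sketch with placeholder types (the actual matrix values come from the calibration):

```cpp
#include <array>

// Map the 8 per-pixel measurements (one per color LED) to linear RGB with a
// 3x8 matrix. The matrix comes from the color checker calibration; the name
// and layout here are illustrative.
using CalibMatrix = std::array<std::array<float, 8>, 3>;

std::array<float, 3> to_rgb(const std::array<float, 8>& samples, const CalibMatrix& M)
{
    std::array<float, 3> rgb = { 0.0f, 0.0f, 0.0f };
    for (int c = 0; c < 3; ++c)
        for (int i = 0; i < 8; ++i)
            rgb[c] += M[c][i] * samples[i];
    return rgb;
}
```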

Here are the spectral response curves of my LEDs, my camera and the theoretical spectral response of the RGB values (the dotted line). The solid line in the bottom plot is the ideal spectral response curve of sRGB.

This is how an image of my color checker looks, with the ideal colors overlaid in the small rectangles (sometimes difficult to see).

I also made a video of the color capture process. If you want to do further reading on color accuracy, this is a great blog post about it: strollswithmydog.com/perfect-color-filter-array/

What are the next steps?

Obviously, building a bigger and better material scanner! I managed to claim a fair bit of unused space at the office, which is the perfect opportunity to build a geodesic dome design with a diameter of 2 m.