SLIT SCANNING

Wild experiments to create my personal avatar





Fun with slitscanning

CREATE YOUR PERSONAL AVATAR


Ever tried to build a 3D model of your own head? At some point you'll want a scanned texture map of your head or entire body. To start, all you need is your phone, a rotating office chair, LightWave 3D, and my free LScript plugin: Stereo Slit Scanner.

The demo below is far from optimized, but that also reveals some pitfalls. Read my recommendations at the bottom for better results.

Even with a simple setup, the raw result can look like this:



The image should be UV-wrapped around your 3D model to create your personal 3D avatar. My first result is nowhere near professional level, but for a start it's much better than I expected, which proves the concept has potential.

To test the result, I applied the 360° image to a 3D model of my head, without facial hair, that had been modeled so far from front view and side view photos.



This cylindrical image mapping could be precise enough if the rotational movement of the talent had been at a constant speed. In this demo I rotated the chair manually, hence the speed variations, but this type of image mapping is a good first check for correct proportions.

Some elements (sideburns, ears, eyes) are slightly off, so let's apply a UV map to move certain areas individually for a better match.
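For a quick proportion check, cylindrical mapping boils down to two formulas: the horizontal coordinate is the angle around the vertical (rotation) axis, and the vertical coordinate is the normalized height. Here is a minimal sketch in Python/NumPy; the function name and the Y-up convention are my own assumptions for illustration, not LightWave's internals:

```python
import numpy as np

def cylindrical_uv(vertices, y_min, y_max):
    """Map 3D vertices (N x 3, Y up) to cylindrical UV coordinates.

    u wraps 0..1 around the vertical axis (the scan direction),
    v runs 0..1 from y_min to y_max (the slit height).
    """
    v3 = np.asarray(vertices, dtype=float)
    x, y, z = v3[:, 0], v3[:, 1], v3[:, 2]
    u = (np.arctan2(z, x) / (2.0 * np.pi)) % 1.0   # angle around the axis
    v = (y - y_min) / (y_max - y_min)              # normalized height
    return np.column_stack([u, v])

# a point straight along +X, halfway up, lands at u=0, v=0.5
print(cylindrical_uv([[1.0, 0.5, 0.0]], 0.0, 1.0))
```

Any vertex sharing the same angle around the axis samples the same slit, which is why constant rotation speed matters: uneven speed warps the u axis.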

IMAGE MAPPING


The 3D model had to be tweaked first: more realistic ears, a corrected height/width ratio of the head, corrected eye positions, the pupil distance matched to mine (PD = 6.5 cm), and raised corners of the mouth to look less mean.

Then I edited the image: removed the collar, blurred excessive blockiness, and added a transition to (slightly) hide the seam between head and body. I'm still working on the top of the head - there should be slightly more hair, without the weird twists at the top end of the scanned image.

The next step is generating the UV map, to apply the cylinder type of image onto the skin.



I'm still using the same scanned image here, but I'm considering an additional photo for the eyes, because they are important for the end result.

While editing the UV map of this personal 3D avatar, I learned that I should rotate the talent much slower during scanning: compressed areas and areas with motion blur are difficult to reposition.

MORPH TARGETS

This is where the fun starts.

At first sight, the facial hair looks realistic enough, but compared to the scanned talent it lacks volume. This is especially obvious with the moustache. I didn't want to ruin the model, so I generated separate Morph maps to restore the volume of the moustache and beard individually, for maximum control. I wouldn't recommend a separate beard object though, as it's difficult to blend with the face model.

The length of the beard tends to vary over time.

Luckily, the Morph targets can be adjusted for the occasion, in percentages, including negative and extreme positive values. Or the beard could be shaved, reshaped, or dyed in fluorescent colors - just in case you want to try different creative models and slightly worry your spouse.
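Morph targets are typically stored as per-vertex offsets from the base mesh, so a percentage - including negative or extreme positive values - is just a linear, extrapolating blend. A rough sketch of the idea in Python/NumPy; names like `apply_morphs` are hypothetical and not LightWave's API:

```python
import numpy as np

def apply_morphs(base, morph_deltas, weights):
    """Blend morph targets onto a base mesh.

    base: (N, 3) rest positions; morph_deltas: dict name -> (N, 3) offsets
    relative to the base; weights: dict name -> percentage (100 = full).
    Negative and >100% values extrapolate, as described above.
    """
    out = np.array(base, dtype=float)
    for name, delta in morph_deltas.items():
        w = weights.get(name, 0.0) / 100.0
        out += w * np.asarray(delta, dtype=float)
    return out

base = np.zeros((1, 3))
deltas = {"moustache": np.array([[0.0, 0.0, 1.0]])}
print(apply_morphs(base, deltas, {"moustache": 150.0}))  # extrapolated: z = 1.5
```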



At this point, the 3D model is still not perfect. To mention a few issues that need to be addressed:
  1. the eyebrows should be stretched a bit more to the sides;
  2. the visible skin below the beard is stretched in a way that makes the animated morph look creepy;
  3. the hair on top is distorted and missing (yes, there is hair);
  4. the beard below the ears is not well defined compared to the rest;
  5. the hairline at the back is weird;
  6. the blend between the scanned image and the body material should be more subtle;
  7. the eyelashes are absent and incorrectly projected on the eyelids;
  8. the definition of the eyes and eyelids needs improvement;
  9. the depth of the sideburns needs to be toned down.

Also, the ears and some parts of the skin could benefit from SubSurface Scattering, or SSS: a rendering technique that lets translucent parts light up like wax, milk, marble, leaves, or ... skin. Hold your fingers in front of a light source, and you'll understand what I mean.

Surprisingly, even with this low-resolution scan you can notice the imprints of my 10-gram glasses on the nose and in the hair, just above the ears.
This raises the question: how detailed is detailed enough?
Actually, it's starting to look like something useful - for instance, a background or crowd character for film, or a game character.

Slit scanning is a cheap alternative to industrial-grade 3D scanning equipment, in terms of investment and overhead costs. Maybe it is not good enough for the next Terminator movie, but the results can be convincing, especially after optimization.

If you're interested in capturing a detailed 3D shape of your talent, search for open source photogrammetry software, and take a closer look at Colmap, Meshroom, Micmac, Multi-View Environment, OpenMVG, Regard3D, and VisualSFM.

Production Steps

mount your phone

Place your mobile phone in a cradle or a clamp, in front of your rotating chair. Use tape or whatever keeps it steady and exactly straight.

provide diffused light

Shadows on your 3D models are calculated at render time, so make sure the head (or full body) is evenly lit with plenty of light.

setup your camera and recording format

Use a progressive video format, a high frame rate, a high resolution, and HDR mode.

add a rattler to the chair

For audible feedback on the rotation speed variations.

place your talent in front of the camera

Take off glasses and other accessories.
Sit straight up on the swivel chair.
Keep your head horizontal and position your body and head on the rotation axis (rotate the chair much faster to get a better feeling for the optimal position).

While recording, don't move your facial muscles, eyes, neck, or head, and don't let your center of gravity drift away from the rotation axis.

Note that this was a quick and easy video recording. The frame coverage should have been optimized for improved vertical resolution (by a factor of 3), and I unintentionally left the automatic functions on (AF, AWB, Stabilizer, etc.), so there is plenty of room for improvement.

start recording

Make a neutral pose and stare into the distance - don't lock onto an object, because that will make your eyes move during the rotation. Now slowly rotate the chair at a constant speed, 360° in 5 seconds or more.

edit recording

Transfer the video file to your computer, into your favorite NLE.
Remove the audio track - you don't need it, unless you're using it as a speed reference.

Cut a section from your video clip and make sure it visually loops correctly. If you forgot to disable the Auto White Balance setting (like me), you may want to crossfade between two copies of your clip with corrected light temperatures. Neutralize any movement deviation, and apply a vertical crop to get rid of unused areas.

export video stream

Before you export the result to a file, you need to know that my Stereo Slit Scanner plugin is an Image Filter on top of the video stream: the slits are exported by the plugin into that same stream. This makes the width of your video stream a limiting factor for the temporal resolution (horizontal, in this example), so don't apply a horizontal crop that makes the video source too narrow.

So, if your video loop has a duration of 5 seconds at 60 fps, the Stereo Slit Scanner plugin generates 300 slits (each 1 px wide), placed next to each other.
If you want to increase the horizontal resolution, you might want to experiment with slightly wider slits, for instance 3 to 15 pixels wide.
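The core of the technique can be sketched in a few lines: copy one column (or a few columns) from each frame and paste them side by side. This Python/NumPy sketch is my own illustration of the principle, not the plugin's actual LScript code:

```python
import numpy as np

def slit_scan(frames, slit_x, slit_width=1):
    """Assemble a slit-scan image from a sequence of video frames.

    frames: iterable of (H, W, 3) arrays; slit_x: column of the slit
    (the part of the talent exactly facing the camera); slit_width:
    pixels copied per frame. Output width = len(frames) * slit_width.
    """
    columns = [f[:, slit_x:slit_x + slit_width] for f in frames]
    return np.hstack(columns)

# 10 tiny dummy frames with a 1 px slit: output is 10 px wide
frames = [np.full((8, 16, 3), i, dtype=np.uint8) for i in range(10)]
scan = slit_scan(frames, slit_x=8)
print(scan.shape)  # (8, 10, 3)
```

With a 5-second loop at 60 fps and a 1 px slit, the output would be 300 px wide, matching the numbers above; a wider slit multiplies the output width accordingly.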

In a future version of the plugin, I'll try to add a crossfade filter between the slits. I'm also thinking about smart algorithms to improve image quality.

start LightWave Layout

Load your video export as a Background Image. Check the file properties in the Source tab, match the Preview Range to the duration of the video stream, verify Color Space. Use the Image Editor (F6), select your video stream, open the Processing tab, apply the Stereo Slit Scanner plugin, double click its name to open the plugin dialog.

Verify that the selected Slit is set to Center, or choose the Left or Right Eye and adjust the Stereo-3D Depth (%) or Slit Position (px) to select the correct slit - the position must match the part of your talent that exactly faces the camera. Note that the vertical green line in the thumbnail marks the slit position. This line is also visible in the F9 Render for as long as you haven't filled the Frame Buffer with the Scan Slits button.

start slit scanning

Press button Init Camera to change the vertical camera resolution to match the vertical resolution of your video stream, and change the horizontal camera resolution to match the number of frames in the Preview Range, multiplied by the Slit Width.

After scanning, you may want to manually multiply the horizontal camera resolution by an even factor for preview purposes. Note that this doesn't add real resolution to your render output.

Note that the Projection Phase setting defaults to zero. A value of 0.5 shifts the original left side to the right side of the final result and fills the left side with the original right side. This makes it easier to judge the seam.
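Because the left and right edges of a 360° scan show the same angle, shifting the Projection Phase amounts to a circular horizontal shift of the image. A sketch of the idea (my own illustration, not the plugin code):

```python
import numpy as np

def shift_phase(image, phase):
    """Circularly shift a 360-degree scan horizontally.

    phase 0.5 moves the original left half to the right and wraps the
    right half around to the left, putting the seam in the center.
    """
    w = image.shape[1]
    return np.roll(image, int(round(phase * w)), axis=1)

img = np.arange(8).reshape(1, 8)  # columns 0..7 stand in for a scan
print(shift_phase(img, 0.5))      # [[4 5 6 7 0 1 2 3]]
```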

You may add the Full Precision Blur filter to blur the pixels. Alternatively, use a Blur Brush in Krita or a Photoshop equivalent to selectively blur the worst aliased areas in the right direction, fill the gaps, make some parts (like the ears) more symmetrical, or overwrite details (like the ears) with a section taken from a regular 2D photo, shot from the same camera angle under the same lighting conditions.

Now press the Scan Slits button and watch how the timeline is played back while each slit of the frames in the Preview Range is sequentially copied to the internal Frame Buffer. You may press the Ok button while it is collecting its data.

At the end of your timeline Preview Range, press F9 to render that frame. The content of the internal Frame Buffer is now projected on top of the Background Image.


If you want to, you can change the Projection Phase and re-render to move the seam to the center.
You don't need to press button Scan Slits again for this adjustment.

Analyse and improve

You may want to check and adjust for rotation speed, motion linearity, coverage, angle, slit position, temporal resolution, motion blur, seam position, etc.


Evaluate the result critically, from all sides.
  1. The vertical resolution could be better if I had optimized my position in the frame.
  2. The horizontal resolution could be better if I had rotated much slower.
  3. My left ear looks compressed because of speed variations; this wouldn't be a huge problem if the absolute speed had been much slower. The beard below the left ear lacks definition, which looks like motion blur, but here I would recommend slightly more camera tilt to capture the area below the chin with more camera sensor pixels. In that case I would use a still for the hair area on top of the head.
  4. To reduce speed variations, a motorized platform (for the chair, the standing talent, or the camera) would have done a better job.
  5. A more advanced, and probably cheaper, method of registering speed variations is adding a pulse sensor to the chair: a mechanical rattle or an electronic sensor that generates an audio burst for each degree it is turned. Send the audio burst to the camera for live recording, by wire or through a speaker. During post-processing, this kind of reference can be used to gradually speed up or slow down the source file.
  6. The top of my head is slightly distorted due to mistakes in manual correction of the vertical position over time.
  7. I should have turned off all automatic functions (AF, AWB, Stabilizer, etc).
  8. I should have used a zoom lens to create a flat image with minimal depth information.
  9. To remove shadows completely, I should have used a donut shaped light that surrounds the camera lens.
  10. I should have taken off my shirt because it masks areas that may be important and causes the seam of the skin to sneak up my chin.
  11. I should have taken stills from the rest of the body to end up with the same light properties, including white balance and shadow fall-off.

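The pulse-sensor idea in point 5 can be sketched as a time-remap: each pulse marks an equal rotation increment, so interpolating between pulse timestamps tells you which source time to sample for each output frame of a constant-speed rotation. A hypothetical sketch in Python/NumPy:

```python
import numpy as np

def retime_from_pulses(pulse_times, n_out):
    """Build a time-remap from rotation pulse timestamps.

    Each pulse marks an equal rotation increment, so uneven pulse
    spacing reveals speed variation. Returns, for n_out evenly spaced
    rotation angles, the source time to sample, so that the retimed
    clip rotates at constant speed.
    """
    t = np.asarray(pulse_times, dtype=float)
    angles = np.linspace(0.0, 1.0, len(t))  # cumulative rotation, 0..1
    target = np.linspace(0.0, 1.0, n_out)   # evenly spaced output angles
    return np.interp(target, angles, t)     # source time per output frame

# chair slows down: pulses drift apart toward the end of the rotation
times = retime_from_pulses([0.0, 0.8, 2.0, 3.6, 5.0], 9)
print(times)
```

The resulting times could drive a variable-speed ramp in your NLE, stretching fast sections and compressing slow ones.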
TO BE CONTINUED


© Copyrights 2005- by V.D. Mesman, Amsterdam, The Netherlands, EU. All rights reserved.