Yesterday one of my friends at Autodesk mentioned 123D Catch. It's a free program they make, similar in some ways to Photosynth by Microsoft in that it analyzes photos you take of an object or place and stitches them together to extract 3D spatial info. In Photosynth the goal is to make panoramas (which can be used as images for reflection maps or cheated HDRI renders, btw). With 123D Catch your photos are analyzed in the cloud, and then you are emailed a file that contains a 3D model generated from the images.
To test it out, I decided to take my Samurai Jack figurine and run it through, without tweaking any of the defaults, and this was the result.
This object probably makes for a good test because the large flat white areas are tricky to triangulate into 3D space. My solution was to put pieces of masking tape on the surface to give the software extra contour information. From the moment I stepped outside to photograph it on my cellphone until I had the model in a 3D modeling program for tweaking, I did about 20 minutes of work (not including the cloud processing time, which was about 5 or 6 minutes).
I took about 40 pictures, all on my cellphone, walking around the model a few times. After I uploaded them to their server through the application, a few minutes later I was emailed a file that contained:
I tried to export the model to Maya, but oddly enough none of the formats that this Autodesk program exports seemed compatible with Autodesk Maya, go figure. Blender to the rescue!
Blender was able to read the .obj just fine, and it loaded the texture map that is exported alongside it. With its ZBrush-like sculpting tools, one could spend a few minutes polishing the surface and filling dents to bring back the details that were lost. Remember that this is just the default processing. There are settings in 123D Catch that let you stitch manually by choosing consistent features across several images, which should produce a better result, but I'm happy with what I got for so little effort. If I had a maquette I wanted to digitize, I would definitely give this a shot. Check out their learning tutorials.
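Part of why the .obj handoff works so smoothly is that Wavefront OBJ is a plain-text format: vertex positions, UV coordinates, and faces are just tagged lines, with a companion .mtl file pointing at the texture image. A minimal sketch of what's inside (the sample mesh below is hypothetical, not from the actual scan):

```python
# Peek inside a Wavefront .obj like the one 123D Catch emails back.
# The sample data is a hypothetical single triangle; a real scan has
# thousands of vertices and faces.
obj_text = """\
# exported mesh (hypothetical sample)
mtllib model.mtl
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
f 1/1 2/2 3/3
"""

def summarize_obj(text):
    """Count vertices (v), UV coordinates (vt), and faces (f)."""
    counts = {"v": 0, "vt": 0, "f": 0}
    for line in text.splitlines():
        tag = line.split(" ", 1)[0]
        if tag in counts:
            counts[tag] += 1
    return counts

print(summarize_obj(obj_text))  # {'v': 3, 'vt': 3, 'f': 1}
```

Because the format is this simple, just about any 3D package (Blender, MeshLab, even a short script) can read the mesh without caring which program wrote it.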
Overall, I find this technology promising. A year ago I tried to follow a few tutorials on hacking Photosynth to get a file you could bring into MeshLab to skin the point clouds it generates. It was a frustrating, failed attempt. My hope is that the next step is a 123D Catch iPhone app that lets you shoot the photos and upload them all from the phone, so that a scanned 3D model is waiting by the time you get to your desktop. Pretty please, Autodesk?