Posts tagged 3D
These past couple of days, I’ve finally gotten some time to work through the tremendous backlog of photos I have sitting around from several trips. Among them are sets of photos, numbering in the hundreds, destined for Photosynth. A number of my friends have expressed interest in what the software is, what it does, how it works, and how to take photos best suited for processing, so now seems like a great opportunity to go over the basics.
What Photosynth Does
First of all, Photosynth creates a 3D point-cloud model of an object or scene from a set of photos. Depending on the scene’s complexity, the number of photos might be in the tens, or in the hundreds for sufficiently complicated scenes. It all depends on the subject and how much time you have on your hands.
Perhaps the best way to explain it is to see it. The following is a synth of the Pantheon that I recently finished processing, constructed from photos my brother and I took with a D80 and a D90:
How it does it
The software uses feature extraction to identify textures in parts of each image that are similar, then tries to fit the corresponding features from each image together to create a perspective-correct view. The process is extremely computationally intensive, but it only needs to run on the initial set of images to determine each camera’s position and orientation. The beauty, of course, is that reconstructing the scene requires no human input; it’s entirely computationally derived.
I won’t claim to be the most qualified to talk about it, but it does use feature extraction and some fancy fitting to work. An important note is that the software matches unique features in texture, not necessarily structure. This is why subjects with lots of unique patterns synth extremely well, while others don’t.
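The texture-matching idea can be sketched in miniature. The snippet below is a toy illustration of my own, not Photosynth’s actual pipeline (which uses far more sophisticated feature descriptors and multi-view fitting): it lifts a small texture patch from one synthetic “photo” and locates the same texture in a shifted copy using normalized cross-correlation. All names and numbers here are illustrative assumptions.

```python
import numpy as np

def find_patch(patch, image):
    """Slide `patch` over `image` and return the (row, col) offset
    where normalized cross-correlation is highest."""
    ph, pw = patch.shape
    ih, iw = image.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-9)
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - ph + 1):
        for c in range(iw - pw + 1):
            window = image[r:r + ph, c:c + pw]
            w = (window - window.mean()) / (window.std() + 1e-9)
            score = (p * w).mean()           # correlation of the two patches
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

rng = np.random.default_rng(0)
img_a = rng.random((40, 40))                  # a richly textured "photo"
img_b = np.roll(img_a, (3, 5), axis=(0, 1))   # same scene, camera shifted

patch = img_a[10:18, 10:18]                   # a distinctive texture in photo A
row, col = find_patch(patch, img_b)
print(row - 10, col - 10)                     # recovers the shift: 3 5
```

With a richly textured subject, the correlation peak is sharp and unambiguous, which is exactly why highly patterned scenes synth well while flat, uniformly colored surfaces don’t: every window of a blank wall matches every other window about equally.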
Creating the actual synth is the easiest step: just create an account, install the software, add your photos, and go.
The real work in that process is creating proper tags and descriptions, and then adding geotagging data, either from the photos themselves or later in the web interface. Doing so is a great way to get your synth recognized.
How to take the best shots
If I’m taking photos of a single object, something like this column, for example, I’ll try to stay an equal distance from it and take photos in a steady progression around the subject.
The important thing to keep in mind is that although Photosynth can extrapolate the point cloud from features, it cannot extrapolate images you haven’t given it. Simply put, if you want the nice scrubber bar that circles around an object, you’ll need to take the requisite photos to make it. I find that pacing steadily around the subject while shooting at regular intervals works best.
- Take photos of subjects from a variety of angles; if you can, from every possible angle, spaced evenly.
- Take photos from a single perspective pointing in multiple directions. I find that spinning around, taking photos toward each corner of the room, works marvelously well, even though you look slightly special in the process.
- The most important thing to keep in mind is that quantity is generally on your side, so long as there is both variety and overlap in the shots.
- Choose subjects with a variety of textures and features. Things like the Sistine Chapel synth really well because unique texture is everywhere, while cars generally don’t because of their uniform, solid-color surfaces.
- Take wide-angle shots of the entire room from the four corners first, then one from the center. Afterwards, take close-up photos of individual objects and features; these are the things people will want to focus on when viewing later. Good examples are the pictures on a museum wall or specific fresco sections on a large wall.
Some of my favorite Photosynth creations are:
- Piazza San Marco: Link
- Sistine Chapel: Link
- Artemision Bronze: Link
- Salpointe Graduation 2009: Link
- Replica of Equestrian Statue of Marcus Aurelius: Link
- Library of Celsus: Link
- Pantheon: Link
For comparison, here are some that didn’t turn out so well:
Although I couldn’t make it to CES this year, I have been following it pretty closely through liveblogs, news items, press releases, and, of course, live webcasts. Unsurprisingly, the main highlight of the conference this year is the popularization of 3D media: displays, cameras, movies, and all the compute power to render, edit, and distribute it.
What’s become immediately obvious, however, are the challenges this new format will face before becoming widespread. The most glaring of these is that all the 3D I’ve been able to see so far is this:
No, not Bono, or how content providers hope this will sell yet another copy of media we already have, or how 3D is somehow the end of movie piracy (this time in a 3D format). Parallax.
Of course, parallax is fundamental to how 3D displays work: you present a different image to each eye, with the subject shifted in proportion to how much depth should be perceived. The chief problem adoption will face, I think, is that you ultimately need to see 3D to become a fan of 3D. In essence, it’s impossible to convey what a 3D display looks like (especially in print or on a 2D monitor) until you already have one.
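That depth-proportional shift falls out of similar triangles. As a sketch, for a viewer whose eyes are separated by e, sitting a distance V from the screen, a virtual point at distance D appears in the left- and right-eye images separated on screen by P = e(D − V)/D. The distances below are my own illustrative assumptions, not figures from any display spec:

```python
def screen_parallax(d_object, d_screen=2.0, eye_sep=0.065):
    """On-screen separation (meters) between the left- and right-eye
    projections of a point d_object meters from the viewer, for a
    screen d_screen meters away and eyes eye_sep meters apart."""
    return eye_sep * (d_object - d_screen) / d_object

print(screen_parallax(2.0))   # zero parallax: point appears at the screen plane
print(screen_parallax(4.0))   # positive parallax: point appears behind the screen
print(screen_parallax(1.0))   # negative parallax: point appears to pop out in front
```

Note that as d_object grows toward infinity, the parallax approaches the full eye separation, which is why distant backgrounds are what stress a display’s ability to keep the two images cleanly separated.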
Forget the primary hurdle to 3D, the glasses (unless you have a very special 3D monitor that doesn’t require them because it uses voxels or a surface pattern to create the parallax). It seems to me that, already, you’re going to have to either go see a 3D movie or find a very lucky friend who has a 3D monitor to make an educated decision for yourself. And although the industry seems to have already decided this is the next big trend, consumers must first be convinced it’s the way to go.