Paul Debevec earned degrees in Math and Computer Engineering at the University of Michigan in 1992 and a Ph.D. in Computer Science at UC Berkeley in 1996. He began doing research in computer graphics and vision in 1991 by animating an image-based 3D model of a Chevette from photographs.
Debevec's Ph.D. thesis presented Façade, a system for creating virtual cinematography of architectural scenes using new techniques in photogrammetry and image-based rendering. Using Façade he directed a photoreal fly-around of the Berkeley campus for his 1997 film The Campanile Movie, whose techniques were later used to create the Academy Award-winning virtual backgrounds for the "bullet time" shots in the 1999 film The Matrix.
This approach can be used to recover models for use in either geometry-based or image-based rendering systems. This work presents results that demonstrate the approach's ability to create realistic renderings of architectural scenes from viewpoints far from the original photographs. This thesis concludes with a presentation of how these modeling and rendering techniques were used to create the interactive art installation Rouen Revisited, presented at the SIGGRAPH '96 art show.
Paul E. Debevec, Camillo J. Taylor, and Jitendra Malik. Modeling and Rendering Architecture from Photographs: A Hybrid Geometry- and Image-Based Approach. Proceedings of SIGGRAPH 96, Computer Graphics Proceedings, Annual Conference Series, pp. 11-20 (August 1996, New Orleans, Louisiana). Addison Wesley. Edited by Holly Rushmeier. ISBN 0-201-94800-1.
Abstract
We present a new
approach for modeling and rendering existing architectural scenes from a
sparse set of still photographs. Our modeling approach, which combines
both geometry-based and image-based techniques, has two components. The
first component is a photogrammetric modeling method which facilitates the
recovery of the basic geometry of the photographed scene. Our
photogrammetric modeling approach is effective, convenient, and robust
because it exploits the constraints that are characteristic of
architectural scenes. The second component is a model-based stereo
algorithm, which recovers how the real scene deviates from the basic
model. By making use of the model, our stereo technique robustly recovers
accurate depth from widely-spaced image pairs. Consequently, our approach
can model large architectural environments with far fewer photographs than
current image-based modeling approaches. For producing renderings, we
present view-dependent texture mapping, a method of compositing multiple
views of a scene that better simulates geometric detail on basic models.
Our approach can be used to recover models for use in either
geometry-based or image-based rendering systems. We present results that
demonstrate our approach's ability to create realistic renderings of
architectural scenes from viewpoints far from the original
photographs.
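The view-dependent texture mapping the abstract describes composites several photographs of a surface point, favoring the photographs taken from viewpoints closest to the novel viewpoint. The sketch below is only an illustration of that idea, not the paper's actual algorithm: the cosine-similarity weighting, the function name `vdtm_blend`, and the fallback behavior are all assumptions.

```python
import math


def vdtm_blend(view_dirs, view_colors, novel_dir):
    """Blend per-point colors from several source views.

    Illustrative sketch of view-dependent texture mapping (assumed
    weighting scheme): each source view gets a weight equal to the
    cosine similarity between its viewing direction and the novel
    viewing direction, clamped at zero so views facing away from the
    novel viewpoint contribute nothing.
    """
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return tuple(x / n for x in v)

    nd = normalize(novel_dir)
    # Weight = clamped cosine between each source direction and the
    # novel direction.
    weights = [max(sum(a * b for a, b in zip(normalize(d), nd)), 0.0)
               for d in view_dirs]
    total = sum(weights)
    if total == 0.0:
        # No source view faces the novel direction; fall back to a
        # plain average (an assumption, for robustness of the sketch).
        weights = [1.0] * len(view_dirs)
        total = float(len(view_dirs))
    weights = [w / total for w in weights]
    # Weighted sum of the RGB colors sampled from each photograph.
    return tuple(sum(w * c[i] for w, c in zip(weights, view_colors))
                 for i in range(3))
```

For example, with source views along the x and y axes, a novel view along x takes its color entirely from the first photograph, while a novel view halfway between them blends the two equally.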
Online Paper (PDF) / Web Site / BibTeX Entry (the Façade paper)
Maintained by John Loomis, last updated 21 May 2005