Deriving Lights from Pixels
by Jaemin Lee and Ergun Akleman
Integrating computer-generated building images with real photographs can be an effective addition to the architectural design process. Such composited images can show how a proposed structure will change the existing landscape.
Polished compositing presents a seamless merging of real and computer-modeled components, but fitting a model into a scene credibly can be difficult. Rendering a building model so that it accurately matches the lighting in a real-world photograph calls for real-world illumination information. One must also know the photo's camera position and parameters, along with the shapes and material properties of the real objects in the scene.
Some architects are already using composited images, though the quality of such images tends to be limited. Most firms cannot afford to employ the computer graphics experts who know how to perform the requisite calculations and graphic manipulations.
To simplify the process, we have developed a user-friendly and practical method for obtaining the needed real-world data from photographs. Our method is based on a simple and easily constructible device in conjunction with software we have written.
With our system, users can determine the orientations, colors, and intensities of light sources, as well as the surface colors of objects. Users can then apply this information when rendering designed structures so that they fit convincingly into the composite image.
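As a rough illustration of the kind of calculation involved (not the authors' actual device or procedure), the sketch below estimates the direction of a single directional light, such as the sun, from the shadow cast by a vertical stick of known height, and approximates the light's color from pixel samples of a matte white card. The stick height, shadow measurements, and card reflectance are hypothetical stand-in values.

```python
import math

def sun_direction_from_shadow(stick_height, shadow_dx, shadow_dy):
    """Estimate a unit vector pointing toward the sun from the shadow of a
    vertical stick (gnomon). shadow_dx/shadow_dy are the shadow-tip offsets
    on the ground plane; z is up. All lengths share one unit."""
    shadow_len = math.hypot(shadow_dx, shadow_dy)
    # Elevation angle: tan(elevation) = stick height / shadow length.
    elevation = math.atan2(stick_height, shadow_len)
    # Azimuth: the sun lies in the direction opposite the shadow.
    azimuth = math.atan2(-shadow_dy, -shadow_dx)
    return (math.cos(elevation) * math.cos(azimuth),
            math.cos(elevation) * math.sin(azimuth),
            math.sin(elevation))

def light_color_from_white_card(card_pixels_rgb, card_reflectance=0.9):
    """Average RGB samples taken from a matte white card facing the light and
    divide by its reflectance to approximate the incident light color.
    Ignores camera response and ambient illumination."""
    n = len(card_pixels_rgb)
    avg = [sum(p[c] for p in card_pixels_rgb) / n for c in range(3)]
    return [channel / card_reflectance for channel in avg]

if __name__ == "__main__":
    # Hypothetical measurements: a 10 cm stick casting a 17.3 cm shadow
    # gives roughly a 30-degree solar elevation.
    print(sun_direction_from_shadow(10.0, 17.3, 0.0))
    print(light_color_from_white_card([(220, 210, 190), (224, 212, 188)]))
```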
In a frame from an animation, a robot, its refraction, and its shadow are computer-generated and composited over real video.
Image: Han Lei
The researchers' device, photographed under outdoor illumination, gives a simple measure of sun position.
Photo: Jaemin Lee