TOOLS

    Deriving Lights from Pixels

    continued

    This "shadow-based" method, which we have tested with students, can be useful in architectural practice because it does not require specific expertise in computer graphics or mathematics. It uses photographs of a simple device constructed by placing a small cylindrical stick on the center of a white plate. Because the shape of the device is known, it is easy to create a digital model of it. Comparing the model of the device and its cast shadows to its photograph gives us a baseline for setting parameters for the rendering software.

    User-Assisted Software Procedure

    Our program, called "LightRecon," computes locations and intensities of light sources based on pixel values selected by users from a photograph of a real scene. We also use the commercial rendering application Maya to recover camera information by matching a virtual scene with the photograph.

    The virtual scene contains a computer-generated model of our physical device and light sources that take lighting data from LightRecon to illuminate the scene. We also use Maya to render the scene with recovered illumination information.

    To create a field record, we first collect survey data from the real scene: the lighting conditions, surface information for objects in the scene, the camera height and angle, the dimensions of the device, the distance between the camera and the device, the film format and size, the lens size, and exposure information.
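
    A field record of this kind maps naturally onto a small data structure. The sketch below is our own illustration; the field names are assumptions, not a documented LightRecon format.

        from dataclasses import dataclass

        @dataclass
        class FieldRecord:
            """Survey data for one real scene (illustrative names)."""
            lighting_condition: str    # e.g. "studio, two lamps"
            surface_notes: str         # surface information of scene objects
            camera_height_m: float
            camera_angle_deg: float
            device_size_cm: tuple      # (plate diameter, stick height)
            camera_to_device_m: float
            film_format: str           # e.g. "35mm"
            lens_mm: float
            exposure: str              # e.g. "1/60 s at f/5.6, ISO 100"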

    Then we photograph the real scene without any reference objects for later compositing with the computer-generated objects. We also photograph: a) a gazing ball to obtain additional information about the environment, b) the device for illumination data, and c) material samples so we can match the color of synthetic and real objects.

    To reconstruct camera parameters, we build a Maya model of the real scene based on the survey data. In the scene, we model our device to help determine illumination data and the camera orientation.

    We then run LightRecon to obtain the scene's lighting information, such as the intensities and locations of the lights. The software lets us select pixels in a photograph, on which it performs its calculations. To run LightRecon, we need a scene data file from the 3D replica, a text file containing information about the matched camera and the device. As its final output, LightRecon creates a text file providing the lighting information. We supply a scene data template, so users can simply insert the required information into the file by following its instructions.
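
    The article does not specify the scene data file format; purely as an illustration, a plain key-value text file would fit the description, and reading one takes only a few lines of Python. The keys shown here are made up.

        def read_scene_data(path):
            """Parse a simple "key = value" scene data file.
            The layout and key names are illustrative assumptions."""
            data = {}
            with open(path) as f:
                for line in f:
                    line = line.split("#", 1)[0].strip()  # drop comments
                    if not line:
                        continue
                    key, _, value = line.partition("=")
                    data[key.strip()] = value.strip()
            return data

        # Such a file might contain, for example:
        #   camera_position = 0.0 1.5 -2.0   # meters
        #   camera_rotation = 15.0 0.0 0.0   # degrees
        #   focal_length_mm = 50
        #   stick_height_cm = 10.0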

    Further Refinements

    To improve the usefulness of the initial intensities of the light sources that we get from LightRecon, we readjust them based on pixel values of both a rendered image and a background photograph.

    First we render the virtual model of the device with the initial lighting intensities. Then we open both the rendered image and the background photograph in Adobe Photoshop or a similar application and sample a pixel from each image, specifically from the white surface of the device. For each color channel, we compute the ratio between the two selected pixel values and multiply the initial intensities by these ratios to obtain new intensities for the light sources.

    Finally, we render the scene again with the updated illumination information and repeat the process, if necessary, until the ratio values approach unity.
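
    In code, one pass of this adjustment is just a per-channel ratio. The sketch below follows the procedure as described; the names are our own, since LightRecon's internals are not published here.

        def refine_intensities(intensities, photo_pixel, rendered_pixel,
                               tolerance=0.02):
            """One refinement pass: scale per-channel light intensities
            by the ratio of a photograph pixel to the corresponding
            rendered pixel, both sampled from the device's white surface.
            Returns the new intensities and whether all ratios fall
            within `tolerance` of unity."""
            ratios = [p / r for p, r in zip(photo_pixel, rendered_pixel)]
            updated = tuple(i * k for i, k in zip(intensities, ratios))
            converged = all(abs(k - 1.0) <= tolerance for k in ratios)
            return updated, converged

        # Example: the render is slightly too dark in red and blue.
        intensities = (1.0, 1.0, 1.0)
        photo = (236, 231, 228)    # pixel from the background photograph
        render = (220, 230, 214)   # same spot in the rendered image
        intensities, done = refine_intensities(intensities, photo, render)
        # Re-render with the new intensities; repeat until done is True.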

    A similar iterative method enables users to determine colors for synthetic objects based on the photographs of material samples taken in the real scene. The procedure takes into account the lighting environment of the real scene, which affects the color rendition of the material samples. By comparing the rendered and photographed colors, LightRecon computes parameters for modifying the modeled colors.
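
    As an illustrative sketch (again, not the published computation), the same ratio idea can correct a modeled material color against its photographed sample:

        def correct_material_color(modeled, photo_sample, rendered_sample):
            """Scale each channel of a modeled color by the ratio of the
            photographed sample to the rendered sample; iterate with
            re-rendering, as for the light intensities."""
            return tuple(m * (p / r) for m, p, r
                         in zip(modeled, photo_sample, rendered_sample))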

    This method provides users with an accessible process for recovering illumination information. In addition, the method helps artists understand the overall concept of image-based lighting techniques. We plan to further develop LightRecon before making it available to others.


    Jaemin Lee and Ergun Akleman are with the Visualization Sciences Program of the Department of Architecture at Texas A&M University.

    A longer version of this article first appeared in Thresholds: Design, Research, Education and Practice in the Space between the Physical and the Virtual, Proceedings of the 2002 Annual Conference of the Association for Computer-Aided Design in Architecture, edited by George Proctor.

     

    AW

    ArchWeek Image

    The researchers' device, photographed under studio lighting. The blue squares on the white surface help identify the orientation of the camera.
    Photo: Jaemin Lee

    ArchWeek Image

    One of four images required for a field record.
    Photo: Jaemin Lee

    ArchWeek Image

    Photograph of a gazing ball, required for a field record, for obtaining information about the environment.
    Photo: Jaemin Lee

    ArchWeek Image

    Photo of material samples, part of the documentation for a field record.
    Photo: Jaemin Lee

    ArchWeek Image

    Example of a computer-generated object composited into the real photograph. Note the complex shadows due to the studio lighting.
    Photo: Jaemin Lee

    ArchWeek Image

    An example of a photographed background image for an outdoor scene.
    Photo: Jaemin Lee

    ArchWeek Image

    The same outdoor scene composited with a computer-generated object.
    Image: Han Lei

     


     