Image-Based Rendering for Non-Diffuse Synthetic Scenes

Dani Lischinski and Ari Rappoport

Most current image-based rendering methods operate under the assumption that all of the visible surfaces in the scene are opaque, ideal diffuse (Lambertian) reflectors. This paper is concerned with image-based rendering of non-diffuse synthetic scenes. We introduce a new family of image-based scene representations and describe corresponding image-based rendering algorithms that are capable of handling general synthetic scenes containing not only diffuse reflectors, but also specular and glossy objects.

Our image-based representation is based on layered depth images. It represents simultaneously and separately both view-independent scene information and view-dependent appearance information. The view-dependent information may be either extracted directly from our data structures or evaluated procedurally using an image-based analogue of ray tracing. We describe image-based rendering algorithms that recombine the two components in a manner that produces a good approximation to the correct image from any viewing position. In addition to extending image-based rendering to non-diffuse synthetic scenes, this paper makes an important methodological contribution: it places image-based rendering, light field rendering, and volume graphics in a common framework of discrete raster-based scene representations.
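To make the separation of the two components concrete, the following is a minimal C++ sketch of a layered depth image whose samples carry a view-independent diffuse term alongside a view-dependent specular term. All type and member names here (DepthSample, LayeredDepthImage, shade) are illustrative assumptions, not the paper's actual data structures or algorithms.

```cpp
// Hypothetical sketch of a layered depth image (LDI) that keeps the
// view-independent (diffuse) component separate from the view-dependent
// (specular/glossy) component. Names and fields are illustrative only.
#include <vector>

struct RGB { float r, g, b; };

// One depth sample along a pixel's ray.
struct DepthSample {
    float depth;    // distance from the LDI's reference camera
    RGB   diffuse;  // view-independent shading, reusable from any viewpoint
    RGB   specular; // view-dependent term, valid only for the stored view;
                    // would be re-evaluated (e.g., via an image-based
                    // analogue of ray tracing) for a new viewpoint
};

// A layered depth image: each pixel stores all surfaces its ray pierces,
// not just the nearest one, so surfaces newly exposed by a change of
// viewpoint remain available.
struct LayeredDepthImage {
    int width  = 0;
    int height = 0;
    std::vector<std::vector<DepthSample>> layers; // width * height lists

    const std::vector<DepthSample>& at(int x, int y) const {
        return layers[y * width + x];
    }
};

// Recombining the two components for a sample: the diffuse part is
// reused as-is, while the view-dependent part is supplied per target view.
inline RGB shade(const DepthSample& s, const RGB& viewDependent) {
    return { s.diffuse.r + viewDependent.r,
             s.diffuse.g + viewDependent.g,
             s.diffuse.b + viewDependent.b };
}
```

The design choice sketched here mirrors the abstract's claim: only the specular field must be recomputed when the viewpoint changes, while the diffuse field and the layered depth geometry are fixed scene data.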


Proceedings, Ninth Eurographics Workshop on Rendering, 1998, pp. 301-314.