Planetary-scale Terrain Composition
 

This video demonstrates the Planetary-scale Terrain Composition algorithm. We overlay the National Elevation Dataset, a 17-gigapixel height map covering the United States at 30-meter resolution, with the Shuttle Radar Topography Mission, a 3-gigapixel height map covering the world at 500-meter resolution. The colormap is Blue Marble Next Generation, also 3 gigapixels, for a total of 23 gigapixels of source data. This data and the demonstrated camera path are the same as those used in the performance analysis included in our paper.

Let’s look at the individual steps performed by the Terrain Composition algorithm. First, the CPU computes a coarse set of visible triangular patches. These are uploaded to the GPU for recursive refinement, seen here. This mesh is processed by the GPU as an image, allowing an arbitrary number of height map textures to be composed directly with it. The texture coordinate of each fragment is written to a 32-bit floating-point frame buffer; here we see longitude in red and latitude in green. Given this, an arbitrary number of surface maps may be composed atop our terrain geometry using ordinary alpha blending in screen space. In this case, the many pages of a virtual texture are seamlessly blended. Similarly, normal maps derived from our height map textures are accumulated. Finally, the diffuse color and normal map buffers are combined with an atmospheric illumination model, yielding the final image.
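The core idea of the composition step can be sketched in a few lines: each fragment carries a geographic texture coordinate, and any number of layers are sampled at that coordinate and combined with ordinary alpha blending. This is only an illustrative CPU sketch, not the paper's GPU implementation; the `sample` and `compose` helpers, the dictionary-based layers, and the nearest-neighbour lookup are all assumptions made for clarity.

```python
# Sketch of screen-space terrain composition: a texture-coordinate buffer
# stores (lon, lat) per fragment, and an arbitrary number of surface
# layers are alpha-blended over it. Hypothetical helpers, not the
# author's actual code.

def sample(layer, lon, lat):
    """Nearest-neighbour lookup of a layer at a geographic coordinate.
    `layer` maps integer (lon, lat) cells to (value, alpha) pairs; cells
    the layer does not cover return (0.0, 0.0), i.e. fully transparent."""
    return layer.get((round(lon), round(lat)), (0.0, 0.0))

def compose(texcoord_buffer, layers):
    """Blend each layer over the accumulated value for every fragment."""
    out = []
    for (lon, lat) in texcoord_buffer:
        value = 0.0
        for layer in layers:
            v, a = sample(layer, lon, lat)
            value = v * a + value * (1.0 - a)  # ordinary alpha blending
        out.append(value)
    return out

# Two layers: an opaque coarse base, and a finer inset that only covers
# cell (10, 20) and therefore overrides the base there alone.
base  = {(10, 20): (100.0, 1.0), (11, 20): (200.0, 1.0)}
inset = {(10, 20): (150.0, 1.0)}

print(compose([(10, 20), (11, 20)], [base, inset]))  # → [150.0, 200.0]
```

Because each layer contributes only where its alpha is nonzero, a high-resolution inset seamlessly replaces the coarse base inside its coverage and leaves it untouched elsewhere, which is exactly the behaviour the narration describes for the height maps and virtual-texture pages.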

Now we see the algorithm operating in real time on a large-scale, high-resolution virtual reality display device. A cluster of PCs drives an array of 60 autostereoscopic screens with a total resolution of over 100 million pixels. The scalability of our algorithm makes it uniquely applicable to this kind of display environment.

producer/director: Robert Kooima

credits: Robert Kooima, EVL

©2008 University of Illinois

size: 9.91 MB   duration: 00:01:28