Researchers: Robert Kooima, Tom Peterka
Funding: NSF, DoE, NTT
Autostereoscopic displays enable unencumbered immersive virtual reality, but at significant computational expense. This expense limits the feasibility of autostereo displays in high-performance real-time interactive applications. A new autostereo rendering algorithm named Autostereo Combiner addresses this problem using the programmable vertex and fragment pipelines of modern graphics processing units (GPUs).
This algorithm is applied to the Varrier, a large-scale, head-tracked, parallax barrier autostereo virtual reality platform. In this capacity, the Combiner algorithm has shown performance gains of 4x over traditional parallax barrier rendering algorithms. It has enabled high-performance rendering at sub-pixel scales, affording a 2x increase in resolution and a 1.4x improvement in visual acuity.
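The core idea of parallax barrier autostereo rendering is to interleave the left- and right-eye views so that the physical barrier reveals the correct view to each eye. The following is a schematic CPU-side sketch of that per-subpixel interleaving step in Python with NumPy; it is an illustration of the general technique only, not the Varrier Combiner's GPU implementation, and the `combine` function and mask layout are hypothetical.

```python
import numpy as np

def combine(left, right, mask):
    """Interleave left- and right-eye images under a barrier visibility mask.

    left, right: H x W x 3 RGB arrays.
    mask: H x W x 3 array of weights in [0, 1] giving, per subpixel, the
    fraction of the left-eye view visible through the barrier at that
    subpixel (hypothetical layout, for illustration only).
    """
    return mask * left + (1.0 - mask) * right

# Toy example: 2x2 image with an alternating-column barrier pattern.
left = np.ones((2, 2, 3))      # all-white left-eye view
right = np.zeros((2, 2, 3))    # all-black right-eye view
mask = np.zeros((2, 2, 3))
mask[:, 0, :] = 1.0            # column 0 passes the left-eye view
out = combine(left, right, mask)
```

On the GPU, a fragment shader can evaluate such a per-subpixel blend in a single pass over pre-rendered eye views, which is where the performance gains over multi-pass approaches come from.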
Date: March 1, 2006 - March 1, 2007