VideoAvatar Library (3D static)

January 1st, 1996 - August 1st, 1997

Categories: Networking, Software, Tele-Immersion

About

The VideoAvatar Library is a collection of functions that works in conjunction with the CAVE Library and can be used to add static, photo-realistic, three-dimensional representations of remote users, as well as other objects or agents, to virtual reality applications.

The process involves obtaining views from 360 degrees around the person to be represented, then selecting two of these images, one for each eye, to represent the user in 3D space.

The images are acquired by placing the person on a turntable in front of a blue screen, ten feet from a video camera positioned at eye level.

During preview, various parameters are set to calculate the chroma key used to eliminate the background. Images are then recorded at 30 frames per second for 15 seconds while the person makes one revolution on the turntable, resulting in a 450-frame movie file. The number of frames actually used for display within the virtual application can be varied to accommodate different hardware configurations.
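A chroma key of this kind can be illustrated with a simple blue-dominance test. The sketch below is not the library's actual routine; the RGBA pixel layout, the threshold value, and the function name are assumptions chosen for illustration.

    // Minimal chroma-key sketch: mark a pixel transparent when its blue
    // channel dominates red and green by a tunable margin. Assumes 8-bit
    // RGBA pixels; the threshold is an illustrative default, not a value
    // taken from the library.
    #include <cstdint>
    #include <cstddef>

    struct RGBA { uint8_t r, g, b, a; };

    void keyOutBlueScreen(RGBA *pixels, size_t count, int threshold = 40)
    {
        for (size_t i = 0; i < count; ++i) {
            RGBA &p = pixels[i];
            // Background pixel: blue clearly exceeds both red and green.
            bool isBackground = (p.b > p.r + threshold) &&
                                (p.b > p.g + threshold);
            p.a = isBackground ? 0 : 255;   // alpha 0 = fully transparent
        }
    }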

During display, a different image is selected for each eye. Which two images to use is determined from the positions of the local and remote users in the space and the direction the remote user is facing.
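Conceptually, the frame chosen for each eye corresponds to the horizontal angle between the direction the remote user is facing and the direction from the remote user to that eye. The following is a minimal sketch of that selection, assuming 450 frames spanning one full revolution; the type and function names are illustrative, not the library's API.

    // Per-eye frame selection sketch: map the eye's bearing, measured
    // relative to the avatar's heading, onto one of numFrames recorded
    // views (450 frames over 360 degrees = 0.8 degrees per frame).
    #include <cmath>

    struct Vec2 { double x, z; };   // horizontal (floor) plane only

    int selectFrame(Vec2 avatarPos, double avatarHeadingDeg, Vec2 eyePos,
                    int numFrames = 450)
    {
        const double kRadToDeg = 180.0 / 3.14159265358979323846;
        // Horizontal direction from the avatar to the viewer's eye.
        double toEyeDeg = std::atan2(eyePos.x - avatarPos.x,
                                     eyePos.z - avatarPos.z) * kRadToDeg;
        // Angle of that direction relative to the way the avatar faces.
        double relative = std::fmod(toEyeDeg - avatarHeadingDeg + 720.0, 360.0);
        // Map 0..360 degrees onto frame indices 0..numFrames-1.
        int frame = static_cast<int>(relative / 360.0 * numFrames + 0.5);
        return frame % numFrames;
    }

Calling this once for each eye position yields the two frame indices; because the eyes are a few centimeters apart, the two calls typically return slightly different frames, which is what produces the stereo effect.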

These images are then texture-mapped onto two separate polygons, each of which is rotated around the vertical axis toward the eye that is meant to see it.
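The per-eye rotation is standard billboarding about the vertical axis. The sketch below assumes a Y-up coordinate system and a textured quad that by default faces down the +Z axis; the names are illustrative rather than taken from the library.

    // Billboard sketch: compute the yaw (about +Y) that turns a quad,
    // initially facing +Z, so that its face points toward a given eye.
    // Height differences are ignored, matching rotation about the
    // vertical axis only.
    #include <cmath>

    struct Vec3 { double x, y, z; };

    double billboardYawDeg(Vec3 avatarPos, Vec3 eyePos)
    {
        const double kRadToDeg = 180.0 / 3.14159265358979323846;
        double dx = eyePos.x - avatarPos.x;
        double dz = eyePos.z - avatarPos.z;
        return std::atan2(dx, dz) * kRadToDeg;   // yaw only
    }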

Because the avatar is not an actual 3D model but a pair of 2D images, not all depth cues are supported. Among those that are supported are convergence, binocular disparity, horizontal motion parallax, and occlusion. The VideoAvatar also maintains proper perspective in relation to the other objects in the modeled environment.

Vertical motion parallax, however, is not supported, since the images were captured from a fixed distance at a fixed height. For the same reason, the perspective within the avatar image itself is only approximate: it is strictly correct only when the avatar is viewed from the same distance and height used during recording.

VideoAvatars provide high-quality, easily recognizable representations of people in virtual environments. Because people generally stand still while they are recorded, the avatars appear rather statue-like, but they still convey significant information about where other users are located in the environment and where they are looking.