3D Avatars Using Depth from Stereo

developers: Daniel J. Sandin, Alex Hill, Dana M. Plepys, Vivek Rajan, Satheesh Subramanian

funding: NSF, ASCI

The 3D model of the person is obtained using depth from stereo. The range information from the stereo depth map is used to segment the person from the background. A 3D meshing algorithm then reconstructs the surface of the avatar from the foreground point-cloud data. Because the avatar model is constructed in real time, the 3D shape and size of the person need not be known in advance, which makes this method suitable for generating more realistic video avatars.
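To make the pipeline concrete, the following is a minimal sketch (not the project's actual implementation) assuming rectified 8-bit grayscale stereo images and OpenCV block matching for depth from stereo. The camera parameters (focal_px, baseline_m) and the depth cutoff used to segment the background are placeholder assumptions, and the simple grid triangulation stands in for the project's 3D meshing algorithm.

```python
import numpy as np
import cv2


def reconstruct_avatar(left_gray, right_gray,
                       focal_px=700.0, baseline_m=0.12, depth_cutoff_m=2.0):
    """Illustrative sketch: stereo depth -> background segmentation -> grid mesh."""
    # 1. Depth from stereo: block-matching disparity (16-bit fixed point, scaled by 16).
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # 2. Convert disparity to depth in metres; non-positive disparities are invalid.
    with np.errstate(divide="ignore"):
        depth = np.where(disparity > 0, focal_px * baseline_m / disparity, np.inf)

    # 3. Use the range information to segment the background: keep only pixels
    #    closer than the (assumed) cutoff, i.e. the person in front of the scene.
    foreground = depth < depth_cutoff_m

    # 4. Back-project foreground pixels to a 3D point cloud in camera coordinates.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    points = np.dstack(((u - w / 2) * depth / focal_px,
                        (v - h / 2) * depth / focal_px,
                        depth))

    # 5. Mesh the point cloud on the image grid: two triangles per 2x2 pixel
    #    block whose four corners are all foreground.
    idx = -np.ones((h, w), dtype=np.int64)
    idx[foreground] = np.arange(foreground.sum())
    vertices = points[foreground]
    triangles = []
    for y in range(h - 1):
        for x in range(w - 1):
            a, b = idx[y, x], idx[y, x + 1]
            c, d = idx[y + 1, x], idx[y + 1, x + 1]
            if min(a, b, c, d) >= 0:
                triangles.append((a, b, c))
                triangles.append((b, d, c))
    return vertices, np.asarray(triangles)
```

Meshing directly on the image grid works here because depth from stereo yields an organized point cloud, so triangle connectivity comes for free from pixel adjacency rather than from a general surface-reconstruction step.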

start date: 07/01/1999
end date: 09/01/2000

contact:

Point Cloud and Triangulated Image
image provided by V. Rajan

related projects:
Collaboration and Visualization over High Speed Networks
Video Avatars
VideoAvatar Library (3D static)
A Comparison of Video, Avatar & Face-To-Face in Collaborative Virtual Learning Environments
related info:
no associated paper(s)
4 associated event(s)

related categories:
software
tele-immersion
networking