developers: Daniel J. Sandin, Alex Hill, Dana M. Plepys, Vivek Rajan, Satheesh Subramanian
funding: NSF, ASCI
The 3D model of the person is obtained using depth from stereo. The range information from the stereo depth map is used to segment out the background. A 3D meshing algorithm then reconstructs the avatar's surface from the point cloud of the foreground. Because the avatar model is constructed in real time, the 3D shape and size of the person need not be known in advance, so this method can generate more realistic video avatars.
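The sketch below illustrates the pipeline described above (stereo disparity, range-based background segmentation, and surface meshing of the foreground point cloud). It is not the project's original implementation: the image file names, focal length, baseline, range threshold, and the use of OpenCV and Open3D are all illustrative assumptions.

```python
# Minimal sketch (not the original implementation): foreground segmentation
# from stereo depth, followed by surface reconstruction of the point cloud.
# Assumes a rectified image pair; camera parameters and thresholds are
# placeholder values chosen only for illustration.
import cv2
import numpy as np
import open3d as o3d

# 1. Depth from stereo: compute a disparity map from a rectified image pair.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Convert disparity to depth (z = f * B / d); f and B are assumed values.
focal_px, baseline_m = 700.0, 0.12
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]

# 2. Background segmentation: keep only points within a range threshold,
#    assuming the person stands closer to the cameras than the background.
foreground = valid & (depth < 2.0)

# 3. Back-project foreground pixels into a 3D point cloud.
h, w = depth.shape
cx, cy = w / 2.0, h / 2.0
v, u = np.nonzero(foreground)
z = depth[v, u]
x = (u - cx) * z / focal_px
y = (v - cy) * z / focal_px
points = np.column_stack([x, y, z])

# 4. Mesh the point cloud to reconstruct the avatar's surface.
pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
o3d.io.write_triangle_mesh("avatar.ply", mesh)
```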
start date: 07/01/1999
end date: 09/01/2000
contact: