- Collaborative extension of CAVE5D.
Developed by Glen Wheless and Cathy Lascara at the Center for Coastal Physical Oceanography, ODU, and Bill Hibbard at the University of Wisconsin, Madison.
CAVE6D integrates the Vis5D and CAVE libraries to provide interactive visualizations of time-varying, three-dimensional data sets in a virtual environment. It is a configurable virtual reality application framework that enables visualization of 3D numerical data in the Vis5D format in the CAVE/ImmersaDesk, and enables user interaction with the data.
Vis5D is a system for visualizing data made by numerical weather models.
The data is in the form of a five-dimensional rectangle: three spatial grid dimensions, a time dimension, and a dimension enumerating multiple physical variables such as salinity and wind/current velocity, which are rendered as contour slices, trajectory vectors, and so on.
Hence CAVE5D, as it exists, is a simple application that allows visualization of and some interaction with the data. It does not, however, provide any synchronous or asynchronous means of interacting with other people.
- A collaborative extension of CAVE5D
CAVE6D was thus written to provide interaction between users located at different sites within the virtual environment provided by CAVE5D. This was made possible by integrating CAVERNSoft into CAVE5D.
Users share the same virtual space as the data; the virtual co-presence of each user is depicted in the form of an avatar.
Points of interest are indicated by the avatar's wand.
Each site has its own replicated data set (the Vis5D file); the data is not shared or downloaded at run time.
The time in the simulation is globally synchronised at all times, so all users see the same time-varying data.
Each user has equal control over the simulation: anyone can start or stop it, or step forward or back in time, to discuss a particular point of interest in the data.
Diagram showing the global/local switch option.
If the global option has been switched on for a particular parameter, the state of that parameter changes synchronously for all users.
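The per-parameter global/local switch can be sketched as follows. This is an illustrative model, not CAVE6D source code: a parameter flagged global applies incoming remote state, while a local parameter ignores it.

```python
# Conceptual sketch (assumed names, not the real CAVE6D API): a visualization
# parameter with a global/local switch. Remote state is applied only while
# the parameter is switched to "global".
class SharedParameter:
    def __init__(self, name, value, is_global=True):
        self.name = name
        self.value = value
        self.is_global = is_global

    def apply_remote_update(self, remote_value):
        """Called when another site's state for this parameter arrives."""
        if self.is_global:
            self.value = remote_value  # stay in sync with the other users
        # if local, the remote state is ignored

iso_level = SharedParameter("isosurface_level", 0.5, is_global=True)
iso_level.apply_remote_update(0.8)
print(iso_level.value)  # 0.8 -- synchronised with the remote user

iso_level.is_global = False
iso_level.apply_remote_update(0.2)
print(iso_level.value)  # still 0.8 -- local override in effect
```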
Virtual Director, developed by Bob Patterson, Donna Cox, and Stuart Levy at NCSA, provides the synchronous collaborative capability to CAVE5D. Virtual Director acts as an interface between real-time data and user-desired actions to enable animation recording, steering, and editing, thereby allowing archiving and playback of sessions in virtual space. This can be of immense help in remote training and analysis sessions.
Users have the ability to set up their environment locally, or to make their settings global so that other participants can follow their set-up.
CAVERNSoft could not be integrated directly into CAVE5D, because CAVERNSoft uses the POSIX threading model (pthreads), which does not interoperate with the sproc processes used in CAVE5D.
To work around this problem, a shared memory arena was used as the interface between CAVERNSoft's IRB and the application.
The application writes data into the shared memory at a particular frequency (e.g. once every 5 iterations). The IRB reads this information from the shared memory, again at a fixed frequency, and passes it across to the other connected IRBs.
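The decoupling through the shared slot can be sketched as below. This is an illustrative simulation under assumed structure, not the actual CAVE5D/CAVERNSoft code: the application publishes its state once every N iterations, and an IRB-side poller independently forwards whatever new data it finds.

```python
# Toy model of the shared-memory handoff: the application writes into a
# single shared slot at a chosen frequency; the IRB polls the slot and
# forwards only updates it has not already sent. (All names are assumptions.)
WRITE_EVERY = 5                              # e.g. once in 5 app iterations

shared_slot = {"seq": -1, "state": None}     # stands in for the shared arena

def app_iteration(i, state):
    """Application side: publish state only at the chosen frequency."""
    if i % WRITE_EVERY == 0:
        shared_slot["seq"] = i
        shared_slot["state"] = state

def irb_poll(sent_seqs):
    """IRB side: forward the slot contents if they are new."""
    if shared_slot["seq"] >= 0 and shared_slot["seq"] not in sent_seqs:
        sent_seqs.add(shared_slot["seq"])
        return shared_slot["state"]
    return None

sent, forwarded = set(), []
for i in range(12):
    app_iteration(i, f"state@{i}")
    update = irb_poll(sent)
    if update is not None:
        forwarded.append(update)
print(forwarded)  # ['state@0', 'state@5', 'state@10']
```

Only every fifth state crosses the network, which is the point of writing at a reduced frequency.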
The data is not sent continuously, because doing so would overcrowd the network.
These frequencies are set based upon the machine and the network conditions (via irbsend's -f option).
Higher frequencies take up a lot of processor cycles. Multiprocessor machines can afford higher rates by dedicating irbsend to a particular processor.
Lower frequencies, as expected, make the avatar movements very jerky.
On the shared memory side, if the application writes data faster than the IRB can read it, earlier data is overwritten, again producing jerkiness.
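The overwrite effect can be seen in a toy simulation (illustrative only): a writer that updates a single slot every iteration, read by a slower poller, loses the intermediate positions, so the remote avatar appears to jump.

```python
# Writer updates the single shared slot every iteration; the reader polls
# only every third iteration, so intermediate positions are overwritten
# before they can be read -- the source of the observed jerkiness.
slot = None
received = []
for i in range(9):
    slot = i              # writer overwrites the slot each iteration
    if i % 3 == 2:        # reader polls less often than the writer writes
        received.append(slot)
print(received)  # [2, 5, 8] -- positions 0,1,3,4,6,7 were lost
```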
Information being passed across the network to remote participants consists of:
- the avatar's head and wand tracker values, used to draw the avatars;
- a time stamp, used to keep the simulation time globally synchronised;
- the states of the buttons of the graphical parameters;
- the values of the graphical parameters.
It is to be noted that each time, the current state of the particular parameter is sent across the network, not events (such as an object-move event or an on/off event). Though this takes much more network bandwidth, it cannot be avoided if collaborators are to join late and get in sync with the others.
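The state-style (rather than event-style) update can be sketched as below. Field names here are assumptions, not the real CAVE6D packet layout; the point is that a late joiner becomes consistent after receiving a single full-state packet.

```python
# Hedged sketch of full-state replication: every packet carries the complete
# current state, so receiving the latest packet alone is enough to sync.
def make_state_packet(head, wand, sim_time, buttons, params):
    return {
        "head_tracker": head,        # head position/orientation
        "wand_tracker": wand,        # wand position/orientation
        "time_stamp": sim_time,      # simulation time, for global sync
        "button_states": dict(buttons),
        "param_values": dict(params),
    }

class Participant:
    def __init__(self):
        self.state = None
    def receive(self, packet):
        self.state = packet          # full state replaces everything

p1 = make_state_packet((0, 0, 0), (1, 0, 0), 10.0, {"iso": True}, {"level": 0.5})
p2 = make_state_packet((0, 1, 0), (1, 1, 0), 11.0, {"iso": False}, {"level": 0.8})

veteran, latecomer = Participant(), Participant()
veteran.receive(p1)
veteran.receive(p2)
latecomer.receive(p2)                # joined late, saw only the last packet
print(latecomer.state == veteran.state)  # True -- fully in sync
```

With event-style updates, the latecomer would instead have needed a replay of every event since the session began.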
Some sort of locking mechanism is needed to implement the movement of the graphical objects, as two participants might try to move the same global parameter at the same time in different directions. Though this is not an atomic operation, whichever user is seen to move the object first gets the lock to move it.
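A minimal sketch of such a first-mover lock, under assumed names: the first user whose move request reaches the object claims it, and later requests fail until the holder releases it.

```python
# Illustrative first-mover lock (not the CAVE6D implementation): whoever
# grabs the object first holds it; re-grabs by the holder succeed, and
# everyone else is refused until release().
class MovableObject:
    def __init__(self):
        self.lock_holder = None

    def try_grab(self, user_id):
        if self.lock_holder is None:
            self.lock_holder = user_id   # first mover wins
            return True
        return self.lock_holder == user_id

    def release(self, user_id):
        if self.lock_holder == user_id:
            self.lock_holder = None

obj = MovableObject()
print(obj.try_grab("alice"))  # True  -- alice moved first
print(obj.try_grab("bob"))    # False -- bob must wait
obj.release("alice")
print(obj.try_grab("bob"))    # True
```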
The color/identity of the avatar is decided at run time if the user does not specify it. At the start of the application, it checks who is already present in the environment and, based on that, assigns itself an unused avatar id/color.
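The assignment can be sketched as picking the lowest unused id; the lowest-unused policy and the avatar count here are assumptions for illustration.

```python
# Sketch of run-time avatar assignment: inspect the ids already in use in
# the environment and claim the first free one. (Policy is an assumption.)
def pick_avatar_id(ids_in_use, max_avatars=8):
    for candidate in range(max_avatars):
        if candidate not in ids_in_use:
            return candidate
    raise RuntimeError("no free avatar id/color available")

print(pick_avatar_id({0, 1, 3}))  # 2 -- the first unused id
```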
Many users wished they could move the menus, rather than having them fixed. Such observations were more prominent from experienced VR users, though novice users did not seem to mind.
Though the buttons are collaborative, the menu isn't. In a teacher/tour-guide scenario it would be good to make the menu collaborative as well, so that everyone looks at the same menu at the same time and sees changes being made to the button states. Having different menus open can confuse the participants.
Again, in a situation where one of the users is a teacher or tour guide who wants to control the other users' settings, it could be useful to have a single global switch that puts all the global/local buttons into the same state as that user's.
'Set All Global' and 'Set All Local' options would also provide an easier interface.
Currently there is no collaboration in terms of the scale of the world in which the users share the environment. It could be useful to provide a global common scale at which users see the environment. This could be done in two ways: either by scaling everyone's world, or by scaling each avatar according to that user's current scale of the environment.
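The second option can be sketched as follows, under assumed conventions (the function and the sign of the ratio are illustrative, not from CAVE6D): a remote avatar is drawn at a size proportional to that user's world scale relative to the local one.

```python
# One way to realise avatar scaling (assumed convention): if the remote user
# has shrunk the world by 2x relative to us, they are effectively twice as
# large relative to the data, so draw their avatar 2x larger in our world.
def avatar_draw_scale(remote_world_scale, local_world_scale):
    return local_world_scale / remote_world_scale

print(avatar_draw_scale(remote_world_scale=0.5, local_world_scale=1.0))  # 2.0
```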
It was observed that for a session with a remote site, irbsend is comfortable with sending frequencies of 5-15 packets per second.
It would be useful to have a run-time choice of which data set to load into CAVE6D, and perhaps to propagate that selection to the remote users.
- The CAVE6D logo was designed and created by Samroeng
For download/more information
please visit: http://www.evl.uic.edu/akapoor/cave6d