The new application programming interface for the distributed (i.e. 4-wall) CAVE is based on the idea of "display data", which is pipelined from the computation process to the display processes. The main process requests a special buffer of shared memory, performs its computations in that buffer, and then tells the library to pass on the new data. The data is sent, frame-accurately, to the display processes on the local (master) node and on all of the slave nodes. The application's rendering functions request a pointer to the latest display data buffer and draw using that data.
The display data buffer can contain either all of the shared data needed in rendering, or messages describing changes to the database. The second approach (messages) may be needed if an application has a very large database and only makes small changes at a time.
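As a rough sketch of the message-oriented approach (the struct names, MAXMSGS, and apply_change() below are made up for illustration and are not part of the CAVE library), the master fills a small list of change messages each frame and passes it with CAVEPassDisplayData(); each display process applies the messages to its own local copy of the database before rendering. A sequence number is used so that a given buffer is only applied once:

#define MAXMSGS 64

struct _dbchange
{
    int   node;           /* which database node the change applies to */
    float newpos[3];      /* change-specific data (here, a new position) */
};

struct _changebuf
{
    int   sequence;                    /* incremented by the master each send */
    int   nchanges;                    /* number of valid messages this frame */
    struct _dbchange change[MAXMSGS];
};

struct _changebuf *changes;

/* In main(), before CAVEInit(), on both master and slaves: */
changes = (struct _changebuf *) CAVEAllocDisplayData(sizeof(struct _changebuf));

/* In the master's computation loop: */
changes->nchanges = ...;              /* fill in this frame's changes */
changes->sequence++;
CAVEPassDisplayData(changes,0);

/* In the drawing function, applied to a per-process copy of the database: */
void draw_function(void)
{
    static int last_sequence = -1;
    int i;
    struct _changebuf *buf = (struct _changebuf *) CAVEGetDisplayData(changes,NULL);
    if (buf->sequence != last_sequence)
    {
        for (i = 0; i < buf->nchanges; i++)
            apply_change(&buf->change[i]);   /* update the local database copy */
        last_sequence = buf->sequence;
    }
    /* render the local database copy */
}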
This new interface also takes care of access control to the shared data, even in non-distributed CAVE applications. You should not need to do any semaphoring/locking to synchronize the computation and display processes's access to the display data.
The basic approach to using the display data functions is as follows:
datatype *calcbuf;

main()
{
    CAVEConfigure();
    calcbuf = CAVEAllocDisplayData(size);
    if (CAVEDistribMaster())
        init calcbuf
    CAVEDisplay(draw_function,0);
    CAVEInit();
    ...
    if (CAVEDistribMaster())
        while (1)
        {
            compute with calcbuf
            CAVEPassDisplayData(calcbuf,0);
        }
    else
        while (!CAVESync->Quit) sleep(1);
}

draw_function()
{
    drawbuf = CAVEGetDisplayData(calcbuf,NULL);
    render with drawbuf
}
As shown above, both the master and slave nodes need to call CAVEAllocDisplayData(), but only the master node should perform the calculations and call CAVEPassDisplayData().
Aside: note that CAVEDisplay() (as well as CAVEInitApplication() and CAVEFrameFunction()) may now be called before CAVEInit(). This can be used to ensure that the functions are called the same number of times on each machine, since they will be called starting from the first frame.
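For instance (assuming CAVEFrameFunction() takes the same function-plus-argument-count form as CAVEDisplay(); the names advance_frame and frame_number are made up for illustration), a per-frame callback registered before CAVEInit() runs starting from the very first frame, so a counter it maintains stays identical on every machine:

static int frame_number = 0;

void advance_frame(void)
{
    frame_number++;      /* executed once per frame, the same count on all nodes */
}

...
CAVEConfigure(&argc,argv,NULL);
CAVEFrameFunction(advance_frame,0);    /* registered before CAVEInit() */
CAVEDisplay(draw_function,0);
CAVEInit();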
Scramnet, TCP, and HIPPI distribution protocols are available. Scramnet may be real or simulated; simulated Scramnet uses a shared memory segment that unrelated processes on a single machine can connect to. Simulated Scramnet is selected via the following configuration options:
Distribution    scramnet
AppDistribution scramnet
Scramnet        n
# Shared memory key (for shmget())
SimScramKey     1700
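Since SimScramKey is an ordinary System V shared memory key, another (non-CAVE) process on the same machine can in principle attach to the simulated Scramnet segment with the standard shmget()/shmat() calls. A minimal sketch follows; the permission flags are placeholders for illustration, and the segment's actual size and layout are determined by the library, not by this code:

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>

int main(void)
{
    /* 1700 matches the SimScramKey entry above */
    int shmid = shmget((key_t)1700, 0, 0666);
    void *seg;

    if (shmid < 0) {
        perror("shmget");
        return 1;
    }
    seg = shmat(shmid, NULL, 0);
    if (seg == (void *)-1) {
        perror("shmat");
        return 1;
    }
    /* ... inspect the shared segment ... */
    return 0;
}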
64 byte data:                   sends/sec        KB/s
  AppDistrib scramnet:             3300         211.2
  AppDistrib hippi:                 132           8.4
  AppDistrib tcp (ether):           360          23.0
  AppDistrib tcp (hippi):           333          21.3
  AppDistrib tcp (atm):             330          21.1
  No distribution:               385000       24640.0

512 byte data:                  sends/sec        KB/s
  AppDistrib scramnet:             1240         634.9
  AppDistrib hippi:                 132           8.4
  AppDistrib tcp (ether):           317         162.3
  AppDistrib tcp (hippi):           330         169.0
  AppDistrib tcp (atm):             315         161.3
  No distribution:               192000       98304.0

2048 byte data:                 sends/sec        KB/s
  AppDistrib scramnet:              380         778.2
  AppDistrib hippi:                 118         241.7
  AppDistrib tcp (ether):           196         401.4
  AppDistrib tcp (hippi):           310         634.9
  AppDistrib tcp (atm):             287         587.8
  No distribution:                70000      143360.0

8192 byte data:                 sends/sec        KB/s
  AppDistrib scramnet:              101         827.4
  AppDistrib hippi:                 106         868.4
  AppDistrib tcp (ether):            90         737.3
  AppDistrib tcp (hippi):           254        2080.8
  AppDistrib tcp (atm):             195        1597.4
  No distribution:                12500      102400.0

32768 byte data:                sends/sec        KB/s
  AppDistrib scramnet:               25         819.2
  AppDistrib hippi:                 100        3276.8
  AppDistrib tcp (ether):            22         720.9
  AppDistrib tcp (hippi):           147        4816.9
  AppDistrib tcp (atm):             111        3637.2
  No distribution:                 2070       67829.8
Notes:
  Distribution scramnet
  Ran between zbox & wall1; no display or tracking processes
  Main process ran on an isolated CPU, doing no calculations or getbutton()s
The following complete example uses the display data interface to animate two bouncing spheres:

#include <cave.h>
#include <malloc.h>
#include <math.h>
#include <unistd.h>
#include <gl/gl.h>
#include <gl/device.h>
#include <gl/sphere.h>

struct _balldata
{
    float y;
};

void init_gl(void),draw_balls(void);

struct _balldata *ball;

main(int argc,char **argv)
{
    CAVEConfigure(&argc,argv,NULL);
    /****** Allocate buffer for data shared with display processes ******/
    ball = (struct _balldata *) CAVEAllocDisplayData(2*sizeof(struct _balldata));
    /****** Initialize data ******/
    if (CAVEDistribMaster())
        ball[0].y = ball[1].y = 0;
    CAVEInitApplication(init_gl,0);
    CAVEDisplay(draw_balls,0);
    CAVEInit();
    if (CAVEDistribMaster())
        while (!getbutton(ESCKEY))
        {
            float t = CAVEGetTime();
            ball[0].y = fabs(sin(t)) * 6 + 1;
            ball[1].y = fabs(sin(t*1.2)) * 4 + 1;
            /****** Pass the new data to the display processes ******/
            CAVEPassDisplayData(ball,0);
            sginap(2);
        }
    else
        /****** CAVElib will set CAVESync->Quit when the master calls CAVEExit() ******/
        while (!CAVESync->Quit)
            sginap(25);
    CAVEExit();
}

void init_gl(void)
{
    float redMaterial[] = { DIFFUSE, 1, 0, 0, LMNULL };
    float blueMaterial[] = { DIFFUSE, 0, 0, 1, LMNULL };
    /****** Default lighting model and light, plus red & blue materials ******/
    lmdef(DEFLMODEL,1,0,NULL);
    lmbind(LMODEL,1);
    lmdef(DEFLIGHT,1,0,NULL);
    lmbind(LIGHT0,1);
    lmdef(DEFMATERIAL,1,0,redMaterial);
    lmdef(DEFMATERIAL,2,0,blueMaterial);
}

void draw_balls(void)
{
    float sphereParam0[] = { 2, 4, -5, 1}, sphereParam1[] = { -2, 4, -5, 1};
    /****** Get pointer to the most recent display data ******/
    ball = (struct _balldata *) CAVEGetDisplayData(ball,NULL);
    czclear(0,getgdesc(GD_ZMAX));
    lmbind(MATERIAL,1);
    sphereParam0[1] = ball[0].y;
    sphdraw(sphereParam0);
    lmbind(MATERIAL,2);
    sphereParam1[1] = ball[1].y;
    sphdraw(sphereParam1);
}