Shalini Venkataraman, Luc Renambot
Version: April 12, 2004
ResearchGear ID: 20040412_venkataraman
Vol-a-Tile is a volume rendering tool for large-scale, time-series scientific datasets on scalable-resolution tiled displays. These large datasets can be dynamically processed and retrieved from remote data stores over photonic networks. Vol-a-Tile focuses on very large volumetric data and time-dependent simulation results, such as seismic data and ultra-high-resolution microscopy.
There are three components to the system, shown below: Optistore, Vol-a-Tile, and tfUI.
Optistore is the data server, which stores objects as 3D volumes or geometry. Optistore is designed to assist visualization dataset handling, including data management, processing, representation, and transport. It is built upon existing VTK functionality such as the marching-cubes, gradient-estimation, data-reduction, and sampling filters.
Vol-a-Tile handles the rendering, scalable display, and user interaction. All the nodes in Vol-a-Tile (master and slaves) have a dedicated link to Optistore to retrieve the datasets. The master handles user interaction and any transfer-function updates from the transfer function editor, and broadcasts them to the slaves using MPI. The slaves are responsible for processing commands from the master and rendering to their respective view frustums.
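To illustrate the master-to-slave command path, here is a minimal sketch of how a command might be serialized into a fixed-size message before being broadcast (in the real system, via MPI_Bcast). The command codes and field layout are hypothetical; the actual Vol-a-Tile wire format is not documented here.

```python
import struct

# Hypothetical command codes; the real protocol is not specified in this report.
CMD_ROAM, CMD_SET_TRANSFER_FUNC = 1, 2

def pack_roam(x, y, z, w, h, d):
    """Pack a roam command (subvolume origin and size) into a fixed-size message."""
    return struct.pack("!7i", CMD_ROAM, x, y, z, w, h, d)

def unpack_command(msg):
    """Decode a command on the slave side; only CMD_ROAM is sketched here."""
    fields = struct.unpack("!7i", msg)
    return fields[0], fields[1:]

# Round trip: what the master packs, a slave can decode after the broadcast.
cmd, args = unpack_command(pack_roam(0, 0, 0, 256, 256, 256))
```

A fixed-size, network-byte-order message keeps the broadcast cheap and lets every slave decode without a length prefix.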
Transfer Function Editor (tfUI)
tfUI is the user interface for transfer function selection, based on the Simian system from the University of Utah. The color and opacity can be selected using the classification widgets. These widgets can be overlaid and then rasterized to a 2D texture, which is sent to Vol-a-Tile. The 2D histogram for the dataset is retrieved from Vol-a-Tile and displayed to guide the user in the selection.
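The rasterization step can be sketched as compositing each widget's color and opacity into a small RGBA table with the standard "over" operator. This is an illustrative simplification (rectangular widgets, a tiny table), not the Simian or tfUI implementation.

```python
def rasterize_widgets(widgets, size=8):
    """Composite rectangular classification widgets into a size x size RGBA table.

    Each widget is (x0, y0, x1, y1, (r, g, b, a)) in texel coordinates.
    Widgets are composited in order with the 'over' operator, so later
    (overlapping) widgets blend on top of earlier ones.
    """
    tex = [[(0.0, 0.0, 0.0, 0.0) for _ in range(size)] for _ in range(size)]
    for (x0, y0, x1, y1, (r, g, b, a)) in widgets:
        for y in range(y0, y1):
            for x in range(x0, x1):
                dr, dg, db, da = tex[y][x]
                tex[y][x] = (r * a + dr * (1 - a),
                             g * a + dg * (1 - a),
                             b * a + db * (1 - a),
                             a + da * (1 - a))
    return tex

# A single fully opaque red widget covering the whole table.
tex = rasterize_widgets([(0, 0, 8, 8, (1.0, 0.0, 0.0, 1.0))])
```

The resulting table plays the role of the 2D texture sent to Vol-a-Tile: the renderer looks up each (value, gradient) pair to obtain a color and opacity.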
The diagram below shows in detail the different software components and how they interact.
Interaction - Roaming sequence
Shown below is a sequence diagram for a typical roam operation. It highlights the main steps that occur from the moment a user presses a key to when the volume is received. Click on each of the colored boxes to access the benchmark graphs for that process.
System configuration for the tests
This is done at the beginning of program execution: the whole dataset is loaded into memory from disk. The graph below compares loading times for the three datasets used with the different compression schemes: raw binary, run-length encoded (RLE), and gzipped.
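Of the three schemes, RLE is the simplest to illustrate. The sketch below encodes a voxel stream as (count, value) runs and decodes it back; it shows the idea only, not Optistore's on-disk format.

```python
def rle_encode(voxels):
    """Run-length encode a 1D voxel stream as [count, value] runs."""
    runs = []
    for v in voxels:
        if runs and runs[-1][1] == v:
            runs[-1][0] += 1          # extend the current run
        else:
            runs.append([1, v])       # start a new run
    return runs

def rle_decode(runs):
    """Expand [count, value] runs back into the original voxel stream."""
    out = []
    for count, v in runs:
        out.extend([v] * count)
    return out

runs = rle_encode([5, 5, 5, 2])
restored = rle_decode(runs)
```

RLE works well on volumes with large homogeneous regions (empty space, constant background), which is common in seismic and microscopy data; gzip trades more CPU for better compression on less uniform data.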
Arad full volume load (1001x801x801)
This is called by the renderer to get a subvolume from the full volume. The subvolume size is selected so as to fit into texture memory. The time to crop the full volume for the different subvolume sizes is shown below.
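Conceptually, the crop copies one contiguous x-run per (y, z) row out of the flat voxel array. A minimal sketch, assuming an x-fastest memory layout (the actual Optistore layout is not stated here):

```python
def crop(volume, dims, origin, size):
    """Extract a subvolume from a flat, x-fastest voxel array.

    dims   = (nx, ny, nz)  full volume dimensions
    origin = (ox, oy, oz)  subvolume origin
    size   = (sx, sy, sz)  subvolume dimensions
    """
    nx, ny, _ = dims
    ox, oy, oz = origin
    sx, sy, sz = size
    sub = []
    for z in range(oz, oz + sz):
        for y in range(oy, oy + sy):
            # One contiguous run of sx voxels per (y, z) row.
            row_start = (z * ny + y) * nx + ox
            sub.extend(volume[row_start:row_start + sx])
    return sub

# 4x4x4 volume whose voxel value equals its flat index.
sub = crop(list(range(64)), (4, 4, 4), (1, 1, 1), (2, 2, 2))
```

The cost scales with the number of rows copied (sy * sz), which is why the crop times below grow with subvolume size.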
The graphs below show the time for network data transfer and the total roam time. Three cases have been tested: local, cluster, and remote.
The server and the renderer are co-located, i.e., the same machine runs Optistore and Vol-a-Tile. There is no actual network communication involved here.
The server and the renderer are running on different nodes within the same cluster: half the nodes of scylla run Optistore and the other half Vol-a-Tile. Network data transfer is over Gigabit Ethernet.
The data server is remote, in this case at IGPP/SIO in San Diego, with rendering on the scylla cluster at EVL. Data transfer is over the regular Internet. As expected, the network transfer times are high.
Optionally, the renderer can request a 2D histogram from the data server. This histogram is computed from the data and gradient values and is eventually used by the transfer function editor as a guide for the user. If previously saved transfer functions are being loaded, this compute-intensive step can be omitted. There are two steps in the calculation:
vtkGradient - update
We simply maintain one copy of the volume data and use the VTK gradient estimator to calculate the gradient values on the fly. This expensive process can be avoided by precomputing and storing the gradient volume in addition to the data (which increases main-memory and disk-storage requirements).
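The underlying computation is a finite-difference gradient at each voxel. A minimal central-difference sketch (not the VTK implementation, which also handles spacing and normals), clamping at the volume boundary:

```python
def gradient_magnitude(volume, dims):
    """Central-difference gradient magnitude for a flat, x-fastest volume."""
    nx, ny, nz = dims
    idx = lambda x, y, z: (z * ny + y) * nx + x
    grad = [0.0] * len(volume)
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                # Clamp neighbor indices at the boundary (one-sided differences).
                gx = (volume[idx(min(x + 1, nx - 1), y, z)]
                      - volume[idx(max(x - 1, 0), y, z)]) / 2.0
                gy = (volume[idx(x, min(y + 1, ny - 1), z)]
                      - volume[idx(x, max(y - 1, 0), z)]) / 2.0
                gz = (volume[idx(x, y, min(z + 1, nz - 1))]
                      - volume[idx(x, y, max(z - 1, 0))]) / 2.0
                grad[idx(x, y, z)] = (gx * gx + gy * gy + gz * gz) ** 0.5
    return grad

# A linear ramp along x: interior gradient is 1.0, boundary is one-sided (0.5).
g = gradient_magnitude([0, 1, 2, 3], (4, 1, 1))
```

Visiting every voxel and its six neighbors is what makes this step expensive, and why precomputing the gradient volume is an attractive trade-off when memory allows.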
Calculate 2D Histogram
This calculates a 2D logarithmic histogram from the voxel and gradient values.
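A minimal sketch of such a histogram: bin each (voxel value, gradient magnitude) pair into a 2D grid, then log-scale the counts so sparse high-gradient features remain visible next to the dominant background. Bin count and value ranges here are illustrative assumptions.

```python
import math

def histogram2d_log(values, grads, bins=16, vmax=255.0, gmax=255.0):
    """2D histogram over (voxel value, gradient magnitude) with log-scaled counts."""
    counts = [[0] * bins for _ in range(bins)]
    for v, g in zip(values, grads):
        i = min(int(v / vmax * (bins - 1)), bins - 1)
        j = min(int(g / gmax * (bins - 1)), bins - 1)
        counts[j][i] += 1
    # Log scale compresses the huge dynamic range between background voxels
    # (millions of counts) and boundary voxels (few counts).
    return [[math.log(1 + c) for c in row] for row in counts]

hist = histogram2d_log([0, 0, 255], [0, 0, 255])
```

In the transfer function editor, material boundaries show up as arcs in this (value, gradient) space, which is what guides widget placement.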
The graph below compares the two steps just described.
When the renderer receives a new subvolume from the data server, it has to be downloaded to graphics memory to be visualized as 3D textures.
The graph below compares frame rates for rendering with different tile configurations. The number of texture slices is constant throughout, and all nodes render at full-screen resolution (1600x1200). The performance of the standalone version of Vol-a-Tile is also shown; it is lower than the corresponding distributed version because the standalone version relies on glutIdleFunc for continuous redraw, whereas in the distributed version MPI drives the redraw.
Table showing the different tile configurations and their respective display resolutions
EVL ResearchGear publishes preliminary software, technical reports, data, or results that the Electronic Visualization Laboratory openly shares with the research community. The work presented here is preliminary, and we are not responsible for any damages that may result from its use or misuse. If you would like to cite any of this information in your research papers, presentations, etc., please reference the ResearchGear ID above. Thank you, and we hope you find the information on this page useful.