Collaborative Visualization Architecture in Scalable Adaptive Graphics Environment

April 26th, 2007

Categories: Networking, Software, Supercomputing, Tele-Immersion, Visualization, VR

Byungil Jeong

As scalable high-resolution displays become increasingly prevalent and networking costs fall at a rate exceeding those of computing and storage [Stix01], EVL envisions situation rooms and research laboratories in which all the walls are seamless ultra-high-resolution displays fed by data streamed over ultra-high-speed networks from distantly located visualization servers, storage servers, and high-definition video cameras [Leigh03, Smarr03]. To help realize this vision, my PhD thesis has focused on the Scalable Adaptive Graphics Environment (SAGE) [Jeong06], a scalable graphics architecture supporting collaborative scientific visualization environments with hundreds of megapixels of contiguous display resolution fed by tens to hundreds of gigabits of network bandwidth. SAGE enables high-performance real-time pixel streaming from remote rendering clusters to scalable high-resolution tiled-display systems. For example, SAGE has streamed ultra-high-resolution image data and video at 9 Gb/s over a 10 Gb/s private network on the National LambdaRail between San Diego and Chicago, pushing the data onto LambdaVision, a hundred-megapixel tiled display.

This graphics architecture addresses two non-trivial problems in scientific visualization. One is heterogeneity: since most visualization applications are closely tied to their graphics environment, it is difficult to integrate various visualization applications into a unified graphics environment. The other is scalability: the ability of visualization software and systems to scale in terms of the amount of data they can visualize and the resolution of the desired visualization [Blanke00]. SAGE addresses the heterogeneity problem by decoupling graphics rendering from graphics display, so that visualization applications developed in various environments can easily migrate into SAGE by streaming their pixels into its virtual frame buffer. SAGE provides scalability by supporting an arbitrary number of rendering and display nodes, tiles, and screen resolutions.
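The core of the decoupling is pixel routing: a window positioned on the tiled display must be split into per-tile regions, each streamed to the display node driving that tile. A minimal sketch of this partitioning step (hypothetical function names, not the actual SAGE API) is:

```python
def tile_regions(win, tiles):
    """Intersect an application window with each display tile.

    win and tiles[i] are rectangles (x, y, width, height) in display
    coordinates. Returns (tile_index, region) pairs, where region is the
    part of the window that must be streamed to that tile's display node.
    """
    regions = []
    wx, wy, ww, wh = win
    for i, (tx, ty, tw, th) in enumerate(tiles):
        x0, y0 = max(wx, tx), max(wy, ty)
        x1, y1 = min(wx + ww, tx + tw), min(wy + wh, ty + th)
        if x1 > x0 and y1 > y0:  # non-empty overlap with this tile
            regions.append((i, (x0, y0, x1 - x0, y1 - y0)))
    return regions

# A 2x1 tiled display of 1600x1200 panels; a window straddling the seam
# is split into one stream per panel.
tiles = [(0, 0, 1600, 1200), (1600, 0, 1600, 1200)]
print(tile_regions((1000, 100, 1200, 800), tiles))
# → [(0, (1000, 100, 600, 800)), (1, (1600, 100, 600, 800))]
```

Every window move or resize changes this partitioning, which is why dynamic stream reconfiguration, discussed next, is central to the architecture.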

SAGE provides users with a full multitasking environment on a tiled display, enabling them to run multiple visualization applications concurrently while freely repositioning and resizing the application windows. Every window move or resize operation requires dynamic, non-trivial reconfiguration of the pixel streams involved. I have experimented with two stream reconfiguration approaches: central reconfiguration and distributed reconfiguration. In the first approach, a central SAGE controller generates all necessary control information and sends it to all other SAGE components to reconfigure them. The second approach performs much faster by generating control information on each sender and streaming it together with the pixel data to reconfigure the receivers. This is fast enough to support real-time animation of application windows (20 to 60 moves or resizes per second).
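The distributed approach can be illustrated with a small sketch (hypothetical message format, not the actual SAGE wire protocol): each sender prepends a compact control header, carrying the current window geometry, to every pixel block, so receivers reconfigure in-band from the next block they receive rather than waiting on a round-trip through a central controller.

```python
import struct

# Hypothetical in-band control header prepended to each pixel block:
# destination tile, target rectangle on that tile, and payload length.
HEADER = struct.Struct("!iiiiii")  # tile, x, y, w, h, nbytes

def pack_block(tile, rect, pixels: bytes) -> bytes:
    """Sender side: attach the current window geometry to the payload."""
    x, y, w, h = rect
    return HEADER.pack(tile, x, y, w, h, len(pixels)) + pixels

def unpack_block(block: bytes):
    """Receiver side: reconfigure from the header, then blit the payload."""
    tile, x, y, w, h, n = HEADER.unpack_from(block)
    pixels = block[HEADER.size:HEADER.size + n]
    return tile, (x, y, w, h), pixels

# A window move simply changes the rect the sender packs; the receiver
# learns the new layout from the very next block, with no central round-trip.
blk = pack_block(0, (1000, 100, 600, 800), b"\x00" * 16)
print(unpack_block(blk)[:2])
# → (0, (1000, 100, 600, 800))
```

Piggybacking control on the data path is the design choice that removes the controller from the per-frame critical path, which is what makes 20 to 60 reconfigurations per second feasible.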

SAGE is the graphics middleware of the OptIPuter [Leigh03, Smarr03], a major NSF-funded initiative to design advanced cyberinfrastructure for data-intensive science using optical networks, and has been used by international OptIPuter partners as well as numerous universities. To support distance collaboration among these international SAGE users in scalable high-resolution display environments, I extended SAGE to stream pixel data to multiple collaboration endpoints. This significantly increases the complexity of the pixel routing problem because the endpoints can have a variety of tiled display configurations and window layouts.

Visualcasting is a new SAGE network service that addresses this problem. It supports global collaboration by enabling two or more sites to share application content, sending multi-gigabit streams as required, so that remote users can interact with one another, and with their data, at the same time. The connected, participating endpoint sites form a virtual laboratory, as Visualcasting enables everyone to see the same content at the same time.

Endpoints can be of any size and configuration, varying from a single high-resolution monitor to room-sized tiled display walls. Each site maintains control of the layout (position, size, overlapping) of its displays. The technology traditionally used to broadcast information to two or more sites is multicast [Deering91]; while effective, it is not automatically supported by today's networking infrastructure and requires network engineering to deploy. Visualcasting, in contrast, is application-centric.

Visualcasting enables applications to select what information to send, and to whom, without technical support or network modifications. Each endpoint runs SAGE to send and/or receive real-time streams of ultra-high-resolution 2D and 3D content. Visualcasting manages the streams so that the same content is made available to all participants. It is supported by a bridging system that receives pixel streams from SAGE applications and replicates them to each endpoint. The bridge nodes are strategically placed in core network facilities and allocated to serve distant collaborators.
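The bridge's role can be sketched as follows (hypothetical class and method names; a simplification of the real system, which also repartitions each copy to match the endpoint's own tiled-display layout): the application sends each frame once, and the bridge fans it out to every registered endpoint.

```python
class VisualcastBridge:
    """Minimal sketch of a Visualcasting-style bridge: one incoming pixel
    stream is replicated to every registered endpoint, so the application's
    sending cost is independent of the number of collaborating sites."""

    def __init__(self):
        self.endpoints = {}  # site name -> callable that delivers a frame

    def register(self, name, deliver):
        self.endpoints[name] = deliver

    def unregister(self, name):
        self.endpoints.pop(name, None)

    def on_frame(self, frame: bytes):
        # The real bridge would repartition each copy for the endpoint's
        # tile layout before forwarding; here we simply replicate.
        for deliver in self.endpoints.values():
            deliver(frame)

# Two collaborating sites receive the same frame from a single send.
received = {}
bridge = VisualcastBridge()
bridge.register("chicago", lambda f: received.setdefault("chicago", f))
bridge.register("san_diego", lambda f: received.setdefault("san_diego", f))
bridge.on_frame(b"frame-0")
print(sorted(received))
# → ['chicago', 'san_diego']
```

Placing this replication point in core network facilities, rather than at the sending application, is what keeps the rendering cluster's outbound bandwidth constant as collaborators join.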

SAGE is similar to IBM's Scalable Graphics Engine (SGE) and Deep Computing Visualization (DCV), which support high-resolution visualization on a local scalable display system. DCV supports Remote Visual Networking (RVN), but it targets low-bandwidth (10/100 Mb/s) networks and a single desktop display. In contrast, SAGE Visualcasting supports collaborative visualization among distantly located scalable display environments using graphics streaming protocols specialized for high-performance wide-area networks with high round-trip times and tens of gigabits of bandwidth. Furthermore, the SAGE architecture is flexible enough that new network protocols can easily be applied to its graphics streaming protocols.


[Blanke00] Blanke, W., Bajaj, C., Fussell, D., and Zhang, X., “The Metabuffer: a Scalable Multiresolution Multidisplay 3-D Graphics System using Commodity Rendering Engines,” TR2000-16, University of Texas at Austin, February 2000.

[Deering91] Deering, S. E., “Multicast Routing in a Datagram Internetwork,” PhD thesis, Stanford University, December 1991.

[Jeong06] Jeong, B., Renambot, L., Jagodic, R., Singh, R., Aguilera, J., Johnson, A., and Leigh, J., “High-Performance Dynamic Graphics Streaming for Scalable Adaptive Graphics Environment,” ACM / IEEE Supercomputing 2006.

[Leigh03] Leigh, J., Renambot, L., DeFanti, T. A., et al., “An Experimental OptIPuter Architecture for Data-Intensive Collaborative Visualization,” Third Workshop on Advanced Collaborative Environments, Seattle, WA, June 2003.

[Smarr03] Smarr, L., Chien, A. A., DeFanti, T., Leigh, J., and Papadopoulos, P. M., “The OptIPuter,” Communications of the ACM, Volume 46, Issue 11, November 2003, pp. 58-67.

[Stix01] Stix, G., “The Triumph of the Light,” Scientific American, January 2001.


Jeong, B., Collaborative Visualization Architecture in Scalable Adaptive Graphics Environment, IBM Visualization and Graphics Student Symposium, TJ Watson Research - Hawthorne and Yorktown, New York, April 26th, 2007.