May 15th, 2017
In the domain of large-scale visualization instruments, Hybrid Reality Environments (HREs) are a recent innovation combining the best-in-class capabilities of immersive environments with those of ultra-high-resolution display walls. HREs create a seamless 2D/3D environment that supports both information-rich analysis and virtual-reality simulation exploration at a resolution matching human visual acuity. Collaborative research groups in HREs tend to work on a variety of tasks during a research session (sometimes in parallel), and these tasks require 2D data views, 3D views, linking between them, and the ability to bring in (or hide) data quickly as needed. Addressing these needs requires a matching software infrastructure that fully leverages the technological affordances offered by these new instruments.

In this dissertation, I detail the requirements of such an infrastructure and outline the model of an operating system for Hybrid Reality Environments. I present an implementation of the core components of this model, called Omegalib, and show how it has been successfully used to create HRE visualizations for research, education, and outreach. One key feature of Omegalib is its ability to support multiple applications running simultaneously on an HRE, a significant improvement over classic immersive systems, which normally support a single application at a time. HREs (and large displays in general) are ideal systems for co-located collaboration, and the ability to run and control multi-application workspaces is key to effective work in this setting. However, enabling multiple immersive applications on a large display presents several challenges. I discuss our solution to these challenges and present an extension of Omegalib that supports fully dynamic, multi-view, multi-user immersion.

I evaluate this new system, which I call Multiview Immersion (MVI), in a formal user study: two-user groups are asked to compare 3D sonar scans using MVI, single-view immersion, and multiple views without immersion (simulating a standard display wall). My objective is to understand the effect of MVI on analysis effectiveness, view usage, and collaboration patterns compared to the alternatives. MVI appears to reduce analysis error rates for this sample task and makes it more likely that both users remain engaged with the analysis task.
The work outlined in this dissertation extends the state of the art in large-display software infrastructures, addresses several limitations of current systems (the lack of immersion support in multi-window environments, and single-application-only immersive environments), and provides an advanced platform for application development on Hybrid Reality Environments. Moreover, this work outlines a technique for describing co-located collaboration patterns with large displays, based on user gaze direction and communication activity.
An operating system for Hybrid Reality Environments will accelerate research using these novel instruments in two ways. First, it will simplify and speed up the creation of domain-specific applications that make full use of HREs. Second, it will support co-located user groups tackling modern research, analysis, or planning tasks that require multiple heterogeneous data displays involving immersive and non-immersive views. The results of the user study presented in this work identify and describe features of co-located collaborative work within Hybrid Reality Environments that can be extended to large displays in general and can guide the design of applications for these systems.
Febretti, A., Multiview Immersion in Hybrid Reality Environments, Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science, Graduate College of the University of Illinois at Chicago, May 15th, 2017.