May 1st, 2008
Categories: Applications, Data Mining, Human Factors, MS / PhD Thesis, Software, Supercomputing, Visualization, VR
Virtual reality (VR) is a technology that allows a user to interact with a computer-simulated virtual environment (VE). Since its inception it has had an impact in many fields, but it has not yet gained wide acceptance for personal use. At the same time, personal mobile devices (MDs) have evolved tremendously in recent years: their deployment is becoming pervasive, their hardware configurations now rival those of low-end desktop systems, and the infrastructure supporting wireless inter-device connectivity and collaboration has improved dramatically. These trends make novel applications and business models built on VEs for individual users both promising and desirable. However, to make the key elements of a VE, i.e., visualization and interaction, work as smoothly for MD-based systems as for desktop-based systems, three research challenges must be addressed, and the portfolio of technical solutions for them is far from complete: 1) Which visual factors of a VE make the user’s performance similar to that in the real world? 2) How can a user effectively collect his or her own biometric data to build personalized profiles? 3) How can the processing resources in the infrastructure be used efficiently for an individual user’s compute-heavy tasks, such as intelligent human-computer interaction?
This work, referred to as the Personal Augmented Computing Environment (PACE), sets as its goal personalized visualization and scalable human-computer interaction, and addresses the three research challenges with corresponding solutions. First, to understand the visual factors that make a VE behave more like the real world, a set of controlled experiments was designed and conducted in a CAVE system. The experiments examined three visual factors: scene complexity, stereovision, and motion parallax. The results suggest that scene complexity and stereovision significantly affect users’ size perception, while motion parallax does not exhibit a significant effect. This study, not previously conducted by other researchers, helps us better understand the role of size constancy in VE performance.
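For readers unfamiliar with this kind of factorial analysis, the sketch below shows one way the effect of such visual factors on a size-judgment measure could be tested. It is illustrative only: the file name and column names (error, complexity, stereo, parallax) are assumptions rather than the thesis’s actual data layout, and for brevity it runs a simple ANOVA that ignores the within-subjects structure of a real CAVE study.

```python
# Hypothetical sketch: testing the effect of three visual factors on
# size-judgment error. Data layout and column names are assumptions,
# not the thesis's actual experiment files.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

trials = pd.read_csv("size_judgment_trials.csv")  # one row per trial

# Model size-judgment error as a function of the three visual factors.
model = ols("error ~ C(complexity) + C(stereo) + C(parallax)", data=trials).fit()

# Type-II ANOVA table: which factors have a statistically significant effect?
print(sm.stats.anova_lm(model, typ=2))
```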
Next, an effective process for collecting a user’s hand reference images for posture profile building is proposed and implemented. The novelty of the proposed process is that it takes advantage of the synergies between the mobile device and the computing infrastructure, and uses proactive measures rather than post-processing techniques to improve sample image quality. Whereas most hand posture recognizers are very sensitive to the quality of hand sample images and require them to be taken in strictly controlled laboratory settings, with the process proposed and implemented in this work a mobile device user can collect hand sample images on his or her own with relative ease and still obtain satisfactory posture profiles.
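As an illustration of what a proactive quality measure might look like (the thesis’s actual procedure is not reproduced here), the sketch below rejects a candidate hand image at capture time if it is too blurry or poorly lit, so the user can immediately retake it instead of relying on later post-processing. The function name and thresholds are hypothetical.

```python
# Illustrative sketch, not the thesis's pipeline: a proactive quality gate
# applied at capture time. Thresholds are assumptions chosen for illustration.
import cv2

def accept_sample(image_bgr, blur_thresh=100.0, min_brightness=60, max_brightness=200):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low variance -> blurry image
    brightness = gray.mean()
    if sharpness < blur_thresh:
        return False, "too blurry: ask the user to hold the device steady"
    if not (min_brightness <= brightness <= max_brightness):
        return False, "poor lighting: ask the user to move to better light"
    return True, "sample accepted for the posture profile"
```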
The third contribution of this work is a set of scalable computing techniques that use computer clusters to speed up vision-based hand detection and recognition. These techniques include a novel data structure, called a scanning node tree, which is used to manage cluster nodes; a load balancing algorithm that distributes workload evenly across nodes; and a node-to-node messaging protocol for parallel processing. The set of techniques has been shown to be efficient under various evaluation metrics. Enabled by these techniques, a sample application, a hand tracker/controller named Hand Wand, was implemented and integrated with a large-scale tiled display instrument as a human-computer interaction device.
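The thesis’s own scanning node tree, load balancer, and messaging protocol are not reproduced here; as a rough intuition for the partitioning problem they address, the toy sketch below splits the sliding-window scan of a single camera frame into near-equal shares across a small tree of cluster nodes. All names, parameters, and the round-robin policy are hypothetical.

```python
# Toy sketch of partitioning a frame's scan windows across cluster nodes.
# This is an assumption-laden illustration, not the thesis's data structure.
from dataclasses import dataclass, field

@dataclass
class ScanningNode:
    """One cluster node plus the scan windows assigned to it."""
    rank: int
    windows: list = field(default_factory=list)
    children: list = field(default_factory=list)

def generate_windows(frame_w, frame_h, win=64, step=16):
    """Enumerate sliding-window positions over one camera frame."""
    return [(x, y, win)
            for y in range(0, frame_h - win + 1, step)
            for x in range(0, frame_w - win + 1, step)]

def balance(windows, workers):
    """Round-robin assignment so every worker gets a near-equal share."""
    nodes = [ScanningNode(rank=r) for r in range(workers)]
    for i, w in enumerate(windows):
        nodes[i % workers].windows.append(w)
    return nodes

# Root node manages the workers; each worker scans only its own windows.
root = ScanningNode(rank=-1, children=balance(generate_windows(640, 480), workers=8))
print([len(child.windows) for child in root.children])  # near-equal workloads
```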
This work also presents an early, preliminary effort toward a personal VE: a reach-and-grasp training environment for in-home rehabilitation of stroke survivors. This pilot study produced encouraging, if preliminary, results.
Luo, X., PACE: A Framework for Personalized Visualization and Scalable Human Computer Interaction, Submitted as partial fulfillment of the requirements of the degree of Doctor of Philosophy in Computer Science, Graduate College of the University of Illinois at Chicago, Chicago, IL, May 1st, 2008.