Preliminary Examination Announcement: “Multi-view, multi-modal speech+gesture interaction in large display environments”

September 17th, 2019

Categories: Applications, MS / PhD Thesis, Software, User Groups, Visualization, Natural Language Processing, Human Computer Interaction (HCI)


Ph.D. Student: Jillian Aurisano

Date: Tuesday, September 17, 2019
Time: 11:00 am
Location: Room 3036, Engineering Research Facility

Committee:
Andrew Johnson (Chair)
Debaleena Chattopadhyay
Barbara Di Eugenio
G. Elisabeta Marai
Rick Stevens (Argonne National Laboratory, University of Chicago)

Analysis of large, complex datasets stands to benefit from environments that permit users to view and juxtapose many views of data. Large, high-resolution display environments can show many related views of data, but interacting with these views poses significant challenges in visual and interaction design. In this talk, I will present work toward “multi-view interactions” that enable users to create, organize, and act on many views at once through multi-modal speech+gesture queries in large, flexible canvas environments. The goal is to enable users to rapidly and efficiently generate sets of views in support of multi-view analysis tasks, organize these views to reflect changing analysis goals, and operate on sets of views collectively, rather than individually, to efficiently cover large portions of the “data+attribute space”. I will implement and evaluate this approach within two domains and two user communities: City of Chicago data for informed citizens, and deep learning prediction data for precision medicine in cancer.