May 16th, 2017 - May 15th, 2018
Deep learning is often referred to as a ‘black box’: aside from choosing the model architecture and hyperparameters (such as the learning rate or activation function), users have little access to what happens inside the model. We know that a model works by measuring its accuracy on a held-out test set, which suggests that the model must have ‘learned’ something meaningful from the training data. But how can we peek inside and ‘see’ what the model ‘sees’?
Biologists have recently begun to explore how deep learning approaches could be used to address questions involving diverse biological datasets. For this community, producing an accurate model is important but insufficient, since the larger goal is to contribute to our understanding of complex biological systems and phenomena. Visualization could help biology researchers ‘peek’ inside the black box and relate deep learning mechanics to existing biological knowledge. However, most visualization techniques for deep learning have been developed for images and text, so many of their encodings and layouts are designed for input data with fundamentally different perceptual properties than biological datasets, such as gene sequence or gene expression data.
This research explores current approaches in visualization for deep learning and how these approaches may (or may not) apply to biological data. The work is conducted with researchers at Argonne National Laboratory who study deep learning for cancer research and genomics.