Making Deep Learning Models Interactive and Self-Interpreting
Jian Zhou Lab
There are two projects in the Zhou lab:
1) Making deep learning models interactive
Interactivity is a cornerstone of exploratory data analysis, and as deep learning models grow ever more complex, it is critical to build intuitive human-model interfaces that allow users to understand and extract knowledge from models. In this project, you will work on interface design, mostly with web technologies, and build web-based applications that use our deep learning models. For example, you may implement a web interface that lets users interact with a deep learning genomic sequence model: generating predictions, identifying important sequence features, introducing mutations, and so on.
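As an illustrative sketch of the server-side logic such a web interface might sit on top of, the following stands in a toy scoring function for a real trained genomic sequence model (the function names, request format, and GC-content score are all hypothetical, not the lab's actual API):

```python
import json

# Hypothetical stand-in for a trained genomic sequence model; a real
# deployment would call the lab's deep learning model here instead.
def predict(sequence: str) -> float:
    # Toy score (GC content), just to make the sketch runnable.
    return sum(base in "GC" for base in sequence) / len(sequence)

def apply_mutation(sequence: str, position: int, new_base: str) -> str:
    """Return a copy of `sequence` with the base at `position` replaced."""
    if new_base not in "ACGT":
        raise ValueError(f"invalid base: {new_base}")
    return sequence[:position] + new_base + sequence[position + 1:]

def handle_request(request_json: str) -> str:
    """Handler a web front end might POST user input to.

    Expects {"sequence": ..., "mutations": [[pos, base], ...]} and returns
    model scores for the reference and mutated sequences as JSON.
    """
    req = json.loads(request_json)
    seq = req["sequence"]
    mutated = seq
    for pos, base in req.get("mutations", []):
        mutated = apply_mutation(mutated, pos, base)
    return json.dumps({
        "reference_score": predict(seq),
        "mutant_score": predict(mutated),
    })
```

A browser front end would then display the two scores side by side, letting the user see the predicted effect of each mutation they introduce.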
2) Self-interpreting deep learning
Deep learning models are typically black boxes that require significant effort to interpret: it is hard to see how they arrive at any given prediction. Interpreting highly nonlinear functions can be inherently challenging, so why not make the deep learning model interpret itself? In this project, you will design and train "self-interpreting" deep learning networks that provide not just predictions but also directly output the important input features.
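One common way to realize this idea is to give the network an attention-style head whose weights double as a feature-importance output. The minimal sketch below (NumPy only; the architecture and all names are illustrative assumptions, not the lab's actual design) scores each input feature, predicts from the attention-weighted input, and returns both:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class SelfInterpretingLinear:
    """Toy self-interpreting model (illustrative sketch only).

    An attention head scores each input feature, and the prediction is made
    from the attention-weighted input, so the attention weights serve
    directly as the model's own feature-importance output.
    """

    def __init__(self, n_features: int):
        self.w_attn = rng.normal(size=n_features)  # scores input features
        self.w_pred = rng.normal(size=n_features)  # maps weighted input to output

    def forward(self, x: np.ndarray):
        importance = softmax(self.w_attn * x)      # per-feature weights, sum to 1
        prediction = float((importance * x) @ self.w_pred)
        return prediction, importance
```

Because the importance weights participate in the forward pass, training the prediction loss also trains the interpretation, so no separate post hoc attribution step is needed.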
For more information about these projects, email Jian Zhou.