26 research outputs found

    CNN 101: Interactive Visual Learning for Convolutional Neural Networks

    The success of deep learning in solving problems previously thought intractable has inspired many non-experts to learn about and understand this exciting technology. However, the complexity of deep learning models often makes it challenging for learners to take their first steps. We present our ongoing work, CNN 101, an interactive visualization system for explaining and teaching convolutional neural networks. Through tightly integrated interactive views, CNN 101 offers both an overview and detailed descriptions of how a model works. Built with modern web technologies, CNN 101 runs locally in the user's web browser without requiring specialized hardware, broadening public access to education about modern deep learning techniques.

    Comment: CHI'20 Late-Breaking Work (April 25-30, 2020), 7 pages, 3 figures
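
    As a concrete illustration of the sliding-window operation that a tool like CNN 101 animates, here is a minimal NumPy sketch of the convolution applied by a single CNN filter. This is not code from CNN 101 (which is a web application); the image and edge-detecting kernel are made-up examples:

        import numpy as np

        def conv2d(image, kernel):
            """Naive 2-D convolution (valid padding), the core operation a CNN
            explainer visualizes step by step. Note: like most deep learning
            frameworks, this slides the kernel without flipping it, i.e. it
            computes cross-correlation."""
            h, w = image.shape
            kh, kw = kernel.shape
            out = np.zeros((h - kh + 1, w - kw + 1))
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    # Element-wise product of the kernel with one image patch
                    out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
            return out

        # Example: a vertical-edge filter responds along the 0-to-1 boundary
        image = np.array([[0, 0, 1, 1, 1]] * 5, dtype=float)
        kernel = np.array([[1, 0, -1],
                           [1, 0, -1],
                           [1, 0, -1]], dtype=float)
        print(conv2d(image, kernel))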

    Beyond saliency: understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation

    Despite the tremendous achievements of deep convolutional neural networks (CNNs) in many computer vision tasks, understanding how they actually work remains a significant challenge. In this paper, we propose a novel two-step understanding method, the Salient Relevance (SR) map, which aims to shed light on how deep CNNs recognize images and learn features from the areas, referred to as attention areas, that they attend to. Our method starts with a layer-wise relevance propagation (LRP) step, which estimates a pixel-wise relevance map over the input image. We then construct a context-aware saliency map, the SR map, from the LRP-generated map; it predicts areas close to the foci of attention rather than the isolated pixels that LRP reveals. In the human visual system, information about regions matters more for recognition than information about individual pixels, so our approach closely simulates human recognition. Experimental results on the ILSVRC2012 validation dataset with two well-established deep CNN models, AlexNet and VGG-16, demonstrate that our approach concisely identifies not only key pixels but also the attention areas that contribute to the underlying network's comprehension of the given images. As such, the SR map constitutes a convenient visual interface that unveils the visual attention of the network and reveals which types of objects the model has learned to recognize after training. The source code is available at https://github.com/Hey1Li/Salient-Relevance-Propagation.

    Comment: 35 pages, 15 figures
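
    The two-step structure of the method can be summarized in a short PyTorch sketch. This is an approximation, not the authors' code (which is at the GitHub link above): gradient-times-input stands in for the LRP step, and simple average pooling stands in for the context-aware saliency step that aggregates pixel relevance into regions:

        import torch
        import torch.nn.functional as F
        from torchvision.models import alexnet

        # Step 1 stand-in: a pixel-wise relevance map over the input.
        # The paper uses layer-wise relevance propagation (LRP);
        # gradient * input is a crude substitute used here for brevity.
        model = alexnet(weights="IMAGENET1K_V1").eval()
        x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for a preprocessed image
        score = model(x)[0].max()        # top-class logit
        score.backward()
        relevance = (x.grad * x).sum(dim=1).abs()  # shape (1, 224, 224)

        # Step 2 stand-in: aggregate isolated pixel relevance into region-level
        # attention areas. The paper builds a context-aware saliency map;
        # local averaging illustrates the pixel-to-region idea.
        sr_map = F.avg_pool2d(relevance.unsqueeze(1), kernel_size=11,
                              stride=1, padding=5).squeeze()
        print(sr_map.shape)  # torch.Size([224, 224])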

    exploRNN: Understanding Recurrent Neural Networks through Visual Exploration

    Due to the success of deep learning and its growing job market, students and researchers from many areas are becoming interested in learning about deep learning technologies. Visualization has proven to be of great help during this learning process, yet most current educational visualizations target one specific architecture or use case. Recurrent neural networks (RNNs), which can process sequential data, are not yet covered, even though tasks on sequential data, such as text and function analysis, are at the forefront of deep learning research. We therefore propose exploRNN, the first interactively explorable educational visualization for RNNs. exploRNN allows interactive experimentation with RNNs and provides in-depth information on their functionality and behavior during training. By defining educational objectives targeted at understanding RNNs, and using them as guidelines throughout the visual design process, we have designed exploRNN to communicate the most important concepts of RNNs directly within a web browser. exploRNN provides a coarse overview of the training process of RNNs while also allowing detailed inspection of the data flow within LSTM cells. In this paper, we motivate the design of exploRNN, detail its realization, and discuss the results of a user study investigating its benefits.
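
    For reference, the per-step data flow inside an LSTM cell that exploRNN lets users inspect follows the standard gate equations. Below is a minimal NumPy sketch of one cell step; this is not exploRNN's implementation (which runs in the browser), and the sizes and random weights are made-up examples:

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def lstm_cell(x, h_prev, c_prev, W, U, b):
            """One LSTM step. W, U, b stack the parameters of the input,
            forget, candidate, and output gates (each of hidden size n)."""
            n = h_prev.shape[0]
            z = W @ x + U @ h_prev + b      # (4n,) pre-activations
            i = sigmoid(z[0:n])             # input gate
            f = sigmoid(z[n:2 * n])         # forget gate
            g = np.tanh(z[2 * n:3 * n])     # candidate cell state
            o = sigmoid(z[3 * n:4 * n])     # output gate
            c = f * c_prev + i * g          # new cell state
            h = o * np.tanh(c)              # new hidden state
            return h, c

        # Tiny example: hidden size 2, input size 3, random weights
        rng = np.random.default_rng(0)
        n, d = 2, 3
        h, c = np.zeros(n), np.zeros(n)
        W = rng.normal(size=(4 * n, d))
        U = rng.normal(size=(4 * n, n))
        b = np.zeros(4 * n)
        for x in rng.normal(size=(5, d)):   # a length-5 input sequence
            h, c = lstm_cell(x, h, c, W, U, b)
        print(h, c)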