3 research outputs found


    Department of Computer Science and Engineering

    As deep learning has grown rapidly, so has the desire to interpret its black-box models, and many analysis tools have emerged to do so. Interpretability has in fact popularized the use of deep learning in areas that demand relatively accurate and reliable decision making, including research, manufacturing, finance, and healthcare. However, one thing should not be overlooked: uncertainty. Because explanation tools depend on the model, the model's uncertainty is directly reflected in the interpretation of its decisions. Just as quality and cost are directly affected by measurement uncertainty, the uncertainty of interpretations produced from deep learning models should also be taken into account; to date, no such attempt has been made. In this paper, we therefore propose Bayesian input attribution rather than deterministic input attribution, applying the dropout-based approximation of Bayesian inference in deep Gaussian processes to input attribution. We then extract candidate inputs that can sufficiently affect the model's output, taking into account both the attribution itself and its uncertainty.
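    The abstract does not spell out the procedure, but the described combination of Monte Carlo dropout with input attribution can be illustrated with a minimal PyTorch sketch. It assumes a classifier with dropout layers and uses gradient-times-input as the attribution method; the function names and the choice of attribution are illustrative, not the paper's exact formulation.

    ```python
    import torch
    import torch.nn as nn

    def enable_mc_dropout(model: nn.Module) -> None:
        """Put only the dropout layers in train mode so the rest of the
        network (e.g. batch norm) behaves deterministically at inference."""
        model.eval()
        for m in model.modules():
            if isinstance(m, nn.Dropout):
                m.train()

    def bayesian_input_attribution(model: nn.Module, x: torch.Tensor,
                                   target: int, n_samples: int = 50):
        """Gradient-times-input attribution averaged over stochastic
        dropout passes; returns (mean attribution, std as uncertainty).
        Assumes x has shape (1, ...) and the model outputs class logits."""
        enable_mc_dropout(model)
        samples = []
        for _ in range(n_samples):
            x_in = x.clone().detach().requires_grad_(True)
            logit = model(x_in)[0, target]  # score for the class of interest
            logit.backward()
            samples.append((x_in.grad * x_in).detach())
        attr = torch.stack(samples)                # (n_samples, *x.shape)
        return attr.mean(dim=0), attr.std(dim=0)   # attribution, uncertainty
    ```

    Under this reading, candidate inputs would be those whose mean attribution is large while its standard deviation across dropout samples stays small, i.e. features that affect the output consistently rather than only under some dropout masks.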

    Interpreting Internal Activation Patterns in Deep Temporal Neural Networks by Finding Prototypes

    Deep neural networks have demonstrated competitive performance in classification tasks on sequential data. However, it remains difficult to understand which temporal patterns the internal channels of a deep neural network capture for decision making on sequential data. To address this issue, we propose a new framework for visualizing the temporal representations learned in deep neural networks without hand-crafted segmentation labels. Given input data, our framework extracts the highly activated temporal regions that contribute to activating internal nodes and characterizes such regions with a prototype selection method based on Maximum Mean Discrepancy. The representative temporal patterns, referred to here as Prototypes of Temporally Activated Patterns (PTAP), provide core examples of subsequences in the sequential data for interpretability. We also analyze the role of each channel through Value-LRP plots built from the representative prototypes and the distribution of the input attribution; these plots give visual information for recognizing the shapes a channel focuses on for decision making.
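    The abstract names Maximum Mean Discrepancy as the basis of prototype selection; a common greedy formulation (in the spirit of MMD-critic) is sketched below in NumPy. It assumes the extracted highly activated subsequences have been flattened into equal-length row vectors; the RBF kernel, `gamma`, and all function names are assumptions for illustration, not the paper's confirmed procedure.

    ```python
    import numpy as np

    def rbf_kernel(X: np.ndarray, gamma: float = 1.0) -> np.ndarray:
        """RBF Gram matrix over row-vector samples (flattened subsequences)."""
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        return np.exp(-gamma * d2)

    def select_prototypes(X: np.ndarray, m: int, gamma: float = 1.0) -> list:
        """Greedily pick m rows of X whose kernel mean embedding best matches
        the full set, i.e. minimize the squared MMD at each step."""
        K = rbf_kernel(X, gamma)
        n = K.shape[0]
        colmean = K.mean(axis=1)  # average similarity of each point to the data
        chosen = []
        for _ in range(m):
            best, best_obj = None, np.inf
            for j in range(n):
                if j in chosen:
                    continue
                S = chosen + [j]
                # MMD^2 up to a constant: mean over S x S minus 2 * mean over S x X
                obj = K[np.ix_(S, S)].mean() - 2.0 * colmean[S].mean()
                if obj < best_obj:
                    best, best_obj = j, obj
            chosen.append(best)
        return chosen
    ```

    Applied per channel to that channel's highly activated subsequences, the selected rows would play the role of the PTAPs: a small set of subsequences whose distribution is close, in MMD terms, to the full set of regions that activate the channel.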