
    Interpreting Neural Networks Using Flip Points

    Neural networks have been criticized for their lack of easy interpretation, which undermines confidence in their use for important applications. Here, we introduce a novel technique: interpreting a trained neural network by investigating its flip points. A flip point is any point that lies on the boundary between two output classes: e.g. for a neural network with a binary yes/no output, a flip point is any input that generates equal scores for "yes" and "no". The flip point closest to a given input is of particular importance, and this point is the solution to a well-posed optimization problem. This paper gives an overview of the uses of flip points and how they are computed. Through results on standard datasets, we demonstrate how flip points can be used to provide detailed interpretation of the output produced by a neural network. Moreover, for a given input, flip points enable us to measure confidence in the correctness of outputs much more effectively than the softmax score. They also identify influential features of the inputs, reveal bias, and find changes in the input that change the output of the model. We show that the distance between an input and its closest flip point identifies the most influential points in the training data. Using principal component analysis (PCA) and rank-revealing QR factorization (RR-QR), the set of directions from each training input to its closest flip point provides explanations of how a trained neural network processes an entire dataset: what features are most important for classification into a given class, which features are most responsible for particular misclassifications, how an adversary might fool the network, etc. Although we investigate flip points for neural networks, their usefulness is actually model-agnostic.
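
    As a concrete illustration of the "closest flip point as a well-posed optimization problem" idea described in the abstract, the Python sketch below finds the flip point nearest to an input x0 for a toy two-layer binary classifier: it minimizes the distance to x0 subject to the two class scores being equal. The network weights, input dimension, and the SLSQP solver are assumptions made purely for illustration; the paper's own model and solution method may differ.

        # Minimal sketch (not the paper's implementation): closest flip point
        # for a toy binary classifier, posed as
        #     minimize ||x - x0||^2   subject to   score_yes(x) = score_no(x)
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        W1, b1 = rng.standard_normal((5, 4)), rng.standard_normal(5)   # hidden layer (toy weights)
        W2, b2 = rng.standard_normal((2, 5)), rng.standard_normal(2)   # two output scores

        def scores(x):
            """Raw class scores (pre-softmax) of the toy network."""
            h = np.tanh(W1 @ x + b1)
            return W2 @ h + b2

        x0 = rng.standard_normal(4)                       # the input to be interpreted

        res = minimize(
            fun=lambda x: np.sum((x - x0) ** 2),          # stay as close to x0 as possible
            x0=x0,
            constraints=[{"type": "eq",                   # lie on the decision boundary
                          "fun": lambda x: scores(x)[0] - scores(x)[1]}],
            method="SLSQP",
        )

        flip_point = res.x
        direction = flip_point - x0                       # feature changes that flip the output
        print("distance to boundary:", np.linalg.norm(direction))

    Stacking such direction vectors over an entire training set yields the matrix to which PCA or RR-QR can be applied, as the abstract describes for dataset-level explanations.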

    Automatic sensor assignment of a supermarket refrigeration system


    Active diagnosis of hybrid systems - A model predictive approach


    Porosity Controls Spread of Excitation in Tectorial Membrane Traveling Waves

    Cochlear frequency selectivity plays a key role in our ability to understand speech, and is widely believed to be associated with cochlear amplification. However, genetic studies targeting the tectorial membrane (TM) have demonstrated both sharper and broader tuning with no obvious changes in hair bundle or somatic motility mechanisms. For example, cochlear tuning of Tectb^(–/–) mice is significantly sharper than that of Tecta^(Y1870C/+) mice, even though TM stiffnesses are similarly reduced relative to wild-type TMs. Here we show that differences in TM viscosity can account for these differences in tuning. In the basal cochlear turn, nanoscale pores of Tecta^(Y1870C/+) TMs are significantly larger than those of Tectb^(–/–) TMs. The larger pore size reduces shear viscosity (by ∼70%), thereby reducing traveling wave speed and increasing spread of excitation. These results demonstrate the previously unrecognized importance of TM porosity in cochlear and neural tuning.

    Funding: National Institutes of Health (U.S.) (Grant R01-DC00238); National Science Foundation (U.S.) Graduate Research Fellowship Program (Grant 1122374); National Institutes of Health (U.S.) (Training Grant