
    Lecture notes: Semidefinite programs and harmonic analysis

    Lecture notes for the tutorial given at HPOPT 2008, the 10th International Workshop on High Performance Optimization Techniques (Algebraic Structure in Semidefinite Programming), June 11-13, 2008, Tilburg University, The Netherlands.
    Comment: 31 pages

    Unsupervised Interpretable Basis Extraction for Concept-Based Visual Explanations

    An important line of research attempts to explain CNN image classifier predictions and intermediate-layer representations in terms of human-understandable concepts. In this work, we expand on previous works that use annotated concept datasets to extract interpretable feature-space directions, and we propose an unsupervised post-hoc method that extracts a disentangled interpretable basis by searching for the rotation of the feature space that best explains sparse, one-hot, thresholded transformed representations of pixel activations. We experiment with existing popular CNNs and demonstrate the effectiveness of our method in extracting an interpretable basis across network architectures and training datasets. We extend the basis interpretability metrics found in the literature and show that intermediate-layer representations become more interpretable when transformed to the bases extracted with our method. Finally, using the basis interpretability metrics, we compare the bases extracted with our method against those derived with a supervised approach, find that in one respect the proposed unsupervised approach has a strength that constitutes a limitation of the supervised one, and give potential directions for future research.
    Comment: 15 pages. Accepted in IEEE Transactions on Artificial Intelligence, Special Issue on New Developments in Explainable and Interpretable AI
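    The abstract does not include code, but the core idea, searching for a rotation of the feature space under which thresholded activations become sparse and near one-hot, can be sketched. The PyTorch snippet below is a hypothetical illustration, not the authors' method: the feature dimension, threshold, and sparsity loss are all assumptions.

        import torch

        # Hypothetical sketch: learn an orthogonal rotation of a CNN feature
        # space so that, after a soft threshold, each pixel's rotated activation
        # vector is sparse and close to one-hot.
        torch.manual_seed(0)
        d = 64                         # feature dimension (assumed)
        acts = torch.randn(10_000, d)  # stand-in for pixel activations from a layer

        rot = torch.nn.Linear(d, d, bias=False)
        # Constrain the learned matrix to stay orthogonal (a rotation).
        torch.nn.utils.parametrizations.orthogonal(rot)

        opt = torch.optim.Adam(rot.parameters(), lr=1e-2)
        for step in range(200):
            z = rot(acts)               # rotated features
            g = torch.sigmoid(z - 1.0)  # soft threshold, a proxy for one-hot gating
            # Encourage few active directions per pixel (sparsity) while keeping
            # the strongest direction active (avoids the all-zero solution).
            loss = g.sum(dim=1).mean() - g.max(dim=1).values.mean()
            opt.zero_grad()
            loss.backward()
            opt.step()

        basis = rot.weight.detach()  # rows are candidate interpretable directions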

    Generalized Neural Collapse for a Large Number of Classes

    Neural collapse provides an elegant mathematical characterization of the learned last-layer representations (a.k.a. features) and classifier weights in deep classification models. Such results not only provide insight but also motivate new techniques for improving practical deep models. However, most existing empirical and theoretical studies of neural collapse focus on the case where the number of classes is small relative to the dimension of the feature space. This paper extends neural collapse to the case where the number of classes is much larger than the feature dimension, a regime that occurs broadly in language models, retrieval systems, and face recognition applications. We show that the features and classifier exhibit a generalized neural collapse phenomenon in which the minimum one-vs-rest margin is maximized. We provide an empirical study verifying the occurrence of generalized neural collapse in practical deep neural networks. Moreover, we provide a theoretical study showing that generalized neural collapse provably occurs under the unconstrained feature model with a spherical constraint, given certain technical conditions on the feature dimension and the number of classes.
    Comment: 32 pages, 12 figures
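    As a concrete handle on the quantity the abstract highlights, the minimum one-vs-rest margin of a linear classifier can be computed directly. The following is a hedged sketch rather than the paper's code; the sizes and random features are placeholders, chosen so that there are many more classes than feature dimensions, with features and classifier weights normalized to the sphere as in the abstract's setting.

        import torch

        def min_one_vs_rest_margin(features, weights, labels):
            """features: (n, d); weights: (K, d); labels: (n,) int64.
            Returns min over samples of (true-class logit - best rival logit)."""
            logits = features @ weights.T                        # (n, K)
            true = logits.gather(1, labels[:, None]).squeeze(1)  # true-class logits
            logits.scatter_(1, labels[:, None], float("-inf"))   # mask true class
            rival = logits.max(dim=1).values                     # best competing logit
            return (true - rival).min()

        n, d, K = 512, 32, 1000  # far more classes than feature dims (assumed sizes)
        h = torch.nn.functional.normalize(torch.randn(n, d), dim=1)  # spherical features
        w = torch.nn.functional.normalize(torch.randn(K, d), dim=1)  # spherical classifiers
        y = torch.randint(K, (n,))
        print(min_one_vs_rest_margin(h, w, y))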