
    Groupoid equivalence and the associated iterated crossed product

    Given groupoids $G$ and $H$ and a $(G,H)$-equivalence $X$, we may form the transformation groupoid $G\ltimes X\rtimes H$. Given a separable groupoid dynamical system $(A, G\ltimes X\rtimes H, \omega)$, we may restrict $\omega$ to an action of $G\ltimes X$ on $A$ and form the crossed product $A\rtimes G\ltimes X$. We show that there is an action of $H$ on $A\rtimes G\ltimes X$ and that the iterated crossed product $(A\rtimes G\ltimes X)\rtimes H$ is naturally isomorphic to the crossed product $A\rtimes(G\ltimes X\rtimes H)$.
    Comment: 18 pages; changed typo in title
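    For reference, the main isomorphism claimed in the abstract, restated in display form (no content beyond the abstract itself):

```latex
% X is a (G,H)-equivalence; \omega is the action of the transformation
% groupoid G \ltimes X \rtimes H on the separable C*-algebra A.
\[
  (A \rtimes G \ltimes X) \rtimes H \;\cong\; A \rtimes (G \ltimes X \rtimes H)
\]
```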

    Edit at your own risk: evaluating the robustness of edited models to distribution shifts

    The current trend toward ever-larger models makes standard retraining procedures an ever-more expensive burden. For this reason, there is growing interest in model editing, which enables computationally inexpensive, interpretable, post-hoc model modifications. While many model editing techniques are promising, research on the properties of edited models is largely limited to evaluation of validation accuracy. The robustness of edited models is an important and yet mostly unexplored topic. In this paper, we employ recently developed techniques from the field of deep learning robustness to investigate both how model editing affects the general robustness of a model and how it affects the robustness of the specific behavior targeted by the edit. We find that edits tend to reduce general robustness, but that the degree of degradation depends on the editing algorithm and the layers chosen. Motivated by these observations, we introduce a new model editing algorithm, 1-layer interpolation (1-LI), which uses weight-space interpolation to navigate the trade-off between editing-task accuracy and general robustness.
    Comment: DB and CG contributed equally
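    The abstract does not spell out how 1-LI is implemented; the sketch below assumes it linearly interpolates, in weight space, only the parameters of the single edited layer between the pre-edit and post-edit checkpoints. The layer name and coefficient `alpha` are illustrative, not taken from the paper.

```python
# Hypothetical sketch of 1-layer interpolation (1-LI): blend the edited
# layer's weights between the original and edited models; every other
# parameter keeps its original value. Models are assumed to be
# torch.nn.Module instances with matching state_dict keys.
import copy

def one_layer_interpolation(original_model, edited_model, layer_name, alpha):
    """Return a model whose `layer_name` parameters are
    (1 - alpha) * original + alpha * edited; everything else is original."""
    blended = copy.deepcopy(original_model)
    orig_state = original_model.state_dict()
    edit_state = edited_model.state_dict()
    new_state = dict(orig_state)
    for key in orig_state:
        if key.startswith(layer_name):
            new_state[key] = (1 - alpha) * orig_state[key] + alpha * edit_state[key]
    blended.load_state_dict(new_state)
    return blended

# alpha = 1 keeps the full edit (best editing-task accuracy); alpha = 0
# recovers the original model (best general robustness); intermediate
# values navigate the trade-off between the two.
```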

    Understanding the Inner Workings of Language Models Through Representation Dissimilarity

    As language models are applied to an increasing number of real-world applications, understanding their inner workings has become an important issue in model trust, interpretability, and transparency. In this work we show that representation dissimilarity measures, which are functions that measure the extent to which two models' internal representations differ, can be a valuable tool for gaining insight into the mechanics of language models. Among our insights are: (i) an apparent asymmetry in the internal representations of models using SoLU and GeLU activation functions, (ii) evidence that dissimilarity measures can identify and locate generalization properties of models that are invisible via in-distribution test set performance, and (iii) new evaluations of how language model features vary as width and depth are increased. Our results suggest that dissimilarity measures are a promising set of tools for shedding light on the inner workings of language models.
    Comment: EMNLP 2023 (main)
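    The abstract does not name specific dissimilarity measures. As an illustration, the sketch below uses one standard choice, linear centered kernel alignment (CKA), and reports 1 − CKA as a dissimilarity between activation matrices collected from the same inputs at a chosen layer of each model; `collect_activations` in the usage comment is a hypothetical helper.

```python
# Illustrative representation-dissimilarity measure (1 - linear CKA).
# X and Y are (n_examples, dim) activation matrices from two models,
# computed on the same batch of inputs at a chosen layer.
import numpy as np

def linear_cka_dissimilarity(X: np.ndarray, Y: np.ndarray) -> float:
    # Center each feature over the examples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F).
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return 1.0 - cross / (norm_x * norm_y)

# Usage (hypothetical helper):
# acts_a = collect_activations(model_a, batch, layer)
# acts_b = collect_activations(model_b, batch, layer)
# print(linear_cka_dissimilarity(acts_a, acts_b))
```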

    Attributing Learned Concepts in Neural Networks to Training Data

    By now there is substantial evidence that deep learning models learn certain human-interpretable features as part of their internal representations of data. As having the right (or wrong) concepts is critical to trustworthy machine learning systems, it is natural to ask which inputs from the model's original training set were most important for learning a concept at a given layer. To answer this, we combine data attribution methods with methods for probing the concepts learned by a model. Training network and probe ensembles for two concept datasets on a range of network layers, we use the recently developed TRAK method for large-scale data attribution. We find some evidence of convergence: removing the 10,000 top-attributing images for a concept and retraining the model changes neither the location of the concept in the network nor the probing sparsity of the concept. This suggests that, rather than being highly dependent on a few specific examples, the features that inform the development of a concept are spread in a more diffuse manner across its exemplars, implying robustness in concept formation.
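    A minimal sketch of the probing half of this pipeline, assuming L1-regularized linear probes fit on frozen activations from a single layer; the TRAK scoring and retraining calls in the trailing comments are placeholders, not the paper's actual interface.

```python
# Fit a sparse linear probe for a concept on one layer's activations.
# The probe's weight sparsity indicates how concentrated the concept is there.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_concept_probe(acts: np.ndarray, concept_labels: np.ndarray):
    """acts: (n_examples, dim) frozen activations; concept_labels: 0/1 array."""
    probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    probe.fit(acts, concept_labels)
    sparsity = float(np.mean(probe.coef_ == 0.0))
    accuracy = probe.score(acts, concept_labels)
    return probe, sparsity, accuracy

# Counterfactual check described in the abstract (placeholder calls):
# scores = trak_scores_for_concept(model, train_set, probe)   # TRAK attribution
# keep = np.argsort(scores)[:-10_000]                         # drop top 10k attributors
# retrained = train_model(train_set.subset(keep))             # retrain without them
# Re-fitting probes on `retrained` across layers and comparing accuracy and
# sparsity tests whether the concept's location or sparsity actually moved.
```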

    Measurements of the Diffuse Ultraviolet Background and the Terrestrial Airglow with the Space Telescope Imaging Spectrograph

    Far-UV observations in and near the Hubble Deep Fields demonstrate that the Space Telescope Imaging Spectrograph (STIS) can potentially obtain unique and precise measurements of the diffuse far-ultraviolet background. Although STIS is not the ideal instrument for such measurements, high-resolution images allow Galactic and extragalactic objects to be masked to very faint magnitudes, thus ensuring a measurement of the truly diffuse UV signal. The programs we have analyzed were not designed for this scientific purpose, but would be sufficient to obtain a very sensitive measurement if it were not for a weak but larger-than-expected signal from airglow in the STIS 1450-1900 Å bandpass. Our analysis shows that STIS far-UV crystal quartz observations taken near the limb during orbital day can detect a faint airglow signal, most likely from N I λ1493, that is comparable to the dark rate and inseparable from the far-UV background. Discarding all but the night data from these datasets gives a diffuse far-ultraviolet background measurement of 501 +/- 103 photons cm^-2 s^-1 sr^-1 Å^-1, along a line of sight with very low Galactic neutral hydrogen column (N_HI = 1.5×10^20 cm^-2) and extinction (E(B-V) = 0.01 mag). This result is in good agreement with earlier measurements of the far-UV background, and should not include any significant contribution from airglow. We present our findings as a warning to other groups who may use the STIS far-UV camera to observe faint extended targets, and to demonstrate how this measurement may be properly obtained with STIS.
    Comment: 7 pages, LaTeX, 4 figures. Uses corrected version of emulateapj.sty and apjfonts.sty (included). Accepted for publication in A
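    An illustrative sketch, not the authors' pipeline, of the approach described above: mask detected sources, keep only night-side exposures, and estimate the diffuse level from the remaining pixels. The array layout and the sigma-clipping step are assumptions.

```python
# Estimate a diffuse sky level from a stack of frames after masking sources
# and discarding orbital-day exposures (when airglow contaminates the signal).
import numpy as np

def diffuse_background(frames, exposure_is_night, source_mask, n_sigma=3.0):
    """frames: (n_frames, H, W) array; exposure_is_night: (n_frames,) bool;
    source_mask: (H, W) bool, True where a detected object must be excluded."""
    night = frames[exposure_is_night]
    sky = night[:, ~source_mask].ravel()
    # Simple iterative sigma clipping to reject residual outliers
    # (cosmic rays, faint unmasked sources).
    for _ in range(5):
        mu, sigma = sky.mean(), sky.std()
        sky = sky[np.abs(sky - mu) < n_sigma * sigma]
    return sky.mean(), sky.std() / np.sqrt(sky.size)
```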