
    An Ultraluminous Supersoft X-ray Source in M81: An Intermediate-Mass Black Hole?

    Ultraluminous supersoft X-ray sources (ULSs) exhibit supersoft spectra with blackbody temperatures of 50-100 eV and bolometric luminosities above 10^39 erg s^-1, and are possibly intermediate-mass black holes (IMBHs) of ≥10^3 M_⊙ or massive white dwarfs that are progenitors of Type Ia supernovae. In this Letter we report our optical studies of one such source in M81, M81-ULS1, based on archival HST observations. M81-ULS1 is identified with a point-like object whose spectral energy distribution reveals a blue component in addition to the emission of an AGB companion star. The blue component is consistent with the power law expected from a geometrically thin accretion disk around an IMBH accretor, but inconsistent with the power law expected from an X-ray irradiated, flared accretion disk around a white dwarf accretor. This result is strong evidence that M81-ULS1 harbors an IMBH rather than a white dwarf.
    Comment: 12 pages, 1 table, 3 figures
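    For intuition, here is a minimal sketch (not from the paper; the photometry values below are illustrative placeholders, not the HST measurements) of how an observed optical SED slope can be compared against the f_ν ∝ ν^(1/3) power law expected from a standard geometrically thin accretion disk:

```python
import numpy as np

# Hypothetical photometry: effective wavelengths (nm) and fluxes f_nu (arbitrary units)
wavelengths_nm = np.array([435.0, 555.0, 814.0])   # e.g. HST B, V, I bands
flux_nu = np.array([3.1, 2.6, 2.0])                # placeholder values

nu = 3e17 / wavelengths_nm                         # frequency in Hz (c / lambda, with c = 3e17 nm/s)

# Fit log f_nu = alpha * log nu + const to estimate the spectral slope alpha
alpha, _ = np.polyfit(np.log10(nu), np.log10(flux_nu), 1)

print(f"measured slope alpha = {alpha:.2f}")
print(f"thin-disk expectation alpha = +1/3 = {1/3:.2f}")
```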

    What does a platypus look like? Generating customized prompts for zero-shot image classification

    Open vocabulary models are a promising new paradigm for image classification. Unlike traditional classification models, open vocabulary models classify among any arbitrary set of categories specified with natural language during inference. This natural language, called "prompts", typically consists of a set of hand-written templates (e.g., "a photo of a {}") which are completed with each of the category names. This work introduces a simple method to generate higher accuracy prompts, without using explicit knowledge of the image domain and with far fewer hand-constructed sentences. To achieve this, we combine open vocabulary models with large language models (LLMs) to create Customized Prompts via Language models (CuPL, pronounced "couple"). In particular, we leverage the knowledge contained in LLMs in order to generate many descriptive sentences that are customized for each object category. We find that this straightforward and general approach improves accuracy on a range of zero-shot image classification benchmarks, including over one percentage point gain on ImageNet. Finally, this method requires no additional training and remains completely zero-shot. Code is available at https://github.com/sarahpratt/CuPL
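    As a rough illustration of the CuPL recipe, here is a minimal sketch assuming the open-source OpenAI CLIP package (https://github.com/openai/CLIP); llm_generate is a hypothetical stand-in for the LLM query that produces the descriptive sentences, and the canned strings it returns are placeholders:

```python
import torch
import clip

def llm_generate(category: str, n: int = 3) -> list[str]:
    # Placeholder for an LLM call such as "Describe what a {category} looks like."
    return [f"a photo of a {category}",
            f"a {category} in its natural habitat",
            f"a close-up of a {category}"][:n]

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def class_embedding(category: str) -> torch.Tensor:
    """Build a zero-shot classifier weight for one category from LLM-written prompts."""
    prompts = llm_generate(category)
    tokens = clip.tokenize(prompts).to(device)
    with torch.no_grad():
        feats = model.encode_text(tokens)
    feats = feats / feats.norm(dim=-1, keepdim=True)   # unit-normalize each prompt embedding
    mean = feats.mean(dim=0)                           # average over the prompts
    return mean / mean.norm()

# At inference, an image embedding is compared by cosine similarity against
# one averaged text embedding per category; no training is involved.
```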

    Extremely Simple Activation Shaping for Out-of-Distribution Detection

    The separation between training and deployment of machine learning models implies that not all scenarios encountered in deployment can be anticipated during training, and therefore relying solely on advancements in training has its limits. Out-of-distribution (OOD) detection is an important area that stress-tests a model's ability to handle unseen situations: do models know when they don't know? Existing OOD detection methods either incur extra training steps, require additional data, or make nontrivial modifications to the trained network. In contrast, in this work we propose an extremely simple, post-hoc, on-the-fly activation shaping method, ASH, where a large portion (e.g. 90%) of a sample's activation at a late layer is removed, and the rest (e.g. 10%) simplified or lightly adjusted. The shaping is applied at inference time and does not require any statistics calculated from training data. Experiments show that such a simple treatment enhances the distinction between in-distribution and out-of-distribution samples, enabling state-of-the-art OOD detection on ImageNet without noticeably degrading in-distribution accuracy. Alongside the paper we release two calls, for explanation and for validation, trusting in the collective power of the community to further validate and understand the discovery. Calls, video and code can be found at: https://andrijazz.github.io/ash
    Comment: Preprint. 22 pages (14 main + appendix), 7 figures
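    A minimal NumPy sketch of the general recipe (the paper defines several concrete variants; this one simply prunes the bottom 90% of activations and rescales the survivors to preserve the original activation sum, which is an assumption about the adjustment step, not the paper's exact formula):

```python
import numpy as np

def ash_prune_and_rescale(activation: np.ndarray, percentile: float = 90.0) -> np.ndarray:
    """Zero out activations below the given percentile, then rescale the rest."""
    a = activation.copy()
    threshold = np.percentile(a, percentile)   # e.g. keep only the top 10%
    s_before = a.sum()
    a[a < threshold] = 0.0
    s_after = a.sum()
    if s_after > 0:
        a *= s_before / s_after                # preserve total activation mass
    return a

# Applied post hoc at inference to a late-layer feature map, with no retraining
# and no statistics from training data:
features = np.random.rand(2048)                # stand-in for a penultimate-layer activation
shaped = ash_prune_and_rescale(features, percentile=90.0)
# 'shaped' then flows through the remaining layers; a score computed on the
# resulting logits is used to separate in- from out-of-distribution samples.
```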

    Natural Adversarial Objects

    Although state-of-the-art object detection methods have shown compelling performance, models are often not robust to adversarial attacks and out-of-distribution data. We introduce a new dataset, Natural Adversarial Objects (NAO), to evaluate the robustness of object detection models. NAO contains 7,934 images and 9,943 objects that are unmodified and representative of real-world scenarios, but cause state-of-the-art detection models to misclassify with high confidence. The mean average precision (mAP) of EfficientDet-D7 drops 74.5% when evaluated on NAO compared to the standard MSCOCO validation set. Moreover, by comparing a variety of object detection architectures, we find that better performance on the MSCOCO validation set does not necessarily translate to better performance on NAO, suggesting that robustness cannot be achieved simply by training a more accurate model. We further investigate why examples in NAO are difficult to detect and classify. Patch-shuffling experiments reveal that models are overly sensitive to local texture. Additionally, using integrated gradients and background replacement, we find that the detection model relies on pixel information within the bounding box and is insensitive to the background context when predicting class labels. NAO can be downloaded at https://drive.google.com/drive/folders/15P8sOWoJku6SSEiHLEts86ORfytGezi8
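    As an illustration of the patch-shuffling probe described above (a sketch under assumptions, not the authors' code): divide an image into a grid of patches, permute them, and check whether the detector's predictions survive, which they largely will if the model keys on local texture rather than global object structure.

```python
import numpy as np

def shuffle_patches(image: np.ndarray, grid: int = 4, rng=None) -> np.ndarray:
    """Split an HxWxC image into a grid x grid layout of patches and permute them."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[0] // grid, image.shape[1] // grid
    patches = [image[i*h:(i+1)*h, j*w:(j+1)*w]
               for i in range(grid) for j in range(grid)]
    order = rng.permutation(len(patches))
    rows = [np.concatenate([patches[order[i*grid + j]] for j in range(grid)], axis=1)
            for i in range(grid)]
    return np.concatenate(rows, axis=0)

img = np.zeros((256, 256, 3), dtype=np.uint8)   # placeholder image
shuffled = shuffle_patches(img, grid=4)
# If detector(shuffled) still produces the same confident (mis)classifications,
# the model is relying on texture within patches rather than object structure.
```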