
    A Global Model Approach to Robust Few-Shot SAR Automatic Target Recognition

    In real-world scenarios, it may not always be possible to collect hundreds of labeled samples per class for training deep learning-based SAR Automatic Target Recognition (ATR) models. This work specifically tackles the few-shot SAR ATR problem, where only a handful of labeled samples may be available to support the task of interest. Our approach is composed of two stages. In the first stage, a global representation model is trained via self-supervised learning on a large pool of diverse and unlabeled SAR data. In the second stage, the global model is used as a fixed feature extractor and a classifier is trained to partition the feature space given the few-shot support samples, while simultaneously being calibrated to detect anomalous inputs. Unlike competing approaches which require a pristine labeled dataset for pretraining via meta-learning, our approach learns highly transferable features from unlabeled data that have little-to-no relation to the downstream task. We evaluate our method in standard and extended MSTAR operating conditions and find it to achieve high accuracy and robust out-of-distribution detection in many different few-shot settings. Our results are particularly significant because they show the merit of a global model approach to SAR ATR, which makes minimal assumptions and provides many axes of extensibility.
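    The two-stage recipe above lends itself to a compact sketch: freeze the self-supervised encoder, fit a small classifier head on the few-shot support features, and reject low-confidence inputs as anomalous. The sketch below assumes a pretrained PyTorch encoder; the linear head, the max-softmax rejection rule, and all hyperparameters are illustrative placeholders rather than the paper's exact design.

    # Minimal sketch of the two-stage approach described above (illustrative only):
    # stage 1 provides a frozen self-supervised encoder; stage 2 fits a small head
    # on the few-shot support set and rejects low-confidence inputs as anomalous.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def extract_features(encoder: nn.Module, images: torch.Tensor) -> torch.Tensor:
        """Use the pretrained global model as a fixed feature extractor."""
        encoder.eval()
        with torch.no_grad():
            return encoder(images)  # (N, D) feature vectors

    def fit_few_shot_head(features: torch.Tensor, labels: torch.Tensor,
                          num_classes: int, epochs: int = 200, lr: float = 1e-2) -> nn.Linear:
        """Train a lightweight classifier on the few-shot support features."""
        head = nn.Linear(features.shape[1], num_classes)
        opt = torch.optim.SGD(head.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = F.cross_entropy(head(features), labels)
            loss.backward()
            opt.step()
        return head

    def predict_with_rejection(head: nn.Linear, features: torch.Tensor,
                               threshold: float = 0.5) -> torch.Tensor:
        """Mark inputs whose max softmax probability falls below a threshold as anomalous (-1)."""
        probs = F.softmax(head(features), dim=1)
        conf, preds = probs.max(dim=1)
        preds[conf < threshold] = -1
        return preds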

    Self-Trained Proposal Networks for the Open World

    Current state-of-the-art object proposal networks are trained with a closed-world assumption, meaning they learn to detect only objects of the training classes. These models fail to provide high recall in open-world environments where important novel objects may be encountered. While a handful of recent works attempt to tackle this problem, they fail to consider that the optimal behavior of a proposal network can vary significantly depending on the data and application. Our goal is to provide a flexible proposal solution that can be easily tuned to suit a variety of open-world settings. To this end, we design a Self-Trained Proposal Network (STPN) that leverages an adjustable hybrid architecture, a novel self-training procedure, and dynamic loss components to optimize the tradeoff between known and unknown object detection performance. To thoroughly evaluate our method, we devise several new challenges that invoke varying degrees of label bias by altering known-class diversity and label count. We find that in every task, STPN easily outperforms existing baselines (e.g., RPN, OLN). Our method is also highly data-efficient, surpassing baseline recall with a fraction of the labeled data.
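    The abstract does not spell out the dynamic loss, but the stated tradeoff between known and unknown object detection can be illustrated with a single tunable weight that balances a known-class classification term against a class-agnostic objectness term. Everything in the sketch below (the loss terms, the alpha weight, the tensor shapes) is an assumption for illustration, not the STPN objective itself.

    # Illustrative sketch of a tunable known-vs-unknown tradeoff in a proposal loss
    # (an assumption inspired by the description above, not the actual STPN objective).
    import torch
    import torch.nn.functional as F

    def proposal_loss(cls_logits: torch.Tensor, cls_targets: torch.Tensor,
                      objectness: torch.Tensor, iou_targets: torch.Tensor,
                      alpha: float = 0.5) -> torch.Tensor:
        """
        cls_logits / cls_targets : known-class foreground scores and float {0, 1} targets.
        objectness / iou_targets : class-agnostic scores regressed to localization quality,
                                   which generalize better to unknown objects.
        alpha                    : larger values favor known-class precision, smaller values
                                   favor class-agnostic (unknown) recall.
        """
        known_loss = F.binary_cross_entropy_with_logits(cls_logits, cls_targets)
        agnostic_loss = F.l1_loss(objectness.sigmoid(), iou_targets)
        return alpha * known_loss + (1.0 - alpha) * agnostic_loss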

    Establishing baselines and introducing TernaryMixOE for fine-grained out-of-distribution detection

    Machine learning models deployed in the open world may encounter observations that they were not trained to recognize, and they risk misclassifying such observations with high confidence. It is therefore essential that these models can distinguish what is in-distribution (ID) from what is out-of-distribution (OOD) to avoid such misclassifications. In recent years, huge strides have been made in creating models that are robust to this distinction. As a result, the current state of the art has reached near-perfect performance on relatively coarse-grained OOD detection tasks, such as distinguishing horses from trucks, while struggling with finer-grained tasks, such as differentiating models of commercial aircraft. In this paper, we describe a new theoretical framework for understanding fine- and coarse-grained OOD detection, re-conceptualize fine-grained classification as a three-part problem, and propose a new baseline task for OOD models on two fine-grained hierarchical datasets, two new evaluation methods to differentiate fine- and coarse-grained OOD performance, and a new loss function for models in this task.

    Mixture Outlier Exposure for Out-of-Distribution Detection in Fine-grained Settings

    Enabling out-of-distribution (OOD) detection for DNNs is critical for their safe and reliable operation in the open world. Despite recent progress, current works often consider a coarse level of granularity in the OOD problem, which fails to approximate many real-world fine-grained tasks where fine distinctions between the in-distribution (ID) data and the OOD data are expected (e.g., identifying novel bird species for a bird classification system in the wild). In this work, we start by carefully constructing four large-scale fine-grained test environments in which existing methods are shown to have difficulties. We find that current methods, including those that expose the model to a large and diverse set of outliers during DNN training, have poor coverage over the broad region where fine-grained OOD samples lie. We then propose Mixture Outlier Exposure (MixOE), which effectively expands the covered OOD region by mixing ID data and training outliers, and regularizes the model behaviour by linearly decaying the prediction confidence as the input transitions from ID to OOD. Extensive experiments and analyses demonstrate the effectiveness of MixOE for improving OOD detection in fine-grained settings.
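    The mixing and confidence-decay idea can be sketched in a few lines: interpolate an ID image with a training outlier, and interpolate the target between the one-hot label and the uniform distribution with the same coefficient, so confidence decays linearly as the input moves toward OOD. The beta-distribution sampling, pixel-space mixing, and the helper names below are illustrative assumptions; the published method may mix at other levels (e.g., CutMix-style) and weight the terms differently.

    # Minimal sketch of mixing ID data with training outliers and linearly decaying the
    # target confidence with the mixing weight. Beta parameters, pixel-space mixing, and
    # the helper names are illustrative assumptions rather than the published recipe.
    import torch
    import torch.nn.functional as F

    def mix_id_with_outliers(id_images: torch.Tensor, id_labels: torch.Tensor,
                             outlier_images: torch.Tensor, num_classes: int,
                             alpha: float = 1.0):
        lam = torch.distributions.Beta(alpha, alpha).sample().item()
        mixed = lam * id_images + (1.0 - lam) * outlier_images     # pixel-space mixing
        onehot = F.one_hot(id_labels, num_classes).float()
        uniform = torch.full_like(onehot, 1.0 / num_classes)
        targets = lam * onehot + (1.0 - lam) * uniform             # linearly decayed confidence
        return mixed, targets

    def soft_cross_entropy(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        """Cross-entropy against the soft targets produced by the mixing above."""
        return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()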