    ARTMAP Neural Networks for Information Fusion and Data Mining: Map Production and Target Recognition Methodologies

    The Sensor Exploitation Group of MIT Lincoln Laboratory incorporated an early version of the ARTMAP neural network as the recognition engine of a hierarchical system for fusion and data mining of registered geospatial images. The Lincoln Lab system has been successfully fielded, but is limited to target / non-target identifications and does not produce whole maps. Procedures defined here extend these capabilities by means of a mapping method that learns to identify and distribute arbitrarily many target classes. This new spatial data mining system is designed particularly to cope with the highly skewed class distributions of typical mapping problems. Specification of canonical algorithms and a benchmark testbed has enabled the evaluation of candidate recognition networks as well as pre- and post-processing and feature selection options. The resulting mapping methodology sets a standard for a variety of spatial data mining tasks. In particular, training pixels are drawn from a region that is spatially distinct from the mapped region, which could feature an output class mix that is substantially different from that of the training set. The system recognition component, default ARTMAP, with its fully specified set of canonical parameter values, has become the a priori system of choice among this family of neural networks for a wide variety of applications.
    Funding: Air Force Office of Scientific Research (F49620-01-1-0397, F49620-01-1-0423); Office of Naval Research (N00014-01-1-0624).
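
    The spatially disjoint train/map protocol described above can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the Lincoln Lab system: the pixel features and labels are synthetic placeholders, a k-nearest-neighbour classifier stands in for the default ARTMAP recognition engine, and accuracy is reported per class so that the skewed class distribution of the mapped region does not hide errors on rare classes.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def per_class_accuracy(y_true, y_pred):
    """Accuracy computed separately for each class, so rare map classes
    are not swamped by the majority class in a skewed distribution."""
    return {int(c): float(np.mean(y_pred[y_true == c] == c))
            for c in np.unique(y_true)}

# Hypothetical data: feature vectors and labels for pixels drawn from two
# spatially distinct regions; the mapped region may have a different class mix.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 8)); y_train = rng.integers(0, 4, size=1000)
X_map   = rng.normal(size=(5000, 8)); y_map   = rng.integers(0, 4, size=5000)

# Placeholder recognition engine standing in for default ARTMAP.
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

y_hat = clf.predict(X_map)                 # label every pixel -> whole map
print(per_class_accuracy(y_map, y_hat))    # evaluate per class, not overall
```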

    A review of multi-instance learning assumptions

    Multi-instance (MI) learning is a variant of inductive machine learning, where each learning example contains a bag of instances instead of a single feature vector. The term commonly refers to the supervised setting, where each bag is associated with a label. This type of representation is a natural fit for a number of real-world learning scenarios, including drug activity prediction and image classification, and hence many MI learning algorithms have been proposed. Any MI learning method must relate instances to bag-level class labels, but many types of relationships between instances and class labels are possible. All early work in MI learning assumes a specific MI concept class known to be appropriate for a drug activity prediction domain; however, this ‘standard MI assumption’ is not guaranteed to hold in other domains. Much of the recent work in MI learning has concentrated on a relaxed view of the MI problem, where the standard MI assumption is dropped and alternative assumptions are considered instead. However, it is often not clearly stated which particular assumption is used and how it relates to other assumptions that have been proposed. In this paper, we aim to clarify the use of alternative MI assumptions by reviewing the work done in this area.
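
    For concreteness, the ‘standard MI assumption’ referred to above is the classic drug-activity formulation: a bag is positive if and only if at least one of its instances is positive. The sketch below contrasts it with one example of a relaxed, collective assumption; the function names and the threshold rule are illustrative, not taken from the paper.

```python
from typing import Callable, Iterable, Sequence

def bag_label_standard_mi(instance_positive: Callable[[object], bool],
                          bag: Iterable[object]) -> bool:
    """Standard MI assumption: a bag is positive iff at least one of its
    instances is positive (the classic drug-activity formulation)."""
    return any(instance_positive(x) for x in bag)

def bag_label_threshold(instance_positive: Callable[[object], bool],
                        bag: Sequence[object], theta: float = 0.5) -> bool:
    """One illustrative relaxed (collective) assumption: the bag is positive
    when the fraction of positive instances reaches a threshold theta."""
    votes = [instance_positive(x) for x in bag]
    return sum(votes) / len(votes) >= theta

# Toy usage: instances are numbers, an instance is "positive" if it exceeds 3.
bag = [1, 2, 5]
print(bag_label_standard_mi(lambda x: x > 3, bag))   # True (one positive instance)
print(bag_label_threshold(lambda x: x > 3, bag))     # False (only 1/3 positive)
```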

    On the Complexity of ATL and ATL* Module Checking

    Module checking was introduced in the late 1990s to verify open systems, i.e., systems whose behavior depends on continuous interaction with the environment. Classically, module checking has been investigated with respect to specifications given as CTL and CTL* formulas. Recently, it has been shown that CTL (resp., CTL*) module checking offers a distinctly different perspective from the better-known problem of ATL (resp., ATL*) model checking. In particular, ATL (resp., ATL*) module checking strictly enhances the expressiveness of both CTL (resp., CTL*) module checking and ATL (resp., ATL*) model checking. In this paper, we provide asymptotically optimal bounds on the computational cost of module checking against ATL and ATL*, whose upper bounds are based on an automata-theoretic approach. We show that module checking for ATL is EXPTIME-complete, which is the same complexity as module checking against CTL. On the other hand, ATL* module checking turns out to be 3EXPTIME-complete, and hence exponentially harder than CTL* module checking.
    Comment: In Proceedings GandALF 2017, arXiv:1709.0176
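
    The complexity claims above can be summarised in a small table; the CTL figure is stated directly in the abstract, and the CTL* figure is implied by the remark that ATL* module checking is exponentially harder.

```latex
\documentclass{article}
\begin{document}
% Module-checking complexity bounds as stated or implied in the abstract.
\begin{center}
\begin{tabular}{l l}
\hline
Logic    & Module checking \\
\hline
CTL      & EXPTIME-complete \\
CTL$^*$  & 2EXPTIME-complete \\
ATL      & EXPTIME-complete \\
ATL$^*$  & 3EXPTIME-complete \\
\hline
\end{tabular}
\end{center}
\end{document}
```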

    Zero Shot Recognition with Unreliable Attributes

    In principle, zero-shot learning makes it possible to train a recognition model simply by specifying the category's attributes. For example, with classifiers for generic attributes like \emph{striped} and \emph{four-legged}, one can construct a classifier for the zebra category by enumerating which properties it possesses---even without providing zebra training images. In practice, however, the standard zero-shot paradigm suffers because attribute predictions in novel images are hard to get right. We propose a novel random forest approach to train zero-shot models that explicitly accounts for the unreliability of attribute predictions. By leveraging statistics about each attribute's error tendencies, our method obtains more robust discriminative models for the unseen classes. We further devise extensions to handle the few-shot scenario and unreliable attribute descriptions. On three datasets, we demonstrate the benefit for visual category learning with zero or few training examples, a critical domain for rare categories or categories defined on the fly.
    Comment: NIPS 201
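
    The attribute-signature idea in the opening sentences can be made concrete with a small sketch. This is not the paper's random-forest method: it assumes a hand-made binary attribute matrix for a few unseen classes, takes the (noisy) attribute probabilities produced by pre-trained attribute classifiers as input, and down-weights attributes by an assumed reliability estimate when scoring each class.

```python
import numpy as np

# Hypothetical setup: 3 unseen classes described by 4 binary attributes
# (rows = classes, columns = attributes such as "striped", "four-legged", ...).
class_names = ["zebra", "horse", "tiger"]
signatures = np.array([
    [1, 1, 0, 0],   # zebra:  striped, four-legged
    [0, 1, 0, 0],   # horse:  four-legged
    [1, 1, 1, 0],   # tiger:  striped, four-legged, carnivore
])

# Assumed per-attribute reliability in [0, 1], e.g. validation accuracy of each
# attribute classifier; unreliable attributes contribute less to the score.
reliability = np.array([0.9, 0.95, 0.6, 0.5])

def zero_shot_predict(attr_probs: np.ndarray) -> str:
    """attr_probs: predicted probability that each attribute is present in a
    test image (the output of pre-trained, possibly unreliable classifiers)."""
    agreement = 1.0 - np.abs(signatures - attr_probs)   # match per attribute
    scores = (agreement * reliability).sum(axis=1)      # weighted score per class
    return class_names[int(np.argmax(scores))]

print(zero_shot_predict(np.array([0.8, 0.9, 0.2, 0.1])))  # likely "zebra"
```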