    Robust Estimation Using Context-Aware Filtering

    This paper presents the context-aware filter, an estimation technique that incorporates context measurements in addition to regular continuous measurements. Context measurements provide binary information about the system’s context that is not directly encoded in the state; examples include a robot detecting a nearby building using image processing, or a medical device alarming that a vital sign has exceeded a predefined threshold. These measurements can only be received from certain states and can therefore be modeled as a function of the system’s current state. We focus on two classes of functions describing the probability of context detection given the current state; these functions capture a wide variety of detections that may occur in practice. We derive the corresponding context-aware filters: a Gaussian mixture filter and a closed-form filter whose posterior moments are derived in the paper. Finally, we evaluate the performance of both classes of functions through simulation of an unmanned ground vehicle.
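    As a rough illustration of the idea (a particle-filter approximation, not the paper's closed-form Gaussian mixture derivation), a binary context measurement can be folded into a Bayes update by reweighting each state hypothesis with the detection model; the sigmoid p_detect below is an assumed stand-in for the detection-probability function classes the paper studies.

```python
import numpy as np

def context_update(particles, weights, detected, p_detect):
    """Bayes update of particle weights from a binary context measurement.

    particles : (N, d) array of state hypotheses
    weights   : (N,) particle weights
    detected  : True if the context event (e.g., a building detection) fired
    p_detect  : function mapping a state to P(detection | state)
    """
    likelihood = np.array([p_detect(x) for x in particles])
    if not detected:
        likelihood = 1.0 - likelihood  # a non-detection is also informative
    weights = weights * likelihood
    return weights / weights.sum()

# Hypothetical detection model: detection probability decays with distance
# to a known landmark (illustrative only).
landmark = np.array([5.0, 5.0])
p_detect = lambda x: 1.0 / (1.0 + np.exp(np.linalg.norm(x - landmark) - 3.0))
```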

    Localization from semantic observations via the matrix permanent

    Most approaches to robot localization rely on low-level geometric features such as points, lines, and planes. In this paper, we use object recognition to obtain semantic information from the robot’s sensors and consider the task of localizing the robot within a prior map of landmarks annotated with semantic labels. Because object recognition algorithms miss detections and produce false alarms, correct data association between the detections and the landmarks on the map is central to the semantic localization problem. Instead of the traditional vector-based representation, we propose a sensor model that encodes the semantic observations via random finite sets and enables a unified treatment of missed detections, false alarms, and data association. Our second contribution is to reduce the problem of computing the likelihood of a set-valued observation to the problem of computing a matrix permanent. It is this crucial transformation that allows us to solve the semantic localization problem with a polynomial-time approximation to the set-based Bayes filter. Finally, we address the active semantic localization problem, in which the observer’s trajectory is planned to improve the accuracy and efficiency of the localization process. The performance of our approach is demonstrated in simulation and in real environments using deformable-part-model-based object detectors. Robust global localization from semantic observations is demonstrated for a mobile robot, for the Project Tango phone, and on the KITTI visual odometry dataset. Comparisons are made with traditional lidar-based geometric Monte Carlo localization.
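    The reduction of the set-valued observation likelihood to a matrix permanent is the abstract's central step. For small matrices the permanent can be computed exactly with Ryser's formula, sketched below; the paper itself relies on a polynomial-time approximation for larger problems, and the construction of the detection-landmark matrix from the sensor model is omitted here.

```python
from itertools import combinations

def permanent_ryser(A):
    """Exact permanent of an n-by-n matrix via Ryser's formula, O(2^n * n^2).

    Practical only for small n; the paper approximates the permanent in
    polynomial time instead.
    """
    n = len(A)
    total = 0.0
    for size in range(1, n + 1):
        sign = (-1) ** size
        for cols in combinations(range(n), size):
            prod = 1.0
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += sign * prod
    return (-1) ** n * total

# perm([[1, 2], [3, 4]]) = 1*4 + 2*3 = 10
assert abs(permanent_ryser([[1, 2], [3, 4]]) - 10.0) < 1e-9
```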

    Biologically Motivated Novel Localization Paradigm by High-Level Multiple Object Recognition in Panoramic Images

    This paper presents a novel paradigm for global localization motivated by the human visual system (HVS). The HVS actively uses object recognition results to determine self-position and viewing direction. The proposed localization paradigm consists of three parts: panoramic image acquisition, multiple object recognition, and grid-based localization. Information from multiple objects recognized in panoramic images is used in the localization part. High-level object information is useful not only for global localization but also for robot-object interactions. Metric global localization (position and viewing direction) is performed from the bearing information of recognized objects in just one panoramic image. The feasibility of the novel localization paradigm was validated experimentally.
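    A minimal sketch of the grid-based step, assuming each recognized object corresponds to a unique landmark with a known map position; the paper's actual scoring of position and viewing direction may differ.

```python
import numpy as np

def grid_localize(bearings, labels, landmarks, xs, ys, headings):
    """Score (x, y, heading) hypotheses against object bearings from one panorama.

    bearings  : observed bearings (radians, robot frame) of recognized objects
    labels    : object label for each bearing
    landmarks : dict mapping label -> known (x, y) map position
    xs, ys, headings : 1-D arrays defining the search grid
    Returns the hypothesis with the smallest total angular error.
    """
    def wrap(a):  # wrap an angle to (-pi, pi]
        return (a + np.pi) % (2 * np.pi) - np.pi

    best, best_err = None, np.inf
    for x in xs:
        for y in ys:
            for th in headings:
                err = 0.0
                for b, lbl in zip(bearings, labels):
                    lx, ly = landmarks[lbl]
                    predicted = np.arctan2(ly - y, lx - x) - th
                    err += abs(wrap(predicted - b))
                if err < best_err:
                    best, best_err = (x, y, th), err
    return best
```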

    Global localization by soft object recognition from 3D Partial Views

    Data Association for Semantic World Modeling from Partial Views

    Autonomous mobile-manipulation robots need to sense and interact with objects to accomplish high-level tasks such as preparing meals and searching for objects. To achieve such tasks, robots need semantic world models, defined as object-based representations of the world involving task-level attributes. In this work, we address the problem of estimating world models from semantic perception modules that provide noisy observations of attributes. Because attribute detections are sparse and ambiguous and are aggregated across different viewpoints, it is unclear which attribute measurements are produced by the same object, so data association issues are prevalent. We present novel clustering-based approaches to this problem, which are more efficient and require less severe approximations than existing tracking-based approaches. These approaches are applied to data containing object type-and-pose detections from multiple viewpoints and demonstrate comparable quality using a fraction of the computation time.

    Funding: National Science Foundation (U.S.) (NSF Grant No. 1117325); United States Office of Naval Research (ONR MURI Grant N00014-09-1-1051); United States Air Force Office of Scientific Research (AFOSR Grant FA2386-10-1-4135); Singapore Ministry of Education (grant to the Singapore-MIT International Design Center).
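    As a toy illustration of the clustering idea (not the paper's algorithm), same-type detections that fall within a distance threshold of an existing cluster can be greedily merged into one object hypothesis; the radius parameter is an assumption.

```python
import numpy as np

def cluster_detections(detections, radius=0.25):
    """Greedily cluster (type, position) detections aggregated across views.

    detections : list of (obj_type, position ndarray) pairs
    radius     : same-type detections within this distance are assumed to
                 come from the same physical object (illustrative threshold)
    Returns a list of clusters, each a list of member detections.
    """
    clusters = []  # entries: [obj_type, centroid, members]
    for obj_type, pos in detections:
        for cluster in clusters:
            ctype, centroid, members = cluster
            if ctype == obj_type and np.linalg.norm(pos - centroid) < radius:
                members.append((obj_type, pos))
                cluster[1] = np.mean([p for _, p in members], axis=0)
                break
        else:  # no existing cluster matched: start a new one
            clusters.append([obj_type, pos, [(obj_type, pos)]])
    return [members for _, _, members in clusters]
```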

    Robot localization using soft object detection

    In this paper, we give a double twist to the robot localization problem. We solve the problem for prior maps that are semantically annotated, perhaps even sketched by hand. Data association is achieved not through the detection of visual features but through the detection of the object classes used in the annotation of the prior maps. To avoid the pitfalls of general object recognition, we propose a new representation of the query images: a vector of detection scores for each object class. Given such soft object detections, we create hypotheses about pose and refine them through particle filtering. In contrast to small, confined office and kitchen spaces, our experiment takes place in a large, open urban rail station with multiple semantically ambiguous places. The success of our approach shows that the new representation is a robust way to exploit the plethora of existing prior maps for GPS-denied environments, while avoiding the data-association problems of matching point clouds or visual features.
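    The abstract does not specify the particle-filter likelihood, so the sketch below assumes a cosine-similarity weighting between the query image's detection-score vector and a hypothetical expected_scores map lookup; both choices are illustrative, not the authors' model.

```python
import numpy as np

def soft_detection_update(particles, weights, scores, expected_scores):
    """Particle weight update from a soft object-detection score vector.

    particles       : (N, 3) array of pose hypotheses (x, y, heading)
    weights         : (N,) particle weights
    scores          : (C,) detection score per object class for the query image
    expected_scores : assumed function mapping a pose to the (C,) score vector
                      predicted by the annotated prior map at that pose
    """
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    likelihood = np.array(
        [max(cosine(scores, expected_scores(p)), 1e-6) for p in particles]
    )
    weights = weights * likelihood
    return weights / weights.sum()
```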