
    SIFT Saliency Analysis for Matching Repetitive Structures

    The ambiguity resulting from repetitive structures in a scene presents a major challenge for image matching. This paper proposes a matching method based on SIFT feature saliency analysis to achieve robust feature matching between images with repetitive structures. The feature saliency within the reference image is estimated by analyzing feature stability and dissimilarity via Monte-Carlo simulation. In the proposed method, feature matching is performed only within the region of interest to reduce the ambiguity caused by repetitive structures. The experimental results demonstrate the efficiency and robustness of the proposed method, especially in the presence of repetitive structures.
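
    The abstract does not give implementation details, but the saliency idea (stability under Monte-Carlo perturbations combined with descriptor dissimilarity within the reference image) can be sketched roughly as below. This is an illustrative reconstruction only; it assumes opencv-python >= 4.4 for cv2.SIFT_create, and names such as feature_saliency, n_trials, and noise_sigma are made up for the example, not taken from the paper.

```python
# Hedged sketch: Monte-Carlo saliency scoring of SIFT features in a reference image.
import cv2
import numpy as np

def feature_saliency(gray, n_trials=20, noise_sigma=5.0):
    sift = cv2.SIFT_create()
    kps, descs = sift.detectAndCompute(gray, None)
    if descs is None:
        return [], np.array([])

    # Dissimilarity: mean descriptor distance to every other feature in the image;
    # features belonging to repetitive structures score low.
    d = np.linalg.norm(descs[:, None, :] - descs[None, :, :], axis=2)
    np.fill_diagonal(d, np.nan)
    dissimilarity = np.nanmean(d, axis=1)

    # Stability: fraction of noisy trials in which each keypoint is re-detected nearby.
    pts = np.array([kp.pt for kp in kps])
    hits = np.zeros(len(kps))
    for _ in range(n_trials):
        noisy = np.clip(gray.astype(np.float32)
                        + np.random.randn(*gray.shape) * noise_sigma, 0, 255).astype(np.uint8)
        kps_n = sift.detect(noisy, None)
        if not kps_n:
            continue
        pts_n = np.array([kp.pt for kp in kps_n])
        dist = np.linalg.norm(pts[:, None, :] - pts_n[None, :, :], axis=2)
        hits += (dist.min(axis=1) < 2.0)
    stability = hits / n_trials

    # Saliency combines both cues; matching would then be restricted to the
    # most salient features inside the region of interest.
    return kps, stability * dissimilarity
```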

    Two-Stage Focused Inference for Resource-Constrained Collision-Free Navigation

    Long-term operations of resource-constrained robots typically require hard decisions to be made about which data to process and/or retain. The question then arises of how to choose which data is most useful to keep to achieve the task at hand. As spatial scale grows, the size of the map grows without bound, and as temporal scale grows, the number of measurements grows without bound. In this work, we present the first known approach to tackle both of these issues. The approach has two stages. First, a subset of the variables (focused variables) is selected that is most useful for a particular task. Second, a task-agnostic and principled method (focused inference) is proposed to select a subset of the measurements that maximizes the information over the focused variables. The approach is then applied to the specific task of robot navigation in an obstacle-laden environment. A landmark selection method is proposed to minimize the probability of collision, and the set of measurements that best localizes those landmarks is then selected. It is shown that the two-stage approach outperforms both selecting only measurements and selecting only landmarks in terms of minimizing the probability of collision. The performance improvement is validated through detailed simulations and real experiments on a Pioneer robot.
    Funding: United States Army Research Office, Multidisciplinary University Research Initiative (Grant W911NF-11-1-0391); United States Office of Naval Research (Grant N00014-11-1-0688); National Science Foundation (Award IIS-1318392).
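
    The second stage (selecting measurements that maximize information over the focused variables) can be illustrated with a minimal greedy sketch under a linear-Gaussian assumption. This is not the paper's algorithm, only a plausible instance of the stated idea; the names Lambda0, H_list, R_list, focused_idx, and budget are illustrative.

```python
# Hedged sketch: greedy measurement selection that most reduces the entropy
# (log-det of the marginal covariance) of a focused subset of variables.
import numpy as np

def focused_entropy(Lambda, focused_idx):
    # Marginal covariance of the focused variables is a sub-block of Lambda^-1.
    Sigma = np.linalg.inv(Lambda)
    sub = Sigma[np.ix_(focused_idx, focused_idx)]
    return 0.5 * np.linalg.slogdet(2 * np.pi * np.e * sub)[1]

def greedy_focused_selection(Lambda0, H_list, R_list, focused_idx, budget):
    Lambda = Lambda0.copy()
    chosen = []
    for _ in range(budget):
        base = focused_entropy(Lambda, focused_idx)
        best, best_gain = None, -np.inf
        for j, (H, R) in enumerate(zip(H_list, R_list)):
            if j in chosen:
                continue
            cand = Lambda + H.T @ np.linalg.inv(R) @ H   # information added by measurement j
            gain = base - focused_entropy(cand, focused_idx)
            if gain > best_gain:
                best, best_gain = j, gain
        if best is None:
            break
        chosen.append(best)
        Lambda = Lambda + H_list[best].T @ np.linalg.inv(R_list[best]) @ H_list[best]
    return chosen
```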

    Attention and Anticipation in Fast Visual-Inertial Navigation

    We study a Visual-Inertial Navigation (VIN) problem in which a robot needs to estimate its state using an on-board camera and an inertial sensor, without any prior knowledge of the external environment. We consider the case in which the robot can allocate limited resources to VIN, due to tight computational constraints. Therefore, we answer the following question: under limited resources, what are the most relevant visual cues to maximize the performance of visual-inertial navigation? Our approach has four key ingredients. First, it is task-driven, in that the selection of the visual cues is guided by a metric quantifying the VIN performance. Second, it exploits the notion of anticipation, since it uses a simplified model for forward-simulation of robot dynamics, predicting the utility of a set of visual cues over a future time horizon. Third, it is efficient and easy to implement, since it leads to a greedy algorithm for the selection of the most relevant visual cues. Fourth, it provides formal performance guarantees: we leverage submodularity to prove that the greedy selection cannot be far from the optimal (combinatorial) selection. Simulations and real experiments on agile drones show that our approach ensures state-of-the-art VIN performance while maintaining a lean processing time. In easy scenarios, our approach outperforms appearance-based feature selection in terms of localization errors. In the most challenging scenarios, it enables accurate visual-inertial navigation while appearance-based feature selection fails to track the robot's motion during aggressive maneuvers.
    Comment: 20 pages, 7 figures, 2 tables
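
    The greedy, anticipation-based selection described above can be sketched as follows: each candidate visual cue contributes an information matrix accumulated over a forward-simulated horizon, and cues are picked greedily to maximize a log-det metric, for which submodularity gives the usual (1 - 1/e) guarantee. The function names, the choice of log-det, and the data layout here are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: greedy selection of visual cues by predicted information gain.
import numpy as np

def logdet_info(selected, info_mats, prior):
    # Total predicted information = prior + contributions of selected cues
    # (each accumulated over the forward-simulated horizon beforehand).
    M = prior.copy()
    for j in selected:
        M += info_mats[j]
    return np.linalg.slogdet(M)[1]

def greedy_feature_selection(info_mats, prior, budget):
    selected = []
    for _ in range(budget):
        base = logdet_info(selected, info_mats, prior)
        gains = [(logdet_info(selected + [j], info_mats, prior) - base, j)
                 for j in range(len(info_mats)) if j not in selected]
        if not gains:
            break
        _, best_j = max(gains)
        selected.append(best_j)
    # Submodularity of the log-det objective implies the greedy value is
    # within (1 - 1/e) of the optimal combinatorial selection.
    return selected
```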

    A Stable Nonlinear Switched System for Landmark-aided Motion Planning

    To guarantee navigation accuracy, robotic applications utilize landmarks. This paper proposes a novel nonlinear switched system for a fundamental motion planning problem in autonomous mobile robot navigation: the generation of continuous collision-free paths to a goal configuration via numerous landmarks (waypoints) in a cluttered environment. The proposed system leverages the Lyapunov-based control scheme (LbCS) and constructs Lyapunov-like functions for the system’s subsystems. These functions guide a planar point-mass object, representing an autonomous robotic agent, towards its goal by utilizing artificial landmarks. Extracting a set of nonlinear, time-invariant, continuous, and stabilizing switched velocity controllers from these Lyapunov-like functions, the system invokes the controllers based on a switching rule, enabling hierarchical landmark navigation in complex environments. Using Branicky’s well-known stability criteria for switched systems based on multiple Lyapunov functions, the stability of the proposed system is established. A new method to extract action landmarks from multiple landmarks is also introduced. The control laws are then used to control the motion of a nonholonomic car-like vehicle governed by its kinematic equations. Numerical examples with simulations illustrate the effectiveness of the Lyapunov-based control laws. The proposed control laws can automate various processes where the transportation of goods or workers between different sections is required.
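
    A minimal sketch of the switched, landmark-by-landmark navigation idea is given below: a point mass follows the negative gradient of a Lyapunov-like function (attraction to the active landmark plus obstacle repulsion), and a switching rule activates the next landmark once the current one is reached. The potential terms, gains, and switching radius are illustrative assumptions, not the paper's control laws.

```python
# Hedged sketch: switched velocity controller for hierarchical landmark navigation.
import numpy as np

def velocity(x, landmark, obstacles, k_att=1.0, k_rep=0.5):
    v = -k_att * (x - landmark)                      # attraction to the active landmark
    for c, r in obstacles:                           # obstacles as (center, radius) pairs
        d = x - np.asarray(c, float)
        dist = np.linalg.norm(d)
        if dist < 3 * r:                             # repel only when close to the obstacle
            v += k_rep * d / (dist**3 + 1e-9)
    return v

def navigate(x0, landmarks, obstacles, dt=0.01, tol=0.05, max_steps=20000):
    x = np.array(x0, float)
    path, active = [x.copy()], 0
    for _ in range(max_steps):
        if np.linalg.norm(x - landmarks[active]) < tol:
            if active == len(landmarks) - 1:
                break                                # goal configuration reached
            active += 1                              # switching rule: activate next subsystem
        x = x + dt * velocity(x, np.asarray(landmarks[active], float), obstacles)
        path.append(x.copy())
    return np.array(path)
```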

    Theory, Design, and Implementation of Landmark Promotion Cooperative Simultaneous Localization and Mapping

    Simultaneous Localization and Mapping (SLAM) is a challenging problem in practice, and the use of multiple robots and inexpensive sensors poses even more demands on the designer. Cooperative SLAM poses specific challenges in the areas of computational efficiency, software/network performance, and robustness to errors. New methods in image processing, recursive filtering, and SLAM have been developed to implement practical algorithms for cooperative SLAM on a set of inexpensive robots. The Consolidated Unscented Mixed Recursive Filter (CUMRF) is designed to handle non-linear systems with non-Gaussian noise. This is accomplished using the Unscented Transform combined with Gaussian Mixture Models. The Robust Kalman Filter is an extension of the Kalman Filter algorithm that improves the ability to remove erroneous observations using Principal Component Analysis (PCA) and the X84 outlier rejection rule. Forgetful SLAM is a local SLAM technique that runs in nearly constant time relative to the number of visible landmarks and improves poorly performing sensors through sensor fusion and outlier rejection. Forgetful SLAM correlates all measured observations but stops the state from growing over time. Hierarchical Active Ripple SLAM (HAR-SLAM) is a new SLAM architecture that breaks the traditional state space of SLAM into a chain of smaller state spaces, allowing multiple robots, multiple sensors, and multiple updates to occur in linear time with linear storage with respect to the number of robots, landmarks, and robot poses. This dissertation presents explicit methods for closing the loop, joining multiple robots, and active updates. Landmark Promotion SLAM is a hierarchy of new SLAM methods using the Robust Kalman Filter, Forgetful SLAM, and HAR-SLAM. Practical aspects of SLAM are a focus of this dissertation. LK-SURF is a new image processing technique that combines Lucas-Kanade feature tracking with Speeded-Up Robust Features to perform spatial and temporal tracking. Typical stereo correspondence techniques fail at providing descriptors for features or fail at temporal tracking. Several calibration and modeling techniques are also covered, including calibrating stereo cameras, aligning stereo cameras to an inertial system, and making neural-net system models. These methods are important for improving the quality of the data and images acquired for the SLAM process.
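
    One concrete component mentioned above, the X84 outlier rejection rule, is a standard robust-statistics test: residuals farther than k median-absolute-deviations (commonly k = 5.2, roughly 3.5 sigma for Gaussian noise) from the median are rejected. The sketch below illustrates that rule only; the threshold and how it feeds the Robust Kalman Filter are assumptions, not taken from the dissertation's code.

```python
# Hedged sketch of the X84 outlier rejection rule (median / MAD based gating).
import numpy as np

def x84_inliers(residuals, k=5.2):
    residuals = np.asarray(residuals, dtype=float)
    med = np.median(residuals)
    mad = np.median(np.abs(residuals - med))
    if mad == 0.0:
        return residuals == med          # degenerate case: all residuals identical
    return np.abs(residuals - med) <= k * mad

# Example: gate measurement innovations before a (robust) Kalman filter update.
innovations = np.array([0.1, -0.05, 0.05, 4.8, 0.12])
print(x84_inliers(innovations))          # -> [ True  True  True False  True]
```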

    Vehicle localization with enhanced robustness for urban automated driving


    Human Guidance Behavior Decomposition and Modeling

    University of Minnesota Ph.D. dissertation, December 2017. Major: Aerospace Engineering. Advisor: Berenice Mettler. 1 computer file (PDF); x, 128 pages.
    Trained humans are capable of high-performance, adaptable, and robust first-person dynamic motion guidance behavior. This behavior is exhibited in a wide variety of activities such as driving, piloting aircraft, skiing, biking, and many others. Human performance in such activities far exceeds the current capability of autonomous systems in terms of adaptability to new tasks, real-time motion planning, robustness, and trading safety for performance. The present work investigates the structure of human dynamic motion guidance that enables these performance qualities. This work uses a first-person experimental framework that presents a driving task to the subject, measuring control inputs, vehicle motion, and operator visual gaze movement. The resulting data are decomposed into subspace segment clusters that form primitive elements of action-perception interactive behavior. Subspace clusters are defined by both agent-environment system dynamic constraints and operator control strategies. A key contribution of this work is to define transitions between subspace cluster segments, or subgoals, as points where the set of active constraints, either system- or operator-defined, changes. This definition provides necessary conditions to determine transition points for a given task-environment scenario that allow a solution trajectory to be planned from known behavior elements. In addition, human gaze behavior during this task contains predictive behavior elements, indicating that the identified control modes are internally modeled. Based on these ideas, a generative, autonomous guidance framework is introduced that efficiently generates optimal dynamic motion behavior in new tasks. The new subgoal planning algorithm is shown to generate solutions to certain tasks more quickly than existing approaches currently used in robotics.
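
    The subgoal definition above (transition points where the set of active constraints changes) can be illustrated with a small segmentation sketch: a recorded trajectory is split at the samples where simple active-constraint indicators change. The specific constraints checked here (control saturation, obstacle proximity) and all names are illustrative assumptions, not the dissertation's formulation.

```python
# Hedged sketch: segment a recorded trajectory at changes of the active-constraint set.
import numpy as np

def active_constraints(state, control, u_max=1.0, clearance=0.5):
    # Illustrative indicators only: control at its saturation limit,
    # and position within a clearance radius of an obstacle at the origin.
    return (
        abs(control) >= u_max - 1e-6,
        np.linalg.norm(np.asarray(state, float)[:2]) <= clearance,
    )

def segment_by_constraints(states, controls, **kwargs):
    """Return subgoal indices: samples where the active-constraint set changes."""
    subgoals = []
    prev = active_constraints(states[0], controls[0], **kwargs)
    for t in range(1, len(controls)):
        cur = active_constraints(states[t], controls[t], **kwargs)
        if cur != prev:
            subgoals.append(t)
        prev = cur
    return subgoals
```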