30 research outputs found

    Appearance-based localization for mobile robots using digital zoom and visual compass

    This paper describes a localization system for mobile robots moving in dynamic indoor environments that uses probabilistic integration of visual appearance and odometry information. The approach is based on a novel image-matching algorithm for appearance-based place recognition that integrates digital zooming, to extend the area of application, and a visual compass. Ambiguous place-recognition information is resolved with multiple hypothesis tracking and a selection procedure inspired by Markov localization, enabling the system to deal with perceptual aliasing or the absence of reliable sensor data. The system has been implemented on a robot operating in an office scenario, and the robustness of the approach is demonstrated experimentally.
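The multiple-hypothesis tracking with a Markov-localization-style selection described above can be sketched as a discrete belief update over places: a motion prediction followed by an appearance-based measurement update. All place counts, transition probabilities, and match scores below are hypothetical illustration values, not the paper's actual model:

```python
import numpy as np

def markov_update(belief, transition, likelihood):
    """One step of discrete Markov localization: motion prediction
    followed by an appearance-based measurement update."""
    predicted = transition @ belief          # motion model spreads belief
    posterior = predicted * likelihood       # weight by image-match score
    return posterior / posterior.sum()       # renormalize

# Three places; odometry says the robot most likely moved to the next place.
belief = np.array([1.0, 0.0, 0.0])
transition = np.array([[0.1, 0.0, 0.9],      # column k: where place k leads
                       [0.9, 0.1, 0.0],
                       [0.0, 0.9, 0.1]])
# Perceptual aliasing: places 1 and 2 match the current image equally well.
likelihood = np.array([0.1, 0.45, 0.45])
belief = markov_update(belief, transition, likelihood)
```

Although the appearance scores alone cannot distinguish places 1 and 2, the odometry-driven prediction resolves the ambiguity in favor of place 1.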

    A distributed optimization framework for localization and formation control: applications to vision-based measurements

    Multiagent systems have been a major area of research for the last 15 years. This interest has been motivated by tasks that can be executed more rapidly in a collaborative manner or that are nearly impossible to carry out otherwise. To be effective, the agents need a notion of a common goal shared by the entire network (for instance, a desired formation) and individual control laws to realize that goal. The common goal is typically centralized, in the sense that it involves the state of all the agents at the same time. On the other hand, it is often desirable to have individual control laws that are distributed, in the sense that the desired action of an agent depends only on the measurements and states available at the node and at a small number of neighbors. This is an attractive quality because it implies an overall system that is modular and intrinsically more robust to communication delays and node failures.
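A minimal sketch of such a distributed control law, assuming a simple consensus-style formation update in which each agent moves using only its neighbors' states and the desired inter-agent offsets. The topology, gain, and target spacing are illustrative assumptions, not the paper's framework:

```python
import numpy as np

def formation_step(x, neighbors, offsets, gain=0.1):
    """One distributed update: agent i reads only its neighbors'
    positions and the desired relative offsets to them."""
    x_new = dict(x)
    for i, nbrs in neighbors.items():
        # drive each relative position x[j] - x[i] toward offsets[(i, j)]
        err = sum((x[j] - x[i]) - offsets[(i, j)] for j in nbrs)
        x_new[i] = x[i] + gain * err
    return x_new

# Three agents on a line; desired spacing of 1.0 between consecutive agents.
x = {0: np.array([0.0]), 1: np.array([0.3]), 2: np.array([2.5])}
neighbors = {0: [1], 1: [0, 2], 2: [1]}
offsets = {(0, 1): np.array([1.0]), (1, 0): np.array([-1.0]),
           (1, 2): np.array([1.0]), (2, 1): np.array([-1.0])}
for _ in range(200):
    x = formation_step(x, neighbors, offsets)
```

No agent ever sees the full network state, yet the relative spacings converge to the desired formation; this is the modularity and robustness argument made above.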

    Landmark-based Mapping, Localization, and Navigation via Dynamic Simulation

    To explore an unknown, unexplored environment, a robot first needs a map: with a local/global positioning system and mapping technology, the robot can build a map of the environment that can then be used for exploration. If no positioning system is available but a map of the environment exists, the robot must localize itself within that map to know its exact location at any instant. If neither a map nor a positioning system is available, the robot must map the environment and localize itself within it at the same time; this is the problem of Simultaneous Localization And Mapping (SLAM). SLAM consists of the simultaneous construction of a map of the environment and the estimation of the state (localization) of the robot moving within it. The SLAM community in robotics has made astonishing progress over the past 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to the robotics industry.
    In this thesis we study a novel approach to constructing a metric embedding of landmarks observed by a robot equipped with a noisy range sensor, navigating in an unknown environment. In every iteration of the algorithm, a graph with the landmarks and observation points as vertices is modeled as a spring-mass-damper system, and its dynamic simulation is performed to obtain the optimal lengths of the links in the graph. The dynamic simulation is run every time a new set of observations is obtained, with the result of the previous simulation used as its initial condition. This incrementally constructs an approximate metric representation of the landmarks and robot poses, in effect giving us a metric map of the landmarks and allowing us to localize the robot relative to it.
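The spring-mass-damper relaxation can be sketched as follows: each range measurement becomes a spring whose rest length is the measured distance, and a damped simulation relaxes the vertices into a consistent metric embedding. The toy graph, rest lengths, and simulation parameters are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def relax(positions, edges, rests, dt=0.05, damping=0.9, steps=500):
    """Damped spring-mass simulation: each measured range is a spring
    with that rest length; relaxation yields a metric embedding."""
    vel = {v: np.zeros(2) for v in positions}
    for _ in range(steps):
        force = {v: np.zeros(2) for v in positions}
        for (i, j), rest in zip(edges, rests):
            d = positions[j] - positions[i]
            dist = np.linalg.norm(d)
            f = (dist - rest) * d / dist   # Hooke's law along the edge
            force[i] += f
            force[j] -= f
        for v in positions:
            vel[v] = damping * vel[v] + dt * force[v]
            positions[v] = positions[v] + dt * vel[v]
    return positions

# Hypothetical toy graph: a robot pose "r" with noisy-but-consistent ranges
# to three landmarks, plus two inter-landmark range links.
pos = {"r": np.array([0.0, 0.0]), "a": np.array([1.2, 0.1]),
       "b": np.array([0.2, 1.3]), "c": np.array([1.1, 1.0])}
edges = [("r", "a"), ("r", "b"), ("r", "c"), ("a", "b"), ("b", "c")]
rests = [1.0, 1.0, 1.414, 1.414, 1.0]
pos = relax(pos, edges, rests)
```

After relaxation every link length matches its measured range, which is the sense in which the simulation recovers the "optimal lengths of the links"; rerunning it from the previous result when new observations arrive gives the incremental behavior described above.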

    Robust optimization of factor graphs by using condensed measurements

    Popular problems in robotics and computer vision, such as simultaneous localization and mapping (SLAM) and structure from motion (SfM), require solving a least-squares problem that can be effectively represented by a factor graph. The chance of finding the global minimum of such problems depends on both the initial guess and the non-linearity of the sensor models. In this paper we propose an approach to determine an approximation of the original problem that has a larger convergence basin. To this end, we employ a divide-and-conquer approach that exploits the structure of the factor graph. Our approach has been validated in real-world and simulated experiments and succeeds in finding the global minimum in situations where other state-of-the-art methods fail.
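The factor-graph least-squares formulation can be sketched with a toy example (this illustrates the generic problem setup, not the paper's condensed-measurements algorithm): a 1D pose graph with two odometry factors and one loop closure, where each factor contributes a residual (x_j - x_i) - z_ij. Because these residuals are linear, a single Gauss-Newton step reaches the least-squares optimum:

```python
import numpy as np

# Toy 1D pose graph: poses x0..x2, two odometry factors and a loop closure
# whose measurement (2.2) slightly disagrees with the odometry (1.0 + 1.0).
factors = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.2)]   # (i, j, measurement)
n = 3
x = np.zeros(n)                      # initial guess: all poses at the origin
J = np.zeros((len(factors) + 1, n))  # stacked Jacobian of all residuals
r = np.zeros(len(factors) + 1)
for k, (i, j, z) in enumerate(factors):
    J[k, i], J[k, j] = -1.0, 1.0     # d/dx of residual (x_j - x_i) - z
    r[k] = (x[j] - x[i]) - z
J[-1, 0] = 1.0                       # prior factor pins x0 = 0 (gauge freedom)
r[-1] = x[0]
dx, *_ = np.linalg.lstsq(J, -r, rcond=None)   # one Gauss-Newton step
x = x + dx
```

The conflict between the loop closure and the odometry is spread evenly across the factors, as expected of a least-squares solution. With non-linear sensor models this step must be iterated, and convergence to the global minimum depends on the initial guess, which is exactly the difficulty the paper's larger convergence basin addresses.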