4,516 research outputs found

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper serves simultaneously as a position paper and as a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
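    The "de-facto standard formulation" referenced above is maximum a posteriori (MAP) estimation over a factor graph. A minimal sketch in LaTeX, with the notation chosen here for illustration (X is the trajectory and map, Z = {z_k} the measurements, h_k the measurement models, Omega_k the information matrices):

        X^{\star} = \operatorname*{argmax}_{X} \; p(X \mid Z)
                  = \operatorname*{argmax}_{X} \; p(X) \prod_{k} p(z_k \mid X_k)
                  = \operatorname*{argmin}_{X} \; \sum_{k} \lVert h_k(X_k) - z_k \rVert^{2}_{\Omega_k}

    The last step assumes Gaussian noise models, reducing MAP estimation to a nonlinear least-squares problem over the factor graph.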

    Stabilization Control of the Differential Mobile Robot Using Lyapunov Function and Extended Kalman Filter

    This paper presents the design of a control model to navigate a differential mobile robot to the desired destination from an arbitrary initial pose. The designed model is divided into two stages: state estimation and stabilization control. In the state estimation stage, an extended Kalman filter is employed to optimally combine the information from the system dynamics and the measurements. Two Lyapunov functions are constructed that allow a hybrid feedback control law to execute the robot movements. The asymptotic stability and robustness of the closed-loop system are assured. Simulations and experiments are carried out to validate the effectiveness and applicability of the proposed approach. Comment: arXiv admin note: text overlap with arXiv:1611.07112, arXiv:1611.0711
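    To make the state-estimation stage concrete, below is a minimal sketch of one extended Kalman filter predict/update cycle for a differential-drive pose (x, y, theta). The unicycle motion model, the direct pose measurement, and the noise matrices are assumptions for illustration, not the paper's exact design:

        import numpy as np

        def ekf_step(x, P, u, z, Q, R, dt):
            """One EKF predict/update cycle for a differential-drive robot.

            x: state [x, y, theta]; P: 3x3 covariance
            u: control [v, w] (linear and angular velocity)
            z: measured pose [x, y, theta] (a direct pose sensor is assumed)
            Q, R: process and measurement noise covariances (3x3)
            """
            v, w = u
            # Predict with the unicycle motion model.
            x_pred = x + dt * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
            F = np.array([[1.0, 0.0, -dt * v * np.sin(x[2])],  # motion Jacobian
                          [0.0, 1.0,  dt * v * np.cos(x[2])],
                          [0.0, 0.0,  1.0]])
            P_pred = F @ P @ F.T + Q
            # Update with the direct pose measurement (H = I).
            y = z - x_pred                               # innovation
            y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi  # wrap heading error
            K = P_pred @ np.linalg.inv(P_pred + R)       # Kalman gain
            return x_pred + K @ y, (np.eye(3) - K) @ P_pred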

    Event based localization in Ackermann steering limited resource mobile robots

    This paper presents a local sensor fusion technique with an event-based global position correction to improve the localization of a mobile robot with limited computational resources. The proposed algorithms use a modified Kalman filter and a new local dynamic model of an Ackermann steering mobile robot. The approach achieves performance similar to more complex fusion schemes but with faster execution, allowing its implementation inside the robot. As a global sensor, an event-based position correction is implemented using the Kalman filter error covariance and the position measurement obtained from a zenithal camera. The solution is tested over long runs with different trajectories using a LEGO Mindstorms NXT robot. This work was supported by FEDER-CICYT projects DPI2011-28507-C02-01 and DPI2010-20814-C02-02, financed by the Ministerio de Ciencia e Innovación (Spain), and by the University of Costa Rica. Marín, L.; Vallés Miquel, M.; Soriano Vigueras, Á.; Valera Fernández, Á.; Albertos Pérez, P. (2014). Event based localization in Ackermann steering limited resource mobile robots. IEEE/ASME Transactions on Mechatronics, 19(4), 1171-1182. doi:10.1109/TMECH.2013.2277271
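    The event-triggering idea described above can be sketched as follows: run the local fusion continuously, and request the costly zenithal-camera fix only when the filter's own error covariance signals that the pose estimate has drifted. The threshold value and the camera.read_pose() interface below are hypothetical, chosen only to illustrate the logic:

        import numpy as np

        COV_TRACE_THRESHOLD = 0.05  # illustrative trigger level, not from the paper

        def maybe_global_correction(x, P, camera, R_cam):
            """Apply a global pose fix only when local uncertainty grows.

            Triggering on the position covariance mirrors the covariance-based
            event; camera.read_pose() is a hypothetical interface.
            """
            if np.trace(P[:2, :2]) < COV_TRACE_THRESHOLD:
                return x, P                # local fusion still accurate: no event
            z = camera.read_pose()         # global pose [x, y, theta]
            K = P @ np.linalg.inv(P + R_cam)   # gain with H = I (direct pose)
            x = x + K @ (z - x)
            P = (np.eye(3) - K) @ P
            return x, P

    Keying the trigger to the covariance, rather than to a fixed timer, is what lets a limited-resource robot skip most global updates while still bounding its drift.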

    Design and modeling of a stair climber smart mobile robot (MSRox)


    Human-Machine Interface for Remote Training of Robot Tasks

    Regardless of industrial or research application, the streamlining of robot operations is limited by the proximity of experienced users to the actual hardware. Be it massive open online robotics courses, crowd-sourcing of robot task training, or remote research on large robot farms for machine learning, the need for an apt remote Human-Machine Interface is quite prevalent. This paper proposes a novel solution for programming/training remote robots through an intuitive and accurate user interface that offers the benefits of working with real robots without imposing delays and inefficiency. The system includes: a vision-based 3D hand detection and gesture recognition subsystem, a simulated digital twin of the robot as visual feedback, and the "remote" robot learning/executing trajectories using dynamic motion primitives. Our results indicate that the system is a promising solution to the problem of remote training of robot tasks. Comment: Accepted in IEEE International Conference on Imaging Systems and Techniques - IST201
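    The dynamic motion primitives used for trajectory reproduction can be illustrated with a minimal one-dimensional discrete DMP rollout. The gains, basis placement, and termination criterion below are common heuristics assumed for this sketch, not the paper's configuration:

        import numpy as np

        def dmp_rollout(y0, goal, weights, tau=1.0, dt=0.01,
                        alpha=25.0, beta=6.25, alpha_x=3.0):
            """Integrate a 1-D discrete dynamic motion primitive.

            weights: learned weights of the Gaussian-basis forcing term.
            """
            n = len(weights)
            centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n))  # basis centers in phase
            widths = n ** 1.5 / centers                            # common width heuristic
            y, dy, x = float(y0), 0.0, 1.0                         # position, velocity, phase
            traj = []
            while x > 1e-3:
                psi = np.exp(-widths * (x - centers) ** 2)         # basis activations
                f = (psi @ weights) / (psi.sum() + 1e-10) * x * (goal - y0)
                ddy = alpha * (beta * (goal - y) - dy) + f         # spring-damper + forcing
                dy += ddy * dt / tau
                y += dy * dt / tau
                x -= alpha_x * x * dt / tau                        # canonical system decay
                traj.append(y)
            return np.array(traj)

    A demonstrated trajectory is encoded in the weights; replaying the DMP toward a new goal reproduces the demonstrated shape, which is what lets the remote robot execute trained tasks.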