
    Dynamic Motion Modelling for Legged Robots

    An accurate motion model is an important component in modern-day robotic systems, but building such a model for a complex system often requires an appreciable amount of manual effort. In this paper we present a motion model representation, the Dynamic Gaussian Mixture Model (DGMM), that alleviates the need to manually design the form of a motion model, and provides a direct means of incorporating auxiliary sensory data into the model. This representation and its accompanying algorithms are validated experimentally using an 8-legged kinematically complex robot, as well as a standard benchmark dataset. The presented method not only learns the robot's motion model, but also improves the model's accuracy by incorporating information about the terrain surrounding the robot.
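    As a rough illustration of the mixture-model idea only (not the authors' DGMM implementation), the sketch below fits a Gaussian mixture to joint (command, terrain, displacement) samples and conditions it on the observed command and terrain to predict the resulting motion; the data, variable names, and number of components are all hypothetical.

```python
# Hypothetical sketch: learn a motion model as a Gaussian mixture over joint
# (command, terrain, displacement) vectors, then condition on the inputs.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic training data: [commanded_velocity, terrain_slope, resulting_dx]
cmd = rng.uniform(0.0, 1.0, 500)
slope = rng.uniform(-0.3, 0.3, 500)
dx = 0.9 * cmd - 0.5 * slope + rng.normal(0.0, 0.02, 500)
data = np.column_stack([cmd, slope, dx])

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0).fit(data)

def predict_dx(inputs, gmm, n_in=2):
    """Condition the joint mixture on the first n_in dimensions (command, terrain)."""
    x = np.asarray(inputs)
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component given the observed inputs.
    resp = np.array([wk * multivariate_normal.pdf(x, m[:n_in], C[:n_in, :n_in])
                     for wk, m, C in zip(w, means, covs)])
    resp /= resp.sum()
    # Conditional mean of the displacement dimension in each component.
    cond_means = [m[n_in:] + C[n_in:, :n_in] @ np.linalg.solve(C[:n_in, :n_in], x - m[:n_in])
                  for m, C in zip(means, covs)]
    return float(resp @ np.concatenate(cond_means))

print(predict_dx([0.5, 0.1], gmm))  # expected displacement for this command/terrain pair
```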

    AUV SLAM and experiments using a mechanical scanning forward-looking sonar

    Navigation is one of the most important challenges for autonomous underwater vehicles (AUVs), which operate in complex undersea environments. The ability to localize a robot and accurately map its surroundings simultaneously, namely the simultaneous localization and mapping (SLAM) problem, is a key prerequisite for truly autonomous robots. In this paper, a modified FastSLAM algorithm is proposed and used for the navigation of our C-Ranger research platform, an open-frame AUV. A mechanical scanning imaging sonar is chosen as the active sensor for the AUV. The modified FastSLAM performs its update using the on-board sensors of the C-Ranger. The algorithm also employs a data association scheme that combines the single-particle maximum likelihood method with a modified negative evidence method, and uses rank-based resampling to overcome the particle depletion problem. In order to verify the feasibility of the proposed methods, both simulation experiments and sea trials of the C-Ranger are conducted. The experimental results show that the modified FastSLAM employed for the navigation of the C-Ranger AUV is considerably more effective and accurate than traditional methods.
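    The abstract names rank-based resampling as one ingredient for fighting particle depletion. The following is a minimal sketch of that single step, assuming a simple linear ranking scheme; it is not drawn from the paper's implementation.

```python
# Hypothetical sketch of rank-based resampling for a particle filter.
import numpy as np

def rank_based_resample(particles, weights, rng=None):
    """Resample using weights derived from each particle's rank rather than its
    raw (often highly peaked) importance weight, which keeps more diversity."""
    rng = rng or np.random.default_rng()
    n = len(particles)
    order = np.argsort(weights)          # ascending: lowest weight first
    ranks = np.empty(n)
    ranks[order] = np.arange(1, n + 1)   # rank 1 = lowest weight, rank n = highest
    rank_weights = ranks / ranks.sum()   # linear ranking -> much flatter distribution
    idx = rng.choice(n, size=n, p=rank_weights)
    return [particles[i] for i in idx]

# Usage with deliberately skewed weights, where naive resampling would collapse.
particles = [{"pose": np.zeros(3), "landmarks": {}} for _ in range(100)]
weights = np.random.default_rng(1).random(100) ** 5
resampled = rank_based_resample(particles, weights)
```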

    On-Manifold Preintegration for Real-Time Visual-Inertial Odometry

    Current approaches for visual-inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further exacerbated by the fact that inertial measurements arrive at a high rate, leading to fast growth in the number of variables in the optimization. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes into single relative motion constraints. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group. We formally discuss the generative measurement model as well as the nature of the rotation noise, and derive the expression for the maximum a posteriori state estimator. Our theoretical development enables the computation of all necessary Jacobians for the optimization and a posteriori bias correction in analytic form. The second contribution is to show that the preintegrated IMU model can be seamlessly integrated into a visual-inertial pipeline under the unifying framework of factor graphs. This enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3D points and further accelerates the computation. We perform an extensive evaluation of our monocular VIO pipeline on real and simulated datasets. The results confirm that our modelling effort leads to accurate state estimation in real time, outperforming state-of-the-art approaches. Comment: 20 pages, 24 figures, accepted for publication in IEEE Transactions on Robotics (TRO) 201
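    A minimal sketch of the preintegration idea, under the usual simplifications (biases, gravity, and the paper's noise propagation and bias Jacobians are omitted): gyroscope samples are composed on SO(3) via the exponential map, and accelerometer samples are rotated into the first keyframe's frame to accumulate relative velocity and position deltas.

```python
# Hypothetical sketch of IMU preintegration between two keyframes.
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: rotation matrix for a rotation vector w."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt):
    """Accumulate relative rotation, velocity and position deltas over the samples."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2
        dv = dv + (dR @ a) * dt
        dR = dR @ so3_exp(w * dt)        # compose on the manifold, no Euler angles
    return dR, dv, dp

# 200 synthetic samples at 200 Hz (biases and gravity ignored for brevity).
gyro = np.tile([0.0, 0.0, 0.1], (200, 1))
accel = np.tile([0.2, 0.0, 0.0], (200, 1))
print(preintegrate(gyro, accel, dt=1.0 / 200.0))
```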

    The Design and Implementation of a Bayesian CAD Modeler for Robotic Applications

    We present a Bayesian CAD modeler for robotic applications. We address the problem of taking into account the propagation of geometric uncertainties when solving inverse geometric problems. The proposed method may be seen as a generalization of constraint-based approaches in which we explicitly model geometric uncertainties. Using our methodology, a geometric constraint is expressed as a probability distribution on the system parameters and the sensor measurements, instead of a simple equality or inequality. To solve geometric problems in this framework, we propose an original resolution method that adapts to problem complexity. Using two examples, we show how to apply our approach and provide simulation results obtained with our modeler.
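    As a loose illustration of expressing a geometric constraint as a probability distribution rather than a hard equality, the sketch below scores a hypothetical 2-link planar arm with uncertain link lengths against a Gaussian "reach the target" constraint and searches the joint angles for the most probable solution; all numbers and names, and the grid-search stand-in for the resolution method, are assumptions.

```python
# Hypothetical sketch: a soft "reach the target" constraint for a 2-link arm
# with uncertain link lengths, solved by a coarse search over joint angles.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.2, 0.6])
sigma_constraint = 0.01   # how softly the equality constraint is enforced
link_sigma = 0.005        # geometric uncertainty on the nominal link lengths
length_samples = rng.normal([1.0, 0.8], link_sigma, size=(64, 2))

def forward(q, lengths):
    """Planar 2-link forward kinematics."""
    l1, l2 = lengths
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def log_posterior(q):
    """Monte Carlo marginalization of the uncertain lengths under the Gaussian
    constraint on the end-effector position."""
    residuals = np.array([forward(q, l) - target for l in length_samples])
    ll = -0.5 * np.sum(residuals**2, axis=1) / sigma_constraint**2
    return np.log(np.mean(np.exp(ll - ll.max()))) + ll.max()   # log-mean-exp

# Coarse grid search stands in for an adaptive resolution method.
grid = np.linspace(-np.pi, np.pi, 91)
best = max(((q1, q2) for q1 in grid for q2 in grid), key=log_posterior)
print("most probable joint angles:", best)
```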

    Open Source Robot Localization for Non-Planar Environments

    The operational environments in which a mobile robot executes its missions often exhibit non-flat terrain, encompassing outdoor and indoor settings featuring ramps and slopes. In such scenarios, the conventional methodologies employed for localization encounter novel challenges and limitations. This study delineates a localization framework incorporating ground elevation and inclination considerations, deviating from traditional 2D localization paradigms that may falter in such contexts. In our proposed approach, the map encompasses elevation and spatial occupancy information, employing Gridmaps and Octomaps. At the same time, the perception model is designed to accommodate the robot's inclined orientation and the potential presence of the ground as an obstacle, besides the usual structural and dynamic obstacles. We have developed and rigorously validated our approach within Nav2, a widely used open-source framework for robot navigation. Our findings demonstrate that our methodology represents a viable and effective alternative for mobile robots operating in challenging outdoor environments or intricate terrains.
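    One way to picture the elevation-and-inclination idea (an assumption on our part, not Nav2's or the authors' implementation) is to lift a 2D particle onto an elevation grid: its height comes from the map and its roll and pitch from the local terrain gradient, as in the sketch below.

```python
# Hypothetical sketch: lift a 2D particle pose onto an elevation grid map.
import numpy as np

resolution = 0.1                                      # metres per cell
xs = ys = np.arange(0.0, 10.0, resolution)
elevation = 0.05 * xs[None, :] + 0.0 * ys[:, None]    # a gentle ramp along x

def elevation_at(x, y):
    return elevation[int(round(y / resolution)), int(round(x / resolution))]

def lift_particle(x, y, yaw):
    """Return (x, y, z, roll, pitch, yaw) for a 2D particle placed on the terrain."""
    z = elevation_at(x, y)
    d = resolution                                    # finite-difference step
    gx = (elevation_at(x + d, y) - elevation_at(x - d, y)) / (2 * d)
    gy = (elevation_at(x, y + d) - elevation_at(x, y - d)) / (2 * d)
    # Project the terrain gradient onto the heading (sign conventions are arbitrary here).
    pitch = np.arctan(gx * np.cos(yaw) + gy * np.sin(yaw))
    roll = np.arctan(-gx * np.sin(yaw) + gy * np.cos(yaw))
    return x, y, z, roll, pitch, yaw

print(lift_particle(5.0, 5.0, 0.0))                   # heading up the ramp -> non-zero pitch
```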

    Robot localization in symmetric environment

    The robot localization problem is a key problem in making truly autonomous robots. If a robot does not know where it is, it can be difficult to determine what to do next. Monte Carlo Localization (MCL), a well-known localization algorithm, represents a robot's belief by a set of weighted samples. This set of samples approximates the posterior probability of where the robot is located. Our method presents an extension to the MCL algorithm for localizing in highly symmetrical environments, a situation where MCL is often unable to correctly track equally probable poses for the robot. The sample sets in MCL often become impoverished when samples are generated in several locations. Our approach incorporates the idea of clustering the samples and organizing them according to their orientation. Experimental results show our method is able to successfully determine the position of the robot in a symmetric environment, while ordinary MCL often fails.
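    A minimal sketch of the clustering idea, assuming k-means over position plus a wrap-around-safe orientation encoding (cos/sin); the actual clustering and bookkeeping used by the authors are not specified in the abstract.

```python
# Hypothetical sketch: cluster particles into distinct pose hypotheses in a
# symmetric environment, encoding orientation as (cos, sin) to avoid wrap-around.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two equally probable hypotheses, as arise in a rotationally symmetric corridor.
a = np.column_stack([rng.normal(2.0, 0.1, 150), rng.normal(1.0, 0.1, 150),
                     rng.normal(0.0, 0.05, 150)])
b = np.column_stack([rng.normal(8.0, 0.1, 150), rng.normal(1.0, 0.1, 150),
                     rng.normal(np.pi, 0.05, 150)])
particles = np.vstack([a, b])                       # columns: x, y, theta

features = np.column_stack([particles[:, 0], particles[:, 1],
                            np.cos(particles[:, 2]), np.sin(particles[:, 2])])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

for k in range(2):
    cluster = particles[labels == k]
    theta = np.arctan2(np.sin(cluster[:, 2]).mean(), np.cos(cluster[:, 2]).mean())
    print(f"hypothesis {k}: x={cluster[:, 0].mean():.2f}, "
          f"y={cluster[:, 1].mean():.2f}, theta={theta:.2f}")
```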

    Multi-Robot FastSLAM for Large Domains

    For a robot to build a map of its surrounding area, it must have accurate position information within that area, and to obtain accurate position information, the robot needs an accurate map of the area. This circular problem is the Simultaneous Localization and Mapping (SLAM) problem. An efficient algorithm to solve it is FastSLAM, which is based on the Rao-Blackwellized particle filter. FastSLAM solves the SLAM problem for single-robot mapping using particles to represent the posterior of the robot pose and the map. Each particle of the filter possesses its own global map, typically a grid map. The memory space required for these maps poses a serious limitation to the algorithm's capability when the problem space is large, and the problem only gets worse if the algorithm is adapted to multi-robot mapping. This thesis presents an alternative mapping algorithm that extends single-robot FastSLAM to multi-robot mapping using Absolute Space Representations (ASRs) to represent the world. Each particle still maintains a local grid to map its vicinity, and this grid map is periodically converted into an ASR. An ASR expresses the world as polygons, requiring only a minimal amount of memory, so this altered mapping strategy alleviates the problem FastSLAM faces when mapping a large domain. In this algorithm, each robot maps separately, and when two robots encounter each other they exchange range and odometry readings covering the interval from their last encounter to the current one. Each robot then sets up another filter for the other robot's data and incrementally updates its own map, incorporating the passed data and its own data at the same time. The passed data is processed in reverse by the receiving robot, as if a virtual robot were back-tracking the path of the other robot. The algorithm is demonstrated using three data sets collected with a single robot equipped with odometry and laser range-finder sensors.
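    The memory argument hinges on replacing each particle's grid with a compact polygonal (ASR-like) representation. The sketch below illustrates that kind of conversion using off-the-shelf contour extraction and polygon simplification; it is an assumption-laden stand-in, not the thesis's ASR conversion.

```python
# Hypothetical sketch: summarize a particle's local occupancy grid as polygons.
import numpy as np
from skimage import measure

# Synthetic 200x200 local grid: 1 = occupied, 0 = free.
grid = np.zeros((200, 200))
grid[50:150, 50:150] = 1.0                 # one square obstacle/room boundary

polygons = []
for contour in measure.find_contours(grid, level=0.5):
    # Douglas-Peucker simplification keeps a few vertices per boundary.
    polygons.append(measure.approximate_polygon(contour, tolerance=2.0))

print(f"grid cells: {grid.size}, polygon vertices stored: "
      f"{sum(len(p) for p in polygons)}")
```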