Mixed marker-based/marker-less visual odometry system for mobile robots
When moving in generic indoor environments, robotic platforms generally rely solely on information provided by onboard sensors to determine their position and orientation. The lack of absolute references, however, often introduces severe drift into the computed estimates, making autonomous operation hard to accomplish. This paper proposes a solution that alleviates these issues by combining two vision-based pose estimation techniques working in relative and absolute coordinate systems, respectively. In particular, unknown ground features in the images captured by the vertical camera of a mobile platform are processed by a vision-based odometry algorithm capable of estimating relative frame-to-frame movements. The errors accumulated in this step are then corrected using artificial markers placed at known positions in the environment. The markers are framed from time to time, which keeps the drift bounded while additionally providing the robot with the navigation commands needed for autonomous flight. The accuracy and robustness of the designed technique are demonstrated on an off-the-shelf quadrotor through extensive experimental tests.
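As a rough illustration of the scheme this abstract describes, the sketch below fuses relative frame-to-frame odometry with absolute marker corrections on a planar pose. The helpers `estimate_frame_to_frame` and `detect_marker` are hypothetical placeholders for the paper's ground-feature odometry and marker detector, and the 2D pose algebra is an assumption made for compactness.

```python
import numpy as np

def compose(pose, delta):
    """Compose a planar pose (x, y, theta) with a relative motion (dx, dy, dtheta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

def invert(pose):
    """Inverse of a planar pose, so compose(pose, invert(pose)) is the identity."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    return (-(c * x + s * y), s * x - c * y, -th)

def track(frames, marker_map, estimate_frame_to_frame, detect_marker):
    """Fuse relative odometry with absolute marker fixes (hypothetical helpers)."""
    pose = (0.0, 0.0, 0.0)  # robot pose in the map frame
    prev = None
    for img in frames:
        if prev is not None:
            # Relative step: integrate frame-to-frame motion; drift accumulates here.
            pose = compose(pose, estimate_frame_to_frame(prev, img))
        detection = detect_marker(img)  # -> (marker_id, marker_pose_in_robot) or None
        if detection is not None:
            marker_id, marker_in_robot = detection
            # Absolute correction: a marker at a known map position bounds the drift.
            pose = compose(marker_map[marker_id], invert(marker_in_robot))
        prev = img
    return pose
```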
Extrinsic Parameter Calibration for Line Scanning Cameras on Ground Vehicles with Navigation Systems Using a Calibration Pattern
Line scanning cameras, which capture only a single line of pixels, have been increasingly used in ground-based mobile or robotic platforms. In applications where it is advantageous to directly georeference the camera data to world coordinates, an accurate estimate of the camera's 6D pose is required. This paper focuses on the common case where a mobile platform is equipped with a rigidly mounted line scanning camera, whose pose is unknown, and a navigation system providing vehicle body pose estimates. We propose a novel method that estimates the camera's pose relative to the navigation system. The approach involves imaging and manually labelling a calibration pattern with distinctly identifiable points, triangulating these points from camera and navigation system data, and reprojecting them in order to compute a likelihood, which is maximised to estimate the 6D camera pose. Additionally, a Markov Chain Monte Carlo (MCMC) algorithm is used to estimate the uncertainty of the offset. Tested on two different platforms, the method was able to estimate the pose to within 0.06 m / 1.05° and 0.18 m / 2.39°. We also propose several approaches to displaying and interpreting the 6D results in a human-readable way.

Comment: Published in MDPI Sensors, 30 October 2017
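A minimal sketch of the likelihood-maximisation step described above, under simplifying assumptions: a generic pinhole projection stands in for the paper's line-scan camera model, and the data layout (`body_poses`, `world_pts`, `observed_px`, intrinsics `K`) is invented for illustration. Minimising the summed squared reprojection error is equivalent to maximising a Gaussian pixel-noise likelihood; the same cost could in principle be passed to an MCMC sampler to obtain the uncertainty estimate the paper mentions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def reprojection_cost(offset, body_poses, world_pts, observed_px, K):
    """offset = (tx, ty, tz, rx, ry, rz): camera pose in the body frame,
    rotation encoded as a rotation vector. Returns the summed squared pixel
    residual, i.e. the negative log-likelihood under isotropic Gaussian noise."""
    t_cb = offset[:3]
    R_cb = Rotation.from_rotvec(offset[3:]).as_matrix()
    cost = 0.0
    for (R_wb, t_wb), X, u in zip(body_poses, world_pts, observed_px):
        X_b = R_wb.T @ (X - t_wb)      # world point -> body frame
        X_c = R_cb.T @ (X_b - t_cb)    # body frame -> camera frame
        p = K @ X_c                    # pinhole projection (illustrative)
        cost += np.sum((p[:2] / p[2] - u) ** 2)
    return cost

# Maximise the likelihood by minimising the cost over the 6D offset:
# res = minimize(reprojection_cost, x0=np.zeros(6),
#                args=(body_poses, world_pts, observed_px, K), method="Nelder-Mead")
```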
Computation of the optimal relative pose between overlapping grid maps through discrepancy minimization
Grid maps are a common environment representation in mobile robotics. Many Simultaneous Localization and Mapping (SLAM) solutions divide the global map into submaps, forming some kind of graph or tree to represent the structure of the environment, while the metric details are captured in the submaps. This work presents a novel algorithm that computes a physically feasible relative pose between two overlapping grid maps. The algorithm can be used for correspondence search (data association), but also for integrating negative information in a unified way. This paper proposes a discrepancy measure between two overlapping grid maps and its application in a quasi-Newton optimization algorithm, with the hypothesis that minimizing such discrepancy can provide useful information for SLAM. Experimental evidence is provided showing the high potential of the algorithm.
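The sketch below illustrates the general idea rather than the paper's algorithm: a simple squared-difference discrepancy between two occupancy grids is minimised over a planar relative pose, with SciPy's BFGS standing in for the quasi-Newton optimiser. The bilinear resampling and the 0.5 "unknown" fill value are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def discrepancy(pose, grid_a, grid_b):
    """Resample grid_b into grid_a's frame under pose = (tx, ty, theta)
    and return a mean squared difference over the overlap."""
    tx, ty, th = pose
    rows, cols = np.mgrid[0:grid_a.shape[0], 0:grid_a.shape[1]]
    c, s = np.cos(th), np.sin(th)
    # Rigidly map each cell of grid_a into grid_b's frame (cell-sized units).
    r_b = c * (rows - tx) + s * (cols - ty)
    c_b = -s * (rows - tx) + c * (cols - ty)
    # Bilinear interpolation; cells outside grid_b count as unknown (0.5).
    b_vals = map_coordinates(grid_b, [r_b, c_b], order=1, cval=0.5)
    return np.mean((grid_a - b_vals) ** 2)

# Quasi-Newton minimisation of the discrepancy over the relative pose:
# res = minimize(discrepancy, x0=np.zeros(3), args=(grid_a, grid_b), method="BFGS")
```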
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: "Do robots need SLAM?" and "Is SLAM solved?"
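For readers who have not seen it, the de-facto standard formulation the survey refers to is the factor-graph maximum-a-posteriori estimate, commonly written as a nonlinear least-squares problem (a standard textbook rendering, not a quotation from the paper):

```latex
% Factor-graph MAP formulation (standard form, with Gaussian measurement
% noise z_k = h_k(X_k) + eps_k, eps_k ~ N(0, Sigma_k)):
\[
  X^{\ast}
  = \operatorname*{arg\,max}_{X} \, p(X \mid Z)
  = \operatorname*{arg\,min}_{X} \, \sum_{k}
    \bigl\lVert h_k(X_k) - z_k \bigr\rVert_{\Sigma_k}^{2}
\]
```

Here each factor k couples a subset of variables X_k (robot poses and map elements) through a measurement model h_k, and the Mahalanobis norm weights each residual by its measurement covariance Σ_k.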
Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots
For autonomous systems that need to perceive the surrounding environment to accomplish a given task, vision is a highly informative exteroceptive sensory source. Among the available sensors, the richness of visual data makes it possible to build a complete description of the environment, collecting both geometric and semantic information (e.g., object pose, distances, shapes, colors, lights). The large amount of collected data supports both methods that exploit the totality of the data (dense approaches) and methods that work on a reduced set obtained through feature extraction (sparse approaches). This manuscript presents dense and sparse vision-based methods for the control and sensing of robotic systems. First, a safe navigation scheme for mobile robots moving in unknown environments populated by obstacles is presented. For this task, dense visual information is used to perceive the environment (i.e., detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot motion with a linear observer. In parallel, sparse visual data are extracted in the form of geometric primitives in order to implement a visual servoing control scheme that realizes the desired navigation behaviours. This controller relies on the visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are exploited to rearrange the internal configuration of the robot and reduce the space it occupies when the workspace is highly cluttered.
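As context for the sparse controller mentioned above, the following is a textbook image-based visual-servoing sketch rather than the manuscript's controller: point features s are driven towards a desired configuration s* with the classical law v = -λ L⁺ (s - s*), using the standard interaction matrix for image points with estimated depth Z.

```python
import numpy as np

def interaction_matrix(points, depths):
    """Stack the 2x6 interaction matrix of each normalised image point (x, y)
    with estimated depth Z (classical point-feature model)."""
    rows = []
    for (x, y), Z in zip(points, depths):
        rows.append([-1 / Z, 0, x / Z, x * y, -(1 + x**2), y])
        rows.append([0, -1 / Z, y / Z, 1 + y**2, -x * y, -x])
    return np.array(rows)

def ibvs_velocity(s, s_star, depths, gain=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) that reduces the feature error."""
    L = interaction_matrix(s, depths)
    error = (np.asarray(s) - np.asarray(s_star)).ravel()
    return -gain * np.linalg.pinv(L) @ error  # v = -lambda * L^+ (s - s*)
```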
Vision-based estimation methods are relevant in other contexts as well. In the field of surgical robotics, having reliable data about unmeasurable quantities is both highly important and critical. In this manuscript, we present a Kalman-based observer that estimates the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing. The method exploits images acquired by the endoscope of the robotic platform to extract the relevant geometric information and obtain projected measurements of the tool pose. The method has also been validated with a novel simulator designed for the da Vinci robotic platform, built to ease interfacing and to allow testing and validation under ideal conditions.
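A generic sketch of such a Kalman-based observer follows; the measurement function `h` and its Jacobian `H_jac` are placeholders for the needle-projection geometry extracted from the endoscopic images, and the constant-pose prediction model is an assumption.

```python
import numpy as np

def ekf_step(x, P, z, h, H_jac, Q, R):
    """One predict/update cycle for state x (e.g. a 6D needle pose) with
    measurement z (projected needle features in the image)."""
    # Predict: constant-pose model; process noise Q accounts for tool motion.
    P = P + Q
    # Update with the projected measurement.
    H = H_jac(x)                        # Jacobian of h at the current estimate
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - h(x))              # correct the state with the residual
    P = (np.eye(len(x)) - K @ H) @ P    # covariance update
    return x, P
```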
The Kalman-based observers mentioned above are classical passive estimators: the system inputs used to produce the estimate are, in principle, arbitrary, so there is no way to actively adapt the input trajectories to optimize specific requirements on estimation performance. To address this, the active estimation paradigm is introduced and some related strategies are presented.
More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimate of a scene observed by a moving camera while minimizing the maximum uncertainty of the estimate. This approach can be applied to any robotic platform and has been validated on a manipulator arm equipped with a monocular camera.
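The "minimise the maximum uncertainty" criterion can be illustrated with a greedy one-step sketch; the covariance-prediction model `predict_covariance` is hypothetical and stands in for whatever estimator the active strategy wraps, not for the algorithm validated in the manuscript.

```python
import numpy as np

def choose_motion(P, candidate_motions, predict_covariance):
    """Pick the camera motion whose predicted posterior covariance has the
    smallest largest eigenvalue, i.e. the smallest worst-case uncertainty."""
    def worst_case(motion):
        P_next = predict_covariance(P, motion)      # EKF/Riccati-style prediction
        return np.max(np.linalg.eigvalsh(P_next))   # largest eigenvalue of P_next
    return min(candidate_motions, key=worst_case)
```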