Fast, Robust, Accurate, Multi-Body Motion Aware SLAM
Simultaneous ego-localization and awareness of surrounding object motion are key capabilities for the navigation of unmanned systems and for virtual-real interaction applications. Robust and accurate data association at both the object and feature levels is one of the key factors in solving this problem. However, currently available solutions ignore the complementarity among different cues in front-end object association and the negative effects of poorly tracked features on back-end optimization, which makes them insufficiently robust in practical applications. Motivated by these observations, we treat the rigid environment as a unified whole to assist state decoupling by integrating high-level semantic information, ultimately enabling simultaneous multi-state estimation. A filter-based multi-cue fusion object tracker is proposed to establish more stable object-level data association. Combined with the object's motion priors, a motion-aided feature tracking algorithm is proposed to improve feature-level data association. Furthermore, a novel state estimation factor graph is designed that integrates a specific feature observation uncertainty model and the intrinsic priors of the tracked object, and is solved through sliding-window optimization. Our system is evaluated on the KITTI dataset and achieves performance comparable to state-of-the-art object pose estimation systems, both quantitatively and qualitatively. We have also validated our system in a simulation environment and on a real-world dataset to confirm its potential application value in different practical scenarios.
Integration of Absolute Orientation Measurements in the KinectFusion Reconstruction pipeline
In this paper, we show how absolute orientation measurements provided by
low-cost but high-fidelity IMU sensors can be integrated into the KinectFusion
pipeline. We show that this integration improves the runtime, robustness, and
quality of the 3D reconstruction. In particular, we use this orientation data
to seed and regularize the ICP registration technique. We also present a
technique to filter the pairs of 3D matched points based on the distribution of
their distances. This filter is implemented efficiently on the GPU. Estimating
the distribution of the distances helps control the number of iterations
necessary for the convergence of the ICP algorithm. Finally, we show
experimental results that highlight improvements in robustness, a speed-up of
almost 12%, and a gain in tracking quality of 53% for the ATE metric on the
Freiburg benchmark.
Comment: CVPR Workshop on Visual Odometry and Computer Vision Applications
Based on Location Clues 201
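The distance-distribution filter is described only at a high level in the abstract. One plausible reading, sketched here on the CPU (the paper's version runs on the GPU, and the exact rejection statistic is an assumption), discards matched pairs whose point-to-point distance is an outlier of the empirical distance distribution:

```python
import numpy as np

def filter_matches_by_distance(src_pts, dst_pts, k_sigma=2.0):
    """Reject matched 3D point pairs whose distance deviates from the
    empirical distance distribution by more than k_sigma standard deviations.

    src_pts, dst_pts: (N, 3) arrays of corresponding points.
    Returns the filtered pairs and the boolean inlier mask."""
    d = np.linalg.norm(src_pts - dst_pts, axis=1)   # per-pair distances
    mu, sigma = d.mean(), d.std()
    keep = np.abs(d - mu) <= k_sigma * sigma        # inlier mask
    return src_pts[keep], dst_pts[keep], keep
```

The same per-pair statistics could also be used to monitor convergence: when the mean residual distance stops shrinking between iterations, the ICP loop can terminate early.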
GNSS-stereo-inertial SLAM for arable farming
The accelerating pace in the automation of agricultural tasks demands highly
accurate and robust localization systems for field robots. Simultaneous
Localization and Mapping (SLAM) methods inevitably accumulate drift on
exploratory trajectories and primarily rely on place revisiting and loop
closing to keep a bounded global localization error. Loop closure techniques
are significantly challenging in agricultural fields, as the local visual
appearance of different views is very similar and might change easily due to
weather effects. A suitable alternative in practice is to employ global sensor
positioning systems jointly with the rest of the robot sensors. In this paper
we propose and implement the fusion of global navigation satellite system
(GNSS), stereo views, and inertial measurements for localization purposes.
Specifically, we incorporate, in a tightly coupled manner, GNSS measurements
into the stereo-inertial ORB-SLAM3 pipeline. We thoroughly evaluate our
implementation in the sequences of the Rosario data set, recorded by an
autonomous robot in soybean fields, and our own in-house data. Our data
includes measurements from a conventional GNSS, rarely included in evaluations
of state-of-the-art approaches. We characterize the performance of
GNSS-stereo-inertial SLAM in this application case, reporting pose error
reductions between 10% and 30% compared to visual-inertial and loosely coupled
GNSS-stereo-inertial baselines. In addition to such analysis, we also release
the code of our implementation as open source.
Comment: This paper has been accepted for publication in Journal of Field
Robotics, 202
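The abstract does not detail the factor formulation. As an illustration only, a tightly coupled GNSS position factor typically penalizes the gap between the pose-predicted antenna position and the GNSS fix; the residual below is a hedged sketch of that idea (the names, lever-arm handling, and whitening are assumptions, not the authors' actual ORB-SLAM3 extension):

```python
import numpy as np

def gnss_position_residual(p_wb, R_wb, p_gnss, t_bg, sqrt_info):
    """Residual of a GNSS position factor for a body pose (R_wb, p_wb).

    p_wb:      body position in the world frame (3,)
    R_wb:      body-to-world rotation matrix (3, 3)
    p_gnss:    GNSS position fix in the world frame (3,)
    t_bg:      antenna lever arm in the body frame (extrinsic calibration)
    sqrt_info: square-root information matrix whitening the residual
               with the GNSS measurement covariance."""
    p_pred = p_wb + R_wb @ t_bg           # antenna position predicted by the pose
    return sqrt_info @ (p_pred - p_gnss)  # whitened 3-vector residual
```

In a sliding-window optimizer, one such residual per GNSS fix would be stacked alongside the visual reprojection and IMU preintegration residuals.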
Agent and object aware tracking and mapping methods for mobile manipulators
The age of the intelligent machine is upon us. They exist in our factories, our warehouses, our military, our hospitals, on our roads, and on the moon. Most of these things we call robots. When placed in a
controlled or known environment, such as an automotive factory or a distribution warehouse, they perform their given roles with exceptional efficiency, achieving far more than is within reach of a humble human being. Despite the remarkable success of intelligent machines in such domains, they have yet to make a wholehearted deployment into our homes. The missing link between the robots we have now and the robots that are soon to come to our houses is perception.
Perception, as we mean it here, refers to a level of understanding beyond the collection and aggregation of sensory data. Much of the available sensory information is noisy and unreliable: our homes contain many reflective surfaces, repeating textures on large flat surfaces, and many disruptive moving elements, including humans. These environments change over time, with objects frequently moving within and between rooms.
This idea of change in an environment is fundamental to robotic applications, as in most cases we expect robots to be effectors of such change. We can identify two particular challenges that must be solved for robots to make the jump to less structured environments: how to manage noise and disruptive elements in observational data, and how to understand the world as a set of changeable elements (objects) which move over time within a wider environment. In this thesis we look at one possible approach to solving each of these problems.
For the first challenge we use proprioception aboard a robot with an articulated arm to handle difficult and unreliable visual data caused by both the robot and the environment. This onboard sensor data improves the pose tracking of a visual system when the robot moves rapidly, with high jerk, or when observing a scene with little visual variation.
For the second challenge, we build a model of the world at the level of rigid objects, and relocalise them both as they change location between different sequences and as they move. We use semantics, image keypoints, and 3D geometry to register and align objects between sequences, showing how their positions have moved between disparate observations.
Soft-connected Rigid Body Localization: State-of-the-Art and Research Directions for 6G
This white paper describes a proposed article that will aim to provide a
thorough study of the evolution of the typical paradigm of wireless
localization (WL), which is based on a single point model of each target,
towards wireless rigid body localization (W-RBL). We also look beyond the
concept of RBL itself, whereby each target is modeled as an independent
multi-point three-dimensional (3D) object, with shape enforced via a set of
conformation constraints, as a step towards a more general approach we refer to
as soft-connected RBL, whereby an ensemble of several objects embedded in a
given environment is modeled as a set of soft-connected 3D objects, with rigid
and soft conformation constraints enforced within each object and among them,
respectively. A first intended contribution of the full version of this article
is a compact but comprehensive survey on mechanisms to evolve WL algorithms in
W-RBL schemes, considering their peculiarities in terms of the type of
information, mathematical approach, and features they build on or offer. A
subsequent contribution is a discussion of mechanisms to extend W-RBL
techniques to soft-connected wireless rigid body localization (SCW-RBL) algorithms.
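The rigid conformation constraints mentioned above are commonly expressed as fixed pairwise distances between a body's landmark points. The sketch below illustrates that idea only; the representation and function signature are assumptions, not taken from the white paper:

```python
import numpy as np

def conformation_residuals(points, ref_dists):
    """Rigid conformation constraint residuals for one body.

    points:    (N, 3) estimated landmark positions of the rigid body.
    ref_dists: {(i, j): d_ij} known inter-landmark distances of the shape.
    Each residual is zero when the estimated pair matches the reference."""
    return np.array([np.linalg.norm(points[i] - points[j]) - d
                     for (i, j), d in ref_dists.items()])
```

Soft constraints between distinct bodies could reuse the same form with a tolerance or penalty weight instead of a hard equality.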
PAMPC: Perception-Aware Model Predictive Control for Quadrotors
We present the first perception-aware model predictive control framework for
quadrotors that unifies control and planning with respect to action and
perception objectives. Our framework leverages numerical optimization to
compute trajectories that satisfy the system dynamics and require control
inputs within the limits of the platform. Simultaneously, it optimizes
perception objectives for robust and reliable sensing by maximizing the
visibility of a point of interest and minimizing its velocity in the image
plane. Considering both perception and action objectives for motion planning
and control is challenging due to the possible conflicts arising from their
respective requirements. For example, for a quadrotor to track a reference
trajectory, it needs to rotate to align its thrust with the direction of the
desired acceleration. However, the perception objective might require
minimizing such rotation to maximize the visibility of a point of interest. A
model-based optimization framework, able to consider both perception and action
objectives and couple them through the system dynamics, is therefore necessary.
Our perception-aware model predictive control framework works in a
receding-horizon fashion by iteratively solving a non-linear optimization
problem. It is capable of running in real-time, fully onboard our lightweight,
small-scale quadrotor using a low-power ARM computer, together with a
visual-inertial odometry pipeline. We validate our approach in experiments
demonstrating (I) the contradiction between perception and action objectives,
and (II) improved behavior in extremely challenging lighting conditions.
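The two perception terms named in the abstract (visibility of a point of interest and its image-plane velocity) can be written as a scalar cost under a pinhole camera model. This is a minimal sketch of that idea; the weights, focal lengths, and exact cost form are assumptions, not PAMPC's actual formulation:

```python
def perception_cost(p_c, v_c, fx=320.0, fy=320.0, w_center=1.0, w_vel=0.5):
    """Illustrative perception objective for a 3D point of interest.

    p_c: point position in the camera frame (x, y, z), z pointing forward.
    v_c: point velocity expressed in the camera frame.
    Penalizes distance of the projection from the image center and the
    point's projected velocity in the image plane."""
    x, y, z = p_c
    u, v = fx * x / z, fy * y / z                   # pinhole projection (pixels)
    # image-plane velocity via the projection Jacobian
    du = fx * (v_c[0] / z - x * v_c[2] / z**2)
    dv = fy * (v_c[1] / z - y * v_c[2] / z**2)
    return w_center * (u**2 + v**2) + w_vel * (du**2 + dv**2)
```

In a receding-horizon setting, this term would be evaluated at each knot of the trajectory and added to the action cost, so the optimizer trades off tracking accuracy against keeping the point visible and stable in the image.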