An Effective Multi-Cue Positioning System for Agricultural Robotics
The self-localization capability is a crucial component for Unmanned Ground
Vehicles (UGV) in farming applications. Approaches based solely on visual cues
or on low-cost GPS are prone to failure in such scenarios. In this paper,
we present a robust and accurate 3D global pose estimation framework, designed
to take full advantage of heterogeneous sensory data. By modeling the pose
estimation problem as a pose graph optimization, our approach simultaneously
mitigates the cumulative drift introduced by motion estimation systems (wheel
odometry, visual odometry, ...), and the noise introduced by raw GPS readings.
Along with a suitable motion model, our system also integrates two additional
types of constraints: (i) a Digital Elevation Model and (ii) a Markov Random
Field assumption. We demonstrate how using these additional cues substantially
reduces the error along the altitude axis and, moreover, how this benefit
spreads to the other components of the state. We report exhaustive experiments
combining several sensor setups, showing accuracy improvements ranging from 37%
to 76% with respect to the exclusive use of a GPS sensor. We show that our
approach provides accurate results even if the GPS unexpectedly changes
positioning mode. The code of our system along with the acquired datasets are
released with this paper.
Comment: Accepted for publication in IEEE Robotics and Automation Letters, 201
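The core idea above, jointly mitigating odometry drift and raw GPS noise by optimizing a pose graph, can be sketched in one dimension with weighted linear least squares. All weights, increments, and fixes below are illustrative, not the paper's implementation:

```python
import numpy as np

# Toy 1D pose graph: fuse drifting odometry increments with sparse, noisy
# absolute GPS fixes by weighted linear least squares. This is a minimal
# stand-in for the paper's 3D pose-graph optimization; all numbers are
# illustrative.
def fuse_poses(odom_increments, gps_fixes, w_odom=10.0, w_gps=1.0):
    n = len(odom_increments) + 1          # poses x_0 .. x_{n-1}
    rows, rhs, weights = [], [], []
    # Relative constraints from odometry: x_{i+1} - x_i = increment_i
    for i, d in enumerate(odom_increments):
        r = np.zeros(n)
        r[i + 1], r[i] = 1.0, -1.0
        rows.append(r); rhs.append(d); weights.append(w_odom)
    # Absolute constraints from GPS: x_i = z (anchors the graph, bounds drift)
    for i, z in gps_fixes:
        r = np.zeros(n)
        r[i] = 1.0
        rows.append(r); rhs.append(z); weights.append(w_gps)
    A, b = np.array(rows), np.array(rhs)
    w = np.sqrt(np.array(weights))
    x, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return x
```

For example, `fuse_poses([1.0, 1.1, 0.9, 1.0], [(0, 0.0), (4, 4.2)])` spreads the 0.2 m disagreement between the integrated odometry (4.0) and the GPS endpoints over the whole trajectory instead of letting any single pose absorb it.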
Simultaneous Parameter Calibration, Localization, and Mapping
The calibration parameters of a mobile robot play a substantial role in navigation tasks. Often these parameters are subject to variations that depend either on changes in the environment or on the load of the robot. In this paper, we propose an approach to simultaneously estimate a map of the environment, the position of the on-board sensors of the robot, and its kinematic parameters. Our method requires no prior knowledge about the environment and relies only on a rough initial guess of the parameters of the platform. The proposed approach estimates the parameters online and it is able to adapt to non-stationary changes of the configuration. We tested our approach in simulated environments and on a wide range of real-world data using different types of robotic platforms. (C) 2012 Taylor & Francis and The Robotics Society of Japan.
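A minimal sketch of online parameter estimation in the spirit of the abstract above: a single wheel-radius scale factor is tracked by recursive least squares from pairs of raw odometry displacement and an independently observed displacement (e.g., from scan matching). The parameter name and noise value are hypothetical, not the paper's formulation:

```python
# Toy online calibration of one kinematic parameter: a wheel-radius scale k
# relating raw odometry displacement u to the displacement y observed
# independently (e.g., by scan matching), estimated by recursive least
# squares so it can track slow, non-stationary changes.
def rls_scale(pairs, k0=1.0, p0=1.0, r=0.01):
    k, p = k0, p0                      # estimate and its variance
    for u, y in pairs:                 # model: y ~ k * u
        g = p * u / (r + u * p * u)    # Kalman-style gain
        k += g * (y - k * u)           # correct with the innovation
        p = (1.0 - g * u) * p          # shrink the variance
    return k
```

Because the variance `p` never collapses to zero with a nonzero measurement noise `r`, the estimate keeps adapting if the true parameter later drifts, which mirrors the non-stationary setting described above.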
PIEKF-VIWO: Visual-Inertial-Wheel Odometry using Partial Invariant Extended Kalman Filter
Invariant Extended Kalman Filter (IEKF) has been successfully applied in
Visual-inertial Odometry (VIO) as an advanced achievement of Kalman filter,
showing great potential in sensor fusion. In this paper, we propose partial
IEKF (PIEKF), which incorporates only the rotation-velocity state into the Lie
group structure, and apply it to Visual-Inertial-Wheel Odometry (VIWO) to
improve positioning accuracy and consistency. Specifically, we derive the
rotation-velocity measurement model, which combines wheel measurements with
kinematic constraints. The model circumvents the wheel odometer's 3D
integration and covariance propagation, which is essential for filter
consistency. A plane constraint is also introduced to enhance position
accuracy, and a dynamic outlier detection method is adopted, leveraging the
velocity state output. Through simulation and real-world tests, we validate
the effectiveness of our approach, which outperforms the standard Multi-State
Constraint Kalman Filter (MSCKF) based VIWO in consistency and accuracy.
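The wheel-aiding idea above can be sketched as a measurement residual: wheel encoders give a body-frame forward speed, and the nonholonomic assumption says lateral and vertical body velocity are near zero, so the filter's world-frame velocity, rotated into the body frame, should match `[v_wheel, 0, 0]`. This illustrates the measurement concept only, not the paper's PIEKF derivation:

```python
import numpy as np

# Wheel-aided velocity residual (sketch): compare the estimated world-frame
# velocity, expressed in the body frame, against the encoder speed plus the
# nonholonomic constraint. The residual would drive a Kalman update.
def wheel_velocity_residual(R_wb, v_world, v_wheel):
    v_body = R_wb.T @ v_world                      # world -> body frame
    return v_body - np.array([v_wheel, 0.0, 0.0])  # innovation for the update
```

A nonzero second component of the residual signals lateral slip, which is one reason a dynamic outlier check on the velocity state, as the abstract mentions, is useful.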
A mosaic of eyes
Autonomous navigation is a traditional research topic in intelligent robotics and vehicles, which requires a robot to perceive its environment through onboard sensors such as cameras or laser scanners, to enable it to drive to its goal. Most research to date has focused on the development of a large and smart brain to gain autonomous capability for robots. There are three fundamental questions to be answered by an autonomous mobile robot: 1) Where am I going? 2) Where am I? and 3) How do I get there? To answer these basic questions, a robot requires a massive spatial memory and considerable computational resources to accomplish perception, localization, path planning, and control. It is not yet possible to deliver the centralized intelligence required for real-life applications such as autonomous ground vehicles and wheelchairs in care centers. In fact, most autonomous robots try to mimic how humans navigate, interpreting images taken by cameras and then making decisions accordingly. They may encounter the following difficulties
Kinematics Based Visual Localization for Skid-Steering Robots: Algorithm and Theory
In commercial robots, the skid-steering mechanical design is increasingly
popular due to its manufacturing simplicity and unique mechanism. However,
these traits also pose significant challenges for software and algorithm design,
especially for pose estimation (i.e., determining the robot's rotation and
position), which is a prerequisite of autonomous navigation. While
general localization algorithms have been extensively studied in research
communities, there are still fundamental problems that need to be resolved for
localizing skid-steering robots that change their orientation with a skid. To
tackle this problem, we propose a probabilistic sliding-window estimator
dedicated to skid-steering robots, using measurements from a monocular camera,
the wheel encoders, and optionally an inertial measurement unit (IMU).
Specifically, we explicitly model the kinematics of skid-steering robots
using both track instantaneous centers of rotation (ICRs) and correction
factors, which compensate for the complexity of track-to-terrain
interaction, imperfections in the mechanical design, terrain conditions and
smoothness, and so on. To prevent performance degradation over a robot's
lifelong missions, the time- and location-varying kinematic parameters are estimated
online along with pose estimation states in a tightly-coupled manner. More
importantly, we conduct in-depth observability analysis for different sensors
and design configurations in this paper, which provides us with theoretical
tools in making the correct choice when building real commercial robots. In our
experiments, we validate the proposed method by both simulation tests and
real-world experiments, which demonstrate that our method outperforms competing
methods by wide margins.
Comment: 18 pages in total
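The ICR model the abstract builds on can be sketched in a few lines: the lateral offsets of the two track ICRs and a shared longitudinal offset map left/right track speeds to a body twist. With ideal, non-slipping tracks this reduces to differential drive. The parameter values here are illustrative, not the estimated quantities from the paper:

```python
# Minimal ICR-based skid-steer kinematics (sketch): y_l, y_r are the lateral
# offsets of the left/right track ICRs, x_v the shared longitudinal ICR
# offset. Slip makes |y_l|, |y_r| larger than the physical track half-width.
def skid_steer_twist(v_left, v_right, y_l, y_r, x_v=0.0):
    omega = (v_right - v_left) / (y_l - y_r)             # yaw rate
    v_x = (v_right * y_l - v_left * y_r) / (y_l - y_r)   # forward speed
    v_y = -x_v * omega                                   # lateral slip speed
    return v_x, v_y, omega
```

Because the ICR offsets vary with terrain and load, estimating them online alongside the pose, as the abstract describes, is what keeps this mapping accurate over a robot's lifetime.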
Driving with Style: Inverse Reinforcement Learning in General-Purpose Planning for Automated Driving
Behavior and motion planning play an important role in automated driving.
Traditionally, behavior planners instruct local motion planners with predefined
behaviors. Due to the high scene complexity in urban environments,
unpredictable situations may occur in which behavior planners fail to match
predefined behavior templates. Recently, general-purpose planners have been
introduced, combining behavior and local motion planning. These general-purpose
planners allow behavior-aware motion planning given a single reward function.
However, two challenges arise: first, this function has to map a complex
feature space to rewards; second, it has to be manually tuned by an expert,
which is a tedious task. In this paper, we propose an approach that relies on human driving
demonstrations to automatically tune reward functions. This study offers
important insights into the driving style optimization of general-purpose
planners with maximum entropy inverse reinforcement learning. We evaluate our
approach based on the expected value difference between learned and
demonstrated policies. Furthermore, we compare the similarity of human driven
trajectories with optimal policies of our planner under learned and
expert-tuned reward functions. Our experiments show that we are able to learn
reward functions exceeding the level of manual expert tuning without prior
domain knowledge.
Comment: Appeared at IROS 2019. Accepted version. Added/updated footnote, minor correction in preliminaries.
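Maximum entropy IRL, the learning principle named above, can be sketched in its simplest trajectory-based form: with a linear reward theta . f(tau) over a finite set of candidate trajectories, the MaxEnt distribution is p(tau) proportional to exp(theta . f(tau)), and the gradient of the demonstration log-likelihood is the demonstrated feature expectation minus the model's. This shows the principle only; the paper's planner, feature space, and optimization are far richer:

```python
import numpy as np

# Trajectory-based MaxEnt IRL sketch: gradient ascent drives the model's
# feature expectation toward the demonstrated one, concentrating probability
# on trajectories that look like the demonstrations.
def maxent_irl(features, demo_expectation, lr=0.5, iters=500):
    theta = np.zeros(features.shape[1])
    for _ in range(iters):
        logits = features @ theta
        p = np.exp(logits - logits.max())
        p /= p.sum()                              # p(tau) over candidates
        grad = demo_expectation - p @ features    # match feature expectations
        theta += lr * grad
    return theta
```

With two-dimensional features and demonstrations that always pick the first candidate trajectory, the learned weights make that trajectory by far the most probable, which is exactly the "reward function matching demonstrated behavior" the abstract evaluates.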
Fail-Aware LIDAR-Based Odometry for Autonomous Vehicles
Autonomous driving systems are set to become a reality in transport systems,
and so maximum user acceptance is being sought. Currently, the most
advanced architectures require driver intervention when functional system
failures or critical sensor operations take place, presenting problems related
to driver state, distractions, fatigue, and other factors that prevent safe
control. Therefore, this work presents a redundant, accurate, robust, and
scalable LiDAR odometry system with fail-aware system features that can allow
other systems to perform a safe stop manoeuvre without driver intervention. All
odometry systems have drift error, making it difficult to use them for
localisation tasks over extended periods. For this reason, the paper presents
an accurate LiDAR odometry system with a fail-aware indicator. This indicator
estimates a time window in which the system manages the localisation tasks
appropriately. The odometry error is minimised by applying a dynamic 6-DoF
model and fusing measurements based on the Iterative Closest Point (ICP),
environment feature extraction, and Singular Value Decomposition (SVD) methods.
The obtained results are promising for two reasons: First, in the KITTI
odometry data set, the ranking achieved by the proposed method is twelfth,
considering only LiDAR-based methods, where its translation and rotation errors
are 1.00% and 0.0041 deg/m, respectively. Second, the encouraging results of
the fail-aware indicator demonstrate the safety of the proposed LiDAR odometry
system. The results show that, in order to achieve an accurate odometry
system, complex models and measurement fusion techniques must be used to
improve its behaviour. Furthermore, if an odometry system is to be used for
redundant localisation features, it must integrate a fail-aware indicator so
that it can be used in a safe manner.
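The SVD step mentioned above is the core of ICP-style scan alignment: given matched point sets, the Kabsch/Arun solution recovers the rigid transform minimizing the squared point-to-point error. One such alignment per scan pair yields the incremental odometry, and accumulated alignment error is the drift the fail-aware indicator is meant to bound. A sketch with toy data, not the paper's pipeline:

```python
import numpy as np

# Kabsch/Arun rigid alignment (the SVD core of ICP): recover (R, t)
# minimizing sum ||R p_i + t - q_i||^2 over matched points P -> Q.
def rigid_align(P, Q):
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

In a full ICP loop this solve alternates with re-matching nearest points; the determinant guard keeps degenerate matches from returning a reflection instead of a rotation.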
Effective Target Aware Visual Navigation for UAVs
In this paper we propose an effective vision-based navigation method that
allows a multirotor vehicle to simultaneously reach a desired goal pose in the
environment while constantly facing a target object or landmark. Standard
techniques such as Position-Based Visual Servoing (PBVS) and Image-Based Visual
Servoing (IBVS) in some cases (e.g., while the multirotor is performing fast
maneuvers) cannot constantly maintain the line of sight with a target
of interest. Instead, we compute the optimal trajectory by solving a non-linear
optimization problem that minimizes the target re-projection error while
meeting the UAV's dynamic constraints. The desired trajectory is then tracked
by means of a real-time Non-linear Model Predictive Controller (NMPC): this
implicitly allows the multirotor to satisfy both sets of required constraints. We
successfully evaluate the proposed approach in many real and simulated
experiments, making an exhaustive comparison with a standard approach.
Comment: Conference paper at the "European Conference on Mobile Robotics" (ECMR), 201
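The cost term at the heart of the formulation above can be sketched for a pinhole camera: project the target into the image at a given camera pose and penalize its offset from the principal point, so minimizing the cost along the trajectory keeps the target centred in view. The intrinsics and poses are illustrative, not the paper's setup:

```python
import numpy as np

# Target re-projection error (sketch): world point -> camera frame ->
# pinhole projection, returning the pixel offset from the image centre.
# In the trajectory optimization this would be one residual per timestep.
def target_reprojection_error(p_target_w, R_wc, t_wc,
                              fx=300.0, fy=300.0, cx=320.0, cy=240.0):
    p_c = R_wc.T @ (p_target_w - t_wc)       # world -> camera frame
    u = fx * p_c[0] / p_c[2] + cx            # pinhole projection
    v = fy * p_c[1] / p_c[2] + cy
    return np.array([u - cx, v - cy])        # pixel offset from image centre
```

Stacking this residual over the horizon alongside the UAV's dynamic constraints gives exactly the kind of non-linear least-squares problem the abstract describes solving.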