155 research outputs found
Vision-based localization methods under GPS-denied conditions
This paper reviews vision-based localization methods in GPS-denied
environments and classifies the mainstream methods into Relative Vision
Localization (RVL) and Absolute Vision Localization (AVL). For RVL, we discuss
the broad application of optical flow in feature extraction-based Visual
Odometry (VO) solutions and introduce advanced optical flow estimation methods.
For AVL, we review recent advances in Visual Simultaneous Localization and
Mapping (VSLAM) techniques, from optimization-based methods to Extended Kalman
Filter (EKF) based methods. We also introduce the application of offline map
registration and lane vision detection schemes to achieve Absolute Visual
Localization. This paper compares the performance and applications of
mainstream methods for visual localization and provides suggestions for future
studies. Comment: 32 pages, 15 figures
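The EKF-based VSLAM methods surveyed above share a common core: a predict step driven by the motion model and an update step driven by a measurement. A minimal sketch of one such cycle for a scalar state is shown below; the motion model, measurement model, and noise values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of one Extended Kalman Filter (EKF) predict/update cycle,
# the building block of the EKF-based VSLAM methods discussed above.
# State: 1-D position; control: displacement; measurement: noisy position.
# Noise parameters Q and R are illustrative assumptions.

def ekf_step(x, P, u, z, Q=0.1, R=0.5):
    """One predict + update cycle for a scalar state.

    x: prior state estimate, P: prior variance,
    u: control (commanded displacement), z: position measurement,
    Q: process noise variance, R: measurement noise variance.
    """
    # Predict: motion model x' = x + u (Jacobian F = 1)
    x_pred = x + u
    P_pred = P + Q

    # Update: measurement model z = x (Jacobian H = 1)
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)  # correct toward the measurement
    P_new = (1 - K) * P_pred           # variance shrinks after the update
    return x_new, P_new

x, P = 0.0, 1.0                        # uncertain initial position
x, P = ekf_step(x, P, u=1.0, z=1.2)    # move 1.0, then observe 1.2
```

In a real VSLAM back-end the state also holds orientation and landmark positions, and the Jacobians are no longer trivial, but the predict/update structure is the same.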
Towards bio-inspired unsupervised representation learning for indoor aerial navigation
Aerial navigation in GPS-denied indoor environments is still an open
challenge. Drones can perceive the environment from a richer set of viewpoints,
while having more stringent compute and energy constraints than other
autonomous platforms. To tackle this problem, this research presents a
biologically inspired deep-learning algorithm for simultaneous localization and
mapping (SLAM) and its application in a drone navigation system. We propose an
unsupervised representation learning method that yields low-dimensional latent
state descriptors, that mitigates the sensitivity to perceptual aliasing, and
works on power-efficient, embedded hardware. The designed algorithm is
evaluated on a dataset collected in an indoor warehouse environment, and
initial results show the feasibility of robust indoor aerial navigation.
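The abstract's key mechanism, matching low-dimensional latent descriptors while guarding against perceptual aliasing, can be sketched with a toy projection and a distance-ratio test. The projection basis, descriptor dimensions, and threshold below are illustrative assumptions, not the paper's learned model.

```python
# Hedged sketch: match a low-dimensional latent descriptor against stored
# place descriptors, rejecting ambiguous matches (perceptual aliasing)
# with a distance-ratio test. All values are illustrative assumptions.

def project(feature, basis):
    """Project a high-dimensional feature onto a low-dimensional basis."""
    return [sum(f * b for f, b in zip(feature, row)) for row in basis]

def match_place(query, map_descriptors, ratio=0.8):
    """Return index of best-matching place, or None when the best and
    second-best distances are too close (two places look alike)."""
    dists = sorted(
        (sum((q - m) ** 2 for q, m in zip(query, d)) ** 0.5, i)
        for i, d in enumerate(map_descriptors)
    )
    if len(dists) > 1 and dists[0][0] > ratio * dists[1][0]:
        return None  # ambiguous: likely perceptual aliasing
    return dists[0][1]

basis = [[1, 0, 0, 0], [0, 1, 0, 0]]           # toy 4-D -> 2-D projection
places = [[0.0, 0.0], [5.0, 5.0], [0.1, 0.1]]  # stored latent descriptors
q = project([0.05, 0.05, 9.0, 9.0], basis)     # query lands between 0 and 2
```

Here `q` is nearly equidistant from places 0 and 2, so the ratio test abstains instead of committing to a wrong loop closure; a query near place 1 matches unambiguously.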
Active Mapping and Robot Exploration: A Survey
Simultaneous localization and mapping addresses the problem of building a map of the environment without any prior information, based on the data obtained from one or more sensors. In most situations the robot is driven by a human operator, but some systems are capable of navigating autonomously while mapping, which is called active simultaneous localization and mapping. This strategy focuses on actively computing the trajectories to explore the environment while building a map with minimum error. In this paper, a comprehensive review of the research work developed in this field is provided, targeting the most relevant contributions in indoor mobile robotics. This research was funded by the ELKARTEK project ELKARBOT KK-2020/00092 of the Basque Government.
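A common entry point to the active-SLAM strategies this survey covers is frontier-based exploration: the robot steers itself toward known-free cells that border unexplored space. The grid encoding and the nearest-frontier policy below are illustrative assumptions; real systems also weigh expected reductions in map uncertainty.

```python
# Minimal sketch of frontier-based exploration, a classic active-mapping
# policy: BFS from the robot over free cells and return the first cell
# adjacent to unknown space. Grid values are illustrative assumptions.

from collections import deque

FREE, UNKNOWN, WALL = ".", "?", "#"

def nearest_frontier(grid, start):
    """Breadth-first search over free cells; return the closest frontier
    cell (free cell with an unknown neighbour), or None when done."""
    rows, cols = len(grid), len(grid[0])

    def is_frontier(r, c):
        return grid[r][c] == FREE and any(
            0 <= r + dr < rows and 0 <= c + dc < cols
            and grid[r + dr][c + dc] == UNKNOWN
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
        )

    queue, seen = deque([start]), {start}
    while queue:
        r, c = queue.popleft()
        if is_frontier(r, c):
            return (r, c)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == FREE and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None  # no frontier left: exploration is finished

grid = [list("...#?"),
        list("...#?"),
        list(".....")]
```

Starting from the top-left corner, the search walks around the wall and selects the free cell below the unknown region as the next exploration goal.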
Real-time single image depth perception in the wild with handheld devices
Depth perception is paramount to tackle real-world problems, ranging from
autonomous driving to consumer applications. For the latter, depth estimation
from a single image represents the most versatile solution, since a standard
camera is available on almost any handheld device. Nonetheless, two main issues
limit its practical deployment: i) the low reliability when deployed
in-the-wild and ii) the demanding resource requirements to achieve real-time
performance, often not compatible with such devices. Therefore, in this paper,
we deeply investigate these issues showing how they are both addressable
adopting appropriate network design and training strategies -- also outlining
how to map the resulting networks on handheld devices to achieve real-time
performance. Our thorough evaluation highlights the ability of such fast
networks to generalize well to new environments, a crucial feature required to
tackle the extremely varied contexts faced in real applications. Indeed, to
further support this evidence, we report experimental results concerning
real-time depth-aware augmented reality and image blurring with smartphones
in-the-wild. Comment: 11 pages, 9 figures
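The depth-aware image blurring application mentioned in this abstract reduces to a simple idea: keep pixels near the focus depth sharp and smooth the rest. The 1-D "image", depth values, and box blur below are illustrative assumptions, not the paper's pipeline.

```python
# Illustrative sketch (not the paper's implementation) of depth-aware
# blurring: pixels whose estimated depth exceeds the focus depth are
# replaced by a local average, while in-focus pixels stay sharp.
# A 1-D image keeps the example tiny; real code works on 2-D images.

def depth_aware_blur(pixels, depth, focus_depth, radius=1):
    """Blur pixels whose estimated depth exceeds focus_depth."""
    out = []
    for i, (p, d) in enumerate(zip(pixels, depth)):
        if d <= focus_depth:
            out.append(p)                          # in focus: keep sharp
        else:
            lo = max(0, i - radius)
            hi = min(len(pixels), i + radius + 1)
            window = pixels[lo:hi]
            out.append(sum(window) / len(window))  # background: box blur
    return out

pixels = [10, 20, 30, 40, 50]
depth = [1.0, 1.0, 5.0, 5.0, 5.0]      # last three pixels lie in the background
result = depth_aware_blur(pixels, depth, focus_depth=2.0)
```

The first two pixels pass through unchanged, while the background pixels are averaged with their neighbours, producing the synthetic shallow depth-of-field effect the paper demonstrates on smartphones.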
A Comprehensive Review on Autonomous Navigation
The field of autonomous mobile robots has undergone dramatic advancements
over the past decades. Despite achieving important milestones, several
challenges are yet to be addressed. Aggregating the achievements of the robotic
community as survey papers is vital to keep track of the current
state of the art and the challenges that must be tackled in the future. This
paper tries to provide a comprehensive review of autonomous mobile robots
covering topics such as sensor types, mobile robot platforms, simulation tools,
path planning and following, sensor fusion methods, obstacle avoidance, and
SLAM. The motivation for presenting a survey paper is twofold. First, the
autonomous navigation field evolves fast, so writing survey papers regularly is
crucial to keep the research community aware of the current status of this field.
Second, deep learning methods have revolutionized many fields including
autonomous navigation. Therefore, it is necessary to give an appropriate
treatment of the role of deep learning in autonomous navigation as well, which
this paper covers. Future work and research gaps are also discussed.
Object-Aware Tracking and Mapping
Reasoning about geometric properties of digital cameras and optical physics enabled
researchers to build methods that localise cameras in 3D space from a video
stream, while – often simultaneously – constructing a model of the environment.
Related techniques have evolved substantially since the 1980s, leading to increasingly
accurate estimations. Traditionally, however, the quality of results is strongly
affected by the presence of moving objects, incomplete data, or difficult surfaces
– i.e. surfaces that are not Lambertian or lack texture. One insight of this work is
that these problems can be addressed by going beyond geometrical and optical constraints,
in favour of object level and semantic constraints. Incorporating specific
types of prior knowledge in the inference process, such as motion or shape priors,
leads to approaches with distinct advantages and disadvantages.
After introducing relevant concepts in Chapter 1 and Chapter 2, methods for building
object-centric maps in dynamic environments using motion priors are investigated
in Chapter 5. Chapter 6 addresses the same problem as Chapter 5, but presents
an approach which relies on semantic priors rather than motion cues. To fully exploit
semantic information, Chapter 7 discusses the conditioning of shape representations
on prior knowledge and the practical application to monocular, object-aware
reconstruction systems.
- …