Ego-Downward and Ambient Video based Person Location Association
Using an ego-centric camera for localization and tracking is highly desirable
for urban navigation and indoor assistive systems when GPS is unavailable or
not accurate enough. Traditional hand-designed feature tracking and estimation
approaches fail without visible features. Recently, several works have explored
using context features for localization. However, all of them suffer severe
accuracy loss when no visual context information is available. To provide a
possible solution to this problem, this paper proposes a camera system with
both an ego-downward and a third-static view that performs localization and
tracking with a learning approach. In addition, we propose a novel action and
motion verification model for cross-view verification and localization. We
performed comparative experiments on our collected dataset, which covers
same-dressing, gender, and background diversity. Results indicate that the
proposed model improves localization accuracy. Finally, we tested the model in
multi-person scenarios and obtained an average accuracy
Perception-aware Path Planning
In this paper, we give a double twist to the problem of planning under
uncertainty. State-of-the-art planners seek to minimize the localization
uncertainty by only considering the geometric structure of the scene. In this
paper, we argue that motion planning for vision-controlled robots should be
perception aware in that the robot should also favor texture-rich areas to
minimize the localization uncertainty during a goal-reaching task. Thus, we
describe how to optimally incorporate the photometric information (i.e.,
texture) of the scene, in addition to the geometric one, to compute the
uncertainty of vision-based localization during path planning. To avoid the
caveats of feature-based localization systems (i.e., dependence on feature type
and user-defined thresholds), we use dense, direct methods. This allows us to
compute the localization uncertainty directly from the intensity values of
every pixel in the image. We also describe how to compute trajectories online,
including scenarios with no prior knowledge of the map. The proposed
framework is general and can easily be adapted to different robotic platforms
and scenarios. The effectiveness of our approach is demonstrated with extensive
experiments in both simulated and real-world environments using a
vision-controlled micro aerial vehicle.
Comment: 16 pages, 20 figures, revised version. Conditionally accepted for
IEEE Transactions on Robotics
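The abstract's central idea, that texture-rich areas reduce vision-based localization uncertainty, can be illustrated with a minimal sketch. In dense direct methods, the information content of the photometric error grows with the squared image gradients, so a scalar texture proxy can rank candidate views. The function name and the scalar proxy below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def texture_information(image: np.ndarray) -> float:
    """Proxy for photometric localization information.

    In dense direct methods, the information of the photometric error
    grows with the sum of squared image gradients, so strongly textured
    regions yield lower localization uncertainty. This scalar proxy is
    an illustrative simplification, not the paper's formulation.
    """
    gy, gx = np.gradient(image.astype(np.float64))
    return float(np.sum(gx**2 + gy**2))

# A perception-aware planner would prefer viewpoints whose expected
# images score high on this proxy (lower uncertainty), all else equal.
textured = np.tile(np.array([0.0, 1.0]), (8, 4))  # alternating stripes
flat = np.full((8, 8), 0.5)                       # textureless wall
assert texture_information(textured) > texture_information(flat)
```

A planner could fold such a score into its cost function as a penalty on expected low-texture views, trading off path length against localization uncertainty.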
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and a tutorial for SLAM users. By looking at the
published research with a critical eye, we delineate open challenges and new
research issues that still deserve careful scientific investigation. The paper
also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
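The "de-facto standard formulation" this abstract refers to is maximum a posteriori (MAP) estimation over a factor graph. A common way to write it (notation assumed here from standard SLAM literature, not taken from this abstract) is:

```latex
% MAP estimate of robot states and landmarks X given measurements Z
X^{\star} = \arg\max_{X}\; p(X \mid Z)
          = \arg\max_{X}\; p(X) \prod_{k} p(z_k \mid X_k)
% Under Gaussian noise models this reduces to nonlinear least squares:
X^{\star} = \arg\min_{X}\; \sum_{k} \lVert h_k(X_k) - z_k \rVert^{2}_{\Omega_k}
```

Here $h_k$ is the measurement model for the subset of variables $X_k$ involved in measurement $z_k$, and $\Omega_k$ is the corresponding information matrix.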