Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial for those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
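The de-facto standard formulation referred to above is maximum-a-posteriori estimation over a factor graph; as a generic sketch (the symbols are illustrative, not the survey's exact notation):

```latex
X^{\star} = \arg\max_{X} \, p(X \mid Z)
          = \arg\min_{X} \sum_{i} \left\| h_i(X_i) - z_i \right\|^{2}_{\Sigma_i}
```

where X collects the robot poses and map variables, each measurement z_i has model h_i and noise covariance Sigma_i, and the second equality (a weighted nonlinear least-squares problem) assumes Gaussian measurement noise.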
Real-Time Panoramic Tracking for Event Cameras
Event cameras are a paradigm shift in camera technology. Instead of full
frames, the sensor captures a sparse set of events caused by intensity changes.
Since only the changes are transferred, these cameras can capture fast
movements of objects in the scene or of the camera itself. In this work we
propose a novel method to perform camera tracking of event cameras in a
panoramic setting with three degrees of freedom. We propose a direct camera
tracking formulation, similar to the state of the art in visual odometry. We show
that the minimal information needed for simultaneous tracking and mapping is
the spatial position of events, without using the appearance of the imaged
scene point. We verify the robustness to fast camera movements and dynamic
objects in the scene on a recently proposed dataset and self-recorded
sequences.
Comment: Accepted to International Conference on Computational Photography
201
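The claim above is that event positions alone suffice for 3-DoF panoramic tracking and mapping. A minimal sketch of the underlying geometry (hypothetical function and parameter names, not the paper's implementation): back-project an event pixel to a ray, rotate it by the current rotation estimate, and look up the corresponding panorama cell.

```python
import numpy as np

def event_to_panorama(px, py, R, K_inv, pano_w, pano_h):
    """Map an event at pixel (px, py) into equirectangular panorama
    coordinates, given the current 3-DoF camera rotation R and the
    inverse intrinsic matrix K_inv. A geometric sketch only."""
    # Back-project the pixel to a unit ray in the camera frame.
    ray = K_inv @ np.array([px, py, 1.0])
    ray /= np.linalg.norm(ray)
    # Rotate the ray into the panorama (world) frame.
    d = R @ ray
    # Spherical coordinates -> equirectangular panorama pixel.
    lon = np.arctan2(d[0], d[2])           # in [-pi, pi]
    lat = np.arcsin(np.clip(d[1], -1, 1))  # in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * pano_w
    v = (lat / np.pi + 0.5) * pano_h
    return u, v
```

With the identity rotation, an event at the principal point lands at the center of the panorama; accumulating such lookups per event is one simple way to build a panoramic event map.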
Long-term experiments with an adaptive spherical view representation for navigation in changing environments
Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as representing the local 3D geometry of the environment. A series of experiments demonstrates the persistence performance of the proposed system in real changing environments, including an analysis of the long-term stability.
Accurate position tracking with a single UWB anchor
Accurate localization and tracking are fundamental requirements for robotic
applications. Localization systems such as GPS, optical tracking, and simultaneous
localization and mapping (SLAM) are used in daily life activities, research,
and commercial applications. Ultra-wideband (UWB) technology provides another
avenue to accurately locate devices both indoors and outdoors. In this paper, we
study a localization solution with a single UWB anchor, instead of the
traditional multi-anchor setup. Besides the challenge of a single UWB ranging
source, the only other sensor we require is a low-cost 9 DoF inertial
measurement unit (IMU). Under such a configuration, we propose continuous
monitoring of UWB range changes to estimate the robot speed when moving on a
line. By combining speed estimation with orientation estimation from the IMU,
the system becomes temporally observable. We use an Extended Kalman
Filter (EKF) to estimate the pose of a robot. With our solution, we can
effectively correct the accumulated error and maintain accurate tracking of a
moving robot.
Comment: Accepted by ICRA202
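The fusion step described above can be sketched as a planar EKF that dead-reckons with IMU-derived speed and heading and corrects with the single anchor's range. This is a minimal sketch of the single-anchor idea (generic state and names, not the paper's filter, which additionally estimates speed from range changes):

```python
import numpy as np

def ekf_range_step(p, P, v, theta, dt, r_meas, anchor, Q, R_noise):
    """One predict/update cycle: state p = [x, y] position, covariance P.
    Predict with speed v and heading theta (from the IMU); update with a
    single UWB range measurement r_meas to the known anchor position."""
    # Predict: dead-reckon along the current heading.
    p_pred = p + v * dt * np.array([np.cos(theta), np.sin(theta)])
    P_pred = P + Q
    # Update: the measurement model is the range to the anchor.
    diff = p_pred - anchor
    r_pred = np.linalg.norm(diff)
    H = (diff / r_pred).reshape(1, 2)   # Jacobian of ||p - anchor||
    S = H @ P_pred @ H.T + R_noise      # innovation covariance (scalar)
    K = P_pred @ H.T / S                # Kalman gain, shape (2, 1)
    p_new = p_pred + (K * (r_meas - r_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return p_new, P_new
```

A single range only constrains the position along the anchor direction, which is why the motion (speed plus heading) over time is needed for the full pose to become observable.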
A surgical system for automatic registration, stiffness mapping and dynamic image overlay
In this paper we develop a surgical system using the da Vinci research kit
(dVRK) that is capable of autonomously searching for tumors and dynamically
displaying the tumor location using augmented reality. Such a system has the
potential to quickly reveal the location and shape of tumors and visually
overlay that information to reduce the cognitive overload of the surgeon. We
believe that our approach is one of the first to incorporate state-of-the-art
methods in registration, force sensing and tumor localization into a unified
surgical system. First, the preoperative model is registered to the
intra-operative scene using a Bingham distribution-based filtering approach. An
active level set estimation is then used to find the location and the shape of
the tumors. We use a recently developed miniature force sensor to perform the
palpation. The estimated stiffness map is then dynamically overlaid onto the
registered preoperative model of the organ. We demonstrate the efficacy of our
system by performing experiments on phantom prostate models with embedded stiff
inclusions.
Comment: International Symposium on Medical Robotics (ISMR 2018)
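The palpation step above yields a stiffness estimate at each probed point. As a minimal sketch of turning such measurements into an inclusion estimate (a simple threshold on a hypothetical grid; the paper instead uses active level-set estimation to choose palpation points and recover the boundary):

```python
import numpy as np

def estimate_inclusion(stiffness_grid, threshold):
    """Return a boolean mask of the suspected stiff inclusion, given a
    grid of palpation-derived stiffness values. Thresholding sketch only;
    the actual system uses active level-set estimation."""
    return stiffness_grid > threshold

# Hypothetical 4x4 stiffness map with an embedded stiff 2x2 inclusion.
grid = np.full((4, 4), 1.0)
grid[1:3, 1:3] = 5.0
mask = estimate_inclusion(grid, threshold=3.0)
```

The resulting mask is what would then be overlaid, registered to the preoperative organ model, in the augmented-reality display.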
Microtesla MRI of the human brain combined with MEG
One of the challenges in functional brain imaging is integration of
complementary imaging modalities, such as magnetoencephalography (MEG) and
functional magnetic resonance imaging (fMRI). MEG, which uses highly sensitive
superconducting quantum interference devices (SQUIDs) to directly measure
magnetic fields of neuronal currents, cannot be combined with conventional
high-field MRI in a single instrument. Indirect matching of MEG and MRI data
leads to significant co-registration errors. A recently proposed imaging method
- SQUID-based microtesla MRI - can be naturally combined with MEG in the same
system to directly provide structural maps for MEG-localized sources. It
enables easy and accurate integration of MEG and MRI/fMRI, because microtesla
MR images can be precisely matched to structural images provided by high-field
MRI and other techniques. Here we report the first images of the human brain by
microtesla MRI, together with auditory MEG (functional) data, recorded using
the same seven-channel SQUID system during the same imaging session. The images
were acquired at a 46 microtesla measurement field with pre-polarization at 30
mT. We also estimated transverse relaxation times for different tissues at
microtesla fields. Our results demonstrate the feasibility and potential of human
brain imaging by microtesla MRI. They also show that two new types of imaging
equipment - low-cost systems for anatomical MRI of the human brain at
microtesla fields, and more advanced instruments for combined functional (MEG)
and structural (microtesla MRI) brain imaging - are practical.
Comment: 8 pages, 5 figures - accepted by JM
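For a sense of scale, the proton Larmor frequency at the fields quoted above follows from the standard gyromagnetic ratio (gamma/2pi ≈ 42.577 MHz/T), putting the signal near 2 kHz at the measurement field, orders of magnitude below conventional high-field MRI:

```python
# Proton Larmor frequency f = (gamma / 2pi) * B for the fields in the
# abstract; 42.577 MHz/T is the standard proton gyromagnetic ratio.
GAMMA_OVER_2PI = 42.577e6  # Hz per tesla

def larmor_hz(b_tesla):
    """Precession frequency of protons in a field of b_tesla."""
    return GAMMA_OVER_2PI * b_tesla

f_measure = larmor_hz(46e-6)  # 46 microtesla measurement field, ~2 kHz
f_prepol = larmor_hz(30e-3)   # 30 mT pre-polarization field, ~1.3 MHz
```

Operating in the kHz range is what allows SQUID pickup coils designed for MEG to also acquire the MR signal in the same instrument.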