Extended Object Tracking: Introduction, Overview and Applications
This article provides an elaborate overview of current research in extended
object tracking. We provide a clear definition of the extended object tracking
problem and discuss its delimitation to other types of object tracking. Next,
different aspects of extended object modelling are extensively discussed.
Subsequently, we give a tutorial introduction to two basic and widely used
extended object tracking approaches: the random matrix approach and the Kalman
filter-based approach for star-convex shapes. The next part treats the tracking
of multiple extended objects and elaborates on how the large number of feasible
association hypotheses can be tackled using both Random Finite Set (RFS) and
Non-RFS multi-object trackers. The article concludes with a summary of current
applications, where four example applications involving camera, X-band radar,
light detection and ranging (lidar), and red-green-blue-depth (RGB-D) sensors are
highlighted.
Comment: 30 pages, 19 figures
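As a rough numerical illustration of the random matrix approach named in this abstract, the sketch below updates a 2D centroid estimate and an elliptical extent matrix from a batch of detections. The function name, the scalar extent confidence `nu`, and the simplified moment-blending update are illustrative assumptions, not the survey's exact equations:

```python
import numpy as np

def random_matrix_update(m, P, X, nu, Z, R):
    """One simplified random-matrix measurement update.

    m, P : kinematic mean (2,) and covariance (2,2) of the object centroid
    X, nu: extent matrix estimate (2,2) and its scalar confidence
    Z    : (n,2) array of detections on the extended object
    R    : (2,2) sensor noise covariance
    """
    n = Z.shape[0]
    z_bar = Z.mean(axis=0)                  # centroid of the detections
    # Kalman update of the centroid using the measurement mean
    S = P + (X + R) / n                     # innovation covariance
    K = P @ np.linalg.inv(S)
    m_new = m + K @ (z_bar - m)
    P_new = P - K @ S @ K.T
    # extent update: blend prior extent with the measurement scatter
    dZ = Z - z_bar
    Z_spread = dZ.T @ dZ                    # scatter matrix of detections
    nu_new = nu + n
    X_new = (nu * X + Z_spread) / nu_new
    return m_new, P_new, X_new, nu_new
```

The key idea carried over from the random matrix literature is that the detection spread informs the extent estimate while the detection mean drives a standard Kalman update of the kinematics.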
A Survey on Wireless Sensor Network Security
Wireless sensor networks (WSNs) have recently attracted a lot of interest in
the research community due to their wide range of applications. Owing to their
distributed nature and deployment in remote areas, these networks are
vulnerable to numerous security threats that can adversely affect their
proper functioning. This problem is more critical if the network is deployed
for some mission-critical applications such as in a tactical battlefield.
Random failure of nodes is also very likely in real-life deployment scenarios.
Due to resource constraints in the sensor nodes, traditional security
mechanisms with large overhead of computation and communication are infeasible
in WSNs. Security in sensor networks is, therefore, a particularly challenging
task. This paper discusses the current state of the art in security mechanisms
for WSNs. Various types of attacks are discussed and their countermeasures
presented. A brief discussion on the future direction of research in WSN
security is also included.
Comment: 24 pages, 4 figures, 2 tables
Obstacle-aware Adaptive Informative Path Planning for UAV-based Target Search
Target search with unmanned aerial vehicles (UAVs) is a problem relevant to
many scenarios, e.g., search and rescue (SaR). However, a key challenge is
planning paths for maximal search efficiency given flight time constraints. To
address this, we propose the Obstacle-aware Adaptive Informative Path Planning
(OA-IPP) algorithm for target search in cluttered environments using UAVs. Our
approach leverages a layered planning strategy using a Gaussian Process
(GP)-based model of target occupancy to generate informative paths in
continuous 3D space. Within this framework, we introduce an adaptive replanning
scheme which allows us to trade off between information gain, field coverage,
sensor performance, and collision avoidance for efficient target detection.
Extensive simulations show that our OA-IPP method performs better than
state-of-the-art planners, and we demonstrate its application in a realistic
urban SaR scenario.
Comment: Paper accepted for the International Conference on Robotics and
Automation (ICRA-2019), to be held in Montreal, Canada
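The informative-planning idea this abstract describes can be caricatured as greedy viewpoint selection under a Gaussian Process model: each step picks the collision-free candidate whose GP posterior variance (i.e., expected information gain) is largest. The sketch below is a toy stand-in under that assumption; the function names, kernel, and selection criterion are not the OA-IPP algorithm itself:

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def greedy_informative_path(candidates, obstacles, budget, ell=1.0, noise=1e-3):
    """Greedy informative viewpoint selection (toy stand-in for OA-IPP).

    candidates: (n,2) candidate viewpoints
    obstacles : set of candidate indices that are blocked
    budget    : number of viewpoints to pick (models flight-time limits)
    """
    chosen = []
    X = np.empty((0, 2))                        # viewpoints visited so far
    for _ in range(budget):
        K_cc = rbf(candidates, candidates, ell)
        if len(X):
            K_cx = rbf(candidates, X, ell)
            K_xx = rbf(X, X, ell) + noise * np.eye(len(X))
            var = np.diag(K_cc - K_cx @ np.linalg.solve(K_xx, K_cx.T)).copy()
        else:
            var = np.diag(K_cc).copy()
        var[list(obstacles)] = -np.inf          # never plan into an obstacle
        for i in chosen:
            var[i] = -np.inf                    # don't revisit a viewpoint
        best = int(np.argmax(var))
        chosen.append(best)
        X = np.vstack([X, candidates[best]])
    return chosen
```

The variance-greedy criterion naturally spreads viewpoints over the field, which is the coverage-versus-information trade-off the abstract alludes to.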
ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans
We introduce ScanComplete, a novel data-driven approach for taking an
incomplete 3D scan of a scene as input and predicting a complete 3D model along
with per-voxel semantic labels. The key contribution of our method is its
ability to handle large scenes with varying spatial extent, managing the cubic
growth in data size as scene size increases. To this end, we devise a
fully-convolutional generative 3D CNN model whose filter kernels are invariant
to the overall scene size. The model can be trained on scene subvolumes but
deployed on arbitrarily large scenes at test time. In addition, we propose a
coarse-to-fine inference strategy in order to produce high-resolution output
while also leveraging large input context sizes. In an extensive series of
experiments, we carefully evaluate different model design choices, considering
both deterministic and probabilistic models for completion and semantic
inference. Our results show that we outperform other methods not only in the
size of the environments handled and processing efficiency, but also with
regard to completion quality and semantic segmentation performance by a
significant margin.
Comment: Video: https://youtu.be/5s5s8iH0NF
Learned Semantic Multi-Sensor Depth Map Fusion
Volumetric depth map fusion based on truncated signed distance functions has
become a standard method and is used in many 3D reconstruction pipelines. In
this paper, we are generalizing this classic method in multiple ways: 1)
Semantics: Semantic information enriches the scene representation and is
incorporated into the fusion process. 2) Multi-Sensor: Depth information can
originate from different sensors or algorithms with very different noise and
outlier statistics which are considered during data fusion. 3) Scene denoising
and completion: Sensors can fail to recover depth for certain materials and
light conditions, or data is missing due to occlusions. Our method denoises the
geometry, closes holes and computes a watertight surface for every semantic
class. 4) Learning: We propose a neural network reconstruction method that
unifies all these properties within a single powerful framework. Our method
learns sensor or algorithm properties jointly with semantic depth fusion and
scene completion and can also be used as an expert system, e.g. to unify the
strengths of various photometric stereo algorithms. Our approach is the first
to unify all these properties. Experimental evaluations on both synthetic and
real data sets demonstrate clear improvements.
Comment: 11 pages, 7 figures, 2 tables, accepted for the 2nd Workshop on 3D
Reconstruction in the Wild (3DRW2019) in conjunction with ICCV 2019
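The classical volumetric fusion this abstract generalizes is a weighted running average of truncated signed distances per voxel. A minimal sketch, assuming per-voxel sensor confidences as the hook for the multi-sensor extension (the function and its signature are illustrative, not the paper's method):

```python
import numpy as np

def fuse_tsdf(tsdf, weight, new_tsdf, sensor_weight, trunc=1.0):
    """Weighted running-average TSDF fusion, Curless-and-Levoy style.

    tsdf, weight : current per-voxel signed distance and accumulated weight
    new_tsdf     : this sensor's truncated signed distances (same shape)
    sensor_weight: per-voxel confidence of this sensor (models its noise level)
    """
    new_tsdf = np.clip(new_tsdf, -trunc, trunc)   # enforce truncation band
    w_new = weight + sensor_weight
    fused = np.where(
        w_new > 0,
        (weight * tsdf + sensor_weight * new_tsdf) / np.maximum(w_new, 1e-12),
        tsdf,                                      # unobserved voxels unchanged
    )
    return fused, w_new
```

Giving noisier sensors smaller `sensor_weight` values is the simplest way to encode the "very different noise and outlier statistics" the abstract mentions; the paper replaces such hand-set weights with learned ones.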
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists in the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
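The de facto standard formulation the abstract refers to is maximum a posteriori estimation over a factor graph. Reduced to a linear 1D pose graph (odometry chain plus one loop closure), it becomes a small weighted least-squares problem; the function and edge format below are illustrative assumptions:

```python
import numpy as np

def solve_pose_graph_1d(n, edges):
    """Tiny linear pose-graph SLAM on a line.

    n     : number of poses x_0..x_{n-1}, with x_0 anchored at 0 by a prior
    edges : list of (i, j, z, w): relative measurement x_j - x_i = z, weight w
    Returns the least-squares pose estimates.
    """
    A = [np.eye(1, n, 0)[0]]            # prior row anchoring x_0 = 0
    b = [0.0]
    for i, j, z, w in edges:
        row = np.zeros(n)
        row[i], row[j] = -np.sqrt(w), np.sqrt(w)   # whitened residual row
        A.append(row)
        b.append(np.sqrt(w) * z)
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x
```

A loop-closure edge that disagrees with the odometry chain pulls all intermediate poses toward a compromise, which is exactly the error-distribution behavior that makes the graph formulation robust at scale.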