3D Registration of Aerial and Ground Robots for Disaster Response: An Evaluation of Features, Descriptors, and Transformation Estimation
Global registration of heterogeneous ground and aerial mapping data is a
challenging task. This is especially difficult in disaster response scenarios
when we have no prior information on the environment and cannot assume the
regular order of man-made environments or meaningful semantic cues. In this
work we extensively evaluate different approaches to globally register UGV
generated 3D point-cloud data from LiDAR sensors with UAV generated point-cloud
maps from vision sensors. The approaches are realizations of different
selections for: a) local features: key-points or segments; b) descriptors:
FPFH, SHOT, or ESF; and c) transformation estimations: RANSAC or FGR.
Additionally, we compare the results against standard approaches like applying
ICP after a good prior transformation has been given. The evaluation criteria
include the distance which a UGV needs to travel to successfully localize, the
registration error, and the computational cost. In this context, we report our
findings on effectively performing the task on two new Search and Rescue
datasets. Our results have the potential to help the community take informed
decisions when registering point-cloud maps from ground robots to those from
aerial robots.
Comment: Awarded Best Paper at the 15th IEEE International Symposium on Safety, Security, and Rescue Robotics 2017 (SSRR 2017).
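The transformation-estimation alternatives this abstract compares (RANSAC vs. FGR) both reduce to recovering a rigid transform from putative feature correspondences. A minimal numpy sketch of RANSAC over the closed-form Kabsch/SVD solution, with illustrative function names not taken from the paper:

```python
import numpy as np

def kabsch(src, dst):
    """Closed-form rigid transform (R, t) minimizing ||R @ src_i + t - dst_i||."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # reflection guard
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def ransac_rigid(src, dst, iters=200, thresh=0.05, seed=0):
    """RANSAC over minimal 3-point samples; refines on the best inlier set."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        R, t = kabsch(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        n = int((err < thresh).sum())
        if n > best_inliers:
            best_inliers, best = n, (R, t)
    R, t = best
    inliers = np.linalg.norm(src @ R.T + t - dst, axis=1) < thresh
    return kabsch(src[inliers], dst[inliers])
```

In the setting above, `src`/`dst` would be the 3D positions of matched key-points or segment centroids from the UGV and UAV maps; FGR replaces the sampling loop with a graduated non-convex optimization over all correspondences.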
ATG-PVD: Ticketing Parking Violations on A Drone
In this paper, we introduce a novel suspect-and-investigate framework, which
can be easily embedded in a drone for automated parking violation detection
(PVD). Our proposed framework consists of: 1) SwiftFlow, an efficient and
accurate convolutional neural network (CNN) for unsupervised optical flow
estimation; 2) Flow-RCNN, a flow-guided CNN for car detection and
classification; and 3) an illegally parked car (IPC) candidate investigation
module developed based on visual SLAM. The proposed framework was successfully
embedded in a drone from ATG Robotics. The experimental results demonstrate
that, firstly, our proposed SwiftFlow outperforms all other state-of-the-art
unsupervised optical flow estimation approaches in terms of both speed and
accuracy; secondly, IPC candidates can be effectively and efficiently detected
by our proposed Flow-RCNN, with a better performance than our baseline network,
Faster-RCNN; finally, the actual IPCs can be successfully verified by our
investigation module after drone re-localization.
Comment: 17 pages, 11 figures and 3 tables. This paper is accepted by ECCV Workshops 202
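The "suspect" stage of the framework can be pictured as follows: a detected car whose optical flow matches the background (ego-motion) flow is stationary, and if it sits in a no-parking zone it becomes an IPC candidate for the investigation module. A toy numpy sketch of that logic (not the paper's code; in the real pipeline `flow` would come from SwiftFlow and the boxes from Flow-RCNN):

```python
import numpy as np

def suspect_ipcs(car_boxes, flow, bg_flow, zone_mask, motion_thresh=1.0):
    """Flag detected cars as illegally-parked-car (IPC) candidates.

    A car is a candidate if the mean residual between its optical flow and
    the background (ego-motion) flow is small -- i.e. it is stationary --
    and its box center lies inside a no-parking zone mask.
    Boxes are (x0, y0, x1, y1); flow/bg_flow are HxWx2; zone_mask is HxW bool.
    """
    candidates = []
    for i, (x0, y0, x1, y1) in enumerate(car_boxes):
        residual_flow = flow[y0:y1, x0:x1] - bg_flow[y0:y1, x0:x1]
        residual = np.linalg.norm(residual_flow.reshape(-1, 2), axis=1).mean()
        cx, cy = (x0 + x1) // 2, (y0 + y1) // 2
        if residual < motion_thresh and zone_mask[cy, cx]:
            candidates.append(i)
    return candidates
```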
An Explicit Method for Fast Monocular Depth Recovery in Corridor Environments
Monocular cameras are extensively employed in indoor robotics, but their
performance is limited in visual odometry, depth estimation, and related
applications due to the absence of scale information. Depth estimation refers to
the process of estimating a dense depth map from the corresponding input image.
Existing research mostly addresses this problem through deep learning-based
approaches, but their inference speed is slow, leading to poor real-time
performance. To tackle this challenge, we propose an explicit method for rapid
monocular depth recovery specifically designed for corridor environments,
leveraging the principles of nonlinear optimization. We adopt the virtual
camera assumption to make full use of the prior geometric features of the
scene. The depth estimation problem is transformed into an optimization problem
by minimizing the geometric residual. Furthermore, a novel depth plane
construction technique is introduced to categorize spatial points based on
their possible depths, facilitating swift depth estimation in enclosed
structural scenarios, such as corridors. We also propose a new corridor
dataset, named Corr\_EH\_z, which contains images of a variety of corridors
captured by a UGV camera. An exhaustive set of experiments in different
corridors demonstrates the efficacy of the proposed algorithm.
Comment: 10 pages, 8 figures. arXiv admin note: text overlap with
arXiv:2111.08600 by other authors
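The "explicit" flavor of this approach can be illustrated with the simplest case the virtual-camera assumption enables: once the camera is (virtually) gravity-aligned, the depth of any floor pixel follows in closed form from the intrinsics and the camera height, with no network inference. A hedged sketch (names and the ground-plane-only scope are illustrative; the paper optimizes over multiple depth planes):

```python
import numpy as np

def floor_depth(u, v, K, cam_height):
    """Explicit depth of a floor pixel under the virtual-camera assumption.

    Assumes a gravity-aligned camera with the y-axis pointing down, so the
    floor is the plane y = cam_height in camera coordinates. The
    back-projected ray K^{-1} [u, v, 1] is scaled until it hits that plane;
    the returned value is the z (depth) component of the intersection.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    if ray[1] <= 0:             # ray points at or above the horizon
        return np.inf
    s = cam_height / ray[1]     # scale so the y-component equals cam_height
    return s * ray[2]
```

For example, with focal length 500, principal point (320, 240), and a camera 1.5 m above the floor, the pixel (320, 490) back-projects to a floor point 3 m ahead.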
Exploring the Technical Advances and Limits of Autonomous UAVs for Precise Agriculture in Constrained Environments
In the field of precise agriculture with autonomous unmanned aerial vehicles (UAVs), the utilization of drones holds significant potential to transform crop monitoring, management, and harvesting techniques. However, despite the numerous benefits of UAVs in smart farming, several technical challenges still need to be addressed before their widespread adoption becomes possible, especially in constrained environments. This paper provides a study of the technical aspects and limitations of autonomous UAVs in precise agriculture applications for constrained environments.
3DS-SLAM: A 3D Object Detection based Semantic SLAM towards Dynamic Indoor Environments
The existence of variable factors within the environment can cause a decline
in camera localization accuracy, as it violates the fundamental assumption of a
static environment in Simultaneous Localization and Mapping (SLAM) algorithms.
Recent semantic SLAM systems towards dynamic environments either rely solely on
2D semantic information, or solely on geometric information, or combine their
results in a loosely integrated manner. In this research paper, we introduce
3DS-SLAM, a 3D semantic SLAM system tailored for dynamic scenes with visual 3D
object detection. 3DS-SLAM is a tightly-coupled algorithm that resolves
semantic and geometric constraints sequentially. We designed a 3D part-aware hybrid
transformer for point cloud-based object detection to identify dynamic objects.
Subsequently, we propose a dynamic feature filter based on HDBSCAN clustering
to extract objects with significant absolute depth differences. When compared
against ORB-SLAM2, 3DS-SLAM exhibits an average improvement of 98.01% across
the dynamic sequences of the TUM RGB-D dataset. Furthermore, it surpasses the
performance of the other four leading SLAM systems designed for dynamic
environments.
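The dynamic feature filter described above can be caricatured in a few lines: features falling inside a detected dynamic object and showing a significant absolute depth change between frames are excluded from pose estimation. In this dependency-free sketch a plain per-point threshold stands in for the paper's HDBSCAN clustering, and all names are illustrative:

```python
import numpy as np

def dynamic_feature_filter(depth_prev, depth_curr, in_dynamic_box, thresh=0.3):
    """Flag tracked features as dynamic.

    Toy stand-in for 3DS-SLAM's filter: the paper clusters features with
    HDBSCAN and rejects clusters with large absolute depth differences;
    here a per-point threshold replaces the clustering. A feature is
    dynamic if it lies inside a detected dynamic-object box AND its
    observed depth changed significantly between consecutive frames.
    Returns a boolean mask; the static complement feeds pose estimation.
    """
    depth_diff = np.abs(depth_curr - depth_prev)
    return in_dynamic_box & (depth_diff > thresh)
```

Features where the mask is False would then be the ones kept for ORB-style tracking and bundle adjustment.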