High-Precision Localization Using Ground Texture
Location-aware applications play an increasingly critical role in everyday
life. However, satellite-based localization (e.g., GPS) has limited accuracy
and can be unusable in dense urban areas and indoors. We introduce an
image-based global localization system that is accurate to a few millimeters
and performs reliable localization both indoors and outside. The key idea is to
capture and index distinctive local keypoints in ground textures. This is based
on the observation that ground textures including wood, carpet, tile, concrete,
and asphalt may look random and homogeneous, but all contain cracks, scratches,
or unique arrangements of fibers. These imperfections are persistent, and can
serve as local features. Our system incorporates a downward-facing camera to
capture the fine texture of the ground, together with an image processing
pipeline that locates the captured texture patch in a compact database
constructed offline. We demonstrate the capability of our system to robustly,
accurately, and quickly locate test images on various types of outdoor and
indoor ground surfaces.
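The index-and-lookup idea can be sketched in a few lines, assuming binary keypoint descriptors (e.g., ORB-style byte strings) have already been extracted from each ground patch; the descriptor format, patch ids, and brute-force Hamming matcher below are illustrative choices, not the paper's actual pipeline:

```python
import numpy as np

def build_index(patch_descriptors):
    """Stack per-patch keypoint descriptors into one database array.

    patch_descriptors: dict mapping patch id -> (N_i, D) uint8 array.
    Returns (all_descriptors, labels) for brute-force lookup.
    """
    labels, chunks = [], []
    for patch_id, desc in patch_descriptors.items():
        chunks.append(desc)
        labels.extend([patch_id] * len(desc))
    return np.vstack(chunks), np.array(labels)

def locate(query_desc, db_desc, db_labels):
    """Vote for the database patch owning the most nearest-neighbour matches."""
    votes = {}
    for d in query_desc:
        # Hamming distance between binary descriptors = popcount of XOR
        dist = np.unpackbits(db_desc ^ d, axis=1).sum(axis=1)
        best = db_labels[int(np.argmin(dist))]
        votes[best] = votes.get(best, 0) + 1
    return max(votes, key=votes.get)
```

A real system would also verify the top vote geometrically (e.g., with a RANSAC pose fit over the matched keypoints) before reporting a location.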
Navigation and Guidance for Autonomous Quadcopter Drones Using Deep Learning on Indoor Corridors
Autonomous drones require accurate navigation and localization algorithms to carry out their duties. Outdoors, drones can use GPS for navigation and localization. However, GPS is often unreliable or entirely unavailable indoors. Therefore, in this research, an autonomous indoor drone navigation model was created using a deep learning algorithm to navigate a drone automatically, especially in indoor corridor areas. Only the Caddx Ratel 2 FPV camera mounted on the drone was used as input to the deep learning model, which steers the drone forward through the corridor without colliding with the walls. This research produced two deep learning models: a rotation model that corrects the drone's orientation deviation, with a loss of 0.0010 and a mean squared error of 0.0009, and a translation model that corrects the drone's translation deviation, with a loss of 0.0140 and a mean squared error of 0.011. Implementing the two models on the autonomous drone yields an NCR value of 0.2. We conclude that the difference in resolution and field-of-view between the images captured by the drone's FPV camera and the images used to train the deep learning models causes a discrepancy in the model outputs during deployment, which explains the low NCR values.
Tightly Coupled 3D Lidar Inertial Odometry and Mapping
Ego-motion estimation is a fundamental requirement for most mobile robotic
applications. Through sensor fusion, we can compensate for the deficiencies of
stand-alone sensors and provide more reliable estimates. In this paper, we
introduce a tightly coupled lidar-IMU fusion method. By jointly minimizing
the cost derived from lidar and IMU measurements, the lidar-IMU odometry (LIO)
can maintain acceptable drift over long-term experiments, even in
challenging cases where the lidar measurements are degraded. In addition, to
obtain more reliable estimations of the lidar poses, a rotation-constrained
refinement algorithm (LIO-mapping) is proposed to further align the lidar poses
with the global map. The experimental results demonstrate that the proposed
method can estimate the poses of the sensor pair at the IMU update rate with
high precision, even under fast motion conditions or with insufficient
features.
Comment: Accepted by ICRA 201
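In its simplest one-dimensional form, jointly minimizing a cost over lidar and IMU measurement residuals reduces to an inverse-variance weighted least squares. The toy sketch below illustrates only that fusion principle; the paper's actual estimator optimizes full 6-DoF poses over sliding windows:

```python
def fuse_pose(z_lidar, z_imu, sigma_lidar, sigma_imu):
    """Minimizer of w_l*(x - z_lidar)**2 + w_i*(x - z_imu)**2,
    with weights set to the inverse measurement variances.

    The closed form is the precision-weighted mean of the two
    measurements: the more certain sensor dominates the estimate.
    """
    w_l = 1.0 / sigma_lidar ** 2
    w_i = 1.0 / sigma_imu ** 2
    return (w_l * z_lidar + w_i * z_imu) / (w_l + w_i)
```

With equal variances the estimate is the plain average; as one sensor degrades (its variance grows), the fused estimate smoothly falls back to the other, which is the behavior the abstract describes for degraded lidar measurements.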
Sub-Nanosecond Time of Flight on Commercial Wi-Fi Cards
Time-of-flight, i.e., the time incurred by a signal to travel from
transmitter to receiver, is perhaps the most intuitive way to measure distances
using wireless signals. It is used in major positioning systems such as GPS,
RADAR, and SONAR. However, attempts at using time-of-flight for indoor
localization have failed to deliver acceptable accuracy due to fundamental
limitations in measuring time on Wi-Fi and other RF consumer technologies.
While the research community has developed alternatives for RF-based indoor
localization that do not require time-of-flight, those approaches have their
own limitations that hamper their use in practice. In particular, many existing
approaches need receivers with large antenna arrays while commercial Wi-Fi
nodes have two or three antennas. Other systems require fingerprinting the
environment to create signal maps. More fundamentally, none of these methods
support indoor positioning between a pair of Wi-Fi devices without third-party
support.
In this paper, we present a set of algorithms that measure the time-of-flight
to sub-nanosecond accuracy on commercial Wi-Fi cards. We implement these
algorithms and demonstrate a system that achieves accurate device-to-device
localization, i.e., it enables a pair of Wi-Fi devices to locate each other
without any support from the infrastructure, not even the locations of the
access points.
Comment: 14 pages
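The underlying geometry is straightforward: the round-trip time, minus the responder's turnaround delay, halved and multiplied by the speed of light, gives the distance. A minimal sketch of that conversion follows (variable names are illustrative; obtaining sub-nanosecond timestamps on commodity hardware is the hard part the paper addresses):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def rtt_distance(t_rtt_ns, t_proc_ns):
    """Distance from a round-trip time-of-flight exchange.

    t_rtt_ns:  initiator-measured round-trip time, nanoseconds.
    t_proc_ns: responder's reported processing (turnaround) delay.
    Halving the residual flight time gives the one-way time.
    """
    t_flight_ns = (t_rtt_ns - t_proc_ns) / 2.0
    return C * t_flight_ns * 1e-9
```

The sketch makes the accuracy requirement concrete: light covers roughly 0.3 m per nanosecond, so meter-level ranging already demands timing errors of only a few nanoseconds, and sub-meter accuracy requires the sub-nanosecond resolution claimed in the title.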
A Survey of Positioning Systems Using Visible LED Lights
© 2018 IEEE. As the Global Positioning System (GPS) cannot provide satisfactory performance in indoor environments, indoor positioning technology, which uses indoor wireless signals instead of GPS signals, has grown rapidly in recent years. Meanwhile, visible light communication (VLC) using light devices such as light-emitting diodes (LEDs) is deemed a promising candidate for heterogeneous wireless networks that may collaborate with radio-frequency (RF) wireless networks. In particular, light fidelity has great potential for deployment in future indoor environments because of its high throughput and security advantages. This paper provides a comprehensive study of a novel positioning technology based on visible white LED lights, which has attracted much attention from both academia and industry. The essential characteristics and principles of such systems are discussed in depth, and relevant positioning algorithms and designs are classified and elaborated. The paper undertakes a thorough investigation of current LED-based indoor positioning systems and compares their performance along many dimensions, such as test environment, accuracy, and cost. It also presents indoor hybrid positioning systems that combine VLC with other systems (e.g., inertial sensors and RF systems). We also review and classify outdoor VLC positioning applications for the first time. Finally, this paper surveys major advances as well as open issues, challenges, and future research directions in VLC positioning systems.
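A common building block in the LED positioning algorithms such surveys classify is trilateration: recovering a receiver position from distance estimates to several LED anchors of known location. A minimal least-squares sketch (the anchor layout and distance inputs are illustrative, not from the survey):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Position from distances to known anchors via linear least squares.

    anchors: (K, n) array of anchor coordinates, K >= n + 1.
    dists:   (K,) array of measured distances to each anchor.

    Subtracting the first range equation |x - a_0|^2 = d_0^2 from the
    others cancels the quadratic |x|^2 term, leaving the linear system
    2 (a_i - a_0) . x = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2.
    """
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0 ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

In a VLC system the distances themselves would come from received signal strength or time measurements of each LED, which is where the algorithmic variety the survey catalogs arises.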
Selective Subtraction: An Extension of Background Subtraction
Background subtraction, or scene modeling, techniques model the background of a scene using its stationarity property and classify the scene into two classes, foreground and background. In doing so, most moving objects become foreground indiscriminately, except perhaps for some waving tree leaves, water ripples, or a water fountain, which are typically learned as part of the background using a large training set of video data. Traditional techniques exhibit a number of limitations, including the inability to model partial background or subtract partial foreground, the inflexibility of the model being used, the need for large training data, and computational inefficiency. In this thesis, we present our work to address each of these limitations and propose algorithms in two major areas of research within background subtraction, namely single-view and multi-view techniques. We first propose the use of both spatial and temporal properties to model a dynamic scene and show how the Mapping Convergence framework within Support Vector Mapping Convergence (SVMC) can be used to minimize training data. We also introduce a novel concept of background as the objects other than the foreground, which may include moving objects in the scene that cannot be learned from a training set because they occur only irregularly and sporadically, e.g., a walking person. We propose a selective subtraction method as an alternative to standard background subtraction, and show that a reference plane in a scene viewed by two cameras can be used as the decision boundary between foreground and background. Under our definition, the foreground may actually occur behind a moving object. Our novel use of projective depth as a decision boundary allows us to extend the traditional definition of background subtraction and propose a much more powerful framework. Furthermore, we show that the reference plane can be selected in a very flexible manner, using, for example, the actual moving objects in the scene if needed.
We present a diverse set of examples to show that: (i) the technique performs better than standard background subtraction techniques without the need for training, camera calibration, disparity map estimation, or special camera configurations; (ii) it is potentially more powerful than standard methods because of its flexibility, making it possible to select in real time what to filter out as background, regardless of whether the object is moving or not, or whether it is a rare event or a frequent one; (iii) the technique can be used in a variety of situations, including when images are captured by stationary or hand-held cameras, and for both indoor and outdoor scenes. We provide extensive results to show the effectiveness of the proposed framework in a variety of very challenging environments.
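For contrast with the selective approach described above, the standard background subtraction it extends can be sketched as a running-average scene model plus a per-pixel threshold (the update rate and threshold values below are illustrative defaults, not the thesis's method):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average of the scene.

    Stationary pixels dominate the model over time; a moving object
    only leaks in slowly, at rate alpha per frame.
    """
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25.0):
    """Pixels deviating strongly from the model are labeled foreground."""
    return np.abs(frame - bg) > thresh
```

This baseline makes the limitations listed in the abstract concrete: every sufficiently different pixel becomes foreground indiscriminately, with no way to declare a particular moving object part of the background, which is precisely the flexibility selective subtraction adds.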