Sub-Nanosecond Time of Flight on Commercial Wi-Fi Cards
Time-of-flight, i.e., the time incurred by a signal to travel from
transmitter to receiver, is perhaps the most intuitive way to measure distances
using wireless signals. It is used in major positioning systems such as GPS,
RADAR, and SONAR. However, attempts at using time-of-flight for indoor
localization have failed to deliver acceptable accuracy due to fundamental
limitations in measuring time on Wi-Fi and other RF consumer technologies.
While the research community has developed alternatives for RF-based indoor
localization that do not require time-of-flight, those approaches have their
own limitations that hamper their use in practice. In particular, many existing
approaches need receivers with large antenna arrays while commercial Wi-Fi
nodes have two or three antennas. Other systems require fingerprinting the
environment to create signal maps. More fundamentally, none of these methods
support indoor positioning between a pair of Wi-Fi devices
without third-party support.
In this paper, we present a set of algorithms that measure the time-of-flight
to sub-nanosecond accuracy on commercial Wi-Fi cards. We implement these
algorithms and demonstrate a system that achieves accurate device-to-device
localization, i.e., enables a pair of Wi-Fi devices to locate each other without
any support from the infrastructure, not even the location of the access
points.

Comment: 14 pages
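The abstract's emphasis on sub-nanosecond accuracy follows directly from the physics: radio signals travel at the speed of light, so every nanosecond of timing error translates into roughly 30 cm of range error. A minimal sketch of that conversion (the function name is illustrative, not from the paper):

```python
# Why sub-nanosecond accuracy matters: radio waves travel at the speed
# of light, so each nanosecond of timing error adds ~30 cm of range error.
C = 299_792_458  # speed of light in vacuum, m/s

def tof_to_distance(tof_seconds: float) -> float:
    """Convert a one-way time-of-flight measurement to distance in meters."""
    return C * tof_seconds

# A 1 ns measurement corresponds to ~0.3 m of range.
print(round(tof_to_distance(1e-9), 3))  # 0.3
```

At this scale, meter-level indoor localization requires timing resolution well below the symbol durations of commodity Wi-Fi hardware, which is why naive timestamping fails.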
Learning to Fly by Crashing
How do you learn to navigate an Unmanned Aerial Vehicle (UAV) and avoid
obstacles? One approach is to use a small dataset collected by human experts:
however, high capacity learning algorithms tend to overfit when trained with
little data. An alternative is to use simulation. But the gap between
simulation and real world remains large especially for perception problems. The
reason most research avoids using large-scale real data is the fear of crashes!
In this paper, we propose to bite the bullet and collect a dataset of crashes
itself! We build a drone whose sole purpose is to crash into objects: it
samples naive trajectories and crashes into random objects. We crash our drone
11,500 times to create one of the biggest UAV crash datasets. This dataset
captures the different ways in which a UAV can crash. We use all this negative
flying data in conjunction with positive data sampled from the same
trajectories to learn a simple yet powerful policy for UAV navigation. We show
that this simple self-supervised model is quite effective in navigating the UAV
even in extremely cluttered environments with dynamic obstacles including
humans. For supplementary video see: https://youtu.be/u151hJaGKU
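The core self-supervision trick described above, using crash trajectories as their own labels, can be sketched as follows. This is a hypothetical illustration of the labeling step only; the window size and function name are assumptions, not the paper's values:

```python
# Hypothetical sketch of self-supervised labeling from crash data:
# frames recorded just before a crash become negative examples
# ("do not fly this way"), while earlier frames from the same
# trajectory serve as positive examples. `crash_margin` is an
# illustrative assumption, not a value from the paper.

def label_trajectory(num_frames: int, crash_margin: int = 10):
    """Label frames of a trajectory that ends in a crash.

    Returns a list of (frame_index, label) pairs where label is
    1 (safe to keep flying) or 0 (leads to a crash soon).
    """
    labels = []
    for i in range(num_frames):
        # Frames within `crash_margin` of the crash are negative samples.
        label = 0 if i >= num_frames - crash_margin else 1
        labels.append((i, label))
    return labels

pairs = label_trajectory(30, crash_margin=10)
# yields 20 positive frames followed by 10 negative frames
```

The labeled frames would then feed a standard binary classifier mapping images to a fly/avoid decision, which is what makes the approach self-supervised: no human annotation is needed beyond recording the crash itself.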
Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots
Safety is paramount for mobile robotic platforms such as self-driving cars
and unmanned aerial vehicles. This work is devoted to a task that is
indispensable for safety yet was largely overlooked in the past -- detecting
obstacles that are of very thin structures, such as wires, cables and tree
branches. This is a challenging problem, as thin objects can be problematic for
active sensors such as lidar and sonar and even for stereo cameras. In this
work, we propose to use video sequences for thin obstacle detection. We
represent obstacles with edges in the video frames, and reconstruct them in 3D
using efficient edge-based visual odometry techniques. We provide both a
monocular camera solution and a stereo camera solution. The former incorporates
Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter
enjoys a novel, purely vision-based solution. Experiments demonstrated that the
proposed methods are fast and able to detect thin obstacles robustly and
accurately under various conditions.

Comment: Appeared at IEEE CVPR 2017 Workshop on Embedded Vision
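For the stereo variant, once an edge pixel is matched across the left and right images, its depth follows from standard stereo triangulation. This is a generic sketch of that geometry, not the paper's exact edge-based pipeline; the focal length and baseline values are illustrative assumptions:

```python
# Generic stereo triangulation, not the paper's exact pipeline:
# the depth of a matched edge pixel is Z = f * B / d, where f is the
# focal length in pixels, B the camera baseline in meters, and d the
# disparity in pixels. The default values below are illustrative.

def depth_from_disparity(disparity_px: float,
                         focal_px: float = 700.0,
                         baseline_m: float = 0.12) -> float:
    """Depth in meters of a matched edge pixel from stereo disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A thin wire matched at 14 px disparity: 700 * 0.12 / 14 = 6.0 m.
print(depth_from_disparity(14.0))  # 6.0
```

The practical difficulty with thin structures is that a wire may span only one or two pixels, so dense block matching fails; reconstructing explicit edges, as the abstract describes, sidesteps that by matching along the structure itself.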