Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots
Safety is paramount for mobile robotic platforms such as self-driving cars
and unmanned aerial vehicles. This work is devoted to a task that is
indispensable for safety yet largely overlooked in the past -- detecting
obstacles with very thin structures, such as wires, cables and tree
branches. This is a challenging problem, as thin objects can be problematic for
active sensors such as lidar and sonar and even for stereo cameras. In this
work, we propose to use video sequences for thin obstacle detection. We
represent obstacles with edges in the video frames, and reconstruct them in 3D
using efficient edge-based visual odometry techniques. We provide both a
monocular camera solution and a stereo camera solution. The former incorporates
Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter
enjoys a novel, purely vision-based solution. Experiments demonstrated that the
proposed methods are fast and able to detect thin obstacles robustly and
accurately under various conditions.
Comment: Appeared at IEEE CVPR 2017 Workshop on Embedded Vision.
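The stereo variant of such a pipeline ultimately reduces each matched edge pixel to a 3D point. Below is a minimal sketch of the standard linear (DLT) triangulation step that edge-based stereo reconstruction relies on, assuming known 3x4 projection matrices; the function name and the toy two-camera setup are illustrative, not from the paper:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched edge pixel.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel coords."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The 3D point is the null-space direction of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean

# Toy setup: identity camera and a second camera shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]                      # projection in camera 1
x2 = (X_true - [1.0, 0.0, 0.0])[:2] / X_true[2]  # projection in camera 2
X_est = triangulate_point(P1, P2, x1, x2)
```

In practice the matched pixels come from the tracked edges, and the recovered points are chained along each edge to form the thin 3D obstacle.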
Towards Autonomous Unmanned Vehicle Systems
As an emerging technology, autonomous Unmanned Vehicle Systems (UVS) have found not only many military applications but also various civil applications. For example, Google, Amazon and Facebook are developing their UVS plans to explore new markets. However, many challenging problems still hinder the development of UVS. We study two important and challenging problems in this dissertation: localization and 3D reconstruction. Specifically, most GPS-based localization systems are not very accurate and can have problems in areas where no GPS signal is available. Based on Received Signal Strength Indication (RSSI) and an Inertial Navigation System (INS), we propose a new hybrid localization system that is very efficient and can account for dynamic communication environments. Extensive simulation results demonstrate the efficiency of the proposed localization system. Besides localization, 3D reconstruction is a key problem in autonomous navigation and hence very important for UVS. With the help of high-speed Internet and powerful cloud servers, the lightweight computers on a UVS can now execute computationally expensive computer-vision algorithms. We develop a 3D reconstruction scheme that employs cloud computing to perform real-time 3D reconstruction. Simulations and experiments show the efficacy and efficiency of our scheme.
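The RSSI side of such a hybrid localizer typically inverts a log-distance path-loss model to turn signal strength into a range estimate. A minimal sketch under that standard model follows; the reference power at 1 m and the path-loss exponent are illustrative assumptions, not values from the dissertation:

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model
    RSSI(d) = RSSI(1m) - 10 * n * log10(d) to estimate range in metres.
    rssi_at_1m and path_loss_exp are assumed calibration constants."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

d1 = rssi_to_distance(-40.0)  # reading equals the 1 m reference -> 1.0 m
d2 = rssi_to_distance(-60.0)  # a 20 dB drop with n = 2 -> 10.0 m
```

In a hybrid scheme, ranges like these would be fused with INS dead-reckoning, with the path-loss exponent adapted to the changing communication environment.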
MOMA: Visual Mobile Marker Odometry
In this paper, we present a cooperative odometry scheme based on the
detection of mobile markers in line with the idea of cooperative positioning
for multiple robots [1]. To this end, we introduce a simple optimization scheme
that realizes visual mobile marker odometry via accurate fixed marker-based
camera positioning and analyse the characteristics of errors inherent to the
method compared to classical fixed marker-based navigation and visual odometry.
In addition, we provide a specific UAV-UGV configuration that allows for
continuous movement of the UAV without stopping, and a minimal
caterpillar-like configuration that works with a single UGV. Finally, we
present a real-world implementation and evaluation of the proposed UAV-UGV
configuration.
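At its core, mobile-marker odometry chains relative marker observations into a global pose by composing rigid-body transforms: the world pose of the camera is the world pose of the currently observed marker composed with the camera pose measured relative to that marker. A minimal planar (SE(2)) sketch of that composition step, with made-up poses for illustration:

```python
import numpy as np

def se2(x, y, theta):
    """3x3 homogeneous transform for a planar pose (x, y, heading)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Leapfrog step: chain the fixed-marker world pose with the
# marker-relative camera pose estimated from the image.
T_world_marker = se2(2.0, 0.0, 0.0)        # marker surveyed in the world
T_marker_cam   = se2(1.0, 0.5, np.pi / 2)  # camera pose w.r.t. the marker
T_world_cam = T_world_marker @ T_marker_cam
x, y = T_world_cam[0, 2], T_world_cam[1, 2]  # world position of the camera
```

When the marker itself is mobile (mounted on the UGV), the same composition is applied each time the reference marker is handed over, which is where the error characteristics analysed in the paper arise.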
Application of augmented reality and robotic technology in broadcasting: A survey
As an innovative technology, Augmented Reality (AR) has been gradually deployed in the broadcast, videography and cinematography industries. Virtual graphics generated by AR are dynamic and overlaid on the surface of the environment, so that the original appearance can be greatly enhanced in comparison with traditional broadcasting. In addition, AR enables broadcasters to interact with augmented virtual 3D models on a broadcasting scene in order to enhance the performance of broadcasting. Recently, advanced robotic technologies have been deployed in camera shooting systems to create a robotic cameraman, so that the performance of AR broadcasting can be further improved; this development is highlighted in the paper.
Visual SLAM for flying vehicles
The ability to learn a map of the environment is important for numerous types of robotic vehicles. In this paper, we address the problem of learning a visual map of the ground using flying vehicles. We assume that the vehicles are equipped with one or two low-cost, down-looking cameras in combination with an attitude sensor. Our approach is able to construct a visual map that can later be used for navigation. Key advantages of our approach are that it is comparatively easy to implement, can robustly deal with noisy camera images, and can operate with either a monocular camera or a stereo camera system. Our technique uses visual features and estimates the correspondences between features using a variant of the progressive sample consensus (PROSAC) algorithm. This allows our approach to extract spatial constraints between camera poses that can then be used to address the simultaneous localization and mapping (SLAM) problem by applying graph methods. Furthermore, we address the problem of efficiently identifying loop closures. We performed several experiments with flying vehicles that demonstrate that our method is able to construct maps of large outdoor and indoor environments. © 2008 IEEE.
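The PROSAC idea, sampling preferentially from the highest-quality matches instead of uniformly as plain RANSAC does, can be illustrated on a toy robust line fit. This is a simplified sketch of the progressive sampling strategy only, not the paper's implementation; all data and thresholds are made up:

```python
import random
import numpy as np

def prosac_like_fit(points, scores, iters=200, thresh=0.1, seed=0):
    """PROSAC-flavoured robust line fit: sample point pairs from a
    progressively growing pool of the highest-scoring correspondences,
    rather than uniformly over all points as in plain RANSAC."""
    rng = random.Random(seed)
    order = np.argsort(-np.asarray(scores))  # best-scored matches first
    pts = np.asarray(points)[order]
    best_model, best_inliers = None, -1
    pool = 2
    for _ in range(iters):
        pool = min(len(pts), pool + 1)       # grow the sampling pool
        i, j = rng.sample(range(pool), 2)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue                         # degenerate vertical sample
        a = (y2 - y1) / (x2 - x1)            # slope of candidate line
        b = y1 - a * x1                      # intercept
        resid = np.abs(pts[:, 1] - (a * pts[:, 0] + b))
        n_in = int((resid < thresh).sum())
        if n_in > best_inliers:
            best_model, best_inliers = (a, b), n_in
    return best_model, best_inliers

# 20 inliers on y = 2x plus two gross outliers with low match scores.
xs = np.linspace(0.0, 1.0, 20)
pts = [(x, 2 * x) for x in xs] + [(0.5, 9.0), (0.2, -7.0)]
scores = [1.0] * 20 + [0.1, 0.1]
model, n_in = prosac_like_fit(pts, scores)
```

Because the low-scoring outliers enter the sampling pool last, a good model is usually found in the first few iterations, which is the practical appeal of quality-ordered sampling for feature correspondences.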
Direct Sparse Visual-Inertial Odometry using Dynamic Marginalization
We present VI-DSO, a novel approach for visual-inertial odometry, which
jointly estimates camera poses and sparse scene geometry by minimizing
photometric and IMU measurement errors in a combined energy functional. The
visual part of the system performs a bundle-adjustment like optimization on a
sparse set of points, but unlike key-point based systems it directly minimizes
a photometric error. This makes it possible for the system to track not only
corners, but any pixels with large enough intensity gradients. IMU information
is accumulated between several frames using measurement preintegration, and is
inserted into the optimization as an additional constraint between keyframes.
We explicitly include scale and gravity direction into our model and jointly
optimize them together with other variables such as poses. As the scale is
often not immediately observable using IMU data, this allows us to initialize
our visual-inertial system with an arbitrary scale instead of having to delay
the initialization until everything is observable. We perform partial
marginalization of old variables so that updates can be computed in a
reasonable time. In order to keep the system consistent we propose a novel
strategy which we call "dynamic marginalization". This technique allows us to
use partial marginalization even in cases where the initial scale estimate is
far from the optimum. We evaluate our method on the challenging EuRoC dataset,
showing that VI-DSO outperforms the state of the art.
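The combined energy described above can be written schematically as a robustly weighted photometric term over all observed points plus a covariance-weighted inertial term from preintegrated IMU measurements between keyframes. The notation below is illustrative, not the paper's exact formulation:

```latex
% Schematic combined energy functional (illustrative symbols):
% F: keyframes, P_i: points hosted in frame i, obs(p): frames observing p,
% \|\cdot\|_\gamma: Huber norm, r_imu: preintegrated IMU residual,
% \Sigma_k: its covariance, \lambda: relative weight.
E_{\text{total}}
  = \sum_{i \in \mathcal{F}} \sum_{p \in \mathcal{P}_i}
      \sum_{j \in \mathrm{obs}(p)}
        \bigl\| r_{\text{photo}}(p, i, j) \bigr\|_{\gamma}
  \;+\; \lambda \sum_{k} r_{\text{imu},k}^{\top}\,
        \Sigma_{k}^{-1}\, r_{\text{imu},k}
```

Minimizing this jointly over poses, sparse point depths, scale, and gravity direction is what lets the system start with an arbitrary scale and refine it as it becomes observable.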
- …