Vision-model-based Real-time Localization of Unmanned Aerial Vehicle for Autonomous Structure Inspection under GPS-denied Environment
UAVs have been widely used in visual inspections of buildings, bridges, and
other structures. In both autonomous and semi-autonomous outdoor flight
missions, a strong GPS signal is vital for the UAV to locate its own position.
However, a strong GPS signal is not always available: it can degrade or be
lost entirely underneath large structures or close to power lines, which can
cause serious control issues or even UAV crashes. Such limitations severely
restrict the application of UAVs as a routine inspection tool in many domains.
In this paper a vision-model-based real-time self-positioning method is
proposed to support autonomous aerial inspection without the need for GPS.
Unlike other localization methods that require additional onboard sensors, the
proposed method uses a single camera to continuously estimate the in-flight
pose of the UAV. Each step of the proposed method is discussed in detail, and
its performance is tested through an indoor test case.
Comment: 8 pages, 5 figures, submitted to i3ce 201
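The abstract does not specify the internals of the vision model, but camera-only pose estimation against known structure geometry is conventionally built on solving a Perspective-n-Point problem. As a generic, hedged illustration only (the model points, 2D detections, and intrinsics below are placeholders, not values from the paper), a minimal PnP step with OpenCV might look like:

# Minimal sketch of camera-only pose estimation via PnP (OpenCV).
# All numeric values are hypothetical placeholders.
import cv2
import numpy as np

# Known 3D points on the inspected structure (model coordinates, metres).
model_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float64)
# Their detected 2D projections in the current frame (pixels).
image_pts = np.array([[320, 240], [420, 242], [418, 338], [322, 336]], dtype=np.float64)
# Camera intrinsic matrix from prior calibration.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, distCoeffs=None)
if ok:
    R, _ = cv2.Rodrigues(rvec)          # rotation: model frame -> camera frame
    cam_pos = (-R.T @ tvec).ravel()     # camera position in the model frame
    print("estimated camera position:", cam_pos)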
Planar PØP: feature-less pose estimation with applications in UAV localization
We present a featureless pose estimation method that, in contrast to current Perspective-n-Point (PnP) approaches, does not require n point correspondences to obtain the camera pose, allowing pose estimation from natural shapes that do not necessarily have distinctive features such as corners or intersecting edges. Instead of using n correspondences (e.g., extracted with a feature detector), we use the raw polygonal representation of the observed shape and directly estimate the pose in the pose space of the camera. Compared with a general PnP method, this method requires neither n point correspondences nor a priori knowledge of the object model (except its scale), which is registered with a picture taken from a known robot pose. Moreover, we achieve higher precision because all the information in the shape contour is used to minimize the area between the projected and the observed shape contours. To emphasize the non-use of n point correspondences between the projected template and the observed contour shape, we call the method Planar PØP. The method is demonstrated both in simulation and in a real application consisting of UAV localization, where comparisons with a precise ground truth are provided.
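To make the contour-alignment idea concrete, here is a hedged two-dimensional sketch, not the paper's formulation: a template polygon is moved under a candidate pose, and the area between it and the observed contour (the area of their symmetric difference) is minimized. The planar translation-plus-rotation parameterization and the shapely/scipy usage are assumptions for illustration.

# 2D sketch of contour-based pose estimation by area minimization.
import numpy as np
from scipy.optimize import minimize
from shapely.geometry import Polygon
from shapely.affinity import rotate, translate

template = Polygon([(0, 0), (2, 0), (2, 1), (0, 1)])   # known shape (scale given)
# Synthetic "observed" contour: the template moved by a ground-truth pose.
observed = translate(rotate(template, 10, origin="centroid"), 1.5, 0.7)

def area_residual(pose):
    tx, ty, theta = pose                 # theta in degrees (shapely default)
    proj = translate(rotate(template, theta, origin="centroid"), tx, ty)
    # Area between the two contours = area of their symmetric difference.
    return proj.symmetric_difference(observed).area

est = minimize(area_residual, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
print("estimated (tx, ty, theta):", est.x)   # close to (1.5, 0.7, 10)

A derivative-free optimizer such as Nelder-Mead is a natural choice here, since the symmetric-difference area is not smooth in the pose parameters.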
A Survey on Joint Object Detection and Pose Estimation using Monocular Vision
In this survey we present a complete landscape of joint object detection and
pose estimation methods that use monocular vision. We describe traditional
approaches that involve descriptors or models together with their estimation
methods. These descriptors or models include chordiograms, the shape-aware
deformable parts model, bag of boundaries, distance transform templates,
natural 3D markers, and facet features, whereas the estimation methods include
iterative clustering estimation, probabilistic networks, and iterative genetic
matching. Hybrid approaches that use handcrafted feature extraction followed
by estimation with deep learning methods are outlined. We have investigated
and compared, wherever possible, purely deep-learning-based approaches
(single-stage and multi-stage) for this problem. Comprehensive details of the
various accuracy measures and metrics are provided. To give a clear overview,
the characteristics of relevant datasets are discussed. The trends that have
prevailed from the infancy of this problem until now are also highlighted.
Comment: Accepted at the International Joint Conference on Computer Vision and
Pattern Recognition (CCVPR) 201
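As one concrete example of the accuracy measures such surveys cover, the geodesic angular distance between an estimated and a ground-truth rotation is widely reported for pose estimation; a minimal NumPy version (the matrices below are illustrative) is:

# Geodesic rotation error, a standard pose-accuracy metric.
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Angular distance between two rotation matrices, in degrees."""
    cos_angle = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

R_gt = np.eye(3)
theta = np.radians(5.0)                       # 5-degree rotation about z
R_est = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0, 0, 1]])
print(rotation_error_deg(R_est, R_gt))        # ~5.0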
Occlusion Handling using Semantic Segmentation and Visibility-Based Rendering for Mixed Reality
Real-time occlusion handling is a major problem in outdoor mixed reality
systems because of the high computational cost involved, mainly due to the
complexity of the scene. Using segmentation alone, it is difficult to
accurately render a virtual object occluded by complex objects such as trees,
bushes, etc. In this paper, we propose a novel occlusion handling method for a
real-time, outdoor, omni-directional mixed reality system using only the
information from a monocular image sequence. We first present a semantic
segmentation scheme for predicting the amount of visibility for different
types of objects in the scene. We also simultaneously calculate a foreground
probability map using depth estimation derived from optical flow. Finally, we
combine the segmentation result and the probability map to render the
computer-generated object and the real scene using a visibility-based
rendering method. Our results show a great improvement in handling occlusions
compared with existing blending-based methods.
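A hedged sketch of the fusion step described above: the array names and the simple per-pixel product rule are assumed stand-ins for the paper's actual visibility-based rendering, shown only to illustrate combining the two maps into a blending weight.

# Fuse segmentation visibility with a foreground probability map
# into a per-pixel opacity for the virtual object (illustrative only).
import numpy as np

H, W = 480, 640
real = np.random.rand(H, W, 3)           # real camera frame
virtual = np.random.rand(H, W, 3)        # rendered virtual object
seg_visibility = np.random.rand(H, W)    # per-pixel visibility from segmentation
fg_prob = np.random.rand(H, W)           # foreground prob. from optical-flow depth

# The virtual object shows through where segmentation says it can be
# visible and the real pixel is unlikely to be a foreground occluder.
alpha = seg_visibility * (1.0 - fg_prob)
composite = alpha[..., None] * virtual + (1.0 - alpha[..., None]) * real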
Mixed marker-based/marker-less visual odometry system for mobile robots
When moving in generic indoor environments, robotic platforms generally rely solely on information provided by onboard sensors to determine their position and orientation. However, the lack of absolute references often introduces severe drift into the computed estimates, making autonomous operations hard to accomplish. This paper proposes a solution that alleviates the impact of these issues by combining two vision-based pose estimation techniques working in relative and absolute coordinate systems, respectively. In particular, the unknown ground features in the images captured by the vertical camera of a mobile platform are processed by a vision-based odometry algorithm, which estimates the relative frame-to-frame movements. Errors accumulated in this step are then corrected using artificial markers placed at known positions in the environment. The markers are framed from time to time, which allows the robot to keep the drift bounded while additionally providing it with the navigation commands needed for autonomous flight. The accuracy and robustness of the designed technique are demonstrated on an off-the-shelf quadrotor through extensive experimental tests.
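As an illustration of the general hybrid scheme (the 2D pose representation, the update rule, and the marker-detection stub below are assumptions for exposition, not the paper's implementation), relative visual-odometry increments accumulate drift that an absolute marker fix periodically resets:

# Relative VO accumulation with absolute marker corrections (sketch).
import numpy as np

def compose(pose, delta):
    """Compose a 2D pose (x, y, theta) with a body-frame motion increment."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

def detect_marker(frame_idx):
    """Stub: returns an absolute pose when a known marker is framed
    (here, every 25th frame), else None."""
    if frame_idx > 0 and frame_idx % 25 == 0:
        return (frame_idx * 0.1, 0.0, 0.0)   # absolute fix from the marker map
    return None

pose = (0.0, 0.0, 0.0)
for frame in range(100):
    vo_delta = (0.10, 0.0, 0.01)      # marker-less frame-to-frame estimate
    pose = compose(pose, vo_delta)    # drifts as small errors accumulate
    marker_pose = detect_marker(frame)
    if marker_pose is not None:
        pose = marker_pose            # absolute fix keeps the drift bounded
print("final pose:", pose)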