Aircraft Detection and Tracking Using UAV-Mounted Vision System
For unmanned aerial vehicles (UAVs) to operate safely in the national airspace, where non-collaborating flying objects exist, such as general aviation (GA) aircraft without automatic dependent surveillance-broadcast (ADS-B), the UAVs' capability of "seeing" these objects is especially important. This "seeing", or sensing, can be implemented via various means, such as radar or lidar. Here we consider using only cameras mounted on UAVs, which have the advantage of light weight and low power consumption. For the visual system to work well, the camera-based sensing capability must be at a level equal to or exceeding that of human pilots.
This thesis addresses two basic challenges of camera-based sensing of flying objects. The first is the stabilization of shaky videos caused by vibration at the locations on the UAV where the cameras are mounted. We consider several stabilization algorithms, including Kalman filters and particle filters, and provide detailed theoretical discussion of these filters as well as their implementations. The second is reliable detection and tracking of aircraft using image processing algorithms. We combine morphological processing and dynamic programming to achieve good results under a variety of conditions. The performance of the different image processing algorithms is evaluated using synthetic and recorded data.
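As a minimal illustration of the Kalman-filter stabilization idea mentioned above, the sketch below smooths one coordinate of a jittery camera path with a scalar Kalman filter; the constant-position state model and the noise parameters are assumptions for the example, not taken from the thesis:

```python
import numpy as np

def kalman_smooth_1d(observations, q=1e-3, r=0.25):
    """Scalar constant-position Kalman filter: q is the process-noise
    variance, r the measurement-noise variance (values are illustrative)."""
    x, p = observations[0], 1.0      # state estimate and its variance
    smoothed = [x]
    for z in observations[1:]:
        p += q                        # predict: uncertainty grows
        k = p / (p + r)               # Kalman gain
        x += k * (z - x)              # update toward the measurement
        p *= 1.0 - k
        smoothed.append(x)
    return np.array(smoothed)

# A jittery horizontal camera path: a slow pan plus vibration noise.
rng = np.random.default_rng(0)
t = np.arange(100)
shaky = 0.5 * t + rng.normal(0.0, 3.0, size=t.size)
stable = kalman_smooth_1d(shaky)
# Warping each frame by (stable - shaky) would remove the jitter.
```

The smoothed trajectory follows the intentional pan while suppressing the frame-to-frame vibration; the per-frame correction is the difference between the two paths.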
Robust automatic target tracking based on a Bayesian ego-motion compensation framework for airborne FLIR imagery
Automatic target tracking in airborne FLIR imagery is currently a challenge due to camera ego-motion. This phenomenon distorts the spatio-temporal correlation of the video sequence, which dramatically reduces tracking performance. Several works address this problem using ego-motion compensation strategies: they use a deterministic approach to compensate the camera motion, assuming a specific model of geometric transformation. However, in real sequences a single geometric transformation cannot accurately describe the camera ego-motion for the whole sequence, and as a consequence the performance of the tracking stage can decrease significantly, or even fail completely. The optimal transformation for each pair of consecutive frames depends on the relative depth of the elements that compose the scene and on their degree of texturization. In this work, a novel Particle Filter framework is proposed to efficiently manage several hypotheses of geometric transformation: Euclidean, affine, and projective. Each type of transformation is used to compute candidate locations of the object in the current frame; each candidate is then evaluated by the measurement model of the Particle Filter using appearance information. This approach is able to adapt to different camera ego-motion conditions and thus perform the tracking satisfactorily. The proposed strategy has been tested on the AMCOM FLIR dataset, showing high efficiency in tracking different types of targets under real working conditions.
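The multi-hypothesis idea can be sketched with a toy particle filter in which each particle carries, besides the target state, a label for the motion model it assumes; resampling then lets the best-fitting hypothesis take over. Everything here (1-D state, two simple motion models, all parameters) is illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each particle carries a 1-D target state and a motion-model label:
# 0 = "static" hypothesis, 1 = "constant drift" hypothesis.
N = 500
states = rng.normal(0.0, 1.0, N)
models = rng.integers(0, 2, N)
true_pos = 0.0

for _ in range(30):
    true_pos += 0.4                                  # target drifts steadily
    drift = np.where(models == 1, 0.4, 0.0)          # per-hypothesis prediction
    states = states + drift + rng.normal(0.0, 0.1, N)
    z = true_pos + rng.normal(0.0, 0.2)              # noisy observation
    w = np.exp(-0.5 * ((states - z) / 0.2) ** 2)     # appearance-style likelihood
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                 # resample states AND labels
    states, models = states[idx], models[idx]

frac_drift = (models == 1).mean()   # surviving share of the drift hypothesis
```

Because labels are resampled along with states, particles whose motion model mispredicts the target receive low weights and die out, which is the mechanism the paper exploits with Euclidean, affine, and projective hypotheses.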
Visibility Constrained Generative Model for Depth-based 3D Facial Pose Tracking
In this paper, we propose a generative framework that unifies depth-based 3D
facial pose tracking and face model adaptation on-the-fly, in the unconstrained
scenarios with heavy occlusions and arbitrary facial expression variations.
Specifically, we introduce a statistical 3D morphable model that flexibly
describes the distribution of points on the surface of the face model, with an
efficient switchable online adaptation that gradually captures the identity of
the tracked subject and rapidly constructs a suitable face model when the
subject changes. Moreover, unlike prior art that employed ICP-based facial pose
estimation, we propose a ray visibility constraint that regularizes the pose
based on the face model's visibility with respect to the input point cloud,
improving robustness to occlusions. Ablation studies and experimental results
on the Biwi and ICT-3DHP datasets demonstrate that the proposed framework is
effective and outperforms competing state-of-the-art depth-based methods.
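A minimal sketch of what a ray-visibility test could look like: a model point is kept only if it is not significantly behind the measured point cloud along the same camera ray. The tolerance value and the depth-per-ray representation are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def visible_mask(model_depth, cloud_depth, tol=0.02):
    """A model point is treated as visible only if it is not significantly
    behind the observed point cloud along the same camera ray
    (tol, in metres, is an illustrative threshold)."""
    return model_depth <= cloud_depth + tol

# Depths (metres) of face-model points and of the measured cloud along
# matching rays; the 0.80 m model point is occluded behind a 0.55 m surface.
model = np.array([0.50, 0.52, 0.80, 0.49])
cloud = np.array([0.50, 0.51, 0.55, 0.60])
mask = visible_mask(model, cloud)
# Only points where mask is True would enter the pose regularizer.
```

Excluding the occluded point keeps it from dragging the pose estimate toward the occluder, which is the robustness benefit the abstract describes.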
Real-time video stabilization without phantom movements for micro aerial vehicles
In recent times, micro aerial vehicles (MAVs) have become popular for several applications such as rescue, surveillance, and mapping. Undesired motion between consecutive frames is a common problem in video recorded by MAVs. There are different approaches, applied in video post-processing, to address this issue; however, only a few algorithms can run in real time. An additional and critical problem is the presence of phantom (false) movements in the stabilized video. In this paper, we present a new video stabilization approach that can be used in real time without generating phantom movements. Our proposal uses a combination of a low-pass filter and control action information to estimate the motion intention.
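A sketch of the low-pass-plus-control idea: an exponential moving average estimates the intended motion, and while a control action is active the filter trusts the raw motion more, so deliberate maneuvers are not cancelled as phantom movements would be. The function name, both smoothing factors, and the switching rule are illustrative assumptions:

```python
def motion_intention(raw_motion, alpha=0.9, control_active=None):
    """Exponential moving average of inter-frame motion. When a control
    action is active, a weaker smoothing factor is used so that commanded
    maneuvers pass through instead of being cancelled."""
    if control_active is None:
        control_active = [False] * len(raw_motion)
    est, intended = raw_motion[0], []
    for m, ctrl in zip(raw_motion, control_active):
        a = 0.5 if ctrl else alpha    # trust raw motion more during commands
        est = a * est + (1.0 - a) * m
        intended.append(est)
    return intended

# A commanded pan starting at frame 5: without control information the
# filter lags; with it, the intended motion is picked up faster.
raw = [0.0] * 5 + [1.0] * 15
no_cmd = motion_intention(raw)
with_cmd = motion_intention(raw, control_active=[True] * len(raw))
```

Subtracting the estimated intention from the raw motion leaves only the unwanted component, which is what the stabilizer compensates.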
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
A Study on Real-Time Video Mosaicking and Stabilization Using High-Speed Vision
Hiroshima University, Doctor of Engineering (doctoral thesis)
Online Digital Image Stabilization for an Unmanned Aerial Vehicle (UAV)
The Unmanned Aerial Vehicle (UAV) video system uses a portable camera mounted on the robot to monitor scene activities. In general, UAVs carry very little stabilization equipment, so obtaining good, stable images from a UAV in real time remains a challenge. This paper presents a novel framework for online digital image stabilization on a UAV, aimed at the problem of unwanted vibration and motion when recording video. The proposed method is based on dense optical flow to select features representing the displacement between two consecutive frames. K-means clustering is used to find the cluster of the motion vector field that has the most members, and the centroid of that largest cluster is chosen to estimate the rigid transform (rotation and translation). The trajectory is then compensated using a Kalman filter. The experimental results show that the proposed method is suitable for online video stabilization, achieving an average processing speed of 47.5 frames per second (fps).
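The clustering step can be sketched as follows: given a field of motion vectors, k-means finds the largest cluster, whose centroid serves as the global (camera) motion estimate. The synthetic vectors, the deterministic initialization, and k = 2 are assumptions for the example, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic motion-vector field standing in for dense optical flow:
# 80 background vectors share the camera shake (dx, dy) near (3, -2),
# and 20 vectors belong to an independently moving object.
bg = rng.normal([3.0, -2.0], 0.1, size=(80, 2))
obj = rng.normal([-5.0, 4.0], 0.1, size=(20, 2))
vecs = np.vstack([bg, obj])

def largest_cluster_centroid(x, iters=10):
    """Two-cluster k-means in plain NumPy; returns the centroid of the
    bigger cluster, taken as the global (camera) motion estimate."""
    # Deterministic init: the first point, and the point farthest from it.
    far = np.argmax(np.linalg.norm(x - x[0], axis=1))
    centers = np.stack([x[0], x[far]])
    for _ in range(iters):
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.stack([x[labels == j].mean(axis=0) for j in range(2)])
    sizes = np.bincount(labels, minlength=2)
    return centers[sizes.argmax()]

camera_motion = largest_cluster_centroid(vecs)
```

In the full pipeline the estimated translation (together with a rotation) would parameterize the rigid transform whose trajectory the Kalman filter then smooths.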
Scene Analysis Using Scale Invariant Feature Extraction and Probabilistic Modeling
Conventional pattern recognition systems have two components: feature analysis and pattern classification. For any object in an image, features can be considered the major characteristics of the object, for either object recognition or object tracking purposes. Features extracted from a training image can be used to identify the object when attempting to locate it in a test image containing many other objects. To perform reliable scene analysis, it is important that the features extracted from the training image are detectable even under changes in image scale, noise, and illumination. Scale-invariant features have wide applications in image processing, such as image classification, object recognition, and object tracking. In this thesis, color features and SIFT (scale-invariant feature transform) are considered as scale-invariant features. The classification, recognition, and tracking results were evaluated with a novel evaluation criterion and compared with existing methods. I also studied different types of scale-invariant features for solving scene analysis problems. I propose probabilistic models as the foundation for analyzing scene scenarios in images. In order to differentiate the content of images, I develop novel algorithms for the adaptive combination of multiple features extracted from images. I demonstrate the performance of the developed algorithms on several scene analysis tasks, including object tracking, video stabilization, medical video segmentation, and scene classification.
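As a minimal example of a scale-invariant feature of the kind the thesis considers, a pixel-count-normalized color histogram is unchanged when an image is upscaled by pixel replication. The thesis also uses the richer SIFT descriptor; this toy color feature is only illustrative:

```python
import numpy as np

def color_histogram(image, bins=4):
    """Per-channel color histogram normalized by pixel count, so the
    descriptor does not change when the image is upscaled by pixel
    replication: a minimal stand-in for a scale-invariant feature."""
    h = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
         for c in range(image.shape[-1])]
    h = np.concatenate(h).astype(float)
    return h / h.sum()

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(16, 16, 3), dtype=np.uint8)
big = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)  # 2x nearest-neighbour upscale
```

Since replication multiplies every bin count by the same factor, normalization cancels it and the descriptors of `img` and `big` are identical.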
SO(3)-invariant asymptotic observers for dense depth field estimation based on visual data and known camera motion
In this paper, we use known camera motion associated to a video sequence of a
static scene in order to estimate and incrementally refine the surrounding
depth field. We exploit the SO(3)-invariance of brightness and depth fields
dynamics to customize standard image processing techniques. Inspired by the
Horn-Schunck method, we propose a SO(3)-invariant cost to estimate the depth
field. At each time step, this provides a diffusion equation on the unit
Riemannian sphere that is numerically solved to obtain a real time depth field
estimation of the entire field of view. Two asymptotic observers are derived
from the governing equations of dynamics, respectively based on optical flow
and depth estimations: implemented on noisy sequences of synthetic images as
well as on real data, they perform a more robust and accurate depth estimation.
This approach is complementary to most methods employing state observers for
range estimation, which concern only single or isolated feature points.
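For context, the classical Horn-Schunck functional that inspires the proposed cost penalizes, in its standard planar form, the optical-flow constraint residual plus a smoothness term (this is the textbook functional, not the paper's SO(3)-invariant version):

```latex
E(u,v) \;=\; \iint_{\Omega} \left( I_x u + I_y v + I_t \right)^2 \, dx\,dy
\;+\; \alpha^2 \iint_{\Omega} \left( \lVert \nabla u \rVert^2 + \lVert \nabla v \rVert^2 \right) dx\,dy
```

Here $(u,v)$ is the flow field, $I_x, I_y, I_t$ are the brightness derivatives, and $\alpha$ weights the smoothness term; the paper adapts this idea into a rotation-invariant cost on the unit sphere with the depth field as the unknown.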