
    Multi-object Tracking in Aerial Image Sequences using Aerial Tracking Learning and Detection Algorithm

    Vision-based tracking in aerial images is significant in both civil and defense applications. This study proposes a novel algorithm, aerial tracking learning detection (ATLD), which builds on the popular tracking learning detection (TLD) algorithm to effectively track single and multiple objects in aerial images. TLD considers both appearance and motion features for tracking; it can handle occlusion to a certain extent and works well on long video sequences. However, when objects are tracked in aerial images taken from platforms such as unmanned air vehicles, frequent pose changes and scale and illumination variations arise, in addition to the low resolution, noise, and jitter introduced by camera motion. The proposed algorithm incorporates compensation for camera movement, algorithmic modifications in combining appearance and motion cues for detecting and tracking multiple objects, and an inter-object distance measure that improves the tracker's performance when many identical objects are in close proximity. The algorithm has been tested on a large number of aerial sequences, including benchmark videos, the TLD dataset, and many classified unmanned air vehicle sequences, and shows better performance than TLD.
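
    The camera-motion-compensation step described above can be illustrated with a short sketch: estimate a global homography between consecutive frames from matched features, then warp the previous frame's bounding boxes into the current frame before running detection and tracking. This is a minimal illustration using OpenCV; the feature counts, thresholds, and function names are assumptions, not the authors' ATLD implementation.

    ```python
    # Sketch: global camera-motion compensation between consecutive
    # aerial frames via feature-based homography estimation.
    # Illustrative only; thresholds are arbitrary assumptions.
    import cv2
    import numpy as np

    def compensate_camera_motion(prev_gray, curr_gray, prev_boxes):
        """Warp (x, y, w, h) boxes from the previous frame into the
        current frame's coordinates using an estimated homography."""
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(prev_gray, None)
        kp2, des2 = orb.detectAndCompute(curr_gray, None)
        if des1 is None or des2 is None:
            return prev_boxes  # no features: assume no camera motion

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2),
                         key=lambda m: m.distance)[:200]
        if len(matches) < 4:
            return prev_boxes

        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            return prev_boxes

        warped = []
        for (x, y, w, h) in prev_boxes:
            corners = np.float32([[x, y], [x + w, y], [x, y + h],
                                  [x + w, y + h]]).reshape(-1, 1, 2)
            moved = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
            x0, y0 = moved.min(axis=0)
            x1, y1 = moved.max(axis=0)
            warped.append((x0, y0, x1 - x0, y1 - y0))
        return warped
    ```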

    Deep Drone Racing: From Simulation to Reality with Domain Randomization

    Dynamically changing environments, unreliable state estimation, and operation under severe resource constraints are fundamental challenges that limit the deployment of small autonomous drones. We address these challenges in the context of autonomous, vision-based drone racing in dynamic environments. A racing drone must traverse a track with possibly moving gates at high speed. We enable this functionality by combining the performance of a state-of-the-art planning and control system with the perceptual awareness of a convolutional neural network (CNN). The resulting modular system is both platform- and domain-independent: it is trained in simulation and deployed on a physical quadrotor without any fine-tuning. The abundance of simulated data, generated via domain randomization, makes our system robust to changes in illumination and gate appearance. To the best of our knowledge, our approach is the first to demonstrate zero-shot sim-to-real transfer on the task of agile drone flight. We extensively test the precision and robustness of our system, both in simulation and on a physical platform, and show significant improvements over the state of the art.
    Comment: Accepted as a Regular Paper to IEEE Transactions on Robotics. arXiv admin note: substantial text overlap with arXiv:1806.0854
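
    The domain-randomization idea mentioned above can be sketched briefly: perturb the appearance and illumination of simulated training images so the learned perception network does not overfit to the renderer's look. The transforms and ranges below are assumptions for illustration, not the authors' exact augmentation pipeline.

    ```python
    # Sketch: illumination/appearance randomization of simulated frames,
    # in the spirit of domain randomization. Ranges are assumed values.
    import numpy as np

    def randomize_image(img, rng):
        """Apply random contrast, brightness, per-channel color casts,
        and sensor-like noise to a float32 image in [0, 1]."""
        out = img.astype(np.float32)
        out = out * rng.uniform(0.5, 1.5)             # contrast
        out = out + rng.uniform(-0.2, 0.2)            # brightness
        out = out * rng.uniform(0.8, 1.2, size=3)     # color cast per channel
        out = out + rng.normal(0.0, 0.02, out.shape)  # sensor-like noise
        return np.clip(out, 0.0, 1.0)

    rng = np.random.default_rng(0)
    sim_frame = rng.random((240, 320, 3), dtype=np.float32)  # stand-in render
    augmented = [randomize_image(sim_frame, rng) for _ in range(8)]
    ```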

    Visual Servoing from Deep Neural Networks

    We present a deep neural network-based method for high-precision, robust, real-time 6-DOF visual servoing. The paper describes how to create a dataset simulating various perturbations (occlusions and lighting conditions) from a single real-world image of the scene. A convolutional neural network is fine-tuned on this dataset to estimate the relative pose between two images of the same scene. The output of the network is then employed in a visual servoing control scheme. The method converges robustly even in difficult real-world settings with strong lighting variations and occlusions. A positioning error of less than one millimeter is obtained in experiments with a 6-DOF robot.
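
    To make the control scheme concrete, here is a minimal sketch of how a network-predicted relative pose could drive a standard pose-based visual servoing (PBVS) law, v = -lambda * e. The network call and gain are placeholders and hypothetical; the paper's exact control scheme may differ.

    ```python
    # Sketch: network-predicted relative pose fed into a standard PBVS
    # proportional law. `net` is a hypothetical placeholder for a pose
    # regression model; LAMBDA is an assumed gain.
    import numpy as np

    LAMBDA = 0.5  # proportional gain (assumed)

    def pbvs_velocity(pose_error):
        """pose_error: 6-vector (translation t, axis-angle rotation
        theta*u) from current to desired camera pose.
        Returns a 6-DOF camera velocity twist."""
        return -LAMBDA * np.asarray(pose_error, dtype=np.float64)

    def servo_step(net, current_image, desired_image):
        # net(current, desired) -> 6-vector relative pose (hypothetical API)
        e = net(current_image, desired_image)
        return pbvs_velocity(e)  # send to a Cartesian velocity controller
    ```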