541 research outputs found

    A content-based retrieval system for UAV-like video and associated metadata

    Get PDF
    In this paper we provide an overview of a content-based retrieval (CBR) system that has been specifically designed for handling UAV video and associated metadata. Our emphasis in designing this system is on managing large quantities of such information and providing intuitive and efficient access mechanisms to this content, rather than on analysis of the video content. The retrieval unit in our system is termed a "trip". At capture time, each trip consists of an MPEG-1 video stream and a set of time-stamped GPS locations. An analysis process automatically selects and associates GPS locations with the video timeline. The indexed trip is then stored in a shared trip repository. The repository forms the backend of an MPEG-21 compliant Web 2.0 application for subsequent querying, browsing, annotation and video playback. The system interface allows users to search/browse across the entire archive of trips and, depending on their access rights, to annotate other users' trips with additional information. Interaction with the CBR system is via a novel interactive map-based interface. This interface supports content access by time, date, region of interest on the map, previously annotated specific locations of interest, and combinations of these. To develop such a system and investigate its practical usefulness in real-world scenarios, a significant amount of appropriate data is clearly required. In the absence of a large volume of UAV data with which to work, we have simulated UAV-like data using GPS-tagged video content captured from moving vehicles.

    SUAVE: Integrating UAV video using a 3D model

    Get PDF
    Controlling an unmanned aerial vehicle (UAV) requires the operator to perform continuous surveillance and path planning. The operator's situation awareness degrades as an increasing number of surveillance videos must be viewed and integrated. The Picture-in-Picture (PiP) display provides a solution for integrating video from multiple UAV cameras by allowing the operator to view the video feed in the context of the surrounding terrain. The experimental SUAVE (Simple Unmanned Aerial Vehicle Environment) display extends PiP methods by sampling imagery from the video stream to texture a 3D map of the terrain. The operator can then inspect this imagery using world-in-miniature (WIM) or fly-through methods. We investigate the properties and advantages of SUAVE in the context of a search mission with 3 UAVs.

    Advanced framework for microscopic and lane‐level macroscopic traffic parameters estimation from UAV video

    Full text link
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/166282/1/itr2bf00873.pd

    Gaussian mixture model classifiers for detection and tracking in UAV video streams.

    Get PDF
    Masters Degree. University of KwaZulu-Natal, Durban.
    Manual visual surveillance systems are subject to a high degree of human error and operator fatigue. The automation of such systems often employs detectors, trackers and classifiers as fundamental building blocks. Detection, tracking and classification are especially useful and challenging in Unmanned Aerial Vehicle (UAV) based surveillance systems. Previous solutions have addressed these challenges via complex classification methods. This dissertation proposes less complex Gaussian Mixture Model (GMM) based classifiers that can simplify the process: data is represented as a reduced set of model parameters, and classification is performed in the low-dimensionality parameter space. The specification and adoption of GMM-based classifiers on the UAV visual tracking feature space forms the principal contribution of the work. This methodology can be generalised to other feature spaces. The dissertation presents two main contributions in the form of submissions to ISI-accredited journals. The first paper demonstrates the objectives with a vehicle detector incorporating a two-stage GMM classifier applied to a single feature space, namely Histogram of Oriented Gradients (HOG). The second paper demonstrates the objectives with a vehicle tracker using colour histograms (in RGB and HSV), GMM classifiers and a Kalman filter. The proposed works are comparable to related works, with testing performed on benchmark datasets. In the tracking domain for such platforms, tracking alone is insufficient: adaptive detection and classification can assist in search-space reduction, the building of knowledge priors, and improved target representations. Results show that the proposed approach improves performance and robustness. Findings also indicate potential further enhancements, such as a multi-mode tracker with global and local tracking based on a combination of both papers.
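    The classification idea above, representing each class by a Gaussian mixture and assigning a sample to the class with the highest likelihood, can be sketched in a few lines of numpy. The mixture parameters and the single scalar feature below are made up for illustration; in the dissertation such parameters would be learned from labelled training data on HOG or colour-histogram features:

    ```python
    import numpy as np

    def gmm_log_likelihood(x, weights, means, variances):
        """Log-likelihood of a scalar sample x under a 1-D Gaussian mixture."""
        densities = (weights * np.exp(-0.5 * (x - means) ** 2 / variances)
                     / np.sqrt(2.0 * np.pi * variances))
        return np.log(densities.sum())

    # Made-up per-class mixtures over a single reduced feature; in practice
    # these parameters would be estimated from training data (e.g. via EM).
    classes = {
        "vehicle": dict(weights=np.array([0.6, 0.4]),
                        means=np.array([2.0, 4.0]),
                        variances=np.array([0.5, 0.8])),
        "background": dict(weights=np.array([0.7, 0.3]),
                           means=np.array([-1.0, 0.5]),
                           variances=np.array([1.0, 0.6])),
    }

    def classify(x):
        """Assign x to the class whose mixture gives the highest log-likelihood."""
        return max(classes, key=lambda c: gmm_log_likelihood(x, **classes[c]))

    print(classify(3.5))   # near the "vehicle" modes -> vehicle
    print(classify(-0.8))  # near the "background" modes -> background
    ```

    Classifying in this reduced parameter space is what keeps the method cheap: the decision needs only a handful of likelihood evaluations per class, rather than a comparison against raw high-dimensional feature vectors.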

    Implementation and Validation of Video Stabilization using Simulink

    Get PDF
    A fast video stabilization technique based on Gray-coded bit-plane (GCBP) matching for translational motion is implemented and tested on various image sequences. This technique performs motion estimation using the GCBP of image sequences, which greatly reduces the computational load. To further improve computational efficiency, the three-step search (TSS) is used along with GCBP matching to perform an efficient search during the correlation-measure calculation. The entire technique has been implemented in Simulink to run in real time.
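    As a rough illustration of the idea (not the Simulink implementation), the numpy sketch below Gray-codes two frames, extracts one bit-plane, and recovers a known translation with a three-step search, using the XOR bit count as the matching cost. The frame content, block placement, and chosen bit-plane are arbitrary stand-ins:

    ```python
    import numpy as np

    def gray_code_bit_plane(img, bit=4):
        """One bit-plane of the Gray-coded image (img: uint8 array)."""
        gray = img ^ (img >> 1)          # binary -> Gray code
        return (gray >> bit) & 1         # selected bit-plane, values in {0, 1}

    def match_cost(ref_plane, cur_plane, dx, dy, block):
        """Number of differing bits between the reference block and the
        current frame's block displaced by (dx, dy)."""
        y, x, h, w = block
        shifted = cur_plane[y + dy:y + dy + h, x + dx:x + dx + w]
        return np.count_nonzero(ref_plane[y:y + h, x:x + w] ^ shifted)

    def three_step_search(ref_plane, cur_plane, block, step=4):
        """Classic TSS: coarse-to-fine search around the current best offset."""
        best = (0, 0)
        while step >= 1:
            candidates = [(best[0] + sx * step, best[1] + sy * step)
                          for sx in (-1, 0, 1) for sy in (-1, 0, 1)]
            best = min(candidates,
                       key=lambda d: match_cost(ref_plane, cur_plane,
                                                d[0], d[1], block))
            step //= 2
        return best  # estimated (dx, dy) translation

    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    cur = np.roll(ref, shift=(4, 4), axis=(0, 1))   # simulate (dy=4, dx=4) jitter
    block = (16, 16, 24, 24)                        # (y, x, height, width)
    dx, dy = three_step_search(gray_code_bit_plane(ref),
                               gray_code_bit_plane(cur), block)
    print((dx, dy))  # → (4, 4)
    ```

    The cost reduction comes from the two tricks the abstract names: matching single bit-planes replaces full-intensity correlation with cheap XOR counts, and TSS evaluates only 9 candidates per refinement level instead of an exhaustive window.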

    Cross-Correlation-Based Structural System Identification Using Unmanned Aerial Vehicles.

    Get PDF
    Computer vision techniques have been employed to characterize dynamic properties of structures, as well as to capture structural motion for system identification purposes. All of these methods leverage image-processing techniques using a stationary camera. This requirement makes finding an effective location for camera installation difficult, because civil infrastructure (e.g., bridges and buildings) is often difficult to access, being constructed over rivers, roads, or other obstacles. This paper seeks to use video from Unmanned Aerial Vehicles (UAVs) to address this problem. As opposed to the traditional approach of using stationary cameras, the use of UAVs introduces the issue of the camera itself moving; thus, the displacements of the structure obtained by processing UAV video are relative to the UAV camera. Some efforts have been reported to compensate for the camera motion, but they require certain assumptions that may be difficult to satisfy. This paper proposes a new method for structural system identification using UAV video directly. Several challenges are addressed, including: (1) estimation of an appropriate scale factor; and (2) compensation for the rolling-shutter effect. Experiments are carried out to validate the proposed approach, and the results demonstrate its efficacy and significant potential.
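    The core cross-correlation step, finding the offset at which two measured signals best align, can be illustrated with a short numpy sketch. The signals, delay, and sample count below are synthetic stand-ins; the paper's full identification pipeline (scale-factor estimation, rolling-shutter compensation) is not reproduced here:

    ```python
    import numpy as np

    # Synthetic stand-ins: a broadband structural response and a delayed copy,
    # e.g. the same motion observed in two measurement channels.
    rng = np.random.default_rng(1)
    response = rng.standard_normal(500)
    delay = 12                                           # true offset, in samples
    observed = np.concatenate([np.zeros(delay), response[:-delay]])

    # Cross-correlate and take the lag with the strongest correlation.
    corr = np.correlate(observed, response, mode="full")
    lags = np.arange(-len(response) + 1, len(response))
    estimated_delay = int(lags[np.argmax(corr)])
    print(estimated_delay)  # → 12
    ```

    A broadband signal gives a sharp correlation peak; for narrowband structural vibration the correlation is periodic, which is one reason a full identification method needs more than this single step.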

    Sensor-Assisted Global Motion Estimation for Efficient UAV Video Coding

    Get PDF
    This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record.
    In this paper, we propose a novel video coding scheme to significantly reduce coding complexity and enhance overall coding efficiency in videos acquired by high-mobility devices such as unmanned aerial vehicles (UAVs). To reduce the encoded data bits and encoding time so as to facilitate real-time data transmission, as well as to minimize the image distortion caused by the jitter of the onboard camera, a sensor-assisted global motion estimation (GME) algorithm is designed to calculate a perspective transformation model and global motion vectors, which are used in inter-frame coding to improve coding efficiency and in intra-frame coding to reduce block-search complexity. We conducted comprehensive simulation experiments on the official HM-16.10 codec, and the results show that the proposed method achieves a 50% to 60% speedup in block search and a 15% to 30% reduction in bitrate compared with the standard HEVC coding software.
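    The role of the global motion vectors can be sketched as follows: given a perspective transformation model between two frames (here a hypothetical 3x3 homography for a pure camera translation; the sensor-based derivation is not shown), each block centre is projected into the next frame, and its displacement gives a per-block motion predictor:

    ```python
    import numpy as np

    def global_motion_vectors(H, block_centers):
        """Project block centres through a 3x3 perspective model H and return
        the per-block displacement (global motion vector) of each centre."""
        pts = np.hstack([block_centers, np.ones((len(block_centers), 1))])  # homogeneous
        proj = pts @ H.T
        proj = proj[:, :2] / proj[:, 2:3]      # back to Cartesian coordinates
        return proj - block_centers

    # Hypothetical model: the camera shifted 2 px right and 1 px down between frames.
    H = np.array([[1.0, 0.0, 2.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])
    centers = np.array([[32.0, 32.0], [96.0, 32.0], [32.0, 96.0]])
    print(global_motion_vectors(H, centers))   # each row is [2., 1.]
    ```

    In the paper's scheme the model comes from onboard sensor data rather than a known homography, and the resulting vectors bias the encoder's block search rather than replacing it, which is where the reported speedup originates.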
