
    A Study on Real-Time Video Mosaicking and Stabilization Using High-Speed Vision

    Hiroshima University, Doctor of Engineering (doctoral dissertation)

    Hybrid Video Stabilization for Mobile Vehicle Detection on SURF in Aerial Surveillance

    Detection of moving vehicles in aerial video sequences is of great importance, with many promising applications in surveillance, intelligent transportation, and public service applications such as emergency evacuation and public security. However, vehicle detection is a challenging task due to global camera motion, the low resolution of vehicles, and the low contrast between vehicles and background. In this paper, we present a hybrid method to efficiently detect moving vehicles in aerial videos. First, local feature extraction and matching were performed to estimate the global motion. It was demonstrated that Speeded Up Robust Feature (SURF) keypoints were more suitable for the stabilization task. Then, a list of dynamic pixels was obtained and grouped into different moving vehicles by comparing their optical flow normals. To enhance the precision of detection, some preprocessing methods, such as road extraction, were applied to the surveillance system. A quantitative evaluation on real video sequences indicated that the proposed method improved the detection performance significantly.
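
    A minimal sketch of the stabilization-plus-differencing idea described above is given below; it is not the authors' implementation. It estimates the global camera motion between two aerial frames from matched SURF keypoints (assuming OpenCV with the contrib modules is available; it falls back to ORB where SURF is not), warps the earlier frame, and differences the result to expose moving pixels.

        import cv2
        import numpy as np

        def stabilize_and_diff(prev_gray, curr_gray):
            """Estimate global motion from SURF (or ORB) matches, warp, and difference."""
            try:
                detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs opencv-contrib
                norm = cv2.NORM_L2
            except (AttributeError, cv2.error):
                detector = cv2.ORB_create(2000)  # fallback when SURF is unavailable
                norm = cv2.NORM_HAMMING

            kp1, des1 = detector.detectAndCompute(prev_gray, None)
            kp2, des2 = detector.detectAndCompute(curr_gray, None)

            matches = cv2.BFMatcher(norm, crossCheck=True).match(des1, des2)
            matches = sorted(matches, key=lambda m: m.distance)[:200]

            src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

            # Robustly fit a homography describing the global (camera) motion.
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

            # Compensate the camera motion, then difference to expose moving vehicles.
            h, w = curr_gray.shape
            warped_prev = cv2.warpPerspective(prev_gray, H, (w, h))
            diff = cv2.absdiff(curr_gray, warped_prev)
            _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
            return motion_mask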

    Traffic Surveillance and Automated Data Extraction from Aerial Video Using Computer Vision, Artificial Intelligence, and Probabilistic Approaches

    In transportation engineering, sufficient, reliable, and diverse traffic data is necessary for effective planning, operations, research, and professional practice. Using aerial imagery to achieve traffic surveillance and collect traffic data is one of the feasible ways, facilitated by the advances of technologies in many related areas. A great number of aerial imagery datasets are currently available, and more datasets are collected every day for various applications. It will be beneficial to make full and efficient use of this attribute-rich imagery as a resource for valid and useful traffic data for many applications in transportation research and practice. In this dissertation, a traffic surveillance system that can collect valid and useful traffic data using quality-limited aerial imagery datasets with diverse characteristics is developed. Two novel approaches, which can achieve robust and accurate performance, are proposed and implemented for this system. The first one is a computer vision-based approach, which uses a convolutional neural network (CNN) to detect vehicles in aerial imagery and uses features to track those detections. This approach is capable of detecting and tracking vehicles in aerial imagery datasets of very limited quality. Experimental results indicate that the performance of this approach is very promising: it can achieve accurate measurements for macroscopic traffic data and also shows potential for reliable microscopic traffic data. The second approach is a multiple hypothesis tracking (MHT) approach with innovative kinematics and appearance models (KAM). The implemented MHT module is designed to cooperate with the CNN module in order to extend and improve the vehicle tracking system. Experiments are designed based on meticulously established synthetic vehicle detection datasets, the originally induced scale-agnostic property of MHT, and comprehensively identified metrics for performance evaluation. The experimental results not only indicate that the performance of this approach can be very promising, but also provide solutions for some long-standing problems and reveal the impacts of frame rate, detection noise, and traffic configurations, as well as the effects of vehicle appearance information, on the performance. The experimental results of both approaches prove the feasibility of traffic surveillance and data collection by detecting and tracking vehicles in aerial video, and indicate the direction of further research as well as solutions to achieve satisfactory performance with existing aerial imagery datasets that have very limited quality and frame rates. This traffic surveillance system has the potential to be transformational in how large-area traffic data is collected in the future. Such a system will be capable of achieving wide-area traffic surveillance and extracting valid and useful traffic data from wide-area aerial video captured with a single platform.
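
    The pipeline above couples a CNN detector with multiple hypothesis tracking (MHT). The sketch below is a deliberately simplified, hypothetical stand-in for the association step: a greedy, single-hypothesis tracker that links per-frame bounding boxes by intersection-over-union. It only illustrates the detect-then-track structure; the dissertation's MHT with kinematics and appearance models is substantially more involved.

        from dataclasses import dataclass, field

        def iou(a, b):
            """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, x2 - x1) * max(0, y2 - y1)
            area_a = (a[2] - a[0]) * (a[3] - a[1])
            area_b = (b[2] - b[0]) * (b[3] - b[1])
            return inter / (area_a + area_b - inter + 1e-9)

        @dataclass
        class Track:
            track_id: int
            boxes: list = field(default_factory=list)

        def update_tracks(tracks, detections, next_id, iou_thresh=0.3):
            """Greedy single-hypothesis association of CNN detections to existing tracks."""
            unmatched = list(detections)
            for tr in tracks:
                if not unmatched:
                    break
                best = max(unmatched, key=lambda d: iou(tr.boxes[-1], d))
                if iou(tr.boxes[-1], best) >= iou_thresh:
                    tr.boxes.append(best)
                    unmatched.remove(best)
            for det in unmatched:          # start new tracks for unmatched detections
                tracks.append(Track(next_id, [det]))
                next_id += 1
            return tracks, next_id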

    Hydraulics and drones: observations of water level, bathymetry and water surface velocity from Unmanned Aerial Vehicles


    Aerial Semantic Mapping for Precision Agriculture using Multispectral Imagery

    Nowadays, constant technological evolution covers several necessities and daily tasks in our society. In particular, the use of drones, given their wide view for capturing images of the terrain surface, allows large amounts of information to be collected with high efficiency, performance and accuracy. This master's dissertation's main purpose is the analysis, classification and respective mapping of different terrain types and characteristics using multispectral imagery. The solar radiation reflected from the surface is captured by the different lenses of the multispectral camera used (the RedEdge-M, by MicaSense). Each of these five lenses captures a different colour spectrum (i.e. Blue, Green, Red, Near-Infrared and RedEdge). From the collected imagery it is possible to analyse various spectral indices, obtained by fusing different combinations of colour bands (e.g. NDVI, ENDVI, RDVI, ...). This project involves the development of a ROS (Robot Operating System) framework capable of correcting the captured imagery and, hence, calculating the implemented spectral indices. Several parametrizations of terrain analysis were carried out throughout the project, and this information was represented in semantic maps organised by layers (e.g. vegetation, water, soil, rocks). The experimental results obtained were validated in the scope of several projects incorporated in PDR2020, with success rates between 70% and 90%. This framework can have multiple technical applications, not only in Precision Agriculture, but also in autonomous vehicle navigation and multi-robot cooperation.
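
    To make the spectral-index step concrete, the following sketch computes NDVI from co-registered red and near-infrared bands and thresholds it into coarse semantic layers. It is a minimal NumPy illustration with hypothetical thresholds, not the ROS framework developed in the dissertation.

        import numpy as np

        def ndvi(nir, red, eps=1e-6):
            """NDVI = (NIR - Red) / (NIR + Red), with reflectance bands as arrays."""
            nir = nir.astype(np.float32)
            red = red.astype(np.float32)
            return (nir - red) / (nir + red + eps)

        def semantic_layers(ndvi_map):
            """Very coarse, hypothetical thresholds mapping NDVI to layer labels."""
            labels = np.zeros(ndvi_map.shape, dtype=np.uint8)
            labels[ndvi_map < 0.0] = 1                          # water / shadow
            labels[(ndvi_map >= 0.0) & (ndvi_map < 0.2)] = 2    # bare soil / rock
            labels[ndvi_map >= 0.2] = 3                         # vegetation
            return labels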

    StratoTrans: Unmanned Aerial System (UAS) 4G communication framework applied on the monitoring of road traffic and linear infrastructure

    This study provides an operational solution for connecting drones directly to the internet by means of 4G telecommunications and exploiting drone-acquired data, including telemetry and imagery but focusing on video transmission. The novelty of this work is the application of a 4G connection to link the drone directly to a data server where video (in this case to monitor road traffic) and imagery (in the case of linear infrastructures) are processed. However, this framework is applicable to any other monitoring purpose where the goal is to send real-time video or imagery to the headquarters where the drone data are processed, analyzed, and exploited. We describe a general framework and analyze some key points, such as the hardware to use, the data stream, and the network coverage, but also the complete resulting implementation of the applied unmanned aerial system (UAS) communication system through a Virtual Private Network (VPN), featuring a long-range telemetry and high-capacity video link (up to 15 Mbps, 720p video at 30 fps with 250 ms of latency). The application results in the real-time exploitation of the video, obtaining key information for traffic managers such as vehicle tracking, vehicle classification, speed estimation, and roundabout in-out matrices. The imagery download and storage are also performed through the internet, although the Structure from Motion postprocessing is not real-time due to the photogrammetric workflows. In conclusion, we describe a real-case application of drone connection to the internet through the 4G network, but it can be adapted to other applications. Although 5G will, in time, surpass 4G capacities, the described framework can enhance drone performance and facilitate paths for upgrading the connection of on-board devices to the 5G network.
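
    As a rough illustration of the real-time exploitation side, the sketch below reads a drone video stream over the network with OpenCV (the RTSP URL is a hypothetical placeholder for an endpoint reachable through the VPN) and applies simple background subtraction as a stand-in for the vehicle detection performed at the headquarters; the actual StratoTrans processing chain is not reproduced here.

        import cv2

        STREAM_URL = "rtsp://vpn-server.example/drone/live"  # hypothetical VPN endpoint

        def process_stream(url=STREAM_URL):
            """Read the 4G-delivered video stream and count moving blobs per frame."""
            cap = cv2.VideoCapture(url)
            subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)
            while cap.isOpened():
                ok, frame = cap.read()
                if not ok:
                    break
                mask = subtractor.apply(frame)                 # crude moving-object mask
                contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                               cv2.CHAIN_APPROX_SIMPLE)
                vehicles = [c for c in contours
                            if cv2.contourArea(c) > 150]       # illustrative size filter
                print(f"moving blobs in frame: {len(vehicles)}")
            cap.release()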

    EM 2000 Microbathymetric and HYDROSWEEP DS-2 Bathymetric Surveying – a Comparison of Seafloor Topography at Porcupine Bank, west of Ireland

    One of the latest discoveries in the world's oceans is the carbonate structures in the North-East Atlantic. Within the framework of several European projects, the research vessel POLARSTERN and the underwater robot VICTOR 6000 were engaged to explore these areas. The data described in this thesis were collected during the expedition ARK XIX/3 between 16 and 19 June 2003. Bathymetric and microbathymetric data in parts of the Pelagia Province, located on the northern Porcupine Bank, west of Ireland, were measured with two multibeam sonar systems deployed at different distances from the bottom. The four compared models come from a KONGSBERG SIMRAD EM 2000 multibeam sonar system and an ATLAS ELEKTRONIK HYDROSWEEP DS-2 multibeam sonar system. After the necessary corrections of the data, digital terrain models were created, subtracted and correlated using appropriate software. This thesis begins with a description of the historical background of bathymetry, followed by a description of the principles of navigation and underwater navigation, inertial navigation systems, and the calibration of these systems. Systematic errors are pointed out. It examines the measurement principles of the echo sounders used on the ARK XIX/3a expedition and the accompanying necessary procedures, such as CTD measurements. A discussion of how the data are processed from raw data to edited results, and of the effects of the errors, follows. One chapter is dedicated to a comparison and interpretation of the data. Sidescan, mosaic and PARASOUND data from the Hedge and Scarp Mounds are introduced as complementary information.
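
    The comparison step (subtracting and correlating the co-registered digital terrain models) can be sketched in a few lines of NumPy, assuming both grids have already been interpolated onto the same raster and NoData cells are encoded as NaN. This only illustrates the arithmetic, not the software actually used on the expedition data.

        import numpy as np

        def compare_dtms(dtm_a, dtm_b):
            """Difference grid and Pearson correlation of two co-registered DTMs (NaN = NoData)."""
            diff = dtm_a - dtm_b                              # depth difference per cell
            valid = ~np.isnan(dtm_a) & ~np.isnan(dtm_b)       # cells covered by both surveys
            r = np.corrcoef(dtm_a[valid], dtm_b[valid])[0, 1]
            stats = {
                "mean_offset_m": float(np.nanmean(diff)),
                "std_m": float(np.nanstd(diff)),
                "correlation": float(r),
            }
            return diff, stats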

    Video Processing with Additional Information

    Cameras are frequently deployed along with many additional sensors in aerial and ground-based platforms. Many video datasets have metadata containing measurements from inertial sensors, GPS units, etc. Hence, the development of better video processing algorithms that use this additional information attains special significance. We first describe an intensity-based algorithm for stabilizing low-resolution and low-quality aerial videos. The primary contribution is the idea of minimizing the discrepancy in the intensity of selected pixels between two images. This is an application of inverse compositional alignment for registering images of low resolution and low quality, for which minimizing the intensity difference over salient pixels with high gradients results in faster and better convergence than when using all the pixels. Secondly, we describe a feature-based method for stabilization of aerial videos and segmentation of small moving objects. We use the coherency of background motion to jointly track features through the sequence. This enables accurate tracking of large numbers of features in the presence of repetitive texture, a lack of well-conditioned feature windows, etc. We incorporate the segmentation problem within the joint feature tracking framework and propose the first combined joint-tracking and segmentation algorithm. The proposed approach enables highly accurate tracking, and a segmentation of feature tracks that is used in a MAP-MRF framework for obtaining dense pixelwise labeling of the scene. We demonstrate competitive moving object detection in challenging video sequences of the VIVID dataset containing moving vehicles and humans that are small enough to cause background subtraction approaches to fail. Structure from Motion (SfM) has matured to a stage where the emphasis is on developing fast, scalable and robust algorithms for large reconstruction problems. The availability of additional sensors such as inertial units and GPS along with video cameras motivates the development of SfM algorithms that leverage these additional measurements. In the third part, we study the benefits of a specific form of additional information, namely the vertical direction (gravity) and the height of the camera, both of which can be conveniently measured using inertial sensors, together with a monocular video sequence, for 3D urban modeling. We show that in the presence of this information, the SfM equations can be rewritten in a bilinear form. This allows us to derive a fast, robust, and scalable SfM algorithm for large-scale applications. The proposed SfM algorithm is experimentally demonstrated to have favorable properties compared to the sparse bundle adjustment algorithm. We provide experimental evidence indicating that the proposed algorithm converges in many cases to solutions with lower error than state-of-the-art implementations of bundle adjustment. We also demonstrate that for large reconstruction problems, the proposed algorithm takes less time to reach its solution than bundle adjustment. We also present SfM results using our algorithm on the Google StreetView research dataset and several other datasets.
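
    A rough sketch of the first contribution described above (registering two low-quality frames by minimizing the intensity discrepancy over a small set of high-gradient pixels) follows. For brevity it estimates a pure translation with a few Gauss-Newton steps instead of the full inverse compositional alignment, and all parameter choices are illustrative.

        import numpy as np
        from scipy.ndimage import map_coordinates, sobel

        def align_translation(template, image, n_pixels=2000, n_iters=20):
            """Estimate a translation (dx, dy) aligning `image` to `template`
            using only the highest-gradient (salient) pixels of the template."""
            template = template.astype(np.float64)
            image = image.astype(np.float64)

            # Sobel responses are 8x the central-difference gradient; normalize.
            gx = sobel(template, axis=1) / 8.0
            gy = sobel(template, axis=0) / 8.0
            mag = np.hypot(gx, gy)

            # Keep only the most salient pixels (largest gradient magnitude).
            idx = np.argsort(mag.ravel())[-n_pixels:]
            ys, xs = np.unravel_index(idx, template.shape)

            J = np.stack([gx[ys, xs], gy[ys, xs]], axis=1)   # Jacobian w.r.t. (dx, dy)
            H = J.T @ J                                      # precomputed Gauss-Newton Hessian
            t = template[ys, xs]

            p = np.zeros(2)                                  # current estimate of (dx, dy)
            for _ in range(n_iters):
                warped = map_coordinates(image, [ys + p[1], xs + p[0]], order=1)
                r = warped - t                               # intensity residuals at salient pixels
                dp = np.linalg.solve(H, J.T @ r)
                p -= dp
                if np.linalg.norm(dp) < 1e-3:
                    break
            return p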