615 research outputs found

    A pan-tilt camera Fuzzy vision controller on an unmanned aerial vehicle

    This paper presents an implementation of two Fuzzy Logic controllers working in parallel for a pan-tilt camera platform on a UAV. This implementation uses a basic Lucas-Kanade tracker algorithm, which sends information about the error between the center of the object to track and the center of the image to the Fuzzy controller. This information is enough for the controller to follow the object by moving a two-axis servo platform, despite the UAV vibrations and movements. The two Fuzzy controllers, one per axis, work with a rule base of 49 rules, two inputs and one output, with a more significant sector defined to improve the behavior of those controllers.
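
    As a hedged illustration of the kind of controller described above, the sketch below implements a minimal two-input (pixel error, error derivative), one-output Mamdani-style fuzzy rule evaluation with triangular membership functions; the 7x7 = 49-rule layout mirrors the rule-base size mentioned in the abstract, but the membership ranges, rule table, and output gain are illustrative assumptions, not the authors' tuning.

        # Minimal Mamdani-style fuzzy controller sketch (assumed parameters, not the
        # authors' tuning). Two inputs (pixel error, error derivative), one output
        # (servo rate command), 7 labels per input -> 49 rules, centroid-like defuzzification.
        import numpy as np

        LABELS = [-3, -2, -1, 0, 1, 2, 3]          # NB ... PB encoded as integers

        def tri(x, a, b, c):
            """Triangular membership value of x for the triangle (a, b, c)."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fuzzify(x, span):
            """Membership of x in each of 7 evenly spaced triangles over [-span, span]."""
            centers = np.linspace(-span, span, 7)
            w = centers[1] - centers[0]
            return {l: tri(x, c - w, c, c + w) for l, c in zip(LABELS, centers)}

        def fuzzy_step(err_px, derr_px, err_span=160.0, derr_span=80.0, out_gain=5.0):
            """One controller update: returns an assumed servo rate command (deg/s)."""
            mu_e, mu_de = fuzzify(err_px, err_span), fuzzify(derr_px, derr_span)
            num = den = 0.0
            for le, we in mu_e.items():
                for lde, wde in mu_de.items():
                    w = min(we, wde)                      # rule firing strength (AND = min)
                    out_label = max(-3, min(3, le + lde)) # assumed rule table: saturated sum
                    num += w * out_label
                    den += w
            return out_gain * (num / den) if den > 0 else 0.0

        # Usage: the error between the tracked object center and the image center drives pan;
        # the tilt axis would run an identical controller on the vertical error.
        print(fuzzy_step(err_px=40.0, derr_px=-5.0))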

    A Decentralized Interactive Architecture for Aerial and Ground Mobile Robots Cooperation

    This paper presents a novel decentralized interactive architecture for cooperation between aerial and ground mobile robots. The aerial mobile robot is used to provide global coverage during an area inspection, while the ground mobile robot is used to provide local coverage of ground features. We include a human-in-the-loop to provide waypoints for the ground mobile robot so that it can progress safely in the inspected area. The aerial mobile robot continuously follows the ground mobile robot in order to always keep it in its coverage view. Comment: Submitted to the 2015 International Conference on Control, Automation and Robotics (ICCAR).
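
    A minimal sketch of the follow-the-ground-robot behaviour described above, assuming the aerial robot simply regulates its horizontal position toward the ground robot's last reported position; the gains, speed limit and update rate are illustrative assumptions rather than the paper's actual architecture.

        # Hypothetical sketch: the aerial robot keeps the ground robot in its coverage view
        # by steering its horizontal position toward the ground robot's last known pose.
        import numpy as np

        def follow_step(uav_xy, ugv_xy, dt=0.1, kp=0.8, v_max=3.0):
            """Proportional position follower; returns the UAV's next XY position."""
            error = np.asarray(ugv_xy, float) - np.asarray(uav_xy, float)
            vel = kp * error                         # proportional velocity command
            speed = np.linalg.norm(vel)
            if speed > v_max:                        # saturate to an assumed max speed
                vel *= v_max / speed
            return np.asarray(uav_xy, float) + vel * dt

        uav, ugv = np.array([0.0, 0.0]), np.array([5.0, -2.0])
        for _ in range(50):                          # ground robot held still for simplicity
            uav = follow_step(uav, ugv)
        print(uav)  # converges toward the ground robot position, keeping it in view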

    Evaluating the accuracy of vehicle tracking data obtained from Unmanned Aerial Vehicles

    This paper presents a methodology for tracking moving vehicles that integrates Unmanned Aerial Vehicles with video processing techniques. The authors investigated the usefulness of Unmanned Aerial Vehicles for capturing reliable individual vehicle data, using GPS technology as a benchmark. A video processing algorithm for vehicle trajectory acquisition is introduced. The algorithm is based on OpenCV libraries. In order to assess the accuracy of the proposed video processing algorithm, an instrumented vehicle was equipped with a high-precision GPS. The video capture experiments were performed in two case studies. About 24,000 positioning data points were acquired in the field for the analysis. The results of these experiments highlight the versatility of Unmanned Aerial Vehicle technology combined with video processing techniques in monitoring real traffic data.
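
    The accuracy assessment sketched below is a hedged reconstruction of the kind of comparison the abstract implies: video-derived vehicle positions are matched to GPS benchmark fixes by nearest timestamp and a positional RMSE is computed. The function name, local XY coordinates, and the RMSE metric itself are assumptions; the paper's exact evaluation procedure may differ.

        # Hypothetical accuracy check: compare video-tracked positions against GPS
        # benchmark positions by nearest-timestamp matching and report RMSE (metres).
        import numpy as np

        def rmse_vs_gps(track_t, track_xy, gps_t, gps_xy):
            """track_* : video-derived timestamps (s) and local XY positions (m).
               gps_*   : benchmark timestamps (s) and local XY positions (m), time-sorted."""
            gps_t = np.asarray(gps_t)
            idx = np.searchsorted(gps_t, track_t)                 # candidate neighbour in time
            idx = np.clip(idx, 1, len(gps_t) - 1)
            left_closer = (np.asarray(track_t) - gps_t[idx - 1]) < (gps_t[idx] - np.asarray(track_t))
            idx = np.where(left_closer, idx - 1, idx)             # pick the nearer GPS fix
            diff = np.asarray(track_xy) - np.asarray(gps_xy)[idx]
            return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

        # Toy example with synthetic data (not the paper's ~24,000 field measurements).
        t = np.arange(0.0, 10.0, 0.2)
        gps = np.stack([10.0 * t, 0.05 * t ** 2], axis=1)
        video = gps + np.random.normal(0.0, 0.5, gps.shape)       # assumed 0.5 m tracking noise
        print(rmse_vs_gps(t, video, t, gps))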

    Visual servoing using fuzzy controllers on an unmanned aerial vehicle

    This paper presents an implementation of three Fuzzy Logic controllers working in parallel onboard a UAV: two for a pan-tilt camera platform and the third for controlling the yaw of the helicopter. This implementation uses a Lucas-Kanade tracker algorithm with a pyramidal optical flow implementation, which gives the information needed to follow static and moving objects, despite the UAV vibrations and movements. The platform controller is helped by the heading controller in order to smooth out large movements of the platform, reducing the risk of losing the warp selection of the object to track. The heading control also removes the physical limit of the platform on the yaw axis. Some laboratory and UAV tests are presented in order to show the different behaviors and the good response of the presented controllers.
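
    Below is a hedged sketch of the pyramidal Lucas-Kanade tracking step that feeds such controllers, using OpenCV's calcOpticalFlowPyrLK to propagate a tracked point and derive the pixel error relative to the image center; the window size, pyramid levels, video source and single-point selection are illustrative assumptions.

        # Sketch: pyramidal Lucas-Kanade tracking of one selected point; the pixel error
        # to the image center is what pan-tilt / heading fuzzy controllers would consume.
        import cv2
        import numpy as np

        lk_params = dict(winSize=(21, 21), maxLevel=3,              # assumed settings
                         criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

        cap = cv2.VideoCapture(0)                                    # any video source
        ok, prev = cap.read()
        if not ok:
            raise SystemExit("no video source available")
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        h, w = prev_gray.shape
        point = np.array([[[w / 2.0, h / 2.0]]], dtype=np.float32)   # point chosen by the user/tracker

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            new_point, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, point, None, **lk_params)
            if status[0][0] == 1:
                point = new_point
                err_x = point[0, 0, 0] - w / 2.0                     # horizontal error -> pan / yaw
                err_y = point[0, 0, 1] - h / 2.0                     # vertical error -> tilt
                # err_x, err_y would be passed to the fuzzy controllers here.
            prev_gray = gray
        cap.release()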

    Compressed UAV sensing for flood monitoring by solving the continuous travelling salesman problem over hyperspectral maps

    This is the final version. Available from SPIE via the DOI in this record. Remote Sensing of the Ocean, Sea Ice, Coastal Waters, and Large Water Regions 2018, 10-13 September 2018, Berlin, Germany. Unmanned Aerial Vehicles (UAVs) have shown great capability for disaster management due to their fast speed, automated deployment and low maintenance requirements. In recent years, disasters such as flooding are having increasingly damaging societal and environmental effects. To reduce their impact, real-time and reliable flood monitoring and prevention strategies are required. The limited battery life of small lightweight UAVs imposes efficient strategies to subsample the sensing field. This paper proposes a novel solution to maximise the number of inspected flooded cells while keeping the travelled distance bounded. Our proposal solves the so-called continuous Travelling Salesman Problem (TSP), where the costs of travelling from one cell to another depend not only on the distance, but also on the presence of water. To determine the optimal path between checkpoints, we employ the fast sweeping algorithm using a cost function defined from hyperspectral satellite maps identifying flooded regions. Preliminary results using MODIS flood maps show that our UAV planning strategy achieves a covered flooded surface approximately 4 times greater for the same travelled distance when compared to the conventional TSP solution. These results show new insights on the use of hyperspectral imagery acquired from UAVs to monitor water resources. This work was funded by the Royal Society of Edinburgh and the National Science Foundation of China within the international project “Flood Detection and Monitoring using Hyperspectral Remote Sensing from Unmanned Aerial Vehicles” (project NNS/INT 15-16 Casaseca).
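
    As a hedged sketch of the planning idea, the code below solves the eikonal equation |grad T| = c(x) over a water-dependent cost map with a fast sweeping iteration, then orders checkpoints greedily by the resulting travel costs; the cost values, toy grid and greedy ordering are assumptions standing in for the paper's exact continuous-TSP formulation.

        # Sketch: fast sweeping solution of |grad T| = c(x) on a grid whose cost c is
        # lower over flooded cells (cheaper/more valuable to traverse), then a greedy
        # checkpoint ordering using the resulting travel times. Costs are assumed.
        import numpy as np

        def fast_sweep(cost, src, h=1.0, n_sweeps=8):
            """Approximate travel time T from src for the eikonal equation |grad T| = cost."""
            ny, nx = cost.shape
            T = np.full((ny, nx), np.inf)
            T[src] = 0.0
            orders = [(range(ny), range(nx)), (range(ny), range(nx - 1, -1, -1)),
                      (range(ny - 1, -1, -1), range(nx)),
                      (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
            for s in range(n_sweeps):
                iy, ix = orders[s % 4]
                for i in iy:
                    for j in ix:
                        if (i, j) == src:
                            continue
                        a = min(T[i - 1, j] if i > 0 else np.inf, T[i + 1, j] if i < ny - 1 else np.inf)
                        b = min(T[i, j - 1] if j > 0 else np.inf, T[i, j + 1] if j < nx - 1 else np.inf)
                        if np.isinf(a) and np.isinf(b):
                            continue                         # wavefront has not arrived yet
                        f = cost[i, j] * h
                        if abs(a - b) >= f:
                            t_new = min(a, b) + f
                        else:
                            t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                        T[i, j] = min(T[i, j], t_new)
            return T

        # Toy map: flooded cells (cost 1) are cheaper to cross than dry cells (cost 3).
        flooded = np.zeros((40, 40), bool)
        flooded[10:30, 5:35] = True
        cost = np.where(flooded, 1.0, 3.0)
        checkpoints = [(2, 2), (15, 20), (35, 30), (25, 8)]

        # Greedy ordering by fast-sweeping travel time (a stand-in for a full TSP solver).
        route, current = [checkpoints[0]], checkpoints[0]
        remaining = set(checkpoints[1:])
        while remaining:
            T = fast_sweep(cost, current)
            current = min(remaining, key=lambda p: T[p])
            route.append(current)
            remaining.remove(current)
        print(route)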

    Traffic Surveillance and Automated Data Extraction from Aerial Video Using Computer Vision, Artificial Intelligence, and Probabilistic Approaches

    In transportation engineering, sufficient, reliable, and diverse traffic data is necessary for effective planning, operations, research, and professional practice. Using aerial imagery to achieve traffic surveillance and collect traffic data is one of the feasible ways that is facilitated by the advances of technologies in many related areas. A great number of aerial imagery datasets are currently available and more are collected every day for various applications. It will be beneficial to make full and efficient use of this attribute-rich imagery as a resource for valid and useful traffic data in transportation research and practice. In this dissertation, a traffic surveillance system that can collect valid and useful traffic data using quality-limited aerial imagery datasets with diverse characteristics is developed. Two novel approaches, which can achieve robust and accurate performance, are proposed and implemented for this system. The first one is a computer vision-based approach, which uses a convolutional neural network (CNN) to detect vehicles in aerial imagery and uses features to track those detections. This approach is capable of detecting and tracking vehicles in aerial imagery datasets of very limited quality. Experimental results indicate that the performance of this approach is very promising: it can achieve accurate measurements for macroscopic traffic data and also shows potential for reliable microscopic traffic data. The second approach is a multiple hypothesis tracking (MHT) approach with innovative kinematics and appearance models (KAM). The implemented MHT module is designed to cooperate with the CNN module in order to extend and improve the vehicle tracking system. Experiments are designed based on meticulously established synthetic vehicle detection datasets, the originally induced scale-agnostic property of MHT, and comprehensively identified metrics for performance evaluation. The experimental results not only indicate that the performance of this approach can be very promising, but also provide solutions for some long-standing problems and reveal the impacts of frame rate, detection noise, and traffic configurations, as well as the effects of vehicle appearance information, on the performance. The experimental results of both approaches prove the feasibility of traffic surveillance and data collection by detecting and tracking vehicles in aerial video, and indicate the direction of further research as well as solutions to achieve satisfactory performance with existing aerial imagery datasets that have very limited quality and frame rates. This traffic surveillance system has the potential to be transformational in how large-area traffic data is collected in the future. Such a system will be capable of achieving wide-area traffic surveillance and extracting valid and useful traffic data from wide-area aerial video captured with a single platform.
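
    As a hedged illustration of the kinematics side of such a kinematics-and-appearance model, the sketch below runs a constant-velocity Kalman filter over noisy image-plane detections of one vehicle; the state layout, noise levels and frame rate are assumptions, and the full MHT hypothesis management and appearance scoring are omitted.

        # Sketch: constant-velocity Kalman filter over image-plane vehicle detections,
        # the kind of kinematic model a tracker (e.g. within MHT) could score against.
        # Noise levels and frame rate are assumed values, not the dissertation's.
        import numpy as np

        dt = 1.0 / 10.0                                    # assumed 10 fps aerial video
        F = np.array([[1, 0, dt, 0],                       # state: [x, y, vx, vy]
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], float)
        H = np.array([[1, 0, 0, 0],                        # only position is observed
                      [0, 1, 0, 0]], float)
        Q = 0.5 * np.eye(4)                                # process noise (assumed)
        R = 4.0 * np.eye(2)                                # detection noise, pixels^2 (assumed)

        def kf_step(x, P, z):
            """One predict/update cycle; z is the detected (x, y) centre in pixels."""
            x, P = F @ x, F @ P @ F.T + Q                  # predict
            y = z - H @ x                                  # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
            return x + K @ y, (np.eye(4) - K @ H) @ P

        # Toy track: a vehicle moving right at ~30 px/s with noisy detections.
        x, P = np.array([0.0, 0.0, 0.0, 0.0]), 100.0 * np.eye(4)
        for k in range(50):
            z = np.array([30.0 * k * dt, 5.0]) + np.random.normal(0, 2.0, 2)
            x, P = kf_step(x, P, z)
        print(x)   # estimated position and velocity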

    Automatic Pipeline Surveillance Air-Vehicle

    This thesis presents the development of a vision-based system for aerial pipeline Right-of-Way surveillance using optical/infrared sensors mounted on Unmanned Aerial Vehicles (UAV). The aim of the research is to develop a highly automated, on-board system for detecting and following pipelines while simultaneously detecting any third-party interference. The proposed approach of using a UAV platform could potentially reduce the cost of monitoring and surveying pipelines when compared to manned aircraft. The main contributions of this thesis are the development of the image-analysis algorithms, the overall system architecture, and validation in hardware based on a scaled-down test environment. To evaluate the performance of the system, the algorithms were coded using the Python programming language. A small-scale test-rig of the pipeline structure, as well as expected third-party interference, was set up to simulate the operational environment and capture/record data for algorithm testing and validation. The pipeline endpoints are identified by transforming the 16-bit depth data of the explored environment into 3D point-cloud world coordinates. Then, using the Random Sample Consensus (RANSAC) approach, the foreground and background are separated based on the transformed 3D point cloud to extract the plane that corresponds to the ground. Simultaneously, the boundaries of the explored environment are detected from the 16-bit depth data using a Canny detector. These boundaries are then transformed into a 3D point cloud and filtered based on the real height of the pipeline, using the Euclidean distance of each boundary point relative to the ground plane extracted previously, for fast and accurate measurements. The filtered boundaries, once transformed back into 16-bit depth data, are used to detect the straight lines of the object boundary (Hough lines) using a Hough transform method. The pipeline is verified by estimating a centre line segment using the 3D point cloud of each pair of Hough line segments (transformed into 3D). The linearity of the corresponding pipeline point cloud is then filtered within the width of the pipeline using Euclidean distance in the foreground point cloud, and the segment length of the detected centre line is extended along the filtered point cloud of the pipeline to match the exact pipeline segment. Third-party interference is detected based on four parameters, namely: foreground depth data; pipeline depth data; pipeline endpoint locations in the 3D point cloud; and Right-of-Way distance. The techniques include detection, classification, and localization algorithms. Finally, a waypoint-based navigation system was implemented for the air-vehicle to fly over the course waypoints, which were generated online by a heading angle demand to follow the pipeline structure in real time based on the online identification of the pipeline endpoints relative to the camera frame.
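
    A hedged sketch of the ground-plane separation step described above: a minimal RANSAC plane fit over a 3D point cloud, after which points are split into ground (inliers) and off-ground structure by their distance to the fitted plane. The thresholds, iteration count and synthetic point cloud are assumptions, not the thesis implementation.

        # Sketch: RANSAC plane fit to separate the ground from off-ground structure
        # (e.g. a pipeline) in a 3D point cloud. Thresholds and iterations are assumed.
        import numpy as np

        def ransac_plane(points, n_iter=200, dist_thresh=0.02, rng=np.random.default_rng(0)):
            """Returns (normal, d, inlier_mask) for the plane n.x + d = 0 with most inliers."""
            best_inliers, best_model = None, None
            for _ in range(n_iter):
                sample = points[rng.choice(len(points), 3, replace=False)]
                normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
                norm = np.linalg.norm(normal)
                if norm < 1e-9:                                   # degenerate (collinear) sample
                    continue
                normal /= norm
                d = -normal @ sample[0]
                inliers = np.abs(points @ normal + d) < dist_thresh
                if best_inliers is None or inliers.sum() > best_inliers.sum():
                    best_inliers, best_model = inliers, (normal, d)
            return best_model[0], best_model[1], best_inliers

        # Toy cloud: a flat ground plane plus a raised pipe-like ridge of points.
        rng = np.random.default_rng(1)
        ground = np.column_stack([rng.uniform(0, 5, 2000), rng.uniform(0, 5, 2000),
                                  rng.normal(0.0, 0.005, 2000)])
        pipe = np.column_stack([rng.uniform(0, 5, 300), rng.normal(2.5, 0.05, 300),
                                rng.normal(0.15, 0.01, 300)])     # assumed 15 cm pipe height
        cloud = np.vstack([ground, pipe])

        normal, d, ground_mask = ransac_plane(cloud)
        height = np.abs(cloud @ normal + d)                       # Euclidean distance to ground plane
        print("ground points:", ground_mask.sum(), "off-ground points:", (~ground_mask).sum())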

    Target classification in multimodal video

    The presented thesis focuses on enhancing scene segmentation and target recognition methodologies via the mobilisation of contextual information. The algorithms developed to achieve this goal utilise multi-modal sensor information collected across varying scenarios, from controlled indoor sequences to challenging rural locations. Sensors are chiefly colour band and long-wave infrared (LWIR), enabling persistent surveillance capabilities across all environments. In the drive to develop effectual algorithms towards the outlined goals, key obstacles are identified and examined: the recovery of background scene structure from foreground object 'clutter'; employing contextual foreground knowledge to circumvent training a classifier when labelled data is not readily available; creating a labelled LWIR dataset to train a convolutional neural network (CNN) based object classifier; and the viability of spatial context to address long-range target classification when big-data solutions are not enough. For an environment displaying frequent foreground clutter, such as a busy train station, we propose an algorithm exploiting foreground object presence to segment underlying scene structure that is not often visible. If such a location is outdoors and surveyed by an infra-red (IR) and visible band camera set-up, scene context and contextual knowledge transfer allow reasonable class predictions for thermal signatures within the scene to be determined. Furthermore, a labelled LWIR image corpus is created to train an infrared object classifier using a CNN approach. The trained network demonstrates an effective classification accuracy of 95% over 6 object classes. However, performance is not sustainable for IR targets acquired at long range due to low signal quality, and classification accuracy drops. This is addressed by mobilising spatial context to affect the network class scores, restoring robust classification capability.
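
    The sketch below is a hedged stand-in for the kind of CNN classifier described above: a small convolutional network over single-channel LWIR chips with 6 output classes. The layer sizes, input resolution and toy data are assumptions; the thesis' actual architecture and its 95% accuracy are not reproduced here.

        # Sketch: a small CNN for 6-class LWIR object classification (assumed layout,
        # 64x64 single-channel input chips; not the thesis' actual network).
        import torch
        import torch.nn as nn

        class LWIRClassifier(nn.Module):
            def __init__(self, n_classes=6):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(), nn.Linear(128, n_classes)
                )

            def forward(self, x):
                return self.classifier(self.features(x))

        model = LWIRClassifier()
        chips = torch.randn(4, 1, 64, 64)    # a batch of thermal chips (toy data)
        logits = model(chips)                # class scores that spatial context could later re-weight
        print(logits.shape)                  # torch.Size([4, 6])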