
    MOR-UAV: A Benchmark Dataset and Baselines for Moving Object Recognition in UAV Videos

    Visual data collected from Unmanned Aerial Vehicles (UAVs) has opened a new frontier of computer vision that requires automated analysis of aerial images and videos. However, existing UAV datasets primarily focus on object detection, and an object detector does not differentiate between moving and non-moving objects. Given a real-time UAV video stream, how can we both localize and classify the moving objects, i.e., perform moving object recognition (MOR)? MOR is one of the essential tasks supporting various UAV vision-based applications, including aerial surveillance, search and rescue, event recognition, and urban and rural scene understanding. To the best of our knowledge, no labeled dataset is available for MOR evaluation in UAV videos. Therefore, in this paper, we introduce MOR-UAV, a large-scale video dataset for MOR in aerial videos. We achieve this by labeling axis-aligned bounding boxes for moving objects, which requires fewer computational resources than producing pixel-level estimates. We annotate 89,783 moving object instances collected from 30 UAV videos, consisting of 10,948 frames in various scenarios such as differing weather conditions, occlusion, changing flight altitude, and multiple camera views. We assign labels for two vehicle categories (car and heavy vehicle). Furthermore, we propose a deep unified framework, MOR-UAVNet, for MOR in UAV videos. Since this is the first attempt at MOR in UAV videos, we present 16 baseline results based on the proposed framework over the MOR-UAV dataset through quantitative and qualitative experiments. We also analyze the motion-salient regions in the network through multiple layer visualizations. MOR-UAVNet works online at inference, as it requires only a few past frames, and it does not require predefined target initialization from the user. Experiments also demonstrate that the MOR-UAV dataset is quite challenging.

    Survey on video anomaly detection in dynamic scenes with moving cameras

    The increasing popularity of compact and inexpensive cameras, e.g., dash cameras, body cameras, and cameras mounted on robots, has sparked a growing interest in detecting anomalies within dynamic scenes recorded by moving cameras. However, existing reviews primarily concentrate on Video Anomaly Detection (VAD) methods that assume static cameras. The VAD literature with moving cameras remains fragmented, lacking comprehensive reviews to date. To address this gap, we endeavor to present the first comprehensive survey on Moving Camera Video Anomaly Detection (MC-VAD). We delve into the research papers related to MC-VAD, critically assessing their limitations and highlighting associated challenges. Our exploration encompasses three application domains: security, urban transportation, and marine environments, which in turn cover six specific tasks. We compile an extensive list of 25 publicly available datasets spanning four distinct environments: underwater, water surface, ground, and aerial. We summarize the types of anomalies these datasets correspond to or contain, and present five main categories of approaches for detecting such anomalies. Lastly, we identify future research directions and discuss novel contributions that could advance the field of MC-VAD. With this survey, we aim to offer a valuable reference for researchers and practitioners striving to develop and advance state-of-the-art MC-VAD methods.

    Bridge Inspection: Human Performance, Unmanned Aerial Systems and Automation

    Unmanned aerial systems (UASs) have attracted considerable private and commercial interest for a variety of jobs and entertainment over the past 10 years. This paper is a literature review of the state of practice for United States bridge inspection programs and outlines how automated and unmanned bridge inspections can be made suitable for present and future needs. At its best, current technology limits UAS use to an assistive tool that helps the inspector perform a bridge inspection faster, more safely, and without traffic closure. The major challenges for UASs are satisfying restrictive Federal Aviation Administration regulations, control issues in a GPS-denied environment, pilot expenses and availability, time and cost allocated to tuning, maintenance, post-processing time, and acceptance of the collected data by bridge owners. Using UASs with self-navigation abilities and improving image-processing algorithms to provide results in near real time could revolutionize the bridge inspection industry by providing accurate, multi-use, autonomous three-dimensional models and damage identification.

    A Review of Algorithms, Methods, and Techniques for UAV and UAS Detection in Audio, Radio-Frequency, and Video Applications

    Unmanned Aerial Vehicles (UAVs), also known as drones, have evolved rapidly in recent times, due in large part to advances in the technologies that power these devices. This has resulted in increasingly affordable and better-equipped artifacts, enabling their application in new fields such as agriculture, transport, monitoring, and aerial photography. However, drones have also been used in terrorist acts, privacy violations, and espionage, in addition to involuntary accidents in high-risk zones such as airports. In response to these events, multiple technologies have been introduced to control and monitor the airspace in order to ensure protection in risk areas. This paper is a review of the state of the art of the techniques, methods, and algorithms used in video-, radio-frequency-, and audio-based applications to detect UAVs and Unmanned Aircraft Systems (UAS). This study can serve as a starting point for developing future drone detection systems with the most suitable technologies to meet requirements of optimal scalability, portability, reliability, and availability.

    Developing a Traffic Safety Diagnostics System for Unmanned Aerial Vehicles Using Deep Learning Algorithms

    This thesis presents an automated traffic safety diagnostics solution that uses deep learning techniques to process traffic videos captured by an Unmanned Aerial Vehicle (UAV). Mask R-CNN is employed to better detect vehicles in UAV videos after video stabilization. Vehicle trajectories are generated by tracking the detected vehicles with the Channel and Spatial Reliability Tracking (CSRT) algorithm. During the detection process, missing vehicles can be recovered by identifying stopped vehicles and comparing the Intersection over Union (IoU) between the tracking results and the detection results. In addition, rotated bounding rectangles based on the pixel-level masks generated by Mask R-CNN detection are introduced to obtain precise vehicle size and location data. Moreover, surrogate safety measures, i.e., post-encroachment time (PET), are calculated for each conflict event at the pixel level. Conflicts can therefore be identified by comparing the PET values against a threshold, and conflict types including rear-end, head-on, sideswipe, and angle conflicts can be determined. A case study is presented at a typical signalized intersection, and the results indicate that the proposed framework can notably improve the accuracy of the output data. Furthermore, by calculating the PET values for each conflict event, an automated traffic safety diagnostic of the studied intersection can be conducted. According to the research, rear-end conflicts are the most prevalent conflict type at the studied location, while one angle conflict was identified during the study period. It is expected that the proposed method can help diagnose safety problems efficiently with UAVs, after which appropriate countermeasures can be proposed.
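The IoU comparison used to re-associate tracker output with detections can be sketched in a few lines. This is a minimal illustration: the box coordinates and the 0.5 matching threshold below are assumptions for demonstration, not values taken from the thesis.

```python
# Hedged sketch: Intersection over Union (IoU) between two axis-aligned
# boxes given as (x1, y1, x2, y2), as used to match a tracked box
# against a fresh detection.

def iou(a, b):
    """IoU of boxes a and b, each (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A tracked box is treated as the same vehicle as a detection when the
# IoU exceeds a chosen threshold (0.5 here, an illustrative assumption).
track = (10, 10, 50, 50)   # hypothetical tracker output
det = (12, 12, 52, 52)     # hypothetical detector output
matched = iou(track, det) > 0.5
```

The same overlap test, applied frame to frame, is what lets a tracker bridge gaps when the detector temporarily misses a stopped vehicle.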

    Mock Certification Basis for an Unmanned Rotorcraft for Precision Agricultural Spraying

    This technical report presents the results of a case study using a hazard-based approach to develop preliminary design and performance criteria for an unmanned agricultural rotorcraft requiring airworthiness certification. This case study is one of the first in the public domain to examine design and performance criteria for an unmanned aircraft system (UAS) in tandem with its concept of operations. The case study results are intended to support development of airworthiness standards that could form a minimum safety baseline for midsize unmanned rotorcraft performing precision agricultural spraying operations under beyond visual line-of-sight conditions in a rural environment. This study investigates the applicability of current methods, processes, and standards for assuring airworthiness of conventionally piloted (manned) aircraft to assuring the airworthiness of UAS. The study started with the development of a detailed concept of operations for precision agricultural spraying with an unmanned rotorcraft (pp. 5-18). The concept of operations in conjunction with a specimen unmanned rotorcraft were used to develop an operational context and a list of relevant hazards (p. 22). Minimum design and performance requirements necessary to mitigate the hazards provide the foundation of a proposed (or mock) type certification basis. A type certification basis specifies the applicable standards an applicant must show compliance with to receive regulatory approval. A detailed analysis of the current airworthiness regulations for normal-category rotorcraft (14 Code of Federal Regulations, Part 27) was performed. Each Part 27 regulation was evaluated to determine whether it mitigated one of the relevant hazards for the specimen UAS. Those regulations that did were included in the initial core of the type certification basis (pp. 26-31) as written or with some simple modifications. 
Those regulations that did not mitigate a recognized hazard were excluded from the certification basis. The remaining regulations were applicable in intent, but their text could not be easily tailored; those regulations were addressed in separate issue papers. Exploiting established regulations, with their accepted, standardized language, avoids the difficult task of generating and interpreting novel requirements. The rationale for the disposition of the regulations was assessed and captured (pp. 58-115). The core basis was then augmented by generating additional requirements (pp. 38-47) to mitigate hazards for an unmanned sprayer that are not covered in Part 27.

    Automatic Fire Detection Using Computer Vision Techniques for UAV-based Forest Fire Surveillance

    Due to their rapid response capability and maneuverability, extended operational range, and improved personnel safety, unmanned aerial vehicles (UAVs) with vision-based systems have great potential for forest fire surveillance and detection. Over the last decade, there has been an increasingly strong demand for UAV-based forest fire detection systems, as they can avoid many drawbacks of forest fire detection systems based on satellites, manned aerial vehicles, and ground equipment. Despite this, existing UAV-based forest fire detection systems still face numerous practical issues in operational conditions. In particular, successful forest fire detection remains difficult, given the highly complicated and unstructured forest environment, smoke blocking the fire, the motion of cameras mounted on UAVs, and objects with flame-like characteristics. These adverse effects can cause either false alarms or missed detections. To successfully execute missions, meet the corresponding performance criteria, and overcome these ever-increasing challenges, investigations into reducing false alarm rates, increasing the probability of successful detection, and enhancing adaptability to various circumstances are needed to improve the reliability and accuracy of forest fire detection systems. In line with these requirements, this thesis concentrates on the development of reliable and accurate forest fire detection algorithms applicable to UAVs.
These algorithms provide a number of contributions: (1) a two-layered forest fire detection method is designed considering both color and motion features of fire; it is expected to greatly improve forest fire detection performance while significantly reducing the background motion caused by the movement of the UAV; (2) a forest fire detection scheme is devised combining both visual and infrared images to increase the accuracy and reliability of forest fire alarms; and (3) a learning-based fire detection approach is developed for distinguishing smoke (widely considered an early signal of fire) from other analogues, achieving early-stage fire detection.
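The color layer of a color-plus-motion scheme like contribution (1) can be sketched with a simple per-pixel rule. The specific rule and threshold below are common heuristics from the fire detection literature, stated as assumptions, not the thesis's exact algorithm.

```python
# Hedged sketch of a color-based fire-pixel rule in RGB space. Flame
# pixels tend to be red-dominant and bright; the r_min threshold of 180
# is an illustrative assumption.

def is_fire_pixel(r, g, b, r_min=180):
    # Candidate fire pixel: red channel dominates (R > G > B) and the
    # red intensity exceeds a brightness threshold.
    return r > r_min and r > g > b

# A frame would be scanned pixel by pixel (or vectorized with NumPy),
# and the surviving candidate regions then filtered by the motion cue.
assert is_fire_pixel(220, 150, 60)       # warm, red-dominant pixel
assert not is_fire_pixel(60, 80, 200)    # sky-like blue pixel
```

In a two-layered pipeline, this cheap color test prunes most of the frame before the more expensive motion analysis runs on the remaining candidates.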

    Autonomous Target Tracking Of A Quadrotor UAV Using Monocular Visual-Inertial Odometry

    Unmanned Aerial Vehicles (UAVs) have been finding their way into different applications. Hence, recent years have witnessed extensive research towards achieving higher autonomy in UAVs. Computer Vision (CV) algorithms perform real-time pose estimation in place of a Global Navigation Satellite System (GNSS), which is not reliable in bad weather, inside buildings, or in secluded areas. The controller then uses the pose to navigate the UAV. This project presents a simulation of a UAV, in MATLAB & SIMULINK, capable of autonomously detecting and tracking a designed visual marker. Building on and improving state-of-the-art CV algorithms, a new approach is formulated to detect the designed visual marker. Combining data from the monocular camera with data from an Inertial Measurement Unit (IMU) and a sonar sensor enables pose estimation of the UAV relative to the designed visual marker. A Proportional-Integral-Derivative (PID) controller then uses the pose of the UAV to navigate it so that it continually follows the target of interest.
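The PID loop that steers the UAV toward the marker can be sketched in discrete form. The gains, sample time, and toy plant below are illustrative assumptions, not the tuned values from the project's SIMULINK model.

```python
# Minimal discrete PID controller sketch, assuming the pose error (e.g.
# the UAV's lateral offset from the visual marker) is sampled at a
# fixed rate dt. All gains here are illustrative, not tuned values.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        # Accumulate the integral and differentiate the error numerically.
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Driving a toy first-order plant toward zero error: the control output
# u nudges the error down each step, so err shrinks toward 0.
pid = PID(kp=1.2, ki=0.1, kd=0.05, dt=0.02)
err = 1.0
for _ in range(200):
    u = pid.update(err)
    err -= 0.02 * u  # hypothetical plant response, for illustration only
```

In the actual system, the error would come from the vision/IMU pose estimate and the output would command the quadrotor's attitude or velocity setpoints.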