
    Unmanned Aerial Systems for Wildland and Forest Fires

    Wildfires are a major natural hazard, causing economic losses, human deaths, and significant environmental damage. In recent years, we have witnessed an increase in fire intensity and frequency. Research has been conducted on dedicated solutions for wildland and forest fire assistance and fighting, and systems have been proposed for the remote detection and tracking of fires. These systems have improved data collection efficiency and fire characterization within small-scale environments. However, wildfires cover large areas, making some of the proposed ground-based systems unsuitable for optimal coverage. To tackle this limitation, Unmanned Aerial Systems (UAS) have been proposed. UAS have proven useful thanks to their maneuverability, which allows for the implementation of remote sensing, allocation strategies, and task planning. They can provide a low-cost alternative for the prevention, detection, and real-time support of firefighting. In this paper, we review previous work on the use of UAS in wildfires, covering onboard sensor instruments, fire perception algorithms, and coordination strategies. In addition, we present some recent frameworks that propose using both aerial vehicles and Unmanned Ground Vehicles (UGV) for a more efficient wildland firefighting strategy at a larger scale.
    Comment: A recently published version of this paper is available at: https://doi.org/10.3390/drones501001

    Development and evaluation of low cost 2-d lidar based traffic data collection methods

    Traffic data collection is one of the essential components of a transportation planning exercise. Granular traffic data, such as volume counts, vehicle classification, speed measurements, and occupancy, allow transportation systems to be managed more effectively. For effective traffic operation and management, authorities must deploy many sensors across the network. Moreover, the growing push toward smart transportation puts immense pressure on planning authorities to deploy even more sensors to cover an extensive network. This research focuses on the development and evaluation of an inexpensive data collection methodology based on two-dimensional (2-D) Light Detection and Ranging (LiDAR) technology. LiDAR is adopted because it is an economical and easily accessible technology; moreover, its 360-degree visibility and accurate distance information make it more reliable. To collect traffic count data, the proposed method integrates a Continuous Wavelet Transform (CWT) and a Support Vector Machine (SVM) into a single framework. A Proof-of-Concept (POC) test is conducted at three different places in Newark, New Jersey to examine the performance of the proposed method. The POC test results demonstrate that the proposed method achieves acceptable performance, with 83% to 94% accuracy. The method's accuracy is found to be affected by the color of a vehicle's exterior surface, since some colored surfaces do not produce enough reflected rays: blue and black surfaces are less reflective, while white surfaces produce strong reflective returns. A methodology is also proposed that comprises K-means clustering, an inverse sensor model, and a Kalman filter to obtain vehicle trajectories at intersections. The primary purpose of vehicle detection and tracking is to obtain turning movement counts at an intersection.
    K-means clustering is an unsupervised machine learning technique that partitions the data into groups by assigning each data point to the cluster with the nearest centroid; here, its objective is to distinguish pedestrians from vehicles. An inverse sensor model is the state model of occupancy grid mapping that localizes the detected vehicles on the grid map. A constant-velocity-model-based Kalman filter is defined to track the vehicles' trajectories. Data are collected from two intersections in Newark, New Jersey to study the accuracy of the proposed method. The results show that the proposed method has an average accuracy of 83.75%, and the R-squared value for localization of the vehicles on the grid map ranges from 0.87 to 0.89. Finally, a preliminary cost comparison shows that the proposed 2-D LiDAR-based methodology can achieve acceptable accuracy at a low price and can support smart-city efforts to conduct large-scale data collection.
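The constant-velocity Kalman tracking step described in the abstract can be sketched as follows. This is a minimal illustration with assumed noise parameters, time step, and class name; the thesis does not specify its exact values or implementation.

```python
# Minimal constant-velocity Kalman filter sketch for 2-D vehicle tracking.
# All parameters (dt, process_var, meas_var) are illustrative assumptions.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, dt=0.1, process_var=1.0, meas_var=0.5):
        # State vector: [x, y, vx, vy]
        self.x = np.zeros(4)
        self.P = np.eye(4) * 10.0            # initial state uncertainty
        self.F = np.array([[1, 0, dt, 0],    # constant-velocity transition
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],     # we observe position only
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * process_var     # process noise covariance
        self.R = np.eye(2) * meas_var        # measurement noise covariance

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        z = np.asarray(z, dtype=float)
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R    # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

Feeding the filter a sequence of clustered vehicle centroids (one `predict`/`update` pair per LiDAR frame) yields a smoothed trajectory and a velocity estimate for each tracked vehicle.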

    Object Detection in 20 Years: A Survey

    Object detection, as one of the most fundamental and challenging problems in computer vision, has received great attention in recent years. Its development over the past two decades can be regarded as an epitome of computer vision history. If we think of today's object detection as a technical aesthetics under the power of deep learning, then turning back the clock 20 years we would witness the wisdom of the cold weapon era. This paper extensively reviews 400+ papers on object detection in light of its technical evolution, spanning a quarter-century (from the 1990s to 2019). A number of topics are covered, including the milestone detectors in history, detection datasets, metrics, fundamental building blocks of detection systems, speed-up techniques, and recent state-of-the-art detection methods. The paper also reviews some important detection applications, such as pedestrian detection, face detection, and text detection, and makes an in-depth analysis of their challenges as well as technical improvements in recent years.
    Comment: This work has been submitted to the IEEE TPAMI for possible publication

    Towards an interactive framework for robot dancing applications

    Internship carried out at INESC-Porto and supervised by Prof. Doutor Fabien Gouyon. Integrated master's thesis. Electrical and Computer Engineering - Major in Telecommunications. Faculdade de Engenharia, Universidade do Porto. 200

    Box-level Segmentation Supervised Deep Neural Networks for Accurate and Real-time Multispectral Pedestrian Detection

    Effective fusion of complementary information captured by multi-modal sensors (visible and infrared cameras) enables robust pedestrian detection under various surveillance situations (e.g. daytime and nighttime). In this paper, we present a novel box-level segmentation supervised learning framework for accurate and real-time multispectral pedestrian detection that incorporates features extracted in the visible and infrared channels. Specifically, our method takes pairs of aligned visible and infrared images with easily obtained bounding-box annotations as input and estimates accurate prediction maps to highlight the existence of pedestrians. It offers two major advantages over existing anchor-box-based multispectral detection methods. First, it avoids the hyperparameter-setting problem that arises during the training phase of anchor-box-based detectors and obtains more accurate detection results, especially for small and occluded pedestrian instances. Second, it can generate accurate detection results from small input images, improving computational efficiency for real-time autonomous driving applications. Experimental results on the KAIST multispectral dataset show that our proposed method outperforms state-of-the-art approaches in terms of both accuracy and speed.
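As a generic illustration of the final step described in the abstract (turning a per-pixel pedestrian prediction map into detections), one common approach is to threshold the map and take the bounding box of each connected component. This is a hedged sketch of standard post-processing, not the paper's exact decoding step, and the function name and threshold are assumptions.

```python
# Decode a per-pixel confidence map into bounding boxes by thresholding
# and 4-connected component grouping (generic post-processing sketch).
import numpy as np
from collections import deque

def boxes_from_prediction_map(pred_map, threshold=0.5):
    mask = pred_map >= threshold
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # Flood-fill one connected component, tracking its extent.
                q = deque([(sy, sx)])
                seen[sy, sx] = True
                x0 = x1 = sx
                y0 = y1 = sy
                while q:
                    y, x = q.popleft()
                    x0, x1 = min(x0, x), max(x1, x)
                    y0, y1 = min(y0, y), max(y1, y)
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                boxes.append((x0, y0, x1, y1))  # (x_min, y_min, x_max, y_max)
    return boxes
```

For example, a map that is high only over a single rectangular region yields one box covering exactly that region.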