Low-power dynamic object detection and classification with freely moving event cameras
We present the first purely event-based, energy-efficient approach for dynamic object detection and categorization with a freely moving event camera. Compared with systems built on traditional cameras, event-based object recognition is considerably behind in terms of accuracy and algorithmic maturity. To this end, this paper presents an event-based feature extraction method devised by accumulating local activity across the image frame and then applying principal component analysis (PCA) to the normalized neighborhood region. Subsequently, we propose a backtracking-free k-d tree mechanism for efficient feature matching that takes advantage of the low dimensionality of the feature representation. Additionally, the proposed k-d tree mechanism allows for feature selection to obtain a lower-dimensional object representation when hardware resources are too limited to implement PCA. Consequently, the proposed system can be realized on a field-programmable gate array (FPGA) device, leading to a high performance-to-resource ratio. The proposed system is tested on real-world event-based datasets for object categorization, showing superior classification performance compared to state-of-the-art algorithms. Additionally, we verified the real-time FPGA performance of the proposed object detection method, trained with limited data as opposed to deep learning methods, under a closed-loop aerial vehicle flight mode. We also compare the proposed object categorization framework to pre-trained convolutional neural networks using transfer learning and highlight the drawbacks of using frame-based sensors under dynamic camera motion. Finally, we provide critical insights into the effect of the feature extraction method and the classification parameters on system performance, which aids in adapting the framework to various low-power (less than a few watts) application scenarios.
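The pipeline summarized above (PCA over accumulated, normalized neighborhood activity, then nearest-neighbour matching in a k-d tree) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper name and toy data are hypothetical, and SciPy's standard cKDTree (which does backtrack) stands in for the proposed backtracking-free variant.

```python
import numpy as np
from scipy.spatial import cKDTree

def pca_descriptors(patches, k=8):
    """Project flattened, normalized neighborhood regions onto their
    top-k principal axes (hypothetical helper, not the paper's code)."""
    X = patches - patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # principal axes
    return X @ Vt[:k].T, Vt[:k]                        # (N, k) features

rng = np.random.default_rng(0)
train = rng.random((500, 49))          # toy 7x7 event-activity neighborhoods
feats, axes = pca_descriptors(train)

# nearest-neighbour matching in the low-dimensional feature space
tree = cKDTree(feats)
query = (rng.random((5, 49)) - train.mean(axis=0)) @ axes.T
dist, idx = tree.query(query, k=1)
```

The low dimensionality (k = 8 here) is what makes the k-d tree search cheap enough for FPGA-class hardware.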
Flying Objects Detection from a Single Moving Camera
We propose an approach to detect flying objects such as UAVs and aircraft when they occupy a small portion of the field of view, possibly move against complex backgrounds, and are filmed by a camera that itself moves. Solving such a difficult problem requires combining both appearance and motion cues. To this end, we propose a regression-based approach to motion stabilization of local image patches that allows us to achieve effective classification on spatio-temporal image cubes and outperform state-of-the-art techniques. As the problem is relatively new, we collected two challenging datasets of UAVs and aircraft, which can be used as benchmarks for flying-object detection and vision-guided collision avoidance.
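The motion-stabilization idea can be illustrated in miniature: once a regressor has predicted each frame's patch motion, undoing that motion yields an aligned spatio-temporal cube on which a classifier can operate. In this sketch the shifts are given directly rather than regressed, and all names are hypothetical.

```python
import numpy as np

def stabilize_cube(frames, shifts):
    """Undo per-frame patch motion to build an aligned spatio-temporal cube.

    frames: (T, H, W) image patches; shifts: list of integer (dy, dx)
    offsets that a motion regressor would predict (given directly here)."""
    return np.stack([np.roll(f, (-dy, -dx), axis=(0, 1))
                     for f, (dy, dx) in zip(frames, shifts)])

T, H, W = 4, 8, 8
cube = np.zeros((T, H, W))
for t in range(T):
    cube[t, 2 + t, 3] = 1.0            # object drifting down one pixel/frame
aligned = stabilize_cube(cube, [(t, 0) for t in range(T)])
# after stabilization the object sits at row 2, col 3 in every time slice
```

With the apparent motion removed, the object's signature stays fixed across the cube's temporal axis, which is what makes spatio-temporal classification effective.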
UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments
The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity of ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications; spatial ecology; pest detection; reef; forestry; volcanology; precision agriculture; wildlife species tracking; search and rescue; target tracking; atmosphere monitoring; chemical, biological, and natural disaster phenomena; fire prevention; flood prevention; volcanic monitoring; pollution monitoring; microclimates; land use; wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; and UAV-based change detection.
Remote sensing image fusion on 3D scenarios: A review of applications for agriculture and forestry
Three-dimensional (3D) image mapping of real-world scenarios has great potential to provide the user with more accurate scene understanding. This will enable, among others, unsupervised automatic sampling of meaningful material classes from the target area for adaptive semi-supervised deep learning techniques. This path is already being taken by recent and fast-developing research in computational fields; however, some issues related to computationally expensive processes in the integration of multi-source sensing data remain. Recent studies focused on Earth observation and characterization are enhanced by the proliferation of Unmanned Aerial Vehicles (UAV) and sensors able to capture massive datasets with high spatial resolution. In this scope, many approaches have been presented for 3D modeling, remote sensing, image processing and mapping, and multi-source data fusion. This survey presents a summary of previous work according to the most relevant contributions to the reconstruction and analysis of 3D models of real scenarios using multispectral, thermal and hyperspectral imagery. The surveyed applications focus on agriculture and forestry, since these fields concentrate most applications and are widely studied. Many challenges are currently being overcome by recent methods based on the reconstruction of multi-sensorial 3D scenarios. In parallel, the processing of large image datasets has recently been accelerated by General-Purpose Graphics Processing Unit (GPGPU) approaches, which are also summarized in this work. Finally, as a conclusion, some open issues and future research directions are presented.
Funding: European Commission; Junta de Andalucía; Instituto de Estudios Giennenses; Spanish Government; Portuguese Foundation for Science and Technology (UIDB/04033/2020); DATI - Digital Agriculture Technologies; grants 1381202-GEU, PYC20-RE-005-UJA, IEG-2021, FPU19/0010.
Stereo visual simultaneous localisation and mapping for an outdoor wheeled robot: a front-end study
For many mobile robotic systems, navigating an environment is a crucial step towards autonomy, and Visual Simultaneous Localisation and Mapping (vSLAM) has seen increasingly effective usage in this capacity. However, vSLAM is strongly dependent on the context in which it is applied, often relying on heuristics and special cases to provide efficiency and robustness. It is thus crucial to identify the important parameters and factors of a particular context, as these heavily influence the algorithms, processes, and hardware required for the best results. In this body of work, a generic front-end stereo vSLAM pipeline is tested in the context of a small-scale outdoor wheeled robot that occupies less than 1 m³ of volume. The scale of the vehicle constrained the available processing power, Field Of View (FOV), actuation systems, and image distortions present. A dataset was collected with a custom platform that consisted of a Point Grey Bumblebee (discontinued) stereo camera and an Nvidia Jetson TK1 processor. A stereo front-end feature tracking framework was described and evaluated both in simulation and experimentally where appropriate. It was found that scale adversely affected lighting conditions, FOV, baseline, and the processing power available, all crucial factors to improve upon. The stereo constraint was effective for robustness criteria, but ineffective in terms of processing power and metric reconstruction. An overall absolute odometry error of 0.25-3 m was produced on the dataset, but the pipeline was unable to run in real time.
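The sensitivity of metric reconstruction to a small stereo baseline, noted above, follows directly from the pinhole stereo model Z = f·B/d. A short sketch with hypothetical Bumblebee-like numbers (not the thesis's calibration values):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # pinhole stereo: Z = f * B / d; small baselines inflate depth error
    return focal_px * baseline_m / disparity_px

f, B = 400.0, 0.12                     # hypothetical focal length and baseline
d = np.array([40.0, 8.0, 2.0])         # disparity in pixels
z = depth_from_disparity(d, f, B)      # -> [1.2, 6.0, 24.0] metres
# a half-pixel matching error matters far more at small disparities
z_err = depth_from_disparity(d - 0.5, f, B) - z
```

Distant points map to only a few pixels of disparity on a short baseline, so the same half-pixel matching error that is negligible up close produces metre-scale depth error far away.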
Evaluation of machine vision techniques for use within flight control systems
In this thesis, two of the main technical limitations to a massive deployment of Unmanned Aerial Vehicles (UAV) are considered. The Aerial Refueling problem is analyzed in the first section. A solution based on the integration of 'conventional' GPS/INS and Machine Vision sensors is proposed with the purpose of measuring the relative distance between a refueling tanker and a UAV. In this effort, comparisons between Point Matching (PM) algorithms and Pose Estimation (PE) algorithms have been developed in order to improve the performance of the Machine Vision sensor. A method of integration between GPS/INS and the Machine Vision system, based on an Extended Kalman Filter (EKF), is also developed with the goal of reducing the tracking error in the 'pre-contact' to contact and refueling phases. In the second section of the thesis the issue of Collision Identification (CI) is addressed. The proposed solution consists of the use of Optical Flow (OF) algorithms for the detection of possible collisions in the range of vision of a single camera. The effort includes a study of the performance of different Optical Flow algorithms in different scenarios, as well as a method to compute the ideal optical flow with the aim of evaluating the algorithms. An analysis of the suitability of all the analyzed algorithms for a future real-time implementation is also performed. Results of the tests show that Machine Vision technology can be used to improve performance in the Aerial Refueling problem. In the Collision Identification problem, Machine Vision has to be integrated with standard sensors in order to be used inside the Flight Control System.
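The optical-flow component can be illustrated with the classic Lucas-Kanade least-squares formulation over a single window. This is a generic sketch, not the thesis's implementation, and the synthetic grating image is hypothetical.

```python
import numpy as np

def lucas_kanade(I0, I1):
    """Single-window Lucas-Kanade: solve A [u, v]^T = b in least squares,
    where A stacks spatial gradients and b is the negated temporal change."""
    Iy, Ix = np.gradient(I0)           # np.gradient returns (d/drow, d/dcol)
    It = I1 - I0
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v                        # estimated (dx, dy) for the window

x = np.arange(32)
I0 = np.sin(0.3 * x)[None, :] * np.ones((32, 1))          # horizontal grating
I1 = np.sin(0.3 * (x - 1.0))[None, :] * np.ones((32, 1))  # shifted right 1 px
u, v = lucas_kanade(I0, I1)            # u comes out close to 1.0
```

Comparing an estimate like this against an analytically known "ideal" flow field is exactly the kind of evaluation the thesis describes for ranking algorithms before real-time deployment.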
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
Scheduling of Two Real-Time Tasks with Non-Fixed Sampling Rates Modelled on an Unmanned Air Vehicle with Autonomous Navigation and Image Processing Capabilities
Control tasks and scheduling problems are usually treated in separate contexts, but when they are implemented in a real-time system their co-design becomes essential, as it allows better use of the limited computational resources. This project concerns the creation of a scheduling algorithm for two real-time tasks sharing the same processing unit. Once a theoretical solution has been developed, it is applied to a realistic scenario: UAV control with image processing capabilities.
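The co-scheduling of a control task and an image-processing task on one processing unit can be sketched with a toy earliest-deadline-first (EDF) simulator. All task names, periods, and unit-time execution slices below are hypothetical, and the deadline check is deliberately simplified; the work itself deals with non-fixed sampling rates, which this fixed-period sketch does not capture.

```python
import heapq

def edf_schedule(tasks, horizon):
    """Earliest-Deadline-First over unit time slices.

    tasks: list of (name, period, wcet); each period doubles as the job's
    relative deadline. Returns the executed (time, name) slots, raising
    if the job at the head of the queue has already missed its deadline."""
    ready = []                         # heap of [absolute_deadline, name, remaining]
    schedule = []
    for t in range(horizon):
        for name, period, wcet in tasks:
            if t % period == 0:        # new job release
                heapq.heappush(ready, [t + period, name, wcet])
        if ready:
            job = ready[0]             # earliest absolute deadline first
            if t >= job[0]:
                raise RuntimeError(f"{job[1]} missed deadline {job[0]}")
            schedule.append((t, job[1]))
            job[2] -= 1
            if job[2] == 0:
                heapq.heappop(ready)
    return schedule

# control task: 1 tick of work every 4 ticks; image task: 3 ticks every 10
plan = edf_schedule([("control", 4, 1), ("image", 10, 3)], horizon=20)
```

With utilization 1/4 + 3/10 = 0.55 the task set is EDF-schedulable, so every control job preempts or interleaves with image processing without a deadline miss, which is the kind of feasibility argument a co-design analysis rests on.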