
    Automated location of active fire perimeters in aerial infrared imaging using unsupervised edge detectors

    A variety of remote sensing techniques have been applied to forest fires. However, there is at present no system capable of monitoring an active fire precisely in a fully automated manner. Spaceborne sensors have spatio-temporal resolutions that are too coarse, and all previous studies that extracted fire properties from infrared aerial imagery incorporated manual tasks within the image processing workflow. As a contribution to this topic, this paper presents an algorithm to automatically locate the fuel-burning interface of an active wildfire in georeferenced aerial thermal infrared (TIR) imagery. An unsupervised edge detector, built upon the Canny method, was accompanied by the necessary modules for the extraction of line coordinates and the location of the total burned perimeter. The system was validated in different scenarios ranging from laboratory tests to large-scale experimental burns performed under extreme weather conditions. Output accuracy was computed through three common similarity indices and proved acceptable. Computing times were below 1 s per image on average. The produced information was used to measure the temporal evolution of the fire perimeter and automatically generate rate of spread (ROS) fields. Information products were easily exported to standard Geographic Information Systems (GIS), such as Google Earth and QGIS. Therefore, this work contributes towards the development of an affordable and fully automated system for operational wildfire surveillance.
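
    As an illustration of the kind of processing described above, the sketch below applies a Canny-based edge detector to a single TIR frame using OpenCV and keeps the longest contour as the fire perimeter. It is a minimal sketch under assumed inputs, not the authors' implementation; the function name, threshold heuristic, and normalization choices are all hypothetical.

```python
# Illustrative sketch only: Canny-based perimeter extraction on one TIR frame,
# assuming a single-band image loaded as a NumPy array. Not the paper's code.
import cv2
import numpy as np

def locate_fire_perimeter(tir_frame: np.ndarray) -> np.ndarray:
    """Return pixel coordinates of the longest detected fire edge."""
    # Normalize raw thermal counts to 8 bits for the edge detector.
    frame8 = cv2.normalize(tir_frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Smooth to suppress sensor noise before edge detection.
    blurred = cv2.GaussianBlur(frame8, (5, 5), 0)
    # Unsupervised thresholds: derive the Canny limits from Otsu's threshold.
    otsu, _ = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(blurred, 0.5 * otsu, otsu)
    # Keep the longest contour as the fuel-burning interface.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return np.empty((0, 2), dtype=np.int32)
    perimeter = max(contours, key=lambda c: cv2.arcLength(c, True))
    return perimeter.reshape(-1, 2)  # (x, y) pixel coordinates, ready for georeferencing
```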

    Thermal infrared video stabilization for aerial monitoring of active wildfires

    Measuring wildland fire behavior is essential for fire science and fire management. Aerial thermal infrared (TIR) imaging provides outstanding opportunities to acquire such information remotely. Variables such as fire rate of spread (ROS), fire radiative power (FRP), and fireline intensity may be measured explicitly both in time and space, providing the necessary data to study the response of fire behavior to weather, vegetation, topography, and firefighting efforts. However, raw TIR imagery acquired by unmanned aerial vehicles (UAVs) requires stabilization and georeferencing before any other processing can be performed. Aerial video usually suffers from instabilities produced by sensor movement. This problem is especially acute near an active wildfire due to fire-generated turbulence. Furthermore, the nature of fire TIR video presents some specific challenges that hinder robust interframe registration. Therefore, this article presents a software-based video stabilization algorithm specifically designed for TIR imagery of forest fires. After a comparative analysis of existing image registration algorithms, the KAZE feature-matching method was selected and accompanied by pre- and postprocessing modules. These included foreground histogram equalization and a multireference framework designed to increase the algorithm's robustness in the presence of missing or faulty frames. The performance of the proposed algorithm was validated in a total of nine video sequences acquired during field fire experiments. The proposed algorithm yielded a registration accuracy between 10× and 1000× higher than other tested methods, returned 10× more meaningful feature matches, and proved robust in the presence of faulty video frames. The ability to automatically cancel camera movement for every frame in a video sequence solves a key limitation in data processing pipelines and opens the door to a number of systematic fire behavior experimental analyses. Moreover, a completely automated process supports the development of decision support tools that can operate in real time during an emergency.
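
    The following sketch illustrates one plausible core of such a pipeline: KAZE feature matching between a reference frame and an incoming frame, followed by RANSAC homography estimation, using OpenCV. The pre- and post-processing modules mentioned in the abstract (foreground histogram equalization, multireference handling) are omitted, and all names and parameter values are assumptions rather than the paper's code.

```python
# Hedged sketch of KAZE-based interframe registration; illustrative only.
import cv2
import numpy as np

def register_frame(reference: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Warp an 8-bit grayscale `frame` onto `reference` via a KAZE + RANSAC homography."""
    kaze = cv2.KAZE_create()
    kp_ref, des_ref = kaze.detectAndCompute(reference, None)
    kp_frm, des_frm = kaze.detectAndCompute(frame, None)
    # Match float descriptors and keep unambiguous matches (Lowe's ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_frm, des_ref, k=2)
    good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    src = np.float32([kp_frm[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects remaining outliers before the stabilizing warp is applied.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return cv2.warpPerspective(frame, H, (reference.shape[1], reference.shape[0]))
```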

    Event-based neuromorphic stereo vision


    A novel image feature descriptor for SLM spattering pattern classification using a consumable camera

    In selective laser melting (SLM), spattering is an important phenomenon that is closely related to the quality of the manufactured parts. Characterisation and monitoring of spattering behaviours are highly valuable for understanding the manufacturing process and improving the manufacturing quality of SLM. This paper introduces a method of automatic visual classification to distinguish spattering characteristics of SLM processes under different manufacturing conditions. A compact feature descriptor is proposed to represent spattering patterns, and its effectiveness is evaluated using real images captured under different conditions. The feature descriptor of this work combines information on spatter trajectory morphology, spatial distributions, and temporal behaviour. Classification is performed using a support vector machine (SVM) and random forests, and shows a highly promising accuracy of about 97%. The advantages of this work include the compactness of the representation and the semantic interpretability of the feature description. In addition, the quality of manufactured parts is mapped to spattering characteristics under different laser energy densities. Such a mapping can then be used to define the desired spatter features, providing a non-contact monitoring solution for online anomaly detection. This work will lead to the further integration of a real-time vision monitoring system into an online closed-loop prognostic system for SLM, in order to improve performance in terms of manufacturing quality, power consumption, and fault detection.
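
    A minimal, hedged sketch of the classification stage is shown below, assuming the compact feature descriptors have already been computed; file names, array shapes, and hyperparameters are illustrative only and are not taken from the paper.

```python
# Sketch of the classification stage on pre-computed spatter feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# X: one compact feature vector per image (trajectory morphology, spatial and
# temporal statistics); y: manufacturing-condition label per image.
X = np.load("spatter_features.npy")   # hypothetical file, shape (n_images, n_features)
y = np.load("spatter_labels.npy")     # hypothetical file, shape (n_images,)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("SVM accuracy:", svm.score(X_te, y_te))
print("Random forest accuracy:", forest.score(X_te, y_te))
```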

    Cognitive privacy middleware for deep learning mashup in environmental IoT

    Data mashup is a Web technology that combines information from multiple sources into a single Web application. Mashup applications support new services, such as environmental monitoring. Different organizations use data mashup services to merge data sets from different Internet of Multimedia Things (IoMT) context-based services in order to improve the performance of their data analytics. However, mashing up data sets from multiple sources is a privacy hazard, as it might reveal citizens' specific behaviors in different regions. In this paper, we present our efforts to build a cognitive-based middleware for private data mashup (CMPM) to serve a centralized environmental monitoring service. The proposed middleware is equipped with concealment mechanisms to preserve the privacy of the merged data sets from the multiple IoMT networks involved in the mashup application. In addition, we present an IoMT-enabled data mashup service, in which multimedia data are collected from various IoMT platforms and then fed into an environmental deep learning service in order to detect interesting patterns in hazardous areas. The salient features within each region were extracted using a multiresolution wavelet transform and then fed into a discriminative classifier to extract various patterns. We also provide a scenario for an IoMT-enabled data mashup service and experimental results.
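
    The sketch below illustrates the general idea of multiresolution wavelet features feeding a discriminative classifier, assuming PyWavelets and scikit-learn; it is not the paper's implementation, and all names are hypothetical.

```python
# Illustrative wavelet-feature extraction followed by an SVM; assumptions only.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(image: np.ndarray, wavelet: str = "db2", levels: int = 3) -> np.ndarray:
    """Summarize each sub-band of a multilevel 2D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    feats = [np.mean(np.abs(coeffs[0])), np.std(coeffs[0])]   # approximation band
    for detail_level in coeffs[1:]:
        for band in detail_level:                             # horizontal, vertical, diagonal
            feats.extend([np.mean(np.abs(band)), np.std(band)])
    return np.asarray(feats)

# Hypothetical usage: `frames` are region snapshots, `labels` mark hazardous patterns.
# X = np.stack([wavelet_features(f) for f in frames])
# clf = SVC(kernel="rbf").fit(X, labels)
```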

    Stereoscopic-vision-based perception, path planning, and navigation strategies for autonomous robotic exploration

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Ingeniería del Software e Inteligencia Artificial, defended on 13-05-2015. This thesis addresses the development of an autonomous navigation strategy based on computer vision for autonomous robotic exploration of planetary surfaces. A series of subsystems, modules, and specific software was developed for the research carried out in this work, since most existing tools in this domain are the property of national space agencies and are not accessible to the scientific community. A modular, multi-layer software architecture with several hierarchical levels was designed to host the set of algorithms implementing the autonomous navigation strategy and to guarantee software portability, reuse, and hardware independence. The design of a framework intended to support the development of the navigation strategies is also included. It is partly based on open-source tools available to any researcher or institution, with the necessary adaptations and extensions, and includes 3D simulation capabilities, models of robotic vehicles, sensors, and operational environments emulating planetary surfaces such as Mars, for the functional-level analysis and validation of the developed navigation strategies. This framework also offers debugging and monitoring capabilities. The thesis consists of two main parts. The first addresses the design and development of the high-level autonomy capabilities of a rover, focusing on autonomous navigation and supported by the simulation and monitoring capabilities of the aforementioned framework. A set of field experiments was carried out with a real robot and real hardware, detailing the results, algorithm processing times, and the overall behavior and performance of the system. As a result, the perception system was identified as a crucial component within the navigation strategy and, therefore, the main focus of potential optimizations and improvements. Consequently, the second part of this work tackles the problem of stereo image matching and 3D reconstruction of unstructured natural environments. A series of matching algorithms, image processes, and filters was analyzed. It is generally assumed that corresponding points in the two images of a stereo pair have the same intensity. However, this assumption often proves false, even though both images are acquired with a vision system composed of two identical cameras. Consequently, an expert system is proposed for the automatic correction of intensities in stereo image pairs and 3D reconstruction of the environment, based on image processes not previously applied in the field of stereo vision. These are homomorphic filtering and histogram matching, designed to correct intensities in a coordinated manner, adjusting one image as a function of the other. The results were further optimized thanks to the design of a clustering process based on the principle of spatial continuity to eliminate false positives and erroneous matches. The effects of applying these filters, both before and after the matching process, were studied, and their efficiency was verified favorably. Their application yielded a larger number of valid matches than the results obtained without them, achieving significant improvements in the disparity maps and, therefore, in the overall perception and 3D reconstruction processes.
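
    As a hedged illustration of the intensity-correction idea, the sketch below matches the right image's histogram to the left one before computing a disparity map with OpenCV; the homomorphic filtering and spatial-continuity clustering steps described in the thesis are omitted, and all names and parameters are assumptions.

```python
# Sketch under stated assumptions: histogram matching before block matching.
import cv2
import numpy as np
from skimage.exposure import match_histograms

def corrected_disparity(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Compute a disparity map from an 8-bit grayscale stereo pair after intensity correction."""
    # Adjust the right image so its gray-level distribution follows the left one.
    right_matched = match_histograms(right, left).astype(np.uint8)
    # Standard semi-global block matching on the intensity-corrected pair.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = matcher.compute(left, right_matched).astype(np.float32) / 16.0
    return disparity
```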

    Digital Image Processing

    This book presents several recent advances that are related to, or fall under the umbrella of, 'digital image processing', with the purpose of providing insight into the possibilities offered by digital image processing algorithms in various fields. The presented mathematical algorithms are accompanied by graphical representations and illustrative examples for enhanced readability. The chapters are written in a manner that allows even a reader with basic experience and knowledge of the digital image processing field to properly understand the presented algorithms. Concurrently, the structure of the information in this book is such that fellow scientists will be able to use it to push the development of the presented subjects even further.

    Resilient Perception for Outdoor Unmanned Ground Vehicles

    This thesis promotes the development of resilience for perception systems, with a focus on Unmanned Ground Vehicles (UGVs) in adverse environmental conditions. Perception is the interpretation of sensor data to produce a representation of the environment that is necessary for subsequent decision making. Long-term autonomy requires perception systems that correctly function in unusual but realistic conditions that will eventually occur during extended missions. State-of-the-art UGV systems can fail when the sensor data are beyond the operational capacity of the perception models. The key to a resilient perception system lies in the use of multiple sensor modalities and the pre-selection of appropriate sensor data to minimise the chance of failure. This thesis proposes a framework based on diagnostic principles to evaluate and pre-select sensor data prior to interpretation by the perception system. Image-based quality metrics are explored and evaluated experimentally using infrared (IR) and visual cameras onboard a UGV in the presence of smoke and airborne dust. A novel quality metric, Spatial Entropy (SE), is introduced and evaluated. The proposed framework is applied to a state-of-the-art Visual-SLAM algorithm combining visual and IR imaging as a real-world example. An extensive experimental evaluation demonstrates that the framework allows for camera-based localisation that is resilient to a range of low-visibility conditions when compared to other methods that use a single sensor or combine sensor data without selection. The proposed framework allows for resilient localisation in adverse conditions using image data, but also has significant potential to benefit many other perception applications. Employing multiple sensing modalities along with pre-selection of appropriate data is a powerful method to create resilient perception systems by anticipating and mitigating errors. The development of such resilient perception systems is a requirement for next-generation outdoor UGVs.
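
    As an illustration of image-based data pre-selection, the sketch below computes a simple patch-wise entropy score that could be thresholded before frames reach the perception system; this is an assumed stand-in for illustration only, not the thesis' Spatial Entropy (SE) metric.

```python
# Hedged illustration: a patch-wise entropy score for pre-selecting image data.
import numpy as np

def patch_entropy_score(image: np.ndarray, patch: int = 32) -> float:
    """Average Shannon entropy of gray-level histograms over non-overlapping patches
    of an 8-bit grayscale image."""
    h, w = image.shape[:2]
    entropies = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = image[y:y + patch, x:x + patch]
            hist, _ = np.histogram(block, bins=256, range=(0, 256), density=True)
            p = hist[hist > 0]
            entropies.append(-np.sum(p * np.log2(p)))
    return float(np.mean(entropies)) if entropies else 0.0

# A perception pipeline could, for example, discard frames whose score falls below
# a calibrated threshold before passing them to Visual-SLAM.
```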