421 research outputs found

    Chemical detection of explosives in soil for locating buried landmines.

    Trinitrotoluene (TNT) is a highly explosive nitroaromatic compound used in military and terrorist activities, such as in improvised explosive devices (IEDs) and landmines, and it is the main charge in most anti-personnel and anti-vehicle mines. The chemicals and contaminants associated with TNT in soils near buried landmines comprise the microbial transformation products of TNT (2-amino-4,6-dinitrotoluene [2-Am-DNT] and 4-amino-2,6-dinitrotoluene [4-Am-DNT]), manufacturing impurities of TNT (2,4-DNT, 2,6-DNT, and 1,3-DNB), and TNT itself. The time, cost, and casualties associated with demining have created demand for improved detection techniques with fewer false positives that detect the explosive material directly rather than the casing material of the mines. Analytical methods used to detect trace levels of explosives in soil include ion mobility mass spectrometry, gas chromatography-mass spectrometry (GC-MS), and liquid chromatography-mass spectrometry (LC-MS), all of which require samples to be transported from hazardous sites to laboratories. This is extremely unsafe and time consuming, and it involves large, expensive instrumentation and specially trained staff. Thus, detecting the chemical signatures of these nitroaromatics in soil contaminated by leaking TNT mines can provide the location of landmines or landmine-prone zones and aid the humanitarian demining process. This paper illustrates soil analysis for explosives and selected contaminants by Raman spectroscopy as a chemical, nondestructive, remote sensing method. With the advancement of Raman-based standoff detection techniques, field-portable instruments and UAV-deployable probes, this technique can be effectively employed to detect buried landmines based on the specific chemical signatures of the target analyte. In the present study, TNT-based nitroaromatics were assessed in contaminated soil samples using Raman spectroscopy, with uncontaminated soil used as the background and as the matrix for spiking the target contaminants at different concentrations.
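
    As a minimal, hypothetical illustration of how a measured soil spectrum could be screened for TNT-like signatures, the sketch below checks a Raman spectrum for peaks near a few nitroaromatic reference bands; the band positions, tolerance and prominence threshold are illustrative assumptions, not values reported in the paper.

        import numpy as np
        from scipy.signal import find_peaks

        # Illustrative reference Raman bands (cm^-1) associated with nitroaromatics
        # such as TNT; these are placeholder values for the sketch.
        REFERENCE_BANDS = [1360.0, 1620.0, 2950.0]
        TOLERANCE = 10.0  # cm^-1 window around each reference band

        def screen_for_tnt(wavenumbers, intensities, prominence=0.05):
            """Return the reference bands that have a detected peak nearby."""
            # Normalise so that the prominence threshold is scale-independent.
            norm = (intensities - intensities.min()) / np.ptp(intensities)
            peak_idx, _ = find_peaks(norm, prominence=prominence)
            peak_positions = wavenumbers[peak_idx]
            return [band for band in REFERENCE_BANDS
                    if np.any(np.abs(peak_positions - band) <= TOLERANCE)]

        # Usage with a synthetic spectrum containing a single band near 1360 cm^-1:
        wn = np.linspace(400, 3200, 2800)
        spectrum = np.exp(-((wn - 1360.0) / 8.0) ** 2) + 0.02 * np.random.rand(wn.size)
        print(screen_for_tnt(wn, spectrum))  # e.g. [1360.0]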

    CMOS Image Sensors in Surveillance System Applications

    Recent technological advances in CMOS image sensors (CIS) enable their use in the most demanding surveillance fields, especially visual surveillance and intrusion detection in intelligent surveillance systems, aerial surveillance in war zones, Earth environmental surveillance by satellites in space monitoring, agricultural monitoring using wireless sensor networks and the Internet of Things, and driver assistance in the automotive field. This paper presents an overview of CMOS image sensor-based surveillance applications over the last decade, tabulating the design characteristics related to image quality, such as resolution, frame rate, dynamic range, and signal-to-noise ratio, as well as the processing technology. The different models of CMOS image sensors used in these applications have been surveyed and tabulated for every year and application. https://doi.org/10.3390/s2102048
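
    As a brief illustration of two of the tabulated image-quality figures, a sensor's dynamic range and peak signal-to-noise ratio can be estimated from its full-well capacity and read noise; the example numbers below are assumed for illustration and do not come from the surveyed sensors.

        import math

        def dynamic_range_db(full_well_e, read_noise_e):
            """Dynamic range in dB: ratio of full-well capacity to read noise."""
            return 20 * math.log10(full_well_e / read_noise_e)

        def peak_snr_db(full_well_e):
            """Peak SNR in dB, limited by photon shot noise at full well."""
            return 20 * math.log10(full_well_e / math.sqrt(full_well_e))

        # Hypothetical sensor: 12,000 e- full-well capacity, 2 e- read noise.
        print(f"DR  = {dynamic_range_db(12_000, 2):.1f} dB")   # ~75.6 dB
        print(f"SNR = {peak_snr_db(12_000):.1f} dB")           # ~40.8 dB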

    Autonomous Monitoring of Litter using Sunlight


    Terrestrial and Astronomical Applications of Uncooled Infrared Technology

    This thesis provides an account of the application of uncooled microbolometer technology in two different fields: astronomy and conservation ecology. Uncooled microbolometers are readily available as low-cost, commercial detectors for imaging in the 7.5–13.5 μm spectral region. Their affordable price, and their coverage of the wavelengths at which the spectral energy distribution of animals that use metabolic heat to maintain a stable internal temperature peaks, have popularised these systems for use in ecology. However, their use as the primary detectors in conservation surveys suffers from a data analysis bottleneck due to the large volumes of data and the manual approach to detecting and identifying animal species. As a result, the efficiency and efficacy of these systems are limited by the volume of data and the lack of resources and experience in handling infrared imagery. In contrast, astronomical techniques are well developed for infrared wavelengths, and the analysis of large quantities of data has been successfully automated for many years. Apart from observations of solar system objects with very small (<200 mm aperture) telescopes and some high-altitude experiments, microbolometers have not been deployed in astronomy for ground-based observing. Although observing in this spectral region is key to understanding the cold, dusty and distant universe, resources for mid-IR observing are not as readily available as those for other wavelengths, due to the very high thermal background and the requirement for specialist detector systems. Through our work, we attempt to discern whether microbolometers, and standard astronomical infrared observing and data analysis techniques, can be applied successfully in these two fields. In Part I, three analogous, standard astronomical instrumentation techniques are applied to characterise the random and spatial noise present in a FLIR Tau 2 Core microbolometer, to determine the systematics and sources of statistical noise that limit read-out accuracy. Flat fielding, stacking and binning are used to determine that the focal plane array is dominated by large-scale structure noise, and to demonstrate how this can be corrected. An NEdT of <60 mK is isolated and recorded for the system. Part II introduces Astro-Ecology, a field that couples microbolometers and astronomy instrumentation techniques for application in conservation biology to monitor vulnerable species. Several investigations are presented to determine the feasibility of developing an astronomy-based, fully automated reduction pipeline for mid-IR data collected using unmanned aerial vehicle (UAV) technology. From these investigations, the efficacy of microbolometers for animal surveys was found to be highly dependent on ground and ambient air temperatures, and fully automating a pipeline was found to require more than standard astronomical techniques and software. In Part III, a small, passively cooled, prototype N-band (∼10 μm) instrument is developed and tested. The optical and mechanical design of the instrument is described, with the instrument constructed from commercially available components and an uncooled microbolometer focal-plane array. Adjustable germanium reimaging optics rescale the image to the appropriate plate scale for the 2-m diameter Liverpool Telescope on La Palma. A week-long programme of observations was conducted to test the system's sensitivity and stability, and the feasibility of using this technology in 'facility' class instruments for small telescopes.
    From observations of bright solar system and stellar sources, a plate scale of 0.75′′ per pixel is measured, confirming that the optical design allows diffraction-limited imaging. A photometric stability of ∼10% is recorded, largely due to sky variability. A 3σ sensitivity of 7 × 10³ Jy for a single ∼0.11 second exposure is measured, which corresponds to a sensitivity limit of 3 × 10² Jy for a 60 second total integration. Recognising the need for improved sensitivity, the instrument was upgraded and optimised for mid-IR observations using a chopping system. The instrument was deployed for a week on the 1.52 m Carlos Sanchez Telescope on Tenerife alongside a regime of chopping and nodding. Observations of several very bright mid-IR sources with catalogue fluxes down to ∼600 Jy are presented. A sensitivity improvement of ∼4 magnitudes over previous unchopped observations is recorded, in line with sensitivity predictions.
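
    The quoted sensitivities follow the usual square-root scaling of uncorrelated noise with the number of co-added frames; the short check below confirms that the 60 second limit is consistent with the single-exposure figure, taking the frame time and sensitivities from the abstract and assuming the standard scaling law.

        import math

        def coadded_sensitivity(single_frame_jy, frame_time_s, total_time_s):
            """Sensitivity after co-adding frames, assuming uncorrelated noise."""
            n_frames = total_time_s / frame_time_s
            return single_frame_jy / math.sqrt(n_frames)

        # 3-sigma single-exposure sensitivity of 7e3 Jy in ~0.11 s exposures.
        print(f"{coadded_sensitivity(7e3, 0.11, 60.0):.0f} Jy")  # ~300 Jy, matching the quoted 3e2 Jy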

    Autonomous detection of high-voltage towers employing unmanned aerial vehicles

    Monitoring and controlling the power grid is extremely important to prevent power outages and blackouts. Traditionally, the most common methods for inspecting high-voltage towers and power lines include visual inspections by qualified personnel, inspections by helicopter and, in some cases, the use of specialized robots. These inspections aim to detect anomalies near the infrastructure, such as hotspots in the insulation or visual defects in the various structural elements. The main problems with these techniques are their high economic cost and lack of accuracy. As an alternative, unmanned aerial vehicles (UAVs) are becoming more popular in the market, and the trend is gradually moving towards this technology, as it offers a significant cost reduction, better mobility and flexibility, and great potential for obtaining high-quality images. This thesis (TFG) studies the feasibility of developing a system capable of autonomously detecting high-voltage towers using an unmanned aerial vehicle and conducting power line inspections. The system consists of a desktop application that allows the user to program the legs of a flight plan, and a drone that executes them and detects the high-voltage towers using an artificial intelligence model. The system developed in this study is part of a contribution to the Drone Engineering Ecosystem (DEE), a platform for controlling and monitoring unmanned aerial vehicles using desktop and/or web applications. The main goal of this platform is its continued improvement and development by future students. Sustainable Development Goals::9 - Industry, Innovation and Infrastructure
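
    A minimal, hypothetical sketch of the tower-detection step, assuming the artificial intelligence model has been exported to ONNX as a single-class detector; the model file name, input size and output layout are assumptions for illustration, not details taken from the thesis.

        import cv2
        import numpy as np

        net = cv2.dnn.readNetFromONNX("tower_detector.onnx")  # hypothetical model file

        def detect_towers(frame_bgr, conf_threshold=0.5):
            """Return [(x, y, w, h, score), ...] for detected high-voltage towers."""
            blob = cv2.dnn.blobFromImage(frame_bgr, 1 / 255.0, (640, 640), swapRB=True)
            net.setInput(blob)
            output = net.forward()  # assumed output layout: rows of x, y, w, h, score
            detections = []
            for x, y, w, h, score in output.reshape(-1, 5):
                if score >= conf_threshold:
                    detections.append((int(x), int(y), int(w), int(h), float(score)))
            return detections

        # Usage: run on frames captured while the drone flies a programmed leg.
        # frame = cv2.imread("leg_frame.jpg")
        # print(detect_towers(frame))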

    Assessment of monthly rain fade in the equatorial region at C & KU-band using measat-3 satellite links

    C- and Ku-band links are the most commonly used for equatorial satellite communication. Severe rainfall rates in equatorial regions can cause much larger rain attenuation in practice than is predicted. The ITU-R P.618 recommendation is commonly used to predict satellite rain fade when designing satellite communication networks. However, the ITU-R prediction is still found to be inaccurate, which hinders reliable operation of satellite communication links in the equatorial region. This paper aims to provide accurate insight through an assessment of monthly C- and Ku-band rain fade performance, using data collected from commercial earth stations with C-band and Ku-band antennas of 11 m and 13 m diameter, respectively. The antennas measured the C- and Ku-band beacon signals from MEASAT-3 under equatorial rain conditions, and data were collected over one year (2015). Monthly cumulative distribution functions were developed from the one-year data set. An RMSE analysis was made by comparing the monthly measured C-band and Ku-band data with ITU-R predictions based on the P.618, P.837, P.838 and P.839 recommendations. The findings show that the Ku-band attenuation produces an average RMSE value of 25, while the C-band rain attenuation produces an average RMSE value of 2. Therefore, the ITU-R model still underpredicts rain attenuation in the equatorial region, and this calls for the fundamental quantities used in determining rain fade to be re-evaluated.
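
    A short sketch of the RMSE comparison described above, computed between monthly measured attenuation statistics and the corresponding ITU-R predictions; the arrays below are placeholder values, not the measured MEASAT-3 data.

        import numpy as np

        def rmse(measured_db, predicted_db):
            """Root-mean-square error between measured and predicted attenuation (dB)."""
            measured_db = np.asarray(measured_db, dtype=float)
            predicted_db = np.asarray(predicted_db, dtype=float)
            return float(np.sqrt(np.mean((measured_db - predicted_db) ** 2)))

        # Placeholder monthly attenuation values exceeded for 0.01% of the time (dB).
        measured_ku = [28.0, 31.5, 26.2, 35.1]
        itu_r_ku    = [22.4, 24.0, 21.8, 25.6]
        print(f"Ku-band RMSE: {rmse(measured_ku, itu_r_ku):.1f} dB")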

    TractorEYE: Vision-based Real-time Detection for Autonomous Vehicles in Agriculture

    Agricultural vehicles such as tractors and harvesters have for decades been able to navigate automatically and more efficiently using commercially available products such as auto-steering and tractor-guidance systems. However, a human operator is still required inside the vehicle to ensure the safety of the vehicle and especially of its surroundings, such as humans and animals. To get fully autonomous vehicles certified for farming, computer vision algorithms and sensor technologies must detect obstacles with performance equivalent to or better than human-level performance. Furthermore, detections must run in real time to allow vehicles to actuate and avoid collisions. This thesis proposes a detection system (TractorEYE), a dataset (FieldSAFE), and procedures to fuse information from multiple sensor technologies to improve detection of obstacles and to generate a map. TractorEYE is a multi-sensor detection system for autonomous vehicles in agriculture. The multi-sensor system consists of three hardware-synchronized and registered sensors (stereo camera, thermal camera and multi-beam lidar) mounted on/in a ruggedized and water-resistant casing. Software has been developed to run a total of six detection algorithms (four for the RGB camera, one for the thermal camera and one for the multi-beam lidar) and to fuse detection information in a common format using either 3D positions or inverse sensor models. A GPU-powered computational platform is able to run the detection algorithms online. For the RGB camera, a deep learning algorithm, DeepAnomaly, is proposed to perform real-time anomaly detection of distant, heavily occluded and unknown obstacles in agriculture. Compared to a state-of-the-art object detector, Faster R-CNN, DeepAnomaly detects humans better and at longer ranges (45-90 m) in an agricultural use case, with a smaller memory footprint and 7.3 times faster processing. The low memory footprint and fast processing make DeepAnomaly suitable for real-time applications running on an embedded GPU. FieldSAFE is a multi-modal dataset for the detection of static and moving obstacles in agriculture. The dataset includes synchronized recordings from an RGB camera, stereo camera, thermal camera, 360-degree camera, lidar and radar. Precise localization and pose are provided using IMU and GPS. Ground truth for static and moving obstacles (humans, mannequin dolls, barrels, buildings, vehicles, and vegetation) is available as an annotated orthophoto, with GPS coordinates for the moving obstacles. Detection information from multiple detection algorithms and sensors is fused into a map using inverse sensor models and occupancy grid maps. This thesis presents several scientific contributions and state-of-the-art results within perception for autonomous tractors, including a dataset, a sensor platform, detection algorithms and procedures for multi-sensor fusion. Furthermore, important engineering contributions to autonomous farming vehicles are presented, such as easily applicable, open-source software packages and algorithms that have been demonstrated in an end-to-end real-time detection system. The contributions of this thesis have demonstrated, addressed and solved critical issues in utilizing camera-based perception systems, which are essential to making autonomous vehicles in agriculture a reality.
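
    A minimal sketch of the occupancy-grid fusion idea, where each sensor's detections are converted to cell-wise occupancy probabilities by an inverse sensor model and accumulated in log-odds form; the grid size, probabilities and toy update below are illustrative assumptions, not TractorEYE's actual parameters.

        import numpy as np

        class OccupancyGrid:
            """2D occupancy grid fused from multiple sensors via log-odds updates."""

            def __init__(self, shape=(200, 200), p_prior=0.5):
                self.log_odds = np.full(shape, np.log(p_prior / (1 - p_prior)))

            def update(self, cells, p_occupied):
                """Fuse one sensor's inverse-sensor-model output for the given cells."""
                delta = np.log(p_occupied / (1 - p_occupied))
                for row, col in cells:
                    self.log_odds[row, col] += delta

            def probability(self):
                return 1.0 / (1.0 + np.exp(-self.log_odds))

        grid = OccupancyGrid()
        # Hypothetical detections projected into grid cells by two different sensors.
        grid.update([(100, 120), (100, 121)], p_occupied=0.8)  # camera detection
        grid.update([(100, 120)], p_occupied=0.7)              # lidar detection
        print(grid.probability()[100, 120])  # cell seen by both sensors -> higher belief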

    Vision-Based navigation system for unmanned aerial vehicles

    International Mention in the doctoral degree. The main objective of this dissertation is to provide Unmanned Aerial Vehicles (UAVs) with a robust navigation system, in order to allow the UAVs to perform complex tasks autonomously and in real time. The proposed algorithms deal with solving the navigation problem for outdoor as well as indoor environments, mainly based on visual information captured by monocular cameras. In addition, this dissertation presents the advantages of using visual sensors as the main source of data, or as a complement to other sensors, in order to improve the accuracy and robustness of sensing. The dissertation covers several research topics based on computer vision techniques: (I) Pose Estimation, which provides a solution for estimating the 6D pose of the UAV. This algorithm is based on the combination of the SIFT detector and the FREAK descriptor, which maintains the performance of feature-point matching while decreasing the computational time. The pose estimation problem is then solved based on the decomposition of the world-to-frame and frame-to-frame homographies. (II) Obstacle Detection and Collision Avoidance, in which the UAV is able to sense and detect frontal obstacles situated in its path. The detection algorithm mimics human behaviour for detecting approaching obstacles by analyzing the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points from consecutive frames. Then, by comparing the area ratio of the obstacle and the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, the algorithm extracts the collision-free zones around the obstacle and, combining these with the tracked waypoints, the UAV performs the avoidance maneuver. (III) Navigation Guidance, which generates the waypoints that determine the flight path based on the environment and the detected obstacles, and then provides a strategy to follow the path segments efficiently and perform the flight maneuvers smoothly. (IV) Visual Servoing, which offers different control solutions (Fuzzy Logic Control (FLC) and PID) based on the obtained visual information, in order to achieve flight stability, perform the correct maneuvers, avoid possible collisions and track the waypoints. All the proposed algorithms have been verified in real flights in both indoor and outdoor environments, taking visual conditions such as illumination and texture into consideration. The obtained results have been validated against other systems, such as the VICON motion capture system and DGPS in the case of the pose estimation algorithm. In addition, the proposed algorithms have been compared with several previous works in the state of the art, and the results demonstrate the improvement in the accuracy and robustness of the proposed algorithms. Finally, this dissertation concludes that visual sensors have the advantages of light weight and low power consumption while providing reliable information, which makes them a powerful tool in navigation systems for increasing the autonomy of UAVs in real-world applications.
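
    A minimal sketch of the SIFT-plus-FREAK feature pipeline described in (I), using OpenCV (FREAK requires the opencv-contrib-python package); the matcher settings, RANSAC threshold and file names are illustrative choices, not the exact configuration used in the dissertation.

        import cv2
        import numpy as np

        sift = cv2.SIFT_create()                 # keypoint detector
        freak = cv2.xfeatures2d.FREAK_create()   # binary descriptor
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

        def frame_to_frame_homography(img_prev, img_curr):
            """Estimate the homography between consecutive frames from SIFT+FREAK matches."""
            kp1 = sift.detect(img_prev, None)
            kp2 = sift.detect(img_curr, None)
            kp1, des1 = freak.compute(img_prev, kp1)
            kp2, des2 = freak.compute(img_curr, kp2)
            matches = matcher.match(des1, des2)
            src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            return homography

        # Usage with two consecutive grayscale frames (hypothetical file names):
        # H = frame_to_frame_homography(cv2.imread("frame0.png", 0), cv2.imread("frame1.png", 0))
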
    Programa Oficial de Doctorado en Ingeniería Eléctrica, Electrónica y Automática. Thesis committee: President: Carlo Regazzoni; Secretary: Fernando García Fernández; Member: Pascual Campoy Cerver