1,428 research outputs found

    Adaptive detection and tracking using multimodal information

    This thesis describes work on fusing data from multiple sources of information, focusing on two main areas: adaptive detection and adaptive object tracking in automated vision scenarios. The work on adaptive object detection explores a new paradigm in dynamic parameter selection, choosing thresholds for object detection so as to maximise agreement between pairs of sources. Object tracking, a complementary technique to object detection, is also explored in a multi-source context, and an efficient framework for robust tracking, termed the Spatiogram Bank tracker, is proposed as a means to overcome the difficulties of traditional histogram tracking. As well as performing theoretical analysis of the proposed methods, specific example applications are given for both the detection and the tracking aspects, using thermal infrared and visible spectrum video data, as well as other multi-modal information sources.
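    The threshold-selection idea lends itself to a small illustrative sketch (this is not the thesis's actual algorithm): two co-registered frames, e.g. thermal and visible, are each thresholded, and the pair of thresholds whose binary detection masks agree most is chosen. Agreement is measured here by Intersection-over-Union, an assumed choice, and the input frames are random placeholders.

```python
import numpy as np

def mask_agreement(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-Union between two binary detection masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 0.0

def select_thresholds(img_a: np.ndarray, img_b: np.ndarray, candidates):
    """Pick the threshold pair whose detection masks agree most."""
    best = (None, None, -1.0)
    for ta in candidates:
        for tb in candidates:
            score = mask_agreement(img_a > ta, img_b > tb)
            if score > best[2]:
                best = (ta, tb, score)
    return best  # (threshold_a, threshold_b, agreement)

# Random co-registered frames stand in for real thermal/visible data.
rng = np.random.default_rng(0)
thermal, visible = rng.random((120, 160)), rng.random((120, 160))
print(select_thresholds(thermal, visible, np.linspace(0.1, 0.9, 9)))
```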

    Real-time Aerial Vehicle Detection and Tracking using a Multi-modal Optical Sensor

    Vehicle tracking from an aerial platform poses a number of unique challenges, including the small number of pixels representing a vehicle, large camera motion, and parallax error. For these reasons, it is accepted to be a more challenging task than traditional object tracking, and it is generally tackled through a number of different sensor modalities. Recently, the Wide Area Motion Imagery (WAMI) sensor platform has received considerable attention, as it can provide higher-resolution single-band imagery in addition to large area coverage. Still, richer sensory information, or further research on the application of WAMI to tracking, is required to persistently track vehicles. With advancements in sensor technology, hyperspectral data acquisition at video frame rates has become possible, and it can be crucial in identifying objects even in low-resolution scenes. For this reason, this thesis considers a multi-modal optical sensor concept to improve tracking in adverse scenes. The Rochester Institute of Technology Multi-object Spectrometer is capable of collecting limited hyperspectral data at desired locations in addition to full-frame single-band imagery. By acquiring hyperspectral data quickly, tracking can be achieved at reasonable frame rates, which turns out to be crucial for tracking. On the other hand, the relatively high cost of hyperspectral data acquisition and transmission needs to be taken into account to design a realistic tracking system. By incorporating extended data for the pixels of interest, we can address or avoid the unique challenges posed by aerial tracking. In this direction, we integrate limited hyperspectral data to improve measurement-to-track association. A hyperspectral-data-based target detection method is also presented to avoid the parallax effect and reduce clutter density. Finally, the proposed system is evaluated on realistic, synthetic scenarios generated by the Digital Image and Remote Sensing software.
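    One way to picture the measurement-to-track association step is sketched below; this is an assumed formulation, not the thesis's. A spectral term, here the spectral angle between a track's stored signature and a candidate measurement's signature, is added to the usual spatial distance so that spectrally dissimilar measurements are penalised. The weight `w_spectral` and the toy signatures are placeholders.

```python
import numpy as np

def spectral_angle(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """Spectral angle (radians) between two hyperspectral signatures."""
    cos = np.dot(sig_a, sig_b) / (np.linalg.norm(sig_a) * np.linalg.norm(sig_b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def association_cost(track_pos, meas_pos, track_sig, meas_sig, w_spectral=5.0):
    """Spatial distance plus a weighted spectral term (w_spectral is an assumed weight)."""
    spatial = np.linalg.norm(np.asarray(track_pos) - np.asarray(meas_pos))
    return spatial + w_spectral * spectral_angle(track_sig, meas_sig)

# Two candidate measurements at equal distance: the spectrally similar one gets the lower cost.
track_pos, track_sig = (10.0, 10.0), np.array([0.2, 0.5, 0.9, 0.4])
for meas_pos, meas_sig in [((12.0, 10.0), np.array([0.21, 0.48, 0.88, 0.41])),
                           ((8.0, 10.0), np.array([0.90, 0.10, 0.20, 0.70]))]:
    print(association_cost(track_pos, meas_pos, track_sig, meas_sig))
```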

    Review of algorithms, methods, and techniques for the detection of UAVs and UAS in audio, radio frequency, and video applications

    Unmanned Aerial Vehicles (UAVs), also known as drones, have evolved rapidly in recent times, due in large part to the development of technologies that enhance these devices. This has resulted in increasingly affordable and better-equipped devices, which has enabled their application in new fields such as agriculture, transport, monitoring, and aerial photography. However, drones have also been used in terrorist acts, privacy violations, and espionage, in addition to involuntary accidents in high-risk zones such as airports. In response to these events, multiple technologies have been introduced to control and monitor the airspace in order to ensure protection in risk areas. This paper is a review of the state of the art of the techniques, methods, and algorithms used in video-, radio-frequency-, and audio-based applications to detect UAVs and Unmanned Aircraft Systems (UAS). This study can serve as a starting point for developing future drone detection systems with the most suitable technologies to meet requirements of optimal scalability, portability, reliability, and availability.

    Vision Sensors and Edge Detection

    The book Vision Sensors and Edge Detection reflects a selection of recent developments within the area of vision sensors and edge detection. There are two sections in this book. The first section presents vision sensors, with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection; the second covers image processing techniques such as image measurements, image transformations, filtering, and parallel computing.

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    This dissertation addresses the components necessary for simulating an image-based recovery of the position of a target using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, the selection of algorithms used for each component, and lag introduced by excessive processing time all contribute to the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bezier curves which approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and increase resolution. The only information available to a fully implemented system for calculating the target position is the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential system size, weight, and power requirements of realistic implementation approaches.
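    The Bezier-based target path mentioned above can be sketched as follows; the control points, units, and sampling density are illustrative assumptions rather than values from the dissertation.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=100):
    """Sample n points along a cubic Bezier curve defined by four control points."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Placeholder ground-track control points (x, y) in kilometres.
path = cubic_bezier((0, 0), (5, 12), (15, -4), (25, 8), n=50)
print(path[:3])  # first few simulated target positions
```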

    Vision-based Monitoring System for High Quality TIG Welding

    The current study evaluates an automatic system for real-time arc welding quality assessment and defect detection. The research focuses on the identification of defects that may arise during the welding process by analysing the occurrence of any changes in the visible spectrum of the weld pool and the surrounding area. Currently, the state of the art is very simplistic, involving an operator observing the process continuously. The operator's assessment is subjective, and acceptance criteria based solely on operator observations can change over time due to fatigue, leading to incorrect classification. Variations in the weld pool are the initial result of the chosen welding parameters and torch position, and at the same time the very first indication of the resulting weld quality. The system investigated in this research consists of a camera used to record the welding process and a processing unit which analyses the frames, giving an indication of the expected quality. The categorisation is achieved by employing artificial neural networks and correlating the weld pool appearance with the resulting quality. Six categories denote the resulting quality of a weld for stainless steel and aluminium. The models use images to learn the correlation between the appearance of the weld pool and the surrounding area and the state of the weld as denoted by the six categories, similar to a welder's categorisation. The models thus learn the probability distribution of image appearance over the categories considered.
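    As a rough illustration of a six-category image classifier of this kind, the sketch below defines a small convolutional network; the architecture, input size, and use of PyTorch are assumptions and do not reflect the study's actual model.

```python
import torch
import torch.nn as nn

class WeldPoolClassifier(nn.Module):
    """Small CNN mapping a weld-pool frame to one of six quality categories (illustrative)."""
    def __init__(self, n_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One 64x64 RGB frame -> probability distribution over the six categories.
model = WeldPoolClassifier()
frame = torch.rand(1, 3, 64, 64)
print(torch.softmax(model(frame), dim=1))
```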