102 research outputs found

    Infrared Small Targets Detection based on MRF Model

    Get PDF
    Abstract: Aiming at the difficulty of detecting dim, small infrared targets under strong background clutter, we propose a novel infrared dim small target detection algorithm based on a Markov random field (MRF) model. We first use an adaptive morphological filter to suppress the background clutter, then introduce a new potential function and energy function according to MRF theory and the features of infrared small-target images, and build a target detection model that automatically determines the location and size of the target. Simulation results show that the algorithm is effective
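    The background-suppression step can be sketched as a morphological white top-hat (image minus its opening), which keeps small bright blobs and removes the smooth background. This is a hypothetical stand-in for the paper's adaptive morphological filter; the structuring-element size and pixel values are illustrative assumptions.

```python
# Sketch of background suppression via a morphological white top-hat
# (image minus its opening). Stand-in for the paper's adaptive
# morphological filter; the 3x3 structuring element is an assumption.

def erode(img, k=1):
    """Grayscale erosion with a flat (2k+1)x(2k+1) structuring element."""
    h, w = len(img), len(img[0])
    return [[min(img[y][x]
                 for y in range(max(0, i - k), min(h, i + k + 1))
                 for x in range(max(0, j - k), min(w, j + k + 1)))
             for j in range(w)] for i in range(h)]

def dilate(img, k=1):
    """Grayscale dilation with the same structuring element."""
    h, w = len(img), len(img[0])
    return [[max(img[y][x]
                 for y in range(max(0, i - k), min(h, i + k + 1))
                 for x in range(max(0, j - k), min(w, j + k + 1)))
             for j in range(w)] for i in range(h)]

def white_tophat(img, k=1):
    """Image minus its morphological opening: keeps small bright blobs."""
    opened = dilate(erode(img, k), k)
    return [[img[i][j] - opened[i][j] for j in range(len(img[0]))]
            for i in range(len(img))]

# A flat background with one bright single-pixel target: the opening
# removes the target, so the top-hat residual isolates it.
frame = [[10] * 5 for _ in range(5)]
frame[2][2] = 50
residual = white_tophat(frame)
```

    After this step, only compact bright structures remain, which is what makes the subsequent MRF-based decision over candidate target pixels tractable.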

    SDDNet: Infrared small and dim target detection network

    Get PDF
    This study focuses on developing deep learning methods for small and dim target detection. We model infrared images as the union of a target region and a background region. Based on this model, target detection is treated as a two-class segmentation problem that divides an image into target and background. Accordingly, a neural network called SDDNet is constructed for single-frame images; it yields target extraction results directly from the original images. For multiframe images, a network called IC-SDDNet, a combination of SDDNet and an interframe correlation network module, is constructed. SDDNet and IC-SDDNet achieve target detection rates close to 1 on typical datasets with very low false-positive rates, performing significantly better than current methods. Both models run end to end, so they are convenient to use and highly efficient: average speeds of 540+/230+ FPS and 170+/60+ FPS are achieved with SDDNet and IC-SDDNet on a single Tesla V100 graphics processing unit and a single Jetson TX2 embedded module, respectively. Neither network needs future information, so both can be used directly in real-time systems. The well-trained models and code used in this study are available at https://github.com/LittlePieces/ObjectDetection
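    The two-class formulation above can be sketched as a per-pixel binary decision that partitions the image into target and background masks. A fixed intensity threshold stands in here for the learned decision SDDNet would produce; the threshold and pixel values are illustrative assumptions.

```python
# Minimal sketch of the two-class (target vs. background) segmentation
# view of small-target detection. A fixed threshold is a hypothetical
# stand-in for the network's learned per-pixel classification.

def segment(img, thresh=30):
    """Return a binary mask: 1 = target pixel, 0 = background pixel."""
    return [[1 if px > thresh else 0 for px in row] for row in img]

frame = [[5, 6, 5],
         [6, 80, 7],
         [5, 7, 6]]
mask = segment(frame)          # target mask; background is its complement
```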

    Vegetation detection and terrain classification for autonomous navigation

    Get PDF
    This thesis introduces seven novel contributions to two perception tasks, vegetation detection and terrain classification, which are at the core of any control system for efficient autonomous navigation in outdoor environments. Regarding vegetation detection, we first describe a vegetation index-based method (1), which relies on the absorption of visible light and the reflectance of near-infrared light by vegetation. Second, a 2D/3D feature fusion (2), which imitates the human visual system in interpreting vegetation, is investigated. In addition, an integrated vision system (3) is proposed that combines visual perception-based and multi-spectral methods in a single device. An in-depth study of the colour and texture features of vegetation leads to robust and fast vegetation detection through an adaptive learning algorithm (4). Furthermore, a double check of passable vegetation detection (5) is realised based on the compressibility of vegetation: the less resistance vegetation offers, the more traversable it is. Regarding terrain classification, we introduce a structure-based method (6) that captures the world scene by inferring its 3D structures through local point statistics computed on LiDAR data. Finally, a classification-based method (7), which combines LiDAR data and visual information to reconstruct 3D scenes, is presented; it describes object representations in more detail, enabling more object types to be classified.
Based on the success of the proposed perceptual inference methods in these environmental sensing tasks, we hope that this thesis will serve as a starting point for the further development of highly reliable perceptual inference methods
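    The index-based method (1) exploits the fact that healthy vegetation absorbs visible (red) light and strongly reflects near-infrared light. A minimal sketch using the standard NDVI formula follows; the thesis may use a different index, and the decision threshold and reflectance values here are illustrative assumptions.

```python
# Sketch of an index-based vegetation test using NDVI
# (normalized difference vegetation index). The 0.4 decision
# threshold is an illustrative assumption, not the thesis value.

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), ranges over [-1, 1]."""
    denom = nir + red
    return 0.0 if denom == 0 else (nir - red) / denom

def is_vegetation(nir, red, thresh=0.4):
    """Healthy plants reflect NIR strongly while absorbing red light."""
    return ndvi(nir, red) > thresh

leaf = is_vegetation(nir=0.60, red=0.08)   # strong NIR reflectance
road = is_vegetation(nir=0.30, red=0.25)   # spectrally flat surface
```

    The same per-pixel test extends to whole NIR/red image pairs, which is what makes index-based detection fast enough for online navigation.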

    Unmanned aerial vehicle video-based target tracking algorithm using sparse representation

    Get PDF
    Target tracking based on unmanned aerial vehicle (UAV) video is a significant technique in intelligent urban surveillance systems for smart-city applications such as smart transportation, road traffic monitoring, and stolen vehicle inspection. In this paper, a vision-based target tracking algorithm aimed at locating UAV-captured targets, such as pedestrians and vehicles, is proposed using sparse representation theory. First, each target candidate is sparsely represented in the subspace spanned by a joint dictionary. Then, the sparse representation coefficient is further constrained by an L2 regularization based on temporal consistency. To cope with the partial occlusion that appears in UAV videos, a Markov random field (MRF)-based binary support vector with a contiguous occlusion constraint is introduced into our sparse representation model. For long-term tracking, a particle filter framework with a dynamic template update scheme is designed. Both qualitative and quantitative experiments on visible (Vis) and infrared (IR) UAV videos show that the presented tracker achieves better precision rate and success rate than other state-of-the-art trackers
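    The temporally-constrained coefficient step can be sketched from its closed form: minimizing ||y - Dx||^2 + lam*||x - x_prev||^2 gives x = (D^T D + lam*I)^-1 (D^T y + lam*x_prev). The two-atom dictionary, lam, and observation below are illustrative assumptions; the paper's full model additionally includes the sparsity and occlusion terms.

```python
# Sketch of the L2 temporal-consistency step in sparse tracking:
#   minimize ||y - D x||^2 + lam * ||x - x_prev||^2
# with closed form x = (D^T D + lam I)^-1 (D^T y + lam x_prev).
# Two-atom toy dictionary; all numbers are illustrative assumptions.

def solve2(a, b, c, d, e, f):
    """Solve the 2x2 system [[a, b], [c, d]] @ x = [e, f] (Cramer's rule)."""
    det = a * d - b * c
    return [(e * d - b * f) / det, (a * f - e * c) / det]

def temporal_ridge(D, y, x_prev, lam):
    """Closed-form coefficient with an L2 pull toward the previous frame."""
    n = len(D)
    # Normal-equation matrix A = D^T D + lam I (2 atoms -> 2x2, symmetric).
    a = sum(D[i][0] * D[i][0] for i in range(n)) + lam
    b = sum(D[i][0] * D[i][1] for i in range(n))
    d = sum(D[i][1] * D[i][1] for i in range(n)) + lam
    # Right-hand side r = D^T y + lam * x_prev.
    e = sum(D[i][0] * y[i] for i in range(n)) + lam * x_prev[0]
    f = sum(D[i][1] * y[i] for i in range(n)) + lam * x_prev[1]
    return solve2(a, b, b, d, e, f)

D = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]   # orthonormal toy dictionary
y = [2.0, 4.0, 0.0]                        # current target observation
x = temporal_ridge(D, y, x_prev=[0.0, 0.0], lam=1.0)
```

    As lam grows, the coefficient is pulled toward the previous frame's coefficient, which is exactly the temporal-consistency behaviour the regularizer is meant to enforce.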

    Functional Biomarkers to Assess Visual System Integrity: An eye tracking based approach

    Get PDF
    Functional Biomarkers to Assess Visual System Integrity: An eye tracking based approach

    Characterization and Modelling of Composites

    Get PDF
    Composites have increasingly been used in various structural components in the aerospace, marine, automotive, and wind energy sectors. The material characterization of composites is a vital part of the product development and production process. Physical, mechanical, and chemical characterization helps developers to further their understanding of products and materials, thus ensuring quality control. Achieving an in-depth understanding and consequent improvement of the general performance of these materials, however, still requires complex material modeling and simulation tools, which are often multiscale and encompass multiphysics. This Special Issue aims to solicit papers concerning promising, recent developments in composite modeling, simulation, and characterization, in both design and manufacturing areas, including experimental as well as industrial-scale case studies. All submitted manuscripts will undergo a rigorous review process and will only be considered for publication if they meet journal standards. Selected top articles may have their processing charges waived at the recommendation of reviewers and the Guest Editor

    Modeling Eye Tracking Data with Application to Object Detection

    Get PDF
    This research focuses on enhancing computer vision algorithms using eye tracking and visual saliency. Recent advances in eye tracking device technology have enabled large-scale collection of eye tracking data without affecting the viewer experience. Because eye tracking data is biased towards high-level image and video semantics, it provides a valuable prior for object detection in images and object extraction in videos. We specifically explore the following problems in the thesis: 1) eye tracking and saliency-enhanced object detection, 2) eye tracking-assisted object extraction in videos, and 3) the role of object co-occurrence and camera focus in visual attention modeling. Since human attention is biased towards faces and text, in the first work we propose an approach to isolate face and text regions in images by analyzing eye tracking data from multiple subjects. Eye tracking data is clustered and region labels are predicted using a Markov random field model. In the second work, we study object extraction in videos using an eye tracking prior. We propose an algorithm to extract dominant visual tracks in eye tracking data from multiple subjects by solving a linear assignment problem. Visual tracks localize the object search, and we propose a novel mixed graph association framework inferred by binary integer linear programming. In the final work, we address the problem of predicting where people look in images, specifically exploring the importance of scene context in the form of object co-occurrence and camera focus. The proposed model extracts low-, mid-, and high-level features together with scene context features, and uses a regression framework to predict a visual attention map. In all the above cases, extensive experimental results show that the proposed methods outperform the current state-of-the-art
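    The linear assignment step for linking gaze points across consecutive frames can be sketched as a minimum-cost matching: each fixation in the previous frame is paired with exactly one fixation in the current frame so that the total displacement is smallest. Brute force over permutations suffices for illustration (the thesis uses a proper assignment solver); the coordinates and squared-distance cost are illustrative assumptions.

```python
# Sketch of linking gaze points across two frames as a linear
# assignment problem: choose the permutation minimizing total squared
# distance. Brute force is fine at this scale; real solvers (e.g. the
# Hungarian algorithm) handle many points efficiently.
from itertools import permutations

def match_gaze(prev_pts, curr_pts):
    """Return, for each point in prev_pts, the index of its match in curr_pts."""
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(len(curr_pts))):
        cost = sum(
            (prev_pts[i][0] - curr_pts[j][0]) ** 2 +
            (prev_pts[i][1] - curr_pts[j][1]) ** 2
            for i, j in enumerate(perm)
        )
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return list(best_perm)

frame_a = [(10, 10), (50, 50)]    # fixations in the previous frame
frame_b = [(52, 49), (11, 12)]    # fixations in the current frame
links = match_gaze(frame_a, frame_b)
```

    Chaining such matches over successive frames is what produces the dominant visual tracks used to localize the object search.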