
    Computationally Efficient Target Classification in Multispectral Image Data with Deep Neural Networks

    Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive, and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classification tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario, we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3x smaller computational effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring, but particularly interesting, classes such as cars, 75% of the pixels are labeled correctly, with errors occurring only around the borders of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
    Comment: Presented at SPIE Security + Defence 2016, Proc. SPIE 9997, Target and Background Signatures I
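The RGB/VIS-NIR fusion described in the abstract amounts to stacking co-registered channels into a single per-pixel input tensor before it reaches the ConvNet. A minimal NumPy sketch; the image size, the per-channel standardisation step, and the random test data are assumptions for illustration, not details from the paper:

```python
import numpy as np

def fuse_rgb_multispectral(rgb, ms):
    """Stack a co-registered RGB frame (H, W, 3) with a multispectral
    cube (H, W, 25) into one (H, W, 28) feature tensor and standardise
    each channel to zero mean and unit variance."""
    fused = np.concatenate([rgb, ms], axis=-1).astype(np.float32)
    mean = fused.mean(axis=(0, 1), keepdims=True)
    std = fused.std(axis=(0, 1), keepdims=True) + 1e-8
    return (fused - mean) / std

# Toy inputs standing in for a registered RGB frame and VIS-NIR cube
rgb = np.random.rand(64, 64, 3)
ms = np.random.rand(64, 64, 25)
x = fuse_rgb_multispectral(rgb, ms)
print(x.shape)  # (64, 64, 28)
```

The registration of the two sensors, which the paper's pipeline would require before this step, is omitted here.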

    Target classification in multimodal video

    The presented thesis focuses on enhancing scene segmentation and target recognition methodologies via the mobilisation of contextual information. The algorithms developed to achieve this goal utilise multi-modal sensor information collected across varying scenarios, from controlled indoor sequences to challenging rural locations. Sensors are chiefly colour band and long wave infrared (LWIR), enabling persistent surveillance capabilities across all environments. In the drive to develop effectual algorithms towards the outlined goals, key obstacles are identified and examined: the recovery of background scene structure from foreground object 'clutter', employing contextual foreground knowledge to circumvent training a classifier when labeled data is not readily available, creating a labeled LWIR dataset to train a convolutional neural network (CNN)-based object classifier, and the viability of spatial context to address long-range target classification when big data solutions are not enough. For an environment displaying frequent foreground clutter, such as a busy train station, we propose an algorithm exploiting foreground object presence to segment underlying scene structure that is not often visible. If such a location is outdoors and surveyed by an infra-red (IR) and visible band camera set-up, scene context and contextual knowledge transfer allow reasonable class predictions for thermal signatures within the scene to be determined. Furthermore, a labeled LWIR image corpus is created to train an infrared object classifier, using a CNN approach. The trained network demonstrates effective classification accuracy of 95% over 6 object classes. However, performance is not sustainable for IR targets acquired at long range due to low signal quality, and classification accuracy drops. This is addressed by mobilising spatial context to affect network class scores, restoring robust classification capability.

    Measurements and analysis of multistatic and multimodal micro-Doppler signatures for automatic target classification

    The purpose of this paper is to present an experimental trial carried out at the Defence Academy of the United Kingdom to measure simultaneous multistatic and multimodal micro-Doppler signatures of various targets, including humans and flying UAVs. Signatures were gathered using a network of sensors consisting of a CW monostatic radar operating at 10 GHz (X-band) and an ultrasound radar with a monostatic and a bistatic channel operating at 45 kHz and 35 kHz, respectively. A preliminary analysis of automatic target classification performance and a comparison with the radar monostatic case are also presented.
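Micro-Doppler signatures like those gathered in this trial are conventionally formed as short-time Fourier transform (STFT) spectrograms of a raw radar channel. A minimal NumPy sketch; the window length, hop size, sample rate, and the toy micro-motion signal are illustrative assumptions, not parameters from the trial:

```python
import numpy as np

def micro_doppler_spectrogram(signal, win=256, hop=64):
    """Magnitude STFT: slide a Hann window over the signal, FFT each
    frame, and return an array with Doppler bins as rows and time
    frames as columns."""
    window = np.hanning(win)
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win] * window
        frames.append(np.abs(np.fft.fftshift(np.fft.fft(seg))))
    return np.array(frames).T

fs = 10_000  # illustrative sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)
# Toy echo: a carrier whose phase is modulated by a 2 Hz micro-motion,
# standing in for e.g. rotor or limb movement
sig = np.cos(2 * np.pi * (500 * t + 50 * np.sin(2 * np.pi * 2 * t)))
spec = micro_doppler_spectrogram(sig)
print(spec.shape)  # (256, 153)
```

Classifiers in this line of work typically take such a spectrogram, or features derived from it, as input.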

    Ground target classification for airborne bistatic radar


    Dynamic Target Classification in Wireless Sensor Networks

    Information exploitation schemes with high accuracy and low computational cost play an important role in Wireless Sensor Networks (WSNs). This thesis studies the problem of target classification in WSNs. Specifically, due to the resource constraints and dynamic nature of WSNs, we focus on the design of an energy-efficient solution with high accuracy for target classification in WSNs. Feature extraction and classification are two intertwined components in pattern recognition. Our hypothesis is that for each type of target, there exists an optimal set of features in conjunction with a specific classifier, which can yield the best performance in terms of classification accuracy using the least amount of computation, measured by the number of features used. Our objective is to find such an optimal combination of features and classifiers. Our study is in the context of applications deployed in a wireless sensor network (WSN) environment, composed of a large number of small sensors with their own processing, sensing and networking capabilities, powered by onboard battery supply. Due to the extremely limited resources on each sensor platform, the decision making is prone to faults, making sensor fusion a necessity. We present a concept referred to as dynamic target classification in WSNs. The main idea is to dynamically select the optimal combination of features and classifiers based on the probability that the target to be classified might belong to a certain category. We use two data sets to validate our hypothesis and derive the optimal combination sets by minimizing a cost function. We apply the proposed algorithm to a scenario of collaborative target classification among a group of sensors selected using an information-based sensor-selection rule in WSNs.
Experimental results show that our approach can significantly reduce the computational time while achieving better classification accuracy without using any fusion algorithm, compared with traditional classification approaches, making it a viable solution in practice.
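The selection step the abstract describes, picking a feature-set/classifier combination by minimizing a cost that trades accuracy against feature count, can be sketched as follows. The candidate table, the linear cost form, and the weight `lam` are purely illustrative assumptions; the thesis's actual cost function is not reproduced here:

```python
def select_combination(candidates, lam=0.05):
    """Return the (classifier, feature-set) key minimising
    cost = error_rate + lam * n_features.

    `candidates` maps a (classifier, feature-set) pair to a tuple
    (error_rate, n_features); `lam` weights computation (number of
    features) against accuracy."""
    def cost(item):
        err, n_feat = item[1]
        return err + lam * n_feat
    return min(candidates.items(), key=cost)[0]

# Hypothetical candidate table: error rates and feature counts
candidates = {
    ("kNN", "energy+zcr"): (0.08, 2),     # cost 0.18
    ("SVM", "full-spectral"): (0.04, 12),  # cost 0.64
    ("tree", "energy"): (0.12, 1),         # cost 0.17
}
print(select_combination(candidates))  # ('tree', 'energy')
```

In the dynamic scheme, such a table would be consulted per target category, so the chosen combination changes with the probability estimate of the target's class.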