
    Convolutional Neural Network Approach for Multispectral Facial Presentation Attack Detection in Automated Border Control Systems

    Automated border control systems are the first critical infrastructure point when crossing a country's border. Unauthorized border crossings pose a high security risk to any country. This paper presents a multispectral analysis of presentation attack detection for facial biometrics using features learned by a convolutional neural network. Three sensors were used to design and develop a new database composed of visible (VIS), near-infrared (NIR), and thermal images. Most studies are based on laboratory or ideal, controlled conditions. In a real scenario, however, a subject's state is substantially altered by diverse physiological conditions, such as stress, temperature changes, sweating, and increased blood pressure. For this reason, the added value of this study is that the database was acquired in situ. The attacks considered were printed, masked, and displayed images. In addition, five classifiers were used to detect the presentation attacks. The thermal sensor provides better performance than the other sensors, and the results improve further when all sensors are used together, regardless of whether classifier-level or feature-level fusion is considered. Finally, classifiers such as KNN and SVM show high performance at low computational cost.

    Deep Learning Based Multi-Modal Fusion Architectures for Maritime Vessel Detection

    Object detection is a fundamental computer vision task for many real-world applications. In the maritime environment, this task is challenging due to varying light, viewing distances, weather conditions, and sea waves. In addition, light reflection, camera motion, and illumination changes may cause false detections. To address this challenge, we present three fusion architectures that fuse two imaging modalities: visible and infrared. These architectures combine complementary information from the two modalities at different levels: pixel level, feature level, and decision level. They employ deep learning for both fusion and detection. We investigate the performance of the proposed architectures on a real marine image dataset captured by color and infrared cameras on board a vessel in the Finnish archipelago. The cameras are used in the development of autonomous ships and collect data in a range of operational and climatic conditions. Experiments show that the feature-level fusion architecture outperforms the other state-of-the-art fusion-level architectures.
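    The three fusion levels named in the abstract can be illustrated with a toy numpy sketch. The feature extractor and detector below are trivial stand-ins for the deep networks in the paper, and the image sizes and averaging weights are assumptions; only the location of the fusion point differs between the three variants.

```python
import numpy as np

def extract_features(img):             # stand-in for a CNN backbone
    return img.mean(axis=(0, 1))       # per-channel mean as a "feature vector"

def detect(feat):                      # stand-in for a detection head
    return float(np.clip(feat.mean(), 0.0, 1.0))  # a confidence score

vis = np.random.default_rng(1).random((64, 64, 3))   # visible image (toy)
ir = np.random.default_rng(2).random((64, 64, 1))    # infrared image (toy)

# Pixel-level fusion: stack the modalities before any processing.
pixel_input = np.concatenate([vis, ir], axis=-1)
score_pixel = detect(extract_features(pixel_input))

# Feature-level fusion: extract features per modality, then concatenate.
fused_feat = np.concatenate([extract_features(vis), extract_features(ir)])
score_feature = detect(fused_feat)

# Decision-level fusion: run each modality end to end, then average scores.
score_decision = 0.5 * (detect(extract_features(vis)) + detect(extract_features(ir)))
```

    The practical difference is where the network can still exploit cross-modal correlations: earliest at the pixel level, latest (and least) at the decision level.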

    Assessing High Dynamic Range Imagery Performance for Object Detection in Maritime Environments

    The field of autonomous robotics has benefited from the implementation of convolutional neural networks in vision-based situational awareness. These strategies help identify surface obstacles and nearby vessels. This study proposes introducing high dynamic range (HDR) cameras on autonomous surface vessels, because these cameras capture images at different exposure levels, revealing more detail than fixed-exposure cameras. To determine whether this is beneficial for autonomous vessels, this research creates datasets of labeled high dynamic range images and single-exposure images, then trains object detection networks on these datasets to compare their performance. Faster-RCNN, SSD, and YOLOv5 were used for the comparison. Results showed that Faster-RCNN and YOLOv5 networks trained on fixed-exposure images outperformed their HDR counterparts, while SSD performed better when using HDR images. The better fixed-exposure network performance is likely attributable to better feature extraction from fixed-exposure images. Despite these metrics, HDR images prove more beneficial in cases of extreme light exposure, since features are not lost.
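    The advantage of multi-exposure capture that the abstract relies on can be sketched in a few lines. The radiance values, exposure gains, and naive merging rule below are invented for illustration and are not taken from the study:

```python
import numpy as np

# Toy scene radiances spanning a wide dynamic range (invented values).
scene = np.array([0.02, 0.2, 2.0, 20.0])
exposures = [8.0, 1.0, 0.04]                 # bracketed exposure gains

# Each capture clips to the sensor's [0, 1] range.
shots = [np.clip(scene * g, 0.0, 1.0) for g in exposures]

# Naive HDR merge: map unsaturated pixels back to linear radiance and average.
est = np.zeros_like(scene)
cnt = np.zeros_like(scene)
for shot, g in zip(shots, exposures):
    ok = shot < 1.0                          # keep only unsaturated pixels
    est[ok] += shot[ok] / g
    cnt[ok] += 1
hdr = est / np.maximum(cnt, 1)

fixed = np.clip(scene * 1.0, 0.0, 1.0)       # a single fixed exposure clips
```

    The merged estimate recovers the full radiance range, while the single fixed exposure saturates the bright pixels, which is exactly the "features are not lost" case the abstract describes for extreme lighting.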

    Application of Multi-Sensor Fusion Technology in Target Detection and Recognition

    Multi-sensor fusion technology has drawn a great deal of industrial and academic interest in recent years. Multi-sensor fusion methods are widely used in many applications, such as autonomous systems, remote sensing, video surveillance, and the military. These methods can exploit the complementary properties of targets by considering multiple sensors, and they can achieve a detailed description of the environment and accurate detection of targets of interest based on information from different sensors. This book collects novel developments in the field of multi-sensor, multi-source, and multi-process information fusion. The articles emphasize one or more of three facets: architectures, algorithms, and applications. The published papers deal with fundamental theoretical analyses as well as demonstrations of their application to real-world problems.

    Deep learning for the early detection of harmful algal blooms and improving water quality monitoring

    Climate change will affect how water sources are managed and monitored. The frequency of algal blooms will increase with climate change, as it presents favourable conditions for the reproduction of phytoplankton. During monitoring, sensor failures in monitoring systems result in partially filled data, which may affect critical systems. Therefore, imputation becomes necessary to decrease error and increase data quality. This work investigates two issues in water quality data analysis: improving data quality and anomaly detection. It consists of three main topics: data imputation, early algal bloom detection using in-situ data, and early algal bloom detection using multiple modalities.

    The data imputation problem is addressed by experimenting with various methods on a water quality dataset that includes four locations around the North Sea and the Irish Sea, with different characteristics and high miss rates, to test model generalisability. A novel neural network architecture with self-attention is proposed in which imputation is done in a single pass, reducing execution time. The self-attention components increase the interpretability of the imputation process at each stage of the network, providing knowledge to domain experts.

    After data curation, algal activity is predicted 1 to 7 days ahead using transformer networks, and the importance of the input with regard to the output of the prediction model is explained using SHAP, aiming to explain model behaviour to domain experts, which was overlooked in previous approaches. The prediction model improves bloom detection performance by 5% on average, and the explanation summarizes the complex structure of the model as input-output relationships. The initial unimodal bloom detection model is further improved by incorporating multiple modalities into the detection process, which were previously used only for validation. The problem of missing data is also tackled by using coordinated representations, replacing low-quality in-situ data with satellite data and vice versa, instead of imputation, which may produce biased results.
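    The single-pass, attention-style imputation idea can be caricatured in a few lines. In this sketch the learned attention weights of the proposed network are replaced by a fixed temporal-proximity softmax, and the series and scale parameter are invented, so it illustrates only the weighting mechanism, not the thesis's architecture:

```python
import numpy as np

def attention_impute(x, scale=2.0):
    """Fill each NaN with a softmax-weighted average of the observed values,
    using temporal proximity as a stand-in for learned attention scores."""
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x))
    obs = ~np.isnan(x)
    out = x.copy()
    for i in np.flatnonzero(~obs):
        scores = -np.abs(t[obs] - i) / scale     # closer samples score higher
        w = np.exp(scores - scores.max())        # stable softmax weights
        out[i] = np.dot(w / w.sum(), x[obs])     # one weighted pass per gap
    return out

series = [1.0, 2.0, np.nan, 4.0, 5.0, np.nan, 7.0]
imputed = attention_impute(series)
```

    Because every gap is filled from the observed values in a single pass, there is no iterative refinement loop, which is the execution-time property the abstract highlights.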

    Maritime ship recognition based on convolutional neural network and linear weighted decision fusion for multimodal images

    Ship images are easily affected by light, weather, sea state, and other factors, making maritime ship recognition a highly challenging task. To address the low accuracy of ship recognition in visible images, we propose a maritime ship recognition method based on a convolutional neural network (CNN) and linear weighted decision fusion for multimodal images. First, a dual CNN is proposed to learn effective classification features from multimodal images (i.e., visible and infrared images) of the ship target. Then, the class probabilities for the input multimodal images are obtained using the softmax function at the output layer. Finally, the probabilities are combined by a linear weighted decision fusion method to perform maritime ship recognition. Experimental results on a publicly available visible and infrared spectrum dataset and an RGB-NIR dataset show that the recognition accuracy of the proposed method reaches 0.936 and 0.818, respectively, achieving a promising recognition effect compared with single-source sensor image recognition and other existing recognition methods.
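    The final step, a linear weighted sum of the two branches' softmax outputs, might look like the following sketch. The class names, logits, and fusion weights are illustrative assumptions, not values from the paper:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

classes = ["cargo", "passenger", "fishing"]       # hypothetical ship classes
logits_vis = np.array([2.0, 0.5, -1.0])           # visible-branch CNN output
logits_ir = np.array([0.8, 1.6, -0.5])            # infrared-branch CNN output

w_vis, w_ir = 0.6, 0.4                            # assumed fusion weights
p_fused = w_vis * softmax(logits_vis) + w_ir * softmax(logits_ir)
predicted = classes[int(p_fused.argmax())]
```

    Because the weights sum to one and each softmax sums to one, the fused vector remains a valid probability distribution, and the prediction is simply its argmax.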

    Increasing the reuse of wood in bulky waste using artificial intelligence and imaging in the VIS, IR, and terahertz ranges

    Bulky waste contains valuable raw materials, especially wood, which accounts for around 50% of its volume. Given the volume and variety of bulky waste, sorting is very time-consuming and is often still done manually. As a result, only about half of the available wood is reused as a material, while the rest is burned with unsorted waste. To improve the material recycling of wood from bulky waste, the ASKIVIT project aims to develop a solution for the automated sorting of bulky waste. For this, a multi-sensor approach is proposed, including: (i) conventional imaging in the visible spectral range; (ii) near-infrared hyperspectral imaging; (iii) active heat-flow thermography; and (iv) terahertz imaging. This paper presents a demonstrator used to obtain images with the aforementioned sensors. Differences between the imaging systems are discussed, and promising results on common problems such as painted materials and black plastics are presented. In addition, preliminary examinations show the importance of near-infrared hyperspectral imaging for the characterization of bulky waste.