67 research outputs found

    Change detection in multitemporal monitoring images under low illumination

    Robust Individual-Cell/Object Tracking via PCANet Deep Network in Biomedicine and Computer Vision

    LEARNING-FREE DEEP FEATURES FOR MULTISPECTRAL PALM-PRINT CLASSIFICATION

    The feature extraction step is a major and crucial step in analyzing and understanding raw data, as it has a considerable impact on system accuracy. Unfortunately, despite the very acceptable results obtained by many handcrafted methods, they can have difficulty representing the features in the case of large databases or strongly correlated samples. In this context, we propose a new, simple and lightweight method for deep feature extraction. Our method can be configured to produce four different deep features, each controlled to tune the system accuracy. We evaluated the performance of our method using a multispectral palm-print-based biometric system, and the experimental results on the CASIA database show that our method achieves higher accuracy than many current handcrafted feature extraction methods and many well-known deep learning-based methods.
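    For illustration, below is a minimal sketch of a learning-free, PCANet-style feature pipeline in the spirit of this abstract: fixed 2D DCT filters stand in for the unspecified filter bank, followed by binary hashing and block histograms. The filter choice, sizes and encoding (the helpers dct_filter_bank and deep_features) are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a learning-free, PCANet-style feature extractor.
# Filter choice (fixed 2D DCT bases), sizes and the histogram encoding are
# illustrative assumptions, not the exact pipeline of the cited paper.
import numpy as np
from scipy.signal import convolve2d
from scipy.fftpack import dct

def dct_filter_bank(size=7, n_filters=8):
    """Build fixed (learning-free) 2D DCT filters of shape (size, size)."""
    basis = dct(np.eye(size), norm="ortho", axis=0)  # rows are 1D DCT bases
    filters = []
    for u in range(size):
        for v in range(size):
            if u == 0 and v == 0:
                continue                              # skip the constant (DC) filter
            filters.append(np.outer(basis[u], basis[v]))
            if len(filters) == n_filters:
                return np.stack(filters)
    return np.stack(filters)

def deep_features(img, filters, block=16):
    """Convolve, binarise the responses and pool them into block histograms."""
    maps = [convolve2d(img, f, mode="same") for f in filters]
    # Binary hashing: each pixel gets an integer code from the response signs.
    code = sum((m > 0).astype(np.int64) << i for i, m in enumerate(maps))
    n_bins = 2 ** len(filters)
    h, w = code.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            hist, _ = np.histogram(code[y:y + block, x:x + block],
                                   bins=n_bins, range=(0, n_bins))
            feats.append(hist)
    return np.concatenate(feats).astype(np.float32)

palm = np.random.rand(128, 128)                       # stand-in for a palm-print image
feat = deep_features(palm, dct_filter_bank())
```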

    Deep convolutional networks without backpropagation

    This thesis develops networks trained without gradient descent or backpropagation, designed specifically for classification tasks. The emergence of issues with gradient-based neural networks, such as long training times, vanishing or exploding gradients and high computational costs, has led to the development of such alternatives. The works presented in this thesis extend PCANet, with the fundamental objective being the development of networks capable of providing both good performance and significant improvements in network depth. Chapter 1 of this thesis formulates the problem, describes the challenges, outlines the research questions and summarises the contributions. In Chapter 2, gradient-based and non-gradient-based networks are reviewed. Chapter 3 presents the Multi-Layer PCANet, whose design is inspired by that of PCANet but uses second-order pooling and CNN-like filters; the evaluation experiments indicate that the proposed network provides a considerable reduction in the number of features and, consequently, a gain in performance. The networks in Chapters 4 and 5 share the same design as the Multi-Layer PCANet but generate their filter banks using different supervised learning approaches. The experimental results on four databases (CIFAR-10, CIFAR-100, MNIST and TinyImageNet) show that semi-supervised Stacked-LDA filters are sufficient for providing good data representation in the convolutional layers. These filters are produced by combining 50% PCA filters (Chapter 3) with 50% Stacked-LDA filters (Chapter 4). Chapter 6 introduces deep residual compensation convolutional networks for image classification. The design of this network comprises several convolutional layers, each post-processed and trained with new labels learned from the residual information of all preceding layers. The evaluation experiments indicate that the proposed network is competitive with standard gradient-based networks not only in terms of accuracy but also in the number of FLOPs required for training. Chapter 7 summarises the findings and discusses the field's potential future directions.
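    As a concrete illustration of learning convolutional filters without gradient descent, the sketch below shows the basic PCANet idea that the thesis builds on: a stage's filter bank is taken as the leading principal components of mean-removed image patches. The patch size, filter count and the helper name pca_filters are illustrative choices, not the thesis's settings.

```python
# Sketch of one PCANet-style convolutional stage trained without backpropagation:
# the filters are simply the top principal components of mean-removed patches.
import numpy as np

def pca_filters(images, patch=5, n_filters=8):
    """Learn convolutional filters as the top principal components of patches."""
    patches = []
    for img in images:
        h, w = img.shape
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                p = img[y:y + patch, x:x + patch].ravel()
                patches.append(p - p.mean())            # remove the patch mean
    X = np.stack(patches)                               # shape (N, patch * patch)
    # Right singular vectors of the patch matrix are the principal directions;
    # no gradients or labels are needed to obtain the filters.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_filters].reshape(n_filters, patch, patch)

images = [np.random.rand(32, 32) for _ in range(10)]    # toy input set
filters = pca_filters(images)
print(filters.shape)                                     # (8, 5, 5)
```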

    Two-Phase Object-Based Deep Learning for Multi-Temporal SAR Image Change Detection

    Change detection is one of the fundamental applications of synthetic aperture radar (SAR) images. However, the speckle noise present in SAR images has a negative effect on change detection, leading to frequent false alarms in the mapping products. In this research, a novel two-phase object-based deep learning approach is proposed for multi-temporal SAR image change detection. Compared with traditional methods, the proposed approach brings two main innovations. One is to classify all pixels into three categories rather than two: unchanged pixels, changed pixels caused by strong speckle (false changes), and changed pixels formed by real terrain variation (real changes). The other is to group neighbouring pixels into superpixel objects so as to exploit local spatial context. Two phases are designed in the methodology: (1) generate objects with the simple linear iterative clustering (SLIC) algorithm, and discriminate these objects into changed and unchanged classes using fuzzy c-means (FCM) clustering and a deep PCANet; the output of this phase is the set of changed and unchanged superpixels. (2) Apply deep learning on the pixel sets over the changed superpixels only, obtained in the first phase, to discriminate real changes from false changes. SLIC is employed again to obtain new superpixels in the second phase, low-rank and sparse decomposition is applied to these new superpixels to suppress speckle noise, and a further FCM clustering step follows. A new PCANet is then trained to classify the two kinds of changed superpixels and produce the final change maps. Numerical experiments demonstrate that, compared with benchmark methods, the proposed approach can distinguish real changes from false changes effectively with significantly reduced false alarm rates, and achieves up to 99.71% change detection accuracy on multi-temporal SAR imagery.
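    Below is a minimal sketch of the first phase only, assuming a log-ratio difference image as the change indicator: SLIC superpixels are clustered into changed and unchanged classes with a small fuzzy c-means routine. The PCANet classifier and the entire second, speckle-aware phase are omitted, and the helper names (fuzzy_cmeans, phase_one) and parameter values are assumptions, not the paper's settings.

```python
# Sketch of phase 1: SLIC superpixels on a log-ratio image + FCM into two classes.
# The PCANet step and the second (speckle-suppression) phase are not shown.
import numpy as np
from skimage.segmentation import slic

def fuzzy_cmeans(x, c=2, m=2.0, iters=100):
    """Tiny fuzzy c-means on a 1-D feature vector x; returns memberships and centers."""
    rng = np.random.default_rng(0)
    u = rng.dirichlet(np.ones(c), size=len(x))           # initial memberships (N, c)
    for _ in range(iters):
        w = u ** m
        centers = (w * x[:, None]).sum(0) / w.sum(0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        a = d ** (2.0 / (m - 1.0))
        u = (1.0 / a) / (1.0 / a).sum(axis=1, keepdims=True)
    return u, centers

def phase_one(img_t1, img_t2, n_segments=500):
    """SLIC superpixels on the log-ratio image, then FCM into changed/unchanged."""
    log_ratio = np.abs(np.log((img_t1 + 1e-6) / (img_t2 + 1e-6)))
    labels = slic(log_ratio, n_segments=n_segments, compactness=0.1, channel_axis=None)
    ids = np.unique(labels)
    means = np.array([log_ratio[labels == k].mean() for k in ids])
    u, centers = fuzzy_cmeans(means)
    changed_class = np.argmax(centers)                    # higher log-ratio => changed
    changed_superpixels = ids[u[:, changed_class] > 0.5]
    return labels, changed_superpixels

t1 = np.random.rand(256, 256) + 1e-3                      # stand-ins for SAR intensities
t2 = np.random.rand(256, 256) + 1e-3
labels, changed = phase_one(t1, t2)
```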

    Advanced Biometrics with Deep Learning

    Biometrics, such as fingerprint, iris, face, hand-print, hand-vein, speech and gait recognition, as a means of identity management, have become commonplace for various applications. Biometric systems follow a typical pipeline composed of separate preprocessing, feature extraction and classification steps. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to conventional data-agnostic, handcrafted preprocessing and feature extraction for biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm that unifies preprocessing, feature extraction and recognition, based solely on biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that deal with challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into four categories according to biometric modality: face biometrics, medical electronic signals (EEG and ECG), voiceprint, and others.