2 research outputs found

    Robust sparse representation based multi-focus image fusion with dictionary construction and local spatial consistency

    Recently, sparse representation (SR)-based methods have been proposed for the fusion of multi-focus images. However, most of them consider the local information of each image patch independently during sparse coding and fusion, giving rise to spatial artifacts in the fused image. To overcome this issue, we present a novel multi-focus image fusion method that jointly considers the information of each local image patch and its spatial context during both sparse coding and fusion. Specifically, in the sparse coding phase we employ a robust sparse representation model with a Laplacian regularization term on the sparse error matrix (LR_RSR for short), which enforces local consistency among spatially adjacent image patches. In the subsequent fusion process, we define a focus measure that determines the focused and defocused regions in the multi-focus images by collaboratively using the information of each local image patch and of its 8-connected spatial neighbors. As a result, the proposed method introduces fewer spatial artifacts into the fused image. Moreover, instead of using the input data themselves, we construct an over-complete dictionary with a small number of atoms that maintains good representation capability for the LR_RSR model during sparse coding. This greatly reduces the computational complexity of the proposed fusion method without degrading fusion performance, which can even be slightly improved. Experimental results demonstrate the validity of the proposed method and, more importantly, show that the LR_RSR algorithm is more computationally efficient than most traditional SR-based fusion methods.
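    The abstract does not spell out the LR_RSR objective. A plausible sketch, assuming the common robust-SR formulation with patch matrix $X$, over-complete dictionary $D$, sparse coefficient matrix $A$, sparse error matrix $E$, and a graph Laplacian $L$ built over spatially adjacent patches (all symbols are assumptions for illustration, not taken from the paper), is

    $$\min_{A,\,E}\ \|A\|_{1} + \lambda\,\|E\|_{1} + \beta\,\operatorname{tr}\!\left(E L E^{\top}\right) \quad \text{s.t.}\quad X = D A + E,$$

    where the trace term equals $\tfrac{1}{2}\sum_{i,j} w_{ij}\,\|e_i - e_j\|_2^2$ over the error columns of neighboring patches, which is what would enforce the local spatial consistency described above.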

    Beyond PCA: Deep Learning Approaches for Face Modeling and Aging

    Modeling faces with large variations has been a challenging task in computer vision. These variations, such as expressions, poses, and occlusions, are usually complex and non-linear. Moreover, new facial images come with their own highly diverse characteristics. A good face modeling approach therefore needs to be carefully designed to adapt flexibly to these challenges. Recently, deep learning has gained significant attention as an emerging research topic in both higher-level data representation and modeling the distribution of observations. Thanks to the non-linear structure of deep learning models and the strength of latent variables organized in hidden layers, they can efficiently capture variations and structures in complex data. Motivated by this, we present two novel approaches, Deep Appearance Models (DAM) and Robust Deep Appearance Models (RDAM), to accurately capture both the shape and the texture of face images under large variations. In DAM, three crucial components represented in hierarchical layers are modeled using Deep Boltzmann Machines (DBM) to robustly capture the variations of facial shapes and appearances. DAM has shown its potential in inferring representations for new face images under various challenging conditions. An improved version, Robust DAM (RDAM), is also introduced to better handle occluded face areas and therefore produce more plausible reconstructions. These approaches are evaluated in various applications to demonstrate their robustness and capabilities, e.g. facial super-resolution reconstruction, facial off-angle reconstruction, facial occlusion removal, and age estimation, using the challenging Labeled Face Parts in the Wild (LFPW), Helen, and FG-NET face databases. Compared to classical and other deep learning-based approaches, the proposed DAM and RDAM achieve competitive results in these applications, demonstrating their advantages in handling occlusions, facial representation, and reconstruction.

    While DAM and RDAM mainly model single facial images, the second part of the thesis focuses on novel deep models, Temporal Restricted Boltzmann Machines (TRBM) and the tractable Temporal Non-volume Preserving (TNVP) approach, for modeling face sequences. By exploiting the additional temporal relationships present in sequence data, the proposed models are well suited to predicting the future of a sequence from its past. In the applications of face age progression, age regression, and age-invariant face recognition, these models have shown their potential not only in efficiently capturing non-linear age-related variations but also in producing smooth syntheses of age progression across faces. Moreover, the structure of TNVP can be transformed into a deep convolutional network while keeping the advantages of probabilistic models with tractable log-likelihood density estimation. The proposed approach is evaluated on synthesizing age-progressed faces and on cross-age face verification, and it consistently achieves state-of-the-art results on various face aging databases: FG-NET, MORPH, our collected large-scale aging database named AginG Faces in the Wild (AGFW), and the Cross-Age Celebrity Dataset (CACD). A large-scale face verification on MegaFace Challenge 1 is also performed to further demonstrate the advantages of our approach.
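    The abstract does not state how TNVP obtains its tractable density. A minimal sketch, assuming it follows the standard change-of-variables identity used by invertible non-volume-preserving mappings, with an assumed invertible map $f$ from a face $x$ to a latent code $z = f(x)$ under a simple prior $p_Z$, is

    $$\log p_X(x) = \log p_Z\!\left(f(x)\right) + \log\left|\det \frac{\partial f(x)}{\partial x}\right|,$$

    so the exact log-likelihood can be evaluated whenever the Jacobian determinant of $f$ is cheap to compute, as with triangular-Jacobian coupling layers.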