8 research outputs found

    Sparse image representation with encryption

    Get PDF
    In this thesis we present an overview of sparse approximations of grey-level images. The sparse representations are realized by classic greedy selection strategies based on Matching Pursuit (MP). One such technique, termed Orthogonal Matching Pursuit (OMP), is shown to be suitable for producing sparse approximations of images when they are processed in small blocks. When the blocks are enlarged, the proposed Self Projected Matching Pursuit (SPMP) algorithm successfully renders results equivalent to those of OMP. A simple coding algorithm is then proposed to store these sparse approximations; under certain conditions, this is shown to be competitive with the JPEG2000 image compression standard. An application termed image folding, which partially secures the approximated images, is then proposed. This is extended to produce a self-contained folded image, containing all the information required to perform image recovery. Finally, a modified OMP selection technique is applied to produce sparse approximations of Red Green Blue (RGB) images. These RGB approximations are then folded with the self-contained approach.
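
    As a concrete illustration of the greedy selection strategy discussed above, the following is a minimal sketch of plain OMP on a single vectorized image block, assuming a column-normalized dictionary; the block-processing pipeline, the SPMP variant and the folding scheme are not reproduced. The dictionary and block here are random stand-ins.

```python
# Minimal OMP sketch: greedily select dictionary atoms for one image block.
import numpy as np

def omp(D, y, n_atoms):
    """Approximate y with n_atoms columns of D (columns assumed unit-norm)."""
    residual = y.copy()
    support = []
    for _ in range(n_atoms):
        # Pick the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        support.append(k)
        # Re-fit all selected atoms jointly (the "orthogonal" step).
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

# Hypothetical usage on a vectorized 8x8 block with a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)          # normalize atoms
y = rng.standard_normal(64)             # stand-in for an image block
x = omp(D, y, n_atoms=10)
print("approximation error:", np.linalg.norm(y - D @ x))
```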

    Sparse Methods for Learning Multiple Subspaces from Large-scale, Corrupted and Imbalanced Data

    Get PDF
    In many practical applications in machine learning, computer vision, data mining and information retrieval, one is confronted with datasets whose intrinsic dimension is much smaller than the dimension of the ambient space. This has given rise to the challenge of effectively learning multiple low-dimensional subspaces from such data. Multi-subspace learning methods based on sparse representation, such as sparse representation based classification (SRC) and sparse subspace clustering (SSC), have become very popular due to their conceptual simplicity and empirical success. However, there has been very limited theoretical explanation for the correctness of such approaches in the literature. Moreover, the applicability of existing algorithms to real-world datasets is limited by their high computational and memory complexity and by their sensitivity to data corruption and imbalanced data distributions. This thesis attempts to advance our theoretical understanding of sparse representation based multi-subspace learning methods, and to develop new algorithms for handling large-scale, corrupted and imbalanced data. The first contribution of this thesis is a theoretical analysis of the correctness of such methods. Through geometric and randomized analysis, we answer important theoretical questions such as the effects of subspace arrangement, data distribution, subspace dimension and data sampling density. The second contribution is the development of practical subspace clustering algorithms able to deal with large-scale, corrupted and imbalanced datasets. To deal with large-scale data, we study approaches based on active-support and divide-and-conquer ideas, and show that these approaches offer a good tradeoff between high accuracy and low running time. To deal with corrupted data, we construct a Markov chain whose stationary distribution can be used to separate inliers from outliers. Finally, we propose an efficient exemplar selection and subspace clustering method that outperforms traditional methods on imbalanced data.
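
    The sparse subspace clustering idea referenced above can be summarized in a few lines: each point is sparse-coded over the remaining points, and the coefficient magnitudes define an affinity for spectral clustering. The sketch below is the basic SSC recipe only, with an illustrative lasso weight; none of the thesis's scalable, corruption-robust or exemplar-based algorithms are reproduced.

```python
# Basic SSC sketch: sparse self-expression followed by spectral clustering.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc(X, n_clusters, alpha=0.01):
    """X: (n_features, n_points). Returns a cluster label per point."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        # Express point i sparsely in terms of all other points.
        others = np.delete(np.arange(n), i)
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[:, others], X[:, i])
        C[others, i] = lasso.coef_
    # Symmetrize the coefficients into an affinity matrix.
    W = np.abs(C) + np.abs(C).T
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)

# Hypothetical usage: points drawn from two 2-dimensional subspaces of R^10.
rng = np.random.default_rng(0)
X = np.hstack([rng.standard_normal((10, 2)) @ rng.standard_normal((2, 50)),
               rng.standard_normal((10, 2)) @ rng.standard_normal((2, 50))])
print(ssc(X, n_clusters=2))
```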

    Dictionaries for fast and informative dynamic MRI acquisition

    No full text
    Magnetic resonance (MR) imaging is an invaluable tool for medical research and diagnosis but suffers from inefficiencies. The speed of its acquisition mechanism, based on sequentially probing the interactions between nuclear spins and a changing magnetic field, is limited by atomic properties and scanner physics. Modern sampling techniques termed compressed sensing have nevertheless demonstrated that near-perfect reconstructions are possible from undersampled, accelerated acquisitions, showing promise for more efficient MR acquisition paradigms. At the same time, information extraction from MR images through image analysis implies a considerable dimensionality reduction, in which an image is processed to extract a few clinically useful parameters. This signals an inefficient handling of information in the separate treatment of acquisition and analysis, which could be tackled by joining these two essential stages of the imaging pipeline. In this thesis, we explore the use of adaptive sparse modelling for novel acquisition strategies for cardiac cine MR data. Conventional compressed sensing MR acquisition relies on fixed basis transforms for sparse modelling, which can only guarantee suboptimal sparse modelling. We introduce spatio-temporal dictionaries that optimally adapt sparse modelling by absorbing salient features of cardiac cine data, and demonstrate how they can outperform sampling methods based on fixed basis transforms. Additionally, we extend the introduced framework to handle parallel data acquisition. Given the flexibility of the formulation, we show how it can be combined with a labelling model that provides a segmentation of the image as a by-product of the reconstruction, hence performing joint reconstruction and analysis.
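
    To make the compressed-sensing reconstruction loop concrete, here is a schematic sketch that alternates sparse modelling of the image with k-space data consistency. The sparse_denoise callable is a hypothetical placeholder standing in for patch-wise sparse coding over a learned dictionary; the thesis's spatio-temporal dictionaries, parallel imaging extension and joint segmentation are beyond this sketch.

```python
# Schematic CS-MRI reconstruction: sparse modelling + k-space consistency.
import numpy as np

def cs_mri_reconstruct(kspace, mask, sparse_denoise, n_iters=10):
    """kspace: measured samples (zeros off the mask); mask: boolean array."""
    image = np.fft.ifft2(kspace)                 # zero-filled starting point
    for _ in range(n_iters):
        # 1) Sparse-model the current estimate (placeholder for patch-wise
        #    sparse coding over a learned dictionary).
        image = sparse_denoise(np.real(image))
        # 2) Data consistency: keep the measured k-space values verbatim.
        k = np.fft.fft2(image)
        k[mask] = kspace[mask]
        image = np.fft.ifft2(k)
    return np.real(image)

# Hypothetical usage with an identity "denoiser" as a stand-in.
rng = np.random.default_rng(0)
mask = rng.random((64, 64)) < 0.3                # 30% random sampling
truth = rng.standard_normal((64, 64))
kspace = np.fft.fft2(truth) * mask
rec = cs_mri_reconstruct(kspace, mask, sparse_denoise=lambda x: x)
```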

    Global Geometric Conditions on Sensing Matrices for the Success of L1 Minimization Algorithm

    Get PDF
    Compressed Sensing concerns a new class of linear data acquisition protocols that are more efficient than the classical Shannon sampling theorem when targeting signals with sparse structure. In this thesis, we study the stability of a Statistical Restricted Isometry Property and show how this property can be further relaxed while remaining sufficient for the Basis Pursuit algorithm to recover sparse signals. We then look at the dictionary extension of Compressed Sensing, where signals are sparse under a redundant dictionary and reconstruction is achieved by the ℓ1 synthesis method. By establishing a necessary and sufficient condition for the stability of ℓ1 synthesis, we are able to predict this algorithm's performance under different dictionaries. Finally, we construct a class of deterministic sensing matrices for the Dirac-Fourier joint dictionary.
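
    As a worked illustration of the ℓ1 synthesis method mentioned above, the sketch below recovers coefficients z minimizing ||z||_1 subject to A D z = y via the standard linear-programming split z = u - v with u, v ≥ 0. The dimensions and the random A and D are illustrative assumptions, not the thesis's constructions.

```python
# L1 synthesis sketch: min ||z||_1 s.t. A @ D @ z = y, posed as an LP.
import numpy as np
from scipy.optimize import linprog

def l1_synthesis(A, D, y):
    AD = A @ D
    m, k = AD.shape
    c = np.ones(2 * k)                  # minimize sum(u) + sum(v) = ||z||_1
    A_eq = np.hstack([AD, -AD])         # enforce AD @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    u, v = res.x[:k], res.x[k:]
    return u - v

# Hypothetical usage: a 5-sparse coefficient vector under a redundant
# random dictionary, measured with a random Gaussian sensing matrix.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))              # redundant dictionary
A = rng.standard_normal((40, 64)) / np.sqrt(40)
z_true = np.zeros(128)
z_true[rng.choice(128, 5, replace=False)] = 1.0
y = A @ D @ z_true
z_hat = l1_synthesis(A, D, y)
print("recovery error:", np.linalg.norm(z_hat - z_true))
```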

    Cloud removal from optical remote sensing images

    Full text link
    Optical remote sensing images used for Earth surface observation are constantly contaminated by cloud cover. Clouds dynamically affect the applications of optical data and increase the difficulty of image analysis. Cloud is therefore considered one of the sources of noise in optical image data, and its detection and removal need to be performed as a pre-processing step in most remote sensing image processing applications. This thesis investigates current cloud detection and removal algorithms and develops three new cloud removal methods that improve the accuracy of the results. The first contribution is a thin cloud removal method for pixel correction based on signal transmission principles and spectral mixture analysis (ST-SMA). This method considers not only the additive reflectance from the clouds but also the energy absorption when solar radiation passes through them. Data correction is achieved by subtracting the product of the cloud endmember signature and the cloud abundance and rescaling according to the cloud thickness. The proposed method requires no meteorological data and does not rely on reference images. The experimental results indicate that the proposed approach effectively removes thin clouds in different scenarios. In the second study, an effective cloud removal method is proposed that takes advantage of the noise-adjusted principal components transform (CR-NAPCT). It is found that, when spatial correlation is considered, the signal-to-noise ratio (S/N) of cloudy data is higher than that of data without cloud contamination, and that the cloud contribution is captured in the first NAPCT component (NAPC1). An inverse transformation with a modified first component is then applied to generate the cloud-free image. The effectiveness of the proposed method is assessed through experiments on simulated and real data, comparing the quantitative and qualitative performance of the proposed approach. The third study of this thesis deals with both cloud and cloud shadow problems with the aid of an auxiliary image acquired under clear-sky conditions. A new cloud removal approach called multitemporal dictionary learning (MDL) is proposed. Dictionaries of the cloudy areas (target data) and the cloud-free areas (reference data) are learned separately in the spectral domain, using an online dictionary learning method. The removal is conducted by combining the coefficients from the reference image with the dictionary learned from the target image. This method is able to recover data contaminated by thin and thick clouds or cloud shadows. The experimental results show that the MDL method is effective from both quantitative and qualitative viewpoints.
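
    A minimal sketch of the pixel-correction step behind ST-SMA, under the standard linear mixing assumption x = a c + (1 - a) s, where c is the cloud endmember signature and a the cloud abundance: subtracting a c and rescaling by 1/(1 - a) recovers the surface spectrum s. The exact rescaling used in the thesis may differ; the abundance, endmember and band values below are hypothetical.

```python
# Thin-cloud pixel correction under a linear mixing model (illustrative).
import numpy as np

def remove_thin_cloud(x, cloud_sig, abundance, eps=1e-6):
    """x: (bands,) observed pixel; abundance: cloud fraction in [0, 1)."""
    a = np.clip(abundance, 0.0, 1.0 - eps)
    # Subtract the cloud contribution, then rescale the remainder.
    return (x - a * cloud_sig) / (1.0 - a)

# Hypothetical usage on a 4-band pixel with 30% cloud abundance.
cloud_sig = np.array([0.9, 0.9, 0.9, 0.85])
surface   = np.array([0.1, 0.2, 0.3, 0.4])
observed  = 0.3 * cloud_sig + 0.7 * surface
print(remove_thin_cloud(observed, cloud_sig, 0.3))  # recovers ~surface
```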

    Overcomplete Dictionary and Deep Learning Approaches to Image and Video Analysis

    Get PDF
    Extracting useful information while discarding nuisances (e.g. noise, occlusion, lighting) is an essential and challenging data analysis step for many computer vision tasks such as facial recognition, scene reconstruction, event detection and image restoration. The data analysis in those tasks can be formulated as a form of matrix decomposition or factorization that separates useful information and/or fills in missing information based on the sparsity and/or low-rankness of the data. There is an increasing number of non-convex approaches, including conventional matrix-norm optimization and emerging deep learning models. However, it is hard to optimize the ideal l0-norm or to learn the deep models directly and efficiently. Motivated by these challenges, this thesis proposes two sets of approaches: conventional and deep learning based. For the conventional approaches, this thesis proposes a novel online non-convex lp-norm based Robust PCA (OLP-RPCA) approach for matrix decomposition, where 0 < p < 1. OLP-RPCA is developed from the offline version LP-RPCA. A robust face recognition framework is also developed from Robust PCA and sparse coding approaches. More importantly, the OLP-RPCA method can achieve real-time performance on large-scale data without parallelizing or implementing on a graphics processing unit. We show, mathematically and empirically, that our OLP-RPCA algorithm is linear in both the sample dimension and the number of samples. The proposed OLP-RPCA and LP-RPCA approaches are evaluated in various applications, including Gaussian/non-Gaussian image denoising, face modelling, real-time background subtraction and video inpainting, and compared against numerous state-of-the-art methods to demonstrate their robustness. In addition, this thesis proposes a novel Robust lp-norm Singular Value Decomposition (RP-SVD) method for analyzing two-way functional data, formulated as an lp-norm based penalized loss minimization problem. The RP-SVD method is evaluated in four applications: noise and outlier removal, estimation of missing values, structure-from-motion reconstruction and facial image reconstruction. For the deep learning based approaches, this thesis explores matrix decomposition via Robust Deep Boltzmann Machines (RDBM), an alternative form of Robust Boltzmann Machines, aimed particularly at dealing with noise and occlusion in face-related applications. This thesis proposes an extension to texture modelling in the Deep Appearance Models (DAMs) by using RDBM to enhance its robustness against noise and occlusion. The extended model can cope with occlusion and extreme poses when modelling human faces in 2D image reconstruction. This thesis also introduces new fitting algorithms with occlusion awareness through the mask obtained from the RDBM reconstruction. The proposed approach is evaluated in various applications using challenging face datasets, i.e. Labeled Face Parts in the Wild (LFPW), Helen, EURECOM and AR databases, to demonstrate its robustness and capabilities.
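
    For orientation, the sketch below shows the generic low-rank-plus-sparse decomposition M ≈ L + S that Robust PCA methods target, via simple alternation of a truncated SVD and soft-thresholding. This is a textbook-style illustration, not the thesis's lp-norm OLP-RPCA algorithm; the rank r and threshold tau are hypothetical choices.

```python
# Generic low-rank + sparse decomposition by alternating projections.
import numpy as np

def lowrank_plus_sparse(M, r=2, tau=0.5, n_iters=20):
    S = np.zeros_like(M)
    for _ in range(n_iters):
        # Low-rank step: best rank-r approximation of M - S.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r]
        # Sparse step: soft-threshold the residual M - L.
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)
    return L, S

# Hypothetical usage: a rank-2 matrix corrupted by sparse outliers.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
M[rng.random(M.shape) < 0.05] += 10.0            # sparse corruption
L, S = lowrank_plus_sparse(M)
print("nonzeros in S:", np.count_nonzero(S))
```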