
    Postreconstruction filtering of 3D PET images by using weighted higher-order singular value decomposition

    Additional file 1. Original 3D PET image data used in this work to generate the results.

    Adaptive Image Denoising by Targeted Databases

    We propose a data-dependent denoising procedure to restore noisy images. Unlike existing denoising algorithms, which search for patches from either the noisy image or a generic database, the new algorithm finds patches from a database that contains only relevant patches. We formulate the denoising problem as an optimal filter design problem and make two contributions. First, we determine the basis function of the denoising filter by solving a group sparsity minimization problem. The optimization formulation generalizes existing denoising algorithms and offers a systematic analysis of the performance. Improvement methods are proposed to enhance the patch search process. Second, we determine the spectral coefficients of the denoising filter by considering a localized Bayesian prior. The localized prior leverages the similarity of the targeted database, alleviates the intensive Bayesian computation, and links the new method to the classical linear minimum mean squared error estimation. We demonstrate applications of the proposed method in a variety of scenarios, including text images, multiview images and face images. Experimental results show the superiority of the new algorithm over existing methods. Comment: 15 pages, 13 figures, 2 tables, journal
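
    A minimal sketch of the kind of filter described above, not the authors' exact algorithm: the k most similar patches from a targeted database supply a PCA basis (standing in for the group-sparsity-derived basis), and the transform coefficients are shrunk with a Wiener-style rule, which is the LMMSE connection mentioned in the abstract. The function name, parameters and toy data are illustrative assumptions.

    # Hedged sketch: targeted-database patch denoising with LMMSE-style shrinkage.
    import numpy as np

    def denoise_patch(noisy_patch, database, k=40, sigma=0.5):
        """noisy_patch: (d,) vector; database: (N, d) clean reference patches."""
        # 1. Targeted search: keep the k database patches closest to the noisy patch.
        dists = np.sum((database - noisy_patch) ** 2, axis=1)
        group = database[np.argsort(dists)[:k]]                  # (k, d)

        # 2. Basis from the group: eigenvectors of its covariance matrix.
        mean = group.mean(axis=0)
        eigvals, U = np.linalg.eigh(np.cov(group - mean, rowvar=False))

        # 3. LMMSE-style shrinkage: scale each coefficient by s / (s + sigma^2),
        #    where s is the prior variance estimated from the targeted group.
        coeffs = U.T @ (noisy_patch - mean)
        s = np.maximum(eigvals, 0)
        return U @ (s / (s + sigma ** 2) * coeffs) + mean

    # Toy usage: 8x8 patches flattened to length-64 vectors.
    rng = np.random.default_rng(0)
    database = rng.normal(size=(500, 64))
    noisy = database[0] + rng.normal(scale=0.5, size=64)
    print(denoise_patch(noisy, database).shape)                  # (64,)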

    A Sparsity-Based InSAR Phase Denoising Algorithm Using Nonlocal Wavelet Shrinkage

    An interferometric synthetic aperture radar (InSAR) phase denoising algorithm using the local sparsity of wavelet coefficients and the nonlocal similarity of grouped blocks was developed. From the Bayesian perspective, a double-l1-norm regularization model that enforces both local and nonlocal sparsity constraints was used. By exploiting the nonlocal similarity between grouped blocks in the wavelet shrinkage, the proposed algorithm effectively filtered the phase noise. Applying the method to simulated and acquired InSAR data, we obtained satisfactory results. In comparison, the algorithm outperformed several widely used InSAR phase denoising approaches in terms of the number of residues, root-mean-square error and other edge-preservation indexes.
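
    A minimal sketch of the local-sparsity half of such an approach (the paper's nonlocal block grouping is omitted): the complex phasor exp(j*phi) is decomposed with PyWavelets, the detail coefficients are soft-thresholded, and the denoised phase is recovered as the angle of the filtered phasor. The wavelet choice, threshold and decomposition level are assumptions.

    # Hedged sketch: wavelet soft-thresholding of an InSAR interferogram phase.
    # Requires PyWavelets (pip install PyWavelets).
    import numpy as np
    import pywt

    def denoise_phase(phi, wavelet="db4", level=3, thresh=0.1):
        """phi: 2-D wrapped phase in radians; returns a denoised phase image."""
        phasor = np.exp(1j * phi)      # filter the phasor to avoid wrapping issues

        def shrink(img):
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            # Soft-threshold detail sub-bands only; keep the approximation band.
            coeffs = [coeffs[0]] + [
                tuple(pywt.threshold(c, thresh, mode="soft") for c in band)
                for band in coeffs[1:]
            ]
            return pywt.waverec2(coeffs, wavelet)[: phi.shape[0], : phi.shape[1]]

        return np.angle(shrink(phasor.real) + 1j * shrink(phasor.imag))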

    NEW ALGORITHMS FOR COMPRESSED SENSING OF MRI: WTWTS, DWTS, WDWTS

    Magnetic resonance imaging (MRI) is one of the most accurate imaging techniques and can detect several diseases where other imaging modalities fail. However, MRI data take a long time to acquire, and remaining still during the scan is a painstaking experience for patients. It is also hard on the clinician, because images that are not captured correctly can lead to a wrong diagnosis and put patients' lives in danger. Since long scanning time is one of the most serious drawbacks of the MRI modality, reducing acquisition time is a crucial challenge for many imaging techniques. Compressed Sensing (CS) theory is an appealing framework to address this issue, since it provides theoretical guarantees on the reconstruction of sparse signals from projections onto a low-dimensional linear subspace. Further enhancements have extended the CS framework by performing Variable Density Sampling (VDS) or using the wavelet domain as the sparsity basis. Recent work in this direction considers parent-child relations across wavelet levels. This paper extends that approach by using the entire wavelet tree structure as an argument for coefficient correlation and by considering the directionality of wavelet coefficients using Hybrid Directional Wavelets (HDW). Incorporating coefficient thresholding in both the wavelet tree structure and the directional wavelet tree structure, the experiments show higher Signal-to-Noise Ratio (SNR), higher Peak Signal-to-Noise Ratio (PSNR) and lower Mean Square Error (MSE) for the CS-based image reconstruction. Exploiting the sparsity of the wavelet tree with the above-mentioned techniques further reduces the data needed for reconstruction while improving the reconstruction result. These techniques are applied to a variety of images, including both MRI and non-MRI data, and the results show their efficacy.
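
    A hedged sketch of the generic CS-MRI recovery loop that such extensions build on, using plain per-coefficient wavelet thresholding rather than the wavelet-tree or directional variants described above: an ISTA-style iteration alternates a k-space data-consistency step with wavelet soft-thresholding. The parameters, the variable-density mask and the real-valued-image assumption are illustrative.

    # Hedged sketch: ISTA-style CS-MRI reconstruction with a wavelet sparsity prior.
    # Requires PyWavelets. The underlying image is assumed real-valued for simplicity.
    import numpy as np
    import pywt

    def ista_csmri(kspace, mask, lam=0.02, n_iter=50, wavelet="db4", level=3):
        """kspace: undersampled k-space (zero off-mask); mask: boolean sampling mask."""
        x = np.real(np.fft.ifft2(kspace, norm="ortho"))      # zero-filled starting image
        for _ in range(n_iter):
            # Data-consistency gradient step (orthonormal FFT => unit step size).
            resid = mask * (np.fft.fft2(x, norm="ortho") - kspace)
            x = np.real(x - np.fft.ifft2(resid, norm="ortho"))
            # Sparsity proximal step: soft-threshold the wavelet detail coefficients.
            coeffs = pywt.wavedec2(x, wavelet, level=level)
            coeffs = [coeffs[0]] + [
                tuple(pywt.threshold(c, lam, mode="soft") for c in band)
                for band in coeffs[1:]
            ]
            x = pywt.waverec2(coeffs, wavelet)[: mask.shape[0], : mask.shape[1]]
        return x

    # Toy variable-density mask: denser sampling near the k-space centre.
    rng = np.random.default_rng(0)
    ny, nx = 128, 128
    yy, xx = np.mgrid[:ny, :nx]
    prob = np.exp(-(((yy - ny / 2) ** 2 + (xx - nx / 2) ** 2) / (2 * 20.0 ** 2)))
    mask = np.fft.ifftshift(rng.random((ny, nx)) < np.maximum(prob, 0.05))

    # Toy phantom, simulated undersampled acquisition, and reconstruction.
    image = np.zeros((ny, nx)); image[32:96, 32:96] = 1.0
    kspace = mask * np.fft.fft2(image, norm="ortho")
    recon = ista_csmri(kspace, mask)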

    Sparse Representation-Based Framework for Preprocessing Brain MRI

    This thesis addresses the use of sparse representations, specifically Dictionary Learning and Sparse Coding, for preprocessing brain MRI, so that the processed image retains the fine details of the original, in order to improve the segmentation of brain structures and to assess whether there is any relationship between alterations in brain structures and the behavior of young offenders. Denoising an MRI while keeping fine details is a difficult task; however, the proposed method, based on sparse representations, NLM and SVD, can filter noise while preventing blurring, artifacts and residual noise. Segmenting an MRI is also non-trivial, because the boundaries between regions in these images are often neither clear nor well defined, due to the problems that affect MRI. The proposed method, starting from both the label matrix of the segmented MRI and the original image, yields a new, improved label matrix with better-defined boundaries between regions. Doctorado: Doctor en Ingeniería de Sistemas y Computación (Doctorate in Systems and Computing Engineering).
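
    A minimal sketch of the sparse-coding building block on which such a pipeline rests (dictionary learning itself, NLM and the SVD stage are not shown): Orthogonal Matching Pursuit approximates each vectorised MRI patch with a few dictionary atoms, and the sparse reconstruction acts as the denoised patch. The random dictionary and parameters below are placeholders, not the thesis' trained dictionary.

    # Hedged sketch: Orthogonal Matching Pursuit for patch-wise sparse coding.
    import numpy as np

    def omp(D, y, n_nonzero=5):
        """D: (d, K) dictionary with unit-norm columns; y: (d,) patch; returns a sparse code."""
        residual = y.copy()
        support = []
        x = np.zeros(D.shape[1])
        for _ in range(n_nonzero):
            # Pick the atom most correlated with the current residual.
            support.append(int(np.argmax(np.abs(D.T @ residual))))
            # Re-fit all selected atoms jointly by least squares.
            coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coeffs
        x[support] = coeffs
        return x

    # Toy usage: denoise one 8x8 patch with a random overcomplete dictionary.
    rng = np.random.default_rng(0)
    D = rng.normal(size=(64, 256))
    D /= np.linalg.norm(D, axis=0)
    patch = rng.normal(size=64)
    denoised_patch = D @ omp(D, patch, n_nonzero=8)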

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday, August 27th, to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building within walking distance of both hotels and town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low dimensional subspaces; Beyond linear and convex inverse problem; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Tensor-Train decomposition for image classification problems

    In recent years a great deal of effort has been put into the development of new techniques for automatic object classification, driven in part by applications such as medical imaging and driverless cars. To this end, several mathematical models have been developed, from logistic regression to neural networks. A crucial aspect of these so-called classification algorithms is the use of algebraic tools to represent and approximate the input data. In this thesis, we examine two different models for image classification based on a particular tensor decomposition named the Tensor-Train (TT) decomposition. The use of tensor approaches preserves the multidimensional structure of the data and the neighboring relations among pixels. Furthermore, unlike other tensor decompositions, the Tensor-Train does not suffer from the curse of dimensionality, making it an extremely powerful strategy when dealing with high-dimensional data. It also allows data compression when combined with truncation strategies that reduce memory requirements without spoiling classification performance. The first model we propose is based on a direct decomposition of the database by means of the TT decomposition, which yields basis vectors used to classify a new object. The second model is a tensor dictionary learning model, also based on the TT decomposition, in which the terms of the decomposition are estimated using a proximal alternating linearized minimization algorithm with a spectral stepsize.
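
    A brief sketch of the underlying factorisation (the standard TT-SVD algorithm, not the classification or dictionary learning models of the thesis): a d-way array is turned into a train of 3-way cores by successive reshapes and truncated SVDs, with the truncation rank controlling the memory/accuracy trade-off mentioned above. The rank choice and toy data are arbitrary.

    # Hedged sketch: TT-SVD with a fixed maximum rank per unfolding.
    import numpy as np

    def tt_svd(tensor, max_rank=10):
        """Return TT cores G_1..G_d, where G_k has shape (r_{k-1}, n_k, r_k)."""
        dims = tensor.shape
        cores, r_prev = [], 1
        mat = tensor.reshape(r_prev * dims[0], -1)
        for k in range(len(dims) - 1):
            U, S, Vt = np.linalg.svd(mat, full_matrices=False)
            r = min(max_rank, len(S))                      # rank truncation
            cores.append(U[:, :r].reshape(r_prev, dims[k], r))
            mat = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
            r_prev = r
        cores.append(mat.reshape(r_prev, dims[-1], 1))
        return cores

    # Toy check: contract the cores back together and report the relative error
    # introduced by the rank truncation.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 7, 8, 9))
    cores = tt_svd(X, max_rank=5)
    rec = cores[0]
    for G in cores[1:]:
        rec = np.tensordot(rec, G, axes=([-1], [0]))
    rec = rec.reshape(X.shape)
    print(np.linalg.norm(rec - X) / np.linalg.norm(X))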

    Side information in robust principal component analysis: algorithms and applications

    Dimensionality reduction and noise removal are fundamental machine learning tasks that are vital to artificial intelligence applications. Principal component analysis has long been utilised in computer vision to achieve the above-mentioned goals. Recently, it has been made robust to outliers in robust principal component analysis. Both convex and non-convex programs have been developed to solve this new formulation, some with exact convergence guarantees. Its effectiveness can be witnessed in image and video applications ranging from image denoising and alignment to background separation and face recognition. However, robust principal component analysis is by no means perfect. This dissertation identifies its limitations, explores various promising options for improvement and validates the proposed algorithms on both synthetic and real-world datasets. Common algorithms approximate the NP-hard formulation of robust principal component analysis with convex envelopes. Though exact recovery can be guaranteed under certain assumptions, the relaxation margin is too big to be squandered. In this work, we propose to apply gradient descent on the Burer-Monteiro bilinear matrix factorisation to squeeze this margin given available subspaces. This non-convex approach improves upon conventional convex approaches in terms of both accuracy and speed. On the other hand, there is often accompanying side information when an observation is made. The ability to assimilate such auxiliary sources of data can ameliorate the recovery process. In this work, we investigate in depth such possibilities for incorporating side information when restoring the true underlying low-rank component from gross sparse noise. Lastly, tensors, also known as multi-dimensional arrays, represent real-world data more naturally than matrices. It is thus advantageous to adapt robust principal component analysis to tensors. Since there is no exact equivalence between tensor rank and matrix rank, we employ the notions of Tucker rank and CP rank as our optimisation objectives. Overall, this dissertation carefully defines the problems arising in real-world computer vision challenges, extensively and impartially evaluates the state-of-the-art approaches, proposes novel solutions and provides sufficient validation on both simulated data and popular real-world datasets for various mainstream computer vision tasks.
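
    For reference, a minimal sketch of the convex baseline that the dissertation improves on (a regularised principal component pursuit solved by simple block-coordinate minimisation), not the proposed Burer-Monteiro or side-information algorithms; the weights mu and lam are common defaults assumed here, and the toy data are synthetic.

    # Hedged sketch: convex RPCA baseline, M ~ L (low rank) + S (sparse).
    import numpy as np

    def rpca_baseline(M, mu=None, lam=None, n_iter=100):
        m, n = M.shape
        mu = mu if mu is not None else 0.1 * np.linalg.norm(M, 2)   # nuclear-norm weight
        lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))  # standard PCP weight
        L = np.zeros_like(M)
        S = np.zeros_like(M)
        for _ in range(n_iter):
            # Low-rank update: singular-value thresholding of the residual M - S.
            U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
            L = U @ np.diag(np.maximum(sig - mu, 0)) @ Vt
            # Sparse update: elementwise soft-thresholding of the residual M - L.
            R = M - L
            S = np.sign(R) * np.maximum(np.abs(R) - mu * lam, 0)
        return L, S

    # Toy usage: a rank-5 "background" corrupted by gross sparse noise.
    rng = np.random.default_rng(0)
    L0 = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 80))
    S0 = (rng.random((100, 80)) < 0.05) * rng.normal(scale=10, size=(100, 80))
    L_hat, S_hat = rpca_baseline(L0 + S0)
    print(np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))   # relative recovery error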