7 research outputs found

    Algorithmes rapides de restauration de signaux avec prise en compte des discontinuités

    This communication presents algorithms for the restoration of piecewise-stationary signals. An algorithmic approach based on dynamic programming yields fast, optimal results. Two algorithms are presented. The first is simple to implement; the second improves on it by avoiding the enumeration of most states that do not contribute to the optimal solution. Both algorithms provide the exact solution of a non-convex mixed-variable optimization problem, with respective computational complexities of O(n²) and O(n).
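The dynamic-programming approach above can be sketched for the simplest setting, piecewise-constant restoration under a least-squares data term with a fixed penalty per discontinuity. This is an illustrative O(n²) implementation corresponding to the first, simpler algorithm (the function name and the `lam` penalty parameter are ours, not the authors'); the O(n) variant would additionally prune states that cannot contribute to the optimum.

```python
import numpy as np

def piecewise_constant_restore(y, lam):
    """Exact piecewise-constant denoising by dynamic programming.

    Minimizes the sum of squared residuals plus `lam` per discontinuity,
    in O(n^2) time using prefix sums.
    """
    n = len(y)
    s1 = np.concatenate(([0.0], np.cumsum(y)))             # prefix sums of y
    s2 = np.concatenate(([0.0], np.cumsum(np.square(y))))  # prefix sums of y^2

    def seg_cost(i, j):
        # cost of approximating y[i:j] by its mean
        m = j - i
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / m

    best = np.full(n + 1, np.inf)
    best[0] = -lam                      # cancels the penalty of the first segment
    last = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + lam + seg_cost(i, j)
            if c < best[j]:
                best[j], last[j] = c, i

    # backtrack: fill each optimal segment with its mean
    x = np.empty(n)
    j = n
    while j > 0:
        i = last[j]
        x[i:j] = (s1[j] - s1[i]) / (j - i)
        j = i
    return x
```

A large `lam` merges everything into one segment; a small `lam` keeps every genuine jump.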

    Superresolution of Hyperspectral Image Using Advanced Nonlocal Means Filter and Iterative Back Projection

    We introduce an efficient superresolution algorithm for hyperspectral images based on an advanced nonlocal means (NLM) filter and iterative back projection. The nonlocal means method estimates the pixel to be interpolated as a weighted average of all pixels in the image, with unrelated neighborhoods automatically suppressed by near-zero weights. However, spatial distance is also important when reconstructing a missing pixel. We therefore propose an advanced NLM (ANLM) filter that accounts for both neighborhood similarity and spatial distance. Whereas the search region of conventional NLM is the whole image, the proposed ANLM uses a limited search window to reduce complexity. Iterative back projection (IBP) is a well-known image restoration method: in the superresolution setting, it iteratively recovers the high-resolution image from a given blurred, noisy low-resolution image by minimizing the reconstruction error. Because the back-projected reconstruction error is isotropic, however, conventional IBP suffers from jaggy and ringing artifacts; introducing the ANLM filter improves the visual quality.
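A minimal single-pixel sketch of the weighting idea, assuming Gaussian kernels on both the patch-similarity and spatial-distance terms (the paper's exact kernels and parameter names may differ); as in the abstract, the search is restricted to a small window rather than the whole image:

```python
import numpy as np

def anlm_pixel(img, r, c, patch=3, search=5, h=10.0, sigma_s=3.0):
    """Estimate pixel (r, c) by an 'advanced' NLM weighted average.

    The weight of each candidate pixel combines patch similarity (as in
    classical NLM) with its spatial distance to (r, c). Parameters h
    (similarity bandwidth) and sigma_s (spatial bandwidth) are illustrative.
    """
    pr = patch // 2
    pad = np.pad(img, pr, mode='reflect')
    ref = pad[r:r + patch, c:c + patch]      # patch centred at (r, c)

    num = 0.0
    den = 0.0
    H, W = img.shape
    for i in range(max(0, r - search), min(H, r + search + 1)):
        for j in range(max(0, c - search), min(W, c + search + 1)):
            cand = pad[i:i + patch, j:j + patch]
            d_patch = np.mean((ref - cand) ** 2)    # neighborhood similarity
            d_space = (i - r) ** 2 + (j - c) ** 2   # spatial distance
            w = np.exp(-d_patch / h ** 2) * np.exp(-d_space / (2 * sigma_s ** 2))
            num += w * img[i, j]
            den += w
    return num / den
```

On a constant image every weight is equal, so the estimate reproduces the constant; near edges the patch term suppresses pixels from the other side.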

    On the equivalence of soft wavelet shrinkage, total variation diffusion, total variation regularization, and SIDEs

    Soft wavelet shrinkage, total variation (TV) diffusion, total variation regularization, and a dynamical system called SIDEs are four useful techniques for discontinuity-preserving denoising of signals and images. In this paper we investigate under which circumstances these methods are equivalent in the 1-D case. First we prove that Haar wavelet shrinkage on a single scale is equivalent to a single step of space-discrete TV diffusion or regularization of two-pixel pairs. In the translationally invariant case we show that applying cycle spinning to Haar wavelet shrinkage on a single scale can be regarded as an absolutely stable explicit discretization of TV diffusion. We prove that space-discrete TV diffusion and TV regularization are identical, and that they are also equivalent to the SIDEs system when a specific force function is chosen. Afterwards we show that wavelet shrinkage on multiple scales can be regarded as a single-step diffusion filtering or regularization of the Laplacian pyramid of the signal. We analyse possibilities to avoid Gibbs-like artifacts for multiscale Haar wavelet shrinkage by scaling the thresholds. Finally we present experiments with hybrid methods that combine the advantages of wavelets and PDE/variational approaches. These methods are based on iterated shift-invariant wavelet shrinkage at multiple scales with scaled thresholds.
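The single-scale, two-pixel Haar setting analysed first in the paper can be sketched directly: soft-thresholding the detail coefficient of a pixel pair moves both pixels towards their common mean by a bounded amount, which is exactly one explicit step of space-discrete TV diffusion on that pair. Normalization conventions vary; this sketch uses the orthonormal 1/√2 convention.

```python
import numpy as np

def soft_shrink(x, t):
    """Soft thresholding: shrink the magnitude by t, preserving the sign."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_shrink_pairs(f, theta):
    """Single-scale Haar wavelet shrinkage on non-overlapping pixel pairs.

    The pair average (approximation coefficient) is kept; the detail
    coefficient is soft-thresholded with threshold theta, then the pair
    is reconstructed.
    """
    f = np.asarray(f, dtype=float)
    out = f.copy()
    for i in range(0, len(f) - 1, 2):
        a, b = f[i], f[i + 1]
        approx = (a + b) / np.sqrt(2.0)   # orthonormal Haar approximation
        detail = (a - b) / np.sqrt(2.0)   # orthonormal Haar detail
        detail = soft_shrink(detail, theta)
        out[i]     = (approx + detail) / np.sqrt(2.0)
        out[i + 1] = (approx - detail) / np.sqrt(2.0)
    return out
```

For the pair (0, 4), a large threshold collapses both pixels to the mean 2, while theta = √2 moves each pixel exactly one unit towards the mean, i.e. a partial TV-diffusion step.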

    Model Based Principal Component Analysis with Application to Functional Magnetic Resonance Imaging.

    Functional Magnetic Resonance Imaging (fMRI) has allowed a better understanding of human brain organization and function by making it possible to record either autonomous or stimulus-induced brain activity. After appropriate preprocessing, fMRI produces a large spatio-temporal data set, which requires sophisticated signal processing. The aim of the signal processing is usually to produce spatial maps of statistics that capture the effects of interest, e.g., brain activation, time delay between stimulation and activation, or connectivity between brain regions. Two broad signal processing approaches have been pursued: univoxel methods and multivoxel methods. This proposal focuses on multivoxel methods, reviews Principal Component Analysis (PCA) and other closely related methods, and describes their advantages and disadvantages in fMRI research. These existing multivoxel methods have in common that they are exploratory, i.e., they are not based on a statistical model. A crucial observation, central to this thesis, is that there is in fact an underlying model behind PCA, which we call noisy PCA (nPCA). In the main part of this thesis, we use nPCA to develop methods that solve three important problems in fMRI. 1) We introduce a novel nPCA-based spatio-temporal model that combines the standard univoxel regression model with nPCA and automatically recognizes the temporal smoothness of the fMRI data. Furthermore, unlike standard univoxel methods, it can handle non-stationary noise. 2) We introduce a novel sparse variable PCA (svPCA) method that automatically excludes whole voxel timeseries and yields sparse eigenimages. This is achieved by optimizing a novel nonlinear penalized likelihood function. An iterative estimation algorithm is proposed that makes use of geodesic descent methods.
3) We introduce a novel method based on Stein's Unbiased Risk Estimator (SURE) and Random Matrix Theory (RMT) to select the number of principal components for the increasingly important case where the number of observations is of the same order as the number of variables.
    Ph.D. thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/57638/2/mulfarss_1.pd
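As a rough illustration of the RMT side of component selection (a generic rule, not the thesis's combined SURE/RMT estimator), one can keep the sample-covariance eigenvalues that exceed the Marchenko-Pastur bulk edge for i.i.d. noise of known variance:

```python
import numpy as np

def n_components_mp(X, sigma2):
    """Select the number of principal components by a simple RMT rule.

    Keeps eigenvalues of the sample covariance that exceed the
    Marchenko-Pastur upper bulk edge sigma2 * (1 + sqrt(p/n))^2, which is
    where pure-noise eigenvalues concentrate when n and p are comparable.
    """
    n, p = X.shape                                     # observations x variables
    Xc = X - X.mean(axis=0)                            # centre the data
    evals = np.linalg.eigvalsh(Xc.T @ Xc / n)[::-1]    # eigenvalues, descending
    edge = sigma2 * (1.0 + np.sqrt(p / n)) ** 2        # MP bulk edge
    return int(np.sum(evals > edge))
```

On data with a few strong components buried in unit-variance noise, only the signal eigenvalues clear the edge.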

    Nonlocal smoothing and adaptive morphology for scalar- and matrix-valued images

    In this work we deal with two classic degradation processes in image analysis, namely noise contamination and incomplete data. Standard greyscale and colour photographs as well as matrix-valued images, e.g. diffusion-tensor magnetic resonance imaging, may be corrupted by Gaussian or impulse noise, and may suffer from missing data. In this thesis we develop novel reconstruction approaches to image smoothing and image completion that are applicable to both scalar- and matrix-valued images. For the image smoothing problem, we propose discrete variational methods consisting of nonlocal data and smoothness constraints that penalise general dissimilarity measures. We obtain edge-preserving filters by the joint use of such measures rich in texture content together with robust non-convex penalisers. For the image completion problem, we introduce adaptive, anisotropic morphological partial differential equations modelling the dilation and erosion processes. They adjust themselves to the local geometry to adaptively fill in missing data, complete broken directional structures and even enhance flow-like patterns in an anisotropic manner. The excellent reconstruction capabilities of the proposed techniques are tested on various synthetic and real-world data sets.
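The non-adaptive baseline of such morphological PDEs is the dilation equation u_t = |∇u|. One explicit Rouy-Tourin upwind step can be sketched as follows (the thesis's contribution is to make this process adaptive and anisotropic, which this isotropic sketch does not include):

```python
import numpy as np

def dilation_pde_step(u, tau=0.25):
    """One explicit upwind (Rouy-Tourin) step of the dilation PDE u_t = |grad u|.

    Grey values can only grow, and maxima propagate outwards; tau is the
    time-step size (kept small for stability).
    """
    up = np.pad(u, 1, mode='edge')
    # one-sided differences on a unit grid
    dxm = u - up[1:-1, :-2]    # backward difference in x
    dxp = up[1:-1, 2:] - u     # forward difference in x
    dym = u - up[:-2, 1:-1]    # backward difference in y
    dyp = up[2:, 1:-1] - u     # forward difference in y
    # upwind choice for dilation: only larger neighbours drive growth
    gx = np.maximum(np.maximum(dxp, -dxm), 0.0)
    gy = np.maximum(np.maximum(dyp, -dym), 0.0)
    return u + tau * np.sqrt(gx ** 2 + gy ** 2)
```

Applied to a single bright pixel, one step raises its four axial neighbours by tau while leaving the maximum itself unchanged, i.e. a discrete dilation by a small disc.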

    Super-resolução simultânea para sequência de imagens

    Doctoral thesis, Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia Elétrica. This thesis makes two contributions: a new class of simultaneous super-resolution (SR) algorithms and a new method for determining regularization coefficients. SR algorithms aim to produce images with a resolution higher than that provided by the acquisition device. Simultaneous super-resolution algorithms, in turn, produce an entire sequence of high-resolution images in a single process. The existing simultaneous SR algorithm \cite{Borman-1999} can produce images of higher quality than traditional methods \cite{Kang-2003} because it exploits the similarity between the high-resolution images. However, its computational cost is high and it is not robust to large motion errors. The first contribution of this thesis is an improvement of simultaneous SR methods. The expressions that exploit similarity between the high-resolution images were preserved and generalized, while some data-related expressions, which are sensitive to large motion errors and redundant for solving the problem, were removed. This improvement reduced the computational cost and allowed greater control over large motion errors, increasing the robustness of the method while maintaining the superior quality of the estimates. Determining the regularization coefficients is a necessary step in the SR algorithms studied in this thesis. Classical methods for determining the coefficients, which offer good estimation quality and high stability, have a high computational cost in the super-resolution problem; fast methods, on the other hand, either yield poor estimates or are unstable. The second contribution of this thesis is therefore a new method for determining the regularization coefficients. Based on Bayesian JMAP statistics and on L-curve based methods, it achieves good estimation quality and high stability at low computational cost.
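One of the classical L-curve based approaches that the proposed method builds on can be sketched for plain Tikhonov regularization (illustrative only; the thesis applies the idea to the super-resolution problem and combines it with Bayesian JMAP). The corner of the log-log curve of residual norm versus solution norm is located here by maximum Menger curvature:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

def l_curve_lambda(A, b, lambdas):
    """Pick a regularization coefficient by the classical L-curve heuristic.

    For each candidate lambda, record (log residual norm, log solution norm);
    the corner of this curve is taken as the point of maximum Menger
    curvature over consecutive point triples.
    """
    pts = []
    for lam in lambdas:
        x = tikhonov(A, b, lam)
        pts.append((np.log(np.linalg.norm(A @ x - b)),
                    np.log(np.linalg.norm(x))))
    pts = np.array(pts)

    def menger(p, q, r):
        # curvature of the circle through three points: 4*area / (product of sides)
        cross = abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1]))
        d = np.linalg.norm(q - p) * np.linalg.norm(r - q) * np.linalg.norm(r - p)
        return 2.0 * cross / d if d > 0 else 0.0

    curv = [menger(pts[i - 1], pts[i], pts[i + 1]) for i in range(1, len(pts) - 1)]
    return lambdas[1 + int(np.argmax(curv))]
```

The curvature maximum is searched over interior points only, so the selected coefficient always lies strictly between the smallest and largest candidates, balancing data fidelity against solution energy.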