
    MDL Denoising Revisited

    We refine and extend an earlier MDL denoising criterion for wavelet-based denoising. We start by showing that the denoising problem can be reformulated as a clustering problem, where the goal is to obtain separate clusters for informative and for non-informative wavelet coefficients. This suggests two refinements: adding a code-length for the model index, and extending the model to account for subband-dependent coefficient distributions. A third refinement is the derivation of a soft thresholding rule inspired by predictive universal coding with weighted mixtures. We propose a practical method incorporating all three refinements, which is shown to achieve good performance and robustness in denoising both artificial and natural signals.

    Comment: Submitted to IEEE Transactions on Information Theory, June 200
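
    As a concrete illustration of the two-cluster view, the sketch below selects a hard threshold by minimising a simplified two-part code length over splits of the wavelet coefficients into an informative and a noise cluster, then soft-thresholds. This is a minimal sketch in the spirit of the criterion being refined, not the paper's method; the Haar transform, the exact code-length formula, and all function names are illustrative assumptions.

```python
import numpy as np

def haar_details(x):
    """Orthonormal Haar DWT; returns the detail coefficients of every
    level (len(x) must be a power of two)."""
    x = np.asarray(x, dtype=float).copy()
    n, details = len(x), []
    while n > 1:
        a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)              # scaling coeffs
        details.append((x[0:n:2] - x[1:n:2]) / np.sqrt(2.0))  # wavelet coeffs
        x[:n // 2] = a
        n //= 2
    return np.concatenate(details)

def mdl_threshold(c):
    """Pick a hard threshold by minimising a simplified two-part code
    length: a Gaussian code for the k retained ('informative')
    coefficients, another for the n - k discarded ('noise') coefficients,
    and a crude penalty for encoding the model index k."""
    c2 = np.sort(np.asarray(c, dtype=float) ** 2)[::-1]  # descending
    n, cum = len(c2), np.cumsum(np.sort(np.asarray(c, dtype=float) ** 2)[::-1])
    best_k, best_len = 1, np.inf
    for k in range(1, n - 1):
        s_info = max(cum[k - 1] / k, 1e-12)                   # retained variance
        s_noise = max((cum[-1] - cum[k - 1]) / (n - k), 1e-12)  # noise variance
        code_len = (0.5 * (k * np.log(s_info) + (n - k) * np.log(s_noise))
                    + 0.5 * np.log(k * (n - k)))              # model-index penalty
        if code_len < best_len:
            best_k, best_len = k, code_len
    return np.sqrt(c2[best_k - 1])

# Toy usage: soft-threshold the detail coefficients of a noisy sine.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024)
d = haar_details(np.sin(8 * np.pi * t) + 0.3 * rng.standard_normal(1024))
thr = mdl_threshold(d)
d_denoised = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft thresholding
```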

    Confidence Sets in Time-Series Filtering

    The problem of filtering a finite-alphabet stationary ergodic time series is considered. A method for constructing a confidence set for the (unknown) signal is proposed, such that the resulting set has the following properties. First, it includes the unknown signal with probability γ, where γ is a parameter supplied to the filter. Second, the size of the confidence set grows exponentially at a rate that is asymptotically equal to the conditional entropy of the signal given the data. Moreover, it is shown that this rate is optimal.

    Comment: Some of the results were reported at ISIT 2011, St. Petersburg, Russia, pp. 2436-243
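
    On a toy scale, the construction reduces to collecting the most probable candidate signals until their conditional mass reaches γ. A minimal sketch, assuming the conditional distribution of the signal given the data is available as a plain dict (the paper's estimator for stationary ergodic series is not reproduced here):

```python
def confidence_set(cond_prob, gamma):
    """Smallest set of candidate signals whose total conditional
    probability (given the observed data) is at least gamma.

    cond_prob: dict mapping a candidate signal (e.g. a tuple over a
    finite alphabet) to its conditional probability given the data."""
    ranked = sorted(cond_prob.items(), key=lambda kv: kv[1], reverse=True)
    chosen, mass = [], 0.0
    for signal, p in ranked:
        chosen.append(signal)
        mass += p
        if mass >= gamma:
            break
    return set(chosen)

# Toy usage: three candidate signals over the alphabet {0, 1}.
probs = {(0, 1, 1): 0.6, (0, 1, 0): 0.3, (1, 1, 0): 0.1}
assert confidence_set(probs, 0.9) == {(0, 1, 1), (0, 1, 0)}
```

    By the abstract's second property, the number of candidates such a set must contain grows roughly like 2^{nH}, where H is the conditional entropy rate of the signal given the data, which is what makes the optimality claim meaningful.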

    Statistical analysis and modeling for biomolecular structures

    Most recent studies on biomolecules address their three-dimensional structure, since it is closely related to their function in a biological system. The structure of biomolecules can be determined by various methods, which rely on data from experimental instruments or on computational analysis of previously obtained data. Single particle reconstruction from electron microscopic images of macromolecules has proven a useful and affordable way to determine molecular structure in increasing detail.

    The main goal of this thesis is to contribute to the single particle reconstruction methodology by adding a denoising step to the analysis of cryo-electron microscopic images. First, denoising methods are briefly surveyed and their efficiency in filtering cryo-electron microscopic images is evaluated. The focus is on the information-theoretic minimum description length (MDL) principle for coding the essential part of a signal efficiently. This approach can also be applied to reduce noise in signals, and here it is used to develop a novel denoising method for cryo-electron microscopic images. An existing denoising method is modified to suit the given problem in single particle reconstruction. In addition, a more general denoising method is developed, based on a novel way of selecting the model class with the MDL principle. This method is then thoroughly tested and compared with existing methods in order to evaluate the utility of denoising in single particle reconstruction.

    A secondary goal of this thesis is the study of protein oligomerisation using computational approaches. The focus is on recognising the residues in proteins that interact during oligomerisation and on modelling the interaction site of the hantavirus N-protein. To unravel the interaction structure, the approach is to understand the phenomenon of protein folding towards the quaternary structure.
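
    Comparisons of denoising methods like those described above are commonly scored against a clean reference image; below is a minimal sketch of one standard figure of merit (the metric choice is an assumption, and the thesis's own evaluation protocol is not reproduced here):

```python
import numpy as np

def psnr(clean, denoised):
    """Peak signal-to-noise ratio in dB; higher means the denoised
    image is closer to the clean reference."""
    clean = np.asarray(clean, dtype=float)
    mse = np.mean((clean - np.asarray(denoised, dtype=float)) ** 2)
    if mse == 0:
        return np.inf
    peak = np.max(np.abs(clean))
    return 20 * np.log10(peak) - 10 * np.log10(mse)
```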

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels, with higher spectral resolution than multispectral cameras; they are therefore often referred to as hyperspectral cameras (HSCs). The higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, the spectra measured by HSCs are mixtures of the spectra of the materials in a scene; accurate estimation therefore requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of the number of endmembers, their spectral signatures, and their abundances at each pixel. It is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in search of robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first; signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are then described, together with the underlying mathematical problems and potential solutions. Algorithm characteristics are illustrated experimentally.

    Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
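
    Under the linear mixing model assumed above, a pixel spectrum y is modelled as y ≈ Ea, where the columns of E hold endmember signatures and the vector a holds nonnegative abundances that sum to one. The sketch below estimates a by nonnegative least squares with a soft sum-to-one constraint; the row-augmentation trick and all names here are illustrative, not an algorithm from the paper.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_abundances(E, y, delta=1e3):
    """Fully constrained least squares: solve y ≈ E @ a with a >= 0 and
    sum(a) ≈ 1. The sum-to-one constraint is enforced softly by appending
    a heavily weighted row of ones to the endmember matrix."""
    m, p = E.shape                               # m spectral bands, p endmembers
    A = np.vstack([E, delta * np.ones((1, p))])  # augmented system
    b = np.concatenate([y, [delta]])
    a, _ = nnls(A, b)                            # nonnegative least squares
    return a

# Toy usage: two endmembers, a pixel that is a 70/30 mixture plus noise.
rng = np.random.default_rng(0)
E = np.abs(rng.standard_normal((50, 2)))         # synthetic signatures
y = E @ np.array([0.7, 0.3]) + 0.01 * rng.standard_normal(50)
print(fcls_abundances(E, y))                     # approximately [0.7, 0.3]
```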

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.

    Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision
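
    Sparse coding as described here approximates a signal x by Da with a sparse coefficient vector a. Below is a minimal sketch using iterative soft thresholding (ISTA) for the l1-penalised formulation; the algorithm and parameter values are illustrative choices, not the monograph's presentation.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """ISTA for  min_a  0.5 * ||x - D @ a||^2 + lam * ||a||_1.
    Each step takes a gradient step on the smooth term, then soft-thresholds."""
    L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L             # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return a

# Toy usage: recover a 3-sparse code from a random unit-norm dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
a_true = np.zeros(128)
a_true[[5, 40, 99]] = [1.0, -0.8, 0.5]
a_hat = ista_sparse_code(D, D @ a_true, lam=0.05)
```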