
    Sharp Oracle Inequalities for Aggregation of Affine Estimators

    We consider the problem of combining a (possibly uncountably infinite) set of affine estimators in the non-parametric regression model with heteroscedastic Gaussian noise. Focusing on the exponentially weighted aggregate, we prove a PAC-Bayesian type inequality that leads to sharp oracle inequalities in discrete as well as continuous settings. The framework is general enough to cover combinations of various procedures such as least squares regression, kernel ridge regression, shrinkage estimators, and many other estimators used in the literature on statistical inverse problems. As a consequence, we show that the proposed aggregate provides an adaptive estimator in the exact minimax sense without either discretizing the range of tuning parameters or splitting the set of observations. We also illustrate numerically the good performance achieved by the exponentially weighted aggregate.
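As a toy illustration of the exponentially weighted aggregate described above (this is not code from the paper; the function name `ewa`, the temperature parameter `beta`, and the use of risk estimates as the exponent are assumptions for this sketch), the weights and the combined estimate can be computed as:

```python
import numpy as np

def ewa(estimates, risk_estimates, beta):
    """Exponentially weighted aggregate of candidate estimators.

    estimates: (m, n) array; row i is the i-th candidate estimate.
    risk_estimates: (m,) array of risk estimates r_i for each candidate.
    beta: temperature parameter > 0 controlling how peaked the weights are.
    """
    r = np.asarray(risk_estimates, dtype=float)
    # subtract the minimum before exponentiating, for numerical stability
    w = np.exp(-(r - r.min()) / beta)
    w /= w.sum()
    # convex combination of the candidate estimates
    return w @ np.asarray(estimates, dtype=float)
```

With a large gap in estimated risk, nearly all weight concentrates on the best candidate; with equal risks, the aggregate is a plain average.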

    A review of domain adaptation without target labels

    Domain adaptation has become a prominent problem setting in machine learning and related fields. This review asks the question: how can a classifier learn from a source domain and generalize to a target domain? We present a categorization of approaches, divided into what we refer to as sample-based, feature-based, and inference-based methods. Sample-based methods focus on weighting individual observations during training based on their importance to the target domain. Feature-based methods revolve around mapping, projecting, and representing features such that a source classifier performs well on the target domain, while inference-based methods incorporate adaptation into the parameter estimation procedure, for instance through constraints on the optimization. Additionally, we review a number of conditions that allow for formulating bounds on the cross-domain generalization error. Our categorization highlights recurring ideas and raises questions important to further research.
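A minimal sketch of the sample-based idea: weight each source observation by an importance weight (approximating the density ratio between target and source) inside a weighted least-squares fit. The function name and closed-form solver are illustrative, not from the review; estimating the weights themselves (e.g. by kernel mean matching) is a separate step.

```python
import numpy as np

def importance_weighted_fit(X_src, y_src, weights):
    """Weighted least squares: minimize sum_i w_i * (y_i - x_i @ beta)^2.

    weights: per-sample importance weights approximating
    p_target(x) / p_source(x), supplied by some density-ratio estimator.
    """
    W = np.diag(np.asarray(weights, dtype=float))
    # closed-form solution of the weighted normal equations
    return np.linalg.solve(X_src.T @ W @ X_src, X_src.T @ W @ y_src)
```

With uniform weights this reduces to ordinary least squares; skewing the weights biases the fit toward the regions the target domain emphasizes.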

    Improved Image Quality Using Joint Image Reconstruction and Non-Local Means Filtering for Multi-Spectral SPECT

    Department of Electrical Engineering
    Single-photon emission computed tomography (SPECT) is one of the major imaging modalities in medical imaging, including quantitative imaging for the evaluation of efficacy and toxicity in radionuclide therapy. Choosing an optimal SPECT image reconstruction strategy for radionuclides with a wide energy spectrum affects the resulting image quality, due to energy-dependent attenuation information in forward projection models and energy-dependent scatter information. Post-reconstruction filtering is also important to suppress noise propagated during the reconstruction process.
    Yttrium-90 (Y-90) is a commonly used radionuclide in targeted radionuclide therapy. Recently, bremsstrahlung in Y-90 has been successfully imaged, allowing good quantification of radioactivity to predict therapy response more accurately. However, the wide, continuous energy spectrum of bremsstrahlung photons makes Y-90 SPECT image reconstruction challenging. Previously, forward projection models with a narrow single-energy window were used for image reconstruction from a single acquisition energy window. We propose a new Y-90 SPECT joint image reconstruction method from multiple acquisition windows, referred to as joint spectral reconstruction (JSR), using multi-energy-window forward models. Our proposed method yielded a significantly higher recovery coefficient and lower standard deviation than other methods that use a single acquisition window and a single energy window for the projection model, with both narrow and wide energy spectra.
    We also investigated parameter selection methods for the non-local means (NLM) filter with SPECT. Self-weight estimation is an important factor influencing the denoising performance of NLM. The recently introduced local James-Stein type center pixel weight method (LJS) outperformed other existing self-weight estimation methods in determining the contribution of the self-weight to NLM. However, the LJS method may produce excessively large self-weight estimates, since no upper bound on self-weights is assumed. It also uses a relatively large local area for estimating self-weights, which may lead to strong bias. We propose novel local minimax self-weight estimation methods with direct bounds (LMM-DB) and re-parametrization (LMM-RP) using Baranchik's minimax estimator. Our proposed methods yielded a better bias-variance trade-off, higher peak signal-to-noise ratio (PSNR), and fewer visual artifacts than the classical NLM method and the original LJS method. They also provide a heuristic way of choosing global smoothing parameters of NLM that yields PSNR values close to the optimal, without knowing the true image.
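The classical NLM filter that the self-weight discussion builds on can be sketched in one dimension. This is a toy illustration (function name, parameters, and the `self_weight` override are assumptions; it is not the proposed LMM-DB/LMM-RP method), where `self_weight` mimics replacing the otherwise maximal center-pixel weight:

```python
import numpy as np

def nlm_1d(signal, patch_radius=1, h=0.5, self_weight=None):
    """Minimal 1-D non-local means.

    Each sample becomes a weighted average of all samples, with weights
    from patch similarity: w_ij = exp(-||P_i - P_j||^2 / h^2).
    If self_weight is given, it overrides the weight each sample
    assigns to itself (the center-pixel weight), before normalization.
    """
    x = np.asarray(signal, dtype=float)
    n = len(x)
    pad = np.pad(x, patch_radius, mode="reflect")
    # extract a patch of length 2*patch_radius + 1 around each sample
    patches = np.array([pad[i:i + 2 * patch_radius + 1] for i in range(n)])
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / h ** 2)
    if self_weight is not None:
        np.fill_diagonal(w, self_weight)
    w /= w.sum(axis=1, keepdims=True)  # normalize rows to sum to 1
    return w @ x
```

A constant signal passes through unchanged, and shrinking `self_weight` pushes each output sample toward its non-local neighbors.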

    Statistical Analysis of Audio Signals using Time-Frequency Analysis

    In this thesis, we provide nonparametric estimation of signals corrupted by stationary noise in the white noise model. We derive adaptive and rate-optimal estimators of signals in modulation spaces by thresholding the coefficients obtained from the Gabor expansion. The rates obtained using the classical oracle inequalities of Donoho and Johnstone (1994) exhibit new features that reflect the inclusion of both time and frequency. The scope of our results is extended to alpha-modulation spaces in the one-dimensional setting, allowing a comparison with Sobolev and Besov spaces. To confirm the practical applicability of our methods, we perform extensive simulations that evaluate their performance against state-of-the-art methods over a range of scenarios.
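A toy sketch of coefficient thresholding in a Gabor-style expansion, using non-overlapping rectangular frames and a per-frame DFT instead of a proper Gabor frame (a simplifying assumption for brevity; the thesis's actual estimators are defined on modulation spaces with smooth windows):

```python
import numpy as np

def soft_threshold(c, t):
    """Soft-thresholding operator on (possibly complex) coefficients."""
    mag = np.abs(c)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * c, 0.0)

def denoise_gabor(signal, frame=8, threshold=1.0):
    """Crude time-frequency denoiser: split the signal into
    non-overlapping frames, take a DFT per frame (a rudimentary
    Gabor expansion), soft-threshold the coefficients, and invert.
    """
    x = np.asarray(signal, dtype=float)
    n = len(x) // frame * frame          # drop an incomplete last frame
    frames = x[:n].reshape(-1, frame)
    coeffs = np.fft.fft(frames, axis=1)  # time-frequency coefficients
    rec = np.fft.ifft(soft_threshold(coeffs, threshold), axis=1).real
    return rec.reshape(-1)
```

With threshold zero the transform round-trips exactly; raising the threshold kills small coefficients, which is where the noise concentrates.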

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, and computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
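Sparse coding, as described above, represents a signal by a few dictionary atoms. A standard greedy algorithm for this is orthogonal matching pursuit, sketched here as a generic illustration (not the monograph's code; dictionary columns are assumed unit-norm):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select up to k atoms
    (columns of D, assumed unit-norm) to sparsely approximate y.
    """
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit all selected atoms jointly by least squares
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = y - D @ coef
    return coef
```

When `y` truly is a k-sparse combination of atoms from `D`, the residual drops to zero once the right support is found.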
