Effect of Denoising Dependent and Independent Variables On the Performances of NonLinear Regression Parameters of the Estimators
Denoising is any signal processing method that reconstructs a signal from a noisy one; its goal is to remove noise while preserving useful information. In this study, simulated data under three sample sizes (32, 256 and 1024) were used, applying the Epanechnikov kernel, Gaussian kernel, wavelet and polynomial spline smoothers to denoise the variables in two approaches. The study assessed the performance of the denoised nonlinear estimators under the different sample sizes, with comparisons made using the mean squared error criterion. The results showed that denoising both the dependent and the independent variables enhances the performance of the denoised nonlinear least squares (DNLS) estimator under each sample size for all four smoothers considered.
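As a rough illustration of the two-sided approach (denoising both the dependent and the independent variable before fitting), the following numpy-only sketch smooths both variables with a Gaussian (Nadaraya-Watson) kernel and then fits a toy nonlinear model by grid-search least squares. The exponential model, bandwidths, and grid-search fit are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel_smooth(t, values, bandwidth):
    """Nadaraya-Watson smoother with a Gaussian kernel."""
    diffs = t[:, None] - t[None, :]
    weights = np.exp(-0.5 * (diffs / bandwidth) ** 2)
    return weights @ values / weights.sum(axis=1)

# Toy nonlinear model y = a * exp(b * x) + noise (hypothetical, not from the paper)
n = 256
a_true, b_true = 2.0, 1.5
x_clean = np.sort(rng.uniform(0, 1, n))
y_clean = a_true * np.exp(b_true * x_clean)
x_noisy = x_clean + rng.normal(0, 0.02, n)
y_noisy = y_clean + rng.normal(0, 0.3, n)

# Denoise BOTH variables along a common design index before fitting
t = np.arange(n) / n
x_den = gaussian_kernel_smooth(t, x_noisy, bandwidth=0.03)
y_den = gaussian_kernel_smooth(t, y_noisy, bandwidth=0.03)

def fit_nls(x, y):
    """Crude grid-search nonlinear least squares for (a, b)."""
    best = None
    for a in np.linspace(1.0, 3.0, 41):
        for b in np.linspace(0.5, 2.5, 41):
            sse = np.sum((y - a * np.exp(b * x)) ** 2)
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

a_raw, b_raw = fit_nls(x_noisy, y_noisy)
a_den, b_den = fit_nls(x_den, y_den)
mse_raw = (a_raw - a_true) ** 2 + (b_raw - b_true) ** 2
mse_den = (a_den - a_true) ** 2 + (b_den - b_true) ** 2
print(mse_raw, mse_den)
```

Comparing `mse_raw` and `mse_den` over many replications (and over the three sample sizes) is the kind of Monte Carlo comparison the abstract describes.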
Peaks detection and alignment for mass spectrometry data
The goal of this paper is to review existing methods for protein mass spectrometry data analysis, and to present a new methodology for automatic extraction of significant peaks (biomarkers). For the pre-processing step required for data from MALDI-TOF or SELDI-TOF spectra, we use a purely nonparametric approach that combines the stationary invariant wavelet transform for noise removal with penalized spline quantile regression for baseline correction. We further present a multi-scale spectra alignment technique that is based on identification of statistically significant peaks from a set of spectra. This method allows one to find common peaks in a set of spectra that can subsequently be mapped to individual proteins. These may serve as useful biomarkers in medical applications, or as individual features for further multidimensional statistical analysis. MALDI-TOF spectra obtained from serum samples are used throughout the paper to illustrate the methodology.
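The pipeline structure (denoise, subtract a baseline, keep only peaks above a noise-based significance threshold) can be sketched on a synthetic spectrum. This is a numpy-only caricature: moving-average smoothing stands in for the wavelet denoising, a running median stands in for the penalized spline quantile regression baseline, and the peak locations and noise model are made-up toy values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic MALDI-like spectrum: smooth decaying baseline + Gaussian peaks + noise
mz = np.linspace(1000, 2000, 2000)
baseline = 5.0 * np.exp(-(mz - 1000) / 800.0)
peak_centers = (1200.0, 1500.0, 1750.0)          # hypothetical biomarker locations
signal = sum(3.0 * np.exp(-0.5 * ((mz - c) / 4.0) ** 2) for c in peak_centers)
spectrum = baseline + signal + rng.normal(0.0, 0.15, mz.size)

# Crude stand-in for wavelet denoising: edge-padded moving average
window = 9
padded = np.pad(spectrum, window // 2, mode="edge")
smoothed = np.convolve(padded, np.ones(window) / window, mode="valid")

# Crude stand-in for the quantile-regression baseline: running median
half = 100
base_est = np.array([np.median(smoothed[max(0, i - half):i + half + 1])
                     for i in range(smoothed.size)])
corrected = smoothed - base_est

# Keep only "significant" peaks: local maxima above a robust (MAD-based) threshold
noise_mad = np.median(np.abs(corrected - np.median(corrected)))
height, distance = 10.0 * noise_mad, 20
cand = np.where((corrected[1:-1] > corrected[:-2])
                & (corrected[1:-1] >= corrected[2:])
                & (corrected[1:-1] > height))[0] + 1
keep = []
for i in cand[np.argsort(corrected[cand])[::-1]]:    # strongest candidates first
    if all(abs(i - j) >= distance for j in keep):
        keep.append(i)
idx = np.sort(np.array(keep))
print(np.round(mz[idx], 1))
```

The detected m/z positions from several spectra would then feed the alignment step, which matches peaks across spectra before mapping them to proteins.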
Pointwise adaptive estimation for quantile regression
A nonparametric procedure for quantile regression, or more generally nonparametric M-estimation, is proposed which is completely data-driven and adapts locally to the regularity of the regression function. This is achieved by considering at each point M-estimators over different local neighbourhoods and by a local model selection procedure based on sequential testing. Non-asymptotic risk bounds are obtained, which yield rate-optimality for large sample asymptotics under weak conditions. Simulations for different univariate median regression models show good finite sample properties, also in comparison to traditional methods. The approach is the basis for denoising CT scans in cancer research.
Keywords: M-estimation, median regression, robust estimation, local model selection, unsupervised learning, local bandwidth selection, median filter, Lepski procedure, minimax rate, image denoising
Pointwise adaptive estimation for robust and quantile regression
A nonparametric procedure for robust regression estimation and for quantile
regression is proposed which is completely data-driven and adapts locally to
the regularity of the regression function. This is achieved by considering at
each point M-estimators over different local neighbourhoods and by a local
model selection procedure based on sequential testing. Non-asymptotic risk
bounds are obtained, which yield rate-optimality for large sample asymptotics
under weak conditions. Simulations for different univariate median regression
models show good finite sample properties, also in comparison to traditional
methods. The approach is extended to image denoising and applied to CT scans in
cancer research.
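The idea behind both versions of this work can be sketched in a few lines: at each point, grow the neighbourhood of a local median (M-)estimator and stop when a sequential test rejects homogeneity, in the spirit of the Lepski procedure. The window schedule, test statistic, and critical value below are simplified assumptions, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(2)

def adaptive_median(y, widths, crit=3.0):
    """Lepski-style pointwise bandwidth selection for a median filter (sketch).

    At each point, enlarge the local window as long as the new local median
    stays within crit * sigma / sqrt(window size) of every previously accepted
    smaller-window median; keep the last accepted window's median.
    """
    # Robust global noise scale from first differences (MAD-based)
    sigma = 1.48 * np.median(np.abs(np.diff(y))) / np.sqrt(2)
    out = np.empty(y.size)
    for i in range(y.size):
        h0 = widths[0]
        accepted = np.median(y[max(0, i - h0):i + h0 + 1])
        prev = [(accepted, h0)]
        for h in widths[1:]:
            m = np.median(y[max(0, i - h):i + h + 1])
            # Sequential test against every accepted smaller-scale estimate
            if any(abs(m - m_k) > crit * sigma / np.sqrt(2 * h_k + 1)
                   for m_k, h_k in prev):
                break
            accepted = m
            prev.append((m, h))
        out[i] = accepted
    return out

# Piecewise-constant signal with heavy-tailed noise (tests the robustness claim)
x = np.arange(400)
truth = np.where(x < 200, 0.0, 2.0)
y = truth + 0.3 * rng.standard_t(df=3, size=400)
est = adaptive_median(y, widths=[2, 4, 8, 16, 32])
print(np.mean((est - truth) ** 2))
```

Near the jump, large windows fail the homogeneity test and the estimator falls back to small neighbourhoods, which is the pointwise adaptivity the abstract describes; in smooth regions it uses the largest window and averages more noise away.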
BLADE: Filter Learning for General Purpose Computational Photography
The Rapid and Accurate Image Super Resolution (RAISR) method of Romano,
Isidoro, and Milanfar is a computationally efficient image upscaling method
using a trained set of filters. We describe a generalization of RAISR, which we
name Best Linear Adaptive Enhancement (BLADE). This approach is a trainable
edge-adaptive filtering framework that is general, simple, computationally
efficient, and useful for a wide range of problems in computational
photography. We show applications to operations which may appear in a camera
pipeline, including denoising, demosaicing, and stylization.
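The core of the RAISR/BLADE training recipe, bucket each pixel by its local structure and learn one least-squares filter per bucket, can be caricatured as follows. The gradient-energy bucketing is a crude stand-in for BLADE's structure-tensor features, and the toy image, noise model, and 3x3 filter size are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def extract_patches(img, k=3):
    """All k x k patches of img, flattened, in row-major scan order."""
    h, w = img.shape
    r = k // 2
    rows = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            rows.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
    return np.array(rows)

def bucket_ids(img, k=3, n_buckets=4):
    """Bucket each patch by local gradient energy (a crude stand-in for
    BLADE's structure-tensor orientation/strength/coherence features)."""
    gy, gx = np.gradient(img)
    energy = gx ** 2 + gy ** 2
    r = k // 2
    e = energy[r:-r, r:-r].ravel()          # same order as extract_patches
    edges = np.quantile(e, np.linspace(0, 1, n_buckets + 1)[1:-1])
    return np.digitize(e, edges)

# Training pair: clean toy image and a synthetically noised copy
clean = np.add.outer(np.sin(np.linspace(0, 6, 64)), np.cos(np.linspace(0, 6, 64)))
noisy = clean + rng.normal(0, 0.1, clean.shape)

X = extract_patches(noisy)                  # noisy input patches
t = extract_patches(clean)[:, 4]            # clean center pixels (k=3 -> index 4)
b = bucket_ids(noisy)

# One linear filter per bucket, fit by least squares
filters = {q: np.linalg.lstsq(X[b == q], t[b == q], rcond=None)[0]
           for q in np.unique(b)}

# Inference: filter each pixel with its bucket's learned filter
pred = np.array([X[i] @ filters[b[i]] for i in range(X.shape[0])])
print(np.mean((pred - t) ** 2), np.mean((X[:, 4] - t) ** 2))
```

Because the per-bucket least-squares fit can always reproduce the identity filter, the learned filters can do no worse than passing the noisy center pixel through; the bucketing is what lets flat regions get aggressive smoothing while edges get edge-adaptive filters, at the cost of a single filter lookup and dot product per pixel.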
Partially Linear Estimation with Application to Sparse Signal Recovery From Measurement Pairs
We address the problem of estimating a random vector X from two sets of
measurements Y and Z, such that the estimator is linear in Y. We show that the
partially linear minimum mean squared error (PLMMSE) estimator does not require
knowing the joint distribution of X and Y in full, but rather only its
second-order moments. This renders it of potential interest in various
applications. We further show that the PLMMSE method is minimax-optimal among
all estimators that solely depend on the second-order statistics of X and Y. We
demonstrate our approach in the context of recovering a signal, which is sparse
in a unitary dictionary, from noisy observations of it and of a filtered
version of it. We show that in this setting PLMMSE estimation has a clear
computational advantage, while its performance is comparable to
state-of-the-art algorithms. We apply our approach both in static and dynamic
estimation applications. In the former category, we treat the problem of image
enhancement from blurred/noisy image pairs, where we show that PLMMSE
estimation performs only slightly worse than state-of-the-art algorithms, while
running an order of magnitude faster. In the dynamic setting, we provide a
recursive implementation of the estimator and demonstrate its utility in the
context of tracking maneuvering targets from position and acceleration
measurements.
Comment: 13 pages, 5 figures
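The linear-in-Y ingredient can be illustrated with plain LMMSE estimation from estimated second-order moments, which, as the abstract notes, is all the linear part of the estimator requires, even when the full joint distribution is unknown. The toy model below (Laplace-distributed X observed through a linear map with Gaussian noise) is an assumption for illustration; the full PLMMSE construction with a second, nonlinearly processed measurement Z is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model: X random (deliberately non-Gaussian), Y = H X + noise.
# The LMMSE estimator needs only second-order moments of (X, Y).
n, m, N = 4, 6, 50_000
H = rng.normal(size=(m, n))
X = rng.laplace(size=(N, n))
Y = X @ H.T + 0.5 * rng.normal(size=(N, m))

# Estimate first- and second-order moments from data
mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
Xc, Yc = X - mu_x, Y - mu_y
C_xy = Xc.T @ Yc / N                         # cross-covariance of X and Y
C_yy = Yc.T @ Yc / N                         # covariance of Y

# x_hat = mu_x + C_xy C_yy^{-1} (y - mu_y): linear in Y by construction
A = np.linalg.solve(C_yy, C_xy.T).T          # shape (n, m)
X_hat = mu_x + (Y - mu_y) @ A.T

mse = np.mean((X_hat - X) ** 2)
print(mse, np.mean((X - mu_x) ** 2))
```

In the PLMMSE setting the constant terms above become functions of the second measurement Z, so the estimator stays linear in Y while depending arbitrarily on Z; the point of the paper is that this hybrid needs no more distributional knowledge about (X, Y) than the moment matrices computed here.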