Global Saturation of Regularization Methods for Inverse Ill-Posed Problems
In this article the concept of saturation of an arbitrary regularization
method is formalized based upon the original idea of saturation for spectral
regularization methods introduced by A. Neubauer in 1994. Necessary and
sufficient conditions for a regularization method to have global saturation are
provided. It is shown that for a method to have global saturation the total
error must be optimal in two senses, namely as optimal order of convergence
over a certain set which, at the same time, must be optimal (in a very precise
sense) with respect to the error. Two converse results are then proved and
the theory is applied to find sufficient conditions which ensure the existence
of global saturation for spectral methods with classical qualification of
finite positive order and for methods with maximal qualification. Finally,
several examples of regularization methods possessing global saturation are
shown.
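A minimal LaTeX sketch of the standard spectral-regularization setting in which saturation is studied may help fix ideas; the notation ($T$, $g_\alpha$, $x^\dagger$, $\delta$) is illustrative and not taken from the paper.

```latex
% Sketch of the spectral-regularization framework (illustrative notation).
% Model: T x = y, noisy data y^\delta with \|y - y^\delta\| \le \delta.
\[
  x_\alpha^\delta \;=\; g_\alpha(T^{*}T)\, T^{*} y^\delta ,
\]
\[
  \|x^\dagger - x_\alpha^\delta\|
  \;\le\;
  \underbrace{\bigl\| \bigl(I - g_\alpha(T^{*}T)\,T^{*}T\bigr) x^\dagger \bigr\|}_{\text{regularization error}}
  \;+\;
  \underbrace{\delta \sup_{0 < \lambda \le \|T\|^{2}} \sqrt{\lambda}\,\bigl|g_\alpha(\lambda)\bigr|}_{\text{propagated data noise}} .
\]
% Saturation concerns the fastest possible decay of this total error as
% \delta \to 0: beyond a method-specific rate, no smoothness assumption
% on x^\dagger improves the order of convergence.
```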
Generalized Qualification and Qualification Levels for Spectral Regularization Methods
The concept of qualification for spectral regularization methods for inverse
ill-posed problems is strongly associated with the optimal order of convergence
of the regularization error. In this article, the definition of qualification
is extended and three different levels are introduced: weak, strong and
optimal. It is shown that the weak qualification extends the definition
introduced by Mathe and Pereverzev in 2003, mainly in the sense that the
functions associated with orders of convergence and source sets need not be the
same. It is shown that certain methods possessing infinite classical
qualification, e.g. truncated singular value decomposition (TSVD), Landweber's
method and Showalter's method, also have generalized qualification leading to
an optimal order of convergence of the regularization error. Sufficient
conditions for a spectral regularization method to have weak qualification are provided, and necessary and
sufficient conditions for a given order of convergence to be strong or optimal
qualification are found. Examples of all three qualification levels are
provided and the relationships between them as well as with the classical
concept of qualification and the qualification introduced by Mathe and
Pereverzev are shown. In particular, spectral regularization methods having
extended qualification in each one of the three levels and having zero or
infinite classical qualification are presented. Finally, several implications of
this theory in the context of orders of convergence, converse results and
maximal source sets for inverse ill-posed problems, are shown.
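For orientation, the classical notion that these levels generalize can be sketched as follows; this is the standard textbook definition, written in illustrative notation rather than the paper's.

```latex
% Classical qualification (standard definition; notation illustrative).
% With residual function r_\alpha(\lambda) := 1 - \lambda g_\alpha(\lambda),
% a spectral method has classical qualification \mu_0 > 0 if
\[
  \sup_{0 < \lambda \le \|T\|^{2}} \lambda^{\mu}\,\bigl|r_\alpha(\lambda)\bigr|
  \;\le\; c_\mu\, \alpha^{\mu}
  \qquad \text{for all } 0 < \mu \le \mu_0 .
\]
% Tikhonov regularization satisfies this only up to \mu_0 = 1, whereas TSVD,
% Landweber's method and Showalter's method satisfy it for every \mu > 0
% (infinite classical qualification), which is why finer, generalized
% notions are needed to distinguish between such methods.
```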
Convergence rates in expectation for Tikhonov-type regularization of Inverse Problems with Poisson data
In this paper we study a Tikhonov-type method for ill-posed nonlinear
operator equations $g^\dagger = F(u^\dagger)$, where $g^\dagger$ is an integrable,
non-negative function. We assume that data are drawn from a Poisson process
with density $t\,g^\dagger$, where $t > 0$ may be interpreted as an exposure time. Such
problems occur in many photonic imaging applications including positron
emission tomography, confocal fluorescence microscopy, astronomical observations,
and phase retrieval problems in optics. Our approach uses a
Kullback-Leibler-type data fidelity functional and allows for general convex
penalty terms. We prove convergence rates of the expectation of the
reconstruction error under a variational source condition as $t \to \infty$, both
for an a priori and for a Lepskiĭ-type parameter choice rule.
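A sketch of the shape such a Tikhonov-type estimator with Kullback-Leibler-type data fidelity typically takes; the notation is illustrative and not quoted from the paper.

```latex
% Tikhonov-type estimator with KL-type fidelity for Poisson data (sketch;
% notation illustrative). With observed data g_obs, convex penalty R and
% regularization parameter \alpha:
\[
  \widehat{u}_\alpha \;\in\; \operatorname*{arg\,min}_{u}
  \; \mathcal{S}\bigl(F(u);\, g_{\mathrm{obs}}\bigr) \;+\; \alpha\, R(u),
  \qquad
  \mathcal{S}(g;\, g_{\mathrm{obs}}) \;=\; \int \bigl( g - g_{\mathrm{obs}} \ln g \bigr)\, \mathrm{d}x ,
\]
% which, up to terms independent of g, is the Kullback-Leibler divergence
% between g_{\mathrm{obs}} and g (the Poisson negative log-likelihood).
```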
Statistical analysis of the individual variability of 1D protein profiles as a tool in ecology: an application to parasitoid venom
Understanding the forces that shape eco-evolutionary patterns often requires linking phenotypes to genotypes, allowing characterization of these patterns at the molecular level. DNA-based markers are less informative for this purpose than markers associated with gene expression and, more specifically, with protein quantities. The characterization of eco-evolutionary patterns also usually requires the analysis of large sample sizes to accurately estimate interindividual variability. However, the methods used to characterize and compare protein samples are generally expensive and time-consuming, which constrains the size of the produced data sets to a few individuals. We present here a method that estimates the interindividual variability of protein quantities based on a global, semi-automatic analysis of 1D electrophoretic profiles, opening the way to rapid analysis and comparison of hundreds of individuals. The main original features of the method are the in silico normalization of sample protein quantities using pictures of electrophoresis gels at different staining levels, as well as a new method of analysis of electrophoretic profiles based on a median profile. We demonstrate that this method can accurately discriminate between species and between geographically distant or close populations, based on interindividual variation in venom protein profiles from three endoparasitoid wasps of two different genera (Psyttalia concolor, Psyttalia lounsburyi and Leptopilina boulardi). Finally, we discuss the experimental designs that would benefit from the use of this method.
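A minimal Python sketch (not the authors' code) of the median-profile idea described above: normalize lane intensity profiles, build a pointwise median reference, and score individuals by their distance to it. The function names, the synthetic data and the L1 distance are illustrative assumptions.

```python
import numpy as np

def normalize_profile(profile):
    """Scale a 1D intensity profile to unit total intensity
    (in silico normalization of loaded protein quantity)."""
    profile = np.asarray(profile, dtype=float)
    return profile / profile.sum()

def median_profile(profiles):
    """Pointwise median across individuals: a robust reference profile."""
    return np.median(np.stack(profiles), axis=0)

def distance_to_reference(profile, reference):
    """Simple L1 distance between an individual and the median profile."""
    return np.abs(profile - reference).sum()

# Usage on synthetic data: 100 individuals, 500 positions along the gel.
rng = np.random.default_rng(0)
lanes = [normalize_profile(rng.gamma(2.0, size=500)) for _ in range(100)]
ref = median_profile(lanes)
scores = [distance_to_reference(lane, ref) for lane in lanes]
print(f"median inter-individual distance: {np.median(scores):.4f}")
```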
General regularization schemes for signal detection in inverse problems
The authors discuss how general regularization schemes, in particular linear regularization schemes and projection schemes, can be used to design tests for signal detection in statistical inverse problems. It is shown that such tests can attain the minimax separation rates when the regularization parameter is chosen appropriately. It is also shown how to modify these tests in order to obtain (up to a factor) a test which adapts to the unknown smoothness in the alternative. Moreover, the authors discuss how the so-called "direct" and "indirect" tests are related via interpolation properties.
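One hedged way to picture such a test: plug a linear regularized estimator into a norm statistic and compare it with its null quantile. The statistic below is a generic sketch of this construction, not the authors' exact test.

```latex
% Sketch of a regularization-based detection test (illustrative). For the
% Gaussian sequence model Y = K f + \varepsilon \xi, test H_0 : f = 0
% against a smoothness alternative via
\[
  T_\alpha \;=\; \bigl\| g_\alpha(K^{*}K)\, K^{*} Y \bigr\|^{2} ,
  \qquad
  \text{reject } H_0 \;\Longleftrightarrow\; T_\alpha > t_{1-\beta,\alpha},
\]
% where t_{1-\beta,\alpha} is the (1-\beta)-quantile of T_\alpha under H_0.
% Choosing \alpha to balance the bias of g_\alpha against the variance of
% the statistic is what yields the minimax separation rates.
```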
Iteratively regularized Newton-type methods for general data misfit functionals and applications to Poisson data
We study Newton-type methods for inverse problems described by nonlinear
operator equations $F(u) = g$ in Banach spaces, where the Newton equations
$F'(u_n;\, u_{n+1} - u_n) = g - F(u_n)$ are regularized variationally using a general
data misfit functional and a convex regularization term. This generalizes the
well-known iteratively regularized Gauss-Newton method (IRGNM). We prove
convergence and convergence rates as the noise level tends to 0 both for an a
priori stopping rule and for a Lepskiĭ-type a posteriori stopping rule.
Our analysis includes previous order optimal convergence rate results for the
IRGNM as special cases. The main focus of this paper is on inverse problems
with Poisson data where the natural data misfit functional is given by the
Kullback-Leibler divergence. Two examples of such problems are discussed in
detail: an inverse obstacle scattering problem with amplitude data of the
far-field pattern and a phase retrieval problem. The performance of the
proposed method for these problems is illustrated in numerical examples.
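The shape of the generalized iteration described in this abstract can be sketched as follows (illustrative notation):

```latex
% Generalized IRGNM update (sketch). Each Newton step linearizes F and
% solves a regularized variational problem:
\[
  u_{n+1} \;\in\; \operatorname*{arg\,min}_{u}
  \; \mathcal{S}\bigl( F(u_n) + F'(u_n)(u - u_n);\, g^{\mathrm{obs}} \bigr)
  \;+\; \alpha_n\, R(u),
\]
% with data misfit functional \mathcal{S} (e.g. the Kullback-Leibler
% divergence for Poisson data), convex penalty R, and regularization
% parameters \alpha_n \searrow 0. For \mathcal{S} a squared norm and R
% quadratic, this reduces to the classical IRGNM.
```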
Digging into acceptor splice site prediction: an iterative feature selection approach
Feature selection techniques are often used to reduce data dimensionality, increase classification performance, and gain insight into the processes that generated the data. In this paper, we describe an iterative procedure of feature selection and feature construction steps, improving the classification of acceptor splice sites, an important subtask of gene prediction.
We show that acceptor prediction can benefit from feature selection, and describe how feature selection techniques can be used to gain new insights into the classification of acceptor sites. This is illustrated by the identification of a new, biologically motivated feature: the AG-scanning feature.
The results described in this paper contribute both to the domain of gene prediction and to research in feature selection techniques, describing a new wrapper-based feature weighting method that aids in knowledge discovery when dealing with complex datasets.
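A minimal Python sketch (not the authors' pipeline) of the wrapper-style loop such iterative feature selection suggests: rank features by model-derived weights, drop the weakest, and re-evaluate. The model, data and thresholds are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Mock stand-in for acceptor-site feature vectors with binary labels.
X, y = make_classification(n_samples=500, n_features=60, n_informative=8,
                           random_state=0)
features = np.arange(X.shape[1])

for it in range(5):  # a few selection/evaluation rounds
    model = LogisticRegression(max_iter=1000).fit(X[:, features], y)
    weights = np.abs(model.coef_).ravel()          # feature weighting step
    score = cross_val_score(model, X[:, features], y, cv=5).mean()
    print(f"round {it}: {len(features)} features, CV accuracy {score:.3f}")
    keep = weights.argsort()[len(features) // 4:]  # drop weakest quartile
    features = features[np.sort(keep)]             # preserve feature order
```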
Anomalous zipping dynamics and forced polymer translocation
We investigate by Monte Carlo simulations the zipping and unzipping dynamics
of two polymers connected by one end and subject to an attractive interaction
between complementary monomers. In zipping, the polymers are quenched from a
high temperature equilibrium configuration to a low temperature state, so that
the two strands zip up by closing up a "Y"-fork. In unzipping, the polymers are
brought from a low temperature double stranded configuration to high
temperatures, so that the two strands separate. Simulations show that the
unzipping time, , scales as a function of the polymer length as , while the zipping is characterized by anomalous dynamics with . This exponent is in good agreement with
simulation results and theoretical predictions for the scaling of the
translocation time of a forced polymer passing through a narrow pore. We find
that the exponent $\alpha$ is robust against variations of parameters and
temperature, whereas the scaling of $\tau_z$ as a function of the driving force $F$
shows the existence of two different regimes: the weak forcing ($\tau_z \sim 1/F$) and strong forcing ($\tau_z$ independent of $F$) regimes. The crossover
region is possibly characterized by a non-trivial scaling in $F$, matching the
prediction of recent theories of polymer translocation. Although the
geometrical setup is different, zipping and translocation thus share the same
type of anomalous dynamics. Systems where this dynamics could be experimentally
investigated are DNA (or RNA) hairpins: our results imply anomalous dynamics
for the hairpin closing times, but not for the opening times.
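For readers who want to reproduce this kind of scaling analysis, a minimal Python sketch of exponent estimation by a log-log fit follows; the data and the exponent used to generate them are synthetic, not the paper's measurements.

```python
import numpy as np

# Fit a line to log(tau) vs log(L) to estimate alpha in tau ~ L**alpha.
rng = np.random.default_rng(1)
L = np.array([64, 128, 256, 512, 1024])                 # polymer lengths
tau = 0.5 * L**1.4 * rng.lognormal(0.0, 0.05, L.size)   # mock zipping times

alpha, log_prefactor = np.polyfit(np.log(L), np.log(tau), 1)
print(f"estimated scaling exponent alpha = {alpha:.3f}")
```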
Task-driven learned hyperspectral data reduction using end-to-end supervised deep learning
An important challenge in hyperspectral imaging tasks is to cope with the large number of spectral bins. Common spectral data reduction methods do not take prior knowledge about the task into account. Consequently, sparsely occurring features that may be essential for the imaging task may not be preserved in the data reduction step. Convolutional neural network (CNN) approaches are capable of learning the specific features relevant to the particular imaging task, but applying them directly to the spectral input data is constrained by computational cost. We propose a novel supervised deep learning approach for combining data reduction and image analysis in an end-to-end architecture. In our approach, the neural network component that performs the reduction is trained such that image features most relevant for the task are preserved in the reduction step. Results for two convolutional neural network architectures and two types of generated datasets show that the proposed Data Reduction CNN (DRCNN) approach can produce more accurate results than existing popular data reduction methods, and can be used in a wide range of problem settings. The integration of knowledge about the task allows for greater data compression and higher accuracy than standard data reduction methods.
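A minimal PyTorch sketch of the end-to-end idea: a learned linear spectral-reduction layer (a 1x1 convolution over spectral channels) trained jointly with a small task CNN, so the reduction preserves task-relevant features. Layer sizes and the per-pixel task head are illustrative assumptions, not the paper's exact DRCNN architecture.

```python
import torch
import torch.nn as nn

class ReductionCNN(nn.Module):
    def __init__(self, n_bins=200, n_reduced=2, n_classes=3):
        super().__init__()
        # Data reduction: learned linear combinations of spectral bins.
        self.reduce = nn.Conv2d(n_bins, n_reduced, kernel_size=1, bias=False)
        # Task network operating on the reduced image.
        self.task = nn.Sequential(
            nn.Conv2d(n_reduced, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, kernel_size=1),  # per-pixel logits
        )

    def forward(self, x):            # x: (batch, n_bins, H, W)
        return self.task(self.reduce(x))

model = ReductionCNN()
x = torch.randn(4, 200, 64, 64)     # mock hyperspectral batch
logits = model(x)                   # (4, n_classes, 64, 64)
print(logits.shape)
```

Training both components against the task loss is what distinguishes this from fixed reductions such as PCA or binning, which are computed without reference to the downstream task.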
A tomographic workflow to enable deep learning for X-ray based foreign object detection
Detection of unwanted ("foreign") objects within products is a common procedure in many branches of industry for maintaining production quality. X-ray imaging is a fast, non-invasive and widely applicable method for foreign object detection. Deep learning has recently emerged as a powerful approach for recognizing patterns in radiographs (i.e., X-ray images), enabling automated X-ray based foreign object detection. However, these methods require a large number of training examples, and manual annotation of these examples is a subjective and laborious task. In this work, we propose a Computed Tomography (CT) based method for producing training data for supervised learning of foreign object detection, with minimal labor requirements. In our approach, a few representative objects are CT scanned and reconstructed in 3D. The radiographs that are acquired as part of the CT-scan data serve as input for the machine learning method. High-quality ground truth locations of the foreign objects are obtained through accurate 3D reconstructions and segmentations. Using these segmented volumes, corresponding 2D segmentations are obtained by creating virtual projections. We outline the benefits of objectively and reproducibly generating training data in this way. In addition, we show how the accuracy depends on the number of objects used for the CT reconstructions. The results show that in this workflow generally only a relatively small number of representative objects (i.e., fewer than 10) are needed to achieve adequate detection performance in an industrial setting.
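A minimal Python sketch (illustrative, not the paper's workflow code) of the virtual-projection step: project a segmented binary volume along the beam axis to obtain 2D ground-truth labels for the radiographs.

```python
import numpy as np

def virtual_projection(segmented_volume, axis=0):
    """Parallel-beam virtual projection of a binary 3D mask onto a 2D
    detector: a detector pixel is foreground if any voxel along the ray
    belongs to the foreign object."""
    ray_sums = segmented_volume.sum(axis=axis)   # line integrals of the mask
    return (ray_sums > 0).astype(np.uint8)       # 2D ground-truth labels

# Usage on a mock volume with a small embedded "foreign object".
volume = np.zeros((128, 128, 128), dtype=np.uint8)
volume[40:48, 60:70, 60:70] = 1                  # hypothetical object voxels
label_image = virtual_projection(volume, axis=0)
print(label_image.shape, label_image.sum(), "foreground pixels")
```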