Aggregation for Gaussian regression
This paper studies statistical aggregation procedures in the regression
setting. A motivating factor is the existence of many different methods of
estimation, leading to possibly competing estimators. We consider here three
different types of aggregation: model selection (MS) aggregation, convex (C)
aggregation and linear (L) aggregation. The objective of (MS) is to select the
optimal single estimator from the list; that of (C) is to select the optimal
convex combination of the given estimators; and that of (L) is to select the
optimal linear combination of the given estimators. We are interested in
evaluating the rates of convergence of the excess risks of the estimators
obtained by these procedures. Our approach is motivated by recently published
minimax results [Nemirovski, A. (2000). Topics in non-parametric statistics.
Lectures on Probability Theory and Statistics (Saint-Flour, 1998). Lecture
Notes in Math. 1738 85--277. Springer, Berlin; Tsybakov, A. B. (2003). Optimal
rates of aggregation. Learning Theory and Kernel Machines. Lecture Notes in
Artificial Intelligence 2777 303--313. Springer, Heidelberg]. There exist
competing aggregation procedures achieving optimal convergence rates for each
of the (MS), (C) and (L) cases separately. Since these procedures are not
directly comparable with each other, we suggest an alternative solution. We
prove that all three optimal rates, as well as those for the newly introduced
(S) aggregation (subset selection), are nearly achieved via a single
``universal'' aggregation procedure. The procedure consists of mixing the
initial estimators with weights obtained by penalized least squares. Two
different penalties are considered: one of them is of the BIC type, the second
one is a data-dependent l1-type penalty.
Comment: Published at http://dx.doi.org/10.1214/009053606000001587 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
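The three aggregation targets can be illustrated with a small numpy sketch. This is not the authors' penalized least-squares procedure; it is a minimal stand-in in which the (C) weights are computed by Frank-Wolfe with exact line search over the simplex, and all function names are hypothetical.

```python
import numpy as np

def ms_aggregate(F, y):
    """Model-selection (MS) aggregation: pick the single best estimator.
    F has one column of predictions per base estimator."""
    j = np.argmin(((F - y[:, None]) ** 2).sum(axis=0))
    theta = np.zeros(F.shape[1])
    theta[j] = 1.0
    return theta

def c_aggregate(F, y, iters=200):
    """Convex (C) aggregation: minimize the empirical risk over the simplex
    by Frank-Wolfe with exact line search, warm-started at the MS vertex so
    its in-sample risk never exceeds that of (MS)."""
    theta = ms_aggregate(F, y)
    for _ in range(iters):
        grad = 2 * F.T @ (F @ theta - y)
        j = np.argmin(grad)              # best vertex for the linearized risk
        d = -theta.copy()
        d[j] += 1.0                      # direction toward vertex e_j
        Fd = F @ d
        denom = Fd @ Fd
        if denom < 1e-12:
            break
        gamma = np.clip((y - F @ theta) @ Fd / denom, 0.0, 1.0)
        theta += gamma * d               # stays a convex combination
    return theta

def l_aggregate(F, y):
    """Linear (L) aggregation: unconstrained least squares over the weights."""
    return np.linalg.lstsq(F, y, rcond=None)[0]
```

By construction the in-sample risks are nested, risk(L) <= risk(C) <= risk(MS), since each class of weight vectors contains the previous one.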
Consensus image method for unknown noise removal
Noise removal has been, and still is, an important task in computer vision. It usually precedes other tasks, such as segmentation or reconstruction. However, most existing denoising algorithms require the noise model to be known in advance. In this paper, we introduce a new consensus-based approach to deal with unknown noise models. Different filtered images are obtained and then combined using multifuzzy sets and averaging aggregation functions. The final decision is made by using a penalty function to deliver the compromise image. Results show that this approach is consistent and provides a good compromise between filters.
This work is supported by the European Commission under Contract No. 238819 (MIBISOC Marie Curie ITN). H. Bustince was supported by Project TIN 2010-15055 of the Spanish Ministry of Science.
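The combination step can be sketched in a few lines: run several candidate filters, then form a per-pixel consensus. The sketch below is an assumption-laden simplification, not the paper's multifuzzy-set procedure; with a squared-deviation penalty the penalty-minimizing compromise is the plain average, and the two filters shown are hypothetical stand-ins for the candidate denoisers.

```python
import numpy as np

def mean_filter(img, k=3):
    # Simple k-by-k box filter via edge padding; a hypothetical stand-in
    # for one of the candidate denoisers.
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def median_filter(img, k=3):
    # k-by-k median filter; robust to impulsive noise.
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    stack = [p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(k) for dx in range(k)]
    return np.median(np.stack(stack), axis=0)

def consensus(filtered):
    """Per-pixel penalty-based consensus: with penalty P(v) = sum_i (v - f_i)^2
    the minimizer is the average of the filter outputs; an absolute-deviation
    penalty would instead yield the per-pixel median."""
    return np.mean(np.stack(filtered), axis=0)
```

Swapping the penalty changes which compromise image is delivered, which is the degree of freedom the abstract refers to.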
Recursive Aggregation of Estimators by Mirror Descent Algorithm with Averaging
We consider a recursive algorithm to construct an aggregated estimator from a
finite number of base decision rules in the classification problem. The
estimator approximately minimizes a convex risk functional under the
l1-constraint. It is defined by a stochastic version of the mirror descent
algorithm (i.e., of the method which performs gradient descent in the dual
space) with an additional averaging. The main result of the paper is an upper
bound for the expected accuracy of the proposed estimator. This bound is of the
order $\sqrt{(\log M)/t}$ with an explicit and small constant factor, where $M$
is the dimension of the problem and $t$ stands for the sample size. A similar
bound is proved for a more general setting that covers, in particular, the
regression model with squared loss.
Comment: 29 pages; May 200
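The algorithm described above, mirror descent in the dual space with an additional averaging of the iterates, can be sketched for the entropic mirror map on the probability simplex (i.e., exponentiated gradient), which is the natural choice under an l1-constraint. The step-size choice below is an assumption for illustration, not the paper's exact tuning.

```python
import numpy as np

def mirror_descent_avg(grad_fn, M, T, step=None):
    """Stochastic mirror descent on the probability simplex with averaging.
    The entropic mirror map turns the dual-space gradient step into a
    multiplicative update. grad_fn(theta, t) returns a (possibly stochastic)
    gradient of the convex risk at theta."""
    if step is None:
        step = np.sqrt(np.log(M) / T)   # scaling suggested by the sqrt(log M / t) rate
    theta = np.full(M, 1.0 / M)         # uniform start: minimizer of the entropy
    avg = np.zeros(M)
    for t in range(1, T + 1):
        g = grad_fn(theta, t)
        w = theta * np.exp(-step * g)   # gradient step in the dual space
        theta = w / w.sum()             # map back to the simplex
        avg += (theta - avg) / t        # running average of the iterates
    return avg
```

For the test a deterministic gradient of a simple quadratic risk is used; a stochastic gradient plugs into `grad_fn` in exactly the same way.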
A fully-coupled discontinuous Galerkin method for two-phase flow in porous media with discontinuous capillary pressure
In this paper we formulate and test numerically a fully-coupled discontinuous
Galerkin (DG) method for incompressible two-phase flow with discontinuous
capillary pressure. The spatial discretization uses the symmetric interior
penalty DG formulation with weighted averages and is based on a wetting-phase
potential / capillary potential formulation of the two-phase flow system. After
discretizing in time with diagonally implicit Runge-Kutta schemes the resulting
systems of nonlinear algebraic equations are solved with Newton's method and
the arising systems of linear equations are solved efficiently and in parallel
with an algebraic multigrid method. The new scheme is investigated for various
test problems from the literature and is also compared to a cell-centered
finite volume scheme in terms of accuracy and time to solution. We find that
the method is accurate, robust and efficient. In particular, no post-processing
of the DG velocity field is necessary, in contrast to results reported by
several authors for decoupled schemes. Moreover, the solver scales well in
parallel, and three-dimensional problems with up to nearly 100 million degrees
of freedom per time step have been computed on 1,000 processors.
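The time-stepping pattern described above, a diagonally implicit Runge-Kutta discretization whose stage equations are solved by Newton's method, can be sketched on a small ODE system. The simplest DIRK scheme is implicit Euler; the dense linear solve below merely stands in for the algebraic multigrid solver used in the paper, and all names are illustrative.

```python
import numpy as np

def implicit_euler_step(f, jac, y0, dt, tol=1e-10, maxit=20):
    """One implicit Euler step (the simplest diagonally implicit RK scheme):
    solve the nonlinear system g(y) = y - y0 - dt*f(y) = 0 by Newton's method.
    In the paper the analogous systems arise from the DG discretization of the
    two-phase flow equations and the linear solves use algebraic multigrid."""
    y = y0.copy()
    for _ in range(maxit):
        g = y - y0 - dt * f(y)
        if np.linalg.norm(g) < tol:
            break
        J = np.eye(len(y)) - dt * jac(y)   # Jacobian of g at the current iterate
        y -= np.linalg.solve(J, g)         # Newton update
    return y
```

For a stiff linear test problem y' = -50y with dt = 0.1, the implicit Euler update has the closed form y1 = y0 / (1 + 50*dt), which Newton's method recovers in one iteration.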