
    Application of a primal-dual interior point algorithm using exact second order information with a novel non-monotone line search method to generally constrained minimax optimization problems

    This work presents the application of a primal-dual interior point method to minimax optimisation problems. The algorithm differs significantly from previous approaches in that it involves a novel non-monotone line search procedure, which uses a standard penalty function as the line-search merit function. The crucial novel concept is the discretisation of the penalty parameter over a finite range of orders of magnitude, with a memory list maintained for each such order. An implementation within a logarithmic barrier algorithm for handling bounds is presented, with capabilities for large-scale application. The case studies presented demonstrate the capabilities of the proposed methodology, which relies on reformulating minimax models as standard nonlinear optimisation models. Several case studies previously reported in the open literature are solved, with significantly better optimal solutions identified. We believe that the nature of the non-monotone line search scheme allows the search procedure to escape from local minima, hence the encouraging results obtained.
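
    The penalty-order memory idea can be sketched in a few lines. The Python fragment below is an illustrative reconstruction, not the authors' implementation: the names `merit` and `MEMORY`, the memory length of 5, and the acceptance test are all assumptions standing in for a generic non-monotone bound.

```python
# Hedged sketch of a non-monotone backtracking line search that keeps a
# separate memory list of recent merit values for each order of magnitude
# of the penalty parameter (an assumption-based reconstruction).
import math
from collections import defaultdict, deque

MEMORY = defaultdict(lambda: deque(maxlen=5))  # one merit history per penalty order

def merit(f, constraint_violation, rho):
    """A standard l1 penalty merit function: f(x) + rho * ||c(x)||_1."""
    return f + rho * constraint_violation

def nonmonotone_line_search(phi, rho, alpha0=1.0, shrink=0.5, max_iter=30):
    """Accept the first step whose merit does not exceed the worst value
    remembered for the current penalty order; otherwise backtrack.

    phi : callable mapping a step length alpha to the merit value there
    rho : current penalty parameter
    """
    order = int(math.floor(math.log10(max(rho, 1e-16))))  # discretised order of magnitude
    history = MEMORY[order]
    reference = max(history) if history else phi(0.0)     # non-monotone acceptance bound
    alpha = alpha0
    for _ in range(max_iter):
        value = phi(alpha)
        if value <= reference:      # non-monotone test: may exceed the last iterate's merit
            history.append(value)   # update the memory list for this order
            return alpha
        alpha *= shrink             # otherwise backtrack
    history.append(phi(alpha))
    return alpha
```

    Because the acceptance bound is the worst remembered merit value rather than the previous one, a step can temporarily increase the merit function, which is what lets the search move away from a local minimum.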

    Penalized contrast estimator for adaptive density deconvolution

    The authors consider the problem of estimating the density $g$ of independent and identically distributed variables $X_i$, from a sample $Z_1, \dots, Z_n$ where $Z_i = X_i + \sigma\epsilon_i$, $i = 1, \dots, n$, and $\epsilon$ is a noise independent of $X$, with $\sigma\epsilon$ having a known distribution. They present a model selection procedure that allows one to construct an adaptive estimator of $g$ and to derive non-asymptotic bounds for its $\mathbb{L}_2(\mathbb{R})$-risk. The estimator achieves the minimax rate of convergence in most cases where lower bounds are available. A simulation study illustrates the good practical performance of the method.
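
    As an illustration of this style of estimator, the sketch below implements spectral-cutoff density deconvolution with a penalized choice of the cutoff, assuming Gaussian noise; the paper's exact contrast and penalty constants differ, and `kappa` is an assumed tuning constant.

```python
# Minimal sketch of penalized spectral-cutoff deconvolution (illustrative,
# with an assumed Gaussian noise distribution and assumed penalty constant).
import numpy as np

def deconv_density(z, sigma, x_grid, cutoffs, kappa=2.0):
    """Estimate the density of X from Z = X + sigma*eps (eps assumed N(0,1)).

    For each cutoff m, invert the empirical characteristic function of Z
    divided by the noise characteristic function, then select m by
    minimising a penalized contrast  -||g_m||^2 + pen(m).
    """
    n, best = len(z), None
    for m in cutoffs:
        t = np.linspace(-m, m, 512)
        dt = t[1] - t[0]
        phi_z = np.exp(1j * np.outer(t, z)).mean(axis=1)   # empirical cf of Z
        phi_eps = np.exp(-0.5 * (sigma * t) ** 2)          # Gaussian noise cf
        ratio = phi_z / phi_eps                            # estimated cf of X on |t| <= m
        g = (np.exp(-1j * np.outer(x_grid, t)) @ ratio).real * dt / (2 * np.pi)
        norm2 = np.sum(np.abs(ratio) ** 2) * dt / (2 * np.pi)       # ||g_m||^2 via Parseval
        pen = kappa * np.sum(phi_eps ** -2) * dt / (2 * np.pi * n)  # grows with the cutoff
        score = -norm2 + pen
        if best is None or score < best[0]:
            best = (score, m, g)
    _, m_hat, g_hat = best
    return g_hat, m_hat
```

    The penalty term compensates for the inflation of variance caused by dividing by the small tails of the noise characteristic function, so larger cutoffs are only selected when the data support them.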

    A simple forward selection procedure based on false discovery rate control

    We propose the use of a new false discovery rate (FDR) controlling procedure as a penalized model selection method, and compare its performance to that of other penalized methods over a wide range of realistic settings: nonorthogonal design matrices, moderate and large pools of explanatory variables, and both sparse and nonsparse models, in the sense that they may include a small or large fraction of the potential variables (or even all of them). The comparison is carried out in a comprehensive simulation study, using a quantitative framework for performance comparison in the form of empirical minimaxity relative to a "random oracle": the oracle model selection performance on a data-dependent, forward-selected family of potential models. We show that FDR-based procedures perform well, and in particular the newly proposed method emerges as having empirical minimax performance. Interestingly, using an FDR level of 0.05 is a global best. Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/08-AOAS194.
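
    The general idea can be illustrated with a forward selection loop stopped by an FDR-style threshold: at step k the best remaining candidate enters only if its p-value falls below q*k/p, a Benjamini-Hochberg-type bound along the forward path. This is a hedged sketch of that idea, not the paper's exact procedure.

```python
# Illustrative forward selection with an FDR-style stopping rule
# (assumption-based sketch; the paper's procedure may differ in detail).
import numpy as np
from scipy import stats

def fdr_forward_select(X, y, q=0.05):
    n, p = X.shape
    selected, residual = [], y - y.mean()
    for k in range(1, p + 1):
        remaining = [j for j in range(p) if j not in selected]
        best_j, best_pval = None, 1.0
        for j in remaining:
            # F-test p-value for adding variable j to the current model
            A = np.column_stack([np.ones(n), X[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss1 = np.sum((y - A @ beta) ** 2)
            rss0 = np.sum(residual ** 2)
            df = n - A.shape[1]
            F = max(rss0 - rss1, 0.0) / (rss1 / df)
            pval = stats.f.sf(F, 1, df)
            if pval < best_pval:
                best_j, best_pval = j, pval
        if best_j is None or best_pval > q * k / p:   # FDR-style stopping rule
            break
        selected.append(best_j)
        A = np.column_stack([np.ones(n), X[:, selected]])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ beta
    return selected
```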

    Adaptive estimation of covariance matrices via Cholesky decomposition

    This paper studies the estimation of a large covariance matrix. We introduce a novel procedure called ChoSelect based on the Cholesky factor of the inverse covariance. The method uses a dimension reduction strategy by selecting the pattern of zeros of the Cholesky factor. Alternatively, ChoSelect can be interpreted as a graph estimation procedure for directed Gaussian graphical models. Our approach is particularly relevant when the variables under study have a natural ordering (e.g. time series) or, more generally, when the Cholesky factor is approximately sparse. ChoSelect achieves non-asymptotic oracle inequalities with respect to the Kullback-Leibler entropy. Moreover, it satisfies various adaptive properties from a minimax point of view. We also introduce and study a two-stage procedure that combines ChoSelect with the Lasso. This last method enables the practitioner to choose their own trade-off between statistical efficiency and computational complexity. Moreover, it is consistent under weaker assumptions than the Lasso. The practical performance of the different procedures is assessed on numerical examples.
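
    A minimal sketch of the regression view behind this kind of procedure is given below. It assumes the columns of X are already ordered and centred, and it substitutes LassoCV for the paper's penalized criterion to pick the zero pattern of each Cholesky row, followed by an OLS refit (the two-stage idea).

```python
# Illustrative two-stage sketch in the spirit of ChoSelect + Lasso (the
# paper uses a penalized-likelihood criterion; here LassoCV merely selects
# the sparsity pattern of each Cholesky row, which is then refit by OLS).
import numpy as np
from sklearn.linear_model import LassoCV

def cholesky_precision(X):
    """Sparse Cholesky factor of the inverse covariance via regressing each
    variable on its predecessors (assumes ordered, centred columns)."""
    n, p = X.shape
    T = np.eye(p)                # unit lower-triangular factor
    d = np.empty(p)              # innovation variances
    d[0] = X[:, 0].var()
    for j in range(1, p):
        preds, target = X[:, :j], X[:, j]
        support = np.flatnonzero(LassoCV(cv=5).fit(preds, target).coef_)
        if support.size:         # stage 2: OLS refit on the selected pattern
            beta, *_ = np.linalg.lstsq(preds[:, support], target, rcond=None)
            T[j, support] = -beta
        resid = target - (preds[:, support] @ beta if support.size else 0.0)
        d[j] = resid.var()
    # inverse covariance: Omega = T' D^{-1} T
    return T.T @ np.diag(1.0 / d) @ T
```

    Each row's regression only involves the variable's predecessors, which is where the natural ordering of the variables enters.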

    Statistical properties of the method of regularization with periodic Gaussian reproducing kernel

    The method of regularization with the Gaussian reproducing kernel is popular in the machine learning literature and successful in many practical applications. In this paper we consider the periodic version of the Gaussian kernel regularization. We show, in the white noise model setting, that in function spaces of very smooth functions, such as the infinite-order Sobolev space and the space of analytic functions, the method under consideration is asymptotically minimax; in finite-order Sobolev spaces, the method is rate optimal, and its efficiency in terms of constant, when compared with the minimax estimator, is reasonably high. The smoothing parameters in the periodic Gaussian regularization can be chosen adaptively without loss of asymptotic efficiency. The results derived in this paper give a partial explanation of the success of the Gaussian reproducing kernel in practice. Simulations are carried out to study the finite sample properties of the periodic Gaussian regularization. Comment: Published by the Institute of Mathematical Statistics (http://www.imstat.org) in the Annals of Statistics (http://www.imstat.org/aos/) at http://dx.doi.org/10.1214/00905360400000045.
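
    For concreteness, the sketch below fits a kernel ridge regression with a periodic Gaussian-type kernel on [0,1], written through its Fourier expansion k(s,t) = 1 + 2 Σ_j exp(-ρ j²) cos(2π j (s-t)); the width ρ, the truncation level J, and the ridge parameter are assumed tuning choices, not values from the paper.

```python
# Minimal sketch: kernel ridge regression with a periodic Gaussian-type
# kernel on [0,1], via a truncated Fourier expansion (assumed parameters).
import numpy as np

def periodic_gauss_kernel(s, t, rho=0.5, J=50):
    """k(s,t) = 1 + 2 * sum_j exp(-rho*j^2) * cos(2*pi*j*(s-t)), truncated at J."""
    j = np.arange(1, J + 1)
    diff = np.subtract.outer(s, t)
    return 1.0 + 2.0 * np.tensordot(np.exp(-rho * j ** 2),
                                    np.cos(2 * np.pi * j[:, None, None] * diff),
                                    axes=1)

def kernel_ridge_fit(x, y, lam=1e-3, rho=0.5):
    """Solve (K + n*lam*I) alpha = y and return the fitted function."""
    K = periodic_gauss_kernel(x, x, rho)
    alpha = np.linalg.solve(K + lam * len(x) * np.eye(len(x)), y)
    return lambda x_new: periodic_gauss_kernel(x_new, x, rho) @ alpha

# usage on noisy samples of a periodic signal
x = np.random.rand(100)
y = np.sin(2 * np.pi * x) + 0.1 * np.random.randn(100)
fhat = kernel_ridge_fit(x, y)
```

    The geometric decay exp(-ρ j²) of the Fourier weights is what makes the kernel extremely smooth, matching the very smooth function classes considered in the paper.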