
    A first-order stochastic primal-dual algorithm with correction step

    We investigate the convergence properties of a stochastic primal-dual splitting algorithm for solving structured monotone inclusions involving the sum of a cocoercive operator and a composite monotone operator. The proposed method is the stochastic extension to monotone inclusions of a proximal method studied in Y. Drori, S. Sabach, and M. Teboulle, "A simple algorithm for a class of nonsmooth convex-concave saddle-point problems" (2015), and I. Loris and C. Verhoeven, "On a generalization of the iterative soft-thresholding algorithm for the case of non-separable penalty" (2011), for saddle-point problems. It consists of a forward step determined by the stochastic evaluation of the cocoercive operator, a backward step in the dual variables involving the resolvent of the monotone operator, and an additional forward step that reuses the stochastic evaluation of the cocoercive operator from the first step. We prove weak almost sure convergence of the iterates by showing that the primal-dual sequence generated by the method is stochastically quasi-Fejér monotone with respect to the set of zeros of the considered primal and dual inclusions. Additional results on ergodic convergence in expectation are provided for the special case of saddle-point models.
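
    As a hedged reading aid (our notation, not taken from the paper): for the saddle-point special case min_x f(x) + g(Lx), with f smooth, L linear, and step sizes \tau, \sigma, the deterministic template behind the method reads as below; the stochastic version replaces \nabla f(x_k) by an unbiased estimate that is computed once and reused in the correction step.

        \begin{align*}
          \bar{x}_{k+1} &= x_k - \tau\bigl(\nabla f(x_k) + L^{*} y_k\bigr)
            && \text{(forward step)}\\
          y_{k+1} &= \operatorname{prox}_{\sigma g^{*}}\bigl(y_k + \sigma L \bar{x}_{k+1}\bigr)
            && \text{(backward step in the dual variables)}\\
          x_{k+1} &= x_k - \tau\bigl(\nabla f(x_k) + L^{*} y_{k+1}\bigr)
            && \text{(correction step, same evaluation of $\nabla f$)}
        \end{align*}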

    Data Driven Surrogate Based Optimization in the Problem Solving Environment WBCSim

    Large scale, multidisciplinary engineering designs are inherently difficult owing to the complexity and dimensionality of the problems involved. Direct coupling between the analysis codes and the optimization routines can be prohibitively time consuming because of the cost of the underlying simulation codes. One way of tackling this problem is to construct computationally cheaper approximations of the expensive simulations that mimic the behavior of the simulation model as closely as possible. This paper presents a data driven, surrogate based optimization algorithm that uses a trust region based sequential approximate optimization (SAO) framework and a statistical sampling approach based on design of experiment (DOE) arrays. The algorithm is implemented using techniques from two packages, SURFPACK and SHEPPACK, which provide a collection of approximation algorithms for building the surrogates; three DOE techniques, full factorial (FF), Latin hypercube sampling (LHS), and central composite design (CCD), are used to train the surrogates. The results are compared with those obtained by directly coupling an optimizer with the simulation code. The biggest concern in using an SAO framework based on statistical sampling is the generation of the required database: as the number of design variables grows, the computational cost of generating it grows rapidly. A data driven approach is proposed to tackle this situation, in which the expensive simulation is run if and only if a nearby data point does not already exist in the cumulatively growing database. Over time the database matures and is enriched as more optimizations are performed. Results show that the proposed methodology dramatically reduces the total number of calls to the expensive simulation during the optimization process.
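
    A minimal sketch of the gating idea described above (the names expensive_sim, SimulationDatabase, and the tolerance rtol are illustrative, not from WBCSim): the expensive code is called only when no stored design point lies within a given distance of the query, and every genuine run enriches the database.

        import numpy as np

        class SimulationDatabase:
            """Cumulative cache: run the expensive simulation only when no
            previously stored design point is close enough to the query."""

            def __init__(self, expensive_sim, rtol=1e-2):
                self.sim = expensive_sim   # callable: design vector -> response
                self.rtol = rtol           # distance below which a point is "nearby"
                self.points, self.values = [], []

            def evaluate(self, x):
                x = np.asarray(x, dtype=float)
                if self.points:
                    d = np.linalg.norm(np.vstack(self.points) - x, axis=1)
                    i = int(np.argmin(d))
                    if d[i] <= self.rtol:  # nearby point exists: reuse its response
                        return self.values[i]
                y = self.sim(x)            # no nearby point: pay for a real run
                self.points.append(x)
                self.values.append(y)
                return y

    Both the DOE samples (FF, LHS, CCD) used to train the surrogates and the iterates of the trust region SAO loop would route their evaluations through evaluate, so the database keeps maturing across successive optimizations.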

    Nonconcave penalized composite conditional likelihood estimation of sparse Ising models

    The Ising model is a useful tool for studying complex interactions within a system. The estimation of such a model, however, is rather challenging, especially in the presence of high-dimensional parameters. In this work, we propose efficient procedures for learning a sparse Ising model based on a penalized composite conditional likelihood with nonconcave penalties. Nonconcave penalized likelihood estimation has received a lot of attention in recent years; however, such an approach is computationally prohibitive under high-dimensional Ising models. To overcome these difficulties, we extend the methodology and theory of nonconcave penalized likelihood to penalized composite conditional likelihood estimation. The proposed method can be efficiently implemented by taking advantage of coordinate-ascent and minorization-maximization principles. Asymptotic oracle properties of the proposed method are established with NP-dimensionality. Optimality of the computed local solution is discussed. We demonstrate its finite sample performance via simulation studies and further illustrate our proposal by studying the Human Immunodeficiency Virus type 1 protease structure based on data from the Stanford HIV drug resistance database. Our statistical learning results match the known biological findings very well, although no prior biological information is used in the data analysis procedure.
    Comment: Published at http://dx.doi.org/10.1214/12-AOS1017 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
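
    For concreteness, a sketch of this kind of objective under common conventions (node states x_j in {-1, +1}, interaction parameters \theta_{jk}, and a nonconcave penalty p_\lambda such as SCAD; our notation, not necessarily the paper's): the composite conditional likelihood sums, over observations and nodes, the log-probability of each node given all the others,

        \begin{align*}
          \min_{\Theta}\;
            -\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{p}
              \log P_{\Theta}\bigl(x_{ij}\mid x_{i,-j}\bigr)
            + \sum_{j<k} p_{\lambda}\bigl(\lvert\theta_{jk}\rvert\bigr),
          \quad\text{where}\quad
          P_{\Theta}\bigl(x_{j}\mid x_{-j}\bigr)
            = \frac{\exp\bigl(x_{j}\sum_{k\neq j}\theta_{jk}x_{k}\bigr)}
                   {2\cosh\bigl(\sum_{k\neq j}\theta_{jk}x_{k}\bigr)}.
        \end{align*}

    Each conditional term is a logistic regression of one node on the rest, which is what makes coordinate-wise updates cheap; majorizing the nonconcave penalty (e.g., by a local linear approximation) yields the minorization-maximization step.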

    Convergence of iterative methods based on Neumann series for composite materials: theory and practice

    Iterative Fast Fourier Transform methods are useful for calculating the fields in composite materials and their macroscopic response. These methods iterate back and forth until convergence, enforcing the differential constraints in Fourier space and the constitutive law in real space. They correspond to series expansions of appropriate operators and to series expansions of the effective tensor as a function of the component moduli. It is shown that the singularity structure of this function can shed much light on the convergence properties of the iterative Fast Fourier Transform methods. We look at a model example of a square array of conducting square inclusions, for which there is an exact formula for the effective conductivity (Obnosov). Theoretically, some of the methods converge when the inclusions have zero or even negative conductivity. However, the numerics do not always confirm this extended range of convergence, and they show that accuracy is lost after relatively few iterations; there is little point in iterating beyond this. Accuracy improves when the grid size is reduced, showing that the discrepancy is linked to the discretization. Finally, it is shown that none of the three iterative schemes investigated outperforms the others for all possible microstructures and all contrasts.
    Comment: 41 pages, 14 figures, 1 table
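
    A minimal sketch (not the paper's code) of the simplest scheme of this family for 2-D scalar conductivity, assuming a periodic n-by-n pixel grid sigma of local conductivities, a reference conductivity sigma0, and a unit mean field along x; the differential constraint is enforced through the Green operator in Fourier space and the constitutive law pointwise in real space.

        import numpy as np

        def basic_fft_scheme(sigma, sigma0, n_iter=500, tol=1e-8):
            """Fixed-point (Neumann series) iteration
            e <- E - Gamma0((sigma - sigma0) e) on a periodic grid."""
            n = sigma.shape[0]
            xi = np.fft.fftfreq(n)
            XI0, XI1 = np.meshgrid(xi, xi, indexing="ij")
            xi2 = XI0**2 + XI1**2
            xi2[0, 0] = 1.0                      # avoid 0/0; the mean mode is set below
            e = np.zeros((2, n, n)); e[0] = 1.0  # start from the mean field E = (1, 0)
            for _ in range(n_iter):
                tau = (sigma - sigma0) * e       # polarization (constitutive law, real space)
                tau_h = np.fft.fft2(tau, axes=(1, 2))
                dot = XI0 * tau_h[0] + XI1 * tau_h[1]
                g0 = XI0 * dot / (sigma0 * xi2)  # Green operator (Fourier space)
                g1 = XI1 * dot / (sigma0 * xi2)
                g0[0, 0] = 0.0; g1[0, 0] = 0.0   # keep the prescribed mean field
                e_new = np.empty_like(e)
                e_new[0] = 1.0 - np.fft.ifft2(g0).real
                e_new[1] = 0.0 - np.fft.ifft2(g1).real
                if np.max(np.abs(e_new - e)) < tol:
                    return np.mean(sigma * e_new[0])
                e = e_new
            return np.mean(sigma * e[0])         # effective conductivity, xx component

    For example, sigma = np.ones((64, 64)) with sigma[16:48, 16:48] = 10.0 gives a square inclusion at area fraction 1/4, the geometry covered by the Obnosov formula; convergence slows as the contrast between the phases grows, consistent with the series viewpoint discussed in the abstract.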