
    Semidefinite relaxations for semi-infinite polynomial programming

    This paper studies how to solve semi-infinite polynomial programming (SIPP) problems by the semidefinite programming (SDP) relaxation method. We first introduce two SDP relaxation methods for solving polynomial optimization problems with finitely many constraints. Then we propose an exchange algorithm with SDP relaxations to solve SIPP problems with a compact index set. Finally, we extend the proposed method to SIPP problems with a noncompact index set via homogenization. Numerical results show that the algorithm is efficient in practice. Comment: 23 pages, 4 figures
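    The exchange idea in this abstract can be sketched in miniature. The toy below is illustrative only: the finite subproblem, which the paper solves by SDP relaxation, is here a one-variable problem solved analytically, and the separation step is a grid search over the compact index set.

```python
import numpy as np

# Toy exchange method for a semi-infinite program (a sketch of the exchange
# scheme, NOT the paper's SDP-based algorithm): minimize x subject to
# x >= sin(u) for ALL u in the compact index set [0, 2*pi].
# The exact optimum is x* = 1.

def exchange_method(tol=1e-8, max_iter=50):
    U = [0.0]                                   # finite working subset of the index set
    grid = np.linspace(0.0, 2 * np.pi, 10001)   # discretization for the separation step
    x = 0.0
    for _ in range(max_iter):
        # Solve the finite relaxation: min x s.t. x >= sin(u) for u in U
        x = max(np.sin(u) for u in U)
        # Separation oracle: find the most violated index over the full set
        viol = np.sin(grid) - x
        j = int(np.argmax(viol))
        if viol[j] <= tol:                      # no violation: x is feasible, hence optimal
            break
        U.append(float(grid[j]))                # exchange step: add the violating index
    return x

print(exchange_method())   # close to 1.0
```

    The loop alternates between solving a finitely constrained relaxation and adding the worst violated constraint, which is exactly the outer structure the paper wraps around its SDP subproblem solver.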

    Smoothing Proximal Gradient Method for General Structured Sparse Learning

    We study the problem of learning high-dimensional regression models regularized by a structured-sparsity-inducing penalty that encodes prior structural information on either the input or the output side. We consider two widely adopted types of such penalties as our motivating examples: 1) the overlapping-group lasso penalty, based on the l1/l2 mixed norm, and 2) the graph-guided fusion penalty. For both types of penalties, their non-separability has made developing an efficient optimization method a challenging problem. In this paper, we propose a general optimization approach, the smoothing proximal gradient method, which can solve structured sparse regression problems with a smooth convex loss and a wide spectrum of structured-sparsity-inducing penalties. Our approach is based on a general smoothing technique of Nesterov. It achieves a convergence rate faster than that of the standard first-order method, the subgradient method, and is much more scalable than the most widely used interior-point method. Numerical results are reported to demonstrate the efficiency and scalability of the proposed method. Comment: arXiv admin note: substantial text overlap with arXiv:1005.471
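    The core of Nesterov's smoothing idea can be sketched on the simplest nonsmooth penalty. The instance below is illustrative, not the paper's structured penalties: the l1 norm, written as max over |a_i| <= 1 of a^T x, is replaced by the smoothed max over a of a^T x - (mu/2)||a||^2, whose gradient is clip(x/mu, -1, 1), and plain gradient descent is run on the smoothed lasso objective. All problem sizes and constants here are made up for the demonstration.

```python
import numpy as np

# Hedged sketch of Nesterov smoothing on a toy lasso problem:
# minimize 0.5*||Ax - b||^2 + lam * Omega_mu(x), where Omega_mu is the
# Huber-type smoothing of ||x||_1 with parameter mu.

rng = np.random.default_rng(0)
n, p = 40, 10
A = rng.standard_normal((n, p))
x_true = np.zeros(p)
x_true[:3] = [2.0, -1.5, 1.0]                     # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(n)

lam, mu = 0.5, 1e-3
# Lipschitz constant of the smoothed gradient: ||A||_2^2 + lam/mu
L = np.linalg.norm(A, 2) ** 2 + lam / mu

x = np.zeros(p)
for _ in range(5000):
    # gradient of the smooth loss plus gradient of the smoothed l1 penalty
    grad = A.T @ (A @ x - b) + lam * np.clip(x / mu, -1.0, 1.0)
    x -= grad / L

print(np.round(x, 2))   # first three entries near x_true, the rest near zero
```

    The paper applies the same smoothing to non-separable penalties (overlapping groups, graph-guided fusion) and pairs it with an accelerated proximal gradient scheme; the toy above only shows why smoothing turns a nonsmooth term into something a first-order method can handle.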

    Metropolis Sampling

    Monte Carlo (MC) sampling methods are widely applied in Bayesian inference, system simulation, and optimization problems. Markov chain Monte Carlo (MCMC) algorithms are a well-known class of MC methods that generate a Markov chain with the desired invariant distribution. In this document, we focus on the Metropolis-Hastings (MH) sampler, which can be considered the atom of the MCMC techniques, introducing the basic notions and its different properties. We describe in detail all the elements involved in the MH algorithm and its most relevant variants. Several improvements and recent extensions proposed in the literature are also briefly discussed, providing a quick but exhaustive overview of the current world of Metropolis-based sampling. Comment: Wiley StatsRef-Statistics Reference Online, 201
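    A minimal random-walk Metropolis-Hastings sampler, the textbook form of the algorithm surveyed above (the target and the proposal scale below are illustrative choices):

```python
import numpy as np

# Random-walk Metropolis-Hastings. Target: an unnormalized standard
# Gaussian density exp(-x^2/2); proposal: Gaussian random walk.

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        x_prop = x + step * rng.standard_normal()   # symmetric proposal
        # Accept with probability min(1, pi(x')/pi(x)); the proposal
        # density q cancels because the random walk is symmetric.
        if np.log(rng.uniform()) < log_target(x_prop) - log_target(x):
            x = x_prop
        samples[i] = x                              # rejected moves repeat x
    return samples

s = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_samples=50000)
print(round(float(s.mean()), 2), round(float(s.var()), 2))   # near 0.0 and 1.0
```

    Every variant the abstract mentions (independent MH, adaptive proposals, multiple-try schemes) keeps this accept/reject skeleton and changes only how `x_prop` is generated.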

    Generic closed loop controller for power regulation in dual active bridge DC-DC converter with current stress minimization

    This paper presents a comprehensive and generalized analysis of the bidirectional dual active bridge (DAB) DC/DC converter using triple phase shift (TPS) control to enable closed loop power regulation while minimizing current stress. The key new achievements are: a generic analysis in terms of the possible conversion ratios/converter voltage gains (i.e. buck/boost/unity); per-unit-based equations regardless of DAB ratings; and a new, simple closed loop controller implementable in real time to meet the desired power transfer regulation at minimum current stress. Per-unit-based analytical expressions are derived for the converter AC RMS current as well as the power transferred. An offline particle swarm optimization (PSO) method is used to obtain an extensive set of TPS ratios that minimize the RMS current over the entire bidirectional power range of -1 to 1 per unit. The extensive set of results obtained from PSO forms a generic data pool, which is carefully analyzed to derive simple, useful relations. These relations enable a generic closed loop controller design that can be implemented in real time, avoiding the extensive computational capacity that iterative optimization techniques require. A detailed Simulink DAB switching model is used to validate the precision of the proposed closed loop controller under various operating conditions. An experimental prototype also substantiates the results achieved.
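    The offline PSO step can be sketched generically. The loop below is a standard particle swarm optimizer with illustrative hyperparameters; the objective is a toy quadratic standing in for the converter's RMS-current expression, which is not reproduced here.

```python
import numpy as np

# Generic particle swarm optimization (hedged sketch of the offline PSO
# stage; decision variables play the role of the TPS ratios, and the toy
# quadratic stands in for the per-unit RMS-current objective).

def pso(objective, dim, n_particles=30, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5, lo=-1.0, hi=1.0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal bests
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()         # global best
    for _ in range(iters):
        r1 = rng.uniform(size=x.shape)
        r2 = rng.uniform(size=x.shape)
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(np.min(pbest_f))

best, fbest = pso(lambda z: float(np.sum((z - 0.3) ** 2)), dim=3)
print(np.round(best, 3), round(fbest, 6))   # best near [0.3, 0.3, 0.3]
```

    In the paper this search runs offline over the operating range; the resulting data pool is then fitted with the simple closed-form relations that the real-time controller evaluates instead of re-running the optimizer.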

    Reflection methods for user-friendly submodular optimization

    Recently, it has become evident that submodularity naturally captures widely occurring concepts in machine learning, signal processing, and computer vision. Consequently, there is a need for efficient optimization procedures for submodular functions, especially for minimization problems. While general submodular minimization is challenging, we propose a new method that exploits the existing decomposability of submodular functions. In contrast to previous approaches, our method is neither approximate nor impractical, nor does it need any cumbersome parameter tuning. Moreover, it is easy to implement and parallelize. A key component of our method is a formulation of the discrete submodular minimization problem as a continuous best-approximation problem that is solved through a sequence of reflections, and whose solution can be easily thresholded to obtain an optimal discrete solution. This method solves both the continuous and discrete formulations of the problem, and therefore has applications in learning, inference, and reconstruction. In our experiments, we illustrate the benefits of our method on two image segmentation tasks. Comment: Neural Information Processing Systems (NIPS), United States (2013
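    The "sequence of reflections" can be sketched with the averaged alternating reflections (Douglas-Rachford) iteration on two toy convex sets, a hyperplane and the nonnegative orthant, standing in for the polytopes that arise from the decomposed submodular function:

```python
import numpy as np

# Averaged alternating reflections (Douglas-Rachford) for convex
# feasibility -- an illustrative sketch of the reflection machinery,
# not the paper's submodular base polytopes.
# Sets: A = {x : x1 + x2 = 1}, B = the nonnegative orthant.

a, b = np.array([1.0, 1.0]), 1.0
proj_A = lambda x: x - (a @ x - b) / (a @ a) * a   # projection onto the hyperplane
proj_B = lambda x: np.maximum(x, 0.0)              # projection onto the orthant
refl = lambda proj, x: 2 * proj(x) - x             # reflection through a set

z = np.array([3.0, -2.0])                          # arbitrary starting point
for _ in range(200):
    z = 0.5 * (z + refl(proj_B, refl(proj_A, z)))  # Douglas-Rachford iteration
x = proj_A(z)                                      # "shadow" iterate lies in A ∩ B

print(np.round(x, 4))   # -> [0.5 0.5]
```

    Each step needs only the two projections, which is what makes the scheme parameter-free and easy to parallelize across the components of a decomposable function, as the abstract emphasizes.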