182 research outputs found

    Assessment of Conventional Density Functional Schemes for Computing the Polarizabilities and Hyperpolarizabilities of Conjugated Oligomers: An Ab Initio Investigation of Polyacetylene Chains

    DFT schemes based on conventional and less conventional exchange-correlation (XC) functionals have been employed to determine the polarizability and second hyperpolarizability of π-conjugated polyacetylene chains. These functionals fail in one or more of several ways: (i) the correlation correction to α is either much too small or in the wrong direction, leading to an overestimate; (ii) γ is significantly overestimated; (iii) the chain length dependence is excessively large, particularly for γ and for the more alternant system; and (iv) the bond length alternation effects on γ are either underestimated or qualitatively incorrect. The poor results with the asymptotically correct van Leeuwen-Baerends XC potential show that the overestimations are not related to the asymptotic behavior of the potential. These failures are described in terms of the separate effects of the exchange and the correlation parts of the XC functionals. They are related to the short-sightedness of the XC potentials, which are relatively insensitive to the polarization charge induced by the external electric field at the chain ends. © 1998 American Institute of Physics
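
    For reference, the polarizability α and second hyperpolarizability γ discussed in this abstract are, under the standard convention (assumed here; the abstract itself does not restate it), the expansion coefficients of the field-dependent dipole moment:

    $$
    \mu_i(F) = \mu_i^{0} + \alpha_{ij} F_j + \tfrac{1}{2}\,\beta_{ijk} F_j F_k + \tfrac{1}{6}\,\gamma_{ijkl} F_j F_k F_l + \cdots
    $$

    Overestimating α or γ therefore means overestimating how strongly the chain's dipole responds to an applied field F.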

    Robust and Stable Predictive Control with Bounded Uncertainties

    Min-max optimization is often used to improve robustness in Model Predictive Control (MPC). An analogous approach is the BDU (Bounded Data Uncertainties) method, a regularization technique for least-squares problems that takes the uncertainty bounds into account. Stability of MPC can be achieved by using terminal constraints, as in the CRHPC (Constrained Receding-Horizon Predictive Control) algorithm. By combining the BDU and CRHPC methods, a robust and stable MPC is obtained, which is the aim of this work. BDU also offers a guided way to tune the penalization parameter for the control effort in MPC, which is otherwise set empirically. (C) 2008 Elsevier Inc. All rights reserved. This work has been partially financed by DPI2005-07835 and DPI2004-08383-C03-02 MEC-FEDER. Ramos Fernández, C.; Martínez Iranzo, M. A.; Sanchís Saez, J.; Herrero Durá, J. M. (2008). Robust and Stable Predictive Control with Bounded Uncertainties. Journal of Mathematical Analysis and Applications, 342(2):1003-1014. https://doi.org/10.1016/j.jmaa.2007.12.073
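
    The abstract does not reproduce the BDU formulas, but the flavor of a BDU-style control update can be sketched as a regularized least-squares step. The sketch below uses hypothetical names, and the uncertainty bound eta is used directly as the regularization weight, whereas the actual BDU method derives that weight from a secular equation tied to the bound on the data uncertainty.

```python
import numpy as np

def bdu_like_control_step(A, b, eta):
    """Illustrative regularized least-squares control update.

    A   : prediction matrix mapping the control-move sequence to predicted outputs
    b   : desired correction of the predicted output trajectory
    eta : assumed bound on the uncertainty in A, used here directly as the
          regularization weight (the true BDU rule computes this weight by
          solving a secular equation, so this is only a sketch)
    """
    n = A.shape[1]
    # Ridge-type solution of min ||A du - b||^2 + eta * ||du||^2
    return np.linalg.solve(A.T @ A + eta * np.eye(n), A.T @ b)

# Toy example: 2 control moves, 4 predicted outputs
A = np.array([[1.0, 0.0],
              [1.5, 1.0],
              [1.8, 1.5],
              [2.0, 1.8]])
b = np.array([0.5, 0.8, 1.0, 1.1])
print(bdu_like_control_step(A, b, eta=0.1))
```

    Larger eta yields more cautious control moves, which matches the abstract's point that the uncertainty bound gives a guided way to set the control-effort penalty instead of tuning it empirically.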

    Algorithm Engineering in Robust Optimization

    Robust optimization is a young and emerging field of research that has received considerably increased interest over the last decade. In this paper, we argue that the algorithm engineering methodology fits the field of robust optimization very well and yields a rewarding new perspective on both the current state of research and open research directions. To this end we go through the algorithm engineering cycle of design and analysis of concepts, development and implementation of algorithms, and theoretical and experimental evaluation. We show that many ideas of algorithm engineering have already been applied in publications on robust optimization. Most work on robust optimization is devoted to the analysis of concepts and the development of algorithms, some papers deal with the evaluation of a particular concept in case studies, and work on the comparison of concepts is only just starting. What is still missing in many papers on robustness is the link that feeds the results of the experiments back into the design.

    Optimization in High Dimensions via Accelerated, Parallel, and Proximal Coordinate Descent

    We propose a new randomized coordinate descent method for minimizing the sum of convex functions, each of which depends on a small number of coordinates only. Our method (APPROX) is simultaneously Accelerated, Parallel and PROXimal; this is the first time such a method has been proposed. In the special case when the number of processors equals the number of coordinates, the method converges at the rate $2\bar{\omega}\bar{L}R^2/(k+1)^2$, where $k$ is the iteration counter, $\bar{\omega}$ is a data-weighted average degree of separability of the loss function, $\bar{L}$ is the average of the Lipschitz constants associated with the coordinates and individual functions in the sum, and $R$ is the distance of the initial point from the minimizer. We show that the method can be implemented without the need to perform full-dimensional vector operations, which is the major bottleneck of accelerated coordinate descent, rendering it impractical. The fact that the method depends on the average degree of separability, and not on the maximum degree, can be attributed to the use of new safe large stepsizes, leading to an improved expected separable overapproximation (ESO). These are of independent interest and can be utilized in all existing parallel randomized coordinate descent algorithms based on the concept of ESO. In special cases, our method recovers several classical and recent algorithms such as simple and accelerated proximal gradient descent, as well as serial, parallel and distributed versions of randomized block coordinate descent. Due to this flexibility, APPROX has been used successfully by the authors in a graduate class setting as a modern introduction to deterministic and randomized proximal gradient methods. Our bounds match or improve on the best known bounds for each of the methods APPROX specializes to. Our method has applications in a number of areas, including machine learning, submodular optimization, and linear and semidefinite programming.
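
    As a rough illustration of the family of methods described here, the sketch below implements the serial, non-accelerated randomized proximal coordinate descent that the abstract lists among APPROX's special cases, applied to an ℓ1-regularized least-squares problem. The problem instance and all names are illustrative, not the authors' code; the full APPROX method additionally uses acceleration and parallel block updates with ESO-derived stepsizes.

```python
import numpy as np

def prox_coordinate_descent(A, b, lam, iters=2000, seed=0):
    """Randomized proximal coordinate descent for
    0.5 * ||A x - b||^2 + lam * ||x||_1.

    Serial, non-accelerated sketch of one special case the APPROX
    framework recovers; not the authors' implementation.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    L = (A ** 2).sum(axis=0)            # coordinate-wise Lipschitz constants
    x = np.zeros(n)
    r = A @ x - b                       # residual A x - b, kept up to date
    for _ in range(iters):
        i = rng.integers(n)             # sample one coordinate uniformly at random
        g = A[:, i] @ r                 # partial derivative along coordinate i
        z = x[i] - g / L[i]             # coordinate gradient step
        xi_new = np.sign(z) * max(abs(z) - lam / L[i], 0.0)  # soft-threshold (prox step)
        r += A[:, i] * (xi_new - x[i])  # cheap residual update, no full recompute
        x[i] = xi_new
    return x

# Toy example: recover a sparse vector from noisy linear measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(prox_coordinate_descent(A, b, lam=0.1), 2))
```

    Note how each iteration touches a single column of A rather than performing a full-dimensional vector operation, which is the implementation point the abstract emphasizes.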

    Medicinal plants – prophylactic and therapeutic options for gastrointestinal and respiratory diseases in calves and piglets? A systematic review

