
    Acceleration and new analysis of convex optimization algorithms

    Recent years have witnessed a resurgence of the Frank-Wolfe (FW) algorithm, also known as the conditional gradient method, in sparse optimization and large-scale machine learning problems with smooth convex objectives. Compared to projected or proximal gradient methods, such a projection-free method saves the computational cost of orthogonal projections onto the constraint set. Meanwhile, FW also yields solutions with sparse structure. Despite these promising properties, FW does not enjoy the optimal convergence rates achieved by projection-based accelerated methods. We conduct a detailed survey of recent attempts to accelerate FW in different settings and highlight where the difficulty lies when aiming for global linear rates in theory.
    On the other hand, the FW algorithm is affine-covariant and enjoys accelerated convergence rates when the constraint set is strongly convex. However, these results rely on norm-dependent assumptions, usually incurring non-affine-invariant bounds, in contradiction with FW's affine-covariant property. In this work, we introduce new structural assumptions on the problem (such as directional smoothness) and derive an affine-invariant, norm-independent analysis of Frank-Wolfe. Based on our analysis, we propose an affine-invariant backtracking line-search. Interestingly, we show that typical backtracking line-search techniques using smoothness of the objective function surprisingly converge to an affine-invariant step size, despite using affine-dependent norms in the computation of step sizes. This indicates that we do not necessarily need to know the structure of the sets in advance to enjoy the affine-invariant accelerated rate. Additionally, we provide a promising direction to accelerate FW over strongly convex sets using duality gap techniques and a new notion of smoothness. In another line of research, we study algorithms beyond first-order methods. Quasi-Newton techniques approximate the Newton step by estimating the Hessian using the so-called secant equations. Some of these methods compute the Hessian using several secant equations but produce non-symmetric updates. Other quasi-Newton schemes, such as BFGS, enforce symmetry but cannot satisfy more than one secant equation. We propose a new type of symmetric quasi-Newton update using several secant equations in a least-squares sense. Our approach generalizes and unifies the design of quasi-Newton updates and satisfies provable robustness guarantees.
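    As a concrete illustration of the projection-free mechanism discussed above, the following is a minimal Python sketch of the classical FW iteration, not the accelerated or affine-invariant variants developed in this work: a linear minimization oracle replaces the orthogonal projection, the standard 2/(k+2) open-loop step size is used, and the FW duality gap serves as a stopping certificate. The ℓ1-ball oracle and the quadratic objective are illustrative assumptions.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, max_iter=1000, tol=1e-6):
    """Classical Frank-Wolfe (conditional gradient) method, minimal sketch.

    grad: gradient of the smooth convex objective.
    lmo:  linear minimization oracle, s = argmin_{s in C} <g, s>,
          which replaces the orthogonal projection onto C.
    """
    x = x0.copy()
    for k in range(max_iter):
        g = grad(x)
        s = lmo(g)                       # projection-free step: solve a linear subproblem
        gap = g @ (x - s)                # FW duality gap; a certificate of suboptimality
        if gap <= tol:
            break
        gamma = 2.0 / (k + 2)            # standard open-loop step size
        x = (1 - gamma) * x + gamma * s  # convex combination keeps x feasible
    return x

# Illustrative use: minimize ||x - b||^2 over the l1 ball of radius 1.
# The l1-ball LMO returns a signed unit coordinate vector (a sparse vertex),
# which is why FW iterates have sparse structure.
b = np.array([0.8, -0.3, 0.1])
grad = lambda x: 2 * (x - b)

def l1_lmo(g):
    s = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    s[i] = -np.sign(g[i])
    return s

x_star = frank_wolfe(grad, l1_lmo, np.zeros(3))
```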

    A Brief Review of Cuckoo Search Algorithm (CSA) Research Progression from 2010 to 2013

    The Cuckoo Search Algorithm is a swarm intelligence algorithm based on the breeding behavior of the cuckoo bird. This paper gives a brief insight into the advancement of the Cuckoo Search Algorithm from 2010 to 2013. The first half of the paper presents the publication trend of the Cuckoo Search Algorithm. The remainder briefly explains the contribution of each individual publication related to the Cuckoo Search Algorithm. It is believed that this paper will greatly benefit readers who need a bird's-eye view of the Cuckoo Search Algorithm's publication trend.
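    For readers who want more than a publication-trend view, the following is a minimal Python sketch of the standard Cuckoo Search loop in Yang and Deb's original formulation: Lévy-flight moves biased toward the current best nest, plus abandonment of a fraction pa of nests. The parameter values and the random re-initialization of abandoned nests are common simplifications assumed here, not details taken from this review.

```python
import numpy as np
from math import gamma, sin, pi

def levy_steps(rng, shape, beta=1.5):
    """Levy-stable step lengths via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, shape)
    v = rng.normal(0.0, 1.0, shape)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n_nests=15, pa=0.25, n_iter=200, lb=-5.0, ub=5.0, seed=0):
    """Minimal Cuckoo Search: minimize f over [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    nests = rng.uniform(lb, ub, (n_nests, dim))
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(n_iter):
        best = nests[np.argmin(fit)]
        # A cuckoo lays an egg: Levy flight biased toward the current best nest.
        new = np.clip(nests + 0.01 * levy_steps(rng, (n_nests, dim)) * (nests - best), lb, ub)
        new_fit = np.apply_along_axis(f, 1, new)
        better = new_fit < fit
        nests[better], fit[better] = new[better], new_fit[better]
        # The host discovers alien eggs: abandon a fraction pa of nests.
        abandon = rng.random(n_nests) < pa
        if abandon.any():
            nests[abandon] = rng.uniform(lb, ub, (abandon.sum(), dim))
            fit[abandon] = np.apply_along_axis(f, 1, nests[abandon])
    return nests[np.argmin(fit)], fit.min()

# Illustrative use: minimize the sphere function in 5 dimensions.
best_x, best_f = cuckoo_search(lambda x: float(np.sum(x ** 2)), dim=5)
```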

    Variant-oriented Planning Models for Parts/Products Grouping, Sequencing and Operations

    This research aims at developing novel methods for utilizing the commonality between part/product variants to make modern manufacturing systems more flexible, adaptable, and agile in dealing with lower volume per variant while minimizing total setup changes between variants. Four models are developed for use in four important domains of manufacturing systems: production sequencing, product family formation, production flow, and retrieval of product operation sequences. In all these domains, capitalizing on commonality between part/product variants plays a pivotal role. For production sequencing, a new policy based on setup similarity between product variants is proposed, and its results are compared with those of a mathematical model developed for a permutation flow shop. The results show the proposed algorithm is capable of finding solutions in less than 0.02 seconds with an average error of 1.2%. For product family formation, a novel operation-flow-based similarity coefficient is developed for variants having networked structures and is integrated with two other similarity coefficients, operation similarity and volume similarity, to provide a more comprehensive similarity coefficient. Grouping variants based on the proposed integrated similarity coefficient improves changeover time and utilization of the system. A sequencing method, as a secondary application of this approach, is also developed. For production flow, a new mixed-integer programming (MIP) model is developed to assign the operations of a family of product variants to candidate machines and to select the best place for each machine among the candidate locations. The final sequence of operations for each variant having networked structures is also determined. The objective is to minimize the total backtracking distance, leading to an improvement in the total throughput of the system (7.79% in a case study of three engine blocks). For operation sequence retrieval, two mathematical models and an algorithm are developed to construct a master operation sequence from the information of the existing variants belonging to a family of parts/products. This master operation sequence is used to develop the operation sequences for new variants that are sufficiently similar to existing variants. Using the proposed algorithm reduces the time to develop the operation sequences of new variants to seconds.
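    The abstract does not spell out the thesis's similarity coefficients or MIP formulations; as a hedged illustration of the setup-similarity sequencing idea only, the Python sketch below greedily orders variants so that consecutive ones share as many operations as possible. The Jaccard coefficient over operation sets is a stand-in for the thesis's coefficients, and the variant names and operations are hypothetical.

```python
def jaccard(a, b):
    """Stand-in similarity coefficient: shared operations / all operations."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def similarity_sequence(variants):
    """Greedy sequencing policy: always run next the variant most similar to
    the one just completed, so consecutive setups share the most operations."""
    remaining = dict(variants)                 # variant name -> set of operations
    name, ops = next(iter(remaining.items()))  # arbitrary starting variant
    order = [name]
    del remaining[name]
    while remaining:
        name, ops = max(remaining.items(), key=lambda kv: jaccard(ops, kv[1]))
        order.append(name)
        del remaining[name]
    return order

# Hypothetical variants and their required machining operations.
variants = {
    "V1": {"mill_face", "drill_bores", "tap_holes"},
    "V2": {"mill_face", "drill_bores", "hone_cylinders"},
    "V3": {"mill_face", "tap_holes", "deburr"},
}
print(similarity_sequence(variants))  # a setup-similarity-friendly order, e.g. ['V1', 'V2', 'V3']
```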

    A dynamic neighborhood learning-based gravitational search algorithm

    Balancing exploration and exploitation according to evolutionary states is crucial to meta-heuristic search (M-HS) algorithms. Owing to its simplicity in theory and effectiveness in global optimization, the gravitational search algorithm (GSA) has attracted increasing attention in recent years. However, the tradeoff between exploration and exploitation in GSA is achieved mainly by adjusting the size of an archive, named Kbest, which stores the superior agents after fitness sorting in each iteration. Since the global property of Kbest remains unchanged throughout the evolutionary process, GSA emphasizes exploitation over exploration and suffers from rapid loss of diversity and premature convergence. To address these problems, we propose a dynamic neighborhood learning (DNL) strategy to replace the Kbest model, and thereby present a DNL-based GSA (DNLGSA). The method incorporates local and global neighborhood topologies to enhance exploration and obtain an adaptive balance between exploration and exploitation. The local neighborhoods are dynamically formed based on evolutionary states. To delineate the evolutionary states, two convergence criteria, named limit value and population diversity, are introduced. Moreover, a mutation operator is designed for escaping from local optima on the basis of evolutionary states. The proposed algorithm was evaluated on 27 benchmark problems with different characteristics and varying difficulty. The results reveal that DNLGSA exhibits competitive performance compared with a variety of state-of-the-art M-HS algorithms. Moreover, the incorporation of the local neighborhood topology reduces the number of gravitational force calculations and thus alleviates the high computational cost of GSA.
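    To ground the Kbest mechanism that DNLGSA replaces, here is a minimal Python sketch of the classical GSA update: masses derived from fitness, a decaying gravitational constant, and a Kbest elite that shrinks linearly over the run so that only superior agents exert force. The parameter values are conventional defaults assumed for illustration, not settings from the paper.

```python
import numpy as np

def gsa(f, dim, n_agents=30, n_iter=300, lb=-5.0, ub=5.0, g0=100.0, alpha=20.0, seed=0):
    """Classical gravitational search algorithm, minimal sketch (minimization)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_agents, dim))
    v = np.zeros((n_agents, dim))
    eps = 1e-12
    for t in range(n_iter):
        fit = np.apply_along_axis(f, 1, x)
        best, worst = fit.min(), fit.max()
        # Fitness-derived masses: the best agent gets mass 1, the worst mass 0.
        m = (fit - worst) / (best - worst) if best != worst else np.ones(n_agents)
        mass = m / (m.sum() + eps)
        g = g0 * np.exp(-alpha * t / n_iter)   # decaying gravitational constant
        # Kbest shrinks linearly from all agents to one; only these exert force.
        kbest = int(round(n_agents - (n_agents - 1) * t / n_iter))
        elite = np.argsort(fit)[:kbest]
        acc = np.zeros((n_agents, dim))
        for j in elite:
            diff = x[j] - x                    # attraction toward elite agent j
            dist = np.linalg.norm(diff, axis=1, keepdims=True)
            acc += rng.random((n_agents, 1)) * g * mass[j] * diff / (dist + eps)
        v = rng.random((n_agents, dim)) * v + acc
        x = np.clip(x + v, lb, ub)
    fit = np.apply_along_axis(f, 1, x)
    return x[np.argmin(fit)], fit.min()

# Illustrative use: the sphere function as a smoke test.
best_x, best_f = gsa(lambda z: float(np.sum(z ** 2)), dim=5)
```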