7 research outputs found

    Decomposition, Reformulation, and Diving in University Course Timetabling

    In many real-life optimisation problems, there are multiple interacting components in a solution. For example, different components might specify assignments to different kinds of resource. Often, each component is associated with different sets of soft constraints, and so with different measures of soft constraint violation. The goal is then to minimise a linear combination of such measures. This paper studies an approach to such problems, which can be thought of as multiphase exploitation of multiple objective-/value-restricted submodels. In this approach, only one computationally difficult component of a problem and the associated subset of objectives is considered at first. This produces partial solutions, which define interesting neighbourhoods in the search space of the complete problem. Often, it is possible to pick the initial component so that variable aggregation can be performed at the first stage, and the neighbourhoods to be explored next are guaranteed to contain feasible solutions. Using integer programming, it is then easy to implement heuristics producing solutions with bounds on their quality. Our study is performed on a university course timetabling problem used in the 2007 International Timetabling Competition, also known as the Udine Course Timetabling Problem. In the proposed heuristic, an objective-restricted neighbourhood generator produces assignments of periods to events, with decreasing numbers of violations of two period-related soft constraints. These are relaxed into assignments of events to days, which define neighbourhoods that are easier to search with respect to all four soft constraints. Integer programming formulations for all subproblems are given and evaluated using ILOG CPLEX 11. The wider applicability of this approach is analysed and discussed.
Comment: 45 pages, 7 figures. Improved typesetting of figures and tables.
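A schematic of the multiphase restriction the abstract describes (the notation below is ours, not the paper's: v_k are the K soft-constraint violation measures, w_k their weights, X_1 the first-stage component, N(x) the induced neighbourhood):

    % Complete problem: weighted sum of soft-constraint violation measures
    \min_{x \in X} \; \sum_{k=1}^{K} w_k \, v_k(x)

    % Phase 1: only one hard component X_1 and a subset K_1 of the objectives,
    % possibly after variable aggregation (period assignments relaxed to days in the Udine problem)
    \bar{x} \in \arg\min_{x_1 \in X_1} \; \sum_{k \in K_1} w_k \, v_k(x_1)

    % Phase 2: search the neighbourhood of the partial solution w.r.t. all K measures
    \min_{x \in N(\bar{x}) \cap X} \; \sum_{k=1}^{K} w_k \, v_k(x)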

    Proceedings of the XIII Global Optimization Workshop: GOW'16

    [Excerpt] Preface: Past Global Optimization Workshops have been held in Sopron (1985 and 1990), Szeged (WGO, 1995), Florence (GO'99, 1999), Hanmer Springs (Let's GO, 2001), Santorini (Frontiers in GO, 2003), San José (GO'05, 2005), Mykonos (AGO'07, 2007), Skukuza (SAGO'08, 2008), Toulouse (TOGO'10, 2010), Natal (NAGO'12, 2012) and Málaga (MAGO'14, 2014) with the aim of stimulating discussion between senior and junior researchers on the topic of Global Optimization. In 2016, the XIII Global Optimization Workshop (GOW'16) takes place in Braga and is organized by three researchers from the University of Minho. Two of them belong to the Systems Engineering and Operational Research Group from the Algoritmi Research Centre and the other to the Statistics, Applied Probability and Operational Research Group from the Centre of Mathematics. The event received more than 50 submissions from 15 countries in Europe, South America and North America. We want to express our gratitude to the invited speaker Panos Pardalos for accepting the invitation and sharing his expertise, helping us to meet the workshop objectives. GOW'16 would not have been possible without the valuable contributions of the authors and the International Scientific Committee members. We thank you all. This proceedings book intends to present an overview of the topics that will be addressed in the workshop, with the goal of contributing to interesting and fruitful discussions between the authors and participants. After the event, high-quality papers can be submitted to a special issue of the Journal of Global Optimization dedicated to the workshop. [...]

    Strategic Surveillance System Design for Ports and Waterways

    The purpose of this dissertation is to synthesize a methodology to prescribe a strategic design of a surveillance system to provide the required level of surveillance for ports and waterways. The method of approach to this problem is to formulate a linear integer programming model to prescribe a strategic surveillance system design (SSD) for ports or waterways, to devise branch-and-price decomposition (
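As a purely illustrative sketch of what such a linear integer programming model can look like (the sets, coverage data, and costs below are hypothetical, not the dissertation's actual formulation), a minimal coverage-style placement ILP in Python/PuLP:

    # Hypothetical minimal sensor-placement ILP; illustrative only.
    # Requires: pip install pulp
    from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

    sites = ["s1", "s2", "s3"]                         # candidate sensor locations
    zones = ["z1", "z2"]                               # waterway zones to surveil
    cost = {"s1": 5.0, "s2": 3.0, "s3": 4.0}           # installation costs
    covers = {"z1": ["s1", "s2"], "z2": ["s2", "s3"]}  # sites able to watch each zone

    prob = LpProblem("surveillance_design", LpMinimize)
    x = LpVariable.dicts("build", sites, cat="Binary")
    prob += lpSum(cost[s] * x[s] for s in sites)       # minimise total cost
    for z in zones:
        prob += lpSum(x[s] for s in covers[z]) >= 1    # every zone covered at least once
    prob.solve()
    print([s for s in sites if value(x[s]) == 1])      # chosen sites, e.g. ['s2']

A branch-and-price method of the kind the abstract mentions would instead generate the candidate configurations (columns) on the fly rather than enumerating them upfront.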

    Decomposition and dynamic cut generation in integer linear programming

    Decomposition algorithms such as Lagrangian relaxation and Dantzig-Wolfe decomposition are well-known methods that can be used to generate bounds for mixed-integer linear programming problems. [...]
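For reference, the standard bound both methods compute: for an integer program min { c^T x : A x >= b, x in X }, with X the "easy" part kept intact, Lagrangian relaxation dualizes A x >= b into the objective,

    z_{LD} \;=\; \max_{\lambda \ge 0} \; \min_{x \in X} \; c^\top x + \lambda^\top (b - A x),
    \qquad z_{LP} \;\le\; z_{LD} \;\le\; z_{IP},

and Dantzig-Wolfe decomposition attains the same bound z_{LD} by optimizing over the convex hull of X.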

    Methods for Promoting Integrality of the Improvement in the Integral Simplex - Application to Airline Crew Rotations

    ABSTRACT: Optimization is a three-step process. Step one models the problem and writes it as a mathematical program, i.e., a set of equations that includes an objective one seeks to minimize or maximize (typically the costs or benefit of a company) and constraints that must be satisfied by any acceptable solution (operational constraints, collective agreements, etc.). The unknowns of the model are the decision variables; they correspond to the quantities the decision-maker wants to infer. A model that perfectly represents reality is exact; otherwise it is approximate. The second step of the optimization process is the solution of the mathematical program, i.e., the determination of a solution that satisfies all constraints and for which the objective value is as good as possible. To this end, one generally uses an algorithm, a self-contained step-by-step set of operating rules that solves the problem in a finite number of operations. The algorithm is translated by means of a programming language into an executable program run by a computer; the execution of such software solves the mathematical program. Finally, the last step is the adaptation of the mathematical solution to reality. When the model is only approximate, the output solution may not fit the original requirements and may therefore require a posteriori modifications. This thesis concentrates on the second of these three steps, the solution process. More specifically, we design and implement an algorithm that solves a specific mathematical program: set partitioning. The set partitioning problem models a very wide range of applications: workforce scheduling, logistics, electricity production planning, pattern recognition, etc. In each of these examples, the objective function and the constraints have different physical meanings, but the structure of the model is the same. From a mathematical point of view, it is an integer linear program whose decision variables can only take the values 0 or 1. It is linear because both the objective and the constraints are linear functions of the variables.
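The set partitioning problem referred to here has the standard form (A a 0-1 matrix whose rows are the items, each of which must be covered exactly once):

    \min_{x} \; c^\top x
    \quad \text{s.t.} \quad A x = \mathbf{1},
    \qquad x_j \in \{0, 1\} \quad \forall j.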
Most algorithms used to solve this family of programs are based on the principle called branch-and-bound. At first, the integrality constraints are relaxed; solutions may thus be fractional. The solution of the resulting program (called the linear relaxation of the integer program) is significantly easier than that of the integer program. Then, to recover integrality, the problem is separated to eliminate fractional solutions. From this splitting a branching tree arises, in which, at each node, the relaxation of a set partitioning problem as big as the original one is solved. The size of that tree, and thus the solving time, grows exponentially with the size of the instance. Furthermore, the algorithm that solves the linear relaxations, the simplex, performs poorly on degenerate problems, i.e., problems for which too many constraints are tight. This is unfortunately the case for many industrial problems, and particularly for the set partitioning problem, whose degeneracy rate is intrinsically high. An alternative approach is that of primal algorithms: start from a nonoptimal integer solution and find a direction that leads to a better one (also integer). That process is iterated until optimality is reached. At each step of the process one solves an augmentation subproblem, which either outputs an augmenting direction or asserts that the current solution is optimal. The literature is significantly less abundant on primal algorithms than on branch-and-bound, and the latter has been the dominant method in integer programming for over forty years. The development of an efficient primal method would therefore stand as a major breakthrough in this field. From the computational works on primal algorithms, two main issues stand out concerning their design and implementation. On the one hand, many augmenting directions are infeasible, i.e., taking the smallest step in such a direction results in a violation of the constraints. This problem is strongly related to degeneracy and often affects simplex pivots (e.g., degenerate pivots). Infeasible directions prevent the algorithm from moving ahead and may jeopardize its performance, and even its termination when it is impossible to find a feasible direction. On the other hand, when a cost-improving direction has been successfully determined, it may be hard to ensure that it leads to an integer solution. Among existing primal algorithms, the one that appears most promising is the integral simplex using decomposition (ISUD), because it embeds into a primal framework decomposition techniques that palliate the unwanted effects of degeneracy. To our knowledge, it is the first primal algorithm to beat branch-and-bound on large-scale industrial instances. Furthermore, its performance improves as the problem gets bigger. Despite its strong assets against degeneracy, however, this method does not handle the matter of integrality when moving to the next solution; and if ISUD is to compete with branch-and-bound, it is crucial that this issue be tackled. Therefore, the purpose of this thesis is the following: increasing the rate of integral directions found by ISUD to make it fully competitive with existing solvers on large-scale industrial workforce scheduling instances. To proceed in that direction, we first deepen the theoretical knowledge of ISUD. Formulating it as a primal algorithm, understanding how it belongs to that family, and translating it into a purely primal language that requires no notion of duality provide fertile ground for our work.
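The primal scheme described above boils down to the following loop; a minimal sketch, assuming caller-supplied find_augmenting_direction and step functions (the names are ours, not the thesis's):

    # Generic primal (augmentation) scheme; a sketch, not ISUD itself.
    # find_augmenting_direction(x): a cost-improving feasible direction, or None
    # step(x, d): the next, strictly better integer solution along d
    def primal_augment(x, find_augmenting_direction, step):
        while True:
            d = find_augmenting_direction(x)
            if d is None:       # augmentation subproblem certifies optimality
                return x
            x = step(x, d)      # move to a better integer solution

    # Toy usage: minimise (x - 3)**2 over the integers by unit steps upward
    best = primal_augment(
        0,
        lambda x: 1 if (x + 1 - 3) ** 2 < (x - 3) ** 2 else None,
        lambda x, d: x + d,
    )
    print(best)  # 3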
This analysis yields geometrical interpretations of the underlying structures and domains of the several mathematical programs involved in the solution process. Although no chapter specifically focuses on that facet of our work, most of our ideas, approaches and contributions stem from it. This groundbreaking approach to ISUD leads to simplifications, strengthening, and extensions of several theoretical results. In the first part of this work, we generalize the formulation of the augmentation problem in order to increase the likelihood that the direction found by the algorithm leads to a new integer solution. In ISUD, to find the edge leading to the next point, one solves a linear program to select an augmenting direction from a cone of feasible directions. To ensure that this linear program is bounded (the directions could go to infinity), a normalization constraint is added and the optimization is performed on a section of the cone. In the original version of the algorithm, all weights take the same value. We extend this constraint to the case of a generic normalization constraint and show that the output direction depends strongly on the chosen normalization weights, and so does the likelihood that the next solution is integer. We extend the theoretical properties of ISUD, particularly those related to decomposition, and we prove new results in the case of a generic normalization constraint. We explore the theoretical properties of some specific constraints, and discuss the design of the normalization constraint so as to penalize fractional directions. We also report computational results on workforce scheduling instances that show the potential of our approach. While only 78% of the aircrew scheduling instances from that benchmark are solved with the original version of ISUD, 100% of them are solved by at least one of the models we propose. In the second part, we show that cutting plane methods used in integer linear programming can be adapted to ISUD. We show that cutting planes can be transferred to the augmentation problem, and we characterize the set of transferable cuts as a nonempty subset of primal cuts that are tight at the current solution. We prove that these cutting planes always exist, we propose efficient separation procedures for primal clique and odd-cycle cuts, and we prove that their search space can be restricted to a small subset of the variables, which makes the computation efficient. Numerical results demonstrate the effectiveness of adding cutting planes to the algorithm. Tests are performed on small- and large-scale set partitioning problems from aircrew and bus-driver scheduling instances with up to 1,600 constraints and 570,000 variables. On the aircrew scheduling instances, the addition of primal cuts raises the rate of instances solved from 70% to 92%. On large bus-driver scheduling instances, primal cuts prove that the solution found by ISUD is optimal over a large subset of the domain for more than 80% of the instances. In the last part, we dynamically update the coefficients of the normalization constraint whenever the direction found by the algorithm leads to a fractional solution, so as to penalize that direction. We propose several update strategies based on theoretical and experimental results. Some penalize the very direction returned by the algorithm; others perturb the normalization coefficients with those of the aforementioned primal cuts.
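Schematically, and as our own paraphrase rather than the thesis's exact formulation: given the current integer solution x̄ of the set partitioning problem, the augmentation LP picks a direction d from the cone of feasible directions,

    \min_{d} \; c^\top d
    \quad \text{s.t.} \quad A d = 0,
    \qquad d_j \ge 0 \ \ (\bar{x}_j = 0),
    \qquad d_j \le 0 \ \ (\bar{x}_j = 1),
    \qquad \sum_{j : \bar{x}_j = 0} w_j \, d_j = 1,

where the last (normalization) constraint cuts a bounded section out of the cone; uniform weights w_j correspond to the original ISUD, and the choice of the w_j determines which vertex of the section, hence which direction, is returned.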
To prove the efficiency of our strategies, we show that our version of the algorithm yields better results than the former version and than classical branch-and-bound techniques on a benchmark of industrial aircrew scheduling instances. The benchmark that we propose is, to the best of our knowledge, comparable to no other in the literature. It provides large-scale instances with up to 1,700 flights and 115,000 pairings, hence as many constraints and variables, and the instances are given in set-partitioning form together with initial solutions that accurately mimic those of industrial applications. Our work shows the strong potential of primal algorithms for the crew scheduling problem, a key challenge for large airlines, both financially significant and notably hard to solve.
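A minimal sketch of the dynamic-update loop, assuming hypothetical solve_direction_lp and leads_to_integer helpers (neither name is from the thesis):

    # Dynamic normalization-weight updates; illustrative sketch only.
    def solve_with_updates(weights, solve_direction_lp, leads_to_integer,
                           penalty=10.0, max_rounds=50):
        for _ in range(max_rounds):
            d = solve_direction_lp(weights)   # dict {column index: component}, or None
            if d is None:                     # no improving direction: locally optimal
                return None
            if leads_to_integer(d):           # integral direction found: take it
                return d
            for j in d:                       # penalize the fractional direction's support
                weights[j] = weights.get(j, 1.0) * penalty
        return None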

    Exact rotamer optimization for computational protein design

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. By Eun-Jong Hong. Includes bibliographical references (leaves 235-244).
The search for the global minimum energy conformation (GMEC) of protein side chains is an important computational challenge in protein structure prediction and design. Using rotamer models, the problem is formulated as an NP-hard optimization problem. Dead-end elimination (DEE) methods combined with systematic A* search (DEE/A*) have proven useful, but may not be strong enough as we attempt to solve protein design problems where a large number of similar rotamers is eligible and the network of interactions between residues is dense. In this thesis, we present an exact solution method, named BroMAP (branch-and-bound rotamer optimization using MAP estimation), for such protein design problems. The design goal of BroMAP is to be able to expand smaller search trees than conventional branch-and-bound methods while performing only a moderate amount of computation in each node, thereby reducing the total running time. To achieve that, BroMAP attempts reduction of the problem size within each node through DEE and elimination by energy lower bounds from approximate maximum-a-posteriori (MAP) estimation. The lower bounds are also exploited in branching and subproblem selection for fast discovery of strong upper bounds. Our computational results show that BroMAP tends to be faster than DEE/A* for large protein design cases. BroMAP also solved cases that were not solvable by DEE/A* within the maximum allowed time, and did not incur significant disadvantage for cases where DEE/A* performed well. In the second part of the thesis, we explore several ways of improving the energy lower bounds by using Lagrangian relaxation. Through computational experiments, solving the dual problem derived from cyclic subgraphs, such as triplets, is shown to produce stronger lower bounds than using the tree-reweighted max-product algorithm. In the second approach, the Lagrangian relaxation is tightened through the addition of violated valid inequalities. Finally, we suggest a way of computing individual lower bounds using the dual method. The preliminary results from evaluating BroMAP employing the dual bounds suggest that the use of the strengthened bounds does not in general improve the running time of BroMAP, due to the longer running time of the dual method.
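For context, the standard rotamer-model energy behind GMEC search (standard in the field, not specific to this thesis), with singleton terms E_i and pairwise terms E_ij over rotamer choices r_i at residues i:

    E(r) \;=\; \sum_{i} E_i(r_i) \;+\; \sum_{i < j} E_{ij}(r_i, r_j),

and the classic Goldstein dead-end elimination test, which prunes rotamer r_i in favour of an alternative s_i whenever

    E_i(r_i) - E_i(s_i) + \sum_{j \ne i} \min_{r_j} \big[ E_{ij}(r_i, r_j) - E_{ij}(s_i, r_j) \big] \;>\; 0.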