
    Selected Topics in Network Optimization: Aligning Binary Decision Diagrams for a Facility Location Problem and a Search Method for Dynamic Shortest Path Interdiction

    This work deals with three different combinatorial optimization problems: minimizing the total size of a pair of binary decision diagrams (BDDs) under a certain structural property, a variant of the facility location problem, and a dynamic version of the Shortest-Path Interdiction (DSPI) problem. These problems nonetheless share a core idea: they all stem from representing an optimization problem as a decision diagram. We begin with cases in which a diagram representation of reasonable size may exist but is difficult to find. For the first problem, we develop a heuristic that enforces a structural property on a collection of BDDs, which allows them to be merged efficiently into a single diagram. For the second problem, we consider a specific combinatorial problem that admits a natural representation by a pair of BDDs. Using the previous result and ideas developed earlier in the literature, we reformulate this problem as a linear program over a single BDD. Once the computational overhead of building the diagram has been paid, this approach provides sensitivity information and often enjoys runtimes comparable to a mixed-integer program solved with a commercial solver (e.g., when re-solving the problem with different costs but the same graph topology). In the last part, we examine the DSPI, for which building the full decision diagram is generally impractical. We formalize the concept of a game tree for the DSPI and design a heuristic that builds only selected parts of this exponentially sized decision diagram (which is no longer binary). We use a Monte Carlo Tree Search framework to establish near-optimal policies. To mitigate the size of the game tree, we leverage previously derived bounds for the DSPI and employ an alpha-beta pruning technique for minimax optimization. We highlight the practicality of these ideas in a series of numerical experiments.
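    The alpha-beta pruning mentioned above is the classical technique of discarding game-tree branches that cannot affect the minimax value. The following minimal Python sketch shows the technique on a generic tree; the toy tree, the depth bound and the evaluation function are invented for illustration and are not the DSPI game tree, bounds, or heuristic developed in the thesis.

```python
# Minimal illustration of alpha-beta pruning for minimax search.
# The tree and payoffs below are hypothetical stand-ins.

def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    """Return the minimax value of `node`, pruning branches that
    cannot influence the final decision."""
    succ = children(node)
    if depth == 0 or not succ:
        return value(node)
    if maximizing:
        best = float("-inf")
        for child in succ:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False, children, value))
            alpha = max(alpha, best)
            if alpha >= beta:        # beta cut-off: the minimizer will avoid this branch
                break
        return best
    else:
        best = float("inf")
        for child in succ:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True, children, value))
            beta = min(beta, best)
            if beta <= alpha:        # alpha cut-off
                break
        return best

# Toy depth-2 tree encoded as nested dicts; leaves hold payoffs for the maximizer.
tree = {"a": {"a1": 3, "a2": 5}, "b": {"b1": 2, "b2": 9}}

def children(node):
    return list(node.values()) if isinstance(node, dict) else []

def value(node):
    return node  # leaves are plain numbers

print(alphabeta(tree, 4, float("-inf"), float("inf"), True, children, value))  # -> 3
```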

    Artificial evolution with Binary Decision Diagrams: a study in evolvability in neutral spaces

    This thesis develops a new approach to evolving Binary Decision Diagrams and uses it to study evolvability issues. For reasons that are not yet fully understood, current approaches to artificial evolution fail to exhibit the evolvability so readily exhibited in nature. To apply evolvability to artificial evolution, the field must first understand and characterise it; this will then lead to systems that are far more capable than current ones. An experimental approach is taken: carefully crafted, controlled experiments elucidate the mechanisms and properties that facilitate evolvability, focusing on the roles of, and interplay between, neutrality, modularity, gradualism, robustness and diversity. Evolvability is found to emerge under gradual evolution as a biased distribution of functionality within the genotype-phenotype map, which serves to direct phenotypic variation. Neutrality facilitates fitness-conserving exploration, completely alleviating local optima. Population diversity, in conjunction with neutrality, is shown to facilitate the evolution of evolvability. The search is robust, scalable, and insensitive to the absence of initial diversity. The thesis concludes that gradual evolution in a search space that is free of local optima by way of neutrality can be a viable alternative to problematic evolution on multi-modal landscapes.
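    The mechanism of fitness-conserving exploration through neutrality can be pictured with a toy hill climber that also accepts mutations leaving fitness unchanged, so the search can drift across neutral regions of the landscape instead of stalling. The bit-string genotype, landscape and parameters below are invented for illustration and have nothing to do with the BDD genotype representation developed in the thesis.

```python
import random

# Toy "leading ones" landscape: bits after the first 0 do not affect fitness,
# so genotypes with equal fitness form neutral plateaus the search can drift on.
def fitness(bits):
    count = 0
    for b in bits:
        if b != 1:
            break
        count += 1
    return count

def mutate(bits):
    child = list(bits)
    child[random.randrange(len(child))] ^= 1
    return child

def neutral_hill_climb(n=20, steps=2000):
    current = [random.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        child = mutate(current)
        # Accept strict improvements AND fitness-neutral moves (neutral drift).
        if fitness(child) >= fitness(current):
            current = child
    return current, fitness(current)

random.seed(1)
genotype, fit = neutral_hill_climb()
print(fit, genotype)
```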

    Advances in Functional Decomposition: Theory and Applications

    Functional decomposition aims at finding efficient representations for Boolean functions. It is used in many applications, including multi-level logic synthesis, formal verification, and testing. This dissertation presents novel heuristic algorithms for functional decomposition. These algorithms take advantage of suitable representations of the Boolean functions in order to be efficient. The first two algorithms compute simple-disjoint and disjoint-support decompositions. They are based on representing the target function by a Reduced Ordered Binary Decision Diagram (BDD). Unlike other BDD-based algorithms, the presented ones can deal with larger target functions and produce more decompositions without requiring expensive manipulations of the representation, particularly BDD reordering. The third algorithm also finds disjoint-support decompositions, but it is based on a technique that integrates circuit graph analysis and BDD-based decomposition. The combination of the two approaches results in an algorithm that is more robust than a purely BDD-based one and that improves both the quality of the results and the running time. The fourth algorithm uses circuit graph analysis to obtain non-disjoint decompositions. We show that the problem of computing non-disjoint decompositions can be reduced to the problem of computing multiple-vertex dominators. We also prove that multiple-vertex dominators can be found in polynomial time. This result is important because there is no known polynomial-time algorithm for computing all non-disjoint decompositions of a Boolean function. The fifth algorithm provides an efficient means to decompose a function at the circuit graph level, using information derived from a BDD representation. This is done without the expensive circuit re-synthesis normally associated with BDD-based decomposition approaches. Finally, we present two publications that resulted from the many detours we have taken along the winding path of our research.
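    As background to the simple-disjoint decompositions mentioned above, the classical Ashenhurst criterion states that f(A, B) has a simple disjoint decomposition h(g(A), B), with A and B disjoint variable sets, exactly when the decomposition chart for that partition has column multiplicity of at most two. The sketch below checks this criterion on a truth-table representation; the example function and partition are hypothetical, and the dissertation's BDD-based algorithms avoid this kind of exponential enumeration.

```python
from itertools import product

def column_multiplicity(f, bound_vars, free_vars):
    """Number of distinct columns in the decomposition chart of f, where
    `bound_vars` index the columns and `free_vars` index the rows.
    f maps a dict {variable name: 0/1} to 0 or 1."""
    columns = set()
    for a in product((0, 1), repeat=len(bound_vars)):
        column = []
        for b in product((0, 1), repeat=len(free_vars)):
            assignment = {**dict(zip(bound_vars, a)), **dict(zip(free_vars, b))}
            column.append(f(assignment))
        columns.add(tuple(column))
    return len(columns)

# Example: f = (x1 AND x2) OR x3 decomposes as h(g(x1, x2), x3) with g = AND.
f = lambda v: (v["x1"] & v["x2"]) | v["x3"]

mu = column_multiplicity(f, bound_vars=["x1", "x2"], free_vars=["x3"])
print(mu, "column(s):",
      "decomposable for this partition" if mu <= 2 else "not decomposable")
```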

    A contribution to the evaluation and optimization of networks reliability

    Evaluating network reliability is a highly complex combinatorial problem that requires powerful computing resources, and efficient reliability computation is needed in many sensitive networks. Despite the increasing power of computers and the proliferation of algorithms, finding good solutions quickly for large systems remains an open problem; recently developed computation techniques apply only to special categories of networks, and more effort is still needed to arrive at a unified method giving exact solutions. Several methods have been proposed in the literature: some, notably minimal-set enumeration and factoring methods, have been implemented, while others have remained purely theoretical. This thesis addresses the evaluation and optimization of network reliability. Several issues are treated, including the development of a methodology for modelling networks with a view to evaluating their reliability; this methodology was validated on a wide-area radio communication network recently deployed to cover the needs of the entire province of Quebec. Several algorithms were also developed to generate the minimal paths and minimal cuts of a given network. Generating paths and cuts is an important contribution to the reliability evaluation and optimization process, and these algorithms handled several test networks, as well as the provincial radio communication network, quickly and efficiently. They were subsequently used to evaluate reliability with a method based on binary decision diagrams. Several theoretical contributions also led to an exact solution for the reliability of imperfect stochastic networks, in which both edges and nodes are subject to failure, within the framework of the factoring decomposition theorem. Based on this research, several tools were implemented to evaluate and optimize network reliability, and the results clearly show a significant gain in execution time and memory usage compared with many other implementations. Keywords: reliability, networks, optimization, binary decision diagrams, minimal path and cut sets, algorithms, Birnbaum importance index, radio-telecommunication systems, programs.
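    The factoring (pivotal decomposition) principle referred to above conditions on the state of one edge e: R(G) = p_e * R(G with e contracted) + (1 - p_e) * R(G with e deleted). The sketch below applies it recursively to compute the exact two-terminal reliability of a small bridge network with perfectly reliable nodes; the graph and probabilities are invented, and the sketch omits the series-parallel reductions, minimal path/cut generation and BDD machinery that make the implementations in the thesis efficient.

```python
def reliability(edges, s, t):
    """Exact two-terminal reliability by recursive factoring.
    `edges` is a list of (u, v, p), where p is the probability the edge works;
    nodes are assumed perfectly reliable in this simplified sketch."""
    if s == t:
        return 1.0
    if not edges:
        return 0.0
    (u, v, p), rest = edges[0], edges[1:]
    if (u == s and v == t) or (u == t and v == s):
        # If this edge works, s and t are connected regardless of the rest.
        return p + (1 - p) * reliability(rest, s, t)
    # Edge works: contract it by renaming v to u everywhere (dropping self-loops).
    contracted = [(u if a == v else a, u if b == v else b, q) for a, b, q in rest]
    contracted = [(a, b, q) for a, b, q in contracted if a != b]
    works = reliability(contracted, u if s == v else s, u if t == v else t)
    # Edge fails: simply delete it.
    fails = reliability(rest, s, t)
    return p * works + (1 - p) * fails

# Classic 4-node bridge network between s = 0 and t = 3, every edge working
# with probability 0.9; edge (1, 2) is the bridge.
edges = [(0, 1, 0.9), (0, 2, 0.9), (1, 2, 0.9), (1, 3, 0.9), (2, 3, 0.9)]
print(reliability(edges, 0, 3))   # approximately 0.97848
```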

    Concurrent optimization strategies for high-performance VLSI circuits

    In the next generation of VLSI circuits, concurrent optimizations will be essential to meet performance challenges. In this dissertation, we present techniques for combining traditional timing optimization steps to achieve superior performance. The method of buffer insertion is used in timing optimization either to increase the driving power of a path in a circuit or to isolate large capacitive loads that lie on noncritical or less critical paths. The procedure of transistor sizing selects the sizes of transistors within a circuit to achieve a given timing specification. Traditional design flows perform these two optimizations as independent steps during synthesis, even though they are intimately linked, and performing them in alternating steps is liable to lead to suboptimal solutions. The first part of this thesis presents a new approach for unifying transistor sizing with buffer insertion; our algorithm achieves area reductions of 5% to 49% compared with the results of a standard transistor sizing algorithm. The next part of the thesis deals with the problem of collapsing gates for technology mapping, for which two new techniques are proposed. The first, the odd-level transistor replacement (OTR) method, performs technology mapping without the restriction of a fixed library size and maps a circuit to a virtual library of complex static CMOS gates. The second, the Static CMOS/PTL method, uses a mix of static CMOS and pass-transistor logic (PTL) to realize the circuit, exploiting the relation between PTL and binary decision diagrams. Both methods are very efficient and can handle all ISCAS'85 benchmark circuits in minutes. On average, the OTR method gave 40% and the Static/PTL method 50% delay reductions over SIS, with substantial area savings. Finally, we extend the technology mapping work to interleave it with placement in a single optimization, since conventional methods that perform these steps separately will not be adequate for next-generation circuits. Our approach presents an integrated solution to this problem and shows an average improvement of 28.19%, and a maximum of 78.42%, in delay over a method that performs the two optimizations in separate steps.
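    The relation between PTL and binary decision diagrams mentioned above rests on the observation that every internal BDD node can be realized as a 2-to-1 multiplexer (in PTL, a pair of pass transistors) whose select input is the node's variable and whose data inputs come from its low and high children. The sketch below walks a hand-built BDD for the three-input majority function and emits one mux per node; the data structure and output format are illustrative only and are not the mapping algorithm of the dissertation.

```python
# A BDD node is ("variable", low_child, high_child); terminals are 0 and 1.
def emit_muxes(node, netlist, wire_of):
    """Return the wire that drives `node`, emitting one 2:1 mux per BDD node."""
    if node in (0, 1):
        return str(node)                         # constant rails
    if id(node) in wire_of:                      # shared nodes are emitted once
        return wire_of[id(node)]
    var, low, high = node
    lo_wire = emit_muxes(low, netlist, wire_of)
    hi_wire = emit_muxes(high, netlist, wire_of)
    wire = f"n{len(netlist)}"
    wire_of[id(node)] = wire
    netlist.append(f"{wire} = MUX(sel={var}, in0={lo_wire}, in1={hi_wire})")
    return wire

# Hand-built BDD for the majority function MAJ(a, b, c), variable order a < b < c.
node_c  = ("c", 0, 1)
node_b0 = ("b", 0, node_c)        # a = 0: output is b AND c
node_b1 = ("b", node_c, 1)        # a = 1: output is b OR c
root    = ("a", node_b0, node_b1)

netlist, wire_of = [], {}
out = emit_muxes(root, netlist, wire_of)
print("\n".join(netlist))
print("out =", out)
```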

    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference an author-supplied abstract, a number of keywords and a classification are provided. In some cases, our own comments are added; their purpose is to show where, how and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily the country of publication) and the language of the document. After a description of the scope of the overview, the classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification and comments.

    Bayesian Approaches for Modelling Flood Damage Processes

    Flood damage processes are influenced by the three components of flood risk: hazard, exposure and vulnerability. Compared with hazard and exposure, the vulnerability component, though equally important, is often generalized in flood risk assessments by a simple depth-damage curve, and important drivers of vulnerability such as private flood precaution are left out for lack of quantitative information. This thesis therefore develops a robust statistical method to quantify the role of private precaution in reducing the flood vulnerability of households. In Germany, private precautionary measures were found to reduce average flood damage per residential building by 11,000 to 15,000 euros. Flood loss models whose structure, parameterization and choice of explanatory variables are based on expert knowledge and data-driven methods were best able to capture differences in vulnerability due to private precaution, which makes them suitable for future risk assessments. Because the underlying data and model assumptions carry considerable uncertainty, flood loss estimates are themselves uncertain. The Bayesian models developed and applied in this thesis use assumptions about damage processes as priors and available empirical data as evidence for updating the probability distributions. They provide flood loss predictions as distributions that reflect both the variability of the damage processes and the uncertainty of the model assumptions, and they improve on state-of-the-art flood loss models in terms of predictive capability and applicability. In particular, the choice of a Beta response distribution improves the reliability of loss predictions compared with the commonly used Gaussian or non-parametric distributions, and the hierarchical Bayesian approach yields an improved parameterization of the stage-damage functions, replacing the need for empirical data with region- and event-specific expert knowledge and thereby enhancing predictive performance when the model is transferred in space and time.
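    A minimal sketch of the general idea, under invented data and assumptions: relative loss lies in (0, 1) and is modelled with a Beta likelihood whose mean follows a logistic stage-damage function of water depth, a prior encodes (hypothetical) expert knowledge about the function's parameters, and a simple grid approximation yields the posterior. This is far simpler than the hierarchical models developed in the thesis and only illustrates why a bounded Beta response is a natural choice.

```python
import numpy as np
from scipy.special import expit
from scipy.stats import beta, norm

# Hypothetical observations: water depth (m) and relative building loss in (0, 1).
depth = np.array([0.2, 0.5, 0.8, 1.2, 1.5, 2.0])
loss  = np.array([0.03, 0.08, 0.15, 0.30, 0.42, 0.55])

# Model: loss ~ Beta(mu * phi, (1 - mu) * phi) with mu = expit(a + b * depth).
phi = 30.0                                   # fixed precision, for simplicity
a_grid = np.linspace(-5, 0, 200)             # grid over the intercept
b_grid = np.linspace(0, 4, 200)              # grid over the depth effect
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")

# Priors encoding (hypothetical) expert knowledge about the damage function.
log_prior = norm.logpdf(A, -3, 2) + norm.logpdf(B, 1.5, 1)

# Log-likelihood summed over the observations.
log_lik = np.zeros_like(A)
for d, y in zip(depth, loss):
    mu = expit(A + B * d)
    log_lik += beta.logpdf(y, mu * phi, (1 - mu) * phi)

log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Posterior mean of the expected relative loss at 1 m water depth.
mu_1m = expit(A + B * 1.0)
print("E[relative loss | depth = 1 m] ~", float((post * mu_1m).sum()))
```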

    Exact Algorithms for Mixed-Integer Multilevel Programming Problems

    We examine multistage optimization problems, in which one or more decision makers solve a sequence of interdependent optimization problems. In each stage the corresponding decision maker determines values for a set of variables, which in turn parameterizes the subsequent problem by modifying its constraints and objective function. The optimization literature has covered multistage optimization problems in the form of bilevel programs, interdiction problems, robust optimization, and two-stage stochastic programming. One of the main differences among these research areas lies in the relationship between the decision makers. We analyze the case in which the decision makers are self-interested agents seeking to optimize their own objective functions (bilevel programming), the case in which the decision makers are opponents working against each other in a zero-sum game (interdiction), and the case in which the decision makers are cooperative agents working towards a common goal (two-stage stochastic programming). Traditional exact approaches for solving multistage optimization problems often rely on strong duality, either to achieve single-level reformulations of the original multistage problems or to develop cutting-plane approaches similar to Benders' decomposition. As a result, existing solution approaches usually assume that the last-stage problems are linear or convex, and they fail to solve problems in which the last stage is nonconvex (e.g., because of the presence of discrete variables). We contribute exact finite algorithms for bilevel mixed-integer programs, three-stage defender-attacker-defender problems, and two-stage stochastic programs. Moreover, we do not assume linearity or convexity for the last-stage problem and allow the existence of discrete variables. We demonstrate that our proposed algorithms significantly outperform existing state-of-the-art algorithms. Additionally, we solve for the first time a class of interdiction and fortification problems in which the third-stage problem is NP-hard, opening an avenue for new research and applications in the field of (network) interdiction.
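    The min-max structure of an interdiction problem with a discrete (and hence nonconvex) lower level can be made concrete with a toy instance: a leader removes up to one item, after which a follower solves a 0-1 knapsack over the surviving items. The brute-force enumeration below only illustrates the two-level structure; the data are invented, and the approach bears no resemblance to the exact algorithms contributed in the dissertation.

```python
from itertools import combinations, product

# Toy interdiction instance: the follower solves a 0-1 knapsack over items the
# leader has not removed; the leader removes up to `budget` items to minimize
# the follower's best achievable value.
values   = [8, 7, 6, 5]
weights  = [3, 3, 2, 2]
capacity = 5
budget   = 1   # the leader may remove at most one item

def follower_best(removed):
    """Follower's optimal knapsack value over the surviving items (brute force)."""
    items = [i for i in range(len(values)) if i not in removed]
    best = 0
    for pick in product((0, 1), repeat=len(items)):
        chosen = [i for i, x in zip(items, pick) if x]
        if sum(weights[i] for i in chosen) <= capacity:
            best = max(best, sum(values[i] for i in chosen))
    return best

# Leader: enumerate all interdiction sets within budget and keep the one that
# minimizes the follower's response (a zero-sum, min-max structure).
best_plan, best_val = None, float("inf")
for r in range(budget + 1):
    for removed in combinations(range(len(values)), r):
        val = follower_best(set(removed))
        if val < best_val:
            best_plan, best_val = removed, val

print("remove items", best_plan, "-> follower value", best_val)
```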

    Optimized Temporal Monitors for SystemC

    SystemC is a modeling language built as an extension of C++. Its growing popularity and the increasing complexity of designs have motivated research efforts aimed at the verification of SystemC models using assertion-based verification (ABV), where the designer asserts properties that capture the design intent in a formal language such as PSL or SVA. The model can then be verified against the properties using runtime or formal verification techniques. In this paper we focus on the automated generation of runtime monitors from temporal properties, aiming to minimize runtime overhead rather than monitor size or monitor-generation time. We identify four issues in monitor generation: state minimization, alphabet representation, alphabet minimization, and monitor encoding. We conduct extensive experimentation and identify a combination of settings that offers the best performance in terms of runtime overhead.
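    The kind of monitor being generated can be pictured as a small state machine that observes the design one event at a time and tracks whether the asserted property can still hold. The sketch below hand-codes such a monitor in Python for the simple property "every req is eventually followed by an ack"; the property, event names and encoding are chosen purely for illustration and do not reflect the C++/SystemC monitors, the PSL/SVA front end, or the encodings evaluated in the paper.

```python
# States of a tiny monitor for "every req is eventually followed by an ack".
IDLE, PENDING = 0, 1

class ReqAckMonitor:
    """Observes one event per step and tracks whether the property can still hold."""
    def __init__(self):
        self.state = IDLE

    def step(self, event):
        if self.state == IDLE and event == "req":
            self.state = PENDING
        elif self.state == PENDING and event == "ack":
            self.state = IDLE

    def verdict_at_end(self):
        # At the end of a finite trace, an outstanding request violates the property.
        return "holds" if self.state == IDLE else "violated"

trace = ["idle", "req", "busy", "ack", "req"]
mon = ReqAckMonitor()
for ev in trace:
    mon.step(ev)
print(mon.verdict_at_end())   # "violated": the final req is never acknowledged
```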