
    The Language of Search

    This paper is concerned with a class of algorithms that perform exhaustive search on propositional knowledge bases. We show that each of these algorithms defines and generates a propositional language. Specifically, we show that the trace of a search can be interpreted as a combinational circuit, and a search algorithm then defines a propositional language consisting of the circuits generated across all possible executions of the algorithm. In particular, we show that several versions of exhaustive DPLL search correspond to such well-known languages as FBDD, OBDD, and a precisely defined subset of d-DNNF. By thus mapping search algorithms to propositional languages, we provide a uniform and practical framework in which successful search techniques can be harnessed for compilation of knowledge into various languages of interest, and a new methodology whereby the power and limitations of search algorithms can be understood by looking up the tractability and succinctness of the corresponding propositional languages.
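
    As a rough illustration of the trace-as-circuit idea, the following Python sketch (our own, not the paper's code) runs exhaustive DPLL under a fixed variable order and caches identical sub-formulas, so repeated sub-traces are shared; the recorded trace is then an OBDD-like DAG over decision nodes and SAT/UNSAT sinks.

        # A clause is a frozenset of integer literals (+v / -v).  Exhaustive
        # DPLL with a fixed variable order; caching identical sub-formulas
        # merges identical sub-traces, so the recorded trace is a DAG.

        def condition(cnf, lit):
            """Assert literal `lit`: drop satisfied clauses, shrink the rest."""
            out = set()
            for clause in cnf:
                if lit in clause:
                    continue                     # clause satisfied
                if -lit in clause:
                    clause = clause - {-lit}     # opposite literal falsified
                    if not clause:
                        return None              # empty clause: branch is UNSAT
                out.add(clause)
            return frozenset(out)

        def compile_trace(cnf, order):
            """Root of the decision DAG generated by exhaustive DPLL."""
            cache = {}
            def dpll(cnf, depth):
                if cnf is None:
                    return "UNSAT"
                if not cnf:
                    return "SAT"
                key = (cnf, depth)
                if key not in cache:
                    v = order[depth]
                    cache[key] = (v,
                                  dpll(condition(cnf, -v), depth + 1),  # low child
                                  dpll(condition(cnf, v), depth + 1))   # high child
                return cache[key]
            return dpll(frozenset(cnf), 0)

        # (x1 or x2) and (not x1 or x2): both branches on x1 leave {x2},
        # so the two children are one shared, cached node.
        print(compile_trace([frozenset({1, 2}), frozenset({-1, 2})], order=[1, 2]))

    In the two-clause example, both branches on x1 reduce to the same sub-formula, so the cached node is shared between them, which is precisely how a search trace becomes a circuit rather than a tree.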

    Reduced Ordered Binary Decision Diagram with Implied Literals: A New Knowledge Compilation Approach

    Knowledge compilation is an approach to tackling the computational intractability of general reasoning problems. Under this approach, knowledge bases are converted off-line into a target compilation language that is tractable for on-line querying. The reduced ordered binary decision diagram (ROBDD) is one of the most influential target languages. We generalize ROBDD by associating implied literals with each node; the new language is called reduced ordered binary decision diagram with implied literals (ROBDD-L). We then discuss a family of subsets of ROBDD-L called ROBDD-i, with precisely i implied literals (0 ≤ i ≤ ∞). In particular, ROBDD-0 is isomorphic to ROBDD, while ROBDD-∞ requires that each node be associated with as many implied literals as possible. We show that ROBDD-i is unique with respect to a given variable order, and that ROBDD-∞ is the most succinct subset of ROBDD-L and meets most of the querying requirements in the knowledge compilation map. Finally, we propose an ROBDD-i compilation algorithm for any i and an ROBDD-∞ compilation algorithm. Based on these, we implement an ROBDD-L package called BDDjLu and draw some conclusions from preliminary experimental results: ROBDD-∞ is clearly smaller than ROBDD on all benchmarks; ROBDD-∞ is smaller than d-DNNF on the benchmarks whose compilation results are relatively small; and it appears better to transform ROBDDs-∞ into FBDDs and ROBDDs than to compile the benchmarks directly.
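
    A minimal sketch of the idea under our own illustrative encoding (not the BDDjLu package): here a node denotes the conjunction of its implied literals with the usual if-then-else on its decision variable, so literals fixed in every model can live in annotations instead of costing decision nodes.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Node:
            var: int                          # decision variable
            low: object                       # child when var is false ("F"/"T"/Node)
            high: object                      # child when var is true
            implied: frozenset = frozenset()  # literals implied on entering the node

        def evaluate(node, assignment):
            """A node denotes (AND of implied literals) AND ite(var, high, low)."""
            while isinstance(node, Node):
                for lit in node.implied:
                    if assignment[abs(lit)] != (lit > 0):
                        return False          # an implied literal is violated
                node = node.high if assignment[node.var] else node.low
            return node == "T"

        # f = x1 & x2 & x3 as a plain ROBDD: a chain of three decision nodes.
        robdd = Node(1, "F", Node(2, "F", Node(3, "F", "T")))

        # ROBDD-infinity style: x2 and x3 hold in every model of f, so the
        # chain collapses into one decision node annotated with them.
        robdd_l = Node(1, "F", "T", implied=frozenset({2, 3}))

        env = {1: True, 2: True, 3: True}
        print(evaluate(robdd, env), evaluate(robdd_l, env))   # True True
        env = {1: True, 2: False, 3: True}
        print(evaluate(robdd, env), evaluate(robdd_l, env))   # False False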

    Symbolic search and abstraction heuristics for cost-optimal planning in automated planning

    Domain-independent planning is the problem of finding a sequence of actions for achieving a goal from an initial state, assuming that actions have deterministic effects. It is domain-independent because planners take as input the description of a problem and must solve it without any additional information.
In this thesis, we deal with cost-optimal planning problems, in which actions have an associated cost and the planner must find a plan and prove that no other plan of lower cost exists. Most cost-optimal planners are based on explicit-state search. While this has undoubtedly been the dominant approach to cost-optimal planning in recent years, symbolic search is an interesting alternative. In symbolic search, sets of states are succinctly represented as binary decision diagrams (BDDs). The BDD representation not only reduces the memory needed to store sets of states, but also allows the planner to manipulate sets of states efficiently, reducing search time. We propose two orthogonal enhancements for symbolic search planning. On the one hand, we study different methods for image computation, which is usually the bottleneck of symbolic search planners. On the other hand, we analyze how to exploit state invariants to prune symbolic search. Our techniques significantly improve the performance of symbolic search algorithms in most benchmark domains. Moreover, the enhanced version of symbolic bidirectional search is one of the strongest approaches to domain-independent planning even though it does not use any heuristic. Explicit-state search planners are commonly guided by admissible heuristics, which optimistically estimate the cost from any state to the goal. Heuristics are automatically derived from the problem description and can be classified into different families according to their underlying ideas. To bring the improvements in heuristics made for explicit-state search over to symbolic search, we analyze two types of abstraction heuristics: pattern databases (PDBs) and their generalization, merge-and-shrink (M&S). While PDBs had already been used in symbolic search, we analyze the use of the more general M&S heuristics. We show that certain types of M&S heuristics (those generated with a linear merging strategy) can be represented as BDDs with at most polynomial overhead and can thus be used efficiently in symbolic search. We also propose a new heuristic, symbolic perimeter merge-and-shrink (SPM&S), which combines the strength of symbolic regression search with the flexibility of M&S heuristics. Our experiments show that SPM&S is able to beat not only the two techniques it combines but also other state-of-the-art heuristics. Finally, we integrate our symbolic perimeter abstraction heuristics into symbolic bidirectional search, where the heuristic is computed by means of another symbolic bidirectional search in an abstract state space. We show that, even though the combination of symbolic bidirectional search and abstraction heuristics has an overall performance similar to the simpler symbolic bidirectional blind search, it can sometimes solve more problems in particular domains. In summary, this thesis studies different enhancements for symbolic search planning. We implement several symbolic search planners based on bidirectional search and perimeter abstraction heuristics. Experimental results show that the resulting planners are highly competitive and often outperform other state-of-the-art planners.
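
    The following toy sketch mirrors the structure of symbolic forward search, with Python frozensets standing in for BDDs; a real symbolic planner would represent each layer as a BDD (e.g. via CUDD) so that the image operation manipulates the whole set at once. The names and the example transition system are illustrative only.

        # Sets of states play the role of BDDs; image() maps an entire set
        # of states to all successors in one step.

        def image(states, actions):
            """Successors of a whole set of states under all actions."""
            return frozenset(a(s) for s in states for a in actions
                             if a(s) is not None)

        def symbolic_bfs(init, goal_test, actions):
            """Breadth-first search over *sets* of states ('symbolic' layers)."""
            layer, reached, depth = frozenset([init]), frozenset([init]), 0
            while layer:
                if any(goal_test(s) for s in layer):
                    return depth
                layer = image(layer, actions) - reached   # frontier minus visited
                reached |= layer
                depth += 1
            return None

        # Example: states are integers, actions add 2 or 3, goal is 7.
        actions = [lambda s: s + 2 if s + 2 <= 10 else None,
                   lambda s: s + 3 if s + 3 <= 10 else None]
        print(symbolic_bfs(0, lambda s: s == 7, actions))   # -> 3 (e.g. 2+2+3)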

    Approximate logic circuits: Theory and applications

    CMOS technology scaling, the process of shrinking transistor dimensions based on Moore's law, has been the thrust behind increasingly powerful integrated circuits for over half a century. As dimensions are scaled down to a few tens of nanometers, process and environmental variations can significantly alter transistor characteristics, degrading reliability and reducing the performance gains of CMOS designs with technology scaling. Although the design solutions proposed in recent years to improve the reliability of CMOS designs are power-efficient, the performance penalty associated with them further reduces performance gains with technology scaling, and hence they are not well suited to high-performance designs. This thesis proposes approximate logic circuits as a new logic synthesis paradigm for reliable, high-performance computing systems. Given a specification, an approximate logic circuit is functionally equivalent to the given specification for a "significant" portion of the input space, but has smaller delay and power than a circuit implementation of the original specification. The contributions of this thesis include (i) a general theory of approximation and efficient algorithms for automated synthesis of approximations for unrestricted random logic circuits, (ii) logic design solutions based on approximate circuits to improve the reliability of designs with negligible performance penalty, and (iii) efficient decomposition algorithms based on approximate circuits to improve the performance of designs during logic synthesis. The thesis concludes with other potential applications of approximate circuits and identifies open problems in logic decomposition and approximate circuit synthesis.
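
    A toy illustration of the notion (the two functions below are ours, not from the thesis): an approximate circuit drops part of the specification's logic, reducing depth, while still agreeing with the specification on most of the input space.

        import itertools

        def spec(a, b, c, d):
            return (a & b) & (c & d)          # exact 4-input AND, depth 2

        def approx(a, b, c, d):
            return a & b & c                  # drops d: depth-reduced approximation

        agree = sum(spec(*x) == approx(*x)
                    for x in itertools.product((0, 1), repeat=4))
        print(f"agreement: {agree}/16")       # 15/16: differs only on a=b=c=1, d=0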

    On Lower Bounds for Parity Branching Programs

    This thesis is concerned with the complexity of parity branching programs. Superpolynomial lower bounds are proved for several variants, namely for well-structured graph-driven parity branching programs, general graph-driven parity branching programs, and sums of graph-driven parity branching programs.
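
    For concreteness, a parity branching program accepts an input exactly when the number of source-to-accept paths consistent with that input is odd; nodes may have several outgoing edges per variable value, so paths can cancel modulo 2. A minimal evaluator under our own graph encoding:

        # Nodes: name -> (var, [successors if var=0], [successors if var=1]);
        # sinks are the strings "acc" and "rej".

        def count_paths(bp, node, x):
            """Number of source-to-sink accepting paths consistent with x."""
            if node == "acc":
                return 1
            if node == "rej":
                return 0
            var, if0, if1 = bp[node]
            return sum(count_paths(bp, s, x) for s in (if1 if x[var] else if0))

        def parity_bp_accepts(bp, source, x):
            return count_paths(bp, source, x) % 2 == 1

        # Computes x0 == x1 (XNOR): on x0=1 the source has TWO successors,
        # and on x0=1, x1=0 the two accepting paths cancel modulo 2.
        bp = {
            "s":  (0, ["n0"], ["n0", "n1"]),
            "n0": (1, ["acc"], ["rej"]),
            "n1": (1, ["acc"], ["acc"]),
        }
        print(parity_bp_accepts(bp, "s", {0: 1, 1: 0}))  # False (2 paths, even)
        print(parity_bp_accepts(bp, "s", {0: 0, 1: 0}))  # True  (1 path)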

    Systematic delay-driven power optimisation and power-driven delay optimisation of combinational circuits

    With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods for optimising either area or timing, while power optimisation often relies on heuristics specific to a particular design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how can we build a design flow that incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into it. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure that can easily incorporate information about delay, power, and area, and that allows optimisation algorithms to be applied. In particular we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or delay. Finally, the third question this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay; similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. Switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under zero-delay and non-zero-delay models. We then introduce several reordering rules, applied to the AIG nodes to minimise switching power or the longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for locating a good approximation to the global optimum of a given function in a large search space. We use SA to decide probabilistically between moving from one optimised solution to another, such that dynamic power is optimised under given delay constraints and delay is optimised under given power constraints. A good approximation to the globally optimal solution under the energy constraint is obtained. Uniform Cost Search (UCS) is a search algorithm for traversing a weighted tree or graph. We use UCS to find, within the AIG network, a specific node order for applying the reordering rules. After the reordering rules are applied, the AIG network is mapped to a netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to a netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts in ABC. A reduction of 23% in power and 15% in delay with minimal overhead is achieved compared to the best known ABC results. Our approach has also been applied to a number of processors with combinational and sequential components, and significant savings are achieved.
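
    A minimal sketch of one ingredient of such a flow, under the usual signal-independence assumption (our simplified model, not the thesis code): propagating static 1-probabilities through AIG nodes with optionally complemented fanins, and reading off each signal's switching activity 2p(1-p) under the zero-delay model.

        class Aig:
            def __init__(self):
                self.prob = {}                  # node name -> P(signal = 1)

            def pi(self, name, p=0.5):
                """Primary input with static 1-probability p."""
                self.prob[name] = p
                return name

            def land(self, name, a, b, inv_a=False, inv_b=False):
                """AND node; fanins may be complemented (the 'inverter' edges)."""
                pa = 1 - self.prob[a] if inv_a else self.prob[a]
                pb = 1 - self.prob[b] if inv_b else self.prob[b]
                self.prob[name] = pa * pb       # independence assumption
                return name

            def activity(self, name):
                """Switching activity 2p(1-p) under the zero-delay model."""
                p = self.prob[name]
                return 2 * p * (1 - p)

        g = Aig()
        a, b, c = g.pi("a"), g.pi("b"), g.pi("c")
        t = g.land("t", a, b)                   # t = a & b,  P = 0.25
        f = g.land("f", t, c, inv_b=True)       # f = t & !c, P = 0.125
        print(g.activity("t"), g.activity("f")) # 0.375  0.21875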

    Learning understandable classifier models.

    The topic of this dissertation is the automation of the process of extracting understandable patterns and rules from data. An unprecedented amount of data is available to anyone with a computer connected to the Internet. The disciplines of Data Mining and Machine Learning have emerged over the last two decades to face this challenge, leading to the development of many tools and methods. These tools often produce models that make very accurate predictions about previously unseen data. However, models built by the most accurate methods are usually hard for humans to understand or interpret. In consequence, they deliver only decisions, without any explanation, and hence do not directly lead to the acquisition of new knowledge. This dissertation contributes to bridging the gap between accurate opaque models and less accurate but more transparent ones. The dissertation first defines the problem of learning from data. It surveys state-of-the-art methods for supervised learning of both understandable and opaque models, as well as unsupervised methods that detect features present in the data. It describes popular methods of rule extraction that rewrite unintelligible models into an understandable form, and discusses the limitations of rule extraction. A novel definition of understandability, which ties computational complexity to learning, is provided to show that rule extraction is an NP-hard problem. Next, it discusses whether one can expect that even an accurate classifier has learned new knowledge. The survey ends with a presentation of two approaches to building understandable classifiers. On the one hand, understandable models must be able to accurately describe relations in the data. On the other hand, describing the output of a system in terms of its input often requires the introduction of intermediate concepts, called features. It is therefore crucial to develop methods that describe the data with understandable features and can use those features to express the relation that describes the data. The novel contributions of this thesis follow the survey. Two families of rule extraction algorithms are considered. First, a method that can work with any opaque classifier is introduced: artificial training patterns are generated in a mathematically sound way and used to train more accurate understandable models. Subsequently, two novel algorithms that require the opaque model to be a neural network are presented; they rely on access to the network's weights and biases to induce rules encoded as decision diagrams. Finally, the topic of feature extraction is considered. The impact of imposing non-negativity constraints on the weights of a neural network is examined: it is proved that a three-layer network with non-negative weights can shatter any given set of points, and experiments are conducted to assess the accuracy and interpretability of such networks. Then, a novel path-following algorithm that finds robust sparse encodings of data is presented. In summary, this dissertation contributes to the improved understandability of classifiers in several tangible and original ways. It introduces three distinct aspects of achieving this goal: infusion of additional patterns from the underlying pattern distribution into rule learners, derivation of decision diagrams from neural networks, and sparse coding with neural networks with non-negative weights.
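
    A sketch of the artificial-training-patterns idea: sample new points, label them with the opaque model, and fit a transparent learner on the enlarged set. The dataset, models, and sampling scheme below are illustrative choices, not those of the dissertation.

        import numpy as np
        from sklearn.datasets import make_moons
        from sklearn.neural_network import MLPClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
        opaque = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                               random_state=0).fit(X, y)

        # Generate artificial patterns near the data and label them with the
        # opaque model -- the transparent learner then mimics its behaviour.
        rng = np.random.default_rng(0)
        X_art = X + rng.normal(scale=0.15, size=X.shape)
        y_art = opaque.predict(X_art)

        surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
        surrogate.fit(np.vstack([X, X_art]),
                      np.concatenate([opaque.predict(X), y_art]))

        # Fidelity: how often the understandable model agrees with the opaque one.
        fidelity = (surrogate.predict(X) == opaque.predict(X)).mean()
        print(f"fidelity to opaque model: {fidelity:.2f}")
        print(export_text(surrogate, feature_names=["x0", "x1"]))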

    Computing explanations for interactive constraint-based systems

    Constraint programming has emerged as a successful paradigm for modelling combinatorial problems arising from practical situations. In many of those situations, we are not provided with an immutable set of constraints. Instead, a user will modify his requirements, in an interactive fashion, until he is satisfied with a solution. Examples of such applications include, amongst others, model-based diagnosis, expert systems, and product configurators. The system the user interacts with must be able to assist by showing the consequences of the stated requirements. Explanations are the ideal tool for providing this assistance. However, existing notions of explanation fail to provide sufficient information. We define new forms of explanation that aim to be more informative. Even though explanation generation is a very hard task, in the applications we consider we must provide a satisfactory level of interactivity, and therefore cannot afford long computation times. We introduce the concept of representative sets of relaxations, a compact set of relaxations that shows the user at least one way to satisfy each of his requirements and at least one way to relax them, and present an algorithm that efficiently computes such sets. We introduce the concept of most soluble relaxations, which maximise the number of products they allow, and present algorithms that compute such relaxations in times compatible with interactivity, making interchangeable use of different types of compiled representations. We propose to generalise the concept of prime implicates to constraint problems through the concept of domain consequences, and suggest generating them as a compilation strategy. This sets out a new approach to compilation and allows explanation-related queries to be addressed efficiently. We define ordered automata to compactly represent large sets of domain consequences, in a way orthogonal to existing compilation techniques that represent large sets of solutions.
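
    As a naive sketch of what a relaxation is (the thesis's algorithms for representative sets and most soluble relaxations are far more refined): greedily grow a maximal satisfiable subset of the user's requirements over a finite catalogue and report the dropped requirements as the relaxation. The constraint encoding is our own.

        def maximal_relaxation(catalogue, requirements):
            """Greedy maximal satisfiable subset of (name, predicate) pairs."""
            kept, dropped = [], []
            for name, pred in requirements:
                trial = kept + [(name, pred)]
                # satisfiable iff some product meets every kept requirement
                if any(all(p(item) for _, p in trial) for item in catalogue):
                    kept = trial
                else:
                    dropped.append(name)
            return kept, dropped

        catalogue = [{"price": 900, "ram": 16}, {"price": 600, "ram": 8}]
        requirements = [
            ("cheap",   lambda x: x["price"] <= 700),
            ("big_ram", lambda x: x["ram"] >= 16),
            ("mid",     lambda x: x["price"] <= 1000),
        ]
        kept, dropped = maximal_relaxation(catalogue, requirements)
        print([n for n, _ in kept], dropped)   # ['cheap', 'mid'] ['big_ram']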

    Activity of diverse chalcones against several targets: statistical analysis of a high-throughput virtual screen of a custom chalcone library

    Molecules of the chalcone family are well known to have therapeutic properties (anti-inflammatory, anti-microbial, anti-cancer, etc.). However, the mechanism of action is in some cases not well understood. A virtual library of this family of compounds was constructed using custom scripts based on the aldol condensation, and the library was further extended to analogues by expansion of the α,β-unsaturated ketone linker. Available, purchasable acetophenone and benzaldehyde derivatives were used as the basis for designing the chalcone virtual library. 8063 chalcones were constructed and geometrically optimized with Gaussian 09. Their physicochemical characteristics relating to the Lipinski rules were analyzed with KNIME and the CDK. The entire library was then docked against several targets, including HIV-1 integrase, MRSA pyruvate kinase, HSP90, COX-1, COX-2, ALR2, MAOA, MAOB, acetylcholinesterase, butyrylcholinesterase, and PLA2. With the exception of MAOA, which does not have a ligand in its crystal structure, all dockings were validated by redocking the original ligand reported in the literature. These targets are known in the literature to be inhibited by chalcone derivatives; however, the specificity of the particular known chalcone inhibitors for the particular targets is not known. To this end, the performance of the generated chalcone library against the list of targets was of interest. The binding energy of the ligand-protein complexes was generally good across the library. Statistical analyses, including principal component analysis and hierarchical clustering, were performed to look for physical or chemical characteristics that might explain which chalcone features affect the binding energy of the ligand-protein complexes. The spherical polar coordinates defining the orientation of the binding poses were also calculated and used in the statistical analysis. This analysis allowed us to hypothesize the importance of the radial distances and polar angles of key atoms in the chalcones in binding to the pyruvate kinase crystal structure. This was validated by docking another small library of compound models in which the α,β-unsaturated ketone chain of the chalcone was replaced by incrementally longer conjugated chains. Further studies on the chalcones themselves revealed rotameric systems in both cis- and trans-configurations (which may impact binding); the effect of Topliss-based modification and its impact on binding to HSP90 was also studied. Molecular dynamics confirmed good binding of the identified chalcone hits.
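
    A sketch of the statistical step, with random data standing in for the real descriptor table: project a ligand-by-descriptor matrix onto its principal components and check how the leading components correlate with docking score.

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 6))            # 200 ligands x 6 descriptors
        energy = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=200)

        Xc = X - X.mean(axis=0)                  # centre before PCA
        _, s, vt = np.linalg.svd(Xc, full_matrices=False)
        scores = Xc @ vt.T                       # principal-component scores
        explained = s**2 / np.sum(s**2)          # variance explained per PC

        print("variance explained:", np.round(explained[:3], 3))
        print("corr(PC1, energy):",
              np.round(np.corrcoef(scores[:, 0], energy)[0, 1], 3))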