1,947 research outputs found

    Data Correcting Algorithms in Combinatorial Optimization

    Get PDF
    This paper describes data correcting algorithms. It provides the theory behind the algorithms and presents the implementation details and computational experience with these algorithms on the asymmetric traveling salesperson problem, the problem of maximizing submodular functions, and the simple plant location problem.
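    As a hedged illustration of the submodular maximization problem class the paper targets (not of the data-correcting method itself, which is a bound-based exact approach), here is a standard greedy sketch for monotone submodular maximization on a made-up coverage instance:

```python
# Greedy maximization of a monotone submodular function (set coverage).
# The instance and set names below are invented for illustration.
def coverage(chosen, universe_sets):
    """Submodular objective: number of elements covered by the chosen sets."""
    covered = set()
    for i in chosen:
        covered |= universe_sets[i]
    return len(covered)

def greedy_max_coverage(universe_sets, k):
    """Pick k sets, each time taking the one with the largest marginal gain."""
    chosen = []
    for _ in range(k):
        best = max((i for i in range(len(universe_sets)) if i not in chosen),
                   key=lambda i: coverage(chosen + [i], universe_sets))
        chosen.append(best)
    return chosen

sets_ = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
picked = greedy_max_coverage(sets_, 2)   # covers all six elements
```

For monotone submodular objectives this greedy is the classical (1 - 1/e)-approximation; the data-correcting algorithms of the paper instead pursue exact optima with a provable accuracy bound.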

    A Combined Gate Replacement and Input Vector Control Approach

    Get PDF
    Due to the increasing role of leakage power in the total power dissipation of CMOS circuits, leakage reduction has attracted a lot of attention recently. Input vector control (IVC) takes advantage of the transistor stack effect to apply the minimum leakage vector (MLV) to the primary inputs of the circuit during standby mode. However, IVC techniques become less effective for circuits of large logic depth because the MLV at the primary inputs has little impact on internal gates at high logic levels. In this paper, we propose a technique to overcome this limitation by directly controlling the inputs of the internal gates that are in their worst leakage states. Specifically, we propose a gate replacement technique that replaces such gates with other library gates while maintaining the circuit's correct functionality in the active mode. This modification of the circuit does not require changes to the design flow, but it opens the door for further leakage reduction when the MLV is not effective. We then describe a divide-and-conquer approach that combines the gate replacement and input vector control techniques. It integrates an algorithm that finds the optimal MLV for tree circuits, a fast gate replacement heuristic, and a genetic algorithm that connects the tree circuits. We have conducted experiments on all the MCNC91 benchmark circuits. The results reveal that 1) the gate replacement technique by itself can provide 10% more leakage current reduction than the best known IVC methods, with no delay penalty and little area increase; 2) the divide-and-conquer approach outperforms the best pure IVC method by 24% and the existing control point insertion method by 12%; and 3) when we obtain the optimal MLV for small circuits by exhaustive search, the proposed gate replacement alone can still reduce leakage current by 13%, while the divide-and-conquer approach reduces it by 17%.
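    The MLV search at the heart of IVC can be illustrated on a toy example. The sketch below exhaustively searches for the minimum leakage vector of a hypothetical two-gate NAND circuit; the per-input-state leakage numbers are invented and stand in for the stack-effect-dependent values a real library would provide:

```python
from itertools import product

# Hypothetical per-gate leakage table (arbitrary units): leakage of a
# 2-input NAND depends on its input state via the transistor stack effect.
NAND2_LEAK = {(0, 0): 1, (0, 1): 3, (1, 0): 2, (1, 1): 5}

def nand(a, b):
    return 1 - (a & b)

def circuit_leakage(pi):
    """Total standby leakage of a toy circuit: g2 = NAND(NAND(a, b), c)."""
    a, b, c = pi
    g1 = nand(a, b)                       # internal net
    return NAND2_LEAK[(a, b)] + NAND2_LEAK[(g1, c)]

def find_mlv(n_inputs=3):
    """Exhaustive MLV search -- feasible only for small input counts."""
    return min(product((0, 1), repeat=n_inputs), key=circuit_leakage)

mlv = find_mlv()
```

Note how the internal gate g2 sees g1 = 1 for most primary-input vectors regardless of the MLV, which is exactly the large-logic-depth limitation the gate replacement technique targets.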

    Invariant Generation through Strategy Iteration in Succinctly Represented Control Flow Graphs

    Full text link
    We consider the problem of computing numerical invariants of programs, for instance bounds on the values of numerical program variables. More specifically, we study the problem of performing static analysis by abstract interpretation using template linear constraint domains. Such invariants can be obtained by Kleene iterations that are, in order to guarantee termination, accelerated by widening operators. In many cases, however, applying this form of extrapolation leads to invariants that are weaker than the strongest inductive invariant that can be expressed within the abstract domain in use. Another well-known source of imprecision of traditional abstract interpretation techniques stems from their use of join operators at merge nodes in the control flow graph. The mentioned weaknesses may prevent these methods from proving safety properties. The technique we develop in this article addresses both of these issues: contrary to Kleene iterations accelerated by widening operators, it is guaranteed to yield the strongest inductive invariant that can be expressed within the template linear constraint domain in use. It also eschews join operators by distinguishing all paths of loop-free code segments. Formally speaking, our technique computes the least fixpoint within a given template linear constraint domain of a transition relation that is succinctly expressed as an existentially quantified linear real arithmetic formula. In contrast to previously published techniques that rely on quantifier elimination, our algorithm is proved to have optimal complexity: we prove that the decision problem associated with our fixpoint problem is in the second level of the polynomial-time hierarchy. (Comment: 35 pages; conference version published at ESOP 2011; this version is a CoRR version of our submission to Logical Methods in Computer Science.)
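    The widening imprecision the abstract describes can be seen on a tiny interval-domain example. The sketch below (all names and the widening delay are our own) runs a Kleene iteration with a standard widening on `i = 0; while (i < 100) i += 1` and loses the upper bound, whereas the strongest inductive interval invariant at the loop head is [0, 100]:

```python
WIDEN_AFTER = 3  # arbitrary delay before widening kicks in

def transfer(lo, hi):
    """One step at the loop head: join the entry state [0,0] (folded into
    the current interval) with the guarded, incremented loop-body state."""
    g_lo, g_hi = lo, min(hi, 99)          # guard i < 100
    return min(lo, g_lo + 1), max(hi, g_hi + 1)

def kleene_with_widening():
    lo, hi = 0, 0                         # initial state i = 0
    for it in range(1000):
        nlo, nhi = transfer(lo, hi)
        if (nlo, nhi) == (lo, hi):
            return lo, hi                 # fixpoint reached
        if it >= WIDEN_AFTER:
            # standard interval widening: unstable bounds jump to infinity
            lo = lo if nlo >= lo else float('-inf')
            hi = hi if nhi <= hi else float('inf')
        else:
            lo, hi = nlo, nhi
    return lo, hi
```

The iteration stabilizes at [0, +inf), even though [0, 100] is an inductive fixpoint of `transfer`. Strategy iteration, as in the paper, is guaranteed to return the least fixpoint [0, 100]; the sketch only reproduces the widening-based baseline it improves on.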

    A contribution to the evaluation and optimization of networks reliability

    Get PDF
    Efficient computation of system reliability is required in many sensitive networks. Despite the increased efficiency of computers and the proliferation of algorithms, the problem of finding good solutions quickly for large systems remains open. Recently, efficient computation techniques have been recognized as significant advances toward solving the problem in a reasonable amount of time; however, they apply only to special categories of networks, and more effort is still needed to arrive at a unified method giving exact solutions. Assessing the reliability of networks is a very complex combinatorial problem that requires powerful computing resources. Several methods have been proposed in the literature; some have been implemented, notably minimal-set enumeration and factoring methods, while others remained purely theoretical. This thesis treats the evaluation and optimization of network reliability. Several issues are addressed, including the development of a methodology for modeling networks with a view to evaluating their reliability. This methodology was validated on a wide-area radio communication network recently deployed to cover the needs of the whole province of Quebec. Several algorithms were also developed to generate the minimal paths and cuts of a given network; the generation of paths and cuts is an important contribution to the reliability evaluation and optimization process. These algorithms handled several test networks, as well as the provincial radio communication network, quickly and efficiently, and were subsequently used to assess reliability with a method based on binary decision diagrams. Several theoretical contributions also made it possible to establish an exact solution for the reliability of imperfect stochastic networks, in which both edges and nodes are subject to failure, within the framework of factoring methods. From this research, several tools were implemented to evaluate and optimize network reliability; the results clearly show a significant gain in execution time and memory usage compared to many other implementations.
    Key-words: reliability, networks, optimization, binary decision diagrams, minimal path and cut sets, algorithms, Birnbaum index, radio-telecommunication systems, programs
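    One classical way to evaluate two-terminal reliability from minimal path sets, related to the enumeration methods the thesis discusses, is inclusion-exclusion over unions of paths. A minimal sketch on an invented four-edge network (two parallel series paths, with made-up edge reliabilities):

```python
from itertools import combinations

def reliability(min_paths, p):
    """Exact source-terminal reliability by inclusion-exclusion over the
    minimal path sets, assuming independent edge failures with
    working-probabilities p[e]."""
    total = 0.0
    n = len(min_paths)
    for r in range(1, n + 1):
        for combo in combinations(min_paths, r):
            union = set().union(*combo)       # edges needed by all paths in combo
            prob = 1.0
            for e in union:
                prob *= p[e]
            total += (-1) ** (r + 1) * prob
    return total

p = {'a': 0.9, 'b': 0.9, 'c': 0.8, 'd': 0.8}
paths = [{'a', 'b'}, {'c', 'd'}]              # two minimal paths in parallel
R = reliability(paths, p)                     # 0.81 + 0.64 - 0.81 * 0.64
```

Inclusion-exclusion is exponential in the number of minimal paths, which is one reason binary-decision-diagram-based evaluation, as implemented in the thesis, scales better in practice.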

    On the non-termination of MDG-based abstract state enumeration

    Get PDF
    Multiway decision graphs (MDGs) are a new class of decision graphs for representing abstract state machines. They yield a verification technique that addresses the data-width problem by using abstract sorts and uninterpreted functions to represent data values and data operations, respectively. In many cases, however, the technique suffers from non-termination of the state enumeration procedure. This paper presents a novel approach to solving the non-termination problem when the generated set of states, even if infinite, represents a structured domain whose terms (states) share certain repetitive patterns. The approach is based on the schematization method developed by Chen and Hsiang, namely ρ-terms. Schematization provides a suitable formalism for finitely manipulating infinite sets of terms. We illustrate the effectiveness of our method on several examples.

    A Contextual Approach To Learning Collaborative Behavior Via Observation

    Get PDF
    This dissertation describes a novel technique for creating a simulated team of agents through observation. Simulated human teamwork can be used for a number of purposes, such as providing expert examples, automated teammates for training, and realistic opponents in games and training simulations. Current teamwork simulations require that team member behaviors be programmed into the simulation, often at a great cost in time and effort, and none are able to observe a team at work and replicate its teamwork behaviors. Machine learning techniques for learning by observation and learning by demonstration have proven successful at observing the behavior of humans or other software agents and creating a behavior function for a single agent. The research described here combines current research in teamwork simulation and learning by observation to effectively train a multi-agent system in effective team behavior. The dissertation covers the background and related work, gives a detailed description of the learning method, and describes a prototype built to evaluate the approach along with the extensive experimentation conducted with it.

    Program Synthesis and Linear Operator Semantics

    Full text link
    For deterministic and probabilistic programs we investigate the problem of program synthesis and program optimisation (with respect to non-functional properties) in the general setting of global optimisation. This approach is based on the representation of the semantics of programs and program fragments in terms of linear operators, i.e. as matrices. We exploit in particular the fact that we can automatically generate the representation of the semantics of elementary blocks. These can then be used to compositionally assemble the semantics of a whole program, i.e. the generator of the corresponding Discrete Time Markov Chain (DTMC). We also utilise a generalised version of Abstract Interpretation suitable for this linear-algebraic or functional-analytic framework in order to formulate semantic constraints (invariants) and optimisation objectives (for example, performance requirements). (Comment: In Proceedings SYNT 2014, arXiv:1407.493)
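    A minimal sketch of the compositional idea, on an invented one-variable program: each fragment's semantics is a stochastic matrix over the (here two-element) state space, and sequential composition of fragments is matrix multiplication:

```python
def matmul(A, B):
    """Plain matrix product, enough for tiny state spaces."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# State space: {x = 0, x = 1}. Rows are current states, columns next states.
# Fragment 1: with probability 0.5 flip x, otherwise keep it.
F1 = [[0.5, 0.5],
      [0.5, 0.5]]
# Fragment 2: deterministically set x := 0.
F2 = [[1.0, 0.0],
      [1.0, 0.0]]

# Semantics of the sequential composition "F1; F2" is the composed operator,
# i.e. the DTMC generator of the whole two-step program.
SEQ = matmul(F1, F2)
```

Because each row of a fragment's matrix sums to 1, so does each row of the composition, so assembled programs stay valid DTMC generators by construction.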

    MaxSAT Evaluation 2020 : Solver and Benchmark Descriptions

    Get PDF
    Non peer reviewed
