8,350 research outputs found

    Open-ended evolution to discover analogue circuits for beyond conventional applications

    Get PDF
    This is the author's accepted manuscript. The final publication is available at Springer via http://dx.doi.org/10.1007/s10710-012-9163-8. Copyright © Springer 2012.

    Analogue circuits synthesised by means of open-ended evolutionary algorithms often have unconventional designs. However, these circuits are typically highly compact, and the general nature of the evolutionary search methodology allows such designs to be used in many applications. Previous work on the evolutionary design of analogue circuits has focused on circuits that lie well within the analogue application domain. In contrast, our paper considers the evolution of analogue circuits for functions that are usually synthesised in digital logic. We have developed four computational circuits, two voltage distributor circuits and a time interval meter circuit. Despite its simplicity, the approach succeeds on these design tasks owing to its use of substructure reuse and incremental evolution. Our findings expand the range of applications considered suitable for evolutionary electronics.
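    The abstract names two ingredients, substructure reuse and incremental evolution. The sketch below shows how they can combine, assuming a toy string-matching genome in place of a real analogue netlist; the staged targets, fitness function, and fragment library are illustrative inventions, not the paper's setup.

```python
import random

TARGETS = ["0110", "01101001"]  # incremental evolution: easy task, then harder

def fitness(genome, target):
    # Hypothetical fitness: agreement with the current staged target.
    return sum(a == b for a, b in zip("".join(genome), target))

def mutate(genome, library):
    g = list(genome)
    if library and random.random() < 0.3:
        # Substructure reuse: splice in a fragment from an earlier champion.
        i = random.randrange(len(g) + 1)
        g[i:i] = random.choice(library)
    else:
        g[random.randrange(len(g))] = random.choice("01")
    return g[:16]  # cap genome growth

def evolve(pop_size=50, gens=200):
    pop = [[random.choice("01") for _ in range(4)] for _ in range(pop_size)]
    library = []  # champion substructures carried between stages
    for target in TARGETS:  # stages of increasing difficulty
        for _ in range(gens):
            pop.sort(key=lambda g: fitness(g, target), reverse=True)
            survivors = pop[: pop_size // 2]
            pop = survivors + [mutate(random.choice(survivors), library)
                               for _ in range(pop_size - len(survivors))]
        library.append(pop[0][:4])  # keep a fragment of the stage champion
    return "".join(pop[0])

print(evolve())
```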

    First principles planning in BDI systems

    Get PDF
    BDI (Belief, Desire, Intention) agent systems are very powerful, but they lack the ability to incorporate planning. There has been some previous work to incorporate planning within such systems. However, this has either focused on producing low-level plan sequences, losing much of the domain knowledge inherent in BDI systems, or has been limited to HTN (Hierarchical Task Network) planning, which cannot find plans other than those specified by the programmer. In this work, we incorporate classical planning into a BDI agent in a way that respects and makes use of the available procedural domain knowledge, by producing abstract plans that can be executed using such knowledge. In doing so, we recognize an intrinsic tension between striving for abstract plans and, at the same time, ensuring that unnecessary actions, unrelated to the specific goal to be achieved, are avoided. We explore this tension by first characterizing the set of 'ideal' abstract plans that are non-redundant while maximally abstract, and then developing a more limited but feasible account in which an abstract plan is 'specialized' into a new abstract plan that is non-redundant and preserves abstraction as much as possible. We describe an algorithm to compute such a plan specialization, as well as algorithms for the production of a valid high-level plan by deriving abstract planning operators from the BDI program.
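    One way to read the specialization step is as redundancy pruning over an abstract plan. The sketch below is a hedged illustration under toy assumptions: abstract steps are modelled as state-to-state functions, and `execute`, the goal test, and the example actions are hypothetical stand-ins for the procedural knowledge a real BDI program would supply.

```python
def execute(plan, state):
    # Toy executor: each abstract step maps a state dict to a new state dict.
    for step in plan:
        state = step(state)
    return state

def specialise(plan, state, goal):
    """Greedily drop steps whose removal still achieves the goal,
    yielding a non-redundant plan while keeping the remaining steps abstract."""
    result = list(plan)
    i = 0
    while i < len(result):
        candidate = result[:i] + result[i + 1:]
        if goal(execute(candidate, dict(state))):  # goal still reached?
            result = candidate                     # step was redundant: drop it
        else:
            i += 1
    return result

# Hypothetical example: one step is unrelated to the goal and gets pruned.
go_airport = lambda s: {**s, "at": "airport"}
buy_coffee = lambda s: {**s, "coffee": True}
plan = [buy_coffee, go_airport]
slim = specialise(plan, {"at": "home"}, goal=lambda s: s.get("at") == "airport")
print(len(plan), "->", len(slim))  # 2 -> 1
```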

    Highly Automated Formal Verification of Arithmetic Circuits

    Get PDF
    This dissertation investigates the problems of two distinct formal verification techniques for verifying large-scale multiplier circuits and proposes two approaches to overcome some of these problems. The first technique is equivalence checking based on recurrence relations; the second is symbolic computation based on the theory of Gröbner bases. This investigation demonstrates that approaches based on symbolic computation scale better and are more robust than state-of-the-art equivalence checking techniques for the verification of arithmetic circuits. Building on this conclusion, the thesis leverages symbolic computation to verify floating-point designs. It proposes a new algebraic equivalence checking technique: in contrast to classical combinational equivalence checking, it can check the equivalence of two circuits that differ both in the architecture of their arithmetic units and in their control logic, e.g., floating-point multipliers.
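    To make the Gröbner-basis idea concrete, here is a minimal worked instance (a textbook half adder, not an example from the dissertation): the circuit is correct exactly when the specification polynomial reduces to zero modulo the ideal generated by the gate relations.

```latex
% Gate relations (over \mathbb{Q}, with Boolean input constraints):
\[
  c - ab \quad (\text{AND}), \qquad
  s - (a + b - 2ab) \quad (\text{XOR}), \qquad
  a^2 - a, \quad b^2 - b.
\]
% Half-adder specification: the outputs encode the sum of the inputs.
% Reducing it by the gate relations (substitute c and s) gives zero:
\[
  2c + s - (a + b)
  \;\longmapsto\;
  2ab + (a + b - 2ab) - (a + b) = 0.
\]
% A nonzero remainder after reduction would witness a design bug.
```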

    Plethysm and lattice point counting

    Full text link
    We apply lattice point counting methods to compute the multiplicities in the plethysm of $GL(n)$. Our approach gives insight into the asymptotic growth of the plethysm and makes the problem amenable to computer algebra. We prove an old conjecture of Howe on the leading term of plethysm. For any partition $\mu$ of 3, 4, or 5 we obtain an explicit formula in $\lambda$ and $k$ for the multiplicity of $S^\lambda$ in $S^\mu(S^k)$.

    Comment: 25 pages including appendix, 1 figure; computational results and code available at http://thomas-kahle.de/plethysm.html; v2: various improvements; v3: final version appeared in JFoC
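    For readers outside the area, the multiplicities in question are the coefficients in the following standard decomposition (this recap is background, not a result of the paper):

```latex
\[
  S^{\mu}\!\bigl(S^{k}(\mathbb{C}^{n})\bigr)
  \;\cong\;
  \bigoplus_{\lambda} \bigl(S^{\lambda}(\mathbb{C}^{n})\bigr)^{\oplus a_{\lambda,\mu,k}}
  \quad \text{as representations of } GL(n),
\]
% where S^{\mu} denotes the Schur functor for the partition \mu; the paper
% gives, for each partition \mu with |\mu| \in \{3,4,5\}, an explicit formula
% for the multiplicity a_{\lambda,\mu,k} as a function of \lambda and k.
```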

    The Design and Implementation of a High-Performance Polynomial System Solver

    Get PDF
    This thesis examines the algorithmic and practical challenges of solving systems of polynomial equations. We discuss the design and implementation of triangular decomposition to solve polynomial systems exactly by means of symbolic computation. Incremental triangular decomposition solves one equation from the input list of polynomials at a time. Each step may produce several different components (points, curves, surfaces, etc.) of the solution set. Independent components imply that the solving process may proceed on each component concurrently. This so-called component-level parallelism is a theoretical and practical challenge characterized by irregular parallelism. Parallelism here is not an algorithmic property but rather a geometrical property of the particular input system's solution set. Despite these challenges, we have effectively applied parallel computing to triangular decomposition through the layering and cooperation of many parallel code regions. This parallel computing is supported by our generic object-oriented framework based on the dynamic multithreading paradigm. Meanwhile, the required polynomial algebra is supported by an object-oriented framework for algebraic types which allows type safety and mathematical correctness to be determined at compile time. Our software is implemented in C/C++, and we have extensively tested the implementation for correctness and performance on over 3000 polynomial systems that have arisen in practice. The parallel framework has been reused in the implementation of Hensel factorization as a parallel pipeline to compute roots of a polynomial with multivariate power series coefficients. Hensel factorization is one step toward computing the non-trivial limit points of quasi-components.
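    The thesis's solver is a C/C++ library, but the shape of a triangular decomposition is easy to see with an off-the-shelf tool. The sketch below uses SymPy's Gröbner basis routine under a lex order, which for this small zero-dimensional system yields a triangular set solvable by back-substitution; this illustrates the concept only and is not the thesis's implementation.

```python
from sympy import symbols, groebner

x, y = symbols('x y')
system = [x**2 + y**2 - 5, x*y - 2]  # a circle intersected with a hyperbola

# Lex order with y > x eliminates y first, leaving a triangular set:
# one univariate polynomial in x, and one expressing y in terms of x.
G = groebner(system, y, x, order='lex')
for g in G.exprs:
    print(g)

# Solving then proceeds by back-substitution: find the roots of the
# univariate polynomial in x, then recover y from the other polynomial.
```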

    Software Testing and Verification in Climate Model Development

    Get PDF
    Over the past 30 years most climate models have grown from relatively simple representations of a few atmospheric processes into complex multi-disciplinary systems. Computer infrastructure over that period has gone from punch-card mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Existing verification processes for model implementations rely almost exclusively on some combination of detailed analysis of output from full climate simulations and system-level regression tests. In addition to being quite costly in terms of developer time and computing resources, these testing methodologies are limited in the types of defects that can be detected, isolated, and diagnosed. Mitigating these weaknesses of coarse-grained testing with finer-grained "unit" tests has been perceived as cumbersome and counter-productive. In the commercial software sector, recent advances in tools and methodology have led to a renaissance for systematic fine-grained testing. We discuss the availability of analogous tools for scientific software and examine the benefits that similar testing methodologies could bring to climate modeling software. We describe the unique challenges faced when testing complex numerical algorithms and suggest techniques to minimize and/or eliminate the difficulties.
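    As a hedged illustration of the fine-grained testing the abstract advocates, the pytest-style sketch below pins a single numerical kernel with an explicit tolerance and a property check; the kernel and reference value are illustrative, not taken from any real climate model.

```python
import math

def saturation_vapor_pressure(t_kelvin):
    """Hypothetical kernel: a Tetens-style approximation, in pascals."""
    t_c = t_kelvin - 273.15
    return 610.78 * math.exp(17.27 * t_c / (t_c + 237.3))

def test_known_value():
    # Pin one physically sensible reference point with an explicit tolerance,
    # instead of diffing the output of a full climate simulation.
    assert abs(saturation_vapor_pressure(273.15) - 610.78) < 1e-6

def test_monotonicity():
    # Property-style check: saturation pressure rises with temperature.
    temps = [260.0 + 5.0 * i for i in range(10)]
    values = [saturation_vapor_pressure(t) for t in temps]
    assert all(a < b for a, b in zip(values, values[1:]))
```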