
    Application of evolutionary computing in the design of high throughput digital filters.


    Engineering Resilient Collective Adaptive Systems by Self-Stabilisation

    Collective adaptive systems are an emerging class of networked computational systems, particularly suited to application domains such as smart cities, complex sensor networks, and the Internet of Things. These systems tend to feature large scale, heterogeneity of communication model (including opportunistic peer-to-peer wireless interaction), and require inherent self-adaptiveness properties to address unforeseen changes in operating conditions. In this context, it is extremely difficult (if not seemingly intractable) to engineer reusable pieces of distributed behaviour so as to make them provably correct and smoothly composable. Building on the field calculus, a computational model (and associated toolchain) capturing the notion of aggregate network-level computation, we address this problem with an engineering methodology coupling formal theory and computer simulation. On the one hand, functional properties are addressed by identifying the largest-to-date field calculus fragment generating self-stabilising behaviour, guaranteed to eventually attain a correct and stable final state despite any transient perturbation in state or topology, and including highly reusable building blocks for information spreading, aggregation, and time evolution. On the other hand, dynamical properties are addressed by simulation, empirically evaluating the different performances that can be obtained by switching between implementations of building blocks with provably equivalent functional properties. Overall, our methodology sheds light on how to identify core building blocks of collective behaviour, and how to select implementations that improve system performance while leaving overall system function and resiliency properties unchanged. Comment: To appear in ACM Transactions on Modeling and Computer Simulation.
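    The "information spreading" building block mentioned in the abstract is, in essence, a distance-to-source gradient that re-converges after any transient perturbation. The sketch below illustrates that idea in Python over a static graph with synchronous rounds; the node/graph representation and function names are illustrative assumptions, not the field calculus toolchain's API.

```python
# Minimal sketch of a self-stabilising "gradient" (distance-to-source) building
# block, iterated synchronously over a static neighbourhood graph. The names and
# the synchronous-rounds model are assumptions, not the field calculus API.

INF = float("inf")

def gradient_round(estimate, sources, neighbours, edge_len):
    """One round: every node recomputes its estimate from scratch."""
    new = {}
    for n in estimate:
        if n in sources:
            new[n] = 0.0
        else:
            # min over neighbours of (their estimate + edge length); INF if isolated
            new[n] = min((estimate[m] + edge_len[(n, m)] for m in neighbours[n]),
                         default=INF)
    return new

def run_to_fixpoint(nodes, sources, neighbours, edge_len, max_rounds=100):
    est = {n: INF for n in nodes}          # any initial state converges
    for _ in range(max_rounds):
        nxt = gradient_round(est, sources, neighbours, edge_len)
        if nxt == est:                     # stable final state reached
            return est
        est = nxt
    return est

if __name__ == "__main__":
    nodes = ["a", "b", "c", "d"]
    neighbours = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    edge_len = {(x, y): 1.0 for x in nodes for y in neighbours[x]}
    print(run_to_fixpoint(nodes, {"a"}, neighbours, edge_len))
    # {'a': 0.0, 'b': 1.0, 'c': 2.0, 'd': 3.0}
```

    Because every round recomputes each estimate purely from the current neighbourhood values, corrupted state or a changed topology is washed out within a number of rounds bounded by the network diameter, which is the intuition behind the self-stabilisation guarantee.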

    Automated Synthesis of Tableau Calculi

    This paper presents a method for synthesising sound and complete tableau calculi. Given a specification of the formal semantics of a logic, the method generates a set of tableau inference rules that can then be used to reason within the logic. The method guarantees that the generated rules form a calculus which is sound and constructively complete. If the logic can be shown to admit finite filtration with respect to a well-defined first-order semantics, then adding a general blocking mechanism provides a terminating tableau calculus. The process of generating tableau rules can be completely automated and produces, together with the blocking mechanism, an automated procedure for generating tableau decision procedures. For illustration we show the workability of the approach for a description logic with transitive roles and propositional intuitionistic logic. Comment: 32 pages.
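    To make concrete what a generated tableau calculus is ultimately used for, here is a toy tableau-style satisfiability check for propositional logic in negation normal form (a Python sketch for illustration only; the paper's synthesised calculi, first-order semantics, and blocking mechanism go well beyond this).

```python
# Toy tableau procedure for propositional logic in negation normal form.
# Formulas: ('var', p), ('not', p), ('and', a, b), ('or', a, b).

def satisfiable(branch):
    """branch: frozenset of NNF formulas; returns True iff some branch stays open."""
    for f in branch:
        if f[0] == "and":                        # alpha rule: extend the branch
            return satisfiable((branch - {f}) | {f[1], f[2]})
        if f[0] == "or":                         # beta rule: split the branch
            rest = branch - {f}
            return satisfiable(rest | {f[1]}) or satisfiable(rest | {f[2]})
    # only literals left: the branch is open iff there is no clash p, not p
    atoms = {f[1] for f in branch if f[0] == "var"}
    negs = {f[1] for f in branch if f[0] == "not"}
    return not (atoms & negs)

if __name__ == "__main__":
    p_and_not_p = ("and", ("var", "p"), ("not", "p"))
    print(satisfiable(frozenset({p_and_not_p})))                         # False
    print(satisfiable(frozenset({("or", ("var", "p"), ("not", "q"))})))  # True
```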

    Genetic Algorithm Controlled Common Subexpression Elimination for Spill-Free Register Allocation

    As code complexity increases, maxlive increases. This is especially true in the case of the Kentucky If-Then-Else architecture proposed for Nanocontrollers. To achieve low circuit complexity, computations are decomposed into bit-level operations, thus generating large blocks of code with complex dependence structures. Additionally, the Nanocontroller architecture allows for only a small number of single-bit registers and no extra memory. The assumption of an infinite number of registers made during code generation becomes a huge problem during register allocation because of the small number of registers and the lack of additional memory. The large basic blocks mean that maxlive almost always exceeds the number of registers, and traditional methods of register allocation such as instruction re-ordering and register spill/reload cannot be applied trivially. This thesis deals with finding a solution to reduce maxlive for successful register allocation using genetic algorithms.
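    Since maxlive is the quantity such a genetic search would minimise, the sketch below shows one way to measure it for a straight-line block (Python; the three-address instruction format and function name are assumptions, not the thesis's actual tooling). A GA over CSE and ordering choices could then use maxlive(block) directly as its fitness.

```python
# Sketch: compute maxlive for a straight-line block of three-address code.
# Instructions are (dest, (src1, src2, ...)). A value is considered live from
# its definition (or block entry, for inputs) to its last use in the block;
# values that are never used are ignored. maxlive is the peak count of
# simultaneously live values.

def maxlive(block):
    defined_at, last_use = {}, {}
    for i, (dest, srcs) in enumerate(block):
        for s in srcs:
            last_use[s] = i
        defined_at.setdefault(dest, i)
    peak = 0
    for i in range(len(block)):
        live = sum(1 for v, last in last_use.items()
                   if defined_at.get(v, -1) <= i <= last)
        peak = max(peak, live)
    return peak

if __name__ == "__main__":
    block = [
        ("t1", ("a", "b")),   # t1 = a op b
        ("t2", ("a", "c")),
        ("t3", ("t1", "t2")),
        ("t4", ("t3", "b")),
    ]
    print(maxlive(block))     # -> 5 (a, b, c, t1, t2 all live at instruction 1)
```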

    Overfitting in Synthesis: Theory and Practice (Extended Version)

    In syntax-guided synthesis (SyGuS), a synthesizer's goal is to automatically generate a program belonging to a grammar of possible implementations that meets a logical specification. We investigate a common limitation across state-of-the-art SyGuS tools that perform counterexample-guided inductive synthesis (CEGIS). We empirically observe that as the expressiveness of the provided grammar increases, the performance of these tools degrades significantly. We claim that this degradation is not only due to a larger search space, but also due to overfitting. We formally define this phenomenon and prove no-free-lunch theorems for SyGuS, which reveal a fundamental tradeoff between synthesizer performance and grammar expressiveness. A standard approach to mitigate overfitting in machine learning is to run multiple learners with varying expressiveness in parallel. We demonstrate that this insight can immediately benefit existing SyGuS tools. We also propose a novel single-threaded technique called hybrid enumeration that interleaves different grammars and outperforms the winner of the 2018 SyGuS competition (Inv track), solving more problems and achieving a 5× mean speedup. Comment: 24 pages (5 pages of appendices), 7 figures, includes proofs of theorems.
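    The idea of varying grammar expressiveness can be illustrated with a toy enumerative synthesizer that interleaves candidate streams from a less and a more expressive grammar and returns the first expression consistent with the input/output examples. This Python sketch is an illustration of the idea only, not the paper's hybrid enumeration algorithm or the SyGuS input format.

```python
# Toy illustration: round-robin enumeration over grammars of different
# expressiveness, checking candidates against input/output examples.
import itertools

def enumerate_exprs(ops, max_size):
    """Yield expression trees over the variable 'x', constants 0/1, and `ops`."""
    by_size = {1: ["x", 0, 1]}
    yield from by_size[1]
    for size in range(2, max_size + 1):
        by_size[size] = []
        for lsize in range(1, size - 1):
            rsize = size - 1 - lsize
            for op, l, r in itertools.product(ops, by_size[lsize], by_size.get(rsize, [])):
                e = (op, l, r)
                by_size[size].append(e)
                yield e

def evaluate(e, x):
    if e == "x":
        return x
    if isinstance(e, int):
        return e
    op, l, r = e
    a, b = evaluate(l, x), evaluate(r, x)
    return a + b if op == "+" else a * b

def hybrid_enumerate(grammars, examples, max_size=7):
    enums = [enumerate_exprs(ops, max_size) for ops in grammars]
    for row in itertools.zip_longest(*enums):          # interleave the grammars
        for e in row:
            if e is not None and all(evaluate(e, x) == y for x, y in examples):
                return e
    return None

if __name__ == "__main__":
    grammars = [["+"], ["+", "*"]]                      # less vs. more expressive
    print(hybrid_enumerate(grammars, [(0, 1), (1, 3), (2, 5)]))  # target: 2*x + 1
```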

    Implementing Explicit and Finding Implicit Sharing in Embedded DSLs

    Aliasing, or sharing, is prominent in many domains, denoting that two differently-named objects are in fact identical: a change in one object (memory cell, circuit terminal, disk block) is instantly reflected in the other. Languages for modelling such domains should let the programmer explicitly define the sharing among objects or expressions. A DSL compiler may find other identical expressions and share them, implicitly. Such common subexpression elimination is crucial to the efficient implementation of DSLs. Sharing is tricky in embedded DSLs, since host aliasing may correspond to copying of the underlying objects rather than their sharing. This tutorial summarizes discussions of implementing sharing in Haskell DSLs for automotive embedded systems and hardware description languages. The technique has since been used in a Haskell SAT solver and the DSL for music synthesis. We demonstrate the embedding in pure Haskell of a simple DSL with a language form for explicit sharing. The DSL also has implicit sharing, implemented via hash-consing. Explicit sharing greatly speeds up hash-consing. The seemingly imperative nature of hash-consing is hidden beneath a simple combinator language. The overall implementation remains pure functional and easy to reason about. Comment: In Proceedings DSL 2011, arXiv:1109.032
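    The hash-consing half of the technique translates readily into a few lines. The sketch below (Python rather than the paper's Haskell; names are made up) interns structurally identical expressions so a backend sees a DAG rather than a tree; explicitly reusing a returned handle, instead of rebuilding the expression, is the analogue of the DSL's explicit-sharing form and skips the table lookup entirely.

```python
# Sketch of implicit sharing via hash-consing for a tiny expression DSL.
# Structurally equal expressions are allocated once and reused, so a later
# netlist/code generator traverses a DAG instead of an exponential tree.

class Builder:
    def __init__(self):
        self.table = {}            # structural key -> node id (the hash-cons table)
        self.nodes = []            # node id -> node description

    def _intern(self, key):
        if key not in self.table:  # first time this structure appears: allocate it
            self.table[key] = len(self.nodes)
            self.nodes.append(key)
        return self.table[key]

    def var(self, name):
        return self._intern(("var", name))

    def add(self, x, y):
        return self._intern(("add", x, y))

    def mul(self, x, y):
        return self._intern(("mul", x, y))

if __name__ == "__main__":
    b = Builder()
    x, y = b.var("x"), b.var("y")
    s = b.add(x, y)
    e = b.mul(s, b.add(x, y))      # the second add(x, y) is shared implicitly
    print(e, b.nodes)
    # -> 3 [('var', 'x'), ('var', 'y'), ('add', 0, 1), ('mul', 2, 2)]
```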

    Automatic Optimizations for Stream-based Monitoring Languages

    Runtime monitors that are specified in a stream-based monitoring language tend to be easier to understand, maintain, and reuse than those written in a standard programming language. Because of their formal semantics, such specification languages are also a natural choice for safety-critical applications. Unlike for standard programming languages, there is, however, so far very little support for automatic code optimization. In this paper, we present the first collection of code transformations for the stream-based monitoring language RTLola. We show that classic compiler optimizations, such as Sparse Conditional Constant Propagation and Common Subexpression Elimination, can be adapted to monitoring specifications. We also develop new transformations -- Pacing Type Refinement and Filter Refinement -- which exploit the specific modular structure of RTLola as well as the implementation freedom afforded by a declarative specification language. We demonstrate the significant impact of the code transformations on benchmarks from the monitoring of unmanned aircraft systems (UAS). Comment: 20th International Conference on Runtime Verification (2020)
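    As a flavour of what such transformations do, the sketch below applies a much-simplified constant-propagation pass to a dictionary of stream definitions (Python; this is not RTLola's IR or syntax, nor the paper's actual pass): streams that depend only on constants are folded and inlined into the remaining, genuinely dynamic streams.

```python
# Much-simplified constant propagation over stream definitions, in the spirit
# of the classic transformations adapted in the paper. Streams defined only
# from constants are folded and inlined.

def try_fold(expr, consts):
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return consts.get(expr)
    op, a, b = expr
    x, y = try_fold(a, consts), try_fold(b, consts)
    if x is None or y is None:
        return None
    return x + y if op == "+" else x * y

def substitute(expr, consts):
    if isinstance(expr, str):
        return consts.get(expr, expr)
    if isinstance(expr, tuple):
        op, a, b = expr
        return (op, substitute(a, consts), substitute(b, consts))
    return expr

def propagate_constants(spec):
    """spec: dict stream -> expr; expr is an int, a stream name, or (op, a, b)."""
    consts, changed = {}, True
    while changed:                                   # iterate to a fixed point
        changed = False
        for name, expr in spec.items():
            val = try_fold(expr, consts)
            if val is not None and name not in consts:
                consts[name] = val
                changed = True
    # keep only the dynamic streams, with known constants substituted in
    return {n: substitute(e, consts) for n, e in spec.items() if n not in consts}, consts

if __name__ == "__main__":
    spec = {
        "threshold": ("+", 5, 5),            # constant stream
        "limit": ("*", "threshold", 2),      # constant after propagation
        "alarm": ("+", "altitude", "limit"), # stays dynamic, constant inlined
    }
    print(propagate_constants(spec))
    # -> ({'alarm': ('+', 'altitude', 20)}, {'threshold': 10, 'limit': 20})
```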