
    Transfer Function Synthesis without Quantifier Elimination

    Traditionally, transfer functions have been designed manually for each operation in a program, instruction by instruction. In such a setting, a transfer function describes the semantics of a single instruction, detailing how a given abstract input state is mapped to an abstract output state. The net effect of a sequence of instructions, a basic block, can then be calculated by composing the transfer functions of the constituent instructions. However, precision can be improved by applying a single transfer function that captures the semantics of the block as a whole. Since blocks are program-dependent, this approach necessitates automation. There has thus been growing interest in computing transfer functions automatically, most notably using techniques based on quantifier elimination. Although conceptually elegant, quantifier elimination inevitably induces a computational bottleneck, which limits the applicability of these methods to small blocks. This paper contributes a method for calculating transfer functions that finesses quantifier elimination altogether, and can thus be seen as a response to this problem. The practicality of the method is demonstrated by generating transfer functions for input and output states that are described by linear template constraints, which include intervals and octagons. Comment: 37 pages, extended version of ESOP 2011 paper.
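    To make the precision argument concrete, here is a minimal sketch over the interval domain; the two-instruction block and all variable names are invented for illustration, and this is not the paper's synthesis algorithm. Composing per-instruction interval transfer functions forgets that y equals x, while a transfer function derived for the block as a whole retains it.

```python
# A minimal sketch, assuming a toy interval domain: per-instruction
# composition vs. a hand-derived whole-block transfer function for the
# (invented) block  y := x ; z := x - y.

def sub(a, b):
    # interval subtraction: [a0, a1] - [b0, b1]
    return (a[0] - b[1], a[1] - b[0])

def per_instruction(state):
    s = dict(state)
    s["y"] = s["x"]                 # transfer function of  y := x
    s["z"] = sub(s["x"], s["y"])    # transfer function of  z := x - y
    return s                        # the relation y = x has been lost

def whole_block(state):
    s = dict(state)
    s["y"] = s["x"]
    s["z"] = (0, 0)                 # block semantics: z = x - x = 0
    return s

state = {"x": (0, 10)}
print(per_instruction(state)["z"])  # (-10, 10): imprecise
print(whole_block(state)["z"])      # (0, 0): exact block-level result
```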

    Algorithmic Verification of Continuous and Hybrid Systems

    We provide a tutorial introduction to reachability computation, a class of computational techniques that extends verification technology to continuous and hybrid systems. For open, under-determined systems, this technique can sometimes replace an infinite number of simulations. Comment: In Proceedings INFINITY 2013, arXiv:1402.661
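    As a flavour of what reachability computation does, the following sketch propagates an entire box of initial states through a discrete-time linear map in one step, instead of simulating individual trajectories. The matrix, the initial box, and the plain interval arithmetic are assumptions made for illustration, not the tutorial's tool set.

```python
# A minimal sketch, assuming a discrete-time linear system x' = A x and an
# interval-box representation of sets. One reach_step covers every initial
# state in the box at once (with overapproximation), which is the point
# that a finite number of point simulations cannot achieve.

def interval_dot(row, box):
    """Interval enclosure of sum_j row[j] * x_j with x_j in box[j]."""
    lo = hi = 0.0
    for a, (l, u) in zip(row, box):
        lo += min(a * l, a * u)
        hi += max(a * l, a * u)
    return (lo, hi)

def reach_step(A, box):
    return [interval_dot(row, box) for row in A]

A = [[0.9, 0.1],      # made-up stable 2x2 dynamics
     [-0.1, 0.9]]
box = [(-1.0, 1.0), (-1.0, 1.0)]   # all initial states at once
for k in range(3):
    box = reach_step(A, box)
    print(k + 1, box)
```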

    Optimization-based domain reduction in guaranteed parameter estimation of nonlinear dynamic systems

    This paper is concerned with guaranteed parameter estimation in nonlinear dynamic systems in a context of bounded measurement error. The problem consists of finding, or approximating as closely as possible, the set of all possible parameter values such that the predicted outputs match the corresponding measurements within prescribed error bounds. An exhaustive search procedure is applied, whereby the parameter set is successively partitioned into smaller boxes and exclusion tests are performed to eliminate some of these boxes, until a prespecified threshold on the approximation level is met. In order to enhance the convergence of this procedure, we investigate the use of optimization-based domain reduction techniques for tightening the parameter boxes before partitioning. We construct such bound-reduction problems as linear programs from the polyhedral relaxation of Taylor models of the predicted outputs. When applied to a simple case study, the proposed approach is found to reduce the computational burden significantly, both in terms of CPU time and number of iterations. © IFAC
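    The exhaustive search loop the paper builds on can be sketched as follows; the model, the data, and the tolerances are invented for illustration, and the sketch deliberately omits the paper's actual contribution, the LP-based domain reduction applied before each bisection.

```python
# A minimal sketch of set inversion by bisection and exclusion, assuming a
# made-up monotone model y(t) = p1 * exp(-p2 * t) with p1, p2 >= 0 so that
# interval enclosures of the prediction are exact.
import math

def predict(box, t):
    """Interval enclosure of p1*exp(-p2*t) over the parameter box."""
    (p1l, p1u), (p2l, p2u) = box
    return (p1l * math.exp(-p2u * t), p1u * math.exp(-p2l * t))

def consistent(box, data, eps):
    # exclusion test: every predicted interval must meet [y - eps, y + eps]
    return all(predict(box, t)[0] <= y + eps and predict(box, t)[1] >= y - eps
               for t, y in data)

def sivia(box, data, eps, tol):
    width, i = max((u - l, i) for i, (l, u) in enumerate(box))
    if not consistent(box, data, eps):
        return []                       # excluded: no parameter here fits
    if width <= tol:
        return [box]                    # small enough: keep this box
    l, u = box[i]; m = 0.5 * (l + u)    # bisect the widest dimension
    left, right = list(box), list(box)
    left[i], right[i] = (l, m), (m, u)
    return sivia(left, data, eps, tol) + sivia(right, data, eps, tol)

data = [(0.0, 2.0), (1.0, 1.2), (2.0, 0.75)]        # (t, measured y)
boxes = sivia([(0.0, 4.0), (0.0, 2.0)], data, eps=0.3, tol=0.1)
print(len(boxes), "boxes enclose the feasible parameter set")
```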

    Coping with lists in the ifcOWL ontology

    Over the past few years, several suggestions have been made of how to convert an EXPRESS schema into an OWL ontology. The conversion from EXPRESS to OWL is of particular use to the architectural design and construction industry, because one of its key data models, the Industry Foundation Classes (IFC), is represented using the EXPRESS information modelling language. In each of these conversion options, the way in which lists are converted (e.g. lists of coordinates, lists of spaces in a floor) is key to the structure and eventual strength of the resulting ontology. In this article, we outline and discuss the main decisions that can be made in converting LIST concepts in EXPRESS to equivalent OWL expressions. This allows one to identify which conversion option is appropriate to support proper and efficient information reuse in the domain of architecture and construction.
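    One family of conversion options represents an ordered EXPRESS LIST as a linked list of cells, which can be sketched with rdflib; the EX namespace, entity name, and coordinate values are made up, and actual ifcOWL proposals typically use a custom list ontology that mirrors this rdf:first/rdf:rest pattern, since rdf:List itself sits awkwardly in OWL DL.

```python
# A minimal sketch, assuming rdflib and an invented namespace: encoding an
# ordered EXPRESS LIST (here, a point's coordinates) as linked-list cells
# so that element order survives the conversion.
from rdflib import Graph, Namespace, BNode, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.org/ifc#")   # hypothetical namespace
g = Graph()

point = EX.CartesianPoint_1
coords = [1.0, 2.5, 0.0]

# Build first/rest cells back to front; a plain multi-valued OWL property
# would be an unordered set and lose the coordinate order.
head = RDF.nil
for value in reversed(coords):
    cell = BNode()
    g.add((cell, RDF.first, Literal(value)))
    g.add((cell, RDF.rest, head))
    head = cell
g.add((point, EX.coordinates, head))

print(g.serialize(format="turtle"))
```

    The trade-off the article weighs is visible even in this sketch: the list cells preserve order, but every element access must walk the rest-chain, which burdens querying and reasoning compared with flatter encodings.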

    Indexed dependence metadata and its applications in software performance optimisation

    To achieve continued performance improvements, modern microprocessor design is tending to concentrate an increasing proportion of hardware on computation units, with less automatic management of data movement and extraction of parallelism. As a result, architectures increasingly include multiple computation cores and complicated, software-managed memory hierarchies. Compilers have difficulty characterising the behaviour of a kernel in a general enough manner to enable automatic generation of efficient code in any but the most straightforward of cases. We propose the concept of indexed dependence metadata to improve application development and mapping onto such architectures. The metadata represent both the iteration space of a kernel and the mapping of that iteration space from a given index to the set of data elements that iteration might use: thus the dependence metadata is indexed by the kernel’s iteration space. This explicit mapping allows the compiler or runtime to optimise the program more efficiently, and improves the program structure for the developer. We argue that this form of explicit interface specification reduces the need for premature, architecture-specific optimisation. It improves program portability, supports intercomponent optimisation and enables generation of efficient data movement code. We offer the following contributions: an introduction to the concept of indexed dependence metadata as a generalisation of stream programming, a demonstration of its advantages in a component programming system, the decoupled access/execute model for C++ programs, and how indexed dependence metadata might be used to improve the programming model for GPU-based designs. Our experimental results with prototype implementations show that indexed dependence metadata supports automatic synthesis of double-buffered data movement for the Cell processor and enables aggressive loop fusion optimisations in image processing, linear algebra and multigrid application case studies.
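    The central idea, a kernel published together with its iteration space and an index-to-data-elements map, can be sketched as follows; the three-point stencil and all names are illustrative assumptions, not the thesis's C++ component system.

```python
# A minimal sketch, assuming a made-up 3-point stencil kernel: the "reads"
# function is the indexed dependence metadata, mapping each iteration index
# to the data elements that iteration will touch.

def stencil_reads(i):
    """Dependence metadata: elements iteration i reads."""
    return [i - 1, i, i + 1]

def stencil_kernel(i, window):
    return sum(window) / 3.0

def run(kernel, reads, space, data):
    out = []
    for i in space:
        # A real runtime could consult `reads` ahead of time to prefetch or
        # double-buffer the next iteration's elements, or to prove two
        # kernels fusible; here we just gather the elements explicitly.
        window = [data[j] for j in reads(i)]
        out.append(kernel(i, window))
    return out

data = list(range(10))
print(run(stencil_kernel, stencil_reads, range(1, 9), data))
```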

    The Convex Hull Problem in Practice: Improving the Running Time of the Double Description Method

    The double description method is a simple but widely used algorithm for computation of extreme points in polyhedral sets. One key aspect of its implementation is the question of how to efficiently test extreme points for adjacency. In this dissertation, two significant contributions related to adjacency testing are presented. First, the currently used data structures are revisited and various optimizations are proposed. Empirical evidence is provided to demonstrate their competitiveness. Second, a new adjacency test is introduced. It is a refinement of the well-known algebraic test, featuring a technique for avoiding redundant computations. Its correctness is formally proven. Its superiority in multiple degenerate scenarios is demonstrated through experimental results. Parallel computation is one further aspect of the double description method covered in this work. A recently introduced divide-and-conquer technique is revisited and considerable practical limitations are demonstrated.
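    For context, the standard combinatorial adjacency test that such refinements compete with can be sketched as follows; the cone and the rays are tiny made-up inputs, and this is not the dissertation's new algebraic test.

```python
# A minimal sketch, assuming the cone A x >= 0: two extreme rays are
# adjacent iff no third ray's zero set (the constraints tight at the ray)
# contains the intersection of their zero sets.
import numpy as np

def zero_set(A, r, tol=1e-9):
    """Indices of constraints of A x >= 0 that are tight at ray r."""
    return frozenset(np.flatnonzero(np.abs(A @ r) < tol))

def adjacent(A, rays, i, j):
    common = zero_set(A, rays[i]) & zero_set(A, rays[j])
    return not any(common <= zero_set(A, rays[k])
                   for k in range(len(rays)) if k not in (i, j))

# Extreme rays of the nonnegative octant x, y, z >= 0 in R^3.
A = np.eye(3)
rays = [np.array([1.0, 0.0, 0.0]),
        np.array([0.0, 1.0, 0.0]),
        np.array([0.0, 0.0, 1.0])]
print(adjacent(A, rays, 0, 1))   # True: e1 and e2 span a 2-face
```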