
    PURRS: Towards Computer Algebra Support for Fully Automatic Worst-Case Complexity Analysis

    Fully automatic worst-case complexity analysis has a number of applications in computer-assisted program manipulation. A classical and powerful approach to complexity analysis consists in formally deriving, from the program syntax, a set of constraints expressing bounds on the resources required by the program, which are then solved, possibly applying safe approximations. In several interesting cases, these constraints take the form of recurrence relations. While techniques for solving recurrences are known and implemented in several computer algebra systems, these do not completely fulfill the needs of fully automatic complexity analysis: they only deal with a somewhat restricted class of recurrence relations, sometimes require user intervention, or are restricted to the computation of exact solutions that are often so complex as to be unmanageable, and thus useless in practice. In this paper we briefly describe PURRS, a system and software library aimed at providing all the computer algebra services needed by applications performing or exploiting the results of worst-case complexity analyses. The capabilities of the system are illustrated by means of examples derived from the analysis of programs written in a domain-specific functional programming language for real-time embedded systems. Comment: 6 pages
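    To make the connection between programs and recurrence relations concrete, here is a minimal sketch (not PURRS itself, which is a C++ library): the worst-case cost of a mergesort-style divide-and-conquer routine satisfies the recurrence T(n) = 2T(n/2) + n, and for powers of two the exact solution T(n) = n·log₂(n) + n is exactly the kind of closed form a recurrence solver would return and an analyser would bound by O(n log n).

```python
# Illustrative only: unfold the cost recurrence T(n) = 2*T(n // 2) + n,
# T(1) = 1, and check it against its closed-form solution on powers of two.
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n: int) -> int:
    """Worst-case cost defined directly by the recurrence."""
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

# For n = 2**k the exact solution is T(n) = n*k + n, i.e. n*log2(n) + n.
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * k + n
```

A fully automatic analyser must produce such bounds without the user choosing the unfolding or the closed form, which is the gap PURRS aims to fill.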

    A switch convergence for a small perturbation of a linear recurrence equation

    In this article we study a small random perturbation of a linear recurrence equation. If all the roots of its characteristic equation have modulus strictly less than one, the random linear recurrence converges exponentially fast to its limiting distribution in the total variation distance as time increases. Assuming that all the roots of the characteristic equation have modulus strictly less than one, together with some suitable additional conditions, we prove that this convergence occurs as a switch, i.e., there is a sharp transition in the convergence to the limiting distribution. This fact is known as a cut-off phenomenon in the context of stochastic processes. Comment: 19 pages. Brazilian Journal of Probability and Statistics 2020
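    The role of the characteristic roots can be illustrated on the unperturbed part of a second-order recurrence (the coefficients below are assumptions chosen for illustration, not taken from the paper): when both roots of z² − az − b = 0 have modulus below one, the iterates decay geometrically at rate max|root|ⁿ, which is the exponential convergence the abstract refers to.

```python
# Illustrative coefficients (not from the paper) for x_n = a*x_{n-1} + b*x_{n-2}.
import cmath

a, b = 0.5, -0.3
# Roots of the characteristic equation z**2 - a*z - b = 0.
disc = cmath.sqrt(a * a + 4 * b)
roots = [(a + disc) / 2, (a - disc) / 2]
assert all(abs(r) < 1 for r in roots)  # stability condition from the abstract

# Iterate the unperturbed recurrence: iterates shrink like max|root|**n.
x = [1.0, 1.0]
for _ in range(200):
    x.append(a * x[-1] + b * x[-2])
rho = max(abs(r) for r in roots)
assert abs(x[-1]) <= 10 * rho ** 198  # geometric decay, up to a constant
```

Adding a small random noise term to each step turns this into the perturbed recurrence studied in the article, whose distance to equilibrium then exhibits the switch-type (cut-off) transition.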

    The Maximum Degree-and-Diameter-Bounded Subgraph in the Mesh

    The problem of finding the largest connected subgraph of a given undirected host graph, subject to constraints on the maximum degree Δ and the diameter D, was introduced in \cite{maxddbs} as a generalization of the Degree-Diameter Problem. A case of special interest is when the host graph is a common parallel architecture. Here we discuss the case when the host graph is a k-dimensional mesh. We provide some general bounds for the order of the largest subgraph in arbitrary dimension k, and for the particular cases k=3, Δ=4 and k=2, Δ=3 we give constructions that result in sharper lower bounds. Comment: accepted, 18 pages, 7 figures; Discrete Applied Mathematics, 201
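    The feasibility side of the problem is easy to state in code. The sketch below (an assumed helper, not a construction from the paper) checks whether a candidate vertex set of the 2-dimensional mesh induces a connected subgraph that respects a maximum degree Δ and a diameter bound D, using plain breadth-first search.

```python
# Check a candidate subgraph of the 2-dimensional mesh against degree and
# diameter bounds. Illustrative helper, not a construction from the paper.
from collections import deque

def mesh_neighbours(v):
    x, y = v
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def check(vertices, max_degree, max_diameter):
    vs = set(vertices)
    # Induced adjacency: keep only mesh edges with both ends in the set.
    adj = {v: [u for u in mesh_neighbours(v) if u in vs] for v in vs}
    if any(len(adj[v]) > max_degree for v in vs):
        return False
    # BFS from every vertex: checks connectivity and that every
    # eccentricity, hence the diameter, is at most max_diameter.
    for s in vs:
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
        if len(dist) != len(vs) or max(dist.values()) > max_diameter:
            return False
    return True

# A 2x3 block of the mesh has maximum degree 3 and diameter 3.
block = [(x, y) for x in range(3) for y in range(2)]
assert check(block, max_degree=3, max_diameter=3)
assert not check(block, max_degree=2, max_diameter=3)
```

The hard part, which the paper addresses, is the converse: constructing vertex sets that are as large as possible while still passing such a check.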

    Divide-and-conquer algorithms for multiprocessors

    During the past decade there has been a tremendous surge in understanding the nature of parallel computation, and a number of parallel computers are commercially available. However, there are still problems in developing application programs on these computers. This dissertation considers various issues involved in implementing parallel algorithms on Multiple Instruction Multiple Data (MIMD) machines with a bounded number of processors. Strategies for implementing divide-and-conquer algorithms on MIMD machines are proposed. Results linking the time complexity, communication complexity, and the complexity of the divide and combine functions of divide-and-conquer algorithms are analyzed. An efficient criterion for partitioning a parallel program is proposed, and a method is developed for obtaining a closed-form expression for the time complexity of a parallel program in terms of problem size and number of processors.

    The formal generation of models for scientific simulations

    It is now commonplace for complex physical systems such as the climate system to be studied indirectly via computer simulations. Often, the equations that govern the underlying physical system are known, but detailed or high-resolution computer models of these equations (“governing models”) are not practical because of limited computational resources, so the models are simplified or “parameterised”. However, if the output of a simplified model is to lead to conclusions about a physical system, we must prove that these outputs reflect reality and are not merely artifacts of the simplifications. At present, simplifications are usually based on informal, ad hoc methods, making it difficult or impossible to provide such a proof rigorously. Here we introduce a set of formal methods for generating computer models. We present a newly developed computer program, “iGen”, which syntactically analyses the computer code of a high-resolution governing model and, without executing it, automatically produces a much faster, simplified model with provable bounds on error compared to the governing model. These bounds allow scientists to rigorously distinguish real-world phenomena from artifacts in subsequent numerical experiments using the simplified model. Using simple physical systems as examples, we illustrate that iGen produces simplified models that typically execute orders of magnitude faster than their governing models. Finally, iGen is used to generate a model of entrainment in marine stratocumulus. The resulting simplified model is appropriate for use as part of a parameterisation of marine stratocumulus in a Global Climate Model.