Type-based cost analysis for lazy functional languages
We present a static analysis for determining the execution costs of lazily evaluated functional languages, such as Haskell. The time and space behaviour of lazy functional languages can be hard to predict, creating a significant barrier to their broader acceptance. This paper applies a type-based analysis employing amortisation and cost effects to statically determine upper bounds on evaluation costs. While amortisation performs well with finite recursive data, we significantly improve the precision of our analysis for co-recursive programs (i.e. programs dealing with potentially infinite data structures) by tracking self-references. Combining these two approaches gives a fully automatic static analysis for both recursive and co-recursive definitions. The analysis is formally proven correct against an operational semantics that features an exchangeable parametric cost model. An arbitrary measure can be assigned to all syntactic constructs, allowing us to bound, for example, evaluation steps, applications, or allocations. Moreover, automatic inference relies only on first-order unification and standard linear programming. Our publicly available implementation demonstrates the practicability of our technique on editable non-trivial examples.
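The parametric cost model described above can be illustrated with a small sketch (Python as a stand-in for a lazy language; `Thunk`, `COST` and `counters` are my own illustrative names, not the paper's formalism): each syntactic event is charged against an exchangeable cost table, here instantiated to count thunk forces.

```python
# Sketch of a parametric cost model for call-by-need evaluation
# (hypothetical names; simplified). The cost table is a parameter:
# swap in different weights to bound steps, allocations, etc.

COST = {"force": 1, "alloc": 0}       # the exchangeable cost model
counters = {"force": 0, "alloc": 0}

class Thunk:
    """A memoised suspension: its body is evaluated at most once."""
    def __init__(self, compute):
        counters["alloc"] += COST["alloc"]
        self.compute, self.value, self.forced = compute, None, False

    def force(self):
        if not self.forced:
            counters["force"] += COST["force"]
            self.value, self.forced = self.compute(), True
        return self.value

# a co-recursive (potentially infinite) stream: ones = 1 : ones
def ones():
    return (1, Thunk(ones))           # head, lazy tail

def take(n, thunk):
    if n == 0:
        return []
    head, tail = thunk.force()
    return [head] + take(n - 1, tail)

print(take(3, Thunk(ones)), counters["force"])
```

Taking three elements forces exactly three thunks, so under this cost table the analysis's upper bound on forces would be linear in the demand, not in the (infinite) stream.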
Type Soundness for Path Polymorphism
Path polymorphism is the ability to define functions that can operate
uniformly over arbitrary recursively specified data structures. Its essence is
captured by patterns that decompose a compound data
structure into its parts. Typing these kinds of patterns is challenging since
the type of a compound should determine the type of its components. We propose
a static type system (i.e. no run-time analysis) for a pattern calculus that
captures this feature. Our solution combines type application, constants as
types, union types and recursive types. We address the fundamental properties
of Subject Reduction and Progress that guarantee a well-behaved dynamics. Both
these results rely crucially on a notion of pattern compatibility and also on a
coinductive characterisation of subtyping.
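The uniform-decomposition idea can be sketched as follows (an illustrative Python encoding, not the paper's pattern calculus: compound data is modelled as a constructor name applied to its parts, and `size` is my own example name).

```python
# Sketch of path polymorphism (illustrative encoding): one function
# operates uniformly over ARBITRARY recursively built structures by
# decomposing each compound into its components, without knowing the
# datatype in advance.

def size(d):
    """Count constructors and atoms in an arbitrary datum."""
    if isinstance(d, tuple):          # compound: (constructor, part, ...)
        return 1 + sum(size(p) for p in d[1:])
    return 1                          # atomic constant

# the same function handles a tree it was never specialised for:
tree = ("node", ("leaf", 1), ("node", ("leaf", 2), ("leaf", 3)))
```

The typing challenge the abstract describes is visible here: the type of the compound `d` must determine the types of the parts `d[1:]`, which is what the combination of type application, union types and recursive types achieves statically.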
Modular Construction of Shape-Numeric Analyzers
The aim of static analysis is to infer invariants about programs that are
precise enough to establish semantic properties, such as the absence of
run-time errors. Broadly speaking, there are two major branches of static
analysis for imperative programs. Pointer and shape analyses focus on inferring
properties of pointers, dynamically-allocated memory, and recursive data
structures, while numeric analyses seek to derive invariants on numeric values.
Although simultaneous inference of shape-numeric invariants is often needed,
this case is especially challenging and is not particularly well explored.
Notably, simultaneous shape-numeric inference raises complex issues in the
design of the static analyzer itself.
In this paper, we study the construction of such shape-numeric, static
analyzers. We set up an abstract interpretation framework that allows us to
reason about simultaneous shape-numeric properties by combining shape and
numeric abstractions into a modular, expressive abstract domain. Such a modular
structure is highly desirable to make its formalization and implementation
easier to do and get correct. To achieve this, we choose a concrete semantics
that can be abstracted step-by-step, while preserving a high level of
expressiveness. The structure of abstract operations (i.e., transfer, join, and
comparison) follows the structure of this semantics. The advantage of this
construction is to divide the analyzer in modules and functors that implement
abstractions of distinct features.
Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
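The modular combination of a numeric and a shape abstraction can be sketched as a product domain (a heavily simplified, hypothetical illustration: intervals stand in for the numeric domain and points-to sets for the shape domain; `prod_join` and the state layout are my own names).

```python
# Sketch of a modular shape-numeric product domain: the join of the
# combined domain is defined componentwise, so each component can be
# formalised, implemented, and proven correct independently.

def num_join(a, b):                  # interval join: (lo, hi)
    return (min(a[0], b[0]), max(a[1], b[1]))

def ptr_join(a, b):                  # points-to join: set union
    return a | b

def prod_join(s1, s2):
    """Join two abstract states variable-by-variable, per component."""
    nums = {v: num_join(s1[0][v], s2[0][v]) for v in s1[0]}
    ptrs = {v: ptr_join(s1[1][v], s2[1][v]) for v in s1[1]}
    return (nums, ptrs)

# abstract states from two branches of a conditional:
s1 = ({"i": (0, 0)}, {"p": {"heap@1"}})
s2 = ({"i": (1, 5)}, {"p": {"heap@2"}})
```

The componentwise structure mirrors the paper's point: transfer, join and comparison on the product reduce to the corresponding operations of the component domains.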
A sound abstract interpreter for dynamic code
Dynamic languages, such as JavaScript, employ string-to-code primitives to turn dynamically generated text into executable code at run-time. These features make standard static analysis extremely hard, if not impossible, because its essential data structures, i.e., the control-flow graph and the system of recursive equations associated with the program to analyze, are themselves dynamically mutating objects. Hence the need to handle string-to-code statements by approximating what they can execute, thereby allowing the analysis to continue (even in the presence of string-to-code statements) with an acceptable degree of precision. In order to reach this goal, we propose a static analysis that collects string values and soundly over-approximates and analyzes the code potentially executed by a string-to-code statement.
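The approach can be sketched as follows (a simplified, hypothetical illustration in Python rather than JavaScript: the collected string abstraction is modelled as a finite set of candidate strings, and `analyse_eval` is my own name, not the paper's analyzer).

```python
# Sketch: over-approximate a string-to-code statement eval(s) by
# analysing EVERY code string the string analysis says s may hold.
import ast

def analyse_eval(possible_strings):
    """Return an over-approximation of the functions the executed
    code may call, across all candidate strings."""
    reachable = set()
    for src in possible_strings:
        try:
            tree = ast.parse(src)     # stand-in for the real code analysis
        except SyntaxError:
            continue                  # this value cannot be executed
        for node in ast.walk(tree):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                reachable.add(node.func.id)
    return reachable

# suppose the string analysis found that the eval'd variable is one of:
calls = analyse_eval({"f(1)", "g(x) ; h(y)"})
```

Soundness here means `calls` covers every function any of the candidate strings could invoke; precision depends on how tightly the string domain bounds the set of candidates.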
A practical mode system for recursive definitions
In call-by-value languages, some mutually-recursive value definitions can be
safely evaluated to build recursive functions or cyclic data structures, but
some definitions (let rec x = x + 1) contain vicious circles and their
evaluation fails at runtime. We propose a new static analysis to check the
absence of such runtime failures.
We present a set of declarative inference rules, prove its soundness with
respect to the reference source-level semantics of Nordlander, Carlsson, and
Gill (2008), and show that it can be (right-to-left) directed into an
algorithmic check in a surprisingly simple way.
Our implementation of this new check replaced the existing check used by the
OCaml programming language, a fragile syntactic/grammatical criterion which let
several subtle bugs slip through as the language kept evolving. We document
some issues that arise when advanced features of a real-world functional
language (exceptions in first-class modules, GADTs, etc.) interact with safety
checking for recursive definitions.
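The core intuition behind such a check can be sketched as follows (a simplified Python encoding of a tiny expression language, not OCaml's actual implementation; `access_mode`, the mode names, and the tuple encoding are my own illustrative choices): a recursive reference is safe when it occurs only under a guard such as a function body or a constructor, and vicious when the definition must force it immediately.

```python
# Sketch of a mode analysis for `let rec x = e`: compute how e uses x.
# Modes: 'deref' (x is forced during evaluation of e, vicious),
# 'guarded' (x occurs only under a lambda or constructor, safe),
# 'unused'.

def access_mode(expr, rec_var):
    kind = expr[0]
    if kind == "var":
        return "deref" if expr[1] == rec_var else "unused"
    modes = [access_mode(e, rec_var) for e in expr[1:]]
    if kind in ("lambda", "cons"):    # guards: evaluation is delayed
        return "guarded" if "deref" in modes or "guarded" in modes else "unused"
    # strict contexts (e.g. arithmetic) force their subterms now
    if "deref" in modes:
        return "deref"
    return "guarded" if "guarded" in modes else "unused"

def safe_letrec(body, rec_var):
    return access_mode(body, rec_var) != "deref"

# let rec ones = 1 :: ones   -- accepted: the self-reference is guarded
ok  = ("cons", ("lit",), ("var", "ones"))
# let rec x = x + 1          -- rejected: x is forced in its own definition
bad = ("add", ("var", "x"), ("lit",))
```

A declarative version of these rules is what the paper proves sound; reading them right-to-left yields the algorithmic check.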
Analysis of measurement and simulation errors in structural system identification by observability techniques
This is the peer reviewed version of the following article: [Lei, J., Lozano-Galant, J. A., Nogal, M., Xu, D., and Turmo, J. (2017) Analysis of measurement and simulation errors in structural system identification by observability techniques. Struct. Control Health Monit., 24. doi: 10.1002/stc.1923], which has been published in final form at http://onlinelibrary.wiley.com/wol1/doi/10.1002/stc.1923/full. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving.
During the process of structural system identification, errors are unavoidable. This paper analyzes the effects of measurement and simulation errors in structural system identification based on observability techniques. To illustrate the symbolic approach of this method, a simply supported beam is analyzed step by step. This analysis provides, for the first time in the literature, the parametric equations of the estimated parameters. The effects of several factors on the accuracy of the identification results are also investigated, such as errors in a particular measurement or in the whole measurement set, load location, measurement location, and the sign of the errors. It is found that an error in a particular measurement increases the errors of individual estimations, and that this effect can be significantly mitigated by introducing random errors in the whole measurement set. The propagation of simulation errors when using observability techniques is illustrated on two structures with different measurement sets and loading cases. A fluctuation of the observed parameters around the real values is shown to be a characteristic of this method. It is also suggested that a sufficient combination of different load cases should be used to avoid inaccurate estimation at locations in low-curvature zones.
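The fluctuation of identified parameters around their real values can be illustrated with a minimal sketch (not the paper's observability formulation: it uses only the textbook midspan-deflection formula d = P·L³/(48·E·I) for a simply supported beam under a central point load, and all numeric values are invented for illustration).

```python
# Illustrative sketch: propagate a random measurement error in the
# midspan deflection of a simply supported beam into the identified
# flexural stiffness E*I, and observe the fluctuation around the
# true value (all inputs are made-up example numbers).
import random

P, L, EI_true = 10e3, 6.0, 50e6       # load [N], span [m], stiffness [N*m^2]
d_exact = P * L**3 / (48 * EI_true)   # exact midspan deflection

def identify_EI(measured_d):
    """Invert the deflection formula to estimate the stiffness E*I."""
    return P * L**3 / (48 * measured_d)

random.seed(0)
estimates = [identify_EI(d_exact * (1 + random.uniform(-0.05, 0.05)))
             for _ in range(1000)]
# the estimates scatter around EI_true, mirroring the fluctuation
# around the real values that the paper identifies as characteristic
```

With a ±5% measurement error the identified stiffness stays within roughly ±5% of the true value, and its average sits close to the true value, which is the mitigating effect of random errors across the measurement set.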
Structural Analysis: Shape Information via Points-To Computation
This paper introduces a new hybrid memory analysis, Structural Analysis,
which combines an expressive shape analysis style abstract domain with
efficient and simple points-to style transfer functions. Using data from
empirical studies on the runtime heap structures and the programmatic idioms
used in modern object-oriented languages we construct a heap analysis with the
following characteristics: (1) it can express a rich set of structural, shape,
and sharing properties which are not provided by a classic points-to analysis
and that are useful for optimization and error-detection applications; (2) it
uses efficient, weakly-updating, set-based transfer functions which enable the
analysis to be more robust and scalable than a shape analysis; and (3) it can
be used as the basis for a scalable interprocedural analysis that produces
precise results in practice.
The analysis has been implemented for .Net bytecode and using this
implementation we evaluate both the runtime cost and the precision of the
results on a number of well known benchmarks and real world programs. Our
experimental evaluations show that the domain defined in this paper is capable
of precisely expressing the majority of the connectivity, shape, and sharing
properties that occur in practice and, despite the use of weak updates, the
static analysis is able to precisely approximate the ideal results. The
analysis is capable of analyzing large real-world programs (over 30K bytecodes)
in less than 65 seconds and using less than 130MB of memory. In summary this
work presents a new type of memory analysis that advances the state of the art
with respect to expressive power, precision, and scalability and represents a
new area of study on the relationships between and combination of concepts from
shape and points-to analyses.
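The weakly-updating, set-based transfer functions mentioned in point (2) can be sketched as follows (a heavily simplified, hypothetical illustration; `assign` and the location names are my own, not the paper's domain).

```python
# Sketch of a weak-update points-to transfer function: the abstract
# heap maps each variable to a SET of abstract locations, and an
# assignment only ADDS targets rather than replacing them. This keeps
# transfer functions simple and monotone (hence scalable), trading
# away the precision a strong update would give.

def assign(heap, var, targets):
    """Weak update: `var` may now ALSO point to `targets`."""
    out = dict(heap)
    out[var] = out.get(var, set()) | set(targets)
    return out

heap = {}
heap = assign(heap, "x", {"obj@1"})
heap = assign(heap, "x", {"obj@2"})   # weak: obj@1 is NOT discarded
```

The paper's experimental claim is that, despite this over-approximation, the computed sets stay close to the ideal results on real programs.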