    Twinning automata and regular expressions for string static analysis

    In this paper we formalize and prove the soundness of Tarsis, a new abstract domain based on abstract interpretation theory that approximates string values through finite state automata. The main novelty of Tarsis is that it works over an alphabet of strings instead of single characters. On the one hand, this approach requires a more complex and refined definition of the widening operator and of the abstract semantics of string operators. On the other hand, it can obtain strictly more precise results than state-of-the-art approaches. We implemented a prototype of Tarsis and applied it to case studies taken from some of the most popular Java libraries that manipulate string values. The experimental results confirm that Tarsis obtains strictly more precise results than existing analyses.
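    To make the "alphabet of strings" idea concrete, here is a minimal Rust sketch of an automaton whose transition labels are whole strings rather than single characters, with an abstract join, abstract concatenation, and an over-approximating membership check. This is only an illustration of the underlying idea, not the paper's formal construction or its Java prototype: the names (StrAutomaton, of_str, join, concat, may_be) are invented here, and the widening operator that Tarsis requires is omitted entirely.

```rust
use std::collections::{BTreeSet, HashMap, VecDeque};

/// Toy automaton over an alphabet of strings. Illustrative sketch only.
#[derive(Clone, Debug)]
struct StrAutomaton {
    starts: BTreeSet<usize>,
    accepting: BTreeSet<usize>,
    delta: HashMap<(usize, String), BTreeSet<usize>>, // (state, label) -> successors
    n_states: usize,
}

impl StrAutomaton {
    /// Abstract one concrete string as a two-state automaton.
    fn of_str(s: &str) -> Self {
        let mut delta = HashMap::new();
        delta.insert((0, s.to_string()), BTreeSet::from([1]));
        StrAutomaton { starts: BTreeSet::from([0]), accepting: BTreeSet::from([1]), delta, n_states: 2 }
    }

    /// Copy of `other` with states renumbered past `self`'s.
    fn shifted(&self, other: &Self) -> Self {
        let k = self.n_states;
        let mut delta = HashMap::new();
        for ((src, lab), dsts) in &other.delta {
            delta.insert((src + k, lab.clone()), dsts.iter().map(|d| d + k).collect());
        }
        StrAutomaton {
            starts: other.starts.iter().map(|s| s + k).collect(),
            accepting: other.accepting.iter().map(|s| s + k).collect(),
            delta,
            n_states: other.n_states,
        }
    }

    /// Least upper bound: the value may come from either operand.
    fn join(&self, other: &Self) -> Self {
        let o = self.shifted(other);
        let mut out = self.clone();
        out.starts.extend(&o.starts);
        out.accepting.extend(&o.accepting);
        out.delta.extend(o.delta);
        out.n_states += o.n_states;
        out
    }

    /// Abstract concatenation: accepting states of `self` inherit the outgoing
    /// transitions of `other`'s start states (assumes `other` never accepts "").
    fn concat(&self, other: &Self) -> Self {
        let o = self.shifted(other);
        let mut out = self.clone();
        for ((src, lab), dsts) in &o.delta {
            if o.starts.contains(src) {
                for &acc in &self.accepting {
                    out.delta.entry((acc, lab.clone())).or_default().extend(dsts);
                }
            }
            out.delta.entry((*src, lab.clone())).or_default().extend(dsts);
        }
        out.accepting = o.accepting;
        out.n_states += o.n_states;
        out
    }

    /// Over-approximate membership: can `w` be split into transition labels
    /// along some accepting path?
    fn may_be(&self, w: &str) -> bool {
        let mut work: VecDeque<(usize, usize)> = self.starts.iter().map(|&s| (s, 0)).collect();
        let mut seen: BTreeSet<(usize, usize)> = work.iter().copied().collect();
        while let Some((state, pos)) = work.pop_front() {
            if pos == w.len() && self.accepting.contains(&state) {
                return true;
            }
            for ((src, lab), dsts) in &self.delta {
                if *src == state && w[pos..].starts_with(lab.as_str()) {
                    for &d in dsts {
                        if seen.insert((d, pos + lab.len())) {
                            work.push_back((d, pos + lab.len()));
                        }
                    }
                }
            }
        }
        false
    }
}

fn main() {
    // Abstract value for:  "error: " ++ (either "timeout" or "refused").
    let prefix = StrAutomaton::of_str("error: ");
    let reason = StrAutomaton::of_str("timeout").join(&StrAutomaton::of_str("refused"));
    let msg = prefix.concat(&reason);
    assert!(msg.may_be("error: timeout"));
    assert!(msg.may_be("error: refused"));
    assert!(!msg.may_be("error: closed"));
}
```

    Working over string-labelled transitions keeps the abstract value close to the program's own string constants, which is the intuition behind the precision gain claimed above; the real domain additionally needs a widening to guarantee termination.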

    Parallel parsing made practical

    The property of local parsability allows inputs to be parsed by inspecting only a bounded-length string around the current token. This in turn enables the construction of a scalable, data-parallel parsing algorithm, which is presented in this work. Such an algorithm lends itself to automatic generation via a parser generator tool, which we have realized and also present here. Furthermore, to complete the framework of parallel input analysis, a parallel scanner can be combined with the parser. To prove the practicality of a parallel lexing and parsing approach, we report the results of adapting JSON and Lua to a form fit for parallel parsing (i.e. an operator-precedence grammar) through simple grammar changes and scanning transformations. The approach is validated with performance figures from both high-performance and embedded multicore platforms, obtained by analyzing real-world inputs as a test bench. The results show that our approach matches or outperforms production-grade LR parsers in sequential execution, and achieves significant speedups and good scaling on multi-core machines. The work concludes with a broad and critical survey of past work on parallel parsing and of future directions for integration with semantic analysis and incremental parsing.
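    One way to picture the data-parallel structure that local parsability enables is a split/scan/combine skeleton: each chunk of the input is reduced independently, and the per-chunk results are combined associatively. The Rust sketch below shows that skeleton on a deliberately simple task (bracket balancing over fixed-size chunks with scoped threads); it is not the paper's operator-precedence algorithm or its generated parsers, and Summary, scan_chunk, combine, and balanced_parallel are names made up here. Bracket kinds are conflated for brevity.

```rust
use std::thread;

/// Per-chunk summary: brackets that could not be matched locally.
/// `closers` must be matched by openers to the left of the chunk,
/// `openers` by closers to the right. Combining summaries is associative,
/// which is what makes the chunk-parallel pass possible.
#[derive(Clone, Copy, Debug, Default)]
struct Summary {
    closers: usize,
    openers: usize,
}

/// Reduce one chunk independently of its neighbours.
fn scan_chunk(chunk: &[u8]) -> Summary {
    let mut s = Summary::default();
    for &b in chunk {
        match b {
            b'(' | b'[' | b'{' => s.openers += 1,
            b')' | b']' | b'}' => {
                if s.openers > 0 { s.openers -= 1 } else { s.closers += 1 }
            }
            _ => {}
        }
    }
    s
}

/// Associative combination of two adjacent chunk summaries.
fn combine(left: Summary, right: Summary) -> Summary {
    let matched = left.openers.min(right.closers);
    Summary {
        closers: left.closers + (right.closers - matched),
        openers: right.openers + (left.openers - matched),
    }
}

/// Check balanced bracketing by scanning fixed-size chunks in parallel
/// and folding their summaries left to right.
fn balanced_parallel(input: &[u8], n_workers: usize) -> bool {
    let chunk_len = (input.len() / n_workers.max(1)).max(1);
    let total = thread::scope(|scope| {
        let handles: Vec<_> = input
            .chunks(chunk_len)
            .map(|c| scope.spawn(move || scan_chunk(c)))
            .collect();
        handles
            .into_iter()
            .map(|h| h.join().expect("worker panicked"))
            .fold(Summary::default(), combine)
    });
    total.openers == 0 && total.closers == 0
}

fn main() {
    assert!(balanced_parallel(b"{\"a\": [1, 2, {\"b\": (3)}]}", 4));
    assert!(!balanced_parallel(b"[1, 2, {\"b\": 3]", 4));
}
```

    A real parallel parser has to do considerably more per chunk (token classification and local reductions driven by the operator-precedence relations), but the overall shape (independent chunk passes followed by a cheap combination step) is the same.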

    Resource Polymorphism

    We present a resource-management model for ML-style programming languages, designed to be compatible with the OCaml philosophy and runtime model. This is a proposal to extend the OCaml language with destructors, move semantics, and resource polymorphism, to improve its safety, efficiency, interoperability, and expressiveness. It builds on the ownership-and-borrowing models of systems programming languages (Cyclone, C++11, Rust) and on linear types in functional programming (Linear Lisp, Clean, Alms). It continues a synthesis of resources from systems programming and resources in linear logic initiated by Baker. It is a combination of many known and some new ideas. On the novel side, it highlights the good mathematical structure of Stroustrup's “Resource acquisition is initialisation” (RAII) idiom for resource management based on destructors, a notion sometimes confused with finalizers, and builds on it a notion of resource polymorphism, inspired by polarisation in proof theory, that mixes C++'s RAII and a tracing garbage collector (GC). In particular, it proposes to identify the types of GCed values with types with trivial destructor: from this definition it deduces a model in which GC is the default allocation mode, and where GCed values can be used without restriction both in owning and borrowing contexts. The proposal targets a new spot in the design space, with an automatic and predictable resource-management model, at the same time based on lightweight and expressive language abstractions. It is backwards-compatible: current code is expected to run with the same performance, the new abstractions fully combine with the current ones, and it supports a resource-polymorphic extension of libraries. It does so with only a few additions to the runtime, and it integrates with the current GC implementation. It is also compatible with the upcoming multicore extension, and suggests that the Rust model for eliminating data-races applies. Interesting questions arise for a safe and practical type system, many of which have already been thoroughly investigated in the languages and prototypes Cyclone, Rust, and Alms.
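    The RAII-versus-finalizer distinction, and the idea that GC-managed types are exactly those with a trivial destructor, can be pictured in Rust, one of the ownership-and-borrowing languages the proposal cites. The sketch below contrasts a type whose destructor does real work (and therefore obeys move semantics) with a trivially destructible value that can be copied and borrowed freely. The proposal itself targets OCaml, so this is only an analogy; LogFile and Point are illustrative names, not anything from the paper.

```rust
use std::fs::File;
use std::io::{self, Write};

/// A resource type: its destructor does real work (flushing and closing the
/// file), so ownership and move semantics matter. This mirrors the RAII
/// idiom the abstract builds on.
struct LogFile {
    inner: File,
}

impl LogFile {
    fn create(path: &str) -> io::Result<LogFile> {
        Ok(LogFile { inner: File::create(path)? })
    }
    fn log(&mut self, line: &str) -> io::Result<()> {
        writeln!(self.inner, "{line}")
    }
}

impl Drop for LogFile {
    fn drop(&mut self) {
        // Runs deterministically when the owner goes out of scope,
        // unlike a finalizer, whose timing the GC decides.
        let _ = self.inner.flush();
    }
}

/// A plain value type with a *trivial* destructor: in the proposal's terms,
/// such types are exactly the ones a tracing GC may manage, and they can be
/// used freely in both owning and borrowing positions.
#[derive(Clone, Copy)]
struct Point {
    x: f64,
    y: f64,
}

fn norm(p: Point) -> f64 {
    // Takes ownership, which is harmless: Point is trivially destructible.
    (p.x * p.x + p.y * p.y).sqrt()
}

fn main() -> io::Result<()> {
    let p = Point { x: 3.0, y: 4.0 };
    let mut log = LogFile::create("run.log")?; // acquire the resource
    log.log(&format!("norm = {}", norm(p)))?;  // borrow it mutably
    let owner = log;                           // move: exactly one owner
    drop(owner);                               // destructor runs here, predictably
    let _still_usable = p;                     // trivial values copy freely
    Ok(())
}
```

    The proposal's "GC as the default allocation mode" then corresponds to letting values like Point live on the GC heap with no ownership discipline at all, while values like LogFile keep deterministic, destructor-driven lifetimes.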

    TOWARDS EFFICIENT METADATA-HIDING CRYPTOGRAPHY

    Although cryptography for confidential communications, computing on private data, and proving properties of secret values has seen much progress over recent years, there has been a noticeable lack of corresponding systems for hiding metadata. While many outside the field believe this is a problem that cannot be solved by technical means, to the credit of the cryptographic community many cryptographic constructions have been proposed for various metadata-related problems, achieving strong security guarantees. However, these existing solutions either work only in limited settings or are too inefficient to implement in practice. In this work, we propose new custom cryptographic primitives that can be used to hide three different types of metadata in three different settings: receiver identity in store-and-forward systems, sender identity in verifiable email communications, and user device location in offline finding networks. Moreover, we show that these primitives can be efficiently constructed and instantiated: at least one construction for each primitive has been implemented and micro-benchmarked, with computation and time costs that appear reasonable for each given application. This work motivates the exploration of other types of efficient metadata-hiding cryptography to solve practical, real-world problems.
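    For the store-and-forward setting, a useful reference point is the naive baseline that already hides receiver identity: the server keeps a single undifferentiated pool of sealed envelopes, and every client downloads the whole pool and trial-opens each item, paying bandwidth and work linear in total traffic. The toy Rust model below shows only that communication pattern, which is the inefficiency custom metadata-hiding primitives aim to remove; Envelope, Mailbox, and the "sealing" check are placeholders invented here, not real cryptography and not any of the constructions proposed in the work.

```rust
/// Stand-in for a public-key ciphertext: in a real system the server could
/// not read `sealed_for` or `payload`. NOT real cryptography.
struct Envelope {
    sealed_for: u64,
    payload: String,
}

impl Envelope {
    /// Stand-in for trial decryption: succeeds only for the intended key.
    fn try_open(&self, my_key: u64) -> Option<&str> {
        (self.sealed_for == my_key).then(|| self.payload.as_str())
    }
}

/// The untrusted server only ever sees an unordered pool of envelopes,
/// so it learns nothing about who each message is for.
#[derive(Default)]
struct Mailbox {
    pool: Vec<Envelope>,
}

impl Mailbox {
    fn deposit(&mut self, sealed_for: u64, payload: &str) {
        self.pool.push(Envelope { sealed_for, payload: payload.to_string() });
    }
    /// Every client fetches everything; the server cannot tell recipients apart.
    fn fetch_all(&self) -> &[Envelope] {
        &self.pool
    }
}

fn main() {
    let mut server = Mailbox::default();
    server.deposit(42, "meet at noon");
    server.deposit(7, "invoice attached");
    server.deposit(42, "bring the slides");

    // Client with key 42 trial-opens the full pool: O(total traffic) work.
    let mine: Vec<&str> = server
        .fetch_all()
        .iter()
        .filter_map(|e| e.try_open(42))
        .collect();
    assert_eq!(mine, vec!["meet at noon", "bring the slides"]);
}
```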