Incrementalizing Lattice-Based Program Analyses in Datalog
Program analyses detect errors in code, but when code changes frequently, as in an IDE, repeated re-analysis from scratch is unnecessary: it leads to poor performance unless we give up on precision and recall. Incremental program analysis promises to deliver fast feedback without giving up on precision or recall by deriving a new analysis result from the previous one. However, Datalog and other existing frameworks for incremental program analysis are limited in expressive power: they only support the powerset lattice as the representation of analysis results, whereas many practically relevant analyses require custom lattices and aggregation over lattice values. To address this limitation, we present a novel algorithm called DRedL that supports incremental maintenance of recursive lattice-value aggregation in Datalog. The key insight of DRedL is to dynamically recognize increasing replacements of old lattice values by new ones, which allows us to avoid the expensive deletion of the old value. We integrate DRedL into the analysis framework IncA and use IncA to realize incremental implementations of strong-update points-to analysis and string analysis for Java. As our performance evaluation demonstrates, both analyses react to code changes within milliseconds.
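The core trick is easiest to see on a small example. The following Python sketch is illustrative only (the interval lattice, names, and API are assumptions, not taken from the paper): it shows how a replacement whose new lattice value is greater than or equal to the old one can be applied in place, whereas a non-increasing replacement would still need the expensive delete-and-rederive step.

```python
# Minimal sketch of recognizing "increasing replacements" of lattice values.
# Hypothetical names; not the actual DRedL implementation.

def leq(a, b):
    # Partial order of an interval lattice (lo, hi): a <= b iff a is contained in b.
    return b[0] <= a[0] and a[1] <= b[1]

class IncrementalAggregate:
    def __init__(self):
        self.values = {}  # key (e.g. program point) -> current lattice value

    def update(self, key, new_value):
        old = self.values.get(key)
        if old is None or leq(old, new_value):
            # Increasing replacement: overwrite in place, no deletion propagation needed.
            self.values[key] = new_value
            return "in-place update"
        # Otherwise fall back to full delete-and-rederive (omitted here).
        return "needs delete/re-derive"

agg = IncrementalAggregate()
print(agg.update("x@line7", (0, 5)))  # in-place update
print(agg.update("x@line7", (0, 9)))  # increasing -> in-place update
print(agg.update("x@line7", (2, 3)))  # not increasing -> needs delete/re-derive
```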
A Type System for Julia
The Julia programming language was designed to fill the needs of scientific
computing by combining the benefits of productivity and performance languages.
Julia allows users to write untyped scripts easily without needing to worry
about many implementation details, as do other productivity languages. If one
just wants to get the work done-regardless of how efficient or general the
program might be, such a paradigm is ideal. Simultaneously, Julia also allows
library developers to write efficient generic code that can run as fast as
implementations in performance languages such as C or Fortran. This combination
of user-facing ease and library developer-facing performance has proven quite
attractive, and the language has increasing adoption.
With adoption comes combinatorial challenges to correctness. Multiple
dispatch -- Julia's key mechanism for abstraction -- allows many libraries to
compose "out of the box." However, it creates bugs where one library's
requirements do not match what another provides. Typing could address this at
the cost of Julia's flexibility for scripting.
I developed a "best of both worlds" solution: gradual typing for Julia. My
system forms the core of a gradual type system for Julia, laying the foundation
for improving the correctness of Julia programs while not getting in the way of
script writers. My framework allows methods to be individually typed or
untyped, allowing users to write untyped code that interacts with typed library
code and vice versa. Typed methods then get a soundness guarantee that is
robust in the presence of both dynamically typed code and dynamically generated
definitions. I additionally describe protocols, a mechanism for typing
abstraction over concrete implementation that accommodates one common pattern
in Julia libraries, and describe its implementation into my typed Julia
framework.Comment: PhD thesi
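The typed/untyped interaction described above can be illustrated, very roughly, outside Julia. The Python sketch below is only an analogy for the general gradual-typing boundary (it is not the thesis' system and gives no soundness guarantee): an untyped script calls a "typed" library function whose annotated arguments are checked at the call boundary.

```python
# Illustrative gradual-typing boundary in Python; hypothetical helper `typed`.
from functools import wraps
from typing import get_type_hints

def typed(fn):
    """Check annotated argument types when the function is called."""
    hints = get_type_hints(fn)
    arg_names = fn.__code__.co_varnames[:fn.__code__.co_argcount]

    @wraps(fn)
    def wrapper(*args, **kwargs):
        for name, value in list(zip(arg_names, args)) + list(kwargs.items()):
            expected = hints.get(name)
            if expected is not None and not isinstance(value, expected):
                raise TypeError(f"{name}: expected {expected.__name__}, "
                                f"got {type(value).__name__}")
        return fn(*args, **kwargs)
    return wrapper

@typed
def mean(xs: list) -> float:          # "typed" library code
    return sum(xs) / len(xs)

def untyped_script():                 # untyped script calling typed code
    print(mean([1, 2, 3]))            # passes the boundary check: 2.0
    try:
        mean("123")                   # rejected at the boundary, not deep inside
    except TypeError as err:
        print("rejected:", err)

untyped_script()
```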
Semantic optimisation in datalog programs
Datalog is the fusion of Prolog and database technologies aimed at producing an efficient, logic-based, declarative language for databases. This fusion takes the best of logic programming for the syntax of Datalog, and the best of database systems for its operational part. As is the case with all declarative languages, optimisation is necessary to improve the efficiency of programs. Semantic optimisation uses meta-knowledge describing the data in the database to optimise queries and rules, aiming to reduce the resources required to answer queries. In this thesis, I analyse prior work on semantic optimisation and then propose an optimisation system for Datalog that includes optimisation of recursive programs and a semantic knowledge management module. A language, DatalogiC, an extension of Datalog that allows semantic knowledge to be expressed, has also been devised as an implementation vehicle. Finally, empirical results concerning the benefits of semantic optimisation are reported.
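As a toy illustration of the general idea of semantic optimisation, the following Python sketch (illustrative only; the constraint representation and names are assumptions, not the DatalogiC system) uses a known integrity constraint to recognise that a query condition is unsatisfiable, so the query can be answered as empty without touching the database.

```python
# Constraints and query conditions as (attribute, operator, constant) triples.
constraints = [("salary", "<=", 100_000)]       # meta-knowledge about the data
query_conditions = [("salary", ">", 200_000)]   # condition in the user query

def contradicts(constraint, condition):
    """True if the constraint makes the query condition unsatisfiable."""
    (c_attr, c_op, c_val), (q_attr, q_op, q_val) = constraint, condition
    # Only one pattern handled here: attr <= c_val contradicts attr > q_val
    # whenever q_val >= c_val.
    return c_attr == q_attr and c_op == "<=" and q_op == ">" and q_val >= c_val

if any(contradicts(c, q) for c in constraints for q in query_conditions):
    print("answer is empty: no database access needed")
else:
    print("no contradiction found: evaluate the query normally")
```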
A unifying approach for queries and updates in deductive databases
This dissertation presents a unifying approach to process (recursive) queries and updates in a deductive database. To improve query performance, a combined top-down and bottom-up evaluation method is used to compile rules into iterative programs that contain relational algebra operators. This method is based on lemma resolution, which retains previous results to guarantee termination.
Due to locality in database processing, it is desirable to materialize frequently used queries against views of the database. Unfortunately, if updates are allowed, maintaining materialized view tables becomes a major problem. We propose to materialize views incrementally, as queries are being answered; hence views in our approach are only partially materialized. For such views, we design algorithms that perform updates only when the underlying view tables are actually affected.
We compare our approach to two conventional methods for dealing with views: total materialization and query modification. The first method materializes the entire view when it is defined, while the second recomputes the view on the fly without maintaining any physical view tables. We demonstrate that our approach is a compromise between these two methods and performs better than either one in many situations.
It is also desirable to be able to update views just like base tables. However, view updates are inherently ambiguous, and the semantics of update propagation on recursively defined views were not well understood in the past. Using dynamic logic programming and lemma resolution, we define the semantics of recursive view updates. These are expressed in the form of update translators specified by the database administrator when the view is defined. To guarantee completeness, we identify a subset of safe update translators and prove that these translators always terminate and are complete.
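The idea of materializing views incrementally as queries are answered can be sketched as follows. The Python example is a rough illustration under assumed names (it is not the dissertation's algorithm): view slices are cached only on demand, and an update invalidates only the slices it actually affects.

```python
# Partially materialized view: lazily filled per query, selectively invalidated.
employees = {"alice": "sales", "bob": "it", "carol": "sales"}

view_cache = {}   # dept -> list of employees, filled lazily per query

def members(dept):
    """Query the 'members of dept' view, materializing that slice on demand."""
    if dept not in view_cache:
        view_cache[dept] = [e for e, d in employees.items() if d == dept]
    return view_cache[dept]

def move(employee, new_dept):
    """Update a base tuple and invalidate only the affected view slices."""
    old_dept = employees[employee]
    employees[employee] = new_dept
    view_cache.pop(old_dept, None)   # only these two slices can change
    view_cache.pop(new_dept, None)

print(members("sales"))   # materializes the 'sales' slice: ['alice', 'carol']
move("alice", "it")       # invalidates only 'sales' and 'it'
print(members("sales"))   # recomputed for the affected slice: ['carol']
```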
Scalable Automated Incrementalization for Real-Time Static Analyses
This thesis proposes a framework for easy development of static analyses, whose results are incrementalized to provide instantaneous feedback in an integrated development environment (IDE).
Today, IDEs feature many tools that have static analyses as their foundation to assess software quality and catch correctness problems.
Yet, these tools often fail to provide instantaneous feedback and are thus restricted to nightly build processes. This precludes developers from fixing issues at their inception time, i.e., when the problem and the developed solution are both still fresh in mind.
Incrementalization is a well-known technique for providing instantaneous feedback: it exploits the fact that developers make only small changes to the code, so analysis results can be recomputed quickly based on these changes. Yet, incrementalization requires carefully crafted static analyses, which makes a manual approach unattractive. Automated incrementalization alleviates these problems and allows analysis writers to formulate their analyses as queries with the full data set in mind, without worrying about the semantics of incremental changes.
Existing approaches to automated incrementalization use standard technologies, such as deductive databases, that provide declarative query languages yet require materializing the full dataset in main memory, i.e., the memory is permanently occupied by the data required for the analyses. Other standard technologies, such as relational databases, offer better scalability due to persistence, yet incur long transaction times for loading the data. Neither technology is a perfect match for integrating static analyses into an IDE, since the underlying data, i.e., the code base, is already persisted and managed by the IDE; transitioning the data into a database is redundant work.
This thesis proposes a novel approach that provides a declarative query language and automated incrementalization, yet retains in memory only the necessary minimum of data, i.e., only the data required for incrementalization. The approach allows static analyses to be declared as incrementally maintained views, where the underlying formalism for incrementalization is the relational algebra with extensions for object orientation and recursion. The algebra makes it possible to deduce which data is the necessary minimum for incremental maintenance and shows that many views are self-maintainable, i.e., require no materialization at all. In addition, an optimization for the algebra is proposed that widens the range of self-maintainable views based on domain knowledge of the underlying data. The optimization works similarly to declaring primary keys in databases: it is declared on the schema of the data and defines which data is incrementally maintained within the same scope. The scope makes all analyses (views) that correlate only data within its boundaries self-maintainable.
The approach is implemented as an embedded domain-specific language in a general-purpose programming language. The implementation can be understood as a database-like engine with an SQL-style query language and the execution semantics of the relational algebra. As such, the system is a general-purpose database-like query engine and can be used to incrementalize domains other than static analyses. To evaluate the approach, a large variety of static analyses were sampled from real-world tools and formulated as incrementally maintained views in the implemented engine.
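To make the notion of a self-maintainable view concrete, here is a minimal Python sketch under assumed names (not the thesis' engine or its query language): a selection/projection view maintained purely from insert and delete events. Because selection and projection are self-maintainable operators, the view stores only its own result, never the full base relation.

```python
from collections import Counter

class SelectProjectView:
    """Incrementally maintained selection/projection view (illustrative only)."""

    def __init__(self, predicate, projection):
        self.predicate = predicate
        self.projection = projection
        self.result = Counter()          # projected tuple -> multiplicity

    def on_insert(self, row):
        if self.predicate(row):
            self.result[self.projection(row)] += 1

    def on_delete(self, row):
        if self.predicate(row):
            key = self.projection(row)
            self.result[key] -= 1
            if self.result[key] == 0:
                del self.result[key]

# Toy "analysis": names of all methods longer than 100 instructions.
long_methods = SelectProjectView(
    predicate=lambda m: m["length"] > 100,
    projection=lambda m: m["name"],
)

long_methods.on_insert({"name": "parse", "length": 250})
long_methods.on_insert({"name": "init", "length": 10})
print(dict(long_methods.result))                           # {'parse': 1}
long_methods.on_delete({"name": "parse", "length": 250})
print(dict(long_methods.result))                           # {}
```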