Gradual Program Analysis
Dataflow analysis and gradual typing are both well-studied methods to gain information about computer programs in a finite amount of time. The gradual program analysis project seeks to combine those two techniques in order to gain the benefits of both. This thesis explores the background information necessary to understand gradual program analysis, and then briefly discusses the research itself, with reference to the work published so far. The background topics include essential aspects of programming language theory, such as syntax, semantics, and static typing; dataflow analysis concepts, such as abstract interpretation, semilattices, and fixpoint computations; and gradual typing theory, such as the concept of an unknown type, liftings of predicates, and liftings of functions.
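The semilattice and fixpoint machinery mentioned above can be made concrete with a small sketch. The following Python toy is not from the thesis; the sign lattice, the node names, and the transfer functions are all illustrative assumptions, chosen only to show a join operation and Kleene iteration to a fixpoint.

```python
# A minimal dataflow-analysis sketch: a four-element join-semilattice for a
# toy sign analysis (BOT below NEG and POS, TOP above both), plus Kleene
# iteration over a tiny diamond-shaped control-flow graph.
BOT, NEG, POS, TOP = "bot", "neg", "pos", "top"

def join(a, b):
    """Least upper bound in the sign semilattice."""
    if a == BOT:
        return b
    if b == BOT:
        return a
    return a if a == b else TOP

# Hypothetical CFG: node -> predecessors, and one transfer function per node.
preds = {"entry": [], "then": ["entry"], "else": ["entry"], "exit": ["then", "else"]}
transfer = {
    "entry": lambda v: BOT,   # nothing known yet
    "then":  lambda v: POS,   # models x = 1
    "else":  lambda v: NEG,   # models x = -1
    "exit":  lambda v: v,     # x unchanged; fact is the join of predecessors
}

def analyze():
    """Iterate the transfer functions until the facts stop changing."""
    facts = {n: BOT for n in preds}
    changed = True
    while changed:            # Kleene iteration: terminates because the
        changed = False       # lattice is finite and join is monotone
        for n in preds:
            incoming = BOT
            for p in preds[n]:
                incoming = join(incoming, facts[p])
            out = transfer[n](incoming)
            if out != facts[n]:
                facts[n] = out
                changed = True
    return facts
```

At the merge point `exit`, the analysis joins `POS` and `NEG` to `TOP`, which is exactly the kind of information loss that gradual liftings are designed to interact with.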
Array operators using multiple dispatch: a design methodology for array implementations in dynamic languages
Arrays are such a rich and fundamental data type that they tend to be built
into a language, either in the compiler or in a large low-level library.
Defining this functionality at the user level instead provides greater
flexibility for application domains not envisioned by the language designer.
Only a few languages, such as C++ and Haskell, provide the necessary power to
define n-dimensional arrays, but these systems rely on compile-time
abstraction, sacrificing some flexibility. In contrast, dynamic languages make
it straightforward for the user to define any behavior they might want, but at
the possible expense of performance.
As part of the Julia language project, we have developed an approach that
yields a novel trade-off between flexibility and compile-time analysis. The
core abstraction we use is multiple dispatch. We have come to believe that
while multiple dispatch has not been especially popular in most kinds of
programming, technical computing is its killer application. By expressing key
functions such as array indexing using multi-method signatures, a surprising
range of behaviors can be obtained, in a way that is both relatively easy to
write and amenable to compiler analysis. The compact factoring of concerns
provided by these methods makes it easier for user-defined types to behave
consistently with types in the standard library.
Comment: 6 pages, 2 figures, workshop paper for the ARRAY '14 workshop, June 11, 2014, Edinburgh, United Kingdom
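The abstract's core idea is that behaviors such as array indexing are selected by the types of all arguments at once. Julia provides this natively; the Python sketch below is only an illustrative stand-in, using a hand-rolled registry keyed on exact argument types (no subtype matching), with a hypothetical `getindex`-style operation.

```python
# A minimal multiple-dispatch sketch: implementations are registered per
# (operation, argument-type...) tuple and selected by the run-time types of
# *all* arguments, not just the first one.
dispatch_table = {}

def multimethod(name, *types):
    """Register an implementation of `name` for the given argument types."""
    def register(fn):
        dispatch_table[(name,) + types] = fn
        return fn
    return register

def dispatch(name, *args):
    """Look up the method matching the exact run-time argument types."""
    key = (name,) + tuple(type(a) for a in args)
    fn = dispatch_table.get(key)
    if fn is None:
        raise TypeError(f"no method for {key}")
    return fn(*args)

# Two indexing behaviors for the same operation name: scalar indexing
# returns an element, indexing with a list of indices returns a slice.
@multimethod("getindex", list, int)
def _(xs, i):
    return xs[i]

@multimethod("getindex", list, list)
def _(xs, idx):
    return [xs[i] for i in idx]

dispatch("getindex", [10, 20, 30], 1)       # -> 20
dispatch("getindex", [10, 20, 30], [0, 2])  # -> [10, 30]
```

Because each signature is a separate, compact method body, the dispatcher (or, in Julia, the compiler) can reason about each specialization independently, which is the "compact factoring of concerns" the abstract refers to.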
The Python user interface of the elsA cfd software: a coupling framework for external steering layers
The Python--elsA user interface of the elsA cfd (Computational Fluid
Dynamics) software has been developed to allow users to specify simulations
with confidence, through a global context of description objects grouped inside
scripts. The software's main features are generated documentation, context
checking and completion, and helpful error management. Further developments
have used this foundation as a coupling framework, allowing (thanks to the
descriptive approach) the coupling of external algorithms with the cfd solver
in a simple and abstract way, leading to more success in complex simulations.
Along with the description of the technical part of the interface, we try to
gather the salient points pertaining to the psychological viewpoint of user
experience (ux). We point out the differences between user interfaces and pure
data management systems such as cgns.
Interprocedural Type Specialization of JavaScript Programs Without Type Analysis
Dynamically typed programming languages such as Python and JavaScript defer
type checking to run time. VM implementations can improve performance by
eliminating redundant dynamic type checks. However, type inference analyses are
often costly and involve tradeoffs between compilation time and resulting
precision. This has led to the creation of increasingly complex multi-tiered
VM architectures.
Lazy basic block versioning is a simple JIT compilation technique which
effectively removes redundant type checks from critical code paths. This novel
approach lazily generates type-specialized versions of basic blocks on-the-fly
while propagating context-dependent type information. This approach does not
require the use of costly program analyses and is not restricted by the
precision limitations of traditional type analyses.
This paper extends lazy basic block versioning to propagate type information
interprocedurally, across function call boundaries. Our implementation in a
JavaScript JIT compiler shows that across 26 benchmarks, interprocedural basic
block versioning eliminates more type tag tests on average than what is
achievable with static type analysis without resorting to code transformations.
On average, 94.3% of type tag tests are eliminated, yielding speedups of up to
56%. We also show that our implementation is able to outperform Truffle/JS on
several benchmarks, both in terms of execution time and compilation time.
Comment: 10 pages, 10 figures, submitted to CGO 201
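The mechanism behind basic block versioning can be illustrated with a toy: generate a specialized version of a block lazily, the first time it executes with a given type context, and cache it so later executions with that context skip the dynamic type checks. The Python sketch below is only a caricature of the idea, not the paper's JIT; the block, the contexts, and the specializations are assumed for illustration.

```python
# A toy of lazy basic block versioning: one specialized version of a block is
# generated per observed type context, on demand, and cached for reuse.
versions = {}  # (block_name, type_context) -> specialized callable

def generic_incr(v):
    """Unspecialized fallback: pays a dynamic type check on every execution."""
    if isinstance(v, int):
        return v + 1
    if isinstance(v, str):
        return v + "!"
    raise TypeError("unsupported operand type")

def get_version(block_name, x):
    """Lazily 'compile' a version of the block for x's type, caching it."""
    key = (block_name, type(x))
    if key not in versions:
        if type(x) is int:
            versions[key] = lambda v: v + 1    # int version: no type check
        elif type(x) is str:
            versions[key] = lambda v: v + "!"  # str version: no type check
        else:
            versions[key] = generic_incr       # anything else: generic path
    return versions[key]

def run_block(x):
    # Dispatch once on the type context, then run straight-line code.
    return get_version("incr", x)(x)
```

The interprocedural extension described in the paper propagates the same kind of type context across call boundaries, so a callee can be versioned on the types its callers pass in.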
Speculative Staging for Interpreter Optimization
Interpreters have a bad reputation for having lower performance than
just-in-time compilers. We present a new way of building high performance
interpreters that is particularly effective for executing dynamically typed
programming languages. The key idea is to combine speculative staging of
optimized interpreter instructions with a novel technique of incrementally and
iteratively concerting them at run-time.
This paper introduces the concepts behind deriving optimized instructions
from existing interpreter instructions---incrementally peeling off layers of
complexity. When compiling the interpreter, these optimized derivatives will be
compiled along with the original interpreter instructions. Therefore, our
technique is portable by construction since it leverages the existing
compiler's backend. At run-time we use instruction substitution from the
interpreter's original and expensive instructions to optimized instruction
derivatives to speed up execution.
Our technique unites high performance with the simplicity and portability of
interpreters---we report that our optimization makes the CPython interpreter up
to more than four times faster, where our interpreter closes the gap with, and
sometimes even outperforms, PyPy's just-in-time compiler.
Comment: 16 pages, 4 figures, 3 tables. Uses CPython 3.2.3 and PyPy 1.
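The run-time instruction substitution described above can be sketched as quickening in a tiny stack interpreter: a generic instruction observes its operand types on first execution and rewrites its own bytecode slot to point at an optimized derivative. This Python toy only illustrates the substitution step; the paper's technique additionally derives the optimized instructions speculatively when the interpreter itself is compiled.

```python
# A toy quickening loop: the generic ADD instruction replaces itself in the
# instruction stream with a type-specialized derivative after it first runs,
# so subsequent executions skip the dynamic type dispatch.
def add_generic(code, pc, stack):
    b, a = stack.pop(), stack.pop()
    if isinstance(a, int) and isinstance(b, int):
        code[pc] = add_int          # substitute the optimized derivative
    stack.append(a + b)

def add_int(code, pc, stack):
    # Specialized derivative: assumes int operands, no type check.
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)

def push(value):
    """Build a PUSH instruction as a closure over its immediate operand."""
    def op(code, pc, stack):
        stack.append(value)
    return op

def run(code):
    """Execute the instruction stream and return the top of the stack."""
    stack = []
    for pc, instr in enumerate(code):
        instr(code, pc, stack)
    return stack[-1]

program = [push(1), push(2), add_generic]
run(program)   # -> 3; add_generic rewrites slot 2 into add_int
run(program)   # -> 3; now executes the specialized add_int directly
```

Because the substitution only swaps one compiled function for another, the approach stays portable: both the generic instruction and its derivative were compiled by the host compiler's existing backend.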