Loop Quasi-Invariant Chunk Motion by peeling with statement composition
Compilers rely on many techniques for analysis and transformation. Among them, loop peeling for hoisting quasi-invariants can be used to optimize generated code, or simply to ease developers' lives. In this paper, we introduce a new concept of dependency analysis borrowed from the field of Implicit Computational Complexity (ICC), allowing us to work with composed statements, called chunks, to detect more quasi-invariants. Based on an optimization idea given on a WHILE language, we provide a transformation method, reusing ICC concepts and techniques, for compilers. This new analysis computes an invariance degree for each statement or chunk of statements by building a new kind of dependency graph, finds the maximum or worst dependency graph for loops, and recognizes whether an entire block is quasi-invariant or not. This block could be an inner loop, in which case the computational complexity of the overall program can be decreased. We have already implemented a proof of concept on a toy C parser, analysing and transforming the AST representation. In this paper, we introduce the theory around this concept and present a prototype analysis pass implemented on LLVM. In the near future, we will implement the corresponding transformation and provide benchmark comparisons.
Comment: In Proceedings DICE-FOPARA 2017, arXiv:1704.0516
A type system for complexity flow analysis
We propose a type system for an imperative programming language which certifies program time bounds. This type system is based on secure information flow analysis. Each program variable has a level, and we prevent information from flowing from low-level to higher-level variables. We also introduce a downgrading mechanism in order to delineate a broader class of programs. Thus, we propose a relation between security-typed languages and implicit computational complexity. We establish a characterization of the class of polynomial time functions.
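The flow discipline the abstract describes can be sketched in a few lines. This is a hedged toy checker, not the paper's type system: variable levels are given up front, and an assignment is rejected when a lower-level source flows into a higher-level target (the names `check_flows`, `levels`, and `assignments` are invented for this sketch).

```python
# Toy flow checker in the spirit of a tiered/secure-flow type system.
# levels: dict mapping variable name -> integer level (tier)
# assignments: list of (target, [source variables]) pairs

def check_flows(levels, assignments):
    """Return the list of (source, target) pairs violating the discipline:
    information must not flow from a low-level variable to a higher one."""
    violations = []
    for target, sources in assignments:
        for src in sources:
            if levels[src] < levels[target]:   # low -> high is forbidden
                violations.append((src, target))
    return violations
```

For example, with `levels = {'n': 1, 'acc': 0}`, the assignment `acc := acc + n` is accepted (flow from level 1 down to level 0), while `n := acc` is flagged; a downgrading mechanism as in the paper would selectively permit such flows.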
Implicit complexity for coinductive data: a characterization of corecurrence
We propose a framework for reasoning about programs that manipulate
coinductive data as well as inductive data. Our approach is based on using
equational programs, which support a seamless combination of computation and
reasoning, and using productivity (fairness) as the fundamental assertion,
rather than bisimulation; the latter is expressible in terms of the former. As
an application to this framework, we give an implicit characterization of
corecurrence: a function is definable using corecurrence iff its productivity
is provable using coinduction for formulas in which data-predicates do not
occur negatively. This is an analog, albeit in weaker form, of a
characterization of recurrence (i.e. primitive recursion) in [Leivant, Unipolar
induction, TCS 318, 2004].
Comment: In Proceedings DICE 2011, arXiv:1201.034
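Productivity, the fundamental assertion here, means every finite prefix of a coinductively defined stream is computed in finitely many steps. A rough Python sketch, using a generator as a stand-in for the paper's equational programs over coinductive data (the names `nats` and `take` are illustrative, not from the paper):

```python
import itertools

# A productive corecursive definition: each step emits one "constructor"
# (one yielded element) before continuing, so any finite prefix is
# computable even though the stream itself is infinite.

def nats(k=0):
    while True:
        yield k      # produce the head...
        k += 1       # ...then corecurse on the tail

def take(n, stream):
    """Observe a finite prefix of an infinite stream."""
    return list(itertools.islice(stream, n))
```

A definition that looped without yielding would be non-productive; it is this distinction, rather than bisimulation, that the framework takes as primitive.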
Complexity Information Flow in a Multi-threaded Imperative Language
We propose a type system to analyze the time consumed by multi-threaded
imperative programs with a shared global memory, which delineates a class of
safe multi-threaded programs. We demonstrate that a safe multi-threaded program
runs in polynomial time if (i) it is strongly terminating with respect to a
non-deterministic scheduling policy or (ii) it terminates with respect to a
deterministic and quiet scheduling policy. As a consequence, we also characterize the set of
polynomial time functions. The type system presented is based on the
fundamental notion of data tiering, which is central in implicit computational
complexity. It regulates the information flow in a computation. This aspect is
interesting in that the type system bears a resemblance to type-based
information flow analysis and notions of non-interference. To our knowledge,
this is the first characterization by a type system of polynomial-time
multi-threaded programs.
Robust learning with implicit residual networks
In this effort, we propose a new deep architecture utilizing residual blocks
inspired by implicit discretization schemes. As opposed to the standard
feed-forward networks, the outputs of the proposed implicit residual blocks are
defined as the fixed points of the appropriately chosen nonlinear
transformations. We show that this choice leads to improved stability of
both forward and backward propagation, has a favorable impact on
generalization power, and makes it possible to control the robustness of the
network with only a few hyperparameters. In addition, the proposed
reformulation of ResNet does not introduce new parameters and can potentially
lead to a reduction in the number of required layers due to improved forward
stability. Finally, we derive a memory-efficient training algorithm, propose
a stochastic regularization technique, and provide numerical results in
support of our findings.
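The core idea of the implicit block can be sketched numerically. This is an assumption-laden illustration, not the paper's implementation: it solves the fixed-point equation y = x + f(y) by plain Picard iteration, with `tanh` and a small-norm weight matrix keeping the map contractive so the iteration converges (the paper may use a different nonlinearity and solver).

```python
import numpy as np

def implicit_residual_block(x, W, n_iter=50):
    """Implicit residual block: output y satisfies y = x + tanh(W @ y).

    Unlike an explicit block (y = x + tanh(W @ x)), the output is a
    fixed point of the transformation. Here we find it by fixed-point
    iteration; contractivity (|tanh'| <= 1 and a small-norm W) makes
    the iterates converge.
    """
    y = x.copy()
    for _ in range(n_iter):
        y = x + np.tanh(W @ y)
    return y
```

Note that the block has exactly the same parameters `W` as its explicit counterpart, matching the abstract's claim that the reformulation introduces no new parameters.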
On Constructor Rewrite Systems and the Lambda Calculus
We prove that orthogonal constructor term rewrite systems and lambda-calculus
with weak (i.e., no reduction is allowed under the scope of a
lambda-abstraction) call-by-value reduction can simulate each other with a
linear overhead. In particular, weak call-by-value beta-reduction can be
simulated by an orthogonal constructor term rewrite system in the same number
of reduction steps. Conversely, each reduction in a term rewrite system can be
simulated by a constant number of beta-reduction steps. This is relevant to
implicit computational complexity, because the number of beta steps to normal
form is polynomially related to the actual cost (that is, as performed on a
Turing machine) of normalization, under weak call-by-value reduction.
Orthogonal constructor term rewrite systems and lambda-calculus are thus both
polynomially related to Turing machines, taking as notion of cost their natural
parameters.
Comment: 27 pages. arXiv admin note: substantial text overlap with arXiv:0904.412
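A tiny instance of an orthogonal constructor rewrite system makes the cost claim concrete. Below is a hedged sketch (the paper's simulation is far more general): addition on unary numerals via the rules add(Z, y) -> y and add(S(x), y) -> S(add(x, y)), where each rewrite step is a constant-cost operation, so normalizing add(m, n) takes m + 1 steps.

```python
# Unary numerals as nested constructor terms: Z, S(Z), S(S(Z)), ...
Z = ('Z',)
def S(t): return ('S', t)

def rewrite_add(t, steps=0):
    """Normalize add(m, n) using the two rewrite rules, counting steps:
       add(Z, y)    -> y
       add(S(x), y) -> S(add(x, y))"""
    op, *args = t
    if op == 'add':
        m, n = args
        if m[0] == 'Z':
            return n, steps + 1
        if m[0] == 'S':
            inner, steps = rewrite_add(('add', m[1], n), steps + 1)
            return S(inner), steps
    return t, steps

def to_int(t):
    k = 0
    while t[0] == 'S':
        k, t = k + 1, t[1]
    return k
```

Normalizing add(3, 1) takes exactly 4 rewrite steps (three S-rules plus one Z-rule); the paper's result is that such step counts, like beta-step counts under weak call-by-value reduction, are an honest (polynomially invariant) cost measure.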