Sequential decomposition of operations and compilers optimization
Code optimization is an important area of research that has made remarkable contributions to addressing the challenges of information technology, setting new directions for both hardware and software and providing new foundations for compilers and processors alike. In this report we study techniques for the sequential decomposition of mappings without using extra variables. We focus on finding and improving such computation techniques; in particular, we are interested in developing efficient methods and heuristic algorithms to find these decompositions and in implementing them in particular cases. Our aim is to implement these methods in a compiler in order to optimize machine-language code. It is always possible to compute an operation on K registers by a sequence of assignments that use only those K registers. We verified this result and introduced new methods. We described the in situ computation of a linear mapping by a sequence of linear assignments over the set of integers and investigated bounds for the algorithm. We introduced a method for the case of bijective Boolean mappings via algebraic operations on polynomials over GF(2). We implemented these methods using Maple.
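As a minimal illustration of in-situ computation (our own sketch, not the report's algorithms), the Python fragment below realizes a small linear mapping on two registers by a sequence of assignments that reuse only those registers, together with the classical three-XOR swap for the GF(2) case; all function names are ours.

```python
def inplace_linear_map(x, y):
    """Compute (x, y) := (x + y, x) in place, using only the two
    'registers' x and y and no temporary variable (integer case)."""
    x = x + y   # x now holds x0 + y0
    y = x - y   # (x0 + y0) - y0 = x0, so y now holds the old x
    return x, y

def inplace_swap_gf2(x, y):
    """Swap two bit-vectors with three XOR assignments (GF(2) case)."""
    x ^= y
    y ^= x
    x ^= y
    return x, y

assert inplace_linear_map(3, 5) == (8, 3)
assert inplace_swap_gf2(0b1010, 0b0110) == (0b0110, 0b1010)
```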
Invariant Generation through Strategy Iteration in Succinctly Represented Control Flow Graphs
We consider the problem of computing numerical invariants of programs, for
instance bounds on the values of numerical program variables. More
specifically, we study the problem of performing static analysis by abstract
interpretation using template linear constraint domains. Such invariants can be
obtained by Kleene iterations that are, in order to guarantee termination,
accelerated by widening operators. In many cases, however, applying this form
of extrapolation leads to invariants that are weaker than the strongest
inductive invariant that can be expressed within the abstract domain in use.
Another well-known source of imprecision of traditional abstract interpretation
techniques stems from their use of join operators at merge nodes in the control
flow graph. The mentioned weaknesses may prevent these methods from proving
safety properties. The technique we develop in this article addresses both of
these issues: contrary to Kleene iterations accelerated by widening operators,
it is guaranteed to yield the strongest inductive invariant that can be
expressed within the template linear constraint domain in use. It also eschews
join operators by distinguishing all paths of loop-free code segments. Formally
speaking, our technique computes the least fixpoint within a given template
linear constraint domain of a transition relation that is succinctly expressed
as an existentially quantified linear real arithmetic formula. In contrast to
previously published techniques that rely on quantifier elimination, our
algorithm is proved to have optimal complexity: we prove that the decision
problem associated with our fixpoint problem is in the second level of the
polynomial-time hierarchy.
Comment: 35 pages; conference version published at ESOP 2011; this version is a CoRR version of our submission to Logical Methods in Computer Science.
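To make the widening imprecision concrete, here is a small self-contained sketch (our own toy, using a plain interval domain rather than general template constraints) of a Kleene iteration accelerated by a standard widening; on a bounded loop it overshoots the strongest inductive invariant expressible in the domain.

```python
# Toy example: analyse  i = 0; while i < 100: i = i + 1  in an interval domain.
# The strongest inductive invariant expressible with intervals is i in [0, 100],
# but a naive widening jumps straight to +infinity on the upper bound.
INF = float("inf")

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):
    lo = old[0] if new[0] >= old[0] else -INF
    hi = old[1] if new[1] <= old[1] else INF
    return (lo, hi)

def post(iv):
    """Abstract effect of one loop iteration: assume i < 100, then i := i + 1."""
    lo, hi = iv
    hi = min(hi, 99)          # guard i < 100
    if lo > hi:
        return None           # loop body unreachable from this interval
    return (lo + 1, hi + 1)   # i := i + 1

inv = (0, 0)                   # i = 0 at loop entry
while True:
    body = post(inv)
    new = inv if body is None else join(inv, body)
    new = widen(inv, new)
    if new == inv:
        break
    inv = new

print(inv)  # (0, inf): strictly weaker than the best inductive invariant (0, 100)
```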
Improving Strategies via SMT Solving
We consider the problem of computing numerical invariants of programs by
abstract interpretation. Our method eschews two traditional sources of
imprecision: (i) the use of widening operators for enforcing convergence within a finite number of iterations, and (ii) the use of merge operations (often, convex hulls) at the merge points of the control flow graph. It instead computes the
least inductive invariant expressible in the domain at a restricted set of
program points, and analyzes the rest of the code en bloc. We emphasize that we
compute this inductive invariant precisely. For that we extend the strategy
improvement algorithm of [Gawlitza and Seidl, 2007]. If we applied their method
directly, we would have to solve an exponentially sized system of abstract
semantic equations, resulting in memory exhaustion. Instead, we keep the system
implicit and discover strategy improvements using SAT modulo real linear
arithmetic (SMT). For evaluating strategies we use linear programming. Our
algorithm has low polynomial space complexity; on contrived worst-case examples it performs exponentially many strategy improvement steps, which is unsurprising, since we show that the associated abstract reachability problem is Pi-p-2-complete.
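A minimal sketch of the improvement-detection step (our simplification, using the z3 SMT solver over linear real arithmetic; the full algorithm keeps the semantic equations implicit and evaluates strategies with linear programming, which is omitted here): for a one-variable loop and the template bound x <= b, an SMT query either witnesses a transition that escapes the bound, meaning the current strategy must be improved, or proves the bound inductive.

```python
# Minimal sketch (our simplification) of discovering an improvement with SMT:
# for the loop  x = 0; while x <= 9: x = x + 1  and the template invariant
# x <= b, ask whether one loop iteration can escape the candidate bound.
from z3 import Real, Solver, And, sat

def find_escape(b):
    x, x1 = Real("x"), Real("x'")
    s = Solver()
    # pre-state inside the candidate invariant, guard holds, post-state escapes
    s.add(And(x <= b, x <= 9, x1 == x + 1, x1 > b))
    if s.check() == sat:
        m = s.model()
        return m[x], m[x1]      # counterexample transition: improve the bound
    return None                 # the bound is inductive for this loop

print(find_escape(5))    # a witness such as (5, 6): the bound 5 must be improved
print(find_escape(10))   # None: x <= 10 is inductive (and is the least such bound)
```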
Large substitution boxes with efficient combinational implementations
At a fundamental level, the security of symmetric key cryptosystems ties back to Claude Shannon's properties of confusion and diffusion. Confusion can be defined as the complexity of the relationship between the secret key and ciphertext, and diffusion can be defined as the degree to which the influence of a single input plaintext bit is spread throughout the resulting ciphertext. In constructions of symmetric key cryptographic primitives, confusion and diffusion are commonly realized with the application of nonlinear and linear operations, respectively. The Substitution-Permutation Network design is one such popular construction, adopted by the Advanced Encryption Standard among other block ciphers, which employs substitution boxes, or S-boxes, for nonlinear behavior. As a result, much research has been devoted to improving the cryptographic strength and implementation efficiency of S-boxes, both to resist cryptanalytic attacks that exploit weak constructions and to enable fast, area-efficient hardware implementations on a variety of platforms. To date, most published and standardized S-boxes are bijective functions on elements of 4 or 8 bits. In this work, we explore the cryptographic properties and implementations of 8- and 16-bit S-boxes. We study the strength of these S-boxes in the context of Boolean functions and investigate area-optimized combinational hardware implementations. We then present a variety of new 8- and 16-bit S-boxes that have ideal cryptographic properties and enable low-area combinational implementations.
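One standard criterion in this line of work, differential uniformity, is easy to measure directly from an S-box lookup table. The sketch below (ours, not from the work above) computes it for the well-known 4-bit PRESENT S-box as a stand-in; the same code applies unchanged to 8- or 16-bit tables.

```python
# Measure differential uniformity (the largest entry of the difference
# distribution table for nonzero input differences); lower is better.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # 4-bit PRESENT S-box

def differential_uniformity(sbox):
    n = len(sbox)
    worst = 0
    for dx in range(1, n):                       # nonzero input difference
        counts = [0] * n
        for x in range(n):
            counts[sbox[x] ^ sbox[x ^ dx]] += 1  # resulting output difference
        worst = max(worst, max(counts))
    return worst

assert sorted(SBOX) == list(range(16))           # bijectivity check
print(differential_uniformity(SBOX))             # 4 for this S-box
```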
Reinforcement Learning: A Survey
This paper surveys the field of reinforcement learning from a
computer-science perspective. It is written to be accessible to researchers
familiar with machine learning. Both the historical basis of the field and a
broad selection of current work are summarized. Reinforcement learning is the
problem faced by an agent that learns behavior through trial-and-error
interactions with a dynamic environment. The work described here has a
resemblance to work in psychology, but differs considerably in the details and
in the use of the word ``reinforcement.'' The paper discusses central issues of
reinforcement learning, including trading off exploration and exploitation,
establishing the foundations of the field via Markov decision theory, learning
from delayed reinforcement, constructing empirical models to accelerate
learning, making use of generalization and hierarchy, and coping with hidden
state. It concludes with a survey of some implemented systems and an assessment
of the practical utility of current methods for reinforcement learning.
Comment: See http://www.jair.org/ for any accompanying files.
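Two of the central issues the survey discusses, learning from delayed reinforcement and the exploration/exploitation trade-off, can be illustrated with a tabular Q-learning sketch on a toy chain environment (our own example, not taken from the survey).

```python
import random

N_STATES, ACTIONS = 5, (0, 1)        # chain of 5 states; move left (0) or right (1)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1    # learning rate, discount, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    r = 1.0 if s2 == N_STATES - 1 else 0.0   # reward only at the right end (delayed)
    return s2, r

for _ in range(2000):                         # episodes
    s = 0
    for _ in range(20):                       # steps per episode
        # epsilon-greedy: explore with probability EPS, otherwise exploit
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2, r = step(s, a)
        # Q-learning update: bootstrap from the best action in the next state
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

print([max(q) for q in Q])   # state values increase toward the rewarding state
```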
Constant-Delay Enumeration for Nondeterministic Document Spanners
We consider the information extraction framework known as document spanners,
and study the problem of efficiently computing the results of the extraction
from an input document, where the extraction task is described as a sequential
variable-set automaton (VA). We pose this problem in the setting of enumeration
algorithms, where we can first run a preprocessing phase and must then produce
the results with a small delay between any two consecutive results. Our goal is
to have an algorithm which is tractable in combined complexity, i.e., in the
sizes of the input document and the VA; while ensuring the best possible data
complexity bounds in the input document size, i.e., constant delay in the
document size. Several recent works at PODS'18 proposed such algorithms but
with linear delay in the document size or with an exponential dependency in
size of the (generally nondeterministic) input VA. In particular, Florenzano et
al. suggest that our desired runtime guarantees cannot be met for general
sequential VAs. We refute this and show that, given a nondeterministic
sequential VA and an input document, we can enumerate the mappings of the VA on
the document with the following bounds: the preprocessing is linear in the
document size and polynomial in the size of the VA, and the delay is
independent of the document and polynomial in the size of the VA. The resulting
algorithm thus achieves tractability in combined complexity and the best
possible data complexity bounds. Moreover, it is rather easy to describe, in
particular for the restricted case of so-called extended VAs. Finally, we
evaluate our algorithm empirically using a prototype implementation.
Comment: 29 pages. Extended version of arXiv:1807.09320. Integrates all corrections following reviewer feedback. Outside of some minor formatting differences and tweaks, this paper is the same as the paper to appear in the ACM TODS journal.
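The enumeration setting itself (preprocessing linear in the document, then a delay between consecutive outputs that does not depend on the document) can be illustrated with a toy far simpler than spanner evaluation; the sketch below is our own illustration of that contract, not the paper's algorithm.

```python
# Toy illustration of the enumeration contract: a preprocessing pass over the
# document builds an index, after which each result is emitted with constant
# delay, independent of the document length.  The "extraction task" here is
# just reporting occurrences of a fixed word.
def preprocess(document, word):
    hits = []
    for i in range(len(document) - len(word) + 1):   # single pass over the document
        if document[i:i + len(word)] == word:
            hits.append((i, i + len(word)))
    return hits

def enumerate_spans(hits):
    for span in hits:        # O(1) work between consecutive outputs
        yield span

doc = "constant delay, constant work, constant delay"
for span in enumerate_spans(preprocess(doc, "constant")):
    print(span)
```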
Novel Approach to Real Polynomial Root-finding and Matrix Eigen-solving
Univariate polynomial root-finding is both classical and important for modern
computing. Frequently one seeks just the real roots of a polynomial with real
coefficients. They can be approximated at a low computational cost if the
polynomial has no nonreal roots, but typically nonreal roots are much more
numerous than the real ones. We dramatically accelerate the known algorithms in
this case by exploiting the correlation between the computations with matrices
and polynomials, extending the techniques of the matrix sign iteration, and
exploiting the structure of the companion matrix of the input polynomial. We
extend some of the proposed techniques to the approximation of the real
eigenvalues of a real nonsymmetric matrix.
Comment: 17 pages, added algorithm
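The classical correspondence the paper exploits, that the roots of a polynomial are the eigenvalues of its companion matrix, can be sketched in a few lines (our own illustration with NumPy; the paper's accelerated algorithms are far more involved).

```python
import numpy as np

def real_roots(coeffs, tol=1e-9):
    """coeffs = [a_n, ..., a_1, a_0] for a_n x^n + ... + a_0, with a_n != 0."""
    a = np.asarray(coeffs, dtype=float)
    a = a / a[0]                      # make the polynomial monic
    n = len(a) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)        # ones on the subdiagonal
    C[:, -1] = -a[:0:-1]              # last column: -a_0, -a_1, ..., -a_{n-1}
    eig = np.linalg.eigvals(C)        # eigenvalues of the companion matrix = roots
    return sorted(e.real for e in eig if abs(e.imag) < tol)

# (x - 1)(x + 2)(x^2 + 1) = x^4 + x^3 - x^2 + x - 2: real roots -2 and 1
print(real_roots([1, 1, -1, 1, -2]))
```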