Symbolic Computation via Program Transformation
Symbolic computation is an important approach in automated program analysis.
Most state-of-the-art tools perform symbolic computation as interpreters and
directly maintain symbolic data. In this paper, we show that it is feasible,
and in fact practical, to use a compiler-based strategy instead. Using compiler
tooling, we propose and implement a transformation which takes a standard
program and outputs a program that performs semantically equivalent, but
partially symbolic, computation. The transformed program maintains symbolic
values internally and operates directly on them; hence, the program can be
processed by a tool without support for symbolic manipulation.
The main motivation for the transformation is symbolic verification, but
there are many other possible use cases, including test generation and concolic
testing. Moreover, using the transformation simplifies tools, since the symbolic
computation is handled by the program directly. We have implemented the
transformation at the level of LLVM bitcode. The paper includes an experimental
evaluation, based on an explicit-state software model checker as a verification
backend.
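The compiler-based strategy described above can be illustrated with a minimal, hypothetical sketch (in Python rather than LLVM bitcode): the transformed function keeps the same body as the original, but its inputs may now be symbolic expression objects whose overloaded operators build expression trees, so an ordinary execution engine can run it without native symbolic support. The Sym class and both functions are illustrative inventions, not the paper's actual transformation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sym:
    """A symbolic expression node: an operator and its argument tuple."""
    op: str
    args: tuple

    def __add__(self, other):
        return Sym("+", (self, other))

    def __mul__(self, other):
        return Sym("*", (self, other))

def original(x: int, y: int) -> int:
    return x * x + y

def transformed(x, y):
    # Same body as `original`, but x and y may now be Sym values:
    # the overloaded operators build an expression tree instead of
    # computing a concrete result.
    return x * x + y

# Running the transformed program on symbolic inputs produces a tree
# representing x*x + y; concrete inputs still compute concretely.
expr = transformed(Sym("var", ("x",)), Sym("var", ("y",)))
```

On concrete arguments the transformed function behaves exactly like the original, which is the sense in which the computation is only partially symbolic.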
A Complexity Preserving Transformation from Jinja Bytecode to Rewrite Systems
We revisit known transformations from Jinja bytecode to rewrite systems from
the viewpoint of runtime complexity. Suitably generalising the constructions
proposed in the literature, we define an alternative representation of Jinja
bytecode (JBC) executions as "computation graphs" from which we obtain a novel
representation of JBC executions as "constrained rewrite systems". We prove
non-termination and complexity preservation of the transformation. We restrict
ourselves to well-formed JBC programs that only make use of non-recursive
methods and expect tree-shaped objects as input. Our approach allows for
simplified correctness proofs and provides a framework for combining the
computation graph method with standard techniques from static program analysis,
such as "reachability analysis".
Comment: 36 pages
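The idea of measuring runtime complexity by derivation length in a rewrite system can be sketched, with heavy simplification, as follows. The single rule below is a toy stand-in for the paper's constrained rewrite systems: it models a loop walking down a unary number, so the number of rewrite steps to a normal form is linear in the input size. All names are illustrative, not the paper's construction.

```python
def step(term):
    # Single rewrite rule: count(s(x)) -> count(x).
    # Terms are nested tuples; "0" is the unary zero.
    if isinstance(term, tuple) and term[0] == "count":
        inner = term[1]
        if isinstance(inner, tuple) and inner[0] == "s":
            return ("count", inner[1])
    return None  # no rule applies: term is a normal form

def derivation_length(term) -> int:
    # Number of rewrite steps to reach a normal form -- the
    # runtime-complexity measure on the rewrite-system side.
    steps = 0
    while (nxt := step(term)) is not None:
        term, steps = nxt, steps + 1
    return steps

def unary(n: int):
    # Encode n as s(s(...s(0)...)).
    t = "0"
    for _ in range(n):
        t = ("s", t)
    return t
```

A complexity-preserving transformation guarantees that bounds on such derivation lengths carry over to bounds on the runtime of the original bytecode program.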
Computational Soundness for Dalvik Bytecode
Automatically analyzing information flow within Android applications that
rely on cryptographic operations, with their computational security guarantees,
imposes formidable challenges that existing approaches for understanding an
app's behavior struggle to meet. These approaches do not distinguish
cryptographic and non-cryptographic operations, and hence do not account for
cryptographic protections: f(m) is considered sensitive for a sensitive message
m irrespective of potential secrecy properties offered by a cryptographic
operation f. These approaches consequently provide a safe approximation of the
app's behavior, but they mistakenly classify a large fraction of apps as
potentially insecure and consequently yield overly pessimistic results.
In this paper, we show how cryptographic operations can be faithfully
included into existing approaches for automated app analysis. To this end, we
first show how cryptographic operations can be expressed as symbolic
abstractions within the comprehensive Dalvik bytecode language. These
abstractions are accessible to automated analysis, and they can be conveniently
added to existing app analysis tools using minor changes in their semantics.
Second, we show that our abstractions are faithful by providing the first
computational soundness result for Dalvik bytecode, i.e., the absence of
attacks against our symbolically abstracted program entails the absence of any
attacks against a suitable cryptographic program realization. We cast our
computational soundness result in the CoSP framework, which makes the result
modular and composable.
Comment: Technical report for the ACM CCS 2016 conference paper
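A loose sketch of the symbolic-abstraction idea (not the paper's Dalvik semantics): a ciphertext is tracked as a symbolic term Enc(key, msg) rather than as opaque sensitive bytes, so an information-flow check can treat f(m) as non-sensitive when the encryption key is unknown to the adversary. The Enc class and is_sensitive function are hypothetical names invented for this illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Enc:
    """Symbolic abstraction of a ciphertext: Enc(key, msg)."""
    key: str  # symbolic key name
    msg: str  # symbolic plaintext name

def is_sensitive(value, secrets, adversary_keys) -> bool:
    # A concrete value is sensitive iff it is a secret.  A symbolic
    # ciphertext Enc(k, m) of a secret m is sensitive only if the
    # adversary also knows k: the abstraction lets the analysis
    # discharge f(m) when f provides secrecy.
    if isinstance(value, Enc):
        return value.key in adversary_keys and value.msg in secrets
    return value in secrets
```

Without such an abstraction, the analysis would have to flag every ciphertext derived from a secret, which is exactly the over-approximation the abstract criticizes; computational soundness is what justifies trusting the symbolic verdict.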
Inference of termination conditions for numerical loops
We present a new approach to termination analysis of numerical computations
in logic programs. Traditional approaches fail to analyse them due to the
non-well-foundedness of the integers. We present a technique that allows us to
overcome these difficulties. Our approach is based on transforming a program in
a way that allows integrating and extending techniques originally developed for
analysis of numerical computations in the framework of query-mapping pairs with
the well-known framework of acceptability. Such an integration not only
contributes to the understanding of the termination behaviour of numerical
computations, but also makes it possible to analyse such computations correctly
and automatically, thus extending previous work on a constraints-based approach
to termination. In the last section of the paper we discuss possible extensions
of the technique, including incorporating general term orderings.
Comment: Presented at WST200
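The core difficulty and its standard remedy can be illustrated with a toy example: the integers are not well-founded under <, yet a loop such as while x < 100: x += 1 terminates because the measure 100 - x is non-negative and strictly decreasing, a ranking-function argument in the spirit of (though far simpler than) the paper's transformation-based analysis. The code below is an illustrative sketch, not the authors' method.

```python
def loop_terminates(x0: int, bound: int = 100) -> int:
    # The integers are not well-founded under <, but the measure
    # bound - x is non-negative and strictly decreasing inside the
    # loop body, which witnesses termination (a ranking function
    # into the well-founded naturals).
    x, steps = x0, 0
    while x < bound:
        measure_before = bound - x
        x += 1
        measure_after = bound - x
        assert 0 <= measure_after < measure_before  # well-founded descent
        steps += 1
    return steps
```

Finding such a measure automatically, for loops whose conditions and updates mix several integer variables, is precisely what a termination-inference technique must do.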
Slavnov-Taylor1.0: A Mathematica package for computation in BRST formalism
Slavnov-Taylor 1.0 is a Mathematica package which allows us to perform
automatic symbolic computation in the BRST formalism. This article serves as a
self-contained guide for prospective users, and indicates the conventions and
approximations used.
Comment: 1+11 pages, 2 figures, LaTeX graphicx package used; accepted for publication in Computer Physics Communications
Automatic differentiation in machine learning: a survey
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in
machine learning. Automatic differentiation (AD), also called algorithmic
differentiation or simply "autodiff", is a family of techniques similar to but
more general than backpropagation for efficiently and accurately evaluating
derivatives of numeric functions expressed as computer programs. AD is a small
but established field with applications in areas including computational fluid
dynamics, atmospheric sciences, and engineering design optimization. Until very
recently, the fields of machine learning and AD have largely been unaware of
each other and, in some cases, have independently discovered each other's
results. Despite its relevance, general-purpose AD has been missing from the
machine learning toolbox, a situation slowly changing with its ongoing adoption
under the names "dynamic computational graphs" and "differentiable
programming". We survey the intersection of AD and machine learning, cover
applications where AD has direct relevance, and address the main implementation
techniques. By precisely defining the main differentiation techniques and their
interrelationships, we aim to bring clarity to the usage of the terms
"autodiff", "automatic differentiation", and "symbolic differentiation" as
these are encountered more and more in machine learning settings.
Comment: 43 pages, 5 figures