Complexity of Equivalence and Learning for Multiplicity Tree Automata
We consider the complexity of equivalence and learning for multiplicity tree
automata, i.e., weighted tree automata over a field. We first show that the
equivalence problem is logspace equivalent to polynomial identity testing, the
complexity of which is a longstanding open problem. Secondly, we derive lower
bounds on the number of queries needed to learn multiplicity tree automata in
Angluin's exact learning model, over both arbitrary and fixed fields.
Habrard and Oncina (2006) give an exact learning algorithm for multiplicity
tree automata, in which the number of queries is proportional to the size of
the target automaton and the size of a largest counterexample, represented as a
tree, that is returned by the Teacher. However, the smallest
tree-counterexample may be exponential in the size of the target automaton.
Thus the above algorithm does not run in time polynomial in the size of the
target automaton, and has query complexity exponential in the lower bound.
Assuming a Teacher that returns minimal DAG representations of
counterexamples, we give a new exact learning algorithm whose query complexity
is quadratic in the target automaton size, almost matching the lower bound, and
improving the best previously-known algorithm by an exponential factor.
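Since the equivalence problem is shown logspace-equivalent to polynomial identity testing, it helps to recall how that problem is typically attacked: randomized evaluation via the Schwartz-Zippel lemma. The sketch below is an illustration of that general technique only, not the paper's reduction; the function names are ours.

```python
import random

def polynomials_equal(p, q, n_vars, degree_bound, trials=20, modulus=2**61 - 1):
    """Randomized identity test over a large prime field. If p and q differ
    as polynomials, the Schwartz-Zippel lemma bounds the chance that a random
    point agrees by degree_bound / modulus, so each trial almost surely
    exposes a difference."""
    for _ in range(trials):
        point = [random.randrange(modulus) for _ in range(n_vars)]
        if p(point) % modulus != q(point) % modulus:
            return False      # a witness point: definitely not identical
    return True               # identical with overwhelming probability

# (x + y)^2 and x^2 + 2xy + y^2 are the same polynomial...
expand = lambda v: (v[0] + v[1]) ** 2
terms = lambda v: v[0] ** 2 + 2 * v[0] * v[1] + v[1] ** 2
# ...while x^2 + y^2 is not.
wrong = lambda v: v[0] ** 2 + v[1] ** 2
```

One-sided error only: a "not equal" answer is always correct, while "equal" fails with probability at most (degree_bound/modulus) per trial.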
Quantitative multi-objective verification for probabilistic systems
We present a verification framework for analysing multiple quantitative objectives of systems that exhibit both nondeterministic and stochastic behaviour. These systems are modelled as probabilistic automata, enriched with cost or reward structures that capture, for example, energy usage or performance metrics. Quantitative properties of these models are expressed in a specification language that incorporates probabilistic safety and liveness properties, expected total cost or reward, and supports multiple objectives of these types. We propose and implement an efficient verification framework for such properties and then present two distinct applications of it: firstly, controller synthesis subject to multiple quantitative objectives; and, secondly, quantitative compositional verification. The practical applicability of both approaches is illustrated with experimental results from several large case studies.
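For a single expected-total-cost objective on a finite model, the standard computation is value iteration over the nondeterministic choices. The sketch below is an illustrative toy, not the framework's API; the dictionary encodings of `P`, `cost`, and `actions` are our own assumptions.

```python
def min_expected_cost(states, actions, P, cost, target, iters=10_000, tol=1e-12):
    """Value iteration for the minimum expected total cost to reach `target`.
    P[s][a] maps successor states to probabilities; cost[s][a] is the
    one-step cost of taking action a in state s."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        delta = 0.0
        for s in states:
            if s == target:
                continue
            best = min(cost[s][a] + sum(pr * V[t] for t, pr in P[s][a].items())
                       for a in actions[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    return V

# One transient state: action "try" costs 1 and reaches "done" with
# probability 0.5, so the expected total cost is 1 / 0.5 = 2.
states = ["s", "done"]
actions = {"s": ["try"]}
P = {"s": {"try": {"done": 0.5, "s": 0.5}}}
cost = {"s": {"try": 1.0}}
V = min_expected_cost(states, actions, P, cost, target="done")
```

Multi-objective queries, as in the paper, require more than this scalar iteration (e.g. linear programming over achievable trade-offs); the sketch covers only the single-objective base case.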
Model Counting of Query Expressions: Limitations of Propositional Methods
Query evaluation in tuple-independent probabilistic databases is the problem
of computing the probability of an answer to a query given independent
probabilities of the individual tuples in a database instance. There are two
main approaches to this problem: (1) in `grounded inference' one first obtains
the lineage for the query and database instance as a Boolean formula, then
performs weighted model counting on the lineage (i.e., computes the probability
of the lineage given probabilities of its independent Boolean variables); (2)
in methods known as `lifted inference' or `extensional query evaluation', one
exploits the high-level structure of the query as a first-order formula.
Although it is widely believed that lifted inference is strictly more powerful
than grounded inference on the lineage alone, no formal separation has
previously been shown for query evaluation. In this paper we show such a formal
separation for the first time.
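Step (2) of grounded inference, weighted model counting on the lineage, can be illustrated by brute force. The sketch below is our own illustration: exact model counters avoid this enumeration by compiling the lineage into a tractable representation such as a d-DNNF, which is precisely the kind of representation the lower bounds in this paper target.

```python
from itertools import product

def lineage_probability(lineage, variables, prob):
    """Weighted model counting by enumeration: the probability that the
    lineage formula is true under independent tuple probabilities.
    `lineage` is a callable on a dict of truth values. Exponential in
    len(variables), hence only a didactic baseline."""
    total = 0.0
    for assignment in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, assignment))
        if lineage(model):
            w = 1.0
            for v in variables:
                w *= prob[v] if model[v] else 1.0 - prob[v]
            total += w
    return total

# Lineage of a tiny join query: (x1 and y1) or (x2 and y2).
vars_ = ["x1", "y1", "x2", "y2"]
p = {"x1": 0.5, "y1": 0.5, "x2": 0.5, "y2": 0.5}
phi = lambda m: (m["x1"] and m["y1"]) or (m["x2"] and m["y2"])
print(lineage_probability(phi, vars_, p))  # 1 - (1 - 0.25)^2 = 0.4375
```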
We exhibit a class of queries for which model counting can be done in
polynomial time using extensional query evaluation, whereas the algorithms used
in state-of-the-art exact model counters on their lineages provably require
exponential time. Our lower bounds on the running times of these exact model
counters follow from new exponential size lower bounds on the kinds of d-DNNF
representations of the lineages that these model counters (either explicitly or
implicitly) produce. Though some of these queries have been studied before, no
non-trivial lower bounds on the sizes of these representations for these
queries were previously known.
Comment: To appear in International Conference on Database Theory (ICDT) 201
Hiding secrets in public random functions
Constructing advanced cryptographic applications often requires the ability of privately embedding messages or functions in the code of a program. As an example, consider the task of building a searchable encryption scheme, which allows the users to search over the encrypted data and learn nothing other than the search result. Such a task is achievable if it is possible to embed the secret key of an encryption scheme into the code of a program that performs the "decrypt-then-search" functionality, and guarantee that the code hides everything except its functionality.
This thesis studies two cryptographic primitives that facilitate hiding secrets in the code of random functions.
1. We first study the notion of a private constrained pseudorandom function (PCPRF). A PCPRF allows the PRF master secret key holder to derive a public constrained key that changes the functionality of the original key without revealing the constraint description. Such a notion closely captures the goal of privately embedding functions in the code of a random function.
Our main contribution is in constructing single-key secure PCPRFs for NC^1 circuit constraints based on the learning with errors assumption. Single-key secure PCPRFs were known to support a wide range of cryptographic applications, such as private-key deniable encryption and watermarking. In addition, we build reusable garbled circuits from PCPRFs.
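For intuition about constrained keys, a classical (non-private) prefix-constrained PRF can be built from the GGM tree. The sketch below is our own toy, using SHA-256 as a stand-in for a length-doubling PRG; note that it deliberately does not hide the constraint (the prefix must be known to evaluate), which is exactly the privacy property the PCPRFs in this thesis add on top.

```python
import hashlib

def prg_half(seed: bytes, bit: int) -> bytes:
    # Length-doubling PRG modeled by a hash; the bit selects the child seed.
    return hashlib.sha256(seed + bytes([bit])).digest()

def ggm_prf(key: bytes, x: str) -> bytes:
    """GGM PRF: walk the binary tree from the root key along the input bits."""
    s = key
    for b in x:
        s = prg_half(s, int(b))
    return s

def constrain_prefix(key: bytes, prefix: str) -> bytes:
    """Constrained key for all inputs starting with `prefix`: the subtree root."""
    s = key
    for b in prefix:
        s = prg_half(s, int(b))
    return s

def eval_constrained(ck: bytes, prefix: str, x: str) -> bytes:
    assert x.startswith(prefix)   # the constrained key only covers this subtree
    s = ck
    for b in x[len(prefix):]:
        s = prg_half(s, int(b))
    return s

k = b"master-secret"
ck = constrain_prefix(k, "10")
assert eval_constrained(ck, "10", "1011") == ggm_prf(k, "1011")
```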
2. We then study how to construct cryptographic hash functions that satisfy strong random oracle-like properties. In particular, we focus on the notion of correlation intractability, which requires that given the description of a function, it should be hard to find an input-output pair that satisfies any sparse relations.
Correlation intractability captures the security properties required for, e.g., the soundness of the Fiat-Shamir heuristic, where the Fiat-Shamir transformation is a practical method of building signature schemes from interactive proof protocols. However, correlation intractability was shown to be impossible to achieve for certain length parameters, and was widely considered to be unobtainable.
Our contribution is in building correlation intractable functions from various cryptographic assumptions. The security analyses of the constructions use the techniques of secretly embedding constraints in the code of random functions.
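The Fiat-Shamir transformation mentioned above replaces the verifier's random challenge with a hash of the transcript. The sketch below applies it to a toy Schnorr identification protocol to obtain a signature scheme; the tiny group parameters are for illustration only and offer no security.

```python
import hashlib, random

# p = 2q + 1 with q prime; g = 4 generates the order-q subgroup of Z_p^*.
p, q, g = 2039, 1019, 4

def H(*parts) -> int:
    # Fiat-Shamir challenge: hash the transcript down to an exponent mod q.
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(sk, msg):
    r = random.randrange(1, q)
    commitment = pow(g, r, p)   # prover's first message
    c = H(commitment, msg)      # challenge derived by hashing, not by a verifier
    s = (r + c * sk) % q        # prover's response
    return commitment, s

def verify(pk, msg, sig):
    commitment, s = sig
    c = H(commitment, msg)
    # g^s = g^(r + c*sk) = commitment * pk^c must hold mod p.
    return pow(g, s, p) == (commitment * pow(pk, c, p)) % p

sk = 7
pk = pow(g, sk, p)
assert verify(pk, "hello", sign(sk, "hello"))
```

Soundness of this transformation in the standard model is exactly what correlation intractability of the hash function is meant to guarantee.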
Truth Table Minimization of Computational Models
Complexity theory offers a variety of concise computational models for
computing boolean functions - branching programs, circuits, decision trees and
ordered binary decision diagrams to name a few. A natural question that arises
in this context with respect to any such model is this:
Given a function f:{0,1}^n \to {0,1}, can we compute the optimal complexity, according to some desirable measure, of computing f in the computational model in question?
A critical issue regarding this question is how exactly f is given, since a
more elaborate description of f allows the algorithm to use more computational
resources. Among the possible representations are black-box access to f (such
as in computational learning theory), a representation of f in the desired
computational model or a representation of f in some other model. One might
conjecture that if f is given as its complete truth table (i.e., a list of f's values on each of its 2^n possible inputs), the most elaborate description conceivable, then the optimal complexity in any computational model can be computed efficiently, since the algorithm computing it may run in poly(2^n) time. Several recent studies show
that this is far from the truth - some models have efficient and simple
algorithms that yield the desired result, others are believed to be hard, and
for some models this problem remains open.
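As one example of a model with a simple algorithm of this kind, the optimal decision-tree depth of f is computable from the truth table by dynamic programming over the restrictions of f: there are at most 3^n partial assignments, which is poly(2^n). The sketch below is our own illustration, not taken from the thesis.

```python
from functools import lru_cache

def min_decision_tree_depth(truth_table, n):
    """Optimal decision-tree depth of f: {0,1}^n -> {0,1}, given its full
    truth table (indexed by sum(x_i * 2^i)). Memoizing over restrictions
    gives O(3^n) subproblems, polynomial in the 2^n input size."""
    @lru_cache(maxsize=None)
    def depth(restriction):
        # restriction: tuple with 0/1 for fixed variables, None for free ones
        free = [i for i, v in enumerate(restriction) if v is None]
        values = set()
        for bits in range(1 << len(free)):          # f on all completions
            point = list(restriction)
            for j, i in enumerate(free):
                point[i] = (bits >> j) & 1
            values.add(truth_table[sum(b << i for i, b in enumerate(point))])
        if len(values) == 1:
            return 0                                # constant: a leaf suffices
        best = len(free)                            # query everything: upper bound
        for i in free:                              # try variable i at the root
            sub = list(restriction)
            sub[i] = 0
            d0 = depth(tuple(sub))
            sub[i] = 1
            d1 = depth(tuple(sub))
            best = min(best, 1 + max(d0, d1))
        return best
    return depth(tuple([None] * n))

# f = x0 XOR x1 depends on both variables, so the optimal depth is 2.
assert min_decision_tree_depth([0, 1, 1, 0], 2) == 2
```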
In this thesis we will discuss the computational complexity of this question
regarding several common types of computational models. We shall present
several new hardness results and efficient algorithms, as well as new proofs and extensions of known theorems, for variants of decision trees, formulas, and branching programs.
Foundations and applications of program obfuscation
Code is said to be obfuscated if it is intentionally difficult for humans to understand.
Obfuscating a program conceals its sensitive implementation details and
protects it from reverse engineering and hacking. Beyond software protection, obfuscation
is also a powerful cryptographic tool, enabling a variety of advanced applications.
Ideally, an obfuscated program would hide any information about the original
program that cannot be obtained by simply executing it. However, Barak et al.
[CRYPTO 01] proved that for some programs, such ideal obfuscation is impossible.
Nevertheless, Garg et al. [FOCS 13] recently suggested a candidate general-purpose
obfuscator which is conjectured to satisfy a weaker notion of security called indistinguishability
obfuscation.
In this thesis, we study the feasibility and applicability of secure obfuscation:
- What notions of secure obfuscation are possible and under what assumptions?
- How useful are weak notions like indistinguishability obfuscation?
Our first result shows that the applications of indistinguishability obfuscation go
well beyond cryptography. We study the tractability of computing a Nash equilibrium
of a game, a central problem in algorithmic game theory and complexity theory.
Based on indistinguishability obfuscation, we construct explicit games where a Nash
equilibrium cannot be found efficiently.
We also prove the following results on the feasibility of obfuscation. Our starting
point is the Garg et al. obfuscator that is based on a new algebraic encoding scheme
known as multilinear maps [Garg et al. EUROCRYPT 13].
1. Building on the work of Brakerski and Rothblum [TCC 14], we provide the first
rigorous security analysis for obfuscation. We give a variant of the Garg et al.
obfuscator and reduce its security to that of the multilinear maps. Specifically,
modeling the multilinear encodings as ideal boxes with perfect security, we prove
ideal security for our obfuscator. Our reduction shows that the obfuscator resists
all generic attacks that only use the encodings' permitted interface and do not
exploit their algebraic representation.
2. Going beyond generic attacks, we study the notion of virtual-gray-box obfuscation [Bitansky et al. CRYPTO 10]. This relaxation of ideal security is stronger
than indistinguishability obfuscation and has several important applications
such as obfuscating password protected programs. We formulate a security
requirement for multilinear maps which is sufficient, as well as necessary, for
virtual-gray-box obfuscation.
3. Motivated by the question of basing obfuscation on ideal objects that are simpler
than multilinear maps, we give a negative result showing that ideal obfuscation
is impossible, even in the random oracle model, where the obfuscator is given access
to an ideal random function. This is the first negative result for obfuscation
in a non-trivial idealized model.
Bayesian Logic Programs
Bayesian networks provide an elegant formalism for representing and reasoning
about uncertainty using probability theory. They are a probabilistic extension of propositional logic and, hence, inherit some of its limitations, such as the difficulty of representing objects and relations. We introduce a generalization of Bayesian networks, called Bayesian
logic programs, to overcome these limitations. In order to represent objects
and relations, they combine Bayesian networks with definite clause logic by
establishing a one-to-one mapping between ground atoms and random variables. We
show that Bayesian logic programs combine the advantages of both definite
clause logic and Bayesian networks. This includes the separation of
quantitative and qualitative aspects of the model. Furthermore, Bayesian logic
programs generalize both Bayesian networks and logic programs. So, many ideas developed
Comment: 52 pages
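The one-to-one mapping between ground atoms and random variables can be made concrete with a toy grounding. The example below is entirely hypothetical (the predicates, probabilities, and dictionary encoding are ours, not from the paper): each ground atom becomes a Boolean variable whose parents are the atoms in its clause body, and the joint distribution factorizes by the chain rule as in any Bayesian network.

```python
# Toy grounding of a Bayesian logic program for the single constant `home`,
# e.g. from a clause alarm(X) | burglary(X), earthquake(X).
parents = {
    "burglary(home)": [],
    "earthquake(home)": [],
    "alarm(home)": ["burglary(home)", "earthquake(home)"],
}
# Conditional probability of each ground atom being true, given its parents.
cpd = {
    "burglary(home)": lambda pa: 0.01,
    "earthquake(home)": lambda pa: 0.02,
    "alarm(home)": lambda pa: 0.95 if any(pa.values()) else 0.001,
}

def joint_probability(world):
    """Chain-rule joint over the ground Bayesian network.
    `world` maps every ground atom to a truth value; the dict above is
    listed in topological (parents-first) order."""
    p = 1.0
    for atom in parents:
        pa = {a: world[a] for a in parents[atom]}
        pt = cpd[atom](pa)
        p *= pt if world[atom] else 1.0 - pt
    return p

w = {"burglary(home)": True, "earthquake(home)": False, "alarm(home)": True}
print(joint_probability(w))  # 0.01 * 0.98 * 0.95
```

This also shows the separation of qualitative structure (the `parents` graph, from the logic program) and quantitative parameters (the `cpd` entries) noted in the abstract.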