IMITATOR II: A Tool for Solving the Good Parameters Problem in Timed Automata
We present here Imitator II, a new version of Imitator, a tool implementing
the "inverse method" for parametric timed automata: given a reference valuation
of the parameters, it synthesizes a constraint such that, for any valuation
satisfying this constraint, the system behaves the same as under the reference
valuation in terms of traces, i.e., alternating sequences of locations and
actions. Imitator II also implements the "behavioral cartography algorithm",
allowing us to solve the following good parameters problem: find a set of
valuations within a given bounded parametric domain for which the system
behaves well. We present new features and optimizations of the tool, and give
results of applications to various examples of asynchronous circuits and
communication protocols.
Comment: In Proceedings INFINITY 2010, arXiv:1010.611
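The behavioral cartography idea, sweeping reference valuations over a bounded parameter domain and classifying each one as good or bad, can be sketched as follows. This is a hedged illustration only: the grid sweep, the two-parameter domain, and the `behaves_well` predicate are hypothetical stand-ins, not IMITATOR's actual interface (which synthesizes symbolic constraints rather than point classifications).

```python
# Sketch of behavioral cartography: sample a bounded 2-parameter domain
# on a grid and classify each reference valuation as "good" or "bad"
# using a user-supplied predicate. In the real tool, each sampled point
# would be generalized to a whole constraint (a "tile") by the inverse
# method; here we only record the point classification.

def behaves_well(p1, p2):
    # Hypothetical property standing in for a trace-based model check:
    # say the system meets its deadline when p1 stays below 2 * p2.
    return p1 < 2 * p2

def cartography(p1_range, p2_range, step):
    good, bad = [], []
    p1 = p1_range[0]
    while p1 <= p1_range[1]:
        p2 = p2_range[0]
        while p2 <= p2_range[1]:
            (good if behaves_well(p1, p2) else bad).append((p1, p2))
            p2 += step
        p1 += step
    return good, bad

good, bad = cartography((0, 4), (0, 4), 1)
```

The set `good` approximates the answer to the good parameters problem on the sampled domain; the inverse method's contribution is to replace pointwise sampling with symbolic constraints covering whole regions.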
Polylogarithmic Cuts in Models of V^0
We study initial cuts of models of weak two-sorted Bounded Arithmetics with
respect to the strength of their theories and show that these theories are
stronger than the original one. More explicitly, we will see that
polylogarithmic cuts of models of V^0 are models of VNC^1,
by formalizing a proof of Nepomnjascij's Theorem in such cuts. This is a
strengthening of a result by Paris and Wilkie. We can then exploit our result
in Proof Complexity to observe that Frege proof systems can be
subexponentially simulated by bounded depth Frege proof systems. This result has
recently been obtained by Filmus, Pitassi and Santhanam in a direct proof. As
an interesting observation we also obtain an average case separation of
Resolution from AC0-Frege by applying a recent result with Tzameret.
Comment: 16 pages
Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics
Quantum computing is powerful because unitary operators describing the
time-evolution of a quantum system have exponential size in terms of the number
of qubits present in the system. We develop a new "Singular value
transformation" algorithm capable of harnessing this exponential advantage,
that can apply polynomial transformations to the singular values of a block of
a unitary, generalizing the optimal Hamiltonian simulation results of Low and
Chuang. The proposed quantum circuits have a very simple structure, often give
rise to optimal algorithms and have appealing constant factors, while usually
only using a constant number of ancilla qubits. We show that singular value
transformation leads to novel algorithms. We give an efficient solution to a
certain "non-commutative" measurement problem and propose a new method for
singular value estimation. We also show how to exponentially improve the
complexity of implementing fractional queries to unitaries with a gapped
spectrum. Finally, as a quantum machine learning application we show how to
efficiently implement principal component regression. "Singular value
transformation" is conceptually simple and efficient, and leads to a unified
framework of quantum algorithms incorporating a variety of quantum speed-ups.
We illustrate this by showing how it generalizes a number of prominent quantum
algorithms, including: optimal Hamiltonian simulation, implementing the
Moore-Penrose pseudoinverse with exponential precision, fixed-point amplitude
amplification, robust oblivious amplitude amplification, fast QMA
amplification, fast quantum OR lemma, certain quantum walk results and several
quantum machine learning algorithms. In order to exploit the strengths of the
presented method it is useful to know its limitations too, therefore we also
prove a lower bound on the efficiency of singular value transformation, which
often gives optimal bounds.
Comment: 67 pages, 1 figure
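The core primitive, applying a polynomial to the singular values of a matrix, is easy to check classically: if A = U diag(s) Vt, then applying an odd polynomial such as p(x) = x^3 to the singular values yields U diag(s^3) Vt, which equals A Aᵀ A. A small NumPy sketch of this classical identity (no claim is made about the paper's quantum circuits):

```python
import numpy as np

def svt(a, poly):
    """Apply poly to the singular values of a: U diag(poly(s)) Vt."""
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    return u @ np.diag(poly(s)) @ vt

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 3))
a /= np.linalg.norm(a, 2)      # normalize so singular values are <= 1

# For the odd polynomial p(x) = x^3, the singular value transformation
# of A coincides with A A^T A, since A (A^T A) = U S^3 V^T.
cubed = svt(a, lambda s: s ** 3)
assert np.allclose(cubed, a @ a.T @ a)
```

The quantum content of the paper is that such polynomial transformations can be applied to a block of a unitary directly by a quantum circuit, without ever computing the SVD.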
Consistency of circuit lower bounds with bounded theories
Proving that there are problems in P^NP that require
boolean circuits of super-linear size is a major frontier in complexity theory.
While such lower bounds are known for larger complexity classes, existing
results only show that the corresponding problems are hard on infinitely many
input lengths. For instance, proving almost-everywhere circuit lower bounds is
open even for problems in MAEXP. Given the notorious difficulty of
proving lower bounds that hold for all large input lengths, we ask the
following question: Can we show that a large set of techniques cannot prove
that P^NP is easy infinitely often? Motivated by this and related
questions about the interaction between mathematical proofs and computations,
we investigate circuit complexity from the perspective of logic.
Among other results, we prove that for any parameter k it is
consistent with the theory T that the computational class C is not contained
in i.o.SIZE[n^k], where (T, C) is one of the pairs: T^1_2 and P^NP, S^1_2
and NP, and PV and P. In other words, these theories cannot establish
infinitely often circuit upper bounds for the corresponding problems. This is
of interest because the weaker theory PV already formalizes
sophisticated arguments, such as a proof of the PCP Theorem. These consistency
statements are unconditional and improve on earlier theorems of [KO17] and
[BM18] on the consistency of lower bounds with PV.
Polynomial time ultrapowers and the consistency of circuit lower bounds
A polynomial time ultrapower is a structure given by the set of polynomial time computable functions modulo some ultrafilter. Such structures model the universal theory ∀PV of all polynomial time functions. Generalizing a theorem of Hirschfeld (Israel J Math 20(2):111–126, 1975), we show that every countable model of ∀PV is isomorphic to an existentially closed substructure of a polynomial time ultrapower. Moreover, one can take a substructure of a special form, namely a limit polynomial time ultrapower in the classical sense of Keisler (in: Bergelson, V., Blass, A., Di Nasso, M., Jin, R. (eds.) Ultrafilters across mathematics, contemporary mathematics vol 530, pp 163–179. AMS, New York, 1963). Using a polynomial time ultrapower over a nonstandard Herbrand saturated model of ∀PV, we show that ∀PV is consistent with a formal statement of a polynomial size circuit lower bound for a polynomial time computable function. This improves upon a recent result of Krajíček and Oliveira (Logical Methods in Computer Science 13(1:4), 2017).
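For orientation, the classical ultrapower construction specialized to polynomial time can be sketched as follows: writing PT for the set of polynomial time computable functions and U for an ultrafilter on the natural numbers, functions are identified when they agree on a U-large set. This is only the standard textbook shape of the definition; the paper's construction may differ in details such as the index set and the treatment of function symbols.

```latex
PT/\mathcal{U} \;=\; \{\, [f] : f \in PT \,\}, \qquad
[f] = [g] \;\Longleftrightarrow\;
\{\, n \in \mathbb{N} : f(n) = g(n) \,\} \in \mathcal{U}.
```

By Łoś-style arguments restricted to universal formulas, such a quotient satisfies the universal theory ∀PV of the polynomial time functions.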
Robust explicit MPC design under finite precision arithmetic
We propose a design methodology for explicit Model Predictive Control (MPC) that guarantees hard constraint satisfaction in the presence of finite precision arithmetic errors. The implementation of complex digital control techniques, like MPC, is becoming increasingly common in embedded systems, where reduced precision computation techniques are embraced to achieve fast execution and low power consumption. However, in a low precision implementation, constraint satisfaction is not guaranteed if infinite precision is assumed during the algorithm design. To enforce constraint satisfaction under numerical errors, we use forward error analysis to compute an error bound on the output of the embedded controller. We treat this error as a state disturbance and use it to inform the design of a constraint-tightening robust controller. Benchmarks with a classical control problem, namely an inverted pendulum, show how it is possible to guarantee, by design, constraint satisfaction for embedded systems featuring low precision, fixed-point computations.
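The constraint-tightening step can be illustrated with a toy one-step control law: bound the worst-case fixed-point rounding error of computing u = Kx, treat that bound as a disturbance, and shrink the constraint by the same margin. The gain, bounds, and the crude error model below are hypothetical placeholders, not the paper's actual error analysis:

```python
# Toy constraint tightening: u = K x is computed in fixed-point with
# frac_bits fractional bits, so each rounding contributes at most
# ulp = 2^-(frac_bits + 1). A deliberately crude forward error bound
# for an n-term dot product (n products plus n - 1 additions, with
# |k_i| <= k_max and |x_i| <= x_max) is used as the tightening margin.

def error_bound(k_row, x_max, frac_bits, n):
    ulp = 2.0 ** -(frac_bits + 1)
    k_max = max(abs(k) for k in k_row)
    # one rounding per product, scaled by operand magnitudes,
    # plus one rounding per addition
    return n * ulp * (k_max + x_max) + (n - 1) * ulp

u_max = 1.0                       # hard constraint: u <= u_max
k_row = [0.5, -0.25]              # hypothetical gain row
eps = error_bound(k_row, x_max=2.0, frac_bits=12, n=2)

# Tightened constraint used at design time: the finite-precision
# controller satisfying u <= u_max - eps cannot violate u <= u_max.
u_max_tight = u_max - eps
assert 0 < u_max_tight < u_max
```

The design choice mirrors classical robust MPC: any bounded implementation error can be absorbed by the same machinery that handles bounded state disturbances.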
On the Complexity of Nonrecursive XQuery and Functional Query Languages on Complex Values
This paper studies the complexity of evaluating functional query languages
for complex values such as monad algebra and the recursion-free fragment of
XQuery.
We show that monad algebra with equality restricted to atomic values is
complete for the class TA[2^{O(n)}, O(n)] of problems solvable in linear
exponential time with a linear number of alternations. The monotone fragment of
monad algebra with atomic value equality but without negation is complete for
nondeterministic exponential time. For monad algebra with deep equality, we
establish TA[2^{O(n)}, O(n)] lower and exponential-space upper bounds.
Then we study a fragment of XQuery, Core XQuery, that seems to incorporate
all the features of a query language on complex values that are traditionally
deemed essential. A close connection between monad algebra on lists and Core
XQuery (with "child" as the only axis) is exhibited, and it is shown that
these languages are expressively equivalent up to representation issues. We
show that Core XQuery is just as hard as monad algebra w.r.t. combined
complexity, and that it is in TC0 if the query is assumed fixed.
Comment: Long version of PODS 2005 paper
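Monad algebra on lists, the formalism the paper relates to Core XQuery, is built from a handful of combinators: singleton, map, and flatten (the monad multiplication). A minimal sketch, with operation names and the example data chosen for illustration rather than taken from the paper:

```python
# Minimal monad-algebra-on-lists combinators. Queries over complex
# values (nested lists/records) are compositions of these.

def singleton(x):
    return [x]

def fmap(f, xs):
    return [f(x) for x in xs]

def flatten(xss):
    # monad multiplication: concatenate one level of nesting
    return [x for xs in xss for x in xs]

# Example "query": pair every department with each of its employees,
# i.e. flatten(map(d -> map(e -> (d.name, e), d.emps), depts)).
depts = [{"name": "cs", "emps": ["ada", "alan"]},
         {"name": "math", "emps": ["emmy"]}]

result = flatten(fmap(lambda d: fmap(lambda e: (d["name"], e),
                                     d["emps"]),
                      depts))
```

The nested map-then-flatten shape is exactly what a Core XQuery FLWOR expression over the "child" axis compiles to, which is the intuition behind the expressive equivalence mentioned above.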