Inkdots as advice for finite automata
We examine inkdots placed on the input string as a way of providing advice to
finite automata, and establish the relations between this model and the
previously studied models of advised finite automata. The existence of an
infinite hierarchy of classes of languages that can be recognized with the help
of increasing numbers of inkdots as advice is shown. The effects of different
forms of advice on the succinctness of the advised machines are examined. We
also study randomly placed inkdots as advice to probabilistic finite automata,
and demonstrate the superiority of this model over its deterministic version.
Even very slowly growing amounts of space can become a meaningful resource
when the underlying advised model is extended with access to secondary memory,
although it is well known that such small amounts of space are not useful for
unadvised one-way Turing machines.
Comment: 14 pages
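The abstract does not spell out a construction, but the advised model is easy to
simulate. Below is a minimal Python sketch (the function names and the example
language are illustrative, not taken from the paper): the advice is a set of
inkdot positions that depends only on the input length, and a single dot on the
middle cell lets a finite-state one-way scan recognize the non-regular language
of odd-length strings whose middle symbol is 'a'.

```python
# Minimal sketch of an inkdot-advised finite automaton; all names here are
# illustrative, not from the paper.

def inkdot_advice(n):
    """Advice for inputs of length n: one inkdot on the middle cell (odd n)."""
    return {n // 2} if n % 2 == 1 else set()

def advised_dfa_accepts(word):
    """One-way scan accepting odd-length words whose middle symbol is 'a'.

    Without advice this language is not regular; with one inkdot marking the
    middle position, a three-state automaton suffices.
    """
    dots = inkdot_advice(len(word))
    state = "scan"                          # scan -> accept / reject
    for i, symbol in enumerate(word):
        if state == "scan" and i in dots:   # transition sees the inkdot
            state = "accept" if symbol == "a" else "reject"
    return state == "accept"

assert advised_dfa_accepts("bbabb")
assert not advised_dfa_accepts("bbbbb")
assert not advised_dfa_accepts("abab")      # even length: no dot, reject
```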
Quantum Branching Programs and Space-Bounded Nonuniform Quantum Complexity
In this paper, the space complexity of nonuniform quantum computations is
investigated. The model chosen for this purpose is the quantum branching program,
which provides a graphic description of sequential quantum algorithms. In the first
part of the paper, simulations between quantum branching programs and
nonuniform quantum Turing machines are presented, which allow lower and upper
bound results to be transferred between the two models. In the second part of the
paper, different variants of quantum OBDDs are compared with their
deterministic and randomized counterparts. In the third part, quantum branching
programs are considered where the performed unitary operation may depend on the
result of a previous measurement. For this model a simulation of randomized
OBDDs and exponential lower bounds are presented.
Comment: 45 pages, 3 PostScript figures. Proofs rearranged, typos corrected
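As a concrete illustration of the kind of model compared here, the following
NumPy sketch (my example, in the spirit of known rotation-based constructions,
not code from the paper) implements a width-2 quantum OBDD: a single qubit is
rotated by pi/p for every 1 read and the final measurement accepts on outcome
|0>, so the input is accepted with probability 1 exactly when its number of 1s
is divisible by p. This is the standard MOD_p example where quantum OBDDs can
be exponentially more succinct than deterministic ones.

```python
import numpy as np

# Illustrative single-qubit quantum OBDD for MOD_p (example choice, not
# taken from the paper): rotate by pi/p per 1-bit, accept on outcome |0>.

def rotation(theta):
    """Real rotation acting on the single-qubit amplitude vector."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def qobdd_accept_probability(bits, p):
    state = np.array([1.0, 0.0])        # start in |0>
    step = rotation(np.pi / p)          # unitary applied when a 1 is read
    for b in bits:
        if b == 1:
            state = step @ state        # 0-bits apply the identity
    return state[0] ** 2                # probability of measuring |0>

# k ones give acceptance probability cos(pi*k/p)^2, which is 1 iff p | k.
print(qobdd_accept_probability([1, 1, 1, 0, 1], 4))  # 4 ones -> 1.0
print(qobdd_accept_probability([1, 1, 1], 4))        # 3 ones -> 0.5
```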
PageRank Optimization by Edge Selection
The importance of a node in a directed graph can be measured by its PageRank.
The PageRank of a node is used in a number of application contexts - including
ranking websites - and can be interpreted as the average portion of time spent
at the node by an infinite random walk. We consider the problem of maximizing
the PageRank of a node by selecting some of the edges from a set of edges that
are under our control. By applying results from Markov decision theory, we show
that an optimal solution to this problem can be found in polynomial time. Our
core solution is a linear programming formulation, but we also provide an
alternative greedy algorithm, a variant of policy iteration, which likewise
runs in polynomial time. Finally, we show that under the slight modification
in which we are given mutually exclusive pairs of edges, the problem of
PageRank optimization becomes NP-hard.
Comment: 30 pages, 3 figures
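To make the optimization problem concrete, here is a small Python sketch (a
naive greedy baseline of my own, not the paper's LP or policy-iteration
algorithms): PageRank is computed by power iteration, and controllable edges
are added one at a time, keeping whichever most increases the target node's
score.

```python
import numpy as np

# Illustrative sketch of the edge-selection problem (not the authors' exact
# polynomial-time algorithms): greedily toggle controllable edges and
# re-evaluate the target node's PageRank after each candidate change.

def pagerank(adj, damping=0.85, iters=200):
    n = len(adj)
    out = adj.sum(axis=1, keepdims=True)
    P = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)  # dangling -> uniform
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * r @ P               # power iteration
    return r

def greedy_edge_selection(adj, candidate_edges, target):
    """Keep adding the controllable edge that most increases r[target]."""
    adj = adj.copy()
    chosen = []
    while True:
        base = pagerank(adj)[target]
        best, best_gain = None, 0.0
        for (u, v) in candidate_edges:
            if adj[u, v] == 0:
                adj[u, v] = 1                     # tentatively add the edge
                gain = pagerank(adj)[target] - base
                adj[u, v] = 0
                if gain > best_gain:
                    best, best_gain = (u, v), gain
        if best is None:
            return chosen                         # no improving edge left
        adj[best] = 1
        chosen.append(best)
```

Unlike the paper's policy-iteration variant, this sketch recomputes PageRank
from scratch for every candidate edge, so it is polynomial but far from the
most efficient approach.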
Criticality and Universality in the Unit-Propagation Search Rule
The probability Psuccess(alpha, N) that stochastic greedy algorithms
successfully solve the random SATisfiability problem is studied as a function
of the ratio alpha of constraints per variable and the number N of variables.
These algorithms assign variables according to the unit-propagation (UP) rule
in the presence of constraints involving a unique variable (1-clauses), and
according to some heuristic (H) prescription otherwise. In the infinite N limit, Psuccess
vanishes at some critical ratio alpha_H, which depends on the heuristic H. We
show that the critical behaviour is determined by the UP rule only. In the case
where only constraints with 2 and 3 variables are present, we give the phase
diagram and identify two universality classes: the power law class, where
Psuccess[alpha_H (1+epsilon N^{-1/3}), N] ~ A(epsilon)/N^gamma; the stretched
exponential class, where Psuccess[alpha_H (1+epsilon N^{-1/3}), N] ~
exp[-N^{1/6} Phi(epsilon)]. Which class is selected depends on the
characteristic parameters of input data. The critical exponent gamma is
universal and calculated; the scaling functions A and Phi weakly depend on the
heuristic H and are obtained from the solutions of reaction-diffusion equations
for 1-clauses. Computation of some non-universal corrections allows us to match
numerical results with good precision. The critical behaviour for constraints
with >3 variables is given. Our results are interpreted in terms of dynamical
graph percolation and we argue that they should apply to more general
situations where UP is used.
Comment: 30 pages, 13 figures
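The class of algorithms analyzed is simple to state in code. Below is a minimal
Python sketch (my own paraphrase, with H taken to be the uniformly random
choice, i.e. the UC heuristic): whenever a 1-clause exists, its literal is
satisfied by UP; otherwise an unassigned variable gets a random value; a run
fails when a 0-clause appears. Running it many times on random formulas
estimates Psuccess(alpha, N).

```python
import random

# Sketch of a UP-driven greedy assignment procedure of the kind analyzed
# here, with heuristic H = uniform random choice (the UC heuristic).

def up_greedy(clauses, n_vars, rng=random):
    """Return a satisfying assignment dict, or None if a 0-clause arises."""
    assign = {}
    clauses = [set(c) for c in clauses]       # clauses as sets of literals
    while len(assign) < n_vars:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit:
            lit = next(iter(unit))             # unit-propagation (UP) step
        else:
            var = rng.choice([v for v in range(1, n_vars + 1)
                              if v not in assign])
            lit = var if rng.random() < 0.5 else -var   # heuristic H step
        assign[abs(lit)] = lit > 0
        new_clauses = []
        for c in clauses:
            if lit in c:
                continue                       # clause satisfied, drop it
            c = c - {-lit}
            if not c:
                return None                    # 0-clause: the run fails
            new_clauses.append(c)
        clauses = new_clauses
    return assign
```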
Bounded Model Checking for Probabilistic Programs
In this paper we investigate the applicability of standard model checking
approaches to verifying properties in probabilistic programming. As the
operational model for a standard probabilistic program is a potentially
infinite parametric Markov decision process, no direct adaptation of existing
techniques is possible. Therefore, we propose an on-the-fly approach where the
operational model is successively created and verified via a step-wise
execution of the program. This approach makes it possible to take key features
of many probabilistic programs into account: nondeterminism and conditioning.
We discuss the restrictions and demonstrate the scalability on several benchmarks.
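As a rough illustration of the on-the-fly idea (my toy sketch, not the paper's
technique or tool), the following Python code unfolds a purely probabilistic
program step by step up to a bounded depth and accumulates a lower bound on the
probability of reaching a target predicate; the paper's model additionally
handles nondeterminism and conditioning, which this sketch omits.

```python
# Illustrative bounded, on-the-fly unfolding of a probabilistic program:
# explore the depth-k tree and sum the probability mass reaching the target.

def bounded_reach_lower_bound(initial, step, target, depth):
    """step(state) -> list of (probability, successor) pairs.

    The result is exact for the depth-k unfolding and monotone in depth,
    so it is a valid lower bound on the unbounded reachability probability.
    """
    def explore(state, prob, k):
        if target(state):
            return prob
        if k == 0:
            return 0.0        # unexplored mass contributes nothing to the bound
        return sum(explore(s, prob * p, k - 1) for p, s in step(state))
    return explore(initial, 1.0, depth)

# Toy program: a counter that increments with probability 1/2, else resets.
step = lambda c: [(0.5, c + 1), (0.5, 0)]
print(bounded_reach_lower_bound(0, step, lambda c: c >= 3, 10))
```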
PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
In this paper, we propose a novel stochastic gradient estimator --
ProbAbilistic Gradient Estimator (PAGE) -- for nonconvex optimization. PAGE is
easy to implement as it is designed via a small adjustment to vanilla SGD: in
each iteration, PAGE uses the vanilla minibatch SGD update with probability p_t,
or reuses the previous gradient with a small adjustment, at a much lower
computational cost, with probability 1 - p_t. We give a simple formula for the
optimal choice of p_t. Moreover, we prove the first tight lower bound
Omega(n + n^{1/2}/epsilon^2) for nonconvex finite-sum problems, which also
leads to a tight lower bound Omega(b + b^{1/2}/epsilon^2) for nonconvex online
problems, where b := min{sigma^2/epsilon^2, n}. Then, we show that PAGE obtains
the optimal convergence results O(n + n^{1/2}/epsilon^2) (finite-sum) and
O(b + b^{1/2}/epsilon^2) (online), matching our lower bounds for both
nonconvex finite-sum and online problems. Besides, we also show that for
nonconvex functions satisfying the Polyak-Łojasiewicz (PL) condition, PAGE
can automatically switch to a faster linear convergence rate
O((n + n^{1/2} kappa) log(1/epsilon)). Finally, we conduct several deep learning experiments
(e.g., LeNet, VGG, ResNet) on real datasets in PyTorch showing that PAGE not
only converges much faster than SGD in training but also achieves higher
test accuracy, validating the optimal theoretical results and confirming the
practical superiority of PAGE.
Comment: 25 pages; accepted by ICML 2021 (long talk)
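The estimator described above translates directly into code. Here is a minimal
NumPy sketch of the PAGE update loop (names and defaults are mine, not the
authors' reference implementation; the paper gives a formula for the optimal
probability p, which is fixed to a constant here for simplicity).

```python
import numpy as np

# Minimal sketch of the PAGE gradient estimator, paraphrasing the update
# described in the abstract; not the authors' reference code.

def page_sgd(grad_batch, x0, n, lr=0.1, p=0.5, big_b=64, small_b=8,
             steps=1000, rng=np.random.default_rng(0)):
    """grad_batch(x, idx) -> average gradient of f_i over indices idx at x."""
    x = x0
    g = grad_batch(x, rng.choice(n, big_b, replace=False))  # initial estimate
    for _ in range(steps):
        x_new = x - lr * g
        if rng.random() < p:
            # vanilla minibatch SGD update, with probability p
            g = grad_batch(x_new, rng.choice(n, big_b, replace=False))
        else:
            # reuse the previous gradient with a cheap small-batch
            # correction, with probability 1 - p
            idx = rng.choice(n, small_b, replace=False)
            g = g + grad_batch(x_new, idx) - grad_batch(x, idx)
        x = x_new
    return x
```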