Incremental Recompilation of Knowledge
Approximating a general formula from above and below by Horn formulas (its
Horn envelope and Horn core, respectively) was proposed by Selman and Kautz
(1991, 1996) as a form of ``knowledge compilation,'' supporting rapid
approximate reasoning; on the negative side, this scheme is static in that it
supports no updates, and has certain complexity drawbacks pointed out by
Kavvadias, Papadimitriou and Sideri (1993). On the other hand, the many
frameworks and schemes proposed in the literature for theory update and
revision are plagued by serious complexity-theoretic impediments, even in the
Horn case, as was pointed out by Eiter and Gottlob (1992), and is further
demonstrated in the present paper. More fundamentally, these schemes are not
inductive, in that they may lose in a single update any positive properties of
the represented sets of formulas (small size, Horn structure, etc.). In this
paper we propose a new scheme, incremental recompilation, which combines Horn
approximation and model-based updates; this scheme is inductive and very
efficient, free of the problems facing its constituents. A set of formulas is
represented by an upper and lower Horn approximation. To update, we replace the
upper Horn formula by the Horn envelope of its minimum-change update, and
similarly the lower one by the Horn core of its update; the key fact which
enables this scheme is that Horn envelopes and cores are easy to compute when
the underlying formula is the result of a minimum-change update of a Horn
formula by a clause. We conjecture that efficient algorithms are possible for
more complex updates.
Comment: See http://www.jair.org/ for any accompanying file
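The envelope half of this scheme rests on a standard model-theoretic fact: the models of the Horn envelope of a formula are exactly the closure of its model set under componentwise AND. A minimal sketch (the function name and bitmask encoding are ours, not from the paper):

```python
def horn_envelope_models(models):
    """Close a set of models (bitmasks over the variables) under bitwise AND;
    the resulting set is exactly the model set of the Horn envelope."""
    closure = set(models)
    frontier = set(models)
    while frontier:
        new = set()
        for a in frontier:
            for b in closure:
                m = a & b
                if m not in closure:
                    new.add(m)
        closure |= new
        frontier = new
    return closure

# Over variables (x2, x1, x0): the models {110, 011} force 010 = 110 AND 011
# into the envelope.
print(sorted(bin(m) for m in horn_envelope_models({0b110, 0b011})))
```

Computing the closure can blow up exponentially in general; the point of the paper is that it stays cheap when the update is a single clause against a Horn formula.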
Classical simulation of commuting quantum computations implies collapse of the polynomial hierarchy
We consider quantum computations comprising only commuting gates, known as
IQP computations, and provide compelling evidence that the task of sampling
their output probability distributions is unlikely to be achievable by any
efficient classical means. More specifically we introduce the class post-IQP of
languages decided with bounded error by uniform families of IQP circuits with
post-selection, and prove first that post-IQP equals the classical class PP.
Using this result we show that if the output distributions of uniform IQP
circuit families could be classically efficiently sampled, even up to 41%
multiplicative error in the probabilities, then the infinite tower of classical
complexity classes known as the polynomial hierarchy, would collapse to its
third level. We mention some further results on the classical simulation
properties of IQP circuit families, in particular showing that if the output
distribution results from measurements on only O(log n) lines then it may in
fact be classically efficiently sampled.
Comment: 13 pages
First-order transition in small-world networks
The small-world transition is a first-order transition at zero density of
shortcuts, whereby the normalized shortest-path distance undergoes a
discontinuity in the thermodynamic limit. On finite systems the apparent
transition is shifted by $\Delta p \sim L^{-d}$. Equivalently a ``persistence
size'' $L^*$ can be defined in connection with finite-size effects. Assuming
$L^* \sim p^{-\tau}$, simple rescaling arguments imply that $\tau = 1/d$. We
confirm this result by extensive numerical simulation in one to four
dimensions, and argue that $\tau = 1/d$ implies that this transition is
first-order.
Comment: 4 pages, 3 figures. To appear in Europhysics Letters
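The measurement behind such simulations can be sketched with a breadth-first search on a one-dimensional ring with random shortcuts (a hypothetical minimal setup; the function and parameter names are ours):

```python
import random
from collections import deque

def mean_distance(L, p, seed=0):
    """Mean shortest-path distance from node 0 on a ring of L sites,
    after adding roughly p*L random shortcuts."""
    rng = random.Random(seed)
    adj = {i: {(i - 1) % L, (i + 1) % L} for i in range(L)}
    for _ in range(int(p * L)):
        a, b = rng.randrange(L), rng.randrange(L)
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    dist, queue = {0: 0}, deque([0])   # breadth-first search from node 0
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return sum(dist.values()) / L

# On a pure ring the mean distance is ~L/4; a small shortcut density
# drives it far below that, the signature of the small-world regime.
print(mean_distance(200, 0.0), mean_distance(200, 0.2))
```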
High-Performance Bioinstrumentation for Real-Time Neuroelectrochemical Traumatic Brain Injury Monitoring
Traumatic brain injury (TBI) has been identified as an important cause of death and severe disability in all age groups, and particularly in children and young adults. Central to TBI's devastation is a delayed secondary injury that occurs each year in 30–40% of TBI patients while they are in the hospital Intensive Care Unit (ICU). Secondary injuries reduce survival rate after TBI and usually occur within 7 days post-injury. State-of-the-art monitoring of secondary brain injuries benefits from the acquisition of high-quality and time-aligned electrical data, i.e., ElectroCorticoGraphy (ECoG) recorded by means of strip electrodes placed on the brain's surface, and neurochemical data obtained via rapid-sampling microdialysis and microfluidics-based biosensors measuring brain tissue levels of glucose, lactate and potassium. This article progresses the field of multi-modal monitoring of the injured human brain by presenting the design and realization of a new, compact, medical-grade amperometry, potentiometry and ECoG recording bioinstrumentation. Our combined TBI instrument enables the high-precision, real-time neuroelectrochemical monitoring of TBI patients who have undergone craniotomy neurosurgery and are treated sedated in the ICU. Electrical and neurochemical test measurements are presented, confirming the high performance of the reported TBI bioinstrumentation.
Tetris is Hard, Even to Approximate
In the popular computer game of Tetris, the player is given a sequence of
tetromino pieces and must pack them into a rectangular gameboard initially
occupied by a given configuration of filled squares; any completely filled row
of the gameboard is cleared and all pieces above it drop by one row. We prove
that in the offline version of Tetris, it is NP-complete to maximize the number
of cleared rows, maximize the number of tetrises (quadruples of rows
simultaneously filled and cleared), minimize the maximum height of an occupied
square, or maximize the number of pieces placed before the game ends. We
furthermore show the extreme inapproximability of the first and last of these
objectives to within a factor of p^(1-epsilon), when given a sequence of p
pieces, and the inapproximability of the third objective to within a factor of
(2 - epsilon), for any epsilon>0. Our results hold under several variations on
the rules of Tetris, including different models of rotation, limitations on
player agility, and restricted piece sets.
Comment: 56 pages, 11 figures
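The row-clearing rule stated above can be sketched directly (a hypothetical encoding of the gameboard, not from the paper):

```python
def clear_rows(board):
    """Apply the clearing rule: every completely filled row is removed and
    all rows above it drop; the board is a list of rows, top row first."""
    width = len(board[0])
    kept = [row for row in board if not all(row)]
    cleared = len(board) - len(kept)
    # pad with empty rows on top so the gameboard keeps its height
    return [[0] * width for _ in range(cleared)] + kept, cleared

after, n = clear_rows([
    [0, 1, 0],
    [1, 1, 1],   # completely filled: cleared
    [1, 0, 1],
])
# n == 1; the partially filled top row drops by one row, as in the rules above
```

The hardness results concern choosing placements to optimize objectives over this simple dynamics, not the dynamics itself.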
Matching Kasteleyn Cities for Spin Glass Ground States
As spin glass materials have extremely slow dynamics, devious numerical
methods are needed to study low-temperature states. A simple and fast
optimization version of the classical Kasteleyn treatment of the Ising model is
described and applied to two-dimensional Ising spin glasses. The algorithm
combines the Pfaffian and matching approaches to directly strip droplet
excitations from an excited state. Extended ground states in Ising spin glasses
on a torus, which are optimized over all boundary conditions, are used to
compute precise values for ground state energy densities.
Comment: 4 pages, 2 figures; minor clarification
The computational difficulty of finding MPS ground states
We determine the computational difficulty of finding ground states of
one-dimensional (1D) Hamiltonians which are known to be Matrix Product States
(MPS). To this end, we construct a class of 1D frustration free Hamiltonians
with unique MPS ground states and a polynomial gap above, for which finding the
ground state is at least as hard as factoring. By lifting the requirement of a
unique ground state, we obtain a class for which finding the ground state
solves an NP-complete problem. Therefore, for these Hamiltonians it is not even
possible to certify that the ground state has been found. Our results thus
imply that in order to prove convergence of variational methods over MPS, such
as the Density Matrix Renormalization Group, one has to impose more
requirements than just MPS ground states and a polynomial spectral gap.
Comment: 5 pages. v2: accepted version, Journal-Ref added
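For readers unfamiliar with the MPS form, a bond-dimension-2 example can be evaluated in a few lines (a standard textbook construction of an unnormalized GHZ state; the encoding is ours, not from the paper):

```python
def matmul(A, B):
    """Plain nested-list matrix product."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mps_amplitude(tensors, bits):
    """Amplitude <bits|psi> of a translation-invariant MPS with periodic
    boundary conditions: the trace of the product of per-site matrices."""
    M = [[1, 0], [0, 1]]
    for b in bits:
        M = matmul(M, tensors[b])
    return M[0][0] + M[1][1]   # trace

# Bond-dimension-2 MPS of an unnormalized GHZ state:
A = {0: [[1, 0], [0, 0]], 1: [[0, 0], [0, 1]]}
# amplitude 1 for the all-zeros and all-ones strings, 0 for any mixed string
print(mps_amplitude(A, [0, 0, 0]), mps_amplitude(A, [0, 1, 0]))
```

The paper's point is that even when a ground state is promised to have this compact form, finding it can be as hard as factoring or NP-complete.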
Positivity of energy for asymptotically locally AdS spacetimes
We derive necessary conditions for the spinorial Witten-Nester energy to be
well-defined for asymptotically locally AdS spacetimes. We find that the
conformal boundary should admit a spinor satisfying certain differential
conditions and in odd dimensions the boundary metric should be conformally
Einstein. We show that these conditions are satisfied by asymptotically AdS
spacetimes. The gravitational energy (obtained using the holographic stress
energy tensor) and the spinorial energy are equal in even dimensions and differ
by a bounded quantity related to the conformal anomaly in odd dimensions.
Comment: 36 pages, 1 figure; minor corrections, JHEP version
Exactly solvable models of adaptive networks
A satisfiability (SAT-UNSAT) transition takes place for many optimization
problems when the number of constraints, graphically represented by links
between variable nodes, is brought above some threshold. If the network of
constraints is allowed to adapt by redistributing its links, the SAT-UNSAT
transition may be delayed and preceded by an intermediate phase where the
structure self-organizes to satisfy the constraints. We present an analytic
approach, based on the recently introduced cavity method for large deviations,
which exactly describes the two phase transitions delimiting this adaptive
intermediate phase. We give explicit results for random bond models subject to
the connectivity or rigidity percolation transitions, and compare them with
numerical simulations.
Comment: 4 pages, 4 figures
Phase transition for cutting-plane approach to vertex-cover problem
We study the vertex-cover problem which is an NP-hard optimization problem
and a prototypical model exhibiting phase transitions on random graphs, e.g.,
Erdős–Rényi (ER) random graphs. These phase transitions coincide with changes
of the solution space structure, e.g., for the ER ensemble at connectivity
c = e ≈ 2.7183 from replica symmetric to replica-symmetry broken. For the
vertex-cover problem, the typical complexity of exact branch-and-bound
algorithms, which proceed by exploring the landscape of feasible
configurations, also changes close to this phase transition from "easy" to "hard". In
this work, we consider an algorithm which has a completely different strategy:
The problem is mapped onto a linear programming problem augmented by a
cutting-plane approach, hence the algorithm operates in a space OUTSIDE the
space of feasible configurations until the final step, where a solution is
found. Here we show that this type of algorithm also exhibits an "easy-hard"
transition around c=e, which strongly indicates that the typical hardness of a
problem is fundamental to the problem and not due to a specific representation
of the problem.
Comment: 4 pages, 3 figures
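The feasible-configuration search that branch-and-bound refines can be sketched as an exhaustive branching on edges (a minimal illustration without the bounding step; names are ours, not from the paper):

```python
def min_vertex_cover(edges):
    """Exhaustive branching on edges: any cover of edge (u, v) must contain
    u or v, so recurse on both choices and keep the smaller cover."""
    if not edges:
        return set()
    u, v = edges[0]
    rest_u = [(a, b) for (a, b) in edges if u not in (a, b)]
    rest_v = [(a, b) for (a, b) in edges if v not in (a, b)]
    cover_u = {u} | min_vertex_cover(rest_u)
    cover_v = {v} | min_vertex_cover(rest_v)
    return cover_u if len(cover_u) <= len(cover_v) else cover_v

# Triangle 0-1-2 plus pendant edge 2-3: a minimum cover has size 2, e.g. {0, 2}
print(min_vertex_cover([(0, 1), (1, 2), (0, 2), (2, 3)]))
```

The cutting-plane algorithm studied in the paper avoids this feasible-space exploration entirely, yet exhibits the same easy-hard transition near c = e.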