The complexity of weighted Boolean #CSP
This paper gives a dichotomy theorem for the complexity of computing the partition
function of an instance of a weighted Boolean constraint satisfaction problem. The problem
is parameterized by a finite set F of nonnegative functions that may be used to assign weights to
the configurations (feasible solutions) of a problem instance. Classical constraint satisfaction problems
correspond to the special case of 0,1-valued functions. We show that computing the partition
function, i.e., the sum of the weights of all configurations, is FP^{#P}-complete unless either (1) every
function in F is of “product type,” or (2) every function in F is “pure affine.” In the remaining cases,
computing the partition function is in P.
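The object whose complexity is being classified has a direct brute-force definition: sum, over all assignments, the product of the constraint-function values. A minimal Python sketch (the functions `soft_or` and `bias` below are hypothetical examples of nonnegative functions in F, not taken from the paper):

```python
from itertools import product

def partition_function(n_vars, constraints):
    """Sum the weights of all 2^n assignments, where the weight of an
    assignment is the product of the constraint-function values on its
    scoped variables. constraints: list of (func, var_indices) pairs."""
    total = 0.0
    for assignment in product([0, 1], repeat=n_vars):
        weight = 1.0
        for func, idx in constraints:
            weight *= func(tuple(assignment[i] for i in idx))
        total += weight
    return total

# Hypothetical nonnegative functions in F: a soft OR and a unary bias.
soft_or = lambda t: 2.0 if (t[0] or t[1]) else 0.5
bias = lambda t: 3.0 if t[0] else 1.0
Z = partition_function(2, [(soft_or, (0, 1)), (bias, (1,))])
assert abs(Z - 14.5) < 1e-9
```

With 0,1-valued functions the same sum reduces to counting the satisfying assignments of a classical CSP, matching the special case noted above.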
Exclusion statistics: A resolution of the problem of negative weights
We give a formulation of the single particle occupation probabilities for a
system of identical particles obeying fractional exclusion statistics of
Haldane. We first derive a set of constraints using an exactly solvable model
which describes an ideal exclusion statistics system and deduce the general
counting rules for occupancy of states obeyed by these particles. We show that
the problem of negative probabilities may be avoided with these new counting
rules. Comment: REVTEX 3.0, 14 pages
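Haldane's counting for ideal exclusion statistics interpolates between the bosonic and fermionic state counts, which is the structure the new counting rules build on. A minimal sketch for integer-valued effective dimensions (fractional g generally requires Gamma-function interpolation, omitted here; the numbers below are illustrative):

```python
from math import comb

def haldane_W(G, N, g):
    """Number of N-particle states in G single-particle states under
    Haldane exclusion statistics parameter g (g=0 bosons, g=1 fermions):
    W = C(d + N - 1, N) with effective dimension d = G - g*(N - 1)."""
    d = G - g * (N - 1)
    if d < 1:
        return 0
    return comb(d + N - 1, N)

# Limiting checks: the bosonic and fermionic counts are recovered.
assert haldane_W(G=5, N=3, g=0) == comb(7, 3)   # bosons: C(G+N-1, N) = 35
assert haldane_W(G=5, N=3, g=1) == comb(5, 3)   # fermions: C(G, N) = 10
```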
Black hole state counting in loop quantum gravity
The two ways of counting microscopic states of black holes in the U(1)
formulation of loop quantum gravity, one counting all allowed spin network
labels j,m and the other only m labels, are discussed in some detail. The
constraints on m are clarified and the map between the flux quantum numbers and
m discussed. Configurations with |m|=j, which are sometimes sought after, are
shown to be important only when large areas are involved. The discussion is
extended to the SU(2) formulation. Comment: 5 pages
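The m-label counting can be illustrated by brute force in a toy setting: fix every puncture to spin j = 1/2 and impose only the spin-projection constraint sum(m_i) = 0, ignoring the area constraint and the full (j, m) counting (this toy setup is our illustration, not the paper's calculation):

```python
from itertools import product
from math import comb

def count_m_configs(n):
    """Count m-label configurations for n spin-1/2 punctures with
    m_i in {-1/2, +1/2} and the projection constraint sum(m_i) = 0
    (toy version of the m-counting; no area constraint imposed)."""
    # work with 2*m_i = +/-1 to stay in integers
    return sum(1 for ms in product([-1, 1], repeat=n) if sum(ms) == 0)

# For even n this is the central binomial coefficient C(n, n/2);
# odd n admits no configuration with vanishing total projection.
assert count_m_configs(6) == comb(6, 3)   # 20
assert count_m_configs(3) == 0
```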
Hardness of decoding quantum stabilizer codes
In this article we address the computational hardness of optimally decoding a
quantum stabilizer code. Much like classical linear codes, errors are detected
by measuring certain check operators which yield an error syndrome, and the
decoding problem consists of determining the most likely recovery given the
syndrome. The corresponding classical problem is known to be NP-complete, and a
similar decoding problem for quantum codes is also known to be NP-complete.
However, this decoding strategy is not optimal in the quantum setting as it
does not take into account error degeneracy, which causes distinct errors to
have the same effect on the code. Here, we show that optimal decoding of
stabilizer codes is computationally much harder than optimal decoding of
classical linear codes: it is #P-complete.
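The classical problem referenced here, finding a most likely error consistent with a syndrome, can be written down directly by exhaustive search. The [3,1] repetition code below is a hypothetical illustration; a degenerate quantum decoder would additionally sum probabilities over stabilizer-equivalent errors, which is where the extra hardness enters:

```python
from itertools import product
import numpy as np

def most_likely_error(H, syndrome):
    """Brute-force maximum-likelihood decoding of a classical linear
    code: among all binary error vectors e with H e = s (mod 2),
    return one of minimum Hamming weight (most likely on a binary
    symmetric channel with flip probability < 1/2). Exponential in
    the block length n: this is the NP-hard decoding problem."""
    n = H.shape[1]
    best = None
    for bits in product([0, 1], repeat=n):
        e = np.array(bits)
        if np.array_equal(H @ e % 2, syndrome):
            if best is None or e.sum() < best.sum():
                best = e
    return best

# Hypothetical example: [3,1] repetition code, checks x0+x1 and x1+x2.
H = np.array([[1, 1, 0], [0, 1, 1]])
s = np.array([1, 0])          # syndrome produced by flipping bit 0
e = most_likely_error(H, s)
assert list(e) == [1, 0, 0]
```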
Multicellular rosettes drive fluid-solid transition in epithelial tissues
Models for confluent biological tissues often describe the network formed by
cells as a triple-junction network, similar to foams. However, higher order
vertices or multicellular rosettes are prevalent in developmental and in
vitro processes and have been recognized as crucial in many important aspects
of morphogenesis, disease, and physiology. In this work, we study the influence
of rosettes on the mechanics of a confluent tissue. We find that the existence
of rosettes in a tissue can greatly influence its rigidity. Using a generalized
vertex model and effective medium theory we find a fluid-to-solid transition
driven by rosette density and intracellular tensions. This transition exhibits
several hallmarks of a second-order phase transition such as a growing
correlation length and a universal critical scaling in the vicinity of a critical
point. Further, we elucidate the nature of rigidity transitions in dense
biological tissues and other cellular structures using a generalized Maxwell
constraint counting approach. This answers a long-standing puzzle of the origin
of solidity in these systems. Comment: 11 pages, 5 figures + 8 pages, 7 figures
in Appendix. To appear in PR
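Maxwell constraint counting, the tool named above, compares degrees of freedom with independent constraints; rigidity sets in when the floppy-mode count reaches zero. A schematic sketch (the numbers and the mapping of tensions and rosettes to constraint counts are illustrative, not the paper's exact bookkeeping):

```python
def maxwell_floppy_modes(n_vertices, n_constraints, dim=2):
    """Generalized Maxwell count: F = dim * N - N_c, the number of
    floppy (zero-energy) modes, ignoring states of self-stress.
    F > 0: under-constrained (fluid-like); F <= 0: rigid (solid)."""
    return dim * n_vertices - n_constraints

# Schematic 2D network of 16 vertices: each independent constraint
# (e.g. an edge tension or length condition) removes one degree of
# freedom; merging vertices into rosettes lowers the vertex count at
# fixed constraint number, pushing the network toward rigidity.
assert maxwell_floppy_modes(16, 20) == 12   # floppy: fluid-like
assert maxwell_floppy_modes(16, 32) == 0    # marginal rigidity
assert maxwell_floppy_modes(12, 32) == -8   # over-constrained: rigid
```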
Consistent Searches for SMEFT Effects in Non-Resonant Dijet Events
We investigate the bounds which can be placed on generic new-physics
contributions to dijet production at the LHC using the framework of the
Standard Model Effective Field Theory, deriving the first consistently-treated
EFT bounds from non-resonant high-energy data. We recast an analysis searching
for quark compositeness, equivalent to treating the SM with one
higher-dimensional operator as a complete UV model. In order to reach
consistent, model-independent EFT conclusions, it is necessary to truncate the
EFT effects at a consistent order in the EFT expansion and to include the possibility
of multiple operators simultaneously contributing to the observables, neither
of which has been done in previous searches of this nature. Furthermore, it is
important to give consistent error estimates for the theoretical predictions of
the signal model, particularly in the region of phase space where the probed
energy is approaching the cutoff scale of the EFT. There are two linear
combinations of operators which contribute to dijet production in the SMEFT
with distinct angular behavior; we identify those linear combinations and
determine the ability of LHC searches to constrain them simultaneously.
Consistently treating the EFT generically leads to weakened bounds on
new-physics parameters. These constraints will be a useful input to future
global analyses in the SMEFT framework, and the techniques used here to
consistently search for EFT effects are directly applicable to other
off-resonance signals. Comment: v1: 23 pages, 9 figures, 3 tables; v2: references
added, typos corrected, matches version published in JHEP
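The truncation issue can be made concrete with a toy observable: a single dimension-six Wilson coefficient c contributes an interference term linear in c, while the c**2 term is formally the same order in the power counting as neglected dimension-eight effects. A sketch with entirely hypothetical numbers:

```python
def toy_cross_section(c, sigma_sm, sigma_int, sigma_quad, consistent=True):
    """Toy EFT prediction as a function of a Wilson coefficient c.
    The quadratic c**2 piece is formally the same order in the EFT
    power counting as neglected dimension-eight contributions, so a
    consistent truncation keeps only the interference term."""
    sigma = sigma_sm + c * sigma_int
    if not consistent:
        sigma += c**2 * sigma_quad  # inconsistent partial higher order
    return sigma

# Hypothetical numbers: when the quadratic piece shifts the prediction
# substantially, the EFT truncation error is large in that region of
# phase space (e.g. energies approaching the cutoff).
assert toy_cross_section(0.5, 10.0, 2.0, 8.0) == 11.0
assert toy_cross_section(0.5, 10.0, 2.0, 8.0, consistent=False) == 13.0
```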
Bit-Vector Model Counting using Statistical Estimation
Approximate model counting for bit-vector SMT formulas (generalizing #SAT)
has many applications such as probabilistic inference and quantitative
information-flow security, but it is computationally difficult. Adding random
parity constraints (XOR streamlining) and then checking satisfiability is an
effective approximation technique, but it requires a prior hypothesis about the
model count to produce useful results. We propose an approach inspired by
statistical estimation to continually refine a probabilistic estimate of the
model count for a formula, so that each XOR-streamlined query yields as much
information as possible. We implement this approach, with an approximate
probability model, as a wrapper around an off-the-shelf SMT solver or SAT
solver. Experimental results show that the implementation is faster than the
most similar previous approaches which used simpler refinement strategies. The
technique also lets us model count formulas over floating-point constraints,
which we demonstrate with an application to a vulnerability in differential
privacy mechanisms.
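The XOR-streamlining idea can be sketched without an SMT solver by enumerating models directly: a random parity constraint keeps any fixed model with probability 1/2, so surviving k independent constraints suggests a count near 2^k. The toy Boolean formula below is our construction, not from the paper:

```python
import random
from itertools import product

def count_survivors(models, k, n_vars, rng):
    """Apply k random parity (XOR) constraints to a model set: each
    constraint picks a random variable subset and a target parity and
    keeps a model iff its bits on that subset XOR to the target. Each
    constraint keeps any fixed model with probability 1/2."""
    survivors = list(models)
    for _ in range(k):
        subset = [i for i in range(n_vars) if rng.random() < 0.5]
        parity = rng.randrange(2)
        survivors = [m for m in survivors
                     if sum(m[i] for i in subset) % 2 == parity]
    return survivors

# Toy "formula" over 8 Boolean variables: models satisfy x0 == x1.
n = 8
models = [m for m in product([0, 1], repeat=n) if m[0] == m[1]]  # 128
rng = random.Random(0)
# Averaged over repetitions, about |models| / 2^k models survive k
# XORs, so scaling the survivor count back up by 2^k estimates the
# model count; it should land near 2**7 = 128 in expectation.
k = 7
trials = [len(count_survivors(models, k, n, rng)) for _ in range(200)]
estimate = 2**k * sum(trials) / len(trials)
```

In the paper's setting the enumeration is replaced by satisfiability queries to an SMT or SAT solver, and the estimate is refined adaptively rather than averaged at a fixed k.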
Infrared singularities in Landau gauge Yang-Mills theory
We present a more detailed picture of the infrared regime of Landau gauge
Yang-Mills theory. This is done within a novel framework that allows one to
take into account the influence of finite scales within an infrared power
counting analysis. We find that there are two qualitatively different infrared
fixed points of the full system of Dyson-Schwinger equations. The first extends
the known scaling solution, where the ghost dynamics is dominant and gluon
propagation is strongly suppressed. In addition to the strong divergences of
gluonic vertex functions in the previously considered uniform scaling limit,
where all external momenta tend to zero, it also features weaker kinematic
divergences, which arise when only some of the external momenta vanish. The second solution
represents the recently proposed decoupling scenario where the gluons become
massive and the ghosts remain bare. In this case we find that none of the
vertex functions is enhanced, so that the infrared dynamics is entirely
suppressed. Our analysis also provides a strict argument why the Landau gauge
gluon dressing function cannot be infrared divergent. Comment: 29 pages, 25 figures; published version