Many-site coherence revivals in the extended Bose-Hubbard model and the Gutzwiller approximation
We investigate the collapse and revival of first-order coherence in deep
optical lattices when long-range interactions are turned on, and find that the
first few revival peaks are strongly attenuated even for moderate values of
the nearest-neighbor interaction coupling. It is shown that the conventionally
employed Gutzwiller wavefunction, with only onsite-number dependence of the
variational amplitudes, leads to incorrect predictions for the collapse and
revival oscillations. We provide a modified variant of the Gutzwiller ansatz,
reproducing the analytically calculated time dependence of first-order
coherence in the limit of zero tunneling.
Comment: 8+ε pages of RevTeX4-1, 4 figures; with an appendix added; has been published in Physical Review
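The zero-tunneling limit mentioned in the abstract has a well-known closed form for a single site prepared in a coherent state |α⟩ and evolving under the on-site interaction alone. The sketch below is this standard single-site result only; it does not include the nearest-neighbor coupling whose attenuation effect the paper studies, and the function name `coherence` and the parameter values are ours:

```python
import numpy as np

# Zero-tunneling limit for one site in a coherent state |alpha> evolving under
# H = (U/2) n (n - 1):  <b>(t) = alpha * exp(|alpha|^2 * (exp(-i U t) - 1)).
# Standard on-site result; the nearest-neighbor coupling the abstract shows to
# attenuate the revivals is NOT included here.
def coherence(t, alpha=2.0, U=1.0):
    return alpha * np.exp(abs(alpha) ** 2 * (np.exp(-1j * U * t) - 1.0))

t = np.linspace(0.0, 4 * np.pi, 2001)   # two revival periods, T_rev = 2*pi/U
c = np.abs(coherence(t))
# |<b>| starts at |alpha|, collapses near t = pi/U, and revives at t = 2*pi/U.
```

The full revival at multiples of 2π/U is what long-range interactions disturb: they dephase the revival condition, which is why the paper's first few peaks are attenuated.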
Q-CSMA: Queue-Length Based CSMA/CA Algorithms for Achieving Maximum Throughput and Low Delay in Wireless Networks
Recently, it has been shown that CSMA-type random access algorithms can
achieve the maximum possible throughput in ad hoc wireless networks. However,
these algorithms assume an idealized continuous-time CSMA protocol where
collisions can never occur. In addition, simulation results indicate that the
delay performance of these algorithms can be quite bad. On the other hand,
although some simple heuristics (such as distributed approximations of greedy
maximal scheduling) can yield much better delay performance for a large set of
arrival rates, they may only achieve a fraction of the capacity region in
general. In this paper, we propose a discrete-time version of the CSMA
algorithm. Central to our results is a discrete-time distributed randomized
algorithm which is based on a generalization of the so-called Glauber dynamics
from statistical physics, where multiple links are allowed to update their
states in a single time slot. The algorithm generates collision-free
transmission schedules while explicitly taking collisions into account during
the control phase of the protocol, thus relaxing the perfect CSMA assumption.
More importantly, the algorithm allows us to incorporate mechanisms which lead
to very good delay performance while retaining the throughput-optimality
property. It also resolves the hidden and exposed terminal problems associated
with wireless networks.
Comment: 12 pages
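The single-slot update the abstract describes can be sketched as queue-weighted Glauber dynamics on a link conflict graph. This is a hedged illustration, not the paper's exact Q-CSMA protocol: the function name `glauber_slot`, the dictionary-based conflict graph, and the logistic activation probability e^w / (1 + e^w) are our choices.

```python
import math
import random

# One time slot of queue-weighted Glauber dynamics on a link conflict graph
# (an illustration, not the paper's exact protocol). conflicts[i] lists links
# colliding with link i; weights[i] is a queue-length-based weight w_i;
# decision_set holds the links selected in the control phase, assumed to form
# an independent set of the conflict graph so they can update without colliding.
def glauber_slot(state, conflicts, weights, decision_set, rng=random):
    new_state = list(state)
    for i in decision_set:
        if any(state[j] for j in conflicts[i]):
            new_state[i] = 0  # a conflicting link was active: stay silent
        else:
            # activate with logistic probability e^{w_i} / (1 + e^{w_i})
            p = math.exp(weights[i]) / (1.0 + math.exp(weights[i]))
            new_state[i] = 1 if rng.random() < p else 0
    return new_state

# Three links in a line: link 1 conflicts with links 0 and 2.
conflicts = {0: [1], 1: [0, 2], 2: [1]}
schedule = glauber_slot([1, 0, 0], conflicts, [2.0, 2.0, 2.0], decision_set=[0, 2])
```

If the initial schedule is collision-free and the decision set is independent, the resulting schedule is again collision-free regardless of the random draws, mirroring the collision-free transmission schedules claimed in the abstract.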
Functionals in stochastic thermodynamics: how to interpret stochastic integrals
In stochastic thermodynamics, standard concepts from macroscopic thermodynamics, such as heat, work, and entropy production, are generalized to small fluctuating systems by defining them on a trajectory-wise level. In Langevin systems with a continuous state space, such definitions involve stochastic integrals along system trajectories, whose specific values depend on the discretization rule used to evaluate them (i.e. the 'interpretation' of the noise terms in the integral). Via a systematic mathematical investigation of this apparent dilemma, we corroborate the widely used standard interpretation of heat- and work-like functionals as Stratonovich integrals. We furthermore recapitulate the anomalies that are known to occur for entropy production in the presence of temperature gradients.
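The discretization dependence the abstract refers to can be seen directly on the textbook integral ∫₀ᵀ W dW: the left-point (Itô) and midpoint (Stratonovich) sums converge to different values. The following is a small standard-material simulation, not code from the paper:

```python
import numpy as np

# The stochastic integral \int_0^T W dW depends on the discretization rule:
#   (W_T**2 - T) / 2  with the left-point (Ito) rule,
#    W_T**2 / 2       with the midpoint (Stratonovich) rule.
rng = np.random.default_rng(0)
T, n = 1.0, 200_000
dW = rng.normal(0.0, np.sqrt(T / n), n)          # Brownian increments
W = np.concatenate(([0.0], np.cumsum(dW)))       # Brownian path, W_0 = 0

ito = np.sum(W[:-1] * dW)                        # integrand at left endpoint
strat = np.sum(0.5 * (W[:-1] + W[1:]) * dW)      # integrand at midpoint

# The two rules differ by (1/2) * sum(dW**2), which tends to T/2, not to zero.
print(strat - ito)   # close to T/2 = 0.5
```

The midpoint sum telescopes exactly to W_T²/2, while the left-point sum picks up the non-vanishing quadratic variation; this finite gap is exactly why heat- and work-like functionals must specify an interpretation.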
A note on the minimum skew rank of a graph
The minimum skew rank mr^-(F, G) of a graph G over a field F
is the smallest possible rank among all skew-symmetric matrices
over F whose (i, j)-entry (for i ≠ j) is nonzero whenever
ij is an edge in G and is zero otherwise. We give some new properties of
the minimum skew rank of a graph, including a characterization of the graphs G
with cut vertices over the infinite field F such that
mr^-(F, G) = 4, a determination of the minimum skew rank of k-paths
over a field F, and an extension of an existing result to show that
mr^-(F, G) = MR^-(F, G) = 2ν(G) for a connected graph G
with no even cycles and a field F, where ν(G) is the matching
number of G, and MR^-(F, G) is the largest possible rank among
all skew-symmetric matrices over F described by G.
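As a concrete illustration of the matching-number result quoted in the abstract (our example, not from the paper): the path on four vertices is a tree, hence has no even cycles, and its matching number is 2, so the minimum skew rank over the reals is 2 · 2 = 4, attained already by the ±1 skew-symmetric matrix of the path.

```python
import numpy as np

# Skew-symmetric matrix described by the path 1-2-3-4 with all nonzero
# entries equal to +/-1. The path is a tree (no even cycles) with matching
# number 2, so the quoted result gives minimum skew rank 2 * 2 = 4.
A = np.array([
    [ 0,  1,  0,  0],
    [-1,  0,  1,  0],
    [ 0, -1,  0,  1],
    [ 0,  0, -1,  0],
], dtype=float)

assert (A == -A.T).all()          # skew-symmetric
print(np.linalg.matrix_rank(A))   # -> 4, i.e. twice the matching number
```

Since the rank of a skew-symmetric matrix is always even and bounded above by twice the matching number, this single matrix certifies both the minimum and the maximum rank for this graph.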
Minority Challenge of Majority Actions in a Close Corporation in Italy and the United States
This paper addresses the problem of segmenting a time series with respect to changes in the mean value or in the variance. The first case is when the data are modeled as a sequence of independent, normally distributed random variables with unknown, possibly changing, mean value but fixed variance. The main assumption is that the mean value is piecewise constant in time, and the task is to estimate the change times and the mean values within the segments. The second case is when the mean value is constant but the variance can change. The assumption is that the variance is piecewise constant in time, and we want to estimate the change times and the variance values within the segments. To solve these problems, we study an l_1 regularized maximum likelihood method, related to the fused lasso method and l_1 trend filtering, where the parameters to be estimated are free to vary at each sample. To penalize variations in the estimated parameters, the l_1-norm of the time difference of the parameters is used as a regularization term. This idea is closely related to total variation denoising. The main contribution is that a convex formulation of this variance estimation problem, where the parametrization is based on the inverse of the variance, can be formulated as a certain mean estimation problem. This implies that results and methods for mean estimation can be applied to the challenging problem of variance segmentation/estimation.
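The mean-segmentation case can be sketched as generic total variation denoising, min_x ½‖x − y‖² + λ‖Dx‖₁ with D the first-difference operator, solved here by a small ADMM loop. This is a textbook-style sketch under our own choices (`lam`, `rho`, iteration count, dense linear algebra), not the paper's method, and it does not cover the variance case:

```python
import numpy as np

# l_1-penalized mean segmentation (fused-lasso / TV-denoising style):
#   minimize over x:  0.5 * ||x - y||^2 + lam * ||D x||_1,
# with D the (n-1) x n first-difference matrix, solved by plain ADMM.
def tv_denoise(y, lam, rho=1.0, iters=500):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # first-difference operator
    z = np.zeros(n - 1)                     # auxiliary variable for D x
    u = np.zeros(n - 1)                     # (unscaled) dual variable
    L = np.eye(n) + rho * D.T @ D
    for _ in range(iters):
        x = np.linalg.solve(L, y + D.T @ (rho * z - u))   # x-update
        Dx = D @ x
        v = Dx + u / rho
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
        u = u + rho * (Dx - z)              # dual ascent
    return x

rng = np.random.default_rng(1)
mean = np.repeat([0.0, 3.0], 50)            # piecewise-constant true mean
y = mean + 0.3 * rng.normal(size=100)       # noisy observations
x = tv_denoise(y, lam=2.0)                  # near-piecewise-constant estimate
```

The l_1 penalty on the differences drives most entries of Dx to zero, so the estimate is piecewise constant and the surviving jumps are the estimated change times, as described in the abstract.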