Online codes for analog signals
This paper revisits a classical scenario in communication theory: a waveform
sampled at regular intervals is to be encoded so as to minimize distortion in
its reconstruction, despite noise. This transformation must be online (causal),
to enable real-time signaling; and should use no more power than the original
signal. The noise model we consider is an "atomic norm" convex relaxation of
the standard (discrete alphabet) Hamming-weight-bounded model: namely,
adversarial ell_1-bounded. In the "block coding" (noncausal) setting, such
encoding is possible due to the existence of large almost-Euclidean sections in
ell_1 spaces, a notion first studied in the work of Dvoretzky in 1961. Our
main result is that an analogous guarantee is achievable even causally.
Equivalently, our work may be seen as a "lower triangular" version of
Dvoretzky's theorem. In terms of communication, the guarantees are expressed in
terms of certain time-weighted norms: a time-weighted norm imposed
on the decoder forces increasingly accurate reconstruction of the distant-past
signal, while a time-weighted norm on the noise ensures vanishing
interference from distant-past noise. Encoding is linear (hence easy to
implement in analog hardware). Decoding is performed by an LP analogous to
those used in compressed sensing.
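As a point of reference only, here is a minimal sketch of the generic basis-pursuit LP (min ||x||_1 subject to Ax = b) on which such compressed-sensing decoders are modeled; the dimensions, the Gaussian matrix A, and the sparsity level are illustrative choices, not the paper's causal construction.

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit: min ||x||_1  s.t.  A x = b, via the standard LP split
# x = u - v with u, v >= 0.  This is the generic compressed-sensing LP;
# the paper's causal decoder is only analogous to it, not identical.
rng = np.random.default_rng(0)
n, k, s = 30, 15, 2                      # ambient dim, measurements, sparsity
A = rng.standard_normal((k, n))
x0 = np.zeros(n)
x0[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
b = A @ x0

c = np.ones(2 * n)                       # minimize sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                # encodes A(u - v) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(x_hat - x0)))        # small error => sparse recovery
```

With a Gaussian matrix and this level of sparsity, the LP typically recovers the sparse vector exactly up to solver tolerance.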
The Ising Partition Function: Zeros and Deterministic Approximation
We study the problem of approximating the partition function of the
ferromagnetic Ising model in graphs and hypergraphs. Our first result is a
deterministic approximation scheme (an FPTAS) for the partition function in
bounded degree graphs that is valid over the entire range of parameters beta
(the interaction) and lambda (the external field), except for the case
|lambda| = 1 (the "zero-field" case). A randomized algorithm (FPRAS)
for all graphs, and all beta, lambda, has long been known. Unlike most other
deterministic approximation algorithms for problems in statistical physics and
counting, our algorithm does not rely on the "decay of correlations" property.
Rather, we exploit and extend machinery developed recently by Barvinok, and
Patel and Regts, based on the location of the complex zeros of the partition
function, which can be seen as an algorithmic realization of the classical
Lee-Yang approach to phase transitions. Our approach extends to the more
general setting of the Ising model on hypergraphs of bounded degree and edge
size, where no previous algorithms (even randomized) were known for a wide
range of parameters. In order to achieve this extension, we establish a tight
version of the Lee-Yang theorem for the Ising model on hypergraphs, improving a
classical result of Suzuki and Fisher.
Comment: clarified presentation of combinatorial arguments, added new results on optimality of univariate Lee-Yang theorem.
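For concreteness, the Lee-Yang phenomenon the algorithm exploits can be checked by brute force on a tiny instance. The sketch below uses one common convention for the Ising partition function (a factor beta per disagreeing edge, a factor lambda per up-spin); the paper's normalization may differ.

```python
import itertools
import numpy as np

def ising_Z(edges, n, beta, lam):
    """Brute-force Ising partition function on n vertices.

    Convention (one of several in use): each edge whose endpoints
    disagree contributes a factor beta, and each +1 spin contributes
    a factor lam; ferromagnetic for 0 < beta < 1.
    """
    Z = 0.0
    for sigma in itertools.product([1, -1], repeat=n):
        m = sum(1 for u, v in edges if sigma[u] != sigma[v])
        k = sum(1 for s in sigma if s == 1)
        Z += beta**m * lam**k
    return Z

# Single edge: Z = lam^2 + 2*beta*lam + 1.  The Lee-Yang theorem says the
# zeros in lam lie on the unit circle for ferromagnetic beta; here the two
# roots are -beta +/- i*sqrt(1 - beta^2), of modulus exactly 1.
beta = 0.4
Z = ising_Z([(0, 1)], 2, beta, 1.5)
roots = np.roots([1.0, 2 * beta, 1.0])
print(Z, np.abs(roots))
```

The zero-free region around the positive real axis that such root locations create is exactly what the Barvinok/Patel-Regts machinery turns into a deterministic algorithm.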
Spatial mixing and approximation algorithms for graphs with bounded connective constant
The hard core model in statistical physics is a probability distribution on
independent sets in a graph in which the weight of any independent set I is
proportional to lambda^(|I|), where lambda > 0 is the vertex activity. We show
that there is an intimate connection between the connective constant of a graph
and the phenomenon of strong spatial mixing (decay of correlations) for the
hard core model; specifically, we prove that the hard core model with vertex
activity lambda < lambda_c(Delta + 1) exhibits strong spatial mixing on any
graph of connective constant Delta, irrespective of its maximum degree, and
hence derive an FPTAS for the partition function of the hard core model on such
graphs. Here lambda_c(d) := d^d/(d-1)^(d+1) is the critical activity for the
uniqueness of the Gibbs measure of the hard core model on the infinite d-ary
tree. As an application, we show that the partition function can be efficiently
approximated with high probability on graphs drawn from the random graph model
G(n,d/n) for all lambda < e/d, even though the maximum degree of such graphs is
unbounded with high probability.
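The two thresholds quoted above can be related numerically: lambda_c(d) behaves like e/d for large d, matching the e/d threshold for G(n, d/n) up to lower-order terms. The code below only evaluates the stated formula and is not part of the paper's argument.

```python
import math

def lambda_c(d):
    """Critical activity d^d / (d-1)^(d+1) for uniqueness of the hard core
    Gibbs measure on the infinite d-ary tree (d >= 2)."""
    return d**d / (d - 1)**(d + 1)

# d * lambda_c(d) = (1 + 1/(d-1))^(d+1) tends to e as d grows.
for d in (2, 5, 50, 500):
    print(d, lambda_c(d), math.e / d)
```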
We also improve upon Weitz's bounds for strong spatial mixing on bounded
degree graphs (Weitz, 2006) by providing a computationally simple method which
uses known estimates of the connective constant of a lattice to obtain bounds
on the vertex activities lambda for which the hard core model on the lattice
exhibits strong spatial mixing. Using this framework, we improve upon these
bounds for several lattices including the Cartesian lattice in dimensions 3 and
higher.
Our techniques also allow us to relate the threshold for the uniqueness of
the Gibbs measure on a general tree to its branching factor (Lyons, 1989).
Comment: 26 pages. In October 2014, this paper was superseded by
arxiv:1410.2595. Before that, an extended abstract of this paper appeared in
Proc. IEEE Symposium on the Foundations of Computer Science (FOCS), 2013, pp.
300-30
Optimal Fidelity Selection for Improved Performance in Human-in-the-Loop Queues for Underwater Search
In the context of human-supervised autonomy, we study the problem of optimal
fidelity selection for a human operator performing an underwater visual search
task. Human performance depends on various cognitive factors such as workload
and fatigue. We perform human experiments in which participants perform two
tasks simultaneously: a primary task, which is subject to evaluation, and a
secondary task to estimate their workload. The primary task requires
participants to search for underwater mines in videos, while the secondary task
involves a simple visual test where they respond when a green light displayed
on the side of their screens turns red. Videos arrive as a Poisson process and
are stacked in a queue to be serviced by the human operator. The operator can
choose to watch the video with either normal or high fidelity, with normal
fidelity videos playing at three times the speed of high fidelity ones.
Participants receive rewards for their accuracy in mine detection for each
primary task and penalties based on the number of videos waiting in the queue.
We consider the workload of the operator as a hidden state and model the
workload dynamics as an Input-Output Hidden Markov Model (IOHMM). We use a
Partially Observable Markov Decision Process (POMDP) to learn an optimal
fidelity selection policy, where the objective is to maximize total rewards.
Our results demonstrate improved performance when videos are serviced based on
the optimal fidelity policy, compared to a baseline where humans choose the
fidelity level themselves.
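As a sketch of the inference step underlying the IOHMM/POMDP formulation, here is a discrete Bayes filter over a hidden workload state. The three workload levels and every matrix entry below are invented for illustration; they are not estimates from the paper's experiments.

```python
import numpy as np

# T[a][s, s']: workload transition probabilities given fidelity action a
# (the IOHMM "input").  All numbers are made-up illustrative values.
T = {
    "normal": np.array([[0.8, 0.2, 0.0],
                        [0.1, 0.8, 0.1],
                        [0.0, 0.2, 0.8]]),
    "high":   np.array([[0.6, 0.4, 0.0],
                        [0.0, 0.7, 0.3],
                        [0.0, 0.1, 0.9]]),
}
# O[s, o]: P(secondary-task hit (o = 1) | workload s); hits drop as
# workload rises, which is what makes the secondary task informative.
O = np.array([[0.1, 0.9],
              [0.4, 0.6],
              [0.8, 0.2]])

def belief_update(b, action, obs):
    """Bayes filter: b'(s') proportional to O[s', obs] * sum_s T[a][s, s'] b(s)."""
    b_pred = b @ T[action]
    b_new = O[:, obs] * b_pred
    return b_new / b_new.sum()

b = np.array([1.0, 0.0, 0.0])            # start: known low workload
for action, obs in [("high", 1), ("high", 0), ("normal", 0)]:
    b = belief_update(b, action, obs)
print(b)                                 # posterior over workload levels
```

A POMDP policy then maps this belief vector (rather than the unobservable workload itself) to a fidelity choice.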
Structural Properties of Optimal Fidelity Selection Policies for Human-in-the-loop Queues
We study optimal fidelity selection for a human operator servicing a queue of
homogeneous tasks. The agent can service a task with a normal or high fidelity
level, where fidelity refers to the degree of exactness and precision while
servicing the task. Therefore, high-fidelity servicing results in
higher-quality service but leads to larger service times and increased operator
tiredness. We treat the cognitive state of the human operator as a lumped
parameter that captures psychological factors such as workload and fatigue. The
service time distribution of the human operator depends on her cognitive
dynamics and the fidelity level selected for servicing the task. Her cognitive
dynamics evolve as a Markov chain in which the cognitive state increases with
high probability whenever she is busy and decreases while resting. The tasks
arrive according to a Poisson process and the operator is penalized at a fixed
rate for each task waiting in the queue. We address the trade-off between
high-quality service of a task and the penalty due to the consequent
increase in queue length using a discrete-time Semi-Markov Decision Process
(SMDP) framework. We numerically determine an optimal policy and the
corresponding optimal value function. Finally, we establish structural
properties of an optimal fidelity policy and provide conditions under which the
optimal policy is a threshold-based policy.
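A threshold-based policy of the kind characterized above can be exercised in a toy simulation. Every rate, reward, and the cognitive-state dynamics below are invented stand-ins (Bernoulli arrivals in place of the Poisson process, unit time steps in place of the SMDP's random service times), not the paper's model parameters.

```python
import numpy as np

def simulate(threshold, steps=20_000, seed=1):
    """Average net reward of the policy: serve with high fidelity only
    while the cognitive state is below `threshold`, else normal fidelity.
    All parameters are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    S_MAX, p_up, p_down = 5, 0.3, 0.4      # cognitive-state dynamics
    arrive_p = 0.25                         # Bernoulli stand-in for Poisson arrivals
    serve_p = {"normal": 0.6, "high": 0.2}  # high fidelity => longer service
    reward = {"normal": 1.0, "high": 3.0}   # high fidelity => better quality
    queue, state, total = 0, 0, 0.0
    for _ in range(steps):
        queue += rng.random() < arrive_p
        if queue > 0:                       # busy: state tends to rise
            fid = "high" if state < threshold else "normal"
            if rng.random() < serve_p[fid]:
                queue -= 1
                total += reward[fid]
            if rng.random() < p_up:
                state = min(S_MAX, state + 1)
        elif rng.random() < p_down:         # resting: state tends to fall
            state = max(0, state - 1)
        total -= 0.05 * queue               # holding cost per waiting task
    return total / steps

print([round(simulate(t), 3) for t in range(0, 6)])
```

Sweeping the threshold exposes the trade-off the SMDP formalizes: higher thresholds buy service quality at the cost of a longer queue.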
Evolutionary Dynamics in Finite Populations Mix Rapidly
In this paper we prove that the mixing time of a broad class of evolutionary dynamics in finite, unstructured populations is roughly logarithmic in the size of the state space. An important special case of such a stochastic process is the Wright-Fisher model from evolutionary biology (with selection and mutation) on a population of size N over m genotypes. Our main result implies that the mixing time of this process is O(log N) for all mutation rates and fitness landscapes, and solves the main open problem from [4]. In particular, it significantly extends the main result of [18], which proved this for m = 2. Biologically, such models have been used to study the evolution of viral populations, with applications to drug design strategies countering them. Here the time it takes for the population to reach a steady state is important both for the estimation of the steady-state structure of the population and in the modeling of the treatment strength and duration. Our result, that such populations exhibit rapid mixing, makes both of these approaches sound.
Technically, we make a novel connection between Markov chains arising in evolutionary dynamics and dynamical systems on the probability simplex. This allows us to use the local and global stability properties of the fixed points of such dynamical systems to construct a contractive coupling in a fairly general setting. We expect that our mixing time result will be useful beyond the evolutionary biology setting, and that the techniques used here will find applications in bounding the mixing times of Markov chains which have a natural underlying dynamical system.
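The Wright-Fisher special case (m = 2, with selection and mutation) is easy to simulate directly; the fitnesses, mutation rate, and population size below are arbitrary illustrative values, not tied to any result in the paper.

```python
import numpy as np

def wf_step(i, N, w, u, rng):
    """One Wright-Fisher generation for two genotypes: select proportionally
    to fitness w = (w0, w1), apply symmetric mutation at rate u, then
    resample N individuals binomially.  i = current count of type 1."""
    x = i / N
    p = (w[1] * x) / (w[1] * x + w[0] * (1 - x))   # post-selection frequency
    p = (1 - u) * p + u * (1 - p)                  # symmetric mutation
    return rng.binomial(N, p)                      # next-generation count

rng = np.random.default_rng(0)
N, w, u = 200, (1.0, 1.05), 0.02
a, b = 0, N                  # two runs started from opposite corners
for t in range(200):
    a = wf_step(a, N, w, u, rng)
    b = wf_step(b, N, w, u, rng)
print(a, b)                  # both runs forget their starting point
```

Rapid mixing in the sense of the paper means that, after O(log N) generations, the law of such a chain is essentially independent of its starting state, as the two runs above illustrate informally.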