
    Online codes for analog signals

    This paper revisits a classical scenario in communication theory: a waveform sampled at regular intervals is to be encoded so as to minimize distortion in its reconstruction, despite noise. This transformation must be online (causal), to enable real-time signaling, and should use no more power than the original signal. The noise model we consider is an "atomic norm" convex relaxation of the standard (discrete alphabet) Hamming-weight-bounded model: namely, adversarial $\ell_1$-bounded. In the "block coding" (noncausal) setting, such encoding is possible due to the existence of large almost-Euclidean sections in $\ell_1$ spaces, a notion first studied in the work of Dvoretzky in 1961. Our main result is that an analogous result is achievable even causally. Equivalently, our work may be seen as a "lower triangular" version of $\ell_1$ Dvoretzky theorems. In terms of communication, the guarantees are expressed in terms of certain time-weighted norms: the time-weighted $\ell_2$ norm imposed on the decoder forces increasingly accurate reconstruction of the distant past signal, while the time-weighted $\ell_1$ norm on the noise ensures vanishing interference from distant past noise. Encoding is linear (hence easy to implement in analog hardware). Decoding is performed by an LP analogous to those used in compressed sensing.
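
    As a toy illustration of the decoding step, the sketch below recovers a signal from a linear encoding under $\ell_1$-bounded noise by solving the LP $\min_{\hat{x}} \|y - E\hat{x}\|_1$, in the spirit of the compressed-sensing-style decoder the abstract mentions. This is only the noncausal (block) baseline, not the paper's causal construction; the encoder $E$, dimensions, and noise model here are illustrative assumptions.

```python
# Block (noncausal) l1 decoding sketch -- assumptions: random Gaussian encoder,
# sparse adversarial noise. Solves  min_x ||y - E x||_1  as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 20, 60                                   # signal length, codeword length (m > n: redundancy)
E = rng.standard_normal((m, n)) / np.sqrt(m)    # illustrative linear encoder
x = rng.standard_normal(n)                      # "analog" source samples
e = np.zeros(m)
e[rng.choice(m, 5, replace=False)] = rng.standard_normal(5)  # l1-bounded (sparse) noise
y = E @ x + e                                   # received word

# LP variables z = [x_hat (n), t (m)]; minimize 1^T t  s.t.  -t <= y - E x_hat <= t
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[E, -np.eye(m)], [-E, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * n + [(0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x_hat = res.x[:n]
print("reconstruction error:", np.linalg.norm(x_hat - x))
```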

    The Ising Partition Function: Zeros and Deterministic Approximation

    We study the problem of approximating the partition function of the ferromagnetic Ising model in graphs and hypergraphs. Our first result is a deterministic approximation scheme (an FPTAS) for the partition function in bounded degree graphs that is valid over the entire range of parameters $\beta$ (the interaction) and $\lambda$ (the external field), except for the case $\vert\lambda\vert=1$ (the "zero-field" case). A randomized algorithm (FPRAS) for all graphs, and all $\beta,\lambda$, has long been known. Unlike most other deterministic approximation algorithms for problems in statistical physics and counting, our algorithm does not rely on the "decay of correlations" property. Rather, we exploit and extend machinery developed recently by Barvinok, and by Patel and Regts, based on the location of the complex zeros of the partition function, which can be seen as an algorithmic realization of the classical Lee-Yang approach to phase transitions. Our approach extends to the more general setting of the Ising model on hypergraphs of bounded degree and edge size, where no previous algorithms (even randomized) were known for a wide range of parameters. In order to achieve this extension, we establish a tight version of the Lee-Yang theorem for the Ising model on hypergraphs, improving a classical result of Suzuki and Fisher.
    Comment: clarified presentation of combinatorial arguments, added new results on optimality of the univariate Lee-Yang theorem.
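
    For reference on tiny instances, the partition function being approximated can be computed exactly by brute force. The sketch below uses one common convention (an assumption; papers differ): $Z(G;\beta,\lambda) = \sum_{\sigma:V\to\{\pm 1\}} \beta^{m(\sigma)}\lambda^{n_+(\sigma)}$, where $m(\sigma)$ counts monochromatic edges and $n_+(\sigma)$ counts $+1$ spins, with $\beta > 1$ in the ferromagnetic regime.

```python
# Brute-force ferromagnetic Ising partition function on a small graph.
# Exponential in |V|; useful only for sanity-checking approximation schemes.
from itertools import product

def ising_Z(vertices, edges, beta, lam):
    Z = 0.0
    for sigma in product([-1, +1], repeat=len(vertices)):
        spin = dict(zip(vertices, sigma))
        mono = sum(1 for (u, v) in edges if spin[u] == spin[v])  # monochromatic edges
        plus = sum(1 for s in sigma if s == +1)                  # +1 spins
        Z += beta**mono * lam**plus
    return Z

# 4-cycle example, at the excluded zero-field point lambda = 1
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(ising_Z(V, E, beta=2.0, lam=1.0))
```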

    Spatial mixing and approximation algorithms for graphs with bounded connective constant

    The hard core model in statistical physics is a probability distribution on independent sets in a graph in which the weight of an independent set $I$ is proportional to $\lambda^{|I|}$, where $\lambda > 0$ is the vertex activity. We show that there is an intimate connection between the connective constant of a graph and the phenomenon of strong spatial mixing (decay of correlations) for the hard core model; specifically, we prove that the hard core model with vertex activity $\lambda < \lambda_c(\Delta + 1)$ exhibits strong spatial mixing on any graph of connective constant $\Delta$, irrespective of its maximum degree, and hence derive an FPTAS for the partition function of the hard core model on such graphs. Here $\lambda_c(d) := d^d/(d-1)^{d+1}$ is the critical activity for the uniqueness of the Gibbs measure of the hard core model on the infinite $d$-ary tree. As an application, we show that the partition function can be efficiently approximated with high probability on graphs drawn from the random graph model $G(n,d/n)$ for all $\lambda < e/d$, even though the maximum degree of such graphs is unbounded with high probability. We also improve upon Weitz's bounds for strong spatial mixing on bounded degree graphs (Weitz, 2006) by providing a computationally simple method which uses known estimates of the connective constant of a lattice to obtain bounds on the vertex activities $\lambda$ for which the hard core model on the lattice exhibits strong spatial mixing. Using this framework, we improve upon these bounds for several lattices including the Cartesian lattice in dimensions 3 and higher. Our techniques also allow us to relate the threshold for the uniqueness of the Gibbs measure on a general tree to its branching factor (Lyons, 1989).
    Comment: 26 pages. In October 2014, this paper was superseded by arXiv:1410.2595. Before that, an extended abstract of this paper appeared in Proc. IEEE Symposium on the Foundations of Computer Science (FOCS), 2013, pp. 300-30
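
    The two quantities at the heart of this abstract are easy to make concrete. The sketch below evaluates the critical activity $\lambda_c(d) = d^d/(d-1)^{d+1}$ stated above and computes the hard core partition function $Z(G;\lambda) = \sum_{I \text{ indep.}} \lambda^{|I|}$ by brute force on a small graph; the example graph is an illustrative choice.

```python
# Critical activity and brute-force hard-core partition function.
from itertools import combinations

def lambda_c(d):
    """Uniqueness threshold on the infinite d-ary tree: d^d / (d-1)^(d+1)."""
    return d**d / (d - 1)**(d + 1)

def hardcore_Z(vertices, edges, lam):
    adj = {(u, v) for (u, v) in edges} | {(v, u) for (u, v) in edges}
    Z = 0.0
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            if all((u, v) not in adj for u in subset for v in subset):
                Z += lam**k        # subset is an independent set
    return Z

print(lambda_c(4))                 # threshold lambda_c(Delta + 1) for connective constant Delta = 3
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(hardcore_Z(V, E, lam=1.0))   # counts the 7 independent sets of the 4-cycle
```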

    Optimal Fidelity Selection for Improved Performance in Human-in-the-Loop Queues for Underwater Search

    In the context of human-supervised autonomy, we study the problem of optimal fidelity selection for a human operator performing an underwater visual search task. Human performance depends on various cognitive factors such as workload and fatigue. We perform human experiments in which participants perform two tasks simultaneously: a primary task, which is subject to evaluation, and a secondary task to estimate their workload. The primary task requires participants to search for underwater mines in videos, while the secondary task involves a simple visual test where they respond when a green light displayed on the side of their screens turns red. Videos arrive as a Poisson process and are stacked in a queue to be serviced by the human operator. The operator can choose to watch the video with either normal or high fidelity, with normal fidelity videos playing at three times the speed of high fidelity ones. Participants receive rewards for their accuracy in mine detection for each primary task and penalties based on the number of videos waiting in the queue. We consider the workload of the operator as a hidden state and model the workload dynamics as an Input-Output Hidden Markov Model (IOHMM). We use a Partially Observable Markov Decision Process (POMDP) to learn an optimal fidelity selection policy, where the objective is to maximize total rewards. Our results demonstrate improved performance when videos are serviced based on the optimal fidelity policy compared to a baseline where humans choose the fidelity level themselves.
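
    The core mechanism in such a POMDP is tracking a belief over the hidden workload state from observations. The sketch below shows the standard Bayesian belief update $b'(s') \propto O(o \mid s') \sum_s b(s)\, T(s' \mid s, a)$; the transition and observation matrices are illustrative assumptions, not the paper's learned IOHMM parameters.

```python
# Belief update over a hidden workload state (low/high), as used inside a
# POMDP policy. All numbers below are assumed for illustration.
import numpy as np

T = {  # workload transition matrices per fidelity action (rows = current state)
    "normal": np.array([[0.9, 0.1], [0.3, 0.7]]),
    "high":   np.array([[0.6, 0.4], [0.1, 0.9]]),
}
O = np.array([[0.8, 0.2],   # P(observation | state): rows = state, cols = obs
              [0.3, 0.7]])  # e.g. obs 1 = missed secondary-task response

def belief_update(b, action, obs):
    """b'(s') proportional to O(obs | s') * sum_s b(s) T(s' | s, action)."""
    predicted = b @ T[action]
    updated = predicted * O[:, obs]
    return updated / updated.sum()

b = np.array([0.5, 0.5])             # uniform prior over workload
b = belief_update(b, "high", obs=1)  # evidence of elevated workload
print(b)
```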

    Structural Properties of Optimal Fidelity Selection Policies for Human-in-the-loop Queues

    We study optimal fidelity selection for a human operator servicing a queue of homogeneous tasks. The agent can service a task with a normal or high fidelity level, where fidelity refers to the degree of exactness and precision while servicing the task. Therefore, high-fidelity servicing results in higher-quality service but leads to larger service times and increased operator tiredness. We treat the cognitive state of the human operator as a lumped parameter that captures psychological factors such as workload and fatigue. The service time distribution of the human operator depends on her cognitive dynamics and the fidelity level selected for servicing the task. Her cognitive dynamics evolve as a Markov chain in which the cognitive state increases with high probability whenever she is busy and decreases while resting. The tasks arrive according to a Poisson process and the operator is penalized at a fixed rate for each task waiting in the queue. We address the trade-off between high-quality service of the task and the consequent penalty due to the subsequent increase in queue length using a discrete-time Semi-Markov Decision Process (SMDP) framework. We numerically determine an optimal policy and the corresponding optimal value function. Finally, we establish the structural properties of an optimal fidelity policy and provide conditions under which the optimal policy is a threshold-based policy.
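
    A threshold structure of this kind can be eyeballed numerically. The sketch below runs value iteration on a small discretized queue model with states (queue length, cognitive state) and normal/high fidelity actions, then prints the greedy policy per state; the dynamics and rewards are illustrative assumptions, not the paper's calibrated SMDP.

```python
# Value iteration on a toy (queue length q, cognitive state c) model with
# fidelity actions. All probabilities and rewards are assumptions.
import numpy as np

Q, C = 6, 3                              # queue capacity, cognitive levels
gamma = 0.95                             # discount factor
p_arr = 0.3                              # task arrival probability per step
p_done = {"normal": 0.8, "high": 0.4}    # completion prob (high fidelity is slower)
reward = {"normal": 1.0, "high": 2.5}    # service-quality reward on completion
penalty = 0.5                            # per waiting task, per step

def step_value(V, q, c, a):
    """Expected one-step value in state (q, c) when serving with fidelity a."""
    val = -penalty * q
    for arrive, pa in ((1, p_arr), (0, 1 - p_arr)):
        if q == 0:                                   # idle: rest, wait for arrivals
            val += pa * gamma * V[min(Q - 1, arrive), max(0, c - 1)]
            continue
        for done, pd in ((1, p_done[a]), (0, 1 - p_done[a])):
            q2 = min(Q - 1, q - done + arrive)
            c2 = min(C - 1, c + 1)                   # staying busy raises workload
            r = reward[a] * done * (1 - 0.2 * c)     # quality degrades with workload
            val += pa * pd * (r + gamma * V[q2, c2])
    return val

V = np.zeros((Q, C))
for _ in range(500):                                 # value iteration to near-convergence
    V = np.array([[max(step_value(V, q, c, a) for a in ("normal", "high"))
                   for c in range(C)] for q in range(Q)])

for q in range(Q):                                   # look for a threshold in q per cognitive level
    print(q, [max(("normal", "high"), key=lambda a: step_value(V, q, c, a))
              for c in range(C)])
```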

    Fisher Zeros and Correlation Decay in the Ising Model

    The Ising model originated in statistical physics as a means of studying phase transitions in magnets, and has been the object of intensive study for almost a century. Combinatorially, it can be viewed as a natural distribution over cuts in a graph, and it has also been widely studied in computer science, especially in the context of approximate counting and sampling. In this paper, we study the complex zeros of the partition function of the Ising model, viewed as a polynomial in the "interaction parameter"; these are known as Fisher zeros in light of their introduction by Fisher in 1965. While the zeros of the partition function as a polynomial in the "field" parameter have been extensively studied since the classical work of Lee and Yang, comparatively little is known about Fisher zeros. Our main result shows that the zero-field Ising model has no Fisher zeros in a complex neighborhood of the entire region of parameters where the model exhibits correlation decay. In addition to shedding light on Fisher zeros themselves, this result also establishes a formal connection between two distinct notions of phase transition for the Ising model: the absence of complex zeros (analyticity of the free energy, or the logarithm of the partition function) and decay of correlations with distance. We also discuss the consequences of our result for efficient deterministic approximation of the partition function. Our proof relies heavily on algorithmic techniques, notably Weitz's self-avoiding walk tree, and as such belongs to a growing body of work that uses algorithmic methods to resolve classical questions in statistical physics.
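
    Fisher zeros can be computed exactly for tiny instances. Under one common convention (an assumption; normalizations vary), the zero-field partition function is $Z(b) = \sum_{\sigma} b^{\#\text{bichromatic edges}}$, a polynomial in the interaction parameter $b$, so its complex roots can be found directly. The sketch below does this for a 4-cycle.

```python
# Fisher zeros of a small zero-field Ising instance: build Z as a polynomial
# in the interaction parameter b by enumeration, then find its complex roots.
from itertools import product
import numpy as np

def fisher_zeros(vertices, edges):
    coeffs = np.zeros(len(edges) + 1)    # coeffs[k] = #configs with k bichromatic edges
    for sigma in product([-1, +1], repeat=len(vertices)):
        spin = dict(zip(vertices, sigma))
        k = sum(1 for (u, v) in edges if spin[u] != spin[v])
        coeffs[k] += 1
    return np.roots(coeffs[::-1])        # np.roots expects highest degree first

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]     # 4-cycle: Z(b) = 2 + 12 b^2 + 2 b^4
print(fisher_zeros(V, E))                # all four zeros lie on the imaginary axis
```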