3,484 research outputs found

    Comments on 'A Representation for the Symbol Error Rate Using Completely Monotone Functions'

    It was shown in the above-titled paper by Rajan and Tepedelenlioglu (see ibid., vol. 59, no. 6, pp. 3922-3931, June 2013) that, under minimum distance detection, the symbol error rate (SER) of an arbitrary multidimensional constellation subject to additive white Gaussian noise is characterized as the product of a completely monotone function and a nonnegative power of the signal-to-noise ratio (SNR). In this comment, it is proved that the probability of correct decision of an arbitrary constellation admits a similar representation as well. Based on this fact, it is shown that the stochastic ordering $\le_{G_\alpha}$ proposed by the authors as an extension of the existing Laplace transform order for comparing the average SERs over two different fading channels actually predicts that the average SERs are equal for any constellation of dimensionality smaller than or equal to $2\alpha$. Furthermore, it is noted that there are no positive random variables $X_1$ and $X_2$ such that the proposed stochastic ordering is satisfied in the strict sense, i.e., $X_1 <_{G_\alpha} X_2$, when $\alpha = N/2$ for any positive integer $N$. Additional remarks are made about the fading scenarios at low SNR and the generalization to additive compound Gaussian noise originally discussed in the subject paper.
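    To make the structure of that representation concrete, here is a schematic rendering (the symbols $\gamma$, $\alpha$, $\beta$, $f$, $g$ below are placeholders, not the notation of the paper or of this comment): under minimum distance detection the SER as a function of the SNR, and by the present comment also the probability of correct decision $P_c$, take the form

    \[
      \mathrm{SER}(\gamma) \;=\; \gamma^{\alpha}\, f(\gamma), \qquad
      P_c(\gamma) \;=\; 1 - \mathrm{SER}(\gamma) \;=\; \gamma^{\beta}\, g(\gamma),
      \qquad \alpha,\ \beta \ge 0,
    \]

    where $f$ and $g$ are completely monotone, i.e. $(-1)^n f^{(n)}(\gamma) \ge 0$ for all integers $n \ge 0$ and all $\gamma > 0$, and likewise for $g$.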

    12th International Workshop on Termination (WST 2012): February 19–23, 2012, Obergurgl, Austria / ed. by Georg Moser

    This volume contains the proceedings of the 12th International Workshop on Termination (WST 2012), to be held February 19–23, 2012 in Obergurgl, Austria. The goal of the Workshop on Termination is to be a venue for the presentation and discussion of all topics in and around termination. In this way, the workshop tries to bridge the gaps between the different communities interested and active in research in and around termination. The 12th International Workshop on Termination in Obergurgl continues the successful workshops held in St. Andrews (1993), La Bresse (1995), Ede (1997), Dagstuhl (1999), Utrecht (2001), Valencia (2003), Aachen (2004), Seattle (2006), Paris (2007), Leipzig (2009), and Edinburgh (2010). The workshop welcomed contributions on all aspects of termination and complexity analysis. Contributions from the imperative, constraint, functional, and logic programming communities, as well as papers investigating applications of complexity or termination analysis (for example, in program transformation or theorem proving), were particularly welcome. We received 18 submissions, all of which were accepted; each paper was assigned two reviewers. In addition to these 18 contributed talks, WST 2012 hosts three invited talks by Alexander Krauss, Martin Hofmann, and Fausto Spoto.

    Estimation of the Rate-Distortion Function

    Motivated by questions in lossy data compression and by theoretical considerations, we examine the problem of estimating the rate-distortion function of an unknown (not necessarily discrete-valued) source from empirical data. Our focus is the behavior of the so-called "plug-in" estimator, which is simply the rate-distortion function of the empirical distribution of the observed data. Sufficient conditions are given for its consistency, and examples are provided to demonstrate that in certain cases it fails to converge to the true rate-distortion function. The analysis of its performance is complicated by the fact that the rate-distortion function is not continuous in the source distribution; the underlying mathematical problem is closely related to the classical problem of establishing the consistency of maximum likelihood estimators. General consistency results are given for the plug-in estimator applied to a broad class of sources, including all stationary and ergodic ones. A more general class of estimation problems is also considered, arising in the context of lossy data compression when the allowed class of coding distributions is restricted; analogous results are developed for the plug-in estimator in that case. Finally, consistency theorems are formulated for modified (e.g., penalized) versions of the plug-in, and for estimating the optimal reproduction distribution. Comment: 18 pages, no figures [v2: removed an example with an error; corrected typos; a shortened version will appear in IEEE Trans. Inform. Theory]
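    As a concrete illustration of the "plug-in" idea in the simplest finite-alphabet setting (a sketch only; the paper also treats non-discrete sources, and the function names below are mine), one can compute the rate-distortion function of the empirical distribution with the classical Blahut-Arimoto iteration:

    import numpy as np

    def blahut_arimoto(p, dist, beta, n_iter=500):
        """One point (R, D) of the rate-distortion curve for source law p and
        distortion matrix dist, at Lagrange parameter beta > 0 (R in nats)."""
        n_x, n_y = dist.shape
        q = np.full(n_y, 1.0 / n_y)                  # reproduction marginal
        for _ in range(n_iter):
            w = q * np.exp(-beta * dist)             # unnormalized conditional, shape (n_x, n_y)
            cond = w / w.sum(axis=1, keepdims=True)  # optimal Q(x_hat | x) for the current q
            q = p @ cond                             # updated reproduction marginal
        D = float(np.sum(p[:, None] * cond * dist))
        R = float(np.sum(p[:, None] * cond * np.log(cond / q)))
        return R, D

    def plug_in_rd_curve(samples, alphabet, dist, betas):
        """Plug-in estimate: run Blahut-Arimoto on the empirical distribution."""
        counts = np.array([(samples == a).sum() for a in alphabet], dtype=float)
        p_hat = counts / counts.sum()
        return [blahut_arimoto(p_hat, dist, b) for b in betas]

    # Example: i.i.d. Bernoulli(0.3) samples with Hamming distortion.
    rng = np.random.default_rng(0)
    x = rng.binomial(1, 0.3, size=10_000)
    hamming = np.array([[0.0, 1.0], [1.0, 0.0]])
    for R, D in plug_in_rd_curve(x, [0, 1], hamming, np.linspace(0.5, 8.0, 12)):
        print(f"D = {D:.3f}   estimated R(D) = {R:.3f} nats")

    For this toy source the estimate can be checked against the known binary curve R(D) = H(p) - H(D) for D below min(p, 1-p), which makes the consistency question discussed in the abstract easy to experiment with.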

    Kalikow-type decomposition for multicolor infinite range particle systems

    We consider a particle system on $\mathbb{Z}^d$ with real state space and interactions of infinite range. Assuming that the rate of change is continuous, we obtain a Kalikow-type decomposition of the infinite range change rates as a mixture of finite range change rates. Furthermore, if a high noise condition holds, as an application of this decomposition, we design a feasible perfect simulation algorithm to sample from the stationary process. Finally, the perfect simulation scheme allows us to forge an algorithm to obtain an explicit construction of a coupling attaining Ornstein's $\bar{d}$-distance for two ordered Ising probability measures. Comment: Published in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/12-AAP882.
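    Schematically, and with illustrative notation that is not taken from the paper, a Kalikow-type decomposition writes the possibly infinite-range change rate of a site $i$ in configuration $\eta$ as a convex mixture of rates that each look only at finitely many coordinates:

    \[
      c_i(\eta) \;=\; \sum_{k \ge 0} \lambda_k \, c_i^{[k]}\!\bigl(\eta_{V_k}\bigr),
      \qquad \lambda_k \ge 0, \qquad \sum_{k \ge 0} \lambda_k = 1,
    \]

    where $V_0 \subset V_1 \subset \cdots \subset \mathbb{Z}^d$ are finite neighbourhoods of $i$ and $c_i^{[k]}$ depends on $\eta$ only through $\eta_{V_k}$. The point for simulation is that one may first draw the range index $K$ with probability $\lambda_K$ and then evaluate a finite-range rate, which is the ingredient that makes a perfect simulation of the stationary process feasible under a high noise condition.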

    The resource theory of informational nonequilibrium in thermodynamics

    We review recent work on the foundations of thermodynamics in the light of quantum information theory. We adopt a resource-theoretic perspective, wherein thermodynamics is formulated as a theory of what agents can achieve under a particular restriction, namely, that the only state preparations and transformations that they can implement for free are those that are thermal at some fixed temperature. States that are out of thermal equilibrium are the resources. We consider the special case of this theory wherein all systems have trivial Hamiltonians (that is, all of their energy levels are degenerate). In this case, the only free operations are those that add noise to the system (or implement a reversible evolution) and the only nonequilibrium states are states of informational nonequilibrium, that is, states that deviate from the maximally mixed state. The degree of this deviation we call the state's nonuniformity; it is the resource of interest here, the fuel that is consumed, for instance, in an erasure operation. We consider the different types of state conversion: exact and approximate, single-shot and asymptotic, catalytic and noncatalytic. In each case, we present the necessary and sufficient conditions for the conversion to be possible for any pair of states, emphasizing a geometrical representation of the conditions in terms of Lorenz curves. We also review the problem of quantifying the nonuniformity of a state, in particular through the use of generalized entropies. Quantum state conversion problems in this resource theory can be shown to be always reducible to their classical counterparts, so that there are no inherently quantum-mechanical features arising in such problems. This body of work also demonstrates that the standard formulation of the second law of thermodynamics is inadequate as a criterion for deciding whether or not a given state transition is possible. Comment: 51 pages, 9 figures, Revised Version
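    A minimal sketch of the Lorenz-curve criterion in the simplest case emphasized above (equal-dimension states with trivial Hamiltonians, where the quantum conversion problem reduces to its classical counterpart): exact, noncatalytic conversion under the free noise-adding operations is possible precisely when the Lorenz curve of the initial state lies nowhere below that of the target, i.e. when the initial spectrum majorizes the target spectrum. The function names are mine.

    import numpy as np

    def lorenz_curve(p):
        """Lorenz curve of a probability vector p (trivial-Hamiltonian case):
        cumulative sums of the entries sorted in non-increasing order."""
        p = np.sort(np.asarray(p, dtype=float))[::-1]
        return np.concatenate(([0.0], np.cumsum(p)))

    def exactly_convertible(rho_spec, sigma_spec, tol=1e-12):
        """True iff rho majorizes sigma, i.e. the Lorenz curve of rho is nowhere
        below that of sigma (equal dimensions assumed)."""
        return bool(np.all(lorenz_curve(rho_spec) >= lorenz_curve(sigma_spec) - tol))

    # A sharp (pure) state converts to anything of the same dimension, while the
    # maximally mixed state converts only to itself: nonuniformity is the resource.
    pure    = [1.0, 0.0, 0.0, 0.0]
    partial = [0.5, 0.3, 0.15, 0.05]
    uniform = [0.25, 0.25, 0.25, 0.25]
    print(exactly_convertible(pure, partial))     # True
    print(exactly_convertible(partial, pure))     # False
    print(exactly_convertible(uniform, partial))  # False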

    Asymptotics of Discrete MDL for Online Prediction

    Minimum Description Length (MDL) is an important principle for induction and prediction, with strong relations to optimal Bayesian learning. This paper deals with learning non-i.i.d. processes by means of two-part MDL, where the underlying model class is countable. We consider the online learning framework, i.e. observations come in one by one, and the predictor is allowed to update its state of mind after each time step. We identify two ways of predicting by MDL for this setup, namely a static and a dynamic one. (A third variant, hybrid MDL, will turn out to be inferior.) We prove that under the only assumption that the data is generated by a distribution contained in the model class, the MDL predictions converge to the true values almost surely. This is accomplished by proving finite bounds on the quadratic, the Hellinger, and the Kullback-Leibler loss of the MDL learner, which are however exponentially worse than for Bayesian prediction. We demonstrate that these bounds are sharp, even for model classes containing only Bernoulli distributions. We show how these bounds imply regret bounds for arbitrary loss functions. Our results apply to a wide range of setups, namely sequence prediction, pattern classification, regression, and universal induction in the sense of Algorithmic Information Theory, among others. Comment: 34 pages.
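    As a toy illustration of the two-part MDL setup on a countable class (a sketch under my own conventions, not the paper's exact static/dynamic/hybrid constructions), take a countable family of Bernoulli models with a prefix code over the model index; after each observation, re-select the hypothesis minimizing model codelength plus data codelength and predict the next symbol with it:

    import math
    import random

    def model_codelength_bits(k):
        # Prefix codelength (in bits) for hypothesis index k = 1, 2, ...
        # Any prefix code over the positive integers would do.
        return 2.0 * math.log2(k + 1) + 1.0

    # Countable Bernoulli class, truncated here for the demo: theta_k = k / (K_MAX + 1).
    K_MAX = 99
    THETAS = [k / (K_MAX + 1) for k in range(1, K_MAX + 1)]

    def data_codelength_bits(theta, ones, zeros):
        # Ideal codelength of the observed bits under Bernoulli(theta), in bits.
        return -(ones * math.log2(theta) + zeros * math.log2(1.0 - theta))

    def mdl_select(ones, zeros):
        # Two-part MDL: minimize model codelength + data codelength given the model.
        best_theta, best_len = None, float("inf")
        for k, theta in enumerate(THETAS, start=1):
            total = model_codelength_bits(k) + data_codelength_bits(theta, ones, zeros)
            if total < best_len:
                best_theta, best_len = theta, total
        return best_theta

    # Online loop: re-select after every observation, predict the next bit with it.
    random.seed(0)
    true_theta, ones, zeros = 0.7, 0, 0
    for t in range(1, 2001):
        p_next = mdl_select(ones, zeros)          # predicted P(next symbol = 1)
        x_t = 1 if random.random() < true_theta else 0
        ones, zeros = ones + x_t, zeros + (1 - x_t)
        if t in (10, 100, 1000, 2000):
            print(f"t = {t:4d}   predicted P(x=1) = {p_next:.3f}")

    With the true parameter contained in the (truncated) class, the predictions settle on it, in line with the almost-sure convergence the abstract establishes for countable model classes.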