
    LISA Source Confusion

    The Laser Interferometer Space Antenna (LISA) will detect thousands of gravitational wave sources. Many of these sources will be overlapping in the sense that their signals will have a non-zero cross-correlation. Such overlaps lead to source confusion, which adversely affects how well we can extract information about the individual sources. Here we study how source confusion impacts parameter estimation for galactic compact binaries, with emphasis on the effects of the number of overlapping sources, the time of observation, the gravitational wave frequencies of the sources, and the degree of the signal correlations. Our main findings are that the parameter resolution decays exponentially with the number of overlapping sources, and super-exponentially with the degree of cross-correlation. We also find that an extended mission lifetime is key to disentangling the source confusion, as the parameter resolution for overlapping sources improves much faster than the usual square root of the observation time. Comment: 8 pages, 14 figures
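    As an aside, the notion of overlap used above can be illustrated with a minimal numpy sketch: the normalized cross-correlation of two nearby monochromatic signals shrinks as the observation time grows, since the frequency resolution scales as 1/T. All parameter values below are hypothetical and not taken from the paper.

```python
import numpy as np

def overlap(h1, h2):
    """Normalized cross-correlation of two sampled signals, assuming white
    noise so the inner product reduces to a plain dot product."""
    return np.dot(h1, h2) / np.sqrt(np.dot(h1, h1) * np.dot(h2, h2))

dt = 15.0                    # sampling cadence in seconds (illustrative)
half_year = 0.5 * 3.15e7     # seconds
f1 = 3.0e-3                  # first source frequency in Hz (hypothetical)
f2 = f1 + 5.0e-8             # nearby second source (hypothetical)

for T in (half_year, 2 * half_year, 4 * half_year):
    t = np.arange(0.0, T, dt)
    h1 = np.cos(2 * np.pi * f1 * t)
    h2 = np.cos(2 * np.pi * f2 * t + 0.3)
    print(f"T = {T / 3.15e7:4.1f} yr   overlap = {overlap(h1, h2):+.3f}")
# As T increases the two signals decorrelate, which is the basic reason an
# extended mission lifetime helps to disentangle confused sources.
```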

    Time's Barbed Arrow: Irreversibility, Crypticity, and Stored Information

    We show why the amount of information communicated between the past and future--the excess entropy--is not in general the amount of information stored in the present--the statistical complexity. This is a long-standing puzzle, since the latter is what is required for optimal prediction, but the former describes observed behavior. We lay out a classification scheme for dynamical systems and stochastic processes that determines when these two quantities are the same or different. We do this by developing closed-form expressions for the excess entropy in terms of optimal causal predictors and retrodictors--the epsilon-machines of computational mechanics. A process's causal irreversibility and crypticity are key determining properties. Comment: 4 pages, 2 figures
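    For reference, the two quantities being contrasted are conventionally defined as follows; this is a schematic summary in standard computational-mechanics notation, not an excerpt from the paper.

```latex
% Excess entropy: mutual information between the semi-infinite past and future.
\[
  E \;=\; I\bigl[\,\overleftarrow{X}\,;\,\overrightarrow{X}\,\bigr]
\]
% Statistical complexity: Shannon entropy of the causal-state distribution of
% the epsilon-machine (the minimal optimal predictor).
\[
  C_\mu \;=\; H[\mathcal{S}]
\]
% In general the information stored in the present bounds the information
% communicated through it:
\[
  E \;\le\; C_\mu .
\]
```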

    The Minimum Description Length Principle and Model Selection in Spectropolarimetry

    It is shown that the two-part Minimum Description Length Principle can be used to discriminate among different models that can explain a given observed dataset. The description length is chosen to be the sum of the length of the message needed to encode the model plus the length of the message needed to encode the data when the model is applied to the dataset. It is verified that the proposed principle can efficiently distinguish the model that correctly fits the observations while avoiding over-fitting. The capabilities of this criterion are shown in two simple problems for the analysis of observed spectropolarimetric signals. The first is the de-noising of observations with the aid of the PCA technique. The second is the selection of the optimal number of parameters in LTE inversions. We propose this criterion as a quantitative approach for distinguishing the most plausible model among a set of proposed models. This quantity is very easy to implement as an additional output of existing inversion codes. Comment: Accepted for publication in the Astrophysical Journal
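    The two-part idea can be sketched on a toy model-selection problem (polynomial degree selection). The code lengths below use a common asymptotic approximation, (k/2) log n for the model and the negative maximized log-likelihood for the data; they are illustrative and not the specific encoding scheme of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a quadratic signal plus Gaussian noise.
x = np.linspace(-1, 1, 200)
y = 1.0 - 2.0 * x + 0.5 * x**2 + rng.normal(scale=0.1, size=x.size)

def description_length(degree):
    """Two-part code length (in nats): L(model) + L(data | model).
    L(model) ~ (k/2) log n for k real-valued parameters (a standard asymptotic
    choice); L(data | model) ~ minus the maximized Gaussian log-likelihood."""
    n, k = y.size, degree + 1
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid**2)
    data_cost = 0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    model_cost = 0.5 * k * np.log(n)
    return model_cost + data_cost

for d in range(6):
    print(f"degree {d}: DL = {description_length(d):8.2f} nats")
# The minimum is expected at degree 2: higher degrees fit the noise slightly
# better but pay more to encode their extra parameters.
```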

    Fluctuation Theorem with Information Exchange: Role of Correlations in Stochastic Thermodynamics

    We establish the fluctuation theorem in the presence of information exchange between a nonequilibrium system and other degrees of freedom, such as an observer and a feedback controller, where the amount of information exchange is added to the entropy production. The resulting generalized second law sets the fundamental limit on energy dissipation and energy cost during the information exchange. Our results apply not only to feedback-controlled processes but also to a much broader class of information exchanges, and provide a unified framework for the nonequilibrium thermodynamics of measurement and feedback control. Comment: To appear in PR
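    Schematically, and with notation and sign conventions that should be checked against the paper, the statement that the exchanged information is added to the entropy production corresponds to an integral fluctuation theorem of the form below.

```latex
% Total entropy production \Delta s_{tot} plus exchanged information \Delta I
% obey an integral fluctuation theorem:
\[
  \bigl\langle e^{-(\Delta s_{\mathrm{tot}} + \Delta I)} \bigr\rangle \;=\; 1 ,
\]
% which by Jensen's inequality gives the generalized second law
\[
  \langle \Delta s_{\mathrm{tot}} \rangle \;\ge\; -\,\langle \Delta I \rangle .
\]
```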

    Entropy exchange and entanglement in the Jaynes-Cummings model

    The Jaynes-Cummings model (JCM) is the simplest fully quantum model that describes the interaction between light and matter. We extend a previous analysis of the JCM by Phoenix and Knight (S. J. D. Phoenix and P. L. Knight, Annals of Physics 186, 381) by considering mixed states of both the light and matter. We present examples of qualitatively different entropic correlations. In particular, we explore the regime of entropy exchange between light and matter, i.e. where the rates of change of the two entropies are anti-correlated. This behavior contrasts with the case of pure light-matter states, in which the rates of change of the two entropies are positively correlated and in fact identical. We give an analytical derivation of the anti-correlation phenomenon and discuss the regime of its validity. Finally, we show a strong correlation between the region of the Bloch sphere characterized by entropy exchange and that characterized by minimal entanglement, as measured by the negative eigenvalues of the partially transposed density matrix. Comment: 8 pages, 5 figures
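    As a numerical point of reference for the pure-state baseline mentioned above, the following numpy sketch evolves a resonant JCM and prints the reduced entropies of field and atom. The cutoff, coupling, and initial state are arbitrary illustrative choices; the mixed-state entropy-exchange regime studied in the paper would require a mixed initial state.

```python
import numpy as np
from math import factorial

N = 20                                         # photon-number cutoff
g = 1.0                                        # coupling strength (sets the time unit)

a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # field annihilation operator
sm = np.array([[0.0, 0.0], [1.0, 0.0]])        # atomic lowering |g><e| in basis (|e>, |g>)
H = g * (np.kron(a, sm.T) + np.kron(a.T, sm))  # resonant JCM interaction Hamiltonian

def entropy(rho):
    """Von Neumann entropy from eigenvalues, ignoring numerical zeros."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def reduced(rho, keep):
    """Partial trace of the (N x 2)-partite state over the unwanted factor."""
    r = rho.reshape(N, 2, N, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == "field" else np.trace(r, axis1=0, axis2=2)

# Initial pure product state: truncated coherent field times excited atom.
alpha = 2.0
field = np.array([alpha**n / np.sqrt(factorial(n)) for n in range(N)])
field /= np.linalg.norm(field)
psi0 = np.kron(field, np.array([1.0, 0.0]))

evals, evecs = np.linalg.eigh(H)
for t in np.linspace(0.0, 10.0, 6):
    psi = evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))
    rho = np.outer(psi, psi.conj())
    print(f"t = {t:5.2f}   S_field = {entropy(reduced(rho, 'field')):.3f}"
          f"   S_atom = {entropy(reduced(rho, 'atom')):.3f}")
# For a globally pure state the two reduced entropies coincide at all times;
# anti-correlated entropy changes only appear once the global state is mixed.
```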

    Measuring the effective complexity of cosmological models

    We introduce a statistical measure of the effective model complexity, called the Bayesian complexity. We demonstrate that the Bayesian complexity can be used to assess how many effective parameters a set of data can support and that it is a useful complement to the model likelihood (the evidence) in model selection questions. We apply this approach to recent measurements of cosmic microwave background anisotropies combined with the Hubble Space Telescope measurement of the Hubble parameter. Using mildly non-informative priors, we show how the 3-year WMAP data improve on the first-year data by being able to measure both the spectral index and the reionization epoch at the same time. We also find that a non-zero curvature is strongly disfavored. We conclude that although current data could constrain at least seven effective parameters, only six of them are required in a scheme based on the Lambda-CDM concordance cosmology. Comment: 9 pages, 4 figures, revised version accepted for publication in PRD, updated with WMAP3 results
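    A minimal sketch of how an effective-complexity measure of this kind can be computed from MCMC samples is given below. The specific convention (posterior mean versus best fit, factors of 2) should be checked against the paper, and the chain here is a toy Gaussian posterior rather than a cosmological one.

```python
import numpy as np

def effective_complexity(chain_loglike, loglike_at_point_estimate):
    """One common form of the Bayesian complexity:
    C_b = <chi2_eff>_posterior - chi2_eff(point estimate), with chi2_eff = -2 ln L.
    Treat this as a sketch; conventions differ between references."""
    chi2_mean = np.mean(-2.0 * np.asarray(chain_loglike))
    return chi2_mean - (-2.0 * loglike_at_point_estimate)

# Toy check: a d-dimensional Gaussian posterior with all parameters well
# constrained should give C_b close to d.
rng = np.random.default_rng(1)
d = 6
samples = rng.normal(size=(20000, d))           # stand-in for an MCMC chain
loglikes = -0.5 * np.sum(samples**2, axis=1)    # Gaussian log-likelihood (up to a constant)
print(effective_complexity(loglikes, loglike_at_point_estimate=0.0))   # ~ 6
```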

    Generalized Hurst exponent and multifractal function of original and translated texts mapped into frequency and length time series

    A nonlinear dynamics approach can be used to quantify complexity in written texts. As a first step, a one-dimensional system is examined: two written texts by one author (Lewis Carroll), together with one translation into an artificial language (Esperanto), are mapped into time series. Their corresponding shuffled versions are used for obtaining a "base line". Two different one-dimensional time series are used here: (i) one based on word lengths (LTS), (ii) the other on word frequencies (FTS). It is shown that the generalized Hurst exponent h(q) and the derived f(α) curves of the original and translated texts show marked differences. The original texts are far from giving a parabolic f(α) function, in contrast to the shuffled texts. Moreover, the Esperanto text has more extreme values. This suggests a cascade-model-like structure, with multiscale, time-asymmetric features, for the finally written texts. A discussion of the difference and complementarity of mapping into an LTS or FTS is presented. The FTS f(α) curves are more open than the LTS ones. Comment: preprint for PRE; 2 columns; 10 pages; 6 (multi)figures; 3 Tables; 70 references
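    One common way to estimate a generalized Hurst exponent from such a series is via the scaling of q-th order structure functions; the sketch below is illustrative (the paper may use a different estimator, e.g. a detrended multifractal analysis), and the input series is random data standing in for a real word-length series.

```python
import numpy as np

def generalized_hurst(series, q_values, taus):
    """Estimate h(q) from K_q(tau) = <|X(t+tau) - X(t)|^q> ~ tau^{q h(q)},
    computed on the cumulative profile of the (mean-subtracted) series."""
    x = np.cumsum(np.asarray(series, dtype=float) - np.mean(series))
    h = []
    for q in q_values:
        K = [np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus]
        slope = np.polyfit(np.log(taus), np.log(K), 1)[0]
        h.append(slope / q)
    return np.array(h)

rng = np.random.default_rng(2)
lts = rng.integers(1, 12, size=5000)        # stand-in for a word-length time series
qs = np.arange(1, 6)
hs = generalized_hurst(lts, qs, taus=np.arange(2, 50))
print({int(q): round(float(h), 3) for q, h in zip(qs, hs)})
# A q-independent h(q) indicates monofractal scaling; a q-dependent h(q),
# and hence a broad f(alpha) spectrum, signals multifractality.
```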

    Statistical mechanical aspects of joint source-channel coding

    An MN-Gallager code over Galois fields, q, based on the Dynamical Block Posterior probabilities (DBP) for messages with a given set of autocorrelations is presented, with the following main results: (a) for a binary symmetric channel the threshold, f_c, is extrapolated for infinite messages using the scaling relation for the median convergence time, t_{med} \propto 1/(f_c - f); (b) a degradation in the threshold is observed as the correlations are enhanced; (c) for a given set of autocorrelations the performance is enhanced as q is increased; (d) the efficiency of the DBP joint source-channel coding is slightly better than the standard gzip compression method; (e) for a given entropy, the performance of the DBP algorithm is a function of the decay of the correlation function over large distances. Comment: 6 pages
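    Result (a) amounts to a one-line extrapolation: since t_med \propto 1/(f_c - f), the inverse median convergence time is linear in the flip rate f and vanishes at the threshold. A toy least-squares sketch, with made-up numbers, is shown below.

```python
import numpy as np

# Scaling relation from the abstract: t_med ~ A / (f_c - f), so 1/t_med is
# linear in f and crosses zero at f = f_c.  Fit a line and read off its root.
f = np.array([0.060, 0.065, 0.070, 0.075, 0.080])      # hypothetical flip rates
t_med = np.array([40.0, 48.0, 61.0, 83.0, 130.0])      # hypothetical median times

slope, intercept = np.polyfit(f, 1.0 / t_med, 1)
f_c = -intercept / slope
print(f"extrapolated threshold f_c ~ {f_c:.3f}")
```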

    Maximum Entropy and Bayesian Data Analysis: Entropic Priors

    The problem of assigning probability distributions which objectively reflect the prior information available about experiments is one of the major stumbling blocks in the use of Bayesian methods of data analysis. In this paper the method of Maximum (relative) Entropy (ME) is used to translate the information contained in the known form of the likelihood into a prior distribution for Bayesian inference. The argument is inspired and guided by intuition gained from the successful use of ME methods in statistical mechanics. For experiments that cannot be repeated, the resulting "entropic prior" is formally identical with the Einstein fluctuation formula. For repeatable experiments, however, the expected value of the entropy of the likelihood turns out to be relevant information that must be included in the analysis. The important case of a Gaussian likelihood is treated in detail. Comment: 23 pages, 2 figures
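    Schematically, and as suggested by the comparison with the Einstein fluctuation formula W \propto e^{S}, an entropic prior of this kind takes the form below; the precise normalization and the treatment of repeatable experiments are given in the paper.

```latex
% Entropic prior built from the entropy of the likelihood relative to an
% underlying measure \mu(x) on the data space (schematic form only):
\[
  \pi(\theta) \;\propto\; \exp\!\bigl[S(\theta)\bigr],
  \qquad
  S(\theta) \;=\; -\int dx\; p(x\mid\theta)\,
                  \log\frac{p(x\mid\theta)}{\mu(x)} .
\]
```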