23,915 research outputs found

    Universal Lossless Compression with Unknown Alphabets - The Average Case

    Universal compression of patterns of sequences generated by independently identically distributed (i.i.d.) sources with unknown, possibly large, alphabets is investigated. A pattern is a sequence of indices that contains all consecutive indices in increasing order of first occurrence. If the alphabet of the source that generated a sequence is unknown, the inevitable cost of coding the unknown alphabet symbols can be exploited to create the pattern of the sequence, and this pattern can in turn be compressed by itself. It is shown that if the alphabet size k is essentially small, then the average minimax and maximin redundancies, as well as the redundancy of every code for almost every source, when compressing a pattern, are at least 0.5 log(n/k^3) bits per unknown probability parameter, and if all alphabet letters are likely to occur, there exist codes whose redundancy is at most 0.5 log(n/k^2) bits per unknown probability parameter, where n is the length of the data sequence. Otherwise, if the alphabet is large, these redundancies are essentially at least O(n^{-2/3}) bits per symbol, and there exist codes that achieve redundancy of essentially O(n^{-1/2}) bits per symbol. Two sub-optimal low-complexity sequential algorithms for compression of patterns are presented and their description lengths analyzed, also pointing out that the average universal description length of a pattern can drop below the i.i.d. entropy of the underlying source for large enough alphabets.
    Comment: Revised for IEEE Transactions on Information Theory
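
    For illustration (this sketch is ours, not from the paper): the pattern of a sequence can be computed in a few lines of Python by replacing each symbol with the 1-based index of its first occurrence, which discards the alphabet identity while preserving the repetition structure.

        def pattern(seq):
            """Replace each symbol by the index of its first occurrence (1-based)."""
            first_seen = {}
            out = []
            for symbol in seq:
                if symbol not in first_seen:
                    first_seen[symbol] = len(first_seen) + 1  # next unused index
                out.append(first_seen[symbol])
            return out

        # "abracadabra" has pattern 1 2 3 1 4 1 5 1 2 3 1: the letters are
        # gone, but which positions repeat which earlier symbol is preserved.
        print(pattern("abracadabra"))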

    Neutrino Physics

    The fundamental properties of neutrinos are reviewed in these lectures. The first part focuses on the basic characteristics of neutrinos in the Standard Model and on how neutrinos are detected. Neutrino masses and oscillations are introduced, and a summary of the most important experimental results on neutrino oscillations to date is provided. Present and future experimental proposals are then discussed, including new precision reactor and accelerator experiments. Finally, different approaches for measuring the neutrino mass and the nature (Majorana or Dirac) of neutrinos are reviewed. The detection of neutrinos from supernova explosions, and the information that such a measurement can provide, is also summarized at the end.
    Comment: 50 pages, contribution to the 2011 CERN-Latin-American School of High-Energy Physics, Natal, Brazil, 23 March-5 April 2011, edited by C. Grojean, M. Mulders and M. Spiropulu. arXiv admin note: text overlap with arXiv:1010.5112, arXiv:1010.4131, arXiv:0704.1800 by other authors
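
    As background for the oscillation results surveyed in these lectures (standard textbook material, not a result of the lectures themselves), the two-flavour vacuum oscillation probability is

        P(\nu_\alpha \to \nu_\beta) = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right)
                                    \approx \sin^2(2\theta)\,\sin^2\!\left(1.27\,\frac{\Delta m^2[\mathrm{eV}^2]\;L[\mathrm{km}]}{E[\mathrm{GeV}]}\right),

    where \theta is the mixing angle, \Delta m^2 the squared-mass splitting, L the baseline, and E the neutrino energy.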

    Non-equilibrium physics of Rydberg lattices in the presence of noise and dissipative processes

    We study the non-equilibrium dynamics of driven spin lattices in the presence of decoherence caused by either laser phase noise or strong decay. In the first case, we discriminate between correlated and uncorrelated noise and explore their effect on the mean density of Rydberg states and on the full counting statistics (FCS). We find that while the mean density is almost identical in both cases, the FCS differ considerably. The main method employed is the Langevin equation (LE), but for the sake of efficiency we use a Markovian master equation or Monte Carlo rate equations in certain regimes. In the second case, we consider dissipative systems with more general power-law interactions. We determine the phase diagram in the steady state and analyse its generation dynamics using Monte Carlo rate equations. In contrast to nearest-neighbour models, there is no transition to long-range-ordered phases for realistic interactions and resonant driving. Yet, for finite laser detunings, we show that Rydberg lattices can undergo a dissipative phase transition to a long-range-ordered antiferromagnetic (AF) phase. We identify the advantages of Monte Carlo rate equations over mean-field (MF) predictions.
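
    As a minimal sketch of what a Monte Carlo rate-equation simulation of such a driven-dissipative lattice can look like (all parameters, rates, and the 1D geometry below are illustrative assumptions, not values from this work):

        import numpy as np

        rng = np.random.default_rng(0)
        L = 40                          # sites in a 1D chain (assumed size)
        omega, gamma = 1.0, 0.2         # drive strength and decay rate (assumed)
        delta, V = 2.0, 2.0             # laser detuning and neighbour shift (assumed)
        state = np.zeros(L, dtype=int)  # 0 = ground state, 1 = Rydberg state

        def up_rate(i):
            # Lorentzian excitation rate: the detuning seen by site i is
            # shifted by the interaction with excited nearest neighbours.
            shift = V * (state[(i - 1) % L] + state[(i + 1) % L])
            return omega**2 * gamma / (gamma**2 + (delta - shift)**2)

        for _ in range(20000):
            # rate per site: decay if excited, drive-induced excitation if not
            rates = np.array([gamma if state[i] else up_rate(i) for i in range(L)])
            i = rng.choice(L, p=rates / rates.sum())  # pick a site proportionally
            state[i] ^= 1                             # flip it

        print("mean Rydberg density:", state.mean())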

    Price transmission analysis: A flexible methodological approach applied to European hog markets

    The study of spatial price relationships helps explain market performance, the degree of market integration or isolation, and the speed at which information is transmitted. Many methods have been used to analyze this issue, the most important being causality tests, impulse-response functions, and cointegration. Normally, these techniques have been applied individually; however, a richer understanding of how markets function can be obtained when they are applied jointly. In this paper, we combine these three techniques in a common econometric model. First, Johansen (1988) multivariate cointegration tests are used to determine the number of long-run equilibrium relationships. Cointegration is considered not only informative about long-run price transmission but also an essential step in the correct specification of the vector error correction model (VECM) used in the subsequent analysis. Second, Dolado and Lutkepohl (1996) causality tests are used to investigate the lead-lag behaviour among markets. Finally, impulse-response functions are calculated from the VECM estimated in the first stage to evaluate dynamic price linkages. The proposed method is applied to study spatial pork price relationships among seven EU countries from 1988 to 1995, using weekly farm-level prices published by EUROSTAT in "Agricultural Markets".
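
    A minimal sketch of this three-step workflow using statsmodels on synthetic data (the series below are illustrative, not the EUROSTAT prices, and the built-in VECM Granger-causality test stands in for the Dolado-Lutkepohl lag-augmented version; if your statsmodels version lacks VECMResults.irf, impulse responses can be computed from the equivalent VAR representation instead):

        import numpy as np
        from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

        # Three synthetic "market" log-price series sharing one stochastic
        # trend, so the cointegration rank should come out as 2.
        rng = np.random.default_rng(1)
        trend = np.cumsum(rng.normal(scale=0.02, size=400))
        prices = np.column_stack([trend + rng.normal(scale=0.01, size=400)
                                  for _ in range(3)])

        # Step 1: Johansen trace test for the number of long-run relationships.
        jres = coint_johansen(prices, det_order=0, k_ar_diff=2)
        print("trace stats:", jres.lr1)   # compare against jres.cvt critical values

        # Step 2: fit the VECM implied by the chosen cointegration rank.
        res = VECM(prices, k_ar_diff=2, coint_rank=2, deterministic="co").fit()

        # Step 3: Granger-type causality and impulse responses from the VECM.
        print(res.test_granger_causality(caused=0, causing=1).summary())
        irf = res.irf(periods=12)
        print(irf.irfs.shape)             # (13, 3, 3): horizon x equation x shock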