
    Performance of Lempel-Ziv compressors with deferred innovation

    The noiseless data-compression algorithms introduced by Lempel and Ziv (LZ) parse an input data string into successive substrings, each consisting of two parts: the citation, which is the longest prefix that has appeared earlier in the input, and the innovation, which is the symbol immediately following the citation. In extremal versions of the LZ algorithm the citation may have begun anywhere in the input; in incremental versions it must have begun at a previous parse position. Originally the citation and the innovation were encoded, either individually or jointly, into an output word to be transmitted or stored. Subsequently, it was speculated that the cost of this encoding may be excessively high, because the innovation contributes roughly lg(A) bits (base-2 logarithm), where A is the size of the input alphabet, regardless of the compressibility of the source. To remedy this excess, it was suggested to store the parsed substring as usual, but to encode for output only the citation, leaving the innovation to be encoded as the first symbol of the next substring. Being thus included in the next substring, the innovation can participate in whatever compression that substring enjoys. This strategy is called deferred innovation. It is exemplified in the algorithm described by Welch and implemented in the C program compress, which has widely displaced adaptive Huffman coding (compact) as a UNIX system utility. The excessive expansion that can result is explained, and an implicit warning is given against using deferred-innovation compressors on nearly incompressible data.
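    A minimal sketch of the deferred-innovation idea in Python, in the style of Welch's LZW as used by compress: the encoder emits a code for the citation only, and the innovation symbol is carried over as the first symbol of the next substring. The function names and fixed-width-code model are illustrative assumptions; compress's variable-width codes and dictionary resets are omitted.

```python
# Sketch of LZW (deferred innovation, after Welch): the encoder emits
# only the dictionary index of the citation; the innovation symbol is
# not encoded now but becomes the first symbol of the next phrase.

def lzw_encode(data: bytes) -> list[int]:
    table = {bytes([i]): i for i in range(256)}
    out, phrase = [], b""
    for b in data:
        cand = phrase + bytes([b])
        if cand in table:
            phrase = cand                 # extend the citation
        else:
            out.append(table[phrase])     # emit the citation only
            table[cand] = len(table)      # store citation + innovation
            phrase = bytes([b])           # innovation starts next phrase
    if phrase:
        out.append(table[phrase])
    return out

def lzw_decode(codes: list[int]) -> bytes:
    table = {i: bytes([i]) for i in range(256)}
    prev = table[codes[0]]
    out = [prev]
    for c in codes[1:]:
        # c may reference the entry being built (the classic KwKwK case)
        entry = table[c] if c in table else prev + prev[:1]
        out.append(entry)
        table[len(table)] = prev + entry[:1]
        prev = entry
    return b"".join(out)

if __name__ == "__main__":
    s = b"abababababab"
    assert lzw_decode(lzw_encode(s)) == s
```

    Since every emitted code costs roughly lg(table size) bits regardless of how long its phrase is, the output can exceed the input on nearly incompressible data, which is the expansion the paper warns about.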

    Combining galaxy and 21cm surveys

    Acoustic waves traveling through the early Universe imprint a characteristic scale in the clustering of galaxies, QSOs and intergalactic gas. This scale can be used as a standard ruler to map the expansion history of the Universe, a technique known as Baryon Acoustic Oscillations (BAO). BAO offer a high-precision, low-systematics means of constraining our cosmological model. The statistical power of BAO measurements can be improved if the 'smearing' of the acoustic feature by non-linear structure formation is undone in a process known as reconstruction. In this paper we use low-order Lagrangian perturbation theory to study the ability of 21 cm experiments to perform reconstruction and how augmenting these surveys with galaxy redshift surveys at relatively low number densities can improve performance. We find that the critical number density which must be achieved in order to benefit 21 cm surveys is set by the linear theory power spectrum near its peak, and corresponds to densities achievable by upcoming surveys of emission line galaxies such as eBOSS and DESI. As part of this work we analyze reconstruction within the framework of Lagrangian perturbation theory with local Lagrangian bias, redshift-space distortions, k-dependent noise and anisotropic filtering schemes.

    Comment: 10 pages, final version to appear in MNRAS, helpful suggestions from referee and others included
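    As a rough illustration of what standard reconstruction involves (not the paper's full treatment, which also handles bias, redshift-space distortions and anisotropic filters), the sketch below computes a smoothed Zel'dovich displacement field from a gridded overdensity. The grid, smoothing scale and function names are assumptions for illustration.

```python
import numpy as np

def zeldovich_displacement(delta, boxsize, smooth=10.0):
    """First-order (Zel'dovich) displacement field from a density grid.

    delta   : 3-D overdensity grid (assumed cubic, mean ~ 0)
    boxsize : box side length, same units as `smooth` (e.g. Mpc/h)
    smooth  : Gaussian smoothing scale applied before solving
              div(Psi) = -delta_smoothed.
    Returns Psi with shape (3,) + delta.shape.
    """
    n = delta.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)  # angular wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                                  # avoid 0/0 at k = 0
    dk = np.fft.fftn(delta) * np.exp(-0.5 * k2 * smooth**2)
    dk[0, 0, 0] = 0.0                                  # drop the mean mode
    # Psi(k) = i k / k^2 * delta(k), so that div(Psi) = -delta
    return np.array([np.real(np.fft.ifftn(1j * ki / k2 * dk))
                     for ki in (kx, ky, kz)])
```

    Galaxies and a uniform random catalogue are then shifted by -Psi interpolated to their positions, which partially undoes the non-linear smearing of the acoustic feature.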

    TACMB-1: The Theory of Anisotropies in the Cosmic Microwave Background (Bibliographic Resource Letter)

    This Resource Letter provides a guide to the literature on the theory of anisotropies in the cosmic microwave background. Journal articles, web pages, and books are cited for the following topics: discovery, cosmological origin, early work, recombination, general CMB anisotropy references, primary CMB anisotropies (numerical and analytical work), secondary effects, Sunyaev-Zel'dovich effect(s), lensing, reionization, polarization, gravity waves, defects, topology, origin of fluctuations, development of fluctuations, inflation and other ties to particle physics, parameter estimation, recent constraints, web resources, foregrounds, observations and observational issues, and Gaussianity.

    Comment: AJP/AAPT Bibliographic Resource Letter published Feb. 2002, 24 pages (9 of text), 1 figure

    On the decrease of the number of bound states with the increase of the angular momentum

    For the class of central potentials possessing a finite number of bound states and for which the second derivative of rV(r) is negative, we prove, using the supersymmetric quantum mechanics formalism, that an increase of the angular momentum ℓ by one unit yields a decrease of the number of bound states of at least one unit: N_{ℓ+1} ≤ N_ℓ − 1. This property is used to obtain, for this class of potentials, an upper limit on the total number of bound states which significantly improves previously known results.
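    Written out, the iterated consequence of the stated inequality already bounds the total number of bound states; this simple counting argument is only a sketch and is not the paper's sharper limit:

```latex
% Iterating N_{\ell+1} \le N_\ell - 1 starting from \ell = 0 gives
\[
  N_\ell \;\le\; N_0 - \ell ,
\]
% so bound states exist only for \ell \le N_0 - 1, and, counting the
% (2\ell+1)-fold degeneracy of each central-potential level, the total
% number of bound states obeys
\[
  N \;=\; \sum_{\ell \ge 0} (2\ell+1)\, N_\ell
    \;\le\; \sum_{\ell=0}^{N_0-1} (2\ell+1)\,\bigl(N_0-\ell\bigr).
\]
```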

    Comparison of fluorescence-based techniques for the quantification of particle-induced hydroxyl radicals

    Background: Reactive oxygen species, including hydroxyl radicals, can cause oxidative stress and mutations. Inhaled particulate matter can trigger the formation of hydroxyl radicals, which have been implicated as one of the causes of particulate-induced lung disease. The extreme reactivity of hydroxyl radicals presents challenges to their detection and quantification. Here, three fluorescein derivatives [aminophenyl fluorescein (APF), amplex ultrared, and dichlorofluorescein (DCFH)] and two radical species, proxyl fluorescamine and tempo-9-ac, have been compared for their usefulness in measuring hydroxyl radicals generated in two different systems: a solution containing ferrous iron and a suspension of pyrite particles.

    Results: APF, amplex ultrared, and DCFH react similarly in the presence of hydroxyl radicals. Proxyl fluorescamine and tempo-9-ac do not react with hydroxyl radicals directly, which reduces their sensitivity. Since both DCFH and amplex ultrared react with reactive oxygen species other than hydroxyl radicals and with another highly reactive species, peroxynitrite, they lack specificity.

    Conclusion: The most useful probe evaluated here for hydroxyl radicals formed in cell-free particle suspensions is APF, due to its sensitivity and selectivity.

    Association schemes related to universally optimal configurations, Kerdock codes and extremal Euclidean line-sets

    H. Cohn et al. proposed an association scheme of 64 points in R^{14} which is conjectured to be a universally optimal code. We show that this scheme has a generalization in terms of Kerdock codes, as well as in terms of maximal real mutually unbiased bases. These schemes are also related to extremal line-sets in Euclidean spaces and Barnes-Wall lattices. D. de Caen and E. R. van Dam constructed two infinite series of formally dual 3-class association schemes. We explain this formal duality by constructing two dual abelian schemes related to quaternary linear Kerdock and Preparata codes.

    Comment: 16 pages

    High performance compression of science data

    Two papers make up the body of this report. The first presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable-size, variable-shape entries; the authors present experiments on a set of test images showing that, with no training or prior knowledge of the data, the compression achieved for a given fidelity typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating the interframe displacement of blocks with minimum error is presented, as sketched below. The algorithm is designed for a simple parallel architecture to process video in real time.
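    A minimal sketch of the block-matching step in Python. The block size, search radius, and the sum-of-absolute-differences (SAD) error metric are assumptions for illustration (the report's exact error criterion is not given in this abstract), and the parallel mapping is only indicated in a comment.

```python
import numpy as np

def match_block(ref, cur, top, left, bsize=16, radius=7):
    """Exhaustive block matching: displacement with minimum SAD error.

    ref, cur  : previous and current frames (2-D uint8 arrays)
    top, left : block position in the current frame
    Returns (dy, dx) minimizing sum(|cur_block - shifted ref_block|).
    On a parallel architecture each block can be assigned to its own
    processing element, so all searches run concurrently.
    """
    block = cur[top:top + bsize, left:left + bsize].astype(np.int32)
    best, best_dydx = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if (y < 0 or x < 0 or
                    y + bsize > ref.shape[0] or x + bsize > ref.shape[1]):
                continue                      # candidate falls off the frame
            cand = ref[y:y + bsize, x:x + bsize].astype(np.int32)
            sad = np.abs(block - cand).sum()  # block-matching error
            if best is None or sad < best:
                best, best_dydx = sad, (dy, dx)
    return best_dydx
```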