Performance of Lempel-Ziv compressors with deferred innovation
The noiseless data-compression algorithms introduced by Lempel and Ziv (LZ) parse an input data string into successive substrings, each consisting of two parts: the citation, which is the longest prefix that has appeared earlier in the input, and the innovation, which is the symbol immediately following the citation. In extremal versions of the LZ algorithm the citation may have begun anywhere in the input; in incremental versions it must have begun at a previous parse position. Originally the citation and the innovation were encoded, either individually or jointly, into an output word to be transmitted or stored. Subsequently, it was speculated that the cost of this encoding may be excessively high because the innovation contributes roughly lg(A) bits, where A is the size of the input alphabet, regardless of the compressibility of the source. To remedy this excess, it was suggested that the parsed substring be stored as usual but that only the citation be encoded for output, leaving the innovation to be encoded as the first symbol of the next substring. Being thus included in the next substring, the innovation can participate in whatever compression that substring enjoys. This strategy is called deferred innovation. It is exemplified in the algorithm described by Welch and implemented in the C program compress, which has widely displaced adaptive Huffman coding (compact) as a UNIX system utility. The excessive expansion that can result is explained, and an implicit warning is given against using deferred-innovation compressors on nearly incompressible data.
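As a concrete illustration of deferred innovation, the sketch below shows an LZW-style (Welch) parse in which each emitted code names only the citation, while the innovation symbol is carried over to begin the next substring. This is a minimal toy encoder over a byte alphabet, not the compress utility's implementation; names are illustrative.

```python
def lzw_encode(data: bytes):
    """Minimal LZW-style encoder: each output code names only the citation;
    the innovation symbol is deferred to start the next substring."""
    # Dictionary starts with all single-symbol strings (the input alphabet).
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    codes = []
    w = b""
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                       # extend the citation
        else:
            codes.append(dictionary[w])  # emit the citation only
            dictionary[wc] = next_code   # store citation + innovation
            next_code += 1
            w = bytes([byte])            # innovation begins the next substring
    if w:
        codes.append(dictionary[w])
    return codes

print(lzw_encode(b"abababab"))  # [97, 98, 256, 258, 98]
```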
Situating Urban Agriculture: What, Where, and Why in New York City
Urban agriculture has the potential to address multiple concerns simultaneously in dense urban spaces. Where and how urban agricultural interventions are sited within cities are critical questions as governments, municipalities, and urban planners address the need for healthy and resilient food systems as well as environmental resilience. This thesis explores the potential for planners to use digital mapping methodologies and multi-criteria decision analysis (MCDA) so that socio-economically vulnerable neighborhoods and neighborhoods facing environmental vulnerability can be addressed simultaneously. The research demonstrates this process with a geospatial mapping model that incorporates multiple layers of information on the current state of food access, health outcomes, economic need, and water and heat risk in New York City. The results of this model, run multiple times, are applied to each of the tax lots in New York City, thus identifying exactly where the greatest socio-economic need and environmental vulnerability exist.
The methodology used in this thesis includes the collection, classification, and rasterization of a series of decision layers that feed into five larger components of analysis. These components are combined to generate an overall map that displays socio-economic need and another that displays environmental vulnerability as the combination of water and heat vulnerability. When analyzed together, different sets of core targeted areas are identified and evaluated for potentially available and appropriate land and rooftop areas conducive to three different types of urban agriculture: ground-level farms, rooftop open-air farms, and rooftop greenhouses. This methodology builds on previous methodologies developed by the Urban Design Lab at Columbia University / The Earth Institute that evaluate the potential for urban agriculture in New York City (published in 2011 and 2013). This thesis advocates for the development of a comprehensive city-wide plan for the application of urban agriculture as a networked system of open spaces and productive greenhouses that have the potential to offer co-benefits through proximity, clustering, and strategic siting within the core targeted areas. This plan would ideally be supported by the development of open space zoning and ecological corridor zoning districts.
While the data used here support lot-level, high-resolution decision making, they ultimately identify areas of opportunity that can serve as starting points for participatory processes and community engagement practices able to address issues such as private-owner development constraints in the potential siting of urban agriculture. Mapping and data collection are one part of the decision-making process in planning, not its end goal. How the findings of this type of mapping study are made actionable on the ground should be decided with community involvement. In this regard, utilizing GIS and MCDA with public participation can be seen as a community empowerment strategy whereby (a) communities that can benefit from an intervention are first identified and incorporated into the overall process and (b) the maps generated can be used to advocate for specific types of development that will offer co-benefits. Regardless of the issue being analyzed, this thesis concludes that there are immense benefits to using digital mapping methodologies in making large city-wide decisions and in incorporating public and non-expert voices into the conversation.
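To make the overlay step concrete, the sketch below shows one common way a raster-based multi-criteria overlay can be computed once decision layers have been rasterized to a common grid and normalized. The layer names, weights, and top-quartile threshold are illustrative assumptions, not the values used in the thesis.

```python
import numpy as np

# Illustrative decision layers on a common raster grid, each normalized to 0-1
# (e.g. food access, health burden, economic need, heat risk, flood risk).
rng = np.random.default_rng(0)
layers = {name: rng.random((100, 100)) for name in
          ["food_access", "health", "economic_need", "heat_risk", "flood_risk"]}

# Hypothetical weights for a socio-economic need composite and an
# environmental vulnerability composite; real weights would come from MCDA.
socio_weights = {"food_access": 0.4, "health": 0.3, "economic_need": 0.3}
env_weights = {"heat_risk": 0.5, "flood_risk": 0.5}

def weighted_overlay(layers, weights):
    """Combine normalized raster layers into a single composite score."""
    return sum(w * layers[name] for name, w in weights.items())

socio_need = weighted_overlay(layers, socio_weights)
env_vuln = weighted_overlay(layers, env_weights)

# Core target areas: cells in the top quartile of both composites.
targets = ((socio_need > np.quantile(socio_need, 0.75)) &
           (env_vuln > np.quantile(env_vuln, 0.75)))
print(targets.sum(), "candidate cells")
```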
Combining galaxy and 21cm surveys
Acoustic waves traveling through the early Universe imprint a characteristic scale in the clustering of galaxies, QSOs and intergalactic gas. This scale can be used as a standard ruler to map the expansion history of the Universe, a technique known as Baryon Acoustic Oscillations (BAO). BAO offer a high-precision, low-systematics means of constraining our cosmological model. The statistical power of BAO measurements can be improved if the 'smearing' of the acoustic feature by non-linear structure formation is undone in a process known as reconstruction. In this paper we use low-order Lagrangian perturbation theory to study the ability of 21 cm experiments to perform reconstruction and how augmenting these surveys with galaxy redshift surveys at relatively low number densities can improve performance. We find that the critical number density which must be achieved in order to benefit 21 cm surveys is set by the linear theory power spectrum near its peak, and corresponds to densities achievable by upcoming surveys of emission line galaxies such as eBOSS and DESI. As part of this work we analyze reconstruction within the framework of Lagrangian perturbation theory with local Lagrangian bias, redshift-space distortions, k-dependent noise and anisotropic filtering schemes.
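At lowest order, the reconstruction step referred to above amounts to the standard Zel'dovich procedure: smooth the observed density field, solve for the implied displacement field, and move the tracers back by it. The sketch below is a minimal real-space illustration of that displacement solve in numpy; the grid size and smoothing scale are arbitrary, and redshift-space distortions, bias, and the shifting of a random catalogue are omitted, so this is not the paper's full treatment.

```python
import numpy as np

def zeldovich_displacement(delta, boxsize, smoothing=15.0):
    """Estimate the Zel'dovich displacement field Psi from a gridded
    overdensity field delta, after Gaussian smoothing on scale `smoothing`.
    Solves delta = -div(Psi) in Fourier space."""
    n = delta.shape[0]
    kfreq = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    delta_k = np.fft.fftn(delta) * np.exp(-0.5 * k2 * smoothing**2)
    k2[0, 0, 0] = 1.0                      # avoid division by zero at k = 0
    psi = []
    for ki in (kx, ky, kz):
        psi_k = 1j * ki / k2 * delta_k
        psi_k[0, 0, 0] = 0.0               # no mean displacement
        psi.append(np.fft.ifftn(psi_k).real)
    return np.stack(psi)                   # shape (3, n, n, n)

# Toy usage: reconstruction would then shift galaxies (and randoms) by -Psi.
delta = np.random.default_rng(1).normal(size=(64, 64, 64))
psi = zeldovich_displacement(delta, boxsize=1000.0)
print(psi.shape)  # (3, 64, 64, 64)
```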
TACMB-1: The Theory of Anisotropies in the Cosmic Microwave Background (Bibliographic Resource Letter)
This Resource Letter provides a guide to the literature on the theory of anisotropies in the cosmic microwave background. Journal articles, web pages, and books are cited for the following topics: discovery, cosmological origin, early work, recombination, general CMB anisotropy references, primary CMB anisotropies (numerical and analytical work), secondary effects, Sunyaev-Zel'dovich effect(s), lensing, reionization, polarization, gravity waves, defects, topology, origin of fluctuations, development of fluctuations, inflation and other ties to particle physics, parameter estimation, recent constraints, web resources, foregrounds, observations and observational issues, and Gaussianity.
On the decrease of the number of bound states with the increase of the angular momentum
For the class of central potentials possessing a finite number of bound states and for which the second derivative of is negative, we prove, using the supersymmetric quantum mechanics formalism, that an increase of the angular momentum by one unit yields a decrease of the number of bound states of at least one unit. This property is used to obtain, for this class of potentials, an upper limit on the total number of bound states which significantly improves previously known results.
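The loss of bound states with increasing angular momentum can be illustrated numerically (this is only an illustration, not the paper's supersymmetric-quantum-mechanics argument). The sketch below counts negative-energy eigenvalues of a finite-difference radial Hamiltonian for an assumed attractive Gaussian well, in units where hbar^2/2m = 1; the potential, depth, and grid are arbitrary choices.

```python
import numpy as np

def count_bound_states(V, ell, rmax=30.0, n=1500):
    """Count negative-energy eigenvalues of the radial Schroedinger equation
    -u'' + [V(r) + ell(ell+1)/r^2] u = E u  (units hbar^2/2m = 1),
    discretized by finite differences with u(0) = u(rmax) = 0."""
    r = np.linspace(0.0, rmax, n + 2)[1:-1]      # interior grid points
    dr = r[1] - r[0]
    veff = V(r) + ell * (ell + 1) / r**2
    main = 2.0 / dr**2 + veff                    # diagonal of -d^2/dr^2 + Veff
    off = -np.ones(n - 1) / dr**2                # off-diagonals
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return int(np.sum(np.linalg.eigvalsh(H) < 0))

# Illustrative central potential: an attractive Gaussian well.
V = lambda r: -50.0 * np.exp(-(r / 3.0) ** 2)

for ell in range(6):
    print(ell, count_bound_states(V, ell))       # count drops as ell grows
```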
Comparison of fluorescence-based techniques for the quantification of particle-induced hydroxyl radicals
Background: Reactive oxygen species, including hydroxyl radicals, can cause oxidative stress and mutations. Inhaled particulate matter can trigger the formation of hydroxyl radicals, which have been implicated as one of the causes of particulate-induced lung disease. The extreme reactivity of hydroxyl radicals presents challenges to their detection and quantification. Here, three fluorescein derivatives [aminophenyl fluorescamine (APF), amplex ultrared, and dichlorofluorescein (DCFH)] and two radical species, proxyl fluorescamine and tempo-9-ac, have been compared for their usefulness in measuring hydroxyl radicals generated in two different systems: a solution containing ferrous iron and a suspension of pyrite particles. Results: APF, amplex ultrared, and DCFH react similarly to the presence of hydroxyl radicals. Proxyl fluorescamine and tempo-9-ac do not react with hydroxyl radicals directly, which reduces their sensitivity. Since both DCFH and amplex ultrared also react with reactive oxygen species other than hydroxyl radicals, as well as with another highly reactive species, peroxynitrite, they lack specificity. Conclusion: The most useful probe evaluated here for hydroxyl radicals formed from cell-free particle suspensions is APF, owing to its sensitivity and selectivity.
Association schemes related to universally optimal configurations, Kerdock codes and extremal Euclidean line-sets
H. Cohn et al. proposed an association scheme of 64 points in R^{14} which is conjectured to be a universally optimal code. We show that this scheme has a generalization in terms of Kerdock codes, as well as in terms of maximal real mutually unbiased bases. These schemes are also related to extremal line-sets in Euclidean spaces and Barnes-Wall lattices. D. de Caen and E. R. van Dam constructed two infinite series of formally dual 3-class association schemes. We explain this formal duality by constructing two dual abelian schemes related to quaternary linear Kerdock and Preparata codes.
High performance compression of science data
Two papers make up the body of this report. The first presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that, with no training or prior knowledge of the data, the compression achieved for a given fidelity typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
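For reference, exhaustive block matching with a sum-of-absolute-differences (SAD) criterion, the search that such an algorithm parallelizes, can be sketched serially as below; the block size, search range, and test frames are illustrative assumptions, and this is not the report's parallel formulation.

```python
import numpy as np

def best_match(ref, cur, by, bx, block=8, search=7):
    """Find the displacement of the block at (by, bx) in `cur` that minimizes
    the sum of absolute differences against `ref`, searching +/- `search`
    pixels. Returns (dy, dx, sad)."""
    target = cur[by:by + block, bx:bx + block].astype(np.int32)
    best = (0, 0, np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue                     # candidate block out of frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()
            if sad < best[2]:
                best = (dy, dx, sad)
    return best

# Illustrative use: a frame shifted by (2, -3) pixels should be recovered.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, -3), axis=(0, 1))
print(best_match(ref, cur, 24, 24))   # expected displacement (-2, 3), sad 0
```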