Data compression on machines with limited memory
We consider two problems in which machines with limited internal memory are used to compress and decompress data. In the first application, a powerful encoder transmits a coded file to a decoder that has severely constrained memory. A data structure that achieves minimum storage is presented, and alternative methods that sacrifice a small amount of storage to attain faster decoding are described. The second problem we address is that of encoding and decoding in limited memory. Methods for representing context models succinctly are described. These methods provide compression performance that is superior to that of state-of-the-art techniques, and competitive with newer approaches that use five times as much internal memory.
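The minimum-storage data structure itself is not reproduced in the abstract, so the following is only a sketch of the general idea: a canonical prefix code can be decoded with two small per-length arrays instead of an explicit pointer tree. The names build_decoder and decode, and the array layout, are illustrative assumptions, not the paper's design.

    # Sketch: decoding a canonical prefix code from per-length arrays.
    # lengths maps each symbol to its codeword length.

    def build_decoder(lengths):
        """Precompute first_code[l], offset[l], and a sorted symbol list."""
        max_len = max(lengths.values())
        count = [0] * (max_len + 1)
        for l in lengths.values():
            count[l] += 1
        first_code = [0] * (max_len + 1)
        code = 0
        for l in range(1, max_len + 1):
            first_code[l] = code                # smallest code of length l
            code = (code + count[l]) << 1
        ordered = sorted(lengths, key=lambda s: (lengths[s], s))
        offset = [0] * (max_len + 2)            # start of each length group
        for l in range(1, max_len + 1):
            offset[l + 1] = offset[l] + count[l]
        return first_code, offset, ordered

    def decode(bits, decoder):
        first_code, offset, ordered = decoder
        out, code, length = [], 0, 0
        for b in bits:                          # assumes a well-formed stream
            code, length = (code << 1) | b, length + 1
            n = offset[length + 1] - offset[length]  # codewords of this length
            if n and first_code[length] <= code < first_code[length] + n:
                out.append(ordered[offset[length] + code - first_code[length]])
                code, length = 0, 0
        return out

    lengths = {"a": 1, "b": 2, "c": 2}          # canonical code: a=0, b=10, c=11
    print(decode([1, 0, 0, 1, 1], build_decoder(lengths)))  # ['b', 'a', 'c']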
Predicting the deleterious effects of mutation load in fragmented populations.
Human-induced habitat fragmentation constitutes a major threat to biodiversity. Both genetic and demographic factors combine to drive small and isolated populations into extinction vortices. Nevertheless, the deleterious effects of inbreeding and drift load may depend on population structure, migration patterns, and mating systems, and are difficult to predict in the absence of crossing experiments. We performed stochastic individual-based simulations aimed at predicting the effects of deleterious mutations on population fitness (offspring viability and median time to extinction) under a variety of settings (landscape configurations, migration models, and mating systems) on the basis of easy-to-collect demographic and genetic information. Pooling all simulations, a large part (70%) of the variance in offspring viability was explained by a combination of genetic structure (F_ST) and within-deme heterozygosity (H_S). A similar part of the variance in median time to extinction was explained by a combination of local population size (N) and heterozygosity (H_S). In both cases the predictive power increased above 80% when information on mating systems was available. These results provide robust predictive models to evaluate the viability prospects of fragmented populations.
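As a rough illustration of the prediction step only (not the authors' simulation code), one can regress offspring viability on genetic structure (F_ST) and within-deme heterozygosity (H_S) across simulated populations; all numbers below are placeholders, not the study's output.

    # Hypothetical sketch: linear model of viability on F_ST and H_S.
    import numpy as np

    fst = np.array([0.05, 0.12, 0.20, 0.31, 0.40])        # placeholder F_ST
    hs = np.array([0.45, 0.38, 0.30, 0.22, 0.15])         # placeholder H_S
    viability = np.array([0.92, 0.81, 0.70, 0.55, 0.41])  # placeholder response

    X = np.column_stack([np.ones_like(fst), fst, hs])     # intercept + predictors
    coef, *_ = np.linalg.lstsq(X, viability, rcond=None)  # least-squares fit

    pred = X @ coef
    r2 = 1 - ((viability - pred) ** 2).sum() / ((viability - viability.mean()) ** 2).sum()
    print(coef, r2)   # in the study, such predictors explained ~70% of variance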
Grundriss des Militärstrafrechts [Outline of Military Criminal Law]
Mode of access: Internet. Author's inscribed copy.
DOE's HAZMAT Spill Center at the Nevada Test Site: Activities and Capabilities
The U.S. Department of Energy (DOE) owns and operates the Hazardous Materials (HAZMAT) Spill Center (HSC) as a research and demonstration facility available on a user-fee basis to private and public sector test and training sponsors concerned with safety aspects of hazardous materials. Though initially designed to accommodate large liquefied natural gas releases, the HSC has accommodated hazardous materials training and safety-related testing of most chemicals in commercial use. The HSC is located at DOE's Nevada Test Site (NTS) near Mercury, Nevada. The HSC provides a unique opportunity for industry and other users to conduct hazardous materials testing and training. This is the only facility of its kind for either large- or small-scale testing of hazardous and toxic fluids under controlled conditions. It is ideally suited for test sponsors to develop verified data on release prevention, mitigation, cleanup, and environmental effects of toxic and hazardous materials. The facility site also supports structured training for hazardous spills, mitigation, and cleanup. Since 1986, the HSC has been used for releases to evaluate dispersion patterns, mitigation techniques, and combustion characteristics of select materials. Use of the facility can also aid users in developing emergency planning under U.S. Public Law 99-499, the Superfund Amendments and Reauthorization Act of 1986 (SARA), and other federal, state, and international laws and regulations. The HSC Program is managed by the DOE Office of Emergency Management, Nonproliferation and National Security, with the support and assistance of other divisions of DOE and the U.S. government.
Streamlining Context Models For Data Compression
Context modeling has emerged as the most promising new approach to compressing text. While context-modeling algorithms provide very good compression, they suffer from the disadvantages of being slow and requiring large amounts of main memory in which to execute. We describe a context-model-based algorithm that runs significantly faster, uses much less space, and provides compression ratios close to those of earlier context-modeling algorithms. We achieve these improvements through the use of self-organizing lists. Introduction: The most widely used data compression algorithms, including the Unix utility compress, are based on the work of Ziv and Lempel [ZL78]. These are dynamic algorithms that build a dictionary representative of the input text and code dictionary entries using fixed-length codewords. Compress typically reduces a file to approximately 50% of its original size and is extremely fast, but has a large memory requirement (450 Kbytes). Algorithm FG, an updated version of t..
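The speedup is attributed to self-organizing lists; below is a minimal sketch of one standard self-organization rule, move-to-front (the paper's exact update policy is not given in the abstract, so treat the choice as an assumption).

    # Sketch: a self-organizing symbol list using the move-to-front rule.
    # Frequently accessed symbols migrate toward the head, keeping linear
    # search short for skewed, text-like distributions.

    class MoveToFrontList:
        def __init__(self, symbols):
            self.items = list(symbols)

        def access(self, symbol):
            """Return the symbol's current rank, then move it to the front."""
            i = self.items.index(symbol)        # linear search from the head
            self.items.insert(0, self.items.pop(i))
            return i

    mtf = MoveToFrontList("abcdefgh")
    print([mtf.access(c) for c in "aabbaa"])    # ranks shrink on repeats: [0, 0, 1, 0, 1, 0]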
Data compression
This paper surveys a variety of data compression methods spanning almost forty years of research, from the work of Shannon, Fano, and Huffman in the late 1940s to a technique developed in 1986. The aim of data compression is to reduce redundancy in stored or communicated data, thus increasing effective data density. Data compression has important applications in the areas of file storage and distributed systems. Concepts from information theory, as they relate to the goals and evaluation of data compression methods, are discussed briefly. A framework for evaluation and comparison of methods is constructed and applied to the algorithms presented. Comparisons of both theoretical and empirical natures are reported and possibilities for future research are suggested.
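For a concrete instance of the classic static methods the survey covers, here is a minimal Huffman code construction; an illustrative sketch, not code from any surveyed system.

    # Sketch: build Huffman codewords from symbol frequencies.
    import heapq
    from collections import Counter

    def huffman_codes(text):
        # heap entries: (weight, tiebreak, subtree); a subtree is a symbol
        # (leaf) or a (left, right) pair (internal node)
        heap = [(w, i, s) for i, (s, w) in enumerate(Counter(text).items())]
        heapq.heapify(heap)
        n = len(heap)
        while len(heap) > 1:                    # repeatedly merge two lightest
            w1, _, t1 = heapq.heappop(heap)
            w2, _, t2 = heapq.heappop(heap)
            n += 1
            heapq.heappush(heap, (w1 + w2, n, (t1, t2)))
        codes = {}
        def walk(tree, prefix=""):
            if isinstance(tree, tuple):
                walk(tree[0], prefix + "0")
                walk(tree[1], prefix + "1")
            else:
                codes[tree] = prefix or "0"     # single-symbol edge case
        walk(heap[0][2])
        return codes                            # rarer symbols get longer codes

    print(huffman_codes("abracadabra"))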
An Order-2 Context Model for Data Compression With Reduced Time and Space Requirements
Context modeling has emerged as the most promising new approach to compressing text. While context-modeling algorithms provide very good compression, they suffer from the disadvantages of being quite slow and requiring large amounts of main memory in which to execute. We describe a context-model-based algorithm that runs significantly faster and uses less space than earlier context models. Although our algorithm does not achieve the compression performance of competing context models, it does provide a significant improvement over the widely used Unix utility compress in terms of both memory use and compression performance. Introduction: The most widely used data compression algorithms, including the Unix utility compress, are based on the work of Ziv and Lempel [ZL78]. These are dynamic algorithms that build a dictionary representative of the input text and code dictionary entries using fixed-length codewords. Compress typically reduces a file to 40-50% of its original size. Co..
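As a sketch of the modeling step only (the paper's space-saving structures and the entropy coder that would turn counts into bits are both omitted), an order-2 context model keeps next-symbol counts per two-character context; the dictionary layout below is illustrative.

    # Sketch: next-symbol frequency counts conditioned on the two
    # preceding characters (an order-2 context).
    from collections import Counter, defaultdict

    def build_order2_model(text):
        model = defaultdict(Counter)
        for i in range(len(text) - 2):
            context, nxt = text[i:i + 2], text[i + 2]
            model[context][nxt] += 1
        return model

    model = build_order2_model("the theme of the thesis")
    counts = model["th"]                        # what tends to follow "th"?
    print(counts["e"] / sum(counts.values()))   # estimated P('e' | "th")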
Efficient decoding of prefix codes
We discuss representations of prefix codes and the corresponding storage space and decoding time requirements. We assume that a dictionary of words to be encoded has been defined and that a prefix code appropriate to the dictionary has been constructed. The encoding operation becomes simple given these assumptions and an appropriate parsing strategy; we therefore concentrate on decoding. The application which led us to this work constrains the use of internal memory during the decode operation. As a result, we seek a method of decoding which has a small memory requirement.
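One way to keep the decode structure small, sketched here on the assumption of a word dictionary as described above, is to flatten the code tree into a single integer array with two slots per internal node; this layout is illustrative, not necessarily the representation the paper settles on.

    # Sketch: prefix-code tree flattened into one array; tree[2*n + bit]
    # gives the next node (>= 0) or, if negative, ~(word index) at a leaf.

    def decode(bits, tree, words):
        out, node = [], 0
        for b in bits:
            nxt = tree[2 * node + b]
            if nxt < 0:                         # leaf: emit word, restart at root
                out.append(words[~nxt])
                node = 0
            else:
                node = nxt
        return out

    words = ["the", "cat", "sat"]               # code: the=0, cat=10, sat=11
    tree = [~0, 1,                              # node 0: 0 -> "the", 1 -> node 1
            ~1, ~2]                             # node 1: 0 -> "cat", 1 -> "sat"
    print(decode([0, 1, 0, 1, 1], tree, words)) # ['the', 'cat', 'sat']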
Effect of the two-stage thermal disintegration and anaerobic digestion of sewage sludge on the COD fractions
The research presents the changes in chemical oxygen demand (COD) fractions during the two-stage thermal disintegration and anaerobic digestion (AD) of sewage sludge in a municipal wastewater treatment plant (WWTP). Four COD fractions were separated, taking into account the solubility of the substrates and their susceptibility to biodegradation: inert soluble organic matter (S_I), readily biodegradable substrate (S_S), slowly biodegradable substrate (X_S), and inert particulate organic material (X_I). The results showed that the readily biodegradable substrate S_S (46.8% of total COD) and the slowly biodegradable substrate X_S (36.1% of total COD) were dominant in the raw sludge effluents. In sewage effluents after two-stage thermal disintegration, the share of the S_S fraction increased to 90% of total COD and the share of the X_S fraction decreased to 8% of total COD. After AD, the share of the S_S fraction decreased to 64% of total COD, whereas the shares of the other fractions in the effluents increased.
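Since the four fractions partition total COD, the inert share (S_I plus X_I) is whatever remains after the two biodegradable fractions; a small bookkeeping sketch using the percentages reported above:

    # Sketch: the inert remainder of total COD from the reported shares.
    def inert_share(ss_pct, xs_pct):
        return 100.0 - ss_pct - xs_pct

    print(inert_share(46.8, 36.1))  # raw sludge: ~17.1% inert (S_I + X_I)
    print(inert_share(90.0, 8.0))   # after thermal disintegration: ~2% inert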