
    Brotli: A General-Purpose Data Compressor

    Brotli is an open-source general-purpose data compressor introduced by Google in late 2013 and now adopted in most major browsers and Web servers. It is publicly available on GitHub, and its data format was published as RFC 7932 in July 2016. Brotli is based on the Lempel-Ziv compression scheme and is intended as a generic replacement for Gzip and zlib. The main goal in its design was to compress data on the Internet, which meant optimizing the resources used at decoding time while achieving maximal compression density. This article is intended to provide the first thorough, systematic description of the Brotli format, as well as a detailed computational and experimental analysis of the main algorithmic blocks underlying the current encoder implementation, together with a comparison against compressors of different families constituting the state of the art either in practice or in theory. This treatment will allow us to raise a set of new algorithmic and software engineering problems that deserve further attention from the scientific community.
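    At its core, Brotli — like Gzip — builds on the Lempel-Ziv idea of replacing repeated substrings with back-references into a sliding window. A minimal greedy LZ77-style sketch in Python illustrates just that idea (illustrative only; the actual Brotli format layers Huffman entropy coding, context modeling, and a built-in static dictionary on top of the parse):

    ```python
    def lz77_compress(data: bytes, window: int = 32) -> list:
        """Greedy LZ77-style parse: emit (offset, length, next_byte) triples."""
        out, i = [], 0
        while i < len(data):
            best_len, best_off = 0, 0
            for j in range(max(0, i - window), i):
                l = 0
                while i + l < len(data) and data[j + l] == data[i + l]:
                    l += 1
                if l > best_len:
                    best_len, best_off = l, i - j
            # Literal byte that follows the match (None at end of input).
            nxt = data[i + best_len] if i + best_len < len(data) else None
            out.append((best_off, best_len, nxt))
            i += best_len + 1
        return out

    def lz77_decompress(tokens) -> bytes:
        """Replay the triples; byte-by-byte copy handles overlapping matches."""
        buf = bytearray()
        for off, length, nxt in tokens:
            for _ in range(length):
                buf.append(buf[-off])
            if nxt is not None:
                buf.append(nxt)
        return bytes(buf)
    ```

    A real encoder spends most of its effort on exactly the part this sketch does greedily: choosing which back-references to emit, which is where Brotli's quality levels differ.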

    Write-and-f-array: Implementation and Application

    We introduce a new shared memory object, the write-and-f-array, provide its wait-free implementation, and use it to construct an improved wait-free implementation of the fetch-and-add object. The write-and-f-array generalizes the single-writer write-and-snapshot object in a similar way that the f-array generalizes the multi-writer snapshot object. More specifically, a write-and-f-array is parameterized by an associative operator f and is conceptually an array with two atomic operations:
    - write-and-f, which modifies a single element of the array and returns the result of applying f to all the elements,
    - read, which returns the result of applying f to all the array's elements.
    We provide a wait-free implementation of an N-element write-and-f-array with O(N log N) memory complexity, O(log³ N) step complexity of the write-and-f operation, and O(1) step complexity of the read operation. The implementation uses CAS objects and requires their size to be Ω(log M), where M is the total number of write-and-f operations executed. We also show how it can be modified to achieve O(log² N) step complexity of write-and-f, while increasing the memory complexity to O(N log² N). The write-and-f-array can be applied to create a fetch-and-add object for P processes with O(P log P) memory complexity and O(log³ P) step complexity of the fetch-and-add operation. This is the first implementation of fetch-and-add with polylogarithmic step complexity and subquadratic memory complexity that can be implemented without CAS or LL/SC objects of unrealistic size.
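    Setting concurrency aside, the object's interface can be pictured with a sequential sketch backed by an aggregation tree, which also shows why read can be O(1): the root permanently caches f applied to every element. This is only an illustration of the interface under the stated assumptions (n a power of two, f associative and commutative), not the paper's wait-free construction:

    ```python
    import operator

    class WriteAndFArray:
        """Sequential sketch: a 1-based segment tree whose root holds
        f(a[0], ..., a[n-1]) at all times.  n is assumed a power of two."""

        def __init__(self, n, f, identity):
            self.n, self.f = n, f
            self.tree = [identity] * (2 * n)   # leaves live at n .. 2n-1

        def write_and_f(self, i, value):
            """Write value at index i, then return f over the whole array."""
            k = self.n + i
            self.tree[k] = value
            k //= 2
            while k >= 1:                      # recompute ancestors up to root
                self.tree[k] = self.f(self.tree[2 * k], self.tree[2 * k + 1])
                k //= 2
            return self.tree[1]

        def read(self):
            """O(1): the root caches f applied to all elements."""
            return self.tree[1]
    ```

    With f = operator.add and identity 0, write_and_f already behaves like a (sequential) building block for fetch-and-add: each call returns the new global sum, mirroring how the paper derives fetch-and-add from the write-and-f-array.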

    The physical limnology of a permanently ice-covered and chemically stratified Antarctic lake using high resolution spatial data from an autonomous underwater vehicle

    © 2018 The Authors. Limnology and Oceanography published by Wiley Periodicals, Inc. on behalf of the Association for the Sciences of Limnology and Oceanography.
    We used an Environmentally Non-Disturbing Under-ice Robotic ANtarctic Explorer to make measurements of conductivity and temperature in Lake Bonney, a chemically stratified, permanently ice-covered Antarctic lake that abuts Taylor Glacier, an outlet glacier from the Polar Plateau. The lake is divided into two lobes, East Lobe Bonney (ELB) and West Lobe Bonney (WLB), each with unique temperature and salinity profiles. Most of our data were collected in November 2009 from WLB to examine the influence of the Taylor Glacier on the structure of the water column. Temperatures adjacent to the glacier face between 20 m and 22 m were 3°C colder than in the rest of WLB, due to latent heat transfer associated with melting of the submerged glacier face and inflow of cold brines that originate beneath the glacier. Melting of the glacier face into the salinity gradient below the chemocline generates a series of nearly horizontal intrusions into WLB that were previously documented in profiles measured with 3 cm vertical resolution in 1990–1991. WLB and ELB are connected by a narrow channel through which water can be exchanged over a shallow sill that controls the position of the chemocline in WLB. A complex exchange flow appears to exist through the narrows, driven by horizontal density gradients and melting at the glacier face. Superimposed on the exchange is a net west-to-east flow generated by the higher volume of meltwater inflows to WLB. Both of these processes can be expected to be enhanced in the future as more meltwater is produced.

    Analysis of DD, TT and DT Neutron Streaming Experiments with the ADVANTG Code

    The paper presents an analysis of DD, TT and DT neutron streaming benchmark experiments with the recently released hybrid transport code ADVANTG (AutomateD VAriaNce reducTion Generator). ADVANTG combines the deterministic neutron transport solver Denovo with the Monte Carlo transport code MCNP via the principle of variance reduction. It automatically produces weight-window and source-biasing variance reduction parameters based on the CADIS (Consistent Adjoint Driven Importance Sampling) methodology. Using this hybrid methodology, Monte Carlo simulations of realistic, complex fusion streaming geometries have become possible. In this paper, the experimental results from the 2016 DD campaign, using measurements with TLDs and activation foils up to 40 m from the plasma source, are analyzed. New detailed models of the detector assemblies were incorporated into the JET 360° MCNP model for this analysis. In preparation for the TT and DTE2 campaigns at JET, a pre-analysis for these campaigns is also presented.
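    The weight-window mechanism that ADVANTG parameterizes can be sketched in a few lines (a hypothetical helper for illustration, not ADVANTG or MCNP code): a particle whose statistical weight rises above the window is split into several lighter copies, one that falls below the window plays Russian roulette, and the expected total weight is preserved in both cases, which is what keeps the biased simulation's estimates unbiased:

    ```python
    import random

    def weight_window(w, lo, hi, rng):
        """Apply a weight window [lo, hi] to one particle of weight w.
        Returns the list of particle weights to continue transporting."""
        if w > hi:
            # Split: n copies of weight w/n; expected total weight = w.
            n = int(w // hi) + 1
            return [w / n] * n
        if w < lo:
            # Russian roulette: survive with probability w/target at
            # weight target, so expected weight = (w/target)*target = w.
            target = (lo + hi) / 2
            if rng.random() < w / target:
                return [target]
            return []                      # particle killed
        return [w]                         # inside the window: unchanged
    ```

    CADIS's contribution is choosing lo and hi per space-energy cell from a deterministic adjoint (importance) solution, so splitting happens where particles are headed toward the detector and roulette where they are not.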
