
    Influence of adaptive mesh refinement and the hydro solver on shear-induced mass stripping in a minor-merger scenario

    We compare two different codes for simulations of cosmological structure formation to investigate the sensitivity of hydrodynamical instabilities to numerics, in particular, the hydro solver and the application of adaptive mesh refinement (AMR). As a simple test problem, we consider an initially spherical gas cloud in a wind, which is an idealized model for the merger of a subcluster or galaxy with a big cluster. Based on an entropy criterion, we calculate the mass stripping from the subcluster as a function of time. Moreover, the turbulent velocity field is analyzed with a multi-scale filtering technique. We find remarkable differences between the commonly used PPM solver with directional splitting in the Enzo code and an unsplit variant of PPM in the Nyx code, which demonstrates that different codes can converge to systematically different solutions even when using uniform grids. For the test case of an unbound cloud, AMR simulations reproduce uniform-grid results for the mass stripping quite well, although the flow realizations can differ substantially. If the cloud is bound by a static gravitational potential, however, we find strong sensitivity to spurious fluctuations which are induced at the cutoff radius of the potential and amplified by the bow shock. This gives rise to substantial deviations between uniform-grid and AMR runs performed with Enzo, while the mass stripping in Nyx simulations of the subcluster is nearly independent of numerical resolution and AMR. Although many factors related to numerics are involved, our study indicates that unsplit solvers with advanced flux limiters help to reduce grid effects and to keep numerical noise under control, which is important for hydrodynamical instabilities and turbulent flows. Comment: 23 pages, 18 figures, accepted for publication in Astronomy and Computing.
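
    The entropy criterion can be pictured roughly as follows. This is a minimal sketch assuming a uniform grid of NumPy arrays, the ideal-gas entropic function K = P / rho^gamma, and a single threshold `entropy_cut` separating cool cloud gas from the hotter ambient medium; the paper's actual criterion and its multi-scale filtering of the velocity field are not reproduced here.

        import numpy as np

        def cloud_mass(density, pressure, cell_volume, entropy_cut, gamma=5.0/3.0):
            # Entropic function K = P / rho^gamma for every cell of the uniform grid.
            entropy = pressure / density**gamma
            # Gas below the threshold is still attributed to the (cooler, denser) cloud.
            return np.sum(density[entropy < entropy_cut]) * cell_volume

        # Stripped mass at time t, measured relative to the initial cloud:
        # m_stripped(t) = cloud_mass(t=0) - cloud_mass(t)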

    Weighted universal image compression

    We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach considered uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy-coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
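
    The two-stage structure can be sketched as follows for the weighted universal vector quantization case. This is a minimal sketch under illustrative assumptions: squared-error distortion, a crude fixed-rate model, and a Lagrangian selection rule; the paper's actual codebook design and rate accounting may differ.

        import numpy as np

        def encode_block(block, codebooks, lam):
            # First stage: choose the codebook from the collection that minimizes the
            # Lagrangian cost D + lam * R for this block (its index is side information).
            # Second stage: quantize the block with the chosen codebook.
            best = None
            for k, cb in enumerate(codebooks):                     # cb: (codewords, dim) array
                d = np.sum((cb - block) ** 2, axis=1)              # distortion to each codeword
                j = int(np.argmin(d))
                rate = np.log2(len(codebooks)) + np.log2(len(cb))  # fixed-rate model (bits)
                cost = d[j] + lam * rate
                if best is None or cost < best[0]:
                    best = (cost, k, j)
            return best[1], best[2]                                # (codebook index, codeword index)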

    Data compression techniques applied to high resolution high frame rate video technology

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression, described in the open literature, was conducted. The survey examines compression methods employing digital computing. The results of the survey are presented. They include a description of each method and an assessment of image degradation and video data parameters. An assessment is made of present and near-term future technology for the implementation of video data compression in a high-speed imaging system. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementation of video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

    Optimal modeling for complex system design

    The article begins with a brief introduction to the theory describing optimal data compression systems and their performance. A brief outline is then given of a representative algorithm that employs these lessons for optimal data compression system design. The implications of rate-distortion theory for practical data compression system design are then described, followed by a description of the tensions between theoretical optimality and system practicality and a discussion of common tools used in current algorithms to resolve these tensions. Next, the generalization of rate-distortion principles to the design of optimal collections of models is presented. The discussion focuses initially on data compression systems, but later widens to describe how rate-distortion theory principles generalize to model design for a wide variety of modeling applications. The article ends with a discussion of the performance benefits to be achieved using the multiple-model design algorithms.
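
    For reference, the rate-distortion function underlying this theory, and the Lagrangian cost that practical designs typically minimize over the available codes or models, can be written as:

        % Minimum achievable rate at average distortion D (standard rate-distortion function)
        R(D) = \min_{p(\hat{x} \mid x)\,:\, \mathbb{E}[d(X,\hat{X})] \le D} I(X; \hat{X})
        % Practical systems replace the constrained problem by an unconstrained
        % Lagrangian cost, minimized over the candidate codes (or models):
        J = D + \lambda R, \qquad \lambda \ge 0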

    A reliable order-statistics-based approximate nearest neighbor search algorithm

    We propose a new algorithm for fast approximate nearest neighbor search based on the properties of ordered vectors. Data vectors are classified based on the index and sign of their largest components, thereby partitioning the space into a number of cones centered at the origin. The query is itself classified, and the search starts from the selected cone and proceeds to neighboring ones. Overall, the proposed algorithm corresponds to locality-sensitive hashing in the space of directions, with hashing based on the order of components. Thanks to the statistical features emerging through ordering, it deals very well with the challenging case of unstructured data, and is a valuable building block for more complex techniques dealing with structured data. Experiments on both simulated and real-world data show that the proposed algorithm provides state-of-the-art performance.
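
    A minimal sketch of the bucketing idea described above, assuming Euclidean vectors stored as NumPy arrays; the ordering of further components and the schedule for visiting neighboring cones, which the full algorithm relies on, are omitted.

        import numpy as np

        def cone_id(v):
            # The index and sign of the largest-magnitude component define the cone.
            i = int(np.argmax(np.abs(v)))
            return (i, v[i] >= 0.0)

        def build_index(data):
            # Hash every data vector into its cone (one bucket per cone).
            buckets = {}
            for idx, v in enumerate(data):
                buckets.setdefault(cone_id(v), []).append(idx)
            return buckets

        def approx_nn(q, data, buckets):
            # Search the query's own cone; a full implementation proceeds to
            # neighboring cones until enough candidates have been examined.
            candidates = list(buckets.get(cone_id(q), range(len(data))))
            d = [np.sum((data[i] - q) ** 2) for i in candidates]
            return candidates[int(np.argmin(d))]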