
    Dissipation of shear-free turbulence near boundaries

    The rapid-distortion model of Hunt & Graham (1978) for the initial distortion of turbulence by a flat boundary is extended to account fully for viscous processes. Two types of boundary are considered: a solid wall and a free surface. The model is shown to be formally valid provided two conditions are satisfied. The first condition is that time is short compared with the decorrelation time of the energy-containing eddies, so that nonlinear processes can be neglected. The second condition is that the viscous layer near the boundary, where tangential motions adjust to the boundary condition, is thin compared with the scales of the smallest eddies. The viscous layer can then be treated using thin-boundary-layer methods. Given these conditions, the distorted turbulence near the boundary is related to the undistorted turbulence, and thence profiles of the turbulence dissipation rate near the two types of boundary are calculated and shown to agree extremely well with the profiles obtained by Perot & Moin (1993) by direct numerical simulation. The dissipation rates are higher near a solid wall than in the bulk of the flow because the no-slip boundary condition leads to large velocity gradients across the viscous layer. In contrast, the weaker constraint of zero stress at a free surface leads to a dissipation rate close to the free surface that is actually smaller than in the bulk of the flow, which explains why tangential velocity fluctuations parallel to a free surface are so large. In addition, we show that it is the adjustment of the large energy-containing eddies across the viscous layer that controls the dissipation rate, which explains why rapid-distortion theory can give quantitatively accurate values for the dissipation rate. We also find that the dissipation rate obtained from the model, evaluated at the time when the model is expected to fail, yields useful estimates of the dissipation obtained from the direct numerical simulations at times when nonlinear processes are significant. We conclude that the main role of nonlinear processes is to arrest the linear growth of the viscous layer after about one large-eddy turnover time.
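
    (Background, not part of the abstract: the wall versus free-surface argument can be read off from the standard, textbook definition of the dissipation rate sketched below; the layer thickness δ and velocity scale u0 are illustrative symbols, not quantities from the paper.)

```latex
% Standard definition of the turbulence dissipation rate (textbook notation):
\[
  \varepsilon \;=\; 2\nu\,\langle s_{ij} s_{ij}\rangle,
  \qquad
  s_{ij} \;=\; \tfrac{1}{2}\bigl(\partial_j u_i + \partial_i u_j\bigr).
\]
% At a no-slip wall the tangential velocities must drop to zero across a thin
% viscous layer of thickness \delta, so wall-normal gradients dominate and
\[
  \varepsilon_{\mathrm{wall}} \;\sim\; \nu\,\Bigl\langle
    \bigl(\partial_z u\bigr)^{2} + \bigl(\partial_z v\bigr)^{2}
  \Bigr\rangle \;\sim\; \nu\,\frac{u_0^{2}}{\delta^{2}},
\]
% which exceeds the bulk value. A free surface imposes only zero tangential
% stress, \partial_z u = \partial_z v = 0, removing this contribution and
% leaving the near-surface dissipation below the bulk value.
```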

    Topology of large scale structure as test of modified gravity

    The genus of the iso-density contours is a robust measure of the topology of large-scale structure, and it is relatively insensitive to nonlinear gravitational evolution, galaxy bias, and redshift-space distortion. We show that the growth of density fluctuations is scale-dependent even in the linear regime in some modified gravity theories, which opens a new possibility of testing the theories observationally. We propose to use the genus of the iso-density contours, an intrinsic measure of the topology of large-scale structure, as a statistic in such tests. In Einstein's general theory of relativity, density fluctuations grow at the same rate on all scales in the linear regime, and the genus per comoving volume is almost conserved as structures grow homologously, so we expect the genus-smoothing-scale relation to be basically time-independent. However, in some modified gravity models where structures grow at different rates on different scales, the genus-smoothing-scale relation should change over time. This can be used to test the gravity models with large-scale structure observations. We study the f(R) theory, the DGP braneworld theory, and the parameterized post-Friedmann (PPF) models. We also forecast how the modified gravity models can be constrained with optical/IR or redshifted 21cm radio surveys in the near future. Comment: Introduction and discussion expanded and refined, conclusion unchanged; 10 pages, 8 figures; accepted to ApJ.
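
    (Background sketch, not from the paper: for a Gaussian density field the genus curve has a well-known analytic form, which makes the logic of the test explicit; the overall normalization convention is deliberately left loose here.)

```latex
% Genus per unit volume of a Gaussian random field at threshold \nu = \delta/\sigma_0:
\[
  g(\nu) \;=\; N(R_G)\,(1-\nu^{2})\, e^{-\nu^{2}/2},
  \qquad
  N(R_G) \;\propto\; \left(\frac{\sigma_1^{2}}{3\,\sigma_0^{2}}\right)^{3/2},
  \quad
  \sigma_j^{2} \;=\; \int \frac{k^{2}\,dk}{2\pi^{2}}\; k^{2j}\, P(k)\, W^{2}(kR_G).
\]
% Growth that is the same on all scales rescales P(k) uniformly and cancels in
% the ratio \sigma_1^2/\sigma_0^2, so N(R_G) per comoving volume is conserved;
% scale-dependent growth in modified gravity changes the shape of the smoothed
% P(k), so the genus-smoothing-scale relation N(R_G) evolves with time.
```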

    Linear Complexity Lossy Compressor for Binary Redundant Memoryless Sources

    A lossy compression algorithm for binary redundant memoryless sources is presented. The proposed scheme is based on sparse graph codes. By introducing a nonlinear function, redundant memoryless sequences can be compressed. We propose a linear-complexity compressor based on extended belief propagation, into which an inertia term is heuristically introduced, and show that it has near-optimal performance for moderate block lengths. Comment: 4 pages, 1 figure.
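
    (Illustrative sketch only; the paper's exact message-passing rules are not reproduced. An inertia term in belief propagation is commonly realized as damping of the message update, roughly as follows, with placeholder names.)

```python
import numpy as np

def damped_bp_step(messages, compute_messages, damping=0.8):
    """One belief-propagation sweep with an inertia (damping) term.

    messages         : current log-likelihood-ratio messages (np.ndarray)
    compute_messages : callable returning freshly computed messages from the
                       sparse-graph code (placeholder for the actual
                       check-node/variable-node updates)
    damping          : weight kept on the previous messages (the "inertia")
    """
    fresh = compute_messages(messages)
    # Inertia: mix new messages with the old ones instead of replacing them,
    # which damps oscillations and helps the iteration settle on a codeword.
    return damping * messages + (1.0 - damping) * fresh
```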

    Diapycnal displacement, diffusion, and distortion of tracers in the ocean

    Small-scale mixing drives the diabatic upwelling that closes the abyssal ocean overturning circulation. Indirect microstructure measurements of in-situ turbulence suggest that mixing is bottom-enhanced over rough topography, implying downwelling in the interior and stronger upwelling in a sloping bottom boundary layer. Tracer Release Experiments (TREs), in which inert tracers are purposefully released and their dispersion is surveyed over time, have been used to infer turbulent diffusivities independently, but they typically provide estimates in excess of microstructure ones. In an attempt to reconcile these differences, Ruan and Ferrari (2021) derived exact tracer-weighted buoyancy moment diagnostics, which we here apply to quasi-realistic simulations. A tracer's diapycnal displacement rate is exactly twice the tracer-averaged buoyancy velocity, itself a convolution of an asymmetric upwelling/downwelling dipole. The tracer's diapycnal spreading rate, however, involves both the expected positive contribution from the tracer-averaged in-situ diffusion and an additional nonlinear diapycnal distortion term, which is caused by correlations between buoyancy and the buoyancy velocity and can be of either sign. Distortion is generally positive (stretching) due to bottom-enhanced mixing in the stratified interior but negative (contraction) near the bottom. Our simulations suggest that these two effects coincidentally cancel for the Brazil Basin Tracer Release Experiment, resulting in negligible net distortion. By contrast, near-bottom tracers experience leading-order distortion that varies in time. Errors in tracer moments due to realistically sparse sampling are generally small (< 20%), especially compared to the O(1) structural errors due to the omission of distortion effects in inverse models. These results suggest that TREs, although indispensable, should not be treated as "unambiguous" constraints on diapycnal mixing. First author draft.
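
    (Illustrative sketch, not the authors' code: tracer-weighted buoyancy moments of the kind used in these diagnostics can be computed from gridded concentration and buoyancy fields as below; the array names and grid handling are assumptions.)

```python
import numpy as np

def tracer_weighted_moments(c, b, dV):
    """Tracer-weighted buoyancy moments on a model grid.

    c  : tracer concentration field (np.ndarray)
    b  : buoyancy field on the same grid (np.ndarray)
    dV : grid-cell volumes (np.ndarray broadcastable to c)
    """
    mass = np.sum(c * dV)                               # total tracer "mass"
    b_mean = np.sum(c * b * dV) / mass                  # tracer-weighted mean buoyancy
    b_var = np.sum(c * (b - b_mean) ** 2 * dV) / mass   # tracer-weighted buoyancy variance
    return b_mean, b_var

# The diapycnal displacement rate is the time derivative of b_mean between
# surveys, and the spreading rate is the time derivative of b_var; the paper
# decomposes the latter into a tracer-averaged diffusion term plus the
# buoyancy/buoyancy-velocity correlation ("distortion") term.
```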

    Can decaying modes save void models for acceleration?

    The unexpected dimness of Type Ia supernovae (SNe), apparently due to accelerated expansion driven by some form of dark energy or modified gravity, has led to attempts to explain the observations using only general relativity with baryonic and cold dark matter, but by dropping the standard assumption of homogeneity on Hubble scales. In particular, the SN data can be explained if we live near the centre of a Hubble-scale void. However, such void models have been shown to be inconsistent with various observations, assuming the void consists of a pure growing mode. Here it is shown that models with a significant decaying-mode contribution today can be ruled out on the basis of the expected cosmic microwave background spectral distortion. This essentially closes one of the very few remaining loopholes in attempts to rule out void models, and strengthens the evidence for Hubble-scale homogeneity. Comment: 11 pages, 3 figures; discussion expanded, appendix added; version accepted to Phys. Rev.

    A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images

    Predictive coding is attractive for compression on board spacecraft thanks to its low computational complexity, modest memory requirements, and the ability to control quality accurately on a pixel-by-pixel basis. Traditionally, predictive compression has focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme intended for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image so as to achieve the desired target rate while minimizing distortion. The rate control algorithm can achieve lossy compression, near-lossless compression, and anything in between, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, in order to show its performance we tailor it in this paper to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that performs lossless, near-lossless, and lossy compression in a single package. We show that the rate controller has excellent performance in terms of accuracy in the output rate and rate-distortion characteristics, and is extremely competitive with state-of-the-art transform coding.
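
    (Illustrative sketch, not the paper's algorithm or the CCSDS-123 predictor: one generic way to select per-region quantizers against a bit budget is bisection on a Lagrangian-style weight, as below; all names, the rate model, and the distortion proxy are assumptions.)

```python
def select_quantizers(regions, estimate_rate, target_rate,
                      steps=(1, 2, 4, 8, 16, 32), iters=20):
    """Pick a quantization step per region to meet a total bit budget.

    regions       : list of spatial/spectral image regions
    estimate_rate : callable(region, step) -> estimated coded bits
                    (stand-in for the encoder's rate model)
    target_rate   : total bit budget for the image
    """
    lo, hi = 0.0, 1e6                     # search range for the Lagrangian-style weight
    best = [max(steps)] * len(regions)    # coarsest steps as a low-rate fallback
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        # For each region, pick the step minimizing (distortion proxy + lam * rate);
        # step**2 is a crude MSE surrogate standing in for a real distortion model.
        choice = [min(steps, key=lambda s: s ** 2 + lam * estimate_rate(r, s))
                  for r in regions]
        total = sum(estimate_rate(r, s) for r, s in zip(regions, choice))
        if total > target_rate:
            lo = lam                      # over budget: penalize rate more
        else:
            hi, best = lam, choice        # within budget: try spending more bits
    return best
```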

    Efficient LDPC Codes over GF(q) for Lossy Data Compression

    In this paper we consider the lossy compression of a binary symmetric source. We present a scheme that provides a low-complexity lossy compressor with near-optimal empirical performance. The proposed scheme is based on b-reduced ultra-sparse LDPC codes over GF(q). Encoding is performed by the Reinforced Belief Propagation algorithm, a variant of Belief Propagation. The computational complexity at the encoder is O(d n q log q), where d is the average degree of the check nodes. For our code ensemble, decoding can be performed iteratively by following the inverse steps of the leaf removal algorithm. For a sparse parity-check matrix the number of needed operations is O(n). Comment: 5 pages, 3 figures.
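
    (Schematic only, with assumed names and constants: Reinforced Belief Propagation is usually described as biasing each variable's prior toward its own current marginal, with a reinforcement strength that grows over iterations; a minimal sketch of that update follows.)

```python
import numpy as np

def reinforce_priors(priors, marginals, t, gamma0=0.85):
    """Schematic reinforcement step driving BP toward a single codeword.

    priors    : original per-variable log-priors over the q symbol values
    marginals : current BP marginals (same shape as priors)
    t         : iteration index
    gamma0    : base reinforcement rate (illustrative value)
    """
    gamma = 1.0 - gamma0 ** t            # reinforcement strength grows with t
    # Bias each variable toward its own current belief so that beliefs
    # progressively freeze and the iteration settles on a compressed codeword.
    return priors + gamma * np.log(np.clip(marginals, 1e-12, None))
```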