99 research outputs found

    A new method for avalanche hazard mapping using a combination of statistical and deterministic models

    The purpose of the present paper is to propose a new method for avalanche hazard mapping using a combination of statistical and deterministic modelling tools. The methodology is based on frequency-weighted impact pressure and uses an avalanche dynamics model embedded within a statistical framework. The outlined procedure provides a useful way for avalanche experts to produce hazard maps for the typical case of avalanche sites where historical records are poorly documented or completely lacking, and to derive confidence limits on the proposed zoning. The methodology is implemented using avalanche information from Iceland and the Swiss mapping criteria, and is applied to a real-world avalanche-mapping problem in Iceland.
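As a rough illustration of pressure-and-frequency-based zoning, the sketch below classifies a location from a modelled impact pressure and an event return period. The 30 kPa and 30/300-year thresholds approximate commonly quoted Swiss zoning criteria, but the function name and the whole decision rule are illustrative assumptions, not the paper's statistical-deterministic method.

```python
# Illustrative zoning sketch only: the thresholds approximate Swiss
# criteria, and the paper's actual frequency-weighted methodology is
# far richer than this simple rule.

def hazard_zone(pressure_kpa, return_period_yr):
    """Classify a location from modelled impact pressure and event frequency."""
    if return_period_yr <= 30 and pressure_kpa > 0:
        return "red"                       # frequent events: highest hazard
    if return_period_yr <= 300:
        return "red" if pressure_kpa >= 30 else "blue"
    return "white"                         # rarer than the design event

print(hazard_zone(50, 100))  # -> red
print(hazard_zone(10, 100))  # -> blue
```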

    A Rapid, Empirical Method for Detection and Estimation of Outlier Frames in Particle Imaging Velocimetry Data using Proper Orthogonal Decomposition

    This paper develops a method for detection and removal of outlier images from digital Particle Image Velocimetry (PIV) data using Proper Orthogonal Decomposition (POD). The outlier is isolated in the leading POD modes, removed, and a replacement value re-estimated. The method is used to estimate and replace whole images within the sequence, which is particularly useful if a single PIV image is suddenly heavily contaminated with background noise, or to estimate a dropped frame within a sequence. The technique is tested on a synthetic dataset that permits the effective acquisition frequency to be varied systematically, before application to flow-field frames obtained from a large-eddy simulation. As expected, outlier re-estimation becomes more difficult when the integral time scale of the flow is long relative to the sampling period. However, the method provides a systematic improvement in predicting frames compared to interpolating from neighbouring frames.
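A minimal sketch of the whole-frame idea, assuming a contaminated frame shows up as a spike in the temporal coefficients of the leading POD modes. The snapshot sizes, the robust z-score threshold, and the neighbour-averaging replacement are all illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "PIV" sequence: smooth sinusoidal dynamics plus one heavily
# contaminated frame (sizes and thresholds are illustrative only).
n_space, n_time = 400, 60
t = np.linspace(0, 4 * np.pi, n_time)
base = (np.outer(rng.standard_normal(n_space), np.sin(t))
        + 0.5 * np.outer(rng.standard_normal(n_space), np.cos(2 * t)))
X = base.copy()
bad = 25
X[:, bad] += 20 * rng.standard_normal(n_space)  # the outlier frame

# POD via SVD of the mean-subtracted snapshot matrix.
mean = X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
A = s[:, None] * Vt                 # temporal coefficients, modes x time

# An outlier frame concentrates in a leading mode whose temporal coefficient
# spikes at one instant: flag frames with an extreme robust z-score there.
k = 3
dev = np.abs(A[:k] - np.median(A[:k], axis=1, keepdims=True))
mad = np.median(dev, axis=1, keepdims=True)
outliers = np.unique(np.nonzero(dev / (1.4826 * mad) > 5)[1])

# Re-estimate each flagged (interior) frame by interpolating its temporal
# coefficients from the neighbouring frames, then reconstructing.
A_fix = A.copy()
for j in outliers:
    A_fix[:, j] = 0.5 * (A[:, j - 1] + A[:, j + 1])
X_fix = mean + U @ A_fix

print(outliers)  # -> [25]
```

Because the full SVD reconstructs the unflagged frames exactly, only the flagged frame changes; its replacement is effectively the average of its two neighbours expressed through the POD coefficients.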

    Wavelet phase analysis of two velocity components to infer the structure of interscale transfers in a turbulent boundary-layer

    Scale-dependent phase analysis of velocity time series measured in a zero pressure gradient boundary layer shows that phase coupling between longitudinal and vertical velocity components is strong at both large and small scales, but minimal in the middle of the inertial regime. The same general pattern is observed at all vertical positions studied, but there is stronger phase coherence as the vertical coordinate, y, increases. The phase difference histograms evolve from a unimodal shape at small scales to the development of significant bimodality at the integral scale and above. The asymmetry in the off-diagonal couplings changes sign at the midpoint of the inertial regime, with the small scale relation consistent with intense ejections followed by a more prolonged sweep motion. These results may be interpreted in a manner that is consistent with the action of low speed streaks and hairpin vortices near the wall, with large scale motions further from the wall, the effect of which penetrates to smaller scales. Hence, a measure of phase coupling, when combined with a scale-by-scale decomposition of perpendicular velocity components, is a useful tool for investigating boundary-layer structure and inferring process from single-point measurements.
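The core measurement can be sketched with a complex Morlet wavelet: filter two signals at one scale, take the phase difference, and summarize its coherence by the mean resultant length. The signal construction and the coupling statistic below are illustrative assumptions; the paper's analysis of measured boundary-layer data is more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

def morlet_filter(x, scale, omega0=6.0):
    """Complex Morlet wavelet coefficients of x at one scale (FFT convolution)."""
    n = len(x)
    freqs = 2 * np.pi * np.fft.fftfreq(n)           # rad/sample
    psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (scale * freqs - omega0) ** 2)
    psi_hat *= (freqs > 0)      # positive frequencies only: analytic signal
    return np.fft.ifft(np.fft.fft(x) * psi_hat)

def phase_coupling(u, v, scale):
    """|mean resultant| of the wavelet phase difference: 1 = locked, 0 = random."""
    du = np.angle(morlet_filter(u, scale)) - np.angle(morlet_filter(v, scale))
    return np.abs(np.mean(np.exp(1j * du)))

# Two signals sharing a common component at one scale: their wavelet phases
# lock there and decorrelate at scales far from it.
n = 4096
t = np.arange(n)
common = np.sin(2 * np.pi * t / 64)
u = common + 0.3 * rng.standard_normal(n)
v = common + 0.3 * rng.standard_normal(n)

s0 = 64 * 6.0 / (2 * np.pi)     # scale matched to the shared 64-sample period
print(phase_coupling(u, v, s0) > phase_coupling(u, v, 8.0))  # -> True
```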

    (Multi)wavelets increase both accuracy and efficiency of standard Godunov-type hydrodynamic models

    This paper presents a scaled reformulation of a robust second-order Discontinuous Galerkin (DG2) solver for the Shallow Water Equations (SWE), with guiding principles on how it can be naturally extended to fit into the multiresolution analysis of multiwavelets (MW). Multiresolution analysis applied to the flow and topography data enables the creation of an adaptive MWDG2 solution on a non-uniform grid. The multiresolution analysis also permits control of the adaptive model error by a single user-prescribed parameter. This results in an adaptive MWDG2 solver that can fully exploit the local (de)compression of piecewise-linear modelled data, and from which a first-order finite volume version (FV1) is directly obtainable based on the Haar wavelet (HFV1) for local (de)compression of piecewise-constant modelled data. The behaviour of the adaptive HFV1 and MWDG2 solvers is systematically studied on a number of well-known hydraulic tests that cover all elementary aspects relevant to accurate, efficient and robust modelling. The adaptive solvers are run starting from a baseline mesh with a single element, and their accuracy and efficiency are measured referring to standard FV1 and DG2 simulations on the uniform grid involving the finest resolution accessible by the adaptive solvers. Our findings reveal that the MWDG2 solver can achieve the same accuracy as the DG2 solver but with a greater efficiency than the FV1 solver due to the smoothness of its piecewise-linear basis, which enables more aggressive coarsening than with the piecewise-constant basis in the HFV1 solver. This suggests a great potential for the MWDG2 solver to efficiently handle the depth and breadth in resolution variability, while also being a multiresolution mesh generator. Accompanying model software and simulation data are openly available online.
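The Haar-wavelet (de)compression underlying the HFV1 variant can be sketched in a few lines: decompose cell averages into a coarse mean plus per-level detail coefficients, zero details below a single threshold eps (the one user-prescribed error-control parameter), and reconstruct. This toy works on a 1D piecewise-constant field and is only a sketch of the multiresolution machinery, not the adaptive SWE solver itself.

```python
import numpy as np

def haar_forward(c):
    """Full Haar decomposition of 2^L cell averages -> (coarsest mean, details)."""
    details = []
    while len(c) > 1:
        s = (c[0::2] + c[1::2]) / 2.0        # scaling (parent) coefficients
        d = (c[0::2] - c[1::2]) / 2.0        # detail coefficients
        details.append(d)
        c = s
    return c, details[::-1]                  # coarse-to-fine ordering

def haar_inverse(c, details, eps=0.0):
    """Reconstruct, zeroing details below the single error-control threshold."""
    kept = 0
    for d in details:
        d = np.where(np.abs(d) >= eps, d, 0.0)
        kept += np.count_nonzero(d)
        up = np.empty(2 * len(d))
        up[0::2] = c + d
        up[1::2] = c - d
        c = up
    return c, kept

# A mostly flat "water depth" field with one step: nearly all detail
# coefficients vanish, so the adaptive representation is very sparse.
h = np.ones(64)
h[40:] = 2.0
coarse, details = haar_forward(h)
h_rec, kept = haar_inverse(coarse, details, eps=1e-12)
print(kept)  # -> 3
```

Only three of the 63 detail coefficients are nonzero here, which is exactly the sparsity that lets an adaptive solver coarsen aggressively away from the step.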

    Gradual wavelet reconstruction of the velocity increments for turbulent wakes

    This work explores the properties of the velocity increment distributions for wakes of contrasting local Reynolds number and nature of generation (a cylinder wake and a multiscale-forced case, respectively). It makes use of a technique called gradual wavelet reconstruction (GWR) to generate constrained randomizations of the original data, the nature of which is a function of a parameter, ϑ. This controls the proportion of the energy between the Markov-Einstein length (∼ 0.8 Taylor scales) and integral scale that is fixed in place in the synthetic data. The properties of the increments for these synthetic data are then compared to the original data as a function of ϑ. We write a Fokker-Planck equation for the evolution of the velocity increments as a function of spatial scale, r, and, in line with previous work, expand the drift and diffusion terms up to fourth order in the increments and find no terms are relevant beyond the quadratic terms. Only the linear contribution to the expansion of the drift coefficient is non-zero and it exhibits a consistent scaling with ϑ for different flows above a low threshold. For the diffusion coefficient, we find a local Reynolds number independence in the relation between the constant term and ϑ for the multiscale-forced wakes. This term characterizes small scale structure and can be contrasted with the results for the Kolmogorov capacity of the zero-crossings of the velocity signals, which measures structure over all scales and clearly distinguishes between the types of forcing. Using GWR shows that results for the linear and quadratic terms in the expansion of the diffusion coefficient are significant, providing a new means for identifying intermittency and anomalous scaling in turbulence datasets. All our data showed a similar scaling behavior for these parameters irrespective of forcing type or Reynolds number, indicating a degree of universality to the anomalous scaling of turbulence. Hence, these terms are a useful metric for testing the efficacy of synthetic turbulence generation schemes used in large eddy simulation, and we also discuss the implications of our approach for reduced order modeling of the Navier-Stokes equations.
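The drift and diffusion coefficients in such a Fokker-Planck description are estimated from conditional Kramers-Moyal moments. The sketch below applies that estimation machinery to a toy Langevin process in time (dx = -γx dt + √(2D) dW), whose drift is linear and diffusion constant by construction; the paper works with increments as a function of scale r, so everything here is an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Langevin process dx = -gamma*x dt + sqrt(2*D) dW.  This only
# illustrates conditional-moment (Kramers-Moyal) estimation on a process
# whose drift (-gamma*x) and diffusion (D) are known in advance.
gamma, D, dt, n = 1.0, 0.5, 1e-3, 1_000_000
x = np.empty(n)
x[0] = 0.0
kicks = rng.standard_normal(n - 1) * np.sqrt(2 * D * dt)
for i in range(n - 1):
    x[i + 1] = x[i] - gamma * x[i] * dt + kicks[i]

# D1(x) = <dx | x> / dt  (drift),  D2(x) = <dx^2 | x> / (2 dt)  (diffusion).
dx = np.diff(x)
edges = np.linspace(-1.5, 1.5, 31)
which = np.digitize(x[:-1], edges)
centers, d1, d2 = [], [], []
for b in range(1, len(edges)):
    m = which == b
    if m.sum() > 5000:                      # only well-populated bins
        centers.append(0.5 * (edges[b - 1] + edges[b]))
        d1.append(dx[m].mean() / dt)
        d2.append((dx[m] ** 2).mean() / (2 * dt))

slope = np.polyfit(centers, d1, 1)[0]       # linear drift: expect ~ -gamma
print(round(slope, 1), round(float(np.mean(d2)), 1))
```

Fitting the binned conditional means recovers the linear drift slope near -1 and a near-constant diffusion near 0.5, mirroring the paper's finding that a linear drift term and low-order diffusion terms suffice.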

    A rapid non-iterative proper orthogonal decomposition based outlier detection and correction for PIV data

    The present work proposes a novel method of detection and estimation of outliers in particle image velocimetry measurements by the modification of the temporal coefficients associated with a proper orthogonal decomposition of an experimental time series. Using synthetic outliers applied to two sequences of vector fields, the method is benchmarked against state-of-the-art approaches recently proposed to remove the influence of outliers. Compared with these methods, the proposed approach offers an increase in accuracy and robustness for the detection of outliers and comparable accuracy for their estimation.
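A crude stand-in for the coefficient-modification idea: project a contaminated vector-field sequence onto its POD modes and median-filter each temporal coefficient in time, so isolated spikes from spurious vectors are suppressed. The paper's actual detection criterion and estimation step are more refined; everything below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# Vector-field time series with scattered spurious vectors (not whole
# frames).  Sizes, outlier magnitudes and the 3-point median are
# illustrative choices only.
n_space, n_time = 200, 80
t = np.linspace(0, 2 * np.pi, n_time)
clean = np.outer(rng.standard_normal(n_space), np.sin(3 * t))
X = clean + 0.01 * rng.standard_normal((n_space, n_time))
X[rng.integers(0, n_space, 40), rng.integers(0, n_time, 40)] += 5.0

U, s, Vt = np.linalg.svd(X, full_matrices=False)
A = s[:, None] * Vt                      # temporal coefficients

# 3-point running median of each temporal coefficient in time suppresses
# the isolated spikes the spurious vectors introduce.
A_f = A.copy()
A_f[:, 1:-1] = np.median(np.stack([A[:, :-2], A[:, 1:-1], A[:, 2:]]), axis=0)
X_corr = U @ A_f

err_before = np.linalg.norm(X - clean)
err_after = np.linalg.norm(X_corr - clean)
print(err_after < err_before)  # -> True
```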

    Measures of biological diversity: overview and unified framework

    A variety of statistical measures of diversity have been employed across biology and ecology, including Shannon entropy, the Gini-Simpson index, so-called effective numbers of species (aka Hill's measures), and more besides. I will review several major options and then present a comprehensive formalism in which all these can be embedded as special cases, depending on the setting of two parameters, labelled degree and order. This mathematical framework is adapted from generalized information theory. A discussion of the theoretical meaning of the parameters in biological applications provides insight into the conceptual features and limitations of current approaches. The unified framework described also allows for the development of a tailored solution for the measurement of biological diversity that jointly satisfies otherwise divergent desiderata put forward in the literature.
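The effective-number-of-species family mentioned above has a compact closed form: the Hill number of order q is D_q = (Σ p_i^q)^{1/(1-q)}, with the q → 1 limit equal to exp of the Shannon entropy, and the Gini-Simpson index recoverable as 1 - 1/D_2. A minimal sketch (the function name is ours):

```python
import numpy as np

def hill_number(p, q):
    """Effective number of species of order q for abundance distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0] / p.sum()
    if np.isclose(q, 1.0):               # order-1 limit: exp(Shannon entropy)
        return float(np.exp(-np.sum(p * np.log(p))))
    return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

# A perfectly even community of 4 species has D_q = 4 for every order q,
# while skewed abundances drive D_q down as q weights dominant species more.
even = [0.25, 0.25, 0.25, 0.25]
skew = [0.85, 0.05, 0.05, 0.05]
print([round(hill_number(even, q), 3) for q in (0, 1, 2)])  # -> [4.0, 4.0, 4.0]
print(round(hill_number(skew, 2), 3))  # -> 1.37
```

Varying q trades off richness (q = 0 counts species regardless of abundance) against dominance (large q is driven by the most abundant species), which is one axis of the two-parameter framework the abstract describes.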