
    Non-local modulation of the energy cascade in broad-band forced turbulence

    Classically, large-scale forced turbulence is characterized by a transfer of energy from large to small scales via nonlinear interactions. We have investigated the changes in this energy transfer process in broad-band forced turbulence, where an additional perturbation of the flow at smaller scales is introduced. The modulation of the energy dynamics via forcing at smaller scales occurs not only in the forced region but also over a broad range of length scales outside the forced bands, due to non-local triad interactions. Broad-band forcing changes the energy distribution and the energy transfer function in a characteristic manner, leading to a significant modulation of the turbulence. We studied the changes in this transfer of energy when varying the strength and location of the small-scale forcing support. The energy content in the larger scales was observed to decrease, while the energy transport power for scales between the large- and small-scale forcing regions was enhanced. This was investigated further in terms of the detailed transfer function between the triad contributions and by observing the long-time statistics of the flow. The energy is transferred toward smaller scales not only by wavenumbers of similar size, as in the case of large-scale forced turbulence, but by a much wider range of scales that can be externally controlled.
    Comment: submitted to Phys. Rev. E, 15 pages, 18 figures, uses revtex4.cl
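    The basic diagnostic behind this kind of analysis is the shell-averaged energy spectrum E(k) of a periodic velocity field. A minimal sketch, using a random stand-in field rather than any data from the study, with all names and parameters hypothetical:

```python
import numpy as np

# Shell-averaged energy spectrum E(k) of a periodic 3D velocity field.
# The field here is random noise, purely for illustration.
rng = np.random.default_rng(1)
n = 32
u = rng.standard_normal((3, n, n, n))      # stand-in velocity components

uk = np.fft.fftn(u, axes=(1, 2, 3)) / n**3
k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
kmag = np.sqrt(KX**2 + KY**2 + KZ**2)

# Energy density per Fourier mode, summed into integer wavenumber shells.
energy_density = 0.5 * np.sum(np.abs(uk)**2, axis=0)
shells = np.rint(kmag).astype(int)
E = np.bincount(shells.ravel(), weights=energy_density.ravel())
```

    Transfer functions such as T(k) are obtained analogously, by shell-summing the spectral advection term instead of the energy density.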

    A framework to assess the quality and robustness of LES codes

    We present a framework which can be used to rigorously assess and compare large-eddy simulation methods. We apply it to LES of homogeneous isotropic turbulence using three different discretizations and a Smagorinsky model. By systematically varying the simulation resolution and the Smagorinsky coefficient, one can determine parameter regions for which one, two or multiple flow properties are simultaneously predicted with minimal error. To this end, errors in the predicted longitudinal integral length scale, the resolved kinetic energy and the resolved enstrophy are considered. Parameter regions where all considered errors are simultaneously (nearly) minimal are termed ‘multi-objective optimal’ parameter regions. Surprisingly, we find that a standard second-order method has a larger ‘multi-objective optimal’ parameter region than the two fourth-order methods considered. Moreover, the errors in the respective ‘multi-objective optimal’ regions are also lowest for the second-order scheme.

    Optimal model parameters for multi-objective large-eddy simulations

    A methodology is proposed for the assessment of error dynamics in large-eddy simulations. It is demonstrated that the optimization of model parameters with respect to one flow property can come at the expense of the accuracy with which other flow properties are predicted. Therefore, an approach is introduced which allows one to assess the total error based on various flow properties simultaneously. We show that parameter settings exist for which all monitored errors are "near optimal," and refer to such regions as "multi-objective optimal parameter regions." We focus on multi-objective errors that are obtained from weighted spectra, emphasizing both large- as well as small-scale errors. These multi-objective optimal parameter regions depend strongly on the simulation Reynolds number and the resolution. At too coarse a resolution, no multi-objective optimal region may exist, as not all error components can simultaneously be made sufficiently small. The identification of multi-objective optimal parameter regions can be adopted to effectively compare different subgrid models. A comparison between large-eddy simulations using the Lilly-Smagorinsky model, the dynamic Smagorinsky model and a new Re-consistent eddy-viscosity model illustrates this. Based on the new methodology for error assessment, the latter model is found to be the most accurate and robust among the selected subgrid models, in combination with the finite volume discretization used in the present study.
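    The notion of a "multi-objective optimal" region can be illustrated numerically: evaluate several error measures over a grid of model parameters and keep the parameter pairs where every error is close to its own minimum. A toy sketch with entirely made-up error surfaces (not the paper's data); the coefficient grid, resolutions, and the 20% tolerance are all assumptions:

```python
import numpy as np

# Hypothetical error surfaces over a grid of Smagorinsky coefficients Cs
# and resolutions N, for three monitored quantities (length scale,
# energy, enstrophy). The functional forms are invented for illustration.
Cs = np.linspace(0.05, 0.25, 21)
N = np.array([32, 48, 64, 96])
CS, NN = np.meshgrid(Cs, N, indexing="ij")

e_L = (CS - 0.17)**2 + 10.0 / NN   # integral-length-scale error
e_E = (CS - 0.15)**2 + 8.0 / NN    # resolved-kinetic-energy error
e_W = (CS - 0.12)**2 + 20.0 / NN   # resolved-enstrophy error

def near_optimal(err, tol=1.2):
    """Cells where the error is within 20% of its own minimum."""
    return err <= tol * err.min()

# The multi-objective optimal region: all errors near-minimal at once.
region = near_optimal(e_L) & near_optimal(e_E) & near_optimal(e_W)
```

    With real simulation data, `e_L`, `e_E` and `e_W` would be measured against a reference DNS, and the boolean mask directly visualizes the region described in the abstract.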

    Three regularization models of the Navier-Stokes equations

    We determine how the differences in the treatment of the subfilter-scale physics affect the properties of the flow for three closely related regularizations of the Navier-Stokes equations. The consequences for the applicability of the regularizations as SGS models are also shown by examining their effects on superfilter-scale properties. Numerical solutions of the Clark-alpha model are compared to two previously employed regularizations, LANS-alpha and Leray-alpha (at Re ~ 3300, Taylor Re ~ 790), and to a DNS. We derive the Karman-Howarth equation for both the Clark-alpha and Leray-alpha models. We confirm one of two possible scalings resulting from this equation for Clark, as well as its associated k^(-1) energy spectrum. At sub-filter scales, Clark-alpha possesses a total dissipation and a characteristic time to reach a statistical turbulent steady-state similar to Navier-Stokes, but exhibits greater intermittency. As a SGS model, Clark reproduces the energy spectrum and intermittency properties of the DNS. For the Leray model, increasing the filter width decreases the nonlinearity and the effective Re is substantially decreased. Even for the smallest value of alpha studied, Leray-alpha was inadequate as a SGS model. The LANS energy spectrum k^1, consistent with its so-called "rigid bodies," precludes a reproduction of the large-scale energy spectrum of the DNS at high Re while achieving a large reduction in resolution. However, this same feature reduces its intermittency compared to Clark-alpha (which shares a similar Karman-Howarth equation). Clark is found to be the best approximation for reproducing the total dissipation rate and the energy spectrum at scales larger than alpha, whereas high-order intermittency properties for larger values of alpha are best reproduced by LANS-alpha.
    Comment: 21 pages, 8 figures

    Distribution of the Timing, Trigger and Control Signals in the Endcap Cathode Strip Chamber System at CMS

    This paper presents the implementation of the Timing, Trigger and Control (TTC) signal distribution tree in the Cathode Strip Chamber (CSC) sub-detector of the CMS Experiment at CERN. The key electronic component, the Clock and Control Board (CCB), is described in detail, as is the transmission of TTC signals from the top of the system down to the front-end boards.

    Random forests with random projections of the output space for high dimensional multi-label classification

    We adapt the idea of random projections, applied to the output space, to enhance tree-based ensemble methods in the context of multi-label classification. We show how the learning time complexity can be reduced without affecting the computational complexity or the accuracy of predictions. We also show that random output-space projections can be used to reach different bias-variance tradeoffs over a broad panel of benchmark problems, and that this may lead to improved accuracy while significantly reducing the computational burden of the learning stage.
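    The core idea can be sketched in a few lines: compress the binary label matrix with a Gaussian random projection, train the forest on the compressed targets, then decode predictions back to label space. This is a minimal illustration, not the paper's implementation; the projection dimension, decoder (pseudo-inverse plus a 0.5 threshold) and dataset are all assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_multilabel_classification

rng = np.random.default_rng(0)
X, Y = make_multilabel_classification(n_samples=200, n_features=20,
                                      n_classes=40, random_state=0)

m = 10  # projected output dimension, m << number of labels
P = rng.normal(size=(Y.shape[1], m)) / np.sqrt(m)  # random projection

forest = RandomForestRegressor(n_estimators=50, random_state=0)
forest.fit(X, Y @ P)                     # learn in the compressed space

Z_hat = forest.predict(X)                # predictions in projected space
Y_hat = (Z_hat @ np.linalg.pinv(P) > 0.5).astype(int)  # decode + threshold
```

    Fitting against `m` projected targets instead of 40 labels is where the learning-time reduction comes from; the random projection approximately preserves distances in the output space, so the decoded predictions remain usable.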

    Distributed utterances

    I propose an apparatus for handling intrasentential change in context. The standard approach has problems with sentences containing multiple occurrences of the same demonstrative or indexical. My proposal involves the idea that contexts can be complex. Complex contexts are built out of ("simple") Kaplanian contexts by ordered n-tupling. With these we can revise the clauses of Kaplan's Logic of Demonstratives so that each part of a sentence is taken in a different component of a complex context. I consider other applications of the framework: to agentially distributed utterances (ones made partly by one speaker and partly by another); to an account of scare-quoting; and to an account of a binding-like phenomenon that avoids what Kit Fine calls "the antinomy of the variable."

    Selective labeling: identifying representative sub-volumes for interactive segmentation

    Automatic segmentation of challenging biomedical volumes with multiple objects is still an open research field. Automatic approaches usually require a large amount of training data to be able to model the complex and often noisy appearance and structure of biological organelles and their boundaries. However, due to the variety of different biological specimens and the large volume sizes of the datasets, training data is costly to produce, error-prone and sparsely available. Here, we propose a novel Selective Labeling algorithm to overcome these challenges: an unsupervised sub-volume proposal method that identifies the most representative regions of a volume. This massively reduced subset of regions is then manually labeled and combined with an active learning procedure to fully segment the volume. Results on a publicly available EM dataset demonstrate the quality of our approach, achieving equivalent segmentation accuracy with only 5% of the training data.
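    One simple way to realize an unsupervised sub-volume proposal step is to describe each sub-volume by a cheap feature vector, cluster the vectors, and label only the sub-volume nearest each cluster center. A hypothetical sketch under these assumptions (intensity histograms as features, k-means as the clustering method, a random volume as stand-in data); it is not the paper's algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
volume = rng.random((64, 64, 64))       # stand-in for an EM volume

def subvolume_features(vol, size=16, bins=8):
    """Split the volume into cubes and return per-cube intensity histograms."""
    feats, coords = [], []
    for x in range(0, vol.shape[0], size):
        for y in range(0, vol.shape[1], size):
            for z in range(0, vol.shape[2], size):
                cube = vol[x:x + size, y:y + size, z:z + size]
                hist, _ = np.histogram(cube, bins=bins, range=(0, 1),
                                       density=True)
                feats.append(hist)
                coords.append((x, y, z))
    return np.array(feats), coords

feats, coords = subvolume_features(volume)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(feats)

# One representative sub-volume per cluster: the member nearest its centroid.
reps = [coords[np.argmin(np.linalg.norm(feats - c, axis=1) +
                         np.where(km.labels_ == i, 0.0, np.inf))]
        for i, c in enumerate(km.cluster_centers_)]
```

    The `reps` coordinates would then be handed to an annotator, and the resulting labels fed into the active learning loop described above.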