
    The Lundgren-Monin-Novikov Hierarchy: Kinetic Equations for Turbulence

    We present an overview of recent works on the statistical description of turbulent flows in terms of probability density functions (PDFs) in the framework of the Lundgren-Monin-Novikov (LMN) hierarchy. Within this framework, evolution equations for the PDFs are derived from the basic equations of fluid motion. The closure problem arises either as a coupling to multi-point PDFs or as conditional averages that enter the evolution equations as unknown functions. We mainly focus on the latter case and use data from direct numerical simulations (DNS) to specify the unclosed terms. Apart from giving an introduction to the basic analytical techniques, applications to two-dimensional vorticity statistics, to the single-point velocity and vorticity statistics of three-dimensional turbulence, to the temperature statistics of Rayleigh-Bénard convection, and to Burgers turbulence are discussed.
    Comment: Accepted for publication in C. R. Acad. Sc
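    To make the closure problem concrete, the lowest member of the hierarchy can be sketched as follows. This is the generic single-point form of the LMN equation (standard in the literature, not quoted from the works surveyed): for the single-point velocity PDF f(v; x, t) of an incompressible Navier-Stokes flow,

        \partial_t f(\boldsymbol{v};\boldsymbol{x},t)
          + \boldsymbol{v}\cdot\nabla_{\boldsymbol{x}} f
          = -\nabla_{\boldsymbol{v}}\cdot
            \Big[\Big\langle -\tfrac{1}{\rho}\nabla p + \nu\Delta\boldsymbol{u}
            \;\Big|\; \boldsymbol{u}(\boldsymbol{x},t)=\boldsymbol{v}\Big\rangle\, f\Big],

    where the conditional average of the pressure-gradient and viscous terms, given the local velocity, is precisely the unknown function that the DNS data are used to specify.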

    Beam Dynamics Studies for SRF Photoinjectors

    The SRF photoinjector combines the advantages of photo-assisted production of high-brightness, short electron pulses with the high-gradient, low-loss continuous-wave (CW) operation of a superconducting radio-frequency (SRF) cavity. The paper discusses beam dynamics considerations for ERL-class applications of SRF photoinjectors. One case of particular interest is the design of the SRF photoinjector for BERLinPro, an ERL test facility demanding a high-brightness beam with an emittance better than 1 mm mrad at 77 pC and an average current of 100 mA.
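    As a quick plausibility check on those figures: in CW operation the average current is simply bunch charge times bunch repetition rate. A minimal sketch in Python, where the 1.3 GHz repetition frequency is an assumption (a typical SRF linac frequency) rather than a number taken from the abstract:

        # Average current in CW operation: I_avg = q_bunch * f_rep.
        # The repetition rate below is an assumed value, not from the abstract.
        bunch_charge = 77e-12  # C, bunch charge quoted in the abstract
        rep_rate = 1.3e9       # Hz, assumed CW bunch repetition frequency

        avg_current = bunch_charge * rep_rate
        print(f"average current = {avg_current * 1e3:.0f} mA")  # -> 100 mA

    At 77 pC per bunch, filling every bucket of a 1.3 GHz RF system indeed gives roughly 100 mA of average current.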

    Learning to Learn from Weak Supervision by Full Supervision

    In this paper, we propose a method for training neural networks when we have a large set of data with weak labels and a small amount of data with true labels. In our proposed model, we train two neural networks: a target network (the learner) and a confidence network (the meta-learner). The target network is optimized to perform a given task and is trained using a large set of weakly annotated data. We propose to control the magnitude of the gradient updates to the target network using the scores provided by the confidence network, which is trained on the small amount of supervised data. In this way we prevent weight updates computed from noisy labels from harming the quality of the target network model.
    Comment: Accepted at NIPS Workshop on Meta-Learning (MetaLearn 2017), Long Beach, CA, US
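    A minimal PyTorch sketch of the idea (not the authors' implementation; all layer sizes and names are illustrative assumptions, and the confidence network's own training step on the clean, fully supervised data is omitted):

        # Confidence-weighted training: a confidence network scores each weakly
        # labeled example, and that score scales the example's contribution to
        # the target network's gradient update.
        import torch
        import torch.nn as nn

        target_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
        confidence_net = nn.Sequential(nn.Linear(32 + 2, 16), nn.ReLU(),
                                       nn.Linear(16, 1), nn.Sigmoid())
        opt = torch.optim.Adam(target_net.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss(reduction="none")  # keep per-example losses

        def weakly_supervised_step(x, weak_y):
            """One update on weakly labeled data, scaled by predicted confidence."""
            per_example_loss = loss_fn(target_net(x), weak_y)
            # The confidence net sees the input and its weak label (one-hot) and
            # outputs a score in (0, 1); detach so only the target net is updated.
            weak_onehot = nn.functional.one_hot(weak_y, num_classes=2).float()
            conf = confidence_net(torch.cat([x, weak_onehot], dim=1))
            conf = conf.squeeze(1).detach()
            loss = (conf * per_example_loss).mean()  # down-weights noisy examples
            opt.zero_grad()
            loss.backward()
            opt.step()
            return loss.item()

        # Usage: a batch of 8 random examples with (possibly noisy) weak labels.
        x = torch.randn(8, 32)
        weak_y = torch.randint(0, 2, (8,))
        print(weakly_supervised_step(x, weak_y))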