
    Transitions in large eddy simulation of box turbulence

    One promising decomposition of turbulent dynamics is that into building blocks such as equilibrium and periodic solutions and orbits connecting these. While the numerical approximation of such building blocks is feasible for flows in small domains and at low Reynolds numbers, computations in developed turbulence are currently out of reach because of the large number of degrees of freedom necessary to represent Navier-Stokes flow on all relevant spatial scales. We mitigate this problem by applying large eddy simulation (LES), which aims to model, rather than resolve, motion on scales below the filter length, which is fixed by a model parameter. By considering a periodic spatial domain, we avoid complications that arise in LES modelling in the presence of boundary layers. We consider the motion of an LES fluid subject to a constant body force of the Taylor-Green type as the separation between the forcing length scale and the filter length is increased. In particular, we discuss the transition from laminar to weakly turbulent motion, regulated by simple invariant solutions, on a grid of 32^3 points.
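The Taylor-Green body force mentioned in the abstract can be sketched numerically. The snippet below builds the classic Taylor-Green forcing field on a 32^3 periodic grid (the grid size from the abstract; the amplitude and normalization are illustrative assumptions, not the paper's exact setup) and verifies spectrally that the force is divergence-free, so it injects no compressive component.

```python
import numpy as np

# 32^3 periodic grid on [0, 2*pi)^3; grid size taken from the abstract,
# amplitude/normalization are illustrative assumptions.
n = 32
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# Classic Taylor-Green body force:
#   f = (sin x cos y cos z, -cos x sin y cos z, 0)
fx = np.sin(X) * np.cos(Y) * np.cos(Z)
fy = -np.cos(X) * np.sin(Y) * np.cos(Z)
fz = np.zeros_like(fx)

# Spectral divergence: integer wavenumbers on the 2*pi-periodic box.
k = np.fft.fftfreq(n, d=1.0 / n)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
div_hat = (1j * KX * np.fft.fftn(fx)
           + 1j * KY * np.fft.fftn(fy)
           + 1j * KZ * np.fft.fftn(fz))
div = np.real(np.fft.ifftn(div_hat))

# Solenoidal by construction: d/dx(fx) = cos x cos y cos z exactly
# cancels d/dy(fy) = -cos x cos y cos z, so |div| is round-off only.
print(np.abs(div).max())
```

Because the divergence vanishes analytically, the spectral check should return a value at machine precision, confirming the forcing only stirs the solenoidal (incompressible) part of the velocity field.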

    Alignment Knowledge Distillation for Online Streaming Attention-based Speech Recognition

    This article describes an efficient training method for online streaming attention-based encoder-decoder (AED) automatic speech recognition (ASR) systems. AED models have achieved competitive performance in offline scenarios by jointly optimizing all components. They have recently been extended to an online streaming framework via models such as monotonic chunkwise attention (MoChA). However, the elaborate attention calculation process is not robust against long-form speech utterances. Moreover, the sequence-level training objective and time-restricted streaming encoder cause a nonnegligible delay in token emission during inference. To address these problems, we propose CTC synchronous training (CTC-ST), in which CTC alignments are leveraged as a reference for token boundaries to enable a MoChA model to learn optimal monotonic input-output alignments. We formulate a purely end-to-end training objective to synchronize the boundaries of MoChA to those of CTC. The CTC model shares an encoder with the MoChA model to enhance the encoder representation. Moreover, the proposed method provides alignment information learned in the CTC branch to the attention-based decoder. Therefore, CTC-ST can be regarded as self-distillation of alignment knowledge from CTC to MoChA. Experimental evaluations on a variety of benchmark datasets show that the proposed method significantly reduces recognition errors and emission latency simultaneously. The robustness to long-form and noisy speech is also demonstrated. We compare CTC-ST with several methods that distill alignment knowledge from a hybrid ASR system and show that CTC-ST can achieve a comparable tradeoff of accuracy and latency without relying on external alignment information.
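The core idea of CTC-ST (synchronizing MoChA token boundaries to CTC boundaries) can be illustrated with a toy objective. The sketch below is a minimal assumption-laden stand-in for the paper's loss: per-token emission frame indices from the two branches are compared with an L1 penalty, and the penalty is added to the usual attention and CTC losses. The function name, weighting, and L1 form are illustrative, not the paper's exact formulation.

```python
import numpy as np

def ctc_st_loss(mocha_boundaries, ctc_boundaries, ctc_loss, att_loss, w=0.5):
    """Toy CTC synchronous training objective (illustrative sketch only).

    mocha_boundaries / ctc_boundaries: per-token emission frame indices,
    e.g. MoChA expected halting positions vs. CTC spike frames.
    The synchronization term pulls MoChA boundaries toward the CTC
    alignment, which is what reduces emission latency in CTC-ST.
    """
    mocha = np.asarray(mocha_boundaries, dtype=float)
    ctc = np.asarray(ctc_boundaries, dtype=float)
    sync = np.mean(np.abs(mocha - ctc))  # mean per-token boundary gap
    # Total objective: attention loss + CTC loss (shared encoder)
    # + weighted boundary-synchronization term.
    return att_loss + ctc_loss + w * sync

# MoChA emits each token a couple of frames after the CTC spike,
# so the sync term contributes a positive penalty:
loss = ctc_st_loss([12.0, 25.0, 40.0], [10.0, 22.0, 38.0],
                   ctc_loss=1.5, att_loss=2.0, w=0.5)
print(loss)  # 2.0 + 1.5 + 0.5 * mean([2, 3, 2]) ~ 4.667
```

In the actual method the boundaries come from differentiable quantities inside the model (CTC spikes and MoChA halting probabilities) rather than fixed integers, but the structure of the combined objective is the same: the sync term vanishes exactly when the two branches agree on where each token is emitted.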