
    RANS Equations with Explicit Data-Driven Reynolds Stress Closure Can Be Ill-Conditioned

    Reynolds-averaged Navier–Stokes (RANS) simulations with turbulence closure models continue to play an important role in industrial flow simulations. However, the commonly used linear eddy viscosity models are intrinsically unable to handle flows with non-equilibrium turbulence, while Reynolds stress models are plagued by a lack of robustness. Recent studies of plane channel flows found that even substituting Reynolds stresses with errors below 0.5% from direct numerical simulation (DNS) databases into the RANS equations leads to velocities with large errors (up to 35%). While such an observation may have only marginal relevance to traditional Reynolds stress models, it is disturbing for the recently emerging data-driven models that treat the Reynolds stress as an explicit source term in the RANS equations, as it suggests that the RANS equations with such models can be ill-conditioned. So far, a rigorous analysis of the conditioning of such models is still lacking. In this work we therefore propose a metric based on a local condition number function for a priori evaluation of the conditioning of the RANS equations. We further show that the ill-conditioning cannot be explained by the global matrix condition number of the discretized RANS equations. Comprehensive numerical tests are performed on turbulent channel flows at various Reynolds numbers and on two complex flows, i.e., flow over periodic hills and flow in a square duct. Results suggest that the proposed metric adequately explains observations in previous studies, namely the deterioration of model conditioning with increasing Reynolds number and the better conditioning of the implicit treatment of the Reynolds stress compared with the explicit treatment. This metric can play a critical role in the future development of data-driven turbulence models by making good conditioning a requirement on these models.
    Comment: 35 pages, 18 figures
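    The central observation, that sub-percent errors in an explicitly imposed Reynolds stress can be amplified into much larger velocity errors, can be reproduced in a few lines. Below is a minimal numerical sketch, not code from the paper: the grid, the viscosity, and the manufactured velocity and stress profiles are all invented for illustration, and only the structure of the channel-flow mean-momentum balance follows the setting the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D illustration (assumed setup, not the paper's own code):
# for fully developed channel flow the mean-momentum balance is
#   nu * d2u/dy2 + d(tau)/dy = dp/dx,   u(0) = u(1) = 0,
# with tau the Reynolds shear stress.  We manufacture a "DNS" stress
# consistent with a known velocity, perturb it by 0.5%, and measure
# the error amplification in the recovered velocity.
n = 201
y = np.linspace(0.0, 1.0, n)
dy = y[1] - y[0]
nu = 1e-3            # small viscosity as a stand-in for high Reynolds number
dpdx = -1.0          # constant pressure gradient

u_true = np.sin(np.pi * y)                 # manufactured mean velocity (no-slip)
# Stress that exactly balances the momentum equation for u_true:
tau_dns = dpdx * (y - 0.5) - nu * np.pi * np.cos(np.pi * y)

# Discrete operator for nu * d2/dy2 with Dirichlet (no-slip) boundary rows.
A = nu / dy**2 * (np.diag(-2.0 * np.ones(n))
                  + np.diag(np.ones(n - 1), 1)
                  + np.diag(np.ones(n - 1), -1))
A[0, :] = 0.0; A[0, 0] = 1.0
A[-1, :] = 0.0; A[-1, -1] = 1.0

def velocity_from_stress(tau):
    """Explicit treatment: the stress enters purely as a source term."""
    rhs = dpdx - np.gradient(tau, dy)
    rhs[0] = rhs[-1] = 0.0                 # enforce u = 0 at both walls
    return np.linalg.solve(A, rhs)

u_ref = velocity_from_stress(tau_dns)
tau_noisy = tau_dns * (1.0 + 0.005 * rng.standard_normal(n))   # 0.5% error
u_noisy = velocity_from_stress(tau_noisy)

err_u = np.linalg.norm(u_noisy - u_ref) / np.linalg.norm(u_ref)
print(f"0.5% stress error  ->  {100 * err_u:.0f}% velocity error")
```

    In this toy setup the velocity error is typically an order of magnitude larger than the imposed 0.5% stress error, and the amplification grows as the viscosity shrinks, even though the discretized operator itself is perfectly solvable; this mismatch between the global matrix condition number and the observed sensitivity is exactly the gap the paper's local condition number metric is designed to close.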

    To be or not to be stable, that is the question: understanding neural networks for inverse problems

    The solution of linear inverse problems arising, for example, in signal and image processing is challenging because ill-conditioning amplifies the noise in the data. Recently introduced deep-learning-based algorithms outperform the more traditional model-based approaches, but they typically suffer from instability with respect to data perturbations. In this paper, we theoretically analyse the trade-off between neural network stability and accuracy in the solution of linear inverse problems. Moreover, we propose supervised and unsupervised strategies that increase network stability while maintaining good accuracy, by inheriting regularization from a model-based iterative scheme during network training. Extensive numerical experiments on image deblurring confirm the theoretical results and the effectiveness of the proposed networks in solving inverse problems stably with respect to noise.
    Comment: 26 pages, 9 figures, divided into 4 blocks of figures in the LaTeX code. The paper will be submitted to a journal soon; this is a preliminary version, and updated versions will be uploaded to arXiv
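    The noise amplification the abstract refers to, and the stabilizing effect of a model-based iterative scheme, are easy to demonstrate on a small problem. The sketch below is an assumed illustration, not the paper's method or experiments: it uses a hypothetical 1-D Gaussian deblurring operator and Landweber iteration as a representative regularizing scheme of the kind the paper proposes to fold into network training.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D deblurring problem: naive inversion of an
# ill-conditioned blur operator explodes the data noise, while a few
# Landweber iterations with early stopping stay stable.
n = 128
x = np.linspace(0.0, 1.0, n)
signal = (np.abs(x - 0.5) < 0.15).astype(float)    # toy piecewise signal

# Gaussian blur matrix: severely ill-conditioned for narrow kernels.
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.02**2))
K /= K.sum(axis=1, keepdims=True)
print(f"cond(K) = {np.linalg.cond(K):.2e}")

data = K @ signal + 1e-3 * rng.standard_normal(n)  # blurred + small noise

# Naive inversion: the small data noise is amplified by cond(K).
x_naive = np.linalg.solve(K, data)

# Landweber iteration  x <- x + step * K^T (data - K x):
# stopping after a fixed number of steps acts as regularization.
step = 1.0 / np.linalg.norm(K, 2) ** 2
x_lw = np.zeros(n)
for _ in range(200):
    x_lw += step * K.T @ (data - K @ x_lw)

for name, rec in [("naive", x_naive), ("Landweber", x_lw)]:
    err = np.linalg.norm(rec - signal) / np.linalg.norm(signal)
    print(f"{name:>9}: relative error {err:.2f}")
```

    The naive reconstruction is dominated by amplified noise while the iterative one remains close to the true signal; the paper's proposal is, roughly, to let a trained network inherit this kind of stability from such a scheme instead of relying on early-stopped iteration alone.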