
    Modulation of homogeneous turbulence seeded with finite size bubbles or particles

    The dynamics of homogeneous, isotropic turbulence seeded with finite-sized particles or bubbles is investigated in a series of numerical simulations, using the force-coupling method for the particle phase and low-wavenumber forcing of the flow to sustain the turbulence. Results are given on the modulation of the turbulence due to massless bubbles, neutrally buoyant particles, and inertial particles of specific density 1.4, at a volumetric concentration of 6%. Buoyancy forces due to gravity are excluded to emphasize finite-size and inertial effects of the bubbles or particles and their interactions with the turbulence. Besides observing the classical entrapment of bubbles and the expulsion of inertial particles by vortex structures, we analyze the Lagrangian statistics for the velocity and acceleration of the dispersed phase. The turbulent fluctuations are damped at mid-range wavenumbers by the bubbles or particles, while the small-scale kinetic energy is significantly enhanced. Unexpectedly, the modulation of turbulence depends only slightly on the dispersion characteristics (bubble entrapment in vortices or inertial sweeping of the solid particles) but is closely related to the stresslet component (a finite-size effect) of the flow disturbances. The pivoting wavenumber characterizing the transition from damped to enhanced energy content is shown to vary with the size of the bubbles or particles. The spectrum of the energy transfer by the particle phase is examined, and the possibility of representing it, at large scales, through an additional effective viscosity is discussed.

    A Method for Computing Inverse Parametric PDE Problems with Random-Weight Neural Networks

    We present a method for computing the inverse parameters and the solution field of inverse parametric PDEs based on randomized neural networks. This extends the local extreme learning machine technique, originally developed for forward PDEs, to inverse problems. We develop three algorithms for training the neural network to solve the inverse PDE problem. The first algorithm (NLLSQ) determines the inverse parameters and the trainable network parameters all together by the nonlinear least squares method with perturbations (NLLSQ-perturb). The second algorithm (VarPro-F1) eliminates the inverse parameters from the overall problem by variable projection to attain a reduced problem involving the trainable network parameters only. It solves the reduced problem first, by the NLLSQ-perturb algorithm, for the trainable network parameters, and then computes the inverse parameters by the linear least squares method. The third algorithm (VarPro-F2) eliminates the trainable network parameters from the overall problem by variable projection to attain a reduced problem involving the inverse parameters only. It solves the reduced problem for the inverse parameters first, and then computes the trainable network parameters afterwards. VarPro-F1 and VarPro-F2 are, in a sense, reciprocal to each other. The presented method produces accurate results for inverse PDE problems, as shown by the numerical examples herein. For noise-free data, the errors for the inverse parameters and the solution field decrease exponentially as the number of collocation points or the number of trainable network parameters increases, and can reach a level close to the machine accuracy. For noisy data, the accuracy degrades compared with the case of noise-free data, but the method remains quite accurate. The presented method has been compared with the physics-informed neural network method.
    Comment: 40 pages, 8 figures, 34 tables
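    The variable-projection idea behind VarPro-F2 can be illustrated on a toy problem. The sketch below is not the paper's algorithm or code; it assumes a 1D equation u'' + λu = f with a single unknown scalar λ, a tanh network whose hidden weights are random and frozen, and noise-free measurements. For each trial λ, the output weights follow from a linear least squares solve, so the reduced problem is a one-dimensional minimization over λ alone.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # Toy inverse problem (illustrative, not the paper's code):
    # recover lambda in  u''(x) + lambda*u(x) = f(x)  on [0, 1],
    # with u(0) = u(1) = 0.  Manufactured solution: u = sin(pi*x).
    lam_true = 2.0
    u_exact = lambda x: np.sin(np.pi * x)
    f_rhs = lambda x: (lam_true - np.pi**2) * np.sin(np.pi * x)

    rng = np.random.default_rng(0)
    n_neurons = 60
    W = rng.uniform(-4.0, 4.0, n_neurons)   # random, fixed hidden weights
    B = rng.uniform(-4.0, 4.0, n_neurons)   # random, fixed hidden biases

    def features(x):
        """tanh features and their exact second derivatives at points x."""
        phi = np.tanh(np.outer(x, W) + B)
        phi_xx = -2.0 * phi * (1.0 - phi**2) * W**2   # d2/dx2 of tanh(Wx+b)
        return phi, phi_xx

    x_col = np.linspace(0.0, 1.0, 80)          # collocation points
    x_dat = np.linspace(0.1, 0.9, 9)           # measurement points
    phi_c, phi_c_xx = features(x_col)
    phi_d, _ = features(x_dat)
    phi_b, _ = features(np.array([0.0, 1.0]))  # boundary points

    def reduced_residual(lam):
        """For a fixed lambda, the output weights beta solve a *linear*
        least squares problem; return the residual norm of that solve."""
        A = np.vstack([phi_c_xx + lam * phi_c, phi_d, phi_b])
        b = np.concatenate([f_rhs(x_col), u_exact(x_dat), [0.0, 0.0]])
        beta, *_ = np.linalg.lstsq(A, b, rcond=None)
        return np.linalg.norm(A @ beta - b)

    # Outer minimization over the inverse parameter alone (the "reduced
    # problem about the inverse parameters only").
    res = minimize_scalar(reduced_residual, bounds=(0.0, 5.0), method="bounded")
    lam_hat = res.x
    print(f"recovered lambda = {lam_hat:.6f} (true {lam_true})")
    ```

    The key point is that eliminating the linear (output-layer) parameters leaves a small, cheap outer problem; the paper's actual algorithms handle vector-valued inverse parameters and nonlinear couplings, which this sketch does not.
    
    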

    An Extreme Learning Machine-Based Method for Computational PDEs in Higher Dimensions

    We present two effective methods for solving high-dimensional partial differential equations (PDEs) based on randomized neural networks. Motivated by the universal approximation property of networks of this type, both methods extend the extreme learning machine (ELM) approach from low to high dimensions. With the first method, the unknown solution field in d dimensions is represented by a randomized feed-forward neural network, in which the hidden-layer parameters are randomly assigned and fixed while the output-layer parameters are trained. The PDE and the boundary/initial conditions, as well as the continuity conditions (for the local variant of the method), are enforced on a set of random interior/boundary collocation points. The resultant linear or nonlinear algebraic system, through its least squares solution, provides the trained values for the network parameters. With the second method, the high-dimensional PDE problem is reformulated through a constrained expression based on an Approximate variant of the Theory of Functional Connections (A-TFC), which avoids the exponential growth in the number of terms of TFC as the dimension increases. The free field function in the A-TFC constrained expression is represented by a randomized neural network and is trained by a procedure analogous to the first method. We present ample numerical simulations for a number of high-dimensional linear/nonlinear stationary/dynamic PDEs to demonstrate the performance of the two methods. They can produce accurate solutions to high-dimensional PDEs, with errors reaching levels not far from the machine accuracy for relatively low dimensions. Compared with the physics-informed neural network (PINN) method, the current methods are both cost-effective and more accurate for high-dimensional PDEs.
    Comment: 38 pages, 17 tables, 25 figures
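    The ELM recipe described for the first method (hidden-layer parameters random and frozen, output weights obtained from a linear least squares solve over collocation points) can be sketched in one dimension. This is an illustrative toy with parameter choices of my own, not the paper's code or its high-dimensional setting.

    ```python
    import numpy as np

    # ELM-style collocation sketch (illustrative, not the paper's code):
    # solve u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0, where the
    # manufactured solution is u(x) = sin(pi*x), so f(x) = -pi^2 sin(pi*x).
    rng = np.random.default_rng(1)
    n_neurons = 60
    W = rng.uniform(-4.0, 4.0, n_neurons)   # hidden weights: random, then frozen
    B = rng.uniform(-4.0, 4.0, n_neurons)   # hidden biases:  random, then frozen

    def phi(x):                  # hidden-layer outputs at points x
        return np.tanh(np.outer(x, W) + B)

    def phi_xx(x):               # their exact second x-derivatives
        t = phi(x)
        return -2.0 * t * (1.0 - t**2) * W**2

    x_col = np.linspace(0.0, 1.0, 100)       # interior collocation points
    x_bc = np.array([0.0, 1.0])              # boundary points

    # Enforce the PDE at the collocation points and the boundary conditions
    # at the endpoints; the only trainable parameters are the output weights.
    A = np.vstack([phi_xx(x_col), phi(x_bc)])
    b = np.concatenate([-np.pi**2 * np.sin(np.pi * x_col), [0.0, 0.0]])
    beta, *_ = np.linalg.lstsq(A, b, rcond=None)

    x_test = np.linspace(0.0, 1.0, 500)
    u_elm = phi(x_test) @ beta
    err = np.max(np.abs(u_elm - np.sin(np.pi * x_test)))
    print(f"max error: {err:.3e}")
    ```

    In higher dimensions the same structure holds, with the hidden layer taking d-dimensional inputs and the collocation points drawn randomly in the domain; only the size of the least squares system changes.
    
    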