
    MeshfreeFlowNet: A Physics-Constrained Deep Continuous Space-Time Super-Resolution Framework

    We propose MeshfreeFlowNet, a novel deep-learning-based super-resolution framework that generates continuous (grid-free) spatio-temporal solutions from low-resolution inputs. While computationally efficient, MeshfreeFlowNet accurately recovers the fine-scale quantities of interest. MeshfreeFlowNet allows for: (i) the output to be sampled at any spatio-temporal resolution, (ii) a set of Partial Differential Equation (PDE) constraints to be imposed, and (iii) training on fixed-size inputs on arbitrarily sized spatio-temporal domains owing to its fully convolutional encoder. We empirically study the performance of MeshfreeFlowNet on the task of super-resolution of turbulent flows in the Rayleigh-Benard convection problem. Across a diverse set of evaluation metrics, we show that MeshfreeFlowNet significantly outperforms existing baselines. Furthermore, we provide a large-scale implementation of MeshfreeFlowNet and show that it efficiently scales across large clusters, achieving 96.80% scaling efficiency on up to 128 GPUs and a training time of less than 4 minutes. We also provide an open-source implementation of our method that supports arbitrary combinations of PDE constraints.
    Comment: Supplementary Video: https://youtu.be/mjqwPch9gDo. Accepted to SC2
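As a rough illustration of the physics-constrained loss described above, the sketch below (plain NumPy, with a hypothetical incompressibility constraint standing in for the paper's Rayleigh-Benard PDE set) combines a data-fit term with a finite-difference PDE residual penalty. MeshfreeFlowNet itself evaluates residuals at continuously sampled space-time points via automatic differentiation; this discrete version only illustrates the loss structure.

```python
import numpy as np

def pde_constrained_loss(pred_u, pred_v, target_u, target_v,
                         dx=1.0, dy=1.0, lam=0.1):
    """Data-fit MSE plus a penalty on the incompressibility residual
    du/dx + dv/dy, evaluated with central finite differences.
    `lam` weights the PDE term; all names here are illustrative."""
    data_term = np.mean((pred_u - target_u) ** 2 + (pred_v - target_v) ** 2)
    # Central differences on the interior of the grid.
    dudx = (pred_u[1:-1, 2:] - pred_u[1:-1, :-2]) / (2 * dx)
    dvdy = (pred_v[2:, 1:-1] - pred_v[:-2, 1:-1]) / (2 * dy)
    pde_term = np.mean((dudx + dvdy) ** 2)
    return data_term + lam * pde_term

# A divergence-free field (u = y, v = x) incurs no PDE penalty,
# so a perfect prediction of it has zero total loss.
y, x = np.mgrid[0:8, 0:8].astype(float)
u, v = y.copy(), x.copy()
loss = pde_constrained_loss(u, v, u, v)
```

The PDE term acts as a soft regulariser: predictions that fit the data but violate the constraint are penalised even where no high-resolution target exists.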

    A staggered semi-implicit hybrid finite volume / finite element scheme for the shallow water equations at all Froude numbers

    We present a novel staggered semi-implicit hybrid FV/FE method for the numerical solution of the shallow water equations at all Froude numbers on unstructured meshes. A semi-discretization in time of the conservative Saint-Venant equations with bottom friction terms leads to their decomposition into a first-order hyperbolic subsystem containing the nonlinear convective term and a second-order wave equation for the pressure. For the spatial discretization of the free surface elevation, an unstructured mesh of triangular simplex elements is considered, whereas a dual grid of edge type is employed for the computation of the depth-averaged momentum vector. The first stage of the proposed algorithm consists of the solution of the nonlinear convective subsystem using an explicit Godunov-type FV method on the staggered grid. Next, a classical continuous FE scheme provides the free surface elevation at the vertices of the primal mesh. The semi-implicit strategy circumvents the contribution of the surface wave celerity to the CFL-type time step restriction, making the proposed algorithm well suited for low Froude number flows. The conservative formulation of the governing equations also allows the discretization of high Froude number flows with shock waves. As such, the new hybrid FV/FE scheme is able to deal simultaneously with both subcritical and supercritical flows. Besides, the algorithm is well balanced by construction. The accuracy of the overall methodology is studied numerically, and the C-property is proven theoretically and validated via numerical experiments. The solution of several Riemann problems attests to the robustness of the new method in dealing with flows containing bores and discontinuities. Finally, a 3D dam break problem over a dry bottom is studied, and our numerical results are successfully compared with numerical reference solutions and experimental data.
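The practical payoff of the semi-implicit strategy is the relaxed CFL restriction. The sketch below (illustrative NumPy, not the authors' code) contrasts the time-step limit of a fully explicit scheme, bounded by |u| + sqrt(g h), with the semi-implicit one, bounded by |u| alone.

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def time_steps(u, h, dx, cfl=0.9):
    """Compare CFL time-step limits for the shallow water equations:
    explicit schemes are restricted by the surface wave celerity
    sqrt(g*h), while a semi-implicit scheme of the kind described
    above is restricted only by the flow velocity |u|."""
    c = np.sqrt(G * h)                         # surface wave celerity
    dt_explicit = cfl * dx / np.max(np.abs(u) + c)
    dt_semi_implicit = cfl * dx / np.max(np.abs(u))
    return dt_explicit, dt_semi_implicit

# Low-Froude flow: u << sqrt(g*h), so the semi-implicit limit is
# roughly two orders of magnitude larger for these values.
u = np.full(100, 0.1)      # velocity [m/s]
h = np.full(100, 10.0)     # water depth [m]
dt_e, dt_si = time_steps(u, h, dx=1.0)
```

For high Froude numbers the two limits converge, which is consistent with the scheme remaining applicable to supercritical flows as well.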

    Lattice Boltzmann modeling for shallow water equations using high performance computing

    The aim of this dissertation project is to extend the standard lattice Boltzmann method (LBM) for shallow water flows in order to deal with three-dimensional flow fields. The shallow water and mass transport equations have wide applications in ocean, coastal, and hydraulic engineering, which can benefit from the advantages of the LBM. The LBM has recently become an attractive numerical method for solving various fluid dynamics phenomena; however, it has not been extensively applied to modeling shallow water flow and mass transport. Only a few works can be found on improving the LBM for mass transport in shallow water flows, and even fewer on extending it to model three-dimensional shallow water flow fields. The application of the LBM to the shallow water and mass transport equations has been limited because it is not clearly understood how the LBM solves these equations. The project first focuses on the importance of choosing enhanced collision operators, such as multiple-relaxation-time (MRT) and two-relaxation-time (TRT), over the standard single-relaxation-time (SRT) operator in the LBM. An MRT collision operator is chosen for the shallow water equations, while a TRT method is used for the advection-dispersion equation. Furthermore, two speed-of-sound techniques are introduced to account for heterogeneous and anisotropic dispersion coefficients. By selecting appropriate equilibrium distribution functions, the standard LBM is extended to solve three-dimensional wind-driven and density-driven circulation through a multi-layer LB model. An MRT-LBM model is used to solve each layer, coupled by the vertical viscosity forcing term. To increase solution stability, an implicit step is suggested to obtain stratified flow velocities. Numerical examples are presented to verify the multi-layer LB model against analytical solutions. The model's capability of calculating lateral and vertical distributions of the horizontal velocities is demonstrated for wind- and density-driven circulation over non-uniform bathymetry. The parallel performance of the LBM on central processing unit (CPU) based and graphics processing unit (GPU) based high-performance computing (HPC) architectures is investigated, showing attractive performance in terms of speedup and scalability.
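A minimal sense of how an LB collide-and-stream update works can be given for the advection-dispersion part. The sketch below is a single-relaxation-time (SRT/BGK) D1Q3 scheme in NumPy, shown for brevity; the dissertation argues for MRT and TRT operators, which split the relaxation rates, and the equilibrium used here is the standard advection-diffusion form rather than the shallow-water one.

```python
import numpy as np

W = np.array([2 / 3, 1 / 6, 1 / 6])   # D1Q3 lattice weights
E = np.array([0, 1, -1])              # D1Q3 lattice velocities

def equilibrium(conc, u):
    # f_i^eq = w_i * C * (1 + 3 e_i u): standard D1Q3 equilibrium
    # for advection-diffusion of a concentration C at velocity u.
    return W[:, None] * conc[None, :] * (1.0 + 3.0 * E[:, None] * u)

def lbm_step(f, u, tau):
    conc = f.sum(axis=0)                       # macroscopic concentration
    f = f + (equilibrium(conc, u) - f) / tau   # BGK (SRT) collision
    for i, e in enumerate(E):                  # streaming, periodic domain
        f[i] = np.roll(f[i], e)
    return f

# Advect a concentration pulse; total mass is conserved by construction.
nx, u, tau = 64, 0.1, 0.8
conc0 = np.exp(-0.05 * (np.arange(nx) - nx // 2) ** 2)
f = equilibrium(conc0, u)
for _ in range(100):
    f = lbm_step(f, u, tau)
mass = f.sum()
```

The relaxation time `tau` sets the effective dispersion coefficient; a TRT operator would relax the symmetric and antisymmetric parts of `f` with two different rates, which is what enables the anisotropic dispersion handling described above.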

    Deep learning approach to forecasting hourly solar irradiance

    Abstract: In this dissertation, six artificial intelligence (AI) based methods for forecasting solar irradiance are presented. Solar energy is a clean renewable energy source (RES) which is free and abundant in nature. Despite the environmental impacts of fossil energy, however, global dependence on it is yet to drop appreciably in favor of solar energy for power generation. Although the latest improvements in photovoltaic (PV) cell technology have led to a significant drop in the cost of solar panels, solar power remains unattractive to some consumers due to its unpredictability. Consequently, accurate prediction of solar irradiance for stable solar power production continues to be a critical need, both in physical simulation and in artificial intelligence. The performance of the various methods used to predict solar irradiance depends on the diversity of the dataset, the time step, the experimental setup, the performance evaluators, and the forecasting horizon. In this study, historical meteorological data for the city of Johannesburg were used as training data for the solar irradiance forecast. The data collected for this work spanned 1984 to 2019, of which only ten years (2009 to 2018) were used. The tools used were Jupyter Notebook and a computer with an Nvidia GPU.
    M.Ing. (Electrical and Electronic Engineering Management)
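Whatever the forecasting model, hourly prediction of this kind typically starts by framing the series as supervised (input window, target) pairs. The sketch below shows that framing with a toy diurnal signal; the 24-hour look-back and one-hour horizon are illustrative choices, not values taken from the dissertation.

```python
import numpy as np

def make_windows(series, lookback=24, horizon=1):
    """Turn an hourly series into supervised learning pairs:
    each input is `lookback` consecutive hours and the target is
    the value `horizon` hours after the window ends."""
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback + horizon - 1])
    return np.array(X), np.array(y)

# Toy diurnal irradiance: a clipped sine over ten days of hourly samples.
hours = np.arange(0, 24 * 10)
irradiance = np.clip(np.sin(2 * np.pi * hours / 24), 0, None)
X, y = make_windows(irradiance, lookback=24, horizon=1)
```

The resulting `(X, y)` arrays can feed any of the model families the dissertation compares, from classical regressors to recurrent networks, with the forecasting horizon controlled by a single parameter.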

    Data-driven deep-learning methods for the accelerated simulation of Eulerian fluid dynamics

    Deep-learning (DL) methods for the fast inference of the temporal evolution of fluid-dynamics systems, based on the prior recognition of features underlying large sets of fluid-dynamics data, have been studied. Specifically, models based on convolutional neural networks (CNNs) and graph neural networks (GNNs) were proposed and discussed. A U-Net, a popular fully convolutional architecture, was trained to infer wave dynamics on liquid surfaces surrounded by walls, given as input the system state at previous time points. A term penalising the error of the spatial derivatives was added to the loss function, which suppressed spurious oscillations and yielded a more accurate location and length of the predicted wavefronts. This model proved to generalise accurately to complex wall geometries not seen during training. As opposed to the image data structures processed by CNNs, graphs offer greater freedom in how data are organised and processed. This motivated the use of graphs to represent the state of fluid-dynamics systems discretised by unstructured sets of nodes, and of GNNs to process such graphs. Graphs have enabled more accurate representations of curvilinear geometries and higher-resolution placement exclusively in areas where the physics is more challenging to resolve. Two novel GNN architectures were designed for fluid-dynamics inference: the MuS-GNN, a multi-scale GNN, and the REMuS-GNN, a rotation-equivariant multi-scale GNN. Both architectures work by repeatedly passing messages from each node to its nearest nodes in the graph. Additionally, lower-resolution graphs, with a reduced number of nodes, are defined from the original graph, and messages are also passed from finer to coarser graphs and vice versa. The low-resolution graphs allow physics encompassing a range of length scales to be captured efficiently. Advection and fluid flow, modelled by the incompressible Navier-Stokes equations, were the two types of problem used to assess the proposed GNNs. Whereas a single-scale GNN was sufficient to achieve high generalisation accuracy in advection simulations, flow simulation benefited greatly from an increasing number of low-resolution graphs. The generalisation and long-term accuracy of these simulations were further improved by the REMuS-GNN architecture, which processes the system state independently of the orientation of the coordinate system thanks to a rotation-invariant representation and carefully designed components. To the best of the author's knowledge, the REMuS-GNN architecture was the first rotation-equivariant and multi-scale GNN. The simulations were accelerated by between one (on a CPU) and three (on a GPU) orders of magnitude with respect to a CPU-based numerical solver. Additionally, the parallelisation of multi-scale GNNs resulted in a close-to-linear speedup with the number of CPU cores or GPUs.
    Open Access
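The message-passing mechanism the two architectures share can be sketched at a single scale. The NumPy fragment below is a minimal mean-aggregation variant with hypothetical weight matrices; MuS-GNN and REMuS-GNN add edge features, the multi-scale graph hierarchy, and rotation equivariance on top of this basic pattern.

```python
import numpy as np

def message_passing_step(h, edges, W_msg, W_upd):
    """One illustrative message-passing step: each node aggregates
    (here, averages) transformed features from its in-neighbours,
    then updates its own feature vector through a nonlinearity."""
    n, d = h.shape
    agg = np.zeros_like(h)
    count = np.zeros(n)
    for src, dst in edges:                  # message along each directed edge
        agg[dst] += h[src] @ W_msg
        count[dst] += 1
    agg /= np.maximum(count, 1)[:, None]    # mean aggregation per node
    return np.tanh(h @ W_upd + agg)         # node-wise update

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8))             # 4 nodes, 8 features each
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]    # a small ring graph
W_msg = rng.standard_normal((8, 8)) * 0.1
W_upd = rng.standard_normal((8, 8)) * 0.1
h_new = message_passing_step(h, edges, W_msg, W_upd)
```

Repeating this step propagates information one hop per iteration, which is why the coarser graphs described above matter: they let long-range (large-lengthscale) interactions travel the domain in far fewer steps.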