Assessment of Neural Network Augmented Reynolds Averaged Navier Stokes Turbulence Model in Extrapolation Modes
A machine-learned (ML) model is developed to enhance the accuracy of the
turbulence transport equations of a Reynolds-Averaged Navier-Stokes (RANS)
solver and is applied to the periodic hill test case, which involves complex
flow regimes such as attached boundary layers, free shear layers, and
separation and reattachment. The accuracy of the model is investigated in
extrapolation modes, i.e., the test case has a much larger separation bubble
and higher turbulence than the
training cases. A parametric study is also performed to understand the effect
of network hyperparameters on training and model accuracy and to quantify the
uncertainty in model accuracy due to the non-deterministic nature of the neural
network training. The study revealed that, for any network, a
smaller-than-optimal mini-batch size results in overfitting, while a
larger-than-optimal batch size reduces accuracy. Data clustering is found to
be an efficient approach to prevent the machine-learned model from
over-training on the more prevalent flow regimes, and it yields a model of
similar accuracy using almost one-third of the training dataset. Feature
importance analysis reveals that turbulence production is correlated with
shear strain in the free-shear region; with shear strain, wall distance, and
a local velocity-based Reynolds number in the boundary-layer regime; and with
the streamwise velocity gradient in the accelerating-flow regime. The flow
direction is found to be key in identifying the separation and reattachment
regime. Machine-learned models perform poorly in full extrapolation mode,
where the prediction shows less than 10% correlation with Direct Numerical
Simulation (DNS). A priori tests reveal that model predictability improves
significantly when the hill dataset is partially added during training
(partial extrapolation mode); for example, adding only 5% of the hill data
increases the correlation with DNS to 80%.
Comment: 50 pages, 18 figures
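The clustering step described above can be sketched as follows. This is a minimal, illustrative example on synthetic feature vectors, not the paper's actual pipeline: cluster the inputs, then draw an equal number of samples from each cluster so a dominant flow regime cannot swamp the training set.

```python
import numpy as np

def farthest_point_init(X, k):
    """Deterministic seeding: start from the first point, then repeatedly
    take the point farthest from the current set of centers."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    return np.array(centers)

def balance_by_cluster(X, n_clusters=3, per_cluster=50, n_iter=20, seed=0):
    """Cluster the feature vectors with a basic k-means, then sample the
    same number of points from every cluster."""
    rng = np.random.default_rng(seed)
    centers = farthest_point_init(X, n_clusters)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    picked = []
    for k in range(n_clusters):
        members = np.flatnonzero(labels == k)
        picked.extend(rng.choice(members, min(per_cluster, len(members)),
                                 replace=False).tolist())
    return np.array(picked)

# Synthetic "flow features": one dominant regime and two rare ones.
rng = np.random.default_rng(42)
X = np.concatenate([
    rng.normal(0.0, 0.1, (500, 2)),   # dominant regime (e.g. attached flow)
    rng.normal(3.0, 0.1, (80, 2)),    # rare regime
    rng.normal(-3.0, 0.1, (80, 2)),   # rare regime
])
idx = balance_by_cluster(X, n_clusters=3, per_cluster=50)
print(len(idx))  # 150 samples: 50 per regime instead of the raw 500/80/80 split
```

The balanced index set keeps the rare regimes as well represented as the dominant one while discarding most of the redundant dominant-regime samples, which is consistent with the reported reduction to roughly one-third of the data.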
Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States
Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multimodel ensemble forecast that combined predictions from dozens of groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this year showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naïve baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-wk horizon three to five times larger than when predicting at a 1-wk horizon. This project underscores the role that collaboration and active coordination between governmental public-health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks
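The probabilistic-error comparisons above rely on proper scores for quantile forecasts. A minimal sketch of the pinball (quantile) loss, which is closely related to the weighted interval score used to evaluate hub forecasts (the forecast values and quantile levels below are invented for illustration):

```python
def pinball_loss(q, tau, y):
    """Quantile (pinball) loss for a predicted quantile q at level tau,
    against the observed value y. Lower is better."""
    return tau * (y - q) if y >= q else (1 - tau) * (q - y)

def mean_quantile_score(quantiles, levels, y):
    """Average pinball loss across the reported quantile levels; up to a
    constant factor, the weighted interval score averages the same losses
    over a symmetric set of levels."""
    return sum(pinball_loss(q, t, y) for q, t in zip(quantiles, levels)) / len(levels)

# A sharp, well-centred forecast vs a wide "naive" one, with truth y = 100.
levels = [0.1, 0.25, 0.5, 0.75, 0.9]
sharp = [90, 95, 100, 105, 110]
wide = [50, 80, 100, 120, 150]
print(mean_quantile_score(sharp, levels, 100))  # 0.9
print(mean_quantile_score(wide, levels, 100))   # 4.0
```

The sharper forecast scores better because the loss penalizes both miscalibration and unnecessarily wide intervals, which is why degrading skill at longer horizons shows up as a multiplied score.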
The United States COVID-19 Forecast Hub dataset
Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident cases, incident hospitalizations, incident deaths, and cumulative deaths due to COVID-19 at the county, state, and national levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available via download from GitHub, through an online API, and through R packages.
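Hub forecasts are distributed in a long "quantile" format. A hedged sketch of parsing such rows in Python follows; the column names and example values here are assumptions for illustration, so consult the repository's own data dictionary for the authoritative schema:

```python
import csv
import io

# Hypothetical rows mimicking the Hub's long quantile format (illustrative
# schema, not guaranteed to match the repository exactly).
raw = """target,location,type,quantile,value
1 wk ahead inc death,US,point,,1500
1 wk ahead inc death,US,quantile,0.25,1300
1 wk ahead inc death,US,quantile,0.5,1500
1 wk ahead inc death,US,quantile,0.75,1800
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Pull the 50% quantile as a point summary of the probabilistic forecast.
median = next(float(r["value"]) for r in rows
              if r["type"] == "quantile" and r["quantile"] == "0.5")
print(median)  # 1500.0
```

Keeping every forecast as a set of quantile rows is what makes the dataset "standardized and comparable": any model's submission can be scored, visualized, or combined into an ensemble with the same code path.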
Development and Validation of a Machine Learned Turbulence Model
A stand-alone machine-learned turbulence model is developed and applied to the solution of steady and unsteady boundary layer equations, and the issues and constraints associated with the model are investigated. The results demonstrate that an accurately trained machine-learned model can provide grid-convergent, smooth solutions, work in extrapolation mode, and converge to a correct solution from ill-posed flow conditions. The accuracy of the machine-learned response surface depends on the choice of flow variables and on the training approach used to minimize overlap in the datasets. For the former, grouping flow variables into problem-relevant parameters for the input features is desirable. For the latter, incorporating physics-based constraints during training is helpful. Data clustering is also identified as a useful tool, as it avoids skewing the model towards a dominant flow feature.