
    Improving the predictions of ML-corrected climate models with novelty detection

While previous works have shown that machine learning (ML) can improve the prediction accuracy of coarse-grid climate models, these ML-augmented methods are more vulnerable to irregular inputs than the traditional physics-based models they rely on. Because ML-predicted corrections feed back into the climate model's base physics, the ML-corrected model regularly produces out-of-sample data, which can cause model instability and frequent crashes. This work shows that adding semi-supervised novelty detection to identify out-of-sample data and disable the ML correction accordingly stabilizes simulations and sharply improves the quality of predictions. We design an augmented climate model with a one-class support vector machine (OCSVM) novelty detector that provides better temperature and precipitation forecasts in a year-long simulation than either a baseline (no-ML) or a standard ML-corrected run. By improving the accuracy of coarse-grid climate models, this work helps make accurate climate models accessible to researchers without massive computational resources.
Comment: Appearing at the Tackling Climate Change with Machine Learning workshop at NeurIPS 2022.
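
A minimal sketch of the gating idea this abstract describes, using scikit-learn's OneClassSVM. The feature layout, nu value, and the corrected_tendency helper are hypothetical stand-ins for illustration, not the paper's actual configuration.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Hypothetical training set: atmospheric columns (temperature and humidity
# profiles flattened into feature vectors) drawn from the reference run.
X_train = rng.normal(size=(5000, 158))

scaler = StandardScaler().fit(X_train)
detector = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale")
detector.fit(scaler.transform(X_train))

def corrected_tendency(column, ml_correction):
    """Apply the ML correction only when the column looks in-sample."""
    score = detector.decision_function(scaler.transform(column[None, :]))[0]
    if score < 0.0:  # negative score => flagged as a novelty
        return np.zeros_like(ml_correction)  # fall back to base physics
    return ml_correction

# Example: one test column with a placeholder ML-predicted correction.
column = rng.normal(size=158)
print(corrected_tendency(column, rng.normal(size=158)))
```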

    Machine-learned climate model corrections from a global storm-resolving model

Due to computational constraints, running global climate models (GCMs) for many years requires a lower spatial grid resolution (≳50 km) than is optimal for accurately resolving important physical processes. Such processes are approximated in GCMs via subgrid parameterizations, which contribute significantly to the uncertainty in GCM predictions. One approach to improving the accuracy of a coarse-grid global climate model is to add machine-learned state-dependent corrections at each simulation timestep, such that the climate model evolves more like a high-resolution global storm-resolving model (GSRM). We train neural networks to learn the state-dependent temperature, humidity, and radiative flux corrections needed to nudge a 200 km coarse-grid climate model to the evolution of a 3 km fine-grid GSRM. When these corrective ML models are coupled to a year-long coarse-grid climate simulation, the time-mean spatial pattern errors are reduced by 6-25% for land surface temperature and 9-25% for land surface precipitation with respect to a no-ML baseline simulation. The ML-corrected simulations develop other biases in climate and circulation that differ from, but have comparable amplitude to, the baseline simulation.
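
The nudging-based training setup lends itself to a short illustration. Below is a hedged sketch with random placeholder data: the training target for the ML model is the tendency that relaxes each coarse column toward the coarsened fine-grid reference over an assumed timescale tau. The 79-level layout, the 3 h timescale, and the network size are illustrative assumptions, not the paper's values.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
tau = 3 * 3600.0  # assumed 3 h nudging timescale, in seconds

# Placeholder states: coarse-model columns and the coarsened fine-grid
# (GSRM) columns they should be nudged toward.
coarse_state = rng.normal(size=(10_000, 79))
fine_state = coarse_state + 0.1 * rng.normal(size=(10_000, 79))

# The training target is the nudging tendency that relaxes the coarse
# state toward the reference over the timescale tau.
target_tendency = (fine_state - coarse_state) / tau

# A small network and iteration count keep the demo fast, not accurate.
model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=50)
model.fit(coarse_state, target_tendency)

# At run time, the predicted correction is added to the physics tendencies.
ml_correction = model.predict(coarse_state[:1])
print(ml_correction.shape)  # -> (1, 79)
```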

    Emulating Fast Processes in Climate Models

Cloud microphysical parameterizations in atmospheric models describe the formation and evolution of clouds and precipitation, a central weather and climate process. Cloud-associated latent heating is a primary driver of large and small-scale circulations throughout the global atmosphere, and clouds have important interactions with atmospheric radiation. Clouds are ubiquitous, diverse, and can change rapidly. In this work, we build the first emulator of an entire cloud microphysical parameterization, including fast phase changes. The emulator performs well in offline and online (i.e. when coupled to the rest of the atmospheric model) tests, but shows some developing biases in Antarctica. Sensitivity tests demonstrate that these successes require careful modeling of the mixed discrete-continuous output as well as the input-output structure of the underlying code and physical process.
Comment: Accepted at the Machine Learning and the Physical Sciences workshop at the 36th Conference on Neural Information Processing Systems (NeurIPS), December 3, 2022.
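
The "mixed discrete-continuous output" the sensitivity tests point to can be sketched as a classify-then-regress pair: one model decides whether a microphysical tendency is exactly zero (no phase change), another predicts its value when it is not. Everything below, including data shapes and forest sizes, is a placeholder, not the emulator's actual architecture.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Placeholder samples for a single microphysical tendency: most samples
# are exactly zero, the rest take continuous values.
X = rng.normal(size=(20_000, 10))
y = np.where(rng.random(20_000) < 0.7, 0.0, rng.normal(size=20_000))

is_nonzero = y != 0.0
classifier = RandomForestClassifier(n_estimators=50).fit(X, is_nonzero)
regressor = RandomForestRegressor(n_estimators=50).fit(X[is_nonzero], y[is_nonzero])

def emulate(x):
    """Predict the tendency: zero unless the classifier flags activity."""
    active = classifier.predict(x).astype(bool)
    out = np.zeros(len(x))
    if active.any():
        out[active] = regressor.predict(x[active])
    return out

print(emulate(X[:5]))
```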

    ACE: A fast, skillful learned global atmospheric model for climate prediction

Existing ML-based atmospheric models are not suitable for climate prediction, which requires long-term stability and physical consistency. We present ACE (AI2 Climate Emulator), a 200M-parameter, autoregressive machine learning emulator of an existing comprehensive 100-km resolution global atmospheric model. The formulation of ACE allows evaluation of physical laws such as the conservation of mass and moisture. The emulator is stable for 100 years, nearly conserves column moisture without explicit constraints, and faithfully reproduces the reference model's climate, outperforming a challenging baseline on over 90% of tracked variables. ACE requires nearly 100x less wall clock time and is 100x more energy efficient than the reference model using typically available resources. Without fine-tuning, ACE can stably generalize to a previously unseen historical sea surface temperature dataset.
Comment: Accepted at Tackling Climate Change with Machine Learning: workshop at NeurIPS 2023.
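
A hedged sketch of two ingredients the abstract names: an autoregressive rollout and a global column-moisture budget check. A placeholder step function stands in for the actual 200M-parameter network; the grid, step count, and drift diagnostic are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def emulator_step(state):
    """Placeholder for one emulator step (the real ACE is a neural net)."""
    return state + 1e-4 * rng.normal(size=state.shape)

# state[..., 0] holds column-integrated water vapor (kg/m^2) per grid cell;
# cosine-latitude area weights give a global mean for the budget check.
state = np.abs(rng.normal(loc=25.0, size=(180, 360, 8)))
area = np.cos(np.deg2rad(np.linspace(-89.5, 89.5, 180)))[:, None]

def global_mean_moisture(s):
    return np.average(s[..., 0], weights=np.broadcast_to(area, s[..., 0].shape))

m0 = global_mean_moisture(state)
for _ in range(40):  # short autoregressive rollout
    state = emulator_step(state)

drift = global_mean_moisture(state) - m0
print(f"global-mean column moisture drift: {drift:.3e} kg/m^2")
```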

Cloud System Evolution in the Trades (CSET): Following the Evolution of Boundary Layer Cloud Systems with the NSF/NCAR GV

The Cloud System Evolution in the Trades (CSET) study was designed to describe and explain the evolution of the boundary layer aerosol, cloud, and thermodynamic structures along trajectories within the North Pacific trade winds. The study centered on seven round trips of the National Science Foundation/National Center for Atmospheric Research (NSF/NCAR) Gulfstream V (GV) between Sacramento, California, and Kona, Hawaii, between 7 July and 9 August 2015. The CSET observing strategy was to sample aerosol, cloud, and boundary layer properties upwind from the transition zone over the North Pacific and to resample these areas two days later. Global Forecast System forecast trajectories were used to plan the outbound flight to Hawaii, with updated forecast trajectories setting the return flight plan two days later. Two key elements of the CSET observing system were the newly developed High-Performance Instrumented Airborne Platform for Environmental Research (HIAPER) Cloud Radar (HCR) and the high-spectral-resolution lidar (HSRL). Together they provided unprecedented characterizations of aerosol, cloud, and precipitation structures that were combined with in situ measurements of aerosol, cloud, precipitation, and turbulence properties. The cloud systems sampled included solid stratocumulus infused with smoke from Canadian wildfires, mesoscale cloud-precipitation complexes, and patches of shallow cumuli in very clean environments. Ultraclean layers observed frequently near the top of the boundary layer were often associated with shallow, optically thin, layered veil clouds. The extensive aerosol, cloud, drizzle, and boundary layer sampling made over open areas of the northeast Pacific along 2-day trajectories during CSET will be an invaluable resource for modeling studies of boundary layer cloud system evolution and its governing physical processes.

    Improving Prognostic Moist Turbulence Parameterization with Machine Learning and Software Design

Thesis (Ph.D.)--University of Washington, 2019
The primary result of this work is that concepts from software design and machine learning may be used to improve moist turbulence parameterization in weather and climate models. We have seen relatively slow improvement of moist turbulence parameterization in past decades, and explore a radically different approach to parameterization involving machine learning. The core of the approach is to rely on a trusted source of training data, such as high-resolution models or reanalysis, to train a machine learning algorithm to perform the closures normally defined by conventional parameterization. The Python packages sympl (System for Modelling Planets) and climt (Climate Modeling and Diagnostics Toolkit) are introduced. These packages are an attempt to rethink climate modelling frameworks from the ground up. The result defines expressive data structures that enforce software design best practices. It allows scientists to easily and reliably combine model components to represent the climate system at a desired level of complexity, and enables users to fully understand what the model is doing. Random forest and polynomial regression are used as alternate closure assumptions in a higher-order turbulence closure scheme trained for use over the summertime Northeast Pacific stratocumulus to trade cumulus transition region. While the machine learning closures better match high-resolution model data over withheld validation samples compared to a state-of-the-art higher-order turbulence closure scheme, the resulting model is unstable when used prognostically. Within a first-order closure framework, an artificial neural network is trained to reproduce thermodynamic tendencies and boundary layer properties from ERA5 HIRES reanalysis data over the summertime Northeast Pacific stratocumulus to trade cumulus transition region. The network is trained prognostically using 7-day forecasts rather than using diagnosed instantaneous tendencies alone. The resulting model, Machine Assisted Reanalysis Boundary Layer Emulation (MARBLE), skillfully reproduces the boundary layer structure and cloud properties of the reanalysis data in 7-day single-column prognostic simulations over withheld testing periods. Radiative heating profiles are well simulated, and the mean climatology and variability of the stratocumulus to cumulus transition are accurately reproduced. MARBLE more closely tracks the reanalysis than does a comparable configuration of the underlying forecast model. Similar results are obtained over the Southern Great Plains.
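
The closure-replacement idea can be sketched briefly: fit a random forest to map resolved-state inputs to the closure terms a conventional scheme would compute, then validate on withheld samples. The shapes and variable names below are hypothetical; the thesis's actual inputs are profiles from high-resolution models or reanalysis.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder training data: resolved-state profiles as inputs, the
# higher-order moments a conventional closure would supply as targets.
resolved_state = rng.normal(size=(50_000, 40))
closure_terms = rng.normal(size=(50_000, 5))

X_train, X_val, y_train, y_val = train_test_split(
    resolved_state, closure_terms, test_size=0.2, random_state=0
)

closure = RandomForestRegressor(n_estimators=100, n_jobs=-1)
closure.fit(X_train, y_train)

# Offline skill on withheld samples; online (prognostic) stability is a
# separate question, as the thesis emphasizes.
print("held-out R^2:", closure.score(X_val, y_val))
```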

    Skill of ship-following large-eddy simulations in reproducing MAGIC observations across the Northeast Pacific stratocumulus to cumulus transition region

Thesis (Master's)--University of Washington, 2017-03
During the Marine ARM GPCI Investigation of Clouds (MAGIC) in Oct. 2011 - Sept. 2012, a container ship making periodic cruises between Los Angeles, CA and Honolulu, HI was instrumented with surface meteorological, aerosol and radiation instruments, a cloud radar and ceilometer, and radiosondes. Here, large-eddy simulation (LES) is performed in a ship-following frame of reference for thirteen 4-day transects from the MAGIC field campaign. The goal is to assess whether LES can skillfully simulate the broad range of observed cloud characteristics and boundary layer structure across the subtropical stratocumulus to cumulus transition region sampled during different seasons and meteorological conditions. Results from Leg 15A, which sampled a particularly well-defined stratocumulus to cumulus transition, demonstrate the approach. The LES reproduces the observed timing of decoupling and transition from stratocumulus to cumulus, and matches the observed evolution of boundary-layer structure, cloud fraction, liquid water path, and precipitation statistics remarkably well. Considering the simulations of all thirteen cruises, the LES skillfully simulates the mean diurnal variation of key measured quantities, including liquid water path (LWP), cloud fraction, measures of decoupling, and cloud radar-derived precipitation. The daily mean quantities are well represented, and daily mean LWP and cloud fraction show the expected correlation with estimated inversion strength. There is a 0.6 K low bias in LES near-surface air temperature that results in a high bias of 5.7 W m^-2 in sensible heat flux (SHF). Overall, these results build confidence in the ability of LES to represent the northeast Pacific stratocumulus to trade cumulus transition region.
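
The skill assessment described here reduces to aggregate statistics of LES output against the ship observations. A small sketch with synthetic time series, assuming hourly LWP over a 4-day transect; the numbers are placeholders, not MAGIC data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder hourly time series over a 4-day transect: observed and
# simulated liquid water path (g/m^2).
obs_lwp = np.abs(rng.normal(loc=80.0, scale=30.0, size=96))
les_lwp = obs_lwp + rng.normal(loc=-5.0, scale=20.0, size=96)

bias = np.mean(les_lwp - obs_lwp)
rmse = np.sqrt(np.mean((les_lwp - obs_lwp) ** 2))
daily_obs = obs_lwp.reshape(4, 24).mean(axis=1)  # daily means, 4 days
daily_les = les_lwp.reshape(4, 24).mean(axis=1)

print(f"LWP bias: {bias:.1f} g/m^2, RMSE: {rmse:.1f} g/m^2")
print("daily-mean correlation:", np.corrcoef(daily_obs, daily_les)[0, 1])
```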

    Convergent Validity of a Wearable Sensor System for Measuring Sub-Task Performance during the Timed Up-and-Go Test

Background: The timed-up-and-go test (TUG) is one of the most commonly used tests of physical function in clinical practice and for research outcomes. Inertial sensors have been used to parse the TUG test into its composite phases (rising, walking, turning, etc.), but this approach has not been validated against an optoelectronic gold standard, and to our knowledge no studies have published the minimal detectable change of these measurements. Methods: Eleven adults performed the TUG three times each under normal and slow walking conditions, at 3 m and 5 m walking distances, in a 12-camera motion analysis laboratory. An inertial measurement unit (IMU) with tri-axial accelerometers and gyroscopes was worn on the upper torso. Motion analysis marker data and IMU signals were analyzed separately to identify the six main TUG phases: sit-to-stand, 1st walk, 1st turn, 2nd walk, 2nd turn, and stand-to-sit, and the absolute agreement between the two systems was analyzed using intra-class correlation (ICC, model 2) analysis. The minimal detectable change (MDC) within subjects was also calculated for each TUG phase. Results: The overall difference between TUG sub-task durations determined from 3D motion capture data and from the IMU sensor data was <0.5 s. For all TUG distances and speeds, the absolute agreement was high for total TUG time and walk times (ICC > 0.90), but lower for chair activity (ICC range 0.5–0.9) and typically poor for turn time (ICC < 0.4). MDC values for total TUG time ranged between 2 and 4 s, or 12–22% of the TUG time measurement. MDCs of the sub-task times were proportionally higher, at 20–60% of the sub-task duration. Conclusions: We conclude that a commercial IMU can be used for quantifying the TUG phases with accuracy sufficient for clinical applications; however, the MDC when using inertial sensors is not necessarily improved over less sophisticated measurement tools.
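
The MDC figures quoted above follow from standard formulas: SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM. A small sketch with placeholder trial data; the ICC value is assumed to come from a separate two-way random-effects analysis, as in the paper's ICC model 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder: TUG phase durations (s) for 11 subjects x 3 repeated trials.
times = rng.normal(loc=10.0, scale=1.5, size=(11, 3))

# ICC assumed computed elsewhere (two-way random effects); 0.90 is a
# stand-in value, not a result from the paper.
icc = 0.90
sd = times.mean(axis=1).std(ddof=1)  # between-subject SD of trial means
sem = sd * np.sqrt(1.0 - icc)
mdc95 = 1.96 * np.sqrt(2.0) * sem

print(f"SEM = {sem:.2f} s, MDC95 = {mdc95:.2f} s "
      f"({100 * mdc95 / times.mean():.0f}% of mean TUG time)")
```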

    Improving the Reliability of ML‐Corrected Climate Models With Novelty Detection

Using machine learning (ML) for the online correction of coarse‐resolution atmospheric models has proven effective in reducing biases in near‐surface temperature and precipitation rate. However, ML corrections often introduce new biases in the upper atmosphere and cause inconsistent model performance across different random seeds. Furthermore, they produce profiles that are outside the distribution of samples used in training, which can interfere with the baseline physics of the atmospheric model and reduce model reliability. This study introduces the use of a novelty detector to mask ML corrections when the atmospheric state is deemed out‐of‐sample. The novelty detector is trained on profiles of temperature and specific humidity in a semi‐supervised fashion using samples from the coarsened reference fine‐resolution simulation. The novelty detector responds to particularly biased simulations relative to the reference simulation by categorizing more columns as out‐of‐sample. Without novelty detection, corrective ML occasionally causes undesirably large climate biases. When coupled to a running year‐long coarse‐grid simulation, novelty detection deems about 21% of columns to be novelties. This identification reduces the spread in the root‐mean‐square error (RMSE) of time‐mean spatial patterns of surface temperature and precipitation rate across a random seed ensemble. In particular, the random seed with the worst RMSE is improved by up to 60% (depending on the variable), while the best seed maintains its low RMSE. By reducing the variance in quality of ML‐corrected climate models, novelty detection offers reliability without compromising prediction quality in atmospheric models.
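
A hedged sketch of the online masking step this abstract describes: each timestep, score every column and zero the corrective tendencies where the detector flags a novelty. The detector settings, column count, and profile layout are assumptions for illustration, not the study's configuration.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Fit the detector on in-sample profiles of temperature and specific
# humidity from the coarsened reference run (placeholder data).
reference_profiles = rng.normal(size=(8000, 158))
detector = OneClassSVM(nu=0.05).fit(reference_profiles)

def mask_corrections(profiles, corrections):
    """Zero ML corrections for columns flagged as out-of-sample."""
    in_sample = detector.predict(profiles) == 1  # -1 marks novelties
    masked = np.where(in_sample[:, None], corrections, 0.0)
    fraction_novel = 1.0 - in_sample.mean()
    return masked, fraction_novel

profiles = rng.normal(size=(384, 158))  # all columns at one timestep
masked, frac = mask_corrections(profiles, rng.normal(size=(384, 158)))
print(f"{100 * frac:.0f}% of columns flagged as novelties this step")
```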