Using Temporal Deep Learning Models to Estimate Daily Snow Water Equivalent Over the Rocky Mountains
In this study we construct and compare three different deep learning (DL) models for estimating daily snow water equivalent (SWE) from high-resolution gridded meteorological fields over the Rocky Mountain region. To train the DL models, Snow Telemetry (SNOTEL) station-based SWE observations are used as the prediction target. All DL models produce higher median Nash-Sutcliffe Efficiency (NSE) values than a conceptual SWE model and interpolated gridded data sets, although mean squared errors also tend to be higher. Sensitivity of the SWE prediction to the models' input variables is analyzed using an explainable artificial intelligence (XAI) method, yielding insight into the physical relationships learned by the models. This method reveals the dominant role precipitation and temperature play in snowpack dynamics. In applying our models to estimate SWE throughout the Rocky Mountains, an extrapolation problem arises because the statistical properties of SWE (e.g., annual maximum) and the geographical properties of individual grid points (e.g., elevation) differ from those of the training data. This problem is addressed by normalizing SWE with its historical maximum value, which alleviates extrapolation issues for all tested DL models. Our work shows that the DL models are promising tools for estimating SWE and that they capture the relevant physical relationships well enough to be useful for spatial and temporal extrapolation of SWE values.
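As a rough illustration of the normalization and skill metric described above, the following Python sketch (function and variable names are hypothetical, not taken from the study) shows SWE scaled by its site-level historical maximum and a Nash-Sutcliffe Efficiency calculation:

```python
import numpy as np

def normalize_swe(swe_series, historical_max):
    """Scale a SWE series by its site-level historical maximum.

    Mapping SWE to roughly [0, 1] keeps grid points whose absolute SWE
    range differs from the training stations within the range the model
    saw during training, which is the extrapolation fix described above.
    """
    return np.asarray(swe_series, dtype=float) / historical_max

def nash_sutcliffe_efficiency(observed, simulated):
    """NSE: 1 is a perfect fit, 0 matches the observed mean,
    negative values are worse than predicting the mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

# Hypothetical usage: predict in normalized space, then invert.
# pred_norm = dl_model(meteorological_inputs)   # placeholder model call
# pred_swe  = pred_norm * historical_max        # back to physical units
```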
Are Atmospheric Models Too Cold in the Mountains? The State of Science and Insights from the SAIL Field Campaign
LSTM-Based Data Integration to Improve Snow Water Equivalent Prediction and Diagnose Error Sources
Accurate prediction of snow water equivalent (SWE) can be valuable for water resource managers. Recently, deep learning methods such as long short-term memory (LSTM) have exhibited high accuracy in simulating hydrologic variables and can integrate lagged observations to improve prediction, but their benefits were not clear for SWE simulations. Here we tested an LSTM network with data integration (DI) for SWE in the western United States, integrating 30-day-lagged or 7-day-lagged observations of either SWE or satellite-observed snow cover fraction (SCF) to improve future predictions. SCF proved beneficial only for shallow-snow sites during snowmelt, while lagged SWE integration significantly improved prediction accuracy for both shallow- and deep-snow sites. The median Nash–Sutcliffe model efficiency coefficient (NSE) in temporal testing improved from 0.92 to 0.97 with 30-day-lagged SWE integration, and the root-mean-square error (RMSE) and the difference between estimated and observed peak SWE values (dmax) were reduced by 41% and 57%, respectively. DI effectively mitigated accumulated model and forcing errors that would otherwise be persistent. Moreover, by applying DI to different observations (30-day-lagged, 7-day-lagged), we revealed the spatial distribution of errors with different persistence lengths. For example, integrating 30-day-lagged SWE was ineffective for ephemeral snow sites in the southwestern United States, but significantly reduced monthly-scale biases for regions with stable seasonal snowpack, such as high-elevation sites in California. These biases are likely attributable to large interannual variability in snowfall or site-specific snow redistribution patterns that can accumulate to impactful levels over time for nonephemeral sites. These results establish benchmark levels and provide guidance for future model improvement strategies.
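The following minimal sketch shows how a lagged observation can be appended to the daily forcing inputs of an LSTM, which is the general idea behind data integration; the array names, shapes, and handling of the warm-up period are illustrative assumptions, not the study's code:

```python
import numpy as np

def build_di_inputs(forcings, swe_obs, lag_days=30):
    """Append lagged SWE observations to daily forcings for an LSTM.

    forcings : (n_days, n_features) array, e.g. precipitation and temperature.
    swe_obs  : (n_days,) array of station SWE observations.
    lag_days : lag of the integrated observation (30 or 7 days in the study).

    Returns an (n_days, n_features + 1) array whose extra column at day t
    holds the SWE observed at day t - lag_days, letting the network correct
    accumulated model and forcing errors.
    """
    lagged = np.full(swe_obs.shape, np.nan)
    lagged[lag_days:] = swe_obs[:-lag_days]
    return np.column_stack([forcings, lagged])

# Hypothetical usage for one site of daily data:
# X = build_di_inputs(forcings, swe_obs, lag_days=30)
# X = X[30:]   # drop the warm-up days that have no lagged observation
```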
Correctly validating results from single molecule data: the case of stretched exponential decay in the catalytic activity of single lipase B molecules
The question of how to correctly validate and interpret waiting-time probability density functions (WT-PDFs) from single-molecule data is addressed. It is shown by simulation that when a stretched exponential WT-PDF, with stretching exponent alpha and time-scale parameter tau, generates the off periods of a two-state trajectory, a reliable recovery of the input WT-PDF from the trajectory is obtained even when the bin size used to define the trajectory, dt, is much larger than the parameter tau. This holds true as long as the first moment of the WT-PDF is much larger than dt. Our results validate the results of an earlier study of the activity of single lipase B molecules and disprove a recent related critique.
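For concreteness, here is a minimal simulation sketch of the setup described above. It assumes the stretched exponential enters through the survival function exp[-(t/tau)^alpha] (i.e., a Weibull waiting-time distribution), which is one common convention, and uses illustrative parameters chosen so that dt is much larger than tau while the mean waiting time is much larger than dt; none of this reproduces the paper's actual simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_stretched_exp_waits(n, alpha, tau):
    """Waiting times with survival function exp[-(t/tau)**alpha],
    drawn by inverse transform: t = tau * (-ln U)**(1/alpha)."""
    u = rng.uniform(size=n)
    return tau * (-np.log(u)) ** (1.0 / alpha)

def binned_off_times(on_waits, off_waits, dt):
    """Build an alternating on/off trajectory, discretize it with bin
    size dt, and return the apparent (binned) off durations."""
    events = np.empty(2 * len(off_waits))
    events[0::2] = on_waits                 # segments 0, 2, 4, ... are "on"
    events[1::2] = off_waits                # segments 1, 3, 5, ... are "off"
    change_times = np.cumsum(events)
    bins = np.arange(0.0, change_times[-1], dt)
    segment = np.searchsorted(change_times, bins, side="right")
    off_mask = (segment % 2 == 1)           # odd segment index -> off state
    # Lengths of consecutive runs of "off" bins give the apparent off times.
    edges = np.flatnonzero(np.diff(np.r_[0, off_mask.astype(int), 0]))
    return (edges[1::2] - edges[0::2]) * dt

# Illustrative parameters: dt >> tau, mean off time = tau*Gamma(1 + 1/alpha) >> dt.
alpha, tau, dt = 0.2, 0.1, 1.0
on = sample_stretched_exp_waits(5000, alpha, tau)
off = sample_stretched_exp_waits(5000, alpha, tau)
apparent_off = binned_off_times(on, off, dt)
# Compare a histogram of `apparent_off` with the input WT-PDF to check recovery.
```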
Climate Science Special Report: Fourth National Climate Assessment (NCA4), Volume I
New observations and new research have increased our understanding of past, current, and future climate change since the Third U.S. National Climate Assessment (NCA3) was published in May 2014. This Climate Science Special Report (CSSR) is designed to capture that new information and build on the existing body of science in order to summarize the current state of knowledge and provide the scientific foundation for the Fourth National Climate Assessment (NCA4).
Case Study: Wellness, tourism and small business development in a UK coastal resort: Public engagement in practice
This article examines the scope of well-being as a focus for tourism and its potential as a tool for small business development, particularly the opportunities for tourism entrepreneurs in coastal resorts. The study reports an example of public engagement by a research team and the co-creation of research knowledge with businesses to assist in business development by adapting many existing features of tourist resorts and extending their offer to wider markets. The synergy between well-being and public health interests also brings potential benefits for the tourism workforce and the host community. The Case Study outlines how these ideas were tested in Bournemouth, a southern coastal resort in the UK, in a study ultimately intended to be adopted nationally and with more wide-reaching implications for the global development of the visitor economy. Local changes ascribed to the study are assessed and its wider potential is evaluated.
The 'Risk-Adjusted' Price-Concentration Relationship in Banking
Price-concentration studies in banking typically find a significant and negative relationship between consumer deposit rates (i.e., prices) and market concentration. This relationship implies that highly concentrated banking markets are "bad" for depositors. It also provides support for the Structure-Conduct-Performance hypothesis and rejects the Efficient-Structure hypothesis. However, these studies have focused almost exclusively on supply-side control variables and have neglected demand-side variables when estimating the reduced-form price-concentration relationship. For example, previous studies have not included bank-specific risk variables in their analysis as measures of cross-sectional derived deposit demand. The authors find that when bank-specific risk variables are included in the analysis, the magnitude of the relationship between deposit rates and market concentration decreases by over 50 percent. They offer an explanation for these results based on the correlation between a bank's risk profile and the structure of the market in which it operates. These results suggest that it may be necessary to reconsider the well-established assumption that higher market concentration necessarily leads to anticompetitive deposit pricing behavior by commercial banks. This finding has direct implications for the antitrust evaluations of bank merger and acquisition proposals by regulatory agencies. More generally, these results suggest that any Structure-Conduct-Performance-based study that does not explicitly consider the possibility of very different risk profiles among the firms analyzed may miss an important set of explanatory variables, and thus the results from those studies may be spurious.
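As a hedged sketch of the kind of reduced-form comparison at issue, the following uses hypothetical column names and a hypothetical data file; the point is only to contrast the estimated concentration coefficient with and without bank-specific risk controls:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cross-section with one row per bank-market observation:
# deposit_rate, hhi (market concentration), supply-side controls, and
# bank-specific risk measures. All column and file names are illustrative.
banks = pd.read_csv("bank_market_data.csv")

# Baseline reduced form: deposit rate on concentration and supply-side
# controls only, as in earlier price-concentration studies.
baseline = smf.ols("deposit_rate ~ hhi + market_size + branch_density",
                   data=banks).fit()

# Augmented reduced form adding bank-specific risk variables as demand-side
# controls; the comparison of interest is how much the coefficient on hhi
# shrinks once risk is held constant.
with_risk = smf.ols("deposit_rate ~ hhi + market_size + branch_density"
                    " + capital_ratio + nonperforming_loans",
                    data=banks).fit()

print(baseline.params["hhi"], with_risk.params["hhi"])
```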
Machine learning uncovers the most robust self-report predictors of relationship quality across 43 longitudinal couples studies
Given the powerful implications of relationship quality for health and well-being, a central mission of relationship science is explaining why some romantic relationships thrive more than others. This large-scale project used machine learning (i.e., Random Forests) to 1) quantify the extent to which relationship quality is predictable and 2) identify which constructs reliably predict relationship quality. Across 43 dyadic longitudinal datasets from 29 laboratories, the top relationship-specific predictors of relationship quality were perceived-partner commitment, appreciation, sexual satisfaction, perceived-partner satisfaction, and conflict. The top individual-difference predictors were life satisfaction, negative affect, depression, attachment avoidance, and attachment anxiety. Overall, relationship-specific variables predicted up to 45% of variance at baseline and up to 18% of variance at the end of each study. Individual differences also performed well (21% and 12%, respectively). Actor-reported variables (i.e., one's own relationship-specific and individual-difference variables) predicted two to four times more variance than partner-reported variables (i.e., the partner's ratings on those variables). Importantly, individual differences and partner reports had no predictive effects beyond actor-reported relationship-specific variables alone. These findings imply that the sum of all individual differences and partner experiences exerts its influence on relationship quality via a person's own relationship-specific experiences, and that effects due to moderation by individual differences or by partner reports may be quite small. Finally, relationship-quality change (i.e., increases or decreases in relationship quality over the course of a study) was largely unpredictable from any combination of self-report variables. This collective effort should guide future models of relationships.
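A minimal sketch of the Random Forest workflow described above, with placeholder data standing in for the pooled self-report predictors (nothing here reproduces the project's datasets or analysis code):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Placeholder data: rows are participants, columns are self-report scales
# (relationship-specific and/or individual-difference measures), and the
# outcome is relationship quality. Everything below is illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = 0.6 * X[:, 0] + rng.normal(size=500)

forest = RandomForestRegressor(n_estimators=500, random_state=0)

# Cross-validated R^2 plays the role of the "variance explained" figures
# quoted in the abstract (e.g., up to 45% at baseline).
r2 = cross_val_score(forest, X, y, cv=5, scoring="r2").mean()

# Impurity-based importances are one simple way to rank which predictors
# drive the model; the project's ranking of top constructs is analogous
# in spirit, though not necessarily based on this exact measure.
importances = forest.fit(X, y).feature_importances_
print(round(r2, 2), np.argsort(importances)[::-1][:5])
```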