    Representing moisture fluxes and phase changes in glacier debris cover using a reservoir approach

    Due to the complexity of treating moisture in supraglacial debris, surface energy balance models to date have neglected moisture infiltration and phase changes in the debris layer. The latent heat flux (QL) is also often excluded due to the uncertainty in determining the surface vapour pressure. To quantify the importance of moisture on the surface energy and climatic mass balance (CMB) of debris-covered glaciers, we developed a simple reservoir parameterization for the debris ice and water content, as well as an estimation of the latent heat flux. The parameterization was incorporated into a CMB model adapted for debris-covered glaciers. We present the results of two point simulations, using both our new “moist” and the conventional “dry” approaches, on the Miage Glacier, Italy, during summer 2008 and fall 2011. The former period coincides with available in situ glaciological and meteorological measurements, including the first eddy-covariance measurements of the turbulent fluxes over supraglacial debris, while the latter contains two refreeze events that permit evaluation of the influence of phase changes. The simulations demonstrate a clear influence of moisture on the glacier energy and mass-balance dynamics. When water and ice are considered, heat transmission to the underlying glacier ice is lower, as the effective thermal diffusivity of the saturated debris layers is reduced by increases in both the density and the specific heat capacity of the layers. In combination with surface heat extraction by QL, sub-debris ice melt is reduced by 3.1% in 2008 and by 7.0% in 2011 when moisture effects are included. However, the influence of the parameterization on the total accumulated mass balance varies seasonally. In summer 2008, mass loss due to surface vapour fluxes more than compensates for the reduction in ice melt, such that total ablation increases by 4.0%. Conversely, in fall 2011, the modulation of basal debris temperature by debris ice results in a decrease in total ablation of 2.1%. Although the parameterization is a simplified representation of the moist physics of glacier debris, it is a novel attempt at including moisture in a numerical model of debris-covered glaciers and one that opens up additional avenues for future research.
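
    The diffusivity argument in this abstract can be made concrete with a small sketch. The following is a minimal illustration, not the paper's reservoir parameterization: it assumes nominal material properties, a simple volume-weighted volumetric heat capacity, and a fixed effective conductivity (in reality conductivity also changes with moisture), and computes the effective thermal diffusivity kappa = k / (rho * c) of a debris layer.

        # Illustrative mixture calculation (not the paper's parameterization):
        # effective thermal diffusivity kappa = k / (rho * c) of a debris layer,
        # with a volume-weighted volumetric heat capacity.

        def effective_diffusivity(phi_rock=0.7, phi_water=0.2, phi_ice=0.1, k_eff=1.5):
            """Return effective thermal diffusivity (m^2 s^-1) of a debris layer.

            phi_* are assumed volume fractions (rock, liquid water, ice);
            k_eff is an assumed effective thermal conductivity in W m^-1 K^-1.
            """
            # Nominal properties: density (kg m^-3) and specific heat (J kg^-1 K^-1)
            rho = {"rock": 2650.0, "water": 1000.0, "ice": 917.0}
            c = {"rock": 890.0, "water": 4181.0, "ice": 2100.0}
            # Volumetric heat capacity is the volume-weighted sum of rho * c
            rho_c_eff = (phi_rock * rho["rock"] * c["rock"]
                         + phi_water * rho["water"] * c["water"]
                         + phi_ice * rho["ice"] * c["ice"])
            return k_eff / rho_c_eff

        # Adding water raises the volumetric heat capacity, so kappa drops:
        print(effective_diffusivity(phi_rock=1.0, phi_water=0.0, phi_ice=0.0))
        print(effective_diffusivity(phi_rock=0.7, phi_water=0.3, phi_ice=0.0))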

    A portfolio approach to massively parallel Bayesian optimization

    One way to reduce the time of conducting optimization studies is to evaluate designs in parallel rather than one at a time. For expensive-to-evaluate black boxes, batch versions of Bayesian optimization have been proposed. They work by building a surrogate model of the black box that can be used to select the designs to evaluate efficiently via an infill criterion. Still, with higher levels of parallelization becoming available, the strategies that work for a few tens of parallel evaluations become limiting, in particular due to the complexity of selecting more evaluations. This is even more crucial when the black box is noisy, necessitating more evaluations as well as repeated experiments. Here we propose a scalable strategy that can keep up with massive batching natively, focused on the exploration/exploitation trade-off and a portfolio allocation. We compare the approach with related methods on deterministic and noisy functions, for mono- and multi-objective optimization tasks. These experiments show similar or better performance than existing methods, while being orders of magnitude faster.
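
    To illustrate the kind of batch selection a surrogate enables, here is a minimal sketch. It is not the paper's portfolio criterion: it uses plain Thompson sampling on a scikit-learn Gaussian process as a stand-in, drawing one posterior sample per batch slot and keeping each sample's best candidate (minimization convention); the function name propose_batch and the toy problem are made up for illustration.

        # Minimal batch-selection sketch with a GP surrogate (illustrative only)
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def propose_batch(X_obs, y_obs, candidates, batch_size=100, seed=0):
            """Pick up to `batch_size` designs by Thompson sampling the GP posterior."""
            gp = GaussianProcessRegressor(normalize_y=True).fit(X_obs, y_obs)
            # One posterior draw per batch slot, each evaluated on all candidates
            draws = gp.sample_y(candidates, n_samples=batch_size, random_state=seed)
            picks = np.argmin(draws, axis=0)        # best candidate index per draw
            return candidates[np.unique(picks)]     # deduplicate repeated picks

        # Usage on a toy 1-D problem
        rng = np.random.default_rng(1)
        X_obs = rng.uniform(0.0, 1.0, (10, 1))
        y_obs = np.sin(6.0 * X_obs[:, 0]) + 0.1 * rng.normal(size=10)
        cands = np.linspace(0.0, 1.0, 500).reshape(-1, 1)
        print(propose_batch(X_obs, y_obs, cands).shape)

    One appeal of selecting via whole posterior draws is that the selection cost grows roughly linearly with the batch size, which matters once hundreds of evaluations can run in parallel.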

    Bayesian calibration of stochastic agent based model via random forest

    Agent-based models (ABMs) provide an excellent framework for modeling outbreaks and interventions in epidemiology by explicitly accounting for diverse individual interactions and environments. However, these models are usually stochastic and highly parametrized, requiring precise calibration for predictive performance. When considering realistic numbers of agents and properly accounting for stochasticity, this high-dimensional calibration can be computationally prohibitive. This paper presents a random forest-based surrogate modeling technique to accelerate the evaluation of ABMs and demonstrates its use to calibrate an epidemiological ABM named CityCOVID via Markov chain Monte Carlo (MCMC). The technique is first outlined in the context of CityCOVID's quantities of interest, namely hospitalizations and deaths, by exploring dimensionality reduction via temporal decomposition with principal component analysis (PCA) and via sensitivity analysis. The calibration problem is then presented and samples are generated to best match COVID-19 hospitalization and death numbers in Chicago from March to June 2020. These results are compared with previous approximate Bayesian calibration (IMABC) results and their predictive performance is analyzed, showing improved performance at reduced computational cost.
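
    The general workflow can be sketched as follows; this is an illustration of the idea, not the CityCOVID pipeline, and the function run_mcmc, the Gaussian error model and the flat prior are assumptions. The sketch fits a random-forest surrogate on pre-computed ABM runs (parameters mapped to summary outputs) and then uses it as a cheap likelihood inside a Metropolis-Hastings sampler.

        # Random-forest surrogate + Metropolis-Hastings (illustrative sketch)
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def run_mcmc(theta_train, y_train, y_obs, n_steps=5000, step=0.05, sigma=1.0):
            """theta_train: (n_runs, n_params) ABM inputs; y_train: matching outputs."""
            surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
            surrogate.fit(theta_train, y_train)          # params -> summary outputs

            def log_post(theta):
                pred = surrogate.predict(theta.reshape(1, -1))[0]
                # Gaussian misfit to observations, flat prior (assumed for brevity)
                return -0.5 * np.sum((pred - y_obs) ** 2) / sigma ** 2

            rng = np.random.default_rng(0)
            theta = theta_train.mean(axis=0)
            lp = log_post(theta)
            samples = []
            for _ in range(n_steps):
                prop = theta + step * rng.normal(size=theta.size)
                lp_prop = log_post(prop)
                if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
                    theta, lp = prop, lp_prop
                samples.append(theta.copy())
            return np.array(samples)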

    A standardised sampling protocol for robust assessment of reach-scale fish community diversity in wadeable New Zealand streams

    The New Zealand fish fauna contains species that are affected not only by river system connectivity, but also by catchment- and local-scale changes in land cover, water quality and habitat quality. Consequently, native fish have potential as multi-scale bioindicators of human pressure on stream ecosystems, yet no standardised, repeatable and scientifically defensible methods currently exist for effectively quantifying their abundance or diversity in New Zealand stream reaches. Here we report on the testing of a backpack electrofishing method, modified from that used by the United States Environmental Protection Agency, on a wide variety of wadeable stream reaches throughout New Zealand. Seventy-three first- to third-order stream reaches were fished with a single pass over 150-345 m of stream length. The time taken to sample a reach using single-pass electrofishing ranged from 1 to 8 h. Species accumulation curves indicated that, irrespective of location, continuous sampling of 150 stream metres is required to accurately describe reach-scale fish species richness using this approach. Additional species detection beyond 150 m was rare (<10%), with a single additional species detected at only two of the 17 reaches sampled beyond this distance. A positive relationship was also evident between species detection and area fished, although stream length rather than area appeared to be the better predictor. The method tested provides a standardised and repeatable approach for regional and/or national reporting on the state of New Zealand's freshwater fish communities and trends in richness and abundance over time.
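
    The species-accumulation logic behind the 150 m recommendation is simple to compute. The sketch below assumes a hypothetical data format (a set of species recorded per fished segment, in order along the reach) and tracks cumulative richness with distance; the species names and distances are invented.

        # Species accumulation curve from per-segment detections (hypothetical format)
        def accumulation_curve(segments):
            """segments: iterable of (distance_m, set_of_species) in along-reach order."""
            seen, curve = set(), []
            for distance_m, species in segments:
                seen |= species                      # add any newly detected species
                curve.append((distance_m, len(seen)))
            return curve

        segments = [(30, {"longfin eel", "upland bully"}),
                    (60, {"upland bully", "koaro"}),
                    (90, {"longfin eel"}),
                    (120, {"common bully"}),
                    (150, {"koaro"}),
                    (180, {"common bully"})]         # nothing new detected at 150 m or beyond
        print(accumulation_curve(segments))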

    Trajectory-oriented optimization of stochastic epidemiological models

    Epidemiological models must be calibrated to ground truth for downstream tasks such as producing forward projections or running what-if scenarios. The meaning of calibration changes in the case of a stochastic model, since output from such a model is generally described via an ensemble or a distribution. Each member of the ensemble is usually mapped to a random number seed (explicitly or implicitly). With the goal of finding not only the input parameter settings but also the random seeds that are consistent with the ground truth, we propose a class of Gaussian process (GP) surrogates along with an optimization strategy based on Thompson sampling. This Trajectory Oriented Optimization (TOO) approach produces actual trajectories close to the empirical observations, instead of a set of parameter settings for which only the mean simulation behavior matches the ground truth.
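
    A minimal sketch of the core idea, treating the random seed as an extra input dimension, is shown below. This is illustrative only and not the paper's GP construction or acquisition rule: trajectories are scored by a root-mean-square distance to the observations, and the next (parameter, seed) pair is chosen by Thompson sampling a scikit-learn GP fitted to those scores; the helper names are made up.

        # Score a simulated trajectory against observations, then pick the next run
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def trajectory_distance(sim, obs):
            """RMS distance between a simulated and the observed trajectory."""
            return float(np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2)))

        def next_run(X_done, scores, theta_grid, seed_grid, rng):
            """X_done: past (theta, seed) rows; scores: their trajectory distances."""
            cand = np.array([[t, s] for t in theta_grid for s in seed_grid], dtype=float)
            gp = GaussianProcessRegressor(normalize_y=True).fit(X_done, scores)
            draw = gp.sample_y(cand, n_samples=1, random_state=int(rng.integers(1 << 31)))
            return cand[int(np.argmin(draw))]       # (theta, seed) with best sampled score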

    Characterization and valuation of uncertainty of calibrated parameters in stochastic decision models

    We evaluated how different approaches to characterizing the uncertainty of calibrated parameters of stochastic decision models (DMs) affect the quantified value of such uncertainty in decision making. We used a microsimulation DM of colorectal cancer (CRC) screening to conduct a cost-effectiveness analysis (CEA) of a 10-year colonoscopy screening. We calibrated the natural history model of CRC to epidemiological data with different degrees of uncertainty and obtained the joint posterior distribution of the parameters using a Bayesian approach. We conducted a probabilistic sensitivity analysis (PSA) on all the model parameters with different characterizations of uncertainty of the calibrated parameters and estimated the value of uncertainty of the different characterizations with a value-of-information analysis. All analyses were conducted using high-performance computing resources running the Extreme-scale Model Exploration with Swift (EMEWS) framework. The posterior distribution had high correlation among some parameters; the parameters of the Weibull hazard function for the age of onset of adenomas had the highest posterior correlation, at -0.958. Considering the full posterior distribution versus the maximum-a-posteriori estimate of the calibrated parameters, there is little difference in the spread of the distribution of the CEA outcomes, with a similar expected value of perfect information (EVPI) of $653 and $685, respectively, at a willingness-to-pay (WTP) threshold of $66,000/QALY. Ignoring correlation in the posterior distribution of the calibrated parameters produced the widest distribution of CEA outcomes and the highest EVPI of $809 at the same WTP. Different characterizations of the uncertainty of calibrated parameters affect the expected value of reducing uncertainty in the CEA; ignoring the inherent correlation among calibrated parameters in a PSA overestimates the value of that uncertainty.
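
    The EVPI quantity used above has a standard Monte Carlo form that is easy to state in code. The sketch below is the generic PSA calculation, not the CRC microsimulation itself: nb[i, d] is the net monetary benefit of strategy d under posterior draw i at a given willingness-to-pay threshold, and the toy numbers are made up.

        # Expected value of perfect information (EVPI) from PSA net-benefit samples
        import numpy as np

        def evpi(nb):
            """nb: (n_draws, n_strategies) net-benefit matrix from the PSA."""
            value_with_perfect_info = np.mean(np.max(nb, axis=1))  # best strategy per draw
            value_current_info = np.max(np.mean(nb, axis=0))       # best strategy on average
            return value_with_perfect_info - value_current_info

        # Toy example with two strategies and correlated parameter uncertainty
        rng = np.random.default_rng(0)
        nb = rng.multivariate_normal([100.0, 95.0], [[400.0, 300.0], [300.0, 400.0]], size=10_000)
        print(round(evpi(nb), 2))

    Because EVPI depends on how often the preferred strategy flips across draws, ignoring correlation among calibrated parameters inflates that apparent decision uncertainty, consistent with the higher EVPI reported for the uncorrelated characterization.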

    Towards Improved Uncertainty Quantification of Stochastic Epidemic Models Using Sequential Monte Carlo

    Sequential Monte Carlo (SMC) algorithms represent a suite of robust computational methodologies utilized for state estimation and parameter inference within dynamical systems, particularly in real-time or online environments where data arrive sequentially over time. In this work, we propose an integrated framework that combines a stochastic epidemic simulator with a sequential importance sampling (SIS) scheme to dynamically infer model parameters, which evolve due to social as well as biological processes throughout the progression of an epidemic outbreak and are also influenced by evolving data measurement bias. Through iterative updates of a set of weighted simulated trajectories based on observed data, this framework enables the estimation of posterior distributions for these parameters, thereby capturing their temporal variability and associated uncertainties. Through simulation studies, we showcase the efficacy of SMC in accurately tracking the evolving dynamics of epidemics while appropriately accounting for uncertainties. Moreover, we discuss practical considerations and challenges inherent in implementing SMC for parameter estimation in dynamic epidemiological settings, areas where the substantial computational capabilities of high-performance computing resources can be usefully brought to bear.
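
    A bootstrap particle filter is one concrete instance of the SIS-with-resampling scheme described above. The sketch below is illustrative rather than the paper's simulator: particles carry a time-varying transmission rate that drifts as a random walk, a deliberately crude infection update stands in for the epidemic model, and daily case counts are assimilated with a Poisson weight.

        # Minimal bootstrap particle filter for a drifting transmission rate (illustrative)
        import numpy as np

        def particle_filter(obs_counts, n_particles=2000, pop=100_000, seed=0):
            rng = np.random.default_rng(seed)
            beta = rng.lognormal(mean=np.log(0.3), sigma=0.3, size=n_particles)
            infected = np.full(n_particles, 10.0)
            susceptible = np.full(n_particles, pop - 10.0)
            beta_means = []
            for y in obs_counts:
                beta *= np.exp(0.05 * rng.normal(size=n_particles))  # random-walk drift
                new_inf = beta * infected * susceptible / pop         # expected new cases
                infected = 0.8 * infected + new_inf                   # crude recovery step
                susceptible = np.maximum(susceptible - new_inf, 0.0)
                # Poisson log-weight for the observed count (constant log(y!) omitted)
                log_w = y * np.log(new_inf + 1e-9) - new_inf
                w = np.exp(log_w - log_w.max())
                w /= w.sum()
                idx = rng.choice(n_particles, size=n_particles, p=w)  # multinomial resampling
                beta, infected, susceptible = beta[idx], infected[idx], susceptible[idx]
                beta_means.append(float(beta.mean()))
            return beta_means                                         # filtered estimates of beta_t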