
    Uncertainty quantification for spatial field data using expensive computer models: refocussed Bayesian calibration with optimal projection

    In this thesis, we present novel methodology for emulating and calibrating computer models with high-dimensional output. Computer models for complex physical systems, such as climate, are typically expensive and time-consuming to run. Because such models cannot be run many times, statistical models ('emulators') are used as fast approximations of the computer model, fitted based on a small number of runs of the expensive model, allowing more of the input parameter space to be explored. Common choices for emulators are regressions and Gaussian processes. The input parameters of the computer model that lead to output most consistent with the observations of the real-world system are generally unknown, hence computer models require careful tuning. Bayesian calibration and history matching are two methods that can be combined with emulators to search for the best input parameter setting of the computer model (calibration), or to remove regions of parameter space unlikely to give output consistent with the observations, if the computer model were to be run at these settings (history matching). When calibrating computer models, it has been argued that fitting regression emulators is sufficient, due to the large, sparsely-sampled input space. We examine this for a range of examples with different features and input dimensions, and find that fitting a correlated residual term in the emulator is beneficial, both for more accurately removing regions of the input space and for identifying parameter settings that give output consistent with the observations. We demonstrate and advocate for multi-wave history matching followed by calibration for tuning.

    In order to emulate computer models with large spatial output, projection onto a low-dimensional basis is commonly used. The standard method for selecting a basis is to use n runs of the computer model to compute principal components via the singular value decomposition (the SVD basis), with the coefficients given by this projection emulated. We show that when the n runs used to define the basis do not contain important patterns found in the real-world observations of the spatial field, linear combinations of the SVD basis vectors will not generally be able to represent these observations. The results of a calibration exercise are then meaningless, as we converge to incorrect parameter settings, likely assigning zero posterior probability to the correct region of input space. We show that this inadequacy of the SVD basis is very common, and present in every climate model field we looked at. We develop a method for combining important patterns from the observations with signal from the model runs, giving a calibration-optimal rotation of the SVD basis that allows a search of the output space for fields consistent with the observations. We illustrate this method by performing two iterations of history matching on a climate model, CanAM4. We also develop a method for beginning to assess model discrepancy for climate models, where modellers would first like to see whether the model can achieve a certain accuracy, before allowing specific model structural errors to be accounted for.

    We show that calibrating using the basis coefficients often leads to poor results, with fields consistent with the observations ruled out in history matching. We therefore develop a method for adjusting for basis projection when history matching, so that an efficient and more accurate implausibility bound can be derived that is consistent with history matching using the computationally prohibitive spatial field.
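    As a rough illustration of the basis problem described above (a hedged sketch, not the thesis code: the ensemble, the observation, and the truncation level q below are toy stand-ins), the following builds an SVD basis from an ensemble of runs, projects an observed field onto it, and reports how much of the observation the basis fails to capture:

    import numpy as np

    def svd_basis(ensemble, q):
        """ensemble: (n_runs, n_grid) matrix of spatial fields, one run per row."""
        mean = ensemble.mean(axis=0)
        centred = ensemble - mean
        # Rows of vt are the principal spatial patterns (the 'SVD basis').
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        return mean, vt[:q].T                         # basis: (n_grid, q), orthonormal columns

    def reconstruction_error(obs, mean, basis):
        """Relative error left after projecting the observation onto the basis."""
        coeffs = basis.T @ (obs - mean)               # orthonormal basis, so projection is a dot product
        recon = mean + basis @ coeffs
        return np.linalg.norm(obs - recon) / np.linalg.norm(obs - mean)

    rng = np.random.default_rng(0)
    runs = rng.normal(size=(60, 2000))                # 60 runs of a 2000-point spatial field (toy data)
    obs = rng.normal(size=2000)                       # 'observations' with patterns absent from the runs
    mean, basis = svd_basis(runs, q=5)
    print(reconstruction_error(obs, mean, basis))

    A relative error close to one means that no linear combination of the retained basis vectors can reproduce the observed patterns, which is the situation the calibration-optimal rotation is designed to repair.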

    At what point do academics forego citations for journal status?

    The limitations of journal-based citation metrics for assessing individual researchers are well known. However, the way in which these assessment systems differentially shape research practices within disciplines is less well understood. Presenting evidence from a new analysis of business and management academics, Rossella Salandra, Ammon Salter and James Walker explore how journal status is valued by these academics, and the point at which journal status becomes more prized than academic influence.

    Efficient calibration for high-dimensional computer model output using basis methods

    Calibration of expensive computer models with high-dimensional output fields can be approached via history matching. If the entire output field is matched, with patterns or correlations between locations or time points represented, calculating the distance metric between observational data and model output for a single input setting requires a time-intensive inversion of a high-dimensional matrix. By using a low-dimensional basis representation rather than emulating each output individually, we define a metric in the reduced space that allows the implausibility for the field to be calculated efficiently, with only small matrix inversions required, using a projection that is consistent with the variance specifications in the implausibility. We show that projection using the L_2 norm can lead to different conclusions, as the ordering of points is not maintained on the basis, with implications for both history matching and probabilistic methods. We demonstrate the scalability of our method through history matching of the Canadian atmosphere model, CanAM4, comparing basis methods to emulation of each output individually, and showing that the basis approach can be more accurate, whilst also being more efficient.
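    A minimal sketch of the reduced-space idea follows (illustrative only, not the paper's exact formulation; the basis, the variance matrix W and all dimensions are invented stand-ins). Projecting with the norm induced by W, rather than the ordinary L_2 norm, keeps coefficient-space comparisons consistent with the full-field metric that would otherwise require the large inversion:

    import numpy as np

    def project(field, basis, W=None):
        """Return basis coefficients; W=None gives the ordinary L_2 (least-squares) projection."""
        if W is None:
            return np.linalg.lstsq(basis, field, rcond=None)[0]
        Winv_B = np.linalg.solve(W, basis)
        return np.linalg.solve(basis.T @ Winv_B, Winv_B.T @ field)

    def field_implausibility(z, model_field, W):
        """Full-field implausibility: requires solving with the high-dimensional W."""
        r = z - model_field
        return float(r @ np.linalg.solve(W, r))

    rng = np.random.default_rng(1)
    n_grid, q = 500, 4
    basis = np.linalg.qr(rng.normal(size=(n_grid, q)))[0]   # orthonormal basis, (n_grid, q)
    W = np.diag(rng.uniform(0.5, 2.0, n_grid))               # toy variance field (obs error + discrepancy)
    z = rng.normal(size=n_grid)                               # observations
    f = rng.normal(size=n_grid)                               # one model run

    # The two projections generally give different coefficients, so ranking runs by
    # L_2 coefficients can disagree with the full-field implausibility.
    print(project(z, basis), project(z, basis, W))
    print(field_implausibility(z, f, W))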

    Cross-Validation Based Adaptive Sampling for Multi-Level Gaussian Process Models

    Complex computer codes or models can often be run in a hierarchy of different levels of complexity, ranging from the very basic to the sophisticated. The top levels in this hierarchy are typically expensive to run, which limits the number of possible runs. To make use of runs over all levels, and crucially to improve predictions at the top level, we use multi-level Gaussian process emulators (GPs). The accuracy of the GP greatly depends on the design of the training points. In this paper, we present a multi-level adaptive sampling algorithm to sequentially increase the set of design points to optimally improve the fit of the GP. The normalised expected leave-one-out cross-validation error is calculated at all unobserved locations, and a new design point is chosen using expected improvement combined with a repulsion function. This criterion is calculated for each model level, weighted by an associated cost for the code at that level. Hence, at each iteration, our algorithm optimises for both the new point location and the model level. The algorithm is extended to batch selection as well as single point selection, where batches can be designed for single levels or optimally across all levels.
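    The selection step described above might look roughly as follows (an assumed simplification, not the authors' implementation; the repulsion form, the predicted leave-one-out errors and the level costs are placeholders):

    import numpy as np

    def repulsion(candidates, design, lengthscale=0.1):
        """Product of (1 - correlation) with existing design points; near zero close to a design point."""
        d2 = ((candidates[:, None, :] - design[None, :, :]) ** 2).sum(-1)
        return np.prod(1.0 - np.exp(-d2 / lengthscale**2), axis=1)

    def choose_next(candidates, designs, predicted_loo_errors, costs):
        """designs / predicted_loo_errors / costs are per-level lists; returns (level, point)."""
        best = (None, None, -np.inf)
        for level, (design, err, cost) in enumerate(zip(designs, predicted_loo_errors, costs)):
            score = err * repulsion(candidates, design) / cost   # error-based gain, damped, per unit cost
            i = int(np.argmax(score))
            if score[i] > best[2]:
                best = (level, candidates[i], score[i])
        return best[0], best[1]

    rng = np.random.default_rng(2)
    cands = rng.uniform(size=(200, 2))
    designs = [rng.uniform(size=(20, 2)), rng.uniform(size=(6, 2))]   # cheap level, expensive level
    errors = [rng.uniform(size=200), rng.uniform(size=200)]           # stand-ins for expected LOO error
    print(choose_next(cands, designs, errors, costs=[1.0, 10.0]))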

    A new method to identify key match-play behaviours of young soccer players: Development of the Hull Soccer Behavioural Scoring Tool

    The aim of this research was to assess the validity and reliability of a newly developed scoring tool, designed for monitoring youth soccer players during match-play performance to support coaches/scouts with the talent identification process. The method used to design the Hull Soccer Behavioural Scoring Tool comprised a five-stage process of (i) conducting an initial literature review to establish content validity, (ii) gaining content validity through a cross-sectional online survey, (iii) establishing face validity via expert coach feedback, (iv) conducting inter-rater reliability tests, and (v) conducting intra-rater reliability tests. In stage two, twenty-two soccer academy practitioners completed an online survey, which revealed that player behaviours such as resilience, competitiveness, and decision making were valued as the most important behavioural characteristics by practitioners (90.9%), whilst X-factor was rated as the least important by a substantial proportion (27.2%). Stages three to five of the testing procedure included a sample of four academy coaches not involved in the preceding stages. Twenty male collegiate soccer players (under-16 to under-18) took part in four-versus-four small-sided games (SSGs) in a 'round-robin' tournament across three weeks, which accumulated 14 SSGs, 100–140 minutes of playing time and 70–98 individual player grades. Two of the four academy coaches watched the SSGs and used the Hull Soccer Behavioural Scoring Tool to assess live evidence of desirable player behaviours, which was subsequently followed by retrospective video analysis for intra-rater reliability testing. The remaining two academy coaches watched the same retrospective SSG video footage to test for inter-rater reliability. Reliability results revealed an acceptable level of agreement, with inter-rater scores between 81.25% and 89.9% and intra-rater scores between 80.35% and 99.4%. Preliminary evidence here suggests that the Hull Soccer Behavioural Scoring Tool is both a valid and reliable method to assess desirable player behaviours during talent identification processes. Thus, youth soccer practitioners and researchers should seek to test and further validate the tool in order to confirm its utility as a means of measuring behavioural characteristics of youth soccer players.
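    The abstract does not give the agreement formula used; one plausible reading of the reported reliability scores is a simple percent agreement between two raters' grades for the same players, sketched below with made-up grades:

    def percent_agreement(rater_a, rater_b):
        """Percentage of grades on which two raters agree exactly (illustrative definition only)."""
        matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
        return 100.0 * matches / len(rater_a)

    print(percent_agreement([3, 4, 2, 5, 4], [3, 4, 3, 5, 4]))  # 80.0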

    Time-reversed measurement of the 18Ne(α,p)21Na cross-section for Type I X-ray bursts

    Type I X-ray bursts (XRB) are highly energetic and explosive astrophysical events, observed as very sudden and intense emissions of X-rays. X-ray bursts are believed to be powered by a thermonuclear runaway on the surface of a neutron star in a binary system. XRB models depend on accurate information about the nuclear reactions involved. The 18Ne(α,p)21Na reaction is considered to be of great importance as a possible breakout route from the Hot-CNO cycle preceding the thermonuclear runaway. In this thesis work, the 18Ne(α,p)21Na reaction cross-section was indirectly measured at Ecm(α,p) = 2568, 1970, 1758, 1683, 1379 and 1194 keV, using the time-reverse 21Na(p,α)18Ne reaction. Since the time-reverse approach only connects the ground states of 21Na and 18Ne, the cross sections measured here represent lower limits of the 18Ne(α,p)21Na cross-section. An experiment was performed using the ISAC-II facility at TRIUMF, Vancouver, Canada. A beam of 21Na ions was delivered to a polyethylene (CH2)n target placed within the TUDA scattering chamber. The 18Ne and 4He ions produced in the reaction were detected using silicon strip detectors, with time-of-flight and ΔE/E particle identification techniques used to distinguish the ions from background. The measurement at Ecm = 1194 keV is the lowest energy measurement to date of the 18Ne(α,p)21Na cross section. The measured cross sections presented in this thesis were compared to the NON-SMOKER Hauser-Feshbach statistical calculations of the cross section and to the unpublished results of another time-reverse investigation performed by a collaboration at the Argonne National Laboratory. A 18Ne(α,p)21Na reaction rate calculation based on the measured cross sections was performed. In comparison with previous reaction rate estimates, our results indicate a rate that is about a factor of 2-3 lower than Hauser-Feshbach calculations, suggesting that a statistical approach may not be appropriate for cross section calculations for nuclei in this mass region. The astrophysical consequences of our new results nevertheless appear to be negligible; these are also presented in this thesis.
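    Although not written out in the abstract, time-reverse measurements of this kind rely on the standard reciprocity (detailed balance) relation between forward and reverse cross-sections at the same centre-of-mass energy. With the ground-state spins J(18Ne) = J(α) = 0, J(21Na) = 3/2 and J(p) = 1/2, this takes the form

    \sigma_{\,^{18}\mathrm{Ne}(\alpha,p)\,^{21}\mathrm{Na}}
      = \frac{(2J_{\,^{21}\mathrm{Na}}+1)(2J_{p}+1)}{(2J_{\,^{18}\mathrm{Ne}}+1)(2J_{\alpha}+1)}
        \,\frac{k_{p}^{2}}{k_{\alpha}^{2}}\,\sigma_{\,^{21}\mathrm{Na}(p,\alpha)\,^{18}\mathrm{Ne}}
      = 8\,\frac{k_{p}^{2}}{k_{\alpha}^{2}}\,\sigma_{\,^{21}\mathrm{Na}(p,\alpha)\,^{18}\mathrm{Ne}},

    where k_p and k_α are the centre-of-mass wave numbers in the proton and alpha channels. Because only the ground states are connected in the measured (p,α) channel, the resulting (α,p) cross-section is a lower limit, as noted above.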

    A TPD and RAIRS comparison of the low temperature behavior of benzene, toluene, and xylene on graphite

    The first comparative study of the surface behavior of four small aromatic molecules, benzene, toluene, p-xylene, and o-xylene, adsorbed on graphite at temperatures ≤30 K, is presented. Intermolecular interactions are shown to be important in determining the growth of the molecules on the graphite surface at low (monolayer) exposures. Repulsive intermolecular interactions dominate the behavior of benzene and toluene. By contrast, stronger interactions with the graphite surface are observed for the xylene isomers, with islanding observed for o-xylene. Multilayer desorption temperatures and energies increase with the size of the molecule, ranging from 45.5 to 59.5 kJ mol⁻¹ for benzene and p-xylene, respectively. Reflection absorption infrared spectroscopy gives insight into the effects of thermal processing on the ordering of the molecules. Multilayer benzene, p-xylene, and o-xylene form crystalline structures following annealing of the ice. However, we do not observe an ordered structure for toluene in this study. The ordering of p-xylene shows a complex dependence on both the annealing temperature and exposure.
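    The abstract does not state how the multilayer desorption energies were extracted, but TPD analyses of this kind are commonly based on the Polanyi-Wigner rate equation, with zero-order kinetics assumed for multilayer desorption:

    -\frac{\mathrm{d}\theta}{\mathrm{d}t} = \nu\,\theta^{n}\exp\!\left(-\frac{E_{\mathrm{des}}}{RT}\right),
    \qquad n \approx 0 \ \text{for multilayers},

    where θ is the coverage, ν the pre-exponential factor, and E_des the desorption energy obtained from the shape or position of the desorption trace.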

    Quantifying spatio-temporal boundary condition uncertainty for the North American deglaciation

    Ice sheet models are used to study the deglaciation of North America at the end of the last ice age (the past 21,000 years), so that we might understand whether and how existing ice sheets may reduce or disappear under climate change. As well as a small number of parameters controlling the physical behaviour of the ice mass, ice sheet models require boundary conditions for climate (spatio-temporal fields of temperature and precipitation, typically on regular grids and at monthly intervals). The behaviour of the ice sheet is highly sensitive to these fields, and there are relatively few data from geological records to constrain them, as the land was covered with ice. We develop a methodology for generating a range of plausible boundary conditions, using a low-dimensional basis representation of the spatio-temporal input. We derive this basis by combining key patterns, extracted from a small ensemble of climate model simulations of the deglaciation, with sparse spatio-temporal observations. By jointly varying the ice sheet parameters and basis vector coefficients, we run ensembles of the Glimmer ice sheet model that simultaneously explore both climate and ice sheet model uncertainties. We use these to calibrate the ice sheet physics and boundary conditions for Glimmer, by ruling out regions of the joint coefficient and parameter space via history matching. We use binary ice/no ice observations from reconstructions of past ice sheet margin position to constrain this space, introducing a novel metric for history matching to binary data.
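    A minimal sketch of the generation step described above (illustrative names and sizes only; the real basis combines climate model patterns with the sparse observations): candidate boundary conditions are reconstructed as a mean field plus a linear combination of basis vectors, with the coefficients sampled jointly with the ice sheet parameters so that a single design explores both sources of uncertainty:

    import numpy as np

    rng = np.random.default_rng(3)
    n_space_time = 5000                                # flattened spatio-temporal grid (toy size)
    q, n_params, n_design = 6, 3, 100

    basis = rng.normal(size=(n_space_time, q))         # stand-in for the combined basis vectors
    ensemble_mean = rng.normal(size=n_space_time)
    coeff_range = np.tile([[-2.0], [2.0]], (1, q))     # plausible coefficient bounds (toy values)
    param_range = np.array([[0.0, 1.0, 0.5],           # ice sheet parameter bounds (toy values)
                            [1.0, 5.0, 2.0]])

    # One joint design over (basis coefficients, ice sheet parameters).
    u = rng.uniform(size=(n_design, q + n_params))
    lower = np.concatenate([coeff_range[0], param_range[0]])
    upper = np.concatenate([coeff_range[1], param_range[1]])
    design = lower + u * (upper - lower)

    coeffs, ice_params = design[:, :q], design[:, q:]
    boundary_conditions = ensemble_mean + coeffs @ basis.T   # (n_design, n_space_time) candidate fields
    print(boundary_conditions.shape, ice_params.shape)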