
    Model selection and parameter estimation in structural dynamics using approximate Bayesian computation

    This paper introduces the use of the approximate Bayesian computation (ABC) algorithm for model selection and parameter estimation in structural dynamics. ABC is a likelihood-free method, typically used when the likelihood function is intractable or cannot be expressed in closed form. To circumvent the evaluation of the likelihood function, simulation from a forward model is at the core of the ABC algorithm. The algorithm allows different metrics and summary statistics representative of the data to be used in carrying out Bayesian inference. The efficacy of the algorithm in structural dynamics is demonstrated through three illustrative examples of nonlinear system identification: cubic and cubic-quintic models, the Bouc-Wen model, and the Duffing oscillator. The results suggest that ABC is a promising alternative for dealing with model selection and parameter estimation, specifically for systems with complex behaviours.
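
    As a concrete illustration of the rejection-ABC loop described above, the sketch below infers a single cubic-stiffness parameter for a toy forward model. The forward model, the spectrum-based summary statistic, the Euclidean metric, and all parameter values are illustrative assumptions, not the paper's actual choices.

```python
# A minimal rejection-ABC sketch (illustrative only, not the paper's exact
# algorithm). Assumed stand-ins: a toy forward model with one cubic-stiffness
# parameter k3, a magnitude-spectrum summary statistic, a Euclidean metric.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)

def simulate(k3):
    """Toy forward model: a base harmonic plus a crude cubic-harmonic term."""
    return np.cos(2 * np.pi * t) + 0.05 * k3 * np.cos(6 * np.pi * t)

def summary(x):
    """Summary statistic representative of the data: the magnitude spectrum."""
    return np.abs(np.fft.rfft(x))

# "Observed" data generated from a known parameter value, plus measurement noise
s_obs = summary(simulate(2.0) + rng.normal(0, 0.01, t.size))

accepted, epsilon = [], 5.0                 # acceptance tolerance on the metric
for _ in range(5000):
    k3 = rng.uniform(0.0, 5.0)              # draw a candidate from the prior
    if np.linalg.norm(summary(simulate(k3)) - s_obs) < epsilon:
        accepted.append(k3)                 # keep it: approximate posterior draw

print(f"posterior mean of k3 ~ {np.mean(accepted):.2f} from {len(accepted)} draws")
```

    Model selection works the same way: the prior additionally draws a model index, and the relative acceptance frequencies of the indices approximate the posterior model probabilities.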

    A Meta-Learning Approach to Population-Based Modelling of Structures

    A major problem for machine-learning approaches in structural dynamics is the frequent lack of structural data. Inspired by the recently-emerging field of population-based structural health monitoring (PBSHM), and the use of transfer learning in this novel field, the current work attempts to create models that are able to transfer knowledge within populations of structures. The approach followed here is meta-learning, which aims to create neural-network models that can exploit knowledge from a population of various tasks to perform well on newly-presented tasks, with minimal training and a small number of data samples from the new task. Essentially, the method attempts to perform transfer learning automatically within the population of tasks. For the purposes of population-based structural modelling, the different tasks refer to different structures. The method is applied here to a population of simulated structures, with a view to predicting their responses as a function of some environmental parameters. The meta-learning approach used herein is model-agnostic meta-learning (MAML); it is compared to a traditional data-driven modelling approach, Gaussian process regression, which is quite an effective alternative when few data samples are available for a problem. It is observed that the models trained using meta-learning are able to outperform conventional machine-learning methods in making inferences about structures of the population for which only a small number of samples are available. Moreover, the models prove to learn part of the physics of the problem, making them more robust than plain machine-learning algorithms. A further advantage of the method is that the structures do not need to be parametrised in order for the knowledge transfer to be performed.
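
    To make the bilevel structure of MAML concrete, here is a minimal sketch on an invented toy population, in which each "structure" is a linear response y = a·x to an environmental input. The one-weight model, learning rates, and task distribution are all assumptions chosen so that the gradient through the inner adaptation step can be written analytically.

```python
# A minimal sketch of the MAML bilevel update on an invented population of
# "structures". Each task is a linear response y = a * x (a stand-in for a
# structure's response to an environmental parameter); the one-weight model
# keeps the gradient through the inner adaptation step analytic.
import numpy as np

rng = np.random.default_rng(1)

def loss_grad(w, x, y):
    """Gradient of the mean-squared error 0.5*mean((w*x - y)^2) w.r.t. w."""
    return np.mean((w * x - y) * x)

w = 0.0                      # meta-parameter: the initialisation to be learned
alpha, beta = 0.1, 0.01      # inner- and outer-loop learning rates

for step in range(2000):
    meta_grad = 0.0
    for _ in range(5):                          # batch of tasks (structures)
        a = rng.normal(3.0, 0.5)                # task-specific parameter
        x_s, x_q = rng.normal(size=10), rng.normal(size=10)
        y_s, y_q = a * x_s, a * x_q             # support and query sets
        w_task = w - alpha * loss_grad(w, x_s, y_s)   # inner adaptation step
        # Outer gradient differentiates *through* the inner step (chain rule)
        dw_task_dw = 1.0 - alpha * np.mean(x_s * x_s)
        meta_grad += loss_grad(w_task, x_q, y_q) * dw_task_dw
    w -= beta * meta_grad / 5                   # meta-update of initialisation

print(f"meta-learned initialisation w = {w:.2f} (population task mean: 3.0)")
```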

    On digital twins, mirrors and virtualisations

    A powerful new idea in the computational representation of structures is that of the digital twin. The concept of the digital twin emerged and developed over the last two decades, and has been identified by many industries as a highly-desired technology. The current situation is that individual companies often have their own definitions of a digital twin, and no clear consensus has emerged. In particular, there is no current mathematical formulation of a digital twin. A companion paper to the current one will attempt to present the essential components of the desired formulation. One of those components is identified as a rigorous representation theory of models: how they are validated, and how validation information can be transferred between models. The current paper will outline the basic ingredients of such a theory, based on the introduction of two new concepts: mirrors and virtualisations. The paper is not intended as a passive wish-list; it is intended as a rallying call. The new theory will require the active participation of researchers across a number of domains, including pure and applied mathematics, physics, computer science, and engineering. The paper outlines the main objects of the theory and gives examples of the sort of theorems and hypotheses that might be proved in the new framework.

    Sensitivity analysis of an Advanced Gas-cooled Reactor control rod model

    A model has been made of the primary shutdown system of an Advanced Gas-cooled Reactor nuclear power station. The aim of this paper is to explore the use of sensitivity analysis techniques on this model. The two motivations for performing sensitivity analysis are to quantify how much each uncertain parameter contributes to the uncertainty in the model output, and to make predictions about what could happen if one or several parameters were to change. Global sensitivity analysis techniques based on Gaussian process emulation were used; the software package GEM-SA was used to calculate the main effects, the main-effect index, and the total sensitivity index for each parameter, and these were compared to local sensitivity analysis results. The results suggest that the system performance is resistant to adverse changes in several parameters at once.
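
    The variance-based indices mentioned above can be sketched as follows. In practice the expensive reactor model is replaced by a Gaussian-process emulator (here, via GEM-SA) and the indices are computed from the emulator; in this illustrative sketch, `model` is a cheap analytic stand-in so that the pick-and-freeze Monte Carlo estimators run on their own.

```python
# Pick-and-freeze Monte Carlo estimators for first-order (main-effect) and
# total Sobol' sensitivity indices. Here `model` is a cheap analytic stand-in
# for the emulator's predictive mean, so the sketch runs on its own; the
# function and sample sizes are invented.
import numpy as np

rng = np.random.default_rng(2)

def model(X):
    """Stand-in output: x1 dominates, x2 acts through a square, x3 is nearly inert."""
    return X[:, 0] + X[:, 1] ** 2 + 0.1 * X[:, 2]

n, d = 100_000, 3
A = rng.uniform(-1, 1, (n, d))          # two independent input sample matrices
B = rng.uniform(-1, 1, (n, d))
yA, yB = model(A), model(B)
var_y = np.var(yA)

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                 # A with only column i re-sampled from B
    yABi = model(ABi)
    S_i = np.mean(yB * (yABi - yA)) / var_y          # main-effect index
    ST_i = 0.5 * np.mean((yA - yABi) ** 2) / var_y   # total sensitivity index
    print(f"x{i + 1}: main effect S = {S_i:.3f}, total effect ST = {ST_i:.3f}")
```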

    Regional data assimilation of multi-spectral MOPITT observations of CO over North America

    Chemical transport models (CTMs) driven with high-resolution meteorological fields can better resolve small-scale processes, such as frontal lifting or deep convection, and thus improve the simulation and emission estimates of tropospheric trace gases. In this work, we explore the use of the GEOS-Chem four-dimensional variational (4D-Var) data assimilation system with the nested high-resolution version of the model (0.5° × 0.67°) to quantify North American CO emissions during the period June 2004–May 2005. With optimized lateral boundary conditions, regional inversion analyses can reduce the sensitivity of the CO source estimates to errors in long-range transport and in the distribution of the hydroxyl radical (OH), the main sink for CO. To further limit the potential impact of discrepancies in the chemical aging of air in the free troposphere, associated with errors in OH, we use surface-level multispectral MOPITT (Measurements of Pollution in The Troposphere) CO retrievals, which have greater sensitivity to CO near the surface and reduced sensitivity in the free troposphere compared to previous versions of the retrievals. We estimate that the annual total anthropogenic CO emission from the contiguous 48 US states was 97 Tg CO, a 14% increase on the 85 Tg CO in the a priori. This increase is mainly due to enhanced emissions around the Great Lakes region and along the west coast, relative to the a priori. Sensitivity analyses using different OH fields and lateral boundary conditions suggest a possible error of 20% in these emission estimates, associated with the local North American OH distribution, during summer 2004, when the CO lifetime is short. This OH-related error is 50% smaller than that previously estimated for North American CO emissions using a global inversion analysis. We believe that reducing this error further will require integrating additional observations to provide a strong constraint on the CO distribution across the domain. Despite these limitations, our results show the potential advantages of combining high-resolution regional inversion analyses with global analyses to better quantify regional CO source estimates.
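
    The essence of the 4D-Var machinery used here can be illustrated with a toy one-box problem: choose the emission scaling that minimises a cost combining the a priori term with the misfit to observations distributed over time, evaluated through the forward model. The box model, lifetime, and every number below are invented for illustration; the real system optimises gridded emissions with a model adjoint.

```python
# A toy one-box illustration of the 4D-Var idea: optimise an emission scaling
# so that the forward model matches observations spread over time, balanced
# against the a priori. The box model, CO lifetime, and all values here are
# invented; GEOS-Chem's adjoint-based system is vastly more complex.
import numpy as np
from scipy.optimize import minimize_scalar

tau, E = 30.0, 2.0           # assumed CO lifetime (days) and prior emission rate
dt, n_steps = 1.0, 120

def forward(scale, c0=50.0):
    """Integrate dc/dt = scale*E - c/tau; return daily concentrations."""
    c, out = c0, []
    for _ in range(n_steps):
        c += dt * (scale * E - c / tau)
        out.append(c)
    return np.array(out)

# Synthetic truth with 14% higher emissions, observed noisily every 5 days
rng = np.random.default_rng(3)
obs_idx = np.arange(0, n_steps, 5)
y = forward(1.14)[obs_idx] + rng.normal(0, 1.0, obs_idx.size)

def cost(scale, s_prior=1.0, sig_a=0.5, sig_o=1.0):
    """4D-Var cost: a priori (background) term + observation misfit over time."""
    misfit = forward(scale)[obs_idx] - y
    return ((scale - s_prior) / sig_a) ** 2 + np.sum((misfit / sig_o) ** 2)

res = minimize_scalar(cost, bounds=(0.5, 2.0), method="bounded")
print(f"optimised emission scaling: {res.x:.3f} (synthetic truth: 1.14)")
```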

    Profiles of CH_4, HDO, H_2O, and N_2O with improved lower tropospheric vertical resolution from Aura TES radiances

    Thermal infrared (IR) radiances measured near 8 microns contain information about the vertical distribution of water vapor (H_2O), the water isotopologue HDO, and methane (CH_4), key gases in the water and carbon cycles. Previous versions (Version 4 and earlier) of the TES profile retrieval algorithm used a "spectral-window" approach to minimize uncertainty from interfering species, at the expense of reduced vertical resolution and sensitivity. In this manuscript we document changes to the vertical resolution and uncertainties of the TES Version 5 retrieval algorithm. In this version, joint estimates of H_2O, HDO, CH_4, and nitrous oxide (N_2O) are made using radiances from almost the entire spectral region between 1100 cm^(−1) and 1330 cm^(−1). The TES retrieval constraints are also modified in order to make better use of this information. The new H_2O estimates show improved vertical resolution in the lower troposphere and boundary layer, while the new HDO/H_2O estimates can now profile the HDO/H_2O ratio between 925 hPa and 450 hPa in the tropics and during summertime at high latitudes. The new retrievals are sensitive to methane in the free troposphere between 800 and 150 hPa, with peak sensitivity near 500 hPa, whereas in previous versions the sensitivity peaked at 200 hPa. However, the upper-tropospheric methane concentrations are biased high relative to the lower troposphere by approximately 4% on average. This bias is likely related to temperature, calibration, and/or methane spectroscopy errors. The bias can be mitigated by normalizing the CH_4 estimate by the ratio of the N_2O estimate to the N_2O prior, under the assumption that the same systematic error affects both the N_2O and CH_4 estimates. We demonstrate that applying this ratio theoretically reduces the error in the CH_4 estimate arising from non-retrieved parameters that jointly affect both the N_2O and CH_4 estimates. The relative upper-troposphere-to-lower-troposphere bias is approximately 2.8% after this correction. Quality flags based upon the vertical variability of the CH_4 and N_2O estimates can be used to reduce the bias further. While these new CH_4, HDO/H_2O, and H_2O estimates are consistent with previous TES retrievals in the altitude regions where the sensitivities overlap, future comparisons with independent profile measurements will be required to characterize the biases of these new retrievals and to determine whether the uncertainties calculated using the new constraints are consistent with actual uncertainties.
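
    The N_2O-based correction has a simple algebraic form: since N_2O is well mixed, the ratio of the retrieved N_2O profile to its prior isolates the shared systematic error, and dividing the CH_4 profile by that ratio removes the common part. A sketch with invented profile values:

```python
# The correction in one line: divide the retrieved CH_4 profile by the
# fractional departure of retrieved N_2O from its (well-mixed) prior, which
# carries the shared systematic error. Profile values are invented.
import numpy as np

ch4_retrieved = np.array([1850.0, 1820.0, 1795.0])  # ppb at, say, 300/500/700 hPa
n2o_retrieved = np.array([328.0, 326.5, 325.0])     # ppb, same levels
n2o_prior     = np.array([320.0, 320.0, 320.0])     # ppb, well-mixed prior

ch4_corrected = ch4_retrieved * (n2o_prior / n2o_retrieved)
print(np.round(ch4_corrected, 1))   # common high bias divided out
```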

    Novelty Detection in a Cantilever Beam using Extreme Function Theory

    Damage detection and localisation in beam-like structures using mode-shape features is well-established in the research community. It is known that inserting a localised anomaly, such as a crack, into a cantilever beam causes its mode shapes to diverge from the usual deflection path. These novelties can hence be detected by a machine learner trained exclusively on the modal data taken from the pristine beam. Nevertheless, a major issue in current practice is discerning between damage-related outliers and simple noise in the observations, so as to avoid false alarms. Extreme functions are here introduced as a viable means of comparison. By combining Extreme Value Theory (EVT) and Gaussian Process (GP) regression, one can investigate functions as a whole, rather than focusing on their constituent data points. Indeed, n discrete observations of a mode shape sampled at D points can be treated as D one-dimensional sets of n randomly distributed observations. For any given point, it is then possible to define the Probability Density Function (PDF) and the Cumulative Distribution Function (CDF), whose minima, according to EVT, belong to one of three feasible extreme-value distributions: Weibull, Fréchet, or Gumbel. Thus, these functions, intended as vectors of sampled data, can be compared and classified. Anomalous displacement values that could indicate the presence of a crack are thereby identified and related to damage. In this paper, the effectiveness of the proposed methodology is verified on numerically-simulated noisy data, considering several crack locations, levels of damage severity (i.e., depths of the crack), and signal-to-noise ratios.
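
    A simplified sketch of the extreme-function idea follows: compute, for each whole mode shape, its largest standardised deviation from a reference model, fit an extreme-value (here, Gumbel) distribution to those per-function extremes from pristine data, and flag new shapes beyond a high quantile. The paper's reference model is a Gaussian process; a pointwise mean/standard-deviation reference is substituted here to keep the sketch short, and all data are synthetic.

```python
# Simplified extreme-function sketch on synthetic data: the per-function
# feature is the largest standardised deviation of a whole mode shape from a
# reference model; a Gumbel law is fitted to the healthy extremes and a high
# quantile is used as the alarm threshold. A pointwise mean/std reference
# stands in for the paper's Gaussian process.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 50)                       # D = 50 measurement points

def mode_shape(x):
    """First bending mode of a cantilever (beta*L = 1.875)."""
    return (np.cosh(1.875 * x) - np.cos(1.875 * x)
            - 0.734 * (np.sinh(1.875 * x) - np.sin(1.875 * x)))

pristine = mode_shape(x) + rng.normal(0, 0.01, (200, x.size))  # n = 200 shapes
mu, sd = pristine.mean(0), pristine.std(0)                     # reference model

def extreme(shape):
    """Whole-function feature: max standardised deviation over all D points."""
    return np.max(np.abs(shape - mu) / sd)

# EVT step: maxima follow an extreme-value law; fit Gumbel to healthy extremes
loc, scale = gumbel_r.fit([extreme(s) for s in pristine])
threshold = gumbel_r.ppf(0.99, loc, scale)      # ~1% false-alarm rate

healthy = mode_shape(x) + rng.normal(0, 0.01, x.size)
cracked = healthy.copy()
cracked[30:] += 0.04                            # local deflection change (crack)
print(f"healthy {extreme(healthy):.2f}, cracked {extreme(cracked):.2f}, "
      f"threshold {threshold:.2f}")
```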

    On improved fail-safe sensor distributions for a structural health monitoring system

    Sensor placement optimization (SPO) is usually applied during the design of a structural health monitoring sensor system to ensure that effective data are collected. However, the failure of a single sensor may significantly degrade the expected performance of the entire system. It is therefore necessary to study optimal sensor placement while accounting for the possibility of sensor failure. In this article, the research focusses on SPO that yields a fail-safe sensor distribution, i.e. one whose sub-distributions still perform well after a sensor fails. The performance of fail-safe sensor distributions with multiple sensors placed in the same position is also studied. The adopted data sets include the mode shapes and corresponding labels of structural states from a series of tests on a glider wing. A genetic algorithm is used to search for sensor deployments, and partial results are validated by an exhaustive search. Two types of optimization objective are investigated: one for modal identification and the other for damage identification. The results show that the proposed fail-safe sensor optimization method is beneficial for balancing the system performance before and after sensor failure.
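
    The fail-safe objective can be stated compactly: score a candidate layout by the worst fitness over all of its single-failure sub-distributions, then optimise that worst-case score. The sketch below uses an effective-independence-style fitness and exhaustive search over a small invented problem in place of the paper's data and genetic algorithm.

```python
# The fail-safe objective in miniature: a layout's score is the worst fitness
# over all single-sensor-failure sub-distributions. An effective-independence-
# style fitness and exhaustive search over an invented 12-position, 3-mode
# problem stand in for the paper's data and genetic algorithm.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
Phi = rng.normal(size=(12, 3))     # mode-shape matrix: 12 positions x 3 modes

def fitness(sensors):
    """Determinant of the Fisher information of the selected mode-shape rows."""
    A = Phi[list(sensors)]
    return np.linalg.det(A.T @ A)

def fail_safe_fitness(sensors):
    """Worst fitness over all sub-distributions with one sensor failed."""
    return min(fitness([s for s in sensors if s != drop]) for drop in sensors)

layouts = list(combinations(range(12), 5))
best_nominal = max(layouts, key=fitness)
best_failsafe = max(layouts, key=fail_safe_fitness)
print("nominal optimum  :", best_nominal,
      f"-> worst case after a failure {fail_safe_fitness(best_nominal):.2f}")
print("fail-safe optimum:", best_failsafe,
      f"-> worst case after a failure {fail_safe_fitness(best_failsafe):.2f}")
```

    In a realistic search space, the genetic algorithm replaces the exhaustive enumeration, with the same worst-case fitness as the objective.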

    Managing bereavement in the classroom: a conspiracy of silence?

    The ways in which teachers in British schools manage bereaved children are under-reported. This article reports the impact of students' bereavement, and their subsequent management, in primary and secondary school classrooms in Southeast London. Thirteen school staff working in inner-city schools took part in in-depth interviews that focused on the impact of bereaved children on the school and on how teachers responded to these children. All respondents had previously had contact with a local child bereavement service that aims to provide support, advice, and consultancy to children, their parents, and teachers. Interviews were audiotaped, transcribed verbatim, and analyzed using ATLAS.ti. Three main themes were identified from the analysis of the interview data. Firstly, British society, culture, local communities, and the family were significant influences on these teachers' involvement with bereaved students. Secondly, school staff managed bereaved students through contact with other adults and through practical classroom measures such as "time out" cards and contact books. Lastly, teachers felt they had to be strong, even when they were distressed. Surprise was expressed at the mature reaction of secondary school students to the deaths of others. The article recommends that future research concentrate on finding the most effective ways of routinely supporting bereaved children, their families, and teachers.