1,157 research outputs found

    Identification of Model Uncertainty via Optimal Design of Experiments Applied to a Mechanical Press

    Full text link
    In engineering applications, almost all processes are described with the help of models. Forming machines in particular rely heavily on mathematical models for control and condition monitoring. Inaccuracies during the modeling, manufacturing and assembly of these machines induce model uncertainty, which impairs the controller's performance. In this paper, we propose an approach to identify model uncertainty using parameter identification, optimal design of experiments and hypothesis testing. The experimental setup is characterized by optimal sensor positions such that specific model parameters can be determined with minimal variance. This allows the computation of confidence regions in which the real parameters, or the parameter estimates from different test sets, have to lie. We claim that inconsistencies between estimated parameter values that persist even when their approximate confidence ellipsoids are taken into account cannot be explained by data uncertainty and are therefore indicators of model uncertainty. The proposed method is demonstrated using a component of the 3D Servo Press, a multi-technology forming machine that combines spindles with eccentric servo drives.
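
The optimal-design idea described above can be illustrated with a short sketch: selecting sensor positions that minimise the variance of the estimated parameters via the Fisher information matrix (D-optimality). This is not the authors' implementation; the linear model, noise level and candidate positions are assumptions made only for the example.

```python
import numpy as np
from itertools import combinations

candidate_positions = np.linspace(0.0, 1.0, 11)   # possible sensor locations (assumed)
sigma = 0.05                                      # assumed measurement noise standard deviation
n_sensors = 3                                     # number of sensors available

def fisher_information(positions):
    # Jacobian of the assumed model y = theta_1 * x + theta_2 with respect to (theta_1, theta_2)
    J = np.column_stack([positions, np.ones_like(positions)])
    return J.T @ J / sigma**2

# D-optimal design: maximise det(FIM), i.e. minimise the volume of the
# approximate confidence ellipsoid of the parameter estimates.
best = max(combinations(candidate_positions, n_sensors),
           key=lambda pos: np.linalg.det(fisher_information(np.array(pos))))

cov = np.linalg.inv(fisher_information(np.array(best)))
print("D-optimal sensor positions:", best)
print("Parameter covariance at the optimum:\n", cov)
```

For this linear model the D-optimal design pushes the sensors towards the ends of the interval, where the parameters are most identifiable; the resulting covariance defines the confidence ellipsoid used to compare estimates from different test sets.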

    Towards offshore wind digital twins: Application to jacket substructures

    Get PDF

    Performance-driven measurement system design for structural identification

    Get PDF
    Much progress has been achieved in the field of structural identification due to a better understanding of uncertainties, improvements in sensor technology and cost reductions. However, data interpretation remains a bottleneck: too often, too much data is acquired, hindering interpretation. In this paper, a methodology is described that explicitly indicates when instrumentation can decrease the ability to interpret data. The approach includes uncertainties along with dependencies that may affect model predictions. Two performance indices are used to optimize measurement system designs: monitoring cost and expected identification performance. A case study shows that the approach is able to justify a reduction in monitoring costs of 50% compared with an initial measurement configuration.
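
As a rough illustration of trading monitoring cost against expected identification performance, the sketch below scores candidate sensor subsets by the expected fraction of candidate model instances they can falsify and keeps the cheapest configuration that reaches a target. The costs, model predictions, threshold and the use of a single assumed true model are invented for the example; this is not the paper's methodology.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_models, n_sensors = 200, 6
predictions = rng.normal(size=(n_models, n_sensors))   # candidate-model predictions at sensor locations
true_model = 17                                        # one instance plays the role of the real structure
threshold = 1.0                                        # falsification threshold (combined uncertainty bound)
cost = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0])        # assumed monitoring cost of each candidate sensor

def expected_performance(subset):
    # A candidate model is falsified when any selected sensor's residual exceeds the threshold.
    residuals = np.abs(predictions[:, subset] - predictions[true_model, subset])
    return (residuals > threshold).any(axis=1).mean()

# Keep the cheapest sensor configuration that reaches the target identification performance.
target, best = 0.8, None
for k in range(1, n_sensors + 1):
    for subset in combinations(range(n_sensors), k):
        perf, c = expected_performance(list(subset)), cost[list(subset)].sum()
        if perf >= target and (best is None or c < best[1]):
            best = (subset, c, perf)

print("Cheapest configuration meeting the target:", best)
```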

    Environmental Sensor Placement with Convolutional Gaussian Neural Processes

    Full text link
    Environmental sensors are crucial for monitoring weather conditions and the impacts of climate change. However, it is challenging to maximise measurement informativeness and place sensors efficiently, particularly in remote regions like Antarctica. Probabilistic machine learning models can evaluate placement informativeness by predicting the uncertainty reduction provided by a new sensor. Gaussian process (GP) models are widely used for this purpose, but they struggle with capturing complex non-stationary behaviour and scaling to large datasets. This paper proposes using a convolutional Gaussian neural process (ConvGNP) to address these issues. A ConvGNP uses neural networks to parameterise a joint Gaussian distribution at arbitrary target locations, enabling flexibility and scalability. Using simulated surface air temperature anomaly over Antarctica as ground truth, the ConvGNP learns spatial and seasonal non-stationarities, outperforming a non-stationary GP baseline. In a simulated sensor placement experiment, the ConvGNP better predicts the performance boost obtained from new observations than GP baselines, leading to more informative sensor placements. We contrast our approach with physics-based sensor placement methods and propose future work towards an operational sensor placement recommendation system. This system could help to realise environmental digital twins that actively direct measurement sampling to improve the digital representation of reality. Comment: In review for the Climate Informatics 2023 special issue of Environmental Data Science.
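
A simple way to see the placement criterion in action is a greedy loop that adds sensors where predictive uncertainty is largest. The sketch below uses a stationary GP baseline (not the ConvGNP proposed in the paper); the one-dimensional temperature field, kernel and noise level are assumptions for illustration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 10.0, 50).reshape(-1, 1)            # candidate/target locations
truth = np.sin(grid).ravel() + 0.1 * rng.normal(size=50)    # simulated temperature field

placed, observed = [0], [truth[0]]                          # start with one existing station
candidates = list(range(1, 50))

for _ in range(4):                                          # place four additional sensors
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01, optimizer=None)
    gp.fit(grid[placed], np.array(observed))
    _, std = gp.predict(grid, return_std=True)
    # Greedy criterion: place the next sensor where the predictive standard deviation
    # is largest, a proxy for the uncertainty reduction the new observation provides.
    best = max(candidates, key=lambda i: std[i])
    placed.append(best)
    observed.append(truth[best])
    candidates.remove(best)

print("Selected sensor locations:", grid[placed].ravel())
```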

    On Model- and Data-based Approaches to Structural Health Monitoring

    Get PDF
    Structural Health Monitoring (SHM) is the term applied to the process of periodically monitoring the state of a structural system with the aim of diagnosing damage in the structure. Over the course of the past several decades, there has been ongoing interest in approaches to the problem of SHM. This attention has been sustained by the belief that SHM will allow substantial economic and life-safety benefits to be realised across a wide range of applications. Several numerical and laboratory implementations have been successfully demonstrated. However, despite this research effort, real-world applications of SHM as originally envisaged are somewhat rare. Numerous technical barriers to the broader application of SHM methods have been identified, namely: severe restrictions on the availability of damaged-state data in real-world scenarios; difficulties associated with the numerical modelling of physical systems; and limited understanding of the physical effect of system inputs (including environmental and operational loads). This thesis focuses on the roles of law-based and data-based modelling in current applications of SHM. First, established approaches to model-based SHM are introduced, with the aid of an exemplar ‘wingbox’ structure. The study highlights the degree of difficulty associated with applying model-updating-based methods and with producing numerical models capable of accurately predicting changes in structural response due to damage. These difficulties motivate the investigation of non-deterministic, predictive modelling of structural responses that takes into account both experimental and modelling uncertainties. Secondly, a data-based approach to multiple-site damage location is introduced, which may allow the quantity of experimental data required for classifier training to be drastically reduced. A conclusion of the above research is the identification of hybrid approaches, in which a forward-mode law-based model informs a data-based damage identification scheme, as an area for future work.
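
The hybrid model/data idea mentioned at the end of the abstract can be sketched in a few lines: a forward-mode, law-based model generates labelled training data for a data-based damage classifier. The two-degree-of-freedom spring-mass "structure", the stiffness reductions representing damage and the classifier choice are all assumptions for illustration, not the thesis implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def natural_frequencies(k1, k2, m=1.0):
    # Law-based forward model: eigenfrequencies of a two-DOF spring-mass chain.
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    M = np.eye(2) * m
    eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(np.abs(eigvals)))

# Labelled training data: 0 = undamaged, 1 = damage at spring 1, 2 = damage at spring 2,
# where "damage" is modelled here as a 30% stiffness reduction.
rng = np.random.default_rng(2)
X, y = [], []
for label, (k1, k2) in enumerate([(10.0, 10.0), (7.0, 10.0), (10.0, 7.0)]):
    for _ in range(50):
        noisy = natural_frequencies(k1, k2) * (1.0 + 0.01 * rng.normal(size=2))
        X.append(noisy)
        y.append(label)

# Data-based damage-location classifier trained on model-generated features.
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print("Predicted damage class:", clf.predict([natural_frequencies(7.0, 10.0)]))
```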

    Optimal methodologies for ultrasonic guided-wave based structural health monitoring

    Get PDF
    The assessment of structural integrity is a key issue for many industries due to its important implications for safety, maintenance cost reduction, and improved asset availability. In this context, structural health monitoring (SHM) systems using ultrasonic guided waves are being explored for an efficient diagnosis of damage and prognosis of the remaining useful life of the monitored structure. Nonetheless, addressing this monitoring scenario is a challenge given the inherent complexities associated with each of the diagnosis steps, which encompass the optimal SHM design, the detection of damage, its localisation, and its identification. Among these complexities, uncertainties stemming from several sources, such as equipment noise, manufacturing defects, and the lack of conclusive knowledge about wave propagation, introduce high variability in the response of the SHM system. The main objective of this thesis is to provide probabilistic Bayesian and fuzzy-logic methodologies to manage global uncertainties at each step in the SHM process. The accuracy and reliability of an ultrasonic guided-wave based SHM system depend on the chosen number and location of sensors and actuators. A general framework for optimal sensor configuration based on value of information is proposed in this thesis, which trades off information gain against cost. This approach optimally chooses the sensor positions so that they render the largest information gain when inferring the damage location. The methodology is tested using different case studies in the context of ultrasonic guided waves and piezoelectric sensors. However, although this framework is mathematically rigorous, it is computationally expensive should the actuators be considered in the optimisation problem. To overcome this issue, a cost-benefit analysis is also proposed using both Shannon's information entropy and a cost function associated with the number of sensors and actuators. The objective function is based on binary decision variables, which are relaxed into continuous variables, hence convexifying the objective function. This optimisation methodology is illustrated in several case studies considering plate-like structures with irregular geometries and different materials, providing high computational efficiency. The first diagnosis stage requires a robust and computationally efficient damage detection approach for real-life engineering scenarios. To this end, a novel damage index for ultrasonic guided-wave measurements based on fuzzy-logic principles is proposed in this thesis. This approach assesses the time-of-flight mismatch between signals acquired in undamaged and non-pristine states, using fuzzy sets for its evaluation. The robustness partially builds on the use of a large number of signals stemming from two experimental procedures: the round-robin configuration and the transmission beamforming technique. This new damage index is validated in several scenarios with sudden and progressive damage. Once a damage area has been detected, the next diagnosis stage requires a reliable damage localisation. To address this SHM step, a robust methodology is proposed based on two hierarchical levels: (1) a Bayesian time-frequency model class selection to obtain the time of flight of damage-scattered waves; and (2) a Bayesian inverse problem of damage localisation that takes the outcome of the first level as input data. 
The effectiveness and robustness of the proposed methodology are illustrated using two case studies with one and two areas of damage. Lastly, to provide a complete diagnosis of damage using ultrasonic guided waves, the identification of damage needs to be addressed. A multi-level hybrid wave and finite element model-based Bayesian approach is proposed to identify the type of damage in composite beams based on posterior probabilities, hence accounting for different sources of uncertainty. In addition to the type of damage, this approach allows the inference of damage-related parameters and the damage location. A carbon fibre beam with two damage modes, i.e. a crack and a delamination, is used to illustrate the methodology.
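
The second hierarchical level of the localisation methodology, a Bayesian inverse problem driven by times of flight of damage-scattered waves, can be illustrated with a minimal grid-based sketch. The plate geometry, wave speed, sensor layout and noise level below are assumptions, and the first level (time-of-flight extraction) is not shown.

```python
import numpy as np

c = 5.0                                            # assumed group velocity (mm/us)
actuator = np.array([0.0, 0.0])
sensors = np.array([[100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_damage = np.array([60.0, 40.0])               # synthetic ground truth for the example
sigma_tof = 0.5                                    # assumed ToF measurement noise (us)

def tof(damage):
    # Actuator -> damage -> sensor path length divided by the wave speed.
    return (np.linalg.norm(damage - actuator)
            + np.linalg.norm(sensors - damage, axis=1)) / c

rng = np.random.default_rng(3)
measured = tof(true_damage) + sigma_tof * rng.normal(size=len(sensors))

# Grid-based posterior with a uniform prior over the plate and a Gaussian likelihood.
xs = ys = np.linspace(0.0, 100.0, 201)
X, Y = np.meshgrid(xs, ys)
log_post = np.zeros_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        r = measured - tof(np.array([X[i, j], Y[i, j]]))
        log_post[i, j] = -0.5 * np.sum(r**2) / sigma_tof**2

idx = np.unravel_index(np.argmax(log_post), log_post.shape)
print("MAP damage location estimate:", X[idx], Y[idx])
```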

    A holistic approach to assessment of value of information (VOI) with fuzzy data and decision criteria.

    Get PDF
    Classical decision and value of information theories have been applied in the oil and gas industry since the 1960s with partial success. In this research, we identify weaknesses of the classical theory of value of information related to optimal data acquisition selection, data fuzziness and fuzzy decision criteria, and we propose a modification of the theory to fill the gaps found. The research presented in this paper integrates theories and techniques from statistical analysis and artificial intelligence to develop a more coherent, robust and complete methodology for assessing the value of acquiring new information in the context of the oil and gas industry. The proposed methodology is applied to a case study describing a value of information assessment in an oil field where two alternatives for data acquisition are discussed. It is shown that: i) the technique of design of experiments provides a full identification of the input parameters affecting the value of the project and allows a proper selection of the data acquisition actions; ii) when the fuzziness of the data is included in the assessment, the value of the data decreases compared with the case where data are assumed to be crisp; this result means that the decision concerning the value of acquiring new data depends on whether the fuzzy nature of the data is included in the assessment and on the difference between the project value with and without data acquisition; iii) the fuzzy inference system developed for this case study successfully follows the logic of the decision-maker and results in a straightforward system for aggregating decision criteria.
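
A classical value-of-information calculation makes the second finding concrete: degrading the quality of the acquired data lowers its value. In the sketch below, fuzziness is crudely represented by reducing the reliability of a binary test rather than by a full fuzzy-set treatment; the prior, outcome values and reliabilities are assumptions for illustration, not figures from the paper.

```python
p_success = 0.3                              # assumed prior probability of project success
value_success, value_failure = 100.0, -40.0  # assumed project outcomes

def value_without_information():
    ev_develop = p_success * value_success + (1 - p_success) * value_failure
    return max(ev_develop, 0.0)              # develop only if the prior expected value is positive

def value_with_information(reliability):
    # The test reports the true state with probability `reliability`;
    # the decision rule is to develop only after a "good" report.
    ev = 0.0
    for p_state, outcome, is_good in [(p_success, value_success, True),
                                      (1 - p_success, value_failure, False)]:
        p_report_good = reliability if is_good else 1.0 - reliability
        ev += p_state * (p_report_good * outcome + (1.0 - p_report_good) * 0.0)
    return ev

for reliability in [1.0, 0.9, 0.7]:
    voi = value_with_information(reliability) - value_without_information()
    print(f"reliability={reliability:.1f}  value of information={voi:.1f}")
```

With reliability 1.0 the calculation recovers the expected value of perfect information; lower reliabilities give strictly smaller values, mirroring the paper's observation that fuzzier data are worth less.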