29 research outputs found

    Model bias identification for Bayesian calibration of stochastic digital twins of bridges

    Full text link
    Simulation-based digital twins must provide accurate, robust and reliable digital representations of their physical counterparts. Quantifying the uncertainty in their predictions therefore plays a key role in making better-informed decisions that impact the actual system. The data-driven updating of the simulation model must consequently be implemented with care. When applied to complex standing structures such as bridges, discrepancies between the computational model and the real system appear as model bias, which hinders the trustworthiness of the digital twin and increases its uncertainty. Classical Bayesian updating approaches that aim to infer the model parameters often fail to compensate for such model bias, leading to overconfident and unreliable predictions. In this paper, two alternative model bias identification approaches are evaluated in the context of their applicability to digital twins of bridges. A modularized version of Kennedy and O'Hagan's approach and another based on Orthogonal Gaussian Processes are compared with the classical Bayesian inference framework in a set of representative benchmarks. Additionally, two novel extensions are proposed for such models: the inclusion of noise-aware kernels and the introduction, through the bias term, of additional variables not present in the computational model. The integration of such approaches in the digital twin corrects the predictions, quantifies their uncertainty, estimates noise from unknown physical sources of error and provides further insight into the system by including additional pre-existing information without modifying the computational model. Comment: 31 pages, 21 figures, 5 tables. Submitted for consideration to Applied Stochastic Models in Business and Industry
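    As a rough illustration of the kind of bias identification described above, the sketch below shows a modularized, Kennedy-and-O'Hagan-style correction: a toy simulator is first calibrated by least squares, then a Gaussian Process with a noise-aware (white-noise) kernel is fitted to the residuals so that predictions include both a bias term and an estimate of measurement noise. The toy simulator, the data, and all parameter values are assumptions made for the example; this is not the paper's implementation.

```python
# Minimal sketch of a modularized Kennedy-O'Hagan-style bias correction:
# (1) calibrate the simulator parameter, (2) fit a noise-aware GP to the
# residuals so predictions include a model-bias term. The toy "simulator"
# and all values below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def simulator(x, theta):
    """Toy computational model with parameter theta (assumed form)."""
    return theta * np.sin(x)

rng = np.random.default_rng(0)
x_obs = np.linspace(0.0, 3.0, 40)
# "True" system = simulator + a structural bias + measurement noise.
y_obs = 1.3 * np.sin(x_obs) + 0.2 * x_obs + rng.normal(0.0, 0.05, x_obs.size)

# Module 1: point calibration of theta (least squares instead of full Bayes).
res = minimize_scalar(lambda t: np.sum((y_obs - simulator(x_obs, t)) ** 2))
theta_hat = res.x

# Module 2: a GP on the residuals captures the model bias; the WhiteKernel
# makes the kernel noise-aware so bias and measurement noise are separated.
residuals = y_obs - simulator(x_obs, theta_hat)
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05 ** 2)
gp_bias = GaussianProcessRegressor(kernel=kernel, normalize_y=False)
gp_bias.fit(x_obs.reshape(-1, 1), residuals)

# Bias-corrected prediction with uncertainty at new inputs.
x_new = np.linspace(0.0, 3.0, 200)
bias_mean, bias_std = gp_bias.predict(x_new.reshape(-1, 1), return_std=True)
y_pred = simulator(x_new, theta_hat) + bias_mean
```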

    Dynamically Hedging Oil and Currency Futures Using Receding Horizontal Control and Stochastic Programming

    Get PDF
    There is a lack of research in the area of hedging futures contracts, especially in illiquid or very volatile market conditions. It is important to understand the volatility of the oil and currency markets because reduced fluctuations in these markets could lead to better hedging performance. This study compared different hedging methods using a hedging error metric, supplementing the Receding Horizontal Control and Stochastic Programming (RHCSP) method by utilizing the London Interbank Offered Rate with the Lévy process. The RHCSP hedging method was investigated to determine whether it achieved lower hedging error than the Black-Scholes, Leland, and Whalley and Wilmott methods when applied to simulated, oil, and currency futures markets. A modified RHCSP method was also investigated to determine whether it could significantly reduce hedging error under extreme market illiquidity conditions when applied to simulated, oil, and currency futures markets. This quantitative study used chaos theory and emergence as its theoretical foundation. An experimental research method was utilized, with a sample size of 506 hedging errors drawn from historical and simulation data. The historical data covered January 1, 2005 through December 31, 2012. The modified RHCSP method was found to significantly reduce hedging error for the oil and currency futures markets, based on a two-way ANOVA with a t test and a post hoc Tukey test. This study promotes positive social change by identifying better risk controls for investment portfolios and illustrating how to benefit from high volatility in markets. Economists, professional investment managers, and independent investors could benefit from the findings of this study.
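    For readers unfamiliar with the hedging error metric used to compare such methods, the sketch below computes one for a plain discrete-time Black-Scholes delta hedge: a European call is hedged along a simulated price path and the terminal difference between the replicating portfolio and the option payoff is recorded. The GBM path and all parameter values are illustrative assumptions; the RHCSP optimisation itself is not reproduced here.

```python
# Hedging-error sketch: delta-hedge a European call along one simulated path
# and record the terminal replication error. All parameters are assumptions.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes price and delta of a European call (time-to-maturity tau)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2), norm.cdf(d1)

rng = np.random.default_rng(1)
S0, K, r, sigma, T, n_steps = 100.0, 100.0, 0.02, 0.35, 0.25, 63
dt = T / n_steps

# Simulate one GBM price path (stand-in for an oil or currency futures price).
z = rng.standard_normal(n_steps)
S = S0 * np.cumprod(np.exp((r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z))
S = np.concatenate(([S0], S))

# Discrete delta hedging: sell the call, hold delta in the underlying plus cash.
price0, delta = bs_call(S0, K, r, sigma, T)
cash = price0 - delta * S0
for i in range(1, n_steps):
    tau = T - i * dt
    _, new_delta = bs_call(S[i], K, r, sigma, tau)
    cash = cash * np.exp(r * dt) - (new_delta - delta) * S[i]
    delta = new_delta
cash = cash * np.exp(r * dt)

payoff = max(S[-1] - K, 0.0)
hedging_error = delta * S[-1] + cash - payoff   # replication error at expiry
print(f"terminal hedging error: {hedging_error:.4f}")
```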

    Untangling hotel industry’s inefficiency: An SFA approach applied to a renowned Portuguese hotel chain

    Get PDF
    The present paper explores the technical efficiency of four hotels of the Teixeira Duarte Group, a renowned Portuguese hotel chain. An efficiency ranking is established for these four hotel units, located in Portugal, using Stochastic Frontier Analysis. This methodology makes it possible to discriminate between measurement error and systematic inefficiency in the estimation process, enabling investigation of the main causes of inefficiency. Several suggestions for efficiency improvement are made for each hotel studied.
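    The sketch below illustrates the Stochastic Frontier Analysis machinery referred to above: a Cobb-Douglas-style frontier with a composed error, where a symmetric noise term captures measurement error and a one-sided half-normal term captures systematic inefficiency, estimated by maximum likelihood. The simulated hotel data and the chosen inputs are assumptions made for the example, not the paper's specification.

```python
# Normal/half-normal stochastic frontier model estimated by maximum likelihood.
# The simulated "hotel" data and input names are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 200
labour, rooms = rng.uniform(1, 3, n), rng.uniform(1, 3, n)   # log inputs
v = rng.normal(0.0, 0.1, n)                  # symmetric measurement error
u = np.abs(rng.normal(0.0, 0.2, n))          # one-sided inefficiency
y = 0.5 + 0.6 * labour + 0.3 * rooms + v - u # log output

X = np.column_stack([np.ones(n), labour, rooms])

def neg_loglik(params):
    """Negative log-likelihood of the normal/half-normal SFA model."""
    beta, log_sv, log_su = params[:3], params[3], params[4]
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma, lam = np.sqrt(sv ** 2 + su ** 2), su / sv
    eps = y - X @ beta
    ll = (np.log(2.0 / sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -np.sum(ll)

start = np.array([0.0, 0.5, 0.5, np.log(0.1), np.log(0.2)])
fit = minimize(neg_loglik, start, method="L-BFGS-B")
beta_hat = fit.x[:3]
sigma_v, sigma_u = np.exp(fit.x[3]), np.exp(fit.x[4])
print("frontier coefficients:", np.round(beta_hat, 3))
print("noise sd / inefficiency sd:", round(sigma_v, 3), round(sigma_u, 3))
```

    Hotel-level efficiency scores can then be recovered from the estimated residuals (e.g. via the Jondrow et al. decomposition), which is what allows an efficiency ranking of the units to be built.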

    Real-Time Quantum Noise Suppression In Very Low-Dose Fluoroscopy

    Get PDF
    Fluoroscopy provides real-time X-ray screening of a patient's organs and of various radiopaque objects, which makes it an invaluable tool for many interventional procedures. For this reason, the number of fluoroscopy screenings has grown consistently over the last decades. However, this trend has raised many concerns about the increase in X-ray exposure, as even low-dose procedures turned out not to be as safe as once considered, demanding rigorous monitoring of the X-ray dose delivered to patients and to the exposed medical staff. In this context, the use of very low-dose protocols would be extremely beneficial. Nonetheless, such protocols produce very noisy images, which need to be suitably denoised in real time to support interventional procedures. Simple smoothing filters tend to produce blurring effects that undermine the visibility of object boundaries, which is essential for the human eye to understand the imaged scene. Therefore, some denoising strategies embed noise statistics-based criteria to improve their denoising performance. This dissertation focuses on the Noise Variance Conditioned Average (NVCA) algorithm, which takes advantage of a priori knowledge of quantum noise statistics to reduce noise while preserving edges, and which has already outperformed many state-of-the-art methods in the denoising of images corrupted by quantum noise while also being suitable for real-time hardware implementation. Several issues that currently limit the actual use of very low-dose protocols in clinical practice are addressed, e.g. the evaluation of the actual performance of denoising algorithms in very low-dose conditions, the optimization of tuning parameters to obtain the best denoising performance, the design of an index to properly measure the quality of X-ray images, and the assessment of an a priori noise characterization approach to account for time-varying noise statistics due to changes in X-ray tube settings. An improved NVCA algorithm is also presented, along with its real-time hardware implementation on a Field Programmable Gate Array (FPGA). The novel algorithm provides more effective noise reduction, including for low-contrast moving objects, thus relaxing the trade-off between noise reduction and edge preservation, while further reducing hardware complexity, which allows low usage of logic resources even on small FPGA platforms. The results presented in this dissertation provide the means for future studies aimed at embedding the NVCA algorithm in commercial fluoroscopic devices to accomplish real-time denoising of very low-dose X-ray images, which would foster their actual use in clinical practice.
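    The sketch below illustrates, in simplified form, the conditioned-average principle that NVCA builds on: each pixel is replaced by the mean of only those neighbours whose intensity lies within a few quantum-noise standard deviations of the centre pixel, exploiting the Poisson statistics of quantum noise (variance proportional to signal). The window size, gain and threshold are assumptions made for the illustration; this is not the published algorithm or its FPGA implementation.

```python
# Simplified conditioned-average filter: average only the neighbours whose
# intensity is within k quantum-noise standard deviations of the centre pixel,
# so edges are excluded from the mean. Gain and threshold are assumptions.
import numpy as np

def conditioned_average(img, window=3, gain=1.0, k=2.0):
    """Edge-preserving mean over a window, conditioned on local noise variance."""
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            centre = padded[i + pad, j + pad]
            block = padded[i:i + window, j:j + window]
            sigma = np.sqrt(gain * max(centre, 1.0))    # quantum-noise std
            mask = np.abs(block - centre) <= k * sigma  # exclude edge pixels
            out[i, j] = block[mask].mean()
    return out

# Example: denoise a Poisson-corrupted synthetic low-dose frame.
rng = np.random.default_rng(3)
clean = np.tile(np.linspace(5, 50, 64), (64, 1))        # synthetic ramp image
noisy = rng.poisson(clean).astype(float)
denoised = conditioned_average(noisy, window=5, gain=1.0, k=2.0)
```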

    Forecasting: theory and practice

    Get PDF
    Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The lack of a free-lunch theorem implies the need for a diverse set of forecasting methods to tackle an array of applications. This unique article provides a non-systematic review of the theory and the practice of forecasting. We offer a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts, including operations, economics, finance, energy, environment, and social good. We do not claim that this review is an exhaustive list of methods and applications. The list was compiled based on the expertise and interests of the authors. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, with some key insights for the future of forecasting theory and practice.

    Hybrid Theory-Machine Learning Methods for the Prediction of AFP Layup Quality

    Get PDF
    The advanced manufacturing capabilities provided by the automated fiber placement (AFP) system have allowed for faster layup times and more consistent production across a number of different geometries. This contributes to the modern production of large composite structures and to the widespread adoption of composites in industry in general and in aerospace in particular. However, the automation introduced in this process increases the difficulty of quality assurance efforts. Industry-available tools for predicting layup quality are either limited in scope or have extremely high computational overhead. With the advent of automated inspection systems, direct capture of semantic inspection data, and therefore complete quality data, becomes available. The aim of this document is therefore to explore and develop a technique that combines semantic inspection data and an incomplete but fast physical modeling tool into a comprehensive hybridized model for predicting and optimizing AFP layup quality. To accomplish this, a novel parameterization of Gaussian Process Regression is developed in which nominal behavior is dictated by theory and analytic models, with latent variables accounted for in the stochastic aspect of the model. Coupled with a unique clustering approach for data representation, this model aims to improve on the current state of the art in quality prediction as well as provide a direct path to process parameter optimization.
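    A hedged sketch of one common way to hybridise an analytic process model with Gaussian Process Regression is given below: the analytic model supplies the nominal (mean) behaviour and the GP learns the residual effect of latent variables, after which the combined surrogate can be used for process parameter optimisation. The toy defect-rate model, the chosen process parameters (speed, heat) and all data are illustrative assumptions, not the dissertation's parameterization.

```python
# Hybrid theory + GP surrogate: theory fixes the trend, the GP absorbs latent
# variables, and the combined model is used for parameter optimisation.
# The analytic model and the (speed, heat) parameters are assumptions.
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def analytic_defect_rate(speed, heat):
    """Placeholder theory-based estimate of layup defect rate."""
    return 0.02 * speed ** 2 + 0.01 / heat

rng = np.random.default_rng(4)
X = rng.uniform([0.1, 0.5], [1.0, 2.0], size=(60, 2))      # (speed, heat)
latent_effect = 0.03 * np.sin(4.0 * X[:, 0] * X[:, 1])     # unmodelled physics
y = (analytic_defect_rate(X[:, 0], X[:, 1]) + latent_effect
     + rng.normal(0.0, 0.002, len(X)))

# GP on the residuals between observations and the analytic nominal model.
residual = y - analytic_defect_rate(X[:, 0], X[:, 1])
gp = GaussianProcessRegressor(
    kernel=1.0 * RBF([0.3, 0.3]) + WhiteKernel(0.002 ** 2))
gp.fit(X, residual)

def predicted_defect_rate(params):
    p = np.atleast_2d(params)
    return float(analytic_defect_rate(p[0, 0], p[0, 1]) + gp.predict(p)[0])

# Process-parameter optimisation on the hybrid surrogate.
opt = minimize(predicted_defect_rate, x0=[0.5, 1.0],
               bounds=[(0.1, 1.0), (0.5, 2.0)])
print("suggested (speed, heat):", np.round(opt.x, 3))
```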

    From Data to Software to Science with the Rubin Observatory LSST

    Full text link
    The Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) dataset will dramatically alter our understanding of the Universe, from the origins of the Solar System to the nature of dark matter and dark energy. Much of this research will depend on the existence of robust, tested, and scalable algorithms, software, and services. Identifying and developing such tools ahead of time has the potential to significantly accelerate the delivery of early science from LSST. Developing these collaboratively, and making them broadly available, can enable more inclusive and equitable collaboration on LSST science. To facilitate such opportunities, a community workshop entitled "From Data to Software to Science with the Rubin Observatory LSST" was organized by the LSST Interdisciplinary Network for Collaboration and Computing (LINCC) and partners, and held at the Flatiron Institute in New York, March 28-30, 2022. The workshop included over 50 in-person attendees invited from over 300 applications. It identified seven key software areas of need: (i) scalable cross-matching and distributed joining of catalogs, (ii) robust photometric redshift determination, (iii) software for determination of selection functions, (iv) frameworks for scalable time-series analyses, (v) services for image access and reprocessing at scale, (vi) object image access (cutouts) and analysis at scale, and (vii) scalable job execution systems. This white paper summarizes the discussions of this workshop. It considers the motivating science use cases, identified cross-cutting algorithms, software, and services, their high-level technical specifications, and the principles of inclusive collaborations needed to develop them. We provide it as a useful roadmap of needs, as well as to spur action and collaboration between groups and individuals looking to develop reusable software for early LSST science. Comment: White paper from the "From Data to Software to Science with the Rubin Observatory LSST" workshop
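    As a small illustration of the first software need listed above (scalable cross-matching of catalogs), the sketch below matches two synthetic catalogues with a k-d tree on unit vectors, using a 1-arcsecond match radius. The random catalogues and the radius are assumptions made for the example; the workshop calls for scalable, distributed implementations rather than this single-node sketch.

```python
# Nearest-neighbour cross-match of two small synthetic catalogues using a
# k-d tree on celestial unit vectors. Catalogues and radius are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def radec_to_unit_vectors(ra_deg, dec_deg):
    """Convert RA/Dec (degrees) to 3D unit vectors on the celestial sphere."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.column_stack([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)])

rng = np.random.default_rng(6)
ra1, dec1 = rng.uniform(0, 1, 10_000), rng.uniform(-1, 0, 10_000)
ra2 = ra1 + rng.normal(0, 1e-4, ra1.size)        # same sources, small offsets
dec2 = dec1 + rng.normal(0, 1e-4, dec1.size)

tree = cKDTree(radec_to_unit_vectors(ra2, dec2))
# Chord length corresponding to a 1-arcsecond match radius.
radius = 2.0 * np.sin(np.radians(1.0 / 3600.0) / 2.0)
dist, idx = tree.query(radec_to_unit_vectors(ra1, dec1),
                       distance_upper_bound=radius)
matched = np.isfinite(dist)                      # unmatched entries are inf
print(f"matched {matched.sum()} of {ra1.size} sources")
```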

    Closing the loop: the integration of long-term ambient vibration monitoring in structural engineering design

    Get PDF
    This study investigated the integration of long-term monitoring into the structural engineering design process to improve the design and operation of civil structures. A survey of civil and structural engineering professionals, conducted as part of this research, identified the cost and complexity of in-situ monitoring as key barriers to its implementation in practice. The research therefore focused on ambient vibration monitoring, as it offers a low-cost and unobtrusive method for instrumenting new and existing structures. The research was structured around the stages of analysing ambient vibration data using operational modal analysis (OMA), defined in this study as: i) pre-selection of analysis parameters, ii) pre-processing of the data, iii) estimation of the modal parameters, iv) identification of modes of vibration within the modal estimates, and v) using modal parameter estimates as a basis for understanding and quantifying in-service structural behaviour. A method was developed for automating the selection of the model order, the number of modes of vibration assumed to be identifiable within the measured dynamic response. This method allowed the modal estimates from different structures, monitoring periods or analysis parameters to be compared, and removed part of the subjectivity identified within current OMA methods. Pre-processing of ambient acceleration responses through filtering was identified as a source of bias within OMA modal estimates. It was shown that this bias was a result of filtering artefacts within the processed data. Two methods were proposed for removing or reducing the bias of modal estimates induced by filtering artefacts, based on excluding sections of the response corrupted by the artefacts or fitting the artefacts as part of the modal analysis. A new OMA technique, the short-time random decrement technique (ST-RDT), was developed on the basis of the survey of industry perceptions of long-term monitoring and the limitations of existing structural monitoring techniques identified within the literature. Key advantages of the ST-RDT are that it allows the uncertainty of modal estimates and any changes in modal behaviour to be quantified through subsampling theory. The ST-RDT has been extensively validated with numerical, experimental and real-world case studies, including multi-storey timber buildings and the world's first 3D-printed steel bridge. Modal estimates produced using the ST-RDT were used as a basis for developing an automated method of identifying modes of vibration using a probabilistic mixture model; identification of modes of vibration within OMA estimates was previously a specialized skill. The procedure accounts for the inherent noise associated with ambient vibration monitoring and allows the uncertainty within the modal estimates associated with each mode of vibration to be quantified. Methods of identifying, isolating and quantifying weak non-linear modal behaviour and changes in dynamic behaviour associated with changes in the distribution of mass or stiffness within a structure have been developed based on the fundamental equations of structural dynamics. These methods allow changes in dynamic behaviour associated with thermally-induced changes in stiffness or changes in static loading to be incorporated within the automated identification of modes of vibration. These methods also allow ambient vibration monitoring to be used for estimating structural parameters usually measured by more complex, expensive or delicate sensors. Examples include estimating the change in elastic modulus of simple structures with temperature or estimating the location and magnitude of static loads applied to a structure in service. The methods developed in this study are applicable to a wide range of structural monitoring technologies, are accessible to non-specialist audiences and may be adapted for the monitoring of any civil structure.
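    The sketch below shows the classical random decrement signature on which the ST-RDT builds: response segments following every level-crossing trigger are averaged, so the stochastic ambient excitation cancels and an estimate of the structure's free decay remains, from which modal parameters can be extracted. The single-degree-of-freedom simulation, trigger level and segment length are assumptions made for the illustration, not the study's implementation.

```python
# Classical random decrement signature of a simulated SDOF structure under
# ambient (white-noise) excitation. All parameters below are assumptions.
import numpy as np

fs, duration = 200.0, 300.0                  # sampling rate (Hz), seconds
t = np.arange(0, duration, 1.0 / fs)
fn, zeta = 2.0, 0.01                         # natural frequency, damping ratio
wn, dt = 2.0 * np.pi * fn, 1.0 / fs

# Lightly damped SDOF oscillator driven by white noise (central differences).
rng = np.random.default_rng(5)
x = np.zeros(t.size)
for i in range(1, t.size - 1):
    acc = rng.normal() - 2.0 * zeta * wn * (x[i] - x[i - 1]) / dt - wn ** 2 * x[i]
    x[i + 1] = 2.0 * x[i] - x[i - 1] + acc * dt ** 2

# Random decrement: average all segments starting where x up-crosses a level,
# so the random forcing cancels and the free decay remains.
level = np.std(x)
seg_len = int(5.0 * fs)                      # 5-second signature
triggers = np.where((x[:-1] < level) & (x[1:] >= level))[0]
triggers = triggers[triggers + seg_len < x.size]
signature = np.mean([x[i:i + seg_len] for i in triggers], axis=0)

# Natural frequency estimate from the signature's spectral peak.
spectrum = np.abs(np.fft.rfft(signature))
freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
print(f"estimated natural frequency: {freqs[np.argmax(spectrum[1:]) + 1]:.2f} Hz")
```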

    Forecasting: theory and practice

    Get PDF
    Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow readers to navigate through the various topics. We complement the theoretical concepts and applications covered with large lists of free or open-source software implementations and publicly available databases.