894 research outputs found

    Vol. 13, No. 1 (Full Issue)

    Vol. 13, No. 2 (Full Issue)

    Application of Stochastic Simulation Methods to System Identification

    Reliable predictive models for the response of structures are a necessity for many branches of earthquake engineering, such as design, structural control, and structural health monitoring. However, the process of choosing an appropriate class of models to describe a system, known as model-class selection, and identifying the specific predictive model based on available data, known as system identification, is difficult. Variability in material properties, complex constitutive behavior, uncertainty in the excitations caused by earthquakes, and limited constraining information (relatively few channels of data, compared to the number of parameters needed for a useful predictive model) make system identification an ill-conditioned problem. In addition, model-class selection is not trivial, as it involves balancing predictive power with simplicity. These problems of system identification and model-class selection may be addressed using a Bayesian probabilistic framework that provides a rational, transparent method for combining prior knowledge of a system with measured data and for choosing between competing model classes. The probabilistic framework also allows for explicit quantification of the uncertainties associated with modeling a system. The essential idea is to use probability logic and Bayes' Theorem to give a measure of plausibility for a model or class of models that is updated with available data. Similar approaches have been used in the field of system identification, but many currently used methods for Bayesian updating focus on the model defined by the set of most plausible parameter values. These approaches (referred to as asymptotic-approximation-based methods) struggle with ill-conditioned problems, where there may be many models with high plausibility rather than a single dominant model.
It is demonstrated here that ill-conditioned problems in system identification and model-class selection can be effectively addressed using stochastic simulation methods. This work focuses on the application of stochastic simulation to updating and comparing model classes in problems of: (1) development of empirical ground motion attenuation relations, (2) structural model updating using incomplete modal data for the purposes of structural health monitoring, and (3) identification of hysteretic structural models, including degrading models, from seismic structural response. The results for system identification and model-class selection in this work fall into three categories. First, in cases where the existing asymptotic-approximation-based methods are appropriate (i.e., well-conditioned problems with one highest-plausibility model), the results obtained using stochastic simulation show good agreement with results from asymptotic-approximation-based methods. Second, for cases involving ill-conditioned problems based on simulated data, stochastic simulation methods are successfully applied to obtain results in a situation where the use of asymptotics is not feasible (specifically, the identification of hysteretic models). Third, preliminary studies using stochastic simulation to identify a deteriorating hysteretic model with relatively sparse real data from a structure damaged in the 1994 Northridge earthquake show that the high-plausibility models demonstrate behavior consistent with the observed damage, indicating that there is promise in applying these methods to ill-conditioned problems in the real world.
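The stochastic-simulation idea can be illustrated with a minimal sketch: instead of locating only the most plausible parameter value, draw samples that characterize the whole posterior given by Bayes' Theorem. The one-parameter "stiffness" model, the synthetic data, and the plain Metropolis sampler below are illustrative assumptions, not the specific models or algorithms used in the thesis.

```python
import math
import random

random.seed(0)

# Hypothetical setup: identify a stiffness-like parameter k from noisy
# "measurements" d_i = k * x_i + noise (a stand-in for a structural model).
true_k, sigma = 2.0, 0.5
xs = [1.0, 2.0, 3.0, 4.0]
data = [true_k * x + random.gauss(0.0, sigma) for x in xs]

def log_posterior(k):
    # Gaussian likelihood times a broad Gaussian prior on k
    # (Bayes' Theorem up to a normalizing constant).
    log_lik = sum(-(d - k * x) ** 2 / (2 * sigma ** 2) for x, d in zip(xs, data))
    log_prior = -k ** 2 / (2 * 10.0 ** 2)
    return log_lik + log_prior

# Metropolis sampler: the draws characterize the full posterior,
# not just the single most plausible parameter value.
samples, k = [], 1.0
for _ in range(20000):
    k_prop = k + random.gauss(0.0, 0.3)
    if math.log(random.random()) < log_posterior(k_prop) - log_posterior(k):
        k = k_prop
    samples.append(k)

burn = samples[5000:]
mean_k = sum(burn) / len(burn)
print(round(mean_k, 2))  # posterior mean, close to the true stiffness
```

For a well-conditioned problem like this one, the sample mean agrees with the asymptotic (peak-based) estimate; the advantage of sampling appears when the posterior has many comparably plausible models.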

    Vol. 16, No. 2 (Full Issue)

    A Data-Driven Predictive Model of Reliability Estimation Using State-Space Stochastic Degradation Model

    The concept of the Industrial Internet of Things (IIoT) provides the foundation to apply data-driven methodologies. Data-driven predictive models of reliability estimation can become a major tool for increasing the life of assets, lowering capital costs, and reducing operating and maintenance costs. Classical models of reliability assessment rely mainly on lifetime data, but failure data may not be easily obtainable for highly reliable assets. Furthermore, the collected historical lifetime data may not accurately describe the behavior of an asset in a unique application or environment. Classical models are therefore no longer an optimal basis for reliability estimation. Fortunately, most industrial assets have performance characteristics whose degradation or decay over operating time can be related to their reliability estimates. The application of degradation methods has been increasing recently due to their ability to track the dynamic condition of a system over time. The main purpose of this study is to develop a data-driven predictive model of reliability assessment based on real-time data, using a state-space stochastic degradation model to predict the critical time for initiating maintenance actions in order to enhance the value and prolong the life of assets. The degradation model developed in this thesis introduces a new mapping function for the General Path Model based on a series of Gamma-process degradation models in the state-space environment, with Poisson-distributed weights for each of the Gamma processes. The application of the developed algorithm is illustrated for distributed electrical systems as a generic use case. A data-driven algorithm is developed to estimate the parameters of the new degradation model.
Once estimates of the parameters are available, the distribution of the failure time, the time-dependent distribution of the degradation, and the reliability based on the current estimate of the degradation can all be obtained.
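The core link between a Gamma-process degradation model and a reliability estimate can be sketched in a few lines: degradation accumulates as independent Gamma-distributed increments, and reliability at time t is the probability that the accumulated degradation is still below a failure threshold. The shape/scale parameters and threshold below are illustrative assumptions, and this plain Gamma process omits the Poisson-weighted mapping function developed in the thesis.

```python
import random

random.seed(1)

# Hypothetical Gamma-process degradation: the increment over each unit
# time step is Gamma(shape=a, scale=b), so degradation is monotone.
a, b = 0.5, 2.0      # assumed shape/scale per unit time
threshold = 20.0     # failure once degradation exceeds this level

def degradation_at(t):
    # Sample one degradation path and return its level at time t.
    level = 0.0
    for _ in range(t):
        level += random.gammavariate(a, b)
    return level

def reliability(t, n_sim=5000):
    # Monte Carlo estimate of R(t) = P(degradation at time t < threshold).
    survive = sum(degradation_at(t) < threshold for _ in range(n_sim))
    return survive / n_sim

r_early, r_late = reliability(5), reliability(40)
print(r_early, r_late)  # reliability falls as the asset degrades
```

In the same spirit, the critical time for maintenance can be read off as the first t at which R(t) drops below an acceptable level.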

    Machine Learning-based Predictive Maintenance for Optical Networks

    Optical networks provide the backbone of modern telecommunications by connecting the world faster than ever before. However, such networks are susceptible to several failures (e.g., optical fiber cuts, malfunctioning optical devices), which might result in degradation of network operation, massive data loss, and network disruption. It is challenging to accurately and quickly detect and localize such failures due to the complexity of such networks, the time required to identify and pinpoint a fault using conventional approaches, and the lack of proactive, efficient fault management mechanisms. Therefore, it is highly beneficial to perform fault management in optical communication systems in order to reduce the mean time to repair, to meet service level agreements more easily, and to enhance network reliability. In this thesis, the aforementioned challenges and needs are tackled by investigating the use of machine learning (ML) techniques for implementing efficient proactive fault detection, diagnosis, and localization schemes for optical communication systems. In particular, the adoption of ML methods for solving the following problems is explored:
    - Degradation prediction of semiconductor lasers
    - Lifetime (mean time to failure) prediction of semiconductor lasers
    - Remaining useful life (the length of time a machine is likely to operate before it requires repair or replacement) prediction of semiconductor lasers
    - Optical fiber fault detection, localization, characterization, and identification for different optical network architectures
    - Anomaly detection in optical fiber monitoring
    Such ML approaches outperform the conventionally employed methods for all the investigated use cases by achieving better prediction accuracy and earlier prediction or detection capability.
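A simple data-driven baseline for the remaining-useful-life problem is to fit a degradation trend to monitored performance data and extrapolate it to a failure threshold. The daily power readings, the failure level, and the linear-trend assumption below are all illustrative; the thesis itself uses ML models rather than this least-squares sketch.

```python
# Hypothetical monitoring data: a slowly degrading performance metric
# (e.g., normalized laser output power) sampled once per day.
days = list(range(10))
power = [1.00, 0.99, 0.985, 0.97, 0.965, 0.95, 0.945, 0.93, 0.925, 0.91]
failure_level = 0.80  # assumed end-of-life threshold

# Ordinary least-squares fit: power ~ intercept + slope * day.
n = len(days)
mx = sum(days) / n
my = sum(power) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(days, power))
         / sum((x - mx) ** 2 for x in days))
intercept = my - slope * mx

# Remaining useful life: extrapolate the fitted trend to the threshold.
time_to_failure = (failure_level - intercept) / slope
rul = time_to_failure - days[-1]
print(round(rul, 1))  # predicted days of operation remaining
```

An ML-based predictor replaces the linear extrapolation with a model learned from many historical degradation trajectories, which is what enables the earlier and more accurate predictions reported in the abstract.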

    VI Workshop on Computational Data Analysis and Numerical Methods: Book of Abstracts

    The VI Workshop on Computational Data Analysis and Numerical Methods (WCDANM) will be held on June 27-29, 2019, in the Department of Mathematics of the University of Beira Interior (UBI), Covilhã, Portugal. It is a unique opportunity to disseminate scientific research related to Mathematics in general, with particular relevance to Computational Data Analysis and Numerical Methods in both theoretical and practical fields, using new techniques, with special emphasis on applications in Medicine, Biology, Biotechnology, Engineering, Industry, Environmental Sciences, Finance, Insurance, Management, and Administration. The meeting will provide a forum for discussion and debate of ideas of interest to the scientific community at large. New scientific collaborations among colleagues, including collaborations on Masters and PhD projects, are expected to emerge from the meeting. The event is open to the entire scientific community (with or without a communication/poster).

    Bayesian Methods for Gas-Phase Tomography

    Gas-phase tomography refers to a set of techniques that determine the 2D or 3D distribution of a target species in a jet, plume, or flame using measurements of light made around the boundary of a flow area. Reconstructed quantities may include the concentration of one or more species, temperature, pressure, and optical density, among others. Tomography is increasingly used to study fundamental aspects of turbulent combustion and monitor emissions for regulatory compliance. This thesis develops statistical methods to improve gas-phase tomography and reports two novel experimental applications. Tomography is an inverse problem, meaning that a forward model (calculating measurements of light for a known distribution of gas) is inverted to estimate the model parameters (transforming experimental data into a gas distribution). The measurement modality varies with the problem geometry and objective of the experiment. For instance, transmittance data from an array of laser beams that transect a jet may be inverted to recover 2D fields of concentration and temperature; and multiple high-resolution images of a flame, captured from different angles, are used to reconstruct wrinkling of the 3D reacting zone. Forward models for gas-phase tomography modalities share a common mathematical form, that of a Fredholm integral equation of the first kind (IFK). The inversion of coupled IFKs is necessarily ill-posed, however, meaning that solutions are either unstable or non-unique. Measurements are thus insufficient in themselves to generate a realistic image of the gas, and additional information must be incorporated into the reconstruction procedure. Statistical inversion is an approach to inverse problems in which the measurements, experimental parameters, and quantities of interest are treated as random variables, characterized by a probability distribution.
These distributions reflect uncertainty about the target due to fluctuations in the flow field, noise in the data, errors in the forward model, and the ill-posed nature of reconstruction. The Bayesian framework for tomography features a likelihood probability density function (pdf), which describes the chance of observing a measurement for a given distribution of gas, and a prior pdf, which assigns a relative plausibility to candidate distributions based on assumptions about the flow physics. Bayes' equation updates information about the target in response to measurement data, combining the likelihood and prior functions to form a posterior pdf. The posterior is usually summarized by the maximum a posteriori (MAP) estimate, which is the most likely distribution of gas for a set of data, subject to the effects of noise, model errors, and prior information. The framework can be used to estimate credibility intervals for a reconstruction, and the form of Bayes' equation suggests procedures for improving gas tomography. The accuracy of reconstructions depends on the information content of the data, which is a function of the experimental design, as well as the specificity and validity of the prior. This thesis employs theoretical arguments and experimental measurements of scalar fluctuations to justify joint-normal likelihood and prior pdfs for gas-phase tomography. Three methods are introduced to improve each stage of the inverse problem: to develop priors, design optimal experiments, and select a discretization scheme. First, a self-similarity analysis of turbulent jets, common targets in gas tomography, is used to construct an advanced prior, informed by an estimate of the jet's spatial covariance. Next, a Bayesian objective function is proposed to optimize beam positions in limited-data arrays, which are necessary in scenarios where optical access to the flow area is restricted.
Finally, a Bayesian expression for model selection is derived from the joint-normal pdfs and employed to select a mathematical basis to reconstruct a flow. Extensive numerical evidence is presented to validate these methods. The dissertation continues with two novel experiments, conducted in a Bayesian way. Broadband absorption tomography is a new technique intended for quantitative emissions detection from spectrally-convolved absorption signals. Theoretical foundations for the diagnostic are developed and the results of a proof-of-concept emissions detection experiment are reported. Lastly, background-oriented schlieren (BOS) tomography is applied to combustion for the first time. BOS tomography employs measurements of beam steering to reconstruct a fluid's optical density field, which can be used to infer temperature and density. The application of BOS tomography to flame imaging sets the stage for instantaneous 3D combustion thermometry. Numerical and experimental results reported in this thesis support a Bayesian approach to gas-phase tomography. Bayesian tomography makes the role of prior information explicit, which can be leveraged to optimize reconstructions and design better imaging systems in support of research on fluid flow and combustion dynamics.
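With joint-normal likelihood and prior pdfs and a linear forward model b = A x + noise, the MAP estimate has a closed form: it solves the regularized normal equations (AᵀA/s² + P⁻¹) x = Aᵀb/s² + P⁻¹μ, where s² is the noise variance and μ, P are the prior mean and covariance. The tiny two-pixel "gas field", the three-beam geometry, and the prior values below are illustrative assumptions, not the experimental configurations in the thesis.

```python
# MAP reconstruction for a linear-Gaussian inverse problem:
# solve (A^T A / s2 + P_inv) x = A^T b / s2 + P_inv mu.

def solve2(M, r):
    # Cramer's rule for a 2x2 system M x = r.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(r[0] * M[1][1] - M[0][1] * r[1]) / det,
            (M[0][0] * r[1] - r[0] * M[1][0]) / det]

# Two-pixel "gas field" and three path-integrated beam measurements:
# beam 1 crosses pixel 1, beam 2 crosses pixel 2, beam 3 crosses both.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
x_true = [2.0, 3.0]
b = [2.1, 2.9, 5.2]                  # noisy path-integrated data
s2 = 0.1 ** 2                        # measurement noise variance
mu = [0.0, 0.0]                      # prior mean
P_inv = [[0.01, 0.0], [0.0, 0.01]]   # weak prior precision

# Assemble the regularized normal equations.
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]
lhs = [[AtA[i][j] / s2 + P_inv[i][j] for j in range(2)] for i in range(2)]
rhs = [Atb[i] / s2 + sum(P_inv[i][j] * mu[j] for j in range(2))
       for i in range(2)]

x_map = solve2(lhs, rhs)
print([round(v, 2) for v in x_map])  # close to x_true
```

Sharpening the prior (larger entries of P_inv, informed by flow physics such as the jet covariance mentioned above) pulls the reconstruction toward μ, which is exactly the trade-off the Bayesian framework makes explicit.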

    Statistical procedures for certification of software systems

    Vol. 15, No. 2 (Full Issue)
