Fleet Prognosis with Physics-informed Recurrent Neural Networks
Services and warranties of large fleets of engineering assets are a very
profitable business. The success of companies in that area is often related to
predictive maintenance driven by advanced analytics. Therefore, accurate
modeling, as a way to understand how the complex interactions between operating
conditions and component capability define useful life, is key for services
profitability. Unfortunately, building prognosis models for large fleets is a
daunting task as factors such as duty cycle variation, harsh environments,
inadequate maintenance, and problems with mass production can lead to large
discrepancies between designed and observed useful lives. This paper introduces
a novel physics-informed neural network approach to prognosis by extending
recurrent neural networks to cumulative damage models. We propose a new
recurrent neural network cell designed to merge physics-informed and
data-driven layers. With that, engineers and scientists have the chance to use
physics-informed layers to model parts that are well understood (e.g., fatigue
crack growth) and use data-driven layers to model parts that are poorly
characterized (e.g., internal loads). A simple numerical experiment is used to
present the main features of the proposed physics-informed recurrent neural
network for damage accumulation. The test problem consists of predicting fatigue
crack length for a synthetic fleet of airplanes subject to different mission
mixes. The model is trained using full observation inputs (far-field loads) and
very limited observation of outputs (crack length at inspection for only a
portion of the fleet). The results demonstrate that our proposed hybrid
physics-informed recurrent neural network is able to accurately model fatigue
crack growth even when the observed distribution of crack length does not match
the (unobservable) fleet distribution.

Comment: Data and code (including our implementation of the multi-layer
perceptron, the stress intensity and Paris law layers, the cumulative damage
cell, as well as Python driver scripts) used in this manuscript are publicly
available on GitHub at https://github.com/PML-UCF/pinn. The data and code are
released under the MIT License.
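As a rough illustration of the recurrence the abstract describes, the sketch below steps a crack length through load cycles with a Paris-law physics layer, using a fixed linear map as a stand-in for the trainable data-driven stress layer. The constants, function names, and load history are illustrative assumptions, not values or code taken from the PML-UCF/pinn repository.

```python
import numpy as np

# Paris law: da/dN = C * (dK)^m, with dK = dS * sqrt(pi * a).
# Illustrative material constants (assumed, not from the paper).
C, m = 1.5e-11, 3.8

def stress_range(far_field_load):
    """Data-driven stand-in: in the paper this part is a trainable layer;
    a fixed linear map keeps the sketch self-contained."""
    return 1.2 * far_field_load

def damage_cell(a, load):
    """One recurrent step: crack length a is the cell state."""
    dS = stress_range(load)
    dK = dS * np.sqrt(np.pi * a)
    return a + C * dK**m  # cumulative damage update

a = 0.005                        # initial crack length [m]
loads = np.full(10_000, 100.0)   # synthetic mission: 10k cycles at 100 MPa
for load in loads:
    a = damage_cell(a, load)
```

Stacking this cell over a load sequence is what makes the model a recurrent network: the physics layer propagates the damage state, while only the poorly characterized mapping from far-field load to local stress needs to be learned from the sparse crack-length observations.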
Customer-oriented risk assessment in Network Utilities
For companies that distribute services such as telecommunications, water, energy, or gas, the quality perceived by customers has a strong impact on the fulfillment of financial goals: it increases demand when high and raises the risk of customer churn (loss of customers) when low. Failures by these companies may affect customers on a massive scale, increasing their intention to leave the company. Therefore, maintenance performance, and specifically service reliability, has a strong influence on financial goals. This paper proposes a methodology to evaluate the contribution of the maintenance department in economic terms, based on service unreliability caused by network failures. The methodology provides an analysis of failures to facilitate decision making about maintenance (preventive/predictive and corrective) costs versus the negative impact on end-customer invoicing, based on the probability of losing customers. Survival analysis of recurrent failures with the General Renewal Process distribution is used for this novel purpose, with the intention that it be applied as a standard procedure to calculate the expected financial impact of maintenance over a given period of time. Geographical areas of coverage are also distinguished, enabling the comparison of different technical or management alternatives. Two case studies in a telecommunications services company are presented to illustrate the applicability of the methodology.
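A minimal Monte Carlo sketch of the recurrent-failure model named in the abstract (a General Renewal Process with a Weibull underlying distribution and Kijima Type I virtual ages) might look as follows. All parameter values, and the per-failure cost, are illustrative assumptions rather than figures from the case studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weibull underlying failure distribution (illustrative parameters).
beta, eta = 1.8, 1000.0   # shape, scale [hours]
q = 0.3                   # Kijima I restoration factor (0 = as good as new, 1 = as bad as old)
horizon = 5000.0          # assessment period [hours]

def failures_in_horizon(rng):
    """Simulate one history of recurrent failures under a GRP (Kijima Type I)."""
    t, v, n = 0.0, 0.0, 0
    while True:
        u = rng.random()
        # Time to next failure given virtual age v (conditional Weibull sampling).
        x = eta * ((v / eta) ** beta - np.log(u)) ** (1 / beta) - v
        t += x
        if t > horizon:
            return n
        n += 1
        v += q * x  # imperfect repair: virtual age grows by a fraction of the sojourn

expected = np.mean([failures_in_horizon(rng) for _ in range(2000)])
impact = expected * 1200.0  # assumed invoicing loss per network failure (illustrative)
```

Repeating the simulation per geographical area (with area-specific Weibull parameters fitted from failure records) would support the comparison of technical or management alternatives the abstract mentions.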
Development of an ontology for aerospace engine components degradation in service
This paper presents the development of an ontology for component service degradation. Degradation mechanisms in gas turbine metallic components are used as a case study to explain how a taxonomy within an ontology can be validated. The validation method uses an iterative process and sanity checks. Data extracted from on-demand textual information are filtered and grouped into classes of degradation mechanisms. Various concepts are systematically and hierarchically arranged for use in the service maintenance ontology. The allocation of the mechanisms to the AS-IS ontology provides a robust data collection hub. Data integrity is guaranteed when the TO-BE ontology is introduced to analyse processes relative to various failure events. The initial evaluation reveals an improvement in the performance of the TO-BE domain ontology based on iterations and updates with recognised mechanisms. The information extracted and collected is required to improve service knowledge and performance feedback, which are important for service engineers. Existing research areas such as natural language processing, knowledge management, and information extraction were also examined.
Supporting group maintenance through prognostics-enhanced dynamic dependability prediction
Condition-based maintenance strategies adapt maintenance planning through the integration of online condition monitoring of assets. The accuracy and cost-effectiveness of these strategies can be improved by integrating prognostics predictions and grouping maintenance actions respectively. In complex industrial systems, however, effective condition-based maintenance is intricate. Such systems are comprised of repairable assets which can fail in different ways, with various effects, and typically governed by dynamics which include time-dependent and conditional events. In this context, system reliability prediction is complex and effective maintenance planning is virtually impossible prior to system deployment and hard even in the case of condition-based maintenance. Addressing these issues, this paper presents an online system maintenance method that takes into account the system dynamics. The method employs an online predictive diagnosis algorithm to distinguish between critical and non-critical assets. A prognostics-updated method for predicting the system health is then employed to yield well-informed, more accurate, condition-based suggestions for the maintenance of critical assets and for the group-based reactive repair of non-critical assets. The cost-effectiveness of the approach is discussed in a case study from the power industry
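The asset split the abstract describes (condition-based actions for critical assets, grouped reactive repair for the rest) can be caricatured in a few lines. The RUL threshold, asset names, and plan labels below are illustrative assumptions, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    rul: float  # predicted remaining useful life [hours], from a prognostic model

def plan_maintenance(assets, critical_horizon=500.0):
    """Split assets into critical (RUL inside the horizon) and non-critical,
    then assign the maintenance policy each group receives."""
    plan = {}
    for a in assets:
        if a.rul <= critical_horizon:
            plan[a.name] = "condition-based maintenance"
        else:
            plan[a.name] = "grouped reactive repair"
    return plan

plan = plan_maintenance([Asset("pump-1", 120.0), Asset("valve-3", 4000.0)])
```

In the paper's setting the RUL inputs would be refreshed online from condition-monitoring data, so the plan is recomputed as predictions change.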
Statistical inference of transmission fidelity of DNA methylation patterns over somatic cell divisions in mammals
We develop Bayesian inference methods for a recently-emerging type of
epigenetic data to study the transmission fidelity of DNA methylation patterns
over cell divisions. The data consist of parent-daughter double-stranded DNA
methylation patterns with each pattern coming from a single cell and
represented as an unordered pair of binary strings. The data are technically
difficult and time-consuming to collect, putting a premium on an efficient
inference method. Our aim is to estimate rates for the maintenance and de novo
methylation events that gave rise to the observed patterns, while accounting
for measurement error. We model data at multiple sites jointly, thus using
whole-strand information, and considerably reduce confounding between
parameters. We also adopt a hierarchical structure that allows for variation in
rates across sites without an explosion in the effective number of parameters.
Our context-specific priors capture the expected stationarity, or
near-stationarity, of the stochastic process that generated the data analyzed
here. This expected stationarity is shown to greatly increase the precision of
the estimation. Applying our model to a data set collected at the human FMR1
locus, we find that measurement errors, generally ignored in similar studies,
occur at a nontrivial rate (inappropriate bisulfite conversion error: 1.6%
with 80% CI: 0.9--2.3%). Accounting for these errors has a substantial
impact on estimates of key biological parameters. The estimated average failure
of maintenance rate and daughter de novo rate decline from 0.04 to 0.024 and
from 0.14 to 0.07, respectively, when errors are accounted for. Our results
also provide evidence that de novo events may occur on both parent and daughter
strands: the median parent and daughter de novo rates are 0.08 (80% CI:
0.04--0.13) and 0.07 (80% CI: 0.04--0.11), respectively.

Comment: Published at http://dx.doi.org/10.1214/09-AOAS297 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
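The generative model the abstract fits can be sketched forward as a simulation: methylated parent sites are maintained unless maintenance fails, unmethylated sites may gain methylation de novo, and a measurement layer reads some methylated sites as unmethylated via conversion error. The rates below are the abstract's own error-corrected point estimates; the unordered strand pairing and the hierarchical site-level priors of the full model are omitted from this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Error-corrected point estimates reported in the abstract (FMR1 locus):
fail_maint = 0.024   # average failure-of-maintenance rate
de_novo_d  = 0.07    # daughter de novo rate
conv_err   = 0.016   # inappropriate bisulfite conversion error (1.6%)

def daughter_pattern(parent, rng):
    """Daughter-strand pattern from a parent strand: methylated sites are
    maintained unless maintenance fails; unmethylated sites may gain
    methylation de novo."""
    maintained = parent & (rng.random(parent.size) >= fail_maint)
    de_novo = (~parent) & (rng.random(parent.size) < de_novo_d)
    return maintained | de_novo

def observe(true_pattern, rng):
    """Measurement layer: conversion error flips methylated reads to unmethylated."""
    flip = true_pattern & (rng.random(true_pattern.size) < conv_err)
    return true_pattern & ~flip

parent = rng.random(50_000) < 0.5          # synthetic parent strand, 50k CpG sites
daughter = daughter_pattern(parent, rng)
observed = observe(daughter, rng)
```

Comparing `daughter` with `observed` shows why ignoring measurement error biases the rate estimates: the observed methylation level at parentally methylated sites sits below the true maintenance rate, inflating the apparent failure of maintenance.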
Early Quantitative Assessment of Non-Functional Requirements
Non-functional requirements (NFRs) of software systems are a well-known source of uncertainty in effort estimation. Yet, quantitatively approaching NFRs early in a project is hard. This paper takes a step towards reducing the impact of uncertainty due to NFRs. It offers a solution that incorporates NFRs into the functional size quantification process. The merits of our solution are twofold: first, it lets us quantitatively assess the NFR modeling process early in the project, and second, it lets us generate test cases for NFR verification purposes. We chose the NFR framework as a vehicle to integrate NFRs into the requirements modeling process and to apply quantitative assessment procedures. Our solution also rests on the functional size measurement method COSMIC-FFP, adopted in 2003 as the ISO/IEC 19761 standard. We extend its use for NFR testing purposes, which is an essential step for improving NFR development and testing effort estimates, and consequently for managing the scope of NFRs. We also discuss the advantages of our approach and the open questions related to its design.
A method for assessing the success and failure of community-level interventions in the presence of network diffusion, social reinforcement, and related social effects
Prevention and intervention work done within community settings often faces
unique analytic challenges for rigorous evaluation. Because community
prevention work (often geographically isolated) cannot be controlled in the
same way as other prevention programs, and because these communities have an
increased level of interpersonal interaction, rigorous evaluation methods are
needed. Even when `gold standard' randomized control trials are implemented
within community intervention work, internal validity can be called into
question given the informal social spread of information in closed network
settings. A new prevention evaluation method is presented here to disentangle
the social influences assumed to affect prevention effects within communities.
We formally introduce the method and its utility for a suicide prevention
program implemented in several Alaska Native villages. The results show
promise for exploring eight sociological measures of intervention effects in
the face of social diffusion, social reinforcement, and direct treatment.
Policy and research implications are discussed.

Comment: 18 pages, 5 figures