ISBIS 2016: Meeting on Statistics in Business and Industry
This book includes the abstracts of the talks presented at the 2016 International Symposium on Business and Industrial Statistics, held in Barcelona, June 8-10, 2016, and hosted by the Department of Statistics and Operations Research of the Universitat Politècnica de Catalunya - Barcelona TECH. The meeting took place in the ETSEIB Building (Escola Tècnica Superior d'Enginyeria Industrial) at Avda. Diagonal 647.
The meeting organizers celebrated the continued success of the ISBIS and ENBIS societies, and the meeting drew together the international community of statisticians, both academics and industry professionals, who share the goal of making statistics the foundation for decision making in business and related applications. The Scientific Program Committee comprised:
David Banks, Duke University
Amílcar Oliveira, DCeT - Universidade Aberta and CEAUL
Teresa A. Oliveira, DCeT - Universidade Aberta and CEAUL
Nalini Ravishankar, University of Connecticut
Xavier Tort Martorell, Universitat Politècnica de Catalunya, Barcelona TECH
Martina Vandebroek, KU Leuven
Vincenzo Esposito Vinzi, ESSEC Business School
Modelo de apoio à decisão para a manutenção condicionada de equipamentos produtivos (Decision support model for condition-based maintenance of production equipment)
Doctoral thesis for the PhD degree in Industrial and Systems Engineering.
Introduction: This thesis describes a methodology that combines the Bayesian control chart and CBM (Condition-Based Maintenance) into a new integrated model. In maintenance management, it is challenging for the decision-maker to reach appropriate and accurate decisions, and properly designed, well-performing CBM models support maintenance decision making. The integration of the Bayesian control chart and CBM is considered an intelligent model and a suitable strategy for forecasting item failures as well as for controlling maintenance costs. CBM models provide lower inventory costs for spare parts, reduce unplanned outages, and minimize the risk of catastrophic failure, avoiding the high penalties associated with production losses or delays and increasing availability. However, CBM models need new aspects and the integration of new types of information into maintenance modeling to improve their results.
Objective: The thesis aims to develop a new methodology, based on the Bayesian control chart, for predicting item failures by simultaneously incorporating two types of data: key quality-control measurements and equipment condition parameters. In other words, the research questions are directed at lowering maintenance costs for real process control.
Method: The mathematical approach used in this study to develop an optimal Condition-Based Maintenance policy includes Weibull analysis for verifying the Markov property, the delay-time concept for deterioration modeling, and PSO and Monte Carlo simulation. These models are used to find the upper control limit and the monitoring interval that minimize the maintenance cost function.
Result: The main contribution of this thesis is that the proposed model performs better than previous models, supporting the hypothesis that simultaneously using data on equipment condition parameters and quality-control measurements improves the effectiveness of the integrated Bayesian control chart model for Condition-Based Maintenance.
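The core idea of a Bayesian control chart for maintenance can be sketched in a few lines: after each measurement, Bayes' rule updates the probability that the process has shifted to a deteriorated state, and maintenance is scheduled when that probability crosses an upper control limit. The following is a minimal illustrative sketch, not the thesis's actual model; all distributions, parameter values, and measurements below are assumed for demonstration.

```python
# Illustrative Bayesian control chart sketch (not the thesis's model):
# the posterior probability of the "out-of-control" (deteriorated) state
# is updated after each observation; crossing the UCL triggers maintenance.
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def posterior_out_of_control(prior, x, mu_in, mu_out, sigma, p_shift):
    """One Bayesian update. `p_shift` is the assumed per-period probability
    of a shift into the out-of-control state."""
    prior = prior + (1 - prior) * p_shift   # account for a possible new shift
    like_out = normal_pdf(x, mu_out, sigma)
    like_in = normal_pdf(x, mu_in, sigma)
    return prior * like_out / (prior * like_out + (1 - prior) * like_in)

p = 0.0    # start in control
ucl = 0.9  # upper control limit on the posterior probability (assumed)
for x in [10.1, 9.8, 10.3, 11.2, 11.5, 11.8]:  # simulated measurements
    p = posterior_out_of_control(p, x, mu_in=10.0, mu_out=11.5,
                                 sigma=0.5, p_shift=0.05)
    if p > ucl:
        print(f"posterior {p:.3f} exceeds UCL -> schedule maintenance")
        break
```

Optimizing the UCL and the monitoring interval against a cost function (via PSO and Monte Carlo simulation, as in the thesis) would sit on top of an update rule like this one.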
Probability and Statistics in Aerospace Engineering
This monograph was prepared to give the practicing engineer a clear understanding of probability and statistics with special consideration to problems frequently encountered in aerospace engineering. It is conceived to be both a desktop reference and a refresher for aerospace engineers in government and industry. It could also be used as a supplement to standard texts for in-house training courses on the subject.
A data analytics approach to gas turbine prognostics and health management
As a consequence of the recent deregulation of the electrical power production industry, there has been a shift in the traditional ownership of power plants and the way they are operated. To hedge their business risks, many new private entrepreneurs enter into long-term service agreements (LTSA) with third parties for their operation and maintenance activities. As the major LTSA providers, original equipment manufacturers have invested huge amounts of money to develop preventive maintenance strategies that minimize the occurrence of costly unplanned outages resulting from failures of the equipment covered under LTSA contracts. As a matter of fact, a recent study by the Electric Power Research Institute estimates the cost benefit of preventing a failure of a General Electric 7FA or 9FA technology compressor at 20 million.
Therefore, in this dissertation, a two-phase data analytics approach is proposed that uses the existing gas path and vibration monitoring sensor data, first, to develop a proactive strategy that systematically detects and validates catastrophic failure precursors so as to avoid the failure, and second, to estimate the residual time to failure of the unhealthy items. For the first part of this work, the time-frequency technique of the wavelet packet transform is used to de-noise the noisy sensor data. Next, the time-series signal of each sensor is decomposed in a multi-resolution analysis to extract its features. After that, probabilistic principal component analysis is applied as a data fusion technique to reduce the potentially correlated multi-sensor measurements to a few uncorrelated principal components. The last step of the failure precursor detection methodology, the anomaly detection decision, is in itself a multi-stage process. The principal components obtained from the data fusion step are first combined into a one-dimensional reconstructed signal representing the overall health assessment of the monitored systems. Then, two damage indicators of the reconstructed signal are defined and monitored for defects using a statistical process control approach. Finally, the Bayesian evaluation method for hypothesis testing is applied to a computed threshold to test for deviations from the healthy band.
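The data-fusion step can be sketched compactly. In the sketch below, plain PCA (via the SVD) stands in for the probabilistic PCA used in the dissertation, and the sensor matrix is a synthetic stand-in for real gas path and vibration data; the number of retained components is likewise an assumption.

```python
import numpy as np

# Sketch of the data-fusion step: plain PCA via SVD stands in for the
# probabilistic PCA of the dissertation; the data below are synthetic.
rng = np.random.default_rng(0)
n_samples, n_sensors = 200, 8
# Correlated multi-sensor measurements: a shared latent health trend + noise
latent = np.cumsum(rng.normal(size=(n_samples, 1)), axis=0)
X = latent @ rng.normal(size=(1, n_sensors)) + 0.1 * rng.normal(size=(n_samples, n_sensors))

# Standardize each sensor, then project onto the leading principal components
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
scores = Xc @ Vt[:k].T          # a few uncorrelated principal components

# Combine the components into a one-dimensional reconstructed health signal,
# weighting each component by its share of the retained variance
weights = s[:k] ** 2 / np.sum(s[:k] ** 2)
health_signal = scores @ weights   # one value per time sample
```

A statistical process control scheme would then monitor damage indicators derived from `health_signal`, as the abstract describes.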
To model the residual time to failure, the anomaly severity index and the anomaly duration index are defined as defect characteristics. Two modeling techniques are investigated for the prognostication of the survival time after an anomaly is detected: the deterministic regression approach, and a parametric approximation of the non-parametric Kaplan-Meier plot estimator. It is established that deterministic regression provides poor prediction estimates. The non-parametric survival data analysis technique of the Kaplan-Meier estimator provides the empirical survivor function of a data set comprising both non-censored and right-censored data. Though powerful, because no lifetime distribution is predefined a priori, the Kaplan-Meier result lacks the flexibility to be transplanted to other units of a given fleet. The parametric analysis of the survival data is performed with two popular failure analysis distributions: the exponential distribution and the Weibull distribution. The conclusion from the parametric analysis of the Kaplan-Meier plot is that the larger the data set, the more accurate the prognostication ability of the residual time-to-failure model.
PhD. Committee Chair: Mavris, Dimitri; Committee Member: Jiang, Xiaomo; Committee Member: Kumar, Virendra; Committee Member: Saleh, Joseph; Committee Member: Vittal, Sameer; Committee Member: Volovoi, Vital
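The Kaplan-Meier estimator handles exactly the mix of observed failures and right-censored units described above: at each observed failure time the survivor function is multiplied by the fraction of at-risk units surviving, while censored units simply leave the risk set. A minimal sketch on made-up data (the times below are invented, not the dissertation's fleet data):

```python
# Sketch of the Kaplan-Meier estimator on made-up survival data
# (hours after anomaly detection; event=False marks a right-censored
# unit that was still running when observation ended).
def kaplan_meier(times, events):
    """Return (time, S(t)) pairs of the empirical survivor function."""
    n_at_risk = len(times)
    survival = 1.0
    curve = []
    for t, event in sorted(zip(times, events)):
        if event:                       # observed failure at time t
            survival *= (n_at_risk - 1) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= 1                  # censored units also leave the risk set
    return curve

times = [120, 300, 450, 500, 600, 720]
events = [True, True, False, True, True, False]
for t, s in kaplan_meier(times, events):
    print(f"S({t}) = {s:.3f}")
```

Fitting an exponential or Weibull distribution to the resulting step curve gives the parametric approximation the dissertation compares against.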
An Exposition on Bayesian Inference
The Bayesian approach to probability and statistics is described, a brief history of Bayesianism is related, differences between the Bayesian and Frequentist schools of statistics are defined, potential applications are investigated, and a literature survey is presented in the form of a machine-sort card file.
Bayesian thought is increasing in favor among statisticians because of its ability to attack problems that are unassailable from the Frequentist approach. It should become more popular among practitioners because of the flexibility it allows experimenters and the ease with which prior knowledge can be combined with experimental data. (82 pages)
Generalized Completed Local Binary Patterns for Time-Efficient Steel Surface Defect Classification
© 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Efficient defect classification is one of the most important preconditions to achieve online quality inspection for hot-rolled strip steels. It is extremely challenging owing to various defect appearances, large intraclass variation, ambiguous interclass distance, and unstable gray values. In this paper, a generalized completed local binary patterns (GCLBP) framework is proposed. Two variants, improved completed local binary patterns (ICLBP) and improved completed noise-invariant local-structure patterns (ICNLP), are developed under the GCLBP framework for steel surface defect classification. Different from conventional local binary pattern variants, descriptive information hidden in nonuniform patterns is innovatively excavated for better defect representation. This paper focuses on the following aspects. First, a lightweight searching algorithm is established for exploiting the dominant nonuniform patterns (DNUPs). Second, a hybrid pattern code mapping mechanism is proposed to encode all the uniform patterns and DNUPs. Third, feature extraction is carried out under the GCLBP framework. Finally, histogram matching is efficiently accomplished by a simple nearest-neighbor classifier. The classification accuracy and time efficiency are verified on a widely recognized texture database (Outex) and a real-world steel surface defect database [Northeastern University (NEU)]. The experimental results indicate that the proposed method can be widely applied in online automatic optical inspection instruments for hot-rolled strip steel.
Peer reviewed
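For readers unfamiliar with the base operator that GCLBP generalizes, the plain 8-neighbor local binary pattern and the "uniform pattern" test can be sketched in a few lines. This is the standard LBP, not the paper's ICLBP/ICNLP variants, and the 3x3 patch is a made-up example rather than steel-defect data.

```python
import numpy as np

# Sketch of the plain 8-neighbor LBP that GCLBP generalizes.
def lbp_code(patch):
    """LBP code of a 3x3 patch's center pixel: threshold the 8 neighbors
    against the center and read them (clockwise) as an 8-bit number."""
    center = patch[1, 1]
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(v >= center) << i for i, v in enumerate(neighbors))

def is_uniform(code):
    """A pattern is 'uniform' if its circular bit string has at most two
    0/1 transitions; nonuniform patterns are the ones whose hidden
    information the GCLBP work sets out to exploit."""
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8)) <= 2

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 3, 8]])
code = lbp_code(patch)
print(code, is_uniform(code))
```

Conventional LBP variants lump all nonuniform codes into one histogram bin; the paper's dominant-nonuniform-pattern search and hybrid code mapping keep the most frequent of them as separate features.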
Reliability applied to maintenance
The thesis covers studies conducted during 1976-79 under a Science Research Council contract to examine the uses of reliability information in decision-making in maintenance in the process industries.
After a discussion of the ideal data system, four practical studies of process plants are described, involving both Pareto and distribution analysis. In two of these studies the maintenance policy was changed and the effect on failure modes and frequency observed. Hyper-exponentially distributed failure intervals were found to be common and were explained, after observation of maintenance work practices and development of theory, as being due to poor workmanship and parts. The fallacy that a constant failure rate necessarily implies the optimality of maintenance only at failure is discussed.
Two models for the optimisation of inspection intervals are developed; both assume items give detectable warning of impending failure. The first is based upon a constant risk of failure between successive inspections and a Weibull base failure distribution. Results show that an inspection/on-condition maintenance regime can be cost-effective even when the failure rate is falling, and may be better than periodic renewals in an increasing-failure-rate situation. The second model is first-order Markov. Transition rate matrices are developed and solved to compare continuous monitoring with inspection/on-condition maintenance on a cost basis. The models incorporate a planning delay in starting maintenance after impending failure is detected.
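The flavor of the first-order Markov comparison can be illustrated with a tiny transition rate matrix. The states and rates below are invented for illustration, not the thesis's actual matrices: state 0 is good, state 1 is a detectable warning of impending failure, state 2 is failed. The rate at which warnings are acted on stands in for the difference between continuous monitoring and periodic inspection.

```python
import numpy as np

# Illustrative first-order Markov comparison (rates are assumed, not the
# thesis's): states 0 = good, 1 = warning detected, 2 = failed.
def steady_state(Q):
    """Solve pi @ Q = 0 with sum(pi) = 1 for a transition rate matrix Q."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def rate_matrix(maint_rate):
    degrade = 0.1        # good -> warning: hidden deterioration rate
    warn_to_fail = 0.05  # warning -> failed if the warning is not acted on
    repair = 0.2         # failed -> good via breakdown repair
    return np.array([
        [-degrade, degrade, 0.0],
        [maint_rate, -(maint_rate + warn_to_fail), warn_to_fail],
        [repair, 0.0, -repair],
    ])

for name, maint_rate in [("continuous monitoring", 1.0),
                         ("periodic inspection", 0.2)]:
    pi = steady_state(rate_matrix(maint_rate))
    print(f"{name}: availability = {1 - pi[2]:.3f}")
```

Attaching costs to time spent in each state (and to each maintenance action) turns this availability comparison into the cost-basis comparison the thesis performs; a planning delay would appear as an extra state between warning and maintenance.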
The relationships between plant output and maintenance policy, as affected by the presence of redundancy and/or storage between stages, are examined, mainly through the literature but with some original theoretical proposals.
It is concluded that reliability techniques have many applications in the improvement of plant maintenance policy. Techniques abound, but few firms are willing to take the step of faith to set up, even temporarily, the data-collection facilities required to apply them. There are over 350 references, many of which are reviewed in the text, divided into chapter-related sections.
Appendices include a review of Reliability Engineering Theory, based on the author's draft for BS 5760(2), a discussion of the 'bath-tub curve's' applicability to maintained systems, and the theory connecting hyper-exponentially distributed failures with poor maintenance practices.
Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis
This document, Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis, is intended to provide guidelines for the collection and evaluation of risk- and reliability-related data. It is aimed at scientists and engineers familiar with risk and reliability methods and provides a hands-on approach to the investigation and application of a variety of risk and reliability data assessment methods, tools, and techniques. The document provides both a broad perspective on data analysis collection and evaluation issues and a narrow focus on the methods needed to implement a comprehensive information repository. The topics addressed herein cover the fundamentals of how data and information are to be used in risk and reliability analysis models and their potential role in decision making. Understanding these topics is essential to attaining the risk-informed decision making environment sought by NASA requirements and procedures such as NPR 8000.4 (Agency Risk Management Procedural Requirements), NPR 8705.05 (Probabilistic Risk Assessment Procedures for NASA Programs and Projects), and the System Safety requirements of NPR 8715.3 (NASA General Safety Program Requirements).
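A staple of this kind of guidance is the conjugate Bayesian update of a failure probability from demand data: a binomial likelihood combined with a beta prior yields a beta posterior in closed form. The prior parameters and test results in the sketch below are assumed purely for illustration.

```python
# Minimal sketch of a conjugate Bayesian update of the kind such guidance
# covers: failure-on-demand data (binomial likelihood) with a beta prior.
# All numbers below are assumed for illustration.
alpha0, beta0 = 1.0, 19.0     # Beta(1, 19) prior: mean failure prob 0.05
failures, demands = 2, 100    # observed test data

alpha = alpha0 + failures               # posterior shape parameters
beta = beta0 + demands - failures
posterior_mean = alpha / (alpha + beta)
print(f"posterior Beta({alpha:.0f}, {beta:.0f}), mean p = {posterior_mean:.4f}")
```

The posterior mean blends the prior belief with the observed failure fraction, with the data dominating as the number of demands grows.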
Semi-supervised and Active Learning Models for Software Fault Prediction
As software continues to insinuate itself into nearly every aspect of our lives, the quality of software has become an extremely important issue. Software Quality Assurance (SQA) is a process that ensures the development of high-quality software. It concerns the important problem of maintaining, monitoring, and developing quality software. Accurate detection of fault-prone components in software projects is one of the most commonly practiced techniques that offer a path to high-quality products without excessive assurance expenditures. This type of quality modeling requires the availability of software modules with known fault content developed in a similar environment. However, collection of fault data at module level, particularly in new projects, is expensive and time-consuming. Semi-supervised learning and active learning offer solutions to this problem, learning from limited labeled data by utilizing inexpensive unlabeled data.
In this dissertation, we investigate semi-supervised learning and active learning approaches to the software fault prediction problem. The role of the base learner in semi-supervised learning is discussed using several state-of-the-art supervised learners. Our results showed that semi-supervised learning with an appropriate base learner leads to better performance in fault proneness prediction than supervised learning. In addition, incorporating a pre-processing technique prior to semi-supervised learning provides a promising direction for further improving prediction performance. Active learning, which shares with semi-supervised learning the idea of utilizing unlabeled data, requires human effort to label fault proneness during its learning process. Empirical results showed that active learning supplemented by a dimensionality reduction technique performs better than supervised learning on release-based data sets.
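The self-training idea underlying much of semi-supervised fault prediction can be sketched compactly: train a base learner on the few labeled modules, pseudo-label the unlabeled modules it is most confident about, and iterate. In the sketch below, the data and the nearest-centroid base learner are toy stand-ins for real software metrics and the state-of-the-art learners studied in the dissertation.

```python
import numpy as np

# Self-training sketch: toy 2-D "metrics" and a nearest-centroid base
# learner stand in for real module data and the dissertation's learners.
rng = np.random.default_rng(1)
faulty = rng.normal(loc=3.0, size=(50, 2))   # fault-prone modules (class 1)
clean = rng.normal(loc=0.0, size=(50, 2))    # fault-free modules (class 0)
X = np.vstack([faulty, clean])
y_true = np.array([1] * 50 + [0] * 50)

y = np.full(100, -1)                         # -1 = unlabeled
y[[0, 1, 50, 51]] = y_true[[0, 1, 50, 51]]   # only 4 labeled modules

for _ in range(5):                           # self-training rounds
    centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    pred = dists.argmin(axis=1)
    unlabeled = np.flatnonzero(y == -1)
    if unlabeled.size == 0:
        break
    # pseudo-label the most confident half of the remaining unlabeled modules
    margin = np.abs(dists[:, 0] - dists[:, 1])
    take = unlabeled[margin[unlabeled] >= np.median(margin[unlabeled])]
    y[take] = pred[take]

accuracy = (pred == y_true).mean()
print(f"accuracy with 4 labels + pseudo-labeling: {accuracy:.2f}")
```

Active learning inverts the confidence criterion: instead of pseudo-labeling the most confident modules, it asks a human to label the least confident ones.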