17 research outputs found

    Bayesian Monitoring of Linear Profiles Using DEWMA Control Structures with Random X

    The process structures of the manufacturing industry are efficiently modeled using linear profiles. Classical and Bayesian set-ups are two well-appreciated schemes for designing control charts to monitor process structures. In most profile-monitoring studies, the independent variables and the process parameters are assumed to be fixed, but there are manufacturing processes where these conditions do not hold: advances in technology and day-to-day changes in process structures introduce parametric uncertainty along with variability in the explanatory variables. This paper considers the case of random X and assumes different conjugate and non-conjugate priors to handle parametric uncertainty using double exponentially weighted moving average (DEWMA) control charts. Three univariate DEWMA charts are designed to monitor Y-intercepts, slopes, and error variances. The average run length criterion is used to evaluate the proposed and competing charts. The extensive comparative study shows that the proposed Bayesian DEWMA control charts outperform the competing charts in early detection of out-of-control profiles, particularly for smaller shifts. The Bayesian DEWMA charts using conjugate priors are the quickest of all, requiring the fewest sample points to signal an out-of-control profile. A case study further illustrates the superiority of the Bayesian DEWMA charts over the competing charts. 2013 IEEE. The work of S. A. Abbasi was supported by Qatar University under Project QUST-1-CAS-2018-41.
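    The double smoothing at the heart of a DEWMA chart is easy to sketch. The fragment below is a minimal illustration of that mechanism, not the paper's Bayesian design: it applies two successive EWMA passes to a monitored statistic and estimates a run length by simulation, assuming a standardized in-control statistic (mean 0, variance 1) and an illustrative control limit `h`.

    ```python
    import numpy as np

    def dewma(x, lam=0.2):
        """DEWMA statistic: an EWMA of an EWMA of the observations.

        Starts both recursions at the assumed in-control mean (0 here).
        """
        z = d = 0.0
        out = []
        for xt in x:
            z = lam * xt + (1 - lam) * z    # first smoothing pass
            d = lam * z + (1 - lam) * d     # second smoothing pass
            out.append(d)
        return np.array(out)

    def run_length(lam=0.2, h=0.27, shift=0.0, max_n=5000, rng=None):
        """Samples drawn until |DEWMA| exceeds h for N(shift, 1) observations.

        `h` is illustrative only; a real design calibrates it to a target
        in-control average run length.
        """
        rng = rng or np.random.default_rng()
        z = d = 0.0
        for t in range(1, max_n + 1):
            xt = rng.normal(shift, 1.0)
            z = lam * xt + (1 - lam) * z
            d = lam * z + (1 - lam) * d
            if abs(d) > h:
                return t
        return max_n
    ```

    Averaging `run_length` over many replications for a grid of shifts reproduces the kind of ARL comparison the abstract describes.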

    Monitoring non-parametric profiles using adaptive EWMA control chart

    To monitor the quality of a process in statistical process control (SPC), considering a functional relationship between a dependent variable and one or more independent variables (known as profile monitoring) is an increasingly common approach. Most studies in the SPC literature consider parametric approaches in which the functional relationship has the same form in the in-control (IC) and out-of-control (OC) situations. Non-parametric profiles, which have a different functional relationship in the OC conditions, are nevertheless very common. This paper designs a novel control chart to monitor not only the regression parameters but also the variation of the profiles in Phase II applications using an adaptive approach. Adaptive control charts adjust the final statistic based on information from previous samples. The proposed method uses the relative distance of the chart statistic to the control limits as a tendency index and provides some outcomes about the process condition. The results of Monte Carlo simulations show the superiority of the proposed monitoring scheme over common non-parametric control charts. 2022, The Author(s). The publication of this article was funded by Qatar National Library.
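    The tendency idea can be sketched with a simple adaptive EWMA. This is a generic version of the mechanism the abstract describes, not the authors' exact chart: the smoothing weight is interpolated between assumed bounds `lam_min` and `lam_max` in proportion to how close the previous statistic sits to the control limits, and the linear interpolation rule itself is an assumption for illustration.

    ```python
    import numpy as np

    def adaptive_ewma(x, ucl, lcl, lam_min=0.05, lam_max=0.5):
        """EWMA whose smoothing weight grows as the statistic nears a limit.

        tendency = 0 at the centreline, 1 at a control limit; the weight
        lam is linearly interpolated between lam_min and lam_max.
        """
        centre = (ucl + lcl) / 2.0
        half_width = (ucl - lcl) / 2.0
        z = centre                               # start at the centreline
        out = []
        for xt in x:
            tendency = min(1.0, abs(z - centre) / half_width)
            lam = lam_min + (lam_max - lam_min) * tendency
            z = lam * xt + (1 - lam) * z         # update with adapted weight
            out.append(z)
        return np.array(out)
    ```

    A drifting process pushes the statistic toward a limit, which raises the weight and makes the chart react faster, while an in-control process keeps the weight small and the chart smooth.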

    Observability and Economic aspects of Fault Detection and Diagnosis Using CUSUM based Multivariate Statistics

    This project focuses on the fault observability problem and its impact on plant performance and profitability. The study was conducted along two main directions. First, a technique was developed to detect and diagnose faulty situations that could not be observed by previously reported methods. The technique is demonstrated on a subset of faults typically considered for the Tennessee Eastman Process (TEP), which had been found unobservable in all previous studies. The proposed strategy combines the cumulative sum (CUSUM) of the process measurements with Principal Component Analysis (PCA). The CUSUM is used to amplify fault signatures under small fault-to-noise ratios, while PCA filters noise in the presence of highly correlated data. Multivariate indices, namely the T2 and Q statistics based on the cumulative sums of all available measurements, were used for observing these faults. The out-of-control average run length (ARLoc) was proposed as a statistical metric to quantify fault observability. Following fault detection, the problem of fault isolation is treated. It is shown that, for the particular faults considered in the TEP problem, contribution plots are not able to properly isolate the faults under consideration. This motivates the use of the CUSUM-based PCA technique, previously used for detection, to unambiguously diagnose the faults. The diagnosis scheme constructs a family of CUSUM-based PCA models corresponding to each fault and then tests whether the statistical thresholds of a particular faulty model are exceeded, indicating the occurrence or absence of the corresponding fault. Although the CUSUM-based techniques were successful in detecting abnormal situations and isolating the faults, long time intervals were required for both detection and diagnosis. The potential economic impact of these delays motivates the second main objective of this project.
    More specifically, a methodology is developed to quantify the potential economic loss due to unobserved faults when standard statistical monitoring charts are used. Since most chemical and petrochemical plants operate under a closed-loop scheme, the interaction with the control system is also explicitly considered. An optimization problem is formulated to search for the optimal trade-off between fault observability and closed-loop performance. This optimization problem is solved in the frequency domain using approximate closed-loop transfer function models, and in the time domain using a simulation-based approach. The time-domain optimization is applied to the TEP to solve for the optimal tuning parameters of the controllers that minimize an economic cost of the process.
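    The detection side of the CUSUM-based PCA strategy can be sketched as follows. This is a minimal illustration of the idea, not the project's implementation: cumulative sums amplify small sustained faults, PCA handles the cross-correlation, and T2 / Q statistics flag deviations. The control limits here are plain empirical quantiles of the training statistics, whereas a full design would use the usual chi-square / Jackson-Mudholkar limits.

    ```python
    import numpy as np

    def cusum_pca_monitor(X_train, n_pc=2, alpha=0.99):
        """Fit PCA on the CUSUM of in-control data; return a scoring
        function producing (T^2, Q) and empirical control limits."""
        x_mean = X_train.mean(axis=0)
        C_train = np.cumsum(X_train - x_mean, axis=0)   # CUSUM per sensor
        mu = C_train.mean(axis=0)
        _, s, Vt = np.linalg.svd(C_train - mu, full_matrices=False)
        P = Vt[:n_pc].T                                 # PCA loadings
        lam = s[:n_pc] ** 2 / (len(C_train) - 1)        # PC variances

        def scores(X):
            C = np.cumsum(X - x_mean, axis=0)
            T = (C - mu) @ P
            t2 = np.sum(T ** 2 / lam, axis=1)           # Hotelling T^2
            resid = (C - mu) - T @ P.T
            q = np.sum(resid ** 2, axis=1)              # Q (SPE) statistic
            return t2, q

        t2_ic, q_ic = scores(X_train)
        limits = (np.quantile(t2_ic, alpha), np.quantile(q_ic, alpha))
        return scores, limits
    ```

    Because the CUSUM of a small mean shift grows roughly linearly with time, faults that are invisible to PCA on the raw measurements eventually push T2 or Q over their limits, which is exactly the trade-off between observability and detection delay the abstract discusses.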

    Syndromic surveillance: reports from a national conference, 2003

    Overview of Syndromic Surveillance -- What is Syndromic Surveillance? -- Linking Better Surveillance to Better Outcomes -- Review of the 2003 National Syndromic Surveillance Conference - Lessons Learned and Questions To Be Answered -- -- System Descriptions -- New York City Syndromic Surveillance Systems -- Syndrome and Outbreak Detection Using Chief-Complaint Data - Experience of the Real-Time Outbreak and Disease Surveillance Project -- Removing a Barrier to Computer-Based Outbreak and Disease Surveillance - The RODS Open Source Project -- National Retail Data Monitor for Public Health Surveillance -- National Bioterrorism Syndromic Surveillance Demonstration Program -- Daily Emergency Department Surveillance System - Bergen County, New Jersey -- Hospital Admissions Syndromic Surveillance - Connecticut, September 2001-November 2003 -- BioSense - A National Initiative for Early Detection and Quantification of Public Health Emergencies -- Syndromic Surveillance at Hospital Emergency Departments - Southeastern Virginia -- -- Research Methods -- Bivariate Method for Spatio-Temporal Syndromic Surveillance -- Role of Data Aggregation in Biosurveillance Detection Strategies with Applications from ESSENCE -- Scan Statistics for Temporal Surveillance for Biologic Terrorism -- Approaches to Syndromic Surveillance When Data Consist of Small Regional Counts -- Algorithm for Statistical Detection of Peaks - Syndromic Surveillance System for the Athens 2004 Olympic Games -- Taming Variability in Free Text: Application to Health Surveillance -- Comparison of Two Major Emergency Department-Based Free-Text Chief-Complaint Coding Systems -- How Many Illnesses Does One Emergency Department Visit Represent? 
Using a Population-Based Telephone Survey To Estimate the Syndromic Multiplier -- Comparison of Office Visit and Nurse Advice Hotline Data for Syndromic Surveillance - Baltimore-Washington, D.C., Metropolitan Area, 2002 -- Progress in Understanding and Using Over-the-Counter Pharmaceuticals for Syndromic Surveillance -- -- Evaluation -- Evaluation Challenges for Syndromic Surveillance - Making Incremental Progress -- Measuring Outbreak-Detection Performance By Using Controlled Feature Set Simulations -- Evaluation of Syndromic Surveillance Systems - Design of an Epidemic Simulation Model -- Benchmark Data and Power Calculations for Evaluating Disease Outbreak Detection Methods -- Bio-ALIRT Biosurveillance Detection Algorithm Evaluation -- ESSENCE II and the Framework for Evaluating Syndromic Surveillance Systems -- Conducting Population Behavioral Health Surveillance by Using Automated Diagnostic and Pharmacy Data Systems -- Evaluation of an Electronic General-Practitioner-Based Syndromic Surveillance System -- National Symptom Surveillance Using Calls to a Telephone Health Advice Service - United Kingdom, December 2001-February 2003 -- Field Investigations of Emergency Department Syndromic Surveillance Signals - New York City -- Should We Be Worried? 
    Investigation of Signals Generated by an Electronic Syndromic Surveillance System - Westchester County, New York -- -- Public Health Practice -- Public Health Information Network - Improving Early Detection by Using a Standards-Based Approach to Connecting Public Health and Clinical Medicine -- Information System Architectures for Syndromic Surveillance -- Perspective of an Emergency Physician Group as a Data Provider for Syndromic Surveillance -- SARS Surveillance Project - Internet-Enabled Multiregion Surveillance for Rapidly Emerging Disease -- Health Information Privacy and Syndromic Surveillance Systems
    Papers from the second annual National Syndromic Surveillance Conference convened by the New York City Department of Health and Mental Hygiene, the New York Academy of Medicine, and the CDC in New York City during Oct. 23-24, 2003. Published as the September 24, 2004 supplement to vol. 53 of MMWR: Morbidity and Mortality Weekly Report.

    Vol. 1, No. 2 (Full Issue)


    An Integrated Approach to Performance Monitoring and Fault Diagnosis of Nuclear Power Systems

    In this dissertation, an integrated framework for process performance monitoring and fault diagnosis was developed for nuclear power systems using robust data-driven, model-based methods, comprising thermal-hydraulic simulation, data-driven modeling, identification of model uncertainty, and robust residual generator design for fault detection and isolation. In applications to nuclear power systems, on the one hand, historical data are often unable to characterize the relationships among process variables because operating setpoints may change and thermal-fluid components such as steam generators and heat exchangers may experience degradation. On the other hand, first-principles models always carry uncertainty and are often too complicated, in terms of model structure, for designing residual generators for fault diagnosis. Therefore, a realistic fault diagnosis method needs to combine the strength of first-principles models in modeling a wide range of anticipated operating conditions with the strength of data-driven modeling in feature extraction. In the developed robust data-driven, model-based approach, changes in operating conditions are simulated using the first-principles models and the model uncertainty is extracted from plant operation data, such that fault effects on process variables can be decoupled from model uncertainty and normal operation changes. The developed robust fault diagnosis method was able to eliminate false alarms due to model uncertainty and to handle changes in operating conditions throughout the lifetime of nuclear power systems. Multiple robust data-driven, model-based fault diagnosis methods were developed in this dissertation. A complete procedure based on causal graph theory and the data reconciliation method was developed to investigate the causal relationships and the quantitative sensitivities among variables, so that sensor placement could be optimized for fault diagnosis in the design phase.
    A reconstruction-based Principal Component Analysis (PCA) approach was applied to deal with both simple and complex faults for steady-state diagnosis in the context of operation scheduling and maintenance management. A robust PCA model-based method was developed to distinguish fault effects from model uncertainties. To improve the sensitivity of fault detection, a hybrid PCA model-based approach was developed to incorporate system knowledge into data-driven modeling. Subspace identification was proposed to extract state-space models from thermal-hydraulic simulations, and a robust dynamic residual generator design algorithm was developed for fault diagnosis, aimed at fault-tolerant control and extension to reactor startup and load-following operating conditions. The developed robust dynamic residual generator design algorithm is unique in that explicit identification of model uncertainty is not necessary. Finally, the newly developed methods were demonstrated on the IRIS Helical Coil Steam Generator (HCSG) system. A simulation model was first developed for this system. Steady-state simulation revealed that the primary coolant temperature profile could be used to indicate the water inventory inside the HCSG tubes. The performance monitoring and fault diagnosis module was then developed to monitor sensor faults, flow distribution abnormality, and heat performance degradation under both steady-state and dynamic operating conditions. This dissertation bridges the gap between theoretical research on computational intelligence and engineering design in performance monitoring and fault diagnosis for nuclear power systems. The new algorithms have the potential to be integrated into Generation III and Generation IV nuclear reactor I&C designs after they are tested on current nuclear power plants or Generation IV prototype reactors.
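    A standard reconstruction-based contribution (RBC) scheme illustrates how PCA can isolate a faulty sensor. This is a generic textbook sketch, not the dissertation's exact algorithm: for each variable direction, it measures how much of the residual (SPE) of a sample could be removed by reconstructing the sample along that direction, and picks the direction with the largest reduction.

    ```python
    import numpy as np

    def rbc_isolation(x, P, mu):
        """Reconstruction-based contribution for PCA fault isolation.

        x  : one sample (length-m vector)
        P  : m x k PCA loading matrix (orthonormal columns)
        mu : length-m training mean
        Returns the index of the variable with the largest SPE reduction.
        """
        m = len(mu)
        Ctil = np.eye(m) - P @ P.T        # projector onto residual subspace
        xc = x - mu
        contrib = np.empty(m)
        for i in range(m):
            e_i = np.eye(m)[:, i]
            # SPE reduction achievable by reconstructing x along e_i
            contrib[i] = (e_i @ Ctil @ xc) ** 2 / (e_i @ Ctil @ e_i)
        return int(np.argmax(contrib))
    ```

    Unlike raw contribution plots, RBC asks which single-variable correction best explains the residual, which is one reason reconstruction-based diagnosis can succeed where contribution plots smear the fault across correlated variables.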

    Inferential Model Predictive Control Using Statistical Tools

    With an ever-increasing emphasis on reducing costs and improving quality control, the application of advanced process control in the bulk chemical and petrochemical industry is steadily rising. Two major areas of development are model-based control strategies and process sensors. This study deals with the application of multivariate statistical techniques for developing soft sensors in an inferential model predictive control framework. McAvoy (2003) proposed model predictive statistical process control (MP-SPC), a principal component (PC) score control methodology. MP-SPC was found to be very effective in reducing the variability of the quality variables without using any real-time, on-line quality or disturbance measurements. This work extends McAvoy's formulation to incorporate multiple manipulated variables and demonstrates the controller's performance under different disturbance scenarios and for an additional case study. Moreover, implementation issues critical to the success of the formulations, such as controller tuning, measurement selection, and model identification, are also studied. A key feature is the emphasis on confirming the consistency of the cross-correlation between the selected measurements and the quality variable before on-line implementation, and that between the scores and the quality variables after on-line implementation. An analysis of the controller's performance in dealing with disturbances of different frequencies, sizes, and directions, as well as non-stationarities in the disturbance, reveals the robustness of the approach. The penalty on manipulated-variable moves is the most effective tuning parameter. A unique scheme, developed in this study, takes advantage of the information contained in historical databases, combined with plant testing, to generate collinear PC score models.
    The proposed measurement selection algorithm ranks measurements that have a consistent cross-correlation with the quality variable according to their cross-correlation coefficient and lead time. Higher-ranked variables are chosen as long as they make sufficiently large contributions to the PC score model. Several approaches for identifying dynamic score models are proposed, all of which put greater emphasis on short-term predictions. Two approaches utilize the statistics associated with the PC score models: Hotelling's T2 statistic and the Q-residual information may be used to remove outliers during pre-processing or may be incorporated as sample weights. The process dynamics and controller performance results presented in this study are simulations based on well-known, industrially benchmarked test-bed models: the Tennessee Eastman challenge process and the azeotropic distillation tower of the vinyl acetate monomer process.
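    The ranking step can be sketched as below. This is a generic version of the selection idea, not the study's exact rule: each candidate measurement is scored by its strongest absolute cross-correlation with the quality variable over a range of lead times, and ties are broken in favour of longer lead times (a longer lead is more useful for an inferential controller). The `max_lag` parameter and the tie-breaking order are assumptions for illustration.

    ```python
    import numpy as np

    def rank_measurements(X, y, max_lag=10):
        """Rank candidate secondary measurements for a soft sensor.

        X : n x m array of candidate measurements
        y : length-n quality variable
        Returns (variable index, best |cross-correlation|, lead time)
        tuples, strongest correlation first.
        """
        n, m = X.shape
        results = []
        for j in range(m):
            best_r, best_lag = 0.0, 0
            for lag in range(max_lag + 1):      # X[:, j] leads y by `lag`
                r = abs(np.corrcoef(X[: n - lag, j], y[lag:])[0, 1])
                if r > best_r:
                    best_r, best_lag = r, lag
            results.append((j, best_r, best_lag))
        # strongest correlation first; longer lead time breaks ties
        return sorted(results, key=lambda t: (-t[1], -t[2]))
    ```

    The top-ranked variables would then be admitted to the PC score model only while their incremental contribution remains large, mirroring the stopping rule described above.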