
    Recognition of Process Disturbances for an SPC/EPC Stochastic System Using Support Vector Machine and Artificial Neural Network Approaches

    Because of its excellent performance in monitoring and controlling autocorrelated processes, the integration of statistical process control (SPC) and engineering process control (EPC) has drawn considerable attention in recent years. Both theoretical and empirical findings have suggested that integrating SPC and EPC can be an effective way to improve process quality, especially when the underlying process is autocorrelated. However, because EPC compensates for the effects of underlying disturbances, the disturbance patterns become embedded and hard to recognize. Effective recognition of disturbance patterns is an important issue for process improvement, since disturbance patterns are associated with particular assignable causes that affect the process. In practice, after compensation by EPC, the underlying disturbance patterns can be of mixed types that differ completely from the original patterns. This study proposes the integration of support vector machine (SVM) and artificial neural network (ANN) approaches to recognize the patterns of the underlying disturbances. Experimental results revealed that the proposed schemes are able to effectively recognize various disturbance patterns of an SPC/EPC system.
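
    As a rough illustration of the classification stage described above, the sketch below trains a support vector machine on simulated disturbance patterns (shift, trend and cycle). The pattern generators, window length and SVM settings are illustrative assumptions rather than the authors' experimental design, and the ANN stage is omitted.

```python
# Sketch: SVM recognition of simulated process-disturbance patterns.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
WINDOW = 50  # observations per monitoring window (assumed)

def make_pattern(kind):
    """Simulate one disturbance window: 0 = mean shift, 1 = trend, 2 = cycle."""
    t = np.arange(WINDOW)
    noise = rng.normal(0, 1, WINDOW)
    if kind == 0:
        return noise + 2.0 * (t >= WINDOW // 2)      # sudden mean shift
    if kind == 1:
        return noise + 0.05 * t                      # linear trend
    return noise + 1.5 * np.sin(2 * np.pi * t / 10)  # cyclic disturbance

X = np.array([make_pattern(k) for k in range(3) for _ in range(200)])
y = np.repeat([0, 1, 2], 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("pattern recognition accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```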

    Quality 4.0 in action: Smart hybrid fault diagnosis system in plaster production

    Industry 4.0 (I4.0) represents the Fourth Industrial Revolution in manufacturing, expressing the digital transformation of industrial companies through emerging technologies. Factories of the future will rely on hybrid solutions, while quality remains at the heart of all manufacturing systems regardless of the type of production and products. Quality 4.0 is a branch of I4.0 that aims to boost quality by employing smart solutions and intelligent algorithms. Many conceptual frameworks and models exist, but the main challenge is to bring Quality 4.0 into action at the workshop level. In this paper, a hybrid model based on a neural network (NN) and an expert system (ES) is proposed for dealing with control chart patterns (CCPs). The idea is to have, instead of a passive descriptive model, a smart predictive model that recommends corrective actions. A construction plaster-producing company was used to present and evaluate the advantages of this novel approach, and the results show the competency and eligibility of Quality 4.0 in action.
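
    A minimal sketch of the hybrid idea follows: a small neural network labels the control chart pattern, and a rule table, standing in for the expert system, suggests a corrective action. The simulated charts, network size and rules are hypothetical, not the paper's plaster-production model.

```python
# Sketch: NN pattern recognition plus a rule base recommending actions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
N = 60  # points per control chart window (assumed)
PATTERNS = ["normal", "trend", "shift"]

def simulate(pattern):
    """Simulate one chart window showing the given pattern."""
    t = np.arange(N)
    base = rng.normal(0, 1, N)
    return {"normal": base,
            "trend": base + 0.04 * t,
            "shift": base + 1.8 * (t >= N // 2)}[pattern]

X = np.array([simulate(p) for p in PATTERNS for _ in range(300)])
y = np.repeat(range(len(PATTERNS)), 300)

# Neural network stage: recognize the control chart pattern.
nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                   random_state=1).fit(X, y)

# Hypothetical rule base standing in for the expert system stage.
ACTIONS = {"normal": "no action required",
           "trend": "check for tool wear or gradual raw-material drift",
           "shift": "inspect for an abrupt change such as a new material batch"}

new_chart = simulate("trend").reshape(1, -1)
detected = PATTERNS[int(nn.predict(new_chart)[0])]
print(f"detected pattern: {detected} -> recommendation: {ACTIONS[detected]}")
```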

    Hybrid Artificial Neural Networks Modeling for Faults Identification of a Stochastic Multivariate Process

    Due to the recent rapid growth of advanced sensing and production technologies, the monitoring and diagnosis of multivariate process operating performance have drawn increasing interest in process industries. The multivariate statistical process control (MSPC) chart is one of the most commonly used tools for detecting process faults. However, an out-of-control MSPC signal only indicates that process faults have intruded into the underlying process; identifying which of the monitored quality variables is responsible for the signal is fairly difficult. Pinpointing the responsible variable is vital for process improvement because it effectively determines the root causes of the process faults. Accordingly, this identification has become an important research issue in recent multivariate process applications. In contrast with the traditional single-classifier approach, the present study proposes hybrid modeling schemes to address problems that involve a large number of quality variables in a multivariate normal process. The proposed scheme combines multivariate adaptive regression splines (MARS), logistic regression (LR), and an artificial neural network (ANN). By applying the MARS and LR techniques, we may obtain fewer but more significant quality variables, which can serve as inputs to the ANN classifier. The performance of our proposed approaches was evaluated by conducting a series of experiments.
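
    The sketch below illustrates the two-stage idea on simulated data: an L1-penalized logistic regression screens the quality variables (the MARS stage is omitted here; it would require a separate package such as py-earth), and an ANN then identifies which variable is responsible for the out-of-control signal. The data generator and model settings are assumptions, not the study's experimental design.

```python
# Sketch: variable screening (LR) followed by an ANN fault-variable classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
P, N_PER_CLASS = 20, 200   # 20 monitored quality variables (assumed)

# Simulated out-of-control samples: the class label is the index of the
# responsible variable; faults are injected only in the first five variables.
X, y = [], []
for fault_var in range(5):
    samples = rng.normal(0, 1, (N_PER_CLASS, P))
    samples[:, fault_var] += 2.0        # mean shift on the responsible variable
    X.append(samples)
    y.append(np.full(N_PER_CLASS, fault_var))
X, y = np.vstack(X), np.concatenate(y)

# Stage 1: L1-penalized logistic regression screens the quality variables.
screen = SelectFromModel(LogisticRegression(penalty="l1", solver="saga",
                                            C=0.5, max_iter=5000))
# Stage 2: an ANN classifier identifies the responsible variable.
model = make_pipeline(screen, MLPClassifier(hidden_layer_sizes=(32,),
                                            max_iter=500, random_state=2))
print("identification accuracy:", cross_val_score(model, X, y, cv=3).mean())
```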

    Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data

    Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analyzing of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy and its hallmark will be ‘team science’.

    Evaluation of face recognition algorithms under noise

    One of the major applications of computer vision and image processing is face recognition, where a computerized algorithm automatically identifies a person’s face from a large image dataset or even from a live video. This thesis addresses facial recognition, a topic that has been widely studied due to its importance in many applications in both civilian and military domains. The application of face recognition systems has expanded from security purposes to social networking sites, managing fraud, and improving user experience. Numerous algorithms have been designed to perform face recognition with good accuracy. The problem is challenging due to the dynamic nature of the human face and the different poses that it can take. Regardless of the algorithm, facial recognition accuracy can be heavily affected by the presence of noise. This thesis presents a comparison of traditional and deep learning face recognition algorithms under the presence of noise. For this purpose, Gaussian and salt-and-pepper noise are applied to face images drawn from the ORL dataset. Recognition is performed using each of the following eight algorithms: principal component analysis (PCA), two-dimensional PCA (2D-PCA), linear discriminant analysis (LDA), independent component analysis (ICA), discrete cosine transform (DCT), support vector machine (SVM), convolutional neural network (CNN) and AlexNet. The ORL dataset was used in the experiments to calculate the evaluation accuracy for each of the investigated algorithms. Each algorithm is evaluated with two experiments: in the first experiment only one image per person is used for training, whereas in the second experiment five images per person are used for training. The investigated traditional algorithms are implemented with MATLAB and the deep learning approaches are implemented with Python. The results show that the best traditional performance was obtained using the DCT algorithm with 92% dominant eigenvalues and 95.25% accuracy, whereas for deep learning the best performance was obtained using a CNN with an accuracy of 97.95%, which makes it the best choice under noisy conditions.
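
    As a simplified illustration of this kind of comparison, the sketch below trains a PCA + SVM pipeline on clean ORL (Olivetti) faces and evaluates it on copies corrupted with Gaussian and salt-and-pepper noise. The noise levels, number of components and five-images-per-person split are illustrative choices, not the thesis's exact protocol.

```python
# Sketch: face recognition accuracy on the ORL faces under two noise types.
import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

faces = fetch_olivetti_faces()          # the ORL/AT&T face database
X, y = faces.data, faces.target         # 400 images, 40 subjects, pixels in [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.5,
                                          stratify=y, random_state=0)

def gaussian_noise(imgs, sigma=0.1, rng=np.random.default_rng(0)):
    return np.clip(imgs + rng.normal(0, sigma, imgs.shape), 0, 1)

def salt_and_pepper(imgs, amount=0.05, rng=np.random.default_rng(0)):
    noisy = imgs.copy()
    mask = rng.random(imgs.shape) < amount
    noisy[mask] = rng.integers(0, 2, mask.sum())   # random black or white pixels
    return noisy

model = make_pipeline(PCA(n_components=100, whiten=True),
                      SVC(kernel="rbf", C=10.0)).fit(X_tr, y_tr)

for name, X_noisy in [("clean", X_te),
                      ("gaussian", gaussian_noise(X_te)),
                      ("salt & pepper", salt_and_pepper(X_te))]:
    acc = accuracy_score(y_te, model.predict(X_noisy))
    print(f"{name:14s} accuracy: {acc:.3f}")
```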

    Enhancement and optimization of a multi-command-based brain-computer interface

    Brain-computer interfaces (BCIs) assist disabled persons in controlling many appliances without any physical interaction (e.g., pressing a button). Steady-state visually evoked potentials (SSVEPs) are brain activities elicited by visual stimulation paradigms. This dissertation addresses problems that limit the usability of BCI systems by optimizing and enhancing performance through particular designs. The main contribution of this work is improving the brain's reaction response using focal approaches.
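
    The abstract does not spell out its detection algorithm; the sketch below shows one common SSVEP detection technique, canonical correlation analysis against sine/cosine reference signals, applied to synthetic EEG. The stimulation frequencies, sampling rate and signal model are assumptions, not the dissertation's own method.

```python
# Sketch: CCA-based SSVEP command detection on synthetic multichannel EEG.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250                            # sampling rate in Hz (assumed)
DURATION = 2.0                      # seconds of EEG per decision (assumed)
FREQS = [8.0, 10.0, 12.0, 15.0]     # candidate stimulation frequencies (assumed)
t = np.arange(0, DURATION, 1.0 / FS)

def references(freq, harmonics=2):
    """Sine/cosine reference signals at the stimulation frequency and harmonics."""
    refs = []
    for h in range(1, harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def detect(eeg):
    """Return the candidate frequency whose references correlate best with the EEG."""
    scores = []
    for f in FREQS:
        cca = CCA(n_components=1).fit(eeg, references(f))
        u, v = cca.transform(eeg, references(f))
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return FREQS[int(np.argmax(scores))]

# Synthetic 4-channel EEG responding to a 10 Hz stimulus, buried in noise.
rng = np.random.default_rng(3)
eeg = (0.5 * np.sin(2 * np.pi * 10.0 * t)[:, None]
       + rng.normal(0, 1, (t.size, 4)))
print("detected command frequency:", detect(eeg), "Hz")
```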

    Doctor of Philosophy

    The problem of information transfer between healthcare sectors and across the continuum of care was examined using a mixed methods approach. These methods include qualitative interviews, retrospective case reviews and an informatics gap analysis. Findings and conclusions are reported for each study. Qualitative interviews were conducted with 16 healthcare representatives from 4 disciplines (medicine, pharmacy, nursing, and social work) and 3 healthcare sectors (hospital, skilled nursing care and community care). Three key themes from a Joint Cognitive Systems theoretical model were used to examine the qualitative findings. Agreement on cross-sector care goals is neither defined nor made explicit, and in some instances sectors work at cross purposes. Care goals and information paradigms change as patients move from hospital-based crisis stabilization, diagnosis and treatment to post-discharge care at home or skilled nursing recovery, function restoration, or end-of-life support. Control of the transfer process varies across institutions, with little feedback and feed-forward. Lack of knowledge, competency and information tracking threatens sector interdependencies with suspicion and distrust. Sixty-three patients discharged between 2006 and 2008 from hospitals to skilled nursing facilities were randomly selected and reviewed. Most notably missing are discharge summaries (30%), nursing assessments or notes (17%), and social work documents (25%). Advance directives or living wills necessary for end-of-life support were present in only 6% of the cases. The presence of information on activities of daily living (ADLs), other disabling conditions, and nutrition was associated with positive outcomes at the 0.001, 0.04 and 0.08 levels. Consistent geriatric information transfer across the continuum is needed for relevant care management. An interoperability gap analysis conducted on the LINC (Linking Information Necessary for Care) transfer form determined its interoperability to be at semantic level 0. Detailed Clinical Models representing care management processes are challenged by the lack of consensus in terminology standards across sectors. Construction of information transfer solutions compliant with the Centers for Medicare and Medicaid Services (CMS) Stage 2 meaningful use criteria must address syntactic and semantic standards, map sector terminologies within care management processes, and account for the lack of standard terminologies in allied health domains.

    Biometric Systems

    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from their design and development to qualification and final application. The major systems discussed in this book include fingerprint identification, face recognition, iris segmentation and classification, signature verification and other miscellaneous systems covering management policies of biometrics, reliability measures, pressure-based typing and signature verification, bio-chemical systems and behavioral characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems and at the same time includes state-of-the-art approaches to their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.

    An investigation on automatic systems for fault diagnosis in chemical processes

    Plant safety is the most important concern of chemical industries. Process faults can cause economic losses as well as human and environmental damage. Most operational faults are normally considered in the process design phase by applying methodologies such as Hazard and Operability Analysis (HAZOP). However, it should be expected that failures may occur in an operating plant. For this reason, it is of paramount importance that plant operators can promptly detect and diagnose such faults in order to take the appropriate corrective actions. In addition, preventive maintenance needs to be considered in order to increase plant safety. Fault diagnosis has been approached with both analytic and data-based models, using several techniques and algorithms. However, there is not yet a general fault diagnosis framework that joins detection and diagnosis of faults, whether or not they are registered in historical records. Moreover, few efforts have focused on automating and implementing the reported approaches in real practice. Against this background, this thesis proposes a general framework for data-driven Fault Detection and Diagnosis (FDD), applicable and amenable to automation in any industrial scenario in order to maintain plant safety. The main requirement for constructing this system is the existence of historical process data. In this sense, promising methods imported from the Machine Learning field are introduced as fault diagnosis methods. The learning algorithms, used as diagnosis methods, have proved capable of diagnosing not only the modeled faults, but also novel faults. Furthermore, Risk-Based Maintenance (RBM) techniques, widely used in the petrochemical industry, are proposed to be applied as part of preventive maintenance in all industry sectors. The proposed FDD system, together with an appropriate preventive maintenance program, would represent a potential plant safety program to be implemented. Chapter one presents a general introduction to the thesis topic, as well as the motivation and scope. Chapter two then reviews the state of the art of the related fields: fault detection and diagnosis methods found in the literature are reviewed, and a taxonomy that joins both Artificial Intelligence (AI) and Process Systems Engineering (PSE) classifications is proposed. The assessment of fault diagnosis with performance indices is also reviewed, as is the state of the art of Risk Analysis (RA) as a tool for taking corrective actions to faults and of Maintenance Management for preventive actions. Finally, the benchmark case studies against which FDD research is commonly validated are examined in this chapter. The second part of the thesis, comprising chapters three to six, addresses the methods applied during the research work. Chapter three deals with data pre-processing, chapter four with the feature processing stage and chapter five with the diagnosis algorithms, while chapter six introduces the Risk-Based Maintenance techniques for addressing plant preventive maintenance. The third part includes chapter seven, which constitutes the core of the thesis. In this chapter the proposed general FDD system is outlined, divided into three steps: diagnosis model construction, model validation and on-line application. This scheme includes a fault detection module and an Anomaly Detection (AD) methodology for the detection of novel faults.
Furthermore, several approaches are derived from this general scheme for continuous and batch processes. The fourth part of the thesis presents the validation of the approaches. Specifically, chapter eight presents the validation of the proposed approaches in continuous processes and chapter nine the validation of the batch process approaches. Chapter ten applies the AD methodology to real-scale batch processes: first to a lab-scale heat exchanger and then to a Photo-Fenton pilot plant, which corroborates its potential and success in real practice. Finally, the fifth part, comprising chapter eleven, presents the final conclusions and the main contributions of the thesis; the scientific production achieved during the research period is also listed and prospects for further work are outlined.
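
    As a simplified illustration of the kind of data-driven detection module the framework describes, the sketch below fits PCA on normal operating data and flags new samples whose Hotelling's T² or squared prediction error (SPE) exceeds an empirical limit. The simulated data and thresholds are illustrative, not the thesis's validated settings.

```python
# Sketch: PCA-based fault detection with T^2 and SPE monitoring statistics.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
normal = rng.normal(0, 1, (1000, 8))        # historical data under normal operation

pca = PCA(n_components=3).fit(normal)

def t2_spe(X):
    """Hotelling's T^2 and squared prediction error for each sample."""
    scores = pca.transform(X)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
    residual = X - pca.inverse_transform(scores)
    spe = np.sum(residual**2, axis=1)
    return t2, spe

# Empirical 99th-percentile control limits from the training data.
t2_lim, spe_lim = (np.percentile(s, 99) for s in t2_spe(normal))

faulty = rng.normal(0, 1, (5, 8))
faulty[:, 2] += 4.0                         # simulated sensor/process fault
t2, spe = t2_spe(faulty)
print("fault detected:", (t2 > t2_lim) | (spe > spe_lim))
```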