
    Subspace decomposition and critical phase selection based cumulative quality analysis for multiphase batch processes

    Quality analysis and prediction are of great significance for ensuring consistent, high product quality in chemical engineering processes. However, previous methods have rarely analyzed the cumulative quality effect, which is typical of batch processes: as time progresses, process variations determine the final product quality in a cumulative manner. Moreover, such methods cannot obtain an early sense of this quality nature. In this paper, a quantitative index is defined which can check ahead of time whether the product quality results from the accumulation, i.e., the addition, of successive process variations, and the cumulative quality effect is addressed for quality analysis and prediction of batch processes. Several crucial issues are solved to explore the cumulative quality effect. First, a quality-relevant sequential phase partition method is proposed to separate multiple phases of batch processes using the fast search and find of density peaks (FSFDP) clustering algorithm. Second, after phase partition, a phase-wise cumulative quality analysis method is proposed based on subspace decomposition, which explores the non-repetitive quality-relevant information (NRQRI) in the process variation at each time within each phase. NRQRI refers to the quality-relevant process variations at each time that are orthogonal to those of previous times; it thus represents complementary quality information and is the key index for explaining quality variations cumulatively over time. Third, process-wise cumulative quality analysis is conducted, in which a critical phase selection strategy is developed to identify critical-to-cumulative-quality phases, and quality predictions from the critical phases are integrated to exclude the influence of uncritical phases. Through the two-level cumulative quality analysis (i.e., phase-wise and process-wise), it is feasible to judge in advance whether the quality exhibits the cumulative effect, so that a proper quality prediction model can be developed by identifying the critical-to-cumulative-quality phases. The feasibility and performance of the proposed algorithm are illustrated with a typical chemical engineering process, injection molding.
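    As a rough illustration of the NRQRI idea described above, the sketch below keeps, at each time slice, only the quality-relevant directions that are orthogonal to those already found at earlier times. It is a minimal, assumption-laden sketch: ordinary least squares stands in for the paper's quality-relevant subspace extraction, and the shapes (X_slices as a list of batches-by-variables matrices, y as the final quality) are illustrative choices rather than the authors' implementation.

        import numpy as np

        # Minimal sketch, not the paper's algorithm: extract quality-relevant
        # directions per time slice and keep only what is orthogonal to earlier ones.
        def nrqri_directions(X_slices, y, tol=1e-8):
            basis = None        # orthonormal basis of quality-relevant variation seen so far
            per_time = []
            for Xk in X_slices:                       # Xk: batches x variables at one time
                Xc = Xk - Xk.mean(axis=0)
                yc = y - y.mean()
                # quality-relevant direction at this time (least squares as a PLS stand-in)
                w, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
                W = np.atleast_2d(w).T
                if basis is not None:
                    W = W - basis @ (basis.T @ W)     # remove information already explained
                q, r = np.linalg.qr(W)                # orthonormalize the remainder
                keep = np.abs(np.diag(r)) > tol       # near-zero: nothing new at this time
                new_dirs = q[:, keep]
                per_time.append(new_dirs)
                if new_dirs.size:
                    basis = new_dirs if basis is None else np.hstack([basis, new_dirs])
            return per_time     # non-repetitive quality-relevant directions, time by time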

    Real-Time Monitoring and Fault Diagnostics in Roll-To-Roll Manufacturing Systems

    A roll-to-roll (R2R) process is a manufacturing technique involving continuous processing of a flexible substrate as it is transferred between rotating rolls. It integrates many additive and subtractive processing techniques to produce rolls of product efficiently and cost-effectively, thanks to its high production rate and large output volume. R2R processes have therefore been increasingly adopted across a wide range of manufacturing industries, including traditional paper/fabric production, plastic and metal foil manufacturing, flexible electronics, thin-film batteries, photovoltaics, graphene film production, etc. However, the increasing complexity of R2R processes and high demands on product quality have heightened the need for effective real-time process monitoring and fault diagnosis in R2R manufacturing systems. This dissertation aims to develop tools that increase system visibility without additional sensors, in order to enhance real-time monitoring and fault diagnosis capability in R2R manufacturing systems. First, a multistage modeling method is proposed for process monitoring and quality estimation in R2R processes. Product-centric and process-centric variation propagation are introduced to characterize variation propagation throughout the system. The multistage model focuses mainly on the formulation of process-centric variation propagation, which uniquely exists in R2R processes, and on the corresponding product quality measurements, combining physical knowledge with sensor data analysis. Second, a nonlinear analytical redundancy method is proposed for sensor validation to ensure the accuracy of sensor measurements for process and quality control. Parity relations based on a nonlinear observation matrix are formulated to characterize system dynamics and sensor measurements. A robust optimization is designed to identify parity-relation coefficients that can tolerate a certain level of measurement noise and system disturbances. The effect of changes in operating conditions on the optimal objective function (the parity residuals) and the optimal design variables (the parity coefficients) is evaluated with sensitivity analysis. Finally, a multiple-model approach for anomaly detection and fault diagnosis is introduced to improve diagnosability under different operating regimes. The growing structure multiple model system (GSMMS) is employed, which uses Voronoi sets to automatically partition the entire operating space into smaller operating regimes. The local model identification problem is revised by formulating it as an optimization problem in the loss minimization framework and solving it with the mini-batch stochastic gradient descent method instead of least-squares algorithms. This revision expands the GSMMS method's capability to handle local model identification problems that have no closed-form solution. The effectiveness of the models and methods is demonstrated with testbed data from an R2R process. The results show that the proposed models and methods are effective tools for understanding variation propagation in R2R processes and improving product quality estimation accuracy by 70%, promptly identifying the health status of sensors to guarantee data accuracy for modeling and decision making, and reducing the false alarm rate while increasing detection power under different operating conditions.
Overall, the tools developed in this thesis contribute to increasing the visibility of R2R manufacturing systems, improving productivity, and reducing the product rejection rate.
    Ph.D., Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/146114/1/huanyis_1.pd
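    The revised GSMMS identification step described in the abstract above replaces a closed-form least-squares fit with loss minimization solved by mini-batch stochastic gradient descent. The sketch below shows what such a fit can look like for one local linear model; the squared loss, learning rate, batch size, and epoch count are illustrative assumptions, not the dissertation's actual settings.

        import numpy as np

        # Minimal sketch of identifying one local linear model with mini-batch SGD
        # instead of least squares; loss and hyperparameters are illustrative only.
        def fit_local_model_sgd(X, y, lr=0.01, batch_size=32, epochs=50, seed=0):
            rng = np.random.default_rng(seed)
            n, p = X.shape
            theta = np.zeros(p)
            for _ in range(epochs):
                order = rng.permutation(n)
                for start in range(0, n, batch_size):
                    idx = order[start:start + batch_size]
                    Xb, yb = X[idx], y[idx]
                    # gradient of the mean squared loss on this mini-batch
                    grad = 2.0 * Xb.T @ (Xb @ theta - yb) / len(idx)
                    theta -= lr * grad
            return theta    # any differentiable loss could replace the squared loss here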

    Scalable Architecture for Integrated Batch and Streaming Analysis of Big Data

    Thesis (Ph.D.), Indiana University, Computer Sciences, 2015.
    As Big Data processing problems evolve, many modern applications demonstrate special characteristics. Data exists in the form of both large historical datasets and high-speed real-time streams, and many analysis pipelines require integrated parallel batch processing and stream processing. Despite the large size of the whole dataset, most analyses focus on specific subsets according to certain criteria. Correspondingly, integrated support for efficient queries and post-query analysis is required. To address the system-level requirements brought by such characteristics, this dissertation proposes a scalable architecture for integrated queries, batch analysis, and streaming analysis of Big Data in the cloud. We verify its effectiveness using a representative application domain, social media data analysis, and tackle related research challenges emerging from each module of the architecture by integrating and extending multiple state-of-the-art Big Data storage and processing systems. In the storage layer, we reveal that existing text indexing techniques do not work well for the unique queries of social data, which put constraints on both textual content and social context. To address this issue, we propose a flexible indexing framework over NoSQL databases to support fully customizable index structures, which can embed the necessary social context information for efficient queries. The batch analysis module demonstrates that analysis workflows consist of multiple algorithms with different computation and communication patterns, which are suited to different processing frameworks. To achieve efficient workflows, we build an integrated analysis stack based on YARN and make novel use of customized indices in developing sophisticated analysis algorithms. In the streaming analysis module, the high-dimensional data representation of social media streams poses special challenges to the problem of parallel stream clustering. Due to the sparsity of the high-dimensional data, the traditional synchronization method becomes expensive and severely impacts the scalability of the algorithm. Therefore, we design a novel strategy that broadcasts the incremental changes rather than the whole centroids of the clusters to achieve scalable parallel stream clustering algorithms. Performance tests using real applications show that our solutions for parallel data loading/indexing, queries, analysis tasks, and stream clustering all significantly outperform implementations using current state-of-the-art technologies.
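    The stream clustering strategy above synchronizes workers by broadcasting incremental changes instead of full high-dimensional centroids. The sketch below illustrates one way that can look for sparse data: each cluster is kept as a sparse sum vector plus a count, so adding a point touches only the point's nonzero dimensions and only that small delta needs to be broadcast. The dict-based sparse representation and the (cluster, delta, count) message layout are assumptions, not the thesis design.

        # Minimal sketch of delta-based synchronization for sparse stream clustering;
        # dict-based sparse vectors and the message layout are assumptions.
        def assign_and_make_delta(point, sums, counts):
            """point: {dim: value}; sums: per-cluster sparse sum vectors; counts: cluster sizes."""
            def sq_dist_to_mean(p, s, n):
                if n == 0:
                    return float("inf")
                dims = set(p) | set(s)
                return sum((p.get(d, 0.0) - s.get(d, 0.0) / n) ** 2 for d in dims)

            k = min(range(len(sums)), key=lambda j: sq_dist_to_mean(point, sums[j], counts[j]))
            delta = dict(point)                       # only the point's nonzero dimensions
            for d, v in delta.items():
                sums[k][d] = sums[k].get(d, 0.0) + v  # apply the change locally
            counts[k] += 1
            # broadcast (k, delta, 1): peers add delta into sums[k] and 1 to counts[k]
            return k, delta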

    Dynamic re-optimization techniques for stream processing engines and object stores

    Large-scale data storage and processing systems are strongly motivated by the need to store and analyze massive datasets. The complexity of a large class of these systems is rooted in their distributed nature, extreme scale, need for real-time response, and streaming operation. The use of these systems in multi-tenant cloud environments with potential resource interference necessitates fine-grained monitoring and control. In this dissertation, we present efficient, dynamic techniques for re-optimizing stream-processing systems and transactional object-storage systems. In the context of stream-processing systems, we present VAYU, a per-topology controller. VAYU uses novel methods and protocols for dynamic, network-aware tuple routing in the dataflow. We show that the feedback-driven controller in VAYU helps achieve high pipeline throughput over long execution periods, as it dynamically detects and diagnoses any pipeline bottlenecks. We also present novel heuristics for optimizing overlays for group communication operations in the streaming model. In the context of object-storage systems, we present M-Lock, a novel lock-localization service for distributed transaction protocols on scale-out object stores that increases transaction throughput. Lock localization refers to the dynamic migration and partitioning of locks across nodes in the scale-out store to reduce cross-partition acquisition of locks. The service leverages observed object-access patterns to achieve lock clustering and deliver high performance. We also present TransMR, a framework that uses distributed, transactional object stores to orchestrate and execute asynchronous components in amorphous data-parallel applications on scale-out architectures.
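    VAYU's controller is described above as feedback-driven, detecting and diagnosing pipeline bottlenecks at runtime. The dissertation's actual heuristics are not spelled out here, so the sketch below is only a generic example of such a rule: flag a stage as the bottleneck when it is near saturation and its queue keeps growing. The metric names and threshold are assumptions.

        from dataclasses import dataclass

        # Generic sketch only: VAYU's real detection heuristics are not given above.
        @dataclass
        class StageStats:
            name: str
            arrival_rate: float   # tuples/s entering the stage in the last window
            service_rate: float   # tuples/s the stage can process
            queue_growth: float   # change in queue length over the last window

        def find_bottleneck(stages, util_threshold=0.9):
            # A stage is suspect when it is near saturation and its queue keeps growing.
            suspects = [s for s in stages
                        if s.service_rate > 0
                        and s.arrival_rate / s.service_rate >= util_threshold
                        and s.queue_growth > 0]
            # Report the most saturated suspect, or None if the pipeline looks healthy.
            return max(suspects, key=lambda s: s.arrival_rate / s.service_rate, default=None)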

    A Review of Kernel Methods for Feature Extraction in Nonlinear Process Monitoring

    Kernel methods are a class of learning machines for the fast recognition of nonlinear patterns in any data set. In this paper, the applications of kernel methods for feature extraction in industrial process monitoring are systematically reviewed. First, we describe the reasons for using kernel methods and contextualize them among other machine learning tools. Second, by reviewing a total of 230 papers, this work identifies 12 major issues surrounding the use of kernel methods for nonlinear feature extraction. Each issue is discussed in terms of why it is important and how it has been addressed over the years by many researchers. We also present a breakdown of the commonly used kernel functions, parameter selection routes, and case studies. Lastly, this review provides an outlook into the future of kernel-based process monitoring, which will hopefully instigate more advanced yet practical solutions in the process industries.
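    Kernel principal component analysis is the canonical nonlinear feature extractor in the family this review surveys, so a minimal sketch is included below for orientation. The RBF kernel, its width, and the number of components are illustrative assumptions, and a real monitoring scheme would add statistics such as T2 and SPE on top of the extracted scores.

        import numpy as np

        # Minimal kernel PCA sketch; kernel choice and parameters are illustrative.
        def rbf_kernel(X, Y, gamma=0.1):
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def kernel_pca_scores(X, n_components=2, gamma=0.1):
            """Project training samples onto the leading kernel principal components."""
            n = X.shape[0]
            K = rbf_kernel(X, X, gamma)
            J = np.eye(n) - np.ones((n, n)) / n
            Kc = J @ K @ J                              # center the kernel matrix in feature space
            vals, vecs = np.linalg.eigh(Kc)             # eigenvalues in ascending order
            vals, vecs = vals[::-1], vecs[:, ::-1]
            alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
            return Kc @ alphas                          # nonlinear scores, e.g. for T2/SPE charts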