
    Evaluation of nonlinear scaling and transformation for nonlinear process fault detection

    On-line fault detection of nonlinear processes involving dynamic dependencies and similar/overlapping fault signatures is a challenging task. Early detection and unambiguous diagnosis require monitoring approaches that can cope with these features. This paper compares two broad multivariate statistical approaches proposed in the literature for the detection task: (i) nonlinear transformations that generate linear maps in a high-dimensional feature space, together with their dynamic variants, as exemplified by kernel principal component analysis and dynamic kernel principal component analysis, and (ii) nonlinear scaling of the data to promote better self-aggregation of data classes and hence improved discrimination, as exemplified by correspondence analysis. Using the Tennessee Eastman benchmark problem, we compare the performance of these methods with respect to standard metrics such as detection delay, false alarm rate (Type I error) and missed detection rate (Type II error). We also compare the methods on the basis of computational cost and provide summarizing remarks on the ease of deployment and maintenance of such approaches for plant-wide fault detection of complex chemical processes.
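    A minimal sketch of what a kernel-PCA monitoring scheme of the kind evaluated above might look like is given below. It is not the authors' implementation: the scikit-learn KernelPCA call, the RBF kernel width, the number of retained components and the chi-square approximation for the 99% T² control limit are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.decomposition import KernelPCA
from sklearn.preprocessing import StandardScaler

def fit_kpca_monitor(X_normal, n_components=10, gamma=1e-3):
    """Fit a scaler and a kernel PCA model on fault-free training data."""
    scaler = StandardScaler().fit(X_normal)
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma)
    scores = kpca.fit_transform(scaler.transform(X_normal))
    score_var = scores.var(axis=0, ddof=1)          # variance of each latent score
    t2_limit = chi2.ppf(0.99, df=n_components)      # approximate 99% control limit
    return scaler, kpca, score_var, t2_limit

def t2_statistic(X, scaler, kpca, score_var):
    """Hotelling T^2 of new samples in the kernel principal-component space."""
    scores = kpca.transform(scaler.transform(X))
    return np.sum(scores ** 2 / score_var, axis=1)

# Usage sketch: a fault is flagged when the statistic crosses the control limit.
# scaler, kpca, score_var, t2_limit = fit_kpca_monitor(X_train)
# alarms = t2_statistic(X_test, scaler, kpca, score_var) > t2_limit
```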

    Data reduction algorithm based on principle of distributional equivalence for fault diagnosis

    Historical data based fault diagnosis methods exploit two key strengths of multivariate statistical approaches, viz.: (i) data compression ability, and (ii) discriminatory ability. It has been shown that correspondence analysis (CA) is superior to principal components analysis (PCA) on both these counts (Detroja, Gudi, Patwardhan, & Roy, 2006a), and hence is better suited for the task of fault detection and isolation (FDI). In this paper, we propose a CA based methodology for fault diagnosis that can facilitate significant data reduction as well as better discrimination. The proposed methodology is based on the principle of distributional equivalence (PDE). The PDE is a property unique to the CA algorithm and can be very useful in analyzing large datasets. The principle, when applied to historical data sets for FDI, can significantly reduce the data matrix size without significantly affecting the discriminatory ability of the CA algorithm. This, in turn, can significantly reduce the computational load during statistical model building. The data reduction ability of the proposed methodology is demonstrated using a simulation case study involving the benchmark quadruple-tank laboratory process. When applied to experimental data obtained from the quadruple-tank process, the proposed methodology also demonstrated the data reduction capability of the principle of distributional equivalence. The above aspect has also been validated for large-scale data sets using the benchmark Tennessee Eastman process simulation case study.
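    As an illustration of how the PDE can shrink a historical data matrix before model building, the sketch below pools rows whose row profiles are nearly identical; by the principle of distributional equivalence, such pooling leaves the CA solution essentially unchanged. The greedy merging rule and the tolerance are assumptions made for illustration, not the paper's algorithm.

```python
import numpy as np

def reduce_by_pde(X, tol=1e-3):
    """Greedily pool rows of a nonnegative data matrix whose profiles nearly coincide."""
    profiles = X / X.sum(axis=1, keepdims=True)        # row profiles
    merged = []
    used = np.zeros(len(X), dtype=bool)
    for i in range(len(X)):
        if used[i]:
            continue
        # Unused rows whose profile lies within `tol` of row i (max abs difference).
        close = (np.max(np.abs(profiles - profiles[i]), axis=1) < tol) & ~used
        merged.append(X[close].sum(axis=0))            # pooled row keeps the same profile
        used |= close
    return np.vstack(merged)

# Usage sketch: X_reduced = reduce_by_pde(X_history, tol=1e-2)
# X_reduced has fewer rows, yet the CA geometry is essentially preserved.
```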

    Data reduction and fault diagnosis using principle of distributional equivalence

    Historical data based fault diagnosis methods exploit two key strengths of the multivariate statistical tool being used: (i) data compression ability, and (ii) discriminatory ability. It has been shown that correspondence analysis (CA) is superior to principal components analysis (PCA) on both these counts [1], and hence is better suited for the task of fault detection and isolation (FDI). In this paper, we propose a methodology for fault diagnosis that can facilitate significant data reduction as well as better discrimination. The proposed methodology is based on the principle of distributional equivalence (PDE). The PDE is a property unique to CA and can be very useful in analyzing large datasets. The principle, when applied to historical data sets for FDI, can significantly reduce the data matrix size without significantly affecting the discriminatory ability of the CA algorithm. The data reduction ability of the proposed methodology is demonstrated using a simulation case study involving the benchmark quadruple-tank laboratory process. The above aspect is also validated for a large-scale system using the benchmark Tennessee Eastman process simulation case study.
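    For concreteness, the sketch below shows the standard correspondence-analysis decomposition that the proposed methodology builds on (SVD of the matrix of standardized residuals); it is textbook CA, not code taken from the paper.

```python
import numpy as np

def correspondence_analysis(X, n_components=2):
    """Principal row coordinates of a nonnegative data matrix via standard CA."""
    P = X / X.sum()                                        # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)                    # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))     # standardized residuals
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    # Principal coordinates of the rows (samples); distinct faults separate in this space.
    return (U[:, :n_components] * s[:n_components]) / np.sqrt(r)[:, None]
```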
