34 research outputs found

    Kernel Belief Propagation

    We propose a nonparametric generalization of belief propagation, Kernel Belief Propagation (KBP), for pairwise Markov random fields. Messages are represented as functions in a reproducing kernel Hilbert space (RKHS), and message updates are simple linear operations in the RKHS. KBP makes none of the assumptions commonly required in classical BP algorithms: the variables need not arise from a finite domain or a Gaussian distribution, nor must their relations take any particular parametric form. Rather, the relations between variables are represented implicitly, and are learned nonparametrically from training data. KBP has the advantage that it may be used on any domain where kernels are defined (R^d, strings, groups), even where explicit parametric models are not known, or closed-form expressions for the BP updates do not exist. The computational cost of message updates in KBP is polynomial in the training data size. We also propose a constant-time approximate message update procedure by representing messages using a small number of basis functions. In experiments, we apply KBP to image denoising, depth prediction from still images, and protein configuration prediction: KBP is faster than competing classical and nonparametric approaches (by orders of magnitude, in some cases), while providing significantly more accurate results.
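
    To make the "messages are linear operations in an RKHS" idea concrete, here is a hedged toy sketch (not the authors' KBP implementation): the pairwise potential is taken to be a Gaussian kernel density over training pairs, so the message sent to a neighbouring variable, evaluated at the training points, reduces to a single matrix-vector product of kernel evaluations. The two-variable chain, the kernel bandwidth, and all names are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_kernel(a, b, sigma=0.3):
    """Gaussian RBF kernel matrix between 1-D sample vectors a and b."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * sigma ** 2))

# Training data for a pairwise MRF over two continuous variables x1 -- x2.
n = 200
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.2 * rng.normal(size=n)              # x2 depends on x1

# Kernel evaluations at the training points; the pairwise potential is taken to be
# a product-kernel density estimate over the training pairs (an assumption).
K2 = gauss_kernel(x2, x2)

# Evidence: we observe x1 = 1.2 and want the belief over x2.
k_evidence = gauss_kernel(np.array([1.2]), x1)[0]     # kernel weights of the evidence on x1

# Message x1 -> x2, evaluated at the x2 training points:
# m(x2_i) ~ (1/n) * sum_j k(x2_i, x2_j) * k(1.2, x1_j), i.e. one matrix-vector product.
message = K2 @ k_evidence / n

belief = message / message.sum()                      # normalise over the sample locations
print("approximate posterior mean of x2 | x1 = 1.2:", float(belief @ x2))
```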

    ๋งˆ๋ฅด์ฝ”ํ”„ ๋žœ๋ค ํ•„๋“œ ํ•™์Šต ๋ฐ ์ถ”๋ก ๊ณผ ๊ทธ๋ž˜ํ”„ ๋ผ์˜๋ฅผ ํ™œ์šฉํ•œ ๊ณต์ • ์ด์ƒ ๊ฐ์ง€ ๋ฐ ์ง„๋‹จ ๋ฐฉ๋ฒ•๋ก 

    Doctoral thesis, Seoul National University Graduate School, College of Engineering, School of Chemical and Biological Engineering, February 2019. Advisor: ์ด์›๋ณด.

    Fault detection and diagnosis (FDD) is an essential part of safe plant operation. Fault detection refers to the process of detecting the occurrence of a fault quickly and accurately; representative methods include those based on principal component analysis (PCA) and autoencoders (AE). Fault diagnosis is the process of isolating the root-cause node of the fault and then determining the fault propagation path in order to identify the characteristics of the fault. Among the various approaches, data-driven methods are the most widely used because of their applicability and good performance compared with analytical and knowledge-based methods. Although many studies have addressed FDD, no methodology exists that carries out every step of FDD so that the fault is both effectively detected and diagnosed, and existing methods have limited applicability and performance. Previous fault detection methods lose variable characteristics in dimensionality reduction and carry large computational loads, leading to poor performance on complex faults. Likewise, preceding fault diagnosis methods yield inaccurate fault isolation and biased fault propagation path analysis, because the digraphs of process variable relationships are constructed from knowledge-based characteristics. A comprehensive FDD methodology that performs well for complex faults and variable relationships is therefore required.

    In this study, an efficient and effective comprehensive FDD methodology based on Markov random field (MRF) modelling is proposed. MRFs provide an effective means of modelling complex variable relationships and allow efficient computation of the marginal probabilities of the process variables, leading to good FDD performance. First, a fault detection framework for process variables is proposed that integrates MRF modelling and structure learning with an iterative graphical lasso. The graphical lasso is an algorithm for learning the structure of MRFs and is applicable to large variable sets, since it approximates the MRF structure by assuming Gaussian relationships between variables. By applying the graphical lasso iteratively to the monitored variables, the variable set is subdivided into smaller groups, which mitigates the computational cost of MRF inference and allows efficient fault detection. The variable groups obtained from the iterative graphical lasso are then monitored with the MRF framework proposed in this work: the monitoring statistic is the probability density of each variable group, computed by kernel density estimation, and the monitoring limit is obtained separately for each group using a false alarm rate of 5%. Second, a fault isolation and propagation path analysis methodology is proposed, in which the conditional marginal probability of each variable is computed via inference and then used to calculate the conditional contribution of individual variables during a fault. Using the kernel belief propagation (KBP) algorithm, which learns and performs inference in MRFs over continuous variables, the MRF parameters are trained on normal process data, and the conditional contribution of each variable is then calculated for every sample of the fault process data.
By analyzing the magnitude and reaction speed of the conditional contribution of individual variables, the root fault node can be isolated and the fault propagation path determined effectively.

    Finally, the proposed methodology is verified by applying it to the well-known Tennessee Eastman process (TEP) model. Because the TEP has served for years as a benchmark process for verifying various FDD methods, it allows performance comparison; and because it consists of multiple units and has complex variable relationships such as recycle loops, it is suitable for verifying the performance of the proposed methodology. The application results show that the proposed methodology outperforms state-of-the-art FDD algorithms in both fault detection and diagnosis. All 28 faults designed into the TEP model were detected with a fault detection accuracy of over 95%, higher than that of any previously proposed fault detection method. The method also gave good fault isolation and propagation path analysis results: the root-cause node of every fault was detected correctly, and the characteristics of the initiated faults were identified through fault propagation path analysis.

    Process fault detection and diagnosis systems are an essential part of safe plant operation. Fault detection refers to the process of detecting a fault immediately and accurately when it occurs; representative approaches include detection methods based on principal component analysis and autoencoders. Fault diagnosis is the process of isolating the root-cause node of a fault and identifying the characteristics of the fault by tracing its propagation path. Various approaches to process fault detection and diagnosis exist, including analytical and knowledge-based methods, but data-driven methods, known to be the most useful in terms of applicability and performance, are the most widely employed. Data-driven FDD has been studied from many directions, yet only a few methods can perform both detection and diagnosis effectively, and none of the existing methods performs well in both areas. This is because existing methods have limited applicability and show limited performance when applied to processes. In fault detection, typical problems include degraded detection capability caused by the computational overload of handling large volumes of data, inaccurate reflection of variable characteristics when dimensionality reduction is used, and failure to detect complex faults because computations are performed in the reduced space. In fault diagnosis, root-cause isolation and fault propagation path analysis are often inaccurate, because dimensionality reduction loses the characteristics of the process variables and because applying prior process knowledge through directed graphs can bias the diagnosis results.

    Given these limitations of existing methods, there is a pressing need for an integrated fault detection and diagnosis methodology that preserves the characteristics of individual variables, performs both detection and diagnosis effectively, and remains computationally efficient. This study proposes an integrated process monitoring methodology, based on Markov random field modelling and the graphical lasso, that can carry out both fault detection and fault diagnosis. Markov random fields make it possible to model nonlinear and non-Gaussian variable relationships effectively, and because the monitoring statistic computed during a fault reflects the characteristics of each variable, they serve as an effective means of fault detection and diagnosis. Probability computation in Markov random fields is inherently expensive, but in this study the graphical lasso is additionally employed to reduce the computational load so that fault detection and diagnosis can be performed efficiently. The contributions of this study are as follows.

    First, a method is presented for modelling the process variables as a Markov random field and obtaining its structure with the graphical lasso. The graphical lasso is an algorithm for identifying the structure of a Markov random field; because it assumes Gaussian relationships between variables, it can identify the graph structure efficiently even in multivariable systems. An iterative graphical lasso is proposed so that all process variables are grouped into highly correlated variable sets. This divides the full set of process variables into a number of smaller groups and identifies a graph structure for each, which has two main benefits: it reduces the number of variables involved in each MRF probability computation, lowering the computational load and enabling efficient fault detection, and the graphs modelled over highly correlated groups make it easier to understand the relationships between process variables and to analyse the propagation path during diagnosis.

    Second, a method is proposed for effective fault detection based on probabilistic inference in the Markov random field. Probabilistic inference is applied separately to each of the variable groups obtained from the iterative graphical lasso, and the proposed method uses kernel density estimation for this purpose. The kernel density bandwidths for each variable are learned from normal data, and when faulty data occur, the monitoring statistic is computed by kernel density estimation using these bandwidths. A process monitoring limit is set for each group assuming a false alarm rate of 5%, and a fault is detected when the monitoring statistic falls below the limit.

    Third, a method is presented for effectively isolating the causal variable of a fault and analysing the fault propagation path. The proposed method uses the probabilistic inference procedure of the Markov random field to compute the conditional marginal probability of each variable when a fault occurs, and from this a newly defined conditional contribution is calculated so that each variable's contribution to the fault can be assessed. Kernel belief propagation, a method that enables probabilistic inference in Markov random fields with continuous variables, is used in this step: the parameters of the Markov random field are learned from normal-operation process data, and the conditional contribution of each variable is then computed for the fault data when a fault occurs. By jointly considering the magnitude of the computed conditional contributions and how quickly each variable's conditional contribution responds after the fault occurs, the root-cause variable can be isolated and the fault propagation path analysed effectively.

    To verify the performance of the proposed detection and diagnosis methodology, it is applied to the Tennessee Eastman process model and the results are analysed. Because the Tennessee Eastman process has been widely used for years as a benchmark for verifying process monitoring methods, applying the proposed methodology to it allows its performance to be compared with that of other monitoring methods; it is also suitable for testing the proposed methodology because it contains multiple process units and recycling variable relationships. The Tennessee Eastman model contains 28 kinds of programmed faults, and applying the proposed detection method yielded a high detection rate of 96% or more for every fault, a figure far higher than those of previously proposed monitoring methods. Analysis of the diagnosis performance showed that the root-cause node could be identified effectively for every fault and that the fault propagation path was traced accurately, distinguishing the method from existing approaches. The application to the Tennessee Eastman process confirms that the proposed work shows the best performance among integrated methodologies for process fault detection and diagnosis.

    Contents:
    Abstract
    Contents
    List of Tables
    List of Figures
    1 Introduction
      1.1 Research Motivation
      1.2 Research Objectives
      1.3 Outline of the Thesis
    2 Markov Random Fields Modelling, Graphical Lasso, and Optimal Structure Learning
      2.1 Introduction
      2.2 Markov Random Fields
      2.3 Graphical Lasso
      2.4 MRF Modelling & Structure Learning
        2.4.1 MRF modelling in process systems
        2.4.2 Structure learning using iterative graphical lasso
      2.5 Application of Iterative Graphical Lasso on the TEP
    3 Efficient Process Monitoring via the Integrated Use of Markov Random Fields Learning and the Graphical Lasso
      3.1 Introduction
      3.2 MRF Monitoring Integrated with Graphical Lasso
        3.2.1 Step 1: Iterative graphical lasso
        3.2.2 Step 2: MRF monitoring
      3.3 Implementation of Glasso-MRF monitoring to the Tennessee Eastman process
        3.3.1 Tennessee Eastman process
        3.3.2 Glasso-MRF monitoring on TEP
        3.3.3 Fault detection accuracy comparison with other monitoring techniques
        3.3.4 Fault detection speed & fault propagation
    4 Process Fault Diagnosis via Markov Random Fields Learning and Inference
      4.1 Introduction
      4.2 Preliminaries
        4.2.1 Probabilistic graphical models & Markov random fields
        4.2.2 Kernel belief propagation
      4.3 Fault Diagnosis via MRF Modeling
        4.3.1 MRF structure learning via graphical lasso
        4.3.2 Kernel belief propagation - bandwidth selection
        4.3.3 Conditional contribution evaluation
      4.4 Application Results & Discussion
        4.4.1 Two tank process
        4.4.2 Tennessee Eastman process
    5 Concluding Remarks
    Bibliography
    Nomenclature
    Abstract (In Korean)
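
    As a rough, hedged sketch of the detection half of the pipeline described above (not the thesis code): a single graphical-lasso pass stands in for the iterative version, variable groups are read off the sparsity pattern of the estimated precision matrix, and each group is monitored with a kernel density estimate whose control limit is the 5th percentile of the normal-data log-density, i.e. a 5% false-alarm rate. The toy data, the one-pass grouping, and all names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)

# Toy "normal operation" data: two nearly independent blocks of three variables each.
n = 500
z1, z2 = rng.normal(size=(n, 1)), rng.normal(size=(n, 1))
X_normal = np.hstack([z1 + 0.5 * rng.normal(size=(n, 3)),
                      z2 + 0.5 * rng.normal(size=(n, 3))])

# Step 1 (stand-in for the iterative graphical lasso): sparse precision matrix,
# then variable groups = connected components of its support.
precision = GraphicalLasso(alpha=0.2).fit(X_normal).precision_
support = csr_matrix((np.abs(precision) > 1e-3).astype(int))
_, labels = connected_components(support, directed=False)
groups = [np.where(labels == g)[0] for g in np.unique(labels)]

# Step 2: per-group KDE monitoring statistic with a 5% false-alarm control limit.
kdes = [gaussian_kde(X_normal[:, g].T) for g in groups]
limits = [np.percentile(kde.logpdf(X_normal[:, g].T), 5) for g, kde in zip(groups, kdes)]

def is_faulty(sample):
    """Flag a fault if any group's log-density falls below its control limit."""
    return any(kde.logpdf(sample[g][:, None])[0] < limit
               for g, kde, limit in zip(groups, kdes, limits))

test_fault = X_normal[0].copy()
test_fault[0] += 6.0                      # inject a large step fault on variable 0
print("groups found:", [list(g) for g in groups])
print("normal sample flagged:", is_faulty(X_normal[1]))
print("faulty sample flagged:", is_faulty(test_fault))
```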

    Probabilistic Graphical Models in RapidMiner

    This report describes the technical background and usage of the GraphMod plug-in for RapidMiner. The plug-in enables RapidMiner to load factor graphs and to interpret the Label and Attributes contained in an Example as assignments to random variables; a set of examples belonging to the same Batch is treated as an assignment to a whole factor graph. New operators allow the estimation of factor weights, the computation of single-node marginal probability functions, and the computation of the most probable assignment for each Labelnode, each with several methods. All algorithms are optimized for parallel execution on common multi-core processors and on NVIDIA CUDA-capable many-core processors (graphics processing units).
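
    Purely to make the plug-in's operators concrete, the toy below enumerates a three-variable factor graph and computes the two quantities the abstract mentions: single-node marginals and the most probable joint assignment. The binary variables, the factor tables, and the brute-force enumeration are illustrative assumptions; the actual GraphMod operators work on RapidMiner Examples and run on multi-core CPUs or CUDA GPUs.

```python
import itertools
import numpy as np

variables = ["A", "B", "C"]                               # three binary random variables
factors = [                                               # each factor: (scope, table over scope)
    (("A", "B"), np.array([[4.0, 1.0], [1.0, 4.0]])),     # A and B tend to agree
    (("B", "C"), np.array([[4.0, 1.0], [1.0, 4.0]])),     # B and C tend to agree
    (("A",), np.array([3.0, 1.0])),                       # unary factor pulling A toward 0
]

def unnormalised(assignment):
    """Product of all factor entries for a full assignment {variable: value}."""
    p = 1.0
    for scope, table in factors:
        p *= table[tuple(assignment[v] for v in scope)]
    return p

assignments = [dict(zip(variables, values))
               for values in itertools.product([0, 1], repeat=len(variables))]
Z = sum(unnormalised(a) for a in assignments)             # partition function

def marginal(var, value):
    """Single-node marginal P(var = value), cf. the plug-in's marginal operator."""
    return sum(unnormalised(a) for a in assignments if a[var] == value) / Z

# Most probable joint assignment, cf. the plug-in's most-probable-assignment operator.
map_assignment = max(assignments, key=unnormalised)

print({v: round(marginal(v, 1), 3) for v in variables})
print("MAP assignment:", map_assignment)
```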

    Introductory Chapter: Prognostics - An Overview

    Prognostics, in general, can be defined as "knowledge beforehand". Prognostics is usually identified with medical issues. Nowadays, due to new advances in technologies and information systems, prognostics is beginning to be employed in other fields, e.g., engineering, finance, business, etc.

    Computing Functions of Random Variables via Reproducing Kernel Hilbert Space Representations

    We describe a method to perform functional operations on probability distributions of random variables. The method uses reproducing kernel Hilbert space representations of probability distributions, and it is applicable to all operations which can be applied to points drawn from the respective distributions. We refer to our approach as kernel probabilistic programming. We illustrate it on synthetic data, and show how it can be used for nonparametric structural equation models, with an application to causal inference.
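
    The following hedged sketch illustrates the core idea as stated in the abstract: when a distribution is represented by samples (equivalently, by an empirical kernel mean embedding), a function of the random variables is computed simply by applying it to the samples, and the result is again a sample-based RKHS representation. The particular function f(x, y) = x*y + x, the input distributions, and the MMD sanity check are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(a, b, sigma=1.0):
    """Gaussian kernel matrix between 1-D sample vectors a and b."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

def mmd2(a, b, sigma=1.0):
    """Biased estimate of squared MMD between two 1-D sample sets."""
    return rbf(a, a, sigma).mean() + rbf(b, b, sigma).mean() - 2 * rbf(a, b, sigma).mean()

# Input distributions, represented purely by paired samples.
x = rng.normal(loc=1.0, scale=0.5, size=1000)
y = rng.exponential(scale=2.0, size=1000)

# "Kernel probabilistic programming" step: apply f(x, y) = x * y + x directly to the
# samples; the resulting samples (equivalently, the mean embedding
# mu = (1/n) * sum_i k(f(x_i, y_i), .)) represent the distribution of f(X, Y).
z = x * y + x

# Sanity check: fresh independent draws of f(X, Y) should be close in MMD,
# while the original input x should not.
x_new = rng.normal(loc=1.0, scale=0.5, size=1000)
y_new = rng.exponential(scale=2.0, size=1000)
z_fresh = x_new * y_new + x_new
print("squared MMD to fresh draws of f(X, Y):", round(mmd2(z, z_fresh), 4))
print("squared MMD to the input samples x:  ", round(mmd2(z, x), 4))
```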

    Kernel-Based Just-In-Time Learning for Passing Expectation Propagation Messages

    We propose an efficient nonparametric strategy for learning a message operator in expectation propagation (EP), which takes as input the set of incoming messages to a factor node and produces an outgoing message as output. This learned operator replaces the multivariate integral required in classical EP, which may not have an analytic expression. We use kernel-based regression, which is trained on a set of probability distributions representing the incoming messages, and the associated outgoing messages. The kernel approach has two main advantages: first, it is fast, as it is implemented using a novel two-layer random feature representation of the input message distributions; second, it has principled uncertainty estimates, and can be cheaply updated online, meaning it can request and incorporate new training data when it encounters inputs on which it is uncertain. In experiments, our approach is able to solve learning problems where a single message operator is required for multiple, substantially different data sets (logistic regression for a variety of classification problems), where it is essential to accurately assess uncertainty and to efficiently and robustly update the message operator. Comment: accepted to UAI 2015. Corrected typos. Added more content to the appendix. Main results unchanged.
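
    A hedged, much-simplified sketch of the two ingredients highlighted above (not the paper's implementation): incoming messages are summarised by a low-dimensional parameter vector, mapped through random Fourier features, and regressed with a Bayesian linear model; the model's predictive variance decides when to fall back to the expensive exact computation and absorb that example as new training data. The 2-D message summary, the stand-in "oracle", the threshold, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
D, noise_var, prior_var = 200, 0.01, 1.0            # feature count, noise and prior variances

W = rng.normal(size=(D, 2))                         # random Fourier frequencies
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def features(msg_params):
    """Random Fourier features of incoming-message summaries (rows: mean, log-variance)."""
    return np.sqrt(2.0 / D) * np.cos(msg_params @ W.T + b)

def oracle(msg_params):
    """Stand-in for the expensive exact outgoing-message computation (an assumption)."""
    m, lv = msg_params[:, 0], msg_params[:, 1]
    return np.tanh(m) * np.exp(0.3 * lv)

def fit(Phi, y):
    """Bayesian ridge posterior (mean, covariance) over the feature weights."""
    A = Phi.T @ Phi / noise_var + np.eye(D) / prior_var
    cov = np.linalg.inv(A)
    return cov @ Phi.T @ y / noise_var, cov

# Initial training set: incoming-message summaries and their exact outgoing messages.
X = rng.normal(size=(50, 2))
Phi, y = features(X), oracle(X)
mean_w, cov_w = fit(Phi, y)

def predict(msg_params, threshold=0.1):
    """Predict the outgoing message; if too uncertain, consult the oracle and update."""
    global Phi, y, mean_w, cov_w
    phi = features(msg_params[None, :])[0]
    var = noise_var + phi @ cov_w @ phi             # predictive variance
    if var > threshold:                             # uncertain input -> just-in-time update
        target = float(oracle(msg_params[None, :])[0])
        Phi, y = np.vstack([Phi, phi]), np.append(y, target)
        mean_w, cov_w = fit(Phi, y)
        return target, var, "oracle"
    return float(phi @ mean_w), var, "regression"

print(predict(np.array([0.2, -0.1])))               # close to the training data -> fast path likely
print(predict(np.array([6.0, 3.0])))                # far from training data -> query the oracle
```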

    Nonparametric Detection of Geometric Structures over Networks

    Nonparametric detection of the existence of an anomalous structure over a network is investigated. Nodes corresponding to the anomalous structure (if one exists) receive samples generated by a distribution q, which is different from the distribution p generating samples for the other nodes. If an anomalous structure does not exist, all nodes receive samples generated by p. It is assumed that the distributions p and q are arbitrary and unknown. The goal is to design statistically consistent tests whose probability of error converges to zero as the network size becomes asymptotically large. Kernel-based tests are proposed based on the maximum mean discrepancy, which measures the distance between mean embeddings of distributions into a reproducing kernel Hilbert space. Detection of an anomalous interval over a line network is studied first. Sufficient conditions on the minimum and maximum sizes of candidate anomalous intervals are characterized in order to guarantee that the proposed test is consistent. It is also shown that certain necessary conditions must hold to guarantee that any test is universally consistent. Comparison of the sufficient and necessary conditions shows that the proposed test is order-level optimal and nearly optimal, respectively, in terms of the minimum and maximum sizes of candidate anomalous intervals. Generalization of the results to other networks is further developed. Numerical results are provided to demonstrate the performance of the proposed tests. Comment: Submitted for journal publication in November 2015. arXiv admin note: text overlap with arXiv:1404.029
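
    As a minimal, hedged illustration of the kernel test (not the exact statistic analysed in the paper): each node on a line holds one sample, candidate intervals are scanned, the samples inside an interval are compared with those outside via a biased MMD estimate, and the detection threshold is calibrated by simulating the null. The interval sizes, the choices of p and q, and the threshold scheme are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(a, b, sigma=1.0):
    """Gaussian kernel matrix between 1-D sample vectors a and b."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

def mmd2(a, b):
    """Biased estimate of the squared maximum mean discrepancy."""
    return rbf(a, a).mean() + rbf(b, b).mean() - 2 * rbf(a, b).mean()

def scan_statistic(samples, min_len=5, max_len=20):
    """Max MMD^2 over all candidate intervals of the line network."""
    n, best = len(samples), 0.0
    for length in range(min_len, max_len + 1):
        for start in range(n - length + 1):
            inside = samples[start:start + length]
            outside = np.concatenate([samples[:start], samples[start + length:]])
            best = max(best, mmd2(inside, outside))
    return best

# Line network of 80 nodes; p = N(0, 1), and an anomalous interval of 12 nodes
# receives samples from q = N(2, 1).
n_nodes = 80
samples = rng.normal(size=n_nodes)
samples[30:42] += 2.0

# Threshold calibrated from simulations of the null, where every node draws from p.
null_stats = [scan_statistic(rng.normal(size=n_nodes)) for _ in range(10)]
threshold = np.quantile(null_stats, 0.95)

stat = scan_statistic(samples)
print(f"scan statistic {stat:.3f}, threshold {threshold:.3f}, anomaly detected: {stat > threshold}")
```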