4,150 research outputs found

    A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning

    We present a tutorial on Bayesian optimization, a method for finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to obtain a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must account for both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments: active user modeling with preferences, and hierarchical reinforcement learning. We conclude with a discussion of the pros and cons of Bayesian optimization based on our experiences.
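    The prior-plus-evidence loop the abstract describes is small enough to sketch directly. Below is a minimal Python sketch, assuming a one-dimensional maximization problem, a Gaussian-process surrogate with an RBF kernel, and the expected-improvement acquisition function; the toy objective, kernel length scale, and candidate grid are illustrative assumptions, not the tutorial's setup.

```python
# Minimal Bayesian optimization sketch: GP surrogate + expected improvement.
import numpy as np
from scipy.stats import norm

def f(x):
    # Hypothetical "expensive" objective (cheap here, for demonstration only).
    return np.sin(10 * x) * x

def rbf(a, b, length=0.1):
    # Squared-exponential kernel between 1-D point sets a and b.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Posterior mean/std at test points Xs given observations (X, y):
    # the "prior + evidence -> posterior" step described in the abstract.
    K_inv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))
    Ks = rbf(X, Xs)
    mu = Ks.T @ K_inv @ y
    var = np.diag(rbf(Xs, Xs) - Ks.T @ K_inv @ Ks)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI balances exploitation (mu - best) against exploration (sigma).
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 3)       # small initial design
y = f(X)
Xs = np.linspace(0, 1, 200)    # candidate grid for the next observation

for _ in range(10):            # BO loop: fit posterior, pick argmax-EI point, observe
    mu, sigma = gp_posterior(X, y, Xs)
    x_next = Xs[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))

print(f"best x = {X[np.argmax(y)]:.3f}, best f(x) = {y.max():.3f}")
```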

    ๋งค๊ฐœ๋ถ„ํฌ๊ทผ์‚ฌ๋ฅผ ํ†ตํ•œ ๊ณต์ •์‹œ์Šคํ…œ ๊ณตํ•™์—์„œ์˜ ํ™•๋ฅ ๊ธฐ๊ณ„ํ•™์Šต ์ ‘๊ทผ๋ฒ•

    ํ•™์œ„๋…ผ๋ฌธ(๋ฐ•์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ํ™”ํ•™์ƒ๋ฌผ๊ณตํ•™๋ถ€, 2021.8. ์ด์ข…๋ฏผ.With the rapid development of measurement technology, higher quality and vast amounts of process data become available. Nevertheless, process data are โ€˜scarceโ€™ in many cases as they are sampled only at certain operating conditions while the dimensionality of the system is large. Furthermore, the process data are inherently stochastic due to the internal characteristics of the system or the measurement noises. For this reason, uncertainty is inevitable in process systems, and estimating it becomes a crucial part of engineering tasks as the prediction errors can lead to misguided decisions and cause severe casualties or economic losses. A popular approach to this is applying probabilistic inference techniques that can model the uncertainty in terms of probability. However, most of the existing probabilistic inference techniques are based on recursive sampling, which makes it difficult to use them for industrial applications that require processing a high-dimensional and massive amount of data. To address such an issue, this thesis proposes probabilistic machine learning approaches based on parametric distribution approximation, which can model the uncertainty of the system and circumvent the computational complexity as well. The proposed approach is applied for three major process engineering tasks: process monitoring, system modeling, and process design. First, a process monitoring framework is proposed that utilizes a probabilistic classifier for fault classification. To enhance the accuracy of the classifier and reduce the computational cost for its training, a feature extraction method called probabilistic manifold learning is developed and applied to the process data ahead of the fault classification. We demonstrate that this manifold approximation process not only reduces the dimensionality of the data but also casts the data into a clustered structure, making the classifier have a low dependency on the type and dimension of the data. By exploiting this property, non-metric information (e.g., fault labels) of the data is effectively incorporated and the diagnosis performance is drastically improved. Second, a probabilistic modeling approach based on Bayesian neural networks is proposed. The parameters of deep neural networks are transformed into Gaussian distributions and trained using variational inference. The redundancy of the parameter is autonomously inferred during the model training, and insignificant parameters are eliminated a posteriori. Through a verification study, we demonstrate that the proposed approach can not only produce high-fidelity models that describe the stochastic behaviors of the system but also produce the optimal model structure. Finally, a novel process design framework is proposed based on reinforcement learning. Unlike the conventional optimization methods that recursively evaluate the objective function to find an optimal value, the proposed method approximates the objective function surface by parametric probabilistic distributions. This allows learning the continuous action policy without introducing any cumbersome discretization process. Moreover, the probabilistic policy gives means for effective control of the exploration and exploitation rates according to the certainty information. 
We demonstrate that the proposed framework can learn process design heuristics during the solution process and use them to solve similar design problems.๊ณ„์ธก๊ธฐ์ˆ ์˜ ๋ฐœ๋‹ฌ๋กœ ์–‘์งˆ์˜, ๊ทธ๋ฆฌ๊ณ  ๋ฐฉ๋Œ€ํ•œ ์–‘์˜ ๊ณต์ • ๋ฐ์ดํ„ฐ์˜ ์ทจ๋“์ด ๊ฐ€๋Šฅํ•ด์กŒ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋งŽ์€ ๊ฒฝ์šฐ ์‹œ์Šคํ…œ ์ฐจ์›์˜ ํฌ๊ธฐ์— ๋น„ํ•ด์„œ ์ผ๋ถ€ ์šด์ „์กฐ๊ฑด์˜ ๊ณต์ • ๋ฐ์ดํ„ฐ๋งŒ์ด ์ทจ๋“๋˜๊ธฐ ๋•Œ๋ฌธ์—, ๊ณต์ • ๋ฐ์ดํ„ฐ๋Š” โ€˜ํฌ์†Œโ€™ํ•˜๊ฒŒ ๋œ๋‹ค. ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ, ๊ณต์ • ๋ฐ์ดํ„ฐ๋Š” ์‹œ์Šคํ…œ ๊ฑฐ๋™ ์ž์ฒด์™€ ๋”๋ถˆ์–ด ๊ณ„์ธก์—์„œ ๋ฐœ์ƒํ•˜๋Š” ๋…ธ์ด์ฆˆ๋กœ ์ธํ•œ ๋ณธ์งˆ์ ์ธ ํ™•๋ฅ ์  ๊ฑฐ๋™์„ ๋ณด์ธ๋‹ค. ๋”ฐ๋ผ์„œ ์‹œ์Šคํ…œ์˜ ์˜ˆ์ธก๋ชจ๋ธ์€ ์˜ˆ์ธก ๊ฐ’์— ๋Œ€ํ•œ ๋ถˆํ™•์‹ค์„ฑ์„ ์ •๋Ÿ‰์ ์œผ๋กœ ๊ธฐ์ˆ ํ•˜๋Š” ๊ฒƒ์ด ์š”๊ตฌ๋˜๋ฉฐ, ์ด๋ฅผ ํ†ตํ•ด ์˜ค์ง„์„ ์˜ˆ๋ฐฉํ•˜๊ณ  ์ž ์žฌ์  ์ธ๋ช… ํ”ผํ•ด์™€ ๊ฒฝ์ œ์  ์†์‹ค์„ ๋ฐฉ์ง€ํ•  ์ˆ˜ ์žˆ๋‹ค. ์ด์— ๋Œ€ํ•œ ๋ณดํŽธ์ ์ธ ์ ‘๊ทผ๋ฒ•์€ ํ™•๋ฅ ์ถ”์ •๊ธฐ๋ฒ•์„ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋Ÿฌํ•œ ๋ถˆํ™•์‹ค์„ฑ์„ ์ •๋Ÿ‰ํ™” ํ•˜๋Š” ๊ฒƒ์ด๋‚˜, ํ˜„์กดํ•˜๋Š” ์ถ”์ •๊ธฐ๋ฒ•๋“ค์€ ์žฌ๊ท€์  ์ƒ˜ํ”Œ๋ง์— ์˜์กดํ•˜๋Š” ํŠน์„ฑ์ƒ ๊ณ ์ฐจ์›์ด๋ฉด์„œ๋„ ๋‹ค๋Ÿ‰์ธ ๊ณต์ •๋ฐ์ดํ„ฐ์— ์ ์šฉํ•˜๊ธฐ ์–ด๋ ต๋‹ค๋Š” ๊ทผ๋ณธ์ ์ธ ํ•œ๊ณ„๋ฅผ ๊ฐ€์ง„๋‹ค. ๋ณธ ํ•™์œ„๋…ผ๋ฌธ์—์„œ๋Š” ๋งค๊ฐœ๋ถ„ํฌ๊ทผ์‚ฌ์— ๊ธฐ๋ฐ˜ํ•œ ํ™•๋ฅ ๊ธฐ๊ณ„ํ•™์Šต์„ ์ ์šฉํ•˜์—ฌ ์‹œ์Šคํ…œ์— ๋‚ด์žฌ๋œ ๋ถˆํ™•์‹ค์„ฑ์„ ๋ชจ๋ธ๋งํ•˜๋ฉด์„œ๋„ ๋™์‹œ์— ๊ณ„์‚ฐ ํšจ์œจ์ ์ธ ์ ‘๊ทผ ๋ฐฉ๋ฒ•์„ ์ œ์•ˆํ•˜์˜€๋‹ค. ๋จผ์ €, ๊ณต์ •์˜ ๋ชจ๋‹ˆํ„ฐ๋ง์— ์žˆ์–ด ๊ฐ€์šฐ์‹œ์•ˆ ํ˜ผํ•ฉ ๋ชจ๋ธ (Gaussian mixture model)์„ ๋ถ„๋ฅ˜์ž๋กœ ์‚ฌ์šฉํ•˜๋Š” ํ™•๋ฅ ์  ๊ฒฐํ•จ ๋ถ„๋ฅ˜ ํ”„๋ ˆ์ž„์›Œํฌ๊ฐ€ ์ œ์•ˆ๋˜์—ˆ๋‹ค. ์ด๋•Œ ๋ถ„๋ฅ˜์ž์˜ ํ•™์Šต์—์„œ์˜ ๊ณ„์‚ฐ ๋ณต์žก๋„๋ฅผ ์ค„์ด๊ธฐ ์œ„ํ•˜์—ฌ ๋ฐ์ดํ„ฐ๋ฅผ ์ €์ฐจ์›์œผ๋กœ ํˆฌ์˜์‹œํ‚ค๋Š”๋ฐ, ์ด๋ฅผ ์œ„ํ•œ ํ™•๋ฅ ์  ๋‹ค์–‘์ฒด ํ•™์Šต (probabilistic manifold learn-ing) ๋ฐฉ๋ฒ•์ด ์ œ์•ˆ๋˜์—ˆ๋‹ค. ์ œ์•ˆํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ๋ฐ์ดํ„ฐ์˜ ๋‹ค์–‘์ฒด (manifold)๋ฅผ ๊ทผ์‚ฌํ•˜์—ฌ ๋ฐ์ดํ„ฐ ํฌ์ธํŠธ ์‚ฌ์ด์˜ ์Œ๋ณ„ ์šฐ๋„ (pairwise likelihood)๋ฅผ ๋ณด์กดํ•˜๋Š” ํˆฌ์˜๋ฒ•์ด ์‚ฌ์šฉ๋œ๋‹ค. ์ด๋ฅผ ํ†ตํ•˜์—ฌ ๋ฐ์ดํ„ฐ์˜ ์ข…๋ฅ˜์™€ ์ฐจ์›์— ์˜์กด๋„๊ฐ€ ๋‚ฎ์€ ์ง„๋‹จ ๊ฒฐ๊ณผ๋ฅผ ์–ป์Œ๊ณผ ๋™์‹œ์— ๋ฐ์ดํ„ฐ ๋ ˆ์ด๋ธ”๊ณผ ๊ฐ™์€ ๋น„๊ฑฐ๋ฆฌ์  (non-metric) ์ •๋ณด๋ฅผ ํšจ์œจ์ ์œผ๋กœ ์‚ฌ์šฉํ•˜์—ฌ ๊ฒฐํ•จ ์ง„๋‹จ ๋Šฅ๋ ฅ์„ ํ–ฅ์ƒ์‹œํ‚ฌ ์ˆ˜ ์žˆ์Œ์„ ๋ณด์˜€๋‹ค. ๋‘˜์งธ๋กœ, ๋ฒ ์ด์ง€์•ˆ ์‹ฌ์ธต ์‹ ๊ฒฝ๋ง(Bayesian deep neural networks)์„ ์‚ฌ์šฉํ•œ ๊ณต์ •์˜ ํ™•๋ฅ ์  ๋ชจ๋ธ๋ง ๋ฐฉ๋ฒ•๋ก ์ด ์ œ์‹œ๋˜์—ˆ๋‹ค. ์‹ ๊ฒฝ๋ง์˜ ๊ฐ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๊ฐ€์šฐ์Šค ๋ถ„ํฌ๋กœ ์น˜ํ™˜๋˜๋ฉฐ, ๋ณ€๋ถ„์ถ”๋ก  (variational inference)์„ ํ†ตํ•˜์—ฌ ๊ณ„์‚ฐ ํšจ์œจ์ ์ธ ํ›ˆ๋ จ์ด ์ง„ํ–‰๋œ๋‹ค. ํ›ˆ๋ จ์ด ๋๋‚œ ํ›„ ํŒŒ๋ผ๋ฏธํ„ฐ์˜ ์œ ํšจ์„ฑ์„ ์ธก์ •ํ•˜์—ฌ ๋ถˆํ•„์š”ํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์†Œ๊ฑฐํ•˜๋Š” ์‚ฌํ›„ ๋ชจ๋ธ ์••์ถ• ๋ฐฉ๋ฒ•์ด ์‚ฌ์šฉ๋˜์—ˆ๋‹ค. ๋ฐ˜๋„์ฒด ๊ณต์ •์— ๋Œ€ํ•œ ์‚ฌ๋ก€ ์—ฐ๊ตฌ๋Š” ์ œ์•ˆํ•˜๋Š” ๋ฐฉ๋ฒ•์ด ๊ณต์ •์˜ ๋ณต์žกํ•œ ๊ฑฐ๋™์„ ํšจ๊ณผ์ ์œผ๋กœ ๋ชจ๋ธ๋ง ํ•  ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ๋ชจ๋ธ์˜ ์ตœ์  ๊ตฌ์กฐ๋ฅผ ๋„์ถœํ•  ์ˆ˜ ์žˆ์Œ์„ ๋ณด์—ฌ์ค€๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ, ๋ถ„ํฌํ˜• ์‹ฌ์ธต ์‹ ๊ฒฝ๋ง์„ ์‚ฌ์šฉํ•œ ๊ฐ•ํ™”ํ•™์Šต์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ ํ™•๋ฅ ์  ๊ณต์ • ์„ค๊ณ„ ํ”„๋ ˆ์ž„์›Œํฌ๊ฐ€ ์ œ์•ˆ๋˜์—ˆ๋‹ค. ์ตœ์ ์น˜๋ฅผ ์ฐพ๊ธฐ ์œ„ํ•ด ์žฌ๊ท€์ ์œผ๋กœ ๋ชฉ์  ํ•จ์ˆ˜ ๊ฐ’์„ ํ‰๊ฐ€ํ•˜๋Š” ๊ธฐ์กด์˜ ์ตœ์ ํ™” ๋ฐฉ๋ฒ•๋ก ๊ณผ ๋‹ฌ๋ฆฌ, ๋ชฉ์  ํ•จ์ˆ˜ ๊ณก๋ฉด (objective function surface)์„ ๋งค๊ฐœํ™” ๋œ ํ™•๋ฅ ๋ถ„ํฌ๋กœ ๊ทผ์‚ฌํ•˜๋Š” ์ ‘๊ทผ๋ฒ•์ด ์ œ์‹œ๋˜์—ˆ๋‹ค. ์ด๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์ด์‚ฐํ™” (discretization)๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๊ณ  ์—ฐ์†์  ํ–‰๋™ ์ •์ฑ…์„ ํ•™์Šตํ•˜๋ฉฐ, ํ™•์‹ค์„ฑ (certainty)์— ๊ธฐ๋ฐ˜ํ•œ ํƒ์ƒ‰ (exploration) ๋ฐ ํ™œ์šฉ (exploi-tation) ๋น„์œจ์˜ ์ œ์–ด๊ฐ€ ํšจ์œจ์ ์œผ๋กœ ์ด๋ฃจ์–ด์ง„๋‹ค. 
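    For the monitoring framework (Chapter 3), a rough Python sketch of the two-stage pipeline follows: embed high-dimensional process data on a low-dimensional manifold, then classify faults with class-conditional Gaussian mixtures, the classifier named in the Korean abstract. The thesis develops its own pairwise-likelihood-preserving probabilistic manifold learning; scikit-learn's Isomap is used here only as a generic stand-in, and the dataset is synthetic.

```python
# Rough sketch of the two-stage monitoring pipeline: manifold embedding, then
# class-conditional Gaussian mixture classification. Isomap and the synthetic
# dataset are stand-in assumptions, not the thesis's method or data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

# Synthetic stand-in for process data: 50 measured variables, 3 fault classes.
X, y = make_classification(n_samples=600, n_features=50, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

# Stage 1: embed the high-dimensional data on a 2-D manifold (computed on all
# samples here for simplicity; a deployed system would map queries separately).
Z = Isomap(n_components=2).fit_transform(X)
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, random_state=0)

# Stage 2: fit one Gaussian mixture per fault class in the embedded space and
# classify each query by the class with the highest log-likelihood.
gmms = {c: GaussianMixture(n_components=2, random_state=0).fit(Z_tr[y_tr == c])
        for c in np.unique(y_tr)}
scores = np.column_stack([gmms[c].score_samples(Z_te) for c in sorted(gmms)])
y_pred = np.argmax(scores, axis=1)

print(f"fault classification accuracy: {np.mean(y_pred == y_te):.2f}")
```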
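    For the modeling approach (Chapter 4), the following sketch shows the core mechanics on a toy problem: each weight carries a Gaussian posterior trained by variational inference (Bayes-by-backprop style), and weights with low posterior signal-to-noise ratio are pruned after training. The thesis applies this to Bayesian LSTMs on semiconductor plasma data; the tiny feed-forward regressor, standard-normal prior, and SNR threshold below are assumptions made to keep the sketch self-contained.

```python
# Toy sketch of variational Bayesian weights: each weight has a Gaussian
# posterior N(mu, sigma^2), trained by sampling with the reparameterization
# trick and penalizing KL to a standard-normal prior (an assumed prior).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(0.1 * torch.randn(d_out, d_in))
        self.rho = nn.Parameter(torch.full((d_out, d_in), -4.0))  # sigma = softplus(rho)

    def forward(self, x):
        sigma = F.softplus(self.rho)
        w = self.mu + sigma * torch.randn_like(sigma)  # one posterior sample
        return x @ w.t()

    def kl(self):
        # KL( N(mu, sigma^2) || N(0, 1) ), summed over all weights.
        sigma = F.softplus(self.rho)
        return (0.5 * (sigma ** 2 + self.mu ** 2 - 1) - torch.log(sigma)).sum()

torch.manual_seed(0)
x = torch.linspace(-1, 1, 128).unsqueeze(1)        # toy inputs
y = torch.sin(3 * x) + 0.1 * torch.randn_like(x)   # noisy toy targets

hidden, head = BayesLinear(1, 16), BayesLinear(16, 1)
opt = torch.optim.Adam(list(hidden.parameters()) + list(head.parameters()), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    pred = head(torch.tanh(hidden(x)))
    loss = F.mse_loss(pred, y) + (hidden.kl() + head.kl()) / len(x)  # ELBO-style
    loss.backward()
    opt.step()

# Post-hoc compression: weights with low posterior signal-to-noise are redundant.
snr = hidden.mu.abs() / F.softplus(hidden.rho)
print(f"prunable hidden weights (SNR < 1): {(snr < 1).sum().item()} of {snr.numel()}")
```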
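    For the design framework (Chapter 5), this last sketch illustrates only the central idea of a parametric probabilistic policy: a Gaussian over a continuous design variable whose learned variance governs the exploration/exploitation balance. The thesis uses distributional actor-critic networks with neural MCTS over flowsheets; the one-step REINFORCE update and synthetic objective here are simplifying assumptions.

```python
# Toy sketch of a parametric probabilistic policy for a continuous design action.
import torch

torch.manual_seed(0)

def objective(a):
    # Hypothetical black-box design objective with its optimum at a = 0.7.
    return -(a - 0.7) ** 2

mu = torch.zeros(1, requires_grad=True)         # policy mean
log_sigma = torch.zeros(1, requires_grad=True)  # log of policy std
opt = torch.optim.Adam([mu, log_sigma], lr=5e-2)

baseline = 0.0
for _ in range(500):
    policy = torch.distributions.Normal(mu, log_sigma.exp())
    a = policy.sample()                       # explore by sampling a design
    reward = objective(a).item()
    baseline = 0.9 * baseline + 0.1 * reward  # moving-average variance reducer
    # REINFORCE / score-function gradient with baseline.
    loss = (-policy.log_prob(a) * (reward - baseline)).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# As the policy grows certain about the optimum, sigma shrinks and sampling
# concentrates: exploitation gradually replaces exploration.
print(f"learned design: mu = {mu.item():.3f}, sigma = {log_sigma.exp().item():.3f}")
```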
์‚ฌ๋ก€ ์—ฐ๊ตฌ ๊ฒฐ๊ณผ๋Š” ๊ณต์ •์˜ ์„ค๊ณ„์— ๋Œ€ํ•œ ๊ฒฝํ—˜์ง€์‹ (heuristic)์„ ํ•™์Šตํ•˜๊ณ  ์œ ์‚ฌํ•œ ์„ค๊ณ„ ๋ฌธ์ œ์˜ ํ•ด๋ฅผ ๊ตฌํ•˜๋Š” ๋ฐ ์ด์šฉํ•  ์ˆ˜ ์žˆ์Œ์„ ๋ณด์—ฌ์ค€๋‹ค.Chapter 1 Introduction 1 1.1. Motivation 1 1.2. Outline of the thesis 5 Chapter 2 Backgrounds and preliminaries 9 2.1. Bayesian inference 9 2.2. Monte Carlo 10 2.3. Kullback-Leibler divergence 11 2.4. Variational inference 12 2.5. Riemannian manifold 13 2.6. Finite extended-pseudo-metric space 16 2.7. Reinforcement learning 16 2.8. Directed graph 19 Chapter 3 Process monitoring and fault classification with probabilistic manifold learning 20 3.1. Introduction 20 3.2. Methods 25 3.2.1. Uniform manifold approximation 27 3.2.2. Clusterization 28 3.2.3. Projection 31 3.2.4. Mapping of unknown data query 32 3.2.5. Inference 33 3.3. Verification study 38 3.3.1. Dataset description 38 3.3.2. Experimental setup 40 3.3.3. Process monitoring 43 3.3.4. Projection characteristics 47 3.3.5. Fault diagnosis 50 3.3.6. Computational Aspects 56 Chapter 4 Process system modeling with Bayesian neural networks 59 4.1. Introduction 59 4.2. Methods 63 4.2.1. Long Short-Term Memory (LSTM) 63 4.2.2. Bayesian LSTM (BLSTM) 66 4.3. Verification study 68 4.3.1. System description 68 4.3.2. Estimation of the plasma variables 71 4.3.3. Dataset description 72 4.3.4. Experimental setup 72 4.3.5. Weight regularization during training 78 4.3.6. Modeling complex behaviors of the system 80 4.3.7. Uncertainty quantification and model compression 85 Chapter 5 Process design based on reinforcement learning with distributional actor-critic networks 89 5.1. Introduction 89 5.2. Methods 93 5.2.1. Flowsheet hashing 93 5.2.2. Behavioral cloning 99 5.2.3. Neural Monte Carlo tree search (N-MCTS) 100 5.2.4. Distributional actor-critic networks (DACN) 105 5.2.5. Action masking 110 5.3. Verification study 110 5.3.1. System description 110 5.3.2. Experimental setup 111 5.3.3. Result and discussions 115 Chapter 6 Concluding remarks 120 6.1. Summary of the contributions 120 6.2. Future works 122 Appendix 125 A.1. Proof of Lemma 1 125 A.2. Performance indices for dimension reduction 127 A.3. Model equations for process units 130 Bibliography 132 ์ดˆ ๋ก 149๋ฐ•
    • โ€ฆ