3,346 research outputs found

    A Magnetorheological Damper with Embedded Piezoelectric Force Sensor: Experiment and Modeling

    This chapter first describes the configuration, fabrication, calibration, and performance tests of the devised self-sensing MR damper. It then presents a black-box identification approach for modeling the forward and inverse dynamics of the self-sensing MR damper, developed by combining a NARX model with a neural network within a Bayesian inference framework to enhance generalization.
    Department of Civil and Environmental Engineering
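    The abstract describes identifying the damper's forward and inverse dynamics with a NARX model realized by a neural network. The sketch below illustrates only the general NARX idea (one-step-ahead regression on lagged exogenous inputs and past outputs); the lag orders, network size, variable names, and the Bayesian training procedure of the original work are assumptions and are not reproduced here.

```python
# Minimal NARX-style sketch: predict the current output from lagged inputs
# and lagged outputs with a small neural network. Lags, sizes, and the
# random toy data are illustrative assumptions only.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_narx_features(u, y, n_u=3, n_y=3):
    """Stack lagged exogenous inputs u and past outputs y into a regressor matrix."""
    start = max(n_u, n_y)
    X, target = [], []
    for t in range(start, len(y)):
        X.append(np.concatenate([u[t - n_u:t].ravel(), y[t - n_y:t]]))
        target.append(y[t])
    return np.asarray(X), np.asarray(target)

# u: per-step exogenous inputs (e.g. displacement, velocity, coil command),
# y: measured damper force. Random data stands in for experimental records.
rng = np.random.default_rng(0)
u = rng.standard_normal((2000, 3))
y = rng.standard_normal(2000)

X, t = make_narx_features(u, y)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X, t)  # one-step-ahead prediction of the force signal
```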

    Enhancing VAEs for Collaborative Filtering: Flexible Priors & Gating Mechanisms

    Master's thesis, Seoul National University, Graduate School of Convergence Science and Technology, Department of Transdisciplinary Studies (Digital Information Convergence), August 2019. Bongwon Suh.
    While Matrix Factorization based linear models long dominated Collaborative Filtering, Neural Network based CF models for recommendation have recently started to gain attention. One branch of research uses deep generative models to model user preferences, and Variational Autoencoders were shown to give state-of-the-art results. However, the current Variational Autoencoder for CF has some potentially problematic characteristics. The first is the overly simplistic prior that VAEs incorporate for learning the latent representations of user preference, which may restrict the model from learning more expressive and richer latent variables that could boost recommendation performance. The other is the model's inability to learn deeper representations with more than one hidden layer. Our goal is to incorporate appropriate techniques to mitigate these problems and further improve the recommendation performance of VAE based Collaborative Filtering. We bring in the VampPrior, which successfully improved image generation, to tackle the restrictive prior problem. We also adopt Gated Linear Units (GLUs), which were used in stacked convolutions for language modeling, to control information flow in the easily deepened autoencoder framework. We show that the simple priors of original VAEs may be too restrictive to fully model user preferences and that setting a more flexible prior gives significant gains. We also show that the VampPrior coupled with gating mechanisms outperforms SOTA results, including the Variational Autoencoder for Collaborative Filtering, by meaningful margins on four benchmark datasets (MovieLens, Netflix, Pinterest, Melon).
    Korean abstract: Neural-network-based collaborative filtering recommendation algorithms have recently been attracting attention. One branch of this research models user preferences with deep generative models, and among these, methods using the Variational Autoencoder (VAE) have recently shown state-of-the-art (SOTA) performance. However, VAE-based collaborative filtering algorithms currently have several potential problems. The first is that a very simple prior distribution is used when learning the latent variables that compress user preferences. The other is that the model cannot currently use deep encoders and decoders with multiple layers. This study aims to resolve these problems with recent techniques and to further raise the recommendation performance of VAE-based collaborative filtering. As the first study to apply a more flexible prior to the collaborative filtering problem, we show that the existing simple prior can limit the model's expressiveness and that defining a more flexible prior can further improve performance. To this end, we experiment with the VampPrior, which has shown good results in image generation. We also show, on representative benchmark datasets used for recommendation, that using the VampPrior together with a gating mechanism yields performance exceeding the existing SOTA.
    Table of contents: 1 Introduction (1.1 Background and Motivation; 1.2 Research Goal; 1.3 Enhancing VAEs for Collaborative Filtering; 1.4 Experiments; 1.5 Contributions); 2 Related Work (2.1 Collaborative Filtering: 2.1.1 Traditional methods & Matrix-Factorization based CF, 2.1.2 Autoencoders for CF; 2.2 Deep Generative Models (VAE): 2.2.1 Variational Bayes, 2.2.2 Variational Autoencoder; 2.3 Variational Autoencoder for Collaborative Filtering: 2.3.1 VAE for CF; 2.4 Recent research in Computer Vision & Deep Learning: 2.4.1 VampPrior, 2.4.2 Gated CNN); 3 Method (3.1 Flexible Prior: 3.1.1 Motivation, 3.1.2 VampPrior, 3.1.3 Hierarchical Stochastic Units; 3.2 Gating Mechanism: 3.2.1 Motivation, 3.2.2 Gated Linear Units); 4 Experiment (4.1 Setup: 4.1.1 Baseline Models, 4.1.2 Proposed Models, 4.1.3 Strong Generalization, 4.1.4 Evaluation Metrics; 4.2 Datasets; 4.3 Configurations; 4.4 Results: 4.4.1 Model Performance, 4.4.5 Further Analysis on the Effect of Gating); 5 Conclusion; Bibliography; Korean abstract.
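    The two components named in the abstract, Gated Linear Units and the VampPrior, can be sketched as follows. This is a minimal illustration under assumed names and sizes (GatedLinear, Encoder, VampPrior, the number of pseudo-inputs, the latent dimension); it is not the thesis's actual architecture or training objective.

```python
# Hedged sketch: a GLU layer gating information flow, and a VampPrior
# defined as a mixture of variational posteriors at learned pseudo-inputs.
import math
import torch
import torch.nn as nn

class GatedLinear(nn.Module):
    """GLU layer: h = (xW + b) * sigmoid(xV + c)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.fc = nn.Linear(d_in, 2 * d_out)

    def forward(self, x):
        a, g = self.fc(x).chunk(2, dim=-1)
        return a * torch.sigmoid(g)

class Encoder(nn.Module):
    """Maps a user's item-interaction vector to the mean/log-variance of q(z|x)."""
    def __init__(self, n_items, d_hidden, d_z):
        super().__init__()
        self.body = GatedLinear(n_items, d_hidden)
        self.mu = nn.Linear(d_hidden, d_z)
        self.logvar = nn.Linear(d_hidden, d_z)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

class VampPrior(nn.Module):
    """p(z) = (1/K) * sum_k q(z | u_k), with K learned pseudo-inputs u_k."""
    def __init__(self, encoder, n_items, n_pseudo=50):
        super().__init__()
        self.encoder = encoder
        self.pseudo = nn.Parameter(torch.rand(n_pseudo, n_items))  # learned pseudo-inputs

    def log_prob(self, z):
        mu, logvar = self.encoder(self.pseudo)              # (K, d_z) each
        z = z.unsqueeze(1)                                  # (B, 1, d_z)
        log_comp = -0.5 * (math.log(2 * math.pi) + logvar + (z - mu) ** 2 / logvar.exp())
        log_mix = torch.logsumexp(log_comp.sum(-1), dim=1)  # mixture over the K components
        return log_mix - math.log(self.pseudo.shape[0])

# Toy usage: log p(z) under the VampPrior for a batch of latent codes.
enc = Encoder(n_items=1000, d_hidden=200, d_z=64)
prior = VampPrior(enc, n_items=1000, n_pseudo=50)
z = torch.randn(8, 64)
print(prior.log_prob(z).shape)  # torch.Size([8])
```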

    Learning with Limited Labeled Data in Biomedical Domain by Disentanglement and Semi-Supervised Learning

    In this dissertation, we are interested in improving the generalization of deep neural networks for biomedical data (e.g., electrocardiogram signals, x-ray images, etc.). Although deep neural networks have attained state-of-the-art performance and, thus, deployment across a variety of domains, similar performance in the clinical setting remains challenging due to their inability to generalize across unseen data (e.g., a new patient cohort). We address this challenge of generalization in deep neural networks from two perspectives: 1) learning disentangled representations from the deep network, and 2) developing efficient semi-supervised learning (SSL) algorithms using the deep network. In the former, we are interested in designing specific architectures and objective functions to learn representations in which variations in the data are well separated, i.e., disentangled. In the latter, we are interested in designing regularizers that encourage the underlying neural function's behavior toward a common inductive bias to avoid over-fitting the function to small labeled data. Our end goal is to improve the generalization of the deep network for the diagnostic model in both of these approaches. For disentangled representations, this translates to appropriately learning latent representations from the data, capturing the observed input's underlying explanatory factors in an independent and interpretable way. With the data's explanatory factors well separated, such a disentangled latent space can then be useful for a large variety of tasks and domains within the data distribution even with a small amount of labeled data, thus improving generalization. For efficient semi-supervised algorithms, this translates to utilizing a large volume of unlabeled data to assist learning from the limited labeled dataset, a situation commonly encountered in the biomedical domain. By drawing ideas from different areas within deep learning, such as representation learning (e.g., autoencoders), variational inference (e.g., variational autoencoders), Bayesian nonparametrics (e.g., the beta-Bernoulli process), learning theory (e.g., analytical learning theory), and function smoothing (Lipschitz smoothness), we propose several learning algorithms to improve generalization in the associated tasks. We test our algorithms on real-world clinical data and show that our approach yields significant improvements over existing methods. Moreover, we demonstrate the efficacy of the proposed models on benchmark and simulated data to understand different aspects of the proposed learning methods. We conclude by identifying some of the limitations of the proposed methods, areas of further improvement, and broader future directions for the successful adoption of AI models in the clinical environment.
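    As one concrete, purely illustrative instance of the kind of regularizer discussed above, the sketch below combines a supervised loss on a small labeled set with a consistency penalty that rewards smooth predictions on perturbed unlabeled inputs. The dissertation's actual objectives (disentanglement, beta-Bernoulli priors, analytical learning theory, Lipschitz smoothing) are not reproduced; the model, noise level, and weighting `lam` are assumed names.

```python
# Generic semi-supervised loss sketch: supervised cross-entropy on labeled
# data plus a smoothness/consistency penalty on unlabeled data.
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, x_lab, y_lab, x_unlab, noise_std=0.05, lam=1.0):
    # Supervised term on the limited labeled biomedical data.
    sup = F.cross_entropy(model(x_lab), y_lab)
    # Consistency term: predictions should be stable under small input perturbations.
    with torch.no_grad():
        p_clean = F.softmax(model(x_unlab), dim=-1)
    log_p_noisy = F.log_softmax(model(x_unlab + noise_std * torch.randn_like(x_unlab)), dim=-1)
    cons = F.kl_div(log_p_noisy, p_clean, reduction="batchmean")
    return sup + lam * cons

# Toy usage with a linear classifier standing in for the diagnostic network.
model = torch.nn.Linear(12, 3)
x_lab, y_lab = torch.randn(16, 12), torch.randint(0, 3, (16,))
x_unlab = torch.randn(128, 12)
loss = semi_supervised_loss(model, x_lab, y_lab, x_unlab)
loss.backward()
```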

    A neural network approach to the modeling of blast furnace

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (leaves 65-69). By Angela X. Ge. M.Eng.