953 research outputs found
Enhancing VAEs for Collaborative Filtering: Flexible Priors & Gating Mechanisms
Neural network based models for collaborative filtering have recently started to gain
attention. One branch of research uses deep generative models to model user
preferences, and variational autoencoders were shown to produce state-of-the-art
results. However, the current variational autoencoder for CF has some potentially
problematic characteristics. The first is the overly simplistic prior that VAEs
incorporate for learning the latent representations of user preference. The other is
the model's inability to learn deeper representations with more than one hidden
layer per network. Our goal is to incorporate appropriate techniques to mitigate
these problems of variational autoencoder CF and further improve recommendation
performance. Our work is the first to apply flexible priors to collaborative
filtering; we show that the simple priors of the original VAEs may be too
restrictive to fully model user preferences, and that setting a more flexible prior
gives significant gains. We experiment with the VampPrior, originally proposed for
image generation, to examine the effect of flexible priors in CF. We also show
that VampPriors coupled with gating mechanisms outperform SOTA results,
including the Variational Autoencoder for Collaborative Filtering, by meaningful
margins on two popular benchmark datasets (MovieLens and Netflix).
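The two enhancements named above can be sketched compactly: the VampPrior replaces the standard-normal prior with a mixture of the variational posteriors evaluated at learned pseudo-inputs, and Gated Linear Units modulate each hidden layer's activations with a learned sigmoid gate. A minimal NumPy sketch follows; all names, sizes, and the toy linear encoder are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def glu(x, W, b, V, c):
    # Gated Linear Unit: a linear path modulated by a learned sigmoid gate,
    # h = (xW + b) * sigmoid(xV + c); the gate controls information flow,
    # which makes stacking layers easier than with plain non-linearities.
    return (x @ W + b) * sigmoid(x @ V + c)

def encode(x, We, be):
    # Toy Gaussian encoder: returns mean and log-variance of q(z|x).
    h = x @ We + be
    d = h.shape[-1] // 2
    return h[..., :d], h[..., d:]

def log_normal(z, mu, logvar):
    # log N(z; mu, diag(exp(logvar))), summed over latent dimensions.
    return -0.5 * np.sum(
        np.log(2 * np.pi) + logvar + (z - mu) ** 2 / np.exp(logvar), axis=-1
    )

def vamp_log_prior(z, pseudo_inputs, We, be):
    # VampPrior: p(z) = (1/K) * sum_k q(z | u_k), a mixture of the
    # variational posteriors evaluated at K learned pseudo-inputs u_k.
    mu, logvar = encode(pseudo_inputs, We, be)                # (K, d) each
    comp = log_normal(z[:, None, :], mu[None], logvar[None])  # (B, K)
    m = comp.max(axis=1, keepdims=True)                       # log-sum-exp
    return m[:, 0] + np.log(np.mean(np.exp(comp - m), axis=1))

# Illustrative sizes: n_items-dimensional implicit-feedback vectors,
# d latent dims, K pseudo-inputs ("pseudo users"), batch of B latent codes.
n_items, d, K, B = 8, 2, 4, 3
We = rng.normal(size=(n_items, 2 * d)) * 0.1
be = np.zeros(2 * d)
pseudo = sigmoid(rng.normal(size=(K, n_items)))  # learnable in a real model
z = rng.normal(size=(B, d))
lp = vamp_log_prior(z, pseudo, We, be)  # log p(z) per latent code, shape (B,)

Wg, Vg = rng.normal(size=(n_items, d)), rng.normal(size=(n_items, d))
h = glu(pseudo, Wg, np.zeros(d), Vg, np.zeros(d))  # gated hidden layer, (K, d)
print(lp.shape, h.shape)
```

In training, `lp` would enter the KL term of the ELBO in place of log N(z; 0, I), and the pseudo-inputs would be optimized jointly with the network weights.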
Enhancing VAEs for Collaborative Filtering: Flexible Priors & Gating Mechanisms
Thesis (Master's) -- Seoul National University, Graduate School of Convergence Science and Technology, Department of Transdisciplinary Studies (Digital Information Convergence), August 2019. Advisor: Bongwon Suh.
While Matrix Factorization based linear models were long dominant in the Collaborative Filtering context, Neural Network based CF models for recommendation have recently started to gain attention. One branch of research uses deep generative models to model user preferences, and Variational Autoencoders were shown to give state-of-the-art results.
However, the current Variational Autoencoder for CF has some potentially problematic characteristics. The first is the overly simplistic prior that VAEs incorporate for learning the latent representations of user preference, which may keep the model from learning the more expressive and richer latent variables that could boost recommendation performance. The other is the model's inability to learn deeper representations with more than one hidden layer.
Our goal is to incorporate appropriate techniques to mitigate these problems and further improve the recommendation performance of VAE based Collaborative Filtering. We bring in the VampPrior, which successfully improved image generation, to tackle the restrictive-prior problem. We also adopt Gated Linear Units (GLUs), previously used in stacked convolutions for language modeling, to control information flow in the easily deepening autoencoder framework.
We show that the simple priors of the original VAEs may be too restrictive to fully model user preferences, and that setting a more flexible prior gives significant gains. We also show that VampPriors coupled with gating mechanisms outperform SOTA results, including the Variational Autoencoder for Collaborative Filtering, by meaningful margins on 4 benchmark datasets (MovieLens, Netflix, Pinterest, Melon).
1 INTRODUCTION 1
1.1 Background and Motivation 1
1.2 Research Goal 3
1.3 Enhancing VAEs for Collaborative Filtering 3
1.4 Experiments 5
1.5 Contributions 5
2 RELATED WORK 7
2.1 Collaborative Filtering 7
2.1.1 Traditional methods & Matrix-Factorization based CF 8
2.1.2 Autoencoders for CF 12
2.2 Deep Generative Models (VAE) 17
2.2.1 Variational Bayes 18
2.2.2 Variational Autoencoder 18
2.3 Variational Autoencoder for Collaborative Filtering 20
2.3.1 VAE for CF 21
2.4 Recent research in Computer Vision & Deep Learning 24
2.4.1 VampPrior 24
2.4.2 Gated CNN 25
3 METHOD 28
3.1 Flexible Prior 29
3.1.1 Motivation 29
3.1.2 VampPrior 30
3.1.3 Hierarchical Stochastic Units 31
3.2 Gating Mechanism 32
3.2.1 Motivation 32
3.2.2 Gated Linear Units 34
4 EXPERIMENT 35
4.1 Setup 35
4.1.1 Baseline Models 35
4.1.2 Proposed Models 37
4.1.3 Strong Generalization 37
4.1.4 Evaluation Metrics 38
4.2 Datasets 38
4.3 Configurations 39
4.4 Results 40
4.4.1 Model Performance 40
4.4.5 Further Analysis on the Effect of Gating 44
5 CONCLUSION 45
Bibliography 47
Abstract (Korean) 51
Improving variational autoencoders with respect to robustness (regularization) and task invariance
Kyoto University, course doctorate (new system), Doctor of Informatics, Kou No. 24725 / Joho No. 813; call number: Shinsei||Jo||137 (Main Library). Kyoto University Graduate School of Informatics, Department of Intelligence Science and Technology. Examiners: Prof. Hisashi Kashima (chief), Prof. Akihiro Yamamoto, Prof. Masatoshi Yoshikawa. Qualified under Article 4, Paragraph 1 of the Degree Regulations.
Doctor of Informatics, Kyoto University
- …