511 research outputs found
DJ-MC: A Reinforcement-Learning Agent for Music Playlist Recommendation
In recent years, there has been growing focus on the study of automated
recommender systems. Music recommendation systems serve as a prominent domain
for such works, both from an academic and a commercial perspective. A
fundamental aspect of music perception is that music is experienced in temporal
context and in sequence. In this work we present DJ-MC, a novel
reinforcement-learning framework for music recommendation that does not
recommend songs individually but rather song sequences, or playlists, based on
a model of preferences for both songs and song transitions. The model is
learned online and is uniquely adapted for each listener. To reduce exploration
time, DJ-MC exploits user feedback to initialize a model, which it subsequently
updates by reinforcement. We evaluate our framework with human participants
using both real song and playlist data. Our results indicate that DJ-MC's
ability to recommend sequences of songs provides a significant improvement over
more straightforward approaches, which do not take transitions into account.
Comment: updated to the most recent and completed version; updated author list. In Autonomous Agents and Multiagent Systems (AAMAS) 2015, Istanbul, Turkey, May 2015.
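The core loop the abstract describes (learn preferences for songs and for song transitions online, then choose the next song by their combined value) can be sketched as a minimal tabular agent. Everything below is an illustrative assumption rather than the authors' DJ-MC implementation: the class name, the reward scale in [-1, 1], the epsilon-greedy choice, and the incremental update rule are all hypothetical.

```python
import random

class PlaylistAgent:
    """Toy sketch of a DJ-MC-style agent: scores candidate songs by a
    learned song-preference term plus a transition-preference term, and
    updates both from listener feedback (hypothetical reward in [-1, 1])."""

    def __init__(self, songs, epsilon=0.1, lr=0.5):
        self.songs = list(songs)
        self.epsilon = epsilon          # exploration rate
        self.lr = lr                    # online learning rate
        self.song_value = {s: 0.0 for s in songs}
        self.trans_value = {}           # (prev, next) -> learned value

    def score(self, prev, cand):
        return self.song_value[cand] + self.trans_value.get((prev, cand), 0.0)

    def pick_next(self, prev, rng=random):
        candidates = [s for s in self.songs if s != prev]
        if rng.random() < self.epsilon:
            return rng.choice(candidates)                          # explore
        return max(candidates, key=lambda s: self.score(prev, s))  # exploit

    def update(self, prev, chosen, reward):
        # move both value estimates toward the observed reward
        self.song_value[chosen] += self.lr * (reward - self.song_value[chosen])
        key = (prev, chosen)
        old = self.trans_value.get(key, 0.0)
        self.trans_value[key] = old + self.lr * (reward - old)

agent = PlaylistAgent(["a", "b", "c"], epsilon=0.0)
agent.update("a", "b", 1.0)   # listener liked b after a
agent.update("a", "c", -1.0)  # listener disliked c after a
print(agent.pick_next("a"))   # -> "b"
```

Because the transition value is keyed on the previous song, the same candidate can score differently depending on what just played, which is the property that distinguishes sequence recommendation from ranking songs individually.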
Current Challenges and Visions in Music Recommender Systems Research
Music recommender systems (MRS) have experienced a boom in recent years,
thanks to the emergence and success of online streaming services, which
nowadays make almost all of the world's music available at the user's
fingertips. While today's MRS considerably help users find interesting music
in these huge catalogs, MRS research still faces substantial challenges. In
particular, when it comes to building, incorporating, and evaluating
recommendation strategies that integrate information beyond simple user-item
interactions or content-based descriptors, and instead dig deep into the very
essence of listener needs, preferences, and intentions, MRS research becomes a
major endeavor and related publications remain quite sparse.
The purpose of this trends and survey article is twofold. We first identify
and shed light on what we believe are the most pressing challenges MRS research
is facing, from both academic and industry perspectives. We review the state of
the art towards solving these challenges and discuss its limitations. Second,
we detail possible future directions and visions we contemplate for the further
evolution of the field. The article should therefore serve two purposes: giving
the interested reader an overview of current challenges in MRS research and
providing guidance for young researchers by identifying interesting, yet
under-researched, directions in the field.
Quick Lists: Enriched Playlist Embeddings for Future Playlist Recommendation
Recommending playlists to users in the context of a digital music service is
a difficult task because a playlist is often more than the mere sum of its
parts. We present a novel method for generating playlist embeddings that are
invariant to playlist length and sensitive to local and global track ordering.
The embeddings also capture information about playlist sequencing, and are
enriched with side information about the playlist user. We show that these
embeddings are useful for generating next-best playlist recommendations, and
that side information can be used to address the cold-start problem.
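The two properties the abstract claims (length invariance plus sensitivity to track ordering, enriched with user side information) can be illustrated with a deliberately simple toy embedding. This is not the paper's model; the function name, the mean-plus-successive-differences construction, and the concatenated user vector are all hypothetical stand-ins.

```python
def playlist_embedding(track_vecs, user_vec):
    """Toy fixed-length playlist embedding (illustrative, not Quick Lists):
    the mean of track vectors gives length invariance, the mean of
    successive-track differences adds sensitivity to ordering, and a user
    side-information vector is concatenated for personalization."""
    n, dim = len(track_vecs), len(track_vecs[0])
    mean = [sum(v[i] for v in track_vecs) / n for i in range(dim)]
    if n > 1:
        diffs = [[b[i] - a[i] for i in range(dim)]
                 for a, b in zip(track_vecs, track_vecs[1:])]
        order = [sum(d[i] for d in diffs) / len(diffs) for i in range(dim)]
    else:
        order = [0.0] * dim
    return mean + order + list(user_vec)  # concatenation -> fixed length

tracks = [[1.0, 0.0], [0.0, 1.0]]
fwd = playlist_embedding(tracks, [0.5])
rev = playlist_embedding(list(reversed(tracks)), [0.5])
print(fwd != rev)  # ordering changes the embedding -> True
```

Note that the output dimensionality is fixed regardless of playlist length, while reversing the playlist flips the difference term, so the two orderings embed differently.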
Reflecting Negative Preference in Content-Based Music Recommendation via Contrastive Learning
Master's thesis, Department of Intelligence and Information Convergence, Graduate School of Convergence Science and Technology, Seoul National University, February 2023. Advisor: Kyogu Lee. Advanced music recommendation systems are being introduced along with the development of machine learning. However, it is essential to design a music recommendation system that increases user satisfaction by understanding users' music tastes, rather than through model complexity alone. Although several studies of music recommendation systems exploiting negative preferences have shown performance improvements, they lack an explanation of how negative preferences lead to better recommendations.
In this work, we analyze the role of negative preference in users' music tastes by comparing music recommendation models trained with contrastive learning exploiting preference (CLEP) under three different training strategies: exploiting both positive and negative preferences (CLEP-PN), positive preferences only (CLEP-P), and negative preferences only (CLEP-N). We evaluate the effectiveness of negative preference by validating each system on a small amount of personalized data obtained via a survey, and further illuminate the possibility of exploiting negative preference in music recommendation. Our experimental results show that CLEP-N outperforms the other two in accuracy and false positive rate. Furthermore, the proposed training strategies produced a consistent tendency regardless of the type of front-end musical feature extractor, demonstrating the stability of the proposed method.
1 Introduction 6
1.1 Motivation 6
1.2 Research Questions 9
2 Background 11
2.1 Background Theories 11
2.1.1 Recommender Systems 11
2.1.2 Music Recommendation System 14
2.1.3 Contrastive Learning 16
2.2 Related Works 17
2.2.1 Content-based Music Recommendation 17
2.2.2 Recommendation Systems Exploiting Negative Preference 20
3 Methods 22
3.1 Feature Extraction 22
3.1.1 Contrastive Learning of Musical Representations 24
3.1.2 Music Effects Encoder 25
3.1.3 Jukebox 25
3.2 Contrastive Learning Exploiting Preference (CLEP) 26
3.3 Preference Prediction 29
4 Experiments 30
4.1 Experimental Setups 30
4.2 User Preference Dataset 31
4.3 Evaluation 35
4.3.1 Evaluation Metric 35
4.3.2 Experimental Results 37
5 Results and Discussion 43
6 Conclusion 48
6.1 Contribution 48
6.1.1 Novel Approach on Content-Based Music Recommendation 49
6.1.2 Comprehension of Music Preference 51
6.2 Limitation and Future Works 51
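The training objective sketched in the abstract, contrastive learning that exploits preference labels, can be illustrated with a toy pairwise margin loss: embeddings of songs sharing a preference label are pulled together, while pairs with differing labels are pushed at least a margin apart. This is only in the spirit of CLEP; the function signature, the margin value, and the example embeddings are assumptions, not the thesis implementation.

```python
import math

def contrastive_loss(z1, z2, same_pref, margin=1.0):
    """Toy pairwise contrastive loss over song embeddings: same-preference
    pairs are pulled together (squared distance); different-preference
    pairs are pushed at least `margin` apart (squared hinge)."""
    d = math.dist(z1, z2)
    if same_pref:
        return d ** 2                   # pull together
    return max(0.0, margin - d) ** 2    # push apart up to the margin

# two songs with the same preference label sitting close: small loss
close_same = contrastive_loss([0.0, 0.0], [0.1, 0.0], same_pref=True)
# a liked and a disliked song sitting equally close: large loss
close_diff = contrastive_loss([0.0, 0.0], [0.1, 0.0], same_pref=False)
print(close_same < close_diff)  # -> True
```

Training under such a loss with only negative labels (the CLEP-N setting) shapes the embedding space around what the user rejects, which is the behavior the thesis compares against positive-only and combined strategies.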
Context Aware Music Recommendation and Playlist Generation
There are many reasons people listen to music, and the type of music is largely determined by what the listener may be doing while they listen. For example, one may listen to one type of music while commuting, another while exercising, and yet another while relaxing. Without access to the physiological state of the user, current music recommendation methods rely on collaborative filtering (recommending music based on what other, similar users listen to) and content-based filtering (recommending songs based on their similarity to songs the user already prefers). With the rise in popularity of smart devices and activity trackers, physiological context can be a new channel to inform music recommendations. We propose deep learning solutions for context-aware recommendation and playlist generation. Specifically, we use variational autoencoders (VAEs) to create a song embedding. We then explore multi-task multi-layer perceptrons (MLPs) and Gaussian mixture models to recommend songs based on context. We generate artificial user data to train and test our models in online learning and supervised learning settings.
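The idea of conditioning recommendations on activity context can be sketched with a much simpler probabilistic stand-in for the paper's pipeline: fit one Gaussian per context over a single song feature (tempo here), then recommend the catalog song most likely under the current context. The class, the 1-D tempo feature, and the example data are all hypothetical; the paper's VAE embeddings and multi-task MLPs are not reproduced.

```python
import math
from collections import defaultdict

class ContextRecommender:
    """Toy context-aware recommender: one Gaussian per context over song
    tempo, scoring catalog songs by log-likelihood under that context."""

    def __init__(self):
        self.obs = defaultdict(list)   # context -> observed tempos

    def listen(self, context, tempo):
        self.obs[context].append(tempo)

    def _loglik(self, context, tempo):
        xs = self.obs[context]
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs) or 1e-6
        return -((tempo - mu) ** 2) / (2 * var) - 0.5 * math.log(2 * math.pi * var)

    def recommend(self, context, catalog):
        # catalog: {song_name: tempo}; pick the most likely song
        return max(catalog, key=lambda s: self._loglik(context, catalog[s]))

rec = ContextRecommender()
for t in (160, 170, 165):
    rec.listen("exercising", t)
for t in (70, 80, 75):
    rec.listen("relaxing", t)
catalog = {"fast_song": 168, "slow_song": 72}
print(rec.recommend("exercising", catalog))  # -> "fast_song"
print(rec.recommend("relaxing", catalog))    # -> "slow_song"
```

A full Gaussian mixture over learned embeddings generalizes this per-context single Gaussian, but the recommendation rule is the same: score candidates by likelihood under the current context.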
Talk the Walk: Synthetic Data Generation for Conversational Music Recommendation
Recommendation systems are ubiquitous yet often difficult for users to
control and adjust when recommendation quality is poor. This has motivated the
development of conversational recommendation systems (CRSs), with control over
recommendations provided through natural language feedback. However, building
conversational recommendation systems requires conversational training data
involving user utterances paired with items that cover a diverse range of
preferences. Such data has proved challenging to collect scalably using
conventional methods like crowdsourcing. We address this challenge in the
context of item-set recommendation, noting the increasing attention this task
has received, motivated by use cases like music, news, and recipe
recommendation. We present a new
technique, TalkTheWalk, that synthesizes realistic high-quality conversational
data by leveraging domain expertise encoded in widely available curated item
collections, showing how these can be transformed into corresponding item set
curation conversations. Specifically, TalkTheWalk generates a sequence of
hypothetical yet plausible item sets returned by a system, then uses a language
model to produce corresponding user utterances. Applying TalkTheWalk to music
recommendation, we generate over one million diverse playlist curation
conversations. A human evaluation shows that the conversations contain
utterances consistent with relevant item sets, nearly matching the quality of
small human-collected conversational datasets for this task. At the same time,
when the synthetic corpus is used to train a CRS, it improves Hits@100 by 10.5
points on a benchmark dataset over standard baselines and is preferred over the
top-performing baseline in an online evaluation.
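The generation recipe the abstract describes (walk a curated playlist as a sequence of plausible system-returned item sets, then attach a user utterance to each turn) can be sketched with templates standing in for the language model. The function name, the batching scheme, and the utterance templates are illustrative assumptions, not the TalkTheWalk implementation.

```python
def synthesize_conversation(playlist_title, tracks, step=2):
    """Toy sketch of synthetic conversation generation from a curated
    playlist: each turn reveals the next batch of tracks as the system's
    item set, paired with a templated (hypothetical) user utterance."""
    turns = []
    shown = []
    for i in range(0, len(tracks), step):
        batch = tracks[i:i + step]
        shown.extend(batch)
        if i == 0:
            utterance = f"Play something like a '{playlist_title}' mix."
        else:
            utterance = f"Nice, more tracks in the style of {batch[0]}."
        turns.append({"user": utterance, "item_set": list(shown)})
    return turns

convo = synthesize_conversation("Morning Chill", ["t1", "t2", "t3", "t4"])
print(len(convo))             # -> 2 turns
print(convo[-1]["item_set"])  # -> ['t1', 't2', 't3', 't4']
```

Replacing the templates with a language model prompted on the revealed item sets is what lets this scale to the million-conversation corpus described above, since the curated playlist supplies the domain expertise each conversation converges toward.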