
    DJ-MC: A Reinforcement-Learning Agent for Music Playlist Recommendation

    In recent years, there has been growing focus on the study of automated recommender systems. Music recommendation systems serve as a prominent domain for such works, both from an academic and a commercial perspective. A fundamental aspect of music perception is that music is experienced in temporal context and in sequence. In this work we present DJ-MC, a novel reinforcement-learning framework for music recommendation that does not recommend songs individually but rather song sequences, or playlists, based on a model of preferences for both songs and song transitions. The model is learned online and is uniquely adapted for each listener. To reduce exploration time, DJ-MC exploits user feedback to initialize a model, which it subsequently updates by reinforcement. We evaluate our framework with human participants using both real song and playlist data. Our results indicate that DJ-MC's ability to recommend sequences of songs provides a significant improvement over more straightforward approaches, which do not take transitions into account. (Presented at Autonomous Agents and Multiagent Systems (AAMAS) 2015, Istanbul, Turkey, May 2015.)
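The song-plus-transition preference model at the heart of DJ-MC can be sketched in a few lines. The reward tables below are invented for illustration, and the greedy one-step selection stands in for the paper's actual planning and online-learning machinery:

```python
# Hypothetical listener model: a playlist's value is the sum of per-song
# preferences plus per-transition preferences, mirroring the paper's framing.
# The numbers are made up for illustration.
song_reward = {"a": 1.0, "b": 0.5, "c": 0.2}
transition_reward = {("a", "b"): 0.8, ("a", "c"): -0.3, ("b", "a"): 0.0,
                     ("b", "c"): 0.1, ("c", "a"): 0.4, ("c", "b"): 0.2}

def playlist_value(playlist):
    """Score a playlist under the song + transition preference model."""
    value = sum(song_reward[s] for s in playlist)
    value += sum(transition_reward[(p, q)]
                 for p, q in zip(playlist, playlist[1:]))
    return value

def greedy_playlist(start, length):
    """Greedily extend the playlist one song at a time, choosing the song
    that maximises its own reward plus the transition reward from the
    current last song. A one-step stand-in for DJ-MC's planning."""
    playlist = [start]
    while len(playlist) < length:
        last = playlist[-1]
        candidates = [s for s in song_reward if s != last]
        playlist.append(max(candidates, key=lambda s:
                            song_reward[s] + transition_reward[(last, s)]))
    return playlist
```

In the full framework the reward tables would be learned online from listener feedback rather than fixed in advance.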

    hpDJ: An automated DJ with floorshow feedback

    Many radio stations and nightclubs employ Disk-Jockeys (DJs) to provide a continuous uninterrupted stream or “mix” of dance music, built from a sequence of individual song-tracks. In the last decade, commercial pre-recorded compilation CDs of DJ mixes have become a growth market. DJs exercise skill in deciding an appropriate sequence of tracks and in mixing 'seamlessly' from one track to the next. Online access to large-scale archives of digitized music via automated music information retrieval systems offers users the possibility of discovering many songs they like, but the majority of consumers are unlikely to want to learn the DJ skills of sequencing and mixing. This paper describes hpDJ, an automatic method by which compilations of dance-music can be sequenced and seamlessly mixed by computer, with minimal user involvement. The user may specify a selection of tracks, and may give a qualitative indication of the type of mix required. The resultant mix can be presented as a continuous single digital audio file, whether for burning to CD, or for play-out from a personal playback device such as an iPod, or for play-out to rooms full of dancers in a nightclub. An early version of this system was tested on an audience of patrons in a London nightclub, with very favourable results. Subsequent to that experiment, we designed technologies which allow the hpDJ system to monitor the responses of crowds of dancers/listeners, so that hpDJ can dynamically react to those responses from the crowd. The initial intention was that hpDJ would monitor the crowd’s reaction to the song-track currently being played, and use that response to guide its selection of subsequent song-tracks in the mix. In that version, it was assumed that all the song-tracks existed in some archive or library of pre-recorded files.
However, once reliable crowd-monitoring technology is available, it becomes possible to use the crowd-response data to dynamically “remix” existing song-tracks (i.e., alter the track in some way, tailoring it to the response of the crowd) and even to dynamically “compose” new song-tracks suited to that crowd. Thus, the music played by hpDJ to any particular crowd of listeners on any particular night becomes a direct function of that particular crowd’s particular responses on that particular night. On a different night, the same crowd of people might react in a different way, leading hpDJ to create different music. Thus, the music composed and played by hpDJ could be viewed as an “emergent” property of the dynamic interaction between the computer system and the crowd, and the crowd could then be viewed as having collectively collaborated on composing the music that was played on that night. This en masse collective composition raises some interesting legal issues regarding the ownership of the composition (i.e., who, exactly, is the author of the work?), but revenue-generating businesses can nevertheless plausibly be built from such technologies.
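The crowd-feedback loop described above can be caricatured in a few lines. The track features (BPM only) and the response threshold are invented for illustration, not taken from hpDJ itself:

```python
# Toy track library: track name -> tempo in BPM. In a real system the
# features and the crowd-response signal would be far richer.
library = {"t1": 124, "t2": 126, "t3": 128, "t4": 140}

def next_track(current, crowd_response, played):
    """Pick the next track from the unplayed pool. If the crowd reacted
    well (response > 0.5), stay close to the current tempo; otherwise
    jump further afield in search of a better reaction."""
    bpm = library[current]
    candidates = {t: b for t, b in library.items() if t not in played}
    if crowd_response > 0.5:
        return min(candidates, key=lambda t: abs(candidates[t] - bpm))
    return max(candidates, key=lambda t: abs(candidates[t] - bpm))
```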

    GeoTracks: adaptive music for everyday journeys

    Listening to music on the move is an everyday activity for many people. This paper proposes geotracks and geolists, music tracks and playlists of existing music that are aligned and adapted to specific journeys. We describe how everyday walking journeys such as commutes to work and existing popular music tracks can each be analysed, decomposed and then brought together, using musical adaptations including skipping and repeating parts of tracks, dynamically remixing tracks and cross-fades. Using a naturalistic experiment we compared walking while listening to geotracks (dynamically adapted using GPS location information) to walking while listening to a fixed playlist. Overall, participants enjoyed the walk more when listening to the adaptive geotracks. However, adapting the lengths of tracks appeared to detract from the experience of the music in some situations and for some participants, revealing trade-offs in achieving fine-grained alignment of music and walking journeys.
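One of the adaptations mentioned above, repeating parts of a track to fit a journey, might be sketched as follows. The section structure and the greedy rule are illustrative assumptions, not the paper's actual analysis pipeline:

```python
# A track is modelled as a list of (section_name, duration_in_seconds).
# We repeat the longest middle section while doing so brings the total
# duration closer to the journey's target length; intro and outro stay fixed.
def adapt_track(sections, target):
    """Greedily repeat the longest middle section toward `target` seconds."""
    out = list(sections)
    total = sum(d for _, d in out)
    name, dur = max(sections[1:-1], key=lambda s: s[1])
    while abs(total + dur - target) < abs(total - target):
        out.insert(-1, (name, dur))  # repeat just before the outro
        total += dur
    return out, total
```

For example, a 110-second track `[("intro", 20), ("verse", 40), ("chorus", 30), ("outro", 20)]` adapted to a 200-second walk gets the verse repeated twice, ending at 190 seconds.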

    Methods and Datasets for DJ-Mix Reverse Engineering

    DJ techniques are an important part of popular music culture. However, they remain insufficiently investigated by researchers, largely due to the lack of annotated datasets of DJ mixes. This paper aims to fill that gap by introducing novel methods to automatically deconstruct and annotate recorded mixes for which the constituent tracks are known. A rough alignment first estimates where in the mix each track starts, and which time-stretching factor was applied. Second, a sample-precise alignment is applied to determine the exact offset of each track in the mix. Third, we propose a new method to estimate the cue points and the fade curves which operates in the time-frequency domain to increase its robustness to interference with other tracks. The proposed methods are finally evaluated on our new publicly available DJ-mix dataset. This dataset contains automatically generated beat-synchronous mixes based on freely available music tracks, and the ground truth about the placement of tracks in a mix.
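The rough-alignment step can be illustrated with a plain cross-correlation over coarse per-frame features. Real systems would use richer features (e.g. chroma) and also search over time-stretch factors, so this is only a minimal stand-in:

```python
# Estimate where a track starts inside a mix by sliding the track's coarse
# feature sequence over the mix's and keeping the offset with the highest
# dot product (an unnormalised cross-correlation).
def best_offset(mix, track):
    """Return the frame offset maximising the match between `track`
    and the aligned window of `mix`."""
    scores = []
    for off in range(len(mix) - len(track) + 1):
        window = mix[off:off + len(track)]
        scores.append(sum(a * b for a, b in zip(window, track)))
    return max(range(len(scores)), key=scores.__getitem__)
```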

    Sequential decision making in artificial musical intelligence

    Over the past 60 years, artificial intelligence has grown from a largely academic field of research to a ubiquitous array of tools and approaches used in everyday technology. Despite its many recent successes and growing prevalence, certain meaningful facets of computational intelligence have not been as thoroughly explored. Such additional facets cover a wide array of complex mental tasks which humans carry out easily, yet are difficult for computers to mimic. A prime example of a domain in which human intelligence thrives, but machine understanding is still fairly limited, is music. Over the last decade, many researchers have applied computational tools to carry out tasks such as genre identification, music summarization, music database querying, and melodic segmentation. While these are all useful algorithmic solutions, we are still a long way from constructing complete music agents, able to mimic (at least partially) the complexity with which humans approach music. One key aspect that has not been sufficiently studied is that of sequential decision making in musical intelligence. This thesis strives to answer the following question: Can a sequential decision making perspective guide us in the creation of better music agents, and social agents in general? And if so, how? More specifically, this thesis focuses on two aspects of musical intelligence: music recommendation and human-agent (and more generally agent-agent) interaction in the context of music. The key contributions of this thesis are the design of better music playlist recommendation algorithms; the design of algorithms for tracking user preferences over time; new approaches for modeling people's behavior in situations that involve music; and the design of agents capable of meaningful interaction with humans and other agents in a setting where music plays a role (either directly or indirectly).
Though motivated primarily by music-related tasks, and focusing largely on people's musical preferences, this thesis also establishes that insights from music-specific case studies can be applicable in other concrete social domains, such as different types of content recommendation. Showing the generality of insights from musical data in other contexts serves as evidence for the utility of music domains as testbeds for the development of general artificial intelligence techniques. Ultimately, this thesis demonstrates the overall usefulness of taking a sequential decision making approach in settings previously unexplored from this perspective.
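As a small illustration of what "tracking user preferences over time" can mean, an exponentially weighted update discounts stale feedback so the estimate can follow a drifting preference. This is a standard device for non-stationary signals, not the thesis's specific algorithm:

```python
# Blend each new feedback observation into a running preference estimate.
# Higher `rate` forgets the past faster; the value 0.3 is arbitrary.
def update_preference(current, feedback, rate=0.3):
    """Exponentially weighted update of a scalar preference estimate."""
    return (1 - rate) * current + rate * feedback
```

Starting from an estimate of 0.0, two consecutive positive observations of 1.0 move the estimate to 0.3 and then 0.51, so recent evidence dominates without older evidence vanishing outright.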

    A User-Adaptive Automated DJ Web App with Object-Based Audio and Crowd-Sourced Decision Trees

    We describe the concepts behind a web-based minimal-UI DJ system that adapts to the user’s preference via simple interactive decisions and feedback on taste. Starting from a preset decision tree modeled on common DJ practice, the system can gradually learn a more customised and user-specific tree. At the core of the system are structural representations of the musical content based on semantic audio technologies and inferred from features extracted from the audio directly in the browser. These representations are gradually combined into a representation of the mix which could then be saved and shared with other users. We show how different types of transitions can be modeled using simple musical constraints. Potential applications of the system include crowd-sourced data collection, both on temporally aligned playlisting and musical preference.
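A preset decision tree over simple musical constraints, in the spirit described above, might look like the following. The thresholds and transition names are invented, not taken from the paper:

```python
# A tiny hand-made decision tree for choosing a transition type from two
# musical constraints: tempo difference and key compatibility.
def choose_transition(bpm_a, bpm_b, same_key):
    """Walk the tree: beat-matchable tempos allow blends or cuts,
    otherwise fall back to a fade."""
    if abs(bpm_a - bpm_b) <= 4:          # close enough to beat-match
        if same_key:
            return "long blend"          # harmonic and rhythmic compatibility
        return "short beat-matched cut"  # rhythmic compatibility only
    return "fade out / fade in"          # tempos too far apart to mix
```

In the system described above, user feedback would gradually replace such hand-set rules with a learned, user-specific tree.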