433 research outputs found

    "More of an art than a science": Supporting the creation of playlists and mixes

    This paper presents an analysis of how people construct playlists and mixes. Interviews with practitioners and postings made to a web site are analyzed using a grounded theory approach to extract themes and categorizations. The information sought is often encapsulated as music information retrieval tasks, albeit not within the traditional "known item search" paradigm. The collated data are analyzed, and trends are identified and discussed in relation to music information retrieval algorithms that could help support such activity.

    Using Song Social Tags and Topic Models to Describe and Compare Playlists

    Playlists are a natural delivery method for music recommendation and discovery systems, and recommender systems offering playlists must strive to make them relevant and enjoyable. In this paper we survey many current means of generating and evaluating playlists. We present a means of comparing playlists in a reduced-dimensional space through the use of aggregated tag clouds and topic models. To evaluate the fitness of this measure, we perform prototypical retrieval tasks on playlists taken from radio station logs gathered from Radio Paradise and Yes.com, using tags from Last.fm. The results show better-than-random performance when the query playlist's station is used as ground truth, but not when time of day is used. We then discuss possible applications for this measurement technique as well as ways it might be improved.
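
    As a rough illustration of this comparison idea, the sketch below (with invented playlists, tags, and topic count) aggregates each playlist's tags into one bag of words, fits a small topic model over the tag clouds, and scores playlist similarity as cosine similarity of topic mixtures:

```python
# A sketch of playlist comparison via aggregated tag clouds and topics.
# Playlists, tags, and the topic count are invented for illustration.
from gensim import corpora, models
from gensim.matutils import cossim

# Each playlist becomes one "document": the bag of tags aggregated
# over all of its tracks (a tag cloud).
playlists = {
    "morning_mix": ["acoustic", "folk", "mellow", "indie", "mellow"],
    "late_night":  ["ambient", "chillout", "electronic", "downtempo"],
    "drive_time":  ["rock", "classic rock", "indie", "guitar"],
}

dictionary = corpora.Dictionary(playlists.values())
corpus = [dictionary.doc2bow(tags) for tags in playlists.values()]

# Reduce each tag cloud to a low-dimensional topic mixture.
lda = models.LdaModel(corpus, id2word=dictionary,
                      num_topics=2, passes=20, random_state=0)

# Compare playlists by cosine similarity of their topic mixtures.
vecs = {name: lda[dictionary.doc2bow(tags)]
        for name, tags in playlists.items()}
print(cossim(vecs["morning_mix"], vecs["late_night"]))
print(cossim(vecs["morning_mix"], vecs["drive_time"]))
```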

    Developing a context-aware automatic playlist generator (CAAPG)

    Thesis submitted to the Department of Computer Science, Ashesi University College, in partial fulfillment of the Bachelor of Science degree in Computer Science, April 2014. The digitization of music and the sheer volume of musical content available to listeners on local devices, such as mobile phones and iPods, has been revolutionary. This trend has changed the way humans interact with and experience their music: listeners can listen to their songs on the move. The most recent trend in the music industry is that users can organize and search for their songs based on emotions. However, most users have to manually create their playlists for particular situations. The work this entails is cumbersome and sometimes detracts from the listener's experience. The intuitive response to this problem is to develop an automatic playlist generation (APG) system. Research on APG mostly focuses on using traditional metadata and audio similarity methods to create a playlist. In addition, APG is often treated as a static problem [1], i.e., one that does not change; music listeners, however, are always changing their listening habits. This thesis supports and follows from the argument made in Chi Chung-yi's work that the APG problem is a continuous optimization problem. Additionally, I argue that the best way to give users a good listening experience is to understand the user's preferences depending on the context, where context simply means the features that make up the environmental space in which the system is used. The main idea is to show the importance of emotional categorization in the generation of playlist content, while simultaneously mapping those categories to the user's context based on the user's past activities on the system. Reinforcement learning is the method used in this thesis to generate a personalized playlist based on the context of use and the user's emotional preference. After implementing the system, two hypothetical users are used to simulate its use, and various metrics are defined to measure the performance of this approach.
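
    A minimal sketch of the reinforcement-learning idea, assuming a bandit-style formulation in which (context, mood) pairs are the arms and user feedback is the reward; all names and the reward scheme are illustrative, not the thesis's implementation:

```python
# A bandit-style sketch of the thesis's reinforcement-learning idea:
# (context, mood) pairs are the arms, user feedback is the reward.
# Contexts, moods, and the reward scheme are illustrative assumptions.
import random
from collections import defaultdict

CONTEXTS = ["commute", "work", "exercise"]
MOODS = ["happy", "calm", "energetic", "sad"]   # emotional categories

q = defaultdict(float)      # q[(context, mood)] -> estimated value
alpha, epsilon = 0.1, 0.2   # learning rate, exploration rate

def choose_mood(context):
    """Epsilon-greedy choice of the mood for the next playlist."""
    if random.random() < epsilon:
        return random.choice(MOODS)
    return max(MOODS, key=lambda m: q[(context, m)])

def update(context, mood, reward):
    """Incremental value update from observed feedback."""
    key = (context, mood)
    q[key] += alpha * (reward - q[key])

# Simulated interaction, echoing the thesis's hypothetical users:
# this user likes energetic music while exercising.
for _ in range(1000):
    ctx = random.choice(CONTEXTS)
    mood = choose_mood(ctx)
    reward = 1 if (ctx == "exercise" and mood == "energetic") else 0
    update(ctx, mood, reward)

print(max(MOODS, key=lambda m: q[("exercise", m)]))  # -> "energetic"
```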

    Unblind Your Apps: Predicting Natural-Language Labels for Mobile GUI Components by Deep Learning

    According to the World Health Organization (WHO), approximately 1.3 billion people live with some form of vision impairment globally, of whom 36 million are blind. Because of this disability, integrating this group into society is a challenging problem. The recent rise of smartphones offers a new solution by giving blind users convenient access to information and services for understanding the world. Users with vision impairment can adopt the screen reader embedded in mobile operating systems to read the content of each screen within an app, and use gestures to interact with the phone. However, the prerequisite for using screen readers is that developers add natural-language labels to image-based components when developing the app. Unfortunately, more than 77% of apps have missing-label issues, according to our analysis of 10,408 Android apps. Most of these issues are caused by developers' lack of awareness and knowledge when it comes to considering this group. And even when developers want to add labels to UI components, they may not come up with concise and clear descriptions, as most of them have no visual impairment themselves. To overcome these challenges, we develop a deep-learning based model, called LabelDroid, to automatically predict the labels of image-based buttons by learning from large-scale commercial apps in Google Play. The experimental results show that our model makes accurate predictions and that the generated labels are of higher quality than those from real Android developers. Comment: Accepted to the 42nd International Conference on Software Engineering.
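
    For intuition, the skeleton below shows the general image-to-text model family the paper works in: encode the button image, decode a short natural-language label. The ResNet/GRU combination is an assumption for illustration only; LabelDroid's actual architecture differs in its details:

```python
# A generic image-to-text skeleton of the kind LabelDroid describes:
# a CNN encodes the button image, a sequence decoder emits the label.
# The ResNet/GRU choice is an assumption, not the paper's design.
import torch
import torch.nn as nn
import torchvision.models as tv

class ButtonLabeler(nn.Module):
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        cnn = tv.resnet18(weights=None)
        cnn.fc = nn.Linear(cnn.fc.in_features, hidden)  # image -> feature
        self.encoder = cnn
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, images, captions):
        # Image feature initialises the decoder's hidden state;
        # the label tokens are fed with teacher forcing.
        h0 = self.encoder(images).unsqueeze(0)       # (1, batch, hidden)
        emb = self.embed(captions)                   # (batch, len, hidden)
        states, _ = self.decoder(emb, h0)
        return self.out(states)                      # token logits

model = ButtonLabeler(vocab_size=5000)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 5000, (2, 8)))
print(logits.shape)  # torch.Size([2, 8, 5000])
```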

    Track Co-occurrence Analysis of Users' Music Listening History

    Music services provide listeners with access to great numbers of tracks, and it is time consuming for listeners to find potential favorites among them. Music listeners increasingly want playlists to be created automatically. This study examines the relationship between background knowledge about music and track co-occurrence frequency in users' music listening histories, and builds a multiple linear regression model to predict track co-occurrence. Given a seed track, the model can then determine which track is most likely to co-occur with it. A simple objective evaluation compares predicted tracks with tracks in the users' listening histories: for 13 out of 15 test tracks, the highest-ranked predicted track appears in the same listening history. Master of Science in Information Science thesis.
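
    A toy sketch of that setup, under invented sessions and features: count pair co-occurrences across listening sessions, featurize each pair with background knowledge, and fit the regression on the counts:

```python
# Toy sketch of the study's setup: count track-pair co-occurrences in
# listening sessions, describe each pair with background-knowledge
# features, and fit a multiple linear regression on the counts. The
# features (shared artist, shared genre, tempo gap) are assumptions
# standing in for the study's actual ones.
from itertools import combinations
from collections import Counter
from sklearn.linear_model import LinearRegression

# Hypothetical listening histories: each session is a list of track ids.
sessions = [["a", "b", "c"], ["a", "b", "d"], ["c", "d"], ["a", "c", "b"]]

cooc = Counter()
for s in sessions:
    for pair in combinations(sorted(set(s)), 2):
        cooc[pair] += 1

# Hypothetical features per pair: [same_artist, same_genre, tempo_gap]
features = {
    ("a", "b"): [1, 1, 5],  ("a", "c"): [0, 1, 12], ("a", "d"): [0, 0, 30],
    ("b", "c"): [0, 1, 8],  ("b", "d"): [0, 0, 25], ("c", "d"): [0, 1, 10],
}

X = [features[p] for p in cooc]
y = [cooc[p] for p in cooc]
model = LinearRegression().fit(X, y)

# Given a seed track, rank candidate partners by predicted co-occurrence.
print(model.predict([[1, 1, 4]]))  # a pair sharing artist and genre
```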

    Emotion, Content & Context in Sound and Music

    Computer game sound is particularly dependent upon the use of both sound artefacts and music. Sound and music are media rich in information. Audio and music processing can be approached from a range of perspectives which may or may not consider the meaning and purpose of this information. Computer music and digital audio are being advanced through investigations into emotion, content analysis, and context, and this chapter attempts to highlight the value of considering the information content present in sound, the context of the user being exposed to the sound, and the emotional reactions and interactions that are possible between the user and game sound. We demonstrate that by analysing the information present within media and considering the applications and purpose of a particular type of information, developers can improve user experiences and reduce overheads while creating more suitable, efficient applications. Some illustrated examples of our research projects that employ these theories are provided. Although the examples of research and development applications are not always drawn from computer game sound, they can be related back to computer games. We aim to stimulate the reader's imagination and thought in these areas, rather than attempting to drive the reader down one particular path.

    Automatic music playlist generation using affective technologies

    This paper discusses how human emotion could be quantified using contextual and physiological information gathered from a range of sensors, and how this data could then be used to automatically generate music playlists. I begin by discussing existing affective systems that automatically generate playlists based on human emotion. I then consider the current work in audio description analysis. A system is proposed that measures human emotion based on contextual and physiological data from a range of sensors; the sensors considered for capturing such characteristics range from temperature and light sensors to EDA (electrodermal activity) and ECG (electrocardiogram) sensors. The concluding section describes the progress achieved so far, which includes defining datasets using a conceptual design, microprocessor electronics, and data acquisition using MATLAB. Lastly, there is a brief discussion of future plans to develop this research.
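
    As a rough illustration of the proposed pipeline (sensor readings to emotion estimate to playlist mood), the sketch below maps hypothetical readings onto a coarse arousal/valence grid; the thresholds and mood mapping are invented, and the paper itself targets MATLAB-based acquisition rather than this Python stand-in:

```python
# A toy sketch of the proposed pipeline (sensor readings -> emotion
# estimate -> playlist mood). Thresholds and the mood mapping are
# invented; the paper itself targets MATLAB-based data acquisition.
def estimate_emotion(eda_microsiemens, heart_rate_bpm, light_lux):
    """Very coarse arousal/valence estimate from sensor data."""
    arousal = "high" if eda_microsiemens > 5.0 or heart_rate_bpm > 100 else "low"
    valence = "positive" if light_lux > 300 else "negative"  # bright room
    return arousal, valence

MOOD_PLAYLIST = {
    ("high", "positive"): "energetic",
    ("high", "negative"): "calming",
    ("low", "positive"): "relaxed",
    ("low", "negative"): "uplifting",
}

print(MOOD_PLAYLIST[estimate_emotion(6.2, 110, 450)])  # -> "energetic"
```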

    Crowdsourcing Emotions in Music Domain

    An important source of intelligence for music emotion recognition today comes from user-provided community tags about songs or artists. Recent crowdsourcing approaches, such as harvesting social tags, designing collaborative games and web services, or using Mechanical Turk, are becoming popular in the literature. They provide a cheap, quick, and efficient method, in contrast to professional labeling of songs, which is expensive and does not scale for creating large datasets. In this paper we discuss the viability of various crowdsourcing instruments, providing examples from research works. We also share our own experience, illustrating the steps we followed using tags collected from Last.fm to create two music mood datasets, which have been made public. While processing Last.fm affect tags, we observed that they tend to be biased towards positive emotions; the resulting datasets thus contain more positive songs than negative ones.
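
    A sketch of the tag-harvesting step, using Last.fm's real track.getTopTags endpoint but a toy affect lexicon in place of the authors' actual vocabulary:

```python
# Sketch of the tag-harvesting step: pull a track's top tags from the
# Last.fm API (real method: track.getTopTags) and bucket the affect
# terms as positive or negative. The tiny lexicon here is a toy
# stand-in for the authors' actual affect vocabulary.
import requests

API = "https://ws.audioscrobbler.com/2.0/"
POSITIVE = {"happy", "upbeat", "fun", "joyful", "cheerful"}
NEGATIVE = {"sad", "melancholy", "dark", "depressing", "angry"}

def mood_tags(artist, track, api_key):
    """Return the positive and negative affect tags for one track."""
    params = {"method": "track.gettoptags", "artist": artist,
              "track": track, "api_key": api_key, "format": "json"}
    tags = requests.get(API, params=params).json()["toptags"]["tag"]
    names = {t["name"].lower() for t in tags}
    return names & POSITIVE, names & NEGATIVE

# pos, neg = mood_tags("Pharrell Williams", "Happy", "YOUR_API_KEY")
# Across many tracks, positive tags dominate, which is the bias the
# authors report in the resulting datasets.
```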

    ARTE: Automated Generation of Realistic Test Inputs for Web APIs

    Automated test case generation for web APIs is a thriving research topic, where test cases are frequently derived from the API specification. However, this process is only partially automated, since testers are usually obliged to manually set meaningful valid test inputs for each input parameter. In this article, we present ARTE, an approach for the automated extraction of realistic test data for web APIs from knowledge bases like DBpedia. Specifically, ARTE leverages the specification of the API parameters to automatically search for realistic test inputs using natural language processing, search-based, and knowledge extraction techniques. ARTE has been integrated into RESTest, an open-source testing framework for RESTful APIs, fully automating the test case generation process. Evaluation results on 140 operations from 48 real-world web APIs show that ARTE can efficiently generate realistic test inputs for 64.9% of the target parameters, outperforming the state-of-the-art approach SAIGEN (31.8%). More importantly, ARTE supported the generation of over twice as many valid API calls (57.3%) as random generation (20%) and SAIGEN (26%), leading to a higher failure detection capability and uncovering several real-world bugs. These results show the potential of ARTE for enhancing existing web API testing tools, achieving an unprecedented level of automation.
    Funding: Junta de Andalucía APOLO (US-1264651); Junta de Andalucía EKIPMENT-PLUS (P18-FR-2895); Ministerio de Ciencia, Innovación y Universidades RTI2018-101204-B-C21 (HORATIO); Ministerio de Ciencia, Innovación y Universidades RED2018-102472-
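
    An illustrative sketch of ARTE's core idea, with a simplified SPARQL query and a hypothetical parameter-to-property mapping standing in for ARTE's actual NLP and search-based pipeline:

```python
# Illustrative sketch of ARTE's core idea: given an API parameter,
# query DBpedia for plausible real-world values to use as test inputs.
# The SPARQL query and the parameter-to-property mapping are
# simplified assumptions, not ARTE's actual pipeline.
from SPARQLWrapper import SPARQLWrapper, JSON

def realistic_values(property_uri, limit=10):
    """Fetch candidate test values for a parameter from DBpedia."""
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(f"""
        SELECT DISTINCT ?value WHERE {{
            ?s <{property_uri}> ?value .
        }} LIMIT {limit}
    """)
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    return [r["value"]["value"] for r in rows]

# E.g. a parameter named "country" might map to the dbo:country property:
print(realistic_values("http://dbpedia.org/ontology/country"))
```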