33 research outputs found

    Music emotion recognition: a multimodal machine learning approach

    Music emotion recognition (MER) is an emerging domain within the Music Information Retrieval (MIR) community, and searching for music by emotion is one of the selection methods most preferred by web users. As the world goes digital, the musical content in online databases such as Last.fm has expanded exponentially, requiring substantial manual effort to manage and keep up to date. The demand for innovative, adaptable search mechanisms that can be personalized to a user's emotional state has therefore gained increasing attention in recent years. This thesis addresses the music emotion recognition problem with several classification models fed by textual features as well as audio attributes extracted from the music. We build both supervised and semi-supervised classification designs across four experiments that examine the emotional role of audio features such as tempo, acousticness, and energy, as well as the impact of textual features extracted by two different approaches, TF-IDF and Word2Vec. Furthermore, we propose a multimodal approach using a combined feature set consisting of features from the audio content as well as from context-aware data. For this purpose, we generated a ground-truth dataset containing over 1,500 labeled song lyrics, together with an unlabeled corpus of more than 2.5 million Turkish documents, to build an accurate automatic emotion classification system. The analytical models were trained with several algorithms on cross-validated data using Python. The best performance attained with audio features alone was 44.2% accuracy, whereas textual features yielded better results, with accuracy scores of 46.3% and 51.3% under the supervised and semi-supervised learning paradigms, respectively. Finally, although we created a comprehensive feature set combining audio and textual features, this combination did not yield any significant improvement in classification performance.
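
    A minimal sketch of the two learning paradigms described above, assuming scikit-learn; the toy lyrics, labels, and the logistic-regression model are hypothetical placeholders, and the thesis' Word2Vec and audio features are not reproduced here.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.semi_supervised import SelfTrainingClassifier

    # Hypothetical toy corpus; -1 marks unlabeled lyrics, as expected by
    # scikit-learn's SelfTrainingClassifier.
    lyrics = [
        "tears fall in the silent rain",          # sad
        "dancing all night under bright lights",  # happy
        "alone again with my broken heart",       # sad
        "sunshine smiles and summer joy",         # happy
        "cold grey mornings and empty rooms",     # sad
        "party people jumping to the beat",       # happy
        "goodbye my love the end has come",       # sad
        "celebrate the good times tonight",       # happy
        "unlabeled verse number one",
        "unlabeled verse number two",
    ]
    labels = np.array([0, 1, 0, 1, 0, 1, 0, 1, -1, -1])

    # Supervised baseline: TF-IDF lyric features + logistic regression,
    # cross-validated on the labeled subset only.
    labeled = labels != -1
    labeled_lyrics = [t for t, keep in zip(lyrics, labeled) if keep]
    supervised = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    print(cross_val_score(supervised, labeled_lyrics, labels[labeled], cv=2).mean())

    # Semi-supervised variant: self-training also consumes the unlabeled lyrics.
    semi = make_pipeline(
        TfidfVectorizer(),
        SelfTrainingClassifier(LogisticRegression(max_iter=1000)),
    )
    semi.fit(lyrics, labels)
    print(semi.predict(["lonely nights and quiet tears"]))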

    CHORAL DIRECTORS' PERCEPTIONS OF CHORAL TONE


    A personalized hybrid music recommender based on empirical estimation of user-timbre preference

    Automatic recommendation, as a machine learning task, has developed rapidly over the last decade along with the trend of big data. Music recommendation in particular is a prominent topic because of the commercial value it brings to the large music industry. Popular online music recommendation services, including Spotify, Pandora, and Last.FM, use similarity-based approaches to generate recommendations. In this thesis, I propose a personalized music recommendation approach based on probability estimation, with no similarity calculation involved. In my system, each user receives a score for every piece of music. The score combines two estimated probabilities of acceptance: one based on the user's preferences for timbres, the other the empirical acceptance rate of the music piece. The weighted arithmetic mean proved to be the best-performing combination function. An online demonstration of the system is available at www.shuyang.eu/plg/, and its recommendations show that the system works effectively. Analysis of the algorithm shows that the system has good reactivity and scalability and does not suffer from the cold-start problem. The accuracy of the approach is evaluated on the Million Song Dataset: the system achieves a pairwise ranking accuracy of 0.592, which outperforms random ranking (0.5) and ranking by popularity (0.557). I have not yet found any other music recommendation method evaluated with ranking accuracy; as a point of comparison, the PageRank algorithm (for web page ranking) has a pairwise ranking accuracy of 0.567.
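
    A minimal sketch of the score combination and of the pairwise ranking accuracy metric described above; the probability estimates, play counts, and the weight w below are hypothetical illustrative inputs, not values from the thesis.

    import itertools

    def combined_score(p_timbre: float, p_popularity: float, w: float = 0.5) -> float:
        """Weighted arithmetic mean of the two acceptance-probability estimates."""
        return w * p_timbre + (1.0 - w) * p_popularity

    def pairwise_ranking_accuracy(scores, relevance):
        """Fraction of item pairs whose score order matches their relevance order."""
        correct = total = 0
        for (s_i, r_i), (s_j, r_j) in itertools.combinations(zip(scores, relevance), 2):
            if r_i == r_j:
                continue  # tied relevance carries no ordering information
            total += 1
            correct += (s_i > s_j) == (r_i > r_j)
        return correct / total if total else 0.0

    # Toy usage: three tracks scored by the system, ranked against ground-truth
    # play counts (hypothetical relevance signal).
    scores = [combined_score(0.8, 0.3), combined_score(0.4, 0.6), combined_score(0.2, 0.1)]
    print(pairwise_ranking_accuracy(scores, relevance=[5, 3, 1]))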

    Automated generation of movie tributes

    This thesis' purpose is to generate a movie tribute in the form of a videoclip, given a movie and a cohesive music segment as input. A tribute is taken to be a video containing the most meaningful clips from the movie, played sequentially while the music plays. In this work, the clips in the final tribute result from summarizing the movie subtitles with a generic summarization algorithm. It is important that the artifact be coherent and fluid, so there is a need to balance the selection of important content against the selection of content that is in harmony with the music. To achieve this, clips are filtered to ensure that only those carrying the same emotion as the music appear in the final video. This is done by extracting vectors of emotion-related audio features from the scenes the clips belong to and from the music, and then comparing them with a distance measure. Finally, the filtered clips fill the music's length in chronological order. Results were positive: on average, the produced tributes scored 7, on a scale of 0 to 10, on content selection and emotional coherence criteria in human evaluation.
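
    A minimal sketch of the emotion-based filtering step, assuming each scene and the music have already been summarized as emotion-related feature vectors; the vectors, the Euclidean distance choice, and the threshold are illustrative assumptions, since the abstract does not specify the distance measure.

    import numpy as np

    def euclidean(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.linalg.norm(a - b))

    def filter_clips(scene_vectors: dict, music_vector: np.ndarray, threshold: float):
        """Keep clips whose scene's emotion vector is close enough to the music's."""
        return sorted(
            clip for clip, vec in scene_vectors.items()
            if euclidean(vec, music_vector) <= threshold
        )  # sorting clip indices approximates the chronological ordering

    # Hypothetical per-scene feature vectors (e.g. energy and valence proxies).
    scenes = {0: np.array([0.9, 0.2]), 1: np.array([0.1, 0.8]), 2: np.array([0.85, 0.3])}
    music = np.array([0.8, 0.25])
    print(filter_clips(scenes, music, threshold=0.2))  # -> [0, 2]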

    An integrative computational modelling of music structure apprehension
