
    BEYOND THE LYRICS: THE REPRESENTATION OF ISLAMIC VALUES FROM "DOWNFALL - THE BATTLE OF UHUD"

    This study aims to uncover the Islamic values contained in the lyrics of "Downfall: The Battle of Uhud". It is a qualitative study: the researcher used the textual information in the lyrics of "Downfall: The Battle of Uhud" as the data source to be analyzed. Data collection, coding of the Islamic values drawn from words or phrases in the lyrics, and reviewing that data were the subsequent procedures on which the findings of this study are based. The researcher used content analysis, since the data analyzed took the form of documents. As a result, several Islamic values emerged, which can be described as follows: disobedience brings misery, arrogance leads people into disbelief, steadfast faith overcomes every obstacle, and greed is to be avoided. Based on these findings, it can be concluded that the song's lyrics depict many Islamic values that can serve as a guide for doing good. Keywords: Islamic values, song lyrics

    Multimodal music information processing and retrieval: survey and future challenges

    Towards improving performance in various music information processing tasks, recent studies exploit different modalities able to capture diverse aspects of music. Such modalities include audio recordings, symbolic music scores, mid-level representations, motion and gestural data, video recordings, editorial or cultural tags, lyrics, and album cover art. This paper critically reviews the various approaches adopted in Music Information Processing and Retrieval and highlights how multimodal algorithms can help Music Computing applications. First, we categorize the related literature based on the applications they address. Subsequently, we analyze existing information fusion approaches, and we conclude with the set of challenges that the Music Information Retrieval and Sound and Music Computing research communities should focus on in the coming years.
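As a rough illustration of one fusion strategy surveys like this cover, the sketch below shows late fusion: per-modality relevance scores are combined with a weighted sum. The modality names, weights, and scores are invented for illustration and do not come from the paper.

```python
# Hedged sketch of late fusion: each modality scores items independently,
# and the scores are combined with per-modality weights. All values below
# are made-up examples, not results from the survey.

def late_fusion(scores_by_modality, weights):
    """Combine per-modality scores for each item with a weighted sum."""
    fused = {}
    for modality, scores in scores_by_modality.items():
        w = weights.get(modality, 0.0)
        for item, s in scores.items():
            fused[item] = fused.get(item, 0.0) + w * s
    return fused

scores = {
    "audio":  {"track_a": 0.9, "track_b": 0.4},
    "lyrics": {"track_a": 0.2, "track_b": 0.8},
}
weights = {"audio": 0.6, "lyrics": 0.4}
fused = late_fusion(scores, weights)
best = max(fused, key=fused.get)
```

In practice the weights would be tuned per task; early fusion (concatenating modality features before modeling) is the usual alternative.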

    The Ethnic Lyrics Fetcher tool


    Intelligent Retrieval of Song Lyrics on the Web (Recuperação Inteligente de Letras de Músicas na Web)

    The task of automatically retrieving and extracting song lyrics from the web is of great importance to several applications in the field of Music Information Retrieval. Most existing approaches to this problem depend on computational resources that are often unavailable for songs that are not popular or are not in English. This article presents a system for the automatic retrieval of song lyrics on the web, called Ethnic Lyrics Fetcher (ELF), which features a new mechanism for the automatic detection and extraction of lyrics. Two experiments were carried out to evaluate the system. In the first, the lyrics-extraction mechanism was evaluated against 12 websites that present lyrics in a well-defined structure, as well as against the method considered the state of the art for this problem. In the second, the performance of the system was evaluated as a tool for searching, identifying, and extracting song lyrics on the web. Analysis of the experimental results showed that ELF is a useful tool to assist researchers and users in retrieving musical information.
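The article does not detail ELF's extraction mechanism, but a minimal heuristic in the same spirit can be sketched with the standard library: parse the page and keep the block-level element with the most lines, on the assumption that lyrics pages render the lyric body as the longest multi-line text block. The sample page is invented.

```python
# Hedged sketch of a generic lyrics-extraction heuristic (NOT the actual
# ELF mechanism): collect the text of each <div>/<p> block and return the
# block with the most line breaks.
from html.parser import HTMLParser

class BlockCollector(HTMLParser):
    """Collects the text content of each top-level <div>/<p> block."""
    def __init__(self):
        super().__init__()
        self.blocks = []
        self._depth = 0
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if tag in ("div", "p"):
            self._depth += 1

    def handle_endtag(self, tag):
        if tag in ("div", "p") and self._depth:
            self._depth -= 1
            if self._depth == 0 and self._buf:
                self.blocks.append("\n".join(self._buf))
                self._buf = []

    def handle_data(self, data):
        if self._depth and data.strip():
            self._buf.append(data.strip())

def extract_lyrics(html):
    """Heuristic: the block with the most lines is assumed to be the lyrics."""
    parser = BlockCollector()
    parser.feed(html)
    return max(parser.blocks, key=lambda b: b.count("\n"), default="")
```

A real system would combine such structural cues with content cues (line length, repetition, language) to distinguish lyrics from navigation and comments.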


    Using Automated Rhyme Detection to Characterize Rhyming Style in Rap Music

    Imperfect and internal rhymes are two important features of rap music previously ignored in the music information retrieval literature. We developed a method of scoring potential rhymes using a probabilistic model based on phoneme frequencies in rap lyrics. We used this scoring scheme to automatically identify internal and line-final rhymes in song lyrics and demonstrated the performance of this method compared to rule-based models. We then calculated higher-level rhyme features and used them to compare rhyming styles in song lyrics from different genres, and for different rap artists. We found that these detected features corresponded to real-world descriptions of rhyming style and were strongly characteristic of different rappers, resulting in potential applications to style-based comparison, music recommendation, and authorship identification.
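A minimal sketch of the core idea, assuming a phoneme-frequency table (the frequencies and ARPAbet-style transcriptions below are made up, and this is not the authors' actual model): phonemes matched when two words are aligned from their ends contribute a negative-log-frequency term, so matches on rarer phonemes count for more.

```python
import math

# Hypothetical phoneme frequencies over a rap-lyrics corpus (illustrative only).
PHONEME_FREQ = {"AH": 0.12, "N": 0.10, "IY": 0.05, "T": 0.08, "S": 0.07, "OW": 0.03}

def rhyme_score(phones_a, phones_b):
    """Score a candidate rhyme pair: phonemes aligned from the word end that
    match contribute -log(frequency), so rare-phoneme matches score higher."""
    score = 0.0
    for p, q in zip(reversed(phones_a), reversed(phones_b)):
        if p == q:
            score += -math.log(PHONEME_FREQ.get(p, 0.01))
    return score

high = rhyme_score(["G", "R", "OW"], ["F", "L", "OW"])  # shares the rare OW
low = rhyme_score(["S", "IH", "T"], ["B", "AE", "T"])   # shares only the common T
```

The intuition mirrors the abstract: a rhyme on a rare sound is less likely to occur by chance, so it is stronger evidence of deliberate rhyming than a match on a very common sound.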

    Content-Based Music Recommendation using Deep Learning

    Music streaming services use recommendation systems to improve the customer experience by generating favorable playlists and by fostering the discovery of new music. State-of-the-art recommendation systems use both collaborative filtering and content-based recommendation methods. Collaborative filtering suffers from the cold-start problem: it can only make recommendations for music for which it has enough user data, so content-based methods are preferred. Most current content-based recommendation systems apply convolutional neural networks to spectrograms of track audio, with architectures commonly borrowed directly from the field of computer vision. This study shows that musically motivated convolutional neural network architectures outperform architectures that are highly optimized for image-related tasks. A content-based recommendation model is built using musically motivated deep learning architectures. The model is shown to map an artist onto an artist embedding space in which its nearest neighbors by cosine similarity are related artists, yielding good recommendations. It is also shown that metadata such as lyrics, artist origin, and year significantly improve these mappings when combined with raw audio data.
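The nearest-neighbor lookup the abstract describes can be sketched as follows, with toy hand-made vectors standing in for the model's learned artist embeddings (the artist names and values are invented):

```python
import math

# Toy artist embedding space; a real model would learn these vectors
# from audio and metadata.
EMBEDDINGS = {
    "artist_a": [0.9, 0.1, 0.0],
    "artist_b": [0.8, 0.2, 0.1],
    "artist_c": [0.0, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(artist, k=1):
    """Return the k artists most similar to `artist` by cosine similarity."""
    query = EMBEDDINGS[artist]
    others = [(name, cosine(query, vec))
              for name, vec in EMBEDDINGS.items() if name != artist]
    others.sort(key=lambda t: t[1], reverse=True)
    return [name for name, _ in others[:k]]
```

Cosine similarity is the natural choice here because it compares embedding directions rather than magnitudes, so artists with similar profiles rank close regardless of vector scale.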