
    Multimodal music information processing and retrieval: survey and future challenges

    Toward improving performance on various music information processing tasks, recent studies exploit different modalities that capture diverse aspects of music. Such modalities include audio recordings, symbolic music scores, mid-level representations, motion and gestural data, video recordings, editorial or cultural tags, lyrics, and album cover art. This paper critically reviews the various approaches adopted in Music Information Processing and Retrieval and highlights how multimodal algorithms can help Music Computing applications. First, we categorize the related literature based on the applications it addresses. Subsequently, we analyze existing information fusion approaches, and we conclude with the set of challenges that the Music Information Retrieval and Sound and Music Computing research communities should focus on in the coming years.
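    As a toy illustration of one kind of information fusion such a survey covers, the sketch below combines per-modality class probabilities by weighted averaging (late fusion). The modality names, weights, and classifier outputs are invented for the example and are not taken from the paper.

    # Minimal late-fusion sketch: combine per-modality genre predictions by
    # weighted averaging of class probabilities. Modality names, weights, and
    # the probability vectors are placeholders, not from the survey.
    import numpy as np

    def late_fusion(probabilities: dict, weights: dict) -> np.ndarray:
        """Weighted average of class-probability vectors from several modalities.

        probabilities: modality name -> (n_classes,) probability vector
        weights:       modality name -> relative importance
        """
        total = sum(weights[m] for m in probabilities)
        fused = sum(weights[m] * probabilities[m] for m in probabilities) / total
        return fused

    # Example: audio and lyrics classifiers disagree; fusion arbitrates.
    audio_probs  = np.array([0.7, 0.2, 0.1])   # e.g. rock, jazz, classical
    lyrics_probs = np.array([0.3, 0.5, 0.2])
    fused = late_fusion({"audio": audio_probs, "lyrics": lyrics_probs},
                        {"audio": 0.6, "lyrics": 0.4})
    print(fused.argmax())  # index of the fused top class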

    An Analysis of Creator Collaboration in N-th Order Derivative Creation Videos

    In N-th order derivative creation, where new derivative content is produced one after another from original content, it is common for multiple creators to collaborate on a single piece of content. This paper analyzes the effects of collaboration in music-related N-th order derivative videos posted to a video-sharing service. Specifically, the analysis takes three perspectives: (1) how collaboration affects the way videos are viewed, (2) how collaboration affects creators' activity, and (3) the characteristics of creators derived from their collaboration relationships. The analysis shows that videos produced through collaboration receive more views, that creators who produce collaborative videos remain active in derivative creation for longer periods, and that more than 25% of collaborating creator pairs collaborate multiple times, indicating a certain continuity in collaboration.

    Theoretical and applied issues on the impact of information on musical creativity: an information seeking behaviour perspective.

    This century is an era of information and knowledge intensification. Novel information systems and services are emerging through modern online information technologies. The rapid changes in the online information environment have greatly affected the way in which individuals search for music information and engage with musical creativity, within different music domains and for different purposes, including composition, performance and improvisation, analysis, and listening. The aim of this book chapter is to investigate the theoretical and practical issues relating to the impact of music information on musical creativity from an information seeking behaviour perspective. Musical creativity is perceived as an intentional process that acts as a motivator for information seeking, leading to the utilization of different information resources and to the development of specific information seeking preferences. The chapter highlights the implications for research in this area and presents a research agenda for the interrelation between music information seeking and musical creativity.

    Form-Aware, Real-Time Adaptive Music Generation for Interactive Experiences

    Many experiences offered to the public through interactive theatre, theme parks, video games, and virtual environments use music to complement the participants’ activity. There is a range of approaches to this, from straightforward playback of ‘stings’, to looped phrases, to on-the-fly note generation. Within the latter, traditional genres and forms are often not represented, the music instead being typically loose in form and structure. We present work in progress on a new method for real-time music generation that can preserve traditional musical genres whilst being reactive in form to the activities of participants. The results of simulating participant trajectories and the effect this has on the music generation algorithms are presented, showing that the approach can successfully handle variable-length forms whilst remaining substantially within the given musical style. A toy sketch of what form-aware, reactive sequencing can look like is given below.
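    The following sketch is an illustrative reduction, not the authors' algorithm: it walks a fixed A–B–A form whose section lengths stretch or shrink, within stated bounds, in response to a simulated per-bar participant-activity signal.

    # Toy sketch of form-aware adaptive sequencing: a piece is a list of
    # sections (A, B, A), each with minimum and maximum lengths in bars; a
    # per-bar "activity" value from the participants decides when to move on
    # early or hold the current section. Form, thresholds, and the activity
    # trajectory are invented for illustration.
    FORM = [("A", 4, 8), ("B", 4, 8), ("A", 4, 8)]  # (label, min_bars, max_bars)

    def generate(activity_per_bar):
        """Yield (section_label, bar_index) pairs, consuming one activity value per bar."""
        activity = iter(activity_per_bar)
        for label, min_bars, max_bars in FORM:
            bars = 0
            while bars < max_bars:
                yield label, bars
                bars += 1
                a = next(activity, 0.0)
                # Leave the section early (but never before min_bars) when
                # participant activity spikes, so the music reacts in form.
                if bars >= min_bars and a > 0.8:
                    break

    # Simulated participant trajectory: mostly calm, with occasional spikes.
    trajectory = [0.1, 0.2, 0.1, 0.3, 0.9, 0.2, 0.1, 0.2, 0.4, 0.1,
                  0.9, 0.1, 0.2, 0.3, 0.1, 0.2, 0.1, 0.1]
    for section, bar in generate(trajectory):
        print(section, bar)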

    Joint Multi-Pitch Detection Using Harmonic Envelope Estimation for Polyphonic Music Transcription

    In this paper, a method for automatic transcription of music signals based on joint multiple-F0 estimation is proposed. As a time-frequency representation, the constant-Q resonator time-frequency image is employed, while a novel noise suppression technique based on a pink-noise assumption is applied in a preprocessing step. In the multiple-F0 estimation stage, the optimal tuning and inharmonicity parameters are computed and a salience function is proposed in order to select pitch candidates. For each pitch candidate combination, an overlapping partial treatment procedure is used, which is based on a novel spectral envelope estimation procedure for the log-frequency domain, in order to compute the harmonic envelope of candidate pitches. In order to select the optimal pitch combination for each time frame, a score function is proposed which combines spectral and temporal characteristics of the candidate pitches and also aims to suppress harmonic errors. For postprocessing, hidden Markov models (HMMs) and conditional random fields (CRFs) trained on MIDI data are employed in order to boost transcription accuracy. The system was trained on isolated piano sounds from the MAPS database and was tested on classical and jazz recordings from the RWC database, as well as on recordings from a Disklavier piano. A comparison with several state-of-the-art systems is provided using a variety of error metrics, with encouraging results.
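    To make the candidate-selection idea concrete, here is a minimal harmonic-summation salience computed on a constant-Q spectrogram. It is only a sketch assuming librosa's CQT and a simple 1/h harmonic weighting; it leaves out the paper's tuning/inharmonicity estimation, overlapping-partial treatment, and HMM/CRF postprocessing, and all parameter values are illustrative.

    # Minimal harmonic-summation salience over a constant-Q spectrogram,
    # standing in for the pitch-candidate selection step described above.
    import numpy as np
    import librosa

    BINS_PER_OCTAVE = 36
    N_BINS = 7 * BINS_PER_OCTAVE           # ~7 octaves above fmin
    FMIN = librosa.note_to_hz("A0")

    def pitch_salience(y, sr, n_harmonics=5, top_k=5):
        """Return the top_k most salient CQT bins per frame."""
        C = np.abs(librosa.cqt(y, sr=sr, fmin=FMIN,
                               n_bins=N_BINS, bins_per_octave=BINS_PER_OCTAVE))
        salience = np.zeros_like(C)
        for h in range(1, n_harmonics + 1):
            # The h-th harmonic of a bin lies log2(h) octaves above it,
            # i.e. round(BINS_PER_OCTAVE * log2(h)) bins higher.
            shift = int(round(BINS_PER_OCTAVE * np.log2(h)))
            if shift >= C.shape[0]:
                break
            salience[: C.shape[0] - shift] += C[shift:] / h   # 1/h weighting
        # Indices of the top_k candidate pitch bins in every frame.
        return np.argsort(salience, axis=0)[-top_k:, :]

    # Usage (hypothetical file): candidates = pitch_salience(*librosa.load("piano.wav"))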
