
    Is Vivaldi smooth and takete? Non-verbal sensory scales for describing music qualities

    Studies on the perception of music qualities (such as induced or perceived emotions, performance styles, or timbre nuances) make extensive use of verbal descriptors. Although many authors have noted that particular music qualities can hardly be described by means of verbal labels, few studies have tried alternatives. This paper explores the use of non-verbal sensory scales to represent different perceived qualities in Western classical music. Musically trained and untrained listeners were asked to listen to six musical excerpts in a major key and to evaluate them from a sensorial and semantic point of view (Experiment 1). The same design (Experiment 2) was used with musically trained and untrained listeners who listened to six musical excerpts in a minor key. The overall findings indicate that subjects' ratings on non-verbal sensory scales are consistent throughout, and the results support the hypothesis that sensory scales can convey specific sensations that cannot be described verbally, offering interesting insights that deepen our knowledge of the relationship between music and other sensorial experiences. Such research can foster interesting applications in music information retrieval and the exploration of timbre spaces, together with experiments applied to different musical cultures and contexts.
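    As a concrete illustration of the kind of analysis such a design affords, the following sketch computes the agreement between trained and untrained listeners' mean ratings of six excerpts on one non-verbal sensory scale; the toy data, group sizes, and choice of a Pearson correlation are assumptions made for illustration, not details taken from the paper.

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(0)
        trained = rng.uniform(0, 1, size=(20, 6))    # 20 trained listeners x 6 excerpts (toy data)
        untrained = rng.uniform(0, 1, size=(20, 6))  # 20 untrained listeners x 6 excerpts (toy data)

        # Correlate the per-excerpt mean ratings of the two groups; a high r would
        # indicate that the sensory scale is used consistently across listener groups.
        r, p = pearsonr(trained.mean(axis=0), untrained.mean(axis=0))
        print(f"between-group consistency: r = {r:.2f}, p = {p:.3f}")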

    Current Challenges and Visions in Music Recommender Systems Research

    Music recommender systems (MRS) have experienced a boom in recent years, thanks to the emergence and success of online streaming services, which nowadays make almost all of the world's music available at the user's fingertips. While today's MRS considerably help users find interesting music in these huge catalogs, MRS research still faces substantial challenges. In particular, when it comes to building, incorporating, and evaluating recommendation strategies that integrate information beyond simple user-item interactions or content-based descriptors and dig deep into the very essence of listener needs, preferences, and intentions, MRS research becomes a big endeavor and related publications quite sparse. The purpose of this trends and survey article is twofold. First, we identify and shed light on what we believe are the most pressing challenges MRS research is facing, from both academic and industry perspectives; we review the state of the art towards solving these challenges and discuss its limitations. Second, we detail possible future directions and visions we contemplate for the further evolution of the field. The article should therefore serve two purposes: giving the interested reader an overview of current challenges in MRS research and providing guidance for young researchers by identifying interesting, yet under-researched, directions in the field.
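    To make the distinction between the two conventional information sources concrete, here is a minimal, purely illustrative sketch of a hybrid scorer that blends user-item interactions with content-based descriptors; the toy data, the cosine measure, and the alpha blending weight are assumptions made for exposition and are not taken from the article.

        import numpy as np

        # Toy data: 4 users x 5 tracks play counts, and 5 tracks x 3 content features.
        plays = np.array([[5, 0, 2, 0, 0],
                          [0, 3, 0, 4, 0],
                          [1, 0, 4, 0, 2],
                          [0, 2, 0, 3, 1]], dtype=float)
        features = np.array([[0.9, 0.1, 0.0],
                             [0.1, 0.8, 0.2],
                             [0.7, 0.2, 0.3],
                             [0.0, 0.9, 0.1],
                             [0.4, 0.3, 0.8]])

        def cosine(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

        def hybrid_scores(user, alpha=0.5):
            """Blend a collaborative score (plays of similar users) with a
            content score (similarity to the user's listening profile)."""
            user_sims = np.array([cosine(plays[user], plays[u]) for u in range(len(plays))])
            collab = user_sims @ plays                     # weighted plays of similar users
            profile = plays[user] @ features               # user taste in content-feature space
            content_sim = np.array([cosine(profile, f) for f in features])
            return alpha * collab / (collab.max() + 1e-9) + (1 - alpha) * content_sim

        user = 0
        scores = hybrid_scores(user)
        scores[plays[user] > 0] = -np.inf                  # do not re-recommend known tracks
        print("recommended track:", int(np.argmax(scores)))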

    Gender and the performance of music

    This study evaluates propositions in the literature that musical phenomena are gendered. Were they present in the musical "message," gendered qualities might be imparted at any of three stages of the music-communication interchange: the process of composition, its realization into sound by the performer, or its imposition by the listener in the process of perception. The research was designed to obtain empirical evidence for evaluating claims of gendering at these three stages. Three research hypotheses were identified and the relevant literature on music behaviors and perception reviewed. New measurement instruments were constructed to test the three hypotheses: (i) two listening sequences, each containing 35 extracts from published recordings of compositions in the classical music repertoire, and (ii) four "music characteristics" scales, with polarities defined by verbal descriptors, designed to assess the dynamic and emotional valence of the musical extracts featured in the listening sequences. Sixty-nine musically trained listeners listened to the two sequences and were asked to identify the sex of the performing artist of each extract; a second group of 23 listeners evaluated the extracts using the four music characteristics scales. The results did not support claims that music structures are inherently gendered, nor proposals that performers impart their own sex-specific qualities to the music. It is concluded that gendered properties are imposed subjectively by the listener and are primarily related to the tempo of the music.
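    As an illustration of how such identification data can be checked against chance, the sketch below runs a one-sided binomial test on a hypothetical count of correct sex identifications for the 35 extracts in one listening sequence; the count and the choice of test are assumptions for illustration, not figures or methods reported in the study.

        from scipy.stats import binomtest

        n_extracts = 35
        n_correct = 19   # hypothetical number of correct identifications by one listener
        result = binomtest(n_correct, n_extracts, p=0.5, alternative="greater")
        print(f"accuracy = {n_correct / n_extracts:.2f}, p = {result.pvalue:.3f}")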

    Radio programming for young adults : three New Zealand case studies : a thesis presented in partial fulfilment of the requirements for the degree of Master of Arts in Media Studies at Massey University

    The central question posed by this thesis is how radio stations, and more specifically programme directors, attract and construct an audience of listeners aged between 18 and 25 years old. The thesis examines the political and social factors influencing broadcasters targeting young adults both in this country and internationally. It then analyses the broadcasts and programming strategies of three New Zealand radio stations: a student, an iwi and a commercial station. Broadcasting is examined on three levels: firstly, the political and historical context of radio broadcasting is outlined, including issues such as media ownership, government regulation and the structure of media institutions; secondly, the daily operating practices of broadcasters are assessed, along with how programming choices are made in light of externally imposed constraints such as the desire to make a profit; and finally, textual analysis is used to examine the material that is produced for broadcast. Programme directors are defined here as key gatekeepers because they determine the way a radio station sounds within the parameters of a particular format. Williams (1990) correctly maintains that broadcasting forms a continuous flow, but for ease of academic discussion each of these radio stations is analysed in terms of its music programming, advertising and promotion, news and information, and DJ chat. Analysis of the verbal aspects of the broadcasts draws on Goffman (1981), Brand and Scannell (1991) and Montgomery (1986). Music programming is discussed with reference to Rothenbuhler (1985) and unstructured interviews conducted by the researcher with the programme directors at each of the three stations. The New Zealand case studies exemplify international trends evident in radio stations which target 18 to 25 year olds. The programme directors in question presume this age group listens to the radio in the evenings, prefers music to talk and current affairs, likes newly released material rather than older songs, and is likely to purchase leisure and entertainment products. The case studies provide a contemporary snapshot of how programme directors construct and perceive a specific radio audience. The thesis concludes that programmers targeting young adults use music to define the station's sound, construct an audience and sell advertising.

    From surround to true 3-D

    Progressing from surround sound to true 3-D requires updating the psychoacoustical theories that underlie current technologies. This paper shows how J. J. Gibson's ecological approach to perception can be applied to audio perception and used to derive 3-D audio technologies based on intelligent pattern recognition and active hypothesis testing. These technologies are suggested as methods for generating audio environments that are believable and can be explored.

    Musemo: Express Musical Emotion Based on Neural Network

    Music elicits emotional responses, which enable people to empathize with the emotional states induced by music, experience changes in their current feelings, receive comfort, and relieve stress (Juslin & Laukka, 2004). Music emotion recognition (MER) is a field of research that extracts emotions from music through various systems and methods. Interest in this field is increasing as researchers try to use it for psychiatric purposes. In order to extract emotions from music, MER requires music and emotion labels for each piece. Many MER studies use emotion labels drawn from psychological models that are not specific to music, such as Russell's circumplex model of affect (Russell, 1980) and Ekman's six basic emotions (Ekman, 1999). However, Zentner, Grandjean, and Scherer suggest that the emotions commonly evoked by music are concentrated in specific areas rather than spread across the entire spectrum of emotions (Zentner, Grandjean, & Scherer, 2008). Thus, existing MER studies have difficulty with emotion labels that are not widely agreed upon by musicians and listeners. This study proposes a musical emotion recognition model, "Musemo", that follows the Geneva emotion music scale proposed by music psychologists and is based on a convolutional neural network. We evaluate the accuracy of the model by varying the length of the music samples used as input to Musemo and achieved RMSE (root mean squared error) performance of up to 14.91%. We also examine the correlation among emotion labels by reducing Musemo's emotion output vector to two dimensions through principal component analysis. The results are similar to those of Vuoskoski and Eerola's analysis of the Geneva emotion music scale (Vuoskoski & Eerola, 2011). We hope that this study can be expanded to inform treatments that comfort those in need of psychological empathy in modern society.
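    As a sketch of the general approach described here, the code below defines a small convolutional network that maps a mel-spectrogram clip to a nine-dimensional output (one value per factor of the Geneva scale), reports RMSE, and reduces the predictions to two dimensions with PCA; the architecture, input shape, and library choices (PyTorch, scikit-learn) are assumptions made for illustration and do not reproduce the actual Musemo implementation.

        import torch
        import torch.nn as nn
        from sklearn.decomposition import PCA

        class EmotionCNN(nn.Module):
            """Toy CNN: mel-spectrogram in, nine Geneva-style emotion ratings out."""
            def __init__(self, n_emotions=9):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.AdaptiveAvgPool2d((4, 4)),
                )
                self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 4 * 4, n_emotions), nn.Sigmoid())

            def forward(self, x):                      # x: (batch, 1, mel_bins, frames)
                return self.head(self.features(x))

        model = EmotionCNN()
        spec = torch.randn(8, 1, 128, 216)             # dummy batch of mel-spectrogram clips
        target = torch.rand(8, 9)                      # dummy listener ratings scaled to [0, 1]
        pred = model(spec)
        rmse = torch.sqrt(nn.functional.mse_loss(pred, target))
        print(f"RMSE: {rmse.item():.3f}")

        # Reduce the emotion outputs to two dimensions to inspect how the labels co-vary.
        coords = PCA(n_components=2).fit_transform(pred.detach().numpy())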

    The Effect of Low Frequency Sound on Listening Level

    A listener's preferred listening level (PLL) for music under headphones has been found to be related to factors such as music genre, external noise, and headphone fit. The purpose of this study was to investigate the relationship between a listener's PLL and the amount of low frequency sound in music. The study also investigated the relationship between a listener's PLL, their music preference, and their familiarity with the songs used in the experiment. Forty-four participants aged 18 to 35 years with normal hearing were recruited from a university population. Participants completed listening tasks comprising 16 experimental stimuli representing the pop, rock, and classical genres, as well as a self-selected song of their preference. High-pass filtering with corner frequencies of 100, 173, and 300 Hz was applied to 12 of the stimuli, while 4 stimuli remained unfiltered. Participants adjusted the volume setting to their preference for each stimulus. A post-test survey was administered to rate the participants' familiarity with the songs used in the listening task. A two-way repeated-measures ANOVA demonstrated significant differences between the songs (p = 0.009) and the filter settings that removed low frequency sound (p = 0.009), as well as an interaction effect between these factors (p = 0.018). A post-hoc analysis revealed that PLLs for the classical song were significantly lower than for the other three songs, and that only the 300 Hz high-pass filter setting produced a PLL significantly higher than the baseline "no filter" setting. No significant correlation was found between participants' ranking of song familiarity and the volume setting for that song. The use of a preferred or familiar song did not have a significant effect on measured PLL in this study. These results demonstrate that the absence of low frequency sound can lead to an increase in listener PLL for music; however, observations from the data revealed that this trend may not hold for all listeners. The practical implication of these findings is that a transducer with poor low-frequency response may lead to higher listener PLLs. Similar future studies should consider other methods to further clarify the influence of low frequency sound on PLL and how other known influences on PLL (e.g., environmental noise) may interact.
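    For readers who want to approximate the stimulus preparation, the sketch below applies high-pass filters at the three corner frequencies reported above; the WAV input file name, the fourth-order Butterworth design, and the zero-phase filtering are assumptions made for illustration, since the study does not specify its filter implementation.

        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import butter, sosfiltfilt

        rate, audio = wavfile.read("stimulus.wav")       # hypothetical input stimulus
        audio = audio.astype(np.float64)

        for corner_hz in (100, 173, 300):
            sos = butter(4, corner_hz, btype="highpass", fs=rate, output="sos")
            filtered = sosfiltfilt(sos, audio, axis=0)   # zero-phase high-pass filtering
            wavfile.write(f"stimulus_hp{corner_hz}.wav", rate, filtered.astype(np.int16))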
