
    Learning music from each other: synchronization, turn-taking, or imitation?

    In an experimental study, we investigated how well novices can learn from each other in situations of technology-aided musical skill acquisition, comparing joint and solo learning, and learning through imitation, synchronization, and turn-taking. Fifty-four participants became familiar, either solo or in pairs, with three short musical melodies and then individually performed each from memory. Each melody was learned in a different way: participants in the solo group were asked, via an instructional video, to 1) play in synchrony with the video, 2) take turns with the video, or 3) imitate the video. Participants in the duo group engaged in the same learning trials, but with a partner. Novices in both groups performed more accurately in pitch and time when learning in synchrony and turn-taking than in imitation. No differences were found between solo and joint learning. These results suggest that musical learning benefits from a shared, in-the-moment musical experience, where responsibilities and cognitive resources are distributed between biological (i.e., peers) and hybrid (i.e., participant(s) and computer) assemblies.

    The multi-hub academic conference: global, inclusive, culturally diverse, creative, sustainable

    New conference formats are emerging in response to COVID-19 and climate change. Virtual conferences are sustainable and inclusive regardless of participant mobility (financial means, caring commitments, disability), but lack face-to-face contact. Hybrid conferences (physical meetings with additional virtual presentations) tend to discriminate against non-fliers and encourage unsustainable flying. Multi-hub conferences mix real and virtual interactions during talks and social breaks and are distributed across nominally equal hubs. We propose a global multi-hub solution in which all hubs interact daily in real time with all other hubs in parallel sessions by internet videoconferencing. Conference sessions are confined to three equally spaced 4-h UTC timeslots. Local programs comprise morning and afternoon/evening sessions (recordings from night sessions can be watched later). Three reference hubs are located exactly 8 h apart; additional hubs are within 2 h of a reference, and their programs are aligned with the closest reference hub. The conference experience at each hub depends on the number of local participants and the time difference to the nearest reference. Participants are motivated to travel to the nearest hub. Mobility-based discrimination is minimized. Lower costs facilitate diversity, equity, and inclusion. Academic quality, creativity, enjoyment, and low-carbon sustainability are simultaneously promoted.
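    To make the scheduling arithmetic concrete, here is a minimal sketch (not from the paper) of how a hub could be assigned to its nearest reference hub and how the three UTC slots map to local times. The concrete reference offsets (UTC-8, UTC+0, UTC+8) and slot start hours (00:00, 08:00, 16:00 UTC) are illustrative assumptions consistent with the constraints described above, not values taken from the abstract.

    ```python
    # Illustrative sketch: nearest-reference assignment and local session times
    # for three equally spaced 4-hour UTC slots. The specific reference offsets
    # and slot start hours below are assumptions, not the paper's values.
    REFERENCE_OFFSETS = [-8, 0, 8]   # reference hubs exactly 8 h apart (assumed anchors)
    SLOT_STARTS_UTC = [0, 8, 16]     # three equally spaced 4-h slots (assumed start hours)
    SLOT_LENGTH_H = 4

    def nearest_reference(hub_offset_utc):
        """Reference-hub offset closest to a given hub's UTC offset."""
        return min(REFERENCE_OFFSETS, key=lambda ref: abs(hub_offset_utc - ref))

    def local_sessions(hub_offset_utc):
        """(start, end) local hours of each UTC slot at a given hub."""
        return [((s + hub_offset_utc) % 24, (s + hub_offset_utc + SLOT_LENGTH_H) % 24)
                for s in SLOT_STARTS_UTC]

    if __name__ == "__main__":
        for hub in (1, 5.5, 9):      # hypothetical hub offsets
            print(f"UTC{hub:+}: nearest reference UTC{nearest_reference(hub):+}, "
                  f"local sessions {local_sessions(hub)}")
    ```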

    Musical novices perform with equal accuracy when learning to drum alone or with a peer

    The capacity of expert musicians to coordinate with each other when playing in ensembles or rehearsing has been widely investigated. However, little is known about the ability of novices to achieve satisfactory coordinated behaviour when making music together. We tested whether performance accuracy differs when novices play a newly learned drumming pattern with another musically untrained individual (duo group) or alone (solo group). A comparison between the musical outcomes of the two groups revealed no significant differences in performative accuracy. An additional, exploratory examination of the degree of mutual influence between members of the duos suggested that they reciprocally affected each other when playing together. These findings indicate that responsive auditory feedback, including the surprises introduced by human errors, could be incorporated into pedagogical settings that employ repetition or imitation, thereby facilitating coordination among novices in a less prescribed fashion.

    Automatic estimation of harmonic tension by distributed representation of chords

    The buildup and release of a sense of tension is one of the most essential aspects of the process of listening to music. A veridical computational model of perceived musical tension would be an important ingredient for many music informatics applications. The present paper presents a new approach to modelling harmonic tension based on a distributed representation of chords. The starting hypothesis is that harmonic tension as perceived by human listeners is related, among other things, to the expectedness of harmonic units (chords) in their local harmonic context. We train a word2vec-type neural network to learn a vector space that captures contextual similarity and expectedness, and define a quantitative measure of harmonic tension on top of this. To assess the veridicality of the model, we compare its outputs on a number of well-defined chord classes and cadential contexts to results from pertinent empirical studies in music psychology. Statistical analysis shows that the model's predictions conform very well with empirical evidence obtained from human listeners. (Comment: 12 pages, 4 figures. To appear in Proceedings of the 13th International Symposium on Computer Music Multidisciplinary Research (CMMR), Porto, Portugal.)
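    As a rough illustration of the modelling idea, the following sketch trains a word2vec model on toy chord-symbol sequences with gensim and scores a chord's tension as its cosine distance from the mean vector of its local context. The toy corpus, the hyperparameters, and this particular tension definition are assumptions for illustration, not the measure defined in the paper.

    ```python
    # Sketch of a word2vec-style chord embedding and a context-based tension score.
    # Corpus, hyperparameters, and the tension definition are illustrative only.
    import numpy as np
    from gensim.models import Word2Vec

    progressions = [                      # toy chord-symbol "sentences"
        ["C", "F", "G", "C"],
        ["C", "Am", "F", "G", "C"],
        ["C", "F", "G7", "C"],
        ["Am", "Dm", "G", "C"],
    ]

    model = Word2Vec(progressions, vector_size=16, window=2, min_count=1, sg=1, epochs=200)

    def tension(chord, context):
        """Cosine distance between a chord vector and the mean of its context vectors."""
        v = model.wv[chord]
        ctx = np.mean([model.wv[c] for c in context], axis=0)
        cosine = np.dot(v, ctx) / (np.linalg.norm(v) * np.linalg.norm(ctx))
        return 1.0 - cosine

    # A chord that is expected in its context should score lower tension than one that is not.
    print(tension("G", ["F", "C"]), tension("Dm", ["G7", "C"]))
    ```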

    Music cognition as mental time travel.

    As we experience a temporal flux of events, our expectations of future events change. Such expectations seem to be central to our perception of affect in music, but we have little understanding of how expectations change as recent information is integrated. When music establishes a pitch centre (tonality), we rapidly learn to anticipate its continuation. What happens when anticipations are challenged by new events? Here we show that providing a melodic challenge to an established tonality leads to progressive changes in the impact of the features of the stimulus on listeners' expectations. The results demonstrate that retrospective analysis of recent events can establish new patterns of expectation that converge towards probabilistic interpretations of the temporal stream. These studies point to wider applications of understanding the impact of information flow on future prediction and its behavioural utility.

    Detection of keyboard vibrations and effects on perceived piano quality

    Two experiments were conducted on an upright and a grand piano, each either producing string vibrations or remaining silent after the initial keypress, while pianists listened to feedback from a synthesizer through insulating headphones. In a quality experiment, participants unaware of the silent mode were asked to play freely and then rate the instrument according to a set of attributes and general preference. Participants preferred the vibrating over the silent setup, and preference ratings were associated with auditory attributes of richness and naturalness in the low and middle ranges. Another experiment on the same setup measured the detection of vibrations at the keyboard while pianists played notes and chords of varying dynamics and duration. Sensitivity to string vibrations was highest in the lowest register and gradually decreased up to note D5. After the percussive transient, the tactile stimuli exhibited spectral peaks of acceleration whose perceptibility was demonstrated by tests conducted in active touch conditions. The two experiments confirm that piano performers perceive vibratory cues from the strings, mediated by spectral and spatial summation in the Pacinian system of the fingertips, and suggest that such cues play a role in evaluating the quality of the musical instrument.

    Modelling the similarity of pitch collections with expectation tensors

    Models of the perceived distance between pairs of pitch collections are a core component of broader models of music cognition. Numerous distance measures have been proposed, including voice-leading [1], psychoacoustic [2–4], and pitch and interval class distances [5]; but, so far, there has been no attempt to bind these different measures into a single mathematical or conceptual framework, nor to incorporate the uncertain or probabilistic nature of pitch perception. This paper embeds pitch collections in expectation tensors and shows how metrics between such tensors can model their perceived dissimilarity. Expectation tensors indicate the expected number of tones, ordered pairs of tones, ordered triples of tones, etc., that are heard as having any given pitch, dyad of pitches, triad of pitches, etc. The pitches can be either absolute or relative (in which case the tensors are invariant with respect to transposition). Examples are given to show how the metrics accord with musical intuition.
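    The first-order (monad) case can be illustrated numerically: each tone's pitch is smeared with a Gaussian to reflect perceptual uncertainty, the smeared responses are summed into an expectation vector over a pitch grid, and a distance between two such vectors serves as a dissimilarity estimate. The grid resolution, smearing width, and use of cosine distance below are illustrative assumptions; the paper's higher-order (dyad, triad) tensors and its specific metrics are not reproduced here.

    ```python
    # Sketch of a first-order (monad) expectation vector over an absolute-pitch grid.
    # Grid, smearing width, and cosine distance are illustrative choices only.
    import numpy as np

    BINS = np.arange(0.0, 120.0, 0.5)   # pitch grid in semitones (assumed resolution)
    SIGMA = 0.5                         # perceptual smearing width in semitones (assumed)

    def expectation_vector(pitches):
        """Expected number of tones heard at each pitch bin (Gaussian smearing)."""
        vec = np.zeros_like(BINS)
        for p in pitches:
            vec += np.exp(-0.5 * ((BINS - p) / SIGMA) ** 2)
        return vec

    def cosine_distance(a, b):
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    c_major = expectation_vector([60, 64, 67])   # C4 E4 G4
    c_minor = expectation_vector([60, 63, 67])   # C4 Eb4 G4
    fs_major = expectation_vector([66, 70, 73])  # F#4 A#4 C#5

    # Closely related collections should come out nearer than distant ones.
    print(cosine_distance(c_major, c_minor), cosine_distance(c_major, fs_major))
    ```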

    Pitch Enumeration: Failure to Subitize in Audition

    Background: Subitizing involves recognition mechanisms that allow effortless enumeration of up to four visual objects; however, despite ample resolution, experimental data suggest that only one pitch can be reliably enumerated. This may be due to the grouping of tones according to harmonic relationships by recognition mechanisms prior to fine pitch processing. Poorer frequency resolution of the auditory information available to recognition mechanisms may lead to unrelated tones being grouped, resulting in underestimation of pitch number. Methods, Results and Conclusion: We tested whether pitch enumeration is better for chords of full harmonic complex tones, where grouping errors are less likely, than for complexes with fewer and less accurately tuned harmonics. Chords of low familiarity were used to mitigate the possibility that participants would recognize the chord itself and simply recall the number of pitches. We found that the accuracy of pitch enumeration was lower overall than that of the visual system, and that underestimation of pitch number increased for stimuli containing fewer harmonics. We conclude that harmonically related tones are first grouped at the poorer frequency resolution of the auditory nerve, leading to poor enumeration of more than one pitch.

    Emotional ratings and skin conductance response to visual, auditory and haptic stimuli

    Human emotional reactions to stimuli delivered by different sensory modalities are a topic of interest for many disciplines, from Human-Computer Interaction to the cognitive sciences. Several databases of emotion-eliciting stimuli are available, tested on large numbers of participants. Interestingly, the stimuli within any one database are always of the same type; in other words, to date, no data have been obtained and compared for distinct types of emotion-eliciting stimuli from the same participant. This makes it difficult to use different databases within the same experiment, limiting the complexity of experiments investigating emotional reactions. Moreover, whereas the stimuli and the participants' ratings of the stimuli are available, the physiological reactions of participants to the emotional stimuli are often recorded but not shared. Here, we test stimuli delivered through a visual, auditory, or haptic modality in a within-participant experimental design. We provide the results of our study in the form of a MATLAB structure including basic demographics of the participants, each participant's self-assessment of his/her emotional state, and his/her physiological reactions (i.e., skin conductance).
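    As a hypothetical illustration of how such a released MATLAB structure might be read from Python, the sketch below uses scipy.io.loadmat; the file name and all field names are placeholders, since the abstract does not specify the structure's actual layout.

    ```python
    # Hypothetical sketch: reading a MATLAB structure of ratings and skin
    # conductance in Python. File name and field names are placeholders only.
    from scipy.io import loadmat

    data = loadmat("emotion_stimuli_dataset.mat",   # placeholder file name
                   squeeze_me=True, struct_as_record=False)
    study = data["study"]                            # placeholder top-level variable

    for p in study.participants:                     # placeholder field names throughout
        print(p.demographics.age,                    # basic demographics
              p.ratings.valence, p.ratings.arousal,  # self-assessed emotional state
              p.scr.mean())                          # skin conductance response trace
    ```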

    From Motion to Emotion: Accelerometer Data Predict Subjective Experience of Music

    Music is often discussed as being emotional because it reflects expressive movements in audible form. Thus, a valid approach to measuring musical emotion could be to assess the movement stimulated by music. In two experiments we evaluated the discriminative power of mobile-device-generated acceleration data, produced by free movement during music listening, for the prediction of ratings on the Geneva Emotion Music Scales (GEMS-9). The quality of prediction varied between experiments for tenderness (R² = 0.50 in Experiment 1 vs. 0.39 in Experiment 2), nostalgia (0.42 vs. 0.30), wonder (0.25 vs. 0.34), sadness (0.24 vs. 0.35), peacefulness (0.20 vs. 0.35), joy (0.19 vs. 0.33), and transcendence (0.14 vs. 0.00), whereas for power (0.42 vs. 0.49) and tension (0.28 vs. 0.27) the results were largely reproduced across experiments. Furthermore, we extracted two principal components from the GEMS ratings, one representing the arousal and the other the valence of the experienced feeling. Both arousal and valence could be predicted from the acceleration data, indicating that the data carry information on both the quantity and the quality of the experience. On the one hand, these findings show how music-evoked movement patterns relate to music-evoked feelings; on the other, they help integrate findings from the field of embodied music cognition into music recommender systems.
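    A hedged sketch of the general pipeline with scikit-learn is shown below: regress each GEMS dimension on summary features of the acceleration signal and reduce the nine GEMS ratings to two principal components (interpretable as arousal- and valence-like axes). The placeholder features, the linear regressor, and the random data are assumptions for illustration, not the authors' exact method or results.

    ```python
    # Illustrative pipeline: predict GEMS-9 ratings from accelerometer summary
    # features and extract two principal components of the ratings. Random
    # placeholder data; features and regressor are assumptions, not the paper's.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials = 120
    acc_features = rng.normal(size=(n_trials, 6))   # e.g. RMS, peak, jerk per trial (placeholder)
    gems_ratings = rng.normal(size=(n_trials, 9))   # GEMS-9 ratings per trial (placeholder)

    # Cross-validated R^2 for each GEMS dimension, mirroring the per-dimension results above.
    for dim in range(gems_ratings.shape[1]):
        r2 = cross_val_score(LinearRegression(), acc_features, gems_ratings[:, dim],
                             scoring="r2", cv=5).mean()
        print(f"GEMS dimension {dim}: cross-validated R^2 = {r2:.2f}")

    # Two principal components of the ratings: arousal- and valence-like axes.
    components = PCA(n_components=2).fit_transform(gems_ratings)
    print(components.shape)   # (n_trials, 2)
    ```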