Music interaction research in HCI
Music consumption is ubiquitous. Statistics for digital music sales, streaming, videos, computer games, and illegal sharing all speak of a huge interest. At the same time, an incredible amount of data about everyday interactions with music (sales and use) is accumulating through new cloud services. However, there is a striking lack of public knowledge about everyday music interaction. This panel discusses the state of music interaction as a part of digital media research. We consider why music interaction research has become so marginal in HCI and discuss how to revive it. Our two discussion themes are: orientation towards design vs. research in music-related R&D, and the question of whether and how private big data on music interactions could enlighten our understanding of ubiquitous media culture.
Music and Aggression: The Impact of Sexual-Aggressive Song Lyrics on Aggression-Related Thoughts, Emotions, and Behavior Toward the Same and the Opposite Sex
Three studies examined the impact of sexual-aggressive song lyrics on aggressive thoughts, emotions, and behavior toward the same and the opposite sex. In Study 1, the authors directly manipulated whether male or female participants listened to misogynous or neutral song lyrics and measured actual aggressive behavior. Male participants who were exposed to misogynous song lyrics administered more hot chili sauce to a female than to a male confederate. Study 2 shed some light on the underlying psychological processes: Male participants who heard misogynous song lyrics recalled more negative attributes of women and reported more feelings of vengeance than when they heard neutral song lyrics. In addition, men-hating song lyrics had a similar effect on aggression-related responses of female participants toward men. Finally, Study 3 replicated the findings of the previous two studies with an alternative measure of aggressive behavior as well as a more subtle measure of aggressive cognitions. The results are discussed in the framework of the General Aggression Model.
Norms and the meaning of omissive enabling conditions
People often reason about omissions. One line of research shows that people can distinguish between the semantics of omissive causes and omissive enabling conditions: for instance, not flunking out of college enabled you (but didn’t cause you) to graduate. Another line of work shows that people rely on the normative status of omissive events in inferring their causal role: if the outcome came about because the omission violated some norm, reasoners are more likely to select that omission as a cause. We designed a novel paradigm that tests how norms interact with the semantics of omissive enabling conditions. The paradigm concerns the circuitry of a mechanical device that plays music. Two experiments used the paradigm to stipulate norms and present a distinct set of possibilities to participants. Participants chose which causal verb best described the operations of the machine. The studies revealed that participants’ responses are best predicted by their tendency to consider the semantics of omissive relations. In contrast, norms had little to no effect on participants’ responses. We conclude by marshaling the evidence and considering what role norms may play in people’s understanding of omissions.
Collaborative music interaction on tabletops: an HCI approach
With the advent of tabletop interaction, collaborative activities are better supported than they are on single-user PCs: there is a shared physical space, and interaction with digital data is more embodied and social. In sound and music computing, collaborative music making has traditionally relied on interconnected networks, but on separate computers. Musical tabletops introduce opportunities for playing collaboratively by physically sharing the same musical interface. However, few tabletop musical interfaces exploit this collaborative potential (e.g. the Reactable). We are interested in how collaboration can be fully supported by musical tabletops for music performance, in contrast with more traditional settings. We also ask whether collective musical engagement can be enhanced by providing interfaces better suited to collaboration. HCI and software development offer an iterative process of design and evaluation, in which evaluation identifies key issues to be addressed in the next design iteration of the system. Using a similar iterative approach, we plan to design and evaluate tabletop musical interfaces. The aim is to understand which design choices can enhance and enrich collaboration and collective musical engagement on these systems. In this paper, we explain the evaluation methodologies we have applied in three preliminary pilot studies and the lessons we have learned. Initial findings indicate that evaluating tabletop musical interfaces is a complex endeavour that requires an approach as close as possible to a real context, complemented by an interdisciplinary approach drawing on interaction analysis techniques.
Ergogenic and psychological effects of synchronous music during circuit-type exercise
Objectives: Motivational music synchronized with movement has been found to improve performance in anaerobic and aerobic endurance tasks, although gender differences in the potential benefits of such music have seldom been investigated. The present study addresses the psychological and ergogenic effects of synchronous music during circuit-type exercise. Design: A mixed-model design was employed with a within-subjects factor (two experimental conditions and a control) and a between-subjects factor (gender). Methods: Participants (N = 26) performed six circuit-type exercises under each of three synchronous conditions: motivational music, motivationally-neutral (oudeterous) music, and a metronome control. Dependent measures comprised anaerobic endurance, assessed as the number of repetitions performed before failure to maintain synchronicity, and post-task affect, assessed using Hardy and Rejeski’s (1989) Feeling Scale. Mixed-model 3 (Condition) × 2 (Gender) ANOVAs, ANCOVAs, and a MANOVA were used to analyze the data. Results: Synchronous music did not elicit significant (p < .05) ergogenic or psychological effects in isolation; rather, significant (p < .05) Condition × Gender interaction effects emerged for both total repetitions and mean affect scores. Women and men showed differential affective responses to synchronous music, and men responded more positively than women to metronomic regulation of their movements. Women derived the greatest overall benefit from both music conditions. Conclusions: Men may place greater emphasis on the metronomic regulation of movement than on the remaining, extra-rhythmical musical qualities. Men and women appear to exhibit differential affective responses to synchronous music.
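As a minimal sketch of the 3 (Condition) × 2 (Gender) mixed-model ANOVA this abstract reports, the snippet below runs the analysis on simulated stand-in data with the pingouin library; the long-format layout, column names, and values are illustrative assumptions, not the study's data.

    # Sketch of a 3 (Condition) x 2 (Gender) mixed-model ANOVA on simulated data.
    import numpy as np
    import pandas as pd
    import pingouin as pg

    rng = np.random.default_rng(0)
    conditions = ["motivational", "oudeterous", "metronome"]

    # Long format: one row per participant x condition; N = 26 as in the study.
    rows = [
        {"subject": s,
         "gender": "F" if s < 13 else "M",
         "condition": c,
         "repetitions": rng.normal(40, 5)}  # DV: repetitions before losing sync
        for s in range(26) for c in conditions
    ]
    df = pd.DataFrame(rows)

    # Within-subjects factor: condition; between-subjects factor: gender.
    aov = pg.mixed_anova(data=df, dv="repetitions", within="condition",
                         subject="subject", between="gender")
    print(aov)  # the "Interaction" row tests the Condition x Gender effect

With real data, the reported effect would correspond to a significant "Interaction" row; the main-effect rows test condition and gender in isolation.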
Music in the first days of life
In adults, specific neural systems with right-hemispheric weighting are necessary to process pitch, melody and harmony, as well as structure and meaning emerging from musical sequences. To what extent does this neural specialization result from exposure to music or from neurobiological predispositions? We used fMRI to measure brain activity in newborns 1 to 3 days old while they listened to Western tonal music and to the same excerpts altered to include tonal violations or dissonance. Music caused predominantly right-hemisphere activations in primary and higher-order auditory cortex. For altered music, activations were seen in the left inferior frontal cortex and limbic structures. Thus, the newborn's brain is able to fully receive music and to detect even small perceptual and structural differences in musical sequences. This neural architecture, present at birth, provides us with the potential to process basic and complex aspects of music, a uniquely human capacity.
Music-reading expertise modulates the visual span for English letters but not Chinese characters.
Recent research has suggested that the visual span in stimulus identification can be enlarged through perceptual learning. Since both English and music reading involve left-to-right sequential symbol processing, music-reading experience may enhance symbol identification through perceptual learning, particularly in the right visual field (RVF). In contrast, as Chinese can be read in all directions and components of Chinese characters do not consistently form a left-right structure, this hypothesized RVF enhancement effect may be limited in Chinese character identification. To test these hypotheses, we recruited musicians and nonmusicians who read Chinese as their first language (L1) and English as their second language (L2) to identify music notes, English letters, Chinese characters, and novel symbols (Tibetan letters) presented at different eccentricities and visual field locations on the screen while maintaining central fixation. We found that in English letter identification, significantly more musicians than nonmusicians achieved above-chance performance at center-RVF locations. This effect was not observed in Chinese character or novel symbol identification. We also found that in music note identification, musicians outperformed nonmusicians in accuracy in the center-RVF condition, consistent with the RVF enhancement effect in the visual span observed in English letter identification. These results suggest that the modulation of the visual span by music-reading experience depends on the similarities in the perceptual processes involved.
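To illustrate the "above-chance performance" criterion this abstract invokes, the sketch below applies a one-sided binomial test to one participant's accuracy; the trial count and chance level are hypothetical assumptions, since the abstract does not specify the task parameters.

    # Illustrative check of above-chance identification for one participant.
    from scipy.stats import binomtest

    n_trials = 80    # assumed trials at a given eccentricity/location
    n_correct = 32   # assumed number of correct identifications
    chance = 0.25    # assumed chance level, e.g. 4-alternative forced choice

    result = binomtest(n_correct, n_trials, p=chance, alternative="greater")
    print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.4f}")
    # p < .05 would classify this participant as performing above chance

Counting how many musicians versus nonmusicians pass such a per-participant test at each location is one straightforward way to arrive at the group comparison the abstract reports.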