
    Emotions, Music, and Logos

    The article introduces a cognitive and componential view of religious emotions. General emotions are claimed to consist of at least two components, a cognitive component and an affective component. Religious emotions are typically general emotions characterized by three specific conditions: they involve a thought of God or the godlike, they are significant for the person feeling them, and their meaning is derived from religious practices. The article discusses the notion of spiritual emotions in ancient theology and compares it with emotions in music. By referring to the notion of mental language, it is argued that some religious emotions are like emotions in music and as such can be interpreted as tones of Logos.

    Researcher-led teaching: embodiment of academic practice

    This paper explores the embodied practices of leading researchers (and/or leading scholars/practitioners), suggesting that distinctive ‘researcher-led teaching’ depends on educators who are willing and able to be their research in the teaching setting. We advocate an approach to the development of higher education pedagogy which makes lead researchers the objects of inquiry, and we summarise case study analyses (in neuroscience and humanities) where the knowledge-making ‘signatures’ of academic leaders are used to exhibit the otherwise hidden identities of research. We distinguish between learning ready-made knowledge and the process of knowledge in the making, and point towards the importance of inquiry in the flesh. We develop a view of higher education teaching that depends upon academic status a priori, but we argue that this stance is inclusive because it has the propensity to locate students as participants in academic culture.

    The effect of digital signage on shoppers' behavior: the role of the evoked experience

    This paper investigates the role of digital signage as an experience provider in retail spaces. The findings of a survey-based field experiment demonstrate that digital signage content high on sensory cues evokes affective experience and strengthens customers’ experiential processing route. In contrast, digital signage messages high on “features and benefits” information evoke intellectual experience and strengthen customers’ deliberative processing route. The affective experience is more strongly associated with the attitude towards the ad and the approach behaviour towards the advertiser than the intellectual experience. The effect of an ad high on sensory cues on shoppers’ approach to the advertiser is stronger for first-time shoppers, and is therefore important in generating loyalty. The findings indicate that the design of brand-related informational cues broadcast over digital in-store monitors affects shoppers’ information processing: the cues evoke sensory and affective experiences, trigger deliberative processes that lead to attitude construction, and finally elicit approach behaviour towards the advertisers.

    Detection of emotions in Parkinson's disease using higher order spectral features from brain's electrical activity

    Non-motor symptoms in Parkinson's disease (PD) involving cognition and emotion have been progressively receiving more attention in recent times. Electroencephalogram (EEG) signals, being an activity of the central nervous system, can reflect the underlying true emotional state of a person. This paper presents a computational framework for classifying PD patients versus healthy controls (HC) using emotional information from the brain's electrical activity.
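
    The abstract gives no implementation details, but a minimal sketch of the kind of pipeline it describes might look as follows: a bispectrum-derived (higher-order spectral) feature is computed per EEG epoch and fed to an off-the-shelf classifier to separate PD from HC. The random epoch data, the single scalar feature, and the SVM choice below are illustrative assumptions, not the authors' method.

```python
# A minimal sketch (not the paper's implementation) of a higher-order-spectral
# classification pipeline: one bispectrum-based feature per EEG epoch, then a
# generic classifier separating PD (label 1) from HC (label 0).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def bispectrum_feature(epoch, nfft=256):
    """Mean bispectral magnitude of one EEG epoch (direct FFT-based estimate)."""
    X = np.fft.rfft(epoch, n=nfft)          # one-sided spectrum
    n = len(X)
    bispec = np.zeros((n // 2, n // 2))
    # B(f1, f2) = X(f1) * X(f2) * conj(X(f1 + f2)) over the principal domain
    for f1 in range(n // 2):
        for f2 in range(f1 + 1):
            if f1 + f2 < n:
                bispec[f1, f2] = np.abs(X[f1] * X[f2] * np.conj(X[f1 + f2]))
    return bispec.mean()

# Hypothetical data: epochs of shape (n_trials, n_samples), labels 1 = PD, 0 = HC
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 512))
labels = np.repeat([1, 0], 20)

features = np.array([[bispectrum_feature(e)] for e in epochs])
scores = cross_val_score(SVC(kernel="rbf"), features, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```

    In a realistic setting the scalar feature would be replaced by a richer set of higher-order spectral statistics per channel and emotion condition; the structure of the pipeline stays the same.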

    Sociodemographic, psychological and politicocultural correlates in Flemish students' attitudes towards French and English

    An analysis of 100 Flemish high-school students' attitudes towards French and English (both foreign languages) revealed complex links between personality factors, gender, politicocultural identity, communicative behaviour and foreign language attitudes. Attitudes towards English were found to be much more positive than those towards French, despite the fact that the participants had enjoyed a longer and more intense formal instruction in French (it being their second language). The independent variables were found to have stronger effects for French than for English, with the exception of the politicocultural identity of the participant, which had a strong effect on attitudes towards French but not English. Overall, it seems that social factors, including exposure to the foreign languages, are linked with lower-level personality dimensions and thus shape attitudes towards these languages.

    Methodological considerations concerning manual annotation of musical audio in function of algorithm development

    In research on musical audio-mining, annotated music databases are needed which allow the development of computational tools that extract from the musical audio stream the kind of high-level content that users can deal with in Music Information Retrieval (MIR) contexts. The notion of musical content, and therefore the notion of annotation, is ill-defined, however, in both the syntactic and the semantic sense. As a consequence, annotation has been approached from a variety of perspectives (but mainly linguistic-symbolic oriented), and a general methodology is lacking. This paper is a step towards the definition of a general framework for manual annotation of musical audio in function of a computational approach to musical audio-mining that is based on algorithms that learn from annotated data.
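
    As an illustration of the kind of machine-readable record such an annotation framework would have to standardise for learning algorithms, the sketch below defines a hypothetical segment-level annotation; all field names and layers are assumptions, not the paper's proposal.

```python
# A minimal sketch, under assumed conventions, of a segment-level manual
# annotation record for a musical audio excerpt, serialisable for training data.
import json
from dataclasses import dataclass, asdict

@dataclass
class Annotation:
    audio_file: str      # path or identifier of the audio excerpt
    start_s: float       # segment start, in seconds
    end_s: float         # segment end, in seconds
    layer: str           # annotation layer, e.g. "structure", "emotion", "tempo"
    label: str           # the annotator's label for this segment
    annotator_id: str    # kept so inter-annotator agreement can be studied

record = Annotation("track_001.wav", 12.0, 24.5, "structure", "chorus", "ann_03")
print(json.dumps(asdict(record), indent=2))
```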

    Speaker emotion can affect ambiguity production

    Does speaker emotion affect the degree of ambiguity in referring expressions? We used referential communication tasks preceded by mood induction to examine whether positive emotional valence may be linked to ambiguity of referring expressions. In Experiment 1, participants had to identify sequences of objects with homophonic labels (e.g., the animal bat, a baseball bat) for hypothetical addressees. This required modification of the homophones. Happy speakers were less likely to modify the second homophone to repair a temporary ambiguity (i.e., they were less likely to say … first cover the bat, then cover the baseball bat …). In Experiment 2, participants had to identify one of two identical objects in an object array, which required a modifying relative clause (the shark that's underneath the shoe). Happy speakers omitted the modifying relative clause twice as often as neutral speakers (e.g., by saying Put the shark underneath the sheep), thereby rendering the entire utterance ambiguous in the context of two sharks. The findings suggest that one consequence of positive mood is more ambiguity in speech. This effect is hypothesised to be due to a less effortful processing style that favours an egocentric bias, affecting perspective taking or the monitoring of how well utterances align with an addressee's perspective.

    Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model

    Numerous studies in the field of music generation have demonstrated impressive performance, yet virtually no models are able to directly generate music to match accompanying videos. In this work, we develop a generative music AI framework, Video2Music, that can generate music to match a provided video. We first curated a unique collection of music videos. Then, we analysed the music videos to obtain semantic, scene offset, motion, and emotion features. These distinct features are then employed as guiding input to our music generation model. We transcribe the audio files into MIDI and chords, and extract features such as note density and loudness. This results in a rich multimodal dataset, called MuVi-Sync, on which we train a novel Affective Multimodal Transformer (AMT) model to generate music given a video. This model includes a novel mechanism to enforce affective similarity between video and music. Finally, post-processing is performed based on a biGRU-based regression model that estimates note density and loudness from the video features. This ensures a dynamic rendering of the generated chords with varying rhythm and volume. In a thorough experiment, we show that our proposed framework can generate music that matches the video content in terms of emotion. The musical quality, along with the quality of music-video matching, is confirmed in a user study. The proposed AMT model, along with the new MuVi-Sync dataset, presents a promising step for the new task of music generation for videos.
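
    The released code and model are not reproduced here, but the stages the abstract lists can be sketched schematically: per-frame video features (semantic, scene offset, motion, emotion) guide a generative model that emits a chord sequence, and a separate regressor predicts note density and loudness for dynamic rendering. Every function below is a placeholder stub under assumed interfaces, not the actual Video2Music/AMT implementation.

```python
# A schematic sketch of the described pipeline: video features -> affective
# generator -> dynamics regressor. All names, shapes and values are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class VideoFeatures:
    semantic: List[float]      # e.g. a frame-embedding summary (assumed)
    scene_offset: List[int]    # frames elapsed since the last scene cut
    motion: List[float]        # per-frame motion magnitude
    emotion: List[float]       # per-frame affect estimate (e.g. valence)

def extract_features(video_path: str) -> VideoFeatures:
    """Placeholder extractor; a real system would run vision models per frame."""
    n = 8  # pretend the clip has 8 analysis frames
    return VideoFeatures([0.0] * n, list(range(n)), [0.1] * n, [0.5] * n)

def affective_generator(feats: VideoFeatures) -> List[str]:
    """Stub for the generative model: emit one chord symbol per analysis frame."""
    return ["C:maj" if e >= 0.5 else "A:min" for e in feats.emotion]

def regress_dynamics(feats: VideoFeatures) -> List[dict]:
    """Stub for the post-processing regressor (note density, loudness)."""
    return [{"density": 1.0 + m, "loudness": 0.5 + 0.5 * e}
            for m, e in zip(feats.motion, feats.emotion)]

def video_to_music(video_path: str):
    feats = extract_features(video_path)
    chords = affective_generator(feats)
    dynamics = regress_dynamics(feats)
    return list(zip(chords, dynamics))

print(video_to_music("example.mp4"))
```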