679 research outputs found

    Multimodal Affect Recognition: Current Approaches and Challenges

    Many factors render multimodal affect recognition approaches appealing. First, humans employ a multimodal approach in emotion recognition, so it is only fitting that machines, which attempt to reproduce elements of human emotional intelligence, employ the same approach. Second, the combination of multiple affective signals not only provides a richer collection of data but also helps alleviate the effects of uncertainty in the raw signals. Lastly, multimodal approaches potentially afford us the flexibility to classify emotions even when one or more source signals cannot be retrieved. However, the multimodal approach presents challenges pertaining to the fusion of individual signals, the dimensionality of the feature space, and the incompatibility of the collected signals in terms of time resolution and format. In this chapter, we explore these challenges while presenting the latest scholarship on the topic. We first discuss the various modalities used in affect classification. Second, we explore the fusion of modalities. Third, we present publicly accessible multimodal datasets designed to expedite work on the topic by eliminating the laborious task of dataset collection. Fourth, we analyze representative works on the topic. Finally, we summarize the current challenges in the field and provide ideas for future research directions.
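
    A minimal sketch of decision-level ("late") fusion, one common way to combine modalities while tolerating a missing source signal; the modality names and weights below are illustrative, not taken from the chapter:

        import numpy as np

        # Hypothetical per-modality posteriors over three emotion classes
        # (e.g., happy, sad, neutral); None marks a modality whose raw
        # signal could not be retrieved.
        modality_probs = {
            "face":   np.array([0.70, 0.20, 0.10]),
            "voice":  np.array([0.50, 0.30, 0.20]),
            "physio": None,  # e.g., a dropped sensor
        }
        weights = {"face": 0.5, "voice": 0.3, "physio": 0.2}  # illustrative

        def late_fusion(probs, weights):
            """Weighted average of posteriors from the available modalities."""
            available = {m: p for m, p in probs.items() if p is not None}
            total = sum(weights[m] for m in available)
            return sum(weights[m] / total * p for m, p in available.items())

        fused = late_fusion(modality_probs, weights)
        print("fused posterior:", fused, "-> class", int(np.argmax(fused)))

    Feature-level (early) fusion would instead concatenate the extracted features before classification, which is where the dimensionality and time-resolution issues mentioned above become most acute.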

    Network science and the effects of music on the human brain

    Most people choose to listen to music that they prefer or like, such as classical, country, or rock. Previous research has focused on how different characteristics of music (i.e., classical versus country) affect the brain. Yet, when listening to preferred music, regardless of the type, people report that they often experience personal thoughts and memories. To date, understanding how this occurs in the brain has remained elusive. Using network science methods, I evaluated differences in functional brain connectivity when individuals listened to complete songs. The results reveal that a circuit important for internally focused thoughts, known as the default mode network, was most connected when listening to preferred music. The results also reveal that listening to a favorite song alters the connectivity between auditory brain areas and the hippocampus, a region responsible for memory and social-emotion consolidation. Given that musical preferences are uniquely individualized phenomena and that music can vary in acoustic complexity and in the presence or absence of lyrics, the consistency of these results was contrary to previous neuroscientific understanding. These findings may explain why comparable emotional and mental states can be experienced by people listening to music that differs as widely as Beethoven and Eminem. The neurobiological and neurorehabilitation implications of these results are discussed.
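
    A minimal sketch of the kind of network-science analysis described, assuming regional fMRI time series are available: pairwise correlations define a functional connectivity graph, whose node strengths can then be compared between an assumed set of default-mode regions and the rest of the brain. All data, indices, and the threshold below are synthetic placeholders:

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        ts = rng.standard_normal((20, 200))  # 20 regions x 200 time points (synthetic)

        fc = np.corrcoef(ts)                 # functional connectivity matrix
        np.fill_diagonal(fc, 0.0)
        adj = np.where(fc > 0.3, fc, 0.0)    # keep strong positive edges (illustrative cutoff)
        g = nx.from_numpy_array(adj)

        dmn_nodes = [0, 3, 7, 12]            # hypothetical default-mode region indices
        strength = dict(g.degree(weight="weight"))
        dmn = np.mean([strength[n] for n in dmn_nodes])
        rest = np.mean([s for n, s in strength.items() if n not in dmn_nodes])
        print(f"mean node strength: DMN={dmn:.3f} vs rest={rest:.3f}")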

    Music emotion recognition: a multimodal machine learning approach

    Music emotion recognition (MER) is an emerging domain of the Music Information Retrieval (MIR) scientific community; moreover, searching for music by emotion is one of the selection methods most preferred by web users. As the world goes digital, the musical content in online databases such as Last.fm has expanded exponentially, requiring substantial manual effort to manage and keep updated. Therefore, the demand for innovative and adaptable search mechanisms, which can be personalized according to users’ emotional state, has gained increasing consideration in recent years. This thesis addresses the music emotion recognition problem by presenting several classification models fed by textual features as well as audio attributes extracted from the music. In this study, we build both supervised and semi-supervised classification designs under four research experiments, which address the emotional role of audio features, such as tempo, acousticness, and energy, and also the impact of textual features extracted by two different approaches, TF-IDF and Word2Vec. Furthermore, we propose a multimodal approach using a combined feature set consisting of features from the audio content as well as from context-aware data. For this purpose, we generated a ground-truth dataset containing over 1,500 labeled song lyrics, plus an unlabeled corpus of more than 2.5 million Turkish documents, in order to build an accurate automatic emotion classification system. The analytical models were built by applying several algorithms to the cross-validated data using Python. In conclusion, the best performance attained was 44.2% accuracy when employing only audio features, whereas with textual features better performances were observed, with accuracy scores of 46.3% and 51.3% under the supervised and semi-supervised learning paradigms, respectively. Lastly, even though we created a comprehensive feature set combining audio and textual features, this approach did not yield any significant improvement in classification performance.
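
    A minimal sketch of the supervised text pipeline the thesis describes, assuming TF-IDF features and a cross-validated linear classifier in Python; the toy lyrics and labels are invented for illustration, and the thesis's actual feature sets and algorithms may differ:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        # Toy stand-ins for labeled lyrics; the real dataset has 1,500+ songs.
        lyrics = [
            "sunshine dancing all night long",
            "tears fall alone in the rain",
            "we laugh and sing together",
            "cold empty rooms and broken hearts",
        ] * 10  # repeated so 5-fold cross-validation has enough samples
        labels = ["happy", "sad", "happy", "sad"] * 10

        # TF-IDF maps each lyric to a sparse term-weight vector, which a
        # linear classifier then maps to an emotion label.
        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                              LogisticRegression(max_iter=1000))
        scores = cross_val_score(model, lyrics, labels, cv=5)
        print(f"cross-validated accuracy: {scores.mean():.3f}")

    A semi-supervised variant would additionally exploit the unlabeled corpus, for example by self-training on confident predictions or by learning Word2Vec embeddings from it.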

    Tourism in Neuroscience Framework/Cultural Neuroscience, Mirror Neurons, Neuroethics

    Tourism is a dynamic and competitive industry that requires the ability to constantly adapt to the changing needs and wishes of customers in an uncertain global financial environment, which poses the problem of attracting tourists. The aim of the paper is to examine the involvement of neuroscience, through cultural neuroscience, mirror neurons, and neuroethics, as a new approach to different aspects of tourism. We present the most important research in the field of tourism through the existing literature, discuss the limitations of this approach, and propose guidelines for future research. In a theoretical approach, given the specific nature of tourist experiences, mirror neurons can contribute to explaining some important aspects of tourism. Investigations point to a neurological context in which many modalities are associated: language utilizes a multimodal sensorimotor system that includes brain areas underlying the concept of empathy, which characterizes the traditional anthropological relationship between host and guest in the Istrian region. Research on cultural neuroscience examines how cultural and genetic diversity shape the human mind, brain, and behavior across multiple timescales: state, ontogenesis, and phylogeny. We particularly emphasize the importance of medical tourism by including empirical research from different disciplines and ethical issues involving individual and population perspectives.

    The neurobiology of cortical music representations

    Music is undeniably one of humanity’s defining traits: it has been documented since the earliest days of mankind, is present in all known cultures, and is perceivable by nearly all humans alike. Intrigued by its omnipresence, researchers of all disciplines began investigating music’s mystical relationship with, and tremendous significance to, humankind several hundred years ago. Comparatively recently, the immense advancement of neuroscientific methods has also enabled the examination of cognitive processes related to the processing of music. Within this neuroscience of music, the vast majority of research has focused on how music, as an auditory stimulus, reaches the brain and how it is initially processed, as well as on the tremendous effects it has on, and can evoke through, the human brain. However, the intermediate steps, that is, how the human brain transforms incoming signals into a seemingly specialized and abstract representation of music, have received less attention. Aiming to address this gap, the thesis presented here targeted these transformations, their possibly underlying processes, and how both could potentially be explained through computational models. To this end, four projects were conducted. The first two comprised the creation and implementation of two open-source toolboxes to, first, tackle problems inherent to auditory neuroscience, and thus also to neuroscientific music research, and second, provide the basis for further advancements through standardization and automation. More precisely, this entailed deteriorated hearing thresholds and abilities in MRI settings, and the aggravated localization and parcellation of the human auditory cortex as the core structure involved in auditory processing. The third project focused on the human brain’s apparent tuning to music by investigating functional and organizational principles of the auditory cortex and network with regard to the processing of different auditory categories of comparable social importance, more precisely whether the perception of music evokes a distinct and specialized pattern. In order to provide an in-depth characterization of the respective patterns, both the segregation and the integration of auditory cortex regions were examined. In the fourth and final project, a highly multimodal approach that included fMRI, EEG, behavior, and models of varying complexity was utilized to evaluate how the aforementioned music representations are generated along the cortical hierarchy of auditory processing and how they are influenced by bottom-up and top-down processes. The results of projects 1 and 2 demonstrated the necessity of further advancing MRI settings and of defining working models of the auditory cortex, as hearing thresholds and abilities seem to vary as a function of the data acquisition protocol used, and the localization and parcellation of the human auditory cortex diverge drastically depending on the approach they are based on. Project 3 revealed that the human brain apparently is indeed tuned for music by means of a specialized representation, as music evoked a bilateral network with a right-hemispheric weighting that was not observed for the other included categories. The result of this specialized and hierarchical recruitment of anterior and posterior auditory cortex regions was an abstract music component situated in anterior regions of the superior temporal gyrus that preferentially encodes music, regardless of whether it is sung or instrumental.
The outcomes of project 4 indicated that, even though the entire auditory cortex, again with a right-hemispheric weighting, is involved in the complex processing of music, anterior regions in particular yielded an abstract representation that varied extensively over time and could not be sufficiently explained by any of the tested models. The specialized and abstract properties of this representation were furthermore underlined by the predictive abilities of the tested models, as models based either on high-level features, such as behavioral representations and concepts, or on complex acoustic features always outperformed models based on single or simpler acoustic features. Additionally, factors known to influence auditory and thus music processing, such as musical training, apparently did not alter the observed representations. Together, the results of the projects suggest that the specialized and stable cortical representation of music is the outcome of sophisticated transformations of incoming sound signals along the cortical hierarchy of auditory processing, which generate a music component in anterior regions of the superior temporal gyrus by means of top-down processes that interact with acoustic features and guide their processing.
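
    A minimal sketch of the model comparison described for project 4, assuming a voxelwise encoding approach: ridge regression predicts brain responses from a stimulus feature space, and competing feature spaces are ranked by held-out prediction accuracy. All data below are synthetic, and the two feature spaces are placeholders for the thesis's simple-acoustic and high-level models:

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(42)
        n_timepoints, n_voxels = 300, 50

        # Placeholder feature spaces: a low-dimensional "simple acoustic"
        # model versus a richer, higher-dimensional model.
        simple_feats = rng.standard_normal((n_timepoints, 2))
        complex_feats = np.hstack([simple_feats,
                                   rng.standard_normal((n_timepoints, 30))])

        # Synthetic voxel responses driven mostly by the complex features.
        w = rng.standard_normal((complex_feats.shape[1], n_voxels))
        bold = complex_feats @ w + rng.standard_normal((n_timepoints, n_voxels))

        for name, feats in [("simple", simple_feats), ("complex", complex_feats)]:
            X_tr, X_te, y_tr, y_te = train_test_split(feats, bold, random_state=0)
            pred = Ridge(alpha=1.0).fit(X_tr, y_tr).predict(X_te)
            # Mean per-voxel correlation between predicted and held-out responses.
            r = np.mean([np.corrcoef(pred[:, v], y_te[:, v])[0, 1]
                         for v in range(n_voxels)])
            print(f"{name} feature space: mean voxel r = {r:.3f}")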

    A Functional MRI Study of Happy and Sad Emotions in Music with and without Lyrics

    Musical emotions, such as happiness and sadness, have been investigated using instrumental music devoid of linguistic content. However, pop and rock, the most common musical genres, utilize lyrics for conveying emotions. Using participants’ self-selected musical excerpts, we studied their behavior and brain responses to elucidate how lyrics interact with musical emotion processing, as reflected by emotion recognition and by activation of limbic areas involved in affective experience. We extracted samples from subjects’ selections of sad and happy pieces and sorted them according to the presence of lyrics. Acoustic feature analysis showed that music with lyrics differed from music without lyrics in spectral centroid, a feature related to perceptual brightness, whereas sad music with lyrics did not diverge from happy music without lyrics, indicating the role of other factors in emotion classification. Behavioral ratings revealed that happy music without lyrics induced stronger positive emotions than happy music with lyrics. We also acquired functional magnetic resonance imaging data while subjects performed affective tasks regarding the music. First, using ecological and acoustically variable stimuli, we broadened previous findings about the brain processing of musical emotions and of songs versus instrumental music. Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca’s area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics. These findings point to the role of acoustic cues in the experience of happiness in music and to the importance of lyrics for sad musical emotions.
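
    As a side note on the acoustic analysis above, a minimal sketch of how spectral centroid, the brightness-related feature that distinguished music with lyrics from music without, can be computed with the librosa library (the file path is a placeholder):

        import librosa
        import numpy as np

        # Placeholder path; any audio excerpt works.
        y, sr = librosa.load("song_excerpt.wav")

        # Spectral centroid: the amplitude-weighted mean frequency of each
        # short-time spectrum, commonly linked to perceived brightness.
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
        print(f"mean spectral centroid: {np.mean(centroid):.1f} Hz")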

    Design of Cognitive Interfaces for Personal Informatics Feedback

