4 research outputs found

    Creating a common language for soundscape research: Translation and validation of Dutch soundscape attributes

    Much of the work into the understanding of our auditory environment, referred to as soundscape research, has emerged from international and interdisciplinary research. This has enabled growth in understanding and increased opportunities for optimising shared environments, but it has also exposed a major obstacle: the lack of a common language to describe soundscapes. Therefore, the purpose of this study is to validate translated soundscape descriptors in Dutch as part of the Soundscape Attributes Translation Project (SATP). For this, an expert panel of seven soundscape researchers from The Netherlands and Flanders (Belgium) translated the original eight English attributes into Dutch. Subsequently, following standardised materials and procedures, a sample of 32 Dutch participants completed a listening experiment in which they rated 27 audio files on the eight soundscape attributes. Results show modest evidence that the Dutch translations were applied similarly to the original English attributes, with a slight (but not statistically significant) bias towards Pleasantness and Eventfulness in the Dutch sample. Bayesian analysis supports these findings by showing that the translations for the opposing attributes Uneventful and Annoying fit less well than the other attributes. Despite some limitations, and while further research is necessary, our findings are promising and suggest that, although not perfect, the Dutch translations of the English soundscape attributes could already be useful for describing the general appraisal of a person's soundscape in The Netherlands.
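For context, soundscape studies in this tradition typically summarise the eight attribute ratings as coordinates on the Pleasantness–Eventfulness circumplex. Below is a minimal sketch of that projection, assuming the standard ISO/TS 12913-3 formula and hypothetical ratings on a 1–5 scale (the abstract reports neither its rating scale nor its data):

```python
import math

# Hypothetical ratings on the eight soundscape attributes (1-5 scale);
# the attribute set follows ISO/TS 12913-2, the projection ISO/TS 12913-3.
ratings = {
    "pleasant": 4, "vibrant": 3, "eventful": 2, "chaotic": 1,
    "annoying": 1, "monotonous": 2, "uneventful": 3, "calm": 4,
}

def circumplex(r, scale_max=5, scale_min=1):
    """Project eight attribute ratings onto Pleasantness and Eventfulness."""
    c = math.cos(math.radians(45))
    pleasant = (r["pleasant"] - r["annoying"]) + c * (
        (r["calm"] - r["chaotic"]) + (r["vibrant"] - r["monotonous"]))
    eventful = (r["eventful"] - r["uneventful"]) + c * (
        (r["chaotic"] - r["calm"]) + (r["vibrant"] - r["monotonous"]))
    # Normalise to [-1, 1] given the rating range: max magnitude is
    # (scale_max - scale_min) * (1 + sqrt(2)).
    norm = (scale_max - scale_min) * (1 + math.sqrt(2))
    return pleasant / norm, eventful / norm

p, e = circumplex(ratings)  # a calm, pleasant, rather uneventful soundscape
```

With these illustrative ratings the point falls in the pleasant, uneventful quadrant of the circumplex.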

    Modeling the Meanings of Words Used by Individuals and Its Applications

    Degree type: Master's. University of Tokyo (東京大学)

    You took the words right out of my mouth: Dual-fMRI reveals intra- and inter-personal neural processes supporting verbal interaction.

    Verbal communication relies heavily upon mutual understanding, or common ground. Inferring the intentional states of our interaction partners is crucial in achieving this, and social neuroscience has begun elucidating the intra- and inter-personal neural processes supporting such inferences. Typically, however, neuroscientific paradigms lack the reciprocal to-and-fro characteristic of social communication, offering little insight into the way these processes operate online during real-world interaction. In the present study, we overcame this by developing a "hyperscanning" paradigm in which pairs of interactants could communicate verbally with one another in a joint-action task whilst both underwent functional magnetic resonance imaging simultaneously. Successful performance on this task required both interlocutors to predict their partner's upcoming utterance in order to converge on the same word over recursive exchanges, based only on one another's prior verbal expressions. By applying various levels of analysis to behavioural and neuroimaging data acquired from 20 dyads, three principal findings emerged. First, interlocutors converged frequently within the same semantic space, suggesting that mutual understanding had been established. Second, assessing the brain responses of each interlocutor as they planned their upcoming utterances on the basis of their co-player's previous word revealed engagement of the temporo-parietal junction (TPJ), precuneus, and dorso-lateral prefrontal cortex; responses in the precuneus were modulated positively by the degree of semantic convergence achieved on each round, and effective connectivity among these regions indicates the crucial role of the right TPJ in this process, consistent with the Nexus model. Third, neural signals within certain nodes of this network became aligned between interacting interlocutors. We suggest this reflects an interpersonal neural process through which interactants infer and align to one another's intentional states whilst they establish common ground.
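The degree of semantic convergence described above is commonly quantified with distributional word vectors. A minimal illustration, using made-up three-dimensional vectors rather than the study's actual embeddings or measure, of scoring how close two interlocutors' words sit in semantic space:

```python
import numpy as np

# Toy word vectors standing in for real embeddings (e.g. word2vec);
# the words and values are purely illustrative.
vectors = {
    "dog": np.array([0.9, 0.1, 0.0]),
    "cat": np.array([0.8, 0.2, 0.1]),
    "car": np.array([0.0, 0.1, 0.9]),
}

def convergence(word_a, word_b):
    """Cosine similarity between the two players' utterances: a proxy for
    how close they are within the same semantic space."""
    a, b = vectors[word_a], vectors[word_b]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A pair converging on related words scores higher than an unrelated pair.
assert convergence("dog", "cat") > convergence("dog", "car")
```

Such a per-round score is the kind of quantity that could be entered as a parametric modulator in the neuroimaging analysis.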

    Affect Lexicon Induction For the Github Subculture Using Distributed Word Representations

    Sentiments and emotions play essential roles in small-group interactions, especially in self-organized collaborative groups. Many people view sentiments as universal constructs; however, cultural differences exist in some aspects of sentiment. Understanding the features of the sentiment space in small-group cultures provides essential insights into the dynamics of self-organized collaboration. However, due to the limited availability of carefully human-annotated data, it is hard to describe sentiment divergences across cultures. In this thesis, we present a new approach to inspect cultural differences at the level of sentiments and compare a subculture with the general social environment. We use Github, a collaborative software development network, as an example of a self-organized subculture. First, we train word embeddings on large corpora and align the embeddings using a linear transformation method. Then we model finer-grained human sentiment in the Evaluation-Potency-Activity (EPA) space and extend the subculture EPA lexicon with two-dense-layer neural networks. Finally, we apply a Long Short-Term Memory (LSTM) network to analyze the identities' sentiments triggered by event-based sentences. We evaluate the predicted EPA lexicon for the Github community using a recently collected dataset, and the results show that our approach can capture subtle changes in affective dimensions. Moreover, our induced sentiment lexicon shows that individuals from the two environments have different understandings of sentiment-related words and phrases but agree on nouns and adjectives. The sentiment features of "Github culture" are consistent with people in self-organized groups tending to reduce personal sentiment to improve group collaboration.
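The embedding-alignment step ("linear transformation method") can be sketched as an orthogonal Procrustes problem: find an orthogonal map that best carries one embedding space onto the other over a shared vocabulary. A toy sketch with synthetic data (the thesis's actual corpora, dimensionality, and training details are not given here):

```python
import numpy as np

# Two toy embedding spaces for a shared vocabulary; rows correspond to the
# same words. Y is the "general" space; X is the subculture space, here
# generated as a hidden rotation of Y so the recovery can be checked.
rng = np.random.default_rng(0)
Y = rng.normal(size=(50, 8))
R_true = np.linalg.qr(rng.normal(size=(8, 8)))[0]  # hidden orthogonal map
X = Y @ R_true.T

# Orthogonal Procrustes: W = argmin ||X W - Y||_F over orthogonal W,
# solved in closed form via the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

assert np.allclose(X @ W, Y, atol=1e-8)  # recovered map aligns the spaces
```

After alignment, lexicon entries known in the general space can be transferred into the subculture space (or vice versa) for comparison.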