
    Video annotation for studying the brain in naturalistic settings

    Studying the brain in naturalistic settings is a recent trend in neuroscience. Traditional brain imaging experiments have relied on highly simplified and artificial stimuli, but recently efforts have been put into studying the human brain in conditions closer to real life. The methodology used in these studies involves imitating naturalistic stimuli with a movie. Because of the complexity of the naturalistic stimulus, a simplified model of it is needed to handle it computationally. This model is obtained by making annotations: collecting information about the salient features of the movie to form a data structure. This data is compared with brain activity evolving in time to search for possible correlations. Not all features of a movie can be reliably annotated automatically: semantic features of a movie require manual annotation, which is in some cases problematic due to the various cinematic techniques adopted. Understanding these techniques helps in analyzing and annotating movies. The movie Match Factory Girl (Tulitikkutehtaan Tyttö, Aki Kaurismäki, 1990) was used as a stimulus in studying the brain in naturalistic settings. To help the analysis of the acquired data, the salient visual features of the movie were annotated. In this work, existing annotation approaches and the technologies available for annotation were reviewed. Annotations help organize information, which is why they are nowadays found everywhere. Different annotation tools and technologies are being developed constantly. Furthermore, the development of automatic video analysis methods is going to provide more meaningful annotations in the future.
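The comparison described above, in which an annotated stimulus feature is tested against time-evolving brain activity, can be sketched as a simple correlation. The annotation track and voxel time series below are made-up illustrations, not data from the study:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical annotation track (e.g. 1 = a face is on screen in this time bin)
annotation = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]
# Made-up activity of one voxel, sampled in the same time bins
voxel = [0.1, 0.2, 0.9, 1.1, 0.8, 0.3, 0.2, 1.0, 0.9, 0.1]

r = pearson_r(annotation, voxel)  # high r suggests the voxel tracks the feature
```

In practice the annotation would typically be convolved with a haemodynamic response model before being correlated with fMRI data; the sketch omits that step.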

    A Model of the Network Architecture of the Brain that Supports Natural Language Processing

    For centuries, neuroscience has proposed models of the neurobiology of language processing that are static and localised to a few temporal and inferior frontal regions. Although existing models have offered some insight into the processes underlying lower-level language features, they have largely overlooked how language operates in the real world. Here, we aimed at investigating the network organisation of the brain and how it supports language processing in a naturalistic setting. We hypothesised that the brain is organised in a multiple core-periphery and dynamic modular architecture, with canonical language regions forming high-connectivity hubs. Moreover, we predicted that language processing would be distributed to much of the rest of the brain, allowing it to perform more complex tasks and to share information with other cognitive domains. To test these hypotheses, we collected the Naturalistic Neuroimaging Database of people watching full-length movies during functional magnetic resonance imaging. We applied network algorithms to capture the voxel-wise architecture of the brain in individual participants and inspected variations in activity distribution over different stimuli and over more complex language features. Our results confirmed the hypothesis that the brain is organised in a flexible multiple core-periphery architecture with large dynamic communities. Language processing was distributed to much of the rest of the brain, together forming multiple communities. Canonical language regions constituted hubs, explaining why they consistently appear in various other neurobiology of language models. Moreover, language processing was supported by other regions, such as visual cortex and episodic memory regions, when processing more complex context-specific language features.
Overall, our flexible and distributed model of language comprehension and the brain points to additional brain regions and pathways that could be exploited for novel and more individualised therapies for patients suffering from speech impairments.
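The hub notion in this abstract, regions that concentrate connectivity, can be illustrated with a toy connectivity graph and plain degree counting. The region labels and edges below are invented for illustration and are not parcels or results from the study:

```python
# Toy functional-connectivity graph as an edge list; node names are
# illustrative region labels only.
edges = [
    ("IFG", "STG"), ("IFG", "MTG"), ("IFG", "V1"),
    ("STG", "MTG"), ("STG", "Hippocampus"),
    ("V1", "Hippocampus"),
]

def degree_centrality(edge_list):
    """Count edges per node; high-degree nodes are candidate hubs."""
    deg = {}
    for a, b in edge_list:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

deg = degree_centrality(edges)
hubs = [n for n, d in deg.items() if d == max(deg.values())]
```

Real core-periphery and modularity analyses use weighted, voxel-wise graphs and dedicated community-detection algorithms; degree counting is only the simplest possible proxy for hubness.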

    The neuro-cognitive representation of word meaning resolved in space and time.

    One of the core human abilities is that of interpreting symbols. Prompted with a perceptual stimulus devoid of any intrinsic meaning, such as a written word, our brain can access a complex multidimensional representation, called semantic representation, which corresponds to its meaning. Notwithstanding decades of neuropsychological and neuroimaging work on the cognitive and neural substrate of semantic representations, many questions are left unanswered. The research in this dissertation attempts to unravel one of them: are the neural substrates of different components of concrete word meaning dissociated? In the first part, I review the different theoretical positions and empirical findings on the cognitive and neural correlates of semantic representations. I highlight how recent methodological advances, namely the introduction of multivariate methods for the analysis of distributed patterns of brain activity, broaden the set of hypotheses that can be empirically tested. In particular, they allow the exploration of the representational geometries of different brain areas, which is instrumental to the understanding of where and when the various dimensions of the semantic space are activated in the brain. Crucially, I propose an operational distinction between motor-perceptual dimensions (i.e., those attributes of the objects referred to by the words that are perceived through the senses) and conceptual ones (i.e., the information that is built via a complex integration of multiple perceptual features). In the second part, I present the results of the studies I conducted in order to investigate the automaticity of retrieval, topographical organization, and temporal dynamics of motor-perceptual and conceptual dimensions of word meaning. 
First, I show how the representational spaces retrieved with different behavioral and corpus-based methods (i.e., Semantic Distance Judgment, Semantic Feature Listing, WordNet) appear to be highly correlated and overall consistent within and across subjects. Second, I present the results of four priming experiments suggesting that perceptual dimensions of word meaning (such as implied real-world size and sound) are recovered in an automatic but task-dependent way during reading. Third, thanks to a functional magnetic resonance imaging experiment, I show a representational shift along the ventral visual path: from perceptual features, preferentially encoded in primary visual areas, to conceptual ones, preferentially encoded in mid and anterior temporal areas. This result indicates that complementary dimensions of the semantic space are encoded in a distributed yet partially dissociated way across the cortex. Fourth, by means of a study conducted with magnetoencephalography, I present evidence of an early (around 200 ms after stimulus onset) simultaneous access to both motor-perceptual and conceptual dimensions of the semantic space, thanks to different aspects of the signal: inter-trial phase coherence appears to be key for the encoding of perceptual dimensions, while spectral power changes appear to support the encoding of conceptual ones. These observations suggest that the neural substrates of different components of symbol meaning can be dissociated in terms of localization and of the feature of the signal encoding them, while sharing a similar temporal evolution.
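The exploration of representational geometries described here is commonly done by correlating dissimilarity structures from two sources (representational similarity analysis). The two small matrices below are invented, e.g. one derived from behavioral judgments and one from a brain region:

```python
import math

def upper_triangle(m):
    """Flatten the upper triangle of a square dissimilarity matrix."""
    n = len(m)
    return [m[i][j] for i in range(n) for j in range(i + 1, n)]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented 3x3 dissimilarity matrices over the same three words
rdm_behavior = [[0.0, 1.0, 2.0],
                [1.0, 0.0, 1.0],
                [2.0, 1.0, 0.0]]
rdm_brain    = [[0.0, 0.9, 2.1],
                [0.9, 0.0, 1.2],
                [2.1, 1.2, 0.0]]

# Agreement between the two representational geometries
r_rsa = pearson_r(upper_triangle(rdm_behavior), upper_triangle(rdm_brain))
```

Rank (Spearman) correlation is often preferred when comparing dissimilarity matrices; Pearson keeps the sketch short.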

    Decoding the consumer’s brain: Neural representations of consumer experience

    Understanding consumer experience – what consumers think about brands, how they feel about services, whether they like certain products – is crucial to marketing practitioners. ‘Neuromarketing’, as the application of neuroscience in marketing research is called, has generated excitement with the promise of understanding consumers’ minds by probing their brains directly. Recent advances in neuroimaging analysis leverage machine learning and pattern classification techniques to uncover patterns from neuroimaging data that can be associated with thoughts and feelings. In this dissertation, I measure the brain responses of consumers with functional magnetic resonance imaging (fMRI) in order to ‘decode’ their minds. In three different studies, I demonstrate how different aspects of consumer experience can be studied with fMRI recordings. First, I study how consumers think about brand image by comparing their brain responses during passive viewing of visual templates (photos depicting various social scenarios) to those during active visualizing of a brand’s image. Second, I use brain responses during the viewing of affective pictures to decode emotional responses during the watching of movie trailers. Lastly, I examine whether marketing videos that evoke s
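The decoding step described, associating distributed activity patterns with conditions, can be sketched with a nearest-centroid classifier. The two-voxel "patterns" and condition labels below are invented for illustration:

```python
# Minimal nearest-centroid "decoder": training patterns are hypothetical
# voxel vectors labelled by the affective condition that evoked them.
def centroid(vectors):
    """Elementwise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def decode(pattern, centroids):
    """Assign the label whose class centroid is closest in Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(pattern, centroids[label]))

train = {
    "positive": [[1.0, 0.1], [0.9, 0.2]],
    "negative": [[0.1, 1.0], [0.2, 0.8]],
}
centroids = {label: centroid(vs) for label, vs in train.items()}
label = decode([0.8, 0.3], centroids)  # new, unlabelled response pattern
```

Actual fMRI decoding works on thousands of voxels with cross-validated classifiers; the nearest-centroid rule is only the simplest member of that family.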

    Consensus Paper: Current Perspectives on Abstract Concepts and Future Research Directions

    Abstract concepts are relevant to a wide range of disciplines, including cognitive science, linguistics, psychology, cognitive, social, and affective neuroscience, and philosophy. This consensus paper synthesizes the work and views of researchers in the field, discussing current perspectives on theoretical and methodological issues and recommendations for future research. In this paper, we urge researchers to go beyond the traditional abstract-concrete dichotomy and to consider the multiple dimensions that characterize concepts (e.g., sensorimotor experience, social interaction, conceptual metaphor), as well as the mediating influence of linguistic and cultural context on conceptual representations. We also promote the use of interactive methods to investigate both the comprehension and production of abstract concepts, while also focusing on individual differences in conceptual representations. Overall, we argue that abstract concepts should be studied in a more nuanced way that takes into account their complexity and diversity, which should permit a fuller, more holistic understanding of abstract cognition.

    Semantic radical consistency and character transparency effects in Chinese: an ERP study

    BACKGROUND: This event-related potential (ERP) study aims to investigate the representation and temporal dynamics of Chinese orthography-to-semantics mappings by simultaneously manipulating character transparency and semantic radical consistency. Character components, referred to as radicals, make up the building blocks used dur...

    Aligning computer and human visual representations

    Both computer vision and the human visual system target the same goal: to accomplish visual tasks easily via a set of representations. In this thesis, we study to what extent representations from computer vision models align with human visual representations. To study this research question we used an interdisciplinary approach, integrating methods from psychology, neuroscience and computer vision. Such an approach is aimed at providing new insight into the understanding of human visual representations. In the four chapters of the thesis, we tested computer vision models against brain data obtained with electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). The main findings can be summarized as follows: 1) computer vision models with one or two computational stages correlate with visual representations of intermediate complexity in the human brain, 2) models with multiple computational stages correlate best with the hierarchy of representations in the human visual system, 3) computer vision models do not align one-to-one with the temporal hierarchy of representations in the visual cortex, and 4) not only visual but also semantic representations correlate with representations in the human visual system.

    The process of inference making in reading comprehension: an ERP analysis

    Doctoral thesis, Universidade Federal de Santa Catarina, Centro de Comunicação e Expressão, Programa de Pós-Graduação em Letras/Inglês e Literatura Correspondente. Much of recent research on discourse comprehension has centered on the readers' ability to construct coherent mental representations of texts. In order to form a unified representation of a given text, a reader must be able to join the information presented in the text with his or her background knowledge to construe meaning that may not be explicitly stated in the text, through the generation of inferences. In this study, the process of inference making by native speakers of English while reading two different types of text was investigated using electroencephalography (EEG). Subjects read narrative and expository paragraphs and judged the plausibility of the final sentence of each four-sentence-long paragraph by reference to the previous information. The analysis of the data focused on two ERP (event-related brain potential) components, the N1 and the N400, and on the accuracy of behavioral responses. N400 amplitudes revealed that exposition was more demanding than narration in terms of semantic processing, whereas behavioral data showed that subjects were more prone to generate inferences when reading exposition. Concerning the involvement of the right and left hemispheres in the process of inference making, there were no significant differences in terms of ERP amplitudes, although the right hemisphere showed a tendency towards greater participation when subjects were reading the last sentence of the paragraphs and had to judge whether this sentence was coherent with the previous sentences.
Overall, this study suggests that the two types of text investigated are processed differently by the brain, as revealed by the nuances shown in the N1 and N400 components across the two last sentences of the paragraphs. Even though it was not possible to delineate a clear picture in terms of brain processes, given the lack of robust results, this study might be one of the first of many steps towards a complete understanding of the cognitive processes involved in discourse comprehension.
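The ERP measures this kind of study relies on, trial-averaged waveforms and mean amplitude in a component window, can be sketched as follows; the epochs, sample times, and the 300-500 ms N400 window are illustrative values, not the study's data:

```python
def grand_average(epochs):
    """Pointwise mean across trials (each epoch is a list of voltage samples)."""
    n = len(epochs)
    return [sum(trial[i] for trial in epochs) / n for i in range(len(epochs[0]))]

def window_mean(erp, times_ms, lo=300, hi=500):
    """Mean amplitude of the averaged waveform inside a latency window."""
    vals = [v for v, t in zip(erp, times_ms) if lo <= t <= hi]
    return sum(vals) / len(vals)

times = [0, 100, 200, 300, 400, 500]  # ms after stimulus onset
trials = [                            # two made-up single-trial epochs (microvolts)
    [0.0, 1.0, 0.5, -2.0, -3.0, -1.0],
    [0.2, 0.8, 0.3, -2.4, -2.6, -1.2],
]
erp = grand_average(trials)
n400 = window_mean(erp, times)  # more negative = larger N400 effect
```

Real ERP pipelines average over many trials per condition and compare window means across conditions and electrodes; the structure of the computation is the same.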

    Application of Common Sense Computing for the Development of a Novel Knowledge-Based Opinion Mining Engine

    The ways people express their opinions and sentiments have radically changed in the past few years thanks to the advent of social networks, web communities, blogs, wikis and other online collaborative media. The distillation of knowledge from this huge amount of unstructured information can be a key factor for marketers who want to create an image or identity in the minds of their customers for their product, brand, or organisation. These online social data, however, remain hardly accessible to computers, as they are specifically meant for human consumption. The automatic analysis of online opinions, in fact, involves a deep understanding of natural language text by machines, from which we are still very far. Hitherto, online information retrieval has been mainly based on algorithms relying on the textual representation of web pages. Such algorithms are very good at retrieving texts, splitting them into parts, checking the spelling and counting their words. But when it comes to interpreting sentences and extracting meaningful information, their capabilities are known to be very limited. Existing approaches to opinion mining and sentiment analysis, in particular, can be grouped into three main categories: keyword spotting, in which text is classified into categories based on the presence of fairly unambiguous affect words; lexical affinity, which assigns arbitrary words a probabilistic affinity for a particular emotion; and statistical methods, which calculate the valence of affective keywords and word co-occurrence frequencies on the basis of a large training corpus. Early works aimed to classify entire documents as containing overall positive or negative polarity, or rating scores of reviews. Such systems were mainly based on supervised approaches relying on manually labelled samples, such as movie or product reviews where the opinionist’s overall positive or negative attitude was explicitly indicated.
However, opinions and sentiments do not occur only at document level, nor are they limited to a single valence or target. Contrary or complementary attitudes toward the same topic or multiple topics can be present across the span of a document. In more recent works, text analysis granularity has been taken down to segment and sentence level, e.g., by using the presence of opinion-bearing lexical items (single words or n-grams) to detect subjective sentences, or by exploiting association rule mining for a feature-based analysis of product reviews. These approaches, however, are still far from being able to infer the cognitive and affective information associated with natural language, as they mainly rely on knowledge bases that are still too limited to efficiently process text at sentence level. In this thesis, common sense computing techniques are further developed and applied to bridge the semantic gap between word-level natural language data and the concept-level opinions conveyed by these. In particular, the ensemble application of graph mining and multi-dimensionality reduction techniques on two common sense knowledge bases was exploited to develop a novel intelligent engine for open-domain opinion mining and sentiment analysis. The proposed approach, termed sentic computing, performs a clause-level semantic analysis of text, which allows the inference of both the conceptual and emotional information associated with natural language opinions and, hence, a more efficient passage from (unstructured) textual information to (structured) machine-processable data. The engine was tested on three different resources, namely a Twitter hashtag repository, a LiveJournal database and a PatientOpinion dataset, and its performance compared both with results obtained using standard sentiment analysis techniques and using different state-of-the-art knowledge bases such as Princeton’s WordNet, MIT’s ConceptNet and Microsoft’s Probase.
Unlike most currently available opinion mining services, the developed engine does not base its analysis on a limited set of affect words and their co-occurrence frequencies, but rather on common sense concepts and the cognitive and affective valence conveyed by these. This allows the engine to be domain-independent and, hence, to be embedded in any opinion mining system for the development of intelligent applications in multiple fields such as the Social Web, HCI and e-health. Looking ahead, the combined novel use of different knowledge bases and of common sense reasoning techniques for opinion mining proposed in this work will eventually pave the way for the development of more bio-inspired approaches to the design of natural language processing systems capable of handling knowledge, retrieving it when necessary, making analogies and learning from experience.
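Of the three approach families into which the text groups existing sentiment methods, keyword spotting is simple enough to sketch directly; the affect-word lists below are tiny illustrative stand-ins for a real lexicon:

```python
# Keyword-spotting sentiment sketch: classify by counting fairly unambiguous
# affect words, the simplest of the three approach families described above.
POSITIVE = {"good", "great", "love", "excellent"}
NEGATIVE = {"bad", "poor", "hate", "terrible"}

def keyword_polarity(text):
    """Return 'positive'/'negative' by affect-word count, None on tie/absence."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return None  # no unambiguous affect words, or a tie
```

The `None` case is exactly where this family of methods breaks down, and why the concept-level analysis proposed in the thesis looks beyond surface affect words.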