Predicting informativeness of words from human brain signals
We study the effect of word informativeness on brain activity associated with reading, i.e. whether the brain processes informative and uninformative words differently. Unlike most studies that investigate the relationship between language and the brain, we do not study linguistic constructs such as syntax or semantics, but informativeness, an attribute statistically computable from text. Here, informativeness is defined as the ability of a word to distinguish the topic to which it is related. For instance, the word 'Gandhi' is better at distinguishing the topic of India from other topics than the word 'hot'. We utilize electroencephalography (EEG) data recorded from subjects reading Wikipedia documents on various topics. We report two experiments: 1) a neurophysiological experiment investigating the neural correlates of informativeness and 2) a single-trial event-related brain potential (ERP) classification experiment, in which we predict word informativeness from brain signals. We show that word informativeness has a significant effect on the P200, P300, and P600 ERP components. Furthermore, we demonstrate that word informativeness can be predicted from ERPs with a performance better than a random baseline using a Linear Discriminant Analysis (LDA) classifier. Additionally, we present a language-model-based statistical model that allows the estimation of word informativeness from a corpus of text.
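As an illustration of the classification step described above, the sketch below trains a Linear Discriminant Analysis classifier on flattened single-trial ERP features with scikit-learn. The array shapes, the shrinkage setting, and the random placeholder data are assumptions for the sake of a runnable example, not the study's actual pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Placeholder single-trial ERP data: (trials, channels, time samples); labels 1 = informative word
epochs = np.random.randn(400, 32, 128)
labels = np.random.randint(0, 2, size=400)

X = epochs.reshape(len(epochs), -1)  # flatten each trial into one feature vector
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # shrinkage regularizes the many features

# Cross-validated accuracy, to be compared against the 0.5 chance level
print("mean accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```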
Neuroadaptive modelling for generating images matching perceptual categories
Brain-computer interfaces enable active communication and execution of a pre-defined set of commands, such as typing a letter or moving a cursor. However, they have thus far not been able to infer more complex intentions or adapt more complex output based on brain signals. Here, we present neuroadaptive generative modelling, which uses a participant's brain signals as feedback to adapt a boundless generative model and generate new information matching the participant's intentions. We report an experiment validating the paradigm in generating images of human faces. In the experiment, participants were asked to focus specifically on perceptual categories, such as old or young people, while being presented with computer-generated, photorealistic faces with varying visual features. Their EEG signals associated with the images were then used as a feedback signal to update a model of the user's intentions, from which new images were generated using a generative adversarial network. A double-blind follow-up, in which participants evaluated the output, shows that neuroadaptive modelling can be utilised to produce images matching the perceptual category features. The approach demonstrates brain-based creative augmentation between computers and humans for producing new information matching the human operator's perceptual categories.
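The following is a schematic sketch of the closed loop described above, not the authors' implementation: the stub functions stand in for the GAN generator, EEG acquisition, and an ERP relevance classifier, and only the update of the intention estimate is meant to be illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_faces(latents):
    # Stand-in for a GAN generator; a real system would map latents to face images
    return latents

def record_eeg_epochs(images):
    # Stand-in for EEG acquisition time-locked to each presented image
    return images

def relevance_scores(epochs):
    # Stand-in for an ERP classifier; higher score = response more like the target category
    return rng.random(len(epochs))

latent_estimate = np.zeros(512)  # running estimate of the intended region of latent space
for _ in range(10):
    latents = latent_estimate + 0.5 * rng.standard_normal((8, 512))  # candidate latent vectors
    images = generate_faces(latents)
    scores = relevance_scores(record_eeg_epochs(images))
    weights = scores / scores.sum()
    # Pull the estimate toward latents whose images elicited target-like brain responses
    latent_estimate = weights @ latents
```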
Why Do Users Issue Good Queries? Neural Correlates of Term Specificity
Despite advances in the past few decades in studying what kinds of queries users input to search engines and how to suggest queries to users, the fundamental question of what makes human cognition able to estimate the goodness of query terms is largely unanswered. For example, a person searching for information about "cats" is able to choose query terms such as "housecat", "feline", or "animal" and avoid terms like "similar", "variety", and "distinguish". We investigated the association between the specificity of terms occurring in documents and human brain activity measured via electroencephalography (EEG). We analyzed the brain activity data of fifteen participants, recorded in response to reading terms from Wikipedia documents. Term specificity was shown to be associated with the amplitude of evoked brain responses. The results indicate that by being able to determine which terms carry maximal information about, and can best discriminate between, documents, people have the capability to enter good query terms. Moreover, our results suggest that the effective query term selection process, often observed in practical search behavior studies, has a neural basis. We believe our findings constitute an important step in revealing the cognitive processing behind query formulation and in evaluating the informativeness of language in general.
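For intuition only: inverse document frequency is one standard proxy for term specificity, computed here over a toy document collection; the study's own specificity measure may differ from this sketch.

```python
import math

# Toy documents represented as sets of terms (placeholders, not the study's corpus)
documents = [
    {"housecat", "feline", "animal"},
    {"animal", "variety", "distinguish"},
    {"similar", "variety", "animal"},
]

def idf(term, docs):
    df = sum(term in d for d in docs)  # number of documents containing the term
    return math.log(len(docs) / df) if df else 0.0

print(idf("housecat", documents))  # rare, topic-specific term: high IDF
print(idf("animal", documents))    # common, unspecific term: low IDF
```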
Information gain modulates brain activity evoked by reading
The human brain processes language to optimise efficient communication. Studies have provided extensive evidence that the brain's response to language is affected both by lower-level features, such as word length and frequency, and by syntactic and semantic violations within sentences. However, our understanding of cognitive processes at the discourse level remains limited: How does the relationship between words and the wider topic one is reading about affect language processing? We propose an information-theoretic model to explain cognitive resourcing. In a study in which participants read sentences from Wikipedia entries, we show that information gain, an information-theoretic measure that quantifies the specificity of a word given its topic context, modulates word-synchronised brain activity in the EEG. Words with high information gain amplified a slow positive shift in the event-related potential. To demonstrate that the effect persists for individual and unseen brain responses, we further show that a classifier trained on EEG data can successfully predict information gain from previously unseen EEG. The findings suggest that biological information processing seeks to maximise performance subject to constraints on information capacity.
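One common way to make such a measure concrete, sketched below with assumed toy counts, is the KL divergence between the topic distribution conditioned on a word and the prior topic distribution; this is an illustration, not necessarily the paper's exact estimator.

```python
import numpy as np

def information_gain(word_counts_per_topic, topic_prior):
    """KL divergence between P(topic | word), estimated from counts, and the prior P(topic)."""
    posterior = word_counts_per_topic + 1e-9        # tiny smoothing to avoid division by zero
    posterior = posterior / posterior.sum()         # normalize to P(topic | word)
    return float(np.sum(posterior * np.log2(posterior / topic_prior)))

topic_prior = np.array([0.25, 0.25, 0.25, 0.25])    # four equally likely topics (assumed)
print(information_gain(np.array([40., 1., 1., 1.]), topic_prior))   # topic-specific word: high gain
print(information_gain(np.array([10., 11., 9., 10.]), topic_prior)) # evenly spread word: low gain
```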
Brain-computer interface for generating personally attractive images
While we instantaneously recognize a face as attractive, it is much harder to explain what exactly defines personal attraction. This suggests that attraction depends on implicit processing of complex, culturally and individually defined features. Generative adversarial neural networks (GANs), which learn to mimic complex data distributions, can potentially model subjective preferences unconstrained by pre-defined model parameterization. Here, we present generative brain-computer interfaces (GBCI), coupling GANs with brain-computer interfaces. GBCI first presents a selection of images and captures personalized attractiveness reactions toward the images via electroencephalography. These reactions are then used to control a GAN model, finding a representation that matches the features constituting an attractive image for an individual. We conducted an experiment (N=30) to validate GBCI, using a face-generating GAN to produce images hypothesized to be individually attractive. In a double-blind evaluation of the GBCI-produced images against matched controls, we found that GBCI yielded highly accurate results. Thus, the use of EEG responses to control a GAN presents a valid tool for interactive information generation. Furthermore, the GBCI-derived images visually replicated known effects from social neuroscience, suggesting that the individually responsive, generative nature of GBCI provides a powerful new tool for mapping individual differences and visualizing cognitive-affective processing.
actris-cloudnet/cloudnetpy: CloudnetPy 1.3.1
This release adds support for RPG Level 1 V4 files
actris-cloudnet/cloudnetpy: CloudnetPy 1.9.4
Fixes bug that misplaced RPG cloud radar time array
actris-cloudnet/cloudnetpy: CloudnetPy 1.25.1
Removes quality control from CloudnetPy package
Adds speckle filter to BASTA data
Removes classification results from profiles without any lidar data
actris-cloudnet/cloudnetpy: CloudnetPy 1.28.1
Use the same plotting routines for current and legacy files