
    Space, time and number: common coding mechanisms and interactions between domains

    Space, time and number are key dimensions that underlie how we perceive, identify and act within the environment. They are interconnected in our behaviour and brain. In this study, we examined interdependencies between these dimensions. To this end, left- and right-handed participants performed an object-collision task that required space–time processing and arithmetic tests that involved number processing. Handedness influenced collision detection, with left-handers being more accurate than right-handers, in line with the premise that hand preference shapes individual differences through sensorimotor experiences and distinct interhemispheric integration patterns. The data further showed that successful collision detection predicted arithmetic achievement, at least in right-handers. These findings suggest that handedness plays a mediating role in binding information processing across domains, likely due to selective connectivity properties within the sensorimotor system that are guided by hemispheric lateralisation patterns. Peer reviewed.

    Neuroadaptive modelling for generating images matching perceptual categories

    Brain-computer interfaces enable active communication and execution of a pre-defined set of commands, such as typing a letter or moving a cursor. However, they have thus far not been able to infer more complex intentions or adapt more complex output based on brain signals. Here, we present neuroadaptive generative modelling, which uses a participant's brain signals as feedback to adapt a boundless generative model and generate new information matching the participant's intentions. We report an experiment validating the paradigm in generating images of human faces. In the experiment, participants were asked to focus specifically on perceptual categories, such as old or young people, while being presented with computer-generated, photorealistic faces with varying visual features. Their EEG signals associated with the images were then used as a feedback signal to update a model of the user's intentions, from which new images were generated using a generative adversarial network. A double-blind follow-up in which participants evaluated the output shows that neuroadaptive modelling can be utilised to produce images matching the perceptual category features. The approach demonstrates brain-based creative augmentation between computers and humans for producing new information matching the human operator's perceptual categories. Peer reviewed.
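    The closed loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the latent dimensionality, the stand-in generator and the relevance-weighted update rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 16  # dimensionality of the generator's latent space (assumed)

def generate(latent):
    """Stand-in for a GAN generator: maps a latent vector to an 'image'."""
    return np.tanh(latent @ rng.standard_normal((LATENT_DIM, 64)))

def update_intention(latents, relevance):
    """Estimate the user's intention as the relevance-weighted mean of the
    latent vectors of the images presented so far."""
    w = np.asarray(relevance, dtype=float)
    w = w / w.sum()
    return w @ latents

# One iteration of the loop: present images, read (here simulated) EEG
# relevance scores, update the intention estimate, generate a new image.
latents = rng.standard_normal((8, LATENT_DIM))  # latents of shown images
relevance = rng.uniform(size=8)                 # classifier output per image
intention = update_intention(latents, relevance)
new_image = generate(intention)
```

    In the real system the `relevance` vector would come from a classifier applied to the EEG epochs, and the loop would iterate until the generated images converge on the target category.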

    Time to imagine moving: Simulated motor activity affects time perception

    Sensing the passage of time is important for countless daily tasks, yet time perception is easily influenced by perception, cognition, and emotion. Mechanistic accounts have traditionally regarded time perception as part of central cognition. Since proprioception, action execution, and sensorimotor contingencies also affect time perception, perception-action integration theories suggest motor processes are central to the experience of the passage of time. We investigated whether sensory information and motor activity may interactively affect the perception of the passage of time. Two prospective timing tasks involved timing a visual stimulus display conveying optical flow at increasing or decreasing velocity. While doing the timing tasks, participants were instructed to imagine themselves moving at increasing or decreasing speed, independently of the optical flow. In the direct-estimation task, the duration of the visual display was explicitly judged in seconds, while in the motor-timing task, participants were asked to keep a constant pace of tapping. In the direct-estimation task, imagining accelerating movement resulted in relative overestimation of time, or time dilation, while decelerating movement elicited relative underestimation, or time compression. In the motor-timing task, imagined accelerating movement also accelerated tapping speed, replicating the time-dilation effect. The experiments show that imagined movement affects time perception, suggesting a causal role of simulated motor activity. We argue that imagined movements and optical flow are integrated by the temporal unfolding of sensorimotor contingencies. Consequently, as physical time is relative to spatial motion, so too is perception of time relative to imaginary motion. Peer reviewed.

    Anticipation of aversive visual stimuli lengthens perceived temporal duration

    Subjective estimates of elapsed time are sensitive to fluctuations in emotional state. While it is well known that dangerous and threatening situations, such as electric shocks or loud noises, are perceived as lasting longer than safe events, it remains unclear whether anticipating a threatening event speeds up or slows down subjective time and what defines the direction of the distortion. We examined whether the anticipation of uncertain visual aversive events resulted in underestimation or overestimation of perceived duration. Participants performed a temporal bisection task, in which they estimated durations of visual cues relative to previously learnt long and short standard durations. The colour of the to-be-timed visual cue signalled either a 50% or 0% probability of encountering an aversive image at the end of the interval. The cue durations were found to be overestimated due to anticipation of aversive images, even when no image was shown afterwards. Moreover, the overestimation was more pronounced in people who reported feeling more anxious while anticipating the image. These results demonstrate that anxiogenic anticipation of uncertain visual threats induces temporal overestimation, which challenges a recently proposed view that temporal underestimation evoked by uncertain threats is due to anxiety. Peer reviewed.
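    For illustration, the bisection point of such a task (the cue duration judged "long" half the time) can be computed from binary responses as follows; the toy data and function names are hypothetical, not taken from the study.

```python
import numpy as np

def proportion_long(durations, responses):
    """For each unique cue duration, the fraction of trials judged 'long'."""
    durations = np.asarray(durations, dtype=float)
    responses = np.asarray(responses, dtype=int)  # 1 = 'long', 0 = 'short'
    levels = np.unique(durations)
    p = np.array([responses[durations == d].mean() for d in levels])
    return levels, p

def bisection_point(levels, p):
    """Duration judged 'long' half the time, by linear interpolation."""
    return float(np.interp(0.5, p, levels))

# Toy data in milliseconds: temporal overestimation under threat would shift
# p('long') upward, lowering the bisection point relative to a safe condition.
levels, p = proportion_long(
    [400, 400, 600, 600, 800, 800, 1000, 1000],
    [0,   0,   0,   1,   1,   1,   1,    1])
bp = bisection_point(levels, p)
```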

    Collaborative Filtering with Preferences Inferred from Brain Signals

    Collaborative filtering is a common technique in which interaction data from a large number of users are used to recommend items that an individual may prefer but has not yet interacted with. Previous approaches have achieved this using a variety of behavioral signals, from dwell time and clickthrough rates to self-reported ratings. However, such signals are mere estimates of the users' real underlying preferences. Here, we use brain-computer interfacing to infer preferences directly from the human brain. We then utilize these preferences in a collaborative filtering setting and report results from an experiment where brain-inferred preferences are used in a neural collaborative filtering framework. Our results demonstrate, for the first time, that brain-computer interfacing can provide a viable alternative to behavioral and self-reported preferences in realistic recommendation scenarios. We also discuss the broader implications of our findings for personalization systems and user privacy. Peer reviewed.
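    As a sketch of the recommendation side only (plain matrix factorization, not the neural architecture used in the paper), the snippet below shows how binary brain-inferred relevance labels could drive collaborative filtering; the preference matrix and hyperparameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Binary preference matrix: prefs[u, i] = 1 if the BCI classifier inferred
# that user u found item i relevant (values here are illustrative).
prefs = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 1]], dtype=float)

def factorize(R, k=2, steps=2000, lr=0.05, reg=0.01):
    """Matrix factorization by full-batch gradient descent: R ~ U @ V.T."""
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(steps):
        err = R - U @ V.T
        U += lr * (err @ V - reg * U)
        V += lr * (err.T @ U - reg * V)
    return U, V

U, V = factorize(prefs)
scores = U @ V.T  # predicted preference for every user-item pair
```

    Unobserved user-item pairs with high predicted scores would become the recommendations.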

    Information gain modulates brain activity evoked by reading

    The human brain processes language to optimise efficient communication. Studies have provided extensive evidence that the brain's response to language is affected both by lower-level features, such as word length and frequency, and by syntactic and semantic violations within sentences. However, our understanding of cognitive processes at the discourse level remains limited: how does the relationship between words and the wider topic one is reading about affect language processing? We propose an information-theoretic model to explain cognitive resourcing. In a study in which participants read sentences from Wikipedia entries, we show that information gain, an information-theoretic measure that quantifies the specificity of a word given its topic context, modulates word-synchronised brain activity in the EEG. Words with high information gain amplified a slow positive shift in the event-related potential. To show that the effect persists for individual and unseen brain responses, we furthermore show that a classifier trained on EEG data can successfully predict information gain from previously unseen EEG. The findings suggest that biological information processing seeks to maximise performance subject to constraints on information capacity. Peer reviewed.
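    One plausible formalisation of such a specificity measure (a sketch, not necessarily the definition used in the study) scores a word by the Kullback-Leibler divergence between the topic distribution the word induces and the prior over topics:

```python
import math

def information_gain(word, topic_counts, prior):
    """KL divergence between the topic distribution conditioned on the word
    and the prior topic distribution: sum_t p(t|w) * log2(p(t|w) / p(t))."""
    counts = {t: c.get(word, 0) for t, c in topic_counts.items()}
    total = sum(counts.values())
    if total == 0:
        return 0.0
    ig = 0.0
    for t, c in counts.items():
        if c == 0:
            continue
        p_t_w = c / total
        ig += p_t_w * math.log2(p_t_w / prior[t])
    return ig

# Toy corpus: 'neuron' is specific to one topic, 'the' is spread evenly,
# so 'neuron' carries more information about what is being read.
topic_counts = {
    "brain":  {"the": 10, "neuron": 9},
    "sports": {"the": 10, "neuron": 1},
}
prior = {"brain": 0.5, "sports": 0.5}
```

    Under this definition a topic-neutral function word scores zero, while a word concentrated in one topic scores high, matching the intuition of "specificity given topic context".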

    Brain-computer interface for generating personally attractive images

    While we instantaneously recognize a face as attractive, it is much harder to explain what exactly defines personal attraction. This suggests that attraction depends on implicit processing of complex, culturally and individually defined features. Generative adversarial neural networks (GANs), which learn to mimic complex data distributions, can potentially model subjective preferences unconstrained by pre-defined model parameterization. Here, we present generative brain-computer interfaces (GBCI), coupling GANs with brain-computer interfaces. GBCI first presents a selection of images and captures personalized attractiveness reactions toward the images via electroencephalography. These reactions are then used to control a GAN model, finding a representation that matches the features constituting an attractive image for an individual. We conducted an experiment (N=30) to validate GBCI using a face-generating GAN, producing images that were hypothesized to be individually attractive. In a double-blind evaluation of the GBCI-produced images against matched controls, we found that GBCI yielded highly accurate results. Thus, the use of EEG responses to control a GAN presents a valid tool for interactive information generation. Furthermore, the GBCI-derived images visually replicated known effects from social neuroscience, suggesting that the individually responsive, generative nature of GBCI provides a powerful new tool for mapping individual differences and visualizing cognitive-affective processing. Peer reviewed.
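    The EEG side of such an interface can be illustrated by extracting a single-trial late-positivity feature from stimulus-locked epochs, one common way to quantify attentional reactions to images. The sampling rate, time window and simulated data below are assumptions for the sketch, not the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
FS = 250  # sampling rate in Hz (assumed)

def epoch(eeg, onsets, pre=0.2, post=0.8):
    """Cut stimulus-locked epochs from a continuous single-channel signal."""
    n_pre, n_post = int(pre * FS), int(post * FS)
    return np.stack([eeg[o - n_pre:o + n_post] for o in onsets])

def late_positivity(epochs, win=(0.3, 0.5), pre=0.2):
    """Mean amplitude in a late positive window (here 300-500 ms post-onset),
    baseline-corrected against the pre-stimulus interval."""
    n_pre = int(pre * FS)
    baseline = epochs[:, :n_pre].mean(axis=1, keepdims=True)
    a, b = (int((pre + t) * FS) for t in win)
    return (epochs - baseline)[:, a:b].mean(axis=1)

eeg = rng.standard_normal(5000)      # simulated continuous EEG
onsets = np.arange(500, 4500, 400)   # stimulus onsets in samples
feats = late_positivity(epoch(eeg, onsets))  # one feature per image
```

    Per-image features like these would feed a classifier whose output steers the GAN's latent representation toward the individually attractive region.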

    Manipulating Bodily Presence Affects Cross-Modal Spatial Attention: A Virtual-Reality-Based ERP Study

    Earlier studies have revealed cross-modal visuo-tactile interactions in endogenous spatial attention. The current research used event-related potentials (ERPs) and virtual reality (VR) to identify how visual cues of the perceiver’s body affect visuo-tactile interaction in endogenous spatial attention and at what point in time the effect takes place. A bimodal oddball task with lateralized tactile and visual stimuli was presented in two VR conditions, one with and one without visible hands, and one VR-free control with hands in view. Participants were required to silently count one type of stimulus and ignore all other stimuli presented in an irrelevant modality or location. The presence of hands was found to modulate early and late components of somatosensory and visual evoked potentials. At sensory-perceptual stages, the presence of virtual or real hands amplified attention-related negativity in the somatosensory N140 and cross-modal interaction in the somatosensory and visual P200. At postperceptual stages, an amplified N200 component was obtained in somatosensory and visual evoked potentials, indicating increased response inhibition to non-target stimuli. The somatosensory, but not the visual, N200 effect was enhanced when the virtual hands were present. The findings suggest that bodily presence affects sustained cross-modal spatial attention between vision and touch and that this effect is specifically present in ERPs related to early and late sensory processing, as well as response inhibition, but does not affect later attention- and memory-related P3 activity. Finally, the experiments provide commensurable scenarios for estimating the signal-to-noise ratio to quantify effects related to the use of a head-mounted display (HMD). However, despite valid a priori reasons for fearing signal interference due to an HMD, we observed no significant drop in the robustness of our ERP measurements. Peer reviewed.

    Seeking Flow from Fine-Grained Log Data

    Flow is the experience of deep absorption in a demanding, intrinsically motivating task conducted with skill. We consider how to measure behavioural correlates of flow from fine-grained process data extracted from programming environments. Specifically, we propose measuring affective factors related to flow non-intrusively based on log data. At present, such affective factors are typically measured intrusively (by self-report), which naturally breaks the flow. We evaluate our approach in a pilot study, using log data and survey data collected from an introductory programming course. The log data is fine-grained, containing timestamped actions at the keystroke level from the process of solving programming assignments, while the survey data was collected at the end of every completed assignment. The survey data in the pilot study comprises Likert-like items measuring perceived educational value, perceived difficulty, and students' self-reported focus when solving the assignments. We study raw and derived log data metrics by looking for relationships between the metrics and the survey data. We discuss the results of the pilot study and provide suggestions for future work related to non-intrusive measures of programmer affect. Peer reviewed.
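    For instance, a few derived metrics of the kind discussed above can be computed directly from keystroke timestamps; the metric names and the pause threshold are illustrative, not the paper's definitions.

```python
from statistics import median

def keystroke_metrics(timestamps, pause_threshold=2.0):
    """Derive simple behavioural metrics from keystroke timestamps (seconds):
    median inter-keystroke interval, number of long pauses, and time spent in
    active typing, as possible non-intrusive proxies for sustained focus."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "median_iki": median(intervals),
        "long_pauses": sum(i > pause_threshold for i in intervals),
        "active_time": sum(i for i in intervals if i <= pause_threshold),
    }

log = [0.0, 0.3, 0.5, 0.9, 4.2, 4.4, 4.6]  # toy keystroke log (seconds)
m = keystroke_metrics(log)
```

    Metrics like these, aggregated per assignment, could then be correlated with the per-assignment survey responses.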