
    The Social Situation Affects How We Process Feedback About Our Actions

    Humans achieve their goals in joint action tasks either by cooperation or competition. In the present study, we investigated the neural processes underpinning error and monetary reward processing in such cooperative and competitive situations. We used electroencephalography (EEG) and analyzed event-related potentials (ERPs) triggered by feedback in both social situations. Twenty-six dyads performed a joint four-alternative forced choice (4AFC) visual task either cooperatively or competitively. At the end of each trial, participants received performance feedback about their individual and joint errors and the accompanying monetary rewards. The outcome (a positive, negative, or neutral reward) depended on the pay-off matrix, which defined the social situation as either cooperative or competitive. We used linear mixed effects models to analyze the feedback-related negativity (FRN) and the threshold-free cluster enhancement (TFCE) method to explore activity across all electrodes and time points. We found main effects of outcome and social situation, but no interaction, at midline frontal electrodes. The FRN was more negative for losses than for wins in both social situations; however, the FRN amplitudes differed between the situations. Moreover, we compared monetary with neutral outcomes in both social situations. Our exploratory TFCE analysis revealed that feedback processing differs between cooperative and competitive situations at right temporo-parietal electrodes, where the cooperative situation elicited more positive amplitudes. Furthermore, the differences induced by the social situations were stronger in participants with higher scores on a perspective-taking test. In sum, our results replicate previous findings on the FRN and extend them by comparing neurophysiological responses to positive and negative outcomes in a task that simultaneously engages two participants in competitive and cooperative situations.
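
    The FRN described above is typically quantified as the mean amplitude in a post-feedback time window at a fronto-central electrode. The following Python sketch illustrates that extraction step on synthetic single-trial data; the sampling rate, window bounds, and effect size are illustrative assumptions, not the study's parameters.

```python
import numpy as np

FS = 500            # sampling rate in Hz (assumed)
EPOCH_START = -0.2  # epoch start relative to feedback onset, in seconds

def frn_mean_amplitude(epochs, t_min=0.23, t_max=0.33):
    """Mean amplitude in the FRN window for each trial.

    epochs: array of shape (n_trials, n_samples), baseline-corrected,
            from a fronto-central electrode (e.g., FCz).
    """
    i0 = int((t_min - EPOCH_START) * FS)
    i1 = int((t_max - EPOCH_START) * FS)
    return epochs[:, i0:i1].mean(axis=1)

# Toy data: losses carry a more negative deflection than wins.
rng = np.random.default_rng(0)
n, t = 100, int(0.8 * FS)          # 100 trials, epochs spanning -0.2..0.6 s
win = rng.normal(2.0, 1.0, (n, t))
loss = rng.normal(2.0, 1.0, (n, t))
w0 = int((0.23 + 0.2) * FS)
w1 = int((0.33 + 0.2) * FS)
loss[:, w0:w1] -= 4.0              # simulated FRN deflection for losses

frn_effect = frn_mean_amplitude(loss).mean() - frn_mean_amplitude(win).mean()
```

In the study itself, such single-trial amplitudes would then enter a linear mixed effects model with outcome and social situation as predictors.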

    Eye movements as a window to cognitive processes

    Eye movement research is a highly active and productive field. Here we focus on how the embodied nature of eye movements can act as a window to the brain and the mind. In particular, we discuss how conscious perception depends on the trajectory of fixated locations and consequently address how fixation locations are selected. Specifically, we argue that the selection of fixation points during visual exploration can be understood to a large degree based on retinotopically structured models. Yet these models largely ignore the spatiotemporal structure in eye-movement sequences. Explaining spatiotemporal structure in eye-movement trajectories requires an understanding of the spatiotemporal properties of the visual sampling process. With this in mind, we discuss the availability of external information to internal inference about causes in the world. We demonstrate that visual foraging is a dynamic process that can be systematically modulated either towards exploration or exploitation. For analyses at high temporal resolution, we suggest a new method: the renewal density, which allows investigating the precise temporal relation between eye movements and other actions, such as button presses. We conclude with an outlook and propose that eye movement research has reached a stage at which it can readily be combined with other research methods to utilize this window to the brain and mind to its fullest.
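
    The renewal density proposed above can be approximated by histogramming, for each reference event (e.g., a button press), the latencies of all target events (e.g., saccade onsets) within a window. The sketch below is an illustrative implementation under that assumption; the paper's exact estimator may differ.

```python
import numpy as np

def renewal_density(ref_times, target_times, t_max=1.0, bin_width=0.05):
    """Histogram of target-event latencies relative to each reference event,
    normalised to events per second per reference event."""
    ref = np.asarray(ref_times)
    tgt = np.asarray(target_times)
    lags = (tgt[None, :] - ref[:, None]).ravel()   # all pairwise latencies
    lags = lags[(lags >= 0) & (lags < t_max)]      # keep the causal window
    bins = np.arange(0, t_max + bin_width, bin_width)
    counts, edges = np.histogram(lags, bins=bins)
    density = counts / (len(ref) * bin_width)
    return density, edges

# Toy example: saccades reliably follow button presses by ~220 ms.
presses = np.arange(0.0, 50.0, 2.0)        # one press every 2 s
saccades = presses + 0.22                  # fixed 220 ms latency
density, edges = renewal_density(presses, saccades)
peak_bin = edges[np.argmax(density)]       # left edge of the peak bin
```

A peak in the density at a particular latency indicates a systematic temporal coupling between the two event streams.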

    Retrospective evaluation of whole exome and genome mutation calls in 746 cancer samples

    Funder: NCI U24CA211006. The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) curated consensus somatic mutation calls using whole exome sequencing (WES) and whole genome sequencing (WGS), respectively. Here, as part of the ICGC/TCGA Pan-Cancer Analysis of Whole Genomes (PCAWG) Consortium, which aggregated whole genome sequencing data from 2,658 cancers across 38 tumour types, we compare WES and WGS side-by-side in 746 TCGA samples, finding that ~80% of mutations overlap in covered exonic regions. We estimate that low variant allele fraction (VAF < 15%) and clonal heterogeneity contribute up to 68% of private WGS mutations and 71% of private WES mutations. We observe that ~30% of private WGS mutations trace to mutations identified by a single variant caller in WES consensus efforts. WGS captures both ~50% more variation in exonic regions and unobserved mutations in loci with variable GC-content. Together, our analysis highlights the technological divergences between two reproducible somatic variant detection efforts.
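
    The VAF threshold mentioned above is a simple ratio of read counts. A minimal illustration, with hypothetical function names:

```python
def vaf(alt_reads, total_reads):
    """Variant allele fraction: share of reads supporting the alternate allele."""
    if total_reads == 0:
        raise ValueError("no coverage at this locus")
    return alt_reads / total_reads

def is_low_vaf(alt_reads, total_reads, threshold=0.15):
    """Flag variants below the VAF < 15% cut-off discussed in the abstract."""
    return vaf(alt_reads, total_reads) < threshold

# A subclonal mutation seen in 6 of 80 reads has VAF 0.075 and is
# flagged as low-VAF; one seen in 30 of 80 reads (VAF 0.375) is not.
```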

    Decisions, Predictions, and Learning in the visual sense

    We experience the world through our senses, but we can only make sense of the incoming information because it is weighted and interpreted against the perceptual experience we gather throughout our lives. In this thesis, I present several approaches we used to investigate the learning of prior experience and its utilization for prediction-based computations in decision making. Teaching participants new categories is a good example of how new information is used to learn about and understand the world. In the first study I present, we taught participants new visual categories using a reinforcement learning paradigm. We recorded their brain activity before, during, and after prolonged learning over 24 sessions. This allowed us to show that initial learning of categories occurs relatively late during processing, in prefrontal areas. After extended learning, categorization occurs early during processing and is likely to occur in temporal structures. One possible computational mechanism to express prior information is the prediction of future input. In this thesis, I make use of a prominent theory of brain function, predictive coding. We performed two studies. In the first, we showed that the brain's expectations can surpass the reliability of incoming information: in a perceptual decision making task, a percept based on fill-in from the physiological blind spot is judged as more reliable than an identical percept from veridical input. In the second study, we showed that expectations operate across eye movements. There, we measured brain activity while peripheral predictions were violated over eye movements. We found two sets of prediction errors, early and late during processing. By changing the reliability of the stimulus using the blind spots, we additionally confirmed an important theoretical idea: the strength of a prediction violation is modified based on the reliability of the prediction.
So far, we used eye movements because they are useful for understanding the interaction between the brain's current information state and its expectations of future information. In a series of experiments, we modulated the amount of information the visual system is allowed to extract before a new eye movement is made. We developed a new paradigm that allows for experimental control of eye-movement trajectories as well as fixation durations. We show that interrupting the extraction of information influences the planning of new eye movements. In addition, we show that eye movement planning time follows Hick's law: a logarithmic increase of saccadic reaction time with an increasing number of possible targets. Most of the studies presented here tried to identify causal effects in human behavior or brain computations. Often, direct interventions in the system, such as brain stimulation or lesions, are needed for such causal statements. Unfortunately, not many methods are available to directly control the neurons of the brain, and even fewer to control the encoded expectations. Recent developments of the new optogenetic agent Melanopsin allow for direct activation and silencing of neuronal cells. In cooperation with researchers from the field of optogenetics, we developed a generative Bayesian model of Melanopsin that allows us to integrate physiological data over multiple experiments, include prior knowledge on biophysical constraints, and identify differences between proteins. After discussing these projects, I take a meta-perspective on my field and end this dissertation with a discussion and outlook of open science and statistical developments in cognitive science.
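
    Hick's law, referenced above, predicts a reaction time that grows logarithmically with the number of alternatives. A small sketch with illustrative (not fitted) coefficients:

```python
import math

def hicks_law_rt(n_targets, intercept=0.18, slope=0.03):
    """Predicted saccadic reaction time in seconds under Hick's law:
    intercept + slope * log2(n + 1). Coefficients are toy values,
    not estimates from the thesis."""
    return intercept + slope * math.log2(n_targets + 1)

# Doubling-plus-one the number of targets adds one "bit" of decision
# time: 1 target -> 0.21 s, 3 targets -> 0.24 s, 7 targets -> 0.27 s.
```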

    Unfold: an integrated toolbox for overlap correction, non-linear modeling, and regression-based EEG analysis

    Electrophysiological research with event-related brain potentials (ERPs) is increasingly moving from simple, strictly orthogonal stimulation paradigms towards more complex, quasi-experimental designs and naturalistic situations that involve fast, multisensory stimulation and complex motor behavior. As a result, electrophysiological responses from subsequent events often overlap with each other. In addition, the recorded neural activity is typically modulated by numerous covariates, which influence the measured responses in a linear or non-linear fashion. Examples of paradigms where systematic temporal overlap variations and low-level confounds between conditions cannot be avoided include combined electroencephalogram (EEG)/eye-tracking experiments during natural vision, fast multisensory stimulation experiments, and mobile brain/body imaging studies. However, even "traditional," highly controlled ERP datasets often contain a hidden mix of overlapping activity (e.g., from stimulus onsets, involuntary microsaccades, or button presses), and it is helpful or even necessary to disentangle these components for a correct interpretation of the results. In this paper, we introduce unfold, a powerful yet easy-to-use MATLAB toolbox for regression-based EEG analyses that combines existing concepts of massive univariate modeling ("regression-ERPs"), linear deconvolution modeling, and non-linear modeling with the generalized additive model into one coherent and flexible analysis framework. The toolbox is modular, compatible with EEGLAB, and can handle even large datasets efficiently. It also includes advanced options for regularization and the use of temporal basis functions (e.g., Fourier sets). We illustrate the advantages of this approach for simulated data as well as data from a standard face recognition experiment. In addition to traditional and non-conventional EEG/ERP designs, unfold can also be applied to other overlapping physiological signals, such as pupillary or electrodermal responses.
It is available as open-source software at http://www.unfoldtoolbox.org
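
    The core idea of the linear deconvolution that unfold implements (in MATLAB) can be illustrated in a few lines of Python: build a time-expanded design matrix with one predictor per event latency and solve a least-squares problem, so that temporally overlapping responses are disentangled. Sizes, the response kernel, and the noise level below are toy assumptions, not part of the toolbox.

```python
import numpy as np

def time_expand(onsets, n_samples, n_lags):
    """Design matrix X with X[t, lag] = 1 if an event occurred at t - lag."""
    X = np.zeros((n_samples, n_lags))
    for onset in onsets:
        for lag in range(n_lags):
            t = onset + lag
            if t < n_samples:
                X[t, lag] = 1.0
    return X

rng = np.random.default_rng(1)
n, n_lags = 2000, 30
kernel = np.sin(np.linspace(0, np.pi, n_lags))    # true ERP shape
onsets = np.sort(rng.choice(n - n_lags, 150, replace=False))
X = time_expand(onsets, n, n_lags)
y = X @ kernel + rng.normal(0, 0.1, n)            # continuous, overlapping EEG

beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # deconvolved ERP estimate
```

Because every sample of the continuous signal constrains the estimate, the recovered `beta` approximates the true kernel even though individual responses overlap heavily, which is exactly what naive epoch averaging fails to do.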

    Coordinating With a Robot Partner Affects Neural Processing Related to Action Monitoring

    Robots are starting to play a role in our social landscape, and they are progressively becoming responsive, both physically and socially. This raises the question of how humans react to and interact with robots in a coordinated manner, and what the neural underpinnings of such behavior are. This exploratory study aims to understand the differences between human-human and human-robot interactions at a behavioral level and from a neurophysiological perspective. For this purpose, we adapted a collaborative dynamical paradigm from the literature. We asked 12 participants to hold two corners of a tablet while collaboratively guiding a ball around a circular track, either with another participant or with a robot. At irregular intervals, the ball was perturbed outward, creating an artificial error in the behavior that required corrective measures to return to the circular track. Concurrently, we recorded electroencephalography (EEG). In the behavioral data, we found an increased velocity and positional error of the ball from the track in the human-human condition compared with the human-robot condition. For the EEG data, we computed event-related potentials. We found a significant difference between human and robot partners, driven by significant clusters at fronto-central electrodes. The amplitudes were stronger with a robot partner, suggesting different neural processing. All in all, our exploratory study suggests that coordinating with robots affects processing related to action monitoring. In the investigated paradigm, human participants treat errors during human-robot interaction differently from those made during interactions with other humans. These results can help improve communication between humans and robots through the use of neural activity in real time.

    A new comprehensive eye-tracking test battery concurrently evaluating the Pupil Labs glasses and the EyeLink 1000

    Eye-tracking experiments rely heavily on the good data quality of eye-trackers. Unfortunately, often only the spatial accuracy and precision values are available from the manufacturers. These two values alone are not sufficient to serve as a benchmark for an eye-tracker: eye-tracking quality deteriorates during an experimental session due to head movements, changing illumination, or calibration decay. Additionally, different experimental paradigms require the analysis of different types of eye movements, for instance smooth pursuit movements, blinks, or microsaccades, which themselves cannot readily be evaluated using spatial accuracy or precision alone. To obtain a more comprehensive description of eye-tracker properties, we developed an extensive eye-tracking test battery. In 10 different tasks, we evaluated eye-tracking-related measures such as the decay of accuracy, fixation durations, pupil dilation, smooth pursuit movements, microsaccade classification, blink classification, and the influence of head motion. For some measures, true theoretical values exist; for others, a relative comparison to a reference eye-tracker is needed. Therefore, we collected our gaze data simultaneously from a remote EyeLink 1000 eye-tracker as the reference and compared it with the mobile Pupil Labs glasses. As expected, the average spatial accuracy of 0.57° for the EyeLink 1000 eye-tracker was better than the 0.82° for the Pupil Labs glasses (N = 15). Furthermore, we classified fewer fixations and measured shorter saccade durations for the Pupil Labs glasses. Similarly, we found fewer microsaccades using the Pupil Labs glasses. The accuracy over time decayed only slightly for the EyeLink 1000, but strongly for the Pupil Labs glasses. Finally, we observed that the measured pupil diameters differed between eye-trackers at the individual subject level but not at the group level.
To conclude, our eye-tracking test battery offers 10 tasks that allow us to benchmark the many parameters of interest in stereotypical eye-tracking situations and addresses a common source of confounds in measurement errors (e.g., yaw and roll head movements). All recorded eye-tracking data (including Pupil Labs’ eye videos), the stimulus code for the test battery, and the modular analysis pipeline are freely available (https://github.com/behinger/etcomp)
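
    Two of the data-quality measures evaluated above, spatial accuracy and precision, are commonly computed as the mean offset of gaze samples from a known target and as the sample-to-sample RMS, respectively. A minimal sketch under those common definitions, using toy gaze samples in degrees of visual angle:

```python
import numpy as np

def accuracy(gaze, target):
    """Mean Euclidean offset (deg) of gaze samples from a known target."""
    return np.linalg.norm(gaze - target, axis=1).mean()

def rms_precision(gaze):
    """Root-mean-square of successive sample-to-sample distances (deg)."""
    d = np.linalg.norm(np.diff(gaze, axis=0), axis=1)
    return np.sqrt(np.mean(d ** 2))

# Toy fixation: gaze sits ~0.5 deg right of the target and jitters
# 0.1 deg between samples.
target = np.array([0.0, 0.0])
gaze = np.array([[0.5, 0.0], [0.5, 0.1], [0.5, 0.0], [0.5, 0.1]])
```

The distinction matters for benchmarking: a tracker can be precise (low jitter) yet inaccurate (large systematic offset), or vice versa.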

    How does the method change what we measure? Comparing virtual reality and text-based surveys for the assessment of moral decisions in traffic dilemmas

    The question of how self-driving cars should behave in dilemma situations has recently attracted much attention in science, the media, and society. A growing number of publications amass insight into the factors underlying the choices we make in such situations, often using forced-choice paradigms closely linked to the trolley dilemma. The methodology used to address these questions, however, varies widely between studies, ranging from fully immersive virtual reality settings to completely text-based surveys. In this paper, we compare virtual reality and text-based assessments, analyzing the effect that different methodological factors have on participants' decisions and emotional responses. We present two studies comparing a total of six different conditions varying across three dimensions: the level of abstraction, the use of virtual reality, and time constraints. Our results show that the moral decisions made in this context are not strongly influenced by the assessment method, and the compared methods ultimately appear to measure very similar constructs. Furthermore, we add to the pool of evidence on the underlying factors of moral judgment in traffic dilemmas, both in terms of general preferences (i.e., features of the particular situation and potential victims) and in terms of individual differences between participants, such as their age and gender.

    Planning to revisit: Neural activity in refixation precursors

    Eye tracking studies suggest that refixations (fixations to locations previously visited) serve to recover information lost or missed during earlier exploration of a visual scene. These studies have largely ignored the role of precursor fixations (previous fixations on locations the eyes return to later). We consider the possibility that preparations to return later are already made during precursor fixations. This process would mark precursor fixations as a special category of fixation, distinct in neural activity from other fixation categories such as refixations and fixations on locations visited only once. To capture the neural signals associated with these fixation categories, we analyzed electroencephalograms (EEGs) and eye movements recorded simultaneously in a free-viewing contour search task. We developed a methodological pipeline involving regression-based deconvolution modeling, allowing our analyses to account for overlapping EEG responses owing to the saccade sequence and other oculomotor covariates. We found that precursor fixations were preceded by the largest saccades among the fixation categories. Independent of the effect of saccade length, EEG amplitude was enhanced in precursor fixations compared with the other fixation categories 200 to 400 ms after fixation onset, most noticeably over occipital areas. We conclude that precursor fixations play a pivotal role in visual perception, marking the transitions between exploratory and exploitative modes of eye movement in natural viewing behavior.
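
    The three fixation categories above (precursor, refixation, and ordinary fixation) can be assigned from a scanpath by checking whether a later fixation returns within some radius of an earlier one. The sketch below uses an illustrative 2° radius; the study's actual criterion may differ.

```python
import numpy as np

def categorize_fixations(xy, radius=2.0):
    """Label each fixation 'precursor', 'refixation', or 'ordinary'.

    A fixation is a refixation if it lands within `radius` (deg) of any
    earlier fixation; that earlier fixation becomes a precursor (unless
    it is itself a refixation). All remaining fixations are ordinary.
    """
    xy = np.asarray(xy, dtype=float)
    n = len(xy)
    labels = ["ordinary"] * n
    for j in range(n):
        for i in range(j):
            if np.linalg.norm(xy[j] - xy[i]) <= radius:
                labels[j] = "refixation"
                if labels[i] == "ordinary":
                    labels[i] = "precursor"
    return labels

# Toy scanpath: the fourth fixation returns near the first location.
path = [(0, 0), (10, 0), (20, 5), (0.5, 0.5)]
labels = categorize_fixations(path)
```

Such labels are what would subsequently be entered as categorical predictors in the regression-based deconvolution model described in the abstract.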