863 research outputs found

    Strength of visual percept generated by famous faces perceived without awareness: effects of affective valence, response latency, and visual field

    Participants who were unable to detect familiarity from masked 17 ms faces ([Stone and Valentine, 2004] and [Stone and Valentine, in press-b]) did report a vague, partial visual percept. Two experiments investigated the relative strength of the visual percept generated by famous and unfamiliar faces, using masked 17 ms exposure. Each trial simultaneously presented a famous and an unfamiliar face, one in the left visual field (LVF) and the other in the right visual field (RVF). In one task, participants responded according to which of the faces generated the stronger visual percept, and in the other task, they attempted an explicit familiarity decision. The relative strength of the visual percept of the famous face compared to the unfamiliar face was moderated by response latency and participants’ attitude towards the famous person. There was also an interaction of visual field with response latency, suggesting that the right hemisphere can generate a visual percept differentiating famous from unfamiliar faces more rapidly than the left hemisphere. Participants were at chance in the explicit familiarity decision, confirming the absence of awareness of facial familiarity.

    Decoding Visual Percepts Induced by Word Reading with fMRI

    Word reading involves multiple cognitive processes. To infer which word is being visualized, the brain first processes the visual percept, deciphers the letters and bigrams, and activates different words based on context or prior expectations such as word frequency. In this contribution, we use supervised machine learning techniques to decode the first step of this processing stream using functional Magnetic Resonance Images (fMRI). We build a decoder that predicts the visual percept formed by four-letter words, allowing us to identify words that were not present in the training data. To do so, we cast the learning problem as multiple classification problems after describing words with multiple binary attributes. This work goes beyond the identification or reconstruction of single letters or simple geometrical shapes and addresses a challenging estimation problem, that is, the prediction of multiple variables from a single observation, hence facing the problem of learning multiple predictors from correlated inputs.
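
    A minimal sketch of the multi-attribute decoding scheme this abstract describes, assuming scikit-learn and random stand-in data in place of real fMRI recordings; the word list, letter-indicator featurisation, and classifier choice are illustrative, not the authors':

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.multioutput import MultiOutputClassifier

        ALPHABET = "abcdefghijklmnopqrstuvwxyz"

        def word_to_attributes(word):
            """Binary attributes: one 26-letter indicator block per position."""
            vec = np.zeros(4 * len(ALPHABET))
            for pos, ch in enumerate(word):
                vec[pos * len(ALPHABET) + ALPHABET.index(ch)] = 1.0
            return vec

        train_words = ["tree", "fish", "door", "lamp"]        # toy stand-ins
        Y = np.array([word_to_attributes(w) for w in train_words])
        X = np.random.randn(len(train_words), 500)            # stand-in for fMRI data
        varying = Y.std(axis=0) > 0       # keep only attributes that vary in training
        clf = MultiOutputClassifier(LogisticRegression(max_iter=1000))
        clf.fit(X, Y[:, varying])         # one binary classifier per attribute

        # Zero-shot identification: pick the candidate word whose attribute
        # signature best matches the predicted attributes, even if that word
        # never appeared in the training set.
        pred = clf.predict(np.random.randn(1, 500))[0]
        candidates = ["bird", "rose", "tree", "fish"]
        best = min(candidates,
                   key=lambda w: np.abs(word_to_attributes(w)[varying] - pred).sum())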

    Learning and Acting in Peripersonal Space: Moving, Reaching, and Grasping

    The young infant explores its body, its sensorimotor system, and the immediately accessible parts of its environment, over the course of a few months creating a model of peripersonal space useful for reaching and grasping objects around it. Drawing on constraints from the empirical literature on infant behavior, we present a preliminary computational model of this learning process, implemented and evaluated on a physical robot. The learning agent explores the relationship between the configuration space of the arm, sensing joint angles through proprioception, and its visual perceptions of the hand and grippers. The resulting knowledge is represented as the peripersonal space (PPS) graph, where nodes represent states of the arm, edges represent safe movements, and paths represent safe trajectories from one pose to another. In our model, the learning process is driven by intrinsic motivation. When repeatedly performing an action, the agent learns the typical result, but also detects unusual outcomes, and is motivated to learn how to make those unusual results reliable. Arm motions typically leave the static background unchanged, but occasionally bump an object, changing its static position. The reach action is learned as a reliable way to bump and move an object in the environment. Similarly, once a reliable reach action is learned, it typically makes a quasi-static change in the environment, moving an object from one static position to another. The unusual outcome is that the object is accidentally grasped (thanks to the innate Palmar reflex), and thereafter moves dynamically with the hand. Learning to make grasps reliable is more complex than for reaches, but we demonstrate significant progress. Our current results are steps toward autonomous sensorimotor learning of motion, reaching, and grasping in peripersonal space, based on unguided exploration and intrinsic motivation. Comment: 35 pages, 13 figures
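
    The PPS graph lends itself to a small data-structure sketch (illustrative only, not the authors' implementation): nodes pair joint angles with the visually perceived hand position, edges record moves already executed safely, and breadth-first search recovers a safe trajectory between poses:

        from collections import deque

        class PPSGraph:
            def __init__(self):
                self.nodes = {}   # node id -> (joint_angles, visual_hand_position)
                self.edges = {}   # node id -> set of safely reachable node ids

            def add_node(self, nid, joint_angles, hand_position):
                self.nodes[nid] = (joint_angles, hand_position)
                self.edges.setdefault(nid, set())

            def add_safe_move(self, a, b):
                """Record that the arm moved safely between two stored poses."""
                self.edges[a].add(b)
                self.edges[b].add(a)

            def safe_trajectory(self, start, goal):
                """Breadth-first search for a safe path from one pose to another."""
                frontier, seen = deque([[start]]), {start}
                while frontier:
                    path = frontier.popleft()
                    if path[-1] == goal:
                        return path
                    for nxt in self.edges[path[-1]]:
                        if nxt not in seen:
                            seen.add(nxt)
                            frontier.append(path + [nxt])
                return None   # no known safe route yet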

    Quotational higher-order thought theory

    Due to their reliance on constitutive higher-order representing to generate the qualities of which the subject is consciously aware, I argue that the major existing higher-order representational theories of consciousness insulate us from our first-order sensory states. In fact, on these views we are never properly conscious of our sensory states at all. In their place I offer a new higher-order theory of consciousness, with a view to making us suitably intimate with our sensory states in experience. This theory relies on the idea of ‘quoting’ sensory qualities, so is dubbed the ‘quotational higher-order thought theory’. I argue that it can capture something of the idea that we are ‘acquainted’ with our conscious states without slipping beyond the pale for naturalists, whilst also providing satisfying treatments of traditional problems for higher-order theories concerning representational mismatch. The theory achieves this by abandoning a representational mechanism for mental intentionality, in favour of one based on ‘embedding’.

    Précis of Ways of Seeing

    This is a summary of the book Ways of Seeing, co-authored with Marc Jeannerod and published by Oxford University Press in 2003.

    HRF estimation improves sensitivity of fMRI encoding and decoding models

    Extracting activation patterns from functional Magnetic Resonance Imaging (fMRI) datasets remains challenging in rapid event-related designs due to the inherent delay of the blood oxygen level-dependent (BOLD) signal. The general linear model (GLM) makes it possible to estimate activation from a design matrix and a fixed hemodynamic response function (HRF). However, the HRF is known to vary substantially between subjects and brain regions. In this paper, we propose a model for jointly estimating the hemodynamic response function (HRF) and the activation patterns via a low-rank representation of task effects. This model is based on the linearity assumption behind the GLM and can be computed using standard gradient-based solvers. We use the activation patterns computed by our model as input data for encoding and decoding studies and report performance improvements in both settings. Comment: 3rd International Workshop on Pattern Recognition in NeuroImaging (2013)
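
    The joint estimation can be sketched as an alternating least-squares loop on a rank-one parameterisation: the signal is modelled as event regressors convolved with a shared HRF and weighted by per-condition activations. This toy single-voxel NumPy version (alternating rather than gradient-based, unlike the paper's solver) is a sketch of the idea only:

        import numpy as np

        def fit_rank1_glm(y, events, hrf_len=20, n_iter=50):
            """Alternately estimate a shared HRF `h` and activations `beta`
            so that y ~ sum_k beta_k * conv(events[:, k], h)."""
            n, k = events.shape
            h = np.hanning(hrf_len)                      # crude initial HRF guess
            for _ in range(n_iter):
                # Fix h: design matrix = each event regressor convolved with h.
                X = np.column_stack([np.convolve(events[:, j], h)[:n]
                                     for j in range(k)])
                beta, *_ = np.linalg.lstsq(X, y, rcond=None)
                # Fix beta: y is also linear in h via the combined event train.
                train = events @ beta
                T = np.column_stack([np.concatenate([np.zeros(i), train[:n - i]])
                                     for i in range(hrf_len)])
                h, *_ = np.linalg.lstsq(T, y, rcond=None)
                h /= np.linalg.norm(h) + 1e-12           # fix the scale ambiguity
            return h, beta

    Normalising h each iteration resolves the ambiguity that scaling the HRF up and the activations down by the same factor leaves the fit unchanged.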

    Improving visual sensitivity with subthreshold transcranial magnetic stimulation

    We probed for improvement of visual sensitivity in human participants using transcranial magnetic stimulation (TMS). Stimulation of visual cortex can induce an illusory visual percept known as a phosphene. It is known that TMS, delivered at intensities above the threshold to induce phosphenes, impairs the detection of visual stimuli. We investigated how the detection of a simple visual stimulus is affected by TMS applied to visual cortex at or below the phosphene threshold. Participants performed the detection task while the contrast of the visual stimulus was varied from trial to trial according to an adaptive staircase procedure. Detection of the stimulus was enhanced when a single pulse of TMS was delivered to the contralateral visual cortex 100 or 120 ms after stimulus onset at intensities just below the phosphene threshold. No improvement in visual sensitivity was observed when TMS was applied to the visual cortex in the opposite hemisphere (ipsilateral to the visual stimulus). We conclude that TMS-induced neuronal activity can sum with stimulus-evoked activity to augment visual perception.
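
    An adaptive staircase of this kind can be illustrated with a generic 1-up/2-down rule (the study's exact parameters are not specified here): contrast rises after each miss and falls after two consecutive detections, so the procedure converges near the 70.7% detection point:

        class Staircase:
            """Generic 1-up/2-down staircase over stimulus contrast (illustrative)."""

            def __init__(self, contrast=0.5, step=0.05, lo=0.0, hi=1.0):
                self.contrast, self.step = contrast, step
                self.lo, self.hi = lo, hi
                self.hits_in_a_row = 0

            def next_trial(self):
                """Contrast to present on the upcoming trial."""
                return self.contrast

            def report(self, detected):
                """Update contrast from the participant's response."""
                if detected:
                    self.hits_in_a_row += 1
                    if self.hits_in_a_row == 2:          # two hits -> make it harder
                        self.contrast = max(self.lo, self.contrast - self.step)
                        self.hits_in_a_row = 0
                else:                                    # one miss -> make it easier
                    self.contrast = min(self.hi, self.contrast + self.step)
                    self.hits_in_a_row = 0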

    Individual differences in alpha frequency drive crossmodal illusory perception

    Perception routinely integrates inputs from different senses. The temporal proximity of stimuli critically determines whether or not these inputs are bound together. Despite the temporal window of integration being a widely accepted notion, its neurophysiological substrate remains unclear. Many types of common audio-visual interactions occur within a time window of ~100 ms [1-5]. For example, in the sound-induced double-flash illusion, when two beeps are presented within ~100 ms together with one flash, a second illusory flash is often perceived [2]. Due to their intrinsic rhythmic nature, brain oscillations are one candidate mechanism for gating the temporal window of integration. Interestingly, occipital alpha-band oscillations cycle on average every ~100 ms, with peak frequencies ranging between 8 and 14 Hz (i.e., cycles of roughly 125-71 ms). Moreover, presenting a brief tone can phase-reset such oscillations in visual cortex [6, 7]. Based on these observations, we hypothesized that the duration of each alpha cycle might provide the temporal unit to bind audio-visual events. Here we first recorded EEG while participants performed the sound-induced double-flash illusion task [4] and found a positive correlation between the individual alpha-frequency (IAF) peak and the size of the temporal window of the illusion. Participants then performed the same task while receiving occipital transcranial alternating current stimulation (tACS) to modulate oscillatory activity [8], either at their IAF or at off-peak alpha frequencies (IAF±2Hz). Compared to IAF tACS, IAF-2Hz and IAF+2Hz tACS respectively enlarged and shrank the temporal window of the illusion, suggesting that alpha oscillations might represent the temporal unit of visual processing that cyclically gates perception and the neurophysiological substrate promoting audio-visual interactions.
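
    Estimating the IAF peak used to set the tACS frequency can be sketched with a Welch spectrum over occipital EEG; the function name, parameters, and input assumptions below are illustrative, not the authors' pipeline. It assumes `eeg` is a 1-D occipital signal sampled at `fs` Hz:

        import numpy as np
        from scipy.signal import welch

        def individual_alpha_frequency(eeg, fs, band=(8.0, 14.0)):
            """Return the peak frequency of the alpha band in an occipital signal."""
            freqs, psd = welch(eeg, fs=fs, nperseg=4 * int(fs))  # ~0.25 Hz resolution
            mask = (freqs >= band[0]) & (freqs <= band[1])
            return freqs[mask][np.argmax(psd[mask])]

        # One alpha cycle in milliseconds, i.e. the hypothesized temporal
        # window of integration for that participant:
        # window_ms = 1000.0 / individual_alpha_frequency(eeg, fs)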