
    Action planning and the timescale of evidence accumulation

    Perceptual decisions are based on the temporal integration of sensory evidence for different states of the outside world. The timescale of this integration process varies widely across behavioral contexts and individuals, and it is diagnostic of the underlying neural mechanisms. In many situations, the decision-maker knows the required mapping between perceptual evidence and motor response (henceforth termed “sensory-motor contingency”) before decision formation. Here, the integrated evidence can be directly translated into a motor plan and, indeed, neural signatures of the integration process are evident as build-up activity in premotor brain regions. In other situations, however, the sensory-motor contingencies are unknown at the time of decision formation. We used behavioral psychophysics and computational modeling to test whether knowledge about sensory-motor contingencies affects the timescale of perceptual evidence integration. We asked human observers to perform the same motion discrimination task, with or without trial-to-trial variations of the mapping between perceptual choice and motor response. When the mapping varied, it was instructed either before or after the stimulus presentation. We quantified the timescale of evidence integration under these different sensory-motor mapping conditions by means of two approaches. First, we analyzed subjects’ discrimination threshold as a function of stimulus duration. Second, we fitted a dynamical decision-making model to subjects’ choice behavior. The results from both approaches indicated that observers (i) integrated motion information for several hundred ms, (ii) used a shorter-than-optimal integration timescale, and (iii) used the same integration timescale under all sensory-motor mappings. We conclude that the mechanisms limiting the timescale of perceptual decisions are largely independent of long-term learning (under fixed mapping) or rapid acquisition (under variable mapping) of sensory-motor contingencies. This conclusion has implications for neurophysiological and neuroimaging studies of perceptual decision-making.
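    The kind of dynamical decision-making model referred to above can be illustrated with a leaky evidence accumulator, in which a leak time constant sets the effective integration timescale. The sketch below is only a minimal illustration under assumed parameter values (drift, leak time constant tau, noise level); none of these numbers come from the study.

```python
import numpy as np

def simulate_leaky_accumulator(duration_s, dt=0.001, drift=1.0,
                               tau=0.3, noise_sd=1.0, rng=None):
    """Leaky evidence accumulator: tau (in seconds) is the leak time
    constant, which plays the role of the integration timescale.
    All parameter values are illustrative, not fitted to the data."""
    rng = rng or np.random.default_rng()
    x = 0.0
    for _ in range(int(duration_s / dt)):
        # the leak pulls x back toward zero; drift and noise push it around
        x += (-x / tau + drift) * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
    return x  # decision variable at stimulus offset; its sign gives the choice

# Accuracy improves with stimulus duration only up to roughly tau:
# older evidence leaks away, mimicking a shorter-than-optimal timescale.
rng = np.random.default_rng(0)
for dur in (0.1, 0.3, 0.6, 1.2):
    correct = [simulate_leaky_accumulator(dur, rng=rng) > 0 for _ in range(500)]
    print(f"{dur:.1f} s: P(correct) ~ {np.mean(correct):.2f}")
```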

    Representations of time in human frontoparietal cortex

    Precise time estimation is crucial in perception, action and social interaction. Previous neuroimaging studies in humans indicate that perceptual timing tasks involve multiple brain regions; however, whether the representation of time is localized or distributed in the brain remains elusive. Using ultra-high-field functional magnetic resonance imaging combined with multivariate pattern analyses, we show that duration information can be decoded from multiple brain areas, including the bilateral parietal cortex, the right inferior frontal gyrus and, albeit less clearly, the medial frontal cortex. Individual differences in duration judgment accuracy were positively correlated with the decoding accuracy of duration in the right parietal cortex, suggesting that individuals with better timing performance represent duration information in a more distinctive manner. Our study demonstrates that although time representation is widely distributed across frontoparietal regions, neural populations in the right parietal cortex play a crucial role in time estimation.
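    For readers unfamiliar with the decoding approach, the following is a minimal sketch of cross-validated multivariate pattern classification on voxel response patterns. The data here are synthetic placeholders, and the classifier choice (a linear SVM from scikit-learn) is an assumption made for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-in for single-trial voxel patterns from one region of
# interest (e.g., right parietal cortex): n_trials x n_voxels, with labels
# coding which of three durations was presented. Purely illustrative data.
n_trials, n_voxels = 120, 200
duration_labels = rng.integers(0, 3, size=n_trials)
signal_pattern = rng.standard_normal(n_voxels)          # assumed duration code
patterns = (rng.standard_normal((n_trials, n_voxels))
            + 0.5 * duration_labels[:, None] * signal_pattern)

# 5-fold cross-validated decoding accuracy (chance = 1/3 for three classes)
clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=10000))
accuracy = cross_val_score(clf, patterns, duration_labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")
```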

    Data Mining the Brain to Decode the Mind

    In recent years, neuroscience has begun to transform itself into a “big data” enterprise by importing computational and statistical techniques from machine learning and informatics. In addition to their translational applications, such as brain-computer interfaces and early diagnosis of neuropathology, these tools promise to advance new solutions to longstanding theoretical quandaries. Here I critically assess whether these promises will pay off, focusing on the application of multivariate pattern analysis (MVPA) to the problem of reverse inference. I argue that MVPA does not inherently provide a new answer to classical worries about reverse inference, and that the method faces pervasive interpretive problems of its own. Further, the epistemic setting of MVPA and other decoding methods contributes to a potentially worrisome shift towards prediction and away from explanation in fundamental neuroscience.
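    The classical worry about reverse inference can be made concrete with Bayes' rule: activation of (or above-chance decoding from) a region licenses only a weak inference to a cognitive process unless the region responds selectively to that process. The numbers below are purely hypothetical and serve only to illustrate the arithmetic.

```python
# Reverse inference via Bayes' rule, with hypothetical (illustrative) numbers.
p_act_given_process = 0.80   # P(region engaged | process X is running)
p_act_given_other   = 0.40   # P(region engaged | process X is not running)
p_process           = 0.20   # prior P(process X is running in this task)

p_act = p_act_given_process * p_process + p_act_given_other * (1 - p_process)
posterior = p_act_given_process * p_process / p_act
print(f"P(process X | region engaged) = {posterior:.2f}")   # ~0.33
# The posterior rises only modestly above the prior when the region is not
# selective for the process; swapping an activation map for a decoder does
# not by itself change this logic.
```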

    Decoding accuracy in supplementary motor cortex correlates with perceptual sensitivity to tactile roughness

    Perceptual sensitivity to tactile roughness varies across individuals for the same degree of roughness. A number of neurophysiological studies have investigated the neural substrates of tactile roughness perception, but the neural processing underlying the strong individual differences in perceptual roughness sensitivity remains unknown. In this study, we explored the human brain activation patterns associated with the behavioral discriminability of surface texture roughness using functional magnetic resonance imaging (fMRI). First, a whole-brain searchlight multi-voxel pattern analysis (MVPA) was used to find brain regions from which roughness information could be decoded. The searchlight MVPA revealed four brain regions showing significant decoding results: the supplementary motor area (SMA), the contralateral postcentral gyrus (S1), and the superior portion of the temporal pole (STP) bilaterally. Next, we evaluated each individual’s behavioral roughness discrimination sensitivity using the just-noticeable difference (JND) and correlated it with the decoding accuracy in each of the four regions. We found that only the SMA showed a significant correlation between decoding accuracy and JND across individuals; participants with a smaller JND (i.e., better discrimination ability) exhibited higher decoding accuracy from their voxel response patterns in the SMA. Our findings suggest that multivariate voxel response patterns in the SMA represent individual perceptual sensitivity to tactile roughness, and that people with greater perceptual sensitivity to tactile roughness are likely to have more distinct neural representations of different roughness levels in their SMA. © 2015 Kim et al.
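    A hedged sketch of the across-subject analysis described above is given below: each participant's JND is estimated from a fitted cumulative-Gaussian psychometric function and then correlated with a region's decoding accuracy. All subject data, parameter ranges, and the assumed link between JND and decoding accuracy are synthetic placeholders, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm, pearsonr

def psychometric(x, mu, sigma):
    """Cumulative-Gaussian psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

def estimate_jnd(levels, p_rougher):
    """JND as half the 25%-75% spread of the fitted psychometric curve."""
    (mu, sigma), _ = curve_fit(psychometric, levels, p_rougher,
                               p0=[0.0, 1.0],
                               bounds=([-np.inf, 1e-3], [np.inf, np.inf]))
    return (norm.ppf(0.75, mu, sigma) - norm.ppf(0.25, mu, sigma)) / 2

rng = np.random.default_rng(1)
levels = np.linspace(-2, 2, 9)          # roughness differences (arbitrary units)
jnds, decoding_acc = [], []
for true_sigma in rng.uniform(0.4, 1.5, size=20):      # 20 synthetic subjects
    p_resp = norm.cdf(levels, 0.0, true_sigma)
    p_resp = np.clip(p_resp + 0.03 * rng.standard_normal(len(levels)), 0.01, 0.99)
    jnds.append(estimate_jnd(levels, p_resp))
    # assume decoding accuracy in the ROI falls off with JND (illustration only)
    decoding_acc.append(0.60 - 0.10 * jnds[-1] + 0.03 * rng.standard_normal())

r, p = pearsonr(jnds, decoding_acc)
print(f"JND vs. ROI decoding accuracy: r = {r:.2f}, p = {p:.3f}")
```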

    Reconciling the statistics of spectral reflectance and colour

    The spectral reflectance function of a surface specifies the fraction of the illumination reflected by it at each wavelength. Jointly with the illumination spectral density, this function determines the apparent colour of the surface. Models for the distribution of spectral reflectance functions in the natural environment are considered. The realism of the models is assessed both in terms of the individual reflectance functions they generate and in terms of the overall distribution of colours they give rise to. Both realism assessments are made in comparison to empirical datasets. Previously described models of reflectance-function statistics (PCA- and Fourier-based) are evaluated, along with improved versions of them and a novel model that synthesizes reflectance functions as a sum of sigmoid functions. Key model features for realism are identified. The new sigmoid-sum model is shown to be the most realistic, generating reflectance functions that are hard to distinguish from real ones and accounting for the majority of colours found in natural images, with the exception of the abundant vegetation greens and sky blues.
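    A rough sketch of what a sum-of-sigmoids reflectance generator could look like is given below. The number of sigmoid components, their parameter ranges, and the wavelength grid are assumptions made for illustration, not the parameters fitted in the paper.

```python
import numpy as np

def sigmoid(wavelength_nm, center_nm, slope, amplitude):
    """One sigmoid component of a spectral reflectance function."""
    return amplitude / (1.0 + np.exp(-slope * (wavelength_nm - center_nm)))

def sample_reflectance(wavelengths_nm, n_components=3, rng=None):
    """Synthesize a reflectance function as a sum of random sigmoids,
    clipped to the physically valid range [0, 1]. All parameter ranges
    here are illustrative assumptions."""
    rng = rng or np.random.default_rng()
    reflectance = np.full(len(wavelengths_nm), 0.5)     # neutral grey baseline
    for _ in range(n_components):
        center = rng.uniform(400.0, 700.0)              # nm, visible range
        slope = rng.uniform(-0.1, 0.1)                  # rising or falling edge
        amplitude = rng.uniform(-0.5, 0.5)
        reflectance += sigmoid(wavelengths_nm, center, slope, amplitude)
    return np.clip(reflectance, 0.0, 1.0)

wavelengths = np.arange(400.0, 701.0, 10.0)             # 400-700 nm, 10 nm steps
print(np.round(sample_reflectance(wavelengths, rng=np.random.default_rng(2)), 2))
```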