9 research outputs found

    Specialized Signals for Spatial Attention in the Ventral and Dorsal Visual Streams

    Neuroscientists have traditionally conceived the visual system as having a ventral stream for vision for perception and a dorsal stream for vision for action. However, functional differences between them have become relatively blurred in recent years, not least because of the systematic parallel mapping of functions made possible by functional magnetic resonance imaging (fMRI). Here, using fMRI to simultaneously monitor several brain regions, we first studied a hallmark ventral stream computation: the processing of faces. We did so by probing responses to motion, an attribute whose processing is typically associated with the dorsal stream. In humans, face-selective regions in the superior temporal sulcus (STS) are known to show enhanced responses to facial motion that are absent in the rest of the face-processing system. In macaques, face areas also exist, but their functional specializations for facial motion are unknown. We showed static and moving face and non-face objects to macaques and humans in an fMRI experiment in order to isolate potential functional specializations in the ventral stream face-processing system and to motivate putative homologies across species. Our results revealed that all macaque face areas showed enhanced responses to moving faces. There was a difference between the more dorsal face areas in the fundus of the STS, which are embedded in motion-responsive cortex, and the ventral ones, where enhanced responses to motion interacted with object category and could not be explained by proximity to motion-responsive cortex. In humans watching the same stimuli, only the STS face area showed an enhancement for motion. These results suggest that specializations for motion exist in the macaque face-processing network, but they do not lend themselves to a direct equivalence between human and macaque face areas.
    We then proceeded to compare ventral and dorsal stream functions in terms of their code for spatial attention, whose control has typically been associated with the dorsal stream and prefrontal areas. We took advantage of recent fMRI studies that provide a systematic map of cortical areas modulated by spatial attention and suggest that PITd, a ventral stream area in the temporal lobe, can support endogenous attention control. Covert attention and stimulus selection by saccades are represented in the same maps of visual space in attention control areas. Difficulties interpreting this multiplicity of functions led to the proposal that these areas encode priority maps, in which multiple sources are summed into a single priority signal that is agnostic as to its eventual use by downstream areas. Using a paradigm that dissociates covert attention and response selection, we tested this hypothesis with fMRI-guided electrophysiology in two cortical areas: parietal area LIP, where the priority map was first proposed to apply, and temporal area PITd. Our results indicate that LIP sums disparate signals but that, as a consequence, independent channels of spatial information exist for attention and response planning. PITd represents relevant locations and, rather than summing signals, contains a single map for covert attention. Our findings have the potential to resolve a longstanding controversy about the nature of spatial signals in LIP and establish PITd as a robust map for covert attention in the ventral stream.
    Together, our results suggest that, while the division of labor between ventral stream and dorsal stream areas is less clear-cut than a rough depiction of them might suggest, it is illuminated by their proposed functions of supporting vision for perception and vision for action, respectively.
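    To make the priority-map contrast tested in this abstract more concrete, the sketch below writes a toy version of the two coding schemes: a single map that sums an attention signal and a saccade-plan signal, versus separate maps for each signal. The number of locations, the weights, and the function names are illustrative assumptions, not quantities from the thesis.

```python
import numpy as np

N_LOCATIONS = 8                 # hypothetical number of positions in the spatial map
attended, saccade_goal = 2, 5   # hypothetical attended location and saccade goal

def priority_map(attended, saccade_goal, w_attn=1.0, w_plan=1.0):
    """One summed 'priority' map: attention and plan signals are added per
    location, so a downstream reader cannot tell which source drove a peak."""
    m = np.zeros(N_LOCATIONS)
    m[attended] += w_attn
    m[saccade_goal] += w_plan
    return m

def independent_channels(attended, saccade_goal):
    """Two separate maps, one for covert attention and one for response
    planning, which downstream areas could read out independently."""
    attn = np.zeros(N_LOCATIONS); attn[attended] = 1.0
    plan = np.zeros(N_LOCATIONS); plan[saccade_goal] = 1.0
    return attn, plan

print(priority_map(attended, saccade_goal))        # one summed map
print(independent_channels(attended, saccade_goal))  # two separable maps
```

    In a task that dissociates the attended location from the planned response, only the second scheme lets a reader recover each signal separately, which is the kind of dissociation the paradigm described above was designed to probe.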

    Parsing a Perceptual Decision into a Sequence of Moments of Thought

    Theoretical, computational, and experimental studies have converged on a model of decision-making in which sensory evidence is stochastically integrated to a threshold, implementing a shift from an analog to a discrete form of computation. Understanding how this process can be chained and sequenced – as virtually all real-life tasks involve a sequence of decisions – remains an open question in neuroscience. We reasoned that incorporating a virtual continuum of possible behavioral outcomes in a simple decision task – a fundamental ingredient of real-life decision-making – should result in a progressive sequential approximation to the correct response. We used real-time tracking of motor action in a decision task as a measure of the cognitive states reflecting an internal decision process. We found that response trajectories were spontaneously segmented into a discrete sequence of explorations separated by brief stops (about 200 ms) – of which participants remained unaware. The characteristics of these stops were indicative of a decision process – a “moment of thought”: their duration correlated with the difficulty of the decision and with the efficiency of the subsequent exploration. Our findings suggest that simple navigation in an abstract space involves a discrete sequence of explorations and stops and, moreover, that these stops reveal a fingerprint of moments of thought.
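    The accumulation-to-threshold model referenced in this abstract is commonly formalized as a drift-diffusion process. The minimal sketch below simulates such a process with invented parameter values, purely to illustrate the analog-to-discrete transition the abstract describes; it is not the authors' model or code.

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_decision(drift=0.2, noise=1.0, threshold=1.5, dt=0.001, max_t=3.0):
    """Simulate one decision: noisy evidence is integrated until it crosses
    +threshold (choice A) or -threshold (choice B). Returns (choice, RT in s).
    All parameter values are illustrative, not fitted to the study's data."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("A" if x >= threshold else "B"), t

choices, rts = zip(*(diffusion_decision() for _ in range(500)))
print(f"P(choice A) = {choices.count('A') / len(choices):.2f}, "
      f"mean RT = {np.mean(rts):.2f} s")
```

    Chaining several such accumulations, each restarting after the previous threshold crossing, is one simple way to picture the sequence of stops, or “moments of thought”, proposed above.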

    Faces in Motion: Selectivity of Macaque and Human Face Processing Areas for Dynamic Stimuli

    Face recognition mechanisms need to extract information from static and dynamic faces. It has been hypothesized that the analysis of dynamic face attributes is performed by different face areas than the analysis of static facial attributes. To date, there is no evidence for such a division of labor in macaque monkeys. We used fMRI to determine specializations of macaque face areas for motion. Face areas in the fundus of the superior temporal sulcus responded to general object motion; face areas outside of the superior temporal sulcus fundus responded more to facial motion than general object motion. Thus, the macaque face-processing system exhibits regional specialization for facial motion. Human face areas, processing the same stimuli, exhibited specializations for facial motion as well. Yet the spatial patterns of facial motion selectivity differed across species, suggesting that facial dynamics are analyzed differently in humans and macaques.

    Improving Scientific Machine Learning via Attention and Multiple Shooting

    Scientific Machine Learning (SciML) is a burgeoning field that synergistically combines domain-aware and interpretable models with agnostic machine learning techniques. In this work, we introduce GOKU-UI, an evolution of the SciML generative model GOKU-nets. GOKU-UI not only broadens the original model's spectrum to incorporate other classes of differential equations, such as Stochastic Differential Equations (SDEs), but also integrates attention mechanisms and a novel multiple shooting training strategy in the latent space. These enhancements have led to a significant increase in its performance in both reconstruction and forecast tasks, as demonstrated by our evaluation of simulated and empirical data. Specifically, GOKU-UI outperformed all baseline models on synthetic datasets even with a training set 16-fold smaller, underscoring its remarkable data efficiency. Furthermore, when applied to empirical human brain data, while incorporating stochastic Stuart-Landau oscillators into its dynamical core, it not only surpassed all baseline methods in the reconstruction task, but also demonstrated better prediction of future brain activity up to 15 seconds ahead. By training GOKU-UI on resting state fMRI data, we encoded whole-brain dynamics into a latent representation, learning an effective low-dimensional dynamical system model that could offer insights into brain functionality and open avenues for practical applications such as the classification of mental states or psychiatric conditions. Ultimately, our research provides further impetus for the field of Scientific Machine Learning, showcasing the potential for advancements when established scientific insights are interwoven with modern machine learning.
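    As a rough intuition for the multiple shooting strategy mentioned in this abstract, the sketch below writes a toy objective of that kind for a latent trajectory: the trajectory is split into short segments that are each integrated from their own initial condition, and a continuity penalty ties the end of one segment to the start of the next. The explicit Euler integrator, the damped-oscillator stand-in dynamics, and all names and weights are assumptions for illustration; they do not reproduce GOKU-UI's actual architecture or training code.

```python
import numpy as np

def multiple_shooting_loss(z_obs, f, z0_segments, seg_len, dt=0.1, lam=1.0):
    """Toy multiple-shooting objective over a latent trajectory.

    z_obs       : (T, d) observed or encoded latent states
    f           : latent dynamics, dz/dt = f(z)
    z0_segments : (K, d) one learned initial condition per short segment
    Each segment is integrated independently (more stable than one long
    rollout), and a continuity penalty glues consecutive segments together."""
    T, _ = z_obs.shape
    recon, cont = 0.0, 0.0
    for k, z0 in enumerate(z0_segments):
        z = z0.copy()
        for i in range(seg_len):
            t = k * seg_len + i
            if t >= T:
                break
            recon += np.sum((z - z_obs[t]) ** 2)   # data fit within the segment
            z = z + dt * f(z)                      # explicit Euler step
        if k + 1 < len(z0_segments):
            cont += np.sum((z - z0_segments[k + 1]) ** 2)  # segment-to-segment mismatch
    return recon + lam * cont

# Stand-in latent dynamics: a lightly damped 2-D oscillator.
f = lambda z: np.array([z[1], -z[0] - 0.1 * z[1]])
ts = 0.1 * np.arange(40)
z_obs = np.stack([np.cos(ts), -np.sin(ts)], axis=1)   # synthetic "observed" latents
z0s = z_obs[::10].copy()                              # warm-start each segment at the data
print(multiple_shooting_loss(z_obs, f, z0s, seg_len=10))
```

    In GOKU-UI itself, per the abstract, such a segment-wise training strategy operates in the latent space of a generative model with attention mechanisms, and the latent dynamics can be stochastic (SDEs, e.g. Stuart-Landau oscillators) rather than the deterministic ODE used here.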

    Baseline multimodal information predicts future motor impairment in premanifest Huntington's disease

    In Huntington's disease (HD), accurate estimates of expected future motor impairments are key for clinical trials. Individual prognosis is only partially explained by genetics. However, studies so far have focused on predicting the time to clinical diagnosis based on fixed impairment levels, as opposed to predicting impairment in time windows comparable to the duration of a clinical trial. Here we evaluate an approach to both detect atrophy patterns associated with early degeneration and provide a prognosis of motor impairment within 3 years, using data from the TRACK-HD study on 80 premanifest HD (pre-HD) individuals and 85 age- and sex-matched healthy controls. We integrate anatomical MRI information from gray matter concentrations (estimated via voxel-based morphometry) together with baseline data from demographic, genetic and motor domains to distinguish individuals at high risk of developing pronounced future motor impairment from those at low risk. We evaluate the ability of models to distinguish between these two groups solely using baseline imaging data, as well as in combination with longitudinal imaging or non-imaging data. Our models show improved performance for motor prognosis when imaging features are added to non-imaging data, reaching 88% cross-validated accuracy when using baseline non-longitudinal information, and detect informative correlates in the caudate nucleus and the thalamus for both motor prognosis and early atrophy detection. These results show the feasibility of using baseline imaging and basic demographic/genetic measures for early detection of individuals at high risk of severe future motor impairment in relatively short timeframes. Keywords: Future motor impairment prediction, Premanifest Huntington's disease, Classification, Structural MRI, TRACK-HD.
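    As an illustration of the kind of baseline multimodal classification described above, the sketch below concatenates imaging-derived features with demographic/genetic features and scores a cross-validated linear classifier on synthetic data. The feature dimensions, the choice of logistic regression, the fold count, and the random labels are assumptions for exposition; they do not reproduce the study's pipeline or its 88% result.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 80                                   # e.g. the pre-HD group size mentioned above
X_imaging = rng.normal(size=(n, 200))    # stand-in for VBM gray-matter features
X_baseline = rng.normal(size=(n, 4))     # stand-in for demographic/genetic/motor measures
y = rng.integers(0, 2, size=n)           # synthetic high- vs. low-risk labels

X = np.hstack([X_imaging, X_baseline])   # multimodal baseline feature vector
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```

    The design choice illustrated here is simply that imaging and non-imaging measures enter the classifier as one concatenated baseline feature vector, which is what allows the comparison of imaging-only, non-imaging-only, and combined models described in the abstract.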

    Accelerating Medicines Partnership® Schizophrenia (AMP® SCZ): Rationale and Study Design of the Largest Global Prospective Cohort Study of Clinical High Risk for Psychosis

    This article describes the rationale, aims, and methodology of the Accelerating Medicines Partnership® Schizophrenia (AMP® SCZ). This is the largest international collaboration to date that will develop algorithms to predict trajectories and outcomes of individuals at clinical high risk (CHR) for psychosis and to advance the development and use of novel pharmacological interventions for CHR individuals. We present a description of the participating research networks and the data processing, analysis, and coordination center, their processes for data harmonization across 43 sites from 13 participating countries (recruitment across North America, Australia, Europe, Asia, and South America), data flow and quality assessment processes, data analyses, and the transfer of data to the National Institute of Mental Health (NIMH) Data Archive (NDA) for use by the research community. In an expected sample of approximately 2000 CHR individuals and 640 matched healthy controls, AMP SCZ will collect clinical, environmental, and cognitive data along with multimodal biomarkers, including neuroimaging, electrophysiology, fluid biospecimens, speech and facial expression samples, and novel measures derived from digital health technologies including smartphone-based daily surveys and passive sensing, as well as actigraphy. The study will investigate a range of clinical outcomes over a 2-year period, including transition to psychosis, remission or persistence of CHR status, attenuated positive symptoms, persistent negative symptoms, mood and anxiety symptoms, and psychosocial functioning. The global reach of AMP SCZ and its harmonized innovative methods promise to catalyze the development of new treatments to address critical unmet clinical and public health needs in CHR individuals.