14 research outputs found

    Atypical gaze patterns in autistic adults are heterogeneous across but reliable within individuals

    Background: Across behavioral studies, autistic individuals show greater variability than typically developing individuals. However, it remains unknown to what extent this variability arises from heterogeneity across individuals or from unreliability within individuals. Here, we focus on eye tracking, which provides rich dependent measures that have been used extensively in studies of autism. Autistic individuals show atypical gaze onto both static visual images and dynamic videos, a signature that could be leveraged for diagnostic purposes if the above open question were addressed.

    Methods: We tested three competing hypotheses: (1) that gaze patterns of autistic individuals are less reliable or noisier than those of controls, (2) that atypical gaze patterns are individually reliable but heterogeneous across autistic individuals, or (3) that atypical gaze patterns are individually reliable and also homogeneous among autistic individuals. We collected desktop-based eye tracking data while participants viewed two different full-length television sitcom episodes, at two independent sites (Caltech and Indiana University), from a total of over 150 adult participants (N = 48 autistic individuals with IQ in the normal range, 105 controls), and quantified gaze onto features of the videos using automated computer vision-based feature extraction.

    Results: We found support for the second of these hypotheses. Autistic people and controls showed equivalently reliable gaze onto specific features of videos, such as faces, so much so that individuals could be identified significantly above chance using a fingerprinting approach from video epochs as short as 2 min. However, classification of participants into diagnostic groups based on their eye tracking data failed to produce clear group classifications, due to heterogeneity in the autistic group.

    Limitations: Three limitations are the relatively small sample size, assessment across only two videos (from the same television series), and the absence of other dependent measures (e.g., neuroimaging or genetics) that might have revealed individual-level variability that was not evident with eye tracking. Future studies should expand to larger samples across longer longitudinal epochs, an aim that is now becoming feasible with Internet- and phone-based eye tracking.

    Conclusions: These findings pave the way for the investigation of autism subtypes and for elucidating the specific visual features that best discriminate gaze patterns; these directions will also combine with and inform neuroimaging and genetic studies of this complex disorder.
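
    A minimal sketch of the fingerprinting analysis described above: correlate each participant's gaze-feature vector from one video epoch with every participant's vector from an independent epoch and identify by best match. All data below are synthetic placeholders (the study's computer-vision feature extraction is not reproduced), so the numbers are illustrative only.

```python
# Gaze "fingerprinting" sketch with synthetic placeholder data.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_features = 30, 50

# Per-subject gaze-feature vectors (e.g., fraction of fixation time on each
# visual feature) from two independent video epochs: a stable individual
# signature plus epoch-specific noise.
signature = rng.normal(size=(n_subjects, n_features))
epoch1 = signature + 0.5 * rng.normal(size=(n_subjects, n_features))
epoch2 = signature + 0.5 * rng.normal(size=(n_subjects, n_features))

def zscore(x):
    return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

# Pearson correlation of every epoch-1 vector with every epoch-2 vector;
# identify each subject as their best-matching row.
corr = zscore(epoch1) @ zscore(epoch2).T / n_features
predicted = corr.argmax(axis=1)
accuracy = (predicted == np.arange(n_subjects)).mean()
print(f"identification accuracy: {accuracy:.2f} (chance = {1/n_subjects:.2f})")
```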

    Comprehensive trait attributions from faces


    A cautionary note on predicting social judgments from faces with deep neural networks

    People spontaneously infer other people's psychology from faces, encompassing inferences of their affective states, cognitive states, and stable traits such as personality. These judgments are known to be often invalid, but nonetheless bias many social decisions. Their importance and ubiquity have made them popular targets for automated prediction using deep convolutional neural networks (DCNNs). Here we investigated the applicability of this approach: how well does it generalize, and what biases does it introduce? We compared three distinct sets of features (from a face identification DCNN, from an object recognition DCNN, and from facial geometry), and tested their predictions across multiple out-of-sample datasets. Across judgments and datasets, features from both pre-trained DCNNs provided better predictions than did facial geometry. However, predictions using object recognition DCNN features were not robust to superficial cues (e.g., color, hair style). Importantly, predictions using face identification DCNN features were not specific: models trained to predict one social judgment (e.g., trustworthiness) also significantly predicted other social judgments (e.g., femininity, criminality), in some cases with even higher accuracy than for the judgment of interest (e.g., trustworthiness). Models trained to predict affective states (e.g., happy) also significantly predicted judgments of stable traits (e.g., sociable), and vice versa. Our analysis pipeline provides a flexible and efficient framework for predicting affective and social judgments from faces, but also highlights the dangers of such automated predictions: correlated but unintended judgments can drive the predictions of the intended judgments.
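
    The prediction pipeline and the leakage problem described above can be sketched as follows. Feature matrices and ratings here are synthetic stand-ins; the paper's actual features came from pre-trained face-identification and object-recognition DCNNs, which are not loaded in this sketch.

```python
# Ridge-regression prediction of social judgments from face features, plus a
# check for "leakage" into a correlated but unintended judgment.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_faces, n_feat = 300, 128
X = rng.normal(size=(n_faces, n_feat))      # placeholder face features
shared = X @ rng.normal(size=n_feat)        # variance shared across judgments
trustworthy = shared + rng.normal(size=n_faces)
feminine = 0.8 * shared + rng.normal(size=n_faces)

model = RidgeCV(alphas=np.logspace(-2, 3, 10))
pred = cross_val_predict(model, X, trustworthy, cv=5)

# Intended prediction vs. unintended prediction of another judgment.
print("r(pred, trustworthy):", np.corrcoef(pred, trustworthy)[0, 1].round(2))
print("r(pred, feminine):   ", np.corrcoef(pred, feminine)[0, 1].round(2))
```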

    Cortical networks of dynamic scene category representation in the human brain

    Humans have an impressive ability to rapidly process global information in natural scenes to infer their category. Yet, it remains unclear whether and how scene categories observed dynamically in the natural world are represented in cerebral cortex beyond a few canonical scene-selective areas. To address this question, here we examined the representation of dynamic visual scenes by recording whole-brain blood oxygenation level-dependent (BOLD) responses while subjects viewed natural movies. We fit voxelwise encoding models to estimate tuning for scene categories that reflect statistical ensembles of objects and actions in the natural world. We find that this scene-category model explains a significant portion of the response variance broadly across cerebral cortex. Cluster analysis of scene-category tuning profiles across cortex reveals nine spatially segregated networks of brain regions, consistently across subjects. These networks show heterogeneous tuning for a diverse set of dynamic scene categories related to navigation, human activity, social interaction, civilization, natural environment, non-human animals, motion energy, and texture, suggesting that the organization of scene-category representation is quite complex.
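
    A hedged sketch of a voxelwise encoding analysis in this spirit: ridge-regress each voxel's response on scene-category feature time courses, then cluster the fitted tuning profiles. The data are synthetic placeholders, and nine clusters are requested only to mirror the number of networks reported above.

```python
# Voxelwise encoding model + clustering of tuning profiles (synthetic data).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_trs, n_categories, n_voxels = 600, 20, 1000
features = rng.normal(size=(n_trs, n_categories))    # category time courses
true_tuning = rng.normal(size=(n_categories, n_voxels))
bold = features @ true_tuning + rng.normal(size=(n_trs, n_voxels))

# Ridge supports multi-output targets, so one fit covers all voxels; the
# weight matrix holds each voxel's category tuning profile.
enc = Ridge(alpha=1.0).fit(features, bold)
tuning = enc.coef_                                   # shape: voxels x categories

# Cluster voxels by tuning profile.
clusters = KMeans(n_clusters=9, n_init=10, random_state=0).fit_predict(tuning)
print("voxels per cluster:", np.bincount(clusters))
```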

    Decoding Autism (vs. TD and DCD)

    This project tests whether sensory and motor features improve upon the original RDoC constructs in decoding youth with autism spectrum disorder (ASD) from typically developing (TD) controls and from youth with developmental coordination disorder (DCD).
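
    A minimal sketch of such a decoding comparison, with placeholder feature matrices standing in for the RDoC-construct and sensory/motor measures (the project's actual features, sample, and groups are not reproduced):

```python
# Cross-validated decoding with two candidate feature sets (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 120
y = rng.integers(0, 2, size=n)                     # 0 = comparison group, 1 = ASD
rdoc_feats = rng.normal(size=(n, 10))              # original RDoC constructs
with_sensorimotor = np.c_[rdoc_feats, rng.normal(size=(n, 8))]  # + sensory/motor

for name, X in [("RDoC only", rdoc_feats), ("RDoC + sensorimotor", with_sensorimotor)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```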

    Uncovering a New Cause of Obstructive Hydrocephalus Following Subarachnoid Hemorrhage: Choroidal Artery Vasospasm-Related Ependymal Cell Degeneration and Aqueductal Stenosis - First Experimental Study

    BACKGROUND: Hydrocephalus is a serious complication of subarachnoid hemorrhage (SAH). Obstruction of the cerebral aqueduct may cause hydrocephalus after SAH. Although various etiologic theories have been put forward, choroidal artery vasospasm-related ependymal desquamation and subependymal basal membrane rupture have not previously been suggested as mechanisms of aqueductal stenosis.

    METHODS: This study was conducted on 26 hybrid rabbits. Five rabbits were placed in a control group, 5 in a sham group, and the remaining 16 in the SAH group. In the first 2 weeks, 5 animals in the SAH group died. The other 21 animals were decapitated after the 4-week follow-up period. Choroidal artery changes resulting from vasospasm, aqueduct volume, ependymal cell density, and Evans index values of the brain ventricles were obtained and compared statistically.

    RESULTS: Mean aqueduct volume was 1.137 ± 0.096 mm³, normal ependymal cell density was 4560 ± 745/mm², and the Evans index was 0.32 ± 0.05 in control animals (n = 5); these values were 1.247 ± 0.112 mm³, 3568 ± 612/mm², and 0.34 ± 0.15 in sham animals (n = 5); 1.676 ± 0.123 mm³, 2923 ± 591/mm², and 0.43 ± 0.09 in animals without aqueductal stenosis (n = 5); and 0.650 ± 0.011 mm³, 1234 ± 498/mm², and 0.60 ± 0.18 in animals with severe aqueductal stenosis (n = 6). The choroidal vasospasm index values were 1.160 ± 0.040 in the control group, 1.150 ± 0.175 in the sham group, 1.760 ± 0.125 in the nonstenotic group, and 2.262 ± 0.160 in the stenotic group. Aqueduct volumes, ependymal cell densities, Evans index, and choroidal artery vasospasm index values differed significantly between groups (P < 0.05).

    CONCLUSIONS: Ependymal cell desquamation and subependymal basal membrane destruction related to choroidal artery vasospasm may lead to aqueductal stenosis and hydrocephalus after SAH.
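
    The group comparison reported above can be illustrated with a one-way ANOVA; the per-animal values below are synthetic draws around the reported aqueduct-volume means and SDs, not the study's raw data.

```python
# One-way ANOVA across the four groups (synthetic illustration only).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(4)
# Aqueduct volume (mm³): synthetic samples around the reported mean ± SD,
# with the reported group sizes.
groups = {
    "control":          rng.normal(1.137, 0.096, 5),
    "sham":             rng.normal(1.247, 0.112, 5),
    "SAH, no stenosis": rng.normal(1.676, 0.123, 5),
    "SAH, stenosis":    rng.normal(0.650, 0.011, 6),
}
stat, p = f_oneway(*groups.values())
print(f"F = {stat:.2f}, p = {p:.4f}")
```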

    Optical coherence tomography findings in Parkinson's disease

    The aim of this study was to compare optical coherence tomography (OCT) findings of retinal thickness (RT) and retinal nerve fiber layer thickness (RNFLT) between idiopathic Parkinson's disease (IPD) patients and healthy subjects, and to investigate whether there is any relationship between the severity of the disease and RNFLT values. This prospective study included 25 IPD patients and 29 healthy controls. In the IPD group, the Hoehn and Yahr (H&Y) scale, Unified Parkinson's Disease Rating Scale (UPDRS), and Mini-Mental State Exam (MMSE) were administered. Intraocular pressure (IOP), visual acuity (VA), spherical equivalent, axial length (AL), and central corneal thickness (CCT) were measured, and OCT was performed in both groups. RT was measured in the central retinal (RTc), nasal (RTn), and temporal (RTt) segments. Nasal (RNFLTn), nasal superior (RNFLTns), nasal inferior (RNFLTni), temporal (RNFLTt), temporal superior (RNFLTts), and temporal inferior (RNFLTti) measurements were made, and mean RNFLT (RNFLTg) was calculated for each individual. In the patient group, IOP and VA values were statistically significantly lower, and the RTn and RNFLTg were significantly thinner. There was no statistically significant relationship between the severity of IPD and these findings. In our study, RNFLTg and RTn were found to be thinner in the IPD group, which may have caused the lower VA scores. The effects of retinal dopamine depletion on RT and RNFLT, and the lower IOP values in non-glaucomatous IPD patients, should be further investigated.
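
    The severity-versus-thickness test described above is, in sketch form, a correlation between disease-severity scores and global RNFL thickness; both arrays below are hypothetical placeholders, not the study's measurements.

```python
# Spearman correlation between UPDRS severity and RNFLTg (synthetic data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
updrs = rng.uniform(10, 60, 25)      # hypothetical severity scores, 25 patients
rnflt_g = rng.normal(95, 10, 25)     # hypothetical global RNFLT (µm)
rho, p = spearmanr(updrs, rnflt_g)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```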

    A multi-layered graphene based gas sensor platform for discrimination of volatile organic compounds via differential intercalation

    Selective and sensitive detection of volatile organic compounds (VOCs) is of critical importance for environmental monitoring, disease diagnosis, and industrial applications. Among VOCs, assay development for primary alcohols has captured significant research attention because their toxicity causes adverse effects on the gastrointestinal and central nervous systems, resulting in irreversible blindness and coma, and can even be fatal at high exposure levels. However, selective detection of primary alcohols is extremely challenging owing to the similarity of their molecular structures and characteristic groups. Herein, we investigate the differential methanol (MeOH)-ethanol (EtOH) discrimination properties of single-layer, bi-layer, and multi-layer graphene morphologies. Chemiresistors fabricated using the three morphologies of graphene show discriminative MeOH-EtOH responses, which is attributed to differential intercalation of MeOH within layered graphene morphologies as compared to that of EtOH. This hypothesis is supported by density functional theory calculations, which reveal that adsorption of EtOH molecules on the graphene surface is energetically more favorable than that of MeOH molecules, thereby inhibiting their intercalation within the layered graphene morphologies. We further find that the degree of MeOH intercalation increases with an increasing number of graphene layers, yielding differential MeOH-EtOH responses. Experimental results suggest possibilities for developing selective and sensitive MeOH assays fabricated using various graphene morphologies in a combinatorial sensor array format.

    This research was supported by a grant from the Scientific and Technological Research Council of Turkey, TÜBİTAK (Grant No: 117F243). We are thankful for financial support from the Izmir Institute of Technology Scientific Project Fund (IYTE-BAP-291). The author D. O. I. is a YÖK 100-2000 scholarship holder. H. S. thanks TÜBİTAK for partially supporting the theoretical calculations and experimental characterization of this study within the framework of project Grant No: 120F318.
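
    The DFT comparison above reduces to the standard adsorption-energy formula E_ads = E(surface+molecule) - E(surface) - E(molecule); the sketch below works this through with hypothetical total energies for illustration only (they are not the paper's DFT values).

```python
# Adsorption-energy comparison: more negative E_ads = more favorable binding.
def adsorption_energy(e_complex_eV, e_surface_eV, e_molecule_eV):
    """E_ads = E(surface+molecule) - E(surface) - E(molecule), in eV."""
    return e_complex_eV - e_surface_eV - e_molecule_eV

# Hypothetical total energies (eV), for illustration only.
e_graphene = -305.20
e_ads_meoh = adsorption_energy(-336.55, e_graphene, -31.10)  # -> -0.25 eV
e_ads_etoh = adsorption_energy(-354.30, e_graphene, -48.80)  # -> -0.30 eV
print(f"E_ads(MeOH) = {e_ads_meoh:.2f} eV, E_ads(EtOH) = {e_ads_etoh:.2f} eV")
# A more negative EtOH value is consistent with EtOH binding the surface more
# strongly, which would inhibit its intercalation between graphene layers.
```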

    Multimodal single-neuron, intracranial EEG, and fMRI brain responses during movie watching in human patients

    Abstract: We present a multimodal dataset of intracranial recordings, fMRI, and eye tracking from 20 participants during movie watching. Recordings consist of single-neuron, local field potential, and intracranial EEG activity acquired from depth electrodes targeting the amygdala, hippocampus, and medial frontal cortex, implanted for monitoring of epileptic seizures. Participants watched an 8-min excerpt from the video "Bang! You're Dead" and performed a recognition memory test for movie content. 3T fMRI activity was recorded prior to surgery in 11 of these participants while they performed the same task. This NWB- and BIDS-formatted dataset includes spike times, field potential activity, behavior, eye tracking, electrode locations, demographics, and functional and structural MRI scans. For technical validation, we provide signal quality metrics, assess eye tracking quality and behavior, characterize the tuning of cells and of high-frequency broadband field potential power to familiarity and event boundaries, and show brain-wide inter-subject correlations for fMRI. This dataset will facilitate the investigation of brain activity during movie watching, recognition memory, and the neural basis of the fMRI-BOLD signal.
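
    Since the dataset is NWB-formatted, a single session can be opened with pynwb along the lines of the hedged sketch below; the file name is a placeholder, and the exact contents of any given file may vary.

```python
# Loading one NWB session and inspecting its sorted units (path is hypothetical).
from pynwb import NWBHDF5IO

with NWBHDF5IO("sub-01_ses-01.nwb", mode="r") as io:
    nwbfile = io.read()
    print(nwbfile.session_description)
    # Sorted single-unit data, if present, live in the units table.
    if nwbfile.units is not None:
        spike_times = nwbfile.units["spike_times"][0]  # spike times of unit 0
        print(f"unit 0: {len(spike_times)} spikes")
```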

    Movie annotations for: Multimodal brain responses during movie watching: single-neuron, intracranial EEG, and fMRI in human patients

    Movie annotations for the manuscript: "Multimodal brain responses during movie watching: single-neuron, intracranial EEG, and fMRI in human patients"
    Authors: Umit Keles, Julien Dubois, Kevin J. M. Le, J. Michael Tyszka, David A. Kahn, Chrystal M. Reed, Jeffrey M. Chung, Adam N. Mamelak, Ralph Adolphs, Ueli Rutishauser
    Abstract: We present a multimodal dataset of intracranial recordings, fMRI, and eye tracking in 20 human participants as they watched the same movie stimulus. Intracranial recordings consist of single-neuron, local field potential, and intracranial EEG activity recorded concurrently from depth electrodes targeting the amygdala, hippocampus, and medial frontal cortex while participants underwent intracranial monitoring for localization of epileptic seizures. Participants watched an 8-min excerpt from the video "Bang! You're Dead" and performed a recognition memory test for movie content. 3T fMRI activity was recorded prior to surgery in 11 of these participants while they performed the same task. This NWB- and BIDS-formatted dataset includes the spike times of all neurons, field potential activity, behavior, eye tracking, electrode locations, demographics, and functional and structural MRI scans. For technical validation, we provide signal quality metrics, assess eye tracking quality and behavior, characterize the tuning of cells and of high-frequency broadband field potential power to familiarity and event boundaries, and show brain-wide inter-subject correlations for fMRI. This dataset will facilitate the investigation of brain activity during movie watching, recognition memory, and the neural basis of the fMRI-BOLD signal.
    Related code: https://github.com/rutishauserlab/bmovie-release-NWB-BIDS
    Intracranial recording data: https://dandiarchive.org/dandiset/000623
    fMRI data: https://openneuro.org/datasets/ds004798/
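
    The DANDI-hosted recordings linked above can also be listed programmatically with the dandi Python client, as in this sketch (asset handling here is illustrative only):

```python
# Listing assets of dandiset 000623 via the DANDI API client.
# From the shell, `dandi download DANDI:000623` fetches the whole dandiset.
from dandi.dandiapi import DandiAPIClient

with DandiAPIClient() as client:
    dandiset = client.get_dandiset("000623")
    for asset in dandiset.get_assets():
        print(asset.path)   # paths of the available NWB files
        break               # just show the first one in this sketch
```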