
    Emergence of Visual Center-Periphery Spatial Organization in Deep Convolutional Neural Networks

    Research at the intersection of computer vision and neuroscience has revealed a hierarchical correspondence between the layers of deep convolutional neural networks (DCNNs) and the cascade of regions along the human ventral visual cortex. Recently, studies have uncovered the emergence of human-interpretable concepts within the layers of DCNNs trained to identify visual objects and scenes. Here, we asked whether an artificial neural network with convolutional structure, trained for visual categorization, would demonstrate spatial correspondences with human brain regions showing central/peripheral biases. Using representational similarity analysis, we compared activations of the convolutional layers of a DCNN trained for object and scene categorization with neural representations in human visual brain regions. Results reveal a brain-like topographical organization in the layers of the DCNN, such that activations of layer units with a central bias were associated with brain regions showing foveal tendencies (e.g. the fusiform gyrus), and activations of layer units with selectivity for image backgrounds were associated with cortical regions showing a peripheral preference (e.g. the parahippocampal cortex). The emergence of this categorical topographical correspondence between DCNNs and brain regions suggests these models are a good approximation of the perceptual representations generated by biological neural networks.
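
    The core method here, representational similarity analysis (RSA), reduces to comparing pairwise dissimilarity structures between a model layer and a brain region. A minimal sketch of that comparison in Python is given below; the array shapes, names, and random data are purely illustrative and are not taken from the study.

```python
# Minimal RSA sketch: build a representational dissimilarity matrix (RDM)
# for a DCNN layer and for a brain ROI, then compare the two at the
# second-order level. Shapes and data are illustrative placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Condensed RDM (1 - Pearson r for every stimulus pair).
    `responses` has shape (n_stimuli, n_features)."""
    return pdist(responses, metric="correlation")

layer_acts = np.random.rand(100, 4096)   # e.g. 100 images x 4096 layer units
roi_patterns = np.random.rand(100, 250)  # e.g. 100 images x 250 voxels

# Rank correlation between the layer RDM and the ROI RDM.
rho, p = spearmanr(rdm(layer_acts), rdm(roi_patterns))
print(f"layer-ROI RDM correlation: rho = {rho:.3f} (p = {p:.3g})")
```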

    Reliability and generalizability of similarity-based fusion of MEG and fMRI data in human ventral and dorsal visual streams

    To build a representation of what we see, the human brain recruits regions throughout the visual cortex in a cascading sequence. Recently, an approach was proposed to evaluate the dynamics of visual perception at high spatiotemporal resolution at the scale of the whole brain. This method combined functional magnetic resonance imaging (fMRI) data with magnetoencephalography (MEG) data using representational similarity analysis and revealed a hierarchical progression from primary visual cortex through the dorsal and ventral streams. To assess the replicability of this method, we here present the results of a visual recognition neuroimaging fusion experiment and compare them within and across experimental settings. We evaluated the reliability of this method by assessing the consistency of the results under similar test conditions, showing high agreement within participants. We then generalized these results to a separate group of individuals and visual input by comparing them to the fMRI-MEG fusion data of Cichy et al. (2016), revealing a highly similar temporal progression recruiting both the dorsal and ventral streams. Together, these results are a testament to the reproducibility of the fMRI-MEG fusion approach and allow for the interpretation of these spatiotemporal dynamics in a broader context.
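
    As a rough illustration of the fusion logic described above, the time-resolved MEG RDM can be correlated with the (time-invariant) RDM of each fMRI region at every time point, yielding one fusion time course per region. The sketch below uses placeholder shapes, ROI labels, and random data rather than anything from the experiment.

```python
# MEG-fMRI fusion sketch: correlate the MEG RDM at each time point with each
# fMRI ROI's RDM to obtain a fusion time course per region. All inputs here
# are random placeholders with illustrative shapes.
import numpy as np
from scipy.stats import spearmanr

n_cond, n_times = 92, 120
n_pairs = n_cond * (n_cond - 1) // 2          # length of a condensed RDM

meg_rdms = np.random.rand(n_times, n_pairs)   # one MEG RDM per time point
fmri_rdms = {"EVC": np.random.rand(n_pairs),  # one RDM per (hypothetical) ROI
             "IT": np.random.rand(n_pairs)}

fusion = {
    roi: np.array([spearmanr(meg_rdms[t], roi_rdm)[0] for t in range(n_times)])
    for roi, roi_rdm in fmri_rdms.items()
}
# fusion["EVC"] traces when the MEG representation resembles EVC's fMRI RDM.
```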

    Non-invasive diagnostic tests for Helicobacter pylori infection

    BACKGROUND: Helicobacter pylori (H pylori) infection has been implicated in a number of malignancies and non-malignant conditions including peptic ulcers, non-ulcer dyspepsia, recurrent peptic ulcer bleeding, unexplained iron deficiency anaemia, idiopathic thrombocytopaenic purpura, and colorectal adenomas. The confirmatory diagnosis of H pylori is by endoscopic biopsy, followed by histopathological examination using haematoxylin and eosin (H & E) stain or special stains such as Giemsa stain and Warthin-Starry stain. Special stains are more accurate than H & E stain. There is significant uncertainty about the diagnostic accuracy of non-invasive tests for diagnosis of H pylori. OBJECTIVES: To compare the diagnostic accuracy of urea breath test, serology, and stool antigen test, used alone or in combination, for diagnosis of H pylori infection in symptomatic and asymptomatic people, so that eradication therapy for H pylori can be started. SEARCH METHODS: We searched MEDLINE, Embase, the Science Citation Index and the National Institute for Health Research Health Technology Assessment Database on 4 March 2016. We screened references in the included studies to identify additional studies. We also conducted citation searches of relevant studies, most recently on 4 December 2016. We did not restrict studies by language or publication status, or whether data were collected prospectively or retrospectively. SELECTION CRITERIA: We included diagnostic accuracy studies that evaluated at least one of the index tests (urea breath test using isotopes such as 13C or 14C, serology, and stool antigen test) against the reference standard (histopathological examination using H & E stain, special stains, or immunohistochemical stain) in people suspected of having H pylori infection. DATA COLLECTION AND ANALYSIS: Two review authors independently screened the references to identify relevant studies and independently extracted data. We assessed the methodological quality of studies using the QUADAS-2 tool. We performed meta-analysis by using the hierarchical summary receiver operating characteristic (HSROC) model to estimate and compare SROC curves. Where appropriate, we used bivariate or univariate logistic regression models to estimate summary sensitivities and specificities. MAIN RESULTS: We included 101 studies involving 11,003 participants, of which 5839 participants (53.1%) had H pylori infection. The prevalence of H pylori infection in the studies ranged from 15.2% to 94.7%, with a median prevalence of 53.7% (interquartile range 42.0% to 66.5%). Most of the studies (57%) included participants with dyspepsia, and 53 studies excluded participants who recently had proton pump inhibitors or antibiotics. There was at least an unclear risk of bias or unclear applicability concern for each study. Of the 101 studies, 15 compared the accuracy of two index tests and two studies compared the accuracy of three index tests. Thirty-four studies (4242 participants) evaluated serology; 29 studies (2988 participants) evaluated stool antigen test; 34 studies (3139 participants) evaluated urea breath test-13C; 21 studies (1810 participants) evaluated urea breath test-14C; and two studies (127 participants) evaluated urea breath test but did not report the isotope used. The thresholds used to define test positivity and the staining techniques used for histopathological examination (reference standard) varied between studies. Due to sparse data for each threshold reported, it was not possible to identify the best threshold for each test. Using data from 99 studies in an indirect test comparison, there was statistical evidence of a difference in diagnostic accuracy between urea breath test-13C, urea breath test-14C, serology, and stool antigen test (P = 0.024). The diagnostic odds ratios for urea breath test-13C, urea breath test-14C, serology, and stool antigen test were 153 (95% confidence interval (CI) 73.7 to 316), 105 (95% CI 74.0 to 150), 47.4 (95% CI 25.5 to 88.1), and 45.1 (95% CI 24.2 to 84.1), respectively. The sensitivity (95% CI), estimated at a fixed specificity of 0.90 (the median from studies across the four tests), was 0.94 (95% CI 0.89 to 0.97) for urea breath test-13C, 0.92 (95% CI 0.89 to 0.94) for urea breath test-14C, 0.84 (95% CI 0.74 to 0.91) for serology, and 0.83 (95% CI 0.73 to 0.90) for stool antigen test. This implies that, on average, given a specificity of 0.90 and prevalence of 53.7% (the median specificity and prevalence in the studies), out of 1000 people tested for H pylori infection, there will be 46 false positives (people without H pylori infection who will be diagnosed as having H pylori infection). In this hypothetical cohort, urea breath test-13C, urea breath test-14C, serology, and stool antigen test will give 30 (95% CI 15 to 58), 42 (95% CI 30 to 58), 86 (95% CI 50 to 140), and 89 (95% CI 52 to 146) false negatives, respectively (people with H pylori infection for whom the diagnosis of H pylori will be missed). Direct comparisons were based on few head-to-head studies. The ratios of diagnostic odds ratios (DORs) were 0.68 (95% CI 0.12 to 3.70; P = 0.56) for urea breath test-13C versus serology (seven studies), and 0.88 (95% CI 0.14 to 5.56; P = 0.84) for urea breath test-13C versus stool antigen test (seven studies). The 95% CIs of these estimates overlap with those of the ratios of DORs from the indirect comparison. Data were limited or unavailable for meta-analysis of other direct comparisons. AUTHORS' CONCLUSIONS: In people without a history of gastrectomy and those who have not recently had antibiotics or proton pump inhibitors, urea breath tests had high diagnostic accuracy, while serology and stool antigen tests were less accurate for diagnosis of Helicobacter pylori infection. This is based on an indirect test comparison (with potential for bias due to confounding), as evidence from direct comparisons was limited or unavailable. The thresholds used for these tests were highly variable, and we were unable to identify specific thresholds that might be useful in clinical practice. We need further comparative studies of high methodological quality to obtain more reliable evidence of relative accuracy between the tests. Such studies should be conducted prospectively in a representative spectrum of participants and clearly reported to ensure low risk of bias. Most importantly, studies should prespecify and clearly report thresholds used, and should avoid inappropriate exclusions.
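
    The hypothetical-cohort numbers in the abstract follow directly from prevalence, specificity, and sensitivity. The short sketch below reproduces that arithmetic approximately; small differences from the reported counts are expected because the published point estimates are rounded.

```python
# Worked example of the abstract's 1000-person cohort: false positives depend
# only on specificity and prevalence, false negatives only on sensitivity.
prevalence, specificity, n = 0.537, 0.90, 1000

infected = prevalence * n        # ~537 people with H pylori infection
not_infected = n - infected      # ~463 people without

# False positives are the same for all tests at the fixed specificity of 0.90.
false_positives = (1 - specificity) * not_infected   # ~46

# Summary sensitivities reported for the four index tests.
sensitivity = {"urea breath test-13C": 0.94, "urea breath test-14C": 0.92,
               "serology": 0.84, "stool antigen test": 0.83}

print(f"false positives (any test): ~{false_positives:.0f}")
for test, sens in sensitivity.items():
    # Missed infections = (1 - sensitivity) x number of infected people.
    print(f"{test}: ~{(1 - sens) * infected:.0f} false negatives")
```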

    processed data

    fMRI, MEG, AlexNet features, and RDM data used in the analyses

    stimuli

    156-image experimental stimulus set with memorability and categorical metadata

    MemorabilityFusion

    Data and code associated with the manuscript, "Visual perception of highly memorable images is mediated by a distributed network of ventral visual regions that enable a late memorability response."

    scripts

    scripts used for the core analyses

    An fMRI dataset of 1,102 natural videos for visual event understanding

    A visual event, such as a dog running in a park, communicates complex relationships between objects and their environment. The human visual system is tasked with transforming these spatiotemporal events into meaningful outputs so we can effectively interact with our environment. To form a useful representation of the event, the visual system utilizes many visual processes, from object recognition to motion perception. Thus, studying the neural correlates of visual event understanding requires brain responses that capture the entire transformation from video-based stimuli to high-level conceptual understanding. However, despite its ecological importance and computational richness, there does not yet exist a dataset sufficient to study visual event understanding. Here we release the Algonauts Action Videos (AAV) dataset, composed of high-quality functional magnetic resonance imaging (fMRI) brain responses to 1,102 richly annotated naturalistic video stimuli. We detail AAV’s experimental design and highlight its high quality and reliable activation throughout the visual and parietal cortices. Initial analyses show the signal contained in AAV reflects numerous visual processes representing different aspects of visual event understanding, from scene recognition to action recognition to memorability processing. Since AAV captures an ecologically relevant and complex visual process, this dataset can be used to study how various aspects of visual perception integrate to form a meaningful understanding of a video. Additionally, we demonstrate its utility as a model evaluation benchmark to bridge the gap between visual neuroscience and video-based computer vision research.
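
    One common way to quantify the reliable activation described above is a split-half correlation across stimulus repetitions with a Spearman-Brown correction. The sketch below assumes a hypothetical beta-estimate array with two repetitions per video; it is not the dataset's actual file format or the authors' analysis pipeline.

```python
# Split-half reliability sketch for a video fMRI dataset: correlate the two
# repetitions of each voxel's responses across videos, then apply the
# Spearman-Brown correction. The array layout is an assumption for illustration.
import numpy as np

betas = np.random.rand(1102, 2, 5000)        # (videos, repetitions, voxels)
rep1, rep2 = betas[:, 0, :], betas[:, 1, :]

# Per-voxel Pearson correlation across videos (z-score, then average product).
z1 = (rep1 - rep1.mean(0)) / rep1.std(0)
z2 = (rep2 - rep2.mean(0)) / rep2.std(0)
r_half = (z1 * z2).mean(axis=0)

# Spearman-Brown correction estimates reliability of the averaged repetitions.
reliability = 2 * r_half / (1 + r_half)
print("median voxel reliability:", np.median(reliability))
```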

    Visual perception of highly memorable images is mediated by a distributed network of ventral visual regions that enable a late memorability response.

    Behavioral and neuroscience studies in humans and primates have shown that memorability is an intrinsic property of an image that predicts its strength of encoding into and retrieval from memory. While previous work has independently probed when or where this memorability effect may occur in the human brain, a description of its spatiotemporal dynamics is missing. Here, we used representational similarity analysis (RSA) to combine functional magnetic resonance imaging (fMRI) with source-estimated magnetoencephalography (MEG) to simultaneously measure when and where the human cortex is sensitive to differences in image memorability. Results reveal that visual perception of High Memorable images, compared to Low Memorable images, recruits a set of regions of interest (ROIs) distributed throughout the ventral visual cortex: a late memorability response (from around 300 ms) in early visual cortex (EVC), inferior temporal cortex, lateral occipital cortex, fusiform gyrus, and the banks of the superior temporal sulcus. Image memorability magnitude is represented after high-level feature processing in visual regions and is reflected in classical memory regions in the medial temporal lobe (MTL). Our results present, to our knowledge, the first unified spatiotemporal account of the visual memorability effect across the human cortex, further supporting the levels-of-processing theory of perception and memory.
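
    A late effect such as the memorability response from around 300 ms is typically read off a High-minus-Low difference time course with some onset criterion. The sketch below applies a simple baseline-threshold rule to placeholder data; the threshold, time axis, and data are illustrative and do not reproduce the paper's statistics.

```python
# Illustrative onset-latency estimate from a High-minus-Low Memorable
# difference time course, using a baseline-derived threshold.
import numpy as np

times = np.arange(-100, 700, 5)              # ms relative to image onset
diff = np.random.randn(times.size) * 0.01    # placeholder difference curve
diff[times >= 300] += 0.05                   # injected late effect for the demo

baseline = diff[times < 0]
threshold = baseline.mean() + 3 * baseline.std()   # simple onset criterion

above = np.flatnonzero(diff > threshold)
onset_ms = times[above[0]] if above.size else None
print("estimated onset:", onset_ms, "ms")
```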