54 research outputs found

    Anatomy and physiology of word‑selective visual cortex: from visual features to lexical processing

    Get PDF
    Published: 12 October 2021

    Over the past two decades, researchers have tried to uncover how the human brain extracts linguistic information from a sequence of visual symbols. The description of how the brain's visual system processes words and enables reading has improved with the progressive refinement of experimental methodologies and neuroimaging techniques. This review provides a brief overview of this research journey. We start by describing classical models of object recognition in non-human primates, which represent the foundation for many of the early models of visual word recognition in humans. We then review functional neuroimaging studies investigating word-selective regions in visual cortex. This research led to the differentiation of highly specialized areas involved in the analysis of different aspects of written language. We then consider the corresponding anatomical measurements and describe the main white matter pathways carrying neural signals crucial to word recognition. Finally, in an attempt to integrate structural, functional, and electrophysiological findings, we propose a view of visual word recognition that accounts for the spatial and temporal facets of word-selective neural processes. This multi-modal perspective on the neural circuitry of literacy highlights the relevance of a posterior–anterior differentiation in ventral occipitotemporal cortex for visual processing of written language and lexical features. It also highlights unanswered questions that can guide future research directions. Bridging measures of brain structure and function will help us reach a more precise understanding of the transformation from vision to language.

    This work was supported by the European Union's Horizon 2020 research and innovation programme under Marie Sklodowska-Curie grant agreement no. 837228 and a Rita Levi Montalcini fellowship to SC, NICHD R01-HD095861 and a Jacobs Foundation Research Fellowship to JDY, a Stanford Maternal and Child Health Research Institute award to IK, and the Zuckerman-CHE STEM Leadership Program to MY.

    Combining Citizen Science and Deep Learning to Amplify Expertise in Neuroimaging

    Get PDF
    Big Data promises to advance science through data-driven discovery. However, many standard lab protocols rely on manual examination, which is not feasible for large-scale datasets. Meanwhile, automated approaches lack the accuracy of expert examination. We propose to (1) start with expertly labeled data, (2) amplify labels through web applications that engage citizen scientists, and (3) train machine learning on the amplified labels to emulate the experts. Demonstrating this, we developed a system for quality control of brain magnetic resonance images. Expert-labeled data were amplified by citizen scientists through a simple web interface. A deep learning algorithm was then trained to predict data quality based on the citizen scientist labels. Deep learning performed as well as specialized algorithms for quality control (AUC = 0.99). Combining citizen science and deep learning can generalize and scale expert decision making; this is particularly important in disciplines where specialized, automated tools do not yet exist.
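    The three-step recipe above lends itself to a compact illustration. The sketch below is a minimal, hypothetical stand-in (synthetic data and a simple sklearn classifier rather than the paper's deep network on MRI images) showing how many noisy citizen-scientist votes can be averaged into amplified labels that then train a model, evaluated by AUC against expert truth.

```python
# Minimal sketch of label amplification: aggregate many noisy citizen-
# scientist votes per image into a soft label, train a classifier on
# those labels, and evaluate with AUC against the expert ground truth.
# All names and data are hypothetical stand-ins for the paper's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_images, n_raters, n_features = 500, 20, 32

true_quality = rng.integers(0, 2, n_images)            # hidden pass/fail per image
features = rng.normal(size=(n_images, n_features)) + true_quality[:, None]

# Each citizen scientist casts a noisy binary vote; averaging the votes
# yields a soft "amplified" label per image.
votes = rng.random((n_images, n_raters)) < (0.2 + 0.6 * true_quality[:, None])
soft_labels = votes.mean(axis=1)

# Train on thresholded amplified labels, evaluate against expert truth.
model = LogisticRegression(max_iter=1000)
model.fit(features, soft_labels > 0.5)
auc = roc_auc_score(true_quality, model.predict_proba(features)[:, 1])
print(f"AUC = {auc:.2f}")
```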

    Generating and Evaluating Tests for K-12 Students with Language Model Simulations: A Case Study on Sentence Reading Efficiency

    Full text link
    Developing an educational test can be expensive and time-consuming, as each item must be written by experts and then evaluated by collecting hundreds of student responses. Moreover, many tests require multiple distinct sets of questions administered throughout the school year to closely monitor students' progress, known as parallel tests. In this study, we focus on tests of silent sentence reading efficiency, used to assess students' reading ability over time. To generate high-quality parallel tests, we propose to fine-tune large language models (LLMs) to simulate how previous students would have responded to unseen items. With these simulated responses, we can estimate each item's difficulty and ambiguity. We first use GPT-4 to generate new test items following a list of expert-developed rules and then apply a fine-tuned LLM to filter the items based on criteria from psychological measurement. We also propose an optimal-transport-inspired technique for generating parallel tests and show that the generated tests closely correspond to the original test's difficulty and reliability based on crowdworker responses. Our evaluation of a generated test with 234 students from grades 2 to 8 produces test scores highly correlated (r = 0.93) with those of a standard test form written by human experts and evaluated across thousands of K-12 students.

    Comment: Accepted to EMNLP 2023 (Main).
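    As a rough illustration of the parallel-form idea, the sketch below matches LLM-generated candidate items to a reference form on estimated difficulty using the Hungarian assignment algorithm, a simple special case related to optimal transport. The paper's actual optimal-transport-inspired technique and its difficulty estimates are not reproduced here; all values are synthetic.

```python
# Illustrative sketch: build a parallel form by pairing each reference
# item with the candidate item closest in estimated difficulty. The
# assignment problem solved here is a simple relative of the paper's
# optimal-transport-inspired method, not that method itself.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
ref_difficulty = np.sort(rng.normal(size=40))    # items on the original form
cand_difficulty = rng.normal(size=200)           # LLM-generated item pool

# Cost matrix: absolute difficulty gap for every reference/candidate pair.
cost = np.abs(ref_difficulty[:, None] - cand_difficulty[None, :])
rows, cols = linear_sum_assignment(cost)         # each reference item gets
                                                 # a distinct candidate
parallel_form = cand_difficulty[cols]
print("mean |difficulty gap|:", cost[rows, cols].mean())
```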

    Evaluating the Reliability of Human Brain White Matter Tractometry

    Get PDF
    Published: Nov 17, 2021

    The validity of research results depends on the reliability of analysis methods. In recent years, there have been concerns about the validity of research that uses diffusion-weighted MRI (dMRI) to understand human brain white matter connections in vivo, in part based on the reliability of analysis methods used in this field. We defined and assessed three dimensions of reliability in dMRI-based tractometry, an analysis technique that assesses the physical properties of white matter pathways: (1) reproducibility, (2) test-retest reliability, and (3) robustness. To facilitate reproducibility, we provide software that automates tractometry (https://yeatmanlab.github.io/pyAFQ). In measurements from the Human Connectome Project, as well as clinical-grade measurements, we find that tractometry has high test-retest reliability, comparable to most standardized clinical assessment tools. We find that tractometry is also robust, showing high reliability with different choices of analysis algorithms. Taken together, our results suggest that tractometry is a reliable approach to the analysis of white matter connections. The overall approach taken here both demonstrates the specific trustworthiness of tractometry analysis and outlines what researchers can do to establish the reliability of computational analysis pipelines in neuroimaging.

    This work was supported through grant 1RF1MH121868-01 from the National Institute of Mental Health/the BRAIN Initiative, through grant 5R01EB027585-02 to Eleftherios Garyfallidis (Indiana University) from the National Institute of Biomedical Imaging and Bioengineering, through Azure Cloud Computing Credits for Research & Teaching provided through the University of Washington's Research Computing unit and the University of Washington eScience Institute, and through NICHD R21HD092771 to Jason D. Yeatman.
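    Of the three reliability dimensions, test-retest reliability is the easiest to illustrate. The sketch below computes per-subject correlations between tract profiles from two scan sessions on synthetic arrays; it is an assumption-laden stand-in and does not use the pyAFQ API, which is where real tract profiles would come from.

```python
# Minimal sketch of test-retest reliability for tractometry: correlate
# each subject's tract profile (e.g., FA sampled at nodes along a
# pathway) between two sessions. Arrays are synthetic stand-ins; real
# profiles would be produced by software such as pyAFQ.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_nodes = 30, 100                  # 100 nodes along one tract

profile = rng.normal(size=(n_subjects, n_nodes))                   # session 1
retest = profile + 0.3 * rng.normal(size=(n_subjects, n_nodes))    # session 2

# Pearson correlation between sessions, one value per subject.
r = np.array([np.corrcoef(profile[s], retest[s])[0, 1]
              for s in range(n_subjects)])
print(f"median test-retest r = {np.median(r):.2f}")
```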

    Rapid online assessment of reading ability

    Get PDF
    Published: 18 March 2021

    An accurate model of the factors that contribute to individual differences in reading ability depends on data collection in large, diverse and representative samples of research participants. However, that is rarely feasible due to the constraints imposed by standardized measures of reading ability, which require test administration by trained clinicians or researchers. Here we explore whether a simple, two-alternative forced choice, time limited lexical decision task (LDT), self-delivered through the web browser, can serve as an accurate and reliable measure of reading ability. We found that performance on the LDT is highly correlated with scores on standardized measures of reading ability such as the Woodcock-Johnson Letter Word Identification test (r = 0.91, disattenuated r = 0.94). Importantly, the LDT reading ability measure is highly reliable (r = 0.97). After optimizing the list of words and pseudowords based on item response theory, we found that a short experiment with 76 trials (2–3 min) provides a reliable (r = 0.95) measure of reading ability. Thus, the self-administered, Rapid Online Assessment of Reading ability (ROAR) developed here overcomes the constraints of resource-intensive, in-person reading assessment, and provides an efficient and automated tool for effective online research into the mechanisms of reading (dis)ability.

    We would like to thank the Pavlovia and PsychoPy team for their support on the browser-based experiments. This work was funded by NIH NICHD R01HD095861-01, research grants from Microsoft, and a Jacobs Foundation Research Fellowship to J.D.Y.
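    The pairing of a raw correlation (r = 0.91) with a disattenuated one (r = 0.94) follows the standard correction for attenuation from classical test theory, sketched below. The Woodcock-Johnson reliability used is an assumed value for illustration, not a figure taken from the paper.

```python
# Standard correction for attenuation: divide the observed correlation
# by the square root of the product of the two measures' reliabilities.
def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Correct an observed correlation for measurement unreliability."""
    return r_xy / (rel_x * rel_y) ** 0.5

# 0.97 is the LDT reliability reported above; 0.96 for the
# Woodcock-Johnson is an assumed value for illustration.
print(f"{disattenuate(0.91, 0.97, 0.96):.2f}")  # ~0.94
```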

    Speed accuracy tradeoff? Not so fast: Marginal changes in speed have inconsistent relationships with accuracy in real-world settings

    Get PDF
    The speed-accuracy tradeoff suggests that responses generated under time constraints will be less accurate. While it has undergone extensive experimental verification, it is less clear whether it applies in settings where time pressures are not experimentally manipulated (but where respondents still vary in their utilization of time). Using a large corpus of 29 response time datasets containing data from cognitive tasks without experimental manipulation of time pressure, we probe whether the speed-accuracy tradeoff holds across a variety of tasks using idiosyncratic within-person variation in speed. We find inconsistent relationships between marginal increases in time spent responding and accuracy; in many cases, marginal increases in time do not predict increases in accuracy. However, we do observe that time pressures (in the form of time limits) consistently reduce accuracy and that rapid responses typically show the anticipated relationship (i.e., they are more accurate if they are slower). We also consider analyses of items and individuals. We find substantial variation in the item-level associations between speed and accuracy. On the person side, respondents who exhibit more within-person variation in response speed are typically of lower ability. Finally, we consider the predictive power of a person's response time in predicting out-of-sample responses; it is generally a weak predictor. Collectively, our findings suggest the speed-accuracy tradeoff may be limited as a conceptual model in its application to non-experimental settings and, more generally, offer empirical results and an analytic approach that will be useful as more response time data are collected.
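    A minimal sketch of the within-person logic described above: person-center log response times so that only idiosyncratic speed variation remains, then test whether marginally slower responses are more accurate. Everything below is synthetic and illustrative; it is not the paper's analysis code, and the simulated effect size is arbitrary.

```python
# Sketch of a within-person speed-accuracy analysis: center each
# person's log response times on their own mean, then regress response
# correctness on the centered times. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_persons, n_trials = 100, 50

# Each person has their own baseline speed plus trial-level variation.
log_rt = rng.normal(loc=rng.normal(size=(n_persons, 1)), scale=0.5,
                    size=(n_persons, n_trials))
within_rt = log_rt - log_rt.mean(axis=1, keepdims=True)  # person-centered

# Simulate a weak within-person relationship: slower -> slightly more accurate.
p_correct = 1 / (1 + np.exp(-(0.8 + 0.1 * within_rt)))
correct = rng.random((n_persons, n_trials)) < p_correct

model = LogisticRegression()
model.fit(within_rt.reshape(-1, 1), correct.ravel())
print("within-person slope on log RT:", model.coef_[0][0])
```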

    Temporal Tuning of Word- and Face-selective Cortex

    No full text

    Annotating digital text with phonemic cues to support decoding in struggling readers.

    No full text
    An advantage of digital media is the flexibility to personalize the presentation of text to an individual's needs and to embed tools that support pedagogy. The goal of this study was to develop a tablet-based reading tool, grounded in the principles of phonics-based instruction, and determine whether struggling readers could leverage this technology to decode challenging words. The tool presents a small icon below each vowel to represent its sound. Forty struggling child readers were randomly assigned to an intervention or control group to test the efficacy of the phonemic cues. We found that struggling readers could leverage the cues to improve pseudoword decoding: after two weeks of practice, the intervention group showed greater improvement than controls. This study demonstrates the potential of a text annotation, grounded in intervention research, to help children decode novel words. These results highlight the opportunity for educational technologies to support and supplement classroom instruction.
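    As a toy illustration of the annotation idea, the sketch below tags each vowel in a word with a cue symbol standing in for the icons the tool renders beneath the text. The letter-to-cue mapping is a placeholder: the real tool cues vowel sounds, which requires pronunciation information rather than a fixed letter lookup.

```python
# Toy sketch of vowel annotation: follow each vowel with a bracketed cue
# symbol, standing in for the icons drawn below the text in the tool.
# The mapping is a hypothetical placeholder, not the tool's actual cues.
CUES = {"a": "ă", "e": "ĕ", "i": "ĭ", "o": "ŏ", "u": "ŭ"}

def annotate(word: str) -> str:
    """Append a bracketed cue after each vowel, e.g. 'cat' -> 'ca[ă]t'."""
    return "".join(ch + f"[{CUES[ch]}]" if ch in CUES else ch
                   for ch in word.lower())

print(annotate("reading"))  # re[ĕ]a[ă]di[ĭ]ng
```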