67 research outputs found

    Misspecifications of Stimulus Presentation Durations in Experimental Psychology: A Systematic Review of the Psychophysics Literature

    BACKGROUND: In visual psychophysics, precise display timing is often required, particularly for brief stimulus presentations. The aim of this study was to systematically review the methods commonly applied to compute stimulus durations in psychophysical experiments and to contrast them with the true luminance signals of stimuli on computer displays. METHODOLOGY/PRINCIPAL FINDINGS: In a first step, we systematically scanned the citation index Web of Science for studies with experiments involving brief stimulus presentations. Articles that appeared between 2003 and 2009 in three different journals were taken into account if they contained experiments with stimuli presented for less than 50 milliseconds. The 79 articles that matched these criteria were reviewed for their method of calculating stimulus durations. In the 75 studies where the method was either given or could be inferred, stimulus durations were calculated by the sum of frames (SOF) method. In a second step, we describe the luminance signal properties of the two monitor technologies used in the reviewed studies, namely cathode ray tube (CRT) and liquid crystal display (LCD) monitors. We show that SOF is inappropriate for brief stimulus presentations on both technologies. In extreme cases, SOF specifications and true stimulus durations are even unrelated. Furthermore, the luminance signals of the two monitor technologies are so fundamentally different that the duration of briefly presented stimuli cannot be calculated by a single method for both technologies. Statistics over the stimulus durations given in the reviewed studies are discussed with respect to different duration calculation methods. CONCLUSIONS/SIGNIFICANCE: The SOF method for duration specification, which clearly dominated in the reviewed studies, leads to serious misspecifications, particularly for brief stimulus presentations. We strongly discourage its use for brief stimulus presentations on CRT and LCD monitors.
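    As a concrete illustration of the criticized convention, the sum of frames (SOF) calculation can be sketched as follows; the refresh rate and frame count are illustrative values, not figures from the review:

```python
# Hedged sketch of the "sum of frames" (SOF) duration calculation
# criticized above: nominal duration = frame count x frame period.
# Values below are illustrative, not taken from the reviewed studies.

def sof_duration_ms(n_frames: int, refresh_hz: float) -> float:
    """Nominal stimulus duration in ms: frames times the frame period."""
    return n_frames * 1000.0 / refresh_hz

# A 3-frame stimulus on a 100 Hz display is specified as 30 ms by SOF,
# even though on a CRT each frame is only a brief phosphor flash and on
# an LCD slow pixel transitions smear onset and offset, so the true
# luminance signal can differ substantially from this nominal value.
print(sof_duration_ms(3, 100.0))  # 30.0
```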

    The Early Time Course of Compensatory Face Processing in Congenital Prosopagnosia

    BACKGROUND: Prosopagnosia is a selective deficit in facial identification that can be either acquired (e.g., after brain damage) or present from birth (congenital). The face recognition deficit in prosopagnosia is characterized by worse accuracy, longer reaction times, more dispersed gaze behavior, and a strong reliance on featural processing. METHODS/PRINCIPAL FINDINGS: We introduce a conceptual model of an apperceptive/associative type of congenital prosopagnosia in which a deficit in holistic processing is compensated by a serial inspection of isolated, informative features. Based on the proposed model, we investigated performance differences in different face and shoe identification tasks between a group of 16 participants with congenital prosopagnosia and a group of 36 age-matched controls. Given enough training and unlimited stimulus presentation, prosopagnosics achieved normal face identification accuracy, albeit with longer reaction times. The latter increase was paralleled by an equally sized increase in the stimulus presentation times needed to achieve an accuracy of 80%. When the inspection time of stimuli was limited (50 ms to 750 ms), prosopagnosics showed only worse accuracy but no difference in reaction time. Tested for the ability to generalize from frontal to rotated views, prosopagnosics performed worse than controls across all rotation angles, but the magnitude of the deficit did not change with increasing rotation. All group differences in accuracy, reaction, or presentation times were selective to face stimuli and did not extend to shoes. CONCLUSIONS/SIGNIFICANCE: Our study provides a characterization of congenital prosopagnosia in terms of early processing differences. More specifically, compensatory processing in congenital prosopagnosia requires an inspection of faces that is sufficiently long to allow for sequential focusing on informative features. This characterization of dysfunctional processing in prosopagnosia further emphasizes fast and holistic information encoding as two defining characteristics of normal face processing.

    Harvard Eye Fairness: A Large-Scale 3D Imaging Dataset for Equitable Eye Diseases Screening and Fair Identity Scaling

    Fairness or equity in machine learning is profoundly important for societal well-being, but limited public datasets hinder its progress, especially in the area of medicine. It is undeniable that fairness in medicine is one of the most important areas for fairness learning's applications. Currently, no large-scale public medical datasets with 3D imaging data for fairness learning are available, while 3D imaging data in modern clinics are standard tests for disease diagnosis. In addition, existing medical fairness datasets are actually repurposed datasets, and therefore they typically have limited demographic identity attributes with at most three identity attributes of age, gender, and race for fairness modeling. To address this gap, we introduce our Eye Fairness dataset with 30,000 subjects (Harvard-EF) covering three major eye diseases including age-related macular degeneration, diabetic retinopathy, and glaucoma affecting 380 million patients globally. Our Harvard-EF dataset includes both 2D fundus photos and 3D optical coherence tomography scans with six demographic identity attributes including age, gender, race, ethnicity, preferred language, and marital status. We also propose a fair identity scaling (FIS) approach combining group and individual scaling together to improve model fairness. Our FIS approach is compared with various state-of-the-art fairness learning methods with superior performance in the racial, gender, and ethnicity fairness tasks with 2D and 3D imaging data, which demonstrate the utilities of our Harvard-EF dataset for fairness learning. To facilitate fairness comparisons between different models, we propose performance-scaled disparity measures, which can be used to compare model fairness accounting for overall performance levels. The dataset and code are publicly accessible via https://ophai.hms.harvard.edu/datasets/harvard-ef30k
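    The abstract describes FIS only as combining group and individual scaling; a minimal sketch of one plausible reading, with a hypothetical `fis_weights` helper and an illustrative blending factor `alpha`, might look like this (the actual FIS formulation may differ):

```python
# Hedged sketch of a "fair identity scaling"-style loss weighting that
# blends group-level and individual-level signals, as the abstract
# describes at a high level. This is an assumption-laden illustration,
# not the paper's actual method.

def fis_weights(losses, groups, alpha=0.5):
    """Blend per-group mean loss with per-sample loss into sample weights."""
    by_group = {}
    for loss, g in zip(losses, groups):
        by_group.setdefault(g, []).append(loss)
    group_mean = {g: sum(v) / len(v) for g, v in by_group.items()}
    # alpha controls the group vs. individual trade-off (illustrative).
    raw = [alpha * group_mean[g] + (1 - alpha) * loss
           for loss, g in zip(losses, groups)]
    total = sum(raw)
    return [w / total for w in raw]  # normalized to sum to 1

# Samples from a higher-loss group get upweighted relative to uniform:
print(fis_weights([1.0, 1.0, 3.0], ["a", "a", "b"]))
```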

    Heritability of Face Recognition


    Harvard Glaucoma Fairness: A Retinal Nerve Disease Dataset for Fairness Learning and Fair Identity Normalization

    Fairness (also referred to as equity) in machine learning is important for societal well-being, but limited public datasets hinder its progress. Currently, no dedicated public medical datasets with imaging data are available for fairness learning, even though minority groups suffer from more health issues. To address this gap, we introduce Harvard Glaucoma Fairness (Harvard-GF), a retinal nerve disease dataset with both 2D and 3D imaging data and balanced racial groups for glaucoma detection. Glaucoma is the leading cause of irreversible blindness globally, with Black individuals having double the glaucoma prevalence of other races. We also propose a fair identity normalization (FIN) approach to equalize feature importance between different identity groups. Our FIN approach is compared with various state-of-the-art fairness learning methods and shows superior performance in racial, gender, and ethnicity fairness tasks with 2D and 3D imaging data, demonstrating the utility of our Harvard-GF dataset for fairness learning. To facilitate fairness comparisons between different models, we propose an equity-scaled performance measure, which can be flexibly used to compare all kinds of performance metrics in the context of fairness. The dataset and code are publicly accessible via \url{https://ophai.hms.harvard.edu/datasets/harvard-glaucoma-fairness-3300-samples/}
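    The equity-scaled performance measure is not spelled out in the abstract; one plausible form, assuming overall performance is discounted by the summed absolute deviations of group scores from the overall score, can be sketched as follows:

```python
# Hedged sketch of an equity-scaled performance measure in the spirit of
# the abstract: overall performance discounted by between-group
# disparity. The exact formula used by Harvard-GF may differ.

def equity_scaled(overall: float, group_scores: list) -> float:
    """Divide overall performance by 1 + summed absolute group deviations."""
    disparity = sum(abs(g - overall) for g in group_scores)
    return overall / (1.0 + disparity)

# Two models with the same overall AUC but different group gaps:
print(equity_scaled(0.85, [0.85, 0.85]))  # no disparity -> 0.85
print(equity_scaled(0.85, [0.95, 0.75]))  # large gaps -> penalized score
```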

    Machine learning-derived baseline visual field patterns predict future glaucoma onset in the Ocular Hypertension Treatment Study

    PURPOSE: The Ocular Hypertension Treatment Study (OHTS) identified risk factors for primary open-angle glaucoma (POAG) in patients with ocular hypertension, including pattern standard deviation (PSD). Archetypal analysis, an unsupervised machine learning method, may offer a more interpretable approach to risk stratification by identifying patterns in baseline visual fields (VFs). METHODS: There were 3272 eyes available in the OHTS. Archetypal analysis was applied using 24-2 baseline VFs, and model selection was performed with cross-validation. Decomposition coefficients for archetypes (ATs) were calculated. A penalized Cox proportional hazards model was implemented to select discriminative ATs. The AT model was compared to the OHTS model. Associations were identified between ATs and both POAG onset and VF progression, defined by mean deviation change per year. RESULTS: We selected 8494 baseline VFs. The optimal AT count was 19. The most prevalent ATs were AT9, AT11, and AT7. The AT-based prediction model had a C-index of 0.75 for POAG onset. Multivariable models demonstrated that a one-interquartile-range increase in the AT5 (hazard ratio [HR] = 1.14; 95% confidence interval [CI], 1.04-1.25), AT8 (HR = 1.22; 95% CI, 1.09-1.37), AT15 (HR = 1.26; 95% CI, 1.12-1.41), and AT17 (HR = 1.17; 95% CI, 1.03-1.31) coefficients conferred increased risk of POAG onset. AT5, AT10, and AT14 were significantly associated with rapid VF progression. In a subgroup analysis by high-risk ATs (>95th or <75th percentile coefficients), PSD lost significance as a predictor of POAG in the low-risk group. CONCLUSIONS: Baseline VFs, prior to detectable glaucomatous damage, contain occult patterns representing early changes that may increase the risk of POAG onset and VF progression in patients with ocular hypertension. The relationship between PSD and POAG is modified by the presence of high-risk patterns at baseline. An AT-based prediction model for POAG may provide more interpretable, glaucoma-specific information in a clinical setting.
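    For readers unfamiliar with per-IQR hazard ratios like those reported above, the conversion from a Cox model coefficient is a one-line calculation; the `beta` and `iqr` values below are hypothetical, chosen only to reproduce an HR near 1.14:

```python
import math

# Hedged sketch: converting a Cox proportional hazards coefficient into
# the per-IQR hazard ratios reported above (e.g., HR = 1.14 for a
# one-IQR increase in the AT5 coefficient). The beta and iqr values are
# illustrative, not taken from the study.

def hazard_ratio_per_iqr(beta: float, iqr: float) -> float:
    """HR for a one-interquartile-range increase in a covariate."""
    return math.exp(beta * iqr)

beta_at5 = 0.655  # hypothetical per-unit log-hazard coefficient
iqr_at5 = 0.20    # hypothetical IQR of the AT5 decomposition coefficient
print(round(hazard_ratio_per_iqr(beta_at5, iqr_at5), 2))  # 1.14
```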

    Chinese characters reveal impacts of prior experience on very early stages of perception

    Visual perception is strongly shaped by accumulated experience with the world, as has been shown for shape, color, and position perception, in the field of visuomotor learning, and in neural computation. In addition, visual perception is tuned to the statistics of natural scenes. Such prior experience is modulated by neuronal top-down control, the temporal properties of which have been the subject of recent studies. Here, we address these temporal properties and ask how early in time accumulated past experience can modulate visual perception.