87 research outputs found
Misspecifications of Stimulus Presentation Durations in Experimental Psychology: A Systematic Review of the Psychophysics Literature
BACKGROUND: In visual psychophysics, precise display timing, particularly for brief stimulus presentations, is often required. The aim of this study was to systematically review the commonly applied methods for the computation of stimulus durations in psychophysical experiments and to contrast them with the true luminance signals of stimuli on computer displays. METHODOLOGY/PRINCIPAL FINDINGS: In a first step, we systematically scanned the citation index Web of Science for studies with experiments with stimulus presentations for brief durations. Articles which appeared between 2003 and 2009 in three different journals were taken into account if they contained experiments with stimuli presented for less than 50 milliseconds. The 79 articles that matched these criteria were reviewed for their method of calculating stimulus durations. For those 75 studies where the method was either given or could be inferred, stimulus durations were calculated by the sum of frames (SOF) method. In a second step, we describe the luminance signal properties of the two monitor technologies which were used in the reviewed studies, namely cathode ray tube (CRT) and liquid crystal display (LCD) monitors. We show that SOF is inappropriate for brief stimulus presentations on both of these technologies. In extreme cases, SOF specifications and true stimulus durations are even unrelated. Furthermore, the luminance signals of the two monitor technologies are so fundamentally different that the duration of briefly presented stimuli cannot be calculated by a single method for both technologies. Statistics over stimulus durations given in the reviewed studies are discussed with respect to different duration calculation methods. CONCLUSIONS/SIGNIFICANCE: The SOF method for duration specification which was clearly dominating in the reviewed studies leads to serious misspecifications particularly for brief stimulus presentations. 
We strongly discourage its use for brief stimulus presentations on CRT and LCD monitors.
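The sum-of-frames computation criticized above is easy to state, which is part of its appeal; a minimal sketch of the nominal calculation (function name and values are illustrative, not from the article):

```python
def sof_duration_ms(n_frames, refresh_rate_hz):
    """Sum-of-frames (SOF) estimate: number of frames times the frame period.

    The review above shows this nominal value can diverge badly from the
    true luminance signal on both CRTs and LCDs for brief presentations.
    """
    frame_period_ms = 1000.0 / refresh_rate_hz
    return n_frames * frame_period_ms

# A 3-frame stimulus on a 100 Hz display is nominally 30 ms under SOF,
# even though the physical light emission differs between CRT and LCD.
print(sof_duration_ms(3, 100))  # 30.0
```

The review's point is that this nominal number need not match the measured luminance trace, which is why a single duration formula cannot serve both monitor technologies.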
Temporal Properties of Liquid Crystal Displays: Implications for Vision Science Experiments
Liquid crystal displays (LCD) are currently replacing the previously dominant cathode ray tubes (CRT) in most vision science applications. While the properties of the CRT technology are widely known among vision scientists, the photometric and temporal properties of LCDs are unfamiliar to many practitioners. We provide the essential theory, present measurements to assess the temporal properties of different LCD panel types, and identify the main determinants of the photometric output. Our measurements demonstrate that the specifications of the manufacturers are insufficient for proper display selection and control for most purposes. Furthermore, we show how several novel display technologies developed to improve fast transitions or the appearance of moving objects may be accompanied by side-effects in some areas of vision research. Finally, we unveil a number of surprising technical deficiencies. The use of LCDs may cause problems in several areas in vision science. Aside from the well-known issue of motion blur, the main problems are the lack of reliable and precise onsets and offsets of displayed stimuli, several undesirable and uncontrolled components of the photometric output, and input lags which make LCDs problematic for real-time applications. As a result, LCDs require extensive individual measurements prior to applications in vision science.
The Early Time Course of Compensatory Face Processing in Congenital Prosopagnosia
BACKGROUND: Prosopagnosia is a selective deficit in facial identification which can be either acquired (e.g., after brain damage) or present from birth (congenital). The face recognition deficit in prosopagnosia is characterized by worse accuracy, longer reaction times, more dispersed gaze behavior and a strong reliance on featural processing. METHODS/PRINCIPAL FINDINGS: We introduce a conceptual model of an apperceptive/associative type of congenital prosopagnosia where a deficit in holistic processing is compensated by a serial inspection of isolated, informative features. Based on the proposed model, we investigated performance differences in different face and shoe identification tasks between a group of 16 participants with congenital prosopagnosia and a group of 36 age-matched controls. Given enough training and unlimited stimulus presentation, prosopagnosics achieved normal face identification accuracy, albeit with longer reaction times. The latter increase was paralleled by an equally sized increase in the stimulus presentation times needed to achieve an accuracy of 80%. When the inspection time of stimuli was limited (50 ms to 750 ms), prosopagnosics only showed worse accuracy but no difference in reaction time. Tested for the ability to generalize from frontal to rotated views, prosopagnosics performed worse than controls across all rotation angles, but the magnitude of the deficit did not change with increasing rotation. All group differences in accuracy, reaction or presentation times were selective to face stimuli and did not extend to shoes. CONCLUSIONS/SIGNIFICANCE: Our study provides a characterization of congenital prosopagnosia in terms of early processing differences. More specifically, compensatory processing in congenital prosopagnosia requires an inspection of faces that is sufficiently long to allow for sequential focusing on informative features.
This characterization of dysfunctional processing in prosopagnosia further emphasizes fast and holistic information encoding as two defining characteristics of normal face processing.
Harvard Eye Fairness: A Large-Scale 3D Imaging Dataset for Equitable Eye Diseases Screening and Fair Identity Scaling
Fairness, or equity, in machine learning is profoundly important for societal well-being, but limited public datasets hinder its progress, especially in medicine, one of the most important application areas for fairness learning. Currently, no large-scale public medical datasets with 3D imaging data are available for fairness learning, even though 3D imaging is a standard diagnostic test in modern clinics. In addition, existing medical fairness datasets are repurposed datasets and therefore typically carry limited demographic identity attributes, at most the three attributes of age, gender, and race. To address this gap, we introduce our Eye Fairness dataset (Harvard-EF) with 30,000 subjects, covering three major eye diseases, age-related macular degeneration, diabetic retinopathy, and glaucoma, which together affect 380 million patients globally. Harvard-EF includes both 2D fundus photos and 3D optical coherence tomography scans with six demographic identity attributes: age, gender, race, ethnicity, preferred language, and marital status. We also propose a fair identity scaling (FIS) approach that combines group and individual scaling to improve model fairness. FIS achieves superior performance compared with various state-of-the-art fairness learning methods on the racial, gender, and ethnicity fairness tasks with 2D and 3D imaging data, which demonstrates the utility of our Harvard-EF dataset for fairness learning. To facilitate fairness comparisons between different models, we propose performance-scaled disparity measures, which compare model fairness while accounting for overall performance levels. The dataset and code are publicly accessible via
https://ophai.hms.harvard.edu/datasets/harvard-ef30k
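As a rough illustration of the idea behind a performance-scaled disparity measure, one can divide a raw between-group gap by the overall performance level; the formula below is my own sketch of the concept, not necessarily the authors' exact definition:

```python
import numpy as np

def performance_scaled_disparity(group_scores):
    """Illustrative disparity measure scaled by overall performance.

    group_scores: per-identity-group metric values (e.g., per-group AUC).
    The raw disparity (max - min) is divided by the mean score so that
    models at different overall performance levels can be compared.
    NOTE: this is a sketch of the idea, not the paper's exact measure.
    """
    scores = np.asarray(group_scores, dtype=float)
    disparity = scores.max() - scores.min()
    return disparity / scores.mean()

print(performance_scaled_disparity([0.85, 0.80, 0.90]))
```

A model with identical per-group scores yields zero disparity regardless of its overall level, which is the behavior the scaling is meant to preserve.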
Impact of Natural Blind Spot Location on Perimetry.
We study the spatial distribution of natural blind spot location (NBSL) and its impact on perimetry. Pattern deviation (PD) values of 11,449 reliable visual fields (VFs) that are defined as clinically unaffected based on summary indices were extracted from 11,449 glaucoma patients. We modeled NBSL distribution using a two-dimensional non-linear regression approach and correlated NBSL with spherical equivalent (SE). Additionally, we compared PD values of groups with longer and shorter distances than median, and larger and smaller angles than median between NBSL and fixation. Mean and standard deviation of horizontal and vertical NBSL were 14.33° ± 1.37° and -2.06° ± 1.27°, respectively. SE decreased with increasing NBSL (correlation: r = -0.14, p < 0.001). For NBSL distances longer than median distance (14.32°), average PD values decreased in the upper central (average difference for significant points (ADSP): -0.18 dB) and increased in the lower nasal VF region (ADSP: 0.14 dB). For angles in the direction of upper hemifield relative to the median angle (-8.13°), PD values decreased in lower nasal (ADSP: -0.11 dB) and increased in upper temporal VF areas (ADSP: 0.19 dB). In conclusion, we demonstrate that NBSL has a systematic effect on the spatial distribution of VF sensitivity.
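The reported NBSL-SE relationship is a plain Pearson correlation; a minimal sketch on synthetic data (the arrays below are illustrative values only, not study data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient, as used to relate NBSL to SE."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc * xc).sum() * (yc * yc).sum()))

# Illustrative values: horizontal NBSL (degrees) vs. spherical equivalent (D).
nbsl = [13.1, 14.0, 14.5, 15.2, 16.0]
se = [0.5, 0.0, -0.25, -0.75, -1.5]
print(pearson_r(nbsl, se))  # negative, echoing the inverse relationship reported
```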
Harvard Glaucoma Fairness: A Retinal Nerve Disease Dataset for Fairness Learning and Fair Identity Normalization
Fairness (also known as equity) in machine learning is important for societal well-being, but limited public datasets hinder its progress. Currently, no dedicated public medical datasets with imaging data are available for fairness learning, even though minority groups suffer from more health issues. To address this gap, we introduce Harvard Glaucoma Fairness (Harvard-GF), a retinal nerve disease dataset with both 2D and 3D imaging data and balanced racial groups for glaucoma detection. Glaucoma is the leading cause of irreversible blindness globally, with Black individuals having double the glaucoma prevalence of other races. We also propose a fair identity normalization (FIN) approach to equalize feature importance between different identity groups. FIN achieves superior performance compared with various state-of-the-art fairness learning methods on the racial, gender, and ethnicity fairness tasks with 2D and 3D imaging data, which demonstrates the utility of our Harvard-GF dataset for fairness learning. To facilitate fairness comparisons between different models, we propose an equity-scaled performance measure, which can be flexibly used to compare all kinds of performance metrics in the context of fairness. The dataset and code are publicly accessible via
\url{https://ophai.hms.harvard.edu/datasets/harvard-glaucoma-fairness-3300-samples/}
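One way to realize an equity-scaled performance measure is to shrink the overall score by the summed between-group deviations; the sketch below is an assumed formulation of the idea described above, not necessarily the paper's exact definition:

```python
def equity_scaled_metric(overall, group_values):
    """Equity-scaled performance: overall metric shrunk by group disparities.

    overall      : overall metric value (e.g., AUC across all subjects).
    group_values : the same metric computed per identity group.
    The overall score is divided by one plus the summed absolute deviations
    of each group's score from the overall score, so a model is rewarded
    both for high performance and for small between-group gaps.
    (Assumed formulation; see the paper for the exact definition.)
    """
    deviation = sum(abs(overall - g) for g in group_values)
    return overall / (1.0 + deviation)

print(equity_scaled_metric(0.85, [0.88, 0.82]))
```

When all groups match the overall score the measure reduces to the plain metric, which is what makes it usable with any performance metric.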
Machine learning-derived baseline visual field patterns predict future glaucoma onset in the Ocular Hypertension Treatment Study
PURPOSE: The Ocular Hypertension Treatment Study (OHTS) identified risk factors for primary open-angle glaucoma (POAG) in patients with ocular hypertension, including pattern standard deviation (PSD). Archetypal analysis, an unsupervised machine learning method, may offer a more interpretable approach to risk stratification by identifying patterns in baseline visual fields (VFs).
METHODS: There were 3272 eyes available in the OHTS. Archetypal analysis was applied using 24-2 baseline VFs, and model selection was performed with cross-validation. Decomposition coefficients for archetypes (ATs) were calculated. A penalized Cox proportional hazards model was implemented to select discriminative ATs. The AT model was compared to the OHTS model. Associations were identified between ATs and both POAG onset and VF progression, the latter defined by mean deviation change per year.
RESULTS: We selected 8494 baseline VFs. Optimal AT count was 19. The highest prevalence ATs were AT9, AT11, and AT7. The AT-based prediction model had a C-index of 0.75 for POAG onset. Multivariable models demonstrated that a one-interquartile range increase in the AT5 (hazard ratio [HR] = 1.14; 95% confidence interval [CI], 1.04-1.25), AT8 (HR = 1.22; 95% CI, 1.09-1.37), AT15 (HR = 1.26; 95% CI, 1.12-1.41), and AT17 (HR = 1.17; 95% CI, 1.03-1.31) coefficients conferred increased risk of POAG onset. AT5, AT10, and AT14 were significantly associated with rapid VF progression. In a subgroup analysis by high-risk ATs (>95th percentile or <75th percentile coefficients), PSD lost significance as a predictor of POAG in the low-risk group.
CONCLUSIONS: Baseline VFs, prior to detectable glaucomatous damage, contain occult patterns representing early changes that may increase the risk of POAG onset and VF progression in patients with ocular hypertension. The relationship between PSD and POAG is modified by the presence of high-risk patterns at baseline. An AT-based prediction model for POAG may provide more interpretable glaucoma-specific information in a clinical setting.
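The decomposition coefficients mentioned above express each baseline VF as a non-negative mixture of archetype patterns; a minimal sketch (the solver choice, names, and toy example are mine, as the abstract does not specify an optimizer):

```python
import numpy as np

def archetype_coefficients(vf, archetypes, n_iter=2000):
    """Decompose a visual field into non-negative archetype weights.

    vf         : (p,) vector of test-point values (54 points for a 24-2 VF).
    archetypes : (k, p) matrix, one archetype pattern per row.
    Solves min ||A c - vf||^2 subject to c >= 0 by projected gradient
    descent, then normalizes the weights to sum to 1 so each VF reads as
    a mixture of archetypal patterns.
    """
    A = np.asarray(archetypes, dtype=float).T      # (p, k) design matrix
    b = np.asarray(vf, dtype=float)
    k = A.shape[1]
    c = np.full(k, 1.0 / k)                        # uniform starting mixture
    step = 1.0 / np.linalg.norm(A.T @ A, 2)        # step size <= 1/L ensures convergence
    for _ in range(n_iter):
        grad = A.T @ (A @ c - b)
        c = np.maximum(c - step * grad, 0.0)       # project onto c >= 0
    total = c.sum()
    return c / total if total > 0 else c

# Toy check: with orthonormal "archetypes", weights recover the mixture.
print(archetype_coefficients([2.0, 1.0, 1.0], np.eye(3)))  # ~[0.5, 0.25, 0.25]
```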
An Artificial Intelligence Approach to Detect Visual Field Progression in Glaucoma Based on Spatial Pattern Analysis.
Purpose: To detect visual field (VF) progression by analyzing spatial pattern changes.
Methods: We selected 12,217 eyes from 7360 patients with at least five reliable 24-2 VFs and 5 years of follow-up with an interval of at least 6 months. VFs were decomposed into 16 archetype patterns previously derived by artificial intelligence techniques. Linear regressions were applied to the 16 archetype weights of VF series over time. We defined progression as the rate of decrease of the normal archetype, or the rate of increase of any of the 15 VF defect archetypes, falling outside normal limits. The archetype method was compared with mean deviation (MD) slope, Advanced Glaucoma Intervention Study (AGIS) scoring, Collaborative Initial Glaucoma Treatment Study (CIGTS) scoring, and the permutation of pointwise linear regression (PoPLR), and was validated by a subset of VFs assessed by three glaucoma specialists.
Results: In the method development cohort of 11,817 eyes, the archetype method agreed more with MD slope (kappa: 0.37) and PoPLR (0.33) than AGIS (0.12) and CIGTS (0.22). The most frequently progressed patterns included decreased normal pattern (63.7%), and increased nasal steps (16.4%), altitudinal loss (15.9%), superior-peripheral defect (12.1%), paracentral/central defects (10.5%), and near total loss (10.4%). In the clinical validation cohort of 397 eyes with 27.5% confirmed progression, the agreement (kappa) and accuracy (mean of hit rate and correct rejection rate) of the archetype method (0.51 and 0.77) significantly (P < 0.001 for all) outperformed AGIS (0.06 and 0.52), CIGTS (0.24 and 0.59), MD slope (0.21 and 0.59), and PoPLR (0.26 and 0.60).
Conclusions: The archetype method can inform clinicians of VF progression patterns.
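The progression criterion above, a per-archetype slope falling outside normal limits, reduces to one linear regression per archetype weight series; a minimal sketch (the limit thresholds and toy data are placeholders, not the study's values):

```python
import numpy as np

def archetype_slopes(times, weights):
    """Least-squares slope of each archetype weight over follow-up time.

    times   : (n_visits,) follow-up times in years.
    weights : (n_visits, k) archetype weights per visit (k = 16 in the study).
    Returns (k,) per-archetype rates of change (weight units per year).
    """
    t = np.asarray(times, dtype=float)
    w = np.asarray(weights, dtype=float)
    t_c = t - t.mean()
    # Column-wise least-squares slope: cov(t, w_j) / var(t).
    return (t_c @ (w - w.mean(axis=0))) / (t_c @ t_c)

def flags_progression(slopes, limits, normal_idx=0):
    """Progression if the normal archetype declines, or any defect archetype
    increases, faster than its limit (placeholder thresholds)."""
    s = np.asarray(slopes, dtype=float)
    lim = np.asarray(limits, dtype=float)
    defect = np.delete(np.arange(s.size), normal_idx)
    return bool(s[normal_idx] < -lim[normal_idx] or np.any(s[defect] > lim[defect]))

# Toy series: normal archetype decays while one defect archetype grows.
t = [0, 1, 2, 3, 4]
w = [[1.0, 0.0], [0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.6, 0.4]]
print(flags_progression(archetype_slopes(t, w), [0.05, 0.05]))  # True
```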
Genome-wide association meta-analysis for early age-related macular degeneration highlights novel loci and insights for advanced disease