HPMI: A retinal fundus image dataset for identification of high and pathological myopia based on deep learning
<p>Myopia is one of the leading causes of visual impairment worldwide and can progress to high or pathological myopia (HM or PM) if proper measures are not taken. Accurate identification of HM and PM plays an important role in their intervention and treatment, and can be implemented by leveraging deep learning on sufficiently large annotated image data. However, few efforts have been made to construct publicly accessible annotated data for this task. In this paper, we constructed a retinal fundus image dataset (called HPMI) for identifying HM and PM. The dataset consists of 4011 fundus images with corresponding HM and PM annotations, which were confirmed by multiple ophthalmic examinations (e.g., visual acuity and axial length). To the best of our knowledge, this is the largest fundus image dataset for the classification of HM and PM. Based on this dataset, we further validated the classification potential of three representative deep learning networks (i.e., ResNet50, DenseNet121, and InceptionV3) and analyzed the consistency between their predictions and the annotations.</p>
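Consistency between model predictions and annotations can be quantified with a chance-corrected agreement measure such as Cohen's kappa. The abstract does not say which metric the authors used, so the choice of kappa, the function name, and the example labels below are illustrative assumptions:

```python
from collections import Counter

def cohens_kappa(annotations, predictions):
    """Cohen's kappa: chance-corrected agreement between two label sequences.

    Illustrative sketch; not necessarily the consistency metric used in the paper.
    """
    assert len(annotations) == len(predictions)
    n = len(annotations)
    # Observed proportion of agreement
    p_o = sum(a == p for a, p in zip(annotations, predictions)) / n
    # Expected agreement if both raters labeled independently at their marginal rates
    ca, cp = Counter(annotations), Counter(predictions)
    p_e = sum((ca[label] / n) * (cp[label] / n) for label in set(ca) | set(cp))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels for four fundus images
kappa = cohens_kappa(["HM", "PM", "HM", "PM"], ["HM", "PM", "PM", "PM"])
```

A kappa near 1 indicates agreement well beyond chance; values near 0 indicate agreement no better than chance.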
Boundaries of intra-retinal layers in OCT macular images.
<p>As seen in this image, taken by UHR-OCT along the horizontal meridian, nine intra-retinal layer boundaries were visualized. Images taken along the vertical meridian by UHR-OCT and along both meridians by the RTVue100 were similar.</p>
The detailed sequence in the boundary segmentation process.
<p>(a) Original image. (b) Image smoothing. (c) Gradient image. (d) The ILM and the boundary between the RPE and choroid layers were segmented first. (e) The detection area was limited and the minimum-weighted path was searched. (f) Segmented image.</p>
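The minimum-weighted path search in step (e) can be sketched as a dynamic-programming shortest path through a cost image derived from the gradient, tracing one boundary row per image column. The grid values and the one-row-per-column move constraint below are illustrative assumptions, not the paper's exact graph construction:

```python
def min_weight_path(cost):
    """Minimum-cost left-to-right path through a 2-D cost grid.

    Each step advances one column and moves at most one row up or down,
    a common simplification of graph-search boundary tracing on a
    gradient-derived cost image.
    """
    rows, cols = len(cost), len(cost[0])
    acc = [row[:] for row in cost]             # accumulated path cost
    back = [[0] * cols for _ in range(rows)]   # backpointer: row in previous column
    for c in range(1, cols):
        for r in range(rows):
            best_prev, best_cost = r, float("inf")
            for pr in (r - 1, r, r + 1):
                if 0 <= pr < rows and acc[pr][c - 1] < best_cost:
                    best_prev, best_cost = pr, acc[pr][c - 1]
            acc[r][c] = cost[r][c] + best_cost
            back[r][c] = best_prev
    # Trace back from the cheapest endpoint in the last column
    r = min(range(rows), key=lambda i: acc[i][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    path.reverse()
    return path  # boundary row index for each column

# Toy cost grid: low cost (strong gradient) runs along the diagonal
boundary = min_weight_path([[1, 9, 9], [9, 1, 9], [9, 9, 1]])  # -> [0, 1, 2]
```

Limiting the detection area, as in step (e), amounts to masking rows outside a band around the previously found boundaries before running the search.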
Segmentation errors in scans of lower image quality and corresponding corrected segmentation after applying the semi-automated approach.
<p>(A) The algorithm mistakenly identified the OPL/ONL interface. (B) Corrected segmentation corresponding to (A) after applying the semi-automated approach. (C) The algorithm mistakenly identified the RNFL/GCL boundary. (D) Corrected segmentation corresponding to (C) after applying the semi-automated approach.</p>
Bland-Altman plots of thickness measurements determined with the automated segmentation algorithm on UHR-OCT and RTVue100 images.
<p>Only the images along the horizontal meridian were analyzed. The solid horizontal lines represent the mean of the thickness differences, and the dashed horizontal lines represent the mean difference ± 1.96 standard deviations.</p>
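The bias line and limits of agreement in a Bland-Altman plot follow directly from the paired differences between the two devices' measurements. A minimal pure-Python sketch (the function name and sample thickness values are illustrative):

```python
from statistics import mean, stdev

def bland_altman_limits(x, y):
    """Bland-Altman bias and 95% limits of agreement for paired measurements.

    Returns (bias, lower_limit, upper_limit), where the limits are
    bias -/+ 1.96 standard deviations of the paired differences.
    """
    diffs = [a - b for a, b in zip(x, y)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical layer thicknesses (um) from two devices for four eyes
bias, lower, upper = bland_altman_limits([10, 12, 11, 13], [9, 11, 12, 12])
```

Each point on the plot itself is the pair (mean of the two measurements, their difference); the three horizontal lines come from the values returned above.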
Comparison of the thickness measurements for eight intra-retinal layers and total retina between UHR-OCT and RTVue100 devices.
<p>P values were obtained with paired t-tests.</p>
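The paired t statistic behind these P values is computed from the per-eye differences between the two devices. A minimal sketch that returns the statistic and degrees of freedom (p-value lookup against the t distribution is omitted to stay within the standard library; the data are illustrative):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for two matched samples.

    t = mean(d) / (sd(d) / sqrt(n)), where d are the paired differences.
    """
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return t, n - 1

# Hypothetical thicknesses (um) for the same four eyes on two devices
t_stat, df = paired_t([10, 12, 11, 13], [9, 11, 12, 12])
```

The resulting t and df would then be compared against the t distribution (e.g., via statistical software) to obtain the P value.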
Thickness profiles of eight intra-retinal layers determined from the UHR-OCT and RTVue100 images in the horizontal meridian.
<p>Thickness profiles of eight intra-retinal layers along the horizontal meridian were averaged over 20 normal healthy eyes. Error bars represent standard deviation.</p>
Configuration of the two OCT instruments.
<p>Configuration of the two OCT instruments.</p>
Repeatability and reproducibility of thickness measurements for eight intra-retinal layers measured by the RTVue100.
<p>T1: mean thickness for the first measurement by examiner 1; T2: mean thickness for the second measurement by examiner 1; T3: mean thickness for the first measurement by examiner 2; ICCa: intraclass correlation coefficient of repeatability; ICCb: intraclass correlation coefficient of reproducibility; CORa: coefficient of repeatability; CORb: coefficient of reproducibility; n = 20 eyes.</p>
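One common Bland-Altman-style definition of the coefficient of repeatability (COR) is 1.96 times the standard deviation of the test-retest differences; whether the paper used this exact definition is an assumption, and the example data are hypothetical:

```python
from math import sqrt

def coefficient_of_repeatability(first, second):
    """COR = 1.96 x SD of within-subject test-retest differences.

    'first' and 'second' are repeated measurements of the same eyes
    (e.g., T1 and T2 by the same examiner for repeatability, or
    T1 and T3 by different examiners for reproducibility).
    """
    diffs = [a - b for a, b in zip(first, second)]
    n = len(diffs)
    m = sum(diffs) / n
    sd = sqrt(sum((d - m) ** 2 for d in diffs) / (n - 1))  # sample SD
    return 1.96 * sd

# Hypothetical T1 and T2 thicknesses (um) for four eyes
cor_a = coefficient_of_repeatability([10, 12, 11, 13], [9, 11, 12, 12])
```

A smaller COR means repeated measurements agree more closely; the ICC values in the table instead express agreement relative to between-eye variability and require a variance-components (ANOVA) decomposition.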
Thickness profiles of eight intra-retinal layers determined from the UHR-OCT and RTVue100 images in the vertical meridian.
<p>Thickness profiles of eight intra-retinal layers along the vertical meridian were averaged over 20 normal healthy eyes. Error bars represent standard deviation.</p>
