Retinal optical coherence tomography (OCT) images provide crucial insights
into the health of the posterior ocular segment. Therefore, the advancement of
automated image analysis methods is imperative to equip clinicians and
researchers with quantitative data, thereby facilitating informed
decision-making. Deep learning (DL)-based approaches have gained extensive
traction for these analysis tasks, demonstrating remarkable performance
compared with labor-intensive manual analysis. However,
the acquisition of retinal OCT images is often hindered by privacy concerns and
resource-intensive labeling procedures, which conflicts with the prevailing
view that DL models require substantial data volumes to achieve superior
performance. Moreover, limited
computational resources constrain the progress of high-performance medical
artificial intelligence, particularly in less developed regions and countries.
This paper introduces a novel ensemble learning mechanism designed for
recognizing retinal diseases under limited resources (e.g., data, computation).
The mechanism leverages insights from multiple pre-trained models, facilitating
the transfer and adaptation of their knowledge to retinal OCT images. This
approach yields a robust model even with limited labeled data and avoids the
large number of parameters that training from scratch would require.
Comprehensive experimentation on real-world datasets
demonstrates that the proposed approach can achieve superior performance in
recognizing retinal diseases from OCT images, even with extremely limited
labeled data. Furthermore, the method obviates the need to learn large-scale
parameters, making it well suited for deployment in
low-resource scenarios.
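
A minimal sketch of the kind of low-resource setup described above, assuming the ensemble combines several frozen, ImageNet-pretrained backbones with a small trainable fusion head; the specific backbones (resnet18, mobilenet_v3_small), the feature-concatenation fusion, and the example class names are illustrative assumptions, not the paper's exact mechanism.

```python
# Hedged sketch: frozen pre-trained backbones act as fixed feature extractors,
# and only a lightweight head is trained on the scarce labeled OCT scans.
import torch
import torch.nn as nn
from torchvision import models


class FrozenBackboneEnsemble(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Pre-trained backbones with their original classification heads removed.
        resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        resnet.fc = nn.Identity()             # outputs 512-d features
        mobilenet = models.mobilenet_v3_small(
            weights=models.MobileNet_V3_Small_Weights.DEFAULT)
        mobilenet.classifier = nn.Identity()  # outputs 576-d features

        self.backbones = nn.ModuleList([resnet, mobilenet])
        for p in self.backbones.parameters():
            p.requires_grad = False           # transfer knowledge, do not re-learn it

        # The only trainable part: a small head over the concatenated features.
        self.head = nn.Sequential(
            nn.Linear(512 + 576, 256), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(256, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [bb(x) for bb in self.backbones]
        return self.head(torch.cat(feats, dim=1))


# Hypothetical 4-class OCT setting (e.g., CNV / DME / drusen / normal).
model = FrozenBackboneEnsemble(num_classes=4)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")  # far fewer than training from scratch
```

Under these assumptions, only the fusion head is optimized, which keeps both the labeled-data and computational requirements small relative to end-to-end training.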