111 research outputs found
EDDense-Net: Fully Dense Encoder Decoder Network for Joint Segmentation of Optic Cup and Disc
Glaucoma is an eye disease that damages the optic nerve and can lead to
vision loss and permanent blindness, so early detection is critical. The
cup-to-disc ratio (CDR), estimated during an examination of the optic disc
(OD), is used for the diagnosis of glaucoma. In this paper, we present
EDDense-Net, a network for the joint segmentation of the optic cup (OC) and
the OD. The encoder and decoder of this network are made up of dense blocks,
each containing a grouped convolutional layer, allowing the network to
acquire and convey spatial information from the image while reducing the
network's complexity. To limit the loss of spatial information, the optimal
number of filters was used in all convolution layers. In the decoder, dice
pixel classification is employed to alleviate the class-imbalance problem in
semantic segmentation. The proposed network was evaluated on two publicly
available datasets, where it outperformed existing state-of-the-art methods
in terms of accuracy and efficiency. For the diagnosis and analysis of
glaucoma, this method can be used as a second-opinion system to assist
ophthalmologists.
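The cup-to-disc ratio used for diagnosis can be computed directly from binary
OC and OD segmentation masks. The sketch below computes the vertical CDR (the
ratio of cup height to disc height), a common variant; the function and
variable names are illustrative and not taken from the paper:

```python
import numpy as np

def vertical_cdr(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary segmentation masks.

    Each mask is a 2-D array of 0/1 values; the vertical extent is the
    number of image rows containing at least one foreground pixel.
    """
    cup_rows = np.any(cup_mask > 0, axis=1).sum()
    disc_rows = np.any(disc_mask > 0, axis=1).sum()
    if disc_rows == 0:
        raise ValueError("disc mask is empty")
    return cup_rows / disc_rows

# Toy example: a 10-row disc enclosing a 4-row cup -> CDR = 0.4
disc = np.zeros((20, 20), dtype=np.uint8)
disc[5:15, 5:15] = 1
cup = np.zeros((20, 20), dtype=np.uint8)
cup[8:12, 8:12] = 1
print(vertical_cdr(cup, disc))  # 0.4
```

A vertical CDR above roughly 0.6 is commonly treated as suspicious in
screening, which is why segmentation accuracy near the OD/OC boundary matters.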
Boundary and Entropy-driven Adversarial Learning for Fundus Image Segmentation
Accurate segmentation of the optic disc (OD) and optic cup (OC) in fundus
images from different datasets is critical for glaucoma screening. The
cross-domain discrepancy (domain shift) hinders the generalization of deep
neural networks to datasets from different domains. In this work, we present
an unsupervised domain adaptation framework, called Boundary and
Entropy-driven Adversarial Learning (BEAL), to improve OD and OC segmentation
performance, especially on ambiguous boundary regions. In particular, the
proposed BEAL framework uses adversarial learning to encourage the boundary
prediction and the mask-probability entropy map (uncertainty map) of the
target domain to resemble those of the source domain, generating more
accurate boundaries and suppressing high-uncertainty predictions in OD and OC
segmentation. We evaluate the proposed BEAL framework on two public retinal
fundus image datasets (Drishti-GS and RIM-ONE-r3), and the experimental
results demonstrate that our method outperforms state-of-the-art unsupervised
domain adaptation methods. Code will be available at
https://github.com/EmmaW8/BEAL.
Comment: Accepted at MICCAI 201
Reconstruction-driven Dynamic Refinement based Unsupervised Domain Adaptation for Joint Optic Disc and Cup Segmentation
Glaucoma is one of the leading causes of irreversible blindness. Segmentation
of optic disc (OD) and optic cup (OC) on fundus images is a crucial step in
glaucoma screening. Although many deep learning models have been constructed
for this task, it remains challenging to train an OD/OC segmentation model that
could be deployed successfully to different healthcare centers. The
difficulty mainly comes from the domain shift issue, i.e., the fundus images
collected at these centers usually vary greatly in tone, contrast, and
brightness. To address this issue, in this paper, we propose a novel
unsupervised domain adaptation (UDA) method called the Reconstruction-driven
Dynamic Refinement Network (RDR-Net), in which we employ a dual-path
segmentation backbone for simultaneous edge detection and region prediction
and design three modules to alleviate the domain gap. The reconstruction
alignment (RA) module uses a variational auto-encoder (VAE) to reconstruct
the input image, boosting the image-representation ability of the network in
a self-supervised way; it also uses a style-consistency constraint to force
the network to retain more domain-invariant information. The low-level
feature refinement (LFR) module employs input-specific dynamic convolutions
to suppress domain-variant information in the extracted low-level features.
The prediction-map alignment (PMA) module employs entropy-driven adversarial
learning to encourage the network to generate source-like boundaries and
regions. We evaluated RDR-Net against state-of-the-art solutions on four
public fundus image datasets. Our results indicate that RDR-Net is superior
to competing models in both segmentation performance and generalization
ability.
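The input-specific dynamic convolutions in the LFR module can be illustrated
with a generic sketch: the kernel weights are predicted from the input's
globally pooled features rather than being fixed, so the filtering adapts per
image. This is a minimal 1x1 dynamic convolution under our own assumptions,
not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamic_1x1_conv(feat: np.ndarray, w_gen: np.ndarray) -> np.ndarray:
    """Apply a 1x1 convolution whose weights depend on the input.

    feat:  (C_in, H, W) feature map.
    w_gen: (C_out * C_in, C_in) weight generator mapping the globally
           averaged feature vector to per-input kernel weights.
    """
    c_in, h, w = feat.shape
    pooled = feat.mean(axis=(1, 2))              # (C_in,) global context
    kernel = (w_gen @ pooled).reshape(-1, c_in)  # input-specific (C_out, C_in)
    return np.einsum('oc,chw->ohw', kernel, feat)

feat = rng.standard_normal((4, 8, 8))
w_gen = rng.standard_normal((2 * 4, 4))
out = dynamic_1x1_conv(feat, w_gen)
print(out.shape)  # (2, 8, 8)
```

Because the kernel is regenerated per input, two fundus images with different
low-level statistics are filtered differently, which is the mechanism the LFR
module exploits to suppress domain-variant cues.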
Unsupervised Domain Adaptive Fundus Image Segmentation with Few Labeled Source Data
Deep learning-based segmentation methods have been widely employed for
automatic glaucoma diagnosis and prognosis. In practice, fundus images obtained
by different fundus cameras vary significantly in terms of illumination and
intensity. Although recent unsupervised domain adaptation (UDA) methods enhance
the models' generalization ability on the unlabeled target fundus datasets,
they always require sufficient labeled data from the source domain, incurring
additional data-acquisition and annotation costs. To further improve the data
efficiency of cross-domain segmentation methods on fundus images, we explore
the UDA optic disc and cup segmentation problem using few labeled source data
in this work. We first design a Searching-based Multi-style Invariant
Mechanism to diversify the style of the source data as well as increase the
data amount. Next, a prototype consistency mechanism on the foreground
objects is proposed to facilitate feature alignment for each kind of tissue
under different image styles. Moreover, a cross-style self-supervised
learning stage is further designed to improve segmentation performance on the
target images. Our method outperforms several state-of-the-art UDA
segmentation methods on UDA fundus segmentation with few labeled source data.
Comment: Accepted by The 33rd British Machine Vision Conference (BMVC) 202
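The prototype consistency idea can be sketched as follows: a class prototype
is the mean feature vector over that class's pixels, and a consistency loss
pulls together the prototypes computed from differently styled views of the
same image. This is a generic sketch with illustrative names, not the paper's
exact formulation:

```python
import numpy as np

def class_prototype(features: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Mean feature vector over the pixels belonging to one class.

    features: (C, H, W) feature map; mask: (H, W) binary class mask.
    """
    fg = mask > 0
    if not fg.any():
        raise ValueError("empty class mask")
    return features[:, fg].mean(axis=1)  # (C,)

def prototype_consistency(f_a, f_b, mask):
    """Squared distance between prototypes of two styled views."""
    p_a = class_prototype(f_a, mask)
    p_b = class_prototype(f_b, mask)
    return float(np.sum((p_a - p_b) ** 2))

rng = np.random.default_rng(1)
feats = rng.standard_normal((8, 16, 16))
mask = np.zeros((16, 16))
mask[4:10, 4:10] = 1  # foreground tissue region (e.g. optic cup)
# Identical views -> zero consistency loss
print(prototype_consistency(feats, feats, mask))  # 0.0
```

In training, `f_a` and `f_b` would come from the same image under two style
augmentations, so minimizing this distance encourages style-invariant
features for each tissue class.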