A New Ensemble Learning Framework for 3D Biomedical Image Segmentation
3D image segmentation plays an important role in biomedical image analysis.
Many 2D and 3D deep learning models have achieved state-of-the-art segmentation
performance on 3D biomedical image datasets. Yet, 2D and 3D models have their
own strengths and weaknesses, and unifying them may yield more accurate
results. In this paper, we propose a new ensemble
learning framework for 3D biomedical image segmentation that combines the
merits of 2D and 3D models. First, we develop a fully convolutional network
based meta-learner to learn how to improve the results from 2D and 3D models
(base-learners). Then, to minimize over-fitting for our sophisticated
meta-learner, we devise a new training method that uses the results of the
base-learners as multiple versions of "ground truths". Furthermore, since our
new meta-learner training scheme does not depend on manual annotation, it can
utilize abundant unlabeled 3D image data to further improve the model.
Extensive experiments on two public datasets (the HVSMR 2016 Challenge dataset
and the mouse piriform cortex dataset) show that our approach is effective
under fully-supervised, semi-supervised, and transductive settings, and attains
superior performance over state-of-the-art image segmentation methods.
Comment: To appear in AAAI-2019. The first three authors contributed equally to the paper.
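The pseudo-ground-truth training scheme above can be sketched in a few lines. This is an illustrative simplification: a two-weight fusion function stands in for the paper's FCN meta-learner, and the random probability maps stand in for real 2D/3D base-learner outputs on unlabeled volumes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical base-learner outputs: per-voxel foreground probabilities
# from a 2D model and a 3D model on the same unlabeled volume.
p2d = rng.random((8, 8, 8))
p3d = np.clip(p2d + 0.1 * rng.standard_normal((8, 8, 8)), 0.0, 1.0)

def fuse(w, preds):
    """Weighted fusion of base-learner maps (stand-in for the FCN meta-learner)."""
    logits = sum(wi * pi for wi, pi in zip(w, preds))
    return 1.0 / (1.0 + np.exp(-(logits - 0.5)))  # squash to (0, 1)

# Train the fusion weights against *each* base-learner output in turn,
# treating them as multiple versions of "ground truth" -- no manual labels.
w = np.array([0.5, 0.5])
lr = 0.1
for _ in range(100):
    for target in (p2d, p3d):
        out = fuse(w, (p2d, p3d))
        grad_out = (out - target) * out * (1.0 - out)  # MSE-through-sigmoid gradient
        w -= lr * np.array([np.mean(grad_out * p2d), np.mean(grad_out * p3d)])

fused = fuse(w, (p2d, p3d))
```

Because the targets are themselves model outputs, this step needs no annotation, which is what allows the semi-supervised and transductive settings described above.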
Efficient approximation of Earth Mover's Distance Based on Nearest Neighbor Search
Earth Mover's Distance (EMD) is an important similarity measure between two
distributions, used in computer vision and many other application domains.
However, its exact calculation is computationally and memory intensive, which
hinders its scalability and applicability for large-scale problems. Various
approximate EMD algorithms have been proposed to reduce computational costs,
but they suffer from lower accuracy and may require additional memory or
manual parameter tuning. In this paper, we present a novel approach, NNS-EMD,
to approximate EMD using Nearest Neighbor Search (NNS), in order to achieve
high accuracy, low time complexity, and high memory efficiency. The NNS
operation reduces the number of data points compared in each NNS iteration and
offers opportunities for parallel processing. We further accelerate NNS-EMD via
vectorization on GPU, which is especially beneficial for large datasets. We
compare NNS-EMD with both the exact EMD and state-of-the-art approximate EMD
algorithms on image classification and retrieval tasks. We also apply NNS-EMD
to calculate transport mapping and realize color transfer between images.
NNS-EMD can be 44x to 135x faster than the exact EMD implementation, and
achieves superior accuracy, speedup, and memory efficiency over existing
approximate EMD methods.
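The core idea of matching mass greedily via nearest-neighbor search can be sketched as below. This is a minimal illustration of the NNS-based matching principle, not the paper's exact algorithm (which adds iteration structure and GPU vectorization); point sets and weights are hypothetical.

```python
import numpy as np

def nns_emd(x, wx, y, wy):
    """Greedy NNS-based approximation of Earth Mover's Distance.

    x, y: point coordinates, shape (n, d) and (m, d).
    wx, wy: nonnegative masses; both must sum to the same total.
    Each pass matches every remaining source point to its nearest
    remaining target and moves as much mass as possible.
    """
    wx, wy = wx.astype(float).copy(), wy.astype(float).copy()
    cost = 0.0
    while wx.sum() > 1e-9:
        live_x = np.where(wx > 1e-9)[0]          # sources with mass left
        live_y = np.where(wy > 1e-9)[0]          # targets with capacity left
        # Pairwise distances restricted to live points only: each pass
        # shrinks the candidate set, which is where the speedup comes from.
        d = np.linalg.norm(x[live_x, None, :] - y[None, live_y, :], axis=-1)
        nn = d.argmin(axis=1)                    # nearest live target per source
        for k, i in enumerate(live_x):
            j = nn[k]
            move = min(wx[i], wy[live_y[j]])
            if move <= 0.0:
                continue                         # target drained earlier this pass
            cost += move * d[k, j]
            wx[i] -= move
            wy[live_y[j]] -= move
    return cost
```

On identical distributions the approximation returns zero, and on disjoint point sets it reduces to mass times distance, matching the exact EMD in those simple cases.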
SHMC-Net: A Mask-guided Feature Fusion Network for Sperm Head Morphology Classification
Male infertility accounts for about one-third of global infertility cases.
Manual assessment of sperm abnormalities through head morphology analysis
encounters issues of observer variability and diagnostic discrepancies among
experts. Its alternative, Computer-Assisted Semen Analysis (CASA), suffers from
low-quality sperm images, small datasets, and noisy class labels. We propose a
new approach for sperm head morphology classification, called SHMC-Net, which
uses segmentation masks of sperm heads to guide the morphology classification
of sperm images. SHMC-Net generates reliable segmentation masks using image
priors, refines object boundaries with an efficient graph-based method, and
trains an image network with sperm head crops and a mask network with the
corresponding masks. In the intermediate stages of the networks, image and mask
features are fused with a fusion scheme to better learn morphological features.
To handle noisy class labels and regularize training on small datasets,
SHMC-Net applies Soft Mixup to combine mixup augmentation and a loss function.
We achieve state-of-the-art results on SCIAN and HuSHeM datasets, outperforming
methods that use additional pre-training or costly ensembling techniques.
Comment: Published at ISBI 202
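The label-softening side of Soft Mixup can be sketched as standard mixup with soft targets. This is a hedged illustration of the mixing mechanism only; the paper's Soft Mixup additionally couples the augmentation with a dedicated loss function, and the batch contents here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_mixup_batch(images, labels, num_classes, alpha=0.4):
    """Mix a batch of images and produce soft label targets.

    A convex combination of each sample with a randomly paired sample;
    the mixed label distribution softens the impact of noisy class labels
    and regularizes training on small datasets.
    """
    lam = rng.beta(alpha, alpha)                 # mixing coefficient in (0, 1)
    perm = rng.permutation(len(images))          # random partner for each sample
    onehot = np.eye(num_classes)[labels]
    mixed_x = lam * images + (1.0 - lam) * images[perm]
    mixed_y = lam * onehot + (1.0 - lam) * onehot[perm]
    return mixed_x, mixed_y
```

The mixed targets remain valid probability distributions (rows sum to one), so they can be consumed by any cross-entropy-style loss.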
GIFM: an image restoration method with generalized image formation model for poor visible conditions
Recently, image restoration has attracted considerable attention from researchers; such methods generally restore degraded images based on the atmospheric scattering model (ATSM) or the retinex model (RM). These two models account only for a single attenuation process during imaging, thereby introducing undesirable results. To address this issue, we propose an image restoration method based on a generalized image formation model (GIFM). First, unlike existing image restoration methods, we build a novel image formation model that describes a light attenuation process comprising both the light-source-to-scene path and the scene-to-sensor path. Second, we construct an objective optimization function to decompose a degraded image into a color-distorted component and a color-corrected component, and provide an augmented-Lagrange-multiplier-based alternating direction minimization algorithm to solve the optimization problem. Finally, we fully exploit the respective advantages of small-scale and large-scale neighborhoods in image restoration, and propose a weighted fusion strategy based on the image's own brightness to balance brightness enhancement and contrast improvement. Extensive experiments on three image enhancement datasets show that our GIFM achieves better results than state-of-the-art methods.
Experiments further suggest that our GIFM performs well for image restoration of extreme scenes, keypoint detection, object detection, and image segmentation.
This work was supported in part by the China Postdoctoral Science Foundation under Grant 2019M660438; in part by the National Natural Science Foundation of China under Grant 62171252, Grant 62071272, Grant 61701247, Grant 62001158, and Grant 62273001; in part by the Postdoctoral Science Foundation of China under Grant 2021M701903; in part by the National Key Research and Development Program of China under Grant 2020AAA0130000; in part by the MindSpore, CANN, and Ascend AI Processor; and in part by the CAAI-Huawei MindSpore Open Fund.
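The final brightness-weighted fusion step described above can be sketched as a per-pixel convex combination. This is an assumption-laden illustration: the choice of weight map (normalized brightness of the input itself) and which scale each weight favors follow the abstract's description only loosely, and the restored inputs are placeholders.

```python
import numpy as np

def brightness_weighted_fusion(small_scale, large_scale, img):
    """Fuse two restored versions of `img` using its own brightness as the weight.

    `small_scale` / `large_scale`: restorations from small- and large-scale
    neighborhoods (hypothetical inputs). The per-pixel weight map is a sketch:
    brighter pixels lean on one restoration, darker pixels on the other,
    balancing brightness enhancement against contrast improvement.
    """
    w = img / (img.max() + 1e-8)                 # per-pixel brightness in [0, 1]
    return w * large_scale + (1.0 - w) * small_scale
```

Because the result is a pixelwise convex combination, it always lies between the two restorations, so the fusion cannot introduce values outside either input's range.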
How Families Can Provide a Healthy and Positive Environment for Growth
Panel discussion: How families can provide a healthy and positive environment for growth. Moderator: Dr. 陳沃聰 (Deputy Director, Centre for Social Policy Studies, Department of Applied Social Sciences, The Hong Kong Polytechnic University). Panelists: (1) Ms. 梁蕙儀 (Producer, Radio Television Hong Kong); (2) Ms. 雷張慎佳 (Chairperson, Against Child Abuse); (3) Dr. 阮嘉毅 (Consultant Paediatrician, Kwong Wah Hospital); (4) Ms. 王佩賢 (parent representative); (5) Mr. 鄧毓華 (parent representative); (6) 梁緻珞 (student representative)
Biomedical Image Segmentation via Representative Annotation
Deep learning has been applied successfully to many biomedical image segmentation tasks. However, due to the diversity and complexity of biomedical image data, manual annotation for training common deep learning models is very time-consuming and labor-intensive, especially since typically only biomedical experts can annotate image data well. Human experts are often involved in a long and iterative process of annotation, as in active learning type annotation schemes. In this paper, we propose representative annotation (RA), a new deep learning framework for reducing annotation effort in biomedical image segmentation. RA uses unsupervised networks for feature extraction and selects representative image patches for annotation in the latent space of learned feature descriptors, which implicitly characterizes the underlying data while minimizing redundancy. A fully convolutional network (FCN) is then trained using the annotated selected image patches for image segmentation. Our RA scheme offers three compelling advantages: (1) It leverages the ability of deep neural networks to learn better representations of image data; (2) it performs one-shot selection for manual annotation and frees annotators from the iterative process of common active learning based annotation schemes; (3) it can be deployed to 3D images with simple extensions. We evaluate our RA approach using three datasets (two 2D and one 3D) and show our framework yields competitive segmentation results compared with state-of-the-art methods.
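The one-shot selection of representative patches in a learned feature space can be sketched with a greedy farthest-point criterion. This is an illustrative stand-in: the abstract specifies selection in the latent space that minimizes redundancy, but the exact selection criterion here (k-center-style coverage) and the feature vectors are assumptions.

```python
import numpy as np

def select_representatives(features, k):
    """Greedily pick k representative patches in feature space.

    features: (n, d) latent descriptors of candidate image patches.
    Starts from patch 0, then repeatedly adds the patch farthest from
    everything chosen so far -- covering the data while minimizing
    redundancy, in one shot (no iterative annotate/retrain loop).
    """
    chosen = [0]
    dmin = np.linalg.norm(features - features[0], axis=1)  # distance to chosen set
    for _ in range(k - 1):
        nxt = int(dmin.argmax())                           # least-covered patch
        chosen.append(nxt)
        dmin = np.minimum(dmin, np.linalg.norm(features - features[nxt], axis=1))
    return chosen
```

The selected indices would then be the only patches sent to human annotators before training the segmentation FCN, which is what frees experts from the iterative loop of active-learning-style schemes.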