Osteoporotic and Neoplastic Compression Fracture Classification on Longitudinal CT
Classifying vertebral compression fractures (VCFs) as osteoporotic or
neoplastic in origin is fundamental to treatment planning. We developed
a fracture classification system by acquiring quantitative morphologic and bone
density determinants of fracture progression through the use of automated
measurements from longitudinal studies. A total of 250 CT studies were acquired
for the task, each having previously identified VCFs with osteoporosis or
neoplasm. Thirty-six features for each identified VCF were computed and
classified using a committee of support vector machines. Ten-fold cross
validation on 695 identified fractured vertebrae showed classification
accuracies of 0.812, 0.665, and 0.820 for the measured, longitudinal, and
combined feature sets, respectively.
Comment: Contributed 4-page paper presented at the 2016 IEEE International
Symposium on Biomedical Imaging (ISBI), April 13-16, 2016, Prague, Czech
Republic.
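The abstract's classifier setup, a committee of support vector machines evaluated with ten-fold cross validation, can be sketched as follows. This is a hedged illustration only: the kernels, the synthetic data, and the toy labels are assumptions, not the authors' actual 36 morphologic and bone-density features.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 36))                  # 36 features per fractured vertebra (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 0 = osteoporotic, 1 = neoplastic (toy labels)

# A committee of SVMs with different kernels, combined by majority vote.
# The kernel choices here are illustrative assumptions.
committee = VotingClassifier([
    ("linear", make_pipeline(StandardScaler(), SVC(kernel="linear"))),
    ("rbf",    make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
    ("poly",   make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2))),
])

# Ten-fold cross validation, as in the abstract.
scores = cross_val_score(committee, X, y, cv=10)
print(f"mean accuracy: {scores.mean():.3f}")
```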
RAR-U-Net: a Residual Encoder to Attention Decoder by Residual Connections Framework for Spine Segmentation under Noisy Labels
Segmentation algorithms of medical image volumes are widely studied for many
clinical and research purposes. We propose a novel and efficient framework for
medical image segmentation. The framework functions under a deep learning
paradigm, incorporating four novel contributions. First, residual
interconnections are explored across encoders at different scales. Second, the
four copy-and-crop connections are replaced with residual-block-based
concatenations to reduce the disparity between encoders and decoders. Third,
convolutional attention modules for feature refinement are applied to decoders
at all scales. Finally, an adaptive denoising learning (ADL) strategy, based on
the training progression from underfitting to overfitting, is studied.
Experimental
results are illustrated on a publicly available benchmark database of spine
CTs. Our segmentation framework achieves performance competitive with other
state-of-the-art methods across a variety of evaluation measures.
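The abstract does not spell out how the ADL strategy operates; a common way to realize this kind of noise-robust training is the "small-loss" selection trick, sketched below. All names and the shrinking schedule are hypothetical assumptions, not the paper's actual algorithm: the idea is only that, as training moves from underfitting toward overfitting, high-loss samples are increasingly treated as likely label noise and dropped.

```python
import numpy as np

def adl_select(per_sample_loss, epoch, total_epochs, min_keep=0.6):
    """Small-loss selection sketch (hypothetical). Early in training keep
    all samples (the network is underfitting and losses are uninformative);
    later, keep only the fraction with the smallest loss, treating high-loss
    samples as likely label noise. `min_keep` is an assumed floor."""
    # Linearly shrink the kept fraction from 1.0 down to min_keep.
    keep_frac = 1.0 - (1.0 - min_keep) * min(epoch / total_epochs, 1.0)
    n_keep = max(1, int(round(keep_frac * len(per_sample_loss))))
    # Return indices of the n_keep smallest-loss samples.
    return np.argsort(per_sample_loss)[:n_keep]

losses = np.array([0.1, 2.5, 0.2, 0.15, 3.0, 0.05])
early = adl_select(losses, epoch=0, total_epochs=10)    # everything kept early
late = adl_select(losses, epoch=10, total_epochs=10)    # high-loss samples dropped late
print(len(early), sorted(late))
```

At the final epoch the two highest-loss samples (indices 1 and 4) are excluded, which is the intended denoising behavior.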
Comparing Normalization Methods for Limited Batch Size Segmentation Neural Networks
The widespread use of Batch Normalization has enabled training deeper neural
networks with more stable and faster results. However, Batch Normalization
works best with large batch sizes during training, and because state-of-the-art
segmentation convolutional neural network architectures are very memory
demanding, large batch sizes are often impossible to achieve on current
hardware.
We evaluate the alternative normalization methods proposed to solve this issue
on a problem of binary spine segmentation from 3D CT scans. Our results show
the effectiveness of Instance Normalization in the limited-batch-size training
environment. Of all the compared methods, Instance Normalization achieved the
highest result, with a Dice coefficient of 0.96, which is comparable to our
previous results achieved by a deeper network with longer training time. We
also show that the Instance Normalization implementation used in this
experiment is computationally efficient compared to a network without any
normalization method.
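The difference between the two normalization schemes can be illustrated directly: Batch Normalization computes statistics across the batch dimension (so they degrade at small batch sizes), while Instance Normalization computes them over the spatial dimensions of each sample independently and is therefore batch-size invariant. A minimal numpy sketch, with hypothetical helper names:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize over batch + spatial axes, one statistic per channel.
    x has shape (N, C, D, H, W), as in 3D segmentation networks."""
    mean = x.mean(axis=(0, 2, 3, 4), keepdims=True)
    var = x.var(axis=(0, 2, 3, 4), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    """Normalize each (sample, channel) slice over its own spatial axes,
    so the result does not depend on what else is in the batch."""
    mean = x.mean(axis=(2, 3, 4), keepdims=True)
    var = x.var(axis=(2, 3, 4), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.default_rng(0).normal(size=(2, 4, 8, 8, 8))
# Instance Normalization of a lone sample matches its result inside a
# larger batch; Batch Normalization of a lone sample generally does not.
print(np.allclose(instance_norm(x[:1]), instance_norm(x)[:1]))  # True
```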
Dealing with unreliable annotations: noise-robust network for semantic segmentation through transformer-improved-encoder and convolution-decoder
Conventional deep learning methods have shown promising results in the medical domain when trained on accurate ground truth data. In practice, due to constraints such as limited time or annotator inexperience, ground truth data obtained from clinical environments may not always be impeccably accurate. In this paper, we investigate whether the presence of noise in ground truth data can be mitigated. We propose an innovative and efficient approach that addresses the challenge posed by noisy segmentation labels. Our method consists of four key components within a deep learning framework. First, we introduce a Vision Transformer-based modified encoder combined with a convolution-based decoder for the segmentation network, capitalizing on the recent success of self-attention mechanisms. Second, we consider a public CT spine segmentation dataset and devise a preprocessing step to generate (and even exaggerate) noisy labels, simulating real-world clinical situations. Third, to counteract the influence of noisy labels, we incorporate an adaptive denoising learning (ADL) strategy into the network training. Finally, we demonstrate through experimental results that the proposed method achieves noise-robust performance, outperforming existing baseline segmentation methods across multiple evaluation metrics.
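A standard evaluation metric for segmentation experiments like these (and the one reported in the normalization study above) is the Dice coefficient, 2|A ∩ B| / (|A| + |B|) for binary masks A and B. A minimal sketch, with a hypothetical helper name:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap of two binary masks; eps guards the empty-mask case."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, target), 3))  # 2 overlapping voxels of 3+3 -> 0.667
```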