2,378 research outputs found
Improving Kidney Tumor Detection Accuracy Using Hybrid U-Net Segmentation
Kidney cancer is a significant contributor to cancer-related mortality, highlighting the critical importance of early and precise tumor detection. This study introduces a computer-aided approach using the KiTS19 dataset and a hybrid U-Net architecture. Manual tumor segmentation is resource-intensive and prone to error. Leveraging the hybrid U-Net, known for its proficiency in medical image analysis, we achieve precise tumor identification. Our method involves initial kidney and tumor segmentation in high-resolution CT images, followed by region of interest (ROI) generation and benign/malignant tumor classification. Evaluation on the KiTS19 dataset demonstrates encouraging outcomes, with Dice coefficients of 0.974 for kidney segmentation and 0.818 for tumor segmentation, accompanied by a tumor classification accuracy of 94.3%. The hybrid U-Net's advanced feature extraction and spatial context awareness contribute to these outcomes. By streamlining diagnosis, our approach has the potential to significantly improve patient outcomes. The use of the KiTS19 dataset ensures robustness across various clinical cases and imaging modalities. This method represents a valuable advancement in computer-aided kidney tumor detection, promising to enhance patient care.
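Both segmentation results above are reported as Dice coefficients. As a point of reference (a generic definition of the metric, not code from the study), Dice for a pair of binary masks can be computed as:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient: 2*|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: a 4-pixel mask vs. a 6-pixel mask with 4 pixels of overlap
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(round(dice_coefficient(a, b), 3))  # 2*4/(4+6) = 0.8
```

A Dice of 0.974 for kidney segmentation therefore indicates near-complete overlap with the reference mask, while 0.818 for tumors reflects the harder, smaller-structure problem.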
U-Capkidnets++-: A Novel Hybrid Capsule Networks with Optimized Deep Feed Forward Networks for an Effective Classification of Kidney Tumours Using CT Kidney Images
Chronic Kidney Disease (CKD) has become a worldwide health crisis, and concerted efforts are needed to prevent complete organ damage. Considerable research effort has gone into the effective separation and classification of kidney tumors from kidney CT images. Emerging machine learning and deep learning algorithms have paved novel paths for tumor detection, but these methods have proved laborious, and their success rate depends heavily on prior experience. To achieve better classification and segmentation of tumors, this paper proposes a hybrid ensemble of visual capsule networks in a U-Net deep learning architecture with deep feed-forward extreme learning machines. The proposed framework incorporates data preprocessing, powerful data augmentation, and saliency tumor segmentation (STS), followed by a classification phase. Classification is performed by feed-forward extreme learning machines (FFELM) to enhance the effectiveness of the suggested model. Extensive experimentation was conducted to evaluate the efficacy of the recommended structure against other prevailing hybrid deep learning models. The experiments demonstrate that the suggested model outperforms the others, exhibiting a Dice coefficient for kidney tumors as high as 0.96 and an accuracy of 97.5%.
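The FFELM classifier mentioned above follows the standard extreme learning machine recipe: hidden-layer weights are drawn at random and never trained, and only the output weights are fit in closed form by least squares. A minimal sketch of that recipe (generic, not the paper's implementation; the function names and toy data are illustrative):

```python
import numpy as np

def train_elm(X, y_onehot, n_hidden=16, seed=0):
    """Extreme learning machine: random input weights,
    closed-form least-squares fit of the output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                          # fixed random feature map
    beta, *_ = np.linalg.lstsq(H, y_onehot, rcond=None)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy two-cluster problem (stand-in for tumor-feature vectors)
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
labels = np.array([0, 0, 0, 1, 1, 1])
W, b, beta = train_elm(X, np.eye(2)[labels])
print(predict_elm(X, W, b, beta))
```

Because only `beta` is learned, training reduces to a single linear solve, which is the speed advantage ELM-style heads offer over backpropagated classifiers.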
Attention Mechanisms in Medical Image Segmentation: A Survey
Medical image segmentation plays an important role in computer-aided
diagnosis. Attention mechanisms that distinguish important parts from
irrelevant parts have been widely used in medical image segmentation tasks.
This paper systematically reviews the basic principles of attention mechanisms
and their applications in medical image segmentation. First, we review the
basic concepts and formulation of attention mechanisms. Second, we survey over
300 articles related to medical image segmentation and divide them into two
groups based on their attention mechanisms: non-Transformer attention and
Transformer attention. In each group, we analyze the attention mechanisms in
depth from three aspects based on the current literature, i.e., the
principle of the mechanism (what to use), implementation methods (how to use),
and application tasks (where to use). We also thoroughly analyze the
advantages and limitations of their applications to different tasks. Finally,
we summarize the current state of research and shortcomings in the field, and
discuss the potential challenges in the future, including task specificity,
robustness, standard evaluation, etc. We hope that this review can showcase the
overall research context of traditional and Transformer attention methods,
provide a clear reference for subsequent research, and inspire more advanced
attention research, not only in medical image segmentation, but also in other
image analysis scenarios.
(Comment: Submitted to Medical Image Analysis; survey paper, 34 pages, over 300 references.)
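For readers new to the Transformer-attention family this survey covers, the core operation is scaled dot-product attention, softmax(QK^T / sqrt(d)) V: each query re-weights the values by its similarity to the keys. A minimal NumPy sketch (illustrative only, not tied to any surveyed method):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))   # 2 queries, dim 4
K = rng.standard_normal((3, 4))   # 3 keys
V = rng.standard_normal((3, 5))   # 3 values, dim 5
out, attn = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 5)
```

In segmentation networks, non-Transformer variants (e.g. channel or spatial gating) replace this global query-key comparison with cheaper learned masks over feature maps.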
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
(Comment: Revised survey includes an expanded discussion section and a reworked
introductory section on common deep architectures; added missed papers from before Feb 1st 201.)
Two-Stage Hybrid Supervision Framework for Fast, Low-resource, and Accurate Organ and Pan-cancer Segmentation in Abdomen CT
Abdominal organ and tumour segmentation has many important clinical
applications, such as organ quantification, surgical planning, and disease
diagnosis. However, manual assessment is inherently subjective with
considerable inter- and intra-expert variability. In this paper, we propose a
hybrid supervised framework, StMt, that integrates self-training and mean
teacher for the segmentation of abdominal organs and tumors using partially
labeled and unlabeled data. We introduce a two-stage segmentation pipeline and
whole-volume-based input strategy to maximize segmentation accuracy while
meeting the requirements of inference time and GPU memory usage. Experiments on
the validation set of FLARE2023 demonstrate that our method achieves excellent
segmentation performance as well as fast and low-resource model inference. Our
method achieved average DSC scores of 89.79% and 45.55% for the organs and
lesions on the validation set, and the average running time and area under the GPU
memory-time curve are 11.25 s and 9627.82 MB, respectively.
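The mean-teacher component of a framework like StMt relies on a standard idea: the teacher network's weights track an exponential moving average (EMA) of the student's weights, yielding more stable pseudo-labels for the unlabeled data. A generic sketch of that update (`alpha` is the EMA decay; the value below is illustrative, not from the paper):

```python
def ema_update(teacher_params, student_params, alpha=0.99):
    """Mean-teacher step: each teacher weight becomes an exponential
    moving average of the corresponding student weight."""
    return [alpha * t + (1 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

# One step with a single scalar weight: 0.9 * 1.0 + 0.1 * 2.0
print(ema_update([1.0], [2.0], alpha=0.9))
```

After each optimizer step on the student, this update is applied once; no gradients ever flow through the teacher.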