
    A Novel Skin Disease Detection Technique Using Machine Learning

    Skin diseases present significant healthcare challenges worldwide, requiring accurate and timely detection for effective treatment. Machine learning has emerged as a promising tool for automating the detection and classification of skin diseases. This study presents a novel approach that uses the decision tree method for skin disease detection. For automated detection, we use a comprehensive dataset containing images of various skin diseases, including melanoma, psoriasis, dermatitis, and fungal infections. Dermatologists expertly label the dataset, guaranteeing reliable ground truth for accurate classification. Preprocessing techniques such as resizing, normalization, and quality enhancement are applied to prepare the images for the decision tree algorithm. We then extract relevant features from the preprocessed images, encompassing texture, colour, and shape descriptors, to capture disease-specific patterns effectively. The decision tree model is trained using these extracted features and the labelled dataset. Exploiting the decision tree's ability to learn hierarchical structures and decision rules, our approach achieves a high degree of accuracy in classifying skin diseases. Extensive experiments and evaluations on a dedicated validation set demonstrate the effectiveness of our decision tree-based method, achieving a classification accuracy of 96%. Our proposed method provides a reliable and automated solution for skin disease detection, with potential applications in clinical settings. By enabling early and accurate diagnoses, our approach has the capacity to improve patient outcomes, reduce healthcare costs, and alleviate the burden on dermatologists.
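    As a rough illustration of the feature-extraction step described in this abstract (not the authors' code; all names here are hypothetical), simple colour and texture descriptors could be computed along these lines:

```python
def extract_features(image):
    """Return per-channel colour means and a grey-level variance (a crude
    texture descriptor) for an image given as a nested list of (R, G, B)
    pixel tuples. Purely illustrative; real pipelines use richer descriptors."""
    pixels = [p for row in image for p in row]
    n = len(pixels)
    # Colour descriptors: mean of each RGB channel.
    colour_means = [sum(p[c] for p in pixels) / n for c in range(3)]
    # Texture descriptor: variance of the grey-level intensities.
    grey = [sum(p) / 3.0 for p in pixels]
    mu = sum(grey) / n
    texture_variance = sum((g - mu) ** 2 for g in grey) / n
    return colour_means + [texture_variance]

# Toy 2x2 "lesion" image.
img = [[(200, 100, 100), (210, 110, 90)],
       [(190, 95, 105), (205, 105, 95)]]
features = extract_features(img)
```

    A vector like this, computed for every labelled image, is the kind of input a decision tree classifier would then be trained on.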

    Performance Analysis of UNet and Variants for Medical Image Segmentation

    Medical imaging plays a crucial role in modern healthcare by providing non-invasive visualisation of internal structures and abnormalities, enabling early disease detection, accurate diagnosis, and treatment planning. This study aims to explore the application of deep learning models, particularly focusing on the UNet architecture and its variants, in medical image segmentation. We seek to evaluate the performance of these models across various challenging medical image segmentation tasks, addressing issues such as image normalization, resizing, architecture choices, loss function design, and hyperparameter tuning. The findings reveal that the standard UNet, when extended with a deep network layer, is a proficient medical image segmentation model, while the Res-UNet and Attention Res-UNet architectures demonstrate smoother convergence and superior performance, particularly when handling fine image details. The study also addresses the challenge of high class imbalance through careful preprocessing and loss function definitions. We anticipate that the results of this study will provide useful insights for researchers seeking to apply these models to new medical imaging problems and offer guidance and best practices for their implementation.
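    One common loss-function choice for countering high class imbalance in segmentation, of the kind this abstract alludes to, is a Dice-based loss. A minimal sketch over flattened binary masks (illustrative, not the study's implementation):

```python
def dice_coefficient(pred, target, eps=1e-6):
    """Soft Dice overlap between two flattened binary/probability masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    return (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def dice_loss(pred, target):
    # 1 - Dice rewards overlap with the (possibly rare) foreground class,
    # so the abundant background pixels do not dominate the objective.
    return 1.0 - dice_coefficient(pred, target)

loss = dice_loss([1, 1, 0, 0], [1, 0, 0, 0])  # partial overlap
```

    In practice this is computed per batch on the network's sigmoid/softmax outputs, often combined with cross-entropy.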

    Impact of eye fundus image preprocessing on key objects segmentation for glaucoma identification

    Pathological changes in the eye fundus image, especially around the Optic Disc (OD) and Optic Cup (OC), may indicate eye diseases such as glaucoma. Therefore, accurate OD and OC segmentation is essential. The variety in images caused by different eye fundus cameras complicates OD and OC segmentation for existing deep learning (DL) networks. In most research, experiments were conducted on individual datasets only and the results were obtained for that specific data sample. Our long-term goal is to develop a DL method that segments OD and OC in any kind of eye fundus image, but the application of the mixed training data strategy is still at an early stage and image preprocessing has not been discussed. Therefore, the aim of this paper is to evaluate the impact of image preprocessing on OD and OC segmentation in different eye fundus images aligned by size. We adopted a mixed training data strategy by combining images of the DRISHTI-GS, REFUGE, and RIM-ONE datasets, and applied image resizing with various interpolation methods, namely bilinear, nearest neighbor, and bicubic, for image resolution alignment. The impact of image preprocessing on OD and OC segmentation was evaluated using three convolutional neural networks: Attention U-Net, Residual Attention U-Net (RAUNET), and U-Net++. The experimental results show that the most accurate segmentation is achieved by resizing images to 512 x 512 px and applying bicubic interpolation. The highest Dice scores of 0.979 for OD and 0.877 for OC are achieved on the DRISHTI-GS test dataset, 0.973 for OD and 0.874 for OC on the REFUGE test dataset, and 0.977 for OD and 0.855 for OC on the RIM-ONE test dataset. ANOVA and Levene's tests, with statistically significant evidence at α = 0.05, show that the chosen size in image resizing has an impact on the OD and OC segmentation results, while the interpolation method influences OC segmentation only.
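    For illustration only (the study used library implementations), nearest-neighbour resizing, the simplest of the three interpolation methods compared, can be sketched as:

```python
def resize_nearest(image, new_h, new_w):
    """Nearest-neighbour resize of a 2-D list of pixel values: each output
    pixel copies the closest source pixel, with no blending."""
    h, w = len(image), len(image[0])
    return [[image[y * h // new_h][x * w // new_w] for x in range(new_w)]
            for y in range(new_h)]

small = [[0, 1],
         [2, 3]]
big = resize_nearest(small, 4, 4)  # upscale 2x2 -> 4x4
```

    Bilinear and bicubic interpolation instead blend neighbouring source pixels (a 2x2 and a 4x4 neighbourhood, respectively), which is consistent with bicubic preserving fine OD/OC boundary detail better in the reported results.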

    Use of Hyperspectral Images (HSI) and Convolutional Neural Network (CNN) To Identify Normal, Precancerous and Cancerous Tissues

    Cancer detection has long been a major topic of research, as early detection of cancer can help increase the survival rate of patients by enabling timely, better treatment. A robust system is required to detect early-stage cancer, as it is difficult to identify early-stage cancer through the normal clinical process. Computer vision techniques provide a new way to address the challenges of medical image analysis. This thesis presents medical image analysis using a combination of Convolutional Neural Networks (CNNs) and Hyperspectral Images of cancer patients' tissues. CNNs were chosen because they have performed very well in image processing and have outperformed other traditional techniques. An attempt is made to distinguish between normal tissues, precancerous tissues, and Oesophageal Adenocarcinoma (OAC) tissues. The dataset used here poses many challenges, such as a small number of instances and, most importantly, imbalanced data, meaning some classes have very few instances in comparison to others. This thesis focuses on improving the F1 score of the CNN classifier, and performance is measured after fine-tuning the baseline model. The experimental results show that fine-tuning the CNN helps improve the F1 score somewhat, although it does not achieve a great result due to the limitation of imbalanced data. This work is a contribution towards the detection of early-stage cancer through images, which clinical processes alone are unable to detect.
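    The F1 score optimized in this thesis balances precision and recall, which is why it is preferred over plain accuracy on imbalanced data. A minimal sketch for a single positive class (illustrative, not the thesis code):

```python
def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for one positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```

    Unlike accuracy, a classifier that ignores a rare tissue class entirely scores 0 on that class's F1, which is what makes the metric informative here.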

    A framework for dynamically training and adapting deep reinforcement learning models to different, low-compute, and continuously changing radiology deployment environments

    While Deep Reinforcement Learning has been widely researched in medical imaging, the training and deployment of these models usually require powerful GPUs. Since imaging environments evolve rapidly and can be generated by edge devices, the algorithm is required to continually learn and adapt to changing environments, and adjust to low-compute devices. To this end, we developed three image coreset algorithms to compress and denoise medical images for selective experience replay-based lifelong reinforcement learning. We implemented a neighborhood averaging coreset, a neighborhood sensitivity-based sampling coreset, and a maximum entropy coreset on full-body DIXON water and DIXON fat MRI images. All three coresets produced 27x compression with excellent performance in localizing five anatomical landmarks: left knee, right trochanter, left kidney, spleen, and lung, across both imaging environments. The maximum entropy coreset obtained the best performance, with an average distance error of 11.97 ± 12.02, compared to the conventional lifelong learning framework's 19.24 ± 50.77.
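    The maximum entropy coreset idea, keeping the most information-dense samples, can be sketched as follows. This is a simplified illustration over lists of intensities, not the paper's implementation:

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Entropy (bits) of the empirical intensity distribution of a patch."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def max_entropy_coreset(patches, k):
    """Keep the k patches whose intensity histograms carry the most entropy,
    i.e. the least redundant samples for replay."""
    return sorted(patches, key=shannon_entropy, reverse=True)[:k]

patches = [[0, 0, 0, 0],   # uniform patch: entropy 0 bits
           [0, 1, 2, 3],   # all distinct: entropy 2 bits
           [0, 0, 1, 1]]   # two values: entropy 1 bit
coreset = max_entropy_coreset(patches, 1)
```

    Selecting by entropy favours varied, informative regions over flat background, which is one plausible reading of why this coreset compresses well while preserving landmark localization.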

    Semi-Supervised Approach Based Brain Tumor Detection with Noise Removal

    Brain tumor detection and segmentation is among the most challenging and time-consuming tasks in the medical field. In this paper, a Magnetic Resonance Imaging (MRI) sample image is considered; MRI is very useful for detecting tumor growth and is mainly used by radiologists to visualize the internal structure of the human body without surgery. Generally, tumors are classified into two types: malignant and benign. There are many variations in tumor tissue characteristics, such as shape, size, gray-level intensities, and location. In this paper, we propose a new cooperative scheme that applies a semi-supervised fuzzy clustering algorithm. Specifically, Otsu's thresholding method is used to remove the background area from a magnetic resonance image. Finally, the Semi-supervised Entropy Regularized Fuzzy Clustering algorithm (SER-FCM) is applied to improve the quality level. Intensity, shape deformation, symmetry, and texture features were extracted from each image. The usefulness and significance of this research are fully demonstrated within the extent of a real-life application.
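    The Otsu background-removal step mentioned above picks the grey-level threshold that maximizes between-class variance. A minimal sketch (illustrative, not the paper's code):

```python
def otsu_threshold(grey):
    """Return the threshold t that maximizes between-class variance
    w_bg * w_fg * (mean_fg - mean_bg)^2 over a list of grey levels."""
    n = len(grey)
    best_t, best_var = min(grey), -1.0
    for t in range(min(grey), max(grey) + 1):
        bg = [g for g in grey if g <= t]
        fg = [g for g in grey if g > t]
        if not bg or not fg:
            continue
        w_bg, w_fg = len(bg) / n, len(fg) / n
        var = w_bg * w_fg * (sum(fg) / len(fg) - sum(bg) / len(bg)) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Dark background pixels vs. bright tissue pixels.
levels = [10, 12, 11, 200, 205, 198]
t = otsu_threshold(levels)
```

    Pixels at or below the returned threshold are treated as background and masked out before the fuzzy clustering stage.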