
    DiaMe: IoMT deep predictive model based on threshold aware region growing technique

    Magnetic resonance imaging (MRI) analysis is a very challenging domain of medical imaging, especially in the segmentation process for predicting tumefactions with high accuracy. Although deep learning techniques have achieved remarkable success in the classification and segmentation phases, the area remains rich for investigation due to the variance of tumefaction sizes, locations, and shapes. Moreover, the strong fusion between tumors and their anatomical surroundings causes imprecise detection of tumor boundaries, so using a hybrid segmentation technique strengthens the reliability and generality of the diagnostic model. This paper presents an automated hybrid segmentation approach combined with a convolutional neural network (CNN) model for brain tumor detection and prediction, as one of many functions offered by the previously introduced IoMT medical service “DiaMe”. The developed model aims to improve extraction of the region of interest (ROI), especially given the variation in tumor sizes and locations, and hence improve the overall performance of tumor detection. The MRI brain tumor dataset was obtained from Kaggle, and all needed augmentation, edge detection, contouring, and binarization steps are presented. The results showed 97.32% accuracy for detection, 96.5% sensitivity, and 94.8% specificity.
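The paper does not publish its exact algorithm here, but threshold-aware region growing can be sketched in a few lines of NumPy: starting from a seed pixel, neighbours are added to the region while their intensity stays within a threshold of the seed intensity. The seed point, threshold value, 4-connectivity, and toy image below are illustrative assumptions, not details taken from DiaMe.

```python
import numpy as np

def region_grow(image, seed, threshold):
    """Grow a region from `seed`, adding 4-connected neighbours whose
    intensity differs from the seed intensity by at most `threshold`."""
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    stack = [seed]
    while stack:
        y, x = stack.pop()
        if mask[y, x]:
            continue
        mask[y, x] = True
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(image[ny, nx]) - seed_val) <= threshold):
                stack.append((ny, nx))
    return mask

# Toy example: a bright 3x3 "tumefaction" block in a dark background.
img = np.zeros((7, 7))
img[2:5, 2:5] = 200
roi = region_grow(img, (3, 3), threshold=10)
print(roi.sum())  # 9 pixels in the grown region
```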

    TPU Cloud-Based Generalized U-Net for Eye Fundus Image Segmentation

    Medical images from different clinics are acquired with different instruments and settings. To perform segmentation on these images as a cloud-based service, we need to train with multiple datasets to increase the segmentation's independence from the source. We also require an efficient and fast segmentation network. In this work these two problems, which are essential for many practical medical imaging applications, are studied. As a segmentation network, U-Net has been selected. U-Net is a class of deep neural networks that has been shown to be effective for medical image segmentation, and many different U-Net implementations have been proposed. With the recent development of tensor processing units (TPUs), the execution times of these algorithms can be drastically reduced, which makes them attractive for cloud services. In this paper, we study, using Google's publicly available Colab environment, a generalized, fully configurable Keras U-Net implementation which uses Google TPU processors for training and prediction. As our application problem, we use the segmentation of the optic disc and cup, which can be applied to glaucoma detection. To obtain networks with good performance, independently of the image acquisition source, we combine multiple publicly available datasets (RIM-One V3, DRISHTI and DRIONS). As a result of this study, we have developed a set of functions that allow the implementation of generalized U-Nets adapted to TPU execution and suitable for cloud-based service implementation. Ministerio de Economía y Competitividad TEC2016-77785-
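The defining feature of U-Net is its encoder–decoder shape with skip connections: feature maps are downsampled, processed, upsampled back, and concatenated with the matching encoder output. A real implementation would use learned Keras convolutions on a TPU; the mean pooling and nearest-neighbour upsampling below are NumPy stand-ins meant only to make the data flow concrete.

```python
import numpy as np

def down(x):
    """Encoder step: 2x2 mean pooling halves each spatial dimension."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    """Decoder step: nearest-neighbour upsampling doubles each dimension."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.arange(16.0).reshape(4, 4)   # toy "feature map"
enc = down(x)                       # 2x2 bottleneck representation
dec = up(enc)                       # restored to 4x4
skip = np.stack([x, dec], axis=0)   # channel-wise concat with encoder features
print(skip.shape)  # (2, 4, 4)
```

In a Keras U-Net the same pattern appears as `MaxPooling2D`/`Conv2DTranspose` layers joined by `Concatenate`, repeated at several resolutions.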

    Deep Learning-aided Brain Tumor Detection: An Initial Experience based Cloud Framework

    Lately, uncertainty in diagnosing diseases has increased and spread due to the hugely intertwined and ambiguous symptoms, which overwhelms and hinders the reliability of the diagnostic process. Since tumor detection from MRI scans depends mainly on the specialist's experience, misdetection will result in inaccurate treatment that might cause critical harm. In this paper, a detection service for brain tumors is introduced as an aiding function for both patients and specialists. The paper focuses on automatic MRI brain tumor detection under a cloud-based framework for multiple medical diagnostic services. The proposed CNN-aided deep architecture contains two phases: a feature extraction phase followed by a detection phase. Contour detection and binary segmentation were applied to extract the region of interest and reduce unnecessary information before injecting the data into the model for training. The brain tumor data was obtained from Kaggle datasets; it contains 2062 cases, 1083 tumorous and 979 non-tumorous, after the preprocessing and augmentation phases. The training and validation phases were performed using different image sizes, varying between (16, 16) and (128, 128). The experimental results show 97.3% detection accuracy, 96.9% sensitivity, and 96.1% specificity. Moreover, using small filters with this type of image ensures better and faster performance with deeper learning.
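The preprocessing described above, binary segmentation followed by region-of-interest extraction, can be sketched with NumPy alone: threshold the slice, find the foreground's bounding box, and crop it before feeding the model. The threshold value and synthetic image are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np

def binarize_and_crop(image, threshold):
    mask = image > threshold          # binary segmentation of the slice
    ys, xs = np.nonzero(mask)         # coordinates of foreground pixels
    y0, y1 = ys.min(), ys.max() + 1   # bounding box of the contour
    x0, x1 = xs.min(), xs.max() + 1
    return image[y0:y1, x0:x1]        # cropped region of interest

img = np.zeros((8, 8))
img[3:6, 2:7] = 150                   # synthetic bright region
roi = binarize_and_crop(img, threshold=100)
print(roi.shape)  # (3, 5): unnecessary background discarded
```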

    A Robust Interpretable Deep Learning Classifier for Heart Anomaly Detection Without Segmentation

    Traditionally, abnormal heart sound classification is framed as a three-stage process. The first stage involves segmenting the phonocardiogram to detect fundamental heart sounds, after which features are extracted and classification is performed. Some researchers in the field argue that the segmentation step is an unwanted computational burden, whereas others embrace it as a prior step to feature extraction. When comparing accuracies achieved by studies that have segmented heart sounds before analysis with those that have overlooked that step, the question of whether to segment heart sounds before feature extraction is still open. In this study, we explicitly examine the importance of heart sound segmentation as a prior step for heart sound classification, and then seek to apply the obtained insights to propose a robust classifier for abnormal heart sound detection. Furthermore, recognizing the pressing need for explainable Artificial Intelligence (AI) models in the medical domain, we also unveil hidden representations learned by the classifier using model interpretation techniques. Experimental results demonstrate that segmentation plays an essential role in abnormal heart sound classification. Our new classifier is also shown to be robust, stable and, most importantly, explainable, with an accuracy of almost 100% on the widely used PhysioNet dataset.
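A common baseline for the segmentation stage debated above is envelope-based detection: square the phonocardiogram, smooth it with a moving average, and mark samples above a threshold as candidate fundamental heart sounds. The synthetic signal, window length, and threshold below are illustrative choices, not the study's method.

```python
import numpy as np

def energy_envelope(signal, win=50):
    """Smoothed energy envelope via a moving-average filter."""
    kernel = np.ones(win) / win
    return np.convolve(signal ** 2, kernel, mode="same")

rng = np.random.default_rng(0)
t = np.arange(2000)
pcg = 0.02 * rng.standard_normal(2000)      # background noise
for onset in (200, 700, 1200, 1700):        # four synthetic heart-sound bursts
    pcg[onset:onset + 60] += np.sin(2 * np.pi * 0.05 * t[:60])

env = energy_envelope(pcg)
segments = env > 0.1                        # threshold the envelope
rising = int(np.diff(segments.astype(int)).clip(min=0).sum())
print(rising)  # 4 detected bursts
```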

    Federated Learning for Medical Applications: A Taxonomy, Current Trends, Challenges, and Future Research Directions

    With the advent of the IoT, AI, ML, and DL algorithms, the landscape of data-driven medical applications has emerged as a promising avenue for designing robust and scalable diagnostic and prognostic models from medical data. This has gained a lot of attention from both academia and industry, leading to significant improvements in healthcare quality. However, the adoption of AI-driven medical applications still faces tough challenges, including meeting security, privacy, and quality of service (QoS) standards. Recent developments in federated learning (FL) have made it possible to train complex machine learning models in a distributed manner, and it has become an active research domain, particularly for processing medical data at the edge of the network in a decentralized way to preserve privacy and address security concerns. To this end, in this paper, we explore the present and future of FL technology in medical applications where data sharing is a significant challenge. We delve into the current research trends and their outcomes, unravelling the complexities of designing reliable and scalable FL models. Our paper outlines the fundamental statistical issues in FL, tackles device-related problems, addresses security challenges, and navigates the complexity of privacy concerns, all while highlighting its transformative potential in the medical field. Our study primarily focuses on medical applications of FL, particularly in the context of global cancer diagnosis. We highlight the potential of FL to enable computer-aided diagnosis tools that address this challenge with greater effectiveness than traditional data-driven methods. We hope that this comprehensive review will serve as a checkpoint for the field, summarizing the current state-of-the-art and identifying open problems and future research directions. Comment: Accepted at IEEE Internet of Things Journal
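The core aggregation rule behind much of the FL work surveyed above is federated averaging (FedAvg): each client takes local gradient steps on its private data, and the server averages the resulting weights, so raw data never leaves the client. The linear model, learning rate, and noise-free synthetic data below are assumptions for a minimal sketch.

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=20):
    """Local SGD on one client's private data; data never leaves here."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                          # three hospitals, say
    X = rng.standard_normal((40, 2))
    clients.append((X, X @ true_w))         # private local datasets

w = np.zeros(2)
for _ in range(10):                         # communication rounds
    # Server averages the locally updated weights (FedAvg).
    w = np.mean([local_step(w, X, y) for X, y in clients], axis=0)

print(np.round(w, 2))                       # converges toward true_w
```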

    A Review of Physical Human Activity Recognition Chain Using Sensors

    In the era of the Internet of Medical Things (IoMT), healthcare monitoring has gained a vital role. Moreover, improving lifestyles, encouraging healthy behaviours, and reducing chronic diseases are urgently required. However, tracking and monitoring critical cases and conditions of elderly people and patients is a great challenge, and healthcare services for those people are crucial in order to achieve high safety. Physical human activity recognition using wearable devices is used to monitor and recognize the activities of the elderly and patients. The main aim of this review study is to highlight the human activity recognition chain, which includes sensing technologies, preprocessing and segmentation, feature extraction methods, and classification techniques. Challenges and future trends are also highlighted.
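The chain summarised above can be made concrete with a small sketch: segment a raw accelerometer stream into fixed-size overlapping windows, then extract simple statistical features (mean, standard deviation) per window as input for a downstream classifier. Window size, overlap, and the two synthetic activity regimes are assumed parameters for illustration.

```python
import numpy as np

def sliding_windows(signal, size, step):
    """Segmentation: split the stream into overlapping windows."""
    starts = range(0, len(signal) - size + 1, step)
    return np.stack([signal[s:s + size] for s in starts])

def extract_features(windows):
    """Feature extraction: per-window mean and standard deviation."""
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1)])

rng = np.random.default_rng(2)
accel = np.concatenate([rng.normal(0.0, 0.1, 200),   # "resting" segment
                        rng.normal(1.0, 0.5, 200)])  # "walking" segment

windows = sliding_windows(accel, size=50, step=25)
features = extract_features(windows)
print(windows.shape, features.shape)  # (15, 50) (15, 2)
```

The resulting feature matrix is what a classifier (SVM, random forest, CNN, ...) would consume in the final stage of the chain.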

    Secure Collaborative Augmented Reality Framework for Biomedical Informatics

    Augmented reality is currently of great interest in biomedical health informatics. At the same time, several challenges have appeared, in particular with the rapid progress of smart sensor technologies and medical artificial intelligence, creating new needs in biomedical health informatics. Collaborative learning and privacy are among the challenges of augmented reality technology in this field. This paper introduces a novel secure collaborative augmented reality framework for applications based on biomedical health informatics. Distributed deep learning is first performed across a multi-agent system platform, and a privacy strategy is developed to ensure better communication among the different intelligent agents in the system. In this research work, a system of multiple agents is created to simulate the collective behaviours of the smart components of biomedical health informatics. Augmented reality is also incorporated for better visualization of the resulting medical patterns. A novel privacy strategy based on blockchain is investigated for ensuring the confidentiality of the learning process. Experiments are conducted on a real use case of the biomedical segmentation process. Our extensive experimental analysis reveals the strength of the proposed framework when directly compared to state-of-the-art biomedical health informatics solutions.
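The blockchain-style integrity idea behind such a privacy strategy can be sketched as a hash-chained ledger: each record of the learning process stores the hash of the previous record, so any tampering breaks the chain. This is a toy ledger using only Python's hashlib, not the paper's actual protocol; the agent records are hypothetical.

```python
import hashlib
import json

def _digest(payload, prev_hash):
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def add_block(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"payload": payload, "prev": prev_hash,
                  "hash": _digest(payload, prev_hash)})

def verify(chain):
    """Recompute every hash; any edit to an earlier block is detected."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != prev or block["hash"] != _digest(block["payload"], prev):
            return False
    return True

chain = []
add_block(chain, {"agent": 1, "update": "weights-v1"})
add_block(chain, {"agent": 2, "update": "weights-v2"})
print(verify(chain))                        # True: chain intact
chain[0]["payload"]["update"] = "tampered"
print(verify(chain))                        # False: tampering detected
```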

    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis and the typical classification methods used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient in terms of performance. Moreover, considering all types of audience, the basic evaluation criteria are also discussed. The criteria include the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Previously used methods are considered inefficient, calling for better and smarter methods for cancer diagnosis. Artificial intelligence is gaining attention as a way to build better diagnostic tools; in particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how such machine learning operates on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code, to allow interested readers to experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles successfully applied deep learning models for different types of cancer. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers opting to implement deep learning and artificial neural networks for cancer diagnosis with from-scratch knowledge of the state-of-the-art achievements.
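The evaluation criteria listed in this review all derive from a binary confusion matrix; a small sketch makes the relationships explicit. The counts below are invented for illustration; note that for binary masks the Dice coefficient coincides with the F1 score.

```python
def metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)            # recall / true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    dice = 2 * tp / (2 * tp + fp + fn)      # equals F1 for binary labels
    jaccard = tp / (tp + fp + fn)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision,
            "f1": f1, "dice": dice, "jaccard": jaccard}

m = metrics(tp=90, fp=5, fn=10, tn=95)      # hypothetical test-set counts
print(round(m["accuracy"], 3), round(m["f1"], 3))  # 0.925 0.923
```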