66 research outputs found

    An Integrated Design for Classification and Localization of Diabetic Foot Ulcer based on CNN and YOLOv2-DFU Models

    Diabetes is a chronic disease that, if not treated in time, may lead to many complications, including diabetic foot ulcers (DFU). DFU is a dangerous condition that requires regular treatment, as it may otherwise lead to foot amputation. DFU is classified into two categories: infection (bacterial) and ischaemia (inadequate blood supply). Detecting DFU at an initial phase is a difficult task. Therefore, this work proposes a 16-layer convolutional neural network (CNN) for classification, comprising 01 input, 03 convolutional, 03 batch-normalization, 01 average-pooling, 01 skip-convolution, 03 ReLU, 01 addition (element-wise addition of two inputs), fully connected, softmax, and classification output layers, together with a YOLOv2-DFU model for localization of infection/ischaemia. In the classification phase, deep features are extracted and supplied to a number of classifiers, such as KNN, DT, Ensemble, softmax, and NB, to analyze the classification results and select the best classifiers. After experimentation, we observed that DT and softmax achieved consistent results for the detection of ischaemia/infection across all performance metrics, such as sensitivity, specificity, and accuracy, compared with the other classifiers. In addition, after classification, the Gradient-weighted Class Activation Mapping (Grad-CAM) model is used to visualize the high-level features of the infected region for better understanding. The classified images are passed to the YOLOv2-DFU network for infected-region localization. ShuffleNet is utilized as the backbone of the YOLOv2 model, in which bottleneck features are extracted through the ReLU node-199 layer and passed to the YOLOv2 model. The proposed method is validated on the newly developed DFU-Part (B) dataset, and the results are compared with the latest published work using the same dataset
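The sensitivity, specificity, and accuracy metrics used above to compare the classifiers can be computed directly from a binary confusion matrix. A minimal sketch in Python; the counts below are illustrative, not results from the paper:

```python
def binary_metrics(tp, fp, tn, fn):
    """Compute the three metrics reported for each classifier."""
    sensitivity = tp / (tp + fn)            # true-positive rate (recall)
    specificity = tn / (tn + fp)            # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts for an ischaemia/infection decision
sens, spec, acc = binary_metrics(tp=45, fp=5, tn=40, fn=10)
print(round(sens, 2), round(spec, 2), round(acc, 2))  # 0.82 0.89 0.85
```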

    Recognition of different types of leukocytes using YOLoV2 and optimized bag-of-features

    White blood cells (WBCs) protect the human body against different types of infections, including fungal, parasitic, viral, and bacterial. Detecting abnormal regions in WBCs is a difficult task. Therefore, a method is proposed for the localization of WBCs based on YOLOv2-Nucleus-Cytoplasm, which uses DarkNet-19 as the base network of the YOLOv2 model. In this model, features are extracted from the LeakyReLU-18 layer of DarkNet-19 and supplied as input to the YOLOv2 model. The YOLOv2-Nucleus-Cytoplasm model localizes and classifies the WBCs with maximum-score labels. It also classifies the WBCs into blast and non-blast cells. After localization, bag-of-features are extracted and optimized using particle swarm optimization (PSO). The improved feature vector is fed to classifiers, i.e., optimized naïve Bayes (O-NB) and optimized discriminant analysis (O-DA), for WBC classification. The experiments are performed on the LISC, ALL-IDB1, and ALL-IDB2 datasets
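The particle swarm optimization step used to refine the feature vector follows a standard update rule: each particle's velocity mixes inertia with attraction to its personal best and the swarm's global best. A toy sketch under illustrative hyperparameters and a stand-in objective (a simple quadratic), not the paper's actual fitness function:

```python
import random

def pso(objective, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimizer (minimization)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: sphere function, minimum 0 at the origin
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

In the paper's setting, the objective would instead score a candidate weighting of the bag-of-features vector by classifier performance.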

    Deep Semantic Segmentation and Multi-Class Skin Lesion Classification Based on Convolutional Neural Network

    Skin cancer develops due to abnormal cell growth; these cells grow rapidly and destroy normal skin cells. However, it is curable at an initial stage, which reduces the patient mortality rate. In this article, a method is proposed for the localization, segmentation, and classification of skin lesions at an early stage. The proposed method contains three phases. In Phase I, different types of skin lesion are localized using a tiny-YOLOv2 model in which a SqueezeNet model, in Open Neural Network Exchange (ONNX) format, is used as the backbone. Features are extracted from the depthconcat7 layer of SqueezeNet and passed as input to tiny-YOLOv2. The proposed model accurately localizes the affected part of the skin. In Phase II, a 13-layer 3D semantic segmentation model (01 input, 04 convolutional, 03 batch-normalization, 03 ReLU, softmax, and pixel-classification layers) is used for segmentation. In the proposed segmentation model, the pixel-classification layer is used to compute the overlap region between the segmented and ground-truth images. Later, in Phase III, deep features are extracted using the ResNet-18 model, and optimized features are selected using the ant colony optimization (ACO) method. The optimized feature vector is passed to classifiers such as optimized (O)-SVM and O-NB. The proposed method is evaluated on the challenging MICCAI ISIC 2017, 2018, and 2019 datasets, and it accurately localizes, segments, and classifies skin lesions at an early stage. Qatar University [IRCC-2020-009]
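The overlap between a segmented mask and its ground truth, as computed at the pixel-classification stage, is typically quantified with intersection-over-union (IoU) or the Dice coefficient. A minimal sketch on flattened binary masks; the masks below are illustrative:

```python
def overlap_scores(pred, truth):
    """IoU and Dice for two equal-length binary masks (flattened images)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    iou = inter / union if union else 1.0
    denom = sum(pred) + sum(truth)
    dice = 2 * inter / denom if denom else 1.0
    return iou, dice

pred = [1, 1, 1, 0, 0, 0]   # predicted lesion pixels
truth = [0, 1, 1, 1, 0, 0]  # ground-truth lesion pixels
iou, dice = overlap_scores(pred, truth)
print(iou, dice)  # 0.5 0.6666666666666666
```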

    Vascular Implications of COVID-19: Role of Radiological Imaging, Artificial Intelligence, and Tissue Characterization: A Special Report

    The SARS-CoV-2 virus has caused a pandemic, infecting nearly 80 million people worldwide, with mortality exceeding six million. The average survival span is just 14 days from the time the symptoms become aggressive. The present study delineates the deep-driven vascular damage in the pulmonary, renal, coronary, and carotid vessels due to SARS-CoV-2. This special report addresses an important gap in the literature in understanding (i) the pathophysiology of vascular damage and the role of medical imaging in the visualization of the damage caused by SARS-CoV-2, and (ii) the severity of COVID-19 as assessed using artificial intelligence (AI)-based tissue characterization (TC). PRISMA was used to select 296 studies for AI-based TC. Radiological imaging techniques such as magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound were selected for imaging of the vasculature infected by COVID-19. Four kinds of hypotheses are presented for showing the vascular damage in radiological images due to COVID-19. Three kinds of AI models, namely machine learning, deep learning, and transfer learning, are used for TC. Further, the study presents recommendations for improving AI-based architectures for vascular studies. We conclude that the process of vascular damage due to COVID-19 has similarities across vessel types, even though it results in multi-organ dysfunction. Although the mortality rate is ~2% of those infected, the long-term effects of COVID-19 need monitoring to avoid deaths. AI is penetrating the health care industry at warp speed, and we expect it to play an emerging role in patient care, reducing mortality and morbidity rates

    Machine Learning for Biomedical Application

    Biomedicine is a multidisciplinary branch of medical science that consists of many scientific disciplines, e.g., biology, biotechnology, bioinformatics, and genetics; moreover, it covers various medical specialties. In recent years, this field of science has developed rapidly, and large amounts of data have been generated, due among other reasons to the processing, analysis, and recognition of a wide range of biomedical signals and images obtained through increasingly advanced medical imaging devices. The analysis of these data requires advanced IT methods, including those based on artificial intelligence, and in particular machine learning. This article is a summary of the Special Issue “Machine Learning for Biomedical Application”, briefly outlining selected applications of machine learning in the processing, analysis, and recognition of biomedical data, mostly regarding biosignals and medical images

    Automatic tissue characterization from optical coherence tomography images for smart laser osteotomy

    Experiments suggest that, in the near future, lasers may completely replace mechanical tools in bone surgery, or osteotomy. Laser osteotomy overcomes the shortcomings of mechanical tools, with less damage to surrounding tissue, a lower risk of viral and bacterial infections, and faster wound healing. Furthermore, the current development of artificial intelligence has pushed research toward smart laser osteotomy. This thesis project aimed to advance smart laser osteotomy by introducing an image-based automatic tissue-characterization, or feedback, system. The optical coherence tomography (OCT) imaging system was selected because it can provide a high-resolution subsurface image slice of the laser ablation site. Experiments were conducted and published to show the feasibility of the feedback system. In the first part of this thesis project, a deep-learning-based OCT image-denoising method was demonstrated; it yielded a faster processing time than classical denoising methods while maintaining image quality comparable to a frame-averaged image. In the next part, it was necessary to find the best deep-learning model for tissue-type identification in the absence of laser ablation. The results showed that the DenseNet model is sufficient for detecting tissue types from an OCT image patch: it differentiated five tissue types (bone, bone marrow, fat, muscle, and skin) with an accuracy of 94.85%. The last part of this thesis project presents the results of applying deep-learning-based OCT-guided laser osteotomy in real time. The first trial experiment took place at the time of the writing of this thesis. The feedback system was evaluated on its ability to stop bone cutting when bone marrow was detected. The results show that the deep-learning-based setup successfully stopped the ablation laser when bone marrow was detected, with an average maximum depth of bone-marrow perforation of only 216 μm.
This thesis project provides the basic framework for OCT-based smart laser osteotomy. It also shows that deep learning is a robust approach to achieving real-time OCT-guided laser osteotomy. Nevertheless, future research directions, such as combining depth control with the tissue-classification setup and optimizing the ablation strategy, would make the use of OCT in laser osteotomy even more feasible
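The feedback logic described (stop the ablation laser as soon as the classifier reports bone marrow) reduces to a simple control loop. The sketch below is illustrative, not the thesis implementation: the class labels and the stub classifier stand in for the DenseNet prediction on each OCT patch:

```python
TISSUE_CLASSES = ("bone", "bone_marrow", "fat", "muscle", "skin")  # illustrative labels

def ablate_until_marrow(classify, max_pulses=1000):
    """Fire laser pulses, classifying the OCT patch after each one,
    and stop as soon as bone marrow is detected."""
    for pulse in range(1, max_pulses + 1):
        tissue = classify(pulse)        # stand-in for the per-patch prediction
        if tissue == "bone_marrow":
            return pulse                # laser stopped at this pulse
    return None                         # marrow never reached

# Stub classifier: marrow appears after 41 pulses through cortical bone
stopped_at = ablate_until_marrow(lambda p: "bone" if p < 42 else "bone_marrow")
print(stopped_at)  # 42
```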

    Deep Learning Models For Biomedical Data Analysis

    The field of biomedical data analysis is a vibrant area of research dedicated to extracting valuable insights from a wide range of biomedical data sources, including biomedical images and genomics data. The emergence of deep learning, an artificial-intelligence approach, presents significant prospects for enhancing biomedical data analysis and knowledge discovery. This dissertation explored innovative deep-learning methods for biomedical image processing and gene-data analysis. During the COVID-19 pandemic, biomedical imaging data, including CT scans and chest x-rays, played a pivotal role in identifying COVID-19 cases by categorizing patient chest x-ray outcomes as COVID-19-positive or negative. While supervised deep-learning methods have effectively recognized COVID-19 patterns in chest x-ray datasets, the availability of annotated training data remains limited. To address this challenge, the thesis introduced a semi-supervised deep-learning model named ssResNet, built upon the Residual Neural Network (ResNet) architecture. The model combines supervised and unsupervised paths and incorporates a weighted supervised loss function to manage data imbalance. Strategies to diminish prediction uncertainty in deep-learning models for critical applications such as medical image processing are also explored, through an ensemble deep-learning model that integrates bagging and model-calibration techniques. This ensemble model not only boosts biomedical image-segmentation accuracy but also reduces prediction uncertainty, as validated on a comprehensive chest x-ray image-segmentation dataset. Furthermore, the thesis introduced an ensemble model integrating Proformer and ensemble-learning methodologies: it constructs multiple independent Proformers for predicting gene expression, whose predictions are combined through weighted averaging to generate final predictions.
Experimental outcomes underscore the efficacy of this ensemble model in enhancing prediction performance across various metrics. In conclusion, this dissertation advances biomedical data analysis by harnessing the potential of deep-learning techniques. It devises innovative approaches for processing biomedical images and gene data, paving the way for further progress in biomedical data analytics and its applications within clinical contexts.
Index Terms: biomedical data analysis, COVID-19, deep learning, ensemble learning, gene data analytics, medical image segmentation, prediction uncertainty, Proformer, Residual Neural Network (ResNet), semi-supervised learning
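The ensemble strategy described (several independently trained predictors whose outputs are combined by weighted averaging) reduces to a short routine. The weights and toy per-model predictions below are illustrative, not the dissertation's values:

```python
def weighted_ensemble(predictions, weights):
    """Combine per-model prediction vectors by a normalized weighted average."""
    total = sum(weights)
    n = len(predictions[0])
    return [sum(w * p[i] for p, w in zip(predictions, weights)) / total
            for i in range(n)]

# Three toy models predicting expression levels for two genes
preds = [[0.8, 0.2], [0.6, 0.4], [0.7, 0.3]]
weights = [2.0, 1.0, 1.0]   # e.g. derived from validation performance
print(weighted_ensemble(preds, weights))
```

Giving the first model twice the weight pulls the combined estimate toward its predictions: the result is (2·0.8 + 0.6 + 0.7)/4 = 0.725 for the first gene and 0.275 for the second.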

    Digital Twin of Cardiovascular Systems

    Patient-specific modelling using numerical methods is widely used in understanding diseases and disorders. It produces medical analysis based on the current state of a patient’s health. Concurrently, as a parallel development, emerging data-driven artificial intelligence (AI) has accelerated patient care. It provides medical analysis using algorithms that rely upon knowledge from larger human population data. AI systems are also known to have the capacity to provide a prognosis with overall accuracy levels that are better than those provided by trained professionals. When these two independent and robust methods are combined, the concept of the human digital twin arises. A digital twin is a digital replica of any given system or process. Digital twins combine knowledge from general data with subject-oriented knowledge for past, current, and future analyses and predictions. Assumptions made during numerical modelling are compensated for using knowledge from general data. For humans, they can provide an accurate current diagnosis as well as possible future outcomes, allowing precautions to be taken to avoid further degradation of the patient’s health. In this thesis, we explore primary forms of human digital twins for the cardiovascular system that are capable of replicating various aspects of the cardiovascular system using different types of data. Since different types of medical data are available, such as images, videos, and waveforms, and the kinds of analysis required may be offline or online in nature, digital-twin systems should be uniquely designed to capture each type of data for different kinds of analysis. Therefore, passive, active, and semi-active digital twins, as the three primary forms of digital twins for different kinds of applications, are proposed in this thesis. By virtue of these applications and the kind of data involved in each of them, the performance and importance of human digital twins for the cardiovascular system are demonstrated.
The idea behind these twins is to allow the digital-twin concept to be applied for online analysis, offline analysis, or a combination of the two in healthcare. In active digital twins, active data, such as signals, are analysed online in real time; in semi-active digital twins, some of the components being analysed are active but the analysis itself is carried out offline; and finally, passive digital twins perform offline analysis of data that involves no active component. For the passive digital twin, an automatic workflow to calculate Fractional Flow Reserve (FFR) is proposed and tested on a cohort of 25 patients with acceptable results. For the semi-active digital twin, detection of carotid stenosis and its severity using face videos is proposed and tested, with satisfactory results from one carotid stenosis patient and a small cohort of healthy adults. Finally, for the active digital twin, an enabling model is proposed using inverse analysis, with application to the detection of Abdominal Aortic Aneurysm (AAA) and its severity with the help of a virtual patient database. This enabling model detected artificially generated AAA with an accuracy as high as 99.91% and classified its severity with an acceptable accuracy of 97.79%. Further, for the active digital twin, a truly active model is proposed for continuous cardiovascular state monitoring. It is tested on a small cohort of five patients from a publicly available database for three 10-minute periods, wherein the model satisfactorily replicated and forecasted the patients’ cardiovascular state. In addition to the three forms of human digital twins for the cardiovascular system, additional work on patient prioritisation of pneumonia patients for ITU care using a data-driven digital twin is also proposed. The severity indices calculated by these models are assessed using the standard benchmark of Area Under the Receiver Operating Characteristic Curve (AUROC). The results indicate that, using these models, ITU care and mechanical ventilation can be prioritised correctly to an AUROC value as high as 0.89
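AUROC, the benchmark used to assess the severity indices, can be computed directly from scores and binary labels via the rank (Mann–Whitney) formulation: it is the probability that a randomly chosen positive case outscores a randomly chosen negative one. A minimal sketch with toy data:

```python
def auroc(scores, labels):
    """Area under the ROC curve: fraction of positive/negative pairs
    where the positive scores higher (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy severity indices and ITU outcomes (1 = needed ITU care)
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(auroc(scores, labels))
```

Here two positives outrank every negative and the third outranks two of three, giving 8 of 9 pairs correct, i.e. an AUROC of about 0.89, coincidentally the headline value above.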

    Advanced Signal Processing in Wearable Sensors for Health Monitoring

    Smart wearable devices on a miniature scale are becoming increasingly widely available, typically in the form of smart watches and other connected devices. Consequently, devices to assist in measurements such as electroencephalography (EEG), electrocardiography (ECG), electromyography (EMG), blood pressure (BP), photoplethysmography (PPG), heart rhythm, respiration rate, apnoea, and motion detection are becoming more available and play a significant role in healthcare monitoring. The industry is placing great emphasis on making these devices and technologies available on smart devices such as phones and watches. Such measurements are clinically and scientifically useful for real-time monitoring, long-term care, and diagnostic and therapeutic techniques. However, a persistent issue is that recorded data are usually noisy, contain many artefacts, and are affected by external factors such as movements and physical conditions. In order to obtain accurate and meaningful indicators, the signal has to be processed and conditioned so that the measurements are accurate and free from noise and disturbances. In this context, many researchers have utilized recent technological advances in wearable sensors and signal processing to develop smart and accurate wearable devices for clinical applications. The processing and analysis of physiological signals is a key issue for these smart wearable devices. Consequently, ongoing work in this field includes research on filtration, quality checking, signal transformation and decomposition, feature extraction and, most recently, machine-learning-based methods
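Of the filtration steps mentioned, the simplest is a moving-average smoother that suppresses high-frequency noise and spike artefacts in a sampled signal. A minimal sketch on a toy sequence; the window size and samples are illustrative:

```python
def moving_average(signal, window=3):
    """Trailing moving average: each output is the mean of the current
    sample and up to window-1 preceding samples (edges use shorter windows)."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        chunk = signal[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Noisy toy samples (e.g. a PPG trace with a motion spike at index 2)
raw = [1.0, 1.2, 5.0, 1.1, 0.9, 1.0]
print(moving_average(raw))
```

Real wearable pipelines use sharper filters (e.g. band-pass designs matched to the signal's frequency content), but the principle of trading temporal resolution for noise suppression is the same.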