
    Smartphone-based Thermal Imaging System for Diabetic Foot Ulcer Assessment

    This research is part of the STANDUP project (http://www.standupproject.eu/), dedicated to improving the prevention and treatment of diabetic foot ulcers (DFU) on the plantar foot surface using a smartphone-embedded thermal imaging system. The aim of this preliminary work is to build an ulcer assessment tool based on a smartphone and an IR thermal camera. The proposed system is a practical tool for accurate DFU healing assessment, combining color and thermal information in a single user-friendly system. To ensure robust tissue identification, annotation software was developed based on the SLIC superpixel segmentation algorithm, allowing clinicians to achieve objective and accurate tissue identification and annotation. The proposed system could serve as an intelligent telemedicine system deployed by clinicians at hospitals and healthcare centers for more accurate diagnosis of diabetic foot ulcers.
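
    As a rough illustration of the superpixel step, the sketch below uses the scikit-image SLIC implementation; the file name, the three placeholder tissue labels, and all parameter values are assumptions for illustration, not details of the STANDUP annotation software.

        # Minimal sketch of SLIC-based tissue annotation (assumed workflow).
        import numpy as np
        from skimage import io, segmentation

        rgb = io.imread("foot_ulcer.jpg")  # hypothetical input image

        # Over-segment the image into perceptually uniform superpixels.
        segments = segmentation.slic(rgb, n_segments=400, compactness=10,
                                     start_label=1)

        # One label per superpixel; in practice these come from the clinician,
        # here they are random placeholders (0/1/2 tissue classes).
        labels = {sp: np.random.randint(0, 3) for sp in np.unique(segments)}

        # Propagate superpixel labels back to pixels to form an annotation mask.
        mask = np.vectorize(labels.get)(segments)

        # Overlay superpixel boundaries on the original image for inspection.
        overlay = segmentation.mark_boundaries(rgb, segments)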

    Wound Image Classification Using Deep Convolutional Neural Networks

    Artificial Intelligence (AI) includes subfields such as Machine Learning (ML) and Deep Learning (DL) and concerns intelligent systems that mimic human behaviors. ML has been used in a wide range of fields. In the healthcare domain in particular, medical images often need to be carefully processed via operations such as classification and segmentation. Unlike traditional ML methods, DL algorithms are based on deep neural networks that are trained on large amounts of labeled data to extract features without human intervention. DL algorithms have become popular and powerful in classifying and segmenting medical images in recent years. In this thesis, we study the image classification problem in smartphone wound images using deep learning. Specifically, we apply deep convolutional neural networks (DCNN) to wound images to classify them into multiple types, including diabetic, pressure, venous, and surgical. We also use DCNNs for wound tissue classification. First, an extensive review of existing DL-based methods in wound image classification is conducted and comprehensive taxonomies are provided for the reviewed studies. Then, we use a DCNN for binary and 3-class classification of burn wound images; accuracy was considerably improved for the binary case in comparison with previous work in the literature. In addition, we propose an ensemble DCNN-based classifier for image-wise wound classification. We train and test our model on a new, valuable set of wound images of different types, kindly shared by the AZH Wound and Vascular Center in Milwaukee; the dataset has been made available to researchers in the field. Our proposed classifier outperforms common DCNNs in classification accuracy on our own dataset, and it was also evaluated on a public wound image dataset. The results showed that the proposed method can be used for wound image classification tasks and other similar applications. Finally, experiments are conducted on a dataset of different tissue types such as slough, granulation, and callus, annotated by wound specialists from the AZH Center, to classify the wound pixels into different classes. The preliminary results of tissue classification experiments using DCNNs, along with future directions, are provided.
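
    As a sketch of what an ensemble DCNN classifier of this kind can look like, one common pattern is to average the softmax outputs of several fine-tuned networks. The member architectures, the 4-class head, and the averaging rule below are assumptions based on the wound types named above, not the thesis's exact configuration.

        # Illustrative sketch of image-wise wound classification with an
        # ensemble of pretrained DCNNs (architectures are assumptions).
        import torch
        import torch.nn as nn
        from torchvision import models

        NUM_CLASSES = 4  # diabetic, pressure, venous, surgical

        def make_model(arch):
            m = arch(weights="DEFAULT")  # start from ImageNet weights
            if hasattr(m, "fc"):         # ResNet-style head
                m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
            else:                        # VGG-style classifier
                m.classifier[-1] = nn.Linear(m.classifier[-1].in_features,
                                             NUM_CLASSES)
            return m.eval()

        ensemble = [make_model(models.resnet50), make_model(models.vgg16)]

        @torch.no_grad()
        def predict(batch):
            # Average per-model softmax probabilities, then take the argmax.
            probs = torch.stack([m(batch).softmax(dim=1) for m in ensemble])
            return probs.mean(dim=0).argmax(dim=1)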

    Recognition of ischaemia and infection in diabetic foot ulcers: Dataset and techniques

    Recognition and analysis of Diabetic Foot Ulcers (DFU) using computerized methods is an emerging research area with the evolution of image-based machine learning algorithms. Existing research using visual computerized methods mainly focuses on recognition, detection, and segmentation of the visual appearance of the DFU, as well as tissue classification. According to DFU medical classification systems, the presence of infection (bacteria in the wound) and ischaemia (inadequate blood supply) has important clinical implications for DFU assessment and is used to predict the risk of amputation. In this work, we propose a new dataset and computer vision techniques to identify the presence of infection and ischaemia in DFU. This is the first time a DFU dataset with ground-truth labels of ischaemia and infection cases has been introduced for research purposes. For the handcrafted machine learning approach, we propose a new feature descriptor, the Superpixel Colour Descriptor. We then use an Ensemble Convolutional Neural Network (CNN) model for more effective recognition of ischaemia and infection. We propose a natural data-augmentation method that identifies the region of interest on foot images and focuses on finding the salient features in this area. Finally, we evaluate the performance of our proposed techniques on binary classification, i.e. ischaemia versus non-ischaemia and infection versus non-infection. Overall, our method performed better in the classification of ischaemia than infection. Our proposed Ensemble CNN deep learning algorithms performed better in both classification tasks than the handcrafted machine learning algorithms, with 90% accuracy in ischaemia classification and 73% in infection classification.
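
    The abstract does not define the Superpixel Colour Descriptor, so the following is only a plausible simplification of a superpixel colour feature: per-superpixel mean colours pooled into a fixed-length vector and fed to a classical binary classifier. The pooling scheme and classifier choice are assumptions.

        # Hypothetical superpixel colour feature for binary DFU classification.
        import numpy as np
        from skimage.segmentation import slic
        from sklearn.svm import SVC

        def colour_features(rgb, n_segments=100):
            segments = slic(rgb, n_segments=n_segments, start_label=1)
            # Mean colour of each superpixel.
            means = np.array([rgb[segments == sp].mean(axis=0)
                              for sp in np.unique(segments)])
            # Pool per-superpixel means into one order-insensitive descriptor.
            return np.concatenate([means.mean(axis=0), means.std(axis=0)])

        # X: descriptors for training images, y: 1 = ischaemia, 0 = non-ischaemia.
        # clf = SVC(kernel="rbf").fit(X, y)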

    Diabetic Foot Ulcers Classification using a fine-tuned CNNs Ensemble

    Diabetic Foot Ulcers (DFU) are lesions in the foot region caused by diabetes mellitus. It is essential to define the appropriate treatment in the early stages of the disease, since late treatment may result in amputation. This article proposes an ensemble approach composed of five modified convolutional neural networks (CNNs) - VGG-16, VGG-19, ResNet50, InceptionV3, and DenseNet-201 - to classify DFU images. To define the parameters, we fine-tuned the CNNs, evaluated different configurations of fully connected layers, and used batch normalization and dropout operations. The modified CNNs were well suited to the problem; however, we observed that combining the five CNNs significantly increased the success rates. We performed tests using 8,250 images with different resolutions, contrasts, colors, and textures, and included data augmentation operations to expand the training dataset. 5-fold cross-validation led to an average accuracy of 95.04% and a Kappa index greater than 91.85%, considered excellent.
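
    A minimal sketch of what one fine-tuned ensemble member could look like, with a new fully connected head using batch normalization and dropout as described; the layer sizes and the binary output are assumptions, not the paper's reported configuration.

        # Sketch of one fine-tuned ensemble member (head sizes are assumed).
        import torch.nn as nn
        from torchvision import models

        backbone = models.densenet201(weights="DEFAULT")
        in_features = backbone.classifier.in_features
        backbone.classifier = nn.Sequential(
            nn.Linear(in_features, 256),
            nn.BatchNorm1d(256),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(256, 2),  # DFU vs. non-DFU (binary output assumed)
        )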

    Use of Image Processing Techniques to Automatically Diagnose Sickle-Cell Anemia Present in Red Blood Cells Smear

    Sickle Cell Anemia is a blood disorder that results from abnormalities of red blood cells and shortens life expectancy to 42 and 48 years for males and females, respectively. It also causes pain, jaundice, and shortness of breath. Sickle Cell Anemia is characterized by the presence of abnormal cells such as sickle cells, ovalocytes, and anisopoikilocytes. Sickle cell disease, usually presenting in childhood, occurs more commonly in people from parts of tropical and subtropical regions where malaria is or was very common. A healthy RBC is usually round in shape, but it sometimes changes shape to form a sickle structure; this is called sickling of the RBC. The majority of sickle cells (shaped like a crescent moon) are due to low haemoglobin content. An image processing algorithm to automate the diagnosis of sickle cells present in thin blood smears is developed. Images are acquired using a charge-coupled device camera connected to a light microscope. Clustering-based segmentation techniques are used to identify erythrocytes (red blood cells) and sickle cells present on microscopic slides. Image features based on colour, texture, and the geometry of the cells are generated, as well as features that make use of a priori knowledge of the classification problem and mimic features used by human technicians. The red blood cell smears were obtained from IG Hospital, Rourkela. The proposed image-processing-based identification of sickle cells in anemic patients will be very helpful for automatic, streamlined, and effective diagnosis of the disease.
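
    A rough sketch of the clustering-plus-geometry idea described above, assuming k-means on pixel colours followed by per-cell shape features; the image name, the darker-cluster heuristic, and the circularity threshold are illustrative placeholders, not the paper's values.

        # Hypothetical clustering-based segmentation + shape screening.
        import numpy as np
        from skimage import io, measure
        from sklearn.cluster import KMeans

        rgb = io.imread("blood_smear.png")  # hypothetical smear image
        pixels = rgb.reshape(-1, 3).astype(float)

        # Cluster pixels into background vs. cell colour groups.
        km = KMeans(n_clusters=2, n_init=10).fit(pixels)
        cell_cluster = np.argmin(km.cluster_centers_.sum(axis=1))  # darker one
        mask = (km.labels_ == cell_cluster).reshape(rgb.shape[:2])

        # Geometry per connected cell: round cells have circularity near 1,
        # crescent-shaped (sickle) cells score noticeably lower.
        for region in measure.regionprops(measure.label(mask)):
            circularity = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
            if circularity < 0.75:  # illustrative threshold
                print("possible sickle cell at", region.centroid)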

    Monitoring Wound Healing with Contactless Measurements and Augmented Reality

    Objective: This work presents a device for non-invasive wound parameter assessment, designed to overcome the drawbacks of traditional methods, which are mostly rough, inaccurate, and painful for the patient. The device estimates the morphological parameters of the wound and provides augmented reality (AR) visual feedback on the wound healing status by projecting the wound border acquired during the last examination, thus improving doctor-patient communication. Methods: An accurate 3D model of the wound is created by stereophotogrammetry and refined through self-organizing maps. The 3D model is used to estimate physical parameters for wound healing assessment and integrates AR functionalities based on a miniaturized projector. The physical parameter estimation functionalities are evaluated in terms of precision, accuracy, inter-operator variability, and repeatability, whereas AR wound border projection is evaluated in terms of accuracy on the same phantom. Results: The accuracy and precision of the device are 2% and 1.2%, respectively, for linear parameters, and 1.7% and 1.3% for area and volume. The AR projection shows an error distance of <1 mm. No statistical difference was found between the measurements of different operators. Conclusion: The device has proven to be an objective and operator-independent tool for assessing the morphological parameters of the wound. Comparison with non-contact devices shows improved accuracy, offering reliable and rigorous measurements. Clinical Impact: Chronic wounds represent a significant health problem with high recurrence rates due to the ageing of the population and diseases such as diabetes and obesity. The device presented in this work provides an easy-to-use, non-invasive tool to obtain information useful for treatment.
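
    For intuition on the parameter estimation step, the sketch below computes surface area and enclosed volume from a triangulated wound mesh; it mirrors the general idea of deriving physical parameters from a 3D model, not the device's actual implementation.

        # Area and volume of a triangulated mesh given as numpy arrays:
        # vertices (N, 3) float coordinates, faces (M, 3) vertex indices.
        import numpy as np

        def mesh_area(vertices, faces):
            # Sum of triangle areas: half the norm of each edge cross product.
            a = vertices[faces[:, 1]] - vertices[faces[:, 0]]
            b = vertices[faces[:, 2]] - vertices[faces[:, 0]]
            return 0.5 * np.linalg.norm(np.cross(a, b), axis=1).sum()

        def mesh_volume(vertices, faces):
            # Signed volumes of tetrahedra against the origin (closed mesh assumed).
            v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
            return abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0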

    Semantic Segmentation of Smartphone Wound Images: Comparative Analysis of AHRF and CNN-Based Approaches

    Smartphone wound image analysis has recently emerged as a viable way to assess healing progress and provide actionable feedback to patients and caregivers between hospital appointments. Segmentation is a key image analysis step, after which attributes of the wound segment (e.g. wound area and tissue composition) can be analyzed. The Associated Hierarchical Random Field (AHRF) formulates image segmentation as a graph optimization problem: handcrafted features are extracted and then classified using machine learning classifiers. More recently, deep learning approaches have emerged and demonstrated superior performance for a wide range of image analysis tasks; FCN, U-Net, and DeepLabV3 are convolutional neural networks used for semantic segmentation. While each of these methods has shown promising results in separate experiments, no prior work has comprehensively and systematically compared the approaches on the same large wound image dataset, or more generally compared deep learning vs. non-deep-learning wound image segmentation approaches. In this paper, we compare the segmentation performance of AHRF and CNN approaches (FCN, U-Net, DeepLabV3) using various metrics, including segmentation accuracy (Dice score), inference time, amount of training data required, and performance on diverse wound sizes and tissue types. Improvements possible using various image pre- and post-processing techniques are also explored. As access to adequate medical images/data is a common constraint, we explore the sensitivity of the approaches to the size of the wound dataset. We found that for small datasets (<300 images), AHRF is more accurate than U-Net but not as accurate as FCN and DeepLabV3; AHRF is also over 1000x slower. For larger datasets (>300 images), AHRF saturates quickly, and all CNN approaches (FCN, U-Net, and DeepLabV3) are significantly more accurate than AHRF.
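
    The Dice score used as the accuracy metric is straightforward to compute on binary masks; a minimal reference implementation:

        # Dice coefficient between two binary masks (1 = wound pixel).
        import numpy as np

        def dice(pred, target, eps=1e-7):
            pred, target = pred.astype(bool), target.astype(bool)
            intersection = np.logical_and(pred, target).sum()
            # 2|A∩B| / (|A| + |B|); eps guards the empty-mask case.
            return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)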

    The Enlightening Role of Explainable Artificial Intelligence in Chronic Wound Classification

    Artificial Intelligence (AI) has been among the fastest-emerging research and industrial application fields, especially in the healthcare domain, but has operated over the past decades as a black-box model with limited understanding of its inner workings. AI algorithms are, in large part, built on weights calculated as the result of large matrix multiplications, and these computationally intensive processes are typically hard to interpret and debug. Explainable Artificial Intelligence (XAI) aims to address black-box and hard-to-debug approaches through the use of various techniques and tools. In this study, XAI techniques are applied to chronic wound classification. The proposed model classifies chronic wounds through the use of transfer learning and fully connected layers, and the classified chronic wound images serve as input to the XAI model for explanation. Interpretable results can offer clinicians new perspectives during the diagnostic phase. The proposed method successfully provides chronic wound classification and its associated explanation, extracting additional knowledge that can also be interpreted by non-data-science experts, such as medical scientists and physicians. This hybrid approach is shown to aid the interpretation and understanding of AI decision-making processes.
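
    The summary does not name the specific XAI technique used, so the sketch below shows one common choice for explaining a transfer-learning classifier, a bare-bones Grad-CAM: the class-score gradient weights the last convolutional feature maps to produce a heatmap a clinician can inspect. The ResNet-50 backbone is an assumption for illustration.

        # Minimal Grad-CAM sketch over an assumed ResNet-50 backbone.
        import torch
        import torch.nn.functional as F
        from torchvision import models

        model = models.resnet50(weights="DEFAULT").eval()
        feats, grads = {}, {}

        layer = model.layer4  # last convolutional stage of ResNet-50
        layer.register_forward_hook(lambda m, i, o: feats.update(v=o))
        layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

        def grad_cam(image, class_idx):
            """image: (1, 3, H, W) tensor; returns an (H, W) heatmap in [0, 1]."""
            score = model(image)[0, class_idx]
            model.zero_grad()
            score.backward()
            # Global-average-pool the gradients to get per-channel weights.
            weights = grads["v"].mean(dim=(2, 3), keepdim=True)
            cam = F.relu((weights * feats["v"]).sum(dim=1, keepdim=True))
            cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear")
            return (cam / (cam.max() + 1e-8)).squeeze()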