
    Novel Computerised Techniques for Recognition and Analysis of Diabetic Foot Ulcers

    Diabetic Foot Ulcers (DFU), which affect the lower extremities, are a major complication of Diabetes Mellitus (DM). Patients with diabetes are estimated to have a lifetime risk of 15% to 25% of developing a DFU, and failure to recognise and treat DFU properly contributes to up to 85% of lower limb amputations. Current practice for DFU screening involves manual inspection of the foot by podiatrists; further medical tests, such as vascular and blood tests, are used to determine the presence of ischaemia and infection in DFU. A comprehensive review of computerised techniques for recognition of DFU was performed to identify the work done so far in this field. This review made clear that computerised analysis of DFU is a relatively new field, which is why the related literature and research works are limited. There is also a lack of standardised public databases of DFU and other wound-related pathologies. We received approximately 1,500 DFU images through ethical approval with Lancashire Teaching Hospitals. In this work, we standardised both the DFU dataset and the expert annotations to perform different computer vision tasks, such as classification, segmentation and localisation, on popular deep learning frameworks. The main focus of this thesis is to develop automatic computer vision methods that can recognise DFU of different stages and grades. Firstly, we used machine learning algorithms to classify DFU patches against normal skin patches of the foot region and to examine the misclassified cases of both classes. Secondly, we used fully convolutional networks for the segmentation of DFU and surrounding skin in full foot images with high specificity and sensitivity. Finally, we used robust and lightweight deep localisation methods on mobile devices to detect DFU in foot images for remote monitoring.
Despite achieving very good performance in the recognition of DFU, these algorithms were not able to detect pre-ulcer conditions and very subtle DFU. Although recognition of DFU by computer vision algorithms is a valuable study in itself, we performed further analysis of DFU in foot images to determine factors that predict the risk of amputation, such as the presence of infection and ischaemia. A complete DFU diagnosis system built from these computer vision algorithms has the potential to deliver a paradigm shift in diabetic foot care, representing a cost-effective, remote and convenient healthcare solution, given more data and expert annotations.
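The segmentation and classification results above are reported in terms of sensitivity and specificity. A minimal pure-Python sketch of these screening metrics (the confusion counts below are made up for illustration, not taken from the thesis):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Screening metrics for a binary DFU-vs-normal-skin decision.

    sensitivity = TP / (TP + FN): fraction of true ulcer cases detected.
    specificity = TN / (TN + FP): fraction of healthy skin correctly rejected.
    """
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Hypothetical confusion counts for a DFU patch classifier.
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=180, fp=20)
print(sens, spec)  # 0.9 0.9
```

A screening tool typically favours high sensitivity (missing an ulcer is costlier than a false alarm), while specificity keeps referral workload manageable.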

    Wound Image Classification Using Deep Convolutional Neural Networks

    Artificial Intelligence (AI) includes subfields such as Machine Learning (ML) and Deep Learning (DL) and concerns intelligent systems that mimic human behaviors. ML has been used in a wide range of fields. Particularly in the healthcare domain, medical images often need to be carefully processed via operations such as classification and segmentation. Unlike traditional ML methods, DL algorithms are based on deep neural networks that are trained on large amounts of labeled data to extract features without human intervention. DL algorithms have become popular and powerful for classifying and segmenting medical images in recent years. In this thesis, we study the image classification problem in smartphone wound images using deep learning. Specifically, we apply deep convolutional neural networks (DCNNs) to wound images to classify them into multiple types, including diabetic, pressure, venous, and surgical. We also use DCNNs for wound tissue classification. First, an extensive review of existing DL-based methods in wound image classification is conducted, and comprehensive taxonomies are provided for the reviewed studies. Then, we use a DCNN for binary and 3-class classification of burn wound images; accuracy was considerably improved for the binary case in comparison with previous work in the literature. In addition, we propose an ensemble DCNN-based classifier for image-wise wound classification. We train and test our model on a new, valuable set of wound images of different types, kindly shared by the AZH Wound and Vascular Center in Milwaukee. The dataset has been made available to researchers in the field. Our proposed classifier outperforms common DCNNs in classification accuracy on our own dataset. It was also evaluated on a public wound image dataset, and the results show that the proposed method can be used for wound image classification tasks and other similar applications.
Finally, experiments are conducted on a dataset including different tissue types, such as slough, granulation, and callous, annotated by the wound specialists from the AZH Center, to classify wound pixels into different classes. Preliminary results of the tissue classification experiments using DCNNs, along with future directions, are provided.
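The image-wise ensemble classifier described above can be sketched as soft voting: average the class-probability vectors produced by several networks and take the arg-max. The probability vectors below are hypothetical, purely for illustration:

```python
def ensemble_predict(prob_vectors):
    """Soft voting: average per-network class probabilities for one image
    and return the index of the winning class plus the averaged vector."""
    n = len(prob_vectors)
    k = len(prob_vectors[0])
    avg = [sum(p[c] for p in prob_vectors) / n for c in range(k)]
    return max(range(k), key=avg.__getitem__), avg

# Three hypothetical DCNNs scoring one image over four wound types:
# [diabetic, pressure, venous, surgical]
nets = [
    [0.60, 0.20, 0.10, 0.10],
    [0.30, 0.40, 0.20, 0.10],
    [0.50, 0.25, 0.15, 0.10],
]
label, avg = ensemble_predict(nets)
print(label)  # 0  (the diabetic class wins the averaged vote)
```

Note the middle network alone would have voted "pressure"; averaging lets confident, agreeing networks outvote a single outlier, which is one reason ensembles tend to beat their individual members.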

    A Mobile App for Wound Localization using Deep Learning

    We present an automated wound localizer for 2D wound and ulcer images based on a deep neural network, as the first step towards building an automated and complete wound diagnostic system. The wound localizer is built on the YOLOv3 model, which has then been turned into an iOS mobile application. The localizer can detect the wound and its surrounding tissues and isolate the localized wound region from the image, which is very helpful for subsequent processing such as wound segmentation and classification because unnecessary regions are removed from the wound images. For mobile app development with video processing, a lighter version of YOLOv3 named tiny-YOLOv3 is used. The model is trained and tested on our own image dataset, built in collaboration with the AZH Wound and Vascular Center, Milwaukee, Wisconsin. The YOLOv3 model is compared with an SSD model: YOLOv3 achieves a mAP of 93.9%, much better than the SSD model (86.4%). The robustness and reliability of these models are also tested on a publicly available dataset named Medetec, where they show very good performance as well.
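The mAP figures quoted above rest on intersection-over-union (IoU) between a predicted box and a ground-truth box: a detection usually counts as correct only when IoU clears a threshold such as 0.5. A minimal sketch with boxes given as (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle (empty if boxes do not overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping in a 5x5 corner: 25 / (100 + 100 - 25) = 1/7.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

mAP then averages, over classes and confidence thresholds, the precision of detections that pass the IoU criterion.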

    Efficient refinements on YOLOv3 for real-time detection and assessment of diabetic foot Wagner grades

    Currently, the screening of Wagner grades of diabetic feet (DF) still relies on professional podiatrists. In less-developed countries, however, podiatrists are scarce, leaving the majority of patients undiagnosed. In this study, we propose a real-time detection and localization method for Wagner grades of DF based on refinements to YOLOv3. We collected 2,688 data samples and implemented several methods, such as visually coherent image mixup, label smoothing, and training-scheduler revamping, guided by an ablation study. The experimental results show that the refined YOLOv3 achieves an accuracy of 91.95%, with an inference time of 31 ms per picture on an NVIDIA Tesla V100. To test the performance of the model on a smartphone, we deployed the refined YOLOv3 models on an Android 9 smartphone. This work has the potential to lead to a paradigm shift in the clinical treatment of DF, providing an effective healthcare solution for DF tissue analysis and healing-status monitoring.
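Two of the refinements named above, image mixup and label smoothing, are simple enough to sketch in pure Python (flat pixel lists stand in for real image tensors; the values are illustrative only):

```python
def mixup(x_a, x_b, y_a, y_b, lam):
    """Mixup augmentation: blend two images and their one-hot labels
    with the same mixing coefficient `lam` in [0, 1]."""
    x = [lam * a + (1 - lam) * b for a, b in zip(x_a, x_b)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y_a, y_b)]
    return x, y

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: move `eps` of the probability mass from the true
    class to a uniform distribution over all k classes."""
    k = len(one_hot)
    return [(1 - eps) * p + eps / k for p in one_hot]

# Blend two tiny "images" half-and-half; the target blends the same way.
x, y = mixup([0.0, 10.0], [10.0, 0.0], [1, 0], [0, 1], lam=0.5)
print(x, y)  # [5.0, 5.0] [0.5, 0.5]

# Soften a hard 4-class target so the model is never pushed to full certainty.
print([round(v, 3) for v in smooth_labels([1, 0, 0, 0], eps=0.1)])
```

Both tricks regularize the detector: mixup widens the training distribution, and label smoothing discourages over-confident logits.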

    Integrated Image and Location Analysis for Wound Classification: A Deep Learning Approach

    The global burden of acute and chronic wounds presents a compelling case for enhancing wound classification methods, a vital step in diagnosing and determining optimal treatments. Recognizing this need, we introduce an innovative multi-modal network based on a deep convolutional neural network for categorizing wounds into four classes: diabetic, pressure, surgical, and venous ulcers. Our multi-modal network uses wound images and their corresponding body locations for more precise classification. A unique aspect of our methodology is the incorporation of a body map system that facilitates accurate wound-location tagging, improving upon traditional wound image classification techniques. A distinctive feature of our approach is the integration of models such as VGG16, ResNet152, and EfficientNet within a novel architecture. This architecture includes elements such as spatial and channel-wise Squeeze-and-Excitation modules, Axial Attention, and an Adaptive Gated Multi-Layer Perceptron, providing a robust foundation for classification. Our multi-modal network was trained and evaluated on two distinct datasets comprising relevant images and corresponding location information. Notably, our proposed network outperformed traditional methods, reaching accuracy ranges of 74.79% to 100% for Region of Interest (ROI) classification without location information, 73.98% to 100% for ROI classification with location information, and 78.10% to 100% for whole-image classification. This marks a significant enhancement over previously reported performance metrics in the literature. Our results indicate the potential of our multi-modal network as an effective decision-support tool for wound image classification, paving the way for its application in various clinical contexts.
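One building block named above, the channel-wise Squeeze-and-Excitation module, can be sketched in pure Python for a two-channel toy case. The gate weights here are made up for illustration; a real module learns them and inserts a reduction ratio between the two layers:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def squeeze_excite(channels, w1, w2):
    """Toy channel-wise Squeeze-and-Excitation over a list of channels,
    each given as a flat list of activations."""
    # Squeeze: global average pool collapses each channel to one number.
    z = [sum(ch) / len(ch) for ch in channels]
    # Excitation: linear -> ReLU -> linear -> sigmoid yields a gate per channel.
    h = [max(0.0, sum(w * zj for w, zj in zip(row, z))) for row in w1]
    s = [sigmoid(sum(w * hj for w, hj in zip(row, h))) for row in w2]
    # Scale: reweight every activation in channel c by its gate s[c].
    return [[s_c * v for v in ch] for ch, s_c in zip(channels, s)]

# Identity weight matrices make the effect easy to see: the channel with the
# larger mean activation receives the larger gate.
out = squeeze_excite([[1.0, 1.0], [3.0, 3.0]],
                     w1=[[1, 0], [0, 1]], w2=[[1, 0], [0, 1]])
```

The module lets the network emphasize informative channels and suppress weak ones at negligible extra cost, which is why it slots cleanly into backbones like VGG16 or ResNet152.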

    Diabetic Foot Ulcers Classification using a fine-tuned CNNs Ensemble

    Diabetic Foot Ulcers (DFU) are lesions in the foot region caused by diabetes mellitus. It is essential to define the appropriate treatment in the early stages of the disease, since late treatment may result in amputation. This article proposes an ensemble approach composed of five modified convolutional neural networks (CNNs) - VGG-16, VGG-19, ResNet50, InceptionV3, and DenseNet-201 - to classify DFU images. To define the parameters, we fine-tuned the CNNs, evaluated different configurations of fully connected layers, and used batch normalization and dropout operations. The modified CNNs were well suited to the problem; however, we observed that combining the five CNNs significantly increased the success rates. We performed tests using 8,250 images with different resolution, contrast, color, and texture characteristics, and included data augmentation operations to expand the training dataset. 5-fold cross-validation led to an average accuracy of 95.04% and a Kappa index greater than 91.85%, considered excellent.
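The Kappa index reported above (Cohen's kappa) corrects raw accuracy for the agreement expected by chance, which is why it is a stricter figure than plain accuracy. A minimal sketch with a hypothetical 2-class confusion matrix:

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows: true labels,
    cols: predicted labels): (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is the chance agreement implied by the marginals."""
    n = sum(sum(row) for row in confusion)
    p_o = sum(confusion[i][i] for i in range(len(confusion))) / n
    p_e = sum(
        (sum(confusion[i]) / n) * (sum(row[i] for row in confusion) / n)
        for i in range(len(confusion))
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical balanced 2-class result: 90% accuracy, 50% chance agreement.
print(round(cohens_kappa([[45, 5], [5, 45]]), 2))  # 0.8
```

On a common interpretation scale, values above roughly 0.81 indicate almost-perfect agreement, which is the sense in which a kappa above 91.85% is rated excellent.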