
    Robust Methods for Real-Time Diabetic Foot Ulcer Detection and Localization on Mobile Devices

    Current practice for Diabetic Foot Ulcer (DFU) screening involves detection and localization by podiatrists. Existing automated solutions focus on either segmentation or classification. In this work, we design deep learning methods for real-time DFU localization. To produce a robust deep learning model, we collected an extensive database of 1775 DFU images. Two medical experts produced the ground truth for this dataset by outlining the DFU regions of interest with annotation software. Using 5-fold cross-validation, a Faster R-CNN model with an InceptionV2 backbone and two-tier transfer learning achieved a mean average precision of 91.8%, an inference speed of 48 ms per image, and a model size of 57.2 MB. To demonstrate the robustness and practicality of our solution for real-time prediction, we evaluated the performance of the models on an NVIDIA Jetson TX2 and a smartphone app. This work demonstrates the capability of deep learning in real-time localization of DFU, which can be further improved with a more extensive dataset.
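    The 91.8% mean average precision above is the per-class average precision averaged over classes. As a minimal sketch (not the paper's evaluation code), average precision for one class can be computed from ranked detections like this, assuming the matching of detections to ground-truth boxes via an IoU threshold has already been done:

```python
def average_precision(scored_detections, num_ground_truth):
    """AP for one class. scored_detections is a list of
    (confidence, is_true_positive) pairs; detection-to-ground-truth
    matching (e.g. via an IoU threshold) is assumed done upstream."""
    ranked = sorted(scored_detections, key=lambda d: -d[0])
    tp = fp = 0
    points = []  # (recall, precision) after each detection in rank order
    for _, is_tp in ranked:
        if is_tp:
            tp += 1
        else:
            fp += 1
        points.append((tp / num_ground_truth, tp / (tp + fp)))
    # All-point interpolation: area under the precision envelope.
    ap, prev_recall = 0.0, 0.0
    for i, (recall, _) in enumerate(points):
        envelope = max(p for _, p in points[i:])  # best precision at recall >= this
        ap += (recall - prev_recall) * envelope
        prev_recall = recall
    return ap
```

    The mean over all object classes of this quantity gives the reported mAP.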

    Efficient refinements on YOLOv3 for real-time detection and assessment of diabetic foot Wagner grades

    Currently, screening for Wagner grades of diabetic feet (DF) still relies on professional podiatrists. However, in less-developed countries podiatrists are scarce, leaving the majority of patients undiagnosed. In this study, we propose a real-time detection and localization method for Wagner grades of DF based on refinements to YOLOv3. We collected 2,688 data samples and implemented several techniques, such as visually coherent image mixup, label smoothing, and training-scheduler revamping, selected through an ablation study. The experimental results suggest that the refined YOLOv3 achieves an accuracy of 91.95% and an inference speed of 31 ms per image on an NVIDIA Tesla V100. To test the model's performance on a smartphone, we deployed the refined YOLOv3 model on an Android 9 smartphone. This work has the potential to lead to a paradigm shift in the clinical treatment of DF, providing an effective healthcare solution for DF tissue analysis and healing-status monitoring. Comment: 11 pages with 11 figures
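    The two named training refinements can be sketched as follows. This is a simplified illustration, not the paper's implementation: the canvas-blending mixup and the smoothing strength `eps=0.1` are common defaults assumed here, not values taken from the abstract.

```python
import numpy as np

def mixup_images(img_a, img_b, boxes_a, boxes_b, alpha=1.5):
    """Visually coherent image mixup for detection: blend pixel values on a
    shared canvas and keep the bounding boxes of both source images."""
    lam = np.random.beta(alpha, alpha)
    h = max(img_a.shape[0], img_b.shape[0])
    w = max(img_a.shape[1], img_b.shape[1])
    mixed = np.zeros((h, w, 3), dtype=np.float32)
    mixed[:img_a.shape[0], :img_a.shape[1]] += lam * img_a
    mixed[:img_b.shape[0], :img_b.shape[1]] += (1.0 - lam) * img_b
    return mixed, boxes_a + boxes_b

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: soften hard 0/1 targets to discourage over-confidence."""
    num_classes = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / num_classes
```

    Both techniques regularize the detector: mixup enlarges the effective training distribution, while label smoothing prevents the classifier head from being rewarded for extreme logits.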

    Novel Computerised Techniques for Recognition and Analysis of Diabetic Foot Ulcers

    Diabetic Foot Ulcers (DFU) affecting the lower extremities are a major complication of Diabetes Mellitus (DM). It has been estimated that patients with diabetes have a 15% to 25% lifetime risk of developing DFU, and that failure to recognise and treat DFU properly contributes to up to 85% of lower limb amputations. Current practice for DFU screening involves manual inspection of the foot by podiatrists, with further medical tests such as vascular and blood tests used to determine the presence of ischemia and infection in DFU. A comprehensive review of computerized techniques for recognition of DFU was performed to identify the work done so far in this field. During this stage, it became clear that computerized analysis of DFU is a relatively new field, which is why the related literature and research works are limited. There is also a lack of a standardised public database of DFU and other wound-related pathologies. We received approximately 1500 DFU images under ethical approval from Lancashire Teaching Hospitals. In this work, we standardised both the DFU dataset and the expert annotations to perform different computer vision tasks such as classification, segmentation and localization on popular deep learning frameworks. The main focus of this thesis is to develop automatic computer vision methods that can recognise DFU of different stages and grades. Firstly, we used machine learning algorithms to classify DFU patches against normal skin patches of the foot region and to examine the misclassified cases of both classes. Secondly, we used fully convolutional networks for the segmentation of DFU and surrounding skin in full foot images with high specificity and sensitivity. Finally, we used robust and lightweight deep localisation methods on mobile devices to detect DFU in foot images for remote monitoring.
Despite achieving very good performance in the recognition of DFU, these algorithms were not able to detect pre-ulcer conditions and very subtle DFU. Although recognition of DFU by computer vision algorithms is a valuable study, we performed further analysis of DFU in foot images to determine factors that predict the risk of amputation, such as the presence of infection and ischemia. A complete DFU diagnosis system built from these computer vision algorithms has the potential to deliver a paradigm shift in diabetic foot care, representing a cost-effective, remote and convenient healthcare solution, which can be strengthened with more data and expert annotations.
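    The "high specificity and sensitivity" claimed for the segmentation stage are pixel-wise quantities. A minimal sketch of how they are computed from binary masks (standard definitions, not code from the thesis):

```python
import numpy as np

def segmentation_sensitivity_specificity(pred_mask, gt_mask):
    """Pixel-wise sensitivity and specificity for binary segmentation masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.sum(pred & gt)    # ulcer pixels correctly found
    tn = np.sum(~pred & ~gt)  # background pixels correctly left out
    fn = np.sum(~pred & gt)   # missed ulcer pixels
    fp = np.sum(pred & ~gt)   # background wrongly marked as ulcer
    return tp / (tp + fn), tn / (tn + fp)
```

    Sensitivity penalises missed ulcer tissue, while specificity penalises over-segmentation into healthy skin; segmentation work typically reports both because either one alone can be gamed by a degenerate mask.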

    A Mobile App for Wound Localization using Deep Learning

    We present an automated wound localizer for 2D wound and ulcer images using a deep neural network, as the first step towards building an automated and complete wound diagnostic system. The wound localizer was developed using the YOLOv3 model, which was then turned into an iOS mobile application. The developed localizer can detect the wound and its surrounding tissues and isolate the localized wound region from images, which is very helpful for further processing such as wound segmentation and classification because unnecessary regions are removed from the images. For mobile app development with video processing, a lighter version of YOLOv3 named tiny-YOLOv3 was used. The model is trained and tested on our own image dataset, built in collaboration with the AZH Wound and Vascular Center, Milwaukee, Wisconsin. The YOLOv3 model is compared with an SSD model: YOLOv3 gives a mAP value of 93.9%, which is much better than the SSD model (86.4%). The robustness and reliability of these models are also tested on a publicly available dataset named Medetec, on which they show very good performance as well. Comment: 8 pages, 5 figures, 1 table
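    The isolation step that feeds the detected region to downstream segmentation/classification can be sketched as a crop with a small context margin. This is an illustrative version only; the margin fraction is an assumption, not a value from the paper:

```python
import numpy as np

def crop_detected_regions(image, boxes, context=0.1):
    """Cut each detected wound box (x1, y1, x2, y2) out of the image,
    keeping a margin of surrounding tissue equal to `context` times the
    box size, clamped to the image bounds."""
    h, w = image.shape[:2]
    crops = []
    for x1, y1, x2, y2 in boxes:
        dx = int((x2 - x1) * context)
        dy = int((y2 - y1) * context)
        crops.append(image[max(0, y1 - dy):min(h, y2 + dy),
                           max(0, x1 - dx):min(w, x2 + dx)])
    return crops
```

    Removing the irrelevant background before segmentation reduces both compute and the chance of false positives far from the wound.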

    Venn Diagram Multi-label Class Interpretation of Diabetic Foot Ulcer with Color and Sharpness Enhancement

    DFU is a severe complication of diabetes that can lead to amputation of the lower limb if not treated properly. Inspired by the 2021 Diabetic Foot Ulcer Grand Challenge, researchers have designed automated multi-class classification of DFU into four classes: infection, ischaemia, both conditions, and neither condition. However, this remains a challenge, as classification accuracy is still not satisfactory. This paper proposes a Venn-diagram interpretation of a multi-label CNN-based method, utilizing different image enhancement strategies to improve multi-class DFU classification. We propose reducing the four classes to two, since 'both'-class wounds can be interpreted as the simultaneous occurrence of infection and ischaemia, and 'none'-class wounds as the absence of both. We introduce a novel Venn-diagram representation block in the classifier to recover all four classes from these two. To make our model more resilient, we propose enhancing the perceptual quality of DFU images, particularly blurry or inconsistently lit ones, by performing color and sharpness enhancements. We also employ a fine-tuned optimization technique, adaptive sharpness-aware minimization, to improve the CNN model's generalization performance. The proposed method is evaluated on the DFUC2021 test dataset, containing 5,734 images, and the results are compared with the top-3 winning entries of DFUC2021. Our proposed approach outperforms these existing approaches, achieving macro-average F1, recall and precision scores of 0.6592, 0.6593, and 0.6652, respectively. Additionally, we perform ablation studies and image quality measurements to further interpret the proposed method. This method will benefit patients with DFU, since it tackles inconsistencies in captured images and can be employed for more robust remote DFU wound classification. Comment: 12 pages, 7 figures
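    The Venn-diagram reduction from four classes to two binary labels, and the reverse mapping at decision time, can be sketched as below. The label names, the encoding, and the 0.5 threshold are assumptions for illustration; the paper's representation block learns this mapping rather than hard-coding it:

```python
# Hypothetical encoding of the four DFUC2021 classes as two binary labels.
FOUR_TO_TWO = {"none": (0, 0), "infection": (1, 0),
               "ischaemia": (0, 1), "both": (1, 1)}

def four_class_decision(p_infection, p_ischaemia, threshold=0.5):
    """Recover the four-class DFU label from two independent sigmoid outputs,
    reading the class off the Venn-diagram region the probabilities fall in."""
    infected = p_infection >= threshold
    ischaemic = p_ischaemia >= threshold
    if infected and ischaemic:
        return "both"
    if infected:
        return "infection"
    if ischaemic:
        return "ischaemia"
    return "none"
```

    Framing the problem as two binary questions lets each head train on all images that carry its condition, including the 'both' cases, instead of splitting the data across four mutually exclusive classes.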

    Leveraging deep neural networks for automatic and standardised wound image acquisition

    Wound monitoring is a time-consuming and error-prone activity performed daily by healthcare professionals. Capturing wound images is crucial in current clinical practice, though image inadequacy can undermine further assessment. To provide sufficient information for wound analysis, the images should also contain a minimal periwound area. This work proposes an automatic wound image acquisition methodology that exploits deep learning models to guarantee compliance with these adequacy requirements, using a marker as a metric reference. A RetinaNet model detects the wound and marker regions, which are further analysed by a post-processing module that validates that both structures are present and verifies that a periwound radius of 4 centimetres is included. This pipeline was integrated into a mobile application that processes the camera frames and automatically acquires the image once the adequacy requirements are met. The detection model achieved mAP@0.5 values of 0.39 and 0.95 for wound and marker detection respectively, exhibiting robust detection performance across varying acquisition conditions. Mobile tests demonstrated that the application is responsive, requiring 1.4 seconds on average to acquire an image. The robustness of this solution for real-time smartphone-based usage evidences its capability to standardise the acquisition of adequate wound images, providing a powerful tool for healthcare professionals.
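    The post-processing check described above can be sketched as follows: the marker of known physical size gives a pixels-per-centimetre scale, against which the 4 cm periwound margin is verified. The 2 cm marker width is a placeholder assumption; the paper does not state the marker's dimensions:

```python
def periwound_margin_included(wound_box, marker_box, image_shape,
                              marker_width_cm=2.0, radius_cm=4.0):
    """Check that a `radius_cm` margin around the wound box (x1, y1, x2, y2)
    fits inside the frame, using the detected marker's known physical width
    to derive a pixels-per-centimetre scale. The 2 cm marker width is an
    assumed value, not taken from the paper."""
    px_per_cm = (marker_box[2] - marker_box[0]) / marker_width_cm
    margin = radius_cm * px_per_cm
    x1, y1, x2, y2 = wound_box
    height, width = image_shape[:2]
    return (x1 - margin >= 0 and y1 - margin >= 0 and
            x2 + margin <= width and y2 + margin <= height)
```

    In the described app, a frame is auto-captured only once both detections are present and this check passes.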

    Artificial intelligence for automated detection of diabetic foot ulcers: A real-world proof-of-concept clinical evaluation

    Objective: Conduct a multicenter proof-of-concept clinical evaluation to assess the accuracy of an artificial intelligence system on a smartphone for automated detection of diabetic foot ulcers. Methods: The evaluation was undertaken with patients with diabetes (n = 81) from September 2020 to January 2021. A total of 203 foot photographs were collected using a smartphone and analysed using the artificial intelligence system, then compared against expert clinician judgement; 162 images showed at least one ulcer and 41 showed no ulcer. Sensitivity and specificity of the system against clinician decisions were determined, and inter- and intra-rater reliability analysed. Results: Predictions/decisions made by the system showed excellent sensitivity (0.9157) and high specificity (0.8857). Merging of intersecting predictions improved specificity to 0.9243. High levels of inter- and intra-rater reliability for clinician agreement on the ability of the artificial intelligence system to detect diabetic foot ulcers were also demonstrated (Kα > 0.8000 for all studies, between and within raters). Conclusions: We demonstrate highly accurate automated diabetic foot ulcer detection using an artificial intelligence system with a low-end smartphone. This is the first key stage in the creation of a fully automated diabetic foot ulcer detection and monitoring system, with these findings underpinning medical device development.
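    The "merging of intersecting predictions" that raised specificity can be sketched as a greedy union of overlapping boxes, so that several detections on one ulcer count as a single prediction. The exact merging rule is not given in the abstract; this single-pass version is a simplification:

```python
def merge_intersecting(boxes):
    """Greedily merge overlapping (x1, y1, x2, y2) boxes into single
    predictions by taking the union of their extents. A single pass is a
    simplification: fully merging chains of overlaps would repeat the
    pass until no two boxes intersect."""
    merged = []
    for box in boxes:
        for i, m in enumerate(merged):
            overlaps = (box[0] <= m[2] and m[0] <= box[2] and
                        box[1] <= m[3] and m[1] <= box[3])
            if overlaps:
                merged[i] = (min(m[0], box[0]), min(m[1], box[1]),
                             max(m[2], box[2]), max(m[3], box[3]))
                break
        else:
            merged.append(tuple(box))
    return merged
```

    Collapsing duplicate detections reduces the count of spurious positive predictions on ulcer-free feet, which is consistent with the specificity gain reported above.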