66 research outputs found

    Mobile Wound Assessment and 3D Modeling from a Single Image

    The prevalence of camera-enabled mobile phones has made mobile wound assessment a viable treatment option for millions of previously difficult-to-reach patients. We have designed a complete mobile wound assessment platform to ameliorate the many challenges of chronic wound care. Chronic wounds and infections are the most severe, costly, and fatal types of wounds, placing them at the center of mobile wound assessment. Wound physicians assess thousands of single-view wound images from all over the world, and it may be difficult to determine the location of the wound on the body, for example, if the image is taken at close range. In our solution, end-users capture an image of the wound with their mobile camera. The wound image is segmented and classified using modern convolutional neural networks and is stored securely in the cloud for remote tracking. We use an interactive, semi-automated approach to allow users to specify the location of the wound on the body. To accomplish this, we have created, to the best of our knowledge, the first 3D human surface anatomy labeling system, based on the current NYU and Anatomy Mapper labeling systems. To interactively view wounds in 3D, we present an efficient projective texture mapping algorithm for texturing wounds onto a 3D human anatomy model. In so doing, we demonstrate an approach to 3D wound reconstruction that works even for a single wound image.
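The core of projective texture mapping as described above is projecting each vertex of the 3D anatomy model through the camera that took the wound photo, yielding per-vertex texture coordinates. A minimal sketch of that projection step, assuming a pinhole camera given as a hypothetical 3x4 projection matrix (the paper's actual camera model and matrix values are not given here):

```python
import numpy as np

def project_uvs(vertices, cam_matrix, img_w, img_h):
    """Project 3D model vertices into the wound photo to get per-vertex UVs."""
    # Homogeneous coordinates: (N, 4)
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    # Pinhole projection into pixel space: (N, 3)
    pix = homo @ cam_matrix.T
    pix = pix[:, :2] / pix[:, 2:3]          # perspective divide
    # Normalise pixel coordinates to [0, 1] texture coordinates
    return pix / np.array([img_w, img_h])

# Toy 3x4 camera: identity rotation, focal length 100, principal point at (64, 64)
K = np.array([[100., 0., 64., 0.],
              [0., 100., 64., 0.],
              [0., 0., 1., 0.]])
verts = np.array([[0., 0., 2.], [0.5, 0.5, 2.]])
uvs = project_uvs(verts, K, 128, 128)   # vertex on the optical axis maps to (0.5, 0.5)
```

Vertices whose UVs fall outside [0, 1], or that face away from the camera, would simply keep the base skin texture.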

    Deep Learning based Skin-layer Segmentation for Characterizing Cutaneous Wounds from Optical Coherence Tomography Images

    Optical coherence tomography (OCT) is a medical imaging modality that allows us to probe the deeper substructures of skin. State-of-the-art wound care prediction and monitoring methods are based on visual evaluation and focus on surface information. However, research has shown that sub-surface information is critical for understanding wound healing progression. This work demonstrates the use of OCT as an effective imaging tool for objective, non-invasive assessment of wound severity, healing potential, and healing progress by measuring the optical characteristics of skin components. We have demonstrated the efficacy of OCT in studying wound healing progress in in vivo small animal models. Automated analysis of OCT datasets poses multiple challenges, such as limited training dataset size and variation in data distribution induced by uncertainties in sample quality and experimental conditions. We employed a U-Net-based model to segment skin layers in OCT images and to study wound closure dynamics through epithelial and regenerated tissue thickness, thereby quantifying the progression of wound healing. In the experimental evaluation of the OCT skin image datasets, we achieved skin layer segmentation with an average intersection over union (IoU) of 0.9234. The results have been corroborated using gold-standard histology images and co-validated using inputs from pathologists. Clinical Relevance: To monitor wound healing progression without disrupting the healing process by superficial, non-invasive means via the identification of pixel characteristics of individual layers.
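The IoU figure reported above is a standard overlap measure between a predicted segmentation mask and the ground truth. A minimal sketch of how it is computed for one binary mask pair (the paper averages this over layers and images):

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0   # empty masks count as perfect

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt   = np.array([[1, 1, 0],
                 [0, 0, 1]])
# intersection = 2 pixels, union = 4 pixels -> IoU = 0.5
```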

    Comprehensive Assessment of Fine-Grained Wound Images Using a Patch-Based CNN With Context-Preserving Attention

    Goal: Chronic wounds affect 6.5 million Americans. Wound assessment via algorithmic analysis of smartphone images has emerged as a viable option for remote assessment. Methods: We comprehensively score wounds based on the clinically validated Photographic Wound Assessment Tool (PWAT), which assesses clinically important ranges of eight wound attributes: Size, Depth, Necrotic Tissue Type, Necrotic Tissue Amount, Granulation Tissue Type, Granulation Tissue Amount, Edges, and Periulcer Skin Viability. We propose a DenseNet Convolutional Neural Network (CNN) framework with patch-based, context-preserving attention to assess the eight PWAT attributes of four wound types: diabetic ulcers, pressure ulcers, vascular ulcers, and surgical wounds. Results: In an evaluation on our dataset of 1639 wound images, our model estimated all eight PWAT sub-scores with classification accuracies and F1 scores of over 80%. Conclusions: Our work is the first intelligent system that autonomously grades wounds comprehensively based on the criteria in the PWAT rubric, alleviating the significant burden that manual wound grading imposes on wound care nurses.
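The patch-based part of such a framework tiles the wound image into fixed-size patches, scores each patch, and aggregates the per-patch outputs into one attribute score. A minimal sketch of that tiling-and-aggregation skeleton; the patch size, stride, aggregation rule, and `patch_scorer` (standing in for a DenseNet head) are all illustrative assumptions, not the paper's actual design:

```python
import numpy as np

def extract_patches(image, patch=32, stride=32):
    """Tile an image into fixed-size patches capturing local wound detail."""
    h, w = image.shape[:2]
    return [image[y:y + patch, x:x + patch]
            for y in range(0, h - patch + 1, stride)
            for x in range(0, w - patch + 1, stride)]

def score_attribute(image, patch_scorer):
    """Aggregate per-patch sub-scores into one attribute score (mean here)."""
    scores = [patch_scorer(p) for p in extract_patches(image)]
    return float(np.mean(scores))

# Hypothetical scorer: mean patch intensity as a stand-in for a CNN head
img = np.full((64, 64), 0.25)
score = score_attribute(img, lambda p: p.mean())
```

The context-preserving attention described in the paper would replace the plain mean with a learned weighting that also sees the whole-image context.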

    Imparting 3D representations to artificial intelligence for a full assessment of pressure injuries.

    In recent decades, researchers have shown great interest in machine learning techniques for extracting meaningful information from the large amounts of data collected each day. In the medical field especially, images play a significant role in the detection of several health issues. Medical image analysis therefore contributes markedly to the diagnosis process and is a suitable environment for intelligent systems. Deep Learning (DL) has recently captured the interest of researchers, as it has proven efficient in detecting underlying features in data and has outperformed classical machine learning methods. The main objective of this dissertation is to prove the efficiency of Deep Learning techniques, through medical imaging, in tackling one of the important health issues facing our society. Pressure injuries are a dermatology-related health issue associated with increased morbidity and health care costs. Managing pressure injuries appropriately is increasingly important for all professionals in wound care. Using 2D photographs and 3D meshes of these wounds, collected from collaborating hospitals, our mission is to create intelligent systems for a full, non-intrusive assessment of these wounds. Five main tasks have been achieved in this study: a literature review of wound imaging methods using machine learning techniques; the classification and segmentation of the tissue types inside the pressure injury; the segmentation of these wounds; the design of an end-to-end system that measures all the necessary quantitative information from 3D meshes for an efficient assessment of PIs; and the integration of the assessment imaging techniques into a web-based application.
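One quantitative measurement such an end-to-end 3D system needs is the wound's surface area, which for a triangulated mesh is simply the sum of triangle areas via the cross product. A minimal sketch under that assumption (the dissertation's actual measurement pipeline is not given here):

```python
import numpy as np

def mesh_area(vertices, faces):
    """Surface area of a triangulated wound region: sum of triangle areas,
    each 0.5 * |(b - a) x (c - a)| for triangle corners a, b, c."""
    v = np.asarray(vertices, dtype=float)
    a, b, c = v[faces[:, 0]], v[faces[:, 1]], v[faces[:, 2]]
    cross = np.cross(b - a, c - a)
    return 0.5 * np.linalg.norm(cross, axis=1).sum()

# Unit right triangle in the z = 0 plane: area 0.5
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2]])
area = mesh_area(verts, faces)
```

Restricting `faces` to triangles whose vertices were segmented as wound tissue would give the wound area alone.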

    Artificial Intelligence-Powered Chronic Wound Management System: Towards Human Digital Twins

    Artificial Intelligence (AI) has witnessed increased application and widespread adoption over the past decade. AI applied to medical images has the potential to assist caregivers in deciding on a proper chronic wound treatment plan by helping them understand wound and tissue classification and border segmentation, as well as through visual image synthesis. This dissertation explores chronic wound management using AI methods such as Generative Adversarial Networks (GANs) and Explainable AI (XAI) techniques. The wound images are collected, grouped, and processed. One primary objective of this research is to develop a series of AI models, not only to present the potential of AI in wound management but also to develop the building blocks of human digital twins. First, motivations, contributions, and the dissertation outline are summarized to introduce the aim and scope of the dissertation. The first contribution of this study is a chronic wound classification model and its explanation using XAI; this model also benefits from transfer learning to improve performance. Next, a novel model is developed that performs wound border segmentation and tissue classification simultaneously, using a GAN-based Deep Learning (DL) architecture. Another novel model is developed for creating lifelike wounds: the output of the previous model serves as its input, and it generates new chronic wound images. Any tissue distribution can be converted into a lifelike wound while preserving the shape of the original wound. This research is then extended to build a digital twin for chronic wound management: chronic wounds and the enabling technologies for wound care digital twins are examined, and a general framework for chronic wound management using the digital twin concept is investigated.
The last contribution of this dissertation is a chronic wound healing prediction model using DL techniques, which draws on the previously developed AI models to build a chronic wound management framework based on the digital twin concept. Finally, overall conclusions are drawn, and future challenges and further developments in chronic wound management with emerging technologies are discussed.
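At the framework level, a wound digital twin is a per-patient record that each AI model (classifier, segmenter, healing predictor) updates in turn. A minimal structural sketch; the class, stage names, and fields are all hypothetical illustrations of the concept, not the dissertation's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class WoundTwin:
    """Minimal digital-twin record: each AI model appends its output,
    building up the virtual state of the patient's wound over time."""
    patient_id: str
    history: list = field(default_factory=list)

    def update(self, stage, result):
        self.history.append((stage, result))
        return self   # allow chaining the pipeline stages

twin = WoundTwin("p001")
(twin.update("classification", "venous ulcer")
     .update("segmentation", {"wound_px": 1520})
     .update("healing_forecast", {"days_to_close": 41}))
```

A real twin would persist this state, sync it with new clinic images, and feed the accumulated history back into the healing prediction model.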

    Medical Image Segmentation Using Machine Learning

    Image segmentation is the most crucial step in image processing and analysis. It divides an image into meaningfully descriptive components or pathological structures, and the result helps analyze images and classify objects. Obtaining the most accurate segmentation is therefore essential, especially for medical images. Segmentation methods can be divided into three categories: manual, semi-automatic, and automatic. Manual segmentation is the most general and straightforward approach, but it is not only time-consuming but also imprecise. Automatic image segmentation techniques, such as thresholding and edge detection, are in turn inaccurate in the presence of artifacts like noise and texture. This research shows how to extract features and use traditional machine learning methods, such as a random forest, to obtain the most accurate regions of interest in CT images. In addition, this study shows how to use a deep learning model to segment the wound area in raw images and then analyze the corresponding area in near-infrared images. The thesis first gives a brief review of current approaches to medical image segmentation and the background of deep learning, then describes different approaches to building models for segmenting CT scan images and wound images. We achieve 97.4% accuracy on CT image segmentation and an 89.8% F1-score on wound image segmentation.
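Random-forest segmentation of the kind described above treats every pixel as a sample with hand-extracted features and classifies it into region-of-interest or background. A minimal sketch assuming just two features per pixel, raw intensity and a 3x3 local mean (the thesis's actual feature set is not specified here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img):
    """Per-pixel features: raw intensity plus a 3x3 local mean for context."""
    pad = np.pad(img, 1, mode="edge")
    local_mean = sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                     for dy in range(3) for dx in range(3)) / 9.0
    return np.stack([img.ravel(), local_mean.ravel()], axis=1)

# Toy "CT slice": bright region of interest on a dark background
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
labels = (img > 0.5).astype(int).ravel()

rf = RandomForestClassifier(n_estimators=10, random_state=0)
rf.fit(pixel_features(img), labels)
pred = rf.predict(pixel_features(img)).reshape(img.shape)
```

In practice the forest is trained on annotated slices and applied to unseen ones; richer features (gradients, texture filters) are what make it robust to noise.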

    Advancing Wound Filling Extraction on 3D Faces: An Auto-Segmentation and Wound Face Regeneration Approach

    Facial wound segmentation plays a crucial role in preoperative planning and in optimizing patient outcomes in various medical applications. In this paper, we propose an efficient approach to automating 3D facial wound segmentation using a two-stream graph convolutional network. Our method leverages the Cir3D-FaIR dataset and addresses the challenge of data imbalance through extensive experimentation with different loss functions. To achieve accurate segmentation, we conducted thorough experiments and selected the highest-performing of the trained models; it demonstrates exceptional segmentation performance on complex 3D facial wounds. Building on the segmentation model, we propose an improved approach to extracting 3D facial wound fillers and compare it with the results of the previous study. Our method achieved a remarkable accuracy of 0.9999986% on the test suite, surpassing the previous method. From this result, we use 3D printing technology to illustrate the shape of the wound filling. The outcomes of this study have significant implications for physicians involved in preoperative planning and intervention design. By automating facial wound segmentation and improving the accuracy of wound-filling extraction, our approach can assist in carefully assessing and optimizing interventions, leading to enhanced patient outcomes. It also contributes to advancing facial reconstruction techniques by applying machine learning and 3D bioprinting to print skin tissue implants. Our source code is available at https://github.com/SIMOGroup/WoundFilling3D
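Conceptually, extracting a wound filler means measuring the gap between the regenerated (healthy) face surface and the damaged one over the segmented wound region. A minimal sketch of that idea using depth maps as a stand-in for the paper's mesh-based pipeline; the function and its inputs are illustrative assumptions:

```python
import numpy as np

def filler_volume(depth_wound, depth_healthy, px_area=1.0):
    """Volume of material needed to fill the wound: integrate the depth
    gap between the regenerated surface and the wounded surface."""
    gap = np.clip(depth_healthy - depth_wound, 0.0, None)  # ignore regions above the reference
    return float(gap.sum() * px_area)

healthy = np.full((4, 4), 2.0)           # flat regenerated reference surface
wound = healthy.copy()
wound[1:3, 1:3] = 1.5                    # 0.5-deep cavity over 4 pixels
vol = filler_volume(wound, healthy)      # 4 * 0.5 = 2.0 volume units
```

The resulting gap field is what a 3D printer would then materialise as the filler shape.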

    Simultaneous Wound Border Segmentation and Tissue Classification Using a Conditional Generative Adversarial Network

    Generative adversarial network (GAN) applications in medical image synthesis have the potential to assist caregivers in deciding on a proper chronic wound treatment plan by conveying the wound border segmentation and tissue classification visually. This study proposes a hybrid wound border segmentation and tissue classification method utilising a conditional GAN, which can mimic real data without expert knowledge. We trained the network on chronic wound datasets of different sizes. The performance of the GAN is evaluated through mean squared error, Dice coefficient metrics, and visual inspection of the generated images. This study also analyses the optimum number of training images as well as the number of epochs for wound border segmentation and tissue classification with the GAN. The results show that the proposed GAN model performs efficiently on wound border segmentation and tissue classification tasks with a set of 2000 images at 200 epochs.
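The two quantitative metrics named above are standard: the Dice coefficient measures mask overlap, and mean squared error measures pixel-wise distance between a generated image and its target. A minimal sketch of both (the paper's exact evaluation protocol, e.g. per-tissue-class averaging, is not given here):

```python
import numpy as np

def dice(pred, target):
    """Dice coefficient between two binary masks: 2|A n B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    return 2.0 * np.logical_and(pred, target).sum() / denom if denom else 1.0

def mse(generated, real):
    """Mean squared error between a generated image and its real target."""
    return float(np.mean((generated - real) ** 2))

pred = np.array([[1, 1], [0, 0]])
gt   = np.array([[1, 0], [0, 0]])
# |pred n gt| = 1, |pred| + |gt| = 3 -> Dice = 2/3
```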