
    Imparting 3D representations to artificial intelligence for a full assessment of pressure injuries.

    During recent decades, researchers have shown great interest in machine learning techniques for extracting meaningful information from the large amounts of data collected each day. In the medical field especially, images play a significant role in detecting several health issues; medical image analysis therefore contributes substantially to the diagnostic process and is a natural setting for intelligent systems. Deep Learning (DL) has recently captured the interest of researchers, as it has proven efficient at detecting underlying features in data and has outperformed classical machine learning methods. The main objective of this dissertation is to demonstrate the efficiency of Deep Learning techniques, through medical imaging, in tackling an important health issue facing our society. Pressure injuries (PIs) are a dermatological health issue associated with increased morbidity and healthcare costs, and managing them appropriately is increasingly important for all professionals in wound care. Using 2D photographs and 3D meshes of these wounds, collected from collaborating hospitals, our mission is to create intelligent systems for a full, non-intrusive assessment of these wounds. Five main tasks have been achieved in this study: a literature review of wound imaging methods using machine learning techniques; the classification and segmentation of the tissue types inside the pressure injury; the segmentation of these wounds; the design of an end-to-end system that measures, from 3D meshes, all the quantitative information needed for an efficient assessment of PIs; and the integration of the assessment imaging techniques into a web-based application.
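One basic quantitative measure that can be taken from a 3D wound mesh is its surface area. A minimal sketch of that computation follows; the function name and the vertex/face array representation are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np

def mesh_surface_area(vertices, faces):
    """Total surface area of a triangle mesh: the sum of per-face areas,
    each half the norm of the cross product of two edge vectors."""
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    e1 = v[f[:, 1]] - v[f[:, 0]]  # first edge of every triangle
    e2 = v[f[:, 2]] - v[f[:, 0]]  # second edge of every triangle
    return 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1).sum()

# Unit square in the XY plane, split into two triangles: total area 1.0
verts = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
faces = [[0, 1, 2], [0, 2, 3]]
print(mesh_surface_area(verts, faces))  # 1.0
```

The same per-triangle summation applies to an arbitrary curved wound surface, which is what makes a mesh-based measurement non-intrusive: no physical contact with the wound is needed.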

    A Mobile App for Wound Localization using Deep Learning

    We present an automated wound localizer for 2D wound and ulcer images based on a deep neural network, as the first step towards building a complete automated wound diagnostic system. The localizer is built on the YOLOv3 model, which is then turned into an iOS mobile application. It detects the wound and its surrounding tissues and isolates the localized wound region from the image, which is very helpful for subsequent processing such as wound segmentation and classification because unnecessary regions are removed from the wound images. For mobile app development with video processing, a lighter version of YOLOv3 named tiny-YOLOv3 is used. The model is trained and tested on our own image dataset, built in collaboration with the AZH Wound and Vascular Center, Milwaukee, Wisconsin. Compared against an SSD model, YOLOv3 achieves an mAP of 93.9%, much better than the SSD model's 86.4%. The robustness and reliability of these models are also tested on a publicly available dataset named Medetec, where they perform very well. Comment: 8 pages, 5 figures, 1 table
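The mAP figures quoted above are built on intersection over union (IoU), the overlap score that decides whether a predicted box counts as a correct detection. A minimal sketch of IoU for axis-aligned boxes, assuming the common `(x1, y1, x2, y2)` corner convention (not taken from the paper itself):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25/175 = 0.14285714285714285
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (0.5 is common); mAP then averages precision over recall levels and classes.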

    Medical Image Segmentation with Deep Convolutional Neural Networks

    Medical imaging is the technique and process of creating visual representations of the body of a patient for clinical analysis and medical intervention. Healthcare professionals rely heavily on medical images and image documentation for proper diagnosis and treatment. However, manual interpretation and analysis of medical images are time-consuming and can be inaccurate when the interpreter is not well-trained. Fully automatic segmentation of the region of interest from medical images has been researched for years to improve both the efficiency and the accuracy of understanding such images. With the advance of deep learning, various neural network models have achieved great success in semantic segmentation and have sparked research interest in medical image segmentation using deep learning. We propose three convolutional frameworks to segment tissues from different types of medical images. Comprehensive experiments and analyses are conducted on various segmentation neural networks to demonstrate the effectiveness of our methods. Furthermore, the datasets built for training our networks and full implementations are published.
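Segmentation frameworks like these are usually evaluated by comparing the predicted mask against an expert-annotated one. A short sketch of the Dice similarity coefficient, a standard overlap metric for this purpose (shown here as general background, not as this thesis's specific evaluation protocol):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A intersect B| / (|A|+|B|).
    1.0 means perfect overlap, 0.0 means no overlap; eps avoids 0/0."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])  # predicted mask
b = np.array([[1, 0, 0], [0, 1, 1]])  # ground-truth mask
print(dice(a, b))  # 2*2/(3+3) ~ 0.667
```

Dice is also widely used directly as a differentiable training loss (in a "soft" form over probabilities), which helps with the class imbalance typical of small regions of interest.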

    Mobile Wound Assessment and 3D Modeling from a Single Image

    The prevalence of camera-enabled mobile phones has made mobile wound assessment a viable treatment option for millions of previously hard-to-reach patients. We have designed a complete mobile wound assessment platform to address the many challenges of chronic wound care. Chronic wounds and infections are the most severe, costly, and fatal types of wounds, placing them at the center of mobile wound assessment. Wound physicians assess thousands of single-view wound images from all over the world, and it can be difficult to determine where on the body a wound lies, for example when the image is taken at close range. In our solution, end-users capture an image of the wound with their mobile camera. The wound image is segmented and classified using modern convolutional neural networks and is stored securely in the cloud for remote tracking. We use an interactive, semi-automated approach that lets users specify the location of the wound on the body. To accomplish this we have created, to the best of our knowledge, the first 3D human surface anatomy labeling system, based on the current NYU and Anatomy Mapper labeling systems. To view wounds interactively in 3D, we present an efficient projective texture mapping algorithm for texturing wounds onto a 3D human anatomy model. In so doing, we demonstrate an approach to 3D wound reconstruction that works even from a single wound image.
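The core of projective texture mapping is treating the photograph as if it were projected from a camera onto the mesh: each vertex is pushed through the camera model and its pixel position becomes a texture coordinate. A minimal pinhole-camera sketch of that step, with illustrative names and values (the platform's actual camera model and pipeline are not specified in the abstract):

```python
import numpy as np

def project_uv(vertices, K, R, t, img_w, img_h):
    """Project 3D vertices through a pinhole camera (intrinsics K,
    rotation R, translation t) and normalise pixel coordinates to
    [0, 1] texture UVs -- the core step of projective texture mapping."""
    v = np.asarray(vertices, dtype=float)
    cam = (R @ v.T).T + t           # world -> camera coordinates
    pix = (K @ cam.T).T             # camera -> homogeneous pixel coords
    pix = pix[:, :2] / pix[:, 2:3]  # perspective divide
    return pix / np.array([img_w, img_h])

# Toy 100x100-pixel camera with focal length 100 and principal point (50, 50)
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
uv = project_uv([[0.0, 0.0, 1.0]], K, R, t, 100, 100)
print(uv)  # a point on the optical axis maps to the image centre: [[0.5 0.5]]
```

A full implementation would additionally reject vertices facing away from the camera or occluded by other parts of the body, so that only surfaces visible in the photograph receive wound texture.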

    Wound Image Classification Using Deep Convolutional Neural Networks

    Artificial Intelligence (AI) includes subfields such as Machine Learning (ML) and Deep Learning (DL) and concerns intelligent systems that mimic human behavior. ML has been used in a wide range of fields; in the healthcare domain in particular, medical images often need to be carefully processed through operations such as classification and segmentation. Unlike traditional ML methods, DL algorithms are based on deep neural networks trained on large amounts of labeled data to extract features without human intervention. DL algorithms have become popular and powerful for classifying and segmenting medical images in recent years. In this thesis, we study the classification of smartphone wound images using deep learning. Specifically, we apply deep convolutional neural networks (DCNNs) to wound images to classify them into multiple types, including diabetic, pressure, venous, and surgical; we also use DCNNs for wound tissue classification. First, an extensive review of existing DL-based methods for wound image classification is conducted, and comprehensive taxonomies are provided for the reviewed studies. Then, we use a DCNN for binary and 3-class classification of burn wound images; accuracy in the binary case improves considerably over previous work in the literature. In addition, we propose an ensemble DCNN-based classifier for image-wise wound classification. We train and test our model on a valuable new set of wound images of different types, kindly shared by the AZH Wound and Vascular Center in Milwaukee; the dataset has been shared with researchers in the field. Our proposed classifier outperforms the common DCNNs in classification accuracy on our own dataset, and it was also evaluated on a public wound image dataset. The results show that the proposed method can be used for wound image classification and similar applications.
Finally, experiments are conducted on a dataset including different tissue types, such as slough, granulation, and callus, annotated by the wound specialists from the AZH Center, to classify wound pixels into different classes. Preliminary results of the tissue classification experiments using DCNNs are provided, along with future directions.
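A common way to build an ensemble of DCNN classifiers like the one described is to average the per-class probabilities the individual networks produce and take the argmax. A minimal sketch with made-up softmax outputs (the thesis's actual combination rule may differ):

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average each model's class-probability matrix (images x classes)
    and return the per-image argmax -- soft-voting ensembling."""
    return np.mean(np.stack(prob_list), axis=0).argmax(axis=1)

# Hypothetical softmax outputs of two models for 2 images and 4 wound
# classes (e.g. diabetic, pressure, venous, surgical)
m1 = np.array([[0.7, 0.1, 0.1, 0.1], [0.2, 0.5, 0.2, 0.1]])
m2 = np.array([[0.6, 0.2, 0.1, 0.1], [0.1, 0.2, 0.6, 0.1]])
print(ensemble_predict([m1, m2]))  # [0 2]
```

Soft voting lets a confident model outweigh an uncertain one on a given image, which is why ensembles often beat each constituent DCNN on its own.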

    Novel Computerised Techniques for Recognition and Analysis of Diabetic Foot Ulcers

    Diabetic Foot Ulcers (DFU) affecting the lower extremities are a major complication of Diabetes Mellitus (DM). Patients with diabetes have an estimated lifetime risk of 15% to 25% of developing a DFU, and failure to recognise and treat DFU properly contributes to up to 85% of lower-limb amputations. Current practice for DFU screening involves manual inspection of the foot by podiatrists, with further medical tests such as vascular and blood tests used to determine the presence of ischemia and infection. A comprehensive review of computerised techniques for DFU recognition was performed to identify the work done so far in this field. During this stage, it became clear that computerised analysis of DFU is a relatively young field, which is why the related literature and research works are limited; there is also a lack of standardised public databases of DFU and other wound-related pathologies. We received approximately 1,500 DFU images under ethical approval with Lancashire Teaching Hospitals. In this work, we standardised both the DFU dataset and the expert annotations to perform different computer vision tasks, such as classification, segmentation, and localisation, on popular deep learning frameworks. The main focus of this thesis is to develop automatic computer vision methods that can recognise DFU of different stages and grades. Firstly, we used machine learning algorithms to classify DFU patches against normal skin patches of the foot region and to examine the possible misclassified cases of both classes. Secondly, we used fully convolutional networks to segment the DFU and surrounding skin in full foot images with high specificity and sensitivity. Finally, we used robust and lightweight deep localisation methods on mobile devices to detect DFU in foot images for remote monitoring.
Despite very good performance in recognising DFU, these algorithms were not able to detect pre-ulcer conditions or very subtle DFU. Although recognition of DFU by computer vision algorithms is a valuable study in itself, we further analysed DFU in foot images to determine factors that predict the risk of amputation, such as the presence of infection and ischemia. With more data and expert annotations, a complete DFU diagnosis system built from these computer vision algorithms has the potential to deliver a paradigm shift in diabetic foot care, offering a cost-effective, remote, and convenient healthcare solution.
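The sensitivity and specificity quoted for the segmentation step are simple confusion-matrix ratios. A minimal sketch over binary labels (background shown for clarity, not taken from the thesis itself):

```python
def sensitivity_specificity(pred, target):
    """Sensitivity (true-positive rate) and specificity (true-negative
    rate) computed from two equal-length binary label sequences."""
    tp = sum(p and t for p, t in zip(pred, target))
    tn = sum((not p) and (not t) for p, t in zip(pred, target))
    fp = sum(p and (not t) for p, t in zip(pred, target))
    fn = sum((not p) and t for p, t in zip(pred, target))
    return tp / (tp + fn), tn / (tn + fp)

pred   = [1, 1, 0, 0, 1, 0]  # e.g. per-pixel "ulcer" predictions
target = [1, 0, 0, 0, 1, 1]  # expert annotations
print(sensitivity_specificity(pred, target))  # sensitivity 2/3, specificity 2/3
```

Reporting both matters for a screening tool: sensitivity tracks how many true ulcer pixels are caught, while specificity tracks how much healthy skin is wrongly flagged.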

    Medical Image Segmentation with Deep Learning

    Medical imaging is the technique and process of creating visual representations of the body of a patient for clinical analysis and medical intervention. Healthcare professionals rely heavily on medical images and image documentation for proper diagnosis and treatment. However, manual interpretation and analysis of medical images are time-consuming and can be inaccurate when the interpreter is not well-trained. Fully automatic segmentation of the region of interest from medical images has been researched for years to improve both the efficiency and the accuracy of understanding such images. With the advance of deep learning, various neural network models have achieved great success in semantic segmentation and have sparked research interest in medical image segmentation using deep learning. We propose two convolutional frameworks to segment tissues from different types of medical images. Comprehensive experiments and analyses are conducted on various segmentation neural networks to demonstrate the effectiveness of our methods. Furthermore, the datasets built for training our networks and full implementations are published.

    Structural integrity of aortic scaffolds decellularized by sonication decellularization system

    The sonication decellularization technique has proven effective at removing all cellular components, by disrupting cell membranes and removing cell debris, to prepare bioscaffolds. However, it is important to confirm that the technique has no detrimental effect on the elastin and collagen in the bioscaffolds. The objectives of this study were to evaluate the structural integrity of the bioscaffolds using histological staining and quantitative measurement of collagen and elastin. Aortic tissues were sonicated in 0.1% SDS for 10 hours at a frequency of 170 kHz with a power output of 15 W, then washed in phosphate-buffered saline (PBS) for 5 days. The sonicated aortic tissues were evaluated with Hematoxylin & Eosin (H&E) staining for cell-removal analysis, Verhoeff-van Gieson (VVG) staining to visualize elastin, and Picrosirius Red (PSR) staining to visualize collagen. The collagen and elastic fibres were semi-quantified with ImageJ software. The results showed that the sonication decellularization system can remove all cellular components while maintaining the structural integrity of the elastin and collagen in the bioscaffolds, indicating that it preserves the structure of the extracellular matrix.
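The ImageJ-style semi-quantification mentioned above typically reduces to thresholding the stained image and reporting the fraction of positive pixels. A toy sketch of that area-fraction idea on a synthetic image (the study's actual ImageJ workflow and thresholds are not given in the abstract):

```python
import numpy as np

def area_fraction(gray, threshold):
    """Fraction of pixels whose intensity exceeds a threshold -- the
    principle behind ImageJ-style area-fraction semi-quantification
    of stained fibres in a tissue section."""
    return (np.asarray(gray) > threshold).mean()

# Synthetic 4x4 "stained section": 4 of 16 pixels are strongly stained
img = np.zeros((4, 4), dtype=np.uint8)
img[:2, :2] = 200
print(area_fraction(img, 128))  # 0.25
```

Comparing the collagen and elastin area fractions before and after sonication is what lets such a study claim the scaffold's structural proteins were preserved.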

    Artificial Intelligence-Powered Chronic Wound Management System: Towards Human Digital Twins

    Artificial Intelligence (AI) has witnessed increased application and widespread adoption over the past decade. Applying AI to medical images can assist caregivers in deciding on a proper chronic wound treatment plan by helping them with wound and tissue classification and border segmentation, as well as visual image synthesis. This dissertation explores chronic wound management using AI methods such as Generative Adversarial Networks (GANs) and Explainable AI (XAI) techniques. The wound images are collected, grouped, and processed. A primary objective of this research is to develop a series of AI models, not only to demonstrate the potential of AI in wound management but also to provide the building blocks of human digital twins. First, the motivations, contributions, and dissertation outline are summarized to introduce the aim and scope of the work. The first contribution of this study is a chronic wound classifier whose decisions are explained using XAI; this model also benefits from transfer learning to improve performance. Next, a novel model is developed that performs wound border segmentation and tissue classification simultaneously, using a Deep Learning (DL) architecture, namely a GAN. Another novel model is developed for creating lifelike wound images: taking the output of the previous model as input, it generates new chronic wound images, and any tissue distribution can be converted to a lifelike wound while preserving the shape of the original. This research is then extended to build a digital twin for chronic wound management: chronic wounds and the enabling technologies for wound-care digital twins are examined, and a general framework for chronic wound management using the digital twin concept is investigated.
The last contribution of this dissertation is a chronic wound healing prediction model using DL techniques, which draws on the previously developed AI models to build a chronic wound management framework based on the digital twin concept. Finally, overall conclusions are drawn, and future challenges and further developments in chronic wound management with emerging technologies are discussed.