
    Efficient refinements on YOLOv3 for real-time detection and assessment of diabetic foot Wagner grades

    Currently, the screening of Wagner grades of diabetic feet (DF) still relies on professional podiatrists. However, in less-developed countries podiatrists are scarce, leaving the majority of patients undiagnosed. In this study, we propose a real-time method for detecting and localizing Wagner grades of DF based on refinements to YOLOv3. We collected 2,688 data samples and, guided by an ablation study, applied several techniques such as visually coherent image mixup, label smoothing, and training-scheduler revamping. The experimental results suggest that the refined YOLOv3 achieves an accuracy of 91.95%, with an inference time of 31 ms per image on an NVIDIA Tesla V100. To test the model's performance on a smartphone, we also deployed the refined YOLOv3 models on an Android 9 smartphone. This work has the potential to lead to a paradigm shift in the clinical treatment of DF, providing an effective healthcare solution for DF tissue analysis and healing-status assessment.
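Two of the refinements named above, label smoothing and image mixup, are generic training tricks rather than YOLOv3-specific code. A minimal NumPy sketch (the class count, smoothing factor, and mixing weight are illustrative, not taken from the paper) might look like:

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: blend the one-hot target with a uniform
    distribution so the detector is less over-confident."""
    k = one_hot.shape[-1]                  # number of classes
    return one_hot * (1.0 - eps) + eps / k

def mixup_images(img_a, img_b, lam=0.5):
    """Visually coherent image mixup: a pixel-wise blend of two
    equally sized images; detection targets from both images are
    kept, weighted by lam and 1 - lam."""
    return lam * img_a + (1.0 - lam) * img_b

y = np.array([0.0, 1.0, 0.0])              # one-hot grade target
s = smooth_labels(y)
print(s)                                   # still sums to 1
```

The smoothed target keeps total probability mass 1 while moving `eps` of it off the true class, which regularizes the classification head.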

    Integrated Image and Location Analysis for Wound Classification: A Deep Learning Approach

    The global burden of acute and chronic wounds presents a compelling case for enhancing wound classification methods, a vital step in diagnosing and determining optimal treatments. Recognizing this need, we introduce an innovative multi-modal network based on a deep convolutional neural network for categorizing wounds into four categories: diabetic, pressure, surgical, and venous ulcers. Our multi-modal network uses wound images and their corresponding body locations for more precise classification. A unique aspect of our methodology is the incorporation of a body map system that facilitates accurate wound location tagging, improving upon traditional wound image classification techniques. A distinctive feature of our approach is the integration of models such as VGG16, ResNet152, and EfficientNet within a novel architecture. This architecture includes elements like spatial and channel-wise Squeeze-and-Excitation modules, Axial Attention, and an Adaptive Gated Multi-Layer Perceptron, providing a robust foundation for classification. Our multi-modal network was trained and evaluated on two distinct datasets comprising relevant images and corresponding location information. Notably, our proposed network outperformed traditional methods, reaching an accuracy range of 74.79% to 100% for Region of Interest (ROI) without location classifications, 73.98% to 100% for ROI with location classifications, and 78.10% to 100% for whole image classifications. This marks a significant enhancement over previously reported performance metrics in the literature. Our results indicate the potential of our multi-modal network as an effective decision-support tool for wound image classification, paving the way for its application in various clinical contexts.
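The channel-wise Squeeze-and-Excitation module mentioned above has a compact core: global-average-pool each channel, pass the pooled vector through a small two-layer bottleneck, and rescale the channels by the resulting sigmoid gate. A NumPy sketch with assumed shapes and random weights (the actual network learns these weights inside its VGG16/ResNet152/EfficientNet backbones) might look like:

```python
import numpy as np

def channel_se(x, w1, w2):
    """Channel-wise Squeeze-and-Excitation.
    x:  feature map of shape (C, H, W)
    w1: bottleneck weights, shape (C // r, C)
    w2: expansion weights, shape (C, C // r)"""
    z = x.mean(axis=(1, 2))                 # squeeze: (C,)
    s = np.maximum(w1 @ z, 0.0)             # excitation, ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))     # sigmoid gate in (0, 1)
    return x * s[:, None, None]             # rescale each channel

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))          # 8 channels, 4x4 spatial
w1 = rng.standard_normal((2, 8)) * 0.1      # reduction ratio r = 4
w2 = rng.standard_normal((8, 2)) * 0.1
y = channel_se(x, w1, w2)
print(y.shape)                              # (8, 4, 4)
```

Because the gate is strictly positive, the module attenuates or preserves channels but never flips their sign, which is why it composes safely with residual backbones.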

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.

    Medical imaging analysis with artificial neural networks

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.

    Applications of Machine Learning in Medical Prognosis Using Electronic Medical Records

    Approximately 84% of hospitals in the United States have adopted electronic medical records (EMR). EMR is a vital resource that helps clinicians diagnose the onset of a disease or predict a patient's future condition. With advances in machine learning, many research projects attempt to extract medically relevant and actionable data from massive EMR databases using machine learning algorithms. However, collecting patients' prognosis factors from EMR is challenging due to privacy, sensitivity, and confidentiality. In this study, we developed medical generative adversarial networks (GANs) to generate synthetic EMR prognosis factors using minimal information collected during routine care in specialized healthcare facilities. The generated prognosis variables were used in developing predictive models for (1) chronic wound healing in patients diagnosed with Venous Leg Ulcers (VLUs) and (2) antibiotic resistance in patients diagnosed with skin and soft tissue infections (SSTIs). Our proposed medical GANs, EMR-TCWGAN and DermaGAN, can produce both continuous and categorical features from EMR. We utilized conditional training strategies to enhance training and generate classified data regarding healing vs. non-healing in EMR-TCWGAN and susceptibility vs. resistance in DermaGAN. The ability of the proposed GAN models to generate realistic EMR data was evaluated by TSTR (test on the synthetic, train on the real), discriminative accuracy, and visualization. We analyzed the practicality of the synthetic data augmentation technique in improving the wound healing prognostic model and the antibiotic resistance classifier. By using the synthetic samples generated by the GANs in the training process, we achieved an area under the curve (AUC) of 0.875 in the wound healing prognosis model and an average AUC of 0.830 in the antibiotic resistance classifier. These results suggest that GANs can be considered a data augmentation method to generate realistic EMR data.
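The TSTR protocol above trains a classifier on synthetic records and scores it on real ones, typically by AUC. The AUC itself reduces to the Mann-Whitney statistic: the probability that a randomly chosen positive is scored above a randomly chosen negative (ties count half). A self-contained sketch with toy labels and scores (not the study's data):

```python
import numpy as np

def auc_score(y_true, y_score):
    """Area under the ROC curve via the Mann-Whitney statistic:
    fraction of (positive, negative) pairs where the positive
    receives the higher score, with half credit for ties."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive ranked higher
    ties = (pos[:, None] == neg[None, :]).sum()  # equal scores
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

y_true = np.array([0, 0, 1, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.2])
auc = auc_score(y_true, y_score)
print(auc)
```

With these toy values the positives win 8 of the 9 pairwise comparisons, so the AUC is 8/9; a perfectly separating classifier would score 1.0.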

    Artificial Intelligence-Powered Chronic Wound Management System: Towards Human Digital Twins

    Artificial Intelligence (AI) has witnessed increased application and widespread adoption over the past decade. AI applications to medical images have the potential to assist caregivers in deciding on a proper chronic wound treatment plan by helping them understand wound and tissue classification and border segmentation, as well as visual image synthesis. This dissertation explores chronic wound management using AI methods such as Generative Adversarial Networks (GANs) and Explainable AI (XAI) techniques. The wound images are collected, grouped, and processed. One primary objective of this research is to develop a series of AI models, not only to present the potential of AI in wound management but also to develop the building blocks of human digital twins. First, the motivations, contributions, and dissertation outline are summarized to introduce the aim and scope of the dissertation. The first contribution of this study is a chronic wound classification model with explanations provided by XAI; this model also benefits from transfer learning to improve performance. Next, a novel model is developed that performs wound border segmentation and tissue classification simultaneously, realized with a GAN-based Deep Learning (DL) architecture. Another novel model creates lifelike wounds: it takes the output of the previous model as input and generates new chronic wound images, so that any tissue distribution can be converted into a lifelike wound while preserving the shape of the original. This research is then extended to build a digital twin for chronic wound management: chronic wounds and the enabling technologies for wound-care digital twins are examined, and a general framework for chronic wound management using the digital twin concept is investigated. The last contribution of this dissertation is a chronic wound healing prediction model using DL techniques, which draws on the previously developed AI models to build a chronic wound management framework around the digital twin concept. Lastly, overall conclusions are drawn, and future challenges and further developments in chronic wound management are discussed in light of emerging technologies.

    Long‐Term Imaging of Wound Angiogenesis with Large Scale Optoacoustic Microscopy

    Wound healing is a well-coordinated process, necessitating efficient formation of new blood vessels. Vascularization defects are therefore a major risk factor for chronic, non-healing wounds. The dynamics of mammalian tissue revascularization, vessel maturation, and remodeling remain poorly understood due to a lack of suitable in vivo imaging tools. A label-free large-scale optoacoustic microscopy (LSOM) approach is developed for rapid, non-invasive, volumetric imaging of tissue regeneration over large areas spanning up to 50 mm with a depth penetration of 1.5 mm. Vascular networks in dorsal mouse skin and full-thickness excisional wounds are imaged with capillary resolution during the course of healing, revealing previously undocumented views of the angiogenesis process in an unperturbed wound environment. Development of an automatic analysis framework enables the identification of key features of wound angiogenesis, including vessel length, diameter, tortuosity, and angular alignment. The approach offers a versatile tool for preclinical research in tissue engineering and regenerative medicine, empowering label-free, longitudinal, high-throughput, and quantitative studies of the microcirculation in processes associated with normal and impaired vascular remodeling, and analysis of vascular responses to pharmacological interventions in vivo.
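Of the vessel features listed above, tortuosity is the simplest to make concrete: a common definition is the arc-chord ratio of a vessel centerline, i.e. path length divided by endpoint-to-endpoint distance. The paper does not publish its exact formula, so this is a standard metric, not necessarily the authors' implementation:

```python
import numpy as np

def tortuosity(points):
    """Arc-chord tortuosity of a vessel centerline: total length
    along the polyline divided by the straight-line distance
    between its endpoints (1.0 for a perfectly straight vessel)."""
    points = np.asarray(points, dtype=float)
    arc = np.linalg.norm(np.diff(points, axis=0), axis=1).sum()
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord

straight = [(0, 0), (1, 0), (2, 0)]   # straight 2-unit segment
bent = [(0, 0), (1, 1), (2, 0)]       # same endpoints, kinked path
print(tortuosity(straight))           # 1.0
print(tortuosity(bent))
```

The bent path travels 2·√2 units over a 2-unit chord, giving a tortuosity of √2 ≈ 1.414; in practice the centerline points would come from skeletonizing the segmented vessel mask.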

    Mobile Wound Assessment and 3D Modeling from a Single Image

    The prevalence of camera-enabled mobile phones has made mobile wound assessment a viable treatment option for millions of previously difficult-to-reach patients. We have designed a complete mobile wound assessment platform to ameliorate the many challenges related to chronic wound care. Chronic wounds and infections are the most severe, costly, and fatal types of wounds, placing them at the center of mobile wound assessment. Wound physicians assess thousands of single-view wound images from all over the world, and it may be difficult to determine the location of the wound on the body, for example, if the image is taken at close range. In our solution, end-users capture an image of the wound by taking a picture with their mobile camera. The wound image is segmented and classified using modern convolutional neural networks, and is stored securely in the cloud for remote tracking. We use an interactive semi-automated approach to allow users to specify the location of the wound on the body. To accomplish this we have created, to the best of our knowledge, the first 3D human surface anatomy labeling system, based on the current NYU and Anatomy Mapper labeling systems. To interactively view wounds in 3D, we present an efficient projective texture mapping algorithm for texturing wounds onto a 3D human anatomy model. In doing so, we demonstrate an approach to 3D wound reconstruction that works even from a single wound image.
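The core of projective texture mapping is projecting each 3D vertex of the anatomy model through the camera that took the wound photo, then using the resulting pixel positions as texture coordinates. A minimal pinhole-camera sketch with made-up intrinsics (the paper's actual algorithm and camera parameters are not reproduced here):

```python
import numpy as np

def project_to_uv(vertices, K, R, t, width, height):
    """Project 3D model vertices through a pinhole camera
    (intrinsics K, rotation R, translation t) and normalize the
    pixel coordinates into [0, 1] texture UVs."""
    cam = (R @ np.asarray(vertices, float).T).T + t   # world -> camera
    pix = (K @ cam.T).T                               # camera -> image plane
    pix = pix[:, :2] / pix[:, 2:3]                    # perspective divide
    return pix / np.array([width, height])            # pixels -> UV

# Hypothetical 100x100 camera looking down the z-axis from 2 units away.
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])
verts = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.0]])
uv = project_to_uv(verts, K, R, t, 100, 100)
print(uv)   # [[0.5, 0.5], [0.75, 0.75]]
```

A vertex on the optical axis lands at the image center (UV 0.5, 0.5); a full renderer would additionally discard vertices that face away from the camera or are occluded.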

    A Deep Learning Approach to Teeth Segmentation and Orientation from Panoramic X-rays

    Accurate teeth segmentation and orientation are fundamental in modern oral healthcare, enabling precise diagnosis, treatment planning, and dental implant design. In this study, we present a comprehensive approach to teeth segmentation and orientation from panoramic X-ray images, leveraging deep learning techniques. We build our model on FUSegNet, a popular model originally developed for wound segmentation, and introduce modifications by incorporating grid-based attention gates into the skip connections. We introduce oriented bounding box (OBB) generation through principal component analysis (PCA) for precise tooth orientation estimation. Evaluating our approach on the publicly available DNS dataset, comprising 543 panoramic X-ray images, we achieve the highest Intersection-over-Union (IoU) score of 82.43% and Dice Similarity Coefficient (DSC) score of 90.37% among compared models in teeth instance segmentation. In OBB analysis, we obtain a Rotated IoU (RIoU) score of 82.82%. We also conduct detailed analyses of individual tooth labels and categorical performance, shedding light on strengths and weaknesses. The proposed model's accuracy and versatility offer promising prospects for improving dental diagnoses, treatment planning, and personalized healthcare in the oral domain. Our generated OBB coordinates and codes are available at https://github.com/mrinal054/Instance_teeth_segmentation
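The PCA-based OBB generation described above can be sketched directly: the eigenvectors of the covariance of a tooth mask's foreground pixel coordinates give the box orientation, and the extremes of the pixels projected onto those axes give its extent. An illustrative reimplementation on a toy mask, not the authors' released code:

```python
import numpy as np

def obb_from_mask(mask):
    """Oriented bounding box of a binary mask via PCA: principal
    axes of the foreground pixel coordinates define the box frame;
    min/max projections onto those axes define its extent.
    Returns the four box corners in (x, y) image coordinates."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)
    _, vecs = np.linalg.eigh(cov)          # columns = principal axes
    proj = (pts - center) @ vecs           # pixels in the PCA frame
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    corners = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                        [hi[0], hi[1]], [lo[0], hi[1]]])
    return corners @ vecs.T + center       # back to image coordinates

mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 1:9] = True                      # toy axis-aligned 8x3 blob
box = obb_from_mask(mask)
print(np.round(box, 2))
```

For this axis-aligned toy blob the recovered corners are simply the rectangle's extremes; on a tilted tooth mask the same code yields a rotated box, from which an RIoU against the ground-truth box can be computed.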