
    Medical Image Segmentation with Deep Learning

    Get PDF
    Medical imaging is the technique and process of creating visual representations of a patient's body for clinical analysis and medical intervention. Healthcare professionals rely heavily on medical images and image documentation for proper diagnosis and treatment. However, manual interpretation and analysis of medical images are time-consuming and can be inaccurate when the interpreter is not well trained. Fully automatic segmentation of the region of interest from medical images has been researched for years to enhance the efficiency and accuracy of understanding such images. With the advance of deep learning, various neural network models have achieved great success in semantic segmentation and sparked research interest in medical image segmentation using deep learning. We propose two convolutional frameworks to segment tissues from different types of medical images. Comprehensive experiments and analyses are conducted on various segmentation neural networks to demonstrate the effectiveness of our methods. Furthermore, the datasets built for training our networks and full implementations are published.

    Deep Learning-Based Object Detection in Wound Images

    Get PDF
    Developing a deep neural network for wound localization was the first step towards an efficient and fully automated wound healing system. A wound localizer was developed in this research using the YOLOv3 model, and an iOS mobile app was also created with the developed localization algorithm. The developed system can detect a wound and its surrounding tissue and isolate the localized portion of the wound for future care. This will support wound segmentation and classification by eliminating redundant details from wound photos. A lighter variant of YOLOv3, called tiny-YOLOv3, is used for video processing on mobile devices. The model is trained and tested on an independently created dataset, designed in collaboration with the AZH Wound and Vascular Center, Milwaukee, Wisconsin. YOLOv3 is compared with the SSD model, achieving a mAP of 93.9%, which is much better than SSD (86.4%). Both models also show very good robustness and reliability when evaluated on a publicly available dataset.
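
    The isolation step described above, cropping the localized wound plus a little surrounding tissue out of the photo, can be sketched as follows (the function name and margin parameter are illustrative, not taken from the paper):

```python
import numpy as np

def crop_wound_region(image, box, margin=0.1):
    """Crop a detected wound bounding box from an image, keeping a small
    margin of surrounding tissue. box = (x1, y1, x2, y2) in pixels."""
    h, w = image.shape[:2]
    x1, y1, x2, y2 = box
    mx = int((x2 - x1) * margin)   # horizontal margin in pixels
    my = int((y2 - y1) * margin)   # vertical margin in pixels
    x1 = max(0, x1 - mx); y1 = max(0, y1 - my)
    x2 = min(w, x2 + mx); y2 = min(h, y2 + my)
    return image[y1:y2, x1:x2]

# Toy example: a 640x480 photo with a detected box of 200x100 pixels
photo = np.zeros((480, 640, 3), dtype=np.uint8)
crop = crop_wound_region(photo, (100, 100, 300, 200))
print(crop.shape)  # (120, 240, 3)
```

In a full pipeline, the bounding box would come from the YOLOv3 detector and the crop would be passed on to segmentation or classification.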

    A Survey on Deep Learning in Medical Image Analysis

    Full text link
    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.

    Efficient refinements on YOLOv3 for real-time detection and assessment of diabetic foot Wagner grades

    Full text link
    Currently, the screening of Wagner grades of diabetic feet (DF) still relies on professional podiatrists. However, in less-developed countries podiatrists are scarce, leaving the majority of patients undiagnosed. In this study, we propose a real-time detection and localization method for Wagner grades of DF based on refinements to YOLOv3. We collected 2,688 data samples and implemented several methods, such as visually coherent image mixup, label smoothing, and training scheduler revamping, based on an ablation study. The experimental results suggest that the refined YOLOv3 achieves an accuracy of 91.95%, with inference on a single image taking 31 ms on an NVIDIA Tesla V100. To test the performance of the model on a smartphone, we deployed the refined YOLOv3 model on an Android 9 smartphone. This work has the potential to lead to a paradigm shift in the clinical treatment of DF, providing an effective healthcare solution for DF tissue analysis and healing status.
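
    Label smoothing, one of the training refinements mentioned above, can be illustrated with a minimal sketch. The six-class setup mirrors Wagner grades 0-5, but the exact formulation used in the paper may differ:

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: soften one-hot targets so the model is trained to be
    less over-confident. With K classes, the true class gets 1 - eps + eps/K
    and every other class gets eps/K."""
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

# One-hot target for Wagner grade 2 out of the six grades 0-5
y = np.eye(6)[2]
s = smooth_labels(y)
print(s.sum())  # still a valid distribution summing to 1
```

The smoothed targets are then used in place of the hard one-hot labels when computing the cross-entropy loss.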

    Medical Image Segmentation with Deep Convolutional Neural Networks

    Get PDF
    Medical imaging is the technique and process of creating visual representations of a patient's body for clinical analysis and medical intervention. Healthcare professionals rely heavily on medical images and image documentation for proper diagnosis and treatment. However, manual interpretation and analysis of medical images are time-consuming and can be inaccurate when the interpreter is not well trained. Fully automatic segmentation of the region of interest from medical images has been researched for years to enhance the efficiency and accuracy of understanding such images. With the advance of deep learning, various neural network models have achieved great success in semantic segmentation and sparked research interest in medical image segmentation using deep learning. We propose three convolutional frameworks to segment tissues from different types of medical images. Comprehensive experiments and analyses are conducted on various segmentation neural networks to demonstrate the effectiveness of our methods. Furthermore, the datasets built for training our networks and full implementations are published.

    Automated wound segmentation and classification of seven common injuries in forensic medicine

    Full text link
    In forensic medical investigations, physical injuries are documented with photographs accompanied by written reports. Automatic segmentation and classification of wounds on these photographs could provide forensic pathologists with a tool to improve the assessment of injuries and accelerate the reporting process. In this pilot study, we trained and compared several pre-existing deep learning architectures for image segmentation and wound classification on forensically relevant photographs in our database. The best scores were a mean pixel accuracy of 69.4% and a mean intersection over union (IoU) of 48.6% when evaluating the trained models on our test set. The models had difficulty distinguishing the background from wounded areas. As an example, image pixels showing subcutaneous hematomas or skin abrasions were assigned to the background class in 31% of cases. Stab wounds, on the other hand, were reliably classified with a pixel accuracy of 93%. These results can be partially attributed to undefined wound boundaries for some types of injuries, such as subcutaneous hematoma. However, despite the large class imbalance, we demonstrate that the best trained models could reliably distinguish among seven of the most common wounds encountered in forensic medical investigations.
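
    The reported metrics, mean pixel accuracy and mean intersection over union (IoU), can be computed per class as in this minimal sketch (the toy label arrays are illustrative, not the study's data):

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return (pred == gt).mean()

def mean_iou(pred, gt, n_classes):
    """Mean intersection over union, averaged over classes that occur
    in either the prediction or the ground truth."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x4 label maps with two classes (0 = background, 1 = wound)
gt   = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1]])
pred = np.array([[0, 0, 1, 0],
                 [0, 0, 1, 1]])
print(pixel_accuracy(pred, gt))  # 0.875
print(mean_iou(pred, gt, 2))     # 0.775
```

Note that with heavy class imbalance the mean IoU can be far below the pixel accuracy, as the abstract's numbers (48.6% vs. 69.4%) illustrate.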

    Deep Learning Applications in Medical Image and Shape Analysis

    Get PDF
    Deep learning is one of the most rapidly growing fields in computer and data science of the past few years. It has been widely used for feature extraction and recognition in various applications. The training process, treated as a black box, utilizes deep neural networks whose parameters are adjusted by minimizing the difference between the predicted feedback and the labeled data (the so-called training dataset). The trained model is then applied to unknown inputs to predict results that mimic human decision-making. This technology has found tremendous success in many fields involving data analysis, such as images, shapes, text, and audio and video signals. In medical applications, images have been regularly used by physicians to diagnose diseases, make treatment plans, and track the progress of patient treatment. One of the most challenging and common problems in image processing is segmentation of features of interest, so-called feature extraction. To this end, we aim to develop a deep learning framework in the current thesis to extract regions of interest in wound images. In addition, we investigate deep learning approaches for segmentation of 3D surface shapes as a potential tool for surface analysis in our future work. Experiments are presented and discussed for both 2D image and 3D shape analysis using deep learning networks.
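
    The training idea summarized above, adjusting parameters to minimize the difference between predictions and labeled data, can be sketched with a toy gradient-descent fit of a single linear neuron (illustrative only, not the thesis's networks):

```python
import numpy as np

# Generate a labeled "training dataset" from known weights
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w

# Adjust the parameters w to minimize mean squared error between
# predictions and labels, the core loop behind neural network training
w = np.zeros(3)
lr = 0.1
for _ in range(200):
    pred = X @ w
    grad = X.T @ (pred - y) / len(y)  # gradient of the MSE loss
    w -= lr * grad                    # gradient-descent update

print(np.round(w, 2))  # recovers values close to true_w
```

Deep networks replace the single linear map with many stacked nonlinear layers and compute the gradient by backpropagation, but the update rule is the same in spirit.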

    Image Analysis System for Early Detection of Cardiothoracic Surgery Wound Alterations Based on Artificial Intelligence Models

    Get PDF
    Funding Information: This work is part of a research project funded by Fundação para a Ciência e Tecnologia, which aims to design and implement a post-surgical digital telemonitoring service for cardiothoracic surgery patients. The main goals of the research project are to study the impact of daily telemonitoring on early diagnosis, to reduce hospital readmissions, and to improve patient safety during the 30-day period after hospital discharge. This remote follow-up involves a digital remote patient monitoring kit, which includes a sphygmomanometer, a scale, a smartwatch, and a smartphone, allowing daily patient data collection. One of the daily outcomes was photographs of the surgical wounds taken by the patients. Every day, the clinical team had to analyze each patient's image, which could take a long time. Automatic analysis of these images would allow an alert to be raised when wound modifications that could represent a risk of infection are detected. Such an alert would spare time for the clinical team in follow-up care. This research has been supported by Fundação para a Ciência e Tecnologia (FCT) under the CardioFollow.AI project (DSAIPA/AI/0094/2020), Lisboa-05-3559-FSE-000003, and UIDB/04559/2020. Publisher Copyright: © 2023 by the authors.
    Cardiothoracic surgery patients are at risk of developing surgical site infections, which cause hospital readmissions, increase healthcare costs, and may lead to mortality. This work aims to tackle the problem of surgical site infections by predicting the existence of worrying alterations in wound images with a wound image analysis system based on artificial intelligence.
The developed system comprises a deep learning segmentation model (MobileNet-Unet), which detects the wound region and categorizes the wound type (chest, drain, or leg), and a machine learning classification model, which predicts the occurrence of wound alterations (random forest, support vector machine, and k-nearest neighbors for chest, drain, and leg, respectively). The deep learning model segments the image and assigns the wound type. Color and textural features extracted from the segmented region of interest then feed the corresponding wound-type classifier, which reaches the final binary decision on wound alteration. The segmentation model achieved a mean Intersection over Union of 89.9% and a mean average precision of 90.1%. Separating the final classification into a different classifier per wound type was more effective than a single classifier for all wound types. The leg wound classifier exhibited the best results, with 87.6% recall and 52.6% precision.
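
    A sketch of the two-stage pipeline described above: a segmentation step yields a wound mask and type, and color and textural features from the masked region feed a per-type binary classifier. The feature set and threshold classifiers below are illustrative stand-ins, not the paper's trained models:

```python
import numpy as np

def extract_features(image, mask):
    """Simple color/texture features from the pixels inside the wound mask:
    per-channel mean (color) and standard deviation (a crude texture proxy)."""
    region = image[mask]                      # pixels inside the segmented wound
    mean_rgb = region.mean(axis=0)
    std_rgb = region.std(axis=0)
    return np.concatenate([mean_rgb, std_rgb])

def classify_alteration(features, wound_type, classifiers):
    # Dispatch to the binary classifier trained for this wound type
    return classifiers[wound_type](features)

# Dummy per-type classifiers (thresholds on the mean red channel, for
# illustration only; the paper uses RF, SVM, and kNN respectively)
classifiers = {
    "chest": lambda f: f[0] > 150,
    "drain": lambda f: f[0] > 140,
    "leg":   lambda f: f[0] > 130,
}

# Toy image and mask standing in for the segmentation model's output
img = np.full((64, 64, 3), 160, dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
feats = extract_features(img, mask)
print(classify_alteration(feats, "leg", classifiers))  # True
```

Keeping a separate classifier per wound type, as the paper found more effective, corresponds here to the per-key entries in the `classifiers` dictionary.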