558 research outputs found
Imparting 3D representations to artificial intelligence for a full assessment of pressure injuries.
During recent decades, researchers have shown great interest in machine learning techniques as a way to extract meaningful information from the large amounts of data collected each day. In the medical field especially, images play a significant role in the detection of several health issues; medical image analysis therefore contributes substantially to the diagnostic process and is a natural setting for intelligent systems. Deep Learning (DL) has recently captured the interest of researchers, as it has proven efficient at detecting underlying features in data and has outperformed classical machine learning methods. The main objective of this dissertation is to demonstrate the efficiency of Deep Learning techniques, through medical imaging, in tackling one of the important health issues facing our society. Pressure injuries (PIs) are a dermatological health issue associated with increased morbidity and health care costs, and managing them appropriately is increasingly important for all professionals in wound care. Using 2D photographs and 3D meshes of these wounds, collected from collaborating hospitals, our mission is to create intelligent systems for a full, non-intrusive assessment of these wounds. Five main tasks have been achieved in this study: a literature review of wound imaging methods using machine learning techniques; the classification and segmentation of the tissue types inside the pressure injury; the segmentation of these wounds; the design of an end-to-end system that measures, from 3D meshes, all the quantitative information necessary for an efficient assessment of PIs; and the integration of the assessment imaging techniques into a web-based application.
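One of the quantitative measurements such an end-to-end system would extract from a 3D mesh is the wound's surface area. As a minimal sketch (not the dissertation's actual pipeline), the area of a triangle mesh can be computed by summing half the norm of the cross product of two edge vectors per face; the toy mesh below is hypothetical.

```python
import numpy as np

def mesh_surface_area(vertices, faces):
    """Total surface area of a triangle mesh: half the norm of the
    cross product of two edge vectors, summed over all faces."""
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    e1 = v[f[:, 1]] - v[f[:, 0]]
    e2 = v[f[:, 2]] - v[f[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1).sum()

# Hypothetical toy mesh: a unit square in the XY plane, two triangles.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [(0, 1, 2), (0, 2, 3)]
print(mesh_surface_area(verts, faces))  # 1.0
```

The same per-face decomposition extends to other mesh-based measures such as wound volume against a reconstructed healthy-skin surface.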
Mobile Wound Assessment and 3D Modeling from a Single Image
The prevalence of camera-enabled mobile phones has made mobile wound assessment a viable treatment option for millions of previously difficult-to-reach patients. We have designed a complete mobile wound assessment platform to ameliorate the many challenges of chronic wound care. Chronic wounds and infections are the most severe, costly and fatal types of wounds, placing them at the center of mobile wound assessment. Wound physicians assess thousands of single-view wound images from all over the world, and it may be difficult to determine the location of the wound on the body, for example when the image is taken at close range. In our solution, end-users capture an image of the wound with their mobile camera. The wound image is segmented and classified using modern convolutional neural networks, and is stored securely in the cloud for remote tracking. We use an interactive, semi-automated approach to let users specify the location of the wound on the body. To accomplish this we have created, to the best of our knowledge, the first 3D human surface anatomy labeling system, based on the current NYU and Anatomy Mapper labeling systems. To interactively view wounds in 3D, we present an efficient projective texture mapping algorithm for texturing wounds onto a 3D human anatomy model. In so doing, we demonstrate an approach to 3D wound reconstruction that works even for a single wound image.
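The core step of projective texture mapping is to project each mesh vertex through the camera that took the photograph, yielding per-vertex texture coordinates. A minimal sketch of that projection under a pinhole model follows; the intrinsic matrix and the points are hypothetical, not the paper's calibration.

```python
import numpy as np

def project_uv(vertices, K, image_size):
    """Project 3D points (camera coordinates, z > 0) through a pinhole
    camera with intrinsics K, returning normalised UV texture coords."""
    v = np.asarray(vertices, dtype=float)
    px = (K @ v.T).T               # homogeneous pixel coordinates
    px = px[:, :2] / px[:, 2:3]    # perspective divide
    w, h = image_size
    return px / np.array([w, h], dtype=float)  # normalise to [0, 1]

# Hypothetical intrinsics for a 640x480 wound photograph.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 1.0],   # a point on the optical axis
                [0.1, 0.0, 1.0]])
print(project_uv(pts, K, (640, 480)))
```

Vertices whose projection falls outside [0, 1] or that face away from the camera would be skipped when texturing the anatomy model.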
Novel Computerised Techniques for Recognition and Analysis of Diabetic Foot Ulcers
Diabetic Foot Ulcers (DFU) that affect the lower extremities are a major complication of Diabetes Mellitus (DM). It has been estimated that patients with diabetes have a lifetime risk of 15% to 25% of developing a DFU, contributing to up to 85% of lower limb amputations when DFU are not recognised and treated properly. Current practice for DFU screening involves manual inspection of the foot by podiatrists, and further medical tests such as vascular and blood tests are used to determine the presence of ischemia and infection in DFU. A comprehensive review of computerized techniques for the recognition of DFU was performed to identify the work done so far in this field. During this stage, it became clear that computerized analysis of DFU is a relatively emerging field, which is why the related literature and research works are limited. There is also a lack of standardised public databases of DFU and other wound-related pathologies.
We received approximately 1500 DFU images through an ethical approval with Lancashire Teaching Hospitals. In this work, we standardised both the DFU dataset and the expert annotations to perform different computer vision tasks, such as classification, segmentation and localization, on popular deep learning frameworks. The main focus of this thesis is to develop automatic computer vision methods that can recognise DFU of different stages and grades. Firstly, we used machine learning algorithms to classify DFU patches against normal skin patches of the foot region and to determine the possible misclassified cases of both classes. Secondly, we used fully convolutional networks for the segmentation of DFU and surrounding skin in full foot images, with high specificity and sensitivity. Finally, we used robust and lightweight deep localisation methods on mobile devices to detect DFU in foot images for remote monitoring. Despite achieving very good performance in the recognition of DFU, these algorithms were not able to detect pre-ulcer conditions and very subtle DFU.
Although the recognition of DFU by computer vision algorithms is valuable in itself, we performed further analysis of DFU in foot images to determine factors that predict the risk of amputation, such as the presence of infection and ischemia in DFU. A complete DFU diagnosis system built on these computer vision algorithms has the potential to deliver a paradigm shift in diabetic foot care, representing a cost-effective, remote and convenient healthcare solution, given more data and expert annotations.
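The specificity and sensitivity reported for the segmentation stage are computed per pixel from the predicted and ground-truth binary masks. A minimal sketch of that evaluation, over hypothetical toy masks rather than the thesis's data, is:

```python
import numpy as np

def sensitivity_specificity(pred, truth):
    """Per-pixel sensitivity (recall on wound pixels) and specificity
    (recall on background pixels) for binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)      # wound pixels correctly found
    tn = np.sum(~pred & ~truth)    # background correctly rejected
    fn = np.sum(~pred & truth)     # wound pixels missed
    fp = np.sum(pred & ~truth)     # background flagged as wound
    return float(tp / (tp + fn)), float(tn / (tn + fp))

# Hypothetical flattened masks: 4 wound pixels, 4 background pixels.
truth = [1, 1, 1, 1, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 0, 0, 0, 1]
print(sensitivity_specificity(pred, truth))  # (0.75, 0.75)
```

Reporting both values matters because wound pixels are typically a small minority of the image, so plain accuracy would look high even for a segmenter that misses most of the ulcer.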
Cardiovascular/Stroke Risk Stratification in Diabetic Foot Infection Patients Using Deep Learning-Based Artificial Intelligence: An Investigative Study
A diabetic foot infection (DFI) is among the most serious, incurable, and costly-to-treat conditions. The presence of a DFI renders machine learning (ML) systems extremely nonlinear, posing difficulties in CVD/stroke risk stratification. In addition, there is a limited number of well-explained ML paradigms due to comorbidity, sample size limits, and weak scientific and clinical validation methodologies. Deep neural networks (DNN) are powerful learning machines that generalize over nonlinear situations. The objective of this article is to propose a novel investigation of deep learning (DL) solutions for predicting CVD/stroke risk in DFI patients. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) search strategy was used for the selection of 207 studies. We hypothesize that a DFI is responsible for increased morbidity and mortality due to the worsening of atherosclerotic disease, which affects coronary artery disease (CAD). Since surrogate biomarkers for CAD, such as carotid artery disease, can be used for monitoring CVD, we can use DL-based models, namely Long Short-Term Memory (LSTM) and Recurrent Neural Networks (RNN), for CVD/stroke risk prediction in DFI patients, combining covariates such as office- and laboratory-based biomarkers and carotid ultrasound image phenotype (CUSIP) lesions, along with DFI severity. We confirmed the viability of CVD/stroke risk stratification in DFI patients. Strong designs were found in the research on DL architectures for CVD/stroke risk stratification. Finally, we analyzed AI bias and proposed strategies for the early diagnosis of CVD/stroke in DFI patients. Since DFI patients have an aggressive atherosclerotic disease, leading to prominent CVD/stroke risk, we conclude that the DL paradigm is very effective for predicting the risk of CVD/stroke in DFI patients.
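The LSTM models proposed for such risk prediction process a patient's covariates visit by visit, carrying a hidden and cell state forward in time. A minimal sketch of one LSTM step in NumPy follows; the feature count, hidden size, and random weights are hypothetical placeholders, not a trained model from the study.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input, forget, and output gates plus a candidate
    cell state, applied to input x and previous state (h, c)."""
    n = h.shape[0]
    z = W @ x + U @ h + b          # stacked gate pre-activations
    i = sigmoid(z[0:n])            # input gate
    f = sigmoid(z[n:2*n])          # forget gate
    o = sigmoid(z[2*n:3*n])        # output gate
    g = np.tanh(z[3*n:4*n])        # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Hypothetical sizes: 5 covariates per visit (biomarkers, CUSIP lesion
# features, DFI severity), hidden size 4; random stand-in weights.
rng = np.random.default_rng(0)
W, U, b = rng.normal(size=(16, 5)), rng.normal(size=(16, 4)), np.zeros(16)
h, c = np.zeros(4), np.zeros(4)
for x in rng.normal(size=(3, 5)):  # three follow-up visits
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

The final hidden state would then feed a classification head producing the CVD/stroke risk estimate.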
Syn3DWound: A Synthetic Dataset for 3D Wound Bed Analysis
Wound management poses a significant challenge, particularly for bedridden patients and the elderly. Accurate diagnosis and healing monitoring can significantly benefit from modern image analysis, which provides accurate and precise measurements of wounds. Despite several existing techniques, the shortage of expansive and diverse training datasets remains a significant obstacle to constructing machine learning-based frameworks. This paper introduces Syn3DWound, an open-source dataset of high-fidelity simulated wounds with 2D and 3D annotations. We propose baseline methods and a benchmarking framework for automated 3D morphometry analysis and 2D/3D wound segmentation. Comment: In the IEEE International Symposium on Biomedical Imaging (ISBI) 202
Integrated Image and Location Analysis for Wound Classification: A Deep Learning Approach
The global burden of acute and chronic wounds presents a compelling case for
enhancing wound classification methods, a vital step in diagnosing and
determining optimal treatments. Recognizing this need, we introduce an
innovative multi-modal network based on a deep convolutional neural network for
categorizing wounds into four categories: diabetic, pressure, surgical, and
venous ulcers. Our multi-modal network uses wound images and their
corresponding body locations for more precise classification. A unique aspect
of our methodology is incorporating a body map system that facilitates accurate
wound location tagging, improving upon traditional wound image classification
techniques. A distinctive feature of our approach is the integration of models
such as VGG16, ResNet152, and EfficientNet within a novel architecture. This
architecture includes elements like spatial and channel-wise
Squeeze-and-Excitation modules, Axial Attention, and an Adaptive Gated
Multi-Layer Perceptron, providing a robust foundation for classification. Our
multi-modal network was trained and evaluated on two distinct datasets
comprising relevant images and corresponding location information. Notably, our
proposed network outperformed traditional methods, reaching an accuracy range
of 74.79% to 100% for Region of Interest (ROI) without location
classifications, 73.98% to 100% for ROI with location classifications, and
78.10% to 100% for whole image classifications. This marks a significant
enhancement over previously reported performance metrics in the literature. Our
results indicate the potential of our multi-modal network as an effective
decision-support tool for wound image classification, paving the way for its
application in various clinical contexts.
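The image-plus-location fusion at the heart of such a multi-modal network can be illustrated minimally: an image embedding (e.g. from a CNN backbone) is concatenated with a one-hot body-map location vector and passed through a classifier head. The sketch below is a simplification of the paper's gated architecture, and all sizes and weights are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fused_probs(img_feat, loc_id, n_locations, W, b):
    """Concatenate an image embedding with a one-hot body-location
    vector, then apply a linear head over the four wound classes."""
    loc = np.zeros(n_locations)
    loc[loc_id] = 1.0
    fused = np.concatenate([img_feat, loc])  # image + location modality
    return softmax(W @ fused + b)

# Hypothetical sizes: 8-dim image embedding, 10 body-map locations,
# 4 classes (diabetic, pressure, surgical, venous); random weights.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8 + 10))
b = np.zeros(4)
probs = fused_probs(rng.normal(size=8), loc_id=3, n_locations=10, W=W, b=b)
print(round(float(probs.sum()), 6))  # 1.0
```

Concatenation is the simplest fusion choice; the paper's Adaptive Gated MLP and attention modules refine how much weight each modality receives per example.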
Medical Image Segmentation with Deep Learning
Medical imaging is the technique and process of creating visual representations of a patient's body for clinical analysis and medical intervention. Healthcare professionals rely heavily on medical images and image documentation for proper diagnosis and treatment. However, manual interpretation and analysis of medical images is time-consuming and inaccurate when the interpreter is not well trained. Fully automatic segmentation of the region of interest from medical images has been researched for years to enhance the efficiency and accuracy of understanding such images. With the advance of deep learning, various neural network models have achieved great success in semantic segmentation and sparked research interest in medical image segmentation using deep learning. We propose two convolutional frameworks to segment tissues from different types of medical images. Comprehensive experiments and analyses are conducted on various segmentation neural networks to demonstrate the effectiveness of our methods. Furthermore, the datasets built for training our networks and full implementations are published.
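A standard way such segmentation experiments are scored is the Dice coefficient, which measures the overlap between predicted and ground-truth masks. A minimal sketch over hypothetical toy masks (not this work's evaluation code) is:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice coefficient between two binary masks:
    2 * |A & B| / (|A| + |B|), with eps guarding empty masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.sum(pred & truth)
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# Hypothetical 4x4 masks: the prediction covers all 4 true pixels
# plus 2 false positives.
truth = np.zeros((4, 4), dtype=bool); truth[1:3, 1:3] = True  # 4 pixels
pred = np.zeros((4, 4), dtype=bool);  pred[1:3, 1:4] = True   # 6 pixels
print(round(float(dice(pred, truth)), 6))  # 0.8
```

Dice is preferred over plain pixel accuracy here because the region of interest usually occupies only a small fraction of a medical image.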