
    Novel Computerised Techniques for Recognition and Analysis of Diabetic Foot Ulcers

    Diabetic Foot Ulcers (DFU) affecting the lower extremities are a major complication of Diabetes Mellitus (DM). It has been estimated that patients with diabetes have a lifetime risk of 15% to 25% of developing a DFU, and failure to recognise and treat DFU properly contributes to up to 85% of lower limb amputations. Current practice for DFU screening involves manual inspection of the foot by podiatrists, with further medical tests such as vascular and blood tests used to determine the presence of ischemia and infection. A comprehensive review of computerised techniques for recognition of DFU was performed to identify the work done so far in this field. During this stage, it became clear that computerised analysis of DFU is a relatively young field, which is why the related literature and research are limited. There is also a lack of standardised public databases of DFU and other wound-related pathologies. We received approximately 1,500 DFU images under ethical approval with Lancashire Teaching Hospitals. In this work, we standardised both the DFU dataset and the expert annotations to perform different computer vision tasks such as classification, segmentation and localisation on popular deep learning frameworks. The main focus of this thesis is to develop automatic computer vision methods that can recognise DFU of different stages and grades. Firstly, we used machine learning algorithms to classify DFU patches against normal skin patches of the foot region and to examine the misclassified cases of both classes. Secondly, we used fully convolutional networks for the segmentation of DFU and surrounding skin in full foot images with high specificity and sensitivity. Finally, we used robust and lightweight deep localisation methods on mobile devices to detect DFU in foot images for remote monitoring.
Despite achieving very good performance in the recognition of DFU, these algorithms were not able to detect pre-ulcer conditions and very subtle DFU. Although recognition of DFU by computer vision algorithms is a valuable study in itself, we performed further analysis of DFU in foot images to determine factors that predict the risk of amputation, such as the presence of infection and ischemia. A complete DFU diagnosis system built from these computer vision algorithms has the potential to deliver a paradigm shift in diabetic foot care, representing a cost-effective, remote and convenient healthcare solution, given more data and expert annotations.
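The segmentation results above are reported in terms of sensitivity and specificity. As a minimal illustration (not the thesis's code), these per-pixel metrics can be computed from a predicted and a ground-truth binary mask:

```python
def sensitivity_specificity(pred, truth):
    """Per-pixel sensitivity and specificity for binary masks.

    pred, truth: flat sequences of 0/1 labels (1 = ulcer pixel).
    """
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # true positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0  # true negative rate
    return sensitivity, specificity

# Toy 8-pixel example: one missed ulcer pixel, one false alarm.
truth = [1, 1, 1, 0, 0, 0, 0, 0]
pred  = [1, 1, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(pred, truth)
# sens = 2/3, spec = 4/5
```

In practice the masks would come from flattening the network's thresholded output and the expert annotation for each foot image.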

    System Designs for Diabetic Foot Ulcer Image Assessment

    For individuals with type 2 diabetes, diabetic foot ulcers represent a significant health issue, and the cost of wound care is high. Currently, clinicians and nurses mainly base their wound assessment on visual examination of wound size and the status of the wound tissue. This method is potentially inaccurate and adds to the clinical workload. Given the prevalence of smartphones with high-resolution digital cameras, assessing wound healing by analyzing real-time images with the significant computational power of today's mobile devices is an attractive approach to managing foot ulcers. Alternatively, the smartphone may be used only for image capture and wireless transfer to a PC or laptop for image processing. To achieve accurate foot ulcer image assessment, we have developed and tested a novel automatic wound image analysis system that satisfies the following requirements: 1) an easy-to-use image capture system which makes the image capture process comfortable for the patient and provides well-controlled image capture conditions; 2) efficient and accurate algorithms for real-time wound boundary determination to measure the wound area; 3) a quantitative method to assess wound healing status based on a sequence of foot ulcer images for a given patient; and 4) a wound image assessment and management system that can be used both in the patient's home and in the clinical environment in a telemedicine fashion. In our work, the wound image is captured by the camera on the smartphone while the patient's foot is held in place by an image capture box, which is specially designed to aid patients in photographing ulcers occurring on the soles of their feet. The experimental results show that our image capture system guarantees consistent illumination and a fixed distance between the foot and the camera.
These properties greatly reduce the complexity of the subsequent wound recognition and assessment. The most significant contribution of our work is the development of five different wound boundary determination approaches based on different computer vision algorithms. The first approach employs the level set algorithm to determine the wound boundary directly from a manually set initial curve. The second and third approaches are mean-shift segmentation based methods augmented by foot outline detection and analysis. These two approaches have been shown to be efficient to implement (especially on smartphones), independent of prior knowledge, and able to provide reasonably accurate wound segmentation results given a set of well-tuned parameters. However, they lack self-adaptivity because they are not based on machine learning. Consequently, a two-stage Support Vector Machine (SVM) binary classifier based wound recognition approach was developed and implemented as the fourth approach. It consists of three major steps: 1) unsupervised super-pixel segmentation; 2) feature descriptor extraction for each super-pixel; and 3) supervised classifier based wound boundary determination. The experimental results show that this approach provides promising performance (sensitivity: 73.3%, specificity: 95.6%) when dealing with foot ulcer images captured with our image capture box. In the fifth approach, we further relax the image capture constraints and generalize the application of our wound recognition system by applying a conditional random field (CRF) based model to wound boundary determination. The key modules in this approach are TextonBoost based potential learning at different scales and efficient CRF model inference to find the optimal labeling. Finally, the standard K-means clustering algorithm is applied to the determined wound area for color based wound tissue classification.
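The shape of the three-step SVM pipeline can be sketched as follows. This is a minimal stand-in, not the authors' implementation: super-pixels are assumed already given, the feature is just a mean colour, and a fixed linear decision function stands in for a trained SVM (all names and values are illustrative):

```python
def mean_color(pixels):
    # Step 2: feature descriptor for one super-pixel. The real system
    # uses richer colour/texture descriptors than a mean RGB value.
    n = len(pixels)
    return tuple(sum(px[i] for px in pixels) / n for i in range(3))

def classify_superpixel(feature, w, b):
    # Step 3: supervised binary decision; (w, b) stand in for a trained
    # SVM's weight vector and bias.
    score = sum(wi * fi for wi, fi in zip(w, feature)) + b
    return 1 if score > 0 else 0  # 1 = wound, 0 = non-wound

# Step 1 (unsupervised super-pixel segmentation) is assumed done:
superpixels = {
    "sp0": [(180, 60, 50), (170, 55, 48)],      # reddish region
    "sp1": [(200, 180, 160), (210, 190, 170)],  # skin-toned region
}
# Toy weights: flag regions where red strongly dominates green and blue.
w, b = (1.0, -1.0, -1.0), 0.0
labels = {k: classify_superpixel(mean_color(v), w, b)
          for k, v in superpixels.items()}
# labels -> {"sp0": 1, "sp1": 0}
```

The union of super-pixels labelled 1 then forms the estimated wound region, whose outline is the wound boundary.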
To train the models used in the last two approaches, and to evaluate all of the methods, we collected about 100 wound images at the wound clinic at UMass Medical School by tracking 15 patients over a 2-year period, following an IRB approved protocol. The wound recognition results were compared with ground truth generated by combining clinical labeling from three experienced clinicians. Specificity and sensitivity based measures indicate that the CRF based approach is the most reliable method, despite its implementation complexity and computational demands. In addition, sample images of moulage wound simulations were used to increase the flexibility of the evaluation. The advantages and disadvantages of the approaches are described. Another important contribution of this work is the development of a healing score based mechanism for quantitative wound healing status assessment. The wound size and color composition measurements are converted to a score ranging from 0 to 10, which indicates the healing trend based on comparisons of subsequent images to an initial foot ulcer image. By comparing the results of the healing score algorithm to the healing scores determined by experienced clinicians, we assess the clinical validity of our algorithm. The level of agreement between our healing score and the three assessing clinicians was quantified using Krippendorff's Alpha Coefficient (KAC). Finally, a collaborative wound image management system spanning the PC and smartphone was designed and successfully applied in the wound clinic for tracking patients' wounds. This system has proven applicable in the clinical environment and capable of providing interactive foot ulcer care in a telemedicine fashion.
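A healing score of the kind described maps wound measurements to a 0-10 number relative to the initial image. The sketch below uses only relative area change and an invented clipping formula; the actual system also incorporates colour composition, so this is purely illustrative:

```python
def healing_score(initial_area, current_area):
    """Map relative wound-area change to an integer score in 0-10.

    10 = fully closed relative to the initial image; 0 = no closure
    (or growth). Illustrative formula, not the system's actual score.
    """
    if initial_area <= 0:
        raise ValueError("initial area must be positive")
    closure = 1.0 - current_area / initial_area  # fraction of area closed
    return round(10 * max(0.0, min(1.0, closure)))

healing_score(100.0, 40.0)  # 60% closure -> 6
```

Plotting such scores over a patient's image sequence gives the healing trend that clinicians can compare against their own assessment.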

    Pressure Ulcer Categorisation and Reporting in Domiciliary Settings using Deep Learning and Mobile Devices: A Clinical Trial to Evaluate End-to-End Performance

    Pressure ulcers are a challenge for patients and healthcare professionals. In the UK, pressure ulcers affect 700,000 people each year. Treating them costs the National Health Service £3.8 million every day. Their etiology is complex and multifactorial, but evidence has shown a strong link with old age, disease-related sedentary lifestyles, and unhealthy eating habits. Direct skin contact with a bed or chair without frequent position changes can cause pressure ulcers. Urinary and faecal incontinence, diabetes, injuries that restrict body position, and poor nutrition are also known risk factors. Guidelines and treatments exist, but their implementation and success vary across different healthcare settings, primarily because healthcare practitioners have a) minimal experience in dealing with pressure ulcers, and b) a general lack of understanding of pressure ulcer treatments. Poorly managed, pressure ulcers can lead to severe pain, a poor quality of life, and significant healthcare costs. In this paper, we report the findings of a clinical trial conducted by Mersey Care NHS Foundation Trust that evaluated the performance of a faster region-based convolutional neural network and mobile platform that categorised and documented pressure ulcers automatically. The neural network classifies category I, II, III, and IV pressure ulcers, deep tissue injuries, and pressure ulcers that are unstageable. District nurses used their mobile phones to take pictures of pressure ulcers and transmit them over 4G/5G communications to an inferencing server for classification. The approach uses existing deep learning technologies to provide a novel end-to-end pipeline for pressure ulcer categorisation that works in ad hoc domiciliary settings. The strength of the approach resides in MLOps, model deployment at scale, and the platform's in-situ operation.
While solutions exist in the NHS for analysing pressure ulcers, none of them automatically classifies and reports pressure ulcers from a service user's residential home. We acknowledge that there is a great deal of work still to do, but the approach offers a convincing solution for standardising pressure ulcer categorisation and reporting. The results from the study are encouraging and show that, using 216 images collected over an eight-month trial, it was possible to achieve a mean average precision of 0.6796, recall of 0.6997, and F1-score of 0.6786, with 45 false positives, at a 0.75 confidence score threshold.
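The reported precision, recall, and F1 figures are means over the ulcer categories. As a minimal sketch of how such macro-averaged scores are derived from per-class true-positive, false-positive, and false-negative detection counts (the counts below are invented for illustration, not the trial's data):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from per-class counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical per-class (tp, fp, fn) counts at a 0.75 confidence threshold:
counts = {"category I": (30, 5, 8), "category II": (25, 10, 6)}
per_class = {c: precision_recall_f1(*v) for c, v in counts.items()}
macro_f1 = sum(m[2] for m in per_class.values()) / len(per_class)
```

Note that a macro-averaged F1 is the mean of per-class F1 scores, which generally differs from the harmonic mean of the averaged precision and recall.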

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.

    Artificial Intelligence-Powered Chronic Wound Management System: Towards Human Digital Twins

    Artificial Intelligence (AI) has witnessed increased application and widespread adoption over the past decade. AI applied to medical images has the potential to assist caregivers in deciding on a proper chronic wound treatment plan by helping them understand wound and tissue classification and border segmentation, as well as through visual image synthesis. This dissertation explores chronic wound management using AI methods such as Generative Adversarial Networks (GAN) and Explainable AI (XAI) techniques. The wound images are collected, grouped, and processed. One primary objective of this research is to develop a series of AI models, not only to demonstrate the potential of AI in wound management but also to provide the building blocks of human digital twins. First, the motivations, contributions, and dissertation outline are summarized to introduce the aim and scope of the dissertation. The first contribution of this study is a chronic wound classification model together with an explanation of its decisions using XAI; this model also benefits from transfer learning to improve performance. Next, a novel model is developed that performs wound border segmentation and tissue classification simultaneously, realised with a Deep Learning (DL) architecture, namely the GAN. Another novel model is developed for creating lifelike wounds: the output of the previously proposed model is used as its input, and it generates new chronic wound images. Any tissue distribution can be converted to a lifelike wound while preserving the shape of the original wound. This research is then extended to build a digital twin for chronic wound management: chronic wounds and the enabling technologies for wound care digital twins are examined, and a general framework for chronic wound management using the digital twin concept is investigated.
The last contribution of this dissertation is a chronic wound healing prediction model using DL techniques. It utilizes the previously developed AI models to build a chronic wound management framework based on the digital twin concept. Lastly, overall conclusions are drawn, and future challenges and further developments in chronic wound management using emerging technologies are discussed.
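The adversarial training behind these GAN-based models optimises the standard minimax objective introduced by Goodfellow et al., shown here as background rather than as the dissertation's exact loss:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right]
             + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]
```

Here the generator G maps an input (noise, or in the conditional setting a tissue-distribution map) to an image, while the discriminator D is trained to separate real wound images from generated ones; at equilibrium the generated images become indistinguishable from real ones.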

    Advanced techniques for classification of polarimetric synthetic aperture radar data

    Among the various remote sensing technologies that aid Earth observation, radar-based imaging is one that has gained major interest due to advances in its imaging techniques in the form of synthetic aperture radar (SAR) and polarimetry. The majority of radar applications focus on monitoring, detecting, and classifying local or global areas of interest to support humans in decision-making, analysis, and interpretation of Earth's environment. This thesis focuses on improving the classification performance and process, particularly for land use and land cover applications over polarimetric SAR (PolSAR) data. To achieve this, three contributions are studied, relating to superior feature description and advanced machine learning techniques including classifiers, principles, and data exploitation. First, this thesis investigates the use of color features in PolSAR image classification to provide additional discrimination on top of the conventional scattering information and texture features. The color features are extracted from the visual presentation of fully and partially polarimetric SAR data by generating pseudo-color images. In the experiments, the obtained results demonstrated that with the addition of the considered color features, the achieved classification performance outperformed results with common PolSAR features alone, and also achieved higher classification accuracies than the traditional combination of PolSAR and texture features. Second, to address the large-scale learning challenge in PolSAR image classification with the utmost efficiency, this thesis introduces an adaptive and data-driven supervised classification topology called the Collective Network of Binary Classifiers (CNBC).
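The colour features above come from pseudo-colour renderings of the PolSAR channels. As a minimal illustration (using Python's standard colorsys module, not the thesis's actual feature set), one can convert pseudo-colour pixels to HSV and summarise a region by its mean hue, saturation, and value:

```python
import colorsys

def mean_hsv(rgb_pixels):
    """Mean HSV feature for a region of a pseudo-colour PolSAR image.

    rgb_pixels: iterable of (r, g, b) tuples with components in [0, 1].
    Note: hue is circular; a naive arithmetic mean is used here purely
    for simplicity.
    """
    hsv = [colorsys.rgb_to_hsv(r, g, b) for r, g, b in rgb_pixels]
    n = len(hsv)
    return tuple(sum(p[i] for p in hsv) / n for i in range(3))

region = [(1.0, 0.0, 0.0), (0.8, 0.1, 0.1)]  # reddish pseudo-colour region
h, s, v = mean_hsv(region)
# h = 0.0 (red), s = 0.9375, v = 0.9
```

Such per-region colour statistics would then be concatenated with the scattering and texture features before classification.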
This topology incorporates active learning to support human users in the analysis and interpretation of PolSAR data, focusing on collections of images where changes or updates to the existing classifier may be required frequently due to surface, terrain, and object changes as well as variations in capture time and position. Evaluations demonstrated the capabilities of the CNBC over an extensive set of experimental results on the adaptive, data-driven classification of single PolSAR images as well as collections of them. The experimental results verified that this evolutionary classification topology provides an efficient solution to the problems of scalability and dynamic adaptability, allowing both the feature space dimensions and the number of terrain classes in PolSAR image collections to vary dynamically. Third, most PolSAR classification problems are tackled by supervised machine learning, which requires manually labeled ground truth data. To reduce the manual labeling effort, supervised and unsupervised learning approaches are combined into semi-supervised learning to exploit the huge amount of unlabeled data. The application of semi-supervised learning in this thesis is motivated by ill-posed classification tasks related to the small training set problem. The thesis therefore investigates how much ground truth is actually necessary for certain classification problems to achieve satisfactory results in supervised and semi-supervised learning scenarios. To address this, two semi-supervised approaches are proposed: unsupervised extension of the training data, and ensemble-based self-training. The evaluations showed that significant speed-ups and improvements in classification performance are achieved. In particular, for a remote sensing application such as PolSAR image classification, it is advantageous to exploit the location-based information in the labeled training data.
Each of the developed techniques provides a stand-alone contribution, from a different viewpoint, to improving land use and land cover classification. The introduced feature for better discrimination is independent of the underlying classification algorithms. The CNBC topology is applicable to various classification problems regardless of how the underlying data were acquired, as is the case for remote sensing data. Moreover, the semi-supervised learning approach tackles the challenge of utilizing unlabeled data. By combining these techniques for superior feature description with advanced machine learning techniques that exploit classifier topologies and data, further contributions to polarimetric SAR image classification are made. According to the performance evaluations conducted, including visual and numerical assessments, the proposed and investigated techniques showed valuable improvements and are able to aid the analysis and interpretation of PolSAR image data. Due to the generic nature of the developed techniques, their application to other remote sensing data will require only minor adjustments.
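The self-training idea mentioned above can be sketched with a toy stand-in classifier. Here a nearest-centroid rule on 2-D features replaces the thesis's ensemble of classifiers, and all names and data are illustrative: the classifier is fit on the labelled pool, its most confident predictions on unlabelled samples are accepted as pseudo-labels, and it is refit.

```python
def centroids(samples):
    """Mean 2-D feature vector per class label."""
    out = {}
    for label, vecs in samples.items():
        n = len(vecs)
        out[label] = tuple(sum(v[i] for v in vecs) / n for i in range(2))
    return out

def predict(cents, x):
    # Nearest-centroid label; the squared distance doubles as a crude
    # (inverse) confidence measure.
    def d2(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    label = min(cents, key=lambda lab: d2(cents[lab]))
    return label, d2(cents[label])

def self_train(labelled, unlabelled, rounds=2, per_round=1):
    """Self-training sketch (ensemble replaced by a single classifier):
    iteratively pseudo-label the most confident unlabelled samples."""
    pool = {lab: list(v) for lab, v in labelled.items()}
    remaining = list(unlabelled)
    for _ in range(rounds):
        if not remaining:
            break
        cents = centroids(pool)
        scored = sorted(((predict(cents, x), x) for x in remaining),
                        key=lambda t: t[0][1])  # most confident first
        for (label, _), x in scored[:per_round]:
            pool[label].append(x)  # accept the pseudo-label
            remaining.remove(x)
    return centroids(pool)

labelled = {"water": [(0.0, 0.0), (0.2, 0.1)],
            "urban": [(1.0, 1.0), (0.9, 1.1)]}
unlabelled = [(0.1, 0.05), (0.95, 1.0)]
final = self_train(labelled, unlabelled)
# final["water"] == (0.1, 0.05); final["urban"] ~ (0.95, 1.033)
```

The same loop structure applies when the stand-in is replaced by a real ensemble and the features by PolSAR descriptors.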

    Surgical spectral imaging

    Recent technological developments have resulted in the availability of miniaturised spectral imaging sensors capable of operating in the multispectral (MSI) and hyperspectral imaging (HSI) regimes. Simultaneous advances in image-processing techniques and artificial intelligence (AI), especially in machine learning and deep learning, have made these data-rich modalities highly attractive as a means of extracting biological information non-destructively. Surgery in particular is poised to benefit from this, as spectrally-resolved tissue optical properties can offer enhanced contrast as well as diagnostic and guidance information during interventions. This is particularly relevant for procedures where inherent contrast is low under standard white light visualisation. This review summarises recent work in surgical spectral imaging (SSI) techniques, taken from PubMed, Google Scholar and arXiv searches spanning the period 2013–2019. New hardware, optimised for use in both open and minimally-invasive surgery (MIS), is described, and recent commercial activity is summarised. Computational approaches to extracting spectral information from conventional colour images are reviewed, as tip-mounted cameras become more commonplace in MIS. Model-based and machine learning methods of data analysis are discussed, in addition to simulation, phantom and clinical validation experiments. A wide variety of surgical pilot studies are reported, but it is apparent that further work is needed to quantify the clinical value of MSI/HSI. The current trend toward data-driven analysis emphasises the importance of widely-available, standardised spectral imaging datasets, which will aid understanding of variability across organs and patients, and drive clinical translation.

    Visual Tracking of Instruments in Minimally Invasive Surgery

    Reducing access trauma has been a focal point for modern surgery, and tackling the challenges that arise from new operating techniques and instruments is an exciting and open area of research. The lack of awareness and control that comes with indirect manipulation and visualisation has created a need to augment the surgeon's understanding and perception of how their instruments interact with the patient's anatomy, but current methods of achieving this are inaccurate and difficult to integrate into the surgical workflow. Visual methods have the potential to recover the position and orientation of the instruments directly in the reference frame of the observing camera, without introducing additional hardware to the operating room or performing complex calibration steps. This thesis explores how this problem can be solved by fusing coarse region features with fine-scale point features to recover both the rigid and articulated degrees of freedom of laparoscopic and robotic instruments using only images provided by the surgical camera. Extensive experiments on different image features are used to determine suitable representations for reliable and robust pose estimation. Using this information, a novel framework is presented which estimates 3D pose with a region matching scheme while using frame-to-frame optical flow to account for challenges due to symmetry in the instrument design. The kinematic structure of articulated robotic instruments is also used to track the movement of the head and claspers. The robustness of this method was evaluated on calibrated ex-vivo images and in-vivo sequences, and comparative studies were performed against state-of-the-art kinematics-assisted tracking methods.
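Recovering rigid degrees of freedom from matched point features, as described above, reduces in the planar case to estimating a rotation and translation between two point sets. A minimal closed-form least-squares sketch (a standard 2-D Kabsch-style solution, not the thesis's region-plus-point method):

```python
import math

def rigid_2d(src, dst):
    """Least-squares rotation angle and translation mapping src -> dst.

    src, dst: equal-length lists of matched (x, y) points.
    Returns (theta, (tx, ty)) such that R(theta) @ p + t ~ q.
    """
    n = len(src)
    cx_s = sum(p[0] for p in src) / n
    cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n
    cy_d = sum(p[1] for p in dst) / n
    # Cross-covariance terms of the centred point sets.
    sxx = sxy = syx = syy = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
        sxx += xs * xd; sxy += xs * yd
        syx += ys * xd; syy += ys * yd
    theta = math.atan2(sxy - syx, sxx + syy)
    c, s = math.cos(theta), math.sin(theta)
    # Translation aligns the rotated source centroid with the target's.
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, (tx, ty)

src = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
dst = [(2.0, 4.0), (1.0, 3.0), (2.0, 2.0)]  # src rotated 90 deg, shifted by (2, 3)
theta, t = rigid_2d(src, dst)
# theta = pi/2, t = (2.0, 3.0)
```

The full 3D articulated problem adds depth, joint angles, and outlier-contaminated matches, which is where the region matching and optical flow components come in.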