1,295 research outputs found
Burn wound classification model using spatial frequency-domain imaging and machine learning.
Accurate assessment of burn severity is critical for wound care and the course of treatment. Delays in classification translate to delays in burn management, increasing the risk of scarring and infection. To this end, numerous imaging techniques have been used to examine tissue properties to infer burn severity. Spatial frequency-domain imaging (SFDI) has also been used to characterize burns based on the relationships between histologic observations and changes in tissue properties. Recently, machine learning has been used to classify burns by combining optical features from multispectral or hyperspectral imaging. Rather than employ models of light propagation to deduce tissue optical properties, we investigated the feasibility of using SFDI reflectance data at multiple spatial frequencies, with a support vector machine (SVM) classifier, to predict severity in a porcine model of graded burns. Calibrated reflectance images were collected using SFDI at eight wavelengths (471 to 851 nm) and five spatial frequencies (0 to 0.2 mm⁻¹). Three models were built from subsets of this initial dataset. The first subset included data taken at all wavelengths with the planar (0 mm⁻¹) spatial frequency, the second comprised data at all wavelengths and spatial frequencies, and the third used all collected data at values relative to unburned tissue. These data subsets were used to train and test cubic SVM models, and compared against burn status 28 days after injury. Model accuracy was established through leave-one-out cross-validation testing. The model based on images obtained at all wavelengths and spatial frequencies predicted burn severity at 24 h with 92.5% accuracy. The model composed of all values relative to unburned skin was 94.4% accurate. By comparison, the model that employed only planar illumination was 88.8% accurate. This investigation suggests that the combination of SFDI with machine learning has potential for accurately predicting burn severity.
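The classification setup this abstract describes — a cubic-kernel SVM over multi-wavelength, multi-frequency reflectance features, scored with leave-one-out cross-validation — can be sketched as below. All data, dimensions, and the injected class structure are illustrative stand-ins, not the paper's measurements.

```python
# Sketch of a cubic SVM on SFDI-style reflectance features with
# leave-one-out cross-validation. Feature values are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# 40 features per burn site: 8 wavelengths x 5 spatial frequencies.
n_sites, n_features = 60, 8 * 5
X = rng.normal(size=(n_sites, n_features))
y = rng.integers(0, 4, size=n_sites)   # graded severity labels (stand-ins)
X[np.arange(n_sites), y] += 3.0        # inject class structure so the toy task is learnable

# "Cubic SVM": polynomial kernel of degree 3 (coef0=1 keeps lower-order terms).
clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, coef0=1.0))
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
accuracy = scores.mean()
print(f"leave-one-out accuracy: {accuracy:.3f}")
```

Leave-one-out is a natural choice here because the porcine dataset is small; each burn site serves once as the held-out test case.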
Burns Depth Assessment Using Deep Learning Features
Burns depth evaluation is a life-saving yet very challenging task that requires objective techniques to accomplish. While visual assessment is the method most commonly used by surgeons, its accuracy and reliability range between 60 and 80%, and it is subjective, lacking any standard guideline. Currently, the only standard adjunct to clinical evaluation of burn depth is Laser Doppler Imaging (LDI), which measures microcirculation within the dermal tissue, providing the burn's potential healing time, which corresponds to the depth of the injury, achieving up to 100% accuracy. However, the use of LDI is limited by many factors: high equipment and diagnostic costs; accuracy that is affected by movement, which makes it difficult to assess paediatric patients; the high level of human expertise required to operate the device; and the fact that 100% accuracy is possible only after 72 h. These shortfalls necessitate an objective and affordable technique. Method: In this study, we leverage deep transfer learning, using two pretrained models, ResNet50 and VGG16, to extract image features (ResFeat50 and VggFeat16) from a burn dataset of 2080 RGB images evenly distributed across healthy skin, first-degree, second-degree, and third-degree burns. We then use one-versus-one Support Vector Machines (SVM) for multi-class prediction, trained with 10-fold cross-validation to achieve an optimum trade-off between bias and variance. Results: The proposed approach yields a maximum prediction accuracy of 95.43% with ResFeat50 and 85.67% with VggFeat16. The average recall, precision, and F1-score are 95.50%, 95.50%, and 95.50% for ResFeat50 and 85.75%, 86.25%, and 85.75% for VggFeat16, respectively. Conclusion: The proposed pipeline achieves state-of-the-art prediction accuracy and, notably, indicates that a decision on whether the injury requires surgical intervention such as skin grafting can be made in less than a minute.
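The pipeline above — pretrained-CNN features fed into a one-versus-one SVM evaluated with 10-fold cross-validation — can be sketched as follows. In the actual work the feature vectors (ResFeat50/VggFeat16) come from ResNet50 or VGG16; here random vectors with injected class structure stand in for them, and all sizes and labels are assumptions for illustration.

```python
# Sketch: one-vs-one multi-class SVM on stand-in CNN features,
# evaluated with stratified 10-fold cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

rng = np.random.default_rng(1)
n_images, n_features, n_classes = 200, 256, 4   # healthy, 1st, 2nd, 3rd degree
y = rng.integers(0, n_classes, size=n_images)
X = rng.normal(size=(n_images, n_features))
X[np.arange(n_images), y] += 4.0                # stand-in for CNN feature separability

# One-versus-one multi-class SVM (one binary SVM per class pair).
clf = SVC(kernel="linear", decision_function_shape="ovo")
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=cv)

acc = accuracy_score(y, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(y, y_pred, average="macro")
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")
```

Because feature extraction is a single forward pass and the SVM is cheap at inference time, a per-image decision in well under a minute is plausible with this kind of design.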
Uncertainty-Aware Organ Classification for Surgical Data Science Applications in Laparoscopy
Objective: Surgical data science is evolving into a research field that aims
to observe everything occurring within and around the treatment process to
provide situation-aware data-driven assistance. In the context of endoscopic
video analysis, the accurate classification of organs in the field of view of
the camera proffers a technical challenge. Herein, we propose a new approach to
anatomical structure classification and image tagging that features an
intrinsic measure of confidence to estimate its own performance with high
reliability and which can be applied to both RGB and multispectral imaging (MI)
data. Methods: Organ recognition is performed using a superpixel classification
strategy based on textural and reflectance information. Classification
confidence is estimated by analyzing the dispersion of class probabilities.
Assessment of the proposed technology is performed through a comprehensive in
vivo study with seven pigs. Results: When applied to image tagging, mean
accuracy in our experiments increased from 65% (RGB) and 80% (MI) to 90% (RGB)
and 96% (MI) with the confidence measure. Conclusion: Results showed that the
confidence measure had a significant influence on the classification accuracy,
and MI data are better suited for anatomical structure labeling than RGB data.
Significance: This work significantly enhances the state of the art in automatic
labeling of endoscopic videos by introducing the use of the confidence metric,
and by being the first study to use MI data for in vivo laparoscopic tissue
classification. The data of our experiments will be released as the first in
vivo MI dataset upon publication of this paper. Comment: 7 pages, 6 images, 2 tables
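The confidence idea described in this abstract — estimating reliability from the dispersion of class probabilities — can be sketched as below. The normalized-entropy formulation and the threshold are assumptions for illustration, not necessarily the authors' exact metric.

```python
# Sketch: score each superpixel's class-probability vector by
# 1 - normalized Shannon entropy, and keep only confident predictions.
import numpy as np

def confidence(probs: np.ndarray) -> np.ndarray:
    """1 minus normalized entropy of each row; 1 = certain, 0 = uniform."""
    p = np.clip(probs, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)
    return 1.0 - entropy / np.log(p.shape[1])

# Three superpixels over 4 organ classes: confident, ambiguous, uniform.
probs = np.array([
    [0.95, 0.03, 0.01, 0.01],
    [0.40, 0.35, 0.15, 0.10],
    [0.25, 0.25, 0.25, 0.25],
])
conf = confidence(probs)
accepted = conf >= 0.5        # illustrative tagging threshold
print(conf.round(3), accepted)
```

Rejecting low-confidence superpixels before tagging is one simple way such a dispersion measure can raise tagging accuracy, at the cost of abstaining on ambiguous regions.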
Medical Image Segmentation with Deep Learning
Medical imaging is the technique and process of creating visual representations of a patient's body for clinical analysis and medical intervention. Healthcare professionals rely heavily on medical images and image documentation for proper diagnosis and treatment. However, manual interpretation and analysis of medical images is time-consuming, and inaccurate when the interpreter is not well trained. Fully automatic segmentation of the region of interest from medical images has been researched for years to enhance the efficiency and accuracy of understanding such images. With the advance of deep learning, various neural network models have achieved great success in semantic segmentation and sparked research interest in medical image segmentation using deep learning. We propose two convolutional frameworks to segment tissues from different types of medical images. Comprehensive experiments and analyses are conducted with various segmentation neural networks to demonstrate the effectiveness of our methods. Furthermore, the datasets built for training our networks and full implementations are published.
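Segmentation quality in work like this is conventionally measured by mask overlap. A minimal sketch of the standard Dice coefficient (a generic metric from the medical-segmentation literature, not this thesis's specific evaluation code):

```python
# Dice coefficient between a predicted and a ground-truth binary mask.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1   # 16-pixel square
pred = np.zeros((8, 8), dtype=int); pred[3:7, 3:7] = 1     # same square shifted by one
print(round(dice(pred, truth), 3))   # overlap is 3x3 = 9 pixels -> 18/32 = 0.562
```

The same expression, written over soft predictions, is also widely used as a differentiable training loss for segmentation networks.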
Meta-Analysis and Systematic Review of the Application of Machine Learning Classifiers in Biomedical Applications of Infrared Thermography
Atypical body temperature values can be an indication of abnormal physiological processes
associated with several health conditions. Infrared thermal (IRT) imaging is an innocuous imaging
modality capable of capturing the natural thermal radiation emitted by the skin surface, which is
connected to physiology-related pathological states. The implementation of artificial intelligence
(AI) methods for interpretation of thermal data can be an interesting solution to supply a second
opinion to physicians in a diagnostic/therapeutic assessment scenario. The aim of this work was to
perform a systematic review and meta-analysis concerning different biomedical thermal applications
in conjunction with machine learning strategies. The bibliographic search yielded 68 records for a
qualitative synthesis and 34 for quantitative analysis. The results show potential for the implementation
of IRT imaging with AI, but more work is needed to retrieve significant features and improve
classification metrics.
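The quantitative analysis such a meta-analysis performs can be illustrated with a fixed-effect, inverse-variance pooling of per-study classifier accuracies. The study counts below are invented for the sketch and are not the review's data.

```python
# Sketch: inverse-variance (fixed-effect) pooling of reported accuracies.
import math

# (correct classifications, sample size) for a few hypothetical studies
studies = [(45, 50), (88, 100), (27, 30)]

weights, estimates = [], []
for k, n in studies:
    p = k / n
    var = p * (1 - p) / n            # binomial variance of the proportion
    weights.append(1.0 / var)
    estimates.append(p)

pooled = sum(w * p for w, p in zip(weights, estimates)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
print(f"pooled accuracy {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * se:.3f}-{pooled + 1.96 * se:.3f})")
```

Real syntheses typically add a heterogeneity test and switch to a random-effects model when between-study variance is substantial, which is often the case across thermography applications.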
Mobile Wound Assessment and 3D Modeling from a Single Image
The prevalence of camera-enabled mobile phones has made mobile wound assessment a viable treatment option for millions of previously difficult-to-reach patients. We have designed a complete mobile wound assessment platform to ameliorate the many challenges related to chronic wound care. Chronic wounds and infections are the most severe, costly, and fatal types of wounds, placing them at the center of mobile wound assessment. Wound physicians assess thousands of single-view wound images from all over the world, and it may be difficult to determine the location of the wound on the body, for example, when the image is taken at close range. In our solution, end users capture an image of the wound with their mobile camera. The wound image is segmented and classified using modern convolutional neural networks, and is stored securely in the cloud for remote tracking. We use an interactive, semi-automated approach to allow users to specify the location of the wound on the body. To accomplish this we have created, to the best of our knowledge, the first 3D human surface anatomy labeling system, based on the current NYU and Anatomy Mapper labeling systems. To interactively view wounds in 3D, we present an efficient projective texture-mapping algorithm for texturing wounds onto a 3D human anatomy model. In so doing, we demonstrate an approach to 3D wound reconstruction that works even from a single wound image.
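The core step of projective texture mapping described above can be sketched as follows: each 3D vertex of the anatomy mesh is projected through the photo's camera, and the resulting image coordinates become that vertex's texture (UV) coordinates. The pinhole camera model and all numbers are assumptions for illustration, not the authors' implementation.

```python
# Sketch: pinhole projection of mesh vertices into wound-photo
# texture space for projective texture mapping.
import numpy as np

def project_uvs(vertices: np.ndarray, K: np.ndarray,
                image_size: tuple) -> np.ndarray:
    """Project Nx3 camera-space vertices with intrinsics K; return Nx2 UVs in [0, 1]."""
    px = (K @ vertices.T).T                 # homogeneous image coordinates
    px = px[:, :2] / px[:, 2:3]             # perspective divide
    return px / np.array(image_size)        # normalize to texture space

# Toy camera: 500 px focal length, principal point at the center of a 640x480 photo.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
verts = np.array([[0.0, 0.0, 2.0],          # vertex on the optical axis
                  [0.4, 0.2, 2.0]])
uvs = project_uvs(verts, K, (640, 480))
print(uvs)   # first vertex maps to the image center, UV (0.5, 0.5)
```

A full implementation would also transform vertices from model to camera space and discard back-facing or occluded vertices before sampling the wound photo.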
Automatic Burns Analysis Using Machine Learning
Burn injuries are a significant global health concern, causing high mortality and morbidity rates. Clinical assessment is the current standard for diagnosing burn injuries, but it suffers from interobserver variability and is not suitable for intermediate burn depths. To address these challenges, this thesis proposed machine learning-based techniques to evaluate burn wounds. The study utilized image-based networks to analyze two medical image databases of burn injuries from Caucasian and Black-African cohorts. A deep learning-based model, called BurnsNet, was developed and used for real-time processing, achieving high accuracy rates in discriminating between different burn depths and pressure ulcer wounds. A multiracial data representation approach was also used to address data representation bias in burn analysis, resulting in promising performance. The ML approach proved its objectivity and cost-effectiveness in assessing burn depths, providing an effective adjunct to clinical assessment. The study's findings suggest that machine learning-based techniques can reduce the workflow burden for burn surgeons and significantly reduce errors in burn diagnosis. They also highlight the potential of automation to improve burn care and enhance patients' quality of life. Funding: Petroleum Technology Development Fund (PTDF); Gombe State University study fellowship.
- …