    Preterm Infants' Pose Estimation with Spatio-Temporal Features

    Objective: Preterm infants' limb monitoring in neonatal intensive care units (NICUs) is of primary importance for assessing infants' health status and motor/cognitive development. Herein, we propose a new approach to preterm infants' limb-pose estimation that exploits spatio-temporal information to detect and track limb joints from depth videos with high reliability. Methods: Limb-pose estimation is performed using a deep-learning framework consisting of a detection and a regression convolutional neural network (CNN) for rough and precise joint localization, respectively. The CNNs are implemented to encode connectivity in the temporal direction through 3D convolution. Assessment of the proposed framework is performed through a comprehensive study with sixteen depth videos acquired during actual clinical practice from sixteen preterm infants (the babyPose dataset). Results: When applied to pose estimation, the median root mean square distance between the estimated and ground-truth poses, computed across all limbs, was 9.06 pixels, outperforming approaches based on spatial features only (11.27 pixels). Conclusion: The results showed that spatio-temporal features had a significant influence on pose-estimation performance, especially in challenging cases (e.g., homogeneous image intensity). Significance: This article significantly enhances the state of the art in the automatic assessment of preterm infants' health status by introducing spatio-temporal features for limb detection and tracking, and by being the first study to use depth videos acquired during actual clinical practice for limb-pose estimation. The babyPose dataset has been released as the first annotated dataset for infants' pose estimation.
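
    As a rough illustration of the temporal encoding described above, the following sketch shows a minimal 3D-convolutional network that maps a short clip of depth frames to one heatmap per joint. The layer sizes, joint count and clip length are hypothetical placeholders, not the paper's actual detection and regression CNNs.

        # Hypothetical sketch (PyTorch): 3D convolutions encode connectivity in the
        # temporal direction; the output is one rough-localization heatmap per joint.
        import torch
        import torch.nn as nn

        class SpatioTemporalJointNet(nn.Module):
            def __init__(self, n_joints=12):  # e.g., 3 joints per limb, 4 limbs (assumed)
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                )
                # Collapse the temporal dimension while keeping spatial resolution.
                self.temporal_pool = nn.AdaptiveAvgPool3d((1, None, None))
                self.head = nn.Conv2d(32, n_joints, kernel_size=1)

            def forward(self, clip):                        # clip: (B, 1, T, H, W) depth frames
                feat = self.encoder(clip)                   # (B, 32, T, H, W)
                feat = self.temporal_pool(feat).squeeze(2)  # (B, 32, H, W)
                return self.head(feat)                      # (B, n_joints, H, W) joint heatmaps

        heatmaps = SpatioTemporalJointNet()(torch.randn(2, 1, 4, 96, 128))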

    Supervised CNN strategies for optical image segmentation and classification in interventional medicine

    The analysis of interventional images is a topic of high interest for the medical-image analysis community. Such an analysis may provide interventional-medicine professionals with both decision support and context awareness, with the final goal of improving patient safety. The aim of this chapter is to give an overview of some of the most recent approaches (up to 2018) in the field, with a focus on Convolutional Neural Networks (CNNs) for both segmentation and classification tasks. For each approach, summary tables report the dataset used, the anatomical region involved and the performance achieved. The benefits and disadvantages of each approach are highlighted and discussed. Available datasets for algorithm training and testing, as well as commonly used performance metrics, are summarized to offer a source of information for researchers approaching the field of interventional-image analysis. Advancements in deep learning for medical-image analysis increasingly involve the interventional-medicine field. However, these advancements are undeniably slower than in other fields (e.g., preoperative-image analysis), and considerable work still needs to be done to provide clinicians with all possible support during interventional-medicine procedures.

    Disability through COVID-19 pandemic: neurorehabilitation cannot wait.

    The coronavirus disease 2019 (COVID-19) pandemic is strongly impacting all domains of our healthcare systems, including rehabilitation. In Italy, the first European country to be hit, medical activities were postponed to allow the shifting of staff and facilities to intensive care, with neurorehabilitation limited to time-dependent diseases [1], including COVID-19 complications [2, 3]. Hospital access for people with chronic neurodegenerative conditions such as multiple sclerosis, movement disorders or dementia, who are at higher risk of serious consequences from the infection [4], has been postponed. Patients are also seeking hospital care less, with stroke admissions reduced by over 50% according to an Italian survey [5], possibly for fear of being infected or of being denied visits from their families after hospitalization. This situation can be bearable only for a short time, like any initial freezing reaction to a danger.

    Preterm infants' limb-pose estimation from depth images using convolutional neural networks

    Preterm infants' limb-pose estimation is a crucial but challenging task, which may improve patient care and support clinicians in monitoring infants' movements. Work in the literature either provides approaches to whole-body segmentation and tracking, which, however, have poor clinical value, or retrieves limb pose a posteriori from limb segmentation, increasing computational costs and introducing sources of inaccuracy. In this paper, we address the problem of limb-pose estimation from a different point of view. We propose a 2D fully convolutional neural network for roughly detecting limb joints and joint connections, followed by a regression convolutional neural network for accurate joint and joint-connection position estimation. Joints from the same limb are then connected with a maximum bipartite matching approach. Our analysis does not require any prior modeling of the infants' body structure, nor any manual intervention. For developing and testing the proposed approach, we built a dataset of four videos (video length = 90 s) recorded with a depth sensor in a neonatal intensive care unit (NICU) during actual clinical practice, achieving median root mean square distances [pixels] of 10.790 (right arm), 10.542 (left arm), 8.294 (right leg) and 11.270 (left leg) with respect to the ground-truth limb pose. The idea of estimating limb pose directly from depth images may represent a future paradigm for addressing the problem of preterm infants' movement monitoring and offer all possible support to clinicians in NICUs.
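
    Since the abstract names maximum bipartite matching as the step that connects joints of the same limb, the sketch below shows one common way to solve such an assignment with SciPy's Hungarian-algorithm implementation. The cost definition (Euclidean distance between candidate positions) is an assumption for illustration, not necessarily the affinity used in the paper.

        # Illustrative assignment of detected joints to joint-connection endpoints
        # via minimum-cost bipartite matching (scipy's linear_sum_assignment).
        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def match_joints(joint_xy, connection_xy):
            """Return (joint index, connection index) pairs of the optimal 1-to-1 matching."""
            # Cost matrix: Euclidean distance between every joint/connection pair (assumed cost).
            cost = np.linalg.norm(joint_xy[:, None, :] - connection_xy[None, :, :], axis=-1)
            rows, cols = linear_sum_assignment(cost)
            return list(zip(rows.tolist(), cols.tolist()))

        joints = np.array([[10.0, 20.0], [55.0, 60.0]])
        connections = np.array([[54.0, 61.0], [11.0, 19.0]])
        print(match_joints(joints, connections))  # [(0, 1), (1, 0)]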

    Uncertainty-Aware Organ Classification for Surgical Data Science Applications in Laparoscopy

    Objective: Surgical data science is evolving into a research field that aims to observe everything occurring within and around the treatment process to provide situation-aware, data-driven assistance. In the context of endoscopic video analysis, the accurate classification of organs in the field of view of the camera poses a technical challenge. Herein, we propose a new approach to anatomical structure classification and image tagging that features an intrinsic measure of confidence to estimate its own performance with high reliability and which can be applied to both RGB and multispectral imaging (MI) data. Methods: Organ recognition is performed using a superpixel classification strategy based on textural and reflectance information. Classification confidence is estimated by analyzing the dispersion of class probabilities. Assessment of the proposed technology is performed through a comprehensive in vivo study with seven pigs. Results: When applied to image tagging, the mean accuracy in our experiments increased from 65% (RGB) and 80% (MI) to 90% (RGB) and 96% (MI) with the confidence measure. Conclusion: The results showed that the confidence measure had a significant influence on the classification accuracy, and that MI data are better suited for anatomical structure labeling than RGB data. Significance: This work significantly enhances the state of the art in the automatic labeling of endoscopic videos by introducing the use of the confidence metric, and by being the first study to use MI data for in vivo laparoscopic tissue classification. The data of our experiments will be released as the first in vivo MI dataset upon publication of this paper.
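
    The abstract states that classification confidence is estimated from the dispersion of class probabilities; the sketch below uses normalized Shannon entropy as one plausible dispersion measure, which is an assumption rather than the exact metric of the paper.

        # Dispersion-based confidence for a superpixel's class-probability vector:
        # 1.0 for a one-hot (fully confident) prediction, 0.0 for a uniform one.
        import numpy as np

        def confidence(class_probs, eps=1e-12):
            p = np.asarray(class_probs, dtype=float)
            p = p / p.sum()
            entropy = -np.sum(p * np.log(p + eps))
            return 1.0 - entropy / np.log(len(p))

        print(confidence([0.90, 0.05, 0.05]))  # peaked distribution -> higher confidence (~0.64)
        print(confidence([0.40, 0.35, 0.25]))  # near-uniform distribution -> low confidence (~0.02)

    Predictions whose confidence falls below a chosen threshold can then be withheld from tagging, which is one plausible way such a measure could drive the accuracy gains reported above.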

    Heartbeat detection by laser Doppler vibrometry and machine learning

    Background: Heartbeat detection is a crucial step in several clinical fields. Laser Doppler vibrometry (LDV) is a promising non-contact measurement technique for heartbeat detection. The aim of this work is to assess whether machine learning can be used for detecting heartbeats from the carotid LDV signal. Methods: The performances of Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF) and K-Nearest Neighbor (KNN) classifiers were compared using leave-one-subject-out cross-validation as the testing protocol on an LDV dataset collected from 28 subjects. The classification was conducted on LDV signal windows, which were labeled as beat if they contained a beat, or no-beat otherwise. The labeling procedure was performed using electrocardiography as the gold standard. Results: For the beat class, the f1-score values were 0.93, 0.93, 0.95 and 0.96 for RF, DT, KNN and SVM, respectively. No statistical differences were found between the classifiers. When testing the SVM on the full-length (10 min long) LDV signals, to simulate a real-world application, we achieved a median macro-f1 of 0.76. Conclusions: Using machine learning for heartbeat detection from carotid LDV signals showed encouraging results, representing a promising step in the field of contactless cardiovascular signal analysis.
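
    The leave-one-subject-out protocol described above maps directly onto scikit-learn's LeaveOneGroupOut splitter; the sketch below reproduces that evaluation loop on synthetic windows (the feature values, window counts and labels are placeholders, not the study's data).

        # Leave-one-subject-out evaluation of an SVM beat / no-beat classifier.
        import numpy as np
        from sklearn.model_selection import LeaveOneGroupOut
        from sklearn.svm import SVC
        from sklearn.metrics import f1_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(280, 8))            # 8 hypothetical per-window features
        y = rng.integers(0, 2, size=280)         # 1 = beat, 0 = no-beat (ECG-derived labels)
        subjects = np.repeat(np.arange(28), 10)  # 28 subjects, 10 windows each (assumed)

        f1_per_subject = []
        for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
            clf = SVC(kernel="rbf").fit(X[train_idx], y[train_idx])
            f1_per_subject.append(f1_score(y[test_idx], clf.predict(X[test_idx]), zero_division=0))
        print("median beat-class f1 across held-out subjects:", np.median(f1_per_subject))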

    A cloud-based healthcare infrastructure for neonatal intensive-care units

    Intensive medical attention to preterm babies is crucial to avoid short-term and long-term complications. Within neonatal intensive care units (NICUs), cribs are equipped with electronic devices aimed at monitoring, administering drugs, and supporting clinicians in making diagnoses and offering treatments. To manage this huge data flow, we present a cloud-based healthcare infrastructure that allows data collection from different devices (i.e., patient monitors, bilirubinometers, and transcutaneous bilirubinometers), as well as data storage, processing and transfer. Communication protocols were designed to enable communication and data transfer between the three device types and a single database, and an easy-to-use graphical user interface (GUI) was implemented. The infrastructure is currently used in the “Women’s and Children’s Hospital G. Salesi” in Ancona (Italy), supporting clinicians and health operators in their daily activities.
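
    The abstract does not detail the communication protocols or the database schema, so the sketch below is only a hypothetical illustration of the general idea: device messages from heterogeneous sources are normalized into a single shared store.

        # Hypothetical ingestion of one device message into a shared database
        # (schema, field names and units are assumptions, not the deployed system).
        import json
        import sqlite3
        from datetime import datetime, timezone

        db = sqlite3.connect("nicu_measurements.db")
        db.execute("""CREATE TABLE IF NOT EXISTS measurements
                      (patient_id TEXT, device TEXT, quantity TEXT,
                       value REAL, unit TEXT, acquired_at TEXT)""")

        def ingest(raw_message: str) -> None:
            """Store one JSON-encoded device message in the shared measurements table."""
            msg = json.loads(raw_message)
            db.execute("INSERT INTO measurements VALUES (?, ?, ?, ?, ?, ?)",
                       (msg["patient_id"], msg["device"], msg["quantity"],
                        float(msg["value"]), msg["unit"],
                        msg.get("acquired_at", datetime.now(timezone.utc).isoformat())))
            db.commit()

        ingest(json.dumps({"patient_id": "P001", "device": "bilirubinometer",
                           "quantity": "total_bilirubin", "value": 8.4, "unit": "mg/dL"}))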

    Learning-based screening of endothelial dysfunction from photoplethysmographic signals

    Endothelial dysfunction (ED) screening is of primary importance for the early diagnosis of cardiovascular diseases. Recently, approaches to ED screening have focused more and more on photoplethysmography (PPG) signal analysis, which is typically performed in a threshold-sensitive way and may not be suitable for tackling the high variability of PPG signals. The goal of this work was to present an innovative machine-learning (ML) approach to ED screening that could tackle such variability. Two research hypotheses guided this work: (H1) ML can support ED screening by classifying PPG features; and (H2) classification performance can be improved by also including anthropometric features. To investigate H1 and H2, a new dataset was built from 59 subjects. The dataset is balanced in terms of subjects with and without ED. Support vector machine (SVM), random forest (RF) and k-nearest neighbors (KNN) classifiers were investigated for feature classification. With the leave-one-out evaluation protocol, the best classification results for H1 were obtained with SVM (accuracy = 71%, recall = 59%). When testing H2, the recall was further improved to 67%. These results are a promising step toward developing a novel, intelligent PPG device to assist clinicians in performing large-scale and low-cost ED screening.
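
    To make the two hypotheses concrete, the sketch below evaluates an SVM with the leave-one-out protocol on synthetic data, comparing PPG features alone (H1) against PPG plus anthropometric features (H2). Feature names, dimensions and values are placeholders, not the study's dataset.

        # Leave-one-out accuracy for H1 (PPG features) vs. H2 (PPG + anthropometric features).
        import numpy as np
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        ppg = rng.normal(size=(59, 6))     # hypothetical PPG waveform descriptors
        anthro = rng.normal(size=(59, 3))  # hypothetical anthropometric features (e.g., age, BMI)
        y = rng.integers(0, 2, size=59)    # 1 = ED, 0 = no ED (balanced in the real dataset)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        for name, X in [("H1 (PPG only)", ppg),
                        ("H2 (PPG + anthropometric)", np.hstack([ppg, anthro]))]:
            acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
            print(name, "leave-one-out accuracy:", round(acc, 2))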