
    Networking Architecture and Key Technologies for Human Digital Twin in Personalized Healthcare: A Comprehensive Survey

    A digital twin (DT) is a promising technique for digitally and accurately representing actual physical entities. One typical advantage of a DT is that it can not only virtually replicate a system's detailed operations but also analyze its current condition, predict future behaviour, and refine control optimization. Although DTs have been widely implemented in various fields, such as smart manufacturing and transportation, the conventional paradigm is limited to embodying non-living entities, e.g., robots and vehicles. For human-centric systems, a novel concept called the human digital twin (HDT) has therefore been proposed. In particular, an HDT allows an in silico representation of an individual human body that dynamically reflects molecular, physiological, emotional, and psychological status, as well as lifestyle evolution. These capabilities motivate the application of HDT in personalized healthcare (PH), where it can facilitate remote monitoring, diagnosis, prescription, surgery, and rehabilitation. Despite this large potential, however, HDT faces substantial research challenges in different aspects and has recently become an increasingly popular topic. In this survey, with a specific focus on the networking architecture and key technologies for HDT in PH applications, we first discuss the differences between HDT and conventional DTs, followed by the universal framework and essential functions of HDT. We then analyze its design requirements and challenges in PH applications. After that, we provide an overview of the networking architecture of HDT, comprising the data acquisition, data communication, computation, data management, and data analysis and decision-making layers. Besides reviewing the key technologies for implementing this networking architecture in detail, we conclude the survey by presenting future research directions for HDT.

    Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning

    Intraluminal procedures have opened up a new sub-field of minimally invasive surgery that uses flexible instruments to navigate through complex luminal structures of the body, resulting in reduced invasiveness and improved patient benefits. One of the major challenges in this field is the accurate and precise control of the instrument inside the human body, and robotics has emerged as a promising solution. However, to achieve successful robotic intraluminal interventions, control of the instrument needs to be automated to a large extent. The thesis first examines the state of the art in intraluminal surgical robotics and identifies the key challenges in this field, which include the need for safe and effective tool manipulation and the ability to adapt to unexpected changes in the luminal environment. To address these challenges, the thesis proposes several levels of autonomy that enable the robotic system to perform individual subtasks autonomously while still allowing the surgeon to retain overall control of the procedure. This approach facilitates the development of specialized algorithms, such as Deep Reinforcement Learning (DRL), for subtasks like navigation and tissue manipulation, producing robust surgical gestures. Additionally, the thesis proposes a safety framework that provides formal guarantees to prevent risky actions. The presented approaches are evaluated through a series of experiments on simulation and robotic platforms, demonstrating that subtask automation can improve the accuracy and efficiency of tool positioning and tissue manipulation while reducing the cognitive load on the surgeon. The results of this research have the potential to improve the reliability and safety of intraluminal surgical interventions, ultimately leading to better outcomes for patients and surgeons.
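The reinforcement-learning idea behind subtask automation can be sketched with a toy stand-in. The environment below is a hypothetical 1-D "lumen" in which an agent advances an instrument toward a target position; the states, actions, and rewards are illustrative assumptions, and tabular Q-learning replaces the thesis's actual deep RL setup, which is not specified here.

```python
import random

# Toy stand-in for a navigation subtask: advance a flexible instrument
# along a 1-D lumen of N positions until it reaches the target at N-1.
N = 6              # corridor length; state N-1 is the target
ACTIONS = (-1, 1)  # retract / advance

def step(state, action):
    """Apply an action; small per-move cost, reward on reaching the target."""
    nxt = max(0, min(N - 1, state + action))
    reward = 1.0 if nxt == N - 1 else -0.01
    return nxt, reward, nxt == N - 1

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with an epsilon-greedy behaviour policy."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: Q[(s, a)])
            s2, r, done = step(s, a)
            # One-step temporal-difference update toward the bootstrapped target.
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            s = s2
    return Q

def greedy_path(Q):
    """Roll out the learned greedy policy from the start state."""
    s, path = 0, [0]
    for _ in range(2 * N):
        s, _, done = step(s, max(ACTIONS, key=lambda a: Q[(s, a)]))
        path.append(s)
        if done:
            break
    return path

Q = train()
print(greedy_path(Q))  # the learned policy should advance straight to the target
```

After training, the greedy policy moves monotonically toward the target; a real intraluminal task would replace the table with a neural network over image or pose observations.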

    Quantifying cognitive and mortality outcomes in older patients following acute illness using epidemiological and machine learning approaches

    Introduction: Cognitive and functional decompensation during acute illness in older people is poorly understood. It remains unclear how delirium, an acute confusional state reflective of cognitive decompensation, is contextualised by baseline premorbid cognition and relates to long-term adverse outcomes. High-dimensional machine learning offers a novel, feasible and enticing approach for stratifying acute illness in older people, improving treatment consistency while optimising future research design. Methods: Longitudinal associations were analysed from the Delirium and Population Health Informatics Cohort (DELPHIC) study, a prospective cohort of people aged ≥70 years resident in Camden, with cognitive and functional ascertainment at baseline and 2-year follow-up, and daily assessments during incident hospitalisation. Second, using routine clinical data from UCLH, I constructed an extreme gradient-boosted trees model predicting 600-day mortality for unselected acute admissions of oldest-old patients, with mechanistic inferences. Third, hierarchical agglomerative clustering was performed to demonstrate structure within DELPHIC participants, with predictive implications for survival and length of stay. Results: i. Delirium is associated with increased rates of cognitive decline and mortality risk in a dose-dependent manner, with an interaction between baseline cognition and delirium exposure: those with the highest delirium exposure but the best premorbid cognition have the “most to lose”. ii. High-dimensional multimodal machine learning models can predict mortality in oldest-old populations with 0.874 accuracy. The anterior cingulate and angular gyri, and extracranial soft tissue, are the highest contributory intracranial and extracranial features respectively. iii. Clinically useful acute illness subtypes in older people can be described using longitudinal clinical, functional, and biochemical features.
Conclusions: Interactions between baseline cognition and delirium exposure during acute illness in older patients result in divergent long-term adverse outcomes. Supervised machine learning can robustly predict mortality in oldest-old patients, producing a valuable prognostication tool from routinely collected data that is ready for clinical deployment. Preliminary findings suggest possible discernible subtypes within acute illness in older people.
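Hierarchical agglomerative clustering, used above to find structure among participants, can be illustrated with a minimal single-linkage sketch over synthetic data. The feature vectors and linkage choice below are illustrative assumptions, not the study's actual clinical features or clustering configuration.

```python
import math

def single_linkage(points, k):
    """Agglomerative clustering: start with singletons, repeatedly merge the
    two clusters with the smallest minimum inter-point distance (single
    linkage) until k clusters remain. Returns a list of index sets."""
    clusters = [{i} for i in range(len(points))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(math.dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] |= clusters[j]  # merge j into i
        del clusters[j]
    return clusters

# Two synthetic "illness subtypes": short-stay vs long-stay admissions,
# described by hypothetical (length of stay, inflammatory marker) pairs.
pts = [(2, 10), (3, 12), (2.5, 9), (20, 80), (22, 85), (21, 78)]
print(sorted(sorted(c) for c in single_linkage(pts, 2)))  # → [[0, 1, 2], [3, 4, 5]]
```

A real analysis would standardise features and cut the full dendrogram rather than fix k in advance, but the merge loop is the core of the method.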

    Automatic extraction of constraints in manipulation tasks for autonomy and interaction

    Tasks routinely executed by humans involve sequences of actions performed with high dexterity and coordination. Fully specifying these actions such that a robot could replicate the task is often difficult. Furthermore, the uncertainties introduced by the use of different tools or changing configurations demand that the specification be generic while emphasising the important task aspects, i.e. the constraints. The first challenge of this thesis is therefore inferring these constraints from repeated demonstrations. In addition, humans explaining a task to another person rely on that person's ability to apprehend missing or implicit information, so observations contain user-specific cues alongside knowledge on performing the task. Our second challenge is thus correlating the task constraints with the user's behavior to improve the robot's performance. We address these challenges using a Programming by Demonstration framework. In the first part of the thesis we describe an approach for decomposing demonstrations into actions and extracting task-space constraints as continuous features that apply throughout each action. The constraints consist of: (1) the reference frame for performing manipulation, (2) the variables of interest relative to this frame, allowing a decomposition into force and position control, and (3) a stiffness gain modulating the contribution of force and position. We then extend this approach to asymmetrical bimanual tasks by extracting features that enable arm coordination: the master–slave role that enables precedence, and the motion–motion or force–motion coordination that facilitates physical interaction through an object. The set of constraints and the time-independent encoding of each action form a task prototype, used to execute the task.
In the second part of the thesis we focus on discovering additional features implicit in the demonstrations with respect to two aspects of the teaching interactions: (1) characterizing the user's performance and (2) improving the user's behavior. For the first goal we assess the skill of the user, and implicitly the quality of the demonstrations, using objective task-specific metrics related directly to the constraints. We further analyze ways of making the user aware of the robot's state during teaching by providing task-related feedback. The feedback has a direct influence on both the teaching efficiency and the user's perception of the interaction. We evaluated our approaches in robotic experiments encompassing daily activities, using two 7-degree-of-freedom Kuka LWR robotic arms and a 53-degree-of-freedom iCub humanoid robot.
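One simple intuition behind extracting constraints from repeated demonstrations is that task-space dimensions whose values barely vary across demonstrations are constrained, while high-variance dimensions are left free. The sketch below illustrates only this variance test; the demonstrations and threshold are synthetic assumptions, and the thesis's actual features (reference frames, force/position decomposition, stiffness gains) are richer than this.

```python
import statistics

def constrained_dims(demos, threshold=0.05):
    """demos: list of equal-length end-effector poses, one per demonstration.
    Returns indices of dimensions whose variance across demos is below the
    threshold, i.e. dimensions treated as task constraints."""
    dims = len(demos[0])
    variances = [statistics.pvariance([d[i] for d in demos]) for i in range(dims)]
    return [i for i, v in enumerate(variances) if v < threshold]

# Three demonstrations of an (x, y, z) pose at a grasp point: z is tightly
# repeated (a contact-height constraint), while x and y vary with the
# object's location and are therefore free.
demos = [(0.10, 0.50, 0.300), (0.42, 0.21, 0.302), (0.77, 0.90, 0.299)]
print(constrained_dims(demos))  # → [2]
```

In practice such statistics would be computed per action segment and relative to a candidate reference frame, so that a dimension constrained in the object's frame is detected even when it varies in the robot's base frame.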

    xxAI - Beyond Explainable AI

    This is an open access book. Statistical machine learning (ML) has triggered a renaissance of artificial intelligence (AI). While the most successful ML models, including Deep Neural Networks (DNN), have developed better predictivity, they have become increasingly complex, at the expense of human interpretability (correlation vs. causality). The field of explainable AI (xAI) has emerged with the goal of creating tools and models that are both predictive and interpretable and understandable for humans. Explainable AI is receiving huge interest in the machine learning and AI research communities, across academia, industry, and government, and there is now an excellent opportunity to push towards successful explainable AI applications. This volume will help the research community to accelerate this process, to promote a more systematic use of explainable AI to improve models in diverse applications, and ultimately to better understand how current explainable AI methods need to be improved and what kind of theory of explainable AI is needed. After overviews of current methods and challenges, the editors include chapters that describe new developments in explainable AI. The contributions are from leading researchers in the field, drawn from both academia and industry, and many of the chapters take a clear interdisciplinary approach to problem-solving. The concepts discussed include explainability, causability, and AI interfaces with humans, and the applications include image processing, natural language, law, fairness, and climate science.

    Mixed-reality visualization environments to facilitate ultrasound-guided vascular access

    Ultrasound-guided needle insertions at the site of the internal jugular vein (IJV) are routinely performed to access the central venous system. Yet ultrasound-guided insertions still carry high rates of carotid artery puncture, as clinicians rely on 2D information to perform a 3D procedure. The limitations of 2D ultrasound guidance motivated the research question: “Do 3D ultrasound-based environments improve IJV needle insertion accuracy?” We addressed this by developing advanced surgical navigation systems based on tracked surgical tools and ultrasound with various visualizations. Point-to-line ultrasound calibration enables the use of tracked ultrasound. We automated the fiducial localization required for this calibration method such that fiducials can be automatically localized within 0.25 mm of the manual equivalent. The point-to-line calibration obtained with both manual and automatic localizations produced average normalized distance errors of less than 1.5 mm from point targets. Another calibration method was developed that registers an optical tracking system and the VIVE Pro head-mounted display (HMD) tracking system with sub-millimetre and sub-degree accuracy compared to ground-truth values. This co-calibration enabled the development of an HMD needle navigation system, in which the calibrated ultrasound image and tracked models of the needle, needle trajectory, and probe were visualized in the HMD. In a phantom experiment, 31 clinicians had a 96% success rate using the HMD system compared to 70% for the ultrasound-only approach (p = 0.018). We developed a machine-learning-based vascular reconstruction pipeline that automatically returns accurate 3D reconstructions of the carotid artery and IJV given sequential tracked ultrasound images.
This reconstruction pipeline was used to develop a surgical navigation system in which tracked models of the needle, needle trajectory, and the 3D z-buffered vasculature from a phantom were visualized in a common coordinate system on a screen. This system improved insertion accuracy and achieved a 100% success rate, compared to 70% under ultrasound guidance (p = 0.041), across 20 clinicians during the phantom experiment. Overall, accurate calibrations and machine learning algorithms enable the development of advanced 3D ultrasound systems for needle navigation, both in an immersive first-person perspective and on a screen, illustrating that 3D ultrasound environments outperform the 2D ultrasound guidance used clinically.
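Registering two tracking coordinate systems, as in the optical-tracker-to-HMD co-calibration above, amounts to a least-squares rigid alignment of corresponding points. The sketch below shows the idea in 2D with a closed-form solution; the thesis works in 3D (where SVD-based methods are standard), and the fiducial coordinates here are synthetic.

```python
import math

def fit_rigid_2d(P, Q):
    """Least-squares rotation theta and translation t with Q ≈ R(theta)·P + t.
    P, Q: lists of corresponding (x, y) points in the two coordinate systems."""
    n = len(P)
    # Centroids of each point set.
    pcx = sum(x for x, _ in P) / n; pcy = sum(y for _, y in P) / n
    qcx = sum(x for x, _ in Q) / n; qcy = sum(y for _, y in Q) / n
    s_dot = s_cross = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        ax, ay = px - pcx, py - pcy   # centred source point
        bx, by = qx - qcx, qy - qcy   # centred target point
        s_dot += ax * bx + ay * by
        s_cross += ax * by - ay * bx
    # Optimal rotation angle; the 2-D special case of the Kabsch solution.
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = qcx - (c * pcx - s * pcy)
    ty = qcy - (s * pcx + c * pcy)
    return theta, (tx, ty)

# Recover a known 30-degree rotation and (1, -2) translation from 3 fiducials.
theta0 = math.radians(30)
P = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
Q = [(math.cos(theta0) * x - math.sin(theta0) * y + 1.0,
      math.sin(theta0) * x + math.cos(theta0) * y - 2.0) for x, y in P]
theta, t = fit_rigid_2d(P, Q)
print(round(math.degrees(theta), 6), round(t[0], 6), round(t[1], 6))
```

With noiseless correspondences the transform is recovered exactly; with real fiducial measurements the same estimator gives the least-squares fit, and the residuals quantify the calibration accuracy reported above.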
