
    Development of a cognitive robotic system for simple surgical tasks

    The introduction of robotic surgery into the operating room has significantly improved the quality of many surgical procedures. Recently, research on medical robotic systems has focused on increasing their level of autonomy so that they can carry out simple surgical actions on their own. This paper reports on the development of technologies for introducing automation into the surgical workflow. The results have been obtained during the ongoing FP7 European-funded project Intelligent Surgical Robotics (I-SUR). The main goal of the project is to demonstrate that autonomous robotic surgical systems can carry out simple surgical tasks effectively and without major intervention by surgeons. To fulfil this goal, we have developed innovative solutions (both technologies and algorithms) for the following aspects: fabrication of soft organ models starting from CT images; surgical planning and execution of robot-arm motions in contact with a deformable environment; design of a surgical interface that minimizes the cognitive load of the supervising surgeon; and intra-operative sensing and reasoning to detect normal transitions and unexpected events. All these technologies have been integrated using a component-based software architecture to control a novel robot designed to perform the surgical actions under study. In this work we provide an overview of our system and report preliminary results on the automatic execution of needle insertion for the cryoablation of kidney tumours.
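
    A minimal sketch of the kind of geometric computation involved in automating a needle insertion is shown below: a straight-line trajectory from a skin entry point to a CT-derived target, with a simple depth check. The function, coordinates and limits are illustrative assumptions, not the I-SUR planner.

        # Illustrative straight-line needle-insertion planning toward a CT-derived
        # target, with a depth check. All names and limits are hypothetical.
        import numpy as np

        def plan_insertion(entry_mm, target_mm, step_mm=1.0, max_depth_mm=120.0):
            """Return waypoints (in mm) along the line from entry to target."""
            direction = target_mm - entry_mm
            depth = float(np.linalg.norm(direction))
            if depth > max_depth_mm:
                raise ValueError(f"planned depth {depth:.1f} mm exceeds {max_depth_mm} mm")
            unit = direction / depth
            n_steps = int(np.ceil(depth / step_mm))
            return [entry_mm + unit * min(i * step_mm, depth) for i in range(n_steps + 1)]

        entry = np.array([10.0, -42.0, 95.0])    # skin entry point (CT frame, mm)
        target = np.array([38.0, -61.0, 60.0])   # tumour centroid (CT frame, mm)
        waypoints = plan_insertion(entry, target)
        print(len(waypoints), "waypoints; final error:",
              round(float(np.linalg.norm(waypoints[-1] - target)), 3), "mm")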

    Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning

    Intraluminal procedures have opened up a new sub-field of minimally invasive surgery that uses flexible instruments to navigate through complex luminal structures of the body, resulting in reduced invasiveness and improved patient outcomes. One of the major challenges in this field is the accurate and precise control of the instrument inside the human body, and robotics has emerged as a promising solution to this problem. However, to achieve successful robotic intraluminal interventions, the control of the instrument needs to be automated to a large extent. The thesis first examines the state of the art in intraluminal surgical robotics and identifies the key challenges in this field, which include the need for safe and effective tool manipulation and the ability to adapt to unexpected changes in the luminal environment. To address these challenges, the thesis proposes several levels of autonomy that enable the robotic system to perform individual subtasks autonomously while still allowing the surgeon to retain overall control of the procedure. The approach facilitates the development of specialized algorithms, such as Deep Reinforcement Learning (DRL), for subtasks like navigation and tissue manipulation in order to produce robust surgical gestures. Additionally, the thesis proposes a safety framework that provides formal guarantees to prevent risky actions. The presented approaches are evaluated through a series of experiments on simulation and robotic platforms. The experiments demonstrate that subtask automation can improve the accuracy and efficiency of tool positioning and tissue manipulation while also reducing the cognitive load on the surgeon. The results of this research have the potential to improve the reliability and safety of intraluminal surgical interventions, ultimately leading to better outcomes for patients and surgeons.
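
    To make the subtask-automation idea concrete, the sketch below trains a toy tabular Q-learning agent to steer back toward the centreline of a discretized one-dimensional "lumen". It only illustrates the reinforcement-learning loop; the thesis applies deep RL to far richer state and action spaces, and the environment here is an invented stand-in.

        # Toy Q-learning for a centreline-keeping subtask; purely illustrative.
        import random

        N_STATES = 11           # lateral-offset bins, centre at index 5
        ACTIONS = (-1, 0, +1)   # steer left, hold, steer right
        CENTRE = 5

        def step(state, action):
            nxt = min(max(state + action, 0), N_STATES - 1)
            reward = 1.0 if nxt == CENTRE else -0.1 * abs(nxt - CENTRE)
            return nxt, reward

        q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
        alpha, gamma, epsilon = 0.1, 0.95, 0.2

        for episode in range(2000):
            s = random.randrange(N_STATES)
            for _ in range(30):
                a = random.choice(ACTIONS) if random.random() < epsilon \
                    else max(ACTIONS, key=lambda x: q[(s, x)])
                s2, r = step(s, a)
                q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in ACTIONS) - q[(s, a)])
                s = s2

        policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)}
        print("learned steering policy (offset bin -> action):", policy)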

    Design of a goal ontology for medical decision-support

    Thesis (S.M.), Harvard-MIT Division of Health Sciences and Technology, 2005. Includes bibliographical references (leaves 34-36). By Davide Zaccagnini.
    Objectives: There are several ongoing efforts aimed at developing formal models of medical knowledge and reasoning with which to design decision-support systems. Until now, these efforts have focused primarily on representing the content of clinical guidelines and their logical structure. The present study aims to develop a computable representation of health-care providers' intentions to be used as part of a framework for implementing clinical decision-support systems. Our goal is to create an ontology that supports retrieval of plans based on the intentions or goals of the clinician. Methods: We developed an ontological representation of medical goals, plans, clinical scenarios and other relevant entities in medical decision-making. We used the resulting ontology along with an external ontology inference engine to simulate selection of clinical recommendations based on goals. The ontology instances used in the simulation were modeled from two clinical guidelines. Testing the design: Thirty-two clinical recommendations were encoded in the experimental model, and nine test cases were created to verify the ability of the model to retrieve the plans. For all nine cases, plans were successfully retrieved. Conclusion: The ontological design we developed supported effective reasoning over a medical knowledge base. The immediate extension of this approach to fully developed medical applications may be partially limited by the lack of available editing tools; many efforts in this area are currently aimed at developing the needed technologies.
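
    The core idea of goal-based plan retrieval can be illustrated without a full ontology stack. In the hypothetical sketch below, the ontology and inference engine are replaced by a plain goal hierarchy and a set lookup; all goal and plan names are invented for illustration.

        # Goal-based retrieval of clinical plans; a stand-in for ontology reasoning.
        GOAL_PARENT = {                      # child goal -> more general goal
            "reduce_ldl": "reduce_cardiovascular_risk",
            "lower_blood_pressure": "reduce_cardiovascular_risk",
        }

        PLANS = [
            {"name": "start_statin_therapy", "achieves": {"reduce_ldl"}},
            {"name": "prescribe_ace_inhibitor", "achieves": {"lower_blood_pressure"}},
            {"name": "lifestyle_counselling", "achieves": {"reduce_cardiovascular_risk"}},
        ]

        def goal_and_ancestors(goal):
            """Yield the goal itself and every more general goal above it."""
            while goal is not None:
                yield goal
                goal = GOAL_PARENT.get(goal)

        def plans_for_goal(goal):
            """Retrieve plans indexed to the goal or to any of its ancestors."""
            wanted = set(goal_and_ancestors(goal))
            return [p["name"] for p in PLANS if p["achieves"] & wanted]

        print(plans_for_goal("reduce_ldl"))
        # ['start_statin_therapy', 'lifestyle_counselling']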

    Gesture Recognition and Control for Semi-Autonomous Robotic Assistant Surgeons

    The next stage of robotics development is to introduce autonomy and cooperation with human agents into tasks that require high levels of precision and/or that exert considerable physical strain. To guarantee the highest possible safety standards, the best approach is to devise a deterministic automaton that performs identically for each operation. Clearly, such an approach inevitably fails to adapt to changing environments or different human companions. In a surgical scenario, the greatest variability lies in the timing of the different actions performed within the same phase. This thesis explores the solutions adopted in pursuing automation in robotic minimally invasive surgery (R-MIS) and presents a novel cognitive control architecture that uses a multi-modal neural network, trained on a cooperative task performed by human surgeons, to produce an action segmentation that provides the required timing for each action. Full control of phase execution is maintained via a deterministic Supervisory Controller, and execution safety is ensured by a velocity-constrained Model-Predictive Controller.
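
    The sketch below illustrates the supervisory layer of such an architecture: a deterministic state machine that accepts a recognized action only when it is a legal successor of the current phase. The neural action-segmentation model is mocked by a fixed label stream, and the phase names are hypothetical rather than those used in the thesis.

        # Deterministic supervisory controller driven by action-segmentation labels.
        ALLOWED = {
            "idle":     {"approach"},
            "approach": {"grasp"},
            "grasp":    {"handover"},
            "handover": {"retract"},
            "retract":  {"idle"},
        }

        def supervise(labels, phase="idle"):
            """Advance the phase only on legal transitions; ignore spurious labels."""
            for label in labels:
                if label in ALLOWED[phase]:
                    print(f"transition: {phase} -> {label}")
                    phase = label
                elif label != phase:
                    print(f"ignored out-of-order action '{label}' while in '{phase}'")
            return phase

        # Mocked per-frame output of the segmentation network.
        segmented = ["approach", "approach", "grasp", "retract", "handover", "retract"]
        print("final phase:", supervise(segmented))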

    Advancements in Medical Imaging and Diagnostics with Deep Learning Technologies

    Medical imaging has long been a cornerstone of diagnostic medicine, providing clinicians with a non-invasive method to visualize internal structures and processes. However, traditional imaging techniques have faced challenges in resolution, safety concerns related to radiation exposure, and the need for invasive procedures to obtain clearer visualization. With the advent of deep learning technologies, significant advances have been made in medical imaging, addressing many of these challenges and introducing new capabilities. This research examines the integration of deep learning to enhance image resolution, leading to clearer and more detailed visualizations. Furthermore, the ability to reconstruct three-dimensional images from traditional two-dimensional scans offers a more comprehensive view of the area under examination. Automated analysis powered by deep learning algorithms not only speeds up the diagnostic process but also detects anomalies that might be overlooked by the human eye. Predictive analysis based on these enhanced images can forecast the likelihood of disease, and real-time analysis during surgery provides immediate feedback, enhancing the precision of medical procedures. Safety in medical imaging has also improved: techniques powered by deep learning require lower radiation doses, minimizing risks to patients, and the enhanced clarity and detail of the images reduce the need for invasive procedures, further ensuring patient safety. The integration of imaging data with Electronic Health Records (EHR) has paved the way for personalized care recommendations, tailoring treatments to individual patient history and current diagnostics. Lastly, the role of deep learning extends to medical education, where it aids in creating realistic simulations and models, equipping medical professionals with better training tools.
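
    As a concrete example of the resolution-enhancement theme, the sketch below defines a small SRCNN-style network that maps an already-upscaled slice to a sharper one. The architecture and layer sizes are generic examples rather than a specific model from the surveyed literature, and PyTorch is assumed to be available.

        # Minimal SRCNN-style super-resolution network (illustrative only).
        import torch
        import torch.nn as nn

        class TinySRCNN(nn.Module):
            """Three-layer CNN refining a bicubically upscaled image."""
            def __init__(self, channels=1):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # feature extraction
                    nn.ReLU(inplace=True),
                    nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
                    nn.ReLU(inplace=True),
                    nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
                )

            def forward(self, x):
                return self.net(x)

        model = TinySRCNN()
        upscaled_slice = torch.rand(1, 1, 128, 128)   # e.g. an upscaled CT slice
        print(model(upscaled_slice).shape)            # torch.Size([1, 1, 128, 128])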

    Place-based approaches to child and family services

    This paper synthesizes the conceptual and empirical literature on place-based approaches to meeting the needs of young children and their families. A specific focus of the paper is the potential contribution of place-based approaches to service reconfiguration and coordination. The paper begins by outlining the sweeping social changes that have occurred in developed nations over the past few decades and their impact on children, families and communities. It explores the ‘joined up’ problems faced by families and communities in the contemporary world, and highlights the need to reconfigure services to support families more effectively. The paper then focuses on ‘joined up’ solutions: what we know about how to meet the challenges posed by the complex problems that characterise our society. Next, the paper explores what a place-based approach involves and what role it can play in supporting families with young children. The rationale underpinning place-based approaches is outlined and the evidence for the effectiveness of the approach is summarised. The paper then looks at what can be learned from efforts to implement place-based initiatives in Australia and overseas, and explores the issues that need to be addressed in implementing this strategy. The ways in which the early childhood service system might be reconfigured are also considered, and the paper ends with a consideration of the policy and implementation implications.

    Interpretable task planning and learning for autonomous robotic surgery with logic programming

    This thesis addresses the long-term goal of full (supervised) autonomy in surgery, which is characterized by dynamic environmental (anatomical) conditions, an unpredictable workflow of execution and workspace constraints. The scope is to reach autonomy at the level of sub-tasks of a surgical procedure, i.e. repetitive, tedious operations (e.g., dexterous manipulation of small objects in a constrained environment, such as needle and wire in suturing). This will help reduce execution time, hospital costs and surgeon fatigue during the whole procedure, while further improving patient recovery time. A novel framework for autonomous surgical task execution is presented in the first part of this thesis, based on answer set programming (ASP), a logic programming paradigm, for task planning (i.e., coordination of elementary actions and motions). Logic programming allows surgical task knowledge to be encoded directly, representing a plan reasoning methodology rather than a set of pre-defined plans. This solution offers several key advantages, such as reliable, human-like, interpretable plan generation and real-time monitoring of the environment and the workflow for prompt adaptation and failure recovery. Moreover, an extended review of logic programming for robotics is presented, motivating the choice of ASP for surgery and providing a useful guide for robotic designers. In the second part of the thesis, a novel framework based on inductive logic programming (ILP) is presented for learning and refining surgical task knowledge. ILP guarantees fast learning from very few examples, a common constraint in surgery. In addition, a novel action identification algorithm is proposed, based on automatic extraction of environmental features from videos, dealing for the first time with small, noisy datasets that collect different workflows of execution under environmental variations. This allows a systematic methodology for unsupervised ILP to be defined. All the results in this thesis are validated on a non-standard version of the ring transfer task, a benchmark training exercise for surgeons which mimics some of the challenges of real surgery, e.g. constrained bimanual motion in a small space.
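
    To give a flavour of ASP-based task planning, the sketch below encodes a tiny two-step pick-and-place domain and solves it with the clingo Python API (one common ASP solver); the thesis' actual surgical encoding is far richer, and the domain here is invented for illustration.

        # Tiny ASP planning example solved with clingo (pip install clingo).
        import clingo

        PROGRAM = """
        step(1..2).
        action(pick; place).

        % exactly one action per time step
        1 { occurs(A, T) : action(A) } 1 :- step(T).

        % 'place' is only allowed after a 'pick' at an earlier step
        picked_before(T) :- occurs(pick, T2), step(T), step(T2), T2 < T.
        :- occurs(place, T), not picked_before(T).

        % the goal requires 'place' to happen at some step
        goal :- occurs(place, T), step(T).
        :- not goal.

        #show occurs/2.
        """

        def on_model(model):
            plan = sorted(str(atom) for atom in model.symbols(shown=True))
            print("plan:", plan)   # expected: ['occurs(pick,1)', 'occurs(place,2)']

        ctl = clingo.Control()
        ctl.add("base", [], PROGRAM)
        ctl.ground([("base", [])])
        ctl.solve(on_model=on_model)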

    Artificial General Intelligence for Medical Imaging

    In this review, we explore the potential applications of Artificial General Intelligence (AGI) models in healthcare, focusing on foundational Large Language Models (LLMs), Large Vision Models, and Large Multimodal Models. We emphasize the importance of integrating clinical expertise, domain knowledge, and multimodal capabilities into AGI models. In addition, we lay out key roadmaps to guide the development and deployment of healthcare AGI models. Throughout the review, we provide critical perspectives on the potential challenges and pitfalls associated with deploying large-scale AGI models in the medical field. This comprehensive review aims to offer insights into the future implications of AGI in medical imaging, healthcare and beyond.

    Brain networks under attack: robustness properties and the impact of lesions

    A growing number of studies approach the brain as a complex network, the so-called ‘connectome’. Adopting this framework, we examine what types or extent of damage the brain can withstand, referred to as network ‘robustness’, and, conversely, which kinds of distortion can be expected after brain lesions. To this end, we review computational lesion studies and empirical studies investigating network alterations in brain tumour, stroke and traumatic brain injury patients. Common to these three types of focal injury is that there is no unequivocal relationship between the anatomical lesion site and its topological characteristics within the brain network. Furthermore, the large-scale network effects of these focal lesions are compared to those of a widely studied multifocal neurodegenerative disorder, Alzheimer’s disease, in which central parts of the connectome are preferentially affected. Results indicate that human brain networks are remarkably resilient to different types of lesions compared to other types of complex networks, such as random or scale-free networks. However, lesion effects have been found to depend critically on the topological position of the lesion. In particular, damage to network hub regions, and especially to those connecting different subnetworks, was found to cause the largest disturbances in network organization. Regardless of lesion location, evidence from empirical and computational lesion studies shows that lesions cause significant alterations in global network topology, although the direction of these changes remains to be elucidated. Encouragingly, both empirical and modelling studies have indicated that after focal damage the connectome carries the potential to recover, at least to some extent, with normalization of graph metrics being related to improved behavioural and cognitive functioning. To conclude, we highlight possible clinical implications of these findings, point out several methodological limitations that pertain to the study of brain diseases using a network approach, and provide suggestions for future research.
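
    The logic of the computational lesion studies reviewed here can be sketched in a few lines: remove either hub nodes or random nodes from a graph and compare a global topology metric before and after. The synthetic scale-free graph below is only a stand-in for an empirical connectome, and networkx is assumed to be available.

        # Toy "lesion" experiment: targeted hub removal vs. random node removal.
        import random
        import networkx as nx

        def efficiency_after_lesion(graph, removed_nodes):
            g = graph.copy()
            g.remove_nodes_from(removed_nodes)
            return nx.global_efficiency(g)

        G = nx.barabasi_albert_graph(n=90, m=3, seed=1)   # toy scale-free network
        k = 9                                             # lesion 10% of the nodes

        hubs = sorted(G.nodes, key=G.degree, reverse=True)[:k]      # targeted attack
        rand_nodes = random.Random(1).sample(list(G.nodes), k)      # random failure

        print(f"baseline efficiency:   {nx.global_efficiency(G):.3f}")
        print(f"after random lesions:  {efficiency_after_lesion(G, rand_nodes):.3f}")
        print(f"after hub lesions:     {efficiency_after_lesion(G, hubs):.3f}")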