
    Recent Advances in Artificial Intelligence-Assisted Ultrasound Scanning

    Get PDF
    Funded by the Spanish Ministry of Economic Affairs and Digital Transformation (Project MIA.2021.M02.0005 TARTAGLIA, from the Recovery, Resilience, and Transformation Plan financed by the European Union through Next Generation EU funds). TARTAGLIA takes place under the R&D Missions in Artificial Intelligence program, which is part of the Spain Digital 2025 Agenda and the Spanish National Artificial Intelligence Strategy.
    Ultrasound (US) is a flexible imaging modality used globally as a first-line medical examination in many clinical settings. It benefits from the continued evolution of ultrasonic technologies and a well-established US-based digital health system. Nevertheless, its diagnostic performance still presents challenges due to inherent characteristics of US imaging, such as manual operation and significant operator dependence. Artificial intelligence (AI) has proven able to recognize complicated scan patterns and provide quantitative assessments of imaging data, so AI technology has the potential to help physicians obtain more accurate and repeatable outcomes in US examinations. In this article, we review recent advances in AI-assisted US scanning. We identify the main areas where AI is being used to facilitate US scanning: standard plane recognition and organ identification, the extraction of standard clinical planes from 3D US volumes, and the scanning guidance of US acquisitions performed by humans or robots. In general, the lack of standardization and reference datasets in this field makes it difficult to perform comparative studies among the proposed methods. More open-access repositories of large US datasets with detailed acquisition information are needed to support this very active research field, which is expected to have a very positive impact on US imaging.
    Depto. de Estructura de la Materia, Física Térmica y Electrónica, Fac. de Ciencias Físicas
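One of the areas the review names, extracting standard clinical planes from 3D US volumes, reduces in its simplest form to resampling a 2D slice along an arbitrary plane of the volume. The sketch below is illustrative only (not from the article): `extract_plane`, its parameters, and the voxel-space conventions are assumptions, it uses nearest-neighbour lookup for brevity where a real pipeline would use trilinear interpolation, and in the reviewed methods the plane parameters would come from a learned model rather than be given by hand.

```python
import numpy as np

def extract_plane(volume, origin, u, v, size=(64, 64)):
    """Sample a 2D slice along an arbitrary plane of a 3D volume.

    volume : (Z, Y, X) array of voxel intensities
    origin : 3-vector, voxel coordinates of the plane centre
    u, v   : orthonormal in-plane direction vectors (voxel units)
    Nearest-neighbour lookup keeps the sketch short; trilinear
    interpolation would be used in practice.
    """
    h, w = size
    rows = np.arange(h) - h / 2.0
    cols = np.arange(w) - w / 2.0
    # voxel coordinates of every pixel of the requested plane
    pts = (origin[None, None, :]
           + rows[:, None, None] * u[None, None, :]
           + cols[None, :, None] * v[None, None, :])
    # round to the nearest voxel and clamp to the volume bounds
    idx = np.clip(np.rint(pts).astype(int), 0, np.array(volume.shape) - 1)
    return volume[idx[..., 0], idx[..., 1], idx[..., 2]]
```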

    Implementation of safe human robot collaboration for ultrasound guided radiation therapy

    Get PDF
    This thesis shows that safe human-robot interaction and collaboration are possible for Ultrasound (US) guided radiotherapy. With the chosen methodology, all components (US, optical room monitoring, and robot) could be linked, integrated, and realized in a realistic clinical workflow. US guided radiotherapy offers a complement and alternative to existing image-guided therapy approaches. The real-time capability of US and its high soft-tissue contrast allow target structures to be tracked and radiation delivery to be modulated. However, Ultrasound guided radiation therapy (USgRT) is not yet clinically established and is still under development, as reliable and safe methods of image acquisition are not yet available. In particular, loss of contact between the US probe and the patient surface poses a problem during patient movements such as breathing. To address this, a breathing and motion compensation (BaMC) system was developed in this work, which, together with the safe control of a lightweight robot, represents a new development for USgRT. The developed BaMC can be used to control the US probe while it remains in contact with the patient. The conducted experiments confirmed that steady contact with the patient surface, and thus continuous image acquisition, can be ensured by the developed methodology. In addition, the image position in space can be maintained with submillimeter accuracy. The BaMC integrates seamlessly into a developed clinical workflow. The graphical user interfaces developed for this purpose, as well as direct haptic control of the robot, provide an easy interaction option for the clinical user. The developed autonomous positioning of the transducer is a good example of the feasibility of the approach: with the help of the user interface, an acoustic plane can be defined and autonomously approached by the robot in a time-efficient and precise manner. The tests carried out show that this methodology is suitable for a wide range of transducer positions. Safety in a human-robot interaction task is essential and requires individually customized concepts. In this work, adequate monitoring mechanisms were found to ensure both patient and staff safety. Collision tests showed that the implemented detection measures work and that the robot moves into a safe parking position. The forces acting on the patient could thus be kept well below the limits required by the standard. This work has demonstrated the first important steps towards safe robot-assisted ultrasound imaging, which is not only applicable to USgRT. The developed interfaces provide the basis for further investigations in this field, especially in the area of image recognition, for example to determine the position of the target structure. With the proof of safety of the developed system, first studies in humans can now follow
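The abstract does not give the control law behind the BaMC, but the core idea of keeping a probe in contact with a moving surface can be pictured as a force regulator that servos the probe along the contact normal. The sketch below is purely an illustrative assumption (function name, gains, and limits are invented, and a clinical system would add force filtering, full 6-DOF compliance, and the safety monitoring described above):

```python
def probe_position_step(z, f_measured, f_target=5.0, kp=0.002, dt=0.02,
                        v_max=0.01):
    """One control cycle of a proportional contact-force regulator.

    z          : commanded probe position along the contact normal (m),
                 larger z meaning deeper indentation of the surface
    f_measured : measured contact force (N)
    Returns the next commanded position; the correction speed is
    saturated at v_max (m/s) as a crude safety limit.
    """
    error = f_target - f_measured        # positive -> press further in
    v = max(-v_max, min(v_max, kp * error))
    return z + v * dt
```

Against a stiff spring-like surface this converges to the target contact force within a few seconds of simulated time; the saturation term is one simple way to bound the forces acting on the patient, in the spirit of the limits mentioned above.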

    Medical Robotics

    Get PDF
    The first generation of surgical robots is already being installed in a number of operating rooms around the world. Robotics is being introduced to medicine because it allows unprecedented control and precision of surgical instruments in minimally invasive procedures. So far, robots have been used to position an endoscope, perform gallbladder surgery, and correct gastroesophageal reflux and heartburn. The ultimate goal of the robotic surgery field is to design a robot that can be used to perform closed-chest, beating-heart surgery. The use of robotics in surgery will undoubtedly expand over the coming decades. Minimally Invasive Surgery (MIS) is a revolutionary approach in surgery: the operation is performed with instruments and viewing equipment inserted into the body through small incisions created by the surgeon, in contrast to open surgery with large incisions. This minimizes surgical trauma and damage to healthy tissue, resulting in shorter patient recovery times. The aim of this book is to provide an overview of the state of the art and to present new ideas, original results, and practical experiences in this expanding area. Accordingly, many chapters in the book concern advanced research in this growing field. The book provides critical analysis of clinical trials and an assessment of the benefits and risks of applying these technologies. This book is certainly a small sample of the research activity on Medical Robotics going on around the globe as you read it, but it surely covers a good deal of what has been done in the field recently, and as such it serves as a valuable source for researchers interested in the subjects involved, whether or not they are currently “medical roboticists”

    Intraoperative Navigation Systems for Image-Guided Surgery

    Get PDF
    Recent technological advancements in medical imaging equipment have resulted in a dramatic improvement of image accuracy, now capable of providing useful information previously not available to clinicians. In the surgical context, intraoperative imaging provides crucial value for the success of the operation. Many nontrivial scientific and technical problems need to be addressed in order to efficiently exploit the different information sources available in advanced operating rooms today. In particular, it is necessary to provide: (i) accurate tracking of surgical instruments, (ii) real-time matching of images from different modalities, and (iii) reliable guidance toward the surgical target. All of these requirements must be satisfied to realize effective intraoperative navigation systems for image-guided surgery. Various solutions have been proposed and successfully tested in the field of image navigation systems over the last ten years; nevertheless, several problems still arise in most applications regarding the precision, usability, and capabilities of existing systems. Identifying and solving these issues represents an urgent scientific challenge. This thesis investigates the current state of the art in intraoperative navigation systems, focusing in particular on the challenges related to efficient and effective use of ultrasound imaging during surgery. The main contributions of this thesis to the state of the art are: (i) techniques for automatic motion compensation and therapy monitoring applied to a novel ultrasound-guided surgical robotic platform in the context of abdominal tumor thermoablation; and (ii) novel image-fusion-based navigation systems for ultrasound-guided neurosurgery in the context of brain tumor resection, highlighting their applicability as off-line surgical training instruments. The proposed systems, which were designed and developed in the framework of two international research projects, have been tested in real or simulated surgical scenarios, showing promising results toward their application in clinical practice

    Imaging : making the invisible visible : proceedings of the symposium, 18 May 2000, Technische Universiteit Eindhoven

    Get PDF

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Get PDF
    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), most devices remain focused on improving end-effector dexterity and precision, as well as access to minimally invasive surgery. This paper provides a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. Results on robot end-effector collision avoidance and reduced occlusion remain promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future

    An Affordable Portable Obstetric Ultrasound Simulator for Synchronous and Asynchronous Scan Training

    Get PDF
    The increasing use of Point of Care (POC) ultrasound presents a challenge in providing efficient training to new POC ultrasound users. In response to this need, we have developed an affordable, compact, laptop-based obstetric ultrasound training simulator. It offers freehand ultrasound scanning on an abdomen-sized scan surface with a 5-degrees-of-freedom sham transducer and utilizes 3D ultrasound image volumes as training material. The simulator user interface renders a virtual torso whose body surface models the abdomen of a particular pregnant scan subject. A virtual transducer scans the virtual torso by following the sham transducer movements on the scan surface. The obstetric ultrasound training is self-paced and guided by the simulator using a set of tasks focused on three broad areas, referred to as modules: 1) medical ultrasound basics, 2) orientation to obstetric space, and 3) fetal biometry. A learner completes the scan training in three steps: (i) watching demonstration videos, (ii) practicing scan skills by sequentially completing the tasks in Modules 2 and 3, with scan evaluation feedback and help functions available, and (iii) a final scan exercise on new image volumes to assess the acquired competency. After each training task has been completed, the simulator evaluates whether the task was carried out correctly by comparing anatomical landmarks identified and/or measured by the learner to reference landmark bounds created by algorithms or pre-inserted by experienced sonographers. Based on the simulator, an ultrasound E-training system has been developed for medical practitioners for whom ultrasound training is not accessible locally. The system, composed of a dedicated server and multiple networked simulators, provides synchronous and asynchronous training modes and is able to operate at a very low bit rate.
The synchronous (or group-learning) mode allows all training participants to observe the same 2D image in real time, such as a demonstration by an instructor or the scan performed by a chosen learner. The synchronization of 2D images on the different simulators is achieved by directly transmitting the position and orientation of the sham transducer, rather than the ultrasound image, making system performance independent of network bandwidth. The asynchronous (or self-learning) mode corresponds to the self-paced training described above. In both modes, the E-training system allows all training participants to stay networked and communicate with each other via a text channel. To verify the simulator performance and training efficacy, we conducted several performance experiments and clinical evaluations. The performance experiment results indicated that the simulator was able to generate more than 30 2D ultrasound images per second with acceptable image quality on medium-priced computers. In our initial experiment investigating the simulator's training capability and feasibility, three experienced sonographers individually scanned two image volumes on the simulator. They agreed that the simulated images and the scan experience were adequately realistic for ultrasound training and that the training procedure followed standard obstetric ultrasound protocol. They further noted that the simulator had the potential to become a good supplemental training tool for medical students and resident doctors. A clinical study investigating the simulator's training efficacy was integrated into the clerkship program of the Department of Obstetrics and Gynecology, University of Massachusetts Memorial Medical Center. A total of 24 third-year medical students were recruited, and each was directed to scan six image volumes on the simulator in two 2.5-hour sessions. The study results showed that the time to complete the training tasks successfully decreased significantly as the training progressed. In a post-training survey, the students reported that they considered the simulator-based training useful and suitable for medical students and resident doctors. The experiment to validate the performance of the E-training system showed that the average transmission bit rate was approximately 3-4 kB/s; data loss was less than 1%, and no loss of 2D images was visually detected. The results also showed that the 2D images on all networked simulators could be considered synchronous even over inter-continental connections
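The low bit rate is plausible because a pose sample is tiny compared with an image. As a rough illustration only (this is not the system's actual wire format; the packet layout below is an assumption), a timestamped position-plus-quaternion sample fits in 36 bytes, so a 30 Hz stream stays around 1 kB/s before protocol overhead, consistent with the reported 3-4 kB/s:

```python
import struct

# little-endian: float64 timestamp, 3x float32 position, 4x float32 quaternion
POSE_FMT = "<d3f4f"

def pack_pose(t, position, quaternion):
    """Serialize one sham-transducer pose sample into a fixed-size packet."""
    return struct.pack(POSE_FMT, t, *position, *quaternion)

def unpack_pose(packet):
    """Inverse of pack_pose: returns (timestamp, position, quaternion)."""
    vals = struct.unpack(POSE_FMT, packet)
    return vals[0], vals[1:4], vals[4:8]
```

Sending the pose and re-rendering the slice locally, instead of streaming pixels, is what decouples the image synchronization from network bandwidth.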

    High-Accuracy CT-Guided Puncture Planning for the Lower Abdomen Using an Ultrafine Needle

    Get PDF
    Waseda University degree record number: Shin 8149. Waseda University

    Medical SLAM in an autonomous robotic system

    Get PDF
    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for registering multi-modal patient-specific data, both to enhance the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and to provide intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This thesis addresses the ambitious goal of achieving surgical autonomy through the study of the anatomical environment, starting with the technology needed to analyze the scene: vision sensors. The first part of the thesis presents a novel endoscope for autonomous surgical task execution, which combines a standard stereo camera with a depth sensor. This solution introduces several key advantages, such as the possibility of 3D reconstruction at a greater distance than with traditional endoscopes. The problem of hand-eye calibration is then tackled, uniting the vision system and the robot in a single reference system and increasing the accuracy of the surgical work plan. The second part of the thesis addresses the problem of 3D reconstruction and the algorithms currently in use. In MIS, simultaneous localization and mapping (SLAM) can be used to localize the pose of the endoscopic camera and build a 3D model of the tissue surface. Another key element for MIS is real-time knowledge of the pose of surgical tools with respect to the surgical camera and the underlying anatomy. Starting from the ORB-SLAM algorithm, we modified the architecture to make it usable in an anatomical environment by registering the pre-operative information of the intervention to the map obtained from SLAM. Once the SLAM algorithm was proven usable in an anatomical environment, it was improved by adding semantic segmentation to distinguish dynamic features from static ones. All the results in this thesis are validated on training setups that mimic some of the challenges of real surgery, and on setups that simulate the human body, within the Autonomous Robotic Surgery (ARS) and Smart Autonomous Robotic Assistant Surgeon (SARAS) projects
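The dynamic-feature step can be pictured as masking out keypoints that land on pixels a semantic segmentation labels as moving (e.g. an instrument), so that only static anatomy feeds the SLAM map. The sketch below is an illustrative assumption, not the thesis's implementation, which works inside the ORB-SLAM architecture:

```python
import numpy as np

def filter_static_keypoints(keypoints, dynamic_mask):
    """Keep only keypoints that fall on pixels labelled static.

    keypoints    : (N, 2) array of (x, y) pixel coordinates
    dynamic_mask : (H, W) boolean array, True where the segmentation
                   labels the pixel dynamic (e.g. a moving instrument)
    """
    xs = keypoints[:, 0].astype(int)
    ys = keypoints[:, 1].astype(int)
    h, w = dynamic_mask.shape
    in_bounds = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    # clip only to make the lookup safe; out-of-bounds points are dropped
    keep = in_bounds & ~dynamic_mask[np.clip(ys, 0, h - 1),
                                     np.clip(xs, 0, w - 1)]
    return keypoints[keep]
```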