Robot Autonomy for Surgery
Autonomous surgery involves having surgical tasks performed by a robot
operating under its own control, with partial or no human involvement. There are
several important advantages of automation in surgery, which include increasing
precision of care due to sub-millimeter robot control, real-time utilization of
biosignals for interventional care, improvements to surgical efficiency and
execution, and computer-aided guidance under various medical imaging and
sensing modalities. While these methods may displace some tasks of surgical
teams and individual surgeons, they also present new capabilities in
interventions that are too difficult or go beyond the skills of a human. In
this chapter, we provide an overview of robot autonomy in commercial use and in
research, and present some of the challenges faced in developing autonomous
surgical robots.
Computer- and robot-assisted Medical Intervention
Medical robotics includes assistive devices used by physicians to make
diagnostic or therapeutic procedures easier and more efficient.
This chapter focuses on such systems. It introduces the general field of
Computer-Assisted Medical Interventions, its aims, its different components and
describes the place of robots in that context. The evolutions in terms of
general design and control paradigms in the development of medical robots are
presented and issues specific to that application domain are discussed. A view
of existing systems, on-going developments and future trends is given. A
case-study is detailed. Other types of robotic help in the medical environment
(such as for assisting a handicapped person, for rehabilitation of a patient or
for replacement of some damaged/suppressed limbs or organs) are out of the
scope of this chapter.
Comment: Handbook of Automation, Shimon Nof (Ed.) (2009) 000-00
Distributed computing methodology for training neural networks in an image-guided diagnostic application
Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, for training neural networks with various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the developed training algorithms leads to considerable speedup, especially when large network architectures and training sets are used.
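The scheme the abstract describes, each processor evaluating the error function and gradient on its partition of the training set before a global reduction and weight update, can be sketched as follows (Python's multiprocessing stands in for the parallel virtual machine; the linear model, learning rate, and data are illustrative, not the paper's network):

```python
import numpy as np
from multiprocessing import Pool

def partial_grad(args):
    # Each worker evaluates the error and gradient on its slice of the data.
    w, X, y = args
    err = X @ w - y
    return err @ err, 2.0 * X.T @ err  # (partial SSE, partial gradient)

def distributed_step(w, X, y, pool, n_parts, lr=0.1):
    # Partition rows across workers, gather once per step (low synchronization),
    # sum the partial results, and take a gradient-descent step.
    chunks = [(w, Xc, yc) for Xc, yc in zip(np.array_split(X, n_parts),
                                            np.array_split(y, n_parts))]
    results = pool.map(partial_grad, chunks)
    sse = sum(r[0] for r in results)
    grad = sum(r[1] for r in results)
    return w - lr * grad / len(y), sse

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    w_true = np.arange(1.0, 6.0)
    y = X @ w_true
    w = np.zeros(5)
    with Pool(4) as pool:                      # 4 cooperating processes
        for _ in range(200):
            w, sse = distributed_step(w, X, y, pool, n_parts=4)
    print(np.allclose(w, w_true, atol=1e-2))   # prints True
```

The granularity here matches the paper's description: each synchronization point exchanges only one scalar error and one gradient vector per worker, so communication cost stays small relative to the per-partition computation.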
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
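As a minimal illustration of one family of optical techniques such a review covers, passive stereo, depth can be recovered from pixel disparity given the camera focal length and baseline. The pinhole-stereo relation and the parameter values below are a generic sketch, not taken from the paper:

```python
import numpy as np

def stereo_depth(disparity_px, focal_px, baseline_m):
    # Pinhole stereo model: depth Z = f * B / d for each pixel's disparity d.
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        z = focal_px * baseline_m / d
    z[~np.isfinite(z)] = np.nan  # zero disparity -> point at infinity, unusable
    return z

# Illustrative stereo-laparoscope-like parameters (assumed for the example):
disp = np.array([[32.0, 64.0],
                 [128.0, 0.0]])                    # disparities in pixels
depth = stereo_depth(disp, focal_px=800.0, baseline_m=0.004)
print(depth)  # 0.1 m, 0.05 m, 0.025 m, nan (zero disparity)
```

Producing the dense, reliable disparity map itself on specular, deforming tissue is precisely the hard part that the reviewed methods address.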
Fast and adaptive fractal tree-based path planning for programmable bevel tip steerable needles
© 2016 IEEE. Steerable needles are a promising technology for minimally invasive surgery, as they can provide access to difficult-to-reach locations while avoiding delicate anatomical regions. However, due to the unpredictable tissue deformation associated with needle insertion and the complexity of many surgical scenarios, a real-time path planning algorithm with a high update frequency would be advantageous. Real-time path planning for nonholonomic systems is common to a broad variety of fields, ranging from aerospace to submarine navigation. In this letter, we propose to take advantage of the architecture of graphics processing units (GPUs) to apply fractal theory and thus parallelize real-time path planning computation. This novel approach, termed adaptive fractal trees (AFT), allows for the creation of a database of paths covering the entire domain, which are dense, invariant, procedurally produced, adaptable in size, and present a recursive structure. The generated cache of paths can in turn be analyzed in parallel to determine the most suitable path in a fraction of a second. The ability to cope with nonholonomic constraints, as well as constraints in the space of states of any complexity or number, is intrinsic to the AFT approach, rendering it highly versatile. Three-dimensional (3-D) simulations applied to needle steering in neurosurgery show that our approach can successfully compute paths in real time, enabling complex brain navigation.
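The core idea, a procedurally generated recursive tree of kinematically feasible paths whose cached branches are then evaluated in parallel for the best candidate, can be sketched in 2-D. The curvature bound, tree depth, obstacle, and cost below are illustrative; the paper's GPU implementation works in 3-D and evaluates branches concurrently:

```python
import math

def build_tree(x, y, heading, depth, step=1.0, turn=0.3, branch=3):
    # Recursively grow a tree of short segments with a bounded heading change
    # per step, mimicking a nonholonomic needle that cannot exceed its
    # curvature limit. Returns every root-to-leaf path as a list of points.
    if depth == 0:
        return [[(x, y)]]
    paths = []
    for k in range(branch):
        dh = (k - branch // 2) * turn          # bounded heading change
        h = heading + dh
        nx, ny = x + step * math.cos(h), y + step * math.sin(h)
        for tail in build_tree(nx, ny, h, depth - 1, step, turn, branch):
            paths.append([(x, y)] + tail)
    return paths

def best_path(paths, goal, obstacle, radius):
    # Scan the whole cached tree (this loop is what the GPU parallelizes) and
    # keep the collision-free path whose endpoint is closest to the goal.
    def ok(p):
        return all(math.hypot(px - obstacle[0], py - obstacle[1]) > radius
                   for px, py in p)
    feasible = [p for p in paths if ok(p)]
    return min(feasible, key=lambda p: math.hypot(p[-1][0] - goal[0],
                                                  p[-1][1] - goal[1]))

tree = build_tree(0.0, 0.0, 0.0, depth=5)      # 3^5 = 243 cached paths
path = best_path(tree, goal=(5.0, 1.0), obstacle=(2.5, 0.0), radius=0.4)
print(len(tree))  # prints 243
```

Because the tree is precomputed and invariant, replanning after tissue deformation reduces to re-scoring the cache against the updated obstacle map, which is what makes high update frequencies attainable.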
Prefrontal cortex activation upon a demanding virtual hand-controlled task: A new frontier for neuroergonomics
Functional near-infrared spectroscopy (fNIRS) is a non-invasive vascular-based functional neuroimaging technology that can assess, simultaneously from multiple cortical areas, concentration changes in oxygenated and deoxygenated hemoglobin at the level of the cortical microcirculation blood vessels. fNIRS, with its high degree of ecological validity and its minimal physical constraints on subjects, could represent a valid tool for monitoring cortical responses in the research field of neuroergonomics. In virtual reality (VR), real situations can be replicated with greater control than is obtainable in the real world. Therefore, VR is the ideal setting for studies of neuroergonomics applications. The aim of the present study was to investigate, by a 20-channel fNIRS system, the dorsolateral/ventrolateral prefrontal cortex (DLPFC/VLPFC) in subjects performing a demanding VR hand-controlled task (HCT). Given the complexity of the HCT, its execution should require the allocation of attentional resources and the integration of different executive functions. The HCT simulates interaction with a real, remotely driven system operating in a critical environment. The hand movements were captured by a high spatial and temporal resolution 3-dimensional (3D) hand-sensing device, the LEAP Motion controller, a gesture-based control interface that could be used in VR for tele-operated applications. Fifteen university students were asked to guide, with their right hand/forearm, a virtual ball (VB) over a virtual route (VROU) reproducing a 42 m narrow road including some critical points. The subjects tried to travel as far as possible without making the VB fall. The distance traveled by the guided VB was 70.2 ± 37.2 m. The less skilled subjects failed several times in guiding the VB over the VROU. Nevertheless, a bilateral VLPFC activation in response to the HCT execution was observed in all the subjects.
No correlation was found between the distance traveled by the guided VB and the corresponding cortical activation. These results confirm the suitability of fNIRS technology for objectively evaluating cortical hemodynamic changes occurring in VR environments. Future studies could contribute to a better understanding of the cognitive mechanisms underlying human performance in both expert and non-expert operators during the simulation of different demanding/fatiguing activities.
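The hemoglobin concentration changes that fNIRS reports are conventionally recovered from measured light attenuation via the modified Beer–Lambert law (a standard relation in the field, not stated in the abstract):

```latex
\Delta A(\lambda) = \left[ \varepsilon_{\mathrm{HbO_2}}(\lambda)\,\Delta c_{\mathrm{HbO_2}}
                         + \varepsilon_{\mathrm{HHb}}(\lambda)\,\Delta c_{\mathrm{HHb}} \right]
                    \, L \cdot \mathrm{DPF}(\lambda)
```

Here $\Delta A(\lambda)$ is the attenuation change at wavelength $\lambda$, $\varepsilon$ the molar extinction coefficients, $\Delta c$ the concentration changes, $L$ the source–detector separation, and $\mathrm{DPF}$ the differential pathlength factor. Measuring at two wavelengths yields two such equations, from which the oxygenated and deoxygenated concentration changes reported in the study can both be solved for.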
Trends in virtual reality technologies for the learning patient
NextMed convened the Medicine Meets Virtual Reality 22 (MMVR 22) conference in 2016. Since 1992, the conference has brought together a diverse group of researchers to share creative solutions for the evolving challenge of integrating virtual reality tools into medical education. Virtual reality (VR) and its enabling technologies use hardware and software to simulate environments and encounters where users can interact and learn. The MMVR 22 symposium proceedings contain projects that support a variety of learners: medical students, practitioners, soldiers, and patients. This report examines the trends in virtual reality technologies for patients navigating their medical and healthcare learning. The learning patient seeks more than intervention; they seek prevention. From virtual humans and environments to motion sensors and haptic devices, patients are surrounded by increasingly rich and transformative data-driven tools. Applied data enables VR applications to simulate experience, predict health outcomes, and motivate new behavior. The MMVR 22 proceedings present investigations into the usability of wearable devices, the efficacy of avatar inclusion, and the viability of multi-player gaming. With the increasing need for individualized and scalable programming, only committed open-source efforts will align instructional designers, technology integrators, trainers, and clinicians.