Computer- and robot-assisted Medical Intervention
Medical robotics includes assistive devices used by the physician in order to
make his/her diagnostic or therapeutic practices easier and more efficient.
This chapter focuses on such systems. It introduces the general field of
Computer-Assisted Medical Interventions, its aims, its different components and
describes the place of robots in that context. The evolutions in terms of
general design and control paradigms in the development of medical robots are
presented and issues specific to that application domain are discussed. A view
of existing systems, on-going developments and future trends is given. A
case-study is detailed. Other types of robotic help in the medical environment
(such as for assisting a handicapped person, for rehabilitation of a patient or
for replacement of some damaged/suppressed limbs or organs) are out of the
scope of this chapter. Comment: Handbook of Automation, Shimon Nof (Ed.) (2009) 000-00
Intraoperative Endoscopic Augmented Reality in Third Ventriculostomy
In neurosurgery, as a result of brain-shift, the preoperative patient models used as an intraoperative reference change. Meaningful use of the preoperative virtual models during the operation requires a model update. The NEAR project, Neuroendoscopy towards Augmented Reality, describes a new camera calibration model for highly distorted lenses and introduces the concept of active endoscopes endowed with navigation, camera calibration, augmented reality and triangulation modules.
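The NEAR abstract does not detail its calibration model; the standard baseline that models for highly distorted (e.g. endoscopic) lenses typically extend is the Brown-Conrady radial/tangential distortion model. A minimal sketch follows; the coefficient values are illustrative, not taken from the paper:

```python
import numpy as np

def distort(points, k1, k2, p1, p2):
    """Apply Brown-Conrady radial + tangential distortion to
    normalized image coordinates (N x 2 array)."""
    x, y = points[:, 0], points[:, 1]
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2          # radial term
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    y_d = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return np.stack([x_d, y_d], axis=1)

# A point on the optical axis is unaffected; an off-axis point is
# pulled inward by the (negative, barrel-type) radial coefficient.
pts = np.array([[0.0, 0.0], [0.3, 0.2]])
out = distort(pts, k1=-0.25, k2=0.05, p1=0.001, p2=-0.0005)
```

Calibration then amounts to estimating the coefficients (and intrinsics) from observations of a known pattern; wide-angle endoscope models add higher-order or fisheye terms on top of this.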
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
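Many of the optical reconstruction methods the review covers (stereo endoscopy in particular) reduce to triangulating corresponding image points from two calibrated views. A minimal linear (DLT) triangulation sketch, with made-up projection matrices and a synthetic 3D point rather than data from the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coords."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # null vector of A is the solution
    X = vt[-1]
    return X[:3] / X[3]              # homogeneous -> Euclidean

# Two normalized cameras, the second shifted 10 mm along x (stereo baseline)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-10.0], [0.0], [0.0]])])
X_true = np.array([5.0, 2.0, 50.0])
x1 = (P1 @ np.append(X_true, 1)); x1 = x1[:2] / x1[2]
x2 = (P2 @ np.append(X_true, 1)); x2 = x2[:2] / x2[2]
X_hat = triangulate(P1, P2, x1, x2)
```

In the noise-free case the estimate recovers the point exactly; with real laparoscopic images, correspondence search and lens-distortion correction dominate the error budget.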
Augmented reality technology in image-guided therapy: State-of-the-art review.
Image-guided therapies have been on the rise in recent years as they can achieve higher accuracy and are less invasive than traditional methods. By combining augmented reality technology with image-guided therapy, more organs and tissues can be observed by surgeons to improve surgical accuracy. In this review, 233 publications (dated from 2015 to 2020) on the design and application of augmented reality-based systems for image-guided therapy, including both research prototypes and commercial products, were considered for review. Based on their functions and applications, sixteen studies were selected. The engineering specifications and applications were analyzed and summarized for each study. Finally, future directions and existing challenges in the field were summarized and discussed.
A new head-mounted display-based augmented reality system in neurosurgical oncology: a study on phantom
Purpose: Benefits of minimally invasive neurosurgery mandate the development of ergonomic paradigms for neuronavigation. Augmented Reality (AR) systems can overcome the shortcomings of commercial neuronavigators. The aim of this work is to apply a novel AR system, based on a head-mounted stereoscopic video see-through display, as an aid in complex neurological lesion targeting. Effectiveness was investigated on a newly designed patient-specific head mannequin featuring an anatomically realistic brain phantom with embedded synthetically created tumors and eloquent areas. Materials and methods: A two-phase evaluation process was adopted in a simulated small tumor resection adjacent to Broca's area. Phase I involved nine subjects without neurosurgical training in performing spatial judgment tasks. In Phase II, three surgeons were involved in assessing the effectiveness of the AR-neuronavigator in performing brain tumor targeting on a patient-specific head phantom. Results: Phase I revealed the ability of the AR scene to evoke depth perception under different visualization modalities. Phase II confirmed the potential of the AR-neuronavigator in aiding the determination of the optimal surgical access to the surgical target. Conclusions: The AR-neuronavigator is intuitive, easy-to-use, and provides three-dimensional augmented information in a perceptually-correct way. The system proved to be effective in guiding skin incision, craniotomy, and lesion targeting. The preliminary results encourage a structured study to prove clinical effectiveness. Moreover, our testing platform might be used to facilitate training in brain tumour resection procedures.
Microscope Embedded Neurosurgical Training and Intraoperative System
In recent years, neurosurgery has been strongly influenced by new technologies. Computer Aided Surgery (CAS) offers several benefits for patients' safety, but fine techniques targeted to obtain minimally invasive and traumatic treatments are required, since intra-operative false movements can be devastating, resulting in patient deaths. The precision of the surgical gesture is related both to the accuracy of the available technological instruments and to the surgeon's experience. In this frame, medical training is particularly important. From a technological point of view, the use of Virtual Reality (VR) for surgeon training and Augmented Reality (AR) for intra-operative treatments offer the best results.
In addition, traditional techniques for training in surgery include the use of animals, phantoms and cadavers. The main limitation of these approaches is that live tissue has different properties from dead tissue and that animal anatomy differs significantly from human anatomy. From the medical point of view, Low-Grade Gliomas (LGGs) are intrinsic brain tumours that typically occur in younger adults. The objective of the related treatment is to remove as much of the tumour as possible while minimizing damage to the healthy brain. Pathological tissue may closely resemble normal brain parenchyma when looked at through the neurosurgical microscope. The tactile appreciation of the different consistency of the tumour compared to normal brain requires considerable experience on the part of the neurosurgeon and is a vital point.
The first part of this PhD thesis presents a system for realistic simulation (visual and haptic) of the spatula palpation of the LGG. This is the first prototype of a training system using VR, haptics and a real microscope for neurosurgery.
This architecture can also be adapted for intra-operative purposes. In this instance, a surgeon needs the basic setup for Image Guided Therapy (IGT) interventions: microscope, monitors and navigated surgical instruments. The same virtual environment can be rendered in AR onto the microscope optics. The objective is to enhance the surgeon's ability for better intra-operative orientation by providing a three-dimensional view and other information necessary for safe navigation inside the patient.
The last considerations have served as motivation for the second part of this work, which has been devoted to improving a prototype of an AR stereoscopic microscope for neurosurgical interventions, developed in our institute in a previous work. Completely new software has been developed in order to reuse the microscope hardware, enhancing both rendering performance and usability.
Since both AR and VR share the same platform, the system can be referred to as Mixed Reality System for neurosurgery.
All the components are open source or at least based on a GPL license.
A continuum robotic platform for endoscopic non-contact laser surgery: design, control, and preclinical evaluation
The application of laser technologies in surgical interventions has been accepted in the clinical
domain due to their atraumatic properties. In addition to manual application of fibre-guided
lasers with tissue contact, non-contact transoral laser microsurgery (TLM) of laryngeal tumours
has become established in ENT surgery. However, TLM requires many years of surgical training
for tumour resection in order to preserve the function of adjacent organs and thus preserve the
patient’s quality of life. The positioning of the microscopic laser applicator outside the patient
can also impede a direct line-of-sight to the target area due to anatomical variability and limit
the working space. Further clinical challenges include positioning the laser focus on the tissue
surface, imaging, planning and performing laser ablation, and motion of the target area during
surgery. This dissertation aims to address the limitations of TLM through robotic approaches and
intraoperative assistance. Although a trend towards minimally invasive surgery is apparent, no
highly integrated platform for endoscopic delivery of focused laser radiation is available to date.
Likewise, there are no known devices that incorporate scene information from endoscopic imaging
into ablation planning and execution. For focusing of the laser beam close to the target tissue, this
work first presents miniaturised focusing optics that can be integrated into endoscopic systems.
Experimental trials characterise the optical properties and the ablation performance. A robotic
platform is realised for manipulation of the focusing optics. This is based on a variable-length
continuum manipulator. The latter enables movements of the endoscopic end effector in five
degrees of freedom with a mechatronic actuation unit. The kinematic modelling and control of the
robot are integrated into a modular framework that is evaluated experimentally. The manipulation
of focused laser radiation also requires precise adjustment of the focal position on the tissue. For
this purpose, visual, haptic and visual-haptic assistance functions are presented. These support
the operator during teleoperation to set an optimal working distance. Advantages of visual-haptic
assistance are demonstrated in a user study. The system performance and usability of the overall
robotic system are assessed in an additional user study. Analogous to a clinical scenario, the
subjects follow predefined target patterns with a laser spot. The mean positioning accuracy of the
spot is 0.5 mm. Finally, methods of image-guided robot control are introduced to automate laser
ablation. Experiments confirm a positive effect of proposed automation concepts on non-contact
laser surgery.
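The abstract mentions kinematic modelling of a variable-length continuum manipulator without detail; the piecewise-constant-curvature model commonly used for such manipulators can be sketched for a single segment as follows (parameter names and values are illustrative, not taken from the dissertation):

```python
import numpy as np

def cc_tip_position(kappa, phi, length):
    """Tip position of one constant-curvature continuum segment.
    kappa: curvature [1/mm], phi: bending-plane angle [rad],
    length: arc length [mm]."""
    if abs(kappa) < 1e-9:               # straight-segment limit
        return np.array([0.0, 0.0, length])
    theta = kappa * length               # total bending angle
    r = 1.0 / kappa                      # bending radius
    return np.array([
        np.cos(phi) * r * (1 - np.cos(theta)),
        np.sin(phi) * r * (1 - np.cos(theta)),
        r * np.sin(theta),
    ])

tip_straight = cc_tip_position(0.0, 0.0, 60.0)               # along z
tip_quarter = cc_tip_position(np.pi / 2 / 60.0, 0.0, 60.0)   # 90-degree bend
```

A full model chains several such segments (and maps actuator inputs to the arc parameters); a variable-length design additionally treats `length` as an actuated variable.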
Review on Augmented Reality in Oral and Cranio-Maxillofacial Surgery: Toward 'Surgery-Specific' Head-Up Displays
In recent years, there has been an increasing interest towards augmented reality as applied to the surgical field. We conducted a systematic review of the literature classifying the augmented reality applications in oral and cranio-maxillofacial surgery (OCMS) in order to pave the way to future solutions that may ease the adoption of AR guidance in surgical practice. Publications containing the terms 'augmented reality' AND 'maxillofacial surgery', and the terms 'augmented reality' AND 'oral surgery' were searched in the PubMed database. Through the selected studies, we performed a preliminary breakdown according to general aspects, such as surgical subspecialty, year of publication and country of research; then, a more specific breakdown was provided according to technical features of AR-based devices, such as virtual data source, visualization processing mode, tracking mode, registration technique and AR display type. The systematic search identified 30 eligible publications. Most studies (14) were in orthognathic surgery, the minority (2) concerned traumatology, while 6 studies were in oncology and 8 in general OCMS. In 8 of 30 studies the AR systems were based on a head-mounted approach using smart glasses or headsets. In most of these cases (7), a video-see-through mode was implemented, while only 1 study described an optical-see-through mode. In the remaining 22 studies, the AR content was displayed on 2D displays (10), full-parallax 3D displays (6) and projectors (5). In 1 case the AR display type was not specified. AR applications are of increasing interest and adoption in oral and cranio-maxillofacial surgery; however, the quality of the AR experience represents the key requisite for a successful result. Widespread use of AR systems in the operating room may be encouraged by the availability of 'surgery-specific' head-mounted devices that should guarantee the accuracy required for surgical tasks and optimal ergonomics.