18,218 research outputs found
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, enhancing the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces and providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
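As an aside on one family of techniques such reviews cover, the sketch below shows how a rectified stereo laparoscope pair can be turned into a depth map with off-the-shelf semi-global block matching. The image paths, focal length, and baseline are placeholder assumptions for illustration, not values from the paper.

```python
# Minimal sketch of passive stereo depth recovery, one optical technique
# commonly reviewed for laparoscopic 3D reconstruction. Paths and camera
# parameters are placeholders, not values from the reviewed work.
import cv2
import numpy as np

left = cv2.imread("laparoscope_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("laparoscope_right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching on a rectified stereo pair.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM returns fixed-point disparities

focal_px = 700.0     # focal length in pixels (placeholder)
baseline_m = 0.004   # stereo baseline of the laparoscope in metres (placeholder)

valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]  # depth = f * B / d
```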
Augmented Reality-based Feedback for Technician-in-the-loop C-arm Repositioning
Interventional C-arm imaging is crucial to percutaneous orthopedic procedures
as it enables the surgeon to monitor the progress of surgery on the anatomy
level. Minimally invasive interventions require repeated acquisition of X-ray
images from different anatomical views to verify tool placement. Achieving and
reproducing these views often comes at the cost of increased surgical time and
radiation dose to both patient and staff. This work proposes a marker-free
"technician-in-the-loop" Augmented Reality (AR) solution for C-arm
repositioning. The X-ray technician operating the C-arm interventionally is
equipped with a head-mounted display capable of recording desired C-arm poses
in 3D via an integrated infrared sensor. For C-arm repositioning to a
particular target view, the recorded C-arm pose is restored as a virtual object
and visualized in an AR environment, serving as a perceptual reference for the
technician. We conduct experiments in a setting simulating orthopedic trauma
surgery. Our proof-of-principle findings indicate that the proposed system can
reduce the average of 2.76 X-ray images acquired per desired view to zero,
suggesting substantial reductions of radiation dose during C-arm repositioning.
The proposed AR solution is a first step towards facilitating communication
between the surgeon and the surgical staff, improving the quality of surgical
image acquisition, and enabling context-aware guidance for surgery rooms of the
future. The concept of technician-in-the-loop design will become relevant to
various interventions considering the expected advancements of sensing and
wearable computing in the near future.
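To make the pose bookkeeping concrete, the sketch below stores a desired C-arm pose as a rigid transform and reports how far a current pose is from it. The 4x4 matrices, which in the described system would come from the head-mounted display's infrared tracking, are placeholders here, and the error metric is an illustrative choice rather than the authors' evaluation protocol.

```python
# Hedged sketch of pose bookkeeping for a technician-in-the-loop workflow:
# record a desired C-arm pose and measure the deviation of the current pose
# from it. Poses are assumed to be 4x4 rigid transforms in a common frame.
import numpy as np

def pose_error(target: np.ndarray, current: np.ndarray):
    """Return (translation error in metres, rotation error in degrees)."""
    delta = np.linalg.inv(current) @ target
    trans_err = np.linalg.norm(delta[:3, 3])
    # Rotation angle recovered from the trace of the 3x3 rotation block.
    cos_angle = (np.trace(delta[:3, :3]) - 1.0) / 2.0
    rot_err = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return trans_err, rot_err

recorded_pose = np.eye(4)                  # pose saved when the desired view is approved
current_pose = np.eye(4)
current_pose[:3, 3] = [0.02, 0.0, 0.01]    # C-arm displaced by a few centimetres

print(pose_error(recorded_pose, current_pose))  # feedback guiding the technician back to the view
```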
Robot Autonomy for Surgery
Autonomous surgery involves having surgical tasks performed by a robot
operating under its own will, with partial or no human involvement. There are
several important advantages of automation in surgery, which include increasing
precision of care due to sub-millimeter robot control, real-time utilization of
biosignals for interventional care, improvements to surgical efficiency and
execution, and computer-aided guidance under various medical imaging and
sensing modalities. While these methods may displace some tasks of surgical
teams and individual surgeons, they also present new capabilities in
interventions that are too difficult or go beyond the skills of a human. In
this chapter, we provide an overview of robot autonomy in commercial use and in
research, and present some of the challenges faced in developing autonomous
surgical robots.
Visualization-Based Mapping of Language Function in the Brain
Cortical language maps, obtained through intraoperative electrical stimulation studies, provide a rich source of information for research on language organization. Previous studies have shown interesting correlations between the distribution of essential language sites and such behavioral indicators as verbal IQ, and have provided suggestive evidence for regarding human language cortex as an organization of multiple distributed systems. Noninvasive studies using ECoG, PET, and functional MR lend support to this model; however, there are as yet no studies that integrate these two forms of information. In this paper we describe a method for mapping the stimulation data onto a 3-D MRI-based neuroanatomic model of the individual patient. The mapping is done by comparing an intraoperative photograph of the exposed cortical surface with a computer-based MR visualization of the surface, interactively indicating corresponding stimulation sites, and recording the 3-D MR machine coordinates of the indicated sites. Repeatability studies were performed to validate the accuracy of the mapping technique. Six observers (a neurosurgeon, a radiologist, and four computer scientists) independently mapped 218 stimulation sites from 12 patients. The mean distance of a mapping from the mean location of each site was 2.07 mm, with a standard deviation of 1.5 mm, or within 5.07 mm with 95% confidence. Since the surgical sites are accurate within approximately 1 cm, these results show that the visualization-based approach is accurate within the limits of the stimulation maps. When incorporated within the kind of information system envisioned by the Human Brain Project, this anatomically based method will not only provide a key link between noninvasive and invasive approaches to understanding language organization, but will also provide the basis for studying the relationship between language function and anatomical variability.
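The repeatability figures quoted above are consistent with a mean-plus-two-standard-deviations reading (2.07 mm + 2 × 1.5 mm = 5.07 mm). The snippet below reproduces that arithmetic on a synthetic set of per-site mapping distances; the data are illustrative, not the study's measurements.

```python
# Sketch of the repeatability statistics: distances of each observer's mapping
# from the mean mapped location of a site, summarized as mean, SD, and a
# mean + 2*SD bound (roughly 95% under a normal assumption). Synthetic data.
import numpy as np

distances_mm = np.random.default_rng(0).gamma(shape=2.0, scale=1.0, size=218)

mean_mm = distances_mm.mean()
sd_mm = distances_mm.std(ddof=1)
bound_95_mm = mean_mm + 2.0 * sd_mm

print(f"mean {mean_mm:.2f} mm, SD {sd_mm:.2f} mm, 95% within {bound_95_mm:.2f} mm")
```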
Relational Reasoning Network (RRN) for Anatomical Landmarking
Accurately identifying anatomical landmarks is a crucial step in deformation
analysis and surgical planning for craniomaxillofacial (CMF) bones. Available
methods require segmentation of the object of interest for precise landmarking.
Unlike those, our purpose in this study is to perform anatomical landmarking
using the inherent relation of CMF bones without explicitly segmenting them. We
propose a new deep network architecture, called relational reasoning network
(RRN), to accurately learn the local and the global relations of the landmarks.
Specifically, we are interested in learning landmarks in the CMF region: mandible,
maxilla, and nasal bones. The proposed RRN works in an end-to-end manner,
utilizing learned relations of the landmarks based on dense-block units and
without the need for segmentation. Given a few landmarks as input, the
proposed system accurately and efficiently localizes the remaining landmarks on
the aforementioned bones. For a comprehensive evaluation of RRN, we used
cone-beam computed tomography (CBCT) scans of 250 patients. The proposed system
identifies the landmark locations very accurately even when there are severe
pathologies or deformations in the bones. The proposed RRN has also revealed
unique relationships among the landmarks that help us reason about the
informativeness of the individual landmark points. RRN is invariant to the
order of the landmarks, and it allowed us to discover the optimal
configurations (number and location) of landmarks to be localized within the
object of interest (mandible) or nearby objects (maxilla and nasal bones). To
the best of our knowledge, this is the first algorithm of its kind to find
anatomical relations of objects using deep learning.
Comment: 10 pages, 6 figures, 3 tables