Augmented Reality-based Feedback for Technician-in-the-loop C-arm Repositioning
Interventional C-arm imaging is crucial to percutaneous orthopedic procedures
as it enables the surgeon to monitor the progress of surgery on the anatomy
level. Minimally invasive interventions require repeated acquisition of X-ray
images from different anatomical views to verify tool placement. Achieving and
reproducing these views often comes at the cost of increased surgical time and
radiation dose to both patient and staff. This work proposes a marker-free
"technician-in-the-loop" Augmented Reality (AR) solution for C-arm
repositioning. The X-ray technician operating the C-arm interventionally is
equipped with a head-mounted display capable of recording desired C-arm poses
in 3D via an integrated infrared sensor. For C-arm repositioning to a
particular target view, the recorded C-arm pose is restored as a virtual object
and visualized in an AR environment, serving as a perceptual reference for the
technician. We conduct experiments in a setting simulating orthopedic trauma
surgery. Our proof-of-principle findings indicate that the proposed system can
reduce the average of 2.76 X-ray images required per desired view to zero,
suggesting a substantial reduction in radiation dose during C-arm repositioning.
The proposed AR solution is a first step towards facilitating communication
between the surgeon and the surgical staff, improving the quality of surgical
image acquisition, and enabling context-aware guidance for surgery rooms of the
future. The concept of technician-in-the-loop design will become relevant to
various interventions, considering the expected advancements in sensing and
wearable computing in the near future.
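A repositioning aid of this kind ultimately reduces to comparing the live C-arm pose against the recorded target pose. As an illustrative sketch only (the paper presents the target pose visually in AR rather than as numeric residuals, and the function below is hypothetical), the deviation between two rigid poses can be computed as:

```python
import numpy as np

def pose_error(T_current, T_target):
    """Residual between two 4x4 rigid poses, e.g. the live C-arm pose
    vs. a previously recorded target pose.

    Returns (translation error, rotation angle in degrees)."""
    # Relative transform that maps the current pose onto the target pose.
    delta = np.linalg.inv(T_current) @ T_target
    t_err = np.linalg.norm(delta[:3, 3])
    # Rotation angle recovered from the trace of the relative rotation.
    cos_theta = np.clip((np.trace(delta[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return t_err, np.degrees(np.arccos(cos_theta))
```

When the residual reaches zero in both components, the recorded view is restored; an AR overlay conveys the same information perceptually instead of numerically.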
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
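The passive stereo methods surveyed in reviews of this kind recover surface geometry by triangulating corresponding image points seen from two laparoscope views. A minimal sketch of standard linear (DLT) triangulation — a textbook building block, not any specific method from the review — might look like:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D surface point from two views.

    P1, P2: 3x4 camera projection matrices of the two laparoscope views.
    x1, x2: 2D pixel coordinates of the same point in each view.
    Returns the 3D point in world coordinates."""
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the smallest
    # singular value (the approximate null-space of A).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Applied densely to matched points across the stereo pair, this yields the soft-tissue surface point cloud that registration and deformation tracking then operate on.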
Autofluorescence lifetime augmented reality as a means for real-time robotic surgery guidance in human patients.
Due to the loss of tactile feedback, the assessment of tumor margins during robotic surgery is based only on visual inspection, which is neither sufficiently sensitive nor specific. Here we demonstrate time-resolved fluorescence spectroscopy (TRFS) as a novel technique to complement the visual inspection of oral cancers during transoral robotic surgery (TORS), in real time and without the need for exogenous contrast agents. TRFS enables identification of cancerous tissue by its distinct autofluorescence signature, which is associated with alterations of tissue structure and biochemical profile. A prototype TRFS instrument was integrated with the da Vinci surgical robot, and the combined system was validated in swine and in human patients. Label-free, real-time assessment and visualization of tissue biochemical features during robotic surgery, as demonstrated here, has the potential to improve intraoperative decision making not only during TORS but also during other robotic procedures, without modification of conventional clinical protocols.
Trends in virtual reality technologies for the learning patient
NextMed convened the Medicine Meets Virtual Reality 22 (MMVR 22) conference in 2016. Since 1992, the conference has brought together a diverse group of researchers to share creative solutions for the evolving challenge of integrating virtual reality tools into medical education. Virtual reality (VR) and its enabling technologies use hardware and software to simulate environments and encounters in which users can interact and learn. The MMVR 22 symposium proceedings contain projects that support a variety of learners: medical students, practitioners, soldiers, and patients. This report considers the trends in virtual reality technologies for patients navigating their medical and healthcare learning. The learning patient seeks more than intervention; they seek prevention. From virtual humans and environments to motion sensors and haptic devices, patients are surrounded by increasingly rich and transformative data-driven tools. Applied data enables VR applications to simulate experience, predict health outcomes, and motivate new behavior. The MMVR 22 proceedings present investigations into the usability of wearable devices, the efficacy of avatar inclusion, and the viability of multi-player gaming. With the increasing need for individualized and scalable programming, only committed open-source efforts will align instructional designers, technology integrators, trainers, and clinicians.
On uncertainty propagation in image-guided renal navigation: Exploring uncertainty reduction techniques through simulation and in vitro phantom evaluation
Image-guided interventions (IGIs) entail the use of imaging to augment or replace direct vision during therapeutic interventions, with the overall goal of providing effective treatment in a less invasive manner, as an alternative to traditional open surgery, while reducing patient trauma and shortening post-procedure recovery time. IGIs rely on pre-operative images, surgical tracking and localization systems, and intra-operative images to provide correct views of the surgical scene. Pre-operative images are used to generate patient-specific anatomical models that are then registered to the patient using the surgical tracking system, and often complemented with real-time, intra-operative images. IGI systems are subject to uncertainty from several sources, including surgical instrument tracking / localization uncertainty, model-to-patient registration uncertainty, user-induced navigation uncertainty, as well as the uncertainty associated with the calibration of various surgical instruments and intra-operative imaging devices (e.g., a laparoscopic camera) instrumented with surgical tracking sensors. All these uncertainties impact the overall targeting accuracy, which represents the error associated with the navigation of a surgical instrument to a specific target to be treated under image guidance provided by the IGI system. Therefore, understanding the overall uncertainty of an IGI system is paramount to the outcome of the intervention, as procedure success entails achieving accuracy tolerances specific to individual procedures. This work has focused on studying the navigation uncertainty, along with techniques to reduce it, for an IGI platform dedicated to image-guided renal interventions.
We constructed life-size replica patient-specific kidney models from pre-operative images using 3D printing and tissue-emulating materials and conducted experiments to characterize the uncertainty of both optical and electromagnetic surgical tracking systems, the uncertainty associated with the virtual model-to-physical phantom registration, as well as the uncertainty associated with live augmented reality (AR) views of the surgical scene achieved by enhancing the pre-procedural model and tracked surgical instrument views with live video views acquired using a camera tracked in real time. To better understand the effects of the tracked instrument calibration, registration fiducial configuration, and tracked camera calibration on the overall navigation uncertainty, we conducted Monte Carlo simulations that enabled us to identify optimal configurations, which were subsequently validated experimentally using patient-specific phantoms in the laboratory. To mitigate the inherent accuracy limitations associated with the pre-procedural model-to-patient registration and their effect on the overall navigation, we also demonstrated the use of tracked video imaging to update the registration, enabling us to restore targeting accuracy to within its acceptable range. Lastly, we conducted several validation experiments using patient-specific kidney-emulating phantoms, with post-procedure CT imaging as reference ground truth, to assess the accuracy of AR-guided navigation in the context of in vitro renal interventions. This work helped answer key questions about uncertainty propagation in image-guided renal interventions and led to the development of techniques and tools that help reduce and optimize the overall navigation / targeting uncertainty.
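The core idea of Monte Carlo uncertainty propagation here is that independent error sources — tracking, registration, calibration — are sampled repeatedly and combined to produce a distribution of the resulting targeting error. A minimal sketch, with purely illustrative error magnitudes that are not taken from the study, and assuming independent zero-mean isotropic Gaussian errors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-sigma error magnitudes per axis (mm); illustrative only.
sigma_tracking = 0.25      # tracker localization error
sigma_registration = 1.0   # model-to-patient registration error
sigma_calibration = 0.5    # instrument / camera calibration error

N = 100_000
# Each draw is a 3D displacement of the navigated instrument tip from
# the intended target; independent sources add in 3D.
err = (rng.normal(0.0, sigma_tracking, (N, 3))
       + rng.normal(0.0, sigma_registration, (N, 3))
       + rng.normal(0.0, sigma_calibration, (N, 3)))

# Euclidean targeting error per simulated trial.
target_error = np.linalg.norm(err, axis=1)
print(f"mean targeting error: {target_error.mean():.2f} mm")
print(f"95th percentile:      {np.percentile(target_error, 95):.2f} mm")
```

Sweeping one sigma at a time (e.g., fiducial configurations that shrink the registration term) identifies which source dominates the targeting error — the kind of question the simulations above are designed to answer before phantom validation.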
Digital technologies for virtual recomposition : the case study of Serpotta stuccoes
The matter that lies beneath the smooth and shining surface of the stuccoes of the Serpotta family, who worked in Sicily from 1670 to 1730, has been thoroughly studied in previous papers, disclosing the deep, even if empirical, knowledge of materials science that guided the artists in creating their masterworks. In this work the attention is focused on the solid perspective and on the scenographic sculpture of Giacomo Serpotta, who is acknowledged as the leading exponent of the school. The study deals with some particular works of the artist, the so-called "teatrini" (Toy Theater), made by him for the San Lorenzo Oratory in Palermo. On the basis of archive documents and previous analog photogrammetric plotting, integrated with digital solutions and methodologies of computer-based technologies, the study investigates and interprets the geometric-formal genesis of the examined works of art, up to the prototyping of the whole scenic apparatus.