Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
Fine-Pruning: Joint Fine-Tuning and Compression of a Convolutional Network with Bayesian Optimization
When approaching a novel visual recognition problem in a specialized image
domain, a common strategy is to start with a pre-trained deep neural network
and fine-tune it to the specialized domain. If the target domain covers a
smaller visual space than the source domain used for pre-training (e.g.
ImageNet), the fine-tuned network is likely to be over-parameterized. However,
applying network pruning as a post-processing step to reduce the memory
requirements has drawbacks: fine-tuning and pruning are performed
independently; pruning parameters are set once and cannot adapt over time; and
the highly parameterized nature of state-of-the-art pruning methods make it
prohibitive to manually search the pruning parameter space for deep networks,
leading to coarse approximations. We propose a principled method for jointly
fine-tuning and compressing a pre-trained convolutional network that overcomes
these limitations. Experiments on two specialized image domains (remote sensing
images and describable textures) demonstrate the validity of the proposed
approach. Comment: BMVC 2017 oral
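As a rough illustration of the joint idea (not the paper's actual Bayesian-optimization procedure), the sketch below alternates magnitude-based pruning with a fine-tuning step so that pruning decisions can adapt between rounds rather than being fixed once as a post-processing step. The function names, the magnitude heuristic, and the `fine_tune_step` callback are illustrative assumptions:

```python
import numpy as np

def magnitude_prune(weights, keep_ratio):
    """Zero out the smallest-magnitude weights, keeping `keep_ratio` of them."""
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    # k-th largest magnitude becomes the pruning threshold
    threshold = np.partition(flat, flat.size - k)[flat.size - k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

def fine_prune(weights, fine_tune_step, n_rounds=3, keep_ratio=0.5):
    """Alternate pruning and fine-tuning instead of pruning once at the end."""
    mask = np.ones_like(weights, dtype=bool)
    for _ in range(n_rounds):
        weights, mask = magnitude_prune(weights, keep_ratio)
        # Fine-tune the survivors; re-apply the mask so pruned weights stay zero.
        weights = fine_tune_step(weights) * mask
    return weights, mask
```

In the paper's setting the keep ratio (and other pruning parameters) would be selected by Bayesian optimization rather than fixed by hand; the loop structure above only shows why interleaving the two steps lets pruning react to the fine-tuned weights.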
Recent trends, technical concepts and components of computer-assisted orthopedic surgery systems: A comprehensive review
Computer-assisted orthopedic surgery (CAOS) systems have become one of the most important and challenging types of system in clinical orthopedics, as they enable precise treatment of musculoskeletal diseases using modern clinical navigation systems and surgical tools. This paper presents a comprehensive review of recent trends and possibilities of CAOS systems. There are three types of surgical planning systems: systems based on volumetric images (computed tomography (CT), magnetic resonance imaging (MRI), or ultrasound images), systems that utilize 2D or 3D fluoroscopic images, and systems that utilize kinetic information about the joints and morphological information about the target bones. This review focuses on three fundamental aspects of CAOS systems: their essential components, the types of CAOS systems, and the mechanical tools used in them. We also outline the possibilities for using ultrasound computer-assisted orthopedic surgery (UCAOS) systems as an alternative to conventionally used CAOS systems.
Robot Autonomy for Surgery
Autonomous surgery involves having surgical tasks performed by a robot
operating under its own will, with partial or no human involvement. There are
several important advantages of automation in surgery, which include increasing
precision of care due to sub-millimeter robot control, real-time utilization of
biosignals for interventional care, improvements to surgical efficiency and
execution, and computer-aided guidance under various medical imaging and
sensing modalities. While these methods may displace some tasks of surgical
teams and individual surgeons, they also present new capabilities in
interventions that are too difficult or go beyond the skills of a human. In
this chapter, we provide an overview of robot autonomy in commercial use and in
research, and present some of the challenges faced in developing autonomous
surgical robots.
Augmented Reality-based Feedback for Technician-in-the-loop C-arm Repositioning
Interventional C-arm imaging is crucial to percutaneous orthopedic procedures
as it enables the surgeon to monitor the progress of surgery on the anatomy
level. Minimally invasive interventions require repeated acquisition of X-ray
images from different anatomical views to verify tool placement. Achieving and
reproducing these views often comes at the cost of increased surgical time and
radiation dose to both patient and staff. This work proposes a marker-free
"technician-in-the-loop" Augmented Reality (AR) solution for C-arm
repositioning. The X-ray technician operating the C-arm interventionally is
equipped with a head-mounted display capable of recording desired C-arm poses
in 3D via an integrated infrared sensor. For C-arm repositioning to a
particular target view, the recorded C-arm pose is restored as a virtual object
and visualized in an AR environment, serving as a perceptual reference for the
technician. We conduct experiments in a setting simulating orthopedic trauma
surgery. Our proof-of-principle findings indicate that the proposed system can
reduce the average of 2.76 X-ray images required per desired view to zero,
suggesting a substantial reduction of radiation dose during C-arm repositioning.
The proposed AR solution is a first step towards facilitating communication
between the surgeon and the surgical staff, improving the quality of surgical
image acquisition, and enabling context-aware guidance for surgery rooms of the
future. The concept of technician-in-the-loop design will become relevant to
various interventions considering the expected advancements of sensing and
wearable computing in the near future.
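A recorded C-arm pose of the kind described above can be represented as a 4×4 rigid transform. The following minimal sketch (illustrative, not the authors' implementation) shows how the offset between the current pose and the recorded target pose could be quantified to tell the technician how far the repositioning still has to go:

```python
import numpy as np

def pose_error(T_current, T_target):
    """Translation and rotation offset between two 4x4 homogeneous poses.

    Returns (translation_error, rotation_error_deg): how far the current
    C-arm pose is from the recorded target pose.
    """
    dT = np.linalg.inv(T_current) @ T_target
    translation_error = float(np.linalg.norm(dT[:3, 3]))
    # Rotation angle recovered from the trace of the 3x3 rotation block.
    cos_angle = np.clip((np.trace(dT[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rotation_error_deg = float(np.degrees(np.arccos(cos_angle)))
    return translation_error, rotation_error_deg
```

In an AR setup like the one described, the head-mounted display could render the target pose as a virtual object and show these residuals until both fall below a clinically acceptable tolerance.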
Video-based assistance system for training in minimally invasive surgery
In this paper, the development of an assisting system for laparoscopic surgical training is presented. With this system, we expect to facilitate the early stages of training in laparoscopic surgery and to contribute to an objective evaluation of surgical skills. To achieve this, we propose inserting multimedia content and task outlines adapted to the trainee's level of experience, and detecting the movements of the laparoscopic instrument in the monitored image. A module to track the instrument is implemented, focusing on the tip of the laparoscopic tool. This tracking method does not require artificial marks or special colours to distinguish the instruments. Similarly, the system uses another visual-tracking method to anchor supporting multimedia content at a stable position in the field of view, so that the position of the supporting content adapts to the movements of the camera or the working area. Experimental results are presented to show the feasibility of the proposed system for assisting in laparoscopic surgical training.
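As a toy illustration of marker-free tip localization (not the paper's method), one could detect moving instrument pixels by frame differencing and take the moving pixel deepest inside the image as a crude tip estimate, exploiting the fact that a laparoscopic tool enters from the image border. The function name, the threshold, and the depth heuristic are all assumptions made for this sketch:

```python
import numpy as np

def estimate_tip(prev_frame, frame, motion_thresh=25):
    """Crude marker-free tip estimate on two grayscale frames.

    Moving pixels are found by frame differencing; the tip is taken as the
    moving pixel farthest from the image border, since the tool shaft enters
    from the border and the tip lies deepest inside the image.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    ys, xs = np.nonzero(diff > motion_thresh)
    if ys.size == 0:
        return None  # no motion detected between the two frames
    h, w = frame.shape
    # Distance of each moving pixel from the nearest image edge.
    depth = np.minimum(np.minimum(ys, h - 1 - ys), np.minimum(xs, w - 1 - xs))
    i = int(np.argmax(depth))
    return int(ys[i]), int(xs[i])
```

A real system would add temporal smoothing and shape cues on top of such a motion cue, but this shows why no artificial marks or special colours are needed in principle.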
Semiautomated optical coherence tomography-guided robotic surgery for porcine lens removal.
Purpose: To evaluate semiautomated surgical lens extraction procedures using the optical coherence tomography (OCT)-integrated Intraocular Robotic Interventional Surgical System.
Setting: Stein Eye Institute and Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, USA.
Design: Experimental study.
Methods: Semiautomated lens extraction was performed on postmortem pig eyes using a robotic platform integrated with an OCT imaging system. Lens extraction was performed through a series of automated steps including robot-to-eye alignment, irrigation/aspiration (I/A) handpiece insertion, anatomic modeling, surgical path planning, and I/A handpiece navigation. Intraoperative surgical supervision and human intervention were enabled by real-time OCT image feedback to the surgeon via a graphical user interface. Manual preparation of the pig-eye models, including the corneal incision and capsulorhexis, was performed by a trained cataract surgeon before the semiautomated lens extraction procedures. A scoring system was used to assess surgical complications in a postoperative evaluation.
Results: Complete lens extraction was achieved in 25 of 30 eyes. In the remaining 5 eyes, small lens pieces (≤1.0 mm3) were detected near the lens equator, where transpupillary OCT could not image. No posterior capsule rupture or corneal leakage occurred. The mean surgical duration was 277 ± 42 (SD) seconds. On a 3-point scale (0 = no damage), damage to the iris was 0.33 ± 0.20, damage to the cornea was 1.47 ± 0.20 (due to tissue dehydration), and stress at the incision was 0.97 ± 0.11.
Conclusions: Complete lens removal was achieved in 25 trials without significant surgical complications, and no posterior capsule rupture was reported. Refinements to the procedures are required before fully automated lens extraction can be realized.