Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
Keypoint Transfer for Fast Whole-Body Segmentation
We introduce an approach for image segmentation based on sparse
correspondences between keypoints in testing and training images. Keypoints
represent automatically identified distinctive image locations, where each
keypoint correspondence suggests a transformation between images. We use these
correspondences to transfer label maps of entire organs from the training
images to the test image. The keypoint transfer algorithm includes three steps:
(i) keypoint matching, (ii) voting-based keypoint labeling, and (iii)
keypoint-based probabilistic transfer of organ segmentations. We report
segmentation results for abdominal organs in whole-body CT and MRI, as well as
in contrast-enhanced CT and MRI. Our method offers a speed-up of about three
orders of magnitude in comparison to common multi-atlas segmentation, while
achieving an accuracy that compares favorably. Moreover, keypoint transfer does
not require the registration to an atlas or a training phase. Finally, the
method allows for the segmentation of scans with a highly variable field of view.
Comment: Accepted for publication at IEEE Transactions on Medical Imaging
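The three steps listed above can be illustrated with a toy sketch. This is a hedged, heavily simplified stand-in for the paper's actual formulation: descriptors, voting, and the probabilistic transfer are reduced to nearest-neighbour matching, majority voting, and shifting a binary organ mask by the keypoint offset, and every name below is our own.

```python
import numpy as np

def keypoint_transfer(test_kp, test_desc, train_kp, train_desc,
                      train_labels, train_masks, shape):
    """Toy 2D sketch of the three steps.
    test_kp/train_kp: (N, 2) keypoint coordinates; *_desc: (N, D) descriptors;
    train_labels[j]: organ id of training keypoint j;
    train_masks[organ_id]: binary 2D mask of that organ in the training image."""
    # (i) keypoint matching: nearest training descriptor per test keypoint
    dist = np.linalg.norm(test_desc[:, None] - train_desc[None, :], axis=2)
    match = dist.argmin(axis=1)
    # (ii) voting-based labeling: each matched training keypoint votes for
    # its organ label (here a single nearest neighbour, so a trivial vote)
    labels = np.array([train_labels[j] for j in match])
    # (iii) probabilistic transfer: shift each matched organ mask by the
    # keypoint offset and accumulate per-pixel votes for that organ
    votes = {}
    for i, j in enumerate(match):
        off = np.round(test_kp[i] - train_kp[j]).astype(int)
        shifted = np.roll(train_masks[int(labels[i])], tuple(off), axis=(0, 1))
        votes[int(labels[i])] = votes.get(int(labels[i]), np.zeros(shape)) + shifted
    return labels, votes
```

Because only keypoints are matched, no atlas registration or training phase is needed; the organ label maps ride along with the keypoint correspondences.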
Planning and Evaluation of Radio-Therapeutic Treatment of Head-and-Neck Cancer Using PET/CT scanning
A fast and robust patient specific Finite Element mesh registration technique: application to 60 clinical cases
Finite Element mesh generation remains an important issue for patient
specific biomechanical modeling. While some techniques make automatic mesh
generation possible, in most cases, manual mesh generation is preferred for
better control over the sub-domain representation, element type, layout and
refinement that it provides. Yet, this option is time consuming and not suited
for intraoperative situations where model generation and computation time is
critical. To overcome this problem we propose a fast and automatic mesh
generation technique based on the elastic registration of a generic mesh to the
specific target organ in conjunction with element regularity and quality
correction. This Mesh-Match-and-Repair (MMRep) approach combines control over
the mesh structure along with fast and robust meshing capabilities, even in
situations where only partial organ geometry is available. The technique was
successfully tested on a database of 5 pre-operatively acquired CT scans of complete femora, 5 femoral heads partially digitized intraoperatively, and 50 CT volumes of patients' heads. The MMRep algorithm succeeded in all 60 cases, yielding for each patient a hex-dominant, atlas-based finite element mesh with submillimetric surface representation accuracy, directly exploitable within commercial FE software.
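The match-then-repair idea can be caricatured in a few lines. This is a hedged sketch under strong simplifications: "matching" is reduced to pulling each surface node of the generic mesh toward its nearest target surface point, and "repair" to Laplacian smoothing of interior nodes; the actual MMRep method uses proper elastic registration plus element regularity and quality correction, and all names here are our own.

```python
import numpy as np

def mesh_match_and_repair(nodes, surface_idx, target_pts, neighbors,
                          iters=10, lam=0.5):
    """nodes: (N, d) generic-mesh node positions; surface_idx: indices of
    surface nodes; target_pts: (M, d) points digitized on the patient organ;
    neighbors[i]: list of node indices adjacent to node i in the mesh."""
    nodes = nodes.copy()
    for _ in range(iters):
        # match: move each surface node a fraction lam toward its
        # closest point on the target surface
        for i in surface_idx:
            d = np.linalg.norm(target_pts - nodes[i], axis=1)
            nodes[i] += lam * (target_pts[d.argmin()] - nodes[i])
        # repair: blend each interior node toward the centroid of its
        # mesh neighbours to restore element regularity
        for i in range(len(nodes)):
            if i not in surface_idx and neighbors[i]:
                centroid = nodes[neighbors[i]].mean(axis=0)
                nodes[i] += lam * (centroid - nodes[i])
    return nodes
```

Note that the matching step only needs target points where geometry was acquired, which is why a scheme of this kind can still operate when only partial organ geometry is available.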
State of the Art of Level Set Methods in Segmentation and Registration of Medical Imaging Modalities
Segmentation of medical images is an important step in various applications such as visualization, quantitative analysis and image-guided surgery. Numerous segmentation methods have been developed in the past two decades for extraction of organ contours on medical images. Low-level segmentation methods, such as pixel-based clustering, region growing, and filter-based edge detection, require additional pre-processing and post-processing as well as considerable amounts of expert intervention or prior information about the objects of interest. Furthermore, the subsequent analysis of segmented objects is hampered by the primitive, pixel- or voxel-level representations produced by such region-based segmentations. Deformable models, on the other hand, provide an explicit representation of the boundary and the shape of the object. They combine several desirable features such as inherent connectivity and smoothness, which counteract noise and boundary irregularities, as well as the ability to incorporate knowledge about the object of interest. However, parametric deformable models have two main limitations. First, in situations where the initial model and desired object boundary differ greatly in size and shape, the model must be re-parameterized dynamically to faithfully recover the object boundary. Second, they have difficulty dealing with topological adaptation such as splitting or merging model parts, a useful property for recovering either multiple objects or objects with unknown topology. This difficulty arises because a new parameterization must be constructed whenever a topology change occurs, which requires sophisticated schemes. Level set deformable models, also referred to as geometric deformable models, provide an elegant solution to these primary limitations of parametric deformable models. These methods have drawn a great deal of attention since their introduction in 1988.
Advantages of the implicit contour formulation of the deformable model over the parametric formulation include: (1) no parameterization of the contour, (2) topological flexibility, (3) good numerical stability, and (4) straightforward extension of the 2D formulation to n-D. Recent reviews on the subject include papers from Suri. In this chapter we give a general overview of level set segmentation methods with emphasis on new frameworks recently introduced in the context of medical imaging problems. We then introduce novel approaches that aim at combining segmentation and registration in a level set formulation. Finally, we review a selective set of clinical works with detailed validation of the level set methods for several clinical applications.
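The implicit formulation described above can be made concrete with a minimal numerical sketch: the contour is never parameterized, only read off as the zero level set of a function phi evolved under phi_t + F|grad phi| = 0. This is a hedged illustration only; practical level set codes use upwind finite-difference schemes and periodic reinitialization of the signed distance function, both omitted here.

```python
import numpy as np

def evolve_level_set(phi, speed, dt, steps):
    """Evolve the zero level set of phi with normal speed F by a naive
    explicit integration of  phi_t + F * |grad phi| = 0.  The contour
    {phi == 0} can split or merge freely, since only phi is stored."""
    for _ in range(steps):
        gy, gx = np.gradient(phi)          # derivatives along axis 0 and 1
        phi = phi - dt * speed * np.sqrt(gx**2 + gy**2)
    return phi

# initial contour: signed distance to a circle of radius 8 (positive outside)
y, x = np.mgrid[0:32, 0:32]
phi0 = np.sqrt((x - 16.0)**2 + (y - 16.0)**2) - 8.0
phi1 = evolve_level_set(phi0, speed=1.0, dt=0.5, steps=10)
```

With a constant speed F = 1, phi decreases everywhere that its gradient is nonzero, so the zero level set expands outward; points that started on the contour end up inside it.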
Geometric Supervision and Deep Structured Models for Image Segmentation
The task of semantic segmentation aims at understanding an image at a pixel level. Due to its applicability in many areas, such as autonomous vehicles, robotics and medical surgery assistance, semantic segmentation has become an essential task in image analysis. During the last few years a lot of progress has been made in image segmentation algorithms, mainly due to the introduction of deep learning methods, in particular the use of Convolutional Neural Networks (CNNs). CNNs are powerful for modeling complex connections between input and output data but have two drawbacks when it comes to semantic segmentation. Firstly, CNNs lack the ability to directly model dependent output structures, for instance, explicitly enforcing properties such as label smoothness and coherence. This drawback motivates the use of Conditional Random Fields (CRFs), applied as a post-processing step in semantic segmentation. Secondly, training CNNs requires large amounts of annotated data. For segmentation this amounts to dense, pixel-level annotations that are very time-consuming to acquire. This thesis summarizes the content of five papers addressing the two aforementioned drawbacks of CNNs. The first two papers present methods for using geometric 3D models to improve segmentation models. The 3D models can be created with little human labour and can be used as a supervisory signal to improve the robustness of semantic segmentation and long-term visual localization methods. The last three papers focus on models combining CNNs and CRFs for semantic segmentation. The models consist of a CNN capable of learning complex image features coupled with a CRF capable of learning dependencies between output variables. Emphasis has been placed on creating models that can be trained end-to-end, giving the CNN and the CRF a chance to learn how to interact and exploit complementary information to achieve better performance.
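The CNN-plus-CRF combination described in the abstract can be caricatured with a tiny mean-field sketch: fixed per-pixel unary logits (standing in for a CNN's output) are refined by a 4-neighbour Potts smoothness term that rewards agreement between adjacent pixels. This is a hedged toy, not the thesis' models, which use richer pairwise terms and are trained end-to-end; all names below are our own.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def meanfield_crf(unary_logits, w_pair=1.0, iters=5):
    """unary_logits: (H, W, L) per-pixel class scores from the CNN.
    Runs mean-field updates of a CRF whose pairwise term rewards each
    pixel for agreeing with the label beliefs of its 4 neighbours."""
    q = softmax(unary_logits)
    for _ in range(iters):
        # message into each pixel: sum of its neighbours' label beliefs
        msg = np.zeros_like(q)
        msg[1:] += q[:-1]
        msg[:-1] += q[1:]
        msg[:, 1:] += q[:, :-1]
        msg[:, :-1] += q[:, 1:]
        q = softmax(unary_logits + w_pair * msg)
    return q
```

The effect is exactly the label smoothness the text mentions: an isolated pixel whose unary weakly prefers a different class from all its neighbours gets voted back into line.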