    Towards Non-contact 3D Ultrasound for Wrist Imaging

    Objective: The objective of this work is an attempt towards non-contact freehand 3D ultrasound imaging with minimal complexity added to existing point-of-care ultrasound (POCUS) systems. Methods: This study proposes a novel approach that uses a mechanical track for non-contact ultrasound (US) scanning. The track restricts the probe motion to a linear plane, simplifying the acquisition and 3D reconstruction process. A pipeline for US 3D volume reconstruction employing a US research platform and a GPU-based edge device is developed. Results: The efficacy of the proposed approach is demonstrated through ex-vivo and in-vivo experiments. Conclusion: With its adjustable field of view, non-contact design, and low deployment cost that does not significantly alter the existing setup, the proposed approach would open doors for upgrading traditional systems to a wide range of 3D US imaging applications. Significance: Ultrasound (US) imaging is a popular clinical modality for point-of-care bedside imaging, particularly of the wrist/knee in the pediatric population, due to its non-invasive and radiation-free nature. However, the limited views of tissue structures obtained with 2D US in such scenarios make diagnosis challenging. To overcome this, 3D US imaging, which uses 2D US images and their orientation/position to reconstruct 3D volumes, was developed. Accurate, low-cost estimation of the US probe position has always been a challenging task in 3D reconstruction. Additionally, US imaging involves contact, which causes difficulty for pediatric subjects when monitoring live fractures or open wounds. Towards overcoming these challenges, a novel framework is attempted in this work. Comment: 9 pages, 11 figures.
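
    As an illustration of the volume reconstruction step, the sketch below bins linearly tracked 2D B-mode frames into a regular voxel grid by nearest-slice assignment. The function name, the 0.5 mm elevational grid spacing, and the simple per-slice averaging are assumptions for illustration; the paper's actual GPU pipeline is not reproduced here.

```python
import numpy as np

def reconstruct_volume(frames, positions_mm, elevational_res_mm=0.5):
    """Place 2D US frames acquired along a linear track into a 3D voxel grid.

    frames:        array of 2D B-mode images, all with the same shape (H, W)
    positions_mm:  1D array of probe positions along the track, one per frame
    Returns an (H, W, D) volume filled by nearest-slice assignment.
    """
    frames = np.asarray(frames, dtype=np.float32)
    positions_mm = np.asarray(positions_mm, dtype=np.float32)
    h, w = frames.shape[1:]
    depth_mm = positions_mm.max() - positions_mm.min()
    n_slices = int(np.floor(depth_mm / elevational_res_mm)) + 1
    volume = np.zeros((h, w, n_slices), dtype=np.float32)
    counts = np.zeros(n_slices, dtype=np.int32)
    # Bin each frame into the nearest slice of the target grid and average
    for frame, pos in zip(frames, positions_mm):
        k = int(round((pos - positions_mm.min()) / elevational_res_mm))
        volume[:, :, k] += frame
        counts[k] += 1
    nonzero = counts > 0
    volume[:, :, nonzero] /= counts[nonzero]
    return volume

# Example: 40 synthetic frames swept over 20 mm
frames = np.random.rand(40, 256, 192)
positions = np.linspace(0.0, 20.0, 40)
print(reconstruct_volume(frames, positions).shape)
```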

    Ultrasound-Augmented Laparoscopy

    Laparoscopic surgery is perhaps the most common minimally invasive procedure for many diseases in the abdomen. Since the laparoscopic camera provides only a surface view of the internal organs, in many procedures surgeons use laparoscopic ultrasound (LUS) to visualize deep-seated surgical targets. Conventionally, the 2D LUS image is visualized in a display spatially separate from the one that displays the laparoscopic video. Therefore, reasoning about the geometry of hidden targets requires mentally solving the spatial alignment and resolving the modality differences, which is cognitively very challenging. Moreover, the mental representation of hidden targets in space acquired through such cognitive mediation may be error prone and cause incorrect actions to be performed. To remedy this, advanced visualization strategies are required in which the US information is visualized in the context of the laparoscopic video. To this end, efficient computational methods are required to accurately align the US image coordinate system with a camera-centric coordinate system, and to render the registered image information in the context of the camera such that surgeons perceive the geometry of hidden targets accurately. In this thesis, such a visualization pipeline is described. A novel method to register US images with a camera-centric coordinate system is detailed, with an experimental investigation into its accuracy bounds. An improved method to blend US information with the surface view is also presented, with an experimental investigation into the accuracy of perception of target locations in space.
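
    To make the registration-and-projection idea concrete, the sketch below maps the pixels of a 2D US image into a laparoscopic camera image using an assumed rigid US-to-camera transform and a pinhole camera model. The transform T_cam_us, the intrinsic matrix K, and the pixel spacing are hypothetical placeholders, not values from the thesis.

```python
import numpy as np

def project_us_pixels(us_shape, px_size_mm, T_cam_us, K):
    """Map every pixel of a 2D US image into the laparoscopic camera image.

    us_shape:   (rows, cols) of the US image
    px_size_mm: physical size of one US pixel (axial, lateral) in mm
    T_cam_us:   4x4 rigid transform from the US image frame to the camera frame
                (assumed known from calibration/registration)
    K:          3x3 camera intrinsic matrix
    Returns an (rows*cols, 2) array of camera pixel coordinates.
    """
    rows, cols = us_shape
    r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    # US pixels lie in the y = 0 plane of the US image frame (one common convention)
    pts_us = np.stack([c.ravel() * px_size_mm[1],
                       np.zeros(rows * cols),
                       r.ravel() * px_size_mm[0],
                       np.ones(rows * cols)], axis=0)     # 4 x N homogeneous points
    pts_cam = T_cam_us @ pts_us                           # 4 x N, camera frame (mm)
    proj = K @ pts_cam[:3]                                # pinhole projection
    return (proj[:2] / proj[2]).T                         # N x 2 pixel coordinates

# Hypothetical calibration values, for illustration only
T_cam_us = np.eye(4); T_cam_us[:3, 3] = [0.0, 0.0, 80.0]  # US plane 80 mm ahead of camera
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
uv = project_us_pixels((128, 96), (0.3, 0.3), T_cam_us, K)
print(uv.shape)
```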

    3D Kidney Segmentation from Abdominal Images Using Spatial-Appearance Models

    Kidney segmentation is an essential step in developing any noninvasive computer-assisted diagnostic system for renal function assessment. This paper introduces an automated framework for 3D kidney segmentation from dynamic computed tomography (CT) images that integrates discriminative features from the current and prior CT appearances into a random forest classification approach. To account for CT images’ inhomogeneities, we employ discriminative features that are extracted from a higher-order spatial model and an adaptive shape model in addition to the first-order CT appearance. To model the interactions between CT data voxels, we employ a higher-order spatial model, which adds the triple and quad clique families to the traditional pairwise clique family. The kidney shape prior model is built using a set of training CT data and is updated during segmentation using not only region labels but also voxels’ appearances in neighboring spatial voxel locations. Our framework's performance has been evaluated on in vivo dynamic CT data collected from 20 subjects, comprising multiple 3D scans acquired before and after contrast medium administration. Quantitative evaluation between manually and automatically segmented kidney contours using Dice similarity, percentage volume differences, and 95th-percentile bidirectional Hausdorff distances confirms the high accuracy of our approach.
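
    A minimal sketch of the voxel-wise random forest classification stage is given below, using toy stand-ins for the appearance, higher-order spatial, and shape-prior features described in the abstract. The feature values and labels are random placeholders; only the classification step mirrors the described approach.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-ins for the three feature channels described in the abstract:
# first-order CT appearance, a higher-order spatial (neighbourhood) term,
# and a shape-prior probability. Real features would come from the models
# described in the paper; here they are random placeholders.
rng = np.random.default_rng(0)
n_voxels = 5000
appearance = rng.normal(size=n_voxels)        # first-order intensity feature
spatial = rng.normal(size=n_voxels)           # higher-order spatial feature
shape_prior = rng.uniform(size=n_voxels)      # shape-prior probability
X = np.column_stack([appearance, spatial, shape_prior])
y = (shape_prior + 0.1 * rng.normal(size=n_voxels) > 0.5).astype(int)  # toy labels

# Train a voxel-wise kidney/background classifier and predict label probabilities
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
kidney_prob = clf.predict_proba(X)[:, 1]      # per-voxel kidney probability
print(kidney_prob[:5])
```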

    Registration of Brain MRI/PET Images Based on Adaptive Combination of Intensity and Gradient Field Mutual Information

    The traditional mutual information (MI) function aligns two multimodality images using only intensity information and lacks spatial information, so it usually presents many local maxima that can lead to inaccurate registration. Our paper proposes an algorithm based on the adaptive combination of intensity and gradient field mutual information (ACMI). Gradient code maps (GCMs) are constructed by coding the gradient field information of the corresponding original images. The gradient field MI, calculated from the GCMs, provides properties complementary to the intensity MI. ACMI combines intensity MI and gradient field MI with a nonlinear weight function, which automatically adjusts the proportion between the two types of MI in the combination to improve registration. Experimental results demonstrate that ACMI outperforms traditional MI and is much less sensitive to reduced image resolution or overlap.
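
    The sketch below illustrates the general idea: compute intensity MI and gradient-field MI from gradient code maps, then blend them with a nonlinear weight. The sigmoidal weight, the 8-level orientation coding, and the 32-bin histograms are illustrative choices, not the exact ACMI formulation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images of equal shape."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def gradient_code_map(img, n_levels=8):
    """Quantise the gradient orientation of an image into a small code book."""
    gy, gx = np.gradient(img.astype(np.float32))
    angle = np.arctan2(gy, gx)                                   # range [-pi, pi]
    return np.floor((angle + np.pi) / (2 * np.pi) * n_levels).clip(0, n_levels - 1)

def acmi_like_similarity(fixed, moving, alpha=5.0):
    """Blend intensity MI and gradient-field MI with a sigmoidal weight.

    The exact nonlinear weight of ACMI is not reproduced here; the sigmoid is
    an illustrative choice that shifts weight towards the gradient term when
    the intensity MI is low.
    """
    mi_int = mutual_information(fixed, moving)
    mi_grad = mutual_information(gradient_code_map(fixed), gradient_code_map(moving))
    w = 1.0 / (1.0 + np.exp(-alpha * (mi_int - 1.0)))  # weight for the intensity term
    return w * mi_int + (1.0 - w) * mi_grad

fixed = np.random.rand(64, 64)
moving = np.random.rand(64, 64)
print(acmi_like_similarity(fixed, moving))
```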

    Segmentation of Pulmonary Vascular Trees from Thoracic 3D CT Images

    This paper describes an algorithm for extracting pulmonary vascular trees (arteries plus veins) from three-dimensional (3D) thoracic computed tomographic (CT) images. The algorithm integrates a tube enhancement filter and traversal approaches, which are based on the eigenvalues and eigenvectors of the Hessian matrix, to extract thin peripheral segments as well as thick vessels close to the lung hilum. The resulting algorithm was applied to a simulation data set and 44 scans from 22 human subjects imaged via multidetector-row CT (MDCT) during breath holds at 85% and 20% of their vital capacity. A quantitative validation was performed with more than 1000 manually identified points selected from inside the vessel segments to assess true positives (TPs) and 1000 points randomly placed outside of the vessels to evaluate false positives (FPs) in each case. On average, for both the high- and low-volume lung images, 99% of the points were properly marked as vessel and 1% of the points were assessed as FPs. Our hybrid segmentation algorithm provides a highly reliable method of segmenting the combined pulmonary venous and arterial trees, which in turn will serve as a critical starting point for further quantitative analysis tasks and aid in our overall goal of establishing a normative atlas of the human lung.
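
    A simplified, single-scale version of the Hessian eigenvalue tube-enhancement idea is sketched below, in the style of a Frangi vesselness filter. The scale sigma and the response parameters alpha, beta, and c are assumed values; the authors' exact filter and the traversal stage are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_3d(volume, sigma=1.0, alpha=0.5, beta=0.5, c=50.0):
    """Frangi-style tube enhancement for bright 3D vessels at a single scale."""
    v = volume.astype(np.float32)
    # Second-order Gaussian derivatives give the Hessian components
    H = {}
    for (i, j) in [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]:
        order = [0, 0, 0]
        order[i] += 1
        order[j] += 1
        H[(i, j)] = gaussian_filter(v, sigma=sigma, order=order)
    # Assemble per-voxel 3x3 Hessians and sort eigenvalues by magnitude
    hess = np.zeros(v.shape + (3, 3), dtype=np.float32)
    for (i, j), d in H.items():
        hess[..., i, j] = d
        hess[..., j, i] = d
    eig = np.linalg.eigvalsh(hess)
    idx = np.argsort(np.abs(eig), axis=-1)
    l1, l2, l3 = np.take_along_axis(eig, idx, axis=-1).transpose(3, 0, 1, 2)
    eps = 1e-6
    ra = np.abs(l2) / (np.abs(l3) + eps)                 # plate vs. line
    rb = np.abs(l1) / np.sqrt(np.abs(l2 * l3) + eps)     # blob vs. line
    s = np.sqrt(l1**2 + l2**2 + l3**2)                   # second-order structureness
    vess = (1 - np.exp(-ra**2 / (2 * alpha**2))) \
           * np.exp(-rb**2 / (2 * beta**2)) \
           * (1 - np.exp(-s**2 / (2 * c**2)))
    vess[(l2 > 0) | (l3 > 0)] = 0                        # bright tubes need l2, l3 < 0
    return vess

vol = np.random.rand(32, 32, 32)
print(vesselness_3d(vol).shape)
```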

    Using the Waseda Bioinstrumentation System WB-1R to analyze Surgeon’s performance during laparoscopy - towards the development of a global performance index -

    Minimally invasive surgery (MIS) has become very common in recent years, thanks to the many advantages it provides for patients. Since it is difficult for surgeons to learn and master this technique, several training methods and metrics have been proposed, both to improve the surgeon's abilities and to assess his/her skills. This paper presents the use of the WB-1R (Waseda Bioinstrumentation system no. 1 Refined), which was developed at Waseda University, Tokyo, to investigate and analyze a surgeon's movements and performance. Specifically, the system can measure the movements of the head, the arms, and the hands, as well as several physiological parameters. In this paper we present our experiment to evaluate a surgeon's ability to handle surgical instruments and his/her depth perception using a laparoscopic view. Our preliminary analysis of a subset of the acquired data (i.e. subject comfort, the amount of time it took to complete each exercise, and respiration) clearly shows that the expert surgeon and the group of medical students perform very differently. Therefore, the WB-1R (or, better, a newer version tailored specifically for use in the operating room) could provide important additional information to help assess the experience and performance of surgeons, thus leading to the development of a Global Performance Index for surgeons during MIS. These analyses and models, moreover, are an important step towards the automation of, and robotic assistance for, the surgical gesture.
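
    As a purely hypothetical illustration of what a Global Performance Index might look like, the sketch below combines min-max normalised metrics (e.g. task time, respiration variability) into a single weighted score. The metric names, weights, and the assumption that lower raw values indicate better performance are all placeholders, not the authors' formulation.

```python
import numpy as np

def global_performance_index(subject_metrics, cohort_metrics, weights):
    """Hypothetical weighted-sum performance index in [0, 1].

    subject_metrics: dict of raw measurements for one subject, where lower
                     raw values are assumed to indicate better performance
    cohort_metrics:  dict mapping the same keys to 1D arrays of cohort values
                     used for min-max normalisation
    weights:         dict of non-negative weights summing to 1
    """
    score = 0.0
    for name, value in subject_metrics.items():
        lo, hi = np.min(cohort_metrics[name]), np.max(cohort_metrics[name])
        norm = (value - lo) / (hi - lo) if hi > lo else 0.0
        score += weights[name] * (1.0 - np.clip(norm, 0.0, 1.0))  # invert: lower is better
    return score

# Toy cohort of five trainees and one expert-like subject (placeholder values)
cohort = {"task_time_s": np.array([55, 70, 90, 120, 150]),
          "resp_rate_std": np.array([0.8, 1.1, 1.6, 2.0, 2.4])}
expert = {"task_time_s": 58, "resp_rate_std": 0.9}
weights = {"task_time_s": 0.6, "resp_rate_std": 0.4}
print(round(global_performance_index(expert, cohort, weights), 3))
```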

    Multi-atlas segmentation using clustering, local non-linear manifold embeddings and target-specific templates

    Multi-atlas segmentation (MAS) has become an established technique for the automated delineation of anatomical structures. The often manually annotated labels from each of multiple pre-segmented images (atlases) are typically transferred to a target through the spatial mapping of corresponding structures of interest. The mapping can be estimated by pairwise registration between each atlas and the target or by creating an intermediate population template for spatial normalisation of atlases and targets. The former is done at runtime, which is computationally expensive but provides high accuracy. In the latter approach, the template can be constructed from the atlases offline, requiring only one registration to the target at runtime. Although this is computationally more efficient, the composition of deformation fields can lead to decreased accuracy. Our goal was to develop a MAS method that is both efficient and accurate. In our approach we create a target-specific template (TST) which has a high similarity to the target and serves as an intermediate step to increase registration accuracy. The TST is constructed from the atlas images that are most similar to the target. These images are determined in low-dimensional manifold spaces on the basis of deformation fields in local regions of interest. We also introduce a clustering approach to divide atlas labels into meaningful sub-regions of interest and increase local specificity for TST construction and label fusion. Our approach was tested on a variety of MR brain datasets and applied to an in-house dataset. We achieve state-of-the-art accuracy while being computationally much more efficient than competing methods. This efficiency opens the door to the use of larger sets of atlases, which could lead to further improvement in segmentation accuracy.
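
    To illustrate the label-fusion step that follows registration, the sketch below performs a locally weighted vote over atlas labels warped to the target, weighting each atlas by its voxel-wise intensity agreement. The Gaussian weighting and the sigma value are illustrative assumptions; the paper's clustering-based fusion strategy is not reproduced.

```python
import numpy as np

def locally_weighted_label_fusion(warped_labels, warped_images, target, sigma=0.1):
    """Fuse labels from multiple registered atlases by locally weighted voting.

    warped_labels: (n_atlases, X, Y, Z) integer label maps warped to the target
    warped_images: (n_atlases, X, Y, Z) atlas intensities warped to the target
    target:        (X, Y, Z) target image
    """
    diff = warped_images - target[None]                 # per-atlas residual
    weights = np.exp(-(diff ** 2) / (2 * sigma ** 2))   # higher weight = better local match
    labels = np.unique(warped_labels)
    votes = np.stack([(weights * (warped_labels == l)).sum(axis=0) for l in labels])
    return labels[np.argmax(votes, axis=0)]

# Toy example: 4 atlases, a 16^3 volume, binary labels
rng = np.random.default_rng(1)
atlas_imgs = rng.random((4, 16, 16, 16))
atlas_labs = (atlas_imgs > 0.5).astype(int)
target = rng.random((16, 16, 16))
fused = locally_weighted_label_fusion(atlas_labs, atlas_imgs, target)
print(fused.shape, np.unique(fused))
```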