    Neurosurgical Ultrasound Pose Estimation Using Image-Based Registration and Sensor Fusion - A Feasibility Study

    Modern neurosurgical procedures often rely on computer-assisted real-time guidance using multiple medical imaging modalities. State-of-the-art commercial products enable the fusion of pre-operative with intra-operative images (e.g., magnetic resonance [MR] with ultrasound [US] images), as well as the on-screen visualization of procedures in progress. In so doing, US images can be employed as a template to which pre-operative images can be registered, to correct for anatomical changes, to provide live-image feedback, and consequently to improve confidence when making resection margin decisions near eloquent regions during tumour surgery. In spite of the potential for tracked ultrasound to improve many neurosurgical procedures, it is not widely used. State-of-the-art systems are handicapped by optical tracking's need for a consistent line of sight, the requirement to keep tracked rigid bodies clean and rigidly fixed, and a calibration workflow. The goal of this work is to improve the value offered by co-registered ultrasound images without the workflow drawbacks of conventional systems. The novel work in this thesis includes: the exploration and development of a GPU-enabled 2D-3D multi-modal registration algorithm based on the existing LC2 metric; and the use of this registration algorithm in the context of a sensor- and image-fusion algorithm. The work presented here is a motivating step in a vision towards a heterogeneous tracking framework for image-guided interventions, where the knowledge from intra-operative imaging, pre-operative imaging, and (potentially disjoint) wireless sensors in the surgical field is seamlessly integrated for the benefit of the surgeon. The technology described in this thesis, inspired by advances in robot localization, demonstrates how inaccurate pose data from disjoint sources can produce a localization system greater than the sum of its parts.
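    The fusion of inaccurate pose estimates from disjoint sources described above can be illustrated with a minimal sketch: combining two independent pose estimates by inverse-covariance weighting, the core of a Kalman-style update. The specific poses, covariances, and the reduction to a translation-only state are illustrative assumptions, not the thesis's actual formulation.

    ```python
    import numpy as np

    def fuse_poses(pose_a, cov_a, pose_b, cov_b):
        """Fuse two independent pose estimates by inverse-covariance
        (information-form) weighting; more certain sources get more weight."""
        info_a = np.linalg.inv(cov_a)
        info_b = np.linalg.inv(cov_b)
        cov_f = np.linalg.inv(info_a + info_b)        # fused uncertainty shrinks
        pose_f = cov_f @ (info_a @ pose_a + info_b @ pose_b)
        return pose_f, cov_f

    # Hypothetical x/y/z translation (mm): image-based registration vs. a
    # wireless sensor, each uncertain along a different axis
    reg_pose = np.array([10.2, 5.1, 3.0])
    reg_cov = np.diag([0.5, 0.5, 2.0])   # registration less certain in z
    sen_pose = np.array([10.8, 4.9, 2.4])
    sen_cov = np.diag([2.0, 2.0, 0.5])   # sensor more certain in z

    fused, fused_cov = fuse_poses(reg_pose, reg_cov, sen_pose, sen_cov)
    ```

    The fused estimate leans toward whichever source is more certain per axis, and its covariance is smaller than either input's, which is the sense in which the combined system exceeds the sum of its parts.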

    Multimodality imaging in vivo for preclinical assessment of tumor-targeted doxorubicin nanoparticles.

    This study presents a new multimodal imaging approach that includes high-frequency ultrasound, fluorescence intensity, confocal, and spectral imaging to improve the preclinical evaluation of new therapeutics in vivo. Here we use this approach to assess in vivo the therapeutic efficacy of the novel chemotherapy construct HerDox during and after treatment. HerDox comprises doxorubicin non-covalently assembled in a viral-like particle targeted to HER2+ tumor cells, causing tumor cell death at an over 10-fold lower dose compared to the untargeted drug, while sparing the heart. Whereas our initial proof-of-principle studies on HerDox used tumor growth/shrinkage rates as a measure of therapeutic efficacy, here we show that multimodal imaging deployed during and after treatment can supplement traditional modes of tumor monitoring to further characterize the particle in tissues of treated mice. Specifically, we show here that tumor cell apoptosis elicited by HerDox can be monitored in vivo during treatment using high-frequency ultrasound imaging, while in situ confocal imaging of excised tumors shows that HerDox indeed penetrated tumor tissue and can be detected at the subcellular level, including in the nucleus, via Dox fluorescence. In addition, ratiometric spectral imaging of the same tumor tissue enables quantitative discrimination of HerDox fluorescence from autofluorescence in situ. In contrast to standard approaches of preclinical assessment, this new method provides multiple complementary types of information that may shorten the time required for initial evaluation of in vivo efficacy, thus potentially reducing the time and cost of translating new drug molecules into the clinic.
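    The ratiometric discrimination step mentioned above can be sketched as follows: the ratio of two emission channels separates pixels dominated by drug fluorescence from autofluorescence. The channel wavelengths, intensities, and threshold below are hypothetical illustrations, not values from the study.

    ```python
    import numpy as np

    def ratiometric_map(img_em1, img_em2, threshold=1.5):
        """Classify pixels as drug fluorescence vs. autofluorescence by the
        per-pixel ratio of two emission channels. Threshold is illustrative."""
        ratio = img_em1 / np.clip(img_em2, 1e-6, None)  # avoid divide-by-zero
        return ratio, ratio > threshold

    # Synthetic 2x2 example: left column drug-like, right autofluorescence-like
    em1 = np.array([[8.0, 2.0], [9.0, 1.5]])  # e.g. a channel near the Dox peak
    em2 = np.array([[2.0, 2.0], [3.0, 1.5]])  # e.g. a shorter-wavelength channel
    ratio, mask = ratiometric_map(em1, em2)
    ```

    Because autofluorescence tends to have a roughly flat spectrum while the drug emission is peaked, the ratio image suppresses the common background and highlights genuine drug signal.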

    Calibration and Analysis of a Multimodal Micro-CT and Structured Light Imaging System for the Evaluation of Excised Breast Tissue.

    A multimodal micro-computed tomography (CT) and multi-spectral structured light imaging (SLI) system is introduced and systematically analyzed to test its feasibility to aid in margin delineation during breast conserving surgery (BCS). Phantom analysis of the micro-CT yielded a signal-to-noise ratio of 34, a contrast of 1.64, and a minimum detectable resolution of 240 μm for a 1.2 min scan. The SLI system, spanning wavelengths 490 nm to 800 nm and spatial frequencies up to 1.37 mm⁻¹, was evaluated with aqueous tissue-simulating phantoms having variations in particle size distribution, scatter density, and blood volume fraction. The reduced scattering coefficient and phase function parameter, γ, were accurately recovered over all wavelengths independent of blood volume fractions from 0% to 4%, assuming a flat sample geometry perpendicular to the imaging plane. The resolution of the optical system was tested with a step phantom, from which the modulation transfer function was calculated, yielding a maximum resolution of 3.78 cycles per mm. The three-dimensional spatial co-registration between the CT and optical imaging space was tested and shown to be accurate within 0.7 mm. A freshly resected breast specimen, with lobular carcinoma, fibrocystic disease, and adipose, was imaged with the system. The micro-CT provided visualization of the tumor mass and its spiculations, and SLI yielded superficial quantification of light scattering parameters for the malignant and benign tissue types. These results appear to be the first demonstration of SLI combined with standard medical tomography for imaging excised tumor specimens. While further investigations are needed to determine and test the spectral, spatial, and CT features required to classify tissue, this study demonstrates the ability of multimodal CT/SLI to quantify, visualize, and spatially navigate breast tumor specimens, which could potentially aid in the assessment of tumor margin status during BCS
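    The step-phantom resolution measurement above follows a standard recipe that can be sketched briefly: differentiate the edge spread function to get the line spread function, then take the magnitude of its Fourier transform to obtain the MTF in cycles/mm. The synthetic edge profile and pixel pitch below are assumptions for illustration, not the paper's data.

    ```python
    import numpy as np

    def mtf_from_edge(edge_profile, pixel_mm):
        """Estimate the modulation transfer function from a step-edge profile:
        ESF -> (derivative) -> LSF -> (|FFT|) -> MTF, normalized to DC."""
        lsf = np.gradient(edge_profile)            # line spread function
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]                              # normalize so MTF(0) = 1
        freqs = np.fft.rfftfreq(len(lsf), d=pixel_mm)  # cycles per mm
        return freqs, mtf

    # Synthetic blurred edge sampled at 0.05 mm pixel pitch
    x = np.linspace(-2, 2, 81)
    edge = 0.5 * (1 + np.tanh(x / 0.3))            # smooth step phantom profile
    freqs, mtf = mtf_from_edge(edge, pixel_mm=0.05)
    ```

    A resolution figure such as "3.78 cycles per mm" then corresponds to the spatial frequency at which this curve drops below a chosen contrast criterion.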

    06311 Abstracts Collection -- Sensor Data and Information Fusion in Computer Vision and Medicine

    From 30.07.06 to 04.08.06, the Dagstuhl Seminar 06311 "Sensor Data and Information Fusion in Computer Vision and Medicine" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. Sensor data fusion is of increasing importance for many research fields and applications. Multi-modal imaging is routine in medicine, and in robotics it is common to use multi-sensor data fusion. During the seminar, researchers and application experts working in the field of sensor data fusion presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. The second part briefly summarizes the contributions

    Temporal Interpolation via Motion Field Prediction

    Navigated 2D multi-slice dynamic Magnetic Resonance (MR) imaging enables high contrast 4D MR imaging during free breathing and provides in-vivo observations for treatment planning and guidance. Navigator slices are vital for retrospective stacking of 2D data slices in this method. However, they also prolong the acquisition sessions. Temporal interpolation of navigator slices can be used to reduce the number of navigator acquisitions without degrading specificity in stacking. In this work, we propose a convolutional neural network (CNN) based method for temporal interpolation via motion field prediction. The proposed formulation incorporates the prior knowledge that a motion field underlies changes in the image intensities over time. Previous approaches that interpolate directly in the intensity space are prone to produce blurry images or even remove structures in the images. Our method avoids such problems and faithfully preserves the information in the image. Further, an important advantage of our formulation is that it provides an unsupervised estimation of bi-directional motion fields. We show that these motion fields can be used to halve the number of registrations required during 4D reconstruction, thus substantially reducing the reconstruction time. Comment: Submitted to 1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands
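    The core idea of interpolating via a motion field rather than in intensity space can be sketched as follows: given a dense displacement field between two frames, an intermediate frame is synthesized by warping half-way along the field. In the paper the field is predicted by a CNN; here it is simply given, and the frame contents are synthetic.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp_half_way(frame0, flow):
        """Interpolate a midpoint frame by backward-warping frame0 half-way
        along a dense motion field (flow[0] = row displacement, flow[1] =
        column displacement). Bilinear sampling preserves structures that
        intensity-space averaging would blur."""
        rows, cols = np.meshgrid(np.arange(frame0.shape[0]),
                                 np.arange(frame0.shape[1]), indexing="ij")
        coords = np.stack([rows - 0.5 * flow[0], cols - 0.5 * flow[1]])
        return map_coordinates(frame0, coords, order=1, mode="nearest")

    # A bright bar translating 2 pixels down; the midpoint frame moves it 1
    frame0 = np.zeros((8, 8)); frame0[2, :] = 1.0
    flow = np.zeros((2, 8, 8)); flow[0] = 2.0    # constant downward motion
    mid = warp_half_way(frame0, flow)
    ```

    Averaging the two frames directly would produce two half-intensity bars; warping along the motion field yields a single full-intensity bar at the interpolated position.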

    Robust Tracking in Aerial Imagery Based on an Ego-Motion Bayesian Model

    A novel strategy for object tracking in aerial imagery is presented, which is able to deal with complex situations where the camera ego-motion cannot be reliably estimated due to the aperture problem (related to low-structured scenes), strong ego-motion, and/or the presence of independently moving objects. The proposed algorithm is based on a complex modeling of the dynamic information, which simulates both the object and the camera dynamics to predict the putative object locations. In this model, the camera dynamics is probabilistically formulated as a weighted set of affine transformations that represent possible camera ego-motions. This dynamic model is used in a Particle Filter framework to distinguish the actual object location among the multiple candidates that result from complex cluttered backgrounds and the presence of several moving objects. The proposed strategy has been tested on the aerial FLIR AMCOM dataset, and its performance has also been compared with other tracking techniques to demonstrate its efficiency.
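    The prediction step of such a model can be sketched as follows: each particle is propagated through an affine ego-motion hypothesis drawn from a weighted set, plus Gaussian noise for the object's own dynamics. The two-hypothesis affine set, weights, and noise scale below are illustrative assumptions, not the paper's learned model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def predict_particles(particles, affines, weights, obj_noise=1.0):
        """Particle filter prediction: propagate each (x, y) particle through
        a randomly chosen affine ego-motion hypothesis (A, t), then add
        Gaussian noise modeling the object's own dynamics."""
        n = len(particles)
        choices = rng.choice(len(affines), size=n, p=weights)
        out = np.empty_like(particles)
        for i, (x, y) in enumerate(particles):
            A, t = affines[choices[i]]
            out[i] = A @ np.array([x, y]) + t
        return out + rng.normal(scale=obj_noise, size=out.shape)

    # Two ego-motion hypotheses: camera still, or camera panning 5 px right
    affines = [(np.eye(2), np.zeros(2)), (np.eye(2), np.array([5.0, 0.0]))]
    weights = [0.3, 0.7]
    particles = np.tile([100.0, 50.0], (500, 1))
    pred = predict_particles(particles, affines, weights)
    ```

    The resulting multi-modal particle cloud covers all plausible camera motions, so the subsequent measurement update can pick out the true object location even when ego-motion estimation alone is unreliable.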