1,113 research outputs found

    3D fusion of histology to multi-parametric MRI for prostate cancer imaging evaluation and lesion-targeted treatment planning

    Multi-parametric magnetic resonance imaging (mpMRI) of localized prostate cancer has the potential to support detection, staging and localization of tumors, as well as selection, delivery and monitoring of treatments. Delineating prostate cancer tumors on imaging could potentially further support the clinical workflow by enabling precise monitoring of tumor burden in active-surveillance patients, optimized targeting of image-guided biopsies, and targeted delivery of treatments to decrease morbidity and improve outcomes. Evaluating the performance of mpMRI for prostate cancer imaging and delineation ideally includes comparison to an accurately registered reference standard, such as prostatectomy histology, for the locations of tumor boundaries on mpMRI. There are key gaps in knowledge regarding how to accurately register histological reference standards to imaging, and consequently further gaps in knowledge regarding the suitability of mpMRI for tasks, such as tumor delineation, that require such reference standards for evaluation. To obtain an understanding of the magnitude of the mpMRI-histology registration problem, we quantified the position, orientation and deformation of whole-mount histology sections relative to the formalin-fixed tissue slices from which they were cut. We found that (1) modeling isotropic scaling accounted for the majority of the deformation with a further small but statistically significant improvement from modeling affine transformation, and (2) due to the depth (mean±standard deviation (SD) 1.1±0.4 mm) and orientation (mean±SD 1.5±0.9°) of the sectioning, the assumption that histology sections are cut from the front faces of tissue slices, common in previous approaches, introduced a mean error of 0.7 mm. 
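    The isotropic-scaling-versus-affine comparison described above amounts to fitting two transform classes to corresponding landmark pairs and comparing residuals. A minimal 2D NumPy sketch with hypothetical landmarks (not the thesis's data): an affine fit via least squares, and a similarity fit (isotropic scale + rotation + translation) via the SVD-based Umeyama method.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine fit: dst ~ src @ A.T + t. Returns (pred, RMSE)."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])          # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    pred = X @ params
    return pred, np.sqrt(np.mean(np.sum((dst - pred) ** 2, axis=1)))

def fit_similarity(src, dst):
    """Umeyama-style fit: isotropic scale s, rotation R, translation."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dc.T @ sc / len(src))  # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))              # guard against reflection
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / sc.var(0).sum()
    pred = s * sc @ R.T + mu_d
    return pred, np.sqrt(np.mean(np.sum((dst - pred) ** 2, axis=1)))
```

Since similarity transforms are a subset of affine transforms, the affine residual is never larger; the thesis's finding corresponds to the affine fit giving a small but statistically significant further reduction.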
To determine the potential consequences of seemingly small registration errors such as described above, we investigated the impact of registration accuracy on the statistical power of imaging validation studies using a co-registered spatial reference standard (e.g. histology images) by deriving novel statistical power formulae that incorporate registration error. We illustrated, through a case study modeled on a prostate cancer imaging trial at our centre, that submillimeter differences in registration error can have a substantial impact on the required sample sizes (and therefore also the study cost) for studies aiming to detect mpMRI signal differences due to 0.5 – 2.0 cm3 prostate tumors. With the aim of achieving highly accurate mpMRI-histology registrations without disrupting the clinical pathology workflow, we developed a three-stage method for accurately registering 2D whole-mount histology images to pre-prostatectomy mpMRI that allowed flexible placement of cuts during slicing for pathology and avoided the assumption that histology sections are cut from the front faces of tissue slices. The method comprised a 3D reconstruction of histology images, followed by 3D–3D ex vivo–in vivo and in vivo–in vivo image transformations. The 3D reconstruction method minimized fiducial registration error between cross-sections of non-disruptive histology- and ex-vivo-MRI-visible strand-shaped fiducials to reconstruct histology images into the coordinate system of an ex vivo MR image. We quantified the mean±standard deviation target registration error of the reconstruction to be 0.7±0.4 mm, based on the post-reconstruction misalignment of intrinsic landmark pairs. We also compared our fiducial-based reconstruction to an alternative reconstruction based on mutual-information-based registration, an established method for multi-modality registration. 
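    The thesis derives its own power formulae incorporating registration error; those are not reproduced here. As a simplified sketch of why submillimeter registration error can inflate required sample sizes, the textbook two-sample sample-size formula can be modified under the crude assumption (ours, not the thesis's) that registration error contributes an independent additive variance term:

```python
import math

# Standard-normal quantiles for alpha = 0.05 (two-sided) and power = 0.80.
Z_ALPHA, Z_BETA = 1.959964, 0.841621

def sample_size(delta, sigma, sigma_reg):
    """Per-group n to detect a mean signal difference `delta` given
    measurement SD `sigma`, when registration error is assumed to add an
    independent variance contribution `sigma_reg`**2 (illustrative only)."""
    sigma_eff = math.hypot(sigma, sigma_reg)
    n = 2.0 * ((Z_ALPHA + Z_BETA) * sigma_eff / delta) ** 2
    return math.ceil(n)
```

For example, with illustrative values `delta = 1.0` and `sigma = 1.5`, `sample_size(1.0, 1.5, 0.0)` gives 36 per group, while `sample_size(1.0, 1.5, 0.8)` gives 46: a sub-unit registration error term already raises the required sample size, which is the qualitative effect the thesis quantifies rigorously.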
    We found that the mean target registration error for the fiducial-based method (0.7 mm) was lower than that for the mutual-information-based method (1.2 mm), and that the mutual-information-based method was less robust to initialization error due to multiple sources of error, including the optimizer and the mutual information similarity metric. The second stage of the histology–mpMRI registration used interactively defined 3D–3D deformable thin-plate-spline transformations to align ex vivo to in vivo MR images to compensate for deformation due to endorectal MR coil positioning, surgical resection and formalin fixation. The third stage used interactively defined 3D–3D rigid or thin-plate-spline transformations to co-register in vivo mpMRI images to compensate for patient motion and image distortion. The combined mean registration error of the histology–mpMRI registration was quantified to be 2 mm using manually identified intrinsic landmark pairs. Our data set, comprising mpMRI, target volumes contoured by four observers and co-registered contoured and graded histology images, was used to quantify the positive predictive values and variability of observer scoring of lesions following the Prostate Imaging Reporting and Data System (PI-RADS) guidelines, the variability of target volume contouring, and appropriate expansion margins from target volumes to achieve coverage of histologically defined cancer. The analysis of lesion scoring showed that a PI-RADS overall cancer likelihood of 5, denoting “highly likely cancer”, had a positive predictive value of 85% for Gleason 7 cancer (and 93% for lesions with volumes >0.5 cm3 measured on mpMRI) and that PI-RADS scores were positively correlated with histological grade (ρ=0.6). However, the analysis also showed interobserver differences in PI-RADS score of 0.6 to 1.2 (on a 5-point scale) and an agreement kappa value of only 0.30. 
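    The second and third registration stages above rely on thin-plate-spline (TPS) transformations. As a minimal 2D illustration (NumPy, toy landmarks; not the thesis's interactive 3D implementation), a TPS fitted to landmark pairs reproduces the control points exactly and interpolates smoothly elsewhere:

```python
import numpy as np

def _U(d):
    """TPS radial basis U(r) = r^2 * log(r), with U(0) = 0."""
    out = np.zeros_like(d)
    m = d > 0
    out[m] = d[m] ** 2 * np.log(d[m])
    return out

def tps_fit(src, dst):
    """Solve for thin-plate-spline coefficients mapping 2D src -> dst landmarks."""
    n = len(src)
    K = _U(np.linalg.norm(src[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T   # bordered TPS system
    rhs = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(L, rhs)

def tps_apply(w, src, pts):
    """Evaluate the fitted spline at query points `pts`."""
    U = _U(np.linalg.norm(pts[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ w[:len(src)] + P @ w[len(src):]
```

The bordered linear system enforces exact interpolation at the control points plus an affine part, which is why TPS transforms are a natural fit for landmark-guided deformable registration.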
    The analysis of target volume contouring showed that target volume contours with suitable margins can achieve near-complete histological coverage for detected lesions, despite the presence of high interobserver spatial variability in target volumes. Prostate cancer imaging and delineation have the potential to support multiple stages in the management of localized prostate cancer. Targeted biopsy procedures with optimized targeting based on tumor delineation may help distinguish patients who need treatment from those who need active surveillance. Ongoing monitoring of tumor burden based on delineation in patients undergoing active surveillance may help identify those who need to progress to therapy early while the cancer is still curable. Preferentially targeting therapies at delineated target volumes may lower the morbidity associated with aggressive cancer treatment and improve outcomes in low- to intermediate-risk patients. Measurements of the accuracy and variability of lesion scoring and target volume contouring on mpMRI will clarify its value in supporting these roles.

    Call for Papers

    We are pleased to open submissions for Volume 3, to be published December 2020. We accept standard research articles (6,000-8,000 words), as well as a range of other collaborative, creative and exploratory works (see our website for details: https://ojs.victoria.ac.nz/ce/about). Deadline for Open Submissions is April 1, 2020.

    On Power and Obligation in Publishing

    Welcome to Volume 5 of Commoning Ethnography. We’ll start with the obvious: this issue was a challenge to produce. It arrives nearly three calendar years after our last issue. This was not our plan. There are myriad reasons for the issue’s untimeliness. Chiefly, these have to do with a quite volatile period in the life of our institution in the long wake of the COVID-19 pandemic as it played out in its own untimely way in Aotearoa. They also have to do with changes in our personal circumstances and shifting personnel on the editorial collective. Rather than unpack these circumstances, the experience of trying desperately to publish the journal while also keeping up with all the other things in life has raised a different set of questions: What is the nature of the relationship between author and editor? What kinds of obligations, responsibilities, and power relationships are enfolded into that relationship? What happens when those asymmetries shift around?

    Adaptive control of hypersonic vehicles

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2008. Includes bibliographical references (p. 105-109). The guidance, navigation and control of hypersonic vehicles are highly challenging tasks because the dynamics of the airframe, propulsion system and structure are integrated and highly interactive. Such coupling makes it difficult to model the various components with a requisite degree of accuracy. This in turn makes control tasks, including altitude and velocity command tracking in the cruise phase of flight, extremely difficult. This work proposes an adaptive controller for a hypersonic cruise vehicle subject to aerodynamic uncertainties, center-of-gravity movements, actuator saturation, failures, and time-delays. The adaptive control architecture is based on a linearized model of the underlying rigid-body dynamics and explicitly accommodates all of these uncertainties. Within the control structure is a baseline Proportional Integral Filter commonly used in optimal control designs. The control design is validated using a high-fidelity hypersonic vehicle (HSV) model that incorporates various effects, including coupling between structural modes and aerodynamics, and thrust-pitch coupling. Analysis of the Adaptive Robust Controller for Hypersonic Vehicles (ARCH) is carried out using a control verification methodology. This methodology illustrates the resilience of the controller to the uncertainties mentioned above for a set of closed-loop requirements that prevent excessive structural loading, poor tracking performance, and engine stalls. This analysis enables quantification of the improvements that result from using an adaptive controller for a typical maneuver in the V-h space under cruise conditions. by Travis Eli Gibson. S.M.
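    The ARCH controller itself is not reproduced in the abstract. As a hedged illustration of the underlying mechanism (adaptive control: adjusting gains online so an uncertain plant tracks a stable reference model), here is a textbook scalar model-reference adaptive control (MRAC) sketch; all numbers are illustrative and not from the thesis.

```python
# Textbook scalar MRAC sketch (Euler integration): plant xdot = a*x + b*u
# with a unknown to the controller; reference model xmdot = am*xm + bm*r.
# The gains kx, kr adapt online so the plant state x tracks the model xm.
a, b = 1.0, 1.0        # true plant parameters (a > 0: open-loop unstable)
am, bm = -2.0, 2.0     # stable reference model
gamma = 2.0            # adaptation rate
dt, steps = 0.01, 20000
x = xm = 0.0
kx = kr = 0.0          # adaptive feedback and feedforward gains
r = 1.0                # constant reference command
for _ in range(steps):
    u = kx * x + kr * r
    e = x - xm                      # tracking error
    kx -= gamma * e * x * dt        # Lyapunov-based adaptation, sign(b) = +1
    kr -= gamma * e * r * dt
    x += (a * x + b * u) * dt
    xm += (am * xm + bm * r) * dt
```

Despite never learning `a` explicitly, the adaptation drives the tracking error toward zero; the thesis's ARCH architecture extends this idea to a multivariable, saturation- and delay-tolerant setting.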

    Call for Papers

    We are pleased to open submissions for Issue 2, to be published December 2019. We accept standard research articles (6,000-8,000 words), as well as a range of other collaborative, creative and exploratory works (see our website for details: https://ojs.victoria.ac.nz/ce/about). Deadline for Open Submissions is April 1, 2019. We also welcome submissions for a Special Section on The Labours of Collaboration for Issue 2. The deadline for Special Section submissions is March 1, 2019.

    Intraoperative Organ Motion Models with an Ensemble of Conditional Generative Adversarial Networks

    In this paper, we describe how a patient-specific, ultrasound-probe-induced prostate motion model can be directly generated from a single preoperative MR image. Our motion model allows for sampling from the conditional distribution of dense displacement fields, is encoded by a generative neural network conditioned on a medical image, and accepts random noise as additional input. The generative network is trained by a minimax optimisation with a second discriminative neural network, tasked to distinguish generated samples from training motion data. In this work, we propose that 1) jointly optimising a third conditioning neural network that pre-processes the input image can effectively extract patient-specific features for conditioning; and 2) combining multiple generative models trained separately with heuristically pre-disjointed training data sets can adequately mitigate the problem of mode collapse. Trained with diagnostic T2-weighted MR images from 143 real patients and 73,216 3D dense displacement fields from finite element simulations of intraoperative prostate motion due to transrectal ultrasound probe pressure, the proposed models produced physically plausible patient-specific motion of prostate glands. The ability to capture biomechanically simulated motion was evaluated using two errors representing generalisability and specificity of the model. The median values, calculated from a 10-fold cross-validation, were 2.8±0.3 mm and 1.7±0.1 mm, respectively. We conclude that the introduced approach demonstrates the feasibility of applying state-of-the-art machine learning algorithms to generate organ motion models from patient images, and shows significant promise for future research. Comment: Accepted to MICCAI 201
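    The abstract does not spell out its definitions of the generalisability and specificity errors; a common convention in generative shape/motion modeling (assumed here, not confirmed by the paper) measures, respectively, how closely the model can reproduce held-out real samples and how close generated samples stay to real data:

```python
import numpy as np

def nearest_dists(A, B):
    """For each row of A, Euclidean distance to the nearest row of B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min(axis=1)

def generalisability(held_out, generated):
    """Low when the model can produce something close to every real sample."""
    return nearest_dists(held_out, generated).mean()

def specificity(generated, real):
    """Low when every generated sample stays close to some real sample."""
    return nearest_dists(generated, real).mean()
```

The two metrics pull in opposite directions: a model that memorizes one training sample is highly specific but generalizes poorly, while one that scatters samples everywhere generalizes trivially but is unspecific.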

    Adversarial Deformation Regularization for Training Image Registration Neural Networks

    We describe an adversarial learning approach to constrain convolutional neural network training for image registration, replacing heuristic smoothness measures of displacement fields often used in these tasks. Using minimally-invasive prostate cancer intervention as an example application, we demonstrate the feasibility of utilizing biomechanical simulations to regularize a weakly-supervised anatomical-label-driven registration network for aligning pre-procedural magnetic resonance (MR) and 3D intra-procedural transrectal ultrasound (TRUS) images. A discriminator network is optimized to distinguish the registration-predicted displacement fields from the motion data simulated by finite element analysis. During training, the registration network simultaneously aims to maximize similarity between anatomical labels that drives image alignment and to minimize an adversarial generator loss that measures divergence between the predicted and simulated deformations. The end-to-end trained network enables efficient and fully-automated registration that only requires an MR and TRUS image pair as input, without anatomical labels or simulated data during inference. 108 pairs of labelled MR and TRUS images from 76 prostate cancer patients and 71,500 nonlinear finite-element simulations from 143 different patients were used for this study. We show that, with only gland segmentation as training labels, the proposed method can help predict physically plausible deformation without any other smoothness penalty. Based on cross-validation experiments using 834 pairs of independent validation landmarks, the proposed adversarial-regularized registration achieved a target registration error of 6.3 mm that is significantly lower than those from several other regularization methods. Comment: Accepted to MICCAI 201
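    The 6.3 mm figure above is a target registration error (TRE) summarized over corresponding landmark pairs. As a minimal sketch (pure Python, with hypothetical landmark lists), TRE is simply the Euclidean distance between each warped landmark and its fixed-image counterpart, reported here as a median:

```python
import math
from statistics import median

def target_registration_error(warped_landmarks, fixed_landmarks):
    """Median Euclidean distance between corresponding 3D landmark pairs."""
    dists = [math.dist(p, q)
             for p, q in zip(warped_landmarks, fixed_landmarks)]
    return median(dists)
```

The median (rather than the mean) is robust to the occasional badly-matched landmark pair, which is why it is a common summary statistic in registration validation.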

    Label-driven weakly-supervised learning for multimodal deformable image registration

    Spatially aligning medical images from different modalities remains a challenging task, especially for intraoperative applications that require fast and robust algorithms. We propose a weakly-supervised, label-driven formulation for learning 3D voxel correspondence from higher-level label correspondence, thereby bypassing classical intensity-based image similarity measures. During training, a convolutional neural network is optimised by outputting a dense displacement field (DDF) that warps a set of available anatomical labels from the moving image to match their corresponding counterparts in the fixed image. These label pairs, including solid organs, ducts, vessels, point landmarks and other ad hoc structures, are only required at training time and can be spatially aligned by minimising a cross-entropy function of the warped moving label and the fixed label. During inference, the trained network takes a new image pair to predict an optimal DDF, resulting in a fully-automatic, label-free, real-time and deformable registration. For interventional applications where large global transformations prevail, we also propose a neural network architecture to jointly optimise the global and local displacements. Experiment results are presented based on cross-validating registrations of 111 pairs of T2-weighted magnetic resonance images and 3D transrectal ultrasound images from prostate cancer patients with a total of over 4000 anatomical labels, yielding a median target registration error of 4.2 mm on landmark centroids and a median Dice of 0.88 on prostate glands. Comment: Accepted to ISBI 201
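    During training, the displacement field is scored by the overlap between the warped moving label and the fixed label (via a differentiable cross-entropy, per the abstract). As an illustration of the overlap idea only, here is a minimal 2D NumPy sketch using a nearest-neighbour label warp and a Dice score, which is also how the 0.88 gland figure above is computed:

```python
import numpy as np

def warp_label_nn(label, ddf):
    """Nearest-neighbour warp of a 2D binary label by a dense displacement
    field `ddf` of shape (H, W, 2) holding (row, col) displacements."""
    H, W = label.shape
    rr, cc = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_r = np.clip(np.rint(rr + ddf[..., 0]).astype(int), 0, H - 1)
    src_c = np.clip(np.rint(cc + ddf[..., 1]).astype(int), 0, W - 1)
    return label[src_r, src_c]

def dice(a, b):
    """Dice overlap of two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```

In the paper's formulation the warp must be differentiable (trilinear resampling of smoothed labels) so gradients can flow back to the network; the nearest-neighbour version here is only for checking overlap at evaluation time.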
