    Adversarial Deformation Regularization for Training Image Registration Neural Networks

    We describe an adversarial learning approach to constrain convolutional neural network training for image registration, replacing the heuristic smoothness measures of displacement fields often used in these tasks. Using minimally invasive prostate cancer intervention as an example application, we demonstrate the feasibility of using biomechanical simulations to regularize a weakly-supervised, anatomical-label-driven registration network for aligning pre-procedural magnetic resonance (MR) and 3D intra-procedural transrectal ultrasound (TRUS) images. A discriminator network is optimized to distinguish the registration-predicted displacement fields from motion data simulated by finite element analysis. During training, the registration network simultaneously aims to maximize the similarity between anatomical labels that drives image alignment and to minimize an adversarial generator loss that measures the divergence between the predicted and simulated deformations. The end-to-end trained network enables efficient and fully automated registration that requires only an MR and TRUS image pair as input, without anatomical labels or simulated data at inference. 108 pairs of labelled MR and TRUS images from 76 prostate cancer patients and 71,500 nonlinear finite-element simulations from 143 different patients were used for this study. We show that, with only gland segmentation as training labels, the proposed method can predict physically plausible deformation without any other smoothness penalty. Based on cross-validation experiments using 834 pairs of independent validation landmarks, the proposed adversarial-regularized registration achieved a target registration error of 6.3 mm, significantly lower than those from several other regularization methods.
    Comment: Accepted to MICCAI 2018.
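
    The alternating optimisation described above is easy to sketch. Below is a minimal PyTorch sketch of one training step under this scheme, assuming the FEM-simulated displacement fields are supplied as training data; the tiny stand-in networks, the Dice-style label loss, the normalized-DDF convention in `warp`, and the weight `lam` are illustrative assumptions, not the authors' architecture or hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegNet(nn.Module):
    """Stand-in registration network: concatenated moving/fixed volumes in,
    3-channel dense displacement field (DDF) out."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv3d(2, 3, kernel_size=3, padding=1)

    def forward(self, moving, fixed):
        return self.conv(torch.cat([moving, fixed], dim=1))

class Discriminator(nn.Module):
    """Stand-in discriminator: DDF in, real/fake logit out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 1))

    def forward(self, ddf):
        return self.net(ddf)

def warp(image, ddf):
    # Resample `image` with the DDF, here assumed to be in normalized
    # [-1, 1] grid units with channels ordered (x, y, z).
    n, _, d, h, w = image.shape
    zs, ys, xs = torch.meshgrid(torch.linspace(-1, 1, d), torch.linspace(-1, 1, h),
                                torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys, zs], dim=-1).unsqueeze(0).expand(n, -1, -1, -1, -1)
    return F.grid_sample(image, grid + ddf.permute(0, 2, 3, 4, 1), align_corners=True)

def dice_loss(a, b, eps=1e-6):
    inter = (a * b).sum()
    return 1 - (2 * inter + eps) / (a.sum() + b.sum() + eps)

reg_net, disc = RegNet(), Discriminator()
opt_r = torch.optim.Adam(reg_net.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
lam = 0.1  # adversarial weight (assumed; not given in the abstract)

def train_step(moving_img, fixed_img, moving_lab, fixed_lab, sim_ddf):
    # 1) Discriminator: FEM-simulated fields are "real", predictions "fake".
    with torch.no_grad():
        fake = reg_net(moving_img, fixed_img)
    d_loss = (bce(disc(sim_ddf), torch.ones(sim_ddf.size(0), 1))
              + bce(disc(fake), torch.zeros(fake.size(0), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Registration network: align labels while fooling the discriminator,
    #    so the adversarial term replaces a heuristic smoothness penalty.
    ddf = reg_net(moving_img, fixed_img)
    g_loss = (dice_loss(warp(moving_lab, ddf), fixed_lab)
              + lam * bce(disc(ddf), torch.ones(ddf.size(0), 1)))
    opt_r.zero_grad(); g_loss.backward(); opt_r.step()
```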

    Label-driven weakly-supervised learning for multimodal deformable image registration

    Spatially aligning medical images from different modalities remains a challenging task, especially for intraoperative applications that require fast and robust algorithms. We propose a weakly-supervised, label-driven formulation for learning 3D voxel correspondence from higher-level label correspondence, thereby bypassing classical intensity-based image similarity measures. During training, a convolutional neural network is optimised to output a dense displacement field (DDF) that warps a set of available anatomical labels from the moving image to match their corresponding counterparts in the fixed image. These label pairs, including solid organs, ducts, vessels, point landmarks and other ad hoc structures, are required only at training time and can be spatially aligned by minimising a cross-entropy function of the warped moving label and the fixed label. During inference, the trained network takes a new image pair and predicts an optimal DDF, resulting in fully automatic, label-free, real-time deformable registration. For interventional applications in which large global transformations prevail, we also propose a neural network architecture that jointly optimises the global and local displacements. Experimental results are presented based on cross-validating registrations of 111 pairs of T2-weighted magnetic resonance images and 3D transrectal ultrasound images from prostate cancer patients, with a total of over 4000 anatomical labels, yielding a median target registration error of 4.2 mm on landmark centroids and a median Dice of 0.88 on prostate glands.
    Comment: Accepted to ISBI 2018.
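
    Two ingredients of this formulation lend themselves to a short sketch: expressing the global (affine) transformation as a dense displacement field so that global and local displacements can be jointly optimised, and the cross-entropy label loss that touches labels only at training time. The PyTorch sketch below uses illustrative function names, and the additive composition of the two displacement fields is an assumption; the abstract does not specify the exact scheme.

```python
import torch
import torch.nn.functional as F

def affine_to_ddf(theta, size):
    # theta: (N, 3, 4) affine parameters; size: target shape (N, C, D, H, W).
    # F.affine_grid returns absolute sampling coordinates; subtracting the
    # identity grid leaves the global displacement field.
    grid = F.affine_grid(theta, size, align_corners=True)  # (N, D, H, W, 3)
    eye = torch.eye(3, 4).unsqueeze(0).expand(theta.size(0), -1, -1)
    identity = F.affine_grid(eye, size, align_corners=True)
    return grid - identity

def label_cross_entropy(warped_label, fixed_label, eps=1e-6):
    # Cross-entropy between the warped moving label and the fixed label;
    # label pairs supervise training but are never needed at inference.
    w = warped_label.clamp(eps, 1 - eps)
    return -(fixed_label * torch.log(w) + (1 - fixed_label) * torch.log(1 - w)).mean()

# One plausible joint scheme (assumed): add the global and local fields
# before warping the moving labels, then minimise the label loss.
# ddf = affine_to_ddf(theta, moving_lab.shape) + local_ddf
```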

    Integration of spatial information in convolutional neural networks for automatic segmentation of intraoperative transrectal ultrasound images

    Image guidance systems that register scans of the prostate obtained using transrectal ultrasound (TRUS) and magnetic resonance imaging are becoming increasingly popular as a means of enabling tumor-targeted prostate cancer biopsy and treatment. However, intraoperative segmentation of TRUS images to define the three-dimensional (3-D) geometry of the prostate remains a necessary task in existing guidance systems, one that often requires significant manual interaction and is subject to interoperator variability. Automating this step would therefore lead to more acceptable clinical workflows and greater standardization between different operators and hospitals. In this work, a convolutional neural network (CNN) for automatically segmenting the prostate in two-dimensional (2-D) TRUS slices of a 3-D TRUS volume was developed and tested. The network was designed to incorporate 3-D spatial information by taking one or more TRUS slices neighboring each slice to be segmented as input, in addition to these slices. The accuracy of the CNN was evaluated on data from a cohort of 109 patients who had undergone TRUS-guided targeted biopsy (a total of 4034 2-D slices). Segmentation accuracy was measured by calculating 2-D and 3-D Dice similarity coefficients on the 2-D images and corresponding 3-D volumes, respectively, as well as 2-D boundary distances, in a 10-fold patient-level cross-validation experiment. Incorporating neighboring slices, however, did not improve segmentation performance in five out of six experiments, in which the number of neighboring slices on either side was varied from one to three. The up-sampling shortcuts in the network architecture reduced its overall training time to 161 min, compared with 253 min without this architectural addition.
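
    The neighboring-slice input amounts to stacking adjacent 2-D slices of the 3-D TRUS volume as extra channels of the slice being segmented. A minimal NumPy sketch, assuming clamped indexing at the volume boundaries (the boundary handling is not stated in the abstract):

```python
import numpy as np

def slice_with_neighbours(volume, idx, k=1):
    """Return slice `idx` of a (D, H, W) volume stacked with its k
    neighbours on each side as a (2k+1, H, W) channel array; out-of-range
    indices are clamped to the first/last slice (assumed policy)."""
    d = volume.shape[0]
    picks = [min(max(i, 0), d - 1) for i in range(idx - k, idx + k + 1)]
    return np.stack([volume[i] for i in picks], axis=0)

vol = np.random.rand(64, 128, 128).astype(np.float32)  # placeholder volume
x = slice_with_neighbours(vol, idx=10, k=1)            # (3, 128, 128) CNN input
```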

    DeepReg: a deep learning toolkit for medical image registration

    DeepReg (https://github.com/DeepRegNet/DeepReg) is a community-supported open-source toolkit for research and education in medical image registration using deep learning.
    Comment: Accepted in The Journal of Open Source Software (JOSS).

    IPT9, a cis-zeatin cytokinin biosynthesis gene, promotes root growth

    Cytokinin and auxin are plant hormones that coordinate many aspects of plant development. Their interactions in underground plant growth are well established, occurring at the levels of metabolism, signaling, and transport. Unlike many plant hormone classes, cytokinins are represented by more than one active molecule. Multiple mutant lines, blocking specific parts of the cytokinin biosynthetic pathways, have enabled research on plants deficient in specific cytokinin types. While most of these mutants have confirmed the impeding effect of cytokinin on root growth, the ipt29 double mutant surprisingly exhibits reduced primary root length compared with the wild type. This mutant is impaired in the production of cis-zeatin (cZ), a cytokinin type previously considered inactive. Here we have further investigated the intriguing ipt29 root phenotype, opposite to known cytokinin functions, and the (bio)activity of cZ. Our data suggest that, despite the ipt29 short-root phenotype, cZ application has a negative impact on primary root growth and can activate a cytokinin response in the stele. Grafting experiments revealed that the root phenotype of ipt29 depends mainly on local signaling, which does not relate directly to cytokinin levels. Notably, ipt29 displayed increased auxin levels in the root tissue. Moreover, analyses of the differential contributions of ipt2 and ipt9 to the ipt29 short-root phenotype demonstrated that, despite its deficiency in cZ levels, ipt2 showed no root phenotype or variation in auxin homeostasis, while ipt9 mutants were indistinguishable from ipt29. We conclude that IPT9 functions may extend beyond cZ biosynthesis, directly or indirectly affecting auxin homeostasis and thereby influencing plant growth.

    The SmartTarget Biopsy Trial: A Prospective, Within-person Randomised, Blinded Trial Comparing the Accuracy of Visual-registration and Magnetic Resonance Imaging/Ultrasound Image-fusion Targeted Biopsies for Prostate Cancer Risk Stratification

    Background: Multiparametric magnetic resonance imaging (mpMRI)-targeted prostate biopsies can improve detection of clinically significant prostate cancer and decrease the overdetection of insignificant cancers. It is unknown whether visual-registration targeting is sufficient or whether augmentation with image-fusion software is needed. Objective: To assess concordance between the two methods. Design, setting, and participants: We conducted a blinded, within-person randomised, paired validating clinical trial. From 2014 to 2016, 141 men who had undergone a prior (positive or negative) transrectal ultrasound biopsy and had a discrete lesion on mpMRI (score 3–5) requiring targeted transperineal biopsy were enrolled at a UK academic hospital; 129 underwent both biopsy strategies and completed the study. Intervention: The order of performing biopsies using visual registration and a computer-assisted MRI/ultrasound image-fusion system (SmartTarget) on each patient was randomised. The equipment was reset between biopsy strategies to mitigate incorporation bias. Outcome measurements and statistical analysis: The proportion of clinically significant prostate cancer (primary outcome: Gleason pattern ≥3 + 4 = 7, maximum cancer core length ≥4 mm; secondary outcome: Gleason pattern ≥4 + 3 = 7, maximum cancer core length ≥6 mm) detected by each method was compared using McNemar's test of paired proportions. Results and limitations: The two strategies combined detected 93 clinically significant prostate cancers (72% of the cohort). Each strategy detected 80/93 (86%) of these cancers; each identified 13 cases missed by the other. Three patients experienced adverse events related to biopsy (urinary retention, urinary tract infection, nausea, and vomiting). No difference in urinary symptoms, erectile function, or quality of life between baseline and follow-up (median 10.5 wk) was observed. The key limitations were the lack of parallel-group randomisation and a limit on the number of targeted cores. Conclusions: The visual-registration and image-fusion targeting strategies combined had the highest detection rate for clinically significant cancers. Targeted prostate biopsy should be performed using both strategies together. Patient summary: We compared two prostate cancer biopsy strategies: visual registration and image fusion. A combination of the two strategies found the most clinically important cancers and should be used together whenever targeted biopsy is performed. Image fusion achieved a clinically significant prostate cancer detection rate similar to that of visual registration performed by an experienced operator; detection could be improved by 14%, with no adverse effect on patient safety, by adding image fusion to conventional visual-registration targeting.
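
    The headline comparison can be reproduced from the reported counts. The sketch below uses statsmodels' McNemar test; the 2×2 table layout is a reconstruction (assuming per-patient counting): 67 cancers were detected by both strategies (80 − 13), 13 by each strategy alone, and 129 − 93 = 36 men had no clinically significant cancer detected by either.

```python
from statsmodels.stats.contingency_tables import mcnemar

# Rows: visual registration detected? Columns: image fusion detected?
# (Cell values reconstructed from the reported counts, as described above.)
table = [[67, 13],
         [13, 36]]

# The exact test looks only at the discordant cells (13 vs 13), so the
# two strategies are statistically indistinguishable on this outcome.
result = mcnemar(table, exact=True)
print(result.statistic, result.pvalue)  # 13.0 1.0
```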