
    3D CATBraTS: Channel Attention Transformer for Brain Tumour Semantic Segmentation

    Brain tumour diagnosis is a challenging yet crucial task for planning treatments to stop or slow the growth of a tumour. In the last decade, there has been a dramatic increase in the use of convolutional neural networks (CNNs) owing to their high performance in the automatic segmentation of tumours in medical images. More recently, the Vision Transformer (ViT) has become a central focus of medical imaging for its robustness and efficiency compared to CNNs. In this paper, we propose a novel 3D transformer named 3D CATBraTS for brain tumour semantic segmentation on magnetic resonance images (MRIs), based on the state-of-the-art Swin transformer with a modified CNN-encoder architecture using residual blocks and a channel attention module. The proposed approach is evaluated on the BraTS 2021 dataset and achieves a mean Dice similarity coefficient (DSC) that surpasses the current state-of-the-art approaches in the validation phase.
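    The abstract mentions a channel attention module in the CNN encoder but does not specify its design. Below is a minimal, illustrative sketch of a squeeze-and-excitation style 3D channel attention block in PyTorch; the block used in 3D CATBraTS may differ, so the structure, reduction ratio and tensor shapes are assumptions.

```python
# Illustrative squeeze-and-excitation style 3D channel attention (assumption;
# not necessarily the exact module used in 3D CATBraTS).
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)           # squeeze: global spatial average per channel
        self.fc = nn.Sequential(                      # excitation: learn per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                  # reweight feature channels

# Example: reweight a 32-channel feature map from a 3D MRI encoder
feats = torch.randn(2, 32, 16, 16, 16)
out = ChannelAttention3D(32)(feats)
print(out.shape)  # torch.Size([2, 32, 16, 16, 16])
```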

    Biomarker comparison and selection for prostate cancer detection in Dynamic Contrast Enhanced-Magnetic Resonance Imaging (DCE-MRI)

    In this work, the capability of imaging biomarkers obtained from multivariate curve resolution-alternating least squares (MCR-ALS), in combination with those obtained from first- and second-generation pharmacokinetic models, has been studied for improving prostate cancer tumor depiction using partial least squares-discriminant analysis (PLS-DA). The main goal of this work is to improve tissue classification by selecting the best biomarkers in terms of prediction. A wrapped double cross-validation method has been applied for the variable selection process. Using the best PLS-DA model, prostate tissues can be classified with 13.4% false negatives and 7.4% false positives. Using MCR-ALS biomarkers yields the best models in terms of parsimony and classification performance. This research has been supported by "Generalitat Valenciana (Conselleria d'Educacio, Investigacio, Cultura i Esport)" under the project AICO/2016/061.
    Aguado-Sarrió, E.; Prats-Montalbán, J.M.; Sanz-Requena, R.; Garcia-Marti, G.; Marti-Bonmati, L.; Ferrer, A. (2017). Biomarker comparison and selection for prostate cancer detection in Dynamic Contrast Enhanced-Magnetic Resonance Imaging (DCE-MRI). Chemometrics and Intelligent Laboratory Systems, 165:38-45. https://doi.org/10.1016/j.chemolab.2017.04.003
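    As a rough illustration of the classification setting described (PLS-DA on imaging biomarkers, evaluated by false-negative and false-positive fractions), the sketch below uses scikit-learn's PLSRegression on synthetic data. The biomarkers, component count, threshold and error-rate definitions are assumptions; the study's wrapped double cross-validation and biomarker selection are not reproduced.

```python
# Sketch of PLS-DA classification with synthetic "biomarkers" (illustration only).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                 # 12 hypothetical imaging biomarkers per region
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.8, size=200) > 0).astype(float)  # 1 = tumour

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# PLS-DA: PLS regression on the binary class label, thresholded at 0.5
plsda = PLSRegression(n_components=3).fit(X_tr, y_tr)
y_hat = (np.asarray(plsda.predict(X_te)).ravel() > 0.5).astype(float)

fn = np.sum((y_te == 1) & (y_hat == 0)) / np.sum(y_te == 1)   # tumour regions missed
fp = np.sum((y_te == 0) & (y_hat == 1)) / np.sum(y_te == 0)   # healthy regions flagged
print(f"false negatives: {fn:.1%}, false positives: {fp:.1%}")
```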

    Technical Note: Error metrics for estimating the accuracy of needle/instrument placement during transperineal MR/US-guided prostate interventions

    Purpose: Image-guided systems that fuse magnetic resonance imaging (MRI) with three-dimensional (3D) ultrasound (US) images for performing targeted prostate needle biopsy and minimally invasive treatments for prostate cancer are of increasing clinical interest. To date, a wide range of different accuracy estimation procedures and error metrics have been reported, which makes comparing the performance of different systems difficult. Methods: A set of 9 measures is presented to assess the accuracy of MRI-US image registration, needle positioning, needle guidance, and overall system error, with the aim of providing a methodology for estimating the accuracy of instrument placement using an MR/US-guided transperineal approach. Results: Using the SmartTarget fusion system, the MRI-US image alignment error was determined to be 2.0±1.0 mm (mean ± SD), and the overall system instrument targeting error was 3.0±1.2 mm. Three needle deployments for each target phantom lesion were found to result in a 100% lesion hit rate and a median predicted cancer core length of 5.2 mm. Conclusions: The application of a comprehensive, unbiased validation assessment for MR/TRUS-guided systems can provide useful information on system performance for quality assurance and system comparison. Furthermore, such an analysis can help identify relationships between these errors, providing insight into the technical behaviour of these systems.
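    The nine measures themselves are defined in the note; the sketch below only illustrates the common pattern behind several of them: a per-target 3D Euclidean distance between intended and achieved positions, summarised as mean ± SD. The coordinates are invented for demonstration.

```python
# Sketch of a targeting-error summary (mean ± SD of 3D Euclidean distances).
import numpy as np

def placement_error(intended: np.ndarray, achieved: np.ndarray):
    """Per-target 3D Euclidean error (mm), plus its mean and sample SD."""
    d = np.linalg.norm(achieved - intended, axis=1)
    return d, d.mean(), d.std(ddof=1)

# Hypothetical intended vs. achieved needle-tip positions in mm (image coordinates)
intended = np.array([[10.0, 22.0, 31.0], [14.5, 20.0, 28.0], [9.0, 25.0, 35.0]])
achieved = np.array([[11.2, 23.5, 29.5], [13.0, 21.0, 30.5], [10.1, 26.8, 33.0]])

errors, mean_err, sd_err = placement_error(intended, achieved)
print(f"targeting error: {mean_err:.1f} ± {sd_err:.1f} mm")
```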

    Automatic slice segmentation of intraoperative transrectal ultrasound images using convolutional neural networks

    Clinically important targets for ultrasound-guided prostate biopsy and prostate cancer focal therapy can be defined on MRI. However, localizing these targets on transrectal ultrasound (TRUS) remains challenging. Automatic segmentation of the prostate on intraoperative TRUS images is an important step towards automating most MRI-TRUS image registration workflows so that they become more acceptable in clinical practice. In this paper, we propose a deep learning method using convolutional neural networks (CNNs) for automatic prostate segmentation in 2D TRUS slices and 3D TRUS volumes. The method was evaluated on a clinical cohort of 110 patients who underwent TRUS-guided targeted biopsy. Segmentation accuracy was measured by comparison with manual prostate segmentation in 2D on 4055 TRUS images and in 3D on the corresponding 110 volumes, in a 10-fold patient-level cross-validation. The proposed method achieved a mean 2D Dice similarity coefficient (DSC) of 0.91±0.12 and a mean absolute boundary segmentation error of 1.23±1.46 mm. Dice scores (0.91±0.04) were also calculated for the 3D volumes at the patient level. These results suggest a promising approach to aid the wide range of TRUS-guided prostate cancer procedures that require multimodality data fusion.
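    A minimal sketch of the two evaluation measures reported, the Dice similarity coefficient and a mean absolute boundary distance, is given below using small synthetic 2D masks and SciPy. The boundary distance here is one-directional (predicted boundary to reference boundary) and the pixel spacing is assumed; the paper's exact definitions may differ.

```python
# Sketch of Dice similarity coefficient and a one-directional mean boundary distance.
import numpy as np
from scipy import ndimage

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def mean_boundary_distance(pred: np.ndarray, ref: np.ndarray, spacing_mm: float = 1.0) -> float:
    # distance from each predicted boundary pixel to the nearest reference boundary pixel
    ref_edge = ref ^ ndimage.binary_erosion(ref)
    pred_edge = pred ^ ndimage.binary_erosion(pred)
    dist_to_ref = ndimage.distance_transform_edt(~ref_edge) * spacing_mm
    return float(dist_to_ref[pred_edge].mean())

ref = np.zeros((64, 64), bool); ref[20:44, 18:46] = True     # "manual" mask
pred = np.zeros((64, 64), bool); pred[21:45, 20:47] = True   # "CNN" mask
print(f"DSC = {dice(pred, ref):.3f}, boundary error = {mean_boundary_distance(pred, ref):.2f} mm")
```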

    Integration of spatial information in convolutional neural networks for automatic segmentation of intraoperative transrectal ultrasound images

    Image guidance systems that register scans of the prostate obtained using transrectal ultrasound (TRUS) and magnetic resonance imaging are becoming increasingly popular as a means of enabling tumor-targeted prostate cancer biopsy and treatment. However, intraoperative segmentation of TRUS images to define the three-dimensional (3-D) geometry of the prostate remains a necessary task in existing guidance systems, one that often requires significant manual interaction and is subject to interoperator variability. Therefore, automating this step would lead to more acceptable clinical workflows and greater standardization between different operators and hospitals. In this work, a convolutional neural network (CNN) for automatically segmenting the prostate in two-dimensional (2-D) TRUS slices of a 3-D TRUS volume was developed and tested. The network was designed to incorporate 3-D spatial information by taking, in addition to the slice to be segmented, one or more of its neighboring TRUS slices as input. The accuracy of the CNN was evaluated on data from a cohort of 109 patients who had undergone TRUS-guided targeted biopsy (a total of 4034 2-D slices). The segmentation accuracy was measured by calculating 2-D and 3-D Dice similarity coefficients on the 2-D images and corresponding 3-D volumes, respectively, as well as 2-D boundary distances, using a 10-fold patient-level cross-validation experiment. However, incorporating neighboring slices did not improve segmentation performance in five out of six experiments, which varied the number of neighboring slices from 1 to 3 on either side. The up-sampling shortcuts reduced the overall training time of the network to 161 min, compared with 253 min without this architectural addition.
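    The key input construction described, feeding each 2-D TRUS slice to the network together with its neighboring slices stacked as channels, can be sketched as follows. The array shapes and the clamping of out-of-range neighbors are assumptions for illustration; the CNN itself and the 10-fold patient-level cross-validation are not shown.

```python
# Sketch of stacking neighboring TRUS slices as input channels for a 2-D CNN.
import numpy as np

def stack_neighbors(volume: np.ndarray, index: int, n_neighbors: int) -> np.ndarray:
    """Return slice `index` with `n_neighbors` slices on each side as channels.

    volume: (n_slices, H, W) TRUS volume; out-of-range neighbors are clamped to
    the first/last slice so the channel count stays constant for every slice.
    """
    idx = np.clip(np.arange(index - n_neighbors, index + n_neighbors + 1),
                  0, volume.shape[0] - 1)
    return volume[idx]                                 # shape: (2*n_neighbors + 1, H, W)

vol = np.random.rand(37, 128, 128).astype(np.float32)  # hypothetical 3-D TRUS volume
x = stack_neighbors(vol, index=0, n_neighbors=2)
print(x.shape)                                          # (5, 128, 128) -> 5-channel network input
```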

    DeepReg: a deep learning toolkit for medical image registration

    Image fusion is a fundamental task in medical image analysis and computer-assisted intervention. Medical image registration, the family of computational algorithms that align different images together (Hill et al., 2001), has in recent years turned its research attention towards deep learning. Indeed, the ability of deep neural networks to learn representations from population data has opened new possibilities for improving registration generalisability by mitigating the difficulties of designing hand-engineered image features and similarity measures for many real-world clinical applications (Fu et al., 2020; Haskins et al., 2020). In addition, fast inference can substantially accelerate registration execution for time-critical tasks. DeepReg is a Python package using TensorFlow (Abadi et al., 2015) that implements multiple registration algorithms and a set of predefined dataset loaders, supporting both labelled and unlabelled data. DeepReg also provides command-line tool options that enable basic and advanced functionalities for model training, prediction and image warping. These implementations, together with their documentation, tutorials and demos, aim to simplify workflows for prototyping and developing novel methodology, utilising the latest developments and accessing quality research advances. DeepReg is unit tested, and a set of customised contributor guidelines is provided to facilitate community contributions. A submission to the MICCAI Educational Challenge has utilised the DeepReg code and demos to explore the link between classical algorithms and deep-learning-based methods (Montana Brown et al., 2020), while a recently published research work investigated temporal changes in prostate cancer imaging using a longitudinal registration adapted from the DeepReg code (Yang et al., 2020).
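    To give a sense of what a registration toolkit ultimately computes, the sketch below applies a dense displacement field to warp a moving image onto a fixed image's grid, using SciPy rather than DeepReg's own API (which is not reproduced here). In a learning-based pipeline such as DeepReg, the displacement field would be predicted by a trained network rather than hand-made.

```python
# Conceptual sketch: warping a moving image with a dense displacement field (DDF).
# This intentionally avoids DeepReg's API; it only illustrates the core operation.
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving: np.ndarray, ddf: np.ndarray) -> np.ndarray:
    """Warp a 2-D moving image with a dense displacement field ddf of shape (2, H, W)."""
    h, w = moving.shape
    grid = np.mgrid[0:h, 0:w].astype(float)   # sampling grid of the fixed image
    coords = grid + ddf                        # displaced sampling locations
    return map_coordinates(moving, coords, order=1, mode="nearest")

moving = np.zeros((64, 64)); moving[24:40, 24:40] = 1.0   # toy "moving" image
ddf = np.zeros((2, 64, 64)); ddf[0] += 3.0                # shift sampling by 3 px along axis 0
warped = warp(moving, ddf)
print(warped.sum(), moving.sum())                         # intensity roughly preserved after warping
```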

    The SmartTarget BIOPSY trial: A prospective, within-person randomised, blinded trial comparing the accuracy of visual-registration and MRI/ultrasound image-fusion targeted biopsies for prostate cancer risk stratification

    Background: Multiparametric magnetic resonance imaging (mpMRI)-targeted prostate biopsies can improve detection of clinically significant prostate cancer and decrease the overdetection of insignificant cancers. Whether visual-registration targeting is sufficient or whether augmentation with image-fusion software is needed is unknown. Objective: To assess concordance between the two methods. Design, Setting, and Participants: We conducted a blinded, within-person randomised, paired validating clinical trial. From 2014 to 2016, 141 men who had undergone a prior (positive or negative) transrectal ultrasound biopsy and had a discrete lesion on mpMRI (score 3 to 5) requiring targeted transperineal biopsy were enrolled at a UK academic hospital; 129 underwent both biopsy strategies and completed the study. Intervention: The order of performing biopsies using visual-registration and a computer-assisted MRI/ultrasound image-fusion system (SmartTarget) on each patient was randomised. The equipment was reset between biopsy strategies to mitigate incorporation bias. Outcome Measurements and Statistical Analysis: The proportion of clinically significant prostate cancer (primary outcome: Gleason pattern ≥3+4=7, maximum cancer core length ≥4 mm; secondary outcome: Gleason pattern ≥4+3=7, maximum cancer core length ≥6 mm) detected by each method was compared using McNemar's test of paired proportions. Results and Limitations: The two strategies combined detected 93 clinically significant prostate cancers (72% of the cohort). Each strategy individually detected 80/93 (86%) of these cancers; each strategy detected 13 cases missed by the other. Three patients experienced adverse events related to biopsy (urinary retention, urinary tract infection, nausea and vomiting). No difference in urinary symptoms, erectile function, or quality of life between baseline and follow-up (median 10.5 weeks) was observed. The key limitations were the lack of parallel-group randomisation and the limit on the number of targeted cores. Conclusions: Visual-registration and image-fusion targeting strategies combined had the highest detection rate for clinically significant cancers. Targeted prostate biopsy should be performed using both strategies together. Patient Summary: We compared two prostate cancer biopsy strategies: visual-registration and image-fusion. The combination of the two strategies found the most clinically important cancers, and the two should be used together whenever targeted biopsy is performed.
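    The paired comparison uses McNemar's test of paired proportions. As an illustration only (not the trial's analysis code), the sketch below applies an exact McNemar test to the discordant counts implied by the abstract: 13 significant cancers detected only by visual registration and 13 only by image fusion.

```python
# Exact McNemar test from the two discordant-pair counts (illustration only).
from scipy.stats import binom

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact McNemar p-value; b and c are the discordant-pair counts."""
    n, k = b + c, min(b, c)
    p = 2.0 * binom.cdf(k, n, 0.5)   # binomial test of b vs c under H0: p = 0.5
    return min(p, 1.0)

# 13 cancers detected only by visual registration, 13 only by image fusion
print(mcnemar_exact(13, 13))         # 1.0 -> no evidence that either strategy detects more
```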