    Fan-Slicer: A Pycuda Package for Fast Reslicing of Ultrasound Shaped Planes

    Fan-Slicer (https://github.com/UCL/fan-slicer) is a Python package that enables the fast sampling (slicing) of 2D ultrasound-shaped images from a 3D volume. To increase sampling speed, CUDA kernel functions are used in conjunction with the Pycuda package. The main features include functions to generate images from both 3D surface models and 3D volumes. The package also allows for the sampling of images from curvilinear (fan-shaped plane) and linear (rectangle-shaped plane) ultrasound transducers. Potential uses of Fan-Slicer include the generation of large datasets of 2D images from 3D volumes and the simulation of intra-operative data, among others.
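
    To make the sampling idea concrete: each pixel of the 2D output maps through the fan geometry to a 3D position in the volume, which is then interpolated. Below is a minimal NumPy/SciPy sketch of this mapping; it is not the Fan-Slicer API (which implements the same mapping in CUDA kernels via Pycuda), and all function and parameter names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def slice_fan_plane(volume, apex, u_axis, v_axis,
                    n_depths=256, n_angles=256,
                    r_min=10.0, r_max=120.0, aperture_deg=60.0):
    """Sample a fan-shaped (curvilinear) plane from a 3D volume.

    volume         -- 3D numpy array indexed (z, y, x)
    apex           -- transducer apex position in (x, y, z) voxel coordinates
    u_axis, v_axis -- orthonormal 3D vectors spanning the imaging plane
    """
    apex, u_axis, v_axis = (np.asarray(a, dtype=float) for a in (apex, u_axis, v_axis))
    radii = np.linspace(r_min, r_max, n_depths)
    angles = np.deg2rad(np.linspace(-aperture_deg / 2, aperture_deg / 2, n_angles))
    r, a = np.meshgrid(radii, angles, indexing="ij")

    # Polar fan coordinates -> in-plane offsets -> 3D voxel positions.
    lateral, depth = r * np.sin(a), r * np.cos(a)
    pts = apex + lateral[..., None] * u_axis + depth[..., None] * v_axis

    # map_coordinates expects one coordinate array per volume axis (z, y, x);
    # order=1 gives trilinear interpolation, cval=0 pads outside the volume.
    coords = [pts[..., 2], pts[..., 1], pts[..., 0]]
    return map_coordinates(volume, coords, order=1, cval=0.0)
```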

    Adversarial Deformation Regularization for Training Image Registration Neural Networks

    We describe an adversarial learning approach to constrain convolutional neural network training for image registration, replacing heuristic smoothness measures of displacement fields often used in these tasks. Using minimally-invasive prostate cancer intervention as an example application, we demonstrate the feasibility of utilizing biomechanical simulations to regularize a weakly-supervised anatomical-label-driven registration network for aligning pre-procedural magnetic resonance (MR) and 3D intra-procedural transrectal ultrasound (TRUS) images. A discriminator network is optimized to distinguish the registration-predicted displacement fields from the motion data simulated by finite element analysis. During training, the registration network simultaneously aims to maximize the similarity between anatomical labels that drives image alignment and to minimize an adversarial generator loss that measures the divergence between the predicted and simulated deformations. The end-to-end trained network enables efficient and fully-automated registration that requires only an MR and TRUS image pair as input, without anatomical labels or simulated data during inference. 108 pairs of labelled MR and TRUS images from 76 prostate cancer patients and 71,500 nonlinear finite-element simulations from 143 different patients were used for this study. We show that, with only gland segmentation as training labels, the proposed method can help predict physically plausible deformation without any other smoothness penalty. Based on cross-validation experiments using 834 pairs of independent validation landmarks, the proposed adversarial-regularized registration achieved a target registration error of 6.3 mm, significantly lower than those from several other regularization methods.
    Comment: Accepted to MICCAI 201
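
    The training scheme can be sketched as a standard adversarial loop in which finite-element-simulated displacement fields play the role of "real" samples. The hedged PyTorch fragment below illustrates one such training step under assumed network, optimizer and warping interfaces; it is a sketch of the described idea, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def training_step(reg_net, disc, warp, opt_g, opt_d,
                  mr, trus, moving_label, fixed_label, simulated_ddf,
                  adv_weight=0.1):
    # --- Discriminator update: simulated DDFs are "real", predictions "fake".
    with torch.no_grad():
        pred_ddf = reg_net(mr, trus)
    real_logit, fake_logit = disc(simulated_ddf), disc(pred_ddf)
    d_loss = 0.5 * (
        F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit))
        + F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- Registration (generator) update: align labels and fool the critic.
    pred_ddf = reg_net(mr, trus)
    warped = warp(moving_label, pred_ddf)  # e.g. trilinear label resampling
    label_loss = F.binary_cross_entropy(warped.clamp(1e-6, 1 - 1e-6), fixed_label)
    fake_logit = disc(pred_ddf)
    adv_loss = F.binary_cross_entropy_with_logits(fake_logit, torch.ones_like(fake_logit))
    g_loss = label_loss + adv_weight * adv_loss
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```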

    Label-driven weakly-supervised learning for multimodal deformable image registration

    Spatially aligning medical images from different modalities remains a challenging task, especially for intraoperative applications that require fast and robust algorithms. We propose a weakly-supervised, label-driven formulation for learning 3D voxel correspondence from higher-level label correspondence, thereby bypassing classical intensity-based image similarity measures. During training, a convolutional neural network is optimised to output a dense displacement field (DDF) that warps a set of available anatomical labels from the moving image to match their corresponding counterparts in the fixed image. These label pairs, including solid organs, ducts, vessels, point landmarks and other ad hoc structures, are required only at training time and can be spatially aligned by minimising a cross-entropy function of the warped moving label and the fixed label. During inference, the trained network takes a new image pair to predict an optimal DDF, resulting in fully-automatic, label-free, real-time and deformable registration. For interventional applications where large global transformations prevail, we also propose a neural network architecture to jointly optimise the global and local displacements. Experimental results are presented based on cross-validating registrations of 111 pairs of T2-weighted magnetic resonance images and 3D transrectal ultrasound images from prostate cancer patients with a total of over 4000 anatomical labels, yielding a median target registration error of 4.2 mm on landmark centroids and a median Dice of 0.88 on prostate glands.
    Comment: Accepted to ISBI 201
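
    The central mechanism, warping the moving label with the predicted DDF and scoring it against the fixed label, can be sketched in PyTorch as below. The grid construction, tensor shapes and names are assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn.functional as F

def warp_with_ddf(moving_label, ddf):
    """moving_label: (B, 1, D, H, W); ddf: (B, 3, D, H, W) with displacements
    in grid_sample's normalised [-1, 1] coordinates, channel order (x, y, z)."""
    b, _, d, h, w = moving_label.shape
    dev = ddf.device
    zs = torch.linspace(-1, 1, d, device=dev)
    ys = torch.linspace(-1, 1, h, device=dev)
    xs = torch.linspace(-1, 1, w, device=dev)
    z, y, x = torch.meshgrid(zs, ys, xs, indexing="ij")
    identity = torch.stack((x, y, z), dim=-1).unsqueeze(0).expand(b, -1, -1, -1, -1)
    grid = identity + ddf.permute(0, 2, 3, 4, 1)  # displace the identity grid
    return F.grid_sample(moving_label, grid, mode="bilinear", align_corners=True)

def label_loss(moving_label, fixed_label, ddf):
    # Cross-entropy between the warped moving label and the fixed label.
    warped = warp_with_ddf(moving_label, ddf).clamp(1e-6, 1 - 1e-6)
    return F.binary_cross_entropy(warped, fixed_label)
```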

    Endoscopic Ultrasound Image Synthesis Using a Cycle-Consistent Adversarial Network

    Endoscopic ultrasound (EUS) is a challenging procedure that requires skill, both in endoscopy and in ultrasound image interpretation. Classification of key anatomical landmarks visible on EUS images can assist the gastroenterologist during navigation. Current applications of deep learning have shown the ability to automatically classify ultrasound images with high accuracy. However, these techniques require a large amount of labelled data, which is time-consuming to obtain and, in the case of EUS, is also difficult to produce retrospectively due to the lack of 3D context. In this paper, we propose the use of an image-to-image translation method to create synthetic EUS (sEUS) images from CT data that can be used as a data augmentation strategy when EUS data is scarce. We train a cycle-consistent adversarial network with unpaired EUS images and CT slices extracted so that they mimic plausible EUS views, to generate sEUS images of the pancreas, aorta and liver. We quantitatively evaluate the use of sEUS images in a classification sub-task and assess the Fréchet Inception Distance. We show that synthetic data, obtained from CT data, imposes only a minor classification accuracy penalty and may help generalization to new unseen patients. The code and a dataset containing generated sEUS images are available at: https://ebonmati.github.io
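
    For readers unfamiliar with cycle-consistent training, the sketch below shows the cycle-consistency term that makes unpaired CT-to-EUS translation possible: each image should survive a round trip through both generators. The generator names are placeholders, and the adversarial losses on the two discriminators are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def cycle_loss(G_ct2eus, G_eus2ct, ct_slice, eus_image, lam=10.0):
    # Translate CT -> synthetic EUS -> reconstructed CT, and vice versa.
    fake_eus = G_ct2eus(ct_slice)
    fake_ct = G_eus2ct(eus_image)
    rec_ct = G_eus2ct(fake_eus)
    rec_eus = G_ct2eus(fake_ct)
    # L1 cycle-consistency: penalise any change after a full round trip.
    return lam * (F.l1_loss(rec_ct, ct_slice) + F.l1_loss(rec_eus, eus_image))
```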

    Deep hashing for global registration of untracked 2D laparoscopic ultrasound to CT

    PURPOSE: The registration of Laparoscopic Ultrasound (LUS) to CT can enhance the safety of laparoscopic liver surgery by providing the surgeon with awareness of the relative positioning between critical vessels and a tumour. In an effort to provide a translatable solution for this poorly constrained problem, Content-based Image Retrieval (CBIR) based on vessel information has been suggested as a method for obtaining a global coarse registration without using tracking information. However, the performance of these frameworks is limited by the use of non-generalisable handcrafted vessel features. METHODS: We propose the use of a Deep Hashing (DH) network to directly convert vessel images from both LUS and CT into fixed-size hash codes. During training, these codes are learnt from a patient-specific CT scan by supplying the network with triplets of vessel images that include both a registered and a mis-registered pair. Once hash codes have been learnt, they can be used to perform registration with CBIR methods. RESULTS: We test a CBIR pipeline on 11 sequences of untracked LUS distributed across 5 clinical cases. Compared to a handcrafted feature approach, our model improves the registration success rate significantly from 48% to 61%, considering a 20 mm error as the threshold for a successful coarse registration. CONCLUSIONS: We present the first DH framework for interventional multi-modal registration tasks. The presented approach is easily generalisable to other registration problems, does not require annotated data for training, and may promote the translation of these techniques.
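
    The retrieval step reduces to comparing binary codes, which is what makes the global search fast. A minimal sketch, assuming PyTorch, of a triplet-trained hashing loss and Hamming-distance retrieval is given below; the names and the tanh relaxation are illustrative, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def triplet_hash_loss(net, anchor, positive, negative, margin=1.0):
    # tanh pushes outputs towards {-1, +1}, i.e. relaxed binary hash codes;
    # the triplet loss pulls registered pairs together and pushes
    # mis-registered pairs apart in code space.
    ha = torch.tanh(net(anchor))
    hp = torch.tanh(net(positive))
    hn = torch.tanh(net(negative))
    return F.triplet_margin_loss(ha, hp, hn, margin=margin)

def retrieve(query_code, db_codes):
    """Rank database codes by Hamming distance to the query (codes in {-1, +1})."""
    q = torch.sign(query_code)          # (n,)
    db = torch.sign(db_codes)           # (N, n)
    hamming = (q.numel() - db @ q) / 2  # inner product -> Hamming distance
    return torch.argsort(hamming)
```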

    Technical note: automatic segmentation method of pelvic floor levator hiatus in ultrasound using a self-normalising neural network

    Segmentation of the levator hiatus in ultrasound allows the extraction of biometrics that are important for pelvic floor disorder assessment. In this work, we present a fully automatic method using a convolutional neural network (CNN) to outline the levator hiatus in a 2D image extracted from a 3D ultrasound volume. In particular, our method uses a recently developed scaled exponential linear unit (SELU) as a nonlinear self-normalising activation function. SELU has important advantages such as being parameter-free and mini-batch independent. A dataset of 91 images from 35 patients, all labelled by three operators, is used for training and evaluation in a leave-one-patient-out cross-validation. Results show a median Dice similarity coefficient of 0.90 with an interquartile range of 0.08, with performance equivalent to that of the three operators (with a Williams' index of 1.03), and outperforming a U-Net architecture without the need for batch normalisation. We conclude that the proposed fully automatic method achieved accuracy equivalent to a previous semi-automatic approach in segmenting the pelvic floor levator hiatus.
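
    As a short illustration of the activation the paper highlights, the PyTorch block below uses SELU, which is parameter-free and mini-batch independent, so no batch normalisation layers are needed. The toy architecture is a stand-in, not the paper's network.

```python
import torch.nn as nn

# SELU is self-normalising: activations tend towards zero mean and unit
# variance (assuming LeCun-normal weight initialisation), which removes the
# need for batch normalisation and adds no learnable parameters.
selu_block = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    nn.SELU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
    nn.SELU(),
    nn.Conv2d(32, 1, kernel_size=1),  # per-pixel logit for the hiatus mask
)
```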

    Effect of Single and Combined Monochromatic Light on the Human Pupillary Light Response

    The pupillary light reflex (PLR) is a neurological reflex driven by rods, cones, and melanopsin-containing retinal ganglion cells. Our aim was to achieve a more precise picture of the effects of 5-min duration monochromatic light stimuli, alone or in combination, on the human PLR, to determine its spectral sensitivity and to assess the importance of photon flux. Using pupillometry, the PLR was assessed in 13 participants (6 women) aged 27.2 ± 5.41 years (mean ± SD) during 5-min light stimuli of purple (437 nm), blue (479 nm), red (627 nm), and combinations of red+purple or red+blue light. In addition, nine 5-min, photon-matched light stimuli, with peaks ranging in 10 nm increments between 420 and 500 nm, were tested in 15 participants (8 women) aged 25.7 ± 8.90 years. Maximum pupil constriction, the time to achieve it, constriction velocity, area under the curve (AUC) at short (0–60 s) and longer (240–300 s) light exposures, and the 6-s post-illumination pupillary response (6-s PIPR) were assessed. Photoreceptor activation was estimated by mathematical modeling. The velocity of constriction was significantly faster with blue monochromatic light than with red or purple light. Within the blue light spectrum (between 420 and 500 nm), the velocity of constriction was significantly faster with the 480 nm light stimulus, while the slowest pupil constriction was observed with 430 nm light. Maximum pupil constriction was achieved with 470 nm light, and the greatest AUC0–60 and AUC240–300 were observed with 490 and 460 nm light, respectively. The 6-s PIPR was maximal after the 490 nm light stimulus. Both the transient (AUC0–60) and sustained (AUC240–300) responses were significantly correlated with melanopic activation. Higher photon fluxes for both purple and blue light produced sustained pupillary constriction of greater amplitude. The findings confirm the dependence of the human PLR on wavelength, on monochromatic versus bichromatic light, and on photon flux under 5-min duration light stimuli. Since the most rapid and highest-amplitude PLR occurred within the 460–490 nm light range (alone or combined), our results suggest that color discrimination should be studied under total or partial substitution of this blue light range (460–490 nm) by shorter wavelengths (~440 nm). Thus, for nocturnal lighting, replacement of blue light with purple light might be a plausible solution to preserve color discrimination while minimizing melanopic activation.
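
    Two of the reported pupillometry metrics are simple to compute from a pupil trace. The NumPy sketch below shows one plausible implementation of a windowed AUC and the 6-s PIPR; the sampling conventions, baseline definition and names are assumptions, not the study's analysis code.

```python
import numpy as np

def windowed_auc(time_s, constriction, t_start, t_end):
    """Trapezoidal AUC of relative pupil constriction over [t_start, t_end] s,
    e.g. (0, 60) for the transient and (240, 300) for the sustained response."""
    mask = (time_s >= t_start) & (time_s <= t_end)
    return np.trapz(constriction[mask], time_s[mask])

def pipr_6s(time_s, pupil_size, baseline, light_off_s, window_s=6.0):
    """Mean pupil size relative to baseline in the 6 s after light offset."""
    mask = (time_s >= light_off_s) & (time_s <= light_off_s + window_s)
    return np.mean(pupil_size[mask]) / baseline
```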

    Electromagnetic tracking in imageā€guided laparoscopic surgery: Comparison with optical tracking and feasibility study of a combined laparoscope and laparoscopic ultrasound system

    PURPOSE: In image-guided laparoscopy, optical tracking is commonly employed, but electromagnetic (EM) systems have been proposed in the literature. In this paper, we provide a thorough comparison of EM and optical tracking systems for use in image-guided laparoscopic surgery and a feasibility study of a combined, EM-tracked laparoscope and laparoscopic ultrasound (LUS) image guidance system. METHODS: We first assess the tracking accuracy of a laparoscope with two optical trackers tracking retroreflective markers mounted on the shaft and an EM tracker with the sensor embedded at the proximal end, using a standard evaluation plate. We then use a stylus to test the precision of position measurement and the accuracy of distance measurement of the trackers. Finally, we assess the accuracy of an image guidance system comprising an EM-tracked laparoscope and an EM-tracked LUS probe. RESULTS: In the experiment using a standard evaluation plate, the two optical trackers show less jitter in position and orientation measurement than the EM tracker. The optical trackers also demonstrate better consistency of orientation measurement within the test volume. However, their accuracy in measuring relative positions decreases significantly with longer distances, whereas the EM tracker's performance is stable: at 50 mm distance, the RMS errors for the two optical trackers are 0.210 and 0.233 mm, respectively, and 0.214 mm for the EM tracker; at 250 mm distance, the RMS errors for the two optical trackers become 1.031 and 1.178 mm, respectively, while it is 0.367 mm for the EM tracker. In the experiment using the stylus, the two optical trackers have RMS errors of 1.278 and 1.555 mm in localizing the stylus tip, and the EM tracker 1.117 mm. Our prototype of a combined, EM-tracked laparoscope and LUS system using representative calibration methods showed an RMS point localization error of 3.0 mm for the laparoscope and 1.3 mm for the LUS probe, the larger error of the former being predominantly due to triangulation error when using a narrow-baseline stereo laparoscope. CONCLUSIONS: The errors incurred by optical trackers, due to the lever-arm effect and variation in tracking accuracy in the depth direction, would make EM-tracked solutions preferable if the EM sensor is placed at the proximal end of the laparoscope.
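
    All of the accuracy figures above are RMS errors. The small NumPy sketch below shows the metric as it might be computed from tracker measurements against ground-truth distances on an evaluation plate; the array names and readings are illustrative, not the study's data.

```python
import numpy as np

def rms_error_mm(measured_mm, true_mm):
    """Root-mean-square error between measured and ground-truth distances."""
    residuals = np.asarray(measured_mm, dtype=float) - np.asarray(true_mm, dtype=float)
    return float(np.sqrt(np.mean(residuals ** 2)))

# Hypothetical readings for a nominal 50 mm spacing on the evaluation plate.
print(rms_error_mm([50.2, 49.8, 50.3, 49.7], [50.0] * 4))
```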