
    Spatial calibration of a 2D/3D ultrasound using a tracked needle

    PURPOSE: Spatial calibration between a 2D/3D ultrasound probe and a pose tracking system requires a complex and time-consuming procedure. Simplifying this procedure without compromising calibration accuracy remains a challenging problem. METHOD: We propose a new calibration method for both 2D and 3D ultrasound probes that involves scanning an arbitrary region of a tracked needle in different poses. This approach is easier to perform than most alternative methods, which require a precise alignment between US scans and a calibration phantom. RESULTS: Our calibration method provides an average accuracy of 2.49 mm for a 2D US probe with 107 mm scanning depth, and an average accuracy of 2.39 mm for a 3D US probe with 107 mm scanning depth. CONCLUSION: Our method provides a unified calibration framework for 2D and 3D probes using the same phantom object, workflow, and algorithm. It significantly improves the accuracy of needle-based methods for 2D US probes and extends their use to 3D US probes.
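    At the core of any such spatial calibration is the recovery of a rigid transform between corresponding point sets expressed in two coordinate frames. The sketch below is purely illustrative and is not the paper's actual algorithm: the hypothetical `kabsch` helper solves the least-squares rigid alignment via the standard SVD (Kabsch) solution, assuming point correspondences between the image and tracker frames are already known.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q.
    P, Q: (N, 3) arrays of corresponding 3D points."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: recover a known image-to-tracker transform from
# needle points "seen" in both coordinate frames.
rng = np.random.default_rng(0)
P = rng.uniform(-50, 50, size=(30, 3))           # needle points, image frame (mm)
a = 0.4
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([10.0, -5.0, 20.0])
Q = P @ R_true.T + t_true                        # same points, tracker frame
R_est, t_est = kabsch(P, Q)
```

    In practice, needle-based methods must also handle detection noise and the fact that points along the needle axis are not individually identifiable, which is part of what the paper's unified framework addresses.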

    Deep learning-based plane pose regression in obstetric ultrasound

    PURPOSE: In obstetric ultrasound (US) scanning, the learner's ability to mentally build a three-dimensional (3D) map of the fetus from a two-dimensional (2D) US image represents a major challenge in skill acquisition. We aim to build a US plane localisation system for 3D visualisation, training, and guidance without integrating additional sensors. METHODS: We propose a regression convolutional neural network (CNN) using image features to estimate the six-dimensional pose of arbitrarily oriented US planes relative to the fetal brain centre. The network was trained on synthetic images acquired from phantom 3D US volumes and fine-tuned on real scans. Training data was generated by slicing US volumes into imaging planes in Unity at random coordinates and more densely around the standard transventricular (TV) plane. RESULTS: With phantom data, the median errors are 0.90 mm/1.17° and 0.44 mm/1.21° for random planes and planes close to the TV one, respectively. With real data, using a different fetus with the same gestational age (GA), these errors are 11.84 mm/25.17°. The average inference time is 2.97 ms per plane. CONCLUSION: The proposed network reliably localises US planes within the fetal brain in phantom data and successfully generalises pose regression for an unseen fetal brain from a similar GA as in training. Future development will expand the prediction to volumes of the whole fetus and assess its potential for vision-based, freehand US-assisted navigation when acquiring standard fetal planes.
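    The translation/rotation error pairs reported above can be computed with standard pose metrics: Euclidean distance for translation and the geodesic angle between rotation matrices for rotation. A minimal sketch (the `pose_errors` helper is hypothetical, not from the paper):

```python
import numpy as np

def pose_errors(t_pred, R_pred, t_true, R_true):
    """Translation error (same units as input) and geodesic rotation
    error (degrees) between a predicted and a ground-truth plane pose."""
    trans_err = np.linalg.norm(t_pred - t_true)
    # Geodesic distance on SO(3): angle of the relative rotation.
    cos_theta = np.clip((np.trace(R_true.T @ R_pred) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = np.degrees(np.arccos(cos_theta))
    return trans_err, rot_err

# Example: prediction offset by 1 mm and rotated 5 degrees about z.
a = np.radians(5.0)
Rz5 = np.array([[np.cos(a), -np.sin(a), 0.0],
                [np.sin(a),  np.cos(a), 0.0],
                [0.0,        0.0,       1.0]])
t_err, r_err = pose_errors(np.array([1.0, 0.0, 0.0]), Rz5,
                           np.zeros(3), np.eye(3))
```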

    A Log-Euclidean and Total Variation based Variational Framework for Computational Sonography

    We propose a spatial compounding technique and variational framework to improve 3D ultrasound image quality by compositing multiple ultrasound volumes acquired from different probe orientations. In the composite volume, instead of intensity values, we estimate a tensor at every voxel. The resultant tensor image encapsulates the directional information of the underlying imaging data and can be used to generate ultrasound volumes from arbitrary, potentially unseen, probe positions. Extending the work of Hennersperger et al., we introduce a log-Euclidean framework to ensure that the tensors are positive-definite, eventually ensuring non-negative images. Additionally, we regularise the underpinning ill-posed variational problem while preserving edge information by relying on a total variation penalisation of the tensor field in the log domain. We present results on in vivo human data to show the efficacy of the approach. Comment: SPIE Medical Imaging 2018
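    The log-Euclidean idea amounts to mapping symmetric positive-definite (SPD) tensors through the matrix logarithm, operating linearly in that log domain, and mapping back with the matrix exponential, so every result is SPD by construction. A minimal sketch of log-Euclidean tensor averaging, with hypothetical helper names not taken from the paper:

```python
import numpy as np

def logm_spd(A):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.log(w)) @ V.T

def expm_sym(A):
    """Matrix exponential of a symmetric matrix (result is always SPD)."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(w)) @ V.T

def log_euclidean_mean(tensors):
    """Average SPD tensors in the log domain; positive-definiteness of
    the result is what guarantees non-negative reconstructed images."""
    logs = [logm_spd(T) for T in tensors]
    return expm_sym(np.mean(logs, axis=0))

# Two per-voxel tensors from different probe orientations.
T1 = np.diag([4.0, 1.0, 1.0])
T2 = np.diag([1.0, 4.0, 1.0])
T_mean = log_euclidean_mean([T1, T2])
```

    Note that this log-domain mean is the geometric mean of the eigenvalues along shared axes, not the arithmetic mean, which is the behaviour the log-Euclidean framework trades for guaranteed positive-definiteness.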

    Learning ultrasound plane pose regression: assessing generalized pose coordinates in the fetal brain

    In obstetric ultrasound (US) scanning, the learner's ability to mentally build a three-dimensional (3D) map of the fetus from a two-dimensional (2D) US image represents a significant challenge in skill acquisition. We aim to build a US plane localization system for 3D visualization, training, and guidance without integrating additional sensors. This work builds on our previous work, which predicts the six-dimensional (6D) pose of arbitrarily oriented US planes slicing the fetal brain with respect to a normalized reference frame using a convolutional neural network (CNN) for regression. Here, we analyze in detail the assumptions of the normalized fetal brain reference frame and quantify its accuracy with respect to the acquisition of the transventricular (TV) standard plane (SP) for fetal biometry. We investigate the impact of registration quality in the training and testing data and its subsequent effect on trained models. Finally, we introduce data augmentations and larger training sets that improve the results of our previous work, achieving median errors of 3.53 mm and 6.42 degrees for translation and rotation, respectively. Comment: 12 pages, 9 figures, 2 tables. This work has been submitted to the IEEE for possible publication (IEEE TMRB). Copyright may be transferred without notice, after which this version may no longer be accessible.
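    Pose augmentation of the kind described can be sketched as sampling translation offsets and axis-angle rotations around a reference plane pose. The function below is a hypothetical illustration with made-up parameter values, not the authors' pipeline:

```python
import numpy as np

def random_plane_poses(n, t_ref, t_sigma=5.0, max_angle_deg=20.0, seed=0):
    """Sample n plane poses around a reference translation t_ref (mm):
    Gaussian translation offsets plus a rotation by a random axis-angle."""
    rng = np.random.default_rng(seed)
    poses = []
    for _ in range(n):
        t = t_ref + rng.normal(0.0, t_sigma, size=3)
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)
        angle = np.radians(rng.uniform(0.0, max_angle_deg))
        K = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])
        # Rodrigues' formula: rotation matrix from axis-angle.
        R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
        poses.append((t, R))
    return poses

poses = random_plane_poses(100, t_ref=np.zeros(3))
```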

    Cosmological implications of massive gravitons

    The van Dam-Veltman-Zakharov (vDVZ) discontinuity requires that the mass $m$ of the graviton is exactly zero; otherwise, measurements of the deflection of starlight by the Sun and the precession of Mercury's perihelion would conflict with their theoretical values. This theoretical discontinuity is open to question for numerous reasons. In this paper we show from a phenomenological viewpoint that the $m>0$ hypothesis is in accord with Supernova Ia and CMB observations, and that the large scale structure of the universe suggests that $m \sim 10^{-30}$ eV/$c^2$. Comment: 13 pages, 1 figure

    Peri- and postnatal effects of prenatal adenoviral VEGF gene therapy in growth-restricted sheep

    Supported by Wellcome Trust project grant 088208 to A.L.D., J.M.W., D.M.P., I.C.Z., and J.F.M.; Wellbeing of Women research training fellowship 318 to D.J.C.; Scottish Government work package 4.2 to J.M.W., J.S.M., and R.P.A.; as well as funding from the National Institute for Health Research University College London Hospitals Biomedical Research Centre to A.L.D. and D.M.P., the British Heart Foundation to I.C.Z., and Ark Therapeutics Oy, Kuopio, Finland, which supplied adenovirus vectors free of charge. Peer reviewed. Publisher PDF

    Towards computer-assisted TTTS: Laser ablation detection for workflow segmentation from fetoscopic video

    PURPOSE: Intrauterine foetal surgery is the treatment option for several congenital malformations. For twin-to-twin transfusion syndrome (TTTS), interventions involve the use of a laser fibre to ablate vessels in a shared placenta. The procedure presents a number of challenges for the surgeon, and computer-assisted technologies can potentially be a significant support. Vision-based sensing is the primary source of information from the intrauterine environment, and hence vision-based methods are appealing for extracting higher-level information from the surgical site. METHODS: In this paper, we propose a framework to detect one of the key steps during TTTS interventions: laser ablation. We adopt a deep learning approach, specifically the ResNet101 architecture, for classification of different surgical actions performed during laser ablation therapy. RESULTS: We perform a two-fold cross-validation using almost 50,000 frames from five different TTTS ablation procedures. Our results show that deep learning methods are a promising approach for ablation detection. CONCLUSION: To our knowledge, this is the first attempt at automating photocoagulation detection using video, and our technique can be an important component of a larger assistive framework for enhanced foetal therapies. The current implementation does not include semantic segmentation or localisation of the ablation site, and this would be a natural extension in future work.
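    Turning per-frame classifier outputs into workflow segments typically involves temporal smoothing followed by thresholding. A minimal sketch of that post-processing step, with hypothetical parameter values (window size, threshold) not taken from the paper:

```python
import numpy as np

def ablation_segments(probs, window=5, threshold=0.5):
    """Smooth per-frame ablation probabilities with a moving average,
    threshold them, and return (start, end) frame-index pairs."""
    kernel = np.ones(window) / window
    smooth = np.convolve(probs, kernel, mode="same")
    active = smooth >= threshold
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                    # segment opens
        elif not a and start is not None:
            segments.append((start, i))  # segment closes (end-exclusive)
            start = None
    if start is not None:
        segments.append((start, len(active)))
    return segments

# Toy sequence: 10 idle frames, 20 ablation frames, 10 idle frames.
probs = np.array([0.1] * 10 + [0.9] * 20 + [0.1] * 10)
segs = ablation_segments(probs)
```

    The moving average suppresses isolated misclassifications, at the cost of slightly blurring segment boundaries; more elaborate temporal models (e.g. HMMs or recurrent networks) are the usual next step.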

    Three-Point Correlation Functions of SDSS Galaxies: Luminosity and Color Dependence in Redshift and Projected Space

    The three-point correlation function (3PCF) provides an important view into the clustering of galaxies that is not available to its lower order cousin, the two-point correlation function (2PCF). Higher order statistics, such as the 3PCF, are necessary to probe the non-Gaussian structure and shape information expected in these distributions. We measure the clustering of spectroscopic galaxies in the Main Galaxy Sample of the Sloan Digital Sky Survey (SDSS), focusing on the shape or configuration dependence of the reduced 3PCF in both redshift and projected space. This work constitutes the largest number of galaxies ever used to investigate the reduced 3PCF, using over 220,000 galaxies in three volume-limited samples. We find significant configuration dependence of the reduced 3PCF at 3-27 Mpc/h, in agreement with LCDM predictions and in disagreement with the hierarchical ansatz. Below 6 Mpc/h, the redshift space reduced 3PCF shows a smaller amplitude and weak configuration dependence in comparison with projected measurements, suggesting that redshift distortions, and not galaxy bias, can make the reduced 3PCF appear consistent with the hierarchical ansatz. The reduced 3PCF shows a weaker dependence on luminosity than the 2PCF, with no significant dependence on scales above 9 Mpc/h. On scales less than 9 Mpc/h, the reduced 3PCF appears more affected by galaxy color than luminosity. We demonstrate the extreme sensitivity of the 3PCF to systematic effects such as sky completeness and binning scheme, along with the difficulty of resolving the errors. Some comparable analyses make assumptions that do not consistently account for these effects. Comment: 27 pages, 21 figures. Updated to match accepted version. Published in ApJ
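    The reduced 3PCF referred to here is conventionally defined as Q = zeta / (xi_12 xi_23 + xi_23 xi_31 + xi_31 xi_12), so the hierarchical ansatz corresponds to Q being constant. A one-function sketch of that definition (not the paper's estimator, which works from triplet counts in a survey):

```python
def reduced_3pcf(zeta, xi12, xi23, xi31):
    """Reduced three-point correlation function
    Q = zeta / (xi12*xi23 + xi23*xi31 + xi31*xi12).
    The hierarchical ansatz corresponds to Q constant across
    triangle configurations."""
    return zeta / (xi12 * xi23 + xi23 * xi31 + xi31 * xi12)

# If zeta exactly equals the hierarchical combination, Q = 1.
q = reduced_3pcf(zeta=0.11, xi12=0.2, xi23=0.3, xi31=0.1)
```

    Dividing out the 2PCF products is what makes Q sensitive to triangle shape rather than overall clustering amplitude, which is why it isolates the configuration dependence discussed above.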

    The Clustering of Luminous Red Galaxies in the Sloan Digital Sky Survey Imaging Data

    We present the 3D real space clustering power spectrum of a sample of ~600,000 luminous red galaxies (LRGs) measured by the Sloan Digital Sky Survey (SDSS), using photometric redshifts. This sample of galaxies ranges from redshift z=0.2 to 0.6 over 3,528 deg^2 of the sky, probing a volume of 1.5 (Gpc/h)^3, making it the largest volume ever used for galaxy clustering measurements. We measure the angular clustering power spectrum in eight redshift slices and combine these into a high precision 3D real space power spectrum from k=0.005 (h/Mpc) to k=1 (h/Mpc). We detect power on gigaparsec scales, beyond the turnover in the matter power spectrum, on scales significantly larger than those accessible to current spectroscopic redshift surveys. We also find evidence for baryonic oscillations, both in the power spectrum and in fits to the baryon density, at a 2.5 sigma confidence level. The statistical power of these data to constrain cosmology is ~1.7 times better than previous clustering analyses. Varying the matter density and baryon fraction, we find \Omega_M = 0.30 \pm 0.03 and \Omega_b/\Omega_M = 0.18 \pm 0.04. The detection of baryonic oscillations also allows us to measure the comoving distance to z=0.5; we find a best fit distance of 1.73 \pm 0.12 Gpc, corresponding to a 6.5% error on the distance. These results demonstrate the ability to make precise clustering measurements with photometric surveys (abridged). Comment: 23 pages, 27 figures, submitted to MNRAS
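    The comoving distance quoted above follows from integrating c/H(z) over redshift in a flat LCDM model. The sketch below assumes h = 0.7 purely for illustration; the paper's 1.73 Gpc figure comes from a fit to the baryonic oscillation scale, not from this direct integral:

```python
import math

def comoving_distance_gpc(z_max, omega_m=0.30, h=0.7, n_steps=10000):
    """Comoving distance to redshift z_max in Gpc for a flat LCDM
    cosmology, via trapezoidal integration of c / H(z)."""
    c = 299792.458                     # speed of light, km/s
    H0 = 100.0 * h                     # Hubble constant, km/s/Mpc
    dz = z_max / n_steps
    integral = 0.0
    for i in range(n_steps + 1):
        z = i * dz
        E = math.sqrt(omega_m * (1.0 + z) ** 3 + (1.0 - omega_m))
        w = 0.5 if i in (0, n_steps) else 1.0   # trapezoid endpoint weights
        integral += w * dz / E
    return (c / H0) * integral / 1000.0         # Mpc -> Gpc

d = comoving_distance_gpc(0.5)                  # roughly 1.9 Gpc for h = 0.7
```

    Because the integral is computed in Mpc/h and then divided by h, the result in Gpc depends directly on the assumed Hubble parameter, which is one reason distance constraints are often quoted in h-free units.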