773 research outputs found

    Reply to Lee and colleagues—Viral posterior uveitis

    No abstract available

    Factorised spatial representation learning: application in semi-supervised myocardial segmentation

    The success and generalisation of deep learning algorithms heavily depend on learning good feature representations. In medical imaging this entails representing anatomical information as well as properties related to the specific imaging setting. Anatomical information is required to perform further analysis, whereas imaging information is key to disentangling scanner variability and potential artefacts. The ability to factorise these would allow algorithms to be trained only on the information relevant to the task. To date, such factorisation has not been attempted. In this paper, we propose a methodology for latent space factorisation relying on the cycle-consistency principle. As an example application, we consider cardiac MR segmentation, where we separate information related to the myocardium from other features related to imaging and surrounding substructures. We demonstrate the proposed method's utility in a semi-supervised setting: we use very few labelled images together with many unlabelled images to train a myocardium segmentation neural network. Specifically, in experiments on ACDC and a dataset from the Edinburgh Imaging Facility QMRI, we achieve performance comparable to fully supervised networks using only a fraction of the labelled images. Code will be made available at https://github.com/agis85/spatial_factorisation. Comment: Accepted in MICCAI 201
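    The semi-supervised objective described above (a supervised Dice term on the few labelled images plus an unsupervised reconstruction term on all images) can be sketched in miniature. The function names, the loss weighting and the pure-Python list-based masks below are illustrative assumptions, not the authors' implementation:

```python
def dice(pred, target, eps=1e-7):
    """Soft Dice overlap between flattened binary masks (lists of 0/1)."""
    inter = sum(p * t for p, t in zip(pred, target))
    return (2 * inter + eps) / (sum(pred) + sum(target) + eps)

def semi_supervised_loss(labelled_pairs, recons, images, w_recon=0.1):
    """Dice loss on the few labelled (prediction, mask) pairs, plus a
    reconstruction (MSE) term computed on every image, labelled or not."""
    sup = sum(1 - dice(p, t) for p, t in labelled_pairs) / max(len(labelled_pairs), 1)
    n = sum(len(r) for r in recons)
    unsup = sum((rv - iv) ** 2
                for r, i in zip(recons, images)
                for rv, iv in zip(r, i)) / n
    return sup + w_recon * unsup
```

    With a perfect prediction and a perfect reconstruction both terms vanish, which is a quick sanity check that the two components are wired up correctly.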

    Tobacco, e-cigarette and alcohol content in popular UK soap operas: a content analysis to explore changes in social norms and scene location over time

    Background: Exposure to on-screen tobacco and alcohol promotes use, and despite regulations and policies designed to limit impact, these behaviours remain common. We report a longitudinal analysis of tobacco, e-cigarette and alcohol content in three popular UK television soap operas to examine changing social norms between 2002 and 2022. Methods: We used one-minute interval coding to measure content in programmes broadcast in two one-week periods in each of three years (2002, 2012 and 2022). Change over time in the probability of actual and implied tobacco, e-cigarette and alcohol use was examined using logistic regression. Results: We coded 2505 intervals from 78 episodes. Tobacco content occurred in 22% of episodes and decreased significantly from 2002 to 2022 (OR 0.15, 95% CI 0.06-0.40). Tobacco use shifted over time, with decreasing use indoors and increasing use outdoors. No e-cigarette use was identified. Alcohol content was found in 88% of episodes and, while it also decreased significantly over time (OR 0.78, 95% CI 0.61-0.99), it still featured in 20% of broadcast minutes in 2022. Alcohol use in homes increased over time. Conclusion: While tobacco imagery is increasingly rare on television, alcohol content remains common, and current regulations are not sufficient to reduce exposure. Soap opera producers should consider the impact of on-screen tobacco and alcohol use and the opportunity to change social norms and help protect future generations.
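    The effect sizes reported above are odds ratios with 95% confidence intervals from logistic regression; for a simple two-period comparison the same quantities can be computed directly from a 2x2 table of coded intervals. The counts in the usage line are invented for illustration and are not the study's data:

```python
from math import log, exp, sqrt

def odds_ratio(exposed_yes, exposed_no, control_yes, control_no):
    """Odds ratio and 95% Wald confidence interval from a 2x2 table:
    (content present / absent) in the later vs. earlier period."""
    or_ = (exposed_yes / exposed_no) / (control_yes / control_no)
    se = sqrt(1 / exposed_yes + 1 / exposed_no + 1 / control_yes + 1 / control_no)
    lo = exp(log(or_) - 1.96 * se)
    hi = exp(log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: 5 of 100 intervals with tobacco in 2022
# vs. 25 of 100 in 2002 gives an odds ratio well below 1.
or_, ci = odds_ratio(5, 95, 25, 75)
```

    An OR below 1 with a confidence interval excluding 1 corresponds to the significant decrease the abstract reports.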

    Can a single image processing algorithm work equally well across all phases of DCE-MRI?

    Image segmentation and registration are particularly challenging when applied to dynamic contrast-enhanced MRI (DCE-MRI) sequences. The contrast agent causes rapid intensity changes in the region of interest and elsewhere, which can lead to false-positive predictions in segmentation tasks and confound the similarity metric used for image registration. While it is widely assumed that contrast changes increase the difficulty of these tasks, to our knowledge no work has quantified these effects. In this paper we examine the effect of training with different ratios of contrast-enhanced (CE) data on two popular tasks: segmentation with nnU-Net and Mask R-CNN, and registration with VoxelMorph and VTN. We experimented further by strategically using the available datasets through pretraining and fine-tuning with different splits of the data. We found that, to create a generalisable model, pretraining with CE data and fine-tuning with non-CE data gave the best results. This finding could be extended to other deep-learning-based image processing tasks on DCE-MRI and provide significant improvements in model performance.
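    The best-performing schedule above (pretrain on CE data, then fine-tune on non-CE data) amounts to partitioning the dataset by contrast phase before training. A minimal sketch, assuming each frame is tagged with a CE flag; the data layout and function name are hypothetical:

```python
def split_by_contrast(dataset):
    """Partition DCE-MRI frames into contrast-enhanced (CE) and non-CE
    pools. Each item is a (frame_id, is_contrast_enhanced) pair.

    Returns the pretrain/fine-tune schedule the abstract found best:
    pretrain on CE frames, fine-tune on non-CE frames."""
    ce = [frame for frame, is_ce in dataset if is_ce]
    non_ce = [frame for frame, is_ce in dataset if not is_ce]
    return {"pretrain": ce, "finetune": non_ce}
```

    The two pools would then be fed to any of the four models studied (nnU-Net, Mask R-CNN, VoxelMorph, VTN) in two successive training stages.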

    Comparing articulatory images: An MRI / Ultrasound Tongue Image database

    This work was supported by an EPSRC grant (EP/I027696/1). Thanks to our ULTRAX project colleagues Steve Renals and Korin Richmond. Scott Semple is supported by the British Heart Foundation Centre of Research Excellence Award. Thanks to Steve Cowen for technical assistance and Annette Cooper for MRI data acquisition.
    We report the development of a database that will contain paired ultrasound and MRI of tongue movements and shapes from 12 adults, illustrated with pilot data from one speaker. The primary purpose of the database will be to evaluate the informational content of ultrasound tongue images on the basis of the richer articulatory structures visible with MRI, and to provide tongue shape information that can later be incorporated into an image processing algorithm to enhance ultrasound tongue images. Ultrasound is an increasingly popular technique for studying speech production since it provides a real-time image of tongue movements. Its potential as a visual-feedback speech therapy tool has been recognised but has not yet been exploited to any great extent. In part this is because obstruents such as /t/, /k/ and /ch/, which are important targets for therapy, have tongue shapes in both canonical and common error productions that ultrasound displays rather poorly compared to the more easily imaged vowels, glides and liquids. By enhancing ultrasound images in real time with information based on our corpus, we aim to create images that we hypothesise will (a) be more easily understood by children receiving clinical feedback and (b) extend the range and utility of ultrasound generally.