14 research outputs found

    Does the Mutation Type Affect the Response to Cranial Vault Expansion in Children With Apert Syndrome?

    Most cases of Apert syndrome are caused by mutations in the FGFR2 gene, either Ser252Trp or Pro253Arg. In these patients, spring-assisted posterior vault expansion (SA-PVE) has over the last decades been the technique of choice for cranial vault expansion in the Craniofacial Unit of Great Ormond Street Hospital for Children (GOSH), London. The aim of this study was to investigate whether preoperative intracranial volume (ICV) differs between patients with Apert syndrome carrying the Ser252Trp or the Pro253Arg mutation, and whether these mutations affect the change in ICV achieved by SA-PVE. The GOSH craniofacial SA-PVE database was used to select patients with complete genetic testing and preoperative and postoperative computed tomography scans. ICV was calculated using FSL (FMRIB Analysis Group, Oxford) and adjusted based on Apert-specific growth curves. Sixteen patients were included: 8 with the Ser252Trp mutation and 8 with the Pro253Arg mutation. The mean preoperative adjusted ICV was 1137.7 cm3 in the Ser252Trp group and 1115.8 cm3 in the Pro253Arg group (P=1.00). There was a significant increase in ICV following SA-PVE in all patients (P<0.001), with no difference in mean change in ICV between the groups (P=0.51). Four (50%) patients with the Ser252Trp mutation and 3 (37.5%) with the Pro253Arg mutation required a second operation after primary SA-PVE. The results demonstrate that, regardless of the mutation present, SA-PVE was successful in increasing ICV in patients with Apert syndrome, and that a repeat volume-expanding procedure was required by a similar number of patients in the 2 groups.
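    Because each patient contributes both a preoperative and a postoperative ICV, the natural analysis behind a result like "a significant increase in ICV following SA-PVE in all patients (P<0.001)" is a paired test on the per-patient differences. A minimal sketch follows; the volumes are illustrative placeholders, not the study's data (the study used FSL-derived, growth-curve-adjusted ICVs), and the sample size here is chosen only to mirror the two groups of 8.

```python
import math
import statistics

# Hypothetical adjusted ICVs (cm^3) for the same patients before and after
# spring-assisted posterior vault expansion (SA-PVE). Illustrative values,
# NOT the study's measurements.
pre_icv  = [1100.0, 1150.0, 1120.0, 1080.0, 1135.0, 1160.0, 1095.0, 1140.0]
post_icv = [1210.0, 1265.0, 1230.0, 1195.0, 1250.0, 1270.0, 1205.0, 1255.0]

# Each patient acts as their own control, so test the paired differences.
diffs = [b - a for a, b in zip(pre_icv, post_icv)]
mean_change = statistics.mean(diffs)
se = statistics.stdev(diffs) / math.sqrt(len(diffs))
t_stat = mean_change / se  # compare against a t distribution, df = n - 1
```

    The p-value would then come from the t distribution with n-1 degrees of freedom (e.g. `scipy.stats.ttest_rel` does both steps in one call).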

    Thresholds for identifying pathological intracranial pressure in paediatric traumatic brain injury.

    Intracranial pressure (ICP) monitoring forms an integral part of the management of severe traumatic brain injury (TBI) in children. The prediction of elevated ICP from imaging is important when deciding whether to implement invasive ICP monitoring for a patient. However, the radiological markers of pathologically elevated ICP have not been specifically validated in paediatric studies. Herein, we describe an objective, non-invasive, quantitative method of stratifying which patients are likely to require invasive monitoring. A retrospective review was performed of patients admitted to Cambridge University Hospital's Paediatric Intensive Care Unit between January 2009 and December 2016 with a TBI requiring invasive neurosurgical monitoring. Radiological biomarkers of TBI (basal cistern volume, ventricular volume, volume of extra-axial haematomas) were measured from CT scans and correlated with epochs of continuous high-frequency pressure-monitoring variables around the time of imaging. 38 patients were identified. Basal cistern volume correlated significantly with opening ICP (r = -0.53, p < 0.001). The optimal threshold of basal cistern volume for predicting high ICP (≥20 mmHg) was a relative volume of 0.0055 (sensitivity 79%, specificity 80%). Ventricular volume and extra-axial haematoma volume did not correlate significantly with opening ICP. Our results show that the features of pathologically elevated ICP in children may differ considerably from those validated in adults. The development of quantitative parameters can help to predict which patients would most benefit from invasive neurosurgical monitoring, and we present a novel radiological threshold for this. We gratefully acknowledge financial support as follows. Research support: the Medical Research Council (MRC, Grant Nos. G0600986 ID79068 and G1002277 ID98489) and the National Institute for Health Research Biomedical Research Centre (NIHR BRC) Cambridge (Neuroscience Theme; Brain Injury and Repair Theme). Authors' support: Peter J Hutchinson – NIHR Research Professorship, Academy of Medical Sciences/Health Foundation Senior Surgical Scientist Fellowship, NIHR Global Health Research Group on Neurotrauma, and NIHR Cambridge BRC. Joseph Donnelly is supported by a Woolf Fisher Scholarship. MC- NIHR BRC
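    A threshold with a paired sensitivity/specificity, like the relative basal cistern volume of 0.0055 reported above, is typically chosen by sweeping candidate cut-offs and maximising Youden's J (sensitivity + specificity - 1). A minimal sketch of that procedure follows; the volumes and labels are illustrative, not the study's measurements. Note that the correlation is negative, so a small cistern volume predicts high ICP.

```python
def best_threshold(volumes, high_icp):
    """Return (threshold, sensitivity, specificity) maximising Youden's J,
    predicting high ICP when relative cistern volume <= threshold."""
    best = (None, 0.0, 0.0, -1.0)
    for cut in sorted(set(volumes)):
        tp = sum(1 for v, y in zip(volumes, high_icp) if v <= cut and y)
        fn = sum(1 for v, y in zip(volumes, high_icp) if v > cut and y)
        tn = sum(1 for v, y in zip(volumes, high_icp) if v > cut and not y)
        fp = sum(1 for v, y in zip(volumes, high_icp) if v <= cut and not y)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best[3]:
            best = (cut, sens, spec, j)
    return best[:3]

# Hypothetical relative cistern volumes with high-ICP (>= 20 mmHg) labels.
vols = [0.002, 0.003, 0.004, 0.005, 0.006, 0.007, 0.008, 0.009]
icp  = [True,  True,  True,  True,  False, False, False, False]
thr, sens, spec = best_threshold(vols, icp)
```

    On real data the classes overlap, which is why the reported operating point trades off 79% sensitivity against 80% specificity rather than achieving both perfectly.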

    MAD: Modality Agnostic Distance Measure for Image Registration

    Multi-modal image registration is a crucial pre-processing step in many medical applications. However, it is a challenging task due to the complex intensity relationships between different imaging modalities, which can result in large discrepancies in image appearance. The success of multi-modal image registration, whether conventional or learning-based, is predicated upon the choice of an appropriate distance (or similarity) measure. In particular, deep learning registration algorithms lack accuracy or even fail completely when attempting to register data from an "unseen" modality. In this work, we present Modality Agnostic Distance (MAD), a deep image distance measure that utilises random convolutions to learn the inherent geometry of the images while being robust to large appearance changes. Random convolutions are geometry-preserving modules which we use to simulate an infinite number of synthetic modalities, alleviating the need for aligned paired data during training. We can therefore train MAD on a mono-modal dataset and successfully apply it to a multi-modal dataset. We demonstrate that not only can MAD affinely register multi-modal images successfully, but it also has a larger capture range than traditional measures such as Mutual Information and Normalised Gradient Fields.
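    The core idea of "geometry-preserving" random convolutions can be sketched in a few lines: convolving an image with a random kernel leaves every pixel in place (so spatial structure survives) while remapping intensities, and each new random kernel yields a new synthetic "modality". This is our reading of the abstract; the Gaussian kernel distribution, kernel size, and padding scheme below are assumptions, not the paper's exact configuration.

```python
import random

def random_convolution(img, k=3, seed=None):
    """Convolve a 2D image (list of lists) with a random k x k kernel.
    Pixel positions are untouched; only intensities change, so geometry
    is preserved. Kernel weights ~ N(0, 1) is an assumption."""
    rng = random.Random(seed)
    kern = [[rng.gauss(0.0, 1.0) for _ in range(k)] for _ in range(k)]
    h, w, r = len(img), len(img[0]), k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy = min(max(y + dy, 0), h - 1)  # replicate padding
                    xx = min(max(x + dx, 0), w - 1)
                    acc += kern[dy + r][dx + r] * img[yy][xx]
            out[y][x] = acc
    return out

# Two different seeds simulate two synthetic modalities of the same scene.
img = [[float((x + y) % 5) for x in range(8)] for y in range(8)]
mod_a = random_convolution(img, seed=0)
mod_b = random_convolution(img, seed=1)
```

    Training a distance measure to align `mod_a` with `mod_b` then never requires genuinely paired multi-modal data, which is the property the abstract highlights.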

    Transformer-based out-of-distribution detection for clinically safe segmentation

    In a clinical setting it is essential that deployed image processing systems are robust to the full range of inputs they might encounter and, in particular, do not make confidently wrong predictions. The most popular approach to safe processing is to train networks that can provide a measure of their uncertainty, but these tend to fail for inputs that are far outside the training data distribution. Recently, generative modelling approaches have been proposed as an alternative; these can quantify the likelihood of a data sample explicitly, filtering out any out-of-distribution (OOD) samples before further processing is performed. In this work, we focus on image segmentation and evaluate several approaches to network uncertainty in the far-OOD and near-OOD cases for the task of segmenting haemorrhages in head CTs. We find all of these approaches are unsuitable for safe segmentation as they provide confidently wrong predictions when operating OOD. We propose performing full 3D OOD detection using a VQ-GAN to provide a compressed latent representation of the image and a transformer to estimate the data likelihood. Our approach successfully identifies images in both the far- and near-OOD cases. We find a strong relationship between image likelihood and the quality of a model’s segmentation, making this approach viable for filtering images unsuitable for segmentation. To our knowledge, this is the first time transformers have been applied to perform OOD detection on 3D image data.
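    Once a density model assigns each input a likelihood, the safety gate itself is simple: score the input, reject it as OOD if its negative log-likelihood is too high, and only pass accepted inputs to the segmenter. The sketch below uses a toy unigram model as a stand-in for the VQ-GAN + transformer pipeline (encode to discrete latent tokens, score the token sequence); all names, probabilities, and the threshold are illustrative assumptions.

```python
import math

def filter_ood(samples, nll_fn, threshold):
    """Keep samples whose mean negative log-likelihood is below `threshold`;
    reject the rest as out-of-distribution. `nll_fn` stands in for the
    latent density model described in the abstract."""
    kept, rejected = [], []
    for s in samples:
        (kept if nll_fn(s) < threshold else rejected).append(s)
    return kept, rejected

# Toy stand-in density: a unigram model over latent tokens.
probs = {"a": 0.6, "b": 0.3, "c": 0.1}

def toy_nll(tokens):
    # Mean negative log-likelihood per token; unseen tokens get a tiny prob,
    # so sequences of them score as highly unlikely (i.e. OOD).
    return -sum(math.log(probs.get(t, 1e-6)) for t in tokens) / len(tokens)

in_dist = ["a", "a", "b"]  # frequent tokens -> low NLL -> accepted
ood     = ["z", "z", "z"]  # unseen tokens  -> high NLL -> rejected
kept, rejected = filter_ood([in_dist, ood], toy_nll, threshold=3.0)
```

    The reported correlation between image likelihood and segmentation quality is what justifies using a single scalar threshold like this as the gate.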

    Latent Transformer Models for out-of-distribution detection

    Any clinically-deployed image-processing pipeline must be robust to the full range of inputs it may be presented with. One popular approach to this challenge is to develop predictive models that can provide a measure of their uncertainty. Another approach is to use generative modelling to quantify the likelihood of inputs. Inputs with a low enough likelihood are deemed to be out-of-distribution and are not presented to the downstream predictive model. In this work, we evaluate several approaches to segmentation with uncertainty for the task of segmenting bleeds in 3D CT of the head. We show that these models can fail catastrophically when operating in the far out-of-distribution domain, often providing predictions that are both highly confident and wrong. We propose to instead perform out-of-distribution detection using the Latent Transformer Model: a VQ-GAN is used to provide a highly compressed latent representation of the input volume, and a transformer is then used to estimate the likelihood of this compressed representation of the input. We demonstrate this approach can identify images that are both far- and near-out-of-distribution, as well as provide spatial maps that highlight the regions considered to be out-of-distribution. Furthermore, we find a strong relationship between an image's likelihood and the quality of a model's segmentation on it, demonstrating that this approach is viable for filtering out unsuitable images.

    Automated brain segmentation methods for clinical quality MRI and CT images

    Alzheimer’s disease (AD) is a progressive neurodegenerative disorder associated with brain tissue loss. Accurate estimation of this loss is critical for the diagnosis, prognosis, and tracking of the progression of AD. Structural magnetic resonance imaging (sMRI) and X-ray computed tomography (CT) are widely used imaging modalities that help map brain tissue distributions in vivo. As manual image segmentation is tedious and time-consuming, automated segmentation methods are increasingly applied to head MRI and head CT images to estimate brain tissue volumes. However, existing automated methods can be applied only to images with high spatial resolution, and their accuracy on heterogeneous low-quality clinical images has not been tested. Further, automated brain tissue segmentation methods for CT are not available, although CT is more widely acquired than MRI in the clinical setting. For these reasons, large clinical imaging archives remain unusable for research studies. In this work, we identify and develop automated tissue segmentation and brain volumetry methods that can be applied to clinical quality MRI and CT images. In the first project, we surveyed current MRI methods and validated their accuracy when applied to clinical quality images. We then developed CTSeg, a tissue segmentation method for CT images, by adopting the MRI technique that exhibited the highest reliability. CTSeg is an atlas-based statistical modeling method that relies on hand-curated features and cannot be applied to images of subjects with different diseases and age groups. Advanced deep learning-based segmentation methods use hierarchical representations and learn complex features in a data-driven manner. In our final project, we developed a fully automated deep learning segmentation method that uses contextual information to segment clinical quality head CT images. The application of this method to an AD dataset revealed larger differences between brain volumes of AD and control subjects. This dissertation demonstrates the potential of applying automated methods to large clinical imaging archives to answer research questions in a variety of studies.
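    To give a concrete sense of CT-based brain volumetry, the sketch below estimates brain volume by windowing on Hounsfield units (soft brain tissue sits roughly between 0 and 80 HU, well above air at about -1000 HU and well below bone at over 700 HU) and counting voxels. This crude thresholding is a textbook approximation, not CTSeg or the dissertation's deep learning method, both of which do considerably more; the HU window and voxel values are assumptions for illustration.

```python
def brain_volume_ml(hu_values, voxel_volume_mm3, lo=0.0, hi=80.0):
    """Estimate brain volume (mL) by counting voxels whose Hounsfield
    units fall inside a soft-tissue window [lo, hi]. Illustrative only:
    real CT segmentation must also handle partial-volume effects, noise,
    and non-brain soft tissue in the same HU range."""
    n_brain = sum(1 for hu in hu_values if lo <= hu <= hi)
    return n_brain * voxel_volume_mm3 / 1000.0  # mm^3 -> mL

# Toy voxel list mixing air, bone, and brain-range intensities.
voxels = [-1000.0, 900.0, 30.0, 40.0, 10.0, 75.0, 200.0, -5.0]
vol = brain_volume_ml(voxels, voxel_volume_mm3=1.0)
```

    Comparing such volume estimates between AD and control groups is the kind of downstream analysis the dissertation applies its (far more capable) segmentation methods to.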