
    Are infestations of Cymo melanodactylus killing Acropora cytherea in the Chagos archipelago?

    Associations between branching corals and infaunal crabs are well known, mostly due to the beneficial effects of Trapezia and Tetralia crabs in protecting host corals from crown-of-thorns starfish (e.g., Pratchett et al. 2000) and/or sedimentation (Stewart et al. 2006). These crabs are obligate associates of live corals and highly prevalent across suitable coral hosts, with 1–2 individuals per colony (Patton 1994). Cymo melanodactylus (Fig. 1) are also prevalent in branching corals, mostly Acropora, and are known to feed on live coral tissue, but are generally found in low abundance (<3 per colony) and do not significantly affect their host corals (e.g., Patton 1994). In the Chagos archipelago, however, infestations of Cymo melanodactylus were found on recently dead and dying colonies of Acropora cytherea

    Towards a comprehensive framework for movement and distortion correction of diffusion MR images: Within volume movement

    Most motion correction methods work by aligning a set of volumes together, or to a volume that represents a reference location. These methods rest on an implicit assumption that the subject remains motionless during the several seconds it takes to acquire all slices in a volume, and that any movement occurs in the brief moment between acquiring the last slice of one volume and the first slice of the next. This is clearly an approximation that holds more or less well depending on how long it takes to acquire one volume and on how rapidly the subject moves. In this paper we present a method that increases the temporal resolution of the motion correction by modelling movement as a piecewise continuous function over time. This intra-volume movement correction is implemented within a previously presented framework that simultaneously estimates distortions, movement and movement-induced signal dropout. We validate the method on highly realistic simulated data containing all of these effects. It is demonstrated that we can estimate the true movement with high accuracy, and that scalar parameters derived from the data, such as fractional anisotropy, are estimated with greater fidelity when the data have been corrected for intra-volume movement. Importantly, we also show that the difference in fidelity between data affected by different amounts of movement is much reduced when intra-volume movement is taken into account. Additional validation was performed on data from a healthy volunteer scanned both when lying still and when performing deliberate movements. We show an increased correspondence between the “still” and the “movement” data when the latter is corrected for intra-volume movement. Finally, we demonstrate a marked reduction in the telltale signs of intra-volume movement in data acquired on elderly subjects
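    One way to picture the piecewise-continuous movement model is as interpolation of rigid-body parameters between knots placed within each volume, so that every slice gets its own movement estimate rather than sharing one per volume. This is only an illustrative sketch: the knot layout, parameter count, and function names below are assumptions, not the paper's implementation.

```python
import numpy as np

def motion_at(t, knot_times, knot_params):
    """Evaluate rigid-body movement parameters at an arbitrary slice
    acquisition time t by piecewise-linear interpolation between knots
    placed within each volume (3 translations + 3 rotations assumed)."""
    knot_params = np.asarray(knot_params, dtype=float)
    # interpolate each of the six parameters independently over time
    return np.array([np.interp(t, knot_times, knot_params[:, j])
                     for j in range(knot_params.shape[1])])

# a slice acquired halfway between two knots gets the halfway parameters
times = [0.0, 1.0, 2.0]
params = [[0.0] * 6, [2.0] * 6, [4.0] * 6]
mid_slice = motion_at(0.5, times, params)
```

    Volume-to-volume correction corresponds to the special case of one knot per volume; adding knots within a volume is what raises the temporal resolution.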

    Gastroesophageal intussusception with complete herniation of the spleen in a 12-month-old dog with idiopathic megaoesophagus

    A 12-month-old, castrated male, mixed-breed dog was presented due to acute onset of vomiting, retching, anorexia, and tachypnoea. Idiopathic megaoesophagus had been diagnosed three months prior to presentation. Radiographic and CT examination revealed gastroesophageal intussusception with herniation of the entire spleen into the intussusception. After initial stabilization, surgical treatment was performed. The stomach and spleen were manually reduced into the abdomen. Due to questionable viability of the gastric wall, an inverting suture pattern was used to invaginate the compromised part. Left-sided gastropexy was performed to reduce the risk of recurrence. Additionally, oesophagopexy was performed to reduce the risk of hiatal hernia due to intraoperative damage to the hiatus. The patient recovered uneventfully and was discharged from hospital five days after surgery. Conservative treatment of the concurrent megaoesophagus was continued. At the last follow-up, 10 months later, the dog was clinically well, had gained weight, and showed no signs of regurgitation

    Denoising diffusion models for out-of-distribution detection

    Out-of-distribution detection is crucial to the safe deployment of machine learning systems. Currently, unsupervised out-of-distribution detection is dominated by generative-based approaches that make use of estimates of the likelihood or other measurements from a generative model. Reconstruction-based methods offer an alternative approach, in which a measure of reconstruction error is used to determine if a sample is out-of-distribution. However, reconstruction-based approaches are less favoured, as they require careful tuning of the model's information bottleneck, such as the size of the latent dimension, to produce good results. In this work, we exploit the view of denoising diffusion probabilistic models (DDPM) as denoising autoencoders where the bottleneck is controlled externally, by means of the amount of noise applied. We propose to use DDPMs to reconstruct an input that has been noised to a range of noise levels, and use the resulting multi-dimensional reconstruction error to classify out-of-distribution inputs. We validate our approach both on standard computer-vision datasets and on higher-dimensional medical datasets. Our approach outperforms not only reconstruction-based methods, but also state-of-the-art generative-based approaches. Code is available at https://github.com/marksgraham/ddpm-ood
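    The multi-level reconstruction idea can be sketched in a few lines: noise the input to each level, reconstruct, and keep one error per level as the OOD feature vector. Here `denoise(x_noisy, sigma)` is a stand-in for a DDPM reconstruction from noise level sigma; the full method lives in the linked repository.

```python
import numpy as np

def multiscale_recon_error(x, denoise, noise_levels, rng):
    """Noise x to each level, reconstruct with `denoise`, and collect the
    per-level reconstruction errors used as the multi-dimensional OOD
    feature (a classifier or simple aggregation is applied downstream)."""
    errors = []
    for sigma in noise_levels:
        x_noisy = x + sigma * rng.standard_normal(x.shape)
        x_hat = denoise(x_noisy, sigma)   # stand-in for a DDPM reverse pass
        errors.append(float(np.mean((x - x_hat) ** 2)))
    return np.array(errors)

# trivial "denoiser" that always outputs zeros, just to exercise the loop
rng = np.random.default_rng(0)
errs = multiscale_recon_error(np.ones((4, 4)),
                              lambda xn, s: np.zeros_like(xn),
                              [0.1, 0.5, 1.0], rng)
```

    The key design point is that varying sigma sweeps the effective bottleneck, so no single latent size has to be tuned.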

    Test-time unsupervised domain adaptation

    Convolutional neural networks trained on publicly available medical imaging datasets (source domain) rarely generalise to different scanners or acquisition protocols (target domain). This motivates the active field of domain adaptation. While some approaches to the problem require labelled data from the target domain, others adopt an unsupervised approach to domain adaptation (UDA). Evaluating UDA methods consists of measuring the model’s ability to generalise to unseen data in the target domain. In this work, we argue that this is not as useful as adapting to the test set directly. We therefore propose an evaluation framework where we perform test-time UDA on each subject separately. We show that models adapted to a specific target subject from the target domain outperform a domain adaptation method that has seen more data from the target domain but not this specific target subject. This result supports the thesis that unsupervised domain adaptation should be used at test time, even if only using a single target-domain subject
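    The per-subject adaptation loop can be illustrated with a deliberately small stand-in: a linear softmax classifier adapted to one subject's unlabelled data by self-training on its own pseudo-labels. The paper works with CNNs and its adaptation objective may differ; only the adapt-to-one-subject pattern is the point here.

```python
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def adapt_to_subject(W, X, lr=0.05, steps=100):
    """Test-time adaptation sketch: pseudo-label the single target
    subject's unlabelled data X with the current model W, then take a few
    self-training gradient steps on those fixed pseudo-labels."""
    pseudo = softmax(X @ W).argmax(axis=1)        # fixed pseudo-labels
    Y = np.eye(W.shape[1])[pseudo]
    for _ in range(steps):
        P = softmax(X @ W)
        W = W - lr * X.T @ (P - Y) / len(X)       # cross-entropy gradient
    return W
```

    Each test subject gets its own adapted copy of the model; nothing is shared across subjects, mirroring the proposed evaluation framework.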

    Hierarchical Brain Parcellation with Uncertainty

    Many atlases used for brain parcellation are hierarchically organised, progressively dividing the brain into smaller sub-regions. However, state-of-the-art parcellation methods tend to ignore this structure and treat labels as if they are ‘flat’. We introduce a hierarchically-aware brain parcellation method that works by predicting the decisions at each branch in the label tree. We further show how this method can be used to model uncertainty separately for every branch in this label tree. Our method exceeds the performance of flat uncertainty methods, whilst also providing decomposed uncertainty estimates that enable us to obtain self-consistent parcellations and uncertainty maps at any level of the label hierarchy. We demonstrate a simple way these decision-specific uncertainty maps may be used to provide uncertainty-thresholded tissue maps at any level of the label tree
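    One way predicting branch decisions yields self-consistent labels: the probability of a leaf region is the product of the decision probabilities along its path down the label tree, so probabilities at coarser levels are automatically consistent with finer ones. The region names and tree below are invented for illustration.

```python
def leaf_probability(path, branch_probs):
    """Multiply the predicted decision probabilities along a leaf's path
    through the label tree; summing leaf probabilities under an internal
    node recovers that node's probability, keeping levels consistent."""
    p = 1.0
    for decision in path:
        p *= branch_probs[decision]
    return p

# hypothetical three-level path: tissue -> lobe -> region
branch_probs = {"tissue=grey": 0.9, "lobe=frontal": 0.8, "region=precentral": 0.5}
p_leaf = leaf_probability(["tissue=grey", "lobe=frontal", "region=precentral"],
                          branch_probs)
```

    Because each factor is a separate decision, an uncertainty estimate can be attached per branch and thresholded at whichever level of the hierarchy is of interest.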

    A Simulation Framework for Quantitative Validation of Artefact Correction in Diffusion MRI

    In this paper we demonstrate a simulation framework that enables the direct and quantitative comparison of post-processing methods for diffusion weighted magnetic resonance (DW-MR) images. DW-MR datasets are employed in a range of techniques that enable estimates of local microstructure and global connectivity in the brain. These techniques require full alignment of images across the dataset, but this is rarely the case. Artefacts such as eddy-current (EC) distortion and motion lead to misalignment between images, which compromises the quality of the microstructural measures obtained from them. Numerous methods and software packages exist to correct these artefacts, some of which have become de facto standards, but none have been subject to rigorous validation. The ultimate aim of these techniques is improved image alignment, yet in the literature this is assessed using either qualitative visual measures or quantitative surrogate metrics. Here we introduce a simulation framework that allows for the direct, quantitative assessment of techniques, enabling objective comparisons of existing and future methods. DW-MR datasets are generated using a process that is based on the physics of MRI acquisition, which allows for the salient features of the images and their artefacts to be reproduced. We demonstrate the application of this framework by testing one of the most commonly used methods for EC correction, registration of DWIs to b = 0, and reveal the systematic bias this introduces into corrected datasets
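    The advantage of simulation is that the true transforms are known, so correction quality can be scored directly rather than through surrogates. A minimal version of such a direct score, with an assumed function name and array layout, is the mean Euclidean distance between true and estimated per-voxel displacements:

```python
import numpy as np

def alignment_error(true_disp, est_disp):
    """Mean Euclidean distance (e.g. in mm) between ground-truth and
    estimated displacement vectors, arrays of shape (n_voxels, 3).
    Only possible when the simulation supplies the true displacements."""
    true_disp = np.asarray(true_disp, dtype=float)
    est_disp = np.asarray(est_disp, dtype=float)
    return float(np.mean(np.linalg.norm(true_disp - est_disp, axis=-1)))
```

    Comparing this error before and after correction, per method, is what makes the framework's comparisons objective; a surrogate metric could improve while this true error worsens, which is exactly the kind of systematic bias the paper reveals.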

    A supervised learning approach for diffusion MRI quality control with minimal training data

    Quality control (QC) is a fundamental component of any study. Diffusion MRI has unique challenges that make manual QC particularly difficult, including a greater number of artefacts than other MR modalities and a greater volume of data. The gold standard is manual inspection of the data, but this process is time-consuming and subjective. Recently, supervised learning approaches based on convolutional neural networks have been shown to be competitive with manual inspection. A drawback of these approaches is that they still require a manually labelled dataset for training, which is itself time-consuming to produce and still introduces an element of subjectivity. In this work we demonstrate that the need for manual labelling can be greatly reduced by training on simulated data and using only a small amount of labelled data for a final calibration step. We demonstrate its potential for the detection of severe movement artefacts, and compare performance to a classifier trained on manually-labelled real data
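    The calibration step can be pictured as fitting a single decision threshold: a classifier trained on simulated data produces artefact scores, and the small manually-labelled set is used only to pick where to cut them. The function name and the accuracy criterion are assumptions for illustration, not the paper's exact procedure.

```python
def calibrate_threshold(scores, labels):
    """Pick the score threshold that best separates a small
    manually-labelled calibration set (label 1 = severe movement
    artefact), leaving the simulated-data classifier itself untouched."""
    best_t, best_acc = min(scores), -1.0
    for t in sorted(set(scores)):
        acc = sum((s >= t) == bool(y)
                  for s, y in zip(scores, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t
```

    Because only one scalar is fitted, a handful of labelled volumes suffices, which is what keeps the manual-labelling burden small.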

    Moral Framing and Ideological Bias of News

    News outlets are a primary source for many people to learn what is going on in the world. However, outlets with different political slants, when covering the same news story, usually emphasize different aspects and choose their language framing differently. This framing implicitly reveals their biases and also shapes the reader's opinion and understanding. Understanding the framing of news stories is therefore fundamental to recognizing the view a writer conveys with each story. In this paper, we describe methods for characterizing moral frames in the news. We capture the frames based on Moral Foundations Theory, a psychological theory which holds that moral attitudes and opinions can be summarized along five main dimensions. We propose an unsupervised method that extracts the framing bias and the framing intensity without any external framing annotations. We validate the performance on an annotated Twitter dataset and then use it to quantify the framing bias and partisanship of news
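    To make the two quantities concrete, here is a toy lexicon-based sketch for a single moral foundation: bias as the mean polarity of matched words (virtue = +1, vice = -1) and intensity as the fraction of words that are morally loaded. The paper's method is unsupervised and needs no such lexicon; this sketch, including both formulas, is illustrative only.

```python
def framing_scores(tokens, lexicon):
    """Score one document against one moral foundation's word lexicon.
    Returns (bias, intensity): mean polarity of matched words, and the
    fraction of all words that carry a moral loading."""
    hits = [lexicon[w] for w in tokens if w in lexicon]
    if not hits:
        return 0.0, 0.0   # no moral vocabulary: neutral, zero intensity
    bias = sum(hits) / len(hits)
    intensity = len(hits) / len(tokens)
    return bias, intensity

# hypothetical fairness-foundation lexicon
lex = {"fair": 1.0, "cheat": -1.0}
bias, intensity = framing_scores(["they", "cheat", "and", "cheat"], lex)
```

    Under this framing, two outlets can show the same intensity but opposite bias on the same story, which is the kind of contrast the paper quantifies at scale.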

    The new paradigm of hepatitis C therapy: integration of oral therapies into best practices.

    Emerging data indicate that all-oral antiviral treatments for chronic hepatitis C virus (HCV) will become a reality in the near future. In replacing interferon-based therapies, all-oral regimens are expected to be more tolerable, more effective, shorter in duration and simpler to administer. Coinciding with new treatment options are novel methodologies for disease screening and staging, which create the possibility of more timely care and treatment. Assessments of histologic damage typically are performed using liver biopsy, yet noninvasive assessments of histologic damage have become the norm in some European countries and are becoming more widespread in the United States. Also in place are new Centers for Disease Control and Prevention (CDC) initiatives to simplify testing, improve provider and patient awareness and expand recommendations for HCV screening beyond risk-based strategies. Issued in 2012, the CDC recommendations aim to increase HCV testing among those with the greatest HCV burden in the United States by recommending one-time testing for all persons born during 1945-1965. In 2013, the United States Preventive Services Task Force adopted similar recommendations for risk-based and birth-cohort-based testing. Taken together, the developments in screening, diagnosis and treatment will likely increase demand for therapy and stimulate a shift in delivery of care related to chronic HCV, with increased involvement of primary care and infectious disease specialists. Yet even in this new era of therapy, barriers to curing patients of HCV will exist. Overcoming such barriers will require novel, integrative strategies and investment of resources at local, regional and national levels