    Somatic mitochondrial DNA mutations in cancer escape purifying selection and high pathogenicity mutations lead to the oncocytic phenotype: pathogenicity analysis of reported somatic mtDNA mutations in tumors

    BACKGROUND: The presence of somatic mitochondrial DNA (mtDNA) mutations in cancer cells has been interpreted in controversial ways, ranging from random neutral accumulation of mutations, to positive selection for high pathogenicity, or conversely to purifying selection against high-pathogenicity variants as occurs at the population level. METHODS: Here we evaluated the predicted pathogenicity of somatic mtDNA mutations described in cancer and compared these to the distribution of variations observed in the global human population and to all possible protein variations that could occur in human mtDNA. We focus on oncocytic tumors, which are clearly associated with mitochondrial dysfunction. Protein variant pathogenicity was predicted using two computational methods, MutPred and SNPs&GO. RESULTS: The pathogenicity scores of the somatic mtDNA variants were significantly higher in oncocytic tumors than in non-oncocytic tumors. Variations in subunits of Complex I of the electron transfer chain were significantly more common in tumors with the oncocytic phenotype, while variations in Complex V subunits were significantly more common in non-oncocytic tumors. CONCLUSIONS: Our results show that the somatic mtDNA mutations reported across all tumors are indistinguishable from a random selection from the set of all possible amino acid variations, and have therefore escaped the purifying selection that acts strongly at the population level. We show that the pathogenicity of somatic mtDNA mutations is a determining factor for the oncocytic phenotype. The opposite associations of Complex I and Complex V variants with oncocytic and non-oncocytic tumors imply that low mitochondrial membrane potential may play an important role in determining the oncocytic phenotype.
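
    A minimal sketch of the kind of group comparison described above, assuming hypothetical pathogenicity scores: real values would come from MutPred / SNPs&GO predictions for each somatic variant, and the rank-based test shown here is only one reasonable choice, not necessarily the statistic used by the authors.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical per-variant pathogenicity scores for two tumor groups
oncocytic_scores = np.array([0.82, 0.91, 0.77, 0.88, 0.95])
non_oncocytic_scores = np.array([0.41, 0.55, 0.62, 0.38, 0.49])

# One-sided rank-based test: are oncocytic scores stochastically greater?
stat, p_value = mannwhitneyu(oncocytic_scores, non_oncocytic_scores,
                             alternative="greater")
print(f"U = {stat:.1f}, one-sided p = {p_value:.4f}")
```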

    Tversky loss function for image segmentation using 3D fully convolutional deep networks

    Fully convolutional deep neural networks have excellent potential for fast and accurate image segmentation. One of the main challenges in training these networks is data imbalance, which is particularly problematic in medical imaging applications such as lesion segmentation, where the number of lesion voxels is often much lower than the number of non-lesion voxels. Training with unbalanced data can lead to predictions that are severely biased towards high precision but low recall (sensitivity), which is undesirable especially in medical applications where false negatives are much less tolerable than false positives. Several methods have been proposed to deal with this problem, including balanced sampling, two-step training, sample re-weighting, and similarity loss functions. In this paper, we propose a generalized loss function based on the Tversky index to address the issue of data imbalance and achieve a much better trade-off between precision and recall in training 3D fully convolutional deep neural networks. Experimental results in multiple sclerosis lesion segmentation on magnetic resonance images show improved F2 score, Dice coefficient, and area under the precision-recall curve on test data. Based on these results we suggest the Tversky loss function as a generalized framework to effectively train deep neural networks.
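
    The Tversky index generalises the Dice coefficient by weighting false positives and false negatives separately; a minimal PyTorch sketch of a loss based on it is shown below. The weights, smoothing constant and binary-segmentation setting are illustrative choices, not necessarily the paper's exact configuration.

```python
import torch

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """Tversky loss for binary segmentation.

    pred:   predicted foreground probabilities, any shape (e.g. N x D x H x W)
    target: binary ground-truth mask of the same shape
    alpha / beta weight false positives / false negatives; choosing beta > alpha
    penalises false negatives more, trading precision for recall.
    """
    pred = pred.reshape(-1)
    target = target.reshape(-1).float()
    tp = (pred * target).sum()
    fp = (pred * (1.0 - target)).sum()
    fn = ((1.0 - pred) * target).sum()
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky_index
```

    With alpha = beta = 0.5 the expression reduces to the Dice loss; increasing beta pushes training towards higher recall, which is the motivation stated in the abstract.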

    Assessment of Electromagnetic Tracking Accuracy for Endoscopic Ultrasound

    Endoscopic ultrasound (EUS) is a minimally invasive imaging technique that can be technically difficult to perform due to the small field of view and uncertainty in the endoscope position. Electromagnetic (EM) tracking is emerging as an important technology for guiding endoscopic interventions and for training in endotherapy, providing information on endoscope location through fusion with pre-operative images. However, the accuracy of EM tracking could be compromised by the endoscopic ultrasound transducer. In this work, we quantify the precision and accuracy of EM tracking sensors inserted into the working channel of a flexible endoscope, with the ultrasound transducer turned on and off. The EUS device was found to have little (no statistically significant) effect on static tracking accuracy, although jitter increased significantly. A significant change in the measured distance between sensors arranged in a fixed geometry was found during a dynamic acquisition. In conclusion, EM tracking accuracy was not found to be significantly affected by the flexible endoscope.
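
    As a rough illustration of the quantities involved, the sketch below computes static jitter as the RMS deviation of repeated readings and the inter-sensor distance over a dynamic acquisition. The helper functions and data layout are assumptions for illustration, not the authors' analysis code.

```python
import numpy as np

def jitter_rms(positions):
    """RMS deviation of repeated static readings (T x 3 array, in mm)
    from their mean position; a simple measure of tracking jitter."""
    centred = positions - positions.mean(axis=0)
    return np.sqrt((centred ** 2).sum(axis=1).mean())

def inter_sensor_distances(sensor_a, sensor_b):
    """Distance between two rigidly coupled sensors at each time point
    (both T x 3 tracks); its spread reflects dynamic tracking error."""
    return np.linalg.norm(sensor_a - sensor_b, axis=1)
```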

    The temporal representation of experience in subjective mood

    Humans refer to their mood state regularly in day-to-day as well as clinical interactions. Theoretical accounts suggest that when reporting on our mood we integrate over the history of our experiences; yet the temporal structure of this integration remains unexamined. Here, we use a computational approach to quantitatively answer this question and show that early events exert a stronger influence on reported mood (a primacy weighting) than recent events. We show that a Primacy model accounts for mood reports better than a range of alternative temporal representations across random, consistent, or dynamic reward environments, different age groups, and both healthy and depressed participants. Moreover, we find evidence for neural encoding of the Primacy, but not the Recency, model in frontal brain regions related to mood regulation. These findings hold implications for the timing of events in experimental or clinical settings and suggest new directions for individualized mood interventions.
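
    A toy sketch of the contrast between primacy and recency weighting is given below; the exponential form and the discount parameter are illustrative assumptions, not the fitted model from the study.

```python
import numpy as np

def weighted_mood(rewards, gamma=0.7, primacy=True):
    """Exponentially weighted sum of a reward history.

    primacy=True weights early events most heavily (weights decay with
    event index); primacy=False is the usual recency weighting (weights
    decay with distance from the present).
    """
    rewards = np.asarray(rewards, dtype=float)
    t = np.arange(len(rewards))
    weights = gamma ** t if primacy else gamma ** t[::-1]
    weights = weights / weights.sum()
    return float(weights @ rewards)
```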

    Towards image-guided pancreas and biliary endoscopy: Automatic multi-organ segmentation on abdominal CT with dense dilated networks

    Segmentation of anatomy on abdominal CT enables patient-specific image guidance in clinical endoscopic procedures and in endoscopy training. Because robust interpatient registration of abdominal images is necessary for existing multi-atlas- and statistical-shape-model-based segmentations, but remains challenging, there is a need for automated multi-organ segmentation that does not rely on registration. We present a deep-learning-based algorithm for segmenting the liver, pancreas, stomach, and esophagus using dilated convolution units with dense skip connections and a new spatial prior. The algorithm was evaluated with an 8-fold cross-validation and compared to a joint-label-fusion-based segmentation on the basis of Dice scores and boundary distances. The proposed algorithm yielded more accurate segmentations than the joint-label-fusion-based algorithm for the pancreas (median Dice scores 66 vs 37), stomach (83 vs 72) and esophagus (73 vs 54), and marginally less accurate segmentation for the liver (92 vs 93). We conclude that dilated convolutional networks with dense skip connections can segment the liver, pancreas, stomach and esophagus from abdominal CT without image registration and have the potential to support image-guided navigation in gastrointestinal endoscopy procedures.
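
    A minimal PyTorch sketch of a densely connected stack of dilated 3D convolutions is shown below; channel counts, dilation rates and normalisation are illustrative and not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DenseDilatedBlock(nn.Module):
    """Dense stack of dilated 3x3x3 convolutions: every unit receives the
    concatenation of the block input and all earlier unit outputs."""
    def __init__(self, in_channels, growth=16, dilations=(1, 2, 4)):
        super().__init__()
        self.units = nn.ModuleList()
        channels = in_channels
        for d in dilations:
            self.units.append(nn.Sequential(
                nn.Conv3d(channels, growth, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm3d(growth),
                nn.ReLU(inplace=True),
            ))
            channels += growth  # dense connectivity: input channels accumulate

    def forward(self, x):
        features = [x]
        for unit in self.units:
            features.append(unit(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```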

    Automatic Multi-organ Segmentation on Abdominal CT with Dense V-networks

    Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the GI tract (esophagus, stomach, duodenum) and surrounding organs (liver, spleen, left kidney, gallbladder). We directly compared the segmentation accuracy of the proposed method to existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 vs. 0.71, 0.74 and 0.74 for the pancreas, 0.90 vs. 0.85, 0.87 and 0.83 for the stomach and 0.76 vs. 0.68, 0.69 and 0.66 for the esophagus. We conclude that deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.
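
    Per-organ Dice scores such as those reported above can be computed from two integer label maps as in the following sketch; the label convention in the comment is hypothetical.

```python
import numpy as np

def dice_score(pred_labels, true_labels, organ_label):
    """Dice overlap for one organ label in two integer label maps."""
    pred = pred_labels == organ_label
    true = true_labels == organ_label
    intersection = np.logical_and(pred, true).sum()
    denom = pred.sum() + true.sum()
    return 2.0 * intersection / denom if denom > 0 else float("nan")

# Hypothetical label convention: 1=liver, 2=pancreas, 3=stomach, ...
# scores = {name: dice_score(pred, gt, lab) for name, lab in organs.items()}
```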

    Determination of optimal ultrasound planes for the initialisation of image registration during endoscopic ultrasound-guided procedures

    PURPOSE: Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields of view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific, EUS-visible anatomical landmark locations that maximise the accuracy and robustness of a feature-based multimodality registration method. METHODS: A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provides a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes, using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. RESULTS: The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). CONCLUSIONS: The proposed simulation-based method for finding optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
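
    Monte Carlo estimation of TREs under landmark localisation error can be sketched as below, assuming a simple rigid (Kabsch) landmark registration and isotropic Gaussian noise; this is an illustrative reconstruction, not the authors' simulation code.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src points onto dst."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dst_c - R @ src_c

def simulated_tre_p90(landmarks, target, sigma=2.0, n_trials=1000, seed=0):
    """Perturb landmark localisations with isotropic Gaussian noise (mm),
    refit the rigid registration, and record the error at the target point.
    Returns the 90th percentile TRE over all trials."""
    rng = np.random.default_rng(seed)
    tres = []
    for _ in range(n_trials):
        noisy = landmarks + rng.normal(0.0, sigma, landmarks.shape)
        R, t = rigid_fit(noisy, landmarks)
        tres.append(np.linalg.norm((R @ target + t) - target))
    return np.percentile(tres, 90)
```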

    Endoscopic Ultrasound Image Synthesis Using a Cycle-Consistent Adversarial Network

    Endoscopic ultrasound (EUS) is a challenging procedure that requires skill both in endoscopy and in ultrasound image interpretation. Classification of key anatomical landmarks visible on EUS images can assist the gastroenterologist during navigation. Current applications of deep learning have shown the ability to automatically classify ultrasound images with high accuracy. However, these techniques require a large amount of labelled data, which is time consuming to obtain and, in the case of EUS, is also difficult to produce retrospectively due to the lack of 3D context. In this paper, we propose the use of an image-to-image translation method to create synthetic EUS (sEUS) images from CT data that can be used as a data augmentation strategy when EUS data is scarce. We train a cycle-consistent adversarial network with unpaired EUS images and CT slices, extracted in a manner such that they mimic plausible EUS views, to generate sEUS images of the pancreas, aorta and liver. We quantitatively evaluate the use of sEUS images in a classification sub-task and assess the Fréchet Inception Distance. We show that synthetic data obtained from CT data imposes only a minor classification accuracy penalty and may help generalization to new, unseen patients. The code and a dataset containing generated sEUS images are available at: https://ebonmati.github.io
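
    The cycle-consistency term at the heart of such a model can be sketched as follows; the generator objects, the L1 formulation and the weighting are placeholders, and the adversarial terms of the full CycleGAN objective are omitted.

```python
import torch

def cycle_consistency_loss(real_ct, real_eus, G_ct2eus, G_eus2ct, lam=10.0):
    """Cycle-consistency term of a CycleGAN-style model: translating a CT
    slice to synthetic EUS and back (and vice versa) should recover the
    input. G_ct2eus and G_eus2ct are the two generator networks."""
    rec_ct = G_eus2ct(G_ct2eus(real_ct))
    rec_eus = G_ct2eus(G_eus2ct(real_eus))
    return lam * (torch.mean(torch.abs(rec_ct - real_ct)) +
                  torch.mean(torch.abs(rec_eus - real_eus)))
```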