
    Effect of latent space distribution on the segmentation of images with multiple annotations

    We propose the Generalized Probabilistic U-Net, which extends the Probabilistic U-Net by allowing more general forms of the Gaussian distribution as the latent space distribution that can better approximate the uncertainty in the reference segmentations. We study the effect the choice of latent space distribution has on capturing the variation in the reference segmentations for lung tumors and white matter hyperintensities in the brain. We show that the choice of distribution affects the sample diversity of the predictions and their overlap with respect to the reference segmentations. We have made our implementation available at https://github.com/ishaanb92/GeneralizedProbabilisticUNet. Comment: Accepted for publication at the Journal of Machine Learning for Biomedical Imaging (MELBA), https://melba-journal.org/2023:005. arXiv admin note: text overlap with arXiv:2207.1287
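The core idea — a richer latent-space Gaussian can express correlations between latent dimensions that an axis-aligned Gaussian cannot — can be illustrated with a small sampling sketch. This is a toy in NumPy, not the authors' implementation; all names, dimensions, and covariance values are invented for illustration.

```python
import numpy as np

# Toy sketch (not the authors' implementation): how the choice of
# latent-space Gaussian affects what the latent codes can express.
# Dimensions, covariances, and names are invented for illustration.

rng = np.random.default_rng(0)

def sample_latents(mean, cov, n_samples):
    """Draw latent codes z ~ N(mean, cov); in a (Generalized)
    Probabilistic U-Net each z would condition one segmentation sample."""
    return rng.multivariate_normal(mean, cov, size=n_samples)

dim = 6
mean = np.zeros(dim)

# Axis-aligned Gaussian: the original Probabilistic U-Net assumption.
diag_cov = np.diag(np.full(dim, 0.5))

# Full-covariance Gaussian: one of the more general forms studied.
A = rng.normal(size=(dim, dim))
full_cov = 0.5 * (A @ A.T) / dim + 0.1 * np.eye(dim)

z_diag = sample_latents(mean, diag_cov, 1000)
z_full = sample_latents(mean, full_cov, 1000)

# Empirical covariance of the full-covariance samples shows off-diagonal
# correlations that a diagonal latent distribution cannot represent.
emp_cov = np.cov(z_full, rowvar=False)
off_diag = np.abs(emp_cov - np.diag(np.diag(emp_cov))).max()
print(z_diag.shape, z_full.shape)
```

Each latent sample would decode to one plausible segmentation, so correlated latent dimensions translate into more structured variation across the sampled segmentations.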

    Schemes for reducing the required volume of measurements in a method for monitoring a stationary hoisting installation

    We propose a sequential procedure for deciding on the vector of characteristics of a monitored stationary hoisting installation. The procedure generalizes Wald's sequential test and yields a reduction in the average number of trials analogous to the usual Wald gain for the two-hypothesis case. The proposed sequential procedure can also take additional information into account, thereby achieving a further reduction in the volume of testing required to monitor the stationary hoisting installation
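For context, Wald's sequential probability ratio test (SPRT) — the procedure generalized above to a vector of characteristics — can be sketched for the two-hypothesis Bernoulli case. Parameter values and names below are illustrative assumptions, not the paper's setup.

```python
import math
import random

# Illustrative two-hypothesis sketch of Wald's sequential probability
# ratio test (SPRT). Parameter values are assumptions for demonstration.

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Sequentially test H0: p = p0 vs H1: p = p1 on Bernoulli data.
    Returns (decision, number of observations used)."""
    a = math.log(beta / (1 - alpha))   # accept-H0 boundary
    b = math.log((1 - beta) / alpha)   # accept-H1 boundary
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood ratio increment for one Bernoulli observation.
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr <= a:
            return "H0", n
        if llr >= b:
            return "H1", n
    return "undecided", len(samples)

random.seed(1)
data = [1 if random.random() < 0.8 else 0 for _ in range(1000)]
decision, n_used = sprt(data, p0=0.5, p1=0.8)
print(decision, n_used)  # usually decides for H1 long before all 1000 trials
```

The "Wald gain" mentioned above is exactly this early stopping: on average far fewer observations are needed than with a fixed-sample test of the same error rates.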

    Real-Time Decoding of Brain Responses to Visuospatial Attention Using 7T fMRI

    Brain-computer interface (BCI) technologies aim to create new communication channels between our mind and our environment, independent of the motor system, by detecting and classifying self-regulation of local brain activity. BCIs can provide patients with severe paralysis a means to communicate and to live more independent lives. There has been a growing interest in using invasive recordings for BCI to improve the signal quality. This also potentially gives access to new control strategies previously inaccessible by non-invasive methods. However, before surgery, the best implantation site needs to be determined. The blood-oxygen-level dependent signal changes measured with fMRI have been shown to agree well spatially with those found with invasive electrodes, and are the best option for pre-surgical localization. We show, using real-time fMRI at 7T, that eye movement-independent visuospatial attention can be used as a reliable control strategy for BCIs. At this field strength even subtle signal changes can be detected in single trials thanks to the high contrast-to-noise ratio. A group of healthy subjects was instructed to move their attention between three (two peripheral and one central) spatial target regions while keeping their gaze fixated at the center. The activated regions were first located and thereafter the subjects were given real-time feedback based on the activity in these regions. All subjects managed to regulate local brain areas without training, which suggests that visuospatial attention is a promising new target for intracranial BCI. ECoG data recorded from one epilepsy patient showed that local changes in gamma-power can be used to separate the three classes
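As a toy illustration of the decoding principle — separating three attention targets from region-of-interest activity — one could simply pick the target whose ROI responds most strongly on a given trial. The signal values below are synthetic, not fMRI or ECoG data, and all names are invented.

```python
import numpy as np

# Toy sketch of single-trial decoding: classify which of three spatial
# attention targets is active from the mean signal in three regions of
# interest. ROI signal values here are synthetic, not fMRI/ECoG data.

TARGETS = ["left-peripheral", "right-peripheral", "central"]

def decode_trial(roi_signals):
    """Pick the attention target whose ROI is most active this trial."""
    return TARGETS[int(np.argmax(roi_signals))]

rng = np.random.default_rng(3)
# Simulate a trial attending the central target: its ROI signal is raised
# well above the noise level of the other two regions.
trial = rng.normal(0.0, 0.1, size=3)
trial[2] += 1.0
print(decode_trial(trial))
```

Real-time feedback then amounts to running such a classifier on each incoming trial and showing the subject the decoded target.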

    Aid alignment for global health research: the role of HIROs

    The lack of a mechanism that aligns financial flows for global health research towards public health priorities limits the impact of health research on health and health equity. Collaborative groups of health research funders appear to be particularly well situated to ameliorate this situation and to initiate discussion on aid alignment for global health research. One such group is the Heads of International Research Organizations (HIROs), which brings together a large number of major government and philanthropic funders of biomedical research. Surprisingly, there is hardly any information publicly available on HIROs' objectives, or on how it aims to achieve more harmonization in the field of research for health. Greater transparency on HIROs' objectives and on its current efforts towards addressing the gap between global health research needs and investments would be desirable, given the enormous potential benefits of more coordination by this group

    Automated measurement of brain and white matter lesion volume in type 2 diabetes mellitus

    Aims/hypothesis: Type 2 diabetes mellitus has been associated with brain atrophy and cognitive decline, but the association with ischaemic white matter lesions is unclear. Previous neuroimaging studies have mainly used semiquantitative rating scales to measure atrophy and white matter lesions (WMLs). In this study we used an automated segmentation technique to investigate the association of type 2 diabetes, several diabetes-related risk factors and cognition with cerebral tissue and WML volumes. Subjects and methods: Magnetic resonance images of 99 patients with type 2 diabetes and 46 control participants from a population-based sample were segmented using a k-nearest neighbour classifier trained on ten manually segmented data sets. White matter, grey matter, lateral ventricles, cerebrospinal fluid not including lateral ventricles, and WML volumes were assessed. Analyses were adjusted for age, sex, level of education and intracranial volume. Results: Type 2 diabetes was associated with a smaller grey matter volume (-21.8 ml; 95% CI -34.2, -9.4), a larger lateral ventricle volume (7.1 ml; 95% CI 2.3, 12.0) and a larger white matter lesion volume (56.5%; 95% CI 4.0, 135.8), whereas white matter volume was not affected. In separate analyses for men and women, the effects of diabetes were only significant in women. Conclusions/interpretation: The combination of atrophy with larger WML volume indicates that type 2 diabetes is associated with mixed pathology in the brain. The observed sex differences were unexpected and need to be addressed in further studies. © 2007 Springer-Verlag
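The k-nearest-neighbour classification step can be sketched as follows. The features, class labels, and values are synthetic illustrations, not the study's trained classifier, which used features derived from ten manually segmented data sets.

```python
import numpy as np

# Minimal sketch of k-nearest-neighbour voxel classification, the type
# of segmentation approach described above. All data are synthetic.

rng = np.random.default_rng(42)

def knn_classify(train_feats, train_labels, test_feats, k=5):
    """Label each test voxel by majority vote among its k nearest
    training voxels in feature space (Euclidean distance)."""
    preds = []
    for f in test_feats:
        d = np.linalg.norm(train_feats - f, axis=1)
        nearest = train_labels[np.argsort(d)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Two synthetic tissue classes, e.g. grey matter (0) vs WML (1),
# separated in a 2-D intensity feature space.
gm = rng.normal([0.3, 0.4], 0.05, size=(100, 2))
wml = rng.normal([0.8, 0.7], 0.05, size=(100, 2))
X = np.vstack([gm, wml])
y = np.array([0] * 100 + [1] * 100)

test = np.array([[0.31, 0.41], [0.79, 0.69]])
print(knn_classify(X, y, test))  # → [0 1]
```

Tissue volumes then follow by counting the voxels assigned to each class and multiplying by the voxel volume.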

    Adaptive stochastic gradient descent optimisation for image registration.

    We present a stochastic gradient descent optimisation method for image registration with adaptive step size prediction. The method is based on the theoretical work by Plakhov and Cruz (J. Math. Sci. 120(1):964–973, 2004). Our main methodological contribution is the derivation of an image-driven mechanism to select proper values for the most important free parameters of the method. The selection mechanism employs general characteristics of the cost functions that commonly occur in intensity-based image registration. Also, the theoretical convergence conditions of the optimisation method are taken into account. The proposed adaptive stochastic gradient descent (ASGD) method is compared to a standard, non-adaptive Robbins–Monro (RM) algorithm. Both ASGD and RM employ a stochastic subsampling technique to accelerate the optimisation process. Registration experiments were performed on 3D CT and MR data of the head, lungs, and prostate, using various similarity measures and transformation models. The results indicate that ASGD is robust to these variations in the registration framework and is less sensitive to the settings of the user-defined parameters than RM. The main disadvantage of RM is the need for a predetermined step size function. The ASGD method provides a solution for that issue
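A much-simplified sketch of the adaptive step-size idea: successive stochastic gradients that agree in direction keep the step size large, while oscillating gradients advance an artificial "time" that shrinks it. The sigmoid and all constants below are illustrative assumptions, not the paper's derived image-driven settings.

```python
import numpy as np

# Simplified sketch of adaptive stochastic gradient descent in the
# spirit of the ASGD method above. Constants are illustrative only.

def asgd(grad, x0, a=1.0, A=10.0, alpha=0.602, n_iter=300, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    t, g_prev = 0.0, np.zeros_like(x)
    for _ in range(n_iter):
        g = grad(x) + rng.normal(0.0, 0.05, size=x.shape)  # noisy gradient
        u = -(g @ g_prev)                    # > 0 when gradients oscillate
        f = 2.0 / (1.0 + np.exp(-u)) - 1.0   # sigmoid in (-1, 1)
        t = max(0.0, t + f)                  # adaptive "time" update
        step = a / (t + A) ** alpha          # decaying step size
        x = x - step * g
        g_prev = g
    return x

# Toy quadratic cost 0.5 * ||x - target||^2 with gradient x - target,
# standing in for an intensity-based registration cost function.
target = np.array([1.0, -2.0])
x_opt = asgd(lambda x: x - target, x0=[0.0, 0.0])
print(np.round(x_opt, 1))
```

Far from the optimum the gradients agree, `t` stays small, and the steps remain large; near the optimum the noise makes successive gradients oscillate, `t` grows, and the step size decays — which is what removes the need for a predetermined step size function.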

    Deep Learning from Dual-Energy Information for Whole-Heart Segmentation in Dual-Energy and Single-Energy Non-Contrast-Enhanced Cardiac CT

    Deep learning-based whole-heart segmentation in coronary CT angiography (CCTA) allows the extraction of quantitative imaging measures for cardiovascular risk prediction. Automatic extraction of these measures in patients undergoing only non-contrast-enhanced CT (NCCT) scanning would be valuable. In this work, we leverage information provided by a dual-layer detector CT scanner to obtain a reference standard in virtual non-contrast (VNC) CT images mimicking NCCT images, and train a 3D convolutional neural network (CNN) for the segmentation of VNC as well as NCCT images. Contrast-enhanced acquisitions on a dual-layer detector CT scanner were reconstructed into a CCTA and a perfectly aligned VNC image. In each CCTA image, manual reference segmentations of the left ventricular (LV) myocardium, LV cavity, right ventricle, left atrium, right atrium, ascending aorta, and pulmonary artery trunk were obtained and propagated to the corresponding VNC image. These VNC images and reference segmentations were used to train 3D CNNs for automatic segmentation in either VNC images or NCCT images. Automatic segmentations in VNC images showed good agreement with reference segmentations, with an average Dice similarity coefficient of 0.897 ± 0.034 and an average symmetric surface distance of 1.42 ± 0.45 mm. Volume differences [95% confidence interval] between automatic NCCT and reference CCTA segmentations were -19 [-67; 30] mL for LV myocardium, -25 [-78; 29] mL for LV cavity, -29 [-73; 14] mL for right ventricle, -20 [-62; 21] mL for left atrium, and -19 [-73; 34] mL for right atrium. In 214 (74%) NCCT images from an independent multi-vendor multi-center set, two observers agreed that the automatic segmentation was mostly accurate or better. This method might enable quantification of additional cardiac measures from NCCT images for improved cardiovascular risk prediction
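The agreement figures above are Dice similarity coefficients. For reference, the DSC between a binary automatic mask and a binary reference mask is computed as follows; the arrays are toy examples, not cardiac segmentations.

```python
import numpy as np

# Dice similarity coefficient between two binary masks:
# DSC = 2 |A ∩ B| / (|A| + |B|). Toy 2-D masks stand in for 3-D volumes.

def dice(auto_mask, ref_mask):
    a = np.asarray(auto_mask, dtype=bool)
    b = np.asarray(ref_mask, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((8, 8), dtype=bool); auto[2:6, 2:6] = True  # 16 voxels
ref = np.zeros((8, 8), dtype=bool); ref[3:7, 3:7] = True    # 16 voxels
print(round(dice(auto, ref), 3))  # overlap is 9 voxels → 2*9/32 = 0.5625
```

A DSC of 1.0 means perfect overlap; the reported 0.897 ± 0.034 therefore indicates close but not voxel-perfect agreement with the reference.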

    Deep Learning-Based Regression and Classification for Automatic Landmark Localization in Medical Images

    In this study, we propose a fast and accurate method to automatically localize anatomical landmarks in medical images. We employ a global-to-local localization approach using fully convolutional neural networks (FCNNs). First, a global FCNN localizes multiple landmarks through the analysis of image patches, performing regression and classification simultaneously. In regression, displacement vectors pointing from the center of image patches towards landmark locations are determined. In classification, presence of landmarks of interest in the patch is established. Global landmark locations are obtained by averaging the predicted displacement vectors, where the contribution of each displacement vector is weighted by the posterior classification probability of the patch that it is pointing from. Subsequently, for each landmark localized with global localization, local analysis is performed. Specialized FCNNs refine the global landmark locations by analyzing local sub-images in a similar manner, i.e. by performing regression and classification simultaneously and combining the results. Evaluation was performed through localization of 8 anatomical landmarks in coronary CT angiography (CCTA) scans, 2 landmarks in olfactory MR scans, and 19 landmarks in cephalometric X-rays. We demonstrate that the method performs similarly to a second observer and is able to localize landmarks in a diverse set of medical images, differing in image modality, image dimensionality, and anatomical coverage. Comment: 12 pages, accepted at IEEE Transactions on Medical Imaging
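The global localization step — averaging patch-wise displacement votes weighted by each patch's posterior classification probability — can be sketched with synthetic numbers:

```python
import numpy as np

# Sketch of the global-localization step: each patch votes with
# (patch centre + predicted displacement), weighted by its posterior
# classification probability. All values below are synthetic.

def localize(patch_centers, displacements, probs):
    """Probability-weighted average of candidate landmark positions."""
    candidates = patch_centers + displacements   # one vote per patch
    w = probs / probs.sum()
    return (w[:, None] * candidates).sum(axis=0)

centers = np.array([[10.0, 10.0], [30.0, 20.0], [50.0, 40.0]])
disp = np.array([[5.0, 3.0], [-15.0, -7.0], [-35.0, -27.0]])
prob = np.array([0.9, 0.8, 0.1])  # posterior classification probabilities

print(localize(centers, disp, prob))  # → [15. 13.]
```

Patches unlikely to contain the landmark contribute little to the average, so spurious displacement predictions are suppressed without an explicit outlier-rejection step.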

    Optimization Strategies for Interactive Classification of Interstitial Lung Disease Textures

    For computerized analysis of textures in interstitial lung disease, manual annotations of lung tissue are necessary. Since making these annotations is labor intensive, we previously proposed an interactive annotation framework. In this framework, observers iteratively trained a classifier to distinguish the different texture types by correcting its classification errors. In this work, we investigated three ways to extend this approach, in order to decrease the amount of user interaction required to annotate all lung tissue in a computed tomography scan. First, we conducted automatic classification experiments to test how data from previously annotated scans can be used for classification of the scan under consideration. We compared the performance of a classifier trained on data from one observer, a classifier trained on data from multiple observers, a classifier trained on consensus training data, and an ensemble of classifiers, each trained on data from different sources. Experiments were conducted without and with texture selection (ts). In the former case, training data from all eight textures was used. In the latter, only training data from the texture types present in the scan were used, and the observer would have to indicate textures contained in the scan to be analyzed. Second, we simulated interactive annotation to test the effects of (1) asking observers to perform ts before the start of annotation, (2) the use of a classifier trained on data from previously annotated scans at the start of annotation, when the interactive classifier is untrained, and (3) allowing observers to choose which interactive or automatic classification results they wanted to correct. Finally, various strategies for selecting the classification results that were presented to the observer were considered. Classification accuracies for all possible interactive annotation scenarios were compared. 
Using the best-performing protocol, in which observers select the textures that should be distinguished in the scan and in which they can choose which classification results to use for correction, a median accuracy of 88% was reached. The results obtained using this protocol were significantly better than results obtained with other interactive or automatic classification protocols
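The texture-selection (ts) step can be sketched as a simple filter on the training data. The eight-class setup follows the abstract's mention of eight textures, but the texture names and feature values below are hypothetical illustrations.

```python
import numpy as np

# Sketch of "texture selection" (ts): before classification, the
# observer indicates which texture types occur in the scan, and the
# training data is restricted to those classes. Texture names and
# features are hypothetical; only the eight-class count is from the text.

ALL_TEXTURES = ["normal", "ground-glass", "honeycombing", "emphysema",
                "consolidation", "nodular", "reticular", "cysts"]

def select_training_data(features, labels, present):
    """Keep only training samples whose texture the observer marked
    as present in the scan under consideration."""
    keep = np.isin(labels, list(present))
    return features[keep], labels[keep]

rng = np.random.default_rng(7)
feats = rng.normal(size=(400, 5))
labels = rng.choice(ALL_TEXTURES, size=400)

X_ts, y_ts = select_training_data(feats, labels, {"normal", "emphysema"})
print(sorted(set(y_ts)))  # → ['emphysema', 'normal']
```

Restricting the label set this way shrinks the decision problem the classifier faces, which is one reason the ts protocols above required fewer corrections from the observer.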