
    Scanner Invariant Representations for Diffusion MRI Harmonization

    Purpose: In the present work, we describe the correction of diffusion-weighted MRI for site and scanner biases using a novel method based on invariant representation. Theory and Methods: Pooled imaging data from multiple sources are subject to variation between the sources. Correcting for these biases has become very important as imaging studies increase in size and multi-site cases become more common. We propose learning an intermediate representation invariant to site/protocol variables, a technique adapted from information theory-based algorithmic fairness; by leveraging the data processing inequality, such a representation can then be used to create an image reconstruction that is uninformative of its original source, yet still faithful to the underlying structures. To implement this, we use a deep learning method based on variational auto-encoders (VAE) to construct scanner-invariant encodings of the imaging data. Results: To evaluate our method, we use training data from the 2018 MICCAI Computational Diffusion MRI (CDMRI) Challenge Harmonization dataset. Our proposed method shows improvements on independent test data relative to a recently published baseline method on each subtask, mapping data from three different scanning contexts to and from one separate target scanning context. Conclusion: As imaging studies continue to grow, the use of pooled multi-site imaging will similarly increase. Invariant representation presents a strong candidate for the harmonization of these data.
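
    The core idea lends itself to a short sketch: encode the signal without the site label, condition the decoder on a one-hot site code, and re-decode with the target site's code to harmonize. The sketch below is a minimal site-conditional VAE in PyTorch under assumed layer and input sizes (e.g. 288 diffusion measurements per voxel); it is not the authors' exact architecture or training setup.

```python
# Minimal sketch of a site-conditional VAE for harmonization (hypothetical
# layer sizes; n_in = 288 stands for an assumed number of diffusion
# measurements per voxel). The encoder sees only the signal; the decoder is
# conditioned on a one-hot site code, so the latent z need not carry site
# information.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiteConditionalVAE(nn.Module):
    def __init__(self, n_in=288, n_sites=4, n_latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * n_latent))
        self.dec = nn.Sequential(nn.Linear(n_latent + n_sites, 256), nn.ReLU(),
                                 nn.Linear(256, n_in))

    def forward(self, x, site_onehot):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        x_hat = self.dec(torch.cat([z, site_onehot], dim=-1))
        return x_hat, mu, logvar

def vae_loss(x, x_hat, mu, logvar, beta=1.0):
    rec = F.mse_loss(x_hat, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl

# Harmonization: encode once, then decode with the *target* site's one-hot code:
# x_target = model.dec(torch.cat([mu, target_onehot], dim=-1))
```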

    Simultaneous Matrix Diagonalization for Structural Brain Networks Classification

    This paper considers the problem of brain disease classification based on connectome data. A connectome is a network representation of the human brain. The typical connectome classification problem is very challenging because of the small sample size and high dimensionality of the data. We propose to use simultaneous approximate diagonalization of adjacency matrices in order to compute their eigenstructures in a more stable way. The resulting approximate eigenvalues are then used as features for classification. The proposed approach is demonstrated to be effective for the detection of Alzheimer's disease, outperforming simple baselines and competing with state-of-the-art approaches to brain disease classification.
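
    As a rough illustration of the feature-extraction step, the sketch below diagonalizes every adjacency matrix in a single shared basis (taken here from the mean adjacency matrix, a crude stand-in for a proper simultaneous approximate diagonalization) and uses the resulting approximate eigenvalues as classification features. Array shapes and the downstream classifier are assumptions, not taken from the paper.

```python
# Crude sketch: diagonalize all connectomes in one shared basis and use the
# resulting approximate eigenvalues as features. The shared basis is taken
# from the mean adjacency matrix, standing in for a proper simultaneous
# approximate diagonalization.
import numpy as np

def approx_joint_eigenvalues(adjacencies):
    """adjacencies: (n_subjects, n_nodes, n_nodes) array of symmetric matrices."""
    A = np.asarray(adjacencies, dtype=float)
    _, V = np.linalg.eigh(A.mean(axis=0))          # shared approximate eigenbasis
    # diag(V^T A_i V): approximate eigenvalues of each subject in that basis
    return np.stack([np.diag(V.T @ Ai @ V) for Ai in A])

# The resulting (n_subjects, n_nodes) feature matrix can then be fed to any
# standard classifier, e.g. logistic regression or an SVM.
```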

    iPINNs: Incremental learning for Physics-informed neural networks

    Physics-informed neural networks (PINNs) have recently become a powerful tool for solving partial differential equations (PDEs). However, finding a set of neural network parameters that satisfies a PDE can be challenging and non-unique due to the complexity of the loss landscape that must be traversed. Although a variety of multi-task learning and transfer learning approaches have been proposed to overcome these issues, there is no incremental training procedure for PINNs that can effectively mitigate such training challenges. We propose incremental PINNs (iPINNs), which can learn multiple tasks (equations) sequentially without additional parameters for new tasks and improve performance for every equation in the sequence. Our approach learns multiple PDEs, starting from the simplest one, by creating a dedicated subnetwork for each PDE and allowing the subnetworks to overlap with previously learned ones. We demonstrate that previous subnetworks are a good initialization for a new equation if the PDEs share similarities. We also show that iPINNs achieve lower prediction error than regular PINNs in two different scenarios: (1) learning a family of equations (e.g., 1-D convection PDE); and (2) learning PDEs resulting from a combination of processes (e.g., 1-D reaction-diffusion PDE). The ability to learn all problems with a single network, together with better generalization on more complex PDEs than regular PINNs, opens new avenues in this field.
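
    To make the training objective concrete, the sketch below shows a plain single-equation PINN loss for the 1-D convection equation u_t + c u_x = 0 in PyTorch: collocation points penalize the PDE residual and a boundary term enforces an assumed initial condition. The iPINN-specific machinery (per-equation subnetworks with overlapping masks) is intentionally omitted.

```python
# Single-equation PINN sketch for u_t + c * u_x = 0 on (x, t) in [0, 1]^2 with
# an assumed initial condition u(x, 0) = sin(2*pi*x). The iPINN subnetwork
# masking for sequential equations is not shown.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
c = 1.0
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(xt):
    xt = xt.clone().requires_grad_(True)
    u = net(xt)
    du = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = du[:, 0:1], du[:, 1:2]
    return u_t + c * u_x

for step in range(2000):
    xt = torch.rand(256, 2)                                  # collocation points
    x0 = torch.cat([torch.rand(64, 1), torch.zeros(64, 1)], dim=1)
    ic = net(x0) - torch.sin(2 * torch.pi * x0[:, 0:1])      # initial condition
    loss = pde_residual(xt).pow(2).mean() + ic.pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```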

    A Brief Prehistory of Double Descent

    In their thought-provoking paper [1], Belkin et al. illustrate and discuss the shape of risk curves in the context of modern high-complexity learners. Given a fixed training sample size n, such curves show the risk of a learner as a function of some (approximate) measure of its complexity N. With N the number of features, these curves are also referred to as feature curves. A salient observation in [1] is that these curves can display what they call double descent: with increasing N, the risk initially decreases, attains a minimum, and then increases until N equals n, where the training data is fitted perfectly. Increasing N even further, the risk decreases a second and final time, creating a peak at N = n. This twofold descent may come as a surprise, but as opposed to what [1] reports, it has not been overlooked historically. Our letter draws attention to some original, earlier findings of interest to contemporary machine learning.
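
    The shape of such a feature curve is easy to reproduce numerically. The sketch below fits minimum-norm least squares on random ReLU features of a noisy 1-D target and prints the test error as the number of features N sweeps past the sample size n; the target function, noise level, and feature map are illustrative choices, not taken from the letter.

```python
# Numerical illustration of a double-descent feature curve: minimum-norm least
# squares on random ReLU features of a noisy 1-D target.
import numpy as np

rng = np.random.default_rng(0)
n, n_test = 40, 1000
f = lambda x: np.sin(2 * np.pi * x)
x, x_test = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n_test)
y = f(x) + 0.1 * rng.standard_normal(n)

def relu_features(x, W, b):
    return np.maximum(0.0, np.outer(x, W) + b)   # shape (len(x), N)

for N in [5, 10, 20, 35, 40, 45, 60, 100, 200, 400]:
    W, b = rng.standard_normal(N), rng.standard_normal(N)
    w = np.linalg.pinv(relu_features(x, W, b)) @ y           # minimum-norm fit
    err = np.mean((relu_features(x_test, W, b) @ w - f(x_test)) ** 2)
    print(f"N={N:4d}  test MSE={err:.3f}")       # risk typically peaks near N=n
```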

    PLPD: reliable protein localization prediction from imbalanced and overlapped datasets

    Subcellular localization is one of the key functional characteristics of proteins. An automatic and efficient method for predicting protein subcellular localization is highly desirable given the needs of large-scale genome analysis. From a machine learning point of view, a protein localization dataset has several characteristics: it has many classes (there are more than 10 localizations in a cell), it is multi-label (a protein may occur in several different subcellular locations), and it is highly imbalanced (the number of proteins per localization differs markedly). Although much previous work has addressed the prediction of protein subcellular localization, none of it tackles all of these characteristics effectively at the same time. Thus, a new computational method for protein localization is needed for more reliable outcomes. To address this, we present PLPD, a protein localization predictor based on D-SVDD, which can estimate the likelihood of a specific localization of a protein more easily and more accurately. Moreover, we introduce three measures for more precise evaluation of a protein localization predictor. On various datasets derived from the experiments of Huh et al. (2003), the proposed PLPD method represents a different approach that can play a complementary role to existing methods, such as the nearest-neighbor method and the discriminant covariant method. Finally, after finding a good boundary for each localization using the 5184 classified proteins as training data, we predicted 138 proteins whose subcellular localizations could not be clearly observed in the experiments of Huh et al. (2003).
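
    The per-localization boundary idea can be sketched with a one-class model per compartment: a protein is assigned every localization whose boundary accepts its feature vector, which naturally handles the multi-label case. In the sketch below, scikit-learn's OneClassSVM stands in for the paper's D-SVDD formulation, and the protein feature vectors are assumed to be precomputed; the function names are placeholders.

```python
# One boundary per localization; OneClassSVM stands in for D-SVDD. Protein
# feature vectors are assumed to be precomputed.
import numpy as np
from sklearn.svm import OneClassSVM

def fit_per_location(features_by_location, nu=0.1):
    """features_by_location: dict mapping location name -> (n_i, d) array."""
    return {loc: OneClassSVM(nu=nu, gamma="scale").fit(X)
            for loc, X in features_by_location.items()}

def predict_locations(models, x, threshold=0.0):
    """Multi-label prediction: every location whose boundary accepts x."""
    scores = {loc: m.decision_function(x.reshape(1, -1))[0]
              for loc, m in models.items()}
    return [loc for loc, s in scores.items() if s > threshold], scores
```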

    Taxation and Development: a Review of Donor Support to Strengthen Tax Systems in Developing Countries

    Recent years have seen a growing interest among donors in taxation in developing countries. This reflects a concern for domestic revenue mobilization to finance public goods and services, as well as recognition of the centrality of taxation for growth and redistribution. The global financial crisis has also led many donor countries to pay more attention to the extent and effectiveness of the aid they provide, and to ensuring that they support rather than discourage the developing countries’ own revenue-raising efforts. This paper reviews the state of knowledge on aid and tax reform in developing countries, with a particular focus on sub-Saharan Africa. Four main issues are addressed: (1) the impacts of donor assistance to strengthen tax systems, including what has worked, or not, and why; (2) challenges in ‘scaling up’ donor efforts; (3) how best to provide assistance to reform tax systems; and (4) knowledge gaps to be filled in order to design better donor interventions. The paper argues that donors should complement the traditional ‘technical’ approach to tax reform with measures that encourage constructive engagement between governments and citizens over tax issues.
    Funding: Department for International Development; Bill and Melinda Gates Foundation.

    On Quantifying Local Geometric Structures of Fiber Tracts

    In diffusion MRI, fiber tracts, represented by densely distributed 3D curves, can be estimated from diffusion-weighted images using tractography. The spatial geometric structure of white matter fiber tracts is known to be complex in the human brain, but it carries intrinsic information about the brain. In this paper, inspired by studies of liquid crystals, we propose tract-based director field analysis (tDFA), with a total of six rotationally invariant scalar indices, to quantify local geometric structures of fiber tracts. The contributions of tDFA include: 1) We propose orientational order (OO) and orientational dispersion (OD) indices to quantify the degree of alignment and dispersion of fiber tracts; 2) We define the local orthogonal frame for a set of unoriented curves, which is proved to be a generalization of the Frenet frame defined for a single oriented curve; 3) With the local orthogonal frame, we propose splay, bend, and twist indices to quantify three types of orientational distortion of local fiber tracts, and a total distortion index to describe distortions of all three types. The proposed tDFA for fiber tracts is a generalization of voxel-based DFA (vDFA), which was recently proposed for a spherical function field (i.e., an ODF field). To our knowledge, this is the first work to quantify orientational distortion (splay, bend, twist, and total distortion) of fiber tracts. Experiments show that the proposed scalar indices are useful descriptors of local geometric structures for visualizing and analyzing fiber tracts.
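
    The orientational order index has a close analogue in the nematic order parameter from liquid-crystal physics, which can be computed from a set of local, unoriented fiber directions via the Q-tensor. The sketch below follows that analogue; it mirrors the spirit of the OO/OD indices but is not the paper's exact definition.

```python
# Nematic-style orientational order for a set of local, unoriented unit
# directions (the sign of each vector is irrelevant, as for fiber tangents).
import numpy as np

def orientational_order(directions):
    """directions: (n, 3) array of local fiber tangent vectors."""
    d = np.asarray(directions, dtype=float)
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    Q = 1.5 * (d.T @ d) / len(d) - 0.5 * np.eye(3)   # nematic Q-tensor
    order = np.linalg.eigvalsh(Q)[-1]                # ~1 aligned, ~0 isotropic
    return order, 1.0 - order   # (order, a simple dispersion counterpart)
```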

    Improved neonatal brain MRI segmentation by interpolation of motion corrupted slices

    BACKGROUND AND PURPOSE: To apply and evaluate an intensity‐based interpolation technique enabling segmentation of motion‐affected neonatal brain MRI. METHODS: Moderate‐late preterm infants were enrolled in a prospective cohort study (Brain Imaging in Moderate‐late Preterm infants, “BIMP‐study”) between August 2017 and November 2019. T2‐weighted MRI was performed around term‐equivalent age on a 3T scanner. Scans without motion (n = 27 [24%], control group) and with moderate‐severe motion (n = 33 [29%]) were included. Motion‐affected slices were re‐estimated using intensity‐based, shape‐preserving cubic spline interpolation and automatically segmented into eight structures. Quality of interpolation and segmentation was visually assessed for errors after interpolation. Reliability was tested using interpolated control group scans (18/54 axial slices). The structural similarity index (SSIM) was used to compare T2‐weighted scans, and the Sørensen‐Dice coefficient was used to compare segmentations before and after interpolation. Finally, volumes of brain structures of the control group were used to assess sensitivity (absolute mean fraction difference) and bias (confidence interval of the mean difference). RESULTS: Visually, segmentation of 25 scans (22%) with motion artifacts improved with interpolation, while segmentation of eight scans (7%) with adjacent motion‐affected slices did not improve. Average SSIM was .895 and Sørensen‐Dice coefficients ranged between .87 and .97. The absolute mean fraction difference was ≤0.17 for five or fewer interpolated slices. Confidence intervals revealed a small bias for cortical gray matter (0.14‐3.07 cm³), cerebrospinal fluid (0.39‐1.65 cm³), deep gray matter (0.74‐1.01 cm³), and brainstem volumes (0.07‐0.28 cm³), and a negative bias in white matter volumes (–4.47 to –1.65 cm³). CONCLUSION: According to qualitative and quantitative assessment, intensity‐based interpolation reduced the percentage of discarded scans from 29% to 7%.
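
    The slice re-estimation step can be sketched as voxelwise shape-preserving (PCHIP) interpolation along the slice axis, supported only by the uncorrupted slices. The snippet below is an illustration under assumed array conventions (a 3-D volume with slices along the last axis), not the study's actual pipeline.

```python
# Voxelwise shape-preserving (PCHIP) interpolation of motion-affected slices,
# using only the good slices as support.
import numpy as np
from scipy.interpolate import PchipInterpolator

def reestimate_slices(volume, bad_slices):
    """volume: (x, y, z) array; bad_slices: indices of motion-affected slices."""
    bad = np.asarray(bad_slices)
    good = np.setdiff1d(np.arange(volume.shape[2]), bad)
    interp = PchipInterpolator(good, volume[:, :, good], axis=2)
    out = volume.astype(float).copy()
    out[:, :, bad] = interp(bad)        # re-estimate only the corrupted slices
    return out
```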

    Quality Assurance of Spectral Ultraviolet Measurements in Europe Through the Development of a Transportable Unit (QASUME)

    QASUME is a European Commission-funded project that aims to develop and test a transportable unit for providing quality assurance to UV spectroradiometric measurements conducted in Europe. The comparisons will be performed at the home sites of the instruments, thus avoiding the risk of transporting instruments to participate in intercomparison campaigns. Spectral measurements obtained at each of the stations will be compared, following detailed and objective comparison protocols, against collocated measurements performed by a thoroughly tested and validated travelling unit. The transportable unit comprises a spectroradiometer, its calibrator with a set of calibration lamps traceable to the sources of different Standards Laboratories, and devices for determining the slit function and the angular response of the local spectroradiometers. The unit will be transported by road to about 25 UV stations over a period of about two years. The spectroradiometer of the transportable unit was compared with six instruments in an intercomparison campaign to establish a relation, which is then used as a reference for its calibration over the period of its regular operation at the European stations. Different weather patterns (from clear skies to heavy rain) were present during the campaign, allowing the performance of the spectroradiometers to be evaluated under unfavourable conditions (as may be experienced at home sites) as well as under the more desirable dry conditions. Measurements in the laboratory revealed that the calibration standards of the spectroradiometers differ by up to 10%. The evaluation is completed through comparisons with the same six instruments at their home sites.
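
    A collocated comparison of this kind is typically quantified as a spectral ratio between the local instrument and the travelling reference after resampling onto a common wavelength grid. The sketch below shows only that ratio computation; the wavelength grid, resampling choice, and variable names are placeholders rather than part of the QASUME protocol.

```python
# Collocated comparison as a spectral ratio: resample the local instrument's
# spectrum onto the reference wavelength grid and divide. Wavelengths in nm,
# spectral irradiance in W m^-2 nm^-1.
import numpy as np

def spectral_ratio(wl_ref, e_ref, wl_local, e_local):
    e_local_on_ref = np.interp(wl_ref, wl_local, e_local)   # linear resampling
    return e_local_on_ref / e_ref       # e.g. a ratio of 1.10 means +10%
```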