Spectral Graph Convolutions for Population-based Disease Prediction
Exploiting the wealth of imaging and non-imaging information for disease
prediction tasks requires models capable of representing, at the same time,
individual features as well as data associations between subjects from
potentially large populations. Graphs provide a natural framework for such
tasks, yet previous graph-based approaches focus on pairwise similarities
without modelling the subjects' individual characteristics and features. On the
other hand, relying solely on subject-specific imaging feature vectors fails to
model the interaction and similarity between subjects, which can reduce
performance. In this paper, we introduce the novel concept of Graph
Convolutional Networks (GCN) for brain analysis in populations, combining
imaging and non-imaging data. We represent populations as a sparse graph whose
vertices are associated with image-based feature vectors and whose edges
encode phenotypic information. This structure is used to train a GCN model on
partially labelled graphs, aiming to infer the classes of unlabelled nodes from
the node features and pairwise associations between subjects. We demonstrate
the potential of the method on the challenging ADNI and ABIDE databases, as a
proof of concept of the benefit of integrating contextual information in
classification tasks. This has a clear impact on the quality of the
predictions, leading to 69.5% accuracy for ABIDE (outperforming the current
state of the art of 66.8%) and 77% for ADNI for prediction of MCI conversion,
significantly outperforming standard linear classifiers where only individual
features are considered.
Comment: International Conference on Medical Image Computing and Computer-Assisted Interventions (MICCAI) 2017
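A minimal sketch of the population-graph construction and of a single graph-convolution layer in the spirit of this abstract, assuming a Kipf & Welling-style renormalised adjacency; the Gaussian feature-similarity kernel, the phenotypic agreement rule and all parameter names are illustrative assumptions rather than the authors' exact formulation:

```python
import numpy as np

def population_adjacency(features, phenotypes, sigma=1.0):
    """features: (n_subjects, d) image-derived feature vectors;
    phenotypes: (n_subjects, p) categorical non-imaging measures (e.g. sex, site)."""
    n = features.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # similarity of the imaging feature vectors (Gaussian kernel)
            sim = np.exp(-np.sum((features[i] - features[j]) ** 2) / (2.0 * sigma ** 2))
            # phenotypic agreement: number of non-imaging measures the two subjects share
            agree = np.sum(phenotypes[i] == phenotypes[j])
            W[i, j] = W[j, i] = sim * agree
    return W

def gcn_layer(W, X, Theta):
    """One graph-convolution layer: ReLU(D^{-1/2} (W + I) D^{-1/2} X Theta)."""
    A = W + np.eye(W.shape[0])              # add self-connections
    d = A.sum(axis=1)
    A_norm = A / np.sqrt(np.outer(d, d))    # symmetric normalisation
    return np.maximum(A_norm @ X @ Theta, 0.0)
```

In the semi-supervised setting described in the abstract, a small stack of such layers followed by a softmax is trained with a loss evaluated only on the labelled vertices, and the output at the unlabelled vertices gives the predicted diagnostic class.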
Linear models of activation cascades: analytical solutions and coarse-graining of delayed signal transduction
Cellular signal transduction usually involves activation cascades, the
sequential activation of a series of proteins following the reception of an
input signal. Here we study the classic model of weakly activated cascades and
obtain analytical solutions for a variety of inputs. We show that in the
special but important case of optimal-gain cascades (i.e., when the
deactivation rates are identical) the downstream output of the cascade can be
represented exactly as a lumped nonlinear module containing an incomplete gamma
function with real parameters that depend on the rates and length of the
cascade, as well as parameters of the input signal. The expressions obtained
can be applied to the non-identical case when the deactivation rates are random
to capture the variability in the cascade outputs. We also show that cascades
can be rearranged so that blocks with similar rates can be lumped and
represented through our nonlinear modules. Our results can be used both to
represent cascades in computational models of differential equations and to fit
data efficiently, by reducing the number of equations and parameters involved.
In particular, the length of the cascade appears as a real-valued parameter and
can thus be fitted in the same manner as Hill coefficients. Finally, we show
how the obtained nonlinear modules can be used instead of delay differential
equations to model delays in signal transduction.
Comment: 18 pages, 7 figures
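As a worked illustration of the lumped module described above, the sketch below assumes the simplest setting: identical deactivation rates beta and a unit step input switched on at t = 0, in which case the cascade output follows a regularised lower incomplete gamma function whose "length" parameter may be non-integer and can therefore be fitted like a Hill coefficient. The general expressions in the paper cover further input classes; function and parameter names here are illustrative.

```python
import numpy as np
from scipy.special import gammainc   # regularised lower incomplete gamma P(n, x)

def cascade_output(t, n, beta, gain=1.0):
    """Output of an n-step weakly activated cascade (n may be real-valued) with
    identical deactivation rates beta and a unit step input: gain * P(n, beta * t)."""
    return gain * gammainc(n, beta * np.asarray(t, dtype=float))

# Example: a 3-step cascade relaxes to its steady state as P(3, beta*t);
# a fractional n (e.g. n = 2.4) can be used directly when fitting data.
t = np.linspace(0.0, 10.0, 6)
print(cascade_output(t, n=3.0, beta=1.0))
```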
Deriving a multi-subject functional-connectivity atlas to inform connectome estimation
MICCAI 2014 preprint. The estimation of functional connectivity structure from functional neuroimaging data is an important step toward understanding the mechanisms of various brain diseases and building relevant biomarkers. Yet, such inferences have to deal with the low signal-to-noise ratio and the paucity of the data. With a steadily growing volume of publicly available neuroimaging data at our disposal, it is however possible to improve the estimation procedures involved in connectome mapping. In this work, we propose a novel learning scheme for functional connectivity based on sparse Gaussian graphical models that aims at minimizing the bias induced by the regularization used in the estimation, by carefully separating the estimation of the model support from that of the coefficients. Moreover, our strategy makes it possible to include new data at a limited computational cost. We illustrate the physiological relevance of the learned prior, which can be identified as a functional connectivity atlas, based on an experiment on 46 subjects of the Human Connectome Dataset.
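A minimal sketch of the "separate the support from the coefficients" idea, assuming per-subject time-series matrices and scikit-learn's graphical lasso; the penalty, voting threshold and pooling rule are illustrative assumptions, not the estimator used in the paper:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def consensus_support(subject_timeseries, alpha=0.05, vote=0.5):
    """subject_timeseries: list of (n_samples, n_regions) arrays.
    Returns a binary matrix of edges selected in at least `vote` of the subjects,
    playing the role of a population-level connectivity prior (an 'atlas')."""
    supports = []
    for X in subject_timeseries:
        # sparse precision estimate for one subject
        prec = GraphicalLasso(alpha=alpha).fit(X).precision_
        supports.append(np.abs(prec) > 1e-6)
    # edge is kept if it appears in a sufficient fraction of subjects
    support = np.mean(supports, axis=0) >= vote
    np.fill_diagonal(support, True)
    return support
```

A second step, not shown here, would re-estimate each subject's precision coefficients by maximum likelihood restricted to this common support, which is what removes the shrinkage bias introduced by the l1 penalty.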
Nonlinear spinor field in Bianchi type-I Universe filled with viscous fluid: numerical solutions
We consider a system of nonlinear spinor and Bianchi type-I (BI) gravitational
fields in the presence of a viscous fluid. The nonlinear term in the spinor field
Lagrangian is chosen to be $\lambda F$, with $\lambda$ being a self-coupling
constant and $F$ being a function of the invariants $I$ and $J$ constructed from
the bilinear spinor forms $S$ and $P$. Self-consistent solutions to the spinor and
BI gravitational field equations are obtained in terms of $\tau$, where $\tau$
is the volume scale of the BI universe. The system of equations for $\tau$ and
$\varepsilon$, where $\varepsilon$ is the energy density of the viscous fluid, is
deduced. This system is solved numerically for some special cases.
Comment: 15 pages, 4 figures
Squeeze-and-Breathe Evolutionary Monte Carlo Optimisation with Local Search Acceleration and its application to parameter fitting
Motivation: Estimating parameters from data is a key stage of the modelling
process, particularly in biological systems where many parameters need to be
estimated from sparse and noisy data sets. Over the years, a variety of
heuristics have been proposed to solve this complex optimisation problem, with
good results in some cases yet with limitations in the biological setting.
Results: In this work, we develop an algorithm for model parameter fitting
that combines ideas from evolutionary algorithms, sequential Monte Carlo and
direct search optimisation. Our method performs well even when the order of
magnitude and/or the range of the parameters is unknown. The method iteratively
refines a sequence of parameter distributions through local optimisation
combined with partial resampling from a historical prior defined over the
support of all previous iterations. We exemplify our method with biological
models using both simulated and real experimental data and estimate the
parameters efficiently even in the absence of a priori knowledge about the
parameters.
Comment: 15 pages, 3 figures, 6 tables; Availability: Matlab code available from the authors upon request
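A hedged sketch of the iterative loop described above, not the authors' Matlab implementation: each candidate parameter vector is locally optimised ("squeeze"), the best candidates are retained, and the population is partially replenished by resampling from a prior supported on all previous iterates ("breathe"). The objective, the resampling rule and the hyper-parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def squeeze_and_breathe(objective, init_samples, n_iter=20, keep_frac=0.5, rng=None):
    """objective: scalar function of a parameter vector;
    init_samples: (n_candidates, n_params) array of starting points."""
    rng = rng or np.random.default_rng(0)
    population = np.asarray(init_samples, dtype=float)
    history = population.copy()                      # support of all past iterations
    for _ in range(n_iter):
        # "Squeeze": direct-search local optimisation started from each candidate
        refined = np.array([minimize(objective, p, method="Nelder-Mead").x
                            for p in population])
        history = np.vstack([history, refined])
        # rank by objective value and keep the best fraction
        order = np.argsort([objective(p) for p in refined])
        n_keep = max(1, int(keep_frac * len(refined)))
        elite = refined[order[:n_keep]]
        # "Breathe": partially resample from a prior over the historical support
        lo, hi = history.min(axis=0), history.max(axis=0)
        fresh = rng.uniform(lo, hi, size=(len(population) - n_keep, population.shape[1]))
        population = np.vstack([elite, fresh])
    return min(history, key=objective)
```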
Genotoxicity evaluation of medical devices: A regulatory perspective
This review critically evaluates our current regulatory understanding of genotoxicity testing and risk assessment of medical devices. Genotoxicity risk assessment of these devices begins with the evaluation of the materials of construction, manufacturing additives and all residual materials for their potential to induce DNA damage. This is followed by extractable and/or leachable (E&L) studies to understand worst-case and/or clinical exposures, coupled with risk assessment of the extractables or leachables. The TTC (Threshold of Toxicological Concern) approach is used to define acceptable levels of genotoxic chemicals, when identified. Where appropriate, in silico predictions may be used to evaluate the genotoxic potential of identifiable chemicals that have limited toxicological data and are present above the levels defined by the TTC. Devices that cannot be supported by E&L studies are evaluated by in vitro genotoxicity studies conducted in accordance with ISO 10993-3 and ISO 10993-33. Certain endpoints, such as ‘site of contact genotoxicity’, that are specific to certain classes of medical devices are not addressed in the current standards. The review also illustrates how recent advances can be used to achieve robust genotoxicity assessment of medical devices, which are being used increasingly for health benefits, highlights the remaining gaps in their genotoxicity risk assessment, and suggests possible approaches to address them, taking into consideration recent advances in genotoxicity testing and their potential uses in biocompatibility assessment.
Genetic architecture of sporadic frontotemporal dementia and overlap with Alzheimer's and Parkinson's diseases
BACKGROUND: Clinical, pathological and genetic overlap between sporadic frontotemporal dementia (FTD), Alzheimer's disease (AD) and Parkinson's disease (PD) has been suggested; however, the relationship between these disorders is still not well understood. Here we evaluated genetic overlap between FTD, AD and PD to assess shared pathobiology and identify novel genetic variants associated with increased risk for FTD.
METHODS: Summary statistics were obtained from the International FTD Genomics Consortium, International PD Genetics Consortium and International Genomics of AD Project (n>75 000 cases and controls). We used conjunction false discovery rate (FDR) to evaluate genetic pleiotropy and conditional FDR to identify novel FTD-associated SNPs. Relevant variants were further evaluated for expression quantitative trait loci.
RESULTS: We observed SNPs within the HLA, MAPT and APOE regions jointly contributing to increased risk for FTD and AD or PD. By conditioning on polymorphisms associated with PD and AD, we found 11 loci associated with increased risk for FTD. Meta-analysis across two independent FTD cohorts revealed a genome-wide significant signal within the APOE region (rs6857, 3′-UTR=PVRL2, p=2.21×10−12), and a suggestive signal for rs1358071 within the MAPT region (intronic=CRHR1, p=4.91×10−7) with the effect allele tagging the H1 haplotype. Pleiotropic SNPs at the HLA and MAPT loci were associated with expression changes in cis-genes, supporting involvement of intracellular vesicular trafficking, immune response and endo/lysosomal processes.
CONCLUSIONS: Our findings demonstrate genetic pleiotropy in these neurodegenerative diseases and indicate that sporadic FTD is a polygenic disorder where multiple pleiotropic loci with small effects contribute to increased disease risk.
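A small sketch of one commonly used conservative empirical approximation to the conditional and conjunction FDR statistics mentioned in the methods above; the consortium pipelines add further steps (e.g. LD pruning, genomic control), and the p-value vectors and query points here are illustrative:

```python
import numpy as np

def conditional_fdr(p_primary, p_cond, p1, p2):
    """Empirical conditional FDR for the primary trait given the conditioning trait:
    approx. p1 * Pr(P2 <= p2) / Pr(P1 <= p1, P2 <= p2), with the probabilities
    replaced by empirical proportions (conservative, assumes pi0 ~ 1)."""
    p_primary = np.asarray(p_primary, dtype=float)
    p_cond = np.asarray(p_cond, dtype=float)
    n_cond = np.sum(p_cond <= p2)
    n_both = np.sum((p_primary <= p1) & (p_cond <= p2))
    if n_both == 0:
        return 1.0
    return min(1.0, p1 * n_cond / n_both)

def conjunction_fdr(p_a, p_b, pa, pb):
    """Conjunction FDR: the maximum of the two conditional FDRs, used to flag
    loci jointly associated with both traits."""
    return max(conditional_fdr(p_a, p_b, pa, pb),
               conditional_fdr(p_b, p_a, pb, pa))
```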
Random walk centrality for temporal networks
Nodes can be ranked according to their relative importance within a network. Ranking algorithms based on random walks are particularly useful because they connect topological and diffusive properties of the network. Previous methods based on random walks, for example PageRank, have focused on static structures. However, many realistic networks are dynamic, meaning that their structure changes in time. In this paper, we propose a centrality measure for temporal networks based on random walks under periodic boundary conditions that we call TempoRank. It is known that, in static networks, the stationary density of the random walk is proportional to the degree or the strength of a node. In contrast, we find that, in temporal networks, the stationary density is proportional to the in-strength of the so-called effective network, a weighted and directed network explicitly constructed from the original sequence of transition matrices. The stationary density also depends on the sojourn probability q, which regulates the tendency of the walker to stay in a node, and on the temporal resolution of the data. We apply our method to human interaction networks and show that, although it is important for a node to be connected to another node with many random walkers (one of the principles of PageRank) at the right moment, this effect is negligible in practice when the time order of link activation is included.
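An illustrative numpy sketch of the kind of computation described above: per-snapshot transition matrices with a sojourn probability q are multiplied over one period (periodic boundary conditions), and the centrality is the stationary density of that one-period "effective" transition matrix. The simple stay-or-move rule below is an assumption for illustration rather than the paper's exact transition rule.

```python
import numpy as np

def temporank(snapshots, q=0.5):
    """snapshots: list of (n, n) adjacency matrices (no self-loops), one per time step."""
    n = snapshots[0].shape[0]
    P_period = np.eye(n)
    for A in snapshots:
        k = A.sum(axis=1)                              # degree in this snapshot
        P = np.zeros((n, n))
        for i in range(n):
            if k[i] == 0:
                P[i, i] = 1.0                          # isolated node: stay put
            else:
                P[i, i] = q                            # sojourn (stay) probability
                P[i] += (1.0 - q) * A[i] / k[i]        # otherwise move to a neighbour
        P_period = P_period @ P                        # one-period effective matrix
    # stationary density: leading left eigenvector of the one-period matrix
    vals, vecs = np.linalg.eig(P_period.T)
    pi = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return pi / pi.sum()
```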
Combining scores from different patient reported outcome measures in meta-analyses: when is it justified?
BACKGROUND: Combining outcomes across instruments through standardized effect measures such as the effect size and the standardized response mean allows more comprehensive meta-analyses and should avoid selection bias. However, such an analysis ideally requires that the instruments correlate strongly and that the underlying assumption of similar responsiveness is fulfilled. The aim of this study was to assess the correlation between two widely used health-related quality of life instruments for patients with chronic obstructive pulmonary disease and to compare the instruments' responsiveness on a study level. METHODS: We systematically identified all longitudinal studies that used both the Chronic Respiratory Questionnaire (CRQ) and the St. George's Respiratory Questionnaire (SGRQ) through electronic searches of MEDLINE, EMBASE, CENTRAL and PubMed. We assessed the correlation between CRQ (scale 1–7) and SGRQ (scale 1–100) change scores and compared the responsiveness of the two instruments by comparing standardized response means (change scores divided by their standard deviation). RESULTS: We identified 15 studies with 23 patient groups. CRQ change scores ranged from -0.19 to 1.87 (median 0.35, IQR 0.14 to 0.68) and SGRQ change scores from -16.00 to 3.00 (median -3.00, IQR -4.73 to 0.25). The correlation between CRQ and SGRQ change scores was 0.88. Standardized response means of the CRQ (median 0.51, IQR 0.19 to 0.98) were significantly higher (p < 0.001) than those of the SGRQ (median 0.26, IQR -0.03 to 0.40). CONCLUSION: Investigators should be cautious about pooling the results from different instruments in meta-analysis even if they appear to measure similar constructs. Despite the high correlation in change scores, the responsiveness of instruments may differ substantially, which could lead to important between-study heterogeneity and biased meta-analyses.
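A small worked example of the two standardized effect measures mentioned above, the standardized response mean and, for comparison, the effect size; the change scores are made-up numbers for illustration only:

```python
import numpy as np

def standardized_response_mean(change_scores):
    """SRM = mean change score / standard deviation of the change scores."""
    change = np.asarray(change_scores, dtype=float)
    return change.mean() / change.std(ddof=1)

def effect_size(change_scores, baseline_scores):
    """Effect size = mean change score / standard deviation of the baseline scores."""
    change = np.asarray(change_scores, dtype=float)
    baseline = np.asarray(baseline_scores, dtype=float)
    return change.mean() / baseline.std(ddof=1)

# Illustrative CRQ-style change scores for one study arm
crq_change = np.array([0.6, 0.2, 1.1, 0.4, -0.1, 0.8])
print(standardized_response_mean(crq_change))
```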