INRIA a CCSD electronic archive server

    Determining Clinical Disease Progression in Symptomatic Patients With CADASIL

    Background and Objectives: Cerebral Autosomal Dominant Arteriopathy with Subcortical Infarcts and Leukoencephalopathy (CADASIL) is the most frequent small artery brain disease, caused by pathogenic variants of the NOTCH3 gene. It remains unknown how the various deficits progress and interact at different stages of the disease. We aim to model disease progression, identify possible progression subgroups, and assess the effects of different covariates on clinical worsening.

    Methods: Data were from patients followed in the French CADASIL referral center who were aged 25-80 years and had completed at least two visits and at least one of 14 clinical scores. Progression and variability were assessed using a disease course model (Leaspy). A Gaussian mixture model was used to identify progression subgroups, and logistic regressions were used to compare characteristics between groups.

    Results: Across 395 patients and 2,007 visits, follow-up ranged from 6 months to 19 years, with a mean of 7.5 years. Patients were 45% male, with a mean age of 52.2 years. The evolution curves of the different scores showed that clinical manifestations develop heterogeneously and can vary considerably depending on the disease stage. We identified an early-onset, rapidly progressing subgroup with earlier motor symptoms and focal neurological deficits (median time-shift: 59, Q1-Q3: 48.9-66.3; median acceleration rate: 0.84, Q1-Q3: 0.07-1.31), and a late-onset, slowly progressing group with earlier cognitive symptoms (median time-shift: 69.2, Q1-Q3: 63.4-75.1; median acceleration rate: -0.18, Q1-Q3: -0.48-0.14). Male sex, a lower education level, hypertension, and a NOTCH3 pathogenic variant located within EGFr domains 1-6 were associated with this group difference.

    Discussion: Our results suggest a gradual and heterogeneous decline in different clinical and cognitive performances over the lifetime of CADASIL patients. Two progression profiles, one rapid and early and the other more delayed and slower, are possible after the onset of symptoms. A major limitation of our study is that the clusters were assessed post hoc, which may induce some bias. Overall, male sex, a low level of education, a pathogenic variant located in EGFr domains 1 to 6, smoking, and/or arterial hypertension may affect the clinical progression of the disease.
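
    The subgroup-identification step described in the Methods (clustering per-patient disease-course parameters with a Gaussian mixture model) can be sketched as follows. This is a minimal illustration on synthetic data, not the study's pipeline: the input array stands in for the individual time-shifts and acceleration rates that a Leaspy-style model estimates.

```python
# Hypothetical per-patient parameters: columns are [time-shift (years),
# acceleration rate], drawn around the two medians reported above.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
params = np.vstack([
    rng.normal([59.0, 0.84], [6.0, 0.5], size=(50, 2)),   # early-onset, fast
    rng.normal([69.2, -0.18], [5.0, 0.3], size=(50, 2)),  # late-onset, slow
])

# Two-component Gaussian mixture, as in the Methods section.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(params)
for k in range(2):
    ts, acc = np.median(params[labels == k], axis=0)
    print(f"subgroup {k}: median time-shift {ts:.1f}, median acceleration {acc:.2f}")
```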

    Benchmark for quantitative characterization of circadian clock cycles

    Understanding circadian clock mechanisms is fundamental to counteracting the harmful effects of clock malfunction and associated diseases. Biochemical, genetic, and systems biology approaches have provided invaluable information on the mechanisms of the circadian clock, from which many mathematical models have been developed to understand the dynamics and quantitative properties of the circadian oscillator. To better analyze and compare all these circadian cycles quantitatively, we propose a method based on a previously proposed segmentation of the circadian cycle into stages. We notably identify a sequence of eight stages that characterize the progress of the circadian cycle. We then apply our approach to an experimental dataset and to five different models, all built with ordinary differential equations. Our method makes it possible to assess the agreement of mathematical model cycles with biological properties and to detect inconsistencies. As another application, we provide insights on how this segmentation into stages can help analyze the effect of a clock gene loss of function on the dynamics of a genetic oscillator. The strength of our method is to provide a benchmark for the characterization, comparison, and improvement of new mathematical models of circadian oscillators in a wide variety of model systems.
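
    To make the setting concrete, here is a minimal sketch of the kind of ODE-based oscillator such benchmarks operate on, with cycle landmarks extracted from the trajectory. The Goodwin-type model and the landmark choice are illustrative assumptions; the paper's eight-stage definition is not reproduced here.

```python
# Integrate a Goodwin-type genetic oscillator and locate trajectory landmarks
# (extrema of one state variable) that could serve as stage boundaries.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import argrelextrema

def goodwin(t, y, n=10.0):
    x, p, z = y
    return [1.0 / (1.0 + z**n) - 0.1 * x,  # mRNA
            x - 0.1 * p,                   # protein
            p - 0.1 * z]                   # nuclear repressor

sol = solve_ivp(goodwin, (0, 500), [0.1, 0.2, 2.5],
                t_eval=np.linspace(0, 500, 5000))
x = sol.y[0]
maxima = argrelextrema(x, np.greater)[0]
print("period estimate (a.u.):", np.diff(sol.t[maxima]).mean())
```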

    Decoding Algorithms for Tensor Codes

    Tensor codes are a generalisation of matrix codes. Such codes are defined as subspaces of order-r tensors for which the ambient space is endowed with the tensor-rank as a metric. A class of these codes was introduced by Roth, who outlined a decoding algorithm for low tensor-rank errors in particular cases. They may be viewed as a generalisation of the well-known Delsarte-Gabidulin-Roth maximum rank distance codes. We study a generalised class of these codes. We investigate the properties of these codes and outline decoding techniques for different metrics that leverage their tensor structure. We first consider a fibre-wise decoding approach, as each fibre of a codeword corresponds to a Gabidulin codeword. We then give a generalisation of Loidreau's decoding method that corrects errors with properties constrained by the dimensions of the slice spaces and fibre spaces. The metrics we consider are upper bounded by the tensor-rank metric, and therefore these algorithms also decode tensor-rank weight errors.
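
    For orientation, the tensor-rank metric referred to above has the following standard definition (notation assumed here, not taken from the paper):

```latex
% Tensor rank: least number of simple (rank-one) tensors summing to X;
% the induced distance between two tensors is the rank of their difference.
\[
  \operatorname{trk}(X) = \min\Bigl\{ s : X = \sum_{i=1}^{s}
      a_i^{(1)} \otimes a_i^{(2)} \otimes \cdots \otimes a_i^{(r)} \Bigr\},
  \qquad
  d(X, Y) = \operatorname{trk}(X - Y).
\]
```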

    Enhanced Computational Complexity in Continuous-Depth Models: Neural Ordinary Differential Equations With Trainable Numerical Schemes

    Neural Ordinary Differential Equations (NODEs) serve as continuous-time analogs of residual networks. They provide a system-theoretic perspective on neural network architecture design and offer natural solutions for time series modeling, forecasting, and applications where invertible neural networks are essential. However, these models suffer from slow performance due to heavy numerical solver overhead. For instance, a common choice for training and inference of NODEs is an adaptive step-size solver such as the popular Dormand–Prince 5(4) method (DOPRI). These solvers dynamically adjust the Number of Function Evaluations (NFE) as the learned dynamics fit the training data and become more complex. However, this comes at the cost of an increased number of function evaluations, which reduces computational efficiency. In this work, we propose a novel approach: making the parameters of the numerical integration scheme trainable. By doing so, the numerical scheme dynamically adapts to the dynamics of the NODE, resulting in a model that operates with a fixed NFE. We compare the proposed trainable solvers with state-of-the-art approaches, including DOPRI, on different benchmarks, including classification, density estimation, and dynamical system modeling. Overall, we report state-of-the-art performance on all benchmarks in terms of accuracy metrics, while enhancing computational efficiency through trainable fixed-step-size solvers. This work opens up new possibilities for practical and efficient modeling applications with NODEs.
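
    The core idea admits a compact sketch: an explicit fixed-step Runge-Kutta integrator whose combination weights are learnable parameters, trained jointly with the vector field. This is a minimal illustration of the idea, not the paper's exact parameterization; the class name and initialization are assumptions.

```python
import torch
import torch.nn as nn

class TrainableRK4(nn.Module):
    """Fixed-step 4-stage explicit RK integrator with learnable weights."""
    def __init__(self, f, n_steps=10):
        super().__init__()
        self.f = f                      # learned vector field f(t, x)
        self.n_steps = n_steps
        # start from the classical RK4 weights; the optimizer may move them
        self.b = nn.Parameter(torch.tensor([1/6, 1/3, 1/3, 1/6]))

    def forward(self, x, t0=0.0, t1=1.0):
        h = (t1 - t0) / self.n_steps
        t = t0
        for _ in range(self.n_steps):   # fixed NFE: 4 * n_steps
            k1 = self.f(t, x)
            k2 = self.f(t + 0.5 * h, x + 0.5 * h * k1)
            k3 = self.f(t + 0.5 * h, x + 0.5 * h * k2)
            k4 = self.f(t + h, x + h * k3)
            x = x + h * (self.b[0]*k1 + self.b[1]*k2 + self.b[2]*k3 + self.b[3]*k4)
            t = t + h
        return x

field = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))
solver = TrainableRK4(lambda t, x: field(x))
y = solver(torch.randn(8, 2))           # gradients flow into both field and b
```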

    Neural Variational Data Assimilation with Uncertainty Quantification Using SPDE Priors

    The spatiotemporal interpolation of large geophysical datasets has historically been addressed by optimal interpolation (OI) and more sophisticated equation-based or data-driven data assimilation (DA) techniques. Recent advances in the deep learning community make it possible to address the interpolation problem through a neural architecture incorporating a variational data assimilation framework. The reconstruction task is seen as a joint learning problem over the prior involved in the variational inner cost, seen as a projection operator of the state, and the gradient-based minimization of that cost. Both the prior model and the solver are stated as neural networks with automatic differentiation, which can be trained by minimizing a loss function, typically the mean-square error between some ground truth and the reconstruction. Such a strategy turns out to be very efficient at improving the mean-state estimation but still needs complementary developments to quantify the related uncertainty. In this work, we use the theory of stochastic partial differential equations (SPDEs) and Gaussian processes (GPs) to estimate both space- and time-varying covariances of the state. Our neural variational scheme is modified to embed an augmented state formulation, with both the state and the SPDE parameterization to estimate. We demonstrate the potential of the proposed framework on a spatiotemporal GP driven by diffusion-based anisotropies and on realistic sea surface height (SSH) datasets. We show how our solution reaches the OI baseline in the Gaussian case. For nonlinear dynamics, as is almost always the case in DA, our solution outperforms OI while allowing for fast and interpretable online parameter estimation.
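
    For readers unfamiliar with neural variational schemes, the inner cost typically takes the following generic 4DVar-like form (weights and operators are notational assumptions; the paper's exact cost may differ):

```latex
% Variational inner cost: observation term plus a prior term in which the
% trainable network Phi acts as a projection of the (augmented) state x,
% which here also carries the SPDE parameters to be estimated.
\[
  J(x) \;=\; \lambda_1 \,\bigl\| y - \mathcal{H}(x) \bigr\|_{\Omega}^{2}
       \;+\; \lambda_2 \,\bigl\| x - \Phi(x) \bigr\|^{2},
\]
% with observations y on the observed domain Omega and observation
% operator H; a learned gradient-based solver minimizes J.
```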

    Fixed-Work vs. Fixed-Time Checkpointing on Large-Scale Failure-Prone Platforms

    Consider a High-Performance Computing (HPC) application executing on a large-scale failure-prone platform. The Fixed-Work Checkpointing (FWC) problem consists in minimizing the expected time to execute a fixed amount of work (namely a fraction or the totality of the application). Strategies for the FWC problem have received considerable attention and are well understood. On the contrary, the dual problem, namely the Fixed-Time Checkpointing (FTC) problem, has been considered only very recently. The FTC problem consists in maximizing the expected work achieved during a fixed amount of time (namely the duration of a reservation granted to the application). This work provides a comparative overview of both problems. First we review existing strategies for the FWC problem and extend them to stochastic checkpoints, i.e., when the checkpoint cost is no longer a deterministic constant but obeys some probability distribution law instead. Then we provide a comprehensive study of the FTC problem. The problem turns out to be surprisingly difficult, even when restricted to taking one or two checkpoints. We provide a threshold-based heuristic to solve the general instance of the problem with an arbitrary number of checkpoints, and we have to resort to time discretization to provide an optimal strategy. We further extend this latter strategy to stochastic checkpoints.
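
    For context, the classical first-order answer to the FWC problem with a deterministic checkpoint cost C and an exponentially distributed time between failures of mean mu (the platform MTBF) is the Young/Daly checkpointing period, sketched below; the numerical values are illustrative only.

```python
import math

def young_daly_period(mtbf: float, ckpt_cost: float) -> float:
    """Amount of work between checkpoints that minimizes the expected
    execution-time overhead to first order: W = sqrt(2 * mu * C)."""
    return math.sqrt(2.0 * mtbf * ckpt_cost)

mu, C = 24 * 3600.0, 60.0            # 1-day platform MTBF, 1-minute checkpoint
print(f"checkpoint every {young_daly_period(mu, C) / 3600:.2f} hours of work")
```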

    Alpha Mesh Swc: automatic and robust surface mesh generation from the skeleton description of brain cells

    In recent years, there has been a significant increase in publicly available skeleton descriptions of real brain cells from laboratories all over the world. In theory, this should make it possible to perform large-scale realistic simulations on brain cells. However, there is currently still a gap between the skeleton descriptions and high-quality, simulation-ready surface and volume meshes of brain cells. We propose and implement a tool called Alpha_Mesh_Swc to generate automatically and efficiently triangular surface meshes that are optimized for finite element simulations. We use an alpha wrapping method with an offset parameter on component surface meshes to efficiently generate a global watertight mesh. Mesh simplification and re-meshing are then used to produce an optimal surface mesh. Our methodology limits the number of surface triangles while preserving geometrical accuracy, permits cutting and gluing of cell components, is robust to imperfect skeleton descriptions, and allows mixed cell descriptions (surface meshes combined with skeletons). We compared the robustness, performance, and accuracy of Alpha_Mesh_Swc against existing tools and found significant improvement in terms of mesh accuracy. We show that, on average, we can fully automatically generate a brain cell (neuron or glia) surface mesh in a couple of minutes on a laptop computer, resulting in a simplified surface mesh with only around 10k nodes. The resulting meshes were used to perform diffusion MRI simulations in neurons and microglia. The code and a number of sample brain cell surface meshes have been made publicly available.
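
    The input format here is the standard SWC skeleton (one node per line: id, type, x, y, z, radius, parent). A minimal sketch of the first stage of such a pipeline follows: building component meshes (spheres at nodes, cylinders along edges) from an SWC file. The function name and input file are hypothetical, and the watertight union that Alpha_Mesh_Swc obtains by alpha wrapping is not reproduced here.

```python
import numpy as np
import trimesh

def swc_to_component_meshes(path):
    """Parse an SWC skeleton and return a raw concatenation of component
    meshes, i.e. the input an alpha-wrapping stage would make watertight."""
    nodes = {}
    for line in open(path):
        if line.startswith("#") or not line.strip():
            continue
        i, _, x, y, z, r, parent = line.split()[:7]
        nodes[int(i)] = (np.array([float(x), float(y), float(z)]),
                         float(r), int(parent))
    parts = []
    for pos, r, parent in nodes.values():
        parts.append(trimesh.creation.icosphere(radius=r).apply_translation(pos))
        if parent in nodes:                      # parent == -1 marks the root
            p_pos, p_r, _ = nodes[parent]
            parts.append(trimesh.creation.cylinder(radius=min(r, p_r),
                                                   segment=[pos, p_pos]))
    return trimesh.util.concatenate(parts)

# mesh = swc_to_component_meshes("neuron.swc")   # hypothetical input file
```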

    A cautionary tale on the cost-effectiveness of collaborative AI in real-world medical applications

    Background. Federated learning (FL) has gained wide popularity as a collaborative learning paradigm enabling collaborative AI in sensitive healthcare applications. Nevertheless, the practical implementation of FL presents technical and organizational challenges, as it generally requires complex communication infrastructures. In this context, consensus-based learning (CBL) may represent a promising collaborative learning alternative, thanks to its ability to combine local knowledge into a federated decision system while potentially reducing deployment overhead.

    Methods. In this work we propose an extensive benchmark of the accuracy and cost-effectiveness of a panel of FL and CBL methods in a wide range of collaborative medical data analysis scenarios. The benchmark includes 7 different medical datasets, encompassing 3 machine learning tasks, 8 different data modalities, and multi-centric settings involving 3 to 23 clients.

    Findings. Our results reveal that CBL is a cost-effective alternative to FL. When compared across the panel of medical datasets in the considered benchmark, CBL methods provide accuracy equivalent to that achieved by FL. Nonetheless, CBL significantly reduces training time and communication cost (15-fold and 60-fold decreases, respectively; p < 0.05).

    Interpretation. This study opens a novel perspective on the deployment of collaborative AI in real-world applications, where the adoption of cost-effective methods is instrumental to achieving the sustainability and democratisation of AI by alleviating the need for extensive computational resources.
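
    The contrast between the two paradigms can be sketched on a toy task: FedAvg-style FL alternates local updates with weight averaging across clients, while a CBL-style approach trains each client once and aggregates predictions. This is illustrative only; it does not reproduce the paper's CBL methods or datasets.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
def make_client(n=200):
    X = rng.normal(size=(n, 5))
    return X, (X @ w_true + rng.normal(size=n) > 0).astype(int)
clients = [make_client() for _ in range(3)]

# FedAvg-style FL: rounds of local SGD followed by weight averaging.
models = [SGDClassifier(loss="log_loss") for _ in clients]
for m, (X, y) in zip(models, clients):
    m.partial_fit(X, y, classes=[0, 1])
for _ in range(10):
    coef = np.mean([m.coef_ for m in models], axis=0)
    icpt = np.mean([m.intercept_ for m in models], axis=0)
    for m, (X, y) in zip(models, clients):
        m.coef_, m.intercept_ = coef.copy(), icpt.copy()
        m.partial_fit(X, y)

# CBL-style consensus: one local training pass, then average predictions.
local_models = [SGDClassifier(loss="log_loss").fit(X, y) for X, y in clients]
X_test = rng.normal(size=(100, 5))
consensus = np.mean([m.predict_proba(X_test)[:, 1]
                     for m in local_models], axis=0) > 0.5
```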

    Approximate well-balanced WENO finite difference schemes using a global-flux quadrature method with multi-step ODE integrator weights

    In this work, high-order discrete well-balanced methods for one-dimensional hyperbolic systems of balance laws are proposed. We aim to construct a method whose discrete steady states correspond to solutions of arbitrary high-order ODE integrators. Importantly, this property is embedded directly into the scheme, eliminating the need to apply the ODE integrator explicitly to solve the local Cauchy problem. To achieve this, we employ a WENO finite difference framework and apply WENO reconstruction to a global flux assembled nodewise as the sum of the physical flux and a source primitive. The novel idea is to compute the source primitive using high-order multi-step ODE methods applied on the finite difference grid. This approach provides a locally well-balanced splitting of the source integral, with weights derived from the ODE integrator. By construction, the discrete solutions of the proposed schemes align with those of the underlying ODE integrator. The proposed methods employ WENO flux reconstructions of varying orders, combined with multi-step ODE methods of up to order 8, achieving steady-state accuracy determined solely by the ODE method's consistency. Numerical experiments using scalar balance laws and the shallow water equations confirm that the methods achieve optimal convergence for time-dependent solutions and significant error reduction for steady-state solutions.
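
    The global-flux idea referenced above can be stated compactly (standard formulation, with notation chosen here rather than taken from the paper):

```latex
% For a 1D balance law, fold a primitive of the source into the flux:
\[
  \partial_t u + \partial_x f(u) = s(u, x)
  \quad\Longleftrightarrow\quad
  \partial_t u + \partial_x g = 0,
  \qquad
  g(u, x) = f(u) - \int_{x_0}^{x} s\bigl(u(\xi), \xi\bigr)\, d\xi .
\]
% Steady states then satisfy g = const, so a scheme that reconstructs g
% (with the source primitive computed by multi-step ODE weights on the
% grid) preserves them by construction.
```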
