
    Relative entropy minimizing noisy non-linear neural network to approximate stochastic processes

    A method is provided for designing and training noise-driven recurrent neural networks as models of stochastic processes. The method unifies and generalizes two previously separate modeling approaches, Echo State Networks (ESN) and Linear Inverse Modeling (LIM), under the common principle of relative entropy minimization. The power of the new method is demonstrated on a stochastic approximation of the El Niño phenomenon studied in climate research.
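
    As a rough illustration of the unifying principle (not the paper's code), the following sketch fits a one-dimensional, LIM-like noise-driven model by minimizing the closed-form relative entropy between Gaussian fits to the data and to the model output. The AR(1) surrogate, the stand-in data, and the optimizer choice are all assumptions.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        data = rng.normal(0.0, 1.0, size=5000)  # stand-in for an observed series
        eps = rng.normal(size=5000)             # fixed noise, so the fit is deterministic

        def simulate(a, sigma):
            # LIM-like noise-driven surrogate: x_{t+1} = a * x_t + sigma * noise
            x = np.zeros_like(eps)
            for t in range(len(x) - 1):
                x[t + 1] = a * x[t] + sigma * eps[t]
            return x

        def kl_gauss(mu0, v0, mu1, v1):
            # Closed-form relative entropy KL( N(mu0, v0) || N(mu1, v1) )
            return 0.5 * (np.log(v1 / v0) + (v0 + (mu0 - mu1) ** 2) / v1 - 1.0)

        def objective(theta):
            a, log_sigma = theta
            x = simulate(np.tanh(a), np.exp(log_sigma))  # tanh keeps |a| < 1 (stable)
            return kl_gauss(data.mean(), data.var(), x.mean(), x.var())

        res = minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead")
        print("fitted feedback and noise:", np.tanh(res.x[0]), np.exp(res.x[1]))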

    The Future of Digital Health with Federated Learning

    Data-driven Machine Learning has emerged as a promising approach for building accurate and robust statistical models from medical data, which is collected in huge volumes by modern healthcare systems. Existing medical data is not fully exploited by ML, primarily because it sits in data silos and privacy concerns restrict access to it. Without access to sufficient data, however, ML will be prevented from reaching its full potential and, ultimately, from making the transition from research to clinical practice. This paper considers key factors contributing to this issue, explores how Federated Learning (FL) may provide a solution for the future of digital health, and highlights the challenges and considerations that need to be addressed. Comment: This is a pre-print version of https://www.nature.com/articles/s41746-020-00323-
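
    For context, the sketch below shows Federated Averaging (FedAvg), the canonical FL aggregation scheme the field builds on: each silo trains locally and only model weights are shared with the server. The three synthetic "hospitals", the linear model, and all hyperparameters are illustrative assumptions, not the paper's setup.

        import numpy as np

        rng = np.random.default_rng(1)

        def local_step(w, X, y, lr=0.1, epochs=5):
            # A few gradient steps of linear least squares on one silo's data.
            for _ in range(epochs):
                grad = 2 * X.T @ (X @ w - y) / len(y)
                w = w - lr * grad
            return w

        # Three "hospitals", each holding its own private data silo.
        true_w = np.array([1.0, -2.0, 0.5])
        silos = []
        for _ in range(3):
            X = rng.normal(size=(100, 3))
            y = X @ true_w + 0.1 * rng.normal(size=100)
            silos.append((X, y))

        w_global = np.zeros(3)
        for _ in range(20):  # communication rounds
            local = [local_step(w_global.copy(), X, y) for X, y in silos]
            # The server averages weights; raw data never leaves a silo.
            w_global = np.mean(local, axis=0)
        print("global model:", w_global.round(2))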

    Divergent Evolution of CHD3 Proteins Resulted in MOM1 Refining Epigenetic Control in Vascular Plants

    Arabidopsis MOM1 is required for the heritable maintenance of transcriptional gene silencing (TGS). Unlike many other silencing factors, depletion of MOM1 evokes transcription at selected loci without major changes in DNA methylation or histone modification. These loci retain unusual, bivalent chromatin properties, intermediate to both euchromatin and heterochromatin. The structure of MOM1 previously suggested an integral nuclear membrane protein with chromatin-remodeling and actin-binding activities. Unexpected results presented here challenge these presumed MOM1 activities and demonstrate that less than 13% of the MOM1 sequence is necessary and sufficient for TGS maintenance. This active sequence encompasses a novel Conserved MOM1 Motif 2 (CMM2). The high conservation of CMM2 suggests that it has been the subject of strong evolutionary pressure. The replacement of Arabidopsis CMM2 by a poplar motif reveals its functional conservation. Interspecies comparison suggests that MOM1 proteins emerged at the origin of vascular plants through neo-functionalization of the ubiquitous eukaryotic CHD3 chromatin-remodeling factors. Interestingly, despite the divergent evolution of CHD3 and MOM1, we observed functional cooperation in epigenetic control involving unrelated protein motifs and thus probably diverse mechanisms.

    Domain adaptation with optimal transport improves EEG sleep stage classifiers

    Low sample size and the absence of labels on certain data limit the performance of predictive algorithms. To overcome this problem, it is sometimes possible to learn a model on a large labeled auxiliary dataset. Yet this assumes that the two datasets exhibit similar statistical properties, which is rarely the case in practice: there is a discrepancy between the large dataset, called the source, and the dataset of interest, called the target. Improving the prediction performance on the target domain by reducing the distribution discrepancy between the source and the target domains is known as Domain Adaptation (DA). Optimal Transport DA (OTDA) methods currently yield state-of-the-art performance on several DA problems. In this paper, we consider the problem of sleep stage classification and use OTDA to improve the performance of a convolutional neural network. We use features learnt from the electroencephalogram (EEG) and the electrooculogram (EOG) signals. Our results demonstrate that the method significantly improves the network's predictions on the target data.
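
    A minimal sketch of the optimal-transport adaptation step using the POT library (https://pythonot.github.io); the paper's CNN feature extraction from the EEG/EOG signals is not reproduced, and the toy features below are assumptions. A classifier trained on the transported source features with the source labels can then be applied directly to the target data.

        import numpy as np
        import ot  # POT: Python Optimal Transport

        rng = np.random.default_rng(2)
        Xs = rng.normal(0.0, 1.0, size=(200, 8))  # labeled source features
        ys = rng.integers(0, 5, size=200)         # e.g. 5 sleep stages
        Xt = rng.normal(0.5, 1.2, size=(150, 8))  # unlabeled target features

        # Transport source samples toward the target distribution.
        mapping = ot.da.SinkhornTransport(reg_e=1.0)
        mapping.fit(Xs=Xs, ys=ys, Xt=Xt)
        Xs_adapted = mapping.transform(Xs=Xs)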

    Comparison between the averaged macroscopic variables computed from network simulations (plain lines) and simulations of the macroscopic equations (dashed lines).

    Averaged macroscopic variables are shown as plain lines and simulations of the macroscopic equations as dashed lines. The variables related to the distinct populations (see text) are depicted in different colors. The inputs to the McKean and FitzHugh-Nagumo networks are shown in (a); for the Hodgkin-Huxley networks we took an affine transform of these curves. Transient phases, in which the averaged microscopic system is imprecise due to the convolution with the symmetric window, are not plotted. The initial mismatch is due to different initial conditions for the two systems; it quickly disappears, showing the robustness of the reduction to variations of the initial conditions. The simulations were done using a stochastic Euler algorithm, with different numbers and sizes of time steps for the McKean and FitzHugh-Nagumo networks than for the Hodgkin-Huxley networks.
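
    For concreteness, a minimal sketch of the kind of stochastic Euler (Euler-Maruyama) integration referred to above, for a single noisy FitzHugh-Nagumo unit; the parameters, noise level, and step size are illustrative assumptions, not the values used in the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        dt, n_steps, sigma = 0.01, 20000, 0.3   # step size, number of steps, noise level
        a, b, eps_fn, I = 0.7, 0.8, 0.08, 0.5   # FitzHugh-Nagumo parameters

        v, w = -1.0, 1.0
        trace = np.empty(n_steps)
        for t in range(n_steps):
            # Euler-Maruyama: deterministic drift plus sqrt(dt)-scaled noise.
            dv = (v - v**3 / 3 - w + I) * dt + sigma * np.sqrt(dt) * rng.normal()
            dw = eps_fn * (v + a - b * w) * dt
            v, w = v + dv, w + dw
            trace[t] = v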

    Linear part L for the different models.

    The linear part L is expressed using a Dirac function and, for the first two models, the Heaviside function.

    Effective non-linearity surfaces in the McKean, FitzHugh-Nagumo, and Hodgkin-Huxley models.

    Observe that noise tends to have a smoothing effect on the sigmoids. For the Hodgkin-Huxley model, we empirically chose a noise threshold below which the neuron is considered to be in regime II and above which it is in regime I. There are thus two branches below the threshold and only one above.
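
    The smoothing effect can be made concrete: averaging a hard (Heaviside) threshold response over zero-mean Gaussian noise of standard deviation sigma yields the sigmoid Phi(x/sigma), which flattens as sigma grows. A minimal sketch (the grid and noise levels are arbitrary choices):

        import numpy as np
        from scipy.stats import norm

        x = np.linspace(-3.0, 3.0, 7)
        for sigma in (0.1, 0.5, 1.0):
            # E[Heaviside(x + noise)] = Phi(x / sigma) for noise ~ N(0, sigma^2)
            effective = norm.cdf(x / sigma)
            print(f"sigma={sigma}:", effective.round(2))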

    Function for the deterministic McKean model given in equation 10.
