56 research outputs found

    Time-Frequency characterisation for electric load monitoring

    Electric utilities and consumers are increasingly interested in energy monitoring for economic and environmental reasons. A non-intrusive solution may rely on information extracted from the electric consumption measured at a centralized point of a distribution network. The problem at hand consists in separating the electric load into its major components. This single-sensor source separation problem is tractable under certain conditions. In this work, the focus is on the most energy-consuming household appliance in France: space-heating. Its contribution is a sum of an unknown number of pseudo-periodic signals embedded in the global active power. An unsupervised algorithm is proposed to determine the space-heating schedule from the global consumption, based on the interpretation of the space-heating signature in the time-frequency domain. The method combines a time-frequency detector with frequent-itemset extraction. First results on real data are quite satisfying.
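The detector described above operates in the time-frequency plane, but its details are not given in this abstract. As a minimal stand-in for the underlying idea (recovering the regulation period of a pseudo-periodic heating load), here is an autocorrelation-based period estimator applied to a synthetic square-wave load; the signal parameters are invented for illustration:

```python
import math

def estimate_period(signal, min_lag=2):
    """Estimate the dominant period (in samples) of a pseudo-periodic
    signal by maximising the unnormalised autocorrelation."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    best_lag, best_corr = None, -math.inf
    for lag in range(min_lag, n // 2):
        corr = sum(x[i] * x[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return best_lag

# Synthetic space-heating-like load: 1 kW square wave,
# period 20 samples, 50% duty cycle.
load = [1000.0 if (t % 20) < 10 else 0.0 for t in range(400)]
print(estimate_period(load))  # → 20
```

The real signal is a superposition of several such components plus the rest of the household load, which is why the paper works in the time-frequency domain rather than with a single global autocorrelation.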

    RJMCMC point process sampler for single sensor source separation: an application to electric load monitoring

    This paper presents an original method to separate the residential electric load into its major components. The method is explained in the particular case of space-heating, which is the most consuming electric end-use in France. This is a source separation problem from a single mixture. The components to be retrieved are square signals characterized by a periodic regulation and slowly time-varying duty cycles. A point process is used to model the electric load as a configuration of possibly overlapping square signals, given priors on magnitude, duty-cycle variations and the regulation periodicity. This stochastic process is simulated using a Reversible Jump Markov Chain Monte Carlo (RJMCMC) procedure. A simulated annealing scheme is used to achieve posterior density maximization. First results on real data provided by Électricité de France are quite encouraging.
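The sampler itself is not reproduced in this abstract. As a loose toy sketch of the idea (a configuration of square pulses explored by birth/death moves under a simulated annealing schedule), the following fixes the pulse magnitude and length, and omits the reversible-jump proposal ratios, so it is an illustration of the mechanism rather than the paper's method:

```python
import math
import random

random.seed(0)

N = 120
MAG = 1.0        # pulse magnitude, assumed known in this toy
PULSE_LEN = 10   # pulse length, assumed fixed in this toy
PENALTY = 0.5    # prior cost per pulse, discourages spurious births

def render(starts):
    """Superpose square pulses at the given start times."""
    y = [0.0] * N
    for s in starts:
        for t in range(s, min(s + PULSE_LEN, N)):
            y[t] += MAG
    return y

true_starts = [15, 70]
observed = [v + random.gauss(0, 0.05) for v in render(true_starts)]

def energy(starts):
    """Negative log-posterior up to a constant: fit error plus pulse prior."""
    sse = sum((a - b) ** 2 for a, b in zip(observed, render(starts)))
    return sse + PENALTY * len(starts)

config, T = [], 2.0
for _ in range(4000):
    proposal = list(config)
    if not proposal or random.random() < 0.5:
        proposal.append(random.randrange(N - PULSE_LEN))   # birth move
    else:
        proposal.pop(random.randrange(len(proposal)))      # death move
    dE = energy(proposal) - energy(config)
    if dE < 0 or random.random() < math.exp(-dE / T):
        config = proposal
    T = max(0.01, T * 0.999)   # annealing schedule

print(sorted(config))
```

As the temperature drops, only configurations overlapping the true pulses survive; the full method additionally samples magnitudes, duty-cycle drift and the regulation period.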

    A statistically inferred microRNA network identifies breast cancer target miR-940 as an actin cytoskeleton regulator

    MiRNAs are key regulators of gene expression. By binding to many genes, they create a complex network of gene co-regulation. Here, using a network-based approach, we identified miRNA hub groups by their close connections and common targets. In one cluster containing three miRNAs, miR-612, miR-661 and miR-940, the annotated functions of the co-regulated genes suggested a role in small GTPase signalling. Although the three members of this cluster targeted the same subset of predicted genes, we showed that their overexpression impacted cell fates differently. miR-661 overexpression enhanced phosphorylation of myosin II and increased cell invasion, indicating a possible oncogenic miRNA. On the contrary, miR-612 and miR-940 inhibited phosphorylation of myosin II and cell invasion. Finally, expression profiling in human breast tissues showed that miR-940 was consistently downregulated in breast cancer tissues.

    MicroRNAs are a class of endogenous, small (19–25 nucleotides), single-stranded non-coding RNAs that regulate gene expression in all eukaryotic organisms. In metazoans, microRNAs most commonly bind to the 3′ untranslated region (3′UTR) of their mRNA target transcript and cause translational repression and/or mRNA degradation. Every microRNA is predicted to regulate from a dozen to thousands of genes, including transcription factors. This fine-tuning of protein expression is known to be involved in many physiological processes, such as development, apoptosis, signal transduction and even cancer progression [1,2]. More than 2,000 mature human microRNAs are listed in the 20th release of miRBase (http://www.mirbase.org, 2014; date of access: 19/08/2013), and some authors hypothesise that the majority of human genes are regulated by microRNAs [3].
    Since their discovery in 1993 [4], a fair understanding has been gathered of their role in animal development and in the onset and progression of diseases [2], as well as of their potential use in therapies [5]. However, the cooperative behaviour of microRNAs is still under investigation. A growing body of experimental evidence suggests that microRNAs can regulate genes through complementarity, meaning that microRNAs can act together to regulate individual genes or groups of genes involved in similar processes [6]. For example, Hu and co-workers demonstrated that transducing a cocktail of precursor microRNAs (miR-21, miR-24 and miR-221) can result in more effective engraftment of transplanted cardiac progenitor cells [7]. Consistent with these discoveries, Zhu et al. demonstrated that miR-21 and miR-221 co-regulate 56 gene ontology (GO) processes [8]. In the same study, the authors also showed that co-transfection of miR-1 and miR-21 increases H2O2-induced myocardial apoptosis and oxidative stress. These recent findings support the idea of microRNA-mediated cooperative regulation but also argue for the use of systemic approaches, notably based on graph theory, to decipher the individual and complementary roles of microRNAs. Some work has been conducted to use recent high-throughput experiment-derived data sets to infer microRNA synergistic relationships [9–12]. Herein, we present a microRNA network based on target similarities among microRNAs to infer clusters of microRNAs. Clusters are defined as groups of microRNAs sharing a set of common targets, predicted by either DIANA-microT v3 [13] or TargetScan v6.2 [14]. Some authors have used GO enrichment analysis as a confirmatory tool for their clustering approach [11]. In our case, GO enrichment is not used to infer networks but as a way to estimate the probable metabolic pathway(s) a cluster of microRNAs could co-regulate. Moreover, the novelty of our approach is to consider not only clusters of microRNAs but also OPE
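Schematically, a target-similarity network of this kind can be built by linking miRNAs whose predicted target sets overlap strongly. The target sets below are invented for illustration (the study used DIANA-microT and TargetScan predictions over the full genome):

```python
# Hypothetical predicted target sets per miRNA.
targets = {
    "miR-612": {"RHOA", "CDC42", "ARPC2", "PFN1"},
    "miR-661": {"RHOA", "CDC42", "MTA1", "PFN1"},
    "miR-940": {"RHOA", "CDC42", "ARPC2", "GRB2"},
    "miR-21":  {"PTEN", "PDCD4", "SPRY1"},
}

def jaccard(a, b):
    """Overlap of two target sets as |intersection| / |union|."""
    return len(a & b) / len(a | b)

# Connect two miRNAs when their predicted target overlap is large enough.
THRESHOLD = 0.4
names = sorted(targets)
edges = [
    (m1, m2)
    for i, m1 in enumerate(names)
    for m2 in names[i + 1:]
    if jaccard(targets[m1], targets[m2]) >= THRESHOLD
]
print(edges)  # → [('miR-612', 'miR-661'), ('miR-612', 'miR-940')]
```

Connected components or denser community-detection on such a graph then yield the miRNA clusters whose common targets can be passed to GO enrichment.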

    GenoLink: a graph-based querying and browsing system for investigating the function of genes and proteins

    BACKGROUND: A large variety of biological data can be represented by graphs. These graphs can be constructed from heterogeneous data coming from genomic and post-genomic technologies, but there is still a need for tools for exploring and analysing such graphs. This paper describes GenoLink, a software platform for the graphical querying and exploration of graphs. RESULTS: GenoLink provides a generic framework for representing and querying data graphs. This framework provides a graph data structure, a graph query engine allowing sub-graphs to be retrieved from the entire data graph, and several graphical interfaces to express such queries and to further explore their results. A query consists of a graph pattern with constraints attached to the vertices and edges. A query result is the set of all sub-graphs of the entire data graph that are isomorphic to the pattern and satisfy the constraints. The graph data structure does not rely upon any particular data model but can dynamically accommodate any user-supplied data model. However, for genomic and post-genomic applications, we provide a default data model and several parsers for the most popular data sources. GenoLink does not require any programming skill, since all operations on graphs and the analysis of the results can be carried out graphically through several dedicated interfaces. CONCLUSION: GenoLink is a generic and interactive tool allowing biologists to graphically explore various sources of information. GenoLink is distributed either as a standalone application or as a component of the Genostar/Iogma platform. Both distributions are free for academic research and teaching purposes and can be requested at [email protected]. A commercial licence can be obtained for for-profit companies at [email protected].
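The core operation described above is pattern matching with constraints on a typed, labelled graph. A minimal brute-force sketch of that operation follows; the vertex types, relation labels and matching strategy are illustrative, not GenoLink's actual data model or query engine:

```python
from itertools import permutations

# A toy data graph: vertices carry a type, directed edges carry a label.
vertices = {
    "g1": "Gene", "g2": "Gene", "p1": "Protein", "p2": "Protein",
}
edges = {
    ("g1", "p1"): "codes_for",
    ("g2", "p2"): "codes_for",
    ("p1", "p2"): "interacts_with",
}

def match(pattern_vertices, pattern_edges):
    """Return every assignment of data vertices to pattern vertices that
    preserves vertex types and edge labels (brute force, small graphs only)."""
    names = list(pattern_vertices)
    results = []
    for combo in permutations(vertices, len(names)):
        assign = dict(zip(names, combo))
        if any(vertices[assign[n]] != t for n, t in pattern_vertices.items()):
            continue
        if all(edges.get((assign[a], assign[b])) == lab
               for (a, b), lab in pattern_edges.items()):
            results.append(assign)
    return results

# Query: a Gene coding for a Protein that interacts with another Protein.
hits = match(
    {"x": "Gene", "y": "Protein", "z": "Protein"},
    {("x", "y"): "codes_for", ("y", "z"): "interacts_with"},
)
print(hits)  # → [{'x': 'g1', 'y': 'p1', 'z': 'p2'}]
```

Real query engines prune the search with type and degree constraints rather than enumerating all vertex permutations, which is exponential in pattern size.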

    Toxicity Assays in Nanodrops Combining Bioassay and Morphometric Endpoints

    BACKGROUND: Improved chemical hazard management, such as the REACH policy objective, as well as drug ADME-Tox prediction, while limiting the extent of animal testing, requires the development of increasingly high-throughput and highly pertinent in vitro toxicity assays. METHODOLOGY: This report describes a new in vitro method for toxicity testing, combining cell-based assays in a nanodrop Cell-on-Chip format with the use of a genetically engineered stress-sensitive hepatic cell line. We tested the behavior of a stress-inducible fluorescent HepG2 model in which Heat Shock Protein promoters controlled Enhanced Green Fluorescent Protein expression upon exposure to cadmium chloride (CdCl2), sodium arsenite (NaAsO2) and paraquat. In agreement with previous studies based on a micro-well format, we observed a chemical-specific response, identified through differences in dynamics and amplitude. In particular, we determined IC50 values for CdCl2 and NaAsO2 in agreement with published data. Individual cell identification via image-based screening allowed us to perform multiparametric analyses. CONCLUSIONS: Using pre-/sub-lethal cell stress instead of cell mortality, we highlighted the high significance and the superior sensitivity of both stress-promoter activation reporting and cell-morphology parameters in measuring the cell response to a toxicant. These results demonstrate a first generation of high-throughput and high-content assays capable of assessing chemical hazards in vitro within the REACH policy framework.
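IC50 determination of the kind reported here is typically done by fitting a sigmoidal dose-response curve to the measurements. A minimal sketch with a Hill (four-parameter logistic) model and a grid-search fit on synthetic data follows; the doses, parameter values and fitting grid are invented, not the paper's measurements:

```python
# Four-parameter logistic (Hill) dose-response; IC50 recovered by a
# simple grid search over (IC50, slope).
def hill(dose, ic50, slope, top=1.0, bottom=0.0):
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** slope)

# Synthetic viability data generated with IC50 = 10 and slope = 1.5
# (doses and values are illustrative).
doses = [1, 3, 10, 30, 100]
viability = [hill(d, 10.0, 1.5) for d in doses]

best = min(
    ((ic50, slope)
     for ic50 in [i / 2 for i in range(2, 101)]      # grid 1.0 .. 50.0
     for slope in [0.5, 1.0, 1.5, 2.0]),
    key=lambda p: sum((hill(d, p[0], p[1]) - v) ** 2
                      for d, v in zip(doses, viability)),
)
print(best)  # → (10.0, 1.5)
```

In practice a nonlinear least-squares optimiser replaces the grid search, and replicate wells give confidence intervals on the fitted IC50.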

    Phi-score: A cell-to-cell phenotypic scoring method for sensitive and selective hit discovery in cell-based assays

    Phenotypic screening monitors phenotypic changes induced by perturbations, including those generated by drugs or RNA interference. Currently used methods for scoring screen hits have proven problematic, particularly when applied to physiologically relevant conditions such as low cell numbers or inefficient transfection. Here, we describe the Phi-score, a novel scoring method for the identification of phenotypic modifiers or hits in cell-based screens. Phi-score performance was assessed with simulations, a validation experiment and its application to gene identification in a large-scale RNAi screen. Using robust statistics and a variance model, we demonstrated that the Phi-score showed better sensitivity, selectivity and reproducibility than classical approaches. The improved performance of the Phi-score paves the way for cell-based screening of primary cells, which are often difficult to obtain from patients in sufficient numbers. We also describe a dedicated merging procedure to pool scores from small interfering RNAs targeting the same gene so as to provide improved visualization and hit selection.
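The Phi-score itself is not specified in this abstract. As a sketch of the kind of robust statistics such scores build on, a median/MAD z-score (a classical robust alternative to the mean/SD z-score, shown here as a baseline, not the Phi-score) cleanly flags a strong hit on a toy plate:

```python
import statistics

def robust_z(values):
    """Median/MAD z-scores: outlier-resistant because the centre and
    scale are estimated from medians rather than means."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    scale = 1.4826 * mad  # consistency factor for normally distributed data
    return [(v - med) / scale for v in values]

# A plate of mostly inactive wells plus one strong hit at the end
# (values invented for illustration).
measurements = [100, 102, 98, 101, 99, 103, 97, 100, 250]
scores = robust_z(measurements)
print(round(scores[-1], 1))  # → 50.6
```

Because the median and MAD ignore the extreme well, the hit's score stays large; a mean/SD z-score would be deflated by the hit's own contribution to the standard deviation.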

    An accurate and interpretable model for siRNA efficacy prediction

    BACKGROUND: The use of exogenous small interfering RNAs (siRNAs) for gene silencing has quickly become a widespread molecular tool, providing a powerful means for studying gene function and identifying new drug targets. Although considerable progress has been made recently in understanding how the RNAi pathway mediates gene silencing, the design of potent siRNAs remains challenging. RESULTS: We propose a simple linear model combining basic features of siRNA sequences for siRNA efficacy prediction. Trained and tested on a large dataset of siRNA sequences made available recently, it performs as well as more complex state-of-the-art models in terms of potency prediction accuracy, with the advantage of being directly interpretable. The analysis of this linear model allows us to detect and quantify the effect of nucleotide preferences at particular positions, including previously known and new observations. We also detect and quantify a strong propensity of potent siRNAs to contain short asymmetric motifs in their sequence, and show that, surprisingly, these motifs alone contain at least as much relevant information for potency prediction as the positional nucleotide preferences. CONCLUSION: The model proposed for prediction of siRNA potency is as accurate as a state-of-the-art nonlinear model and is easily interpretable in terms of biological features. It is freely available on the web.
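The trained model is not given in this abstract. As a toy illustration of a directly interpretable additive model over per-position nucleotide features (a simplification of a linear model fitted by least squares; the sequences and efficacies below are invented):

```python
# Toy additive model: per-position nucleotide weights obtained by
# mean-centred averaging of efficacies over sequences carrying each
# nucleotide at each position.
data = [
    ("AUGGC", 0.9), ("AUGCC", 0.8), ("GUACC", 0.3),
    ("AUCGC", 0.7), ("GGACU", 0.2), ("AACGC", 0.6),
]
length = 5
overall = sum(y for _, y in data) / len(data)

weights = [{} for _ in range(length)]
for pos in range(length):
    for nt in "ACGU":
        ys = [y for seq, y in data if seq[pos] == nt]
        weights[pos][nt] = (sum(ys) / len(ys) - overall) if ys else 0.0

def predict(seq):
    """Predicted efficacy: baseline plus one weight per position."""
    return overall + sum(weights[i][seq[i]] for i in range(length))

ranked = sorted((s for s, _ in data), key=predict, reverse=True)
print(ranked[0])  # → AUGGC
```

The appeal of such models is exactly what the abstract claims: each weight is directly readable as "nucleotide X at position i helps/hurts potency", unlike a nonlinear black box.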

    A 'small-world-like' model for comparing interventions aimed at preventing and controlling influenza pandemics

    BACKGROUND: With an influenza pandemic seemingly imminent, we constructed a model simulating the spread of influenza within the community, in order to test the impact of various interventions. METHODS: The model includes an individual level, in which the risk of influenza virus infection and the dynamics of viral shedding are simulated according to age, treatment, and vaccination status; and a community level, in which meetings between individuals are simulated on randomly generated graphs. We used data on real pandemics to calibrate some parameters of the model. The reference scenario assumes no vaccination, no use of antiviral drugs, and no preexisting herd immunity. We explored the impact of interventions such as vaccination, treatment/prophylaxis with neuraminidase inhibitors, quarantine, and closure of schools or workplaces. RESULTS: In the reference scenario, 57% of realizations lead to an explosive outbreak, lasting a mean of 82 days (standard deviation (SD) 12 days) and affecting 46.8% of the population on average. Interventions aimed at reducing the number of meetings, combined with measures reducing individual transmissibility, would be partly effective: coverage of 70% of affected households, with treatment of the index patient, prophylaxis of household contacts, and confinement to home of all household members, would reduce the probability of an outbreak by 52%, and the remaining outbreaks would be limited to 17% of the population (range 0.8%–25%). Reactive vaccination of 70% of the susceptible population would significantly reduce the frequency, size, and mean duration of outbreaks, but the benefit would depend markedly on the interval between identification of the first case and the beginning of mass vaccination. The epidemic would affect 4% of the population if vaccination started immediately, 17% if there was a 14-day delay, and 36% if there was a 28-day delay. Closing schools when the number of infections in the community exceeded 50 would be very effective, limiting the size of outbreaks to 10% of the population (range 0.9%–22%). CONCLUSION: This flexible tool can help to determine the interventions most likely to contain an influenza pandemic. These results support the stockpiling of antiviral drugs and accelerated vaccine development.
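A minimal sketch of the community level described above (individuals meeting on a randomly generated small-world graph, with a one-step SIR-type infection process) follows; all parameters, and the simplified one-step infectious period, are illustrative rather than the paper's calibrated values:

```python
import random

random.seed(1)

def small_world(n, k, p):
    """Watts-Strogatz-style graph: ring lattice with k neighbours per
    side, each lattice edge rewired to a random target with probability p."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            a, b = i, (i + j) % n
            if random.random() < p:
                b = random.randrange(n)
                while b == a or b in adj[a]:
                    b = random.randrange(n)
            adj[a].add(b)
            adj[b].add(a)
    return adj

def simulate(adj, p_transmit, seed_node=0):
    """SIR spread with a one-time-step infectious period; returns the
    final number of ever-infected individuals."""
    infected, recovered = {seed_node}, set()
    while infected:
        new = set()
        for i in infected:
            for j in adj[i]:
                if j not in infected and j not in recovered \
                        and random.random() < p_transmit:
                    new.add(j)
        recovered |= infected
        infected = new - recovered
    return len(recovered)

graph = small_world(500, 3, 0.1)
attack = simulate(graph, 0.35) / 500
print(round(attack, 2))
```

Interventions map naturally onto this skeleton: closing schools removes edges, antiviral prophylaxis lowers `p_transmit` for treated nodes, and vaccination moves individuals directly into the recovered set.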

    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.
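The distinction can be illustrated in a few lines: the standard error reflects sampling noise within one analysis, while the non-standard error is the spread of estimates across teams making different, individually defensible analysis choices. The setup below (one shared dataset, teams differing only in how aggressively they trim outliers) is invented for illustration:

```python
import random
import statistics

random.seed(7)

# One shared dataset with a few extreme observations.
data = [random.gauss(0.0, 1.0) for _ in range(500)] + [8.0, 9.0, 10.0]

def team_estimate(trim_at):
    """A team's estimate of the mean after its chosen outlier cut."""
    kept = [x for x in data if abs(x) <= trim_at]
    return statistics.mean(kept)

# 164 hypothetical teams, each with a different (defensible) trim choice.
choices = [random.uniform(2.0, 12.0) for _ in range(164)]
estimates = [team_estimate(c) for c in choices]

standard_error = statistics.stdev(data) / len(data) ** 0.5   # DGP uncertainty
non_standard_error = statistics.stdev(estimates)             # EGP uncertainty
print(round(standard_error, 3), round(non_standard_error, 3))
```

Both numbers are positive: even with identical data, analytic degrees of freedom alone generate dispersion across teams, which is the quantity the paper measures at scale.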