47,765 research outputs found

    Network-based stratification of tumor mutations

    Many forms of cancer have multiple subtypes with different causes and clinical outcomes. Somatic tumor genome sequences provide a rich new source of data for uncovering these subtypes but have proven difficult to compare, as two tumors rarely share the same mutations. Here we introduce network-based stratification (NBS), a method to integrate somatic tumor genomes with gene networks. This approach allows for stratification of cancer into informative subtypes by clustering together patients with mutations in similar network regions. We demonstrate NBS in ovarian, uterine and lung cancer cohorts from The Cancer Genome Atlas. For each tissue, NBS identifies subtypes that are predictive of clinical outcomes such as patient survival, response to therapy or tumor histology. We identify network regions characteristic of each subtype and show how mutation-derived subtypes can be used to train an mRNA expression signature, which provides similar information in the absence of DNA sequence data.
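
    A minimal sketch of the network-smoothing idea described in the abstract, assuming a binary patient-by-gene mutation matrix F0 and a gene-gene adjacency matrix A; the propagation parameter and function names are illustrative, and the published pipeline goes on to cluster the smoothed profiles (a plain k-means is used here purely for illustration).

        import numpy as np
        from sklearn.cluster import KMeans

        def network_smooth(F0, A, alpha=0.7, tol=1e-6, max_iter=100):
            """Propagate patient mutation profiles over a gene network.
            F0: patients x genes binary mutation matrix.
            A:  genes x genes adjacency matrix (symmetric, non-negative).
            alpha: fraction of signal carried along network edges (illustrative value).
            """
            W = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)  # row-normalise
            F = F0.astype(float)
            for _ in range(max_iter):
                F_next = alpha * F @ W + (1 - alpha) * F0
                if np.abs(F_next - F).max() < tol:
                    break
                F = F_next
            return F

        # Patients with mutations in similar network regions end up with similar
        # smoothed profiles, so a standard clustering can stratify them:
        # subtypes = KMeans(n_clusters=4, n_init=10).fit_predict(network_smooth(F0, A))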

    Hierarchical Models for Relational Event Sequences

    Interaction within small groups can often be represented as a sequence of events, where each event involves a sender and a recipient. Recent methods for modeling network data in continuous time model the rate at which individuals interact, conditioned on the previous history of events as well as actor covariates. We present a hierarchical extension for modeling multiple such sequences, facilitating inferences about event-level dynamics and their variation across sequences. The hierarchical approach allows one to share information across sequences in a principled manner; we illustrate the efficacy of such sharing through a set of prediction experiments. After discussing methods for adequacy checking and model selection for this class of models, the method is illustrated with an analysis of high school classroom dynamics.
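
    As a hedged sketch of the modelling idea (the notation below is chosen here for illustration and is not taken from the paper): each sequence m gets its own coefficient vector governing the rate of events from sender s to recipient r given the history H_t and actor covariates, and these coefficients are tied together by a population distribution, which is what lets the model share information across sequences.

        \lambda^{(m)}_{sr}(t \mid H_t) = \exp\{\beta_m^{\top} x_{sr}(t, H_t)\},
        \qquad \beta_m \sim \mathcal{N}(\mu, \Sigma).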

    Anomaly Detection Based on Aggregation of Indicators

    Automatic anomaly detection is a major issue in various areas. Beyond mere detection, the identification of the origin of the problem that produced the anomaly is also essential. This paper introduces a general methodology that can assist human operators who aim at classifying monitoring signals. The main idea is to leverage expert knowledge by generating a very large number of indicators. A feature selection method is used to keep only the most discriminant indicators, which are used as inputs of a Naive Bayes classifier. The parameters of the classifier are thereby optimized indirectly by the selection process. The methodology is evaluated on simulated data designed to reproduce some of the anomaly types observed in real-world engines.
    Comment: 23rd annual Belgian-Dutch Conference on Machine Learning (Benelearn 2014), Brussels, Belgium (2014).
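
    A minimal sketch of the selection-plus-classification step described above, assuming a matrix X of indicator values (samples x indicators) and a label vector y; the scoring function, the number of retained indicators and the binary Naive Bayes variant are illustrative choices, not the paper's.

        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.naive_bayes import BernoulliNB
        from sklearn.pipeline import make_pipeline

        # Keep the most discriminant indicators, then classify with Naive Bayes.
        clf = make_pipeline(
            SelectKBest(score_func=mutual_info_classif, k=50),
            BernoulliNB(),
        )
        # clf.fit(X_train, y_train); predictions = clf.predict(X_test)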

    Discovering study-specific gene regulatory networks

    Microarrays are commonly used in biology because of their ability to simultaneously measure thousands of genes under different conditions. Due to their structure, typically containing a large number of variables but far fewer samples, scalable network analysis techniques are often employed. In particular, consensus approaches have recently been used that combine multiple microarray studies in order to find networks that are more robust. The purpose of this paper, however, is to combine multiple microarray studies to automatically identify subnetworks that are distinctive to specific experimental conditions rather than common to them all. To better understand key regulatory mechanisms and how they change under different conditions, we derive unique networks from multiple independent networks built using glasso, which goes beyond standard correlations. This involves calculating cluster prediction accuracies to detect the most predictive genes for a specific set of conditions. We differentiate between accuracies calculated using cross-validation within a selected cluster of studies (the intra prediction accuracy) and those calculated on a set of independent studies belonging to different study clusters (the inter prediction accuracy). Finally, we compare our method's results to related state-of-the-art techniques. We explore how the proposed pipeline performs on both synthetic data and real data (wheat and Fusarium). Our results show that subnetworks can be identified reliably that are specific to subsets of studies and that these networks reflect key mechanisms that are fundamental to the experimental conditions in each of those subsets.
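
    A minimal sketch of the per-study network estimation and the "unique subnetwork" idea, using scikit-learn's GraphicalLasso; the sparsity penalty, the edge threshold and the intersection rule below are illustrative, and the paper's pipeline further scores genes by intra- and inter-cluster prediction accuracy, which is not shown here.

        import numpy as np
        from sklearn.covariance import GraphicalLasso

        def study_network(expr, alpha=0.1):
            """Sparse gene-gene network for one study (expr: samples x genes)."""
            prec = GraphicalLasso(alpha=alpha).fit(expr).precision_
            edges = np.abs(prec) > 1e-8
            np.fill_diagonal(edges, False)   # keep off-diagonal links only
            return edges

        def unique_edges(cluster_nets, other_nets):
            """Edges present in every network of a study cluster and in none outside it."""
            shared = np.logical_and.reduce(cluster_nets)
            elsewhere = np.logical_or.reduce(other_nets)
            return shared & ~elsewhere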

    Dynamic Bayesian Predictive Synthesis in Time Series Forecasting

    We discuss model and forecast combination in time series forecasting. A foundational Bayesian perspective based on agent opinion analysis theory defines a new framework for density forecast combination, and encompasses several existing forecast pooling methods. We develop a novel class of dynamic latent factor models for time series forecast synthesis; simulation-based computation enables implementation. These models can dynamically adapt to time-varying biases, miscalibration and inter-dependencies among multiple models or forecasters. A macroeconomic forecasting study highlights the dynamic relationships among synthesized forecast densities, as well as the potential for improved forecast accuracy at multiple horizons.
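
    The paper's synthesis model is a dynamic latent factor model fitted by simulation; as a far simpler illustration of the forecast-pooling setting it generalises, the sketch below combines K forecast densities with weights that adapt to recent predictive performance. All names, the discounting rule and the log-score weighting are placeholders, not the paper's method.

        import numpy as np

        def adaptive_linear_pool(density_at_outcome, halflife=8):
            """Combine K forecasters with weights driven by discounted past log scores.
            density_at_outcome: array (T, K), forecaster k's predictive density
            evaluated at the realised outcome of period t. Weights for period t
            use information up to t-1 only.
            """
            T, K = density_at_outcome.shape
            decay = 0.5 ** (1.0 / halflife)
            scores = np.zeros(K)
            weights = np.full((T, K), 1.0 / K)
            pooled = np.empty(T)
            for t in range(T):
                if t > 0:
                    scores = decay * scores + np.log(density_at_outcome[t - 1])
                    w = np.exp(scores - scores.max())
                    weights[t] = w / w.sum()
                pooled[t] = weights[t] @ density_at_outcome[t]
            return pooled, weights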

    Anomaly Detection Based on Indicators Aggregation

    Automatic anomaly detection is a major issue in various areas. Beyond mere detection, the identification of the source of the problem that produced the anomaly is also essential. This is particularly the case in aircraft engine health monitoring, where detecting early signs of failure (anomalies) and helping the engine owner to implement the appropriate maintenance operations efficiently (fixing the source of the anomaly) are of crucial importance in reducing the costs attached to unscheduled maintenance. This paper introduces a general methodology that aims at classifying monitoring signals into normal ones and several classes of abnormal ones. The main idea is to leverage expert knowledge by generating a very large number of binary indicators. Each indicator corresponds to a fully parametrized anomaly detector built from parametric anomaly scores designed by experts. A feature selection method is used to keep only the most discriminant indicators, which are used as inputs of a Naive Bayes classifier. This gives an interpretable classifier based on interpretable anomaly detectors whose parameters have been optimized indirectly by the selection process. The proposed methodology is evaluated on simulated data designed to reproduce some of the anomaly types observed in real-world engines.
    Comment: International Joint Conference on Neural Networks (IJCNN 2014), Beijing, China (2014). arXiv admin note: substantial text overlap with arXiv:1407.088
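
    A minimal sketch of how a bank of binary indicators might be generated from expert-designed parametric anomaly scores, as described above; the function names, the grid of thresholds and the simple thresholding rule are an illustrative reading of the abstract, not the paper's exact construction. The resulting indicator matrix is what the feature-selection and Naive Bayes steps then operate on.

        import numpy as np

        def binary_indicators(signals, score_functions, thresholds):
            """One binary indicator per (anomaly score, threshold) pair.
            signals:         iterable of monitoring signals (one per sample)
            score_functions: callables mapping a signal to a scalar anomaly score
            thresholds:      candidate thresholds applied to every score
            Returns a (n_samples, n_scores * n_thresholds) 0/1 matrix.
            """
            columns = []
            for score in score_functions:
                s = np.array([score(x) for x in signals])
                for thr in thresholds:
                    columns.append((s > thr).astype(int))
            return np.column_stack(columns)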

    Dose rationale and pharmacokinetics of dexmedetomidine in mechanically ventilated newborns: impact of design optimisation

    Purpose: There is a need for alternative analgosedatives such as dexmedetomidine in neonates. Given the ethical and practical difficulties, protocol design for clinical trials in neonates should be carefully considered before implementation. Our objective was to identify a protocol design suitable for subsequent evaluation of the dosing requirements for dexmedetomidine in mechanically ventilated neonates. Methods: A published paediatric pharmacokinetic model was used to derive the dosing regimen for dexmedetomidine in a first-in-neonate study. Optimality criteria were applied to optimise the blood sampling schedule. The impact of sampling schedule optimisation on model parameter estimation was assessed by simulation and re-estimation procedures for different simulation scenarios. The optimised schedule was then implemented in a neonatal pilot study. Results: Parameter estimates were more precise and similarly accurate in the optimised scenarios, as compared to empirical sampling (normalised root mean square error: 1673.1% vs. 13,229.4%; relative error: 46.4% vs. 9.1%). Most importantly, protocol deviations from the optimal design still allowed reasonable parameter estimation. Data analysis from the pilot group (n = 6) confirmed the adequacy of the optimised trial protocol. Dexmedetomidine pharmacokinetics in term neonates was scaled using allometry and maturation, but results showed a 20% higher clearance in this population compared to initial estimates obtained by extrapolation from a slightly older paediatric population. Clearance for a typical neonate, with a post-menstrual age (PMA) of 40 weeks and a weight of 3.4 kg, was 2.92 L/h. Extension of the study with 11 additional subjects showed a further increased clearance in pre-term subjects with lower PMA. Conclusions: The use of optimal design in conjunction with simulation scenarios improved the accuracy and precision of the estimates of the parameters of interest, taking into account protocol deviations, which are often unavoidable in this event-prone population.
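
    A hedged sketch of the allometric-plus-maturation scaling mentioned above, anchored to the typical clearance reported in the abstract (2.92 L/h at 3.4 kg and PMA 40 weeks); the sigmoidal maturation function and its parameters (tm50, hill) are generic placeholders, not the study's estimates.

        def dexmedetomidine_clearance(weight_kg, pma_weeks,
                                      cl_ref=2.92, wt_ref=3.4, pma_ref=40.0,
                                      tm50=46.0, hill=2.5):
            """Clearance (L/h) scaled by allometry (exponent 0.75) and a sigmoidal
            post-menstrual-age maturation term, normalised so that the reference
            neonate (3.4 kg, PMA 40 weeks) returns cl_ref."""
            size = (weight_kg / wt_ref) ** 0.75
            mat = pma_weeks ** hill / (tm50 ** hill + pma_weeks ** hill)
            mat_ref = pma_ref ** hill / (tm50 ** hill + pma_ref ** hill)
            return cl_ref * size * mat / mat_ref

        # Usage with hypothetical covariates, e.g. a 2.0 kg neonate at PMA 34 weeks:
        # dexmedetomidine_clearance(2.0, 34.0)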

    Physics-related epistemic uncertainties in proton depth dose simulation

    A set of physics models and parameters pertaining to the simulation of proton energy deposition in matter are evaluated in the energy range up to approximately 65 MeV, based on their implementations in the Geant4 toolkit. The analysis assesses several features of the models and the impact of their associated epistemic uncertainties, i.e. uncertainties due to lack of knowledge, on the simulation results. Possible systematic effects deriving from uncertainties of this kind are highlighted; their relevance in relation to the application environment and different experimental requirements is discussed, with emphasis on the simulation of radiotherapy set-ups. By documenting quantitatively the features of a wide set of simulation models and the related intrinsic uncertainties affecting the simulation results, this analysis provides guidance regarding the use of the concerned simulation tools in experimental applications; it also provides indications for further experimental measurements addressing the sources of such uncertainties.
    Comment: To be published in IEEE Trans. Nucl. Sci.

    Precision and neuronal dynamics in the human posterior parietal cortex during evidence accumulation

    Primate studies show slow ramping activity in posterior parietal cortex (PPC) neurons during perceptual decision-making. These findings have inspired a rich theoretical literature to account for this activity. These accounts are, however, largely unrelated to Bayesian theories of perception and to predictive coding, a related formulation of perceptual inference in the cortical hierarchy. Here, we tested a key prediction of such hierarchical inference, namely that the estimated precision (reliability) of information ascending the cortical hierarchy plays a key role in determining both the speed of decision-making and the rate of increase of PPC activity. Using dynamic causal modelling of magnetoencephalographic (MEG) evoked responses, recorded during a simple perceptual decision-making task, we recover ramping activity from an anatomically and functionally plausible network of regions, including early visual cortex, the middle temporal area (MT) and PPC. Precision, as reflected by the gain on pyramidal cell activity, was strongly correlated with both the speed of decision-making and the slope of PPC ramping activity. Our findings indicate that the dynamics of neuronal activity in the human PPC during perceptual decision-making recapitulate those observed in the macaque, and in so doing we link observations from primate electrophysiology and human choice behaviour. Moreover, the synaptic gain control modulating these dynamics is consistent with predictive coding formulations of evidence accumulation.
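
    Purely as a toy illustration of the relationship described above (not the dynamic causal model used in the study): if the gain on an ascending evidence signal stands in for precision, a higher gain produces both a steeper ramp and an earlier threshold crossing, i.e. faster decisions. All names and numbers below are hypothetical.

        import numpy as np

        def toy_ramp(evidence, gain, dt=0.01, threshold=1.0):
            """Accumulate a gain-weighted evidence stream until threshold.
            Returns the ramping trace and the decision time (None if no crossing)."""
            x = np.zeros(len(evidence))
            for t in range(1, len(evidence)):
                x[t] = x[t - 1] + dt * gain * evidence[t]
                if x[t] >= threshold:
                    return x[: t + 1], t * dt
            return x, None

        # Doubling the gain roughly halves the decision time for the same evidence:
        # trace_lo, rt_lo = toy_ramp(np.ones(500), gain=1.0)
        # trace_hi, rt_hi = toy_ramp(np.ones(500), gain=2.0)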