8,380 research outputs found
Genetic algorithms with DNN-based trainable crossover as an example of partial specialization of general search
Universal induction relies on some general search procedure that is doomed to
be inefficient. One possibility to achieve both generality and efficiency is to
specialize this procedure w.r.t. any given narrow task. However, complete
specialization that implies direct mapping from the task parameters to
solutions (discriminative models) without search is not always possible. In
this paper, partial specialization of general search is considered in the form
of genetic algorithms (GAs) with a specialized crossover operator. We perform a
feasibility study of this idea implementing such an operator in the form of a
deep feedforward neural network. GAs with trainable crossover operators are
compared with the result of complete specialization, which is also represented
as a deep neural network. Experimental results show that specialized GAs can be
more efficient than both general GAs and discriminative models.
Comment: AGI 2017 proceedings. The final publication is available at
link.springer.co
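The idea of a trainable crossover operator can be sketched in a few lines. The paper implements the operator as a deep feedforward network; the toy below substitutes a per-gene blend-weight vector that is adapted online whenever a perturbed operator produces a fitter child than the current one, on a simple sphere-maximization task. All names and settings here are illustrative, not taken from the paper.

```python
import random

def fitness(x):
    # Maximize the negative sphere function (optimum at the origin).
    return -sum(v * v for v in x)

def crossover(p1, p2, w):
    # Parameterized crossover: per-gene convex blend of the two parents.
    # (The paper uses a deep feedforward network here; a blend-weight
    # vector `w` is a minimal linear stand-in.)
    return [wi * a + (1.0 - wi) * b for wi, a, b in zip(w, p1, p2)]

def mutate(x, rng, rate=0.1, scale=0.3):
    return [v + rng.gauss(0, scale) if rng.random() < rate else v for v in x]

def run_ga(dim=5, pop_size=20, gens=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    w = [0.5] * dim  # crossover parameters, adapted from offspring quality
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        for _ in range(pop_size - len(parents)):
            p1, p2 = rng.sample(parents, 2)
            # Perturb the operator's parameters; keep the perturbation if it
            # yields a fitter child than the current operator would produce.
            w_trial = [min(1.0, max(0.0, wi + rng.gauss(0, 0.05))) for wi in w]
            child = crossover(p1, p2, w_trial)
            if fitness(child) > fitness(crossover(p1, p2, w)):
                w = w_trial
            children.append(mutate(child, rng))
        pop = parents + children
    return max(pop, key=fitness)

best = run_ga()
```

On a convex objective like this, blend crossover contracts the population quickly; the point of the sketch is only the feedback loop from offspring fitness back into the operator's parameters.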
Network estimation in State Space Model with L1-regularization constraint
Biological networks have arisen as an attractive paradigm of genomic science
ever since the introduction of large-scale genomic technologies, which carried
the promise of elucidating relationships in functional genomics. Microarray
technologies coupled with appropriate mathematical or statistical models have
made it possible to identify dynamic regulatory networks or to measure time
course of the expression level of many genes simultaneously. However, a key
limitation lies in the high-dimensional nature of such data, coupled with
the fact that these gene expression data are known to include some hidden
process. In that regard, we are concerned with deriving a method for inferring
a sparse dynamic network in a high-dimensional data setting. We assume that the
observations are noisy measurements of gene expression in the form of mRNAs,
whose dynamics can be described by some unknown or hidden process. We build an
input-dependent linear state space model from these hidden states and
demonstrate how an incorporated regularization constraint in an
Expectation-Maximization (EM) algorithm can be used to reverse engineer
transcriptional networks from gene expression profiling data. This corresponds
to estimating the model interaction parameters. The proposed method is
illustrated on time-course microarray data from a well-established T-cell
dataset. At the optimum tuning parameters we found genes TRAF5, JUND, CDK4,
CASP4, CD69, and C3X1 to have a higher number of inward-directed connections,
and FYB, CCNA2, AKT1, and CASP8 to have a higher number of outward-directed
connections. We recommend these genes as objects for further investigation.
Caspase 4 is also found to activate the expression of JunD which in turn
represses the cell cycle regulator CDC2.
Comment: arXiv admin note: substantial text overlap with arXiv:1308.359
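The penalized estimation step can be illustrated with a small sketch. The paper embeds the L1 constraint in the M-step of an EM algorithm over hidden states; the toy below skips the E-step (Kalman smoothing) entirely and treats the states as observed, so it shows only the lasso-style regression for each row of the interaction matrix A, solved by coordinate descent with soft thresholding. The penalty level and dimensions are arbitrary choices for the illustration.

```python
import random

def soft_threshold(z, lam):
    # Proximal operator of the L1 penalty.
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_row(X, y, lam, n_iter=100):
    # Coordinate descent for min_b 0.5*||y - X b||^2 + lam*||b||_1,
    # used as the penalized regression for one row of A.
    n, p = len(X), len(X[0])
    b = [0.0] * p
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j, then thresholded update.
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            b[j] = soft_threshold(rho, lam) / col_sq[j] if col_sq[j] else 0.0
    return b

# Simulate a sparse linear dynamical system x_{t+1} = A x_t + noise,
# then recover A row by row from the lagged states.
rng = random.Random(0)
A_true = [[0.8, 0.0, 0.0],
          [0.5, 0.0, 0.0],
          [0.0, 0.0, 0.0]]
T, d = 300, 3
x = [[rng.gauss(0, 0.5) for _ in range(d)]]
for _ in range(T - 1):
    prev = x[-1]
    x.append([sum(A_true[i][j] * prev[j] for j in range(d)) + rng.gauss(0, 0.5)
              for i in range(d)])

X = x[:-1]  # lagged states as regressors
A_hat = [lasso_row(X, [row[i] for row in x[1:]], lam=15.0) for i in range(d)]
```

The L1 penalty drives the entries outside the true support exactly to zero, which is what makes the recovered network sparse and interpretable; the full method additionally infers the hidden states themselves within EM.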
Inferring rate coefficients of biochemical reactions from noisy data with KInfer
Dynamical models of inter- and intra-cellular processes contain the rate constants of the biochemical reactions. These kinetic parameters are often not accessible directly through experiments, but they can be inferred from time-resolved data. Time-resolved data, that is, measurements of reactant concentrations at a series of time points, are usually affected by different types of error, whose sources can be both experimental and biological. The noise in the input data makes the estimation of the model parameters a very difficult task: if the inference method is not sufficiently robust to noise, the resulting estimates are not reliable. Therefore, "noise-robust" methods that estimate rate constants with maximum precision and accuracy are needed. In this report we present the probabilistic generative model of parameter inference implemented by the software prototype KInfer, and we show the ability of this tool to estimate the rate coefficients of biochemical network models with good accuracy even from very noisy input data.
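The inference task can be illustrated on the simplest possible case. The sketch below is not KInfer's probabilistic generative model: it fits the rate constant of first-order decay c(t) = c0·exp(-k·t) from noisy concentration measurements by an ordinary log-linear least-squares fit. All numbers are synthetic.

```python
import math
import random

def simulate_decay(k, c0, times, noise_sd, rng):
    # Noisy measurements of first-order decay c(t) = c0 * exp(-k t).
    return [c0 * math.exp(-k * t) + rng.gauss(0, noise_sd) for t in times]

def fit_rate(times, conc):
    # Least-squares slope of log-concentration against time; the rate
    # constant is the negative slope. Non-positive readings (possible
    # under additive noise) are dropped before taking logs.
    pts = [(t, math.log(c)) for t, c in zip(times, conc) if c > 0]
    n = len(pts)
    mt = sum(t for t, _ in pts) / n
    ml = sum(l for _, l in pts) / n
    slope = (sum((t - mt) * (l - ml) for t, l in pts)
             / sum((t - mt) ** 2 for t, _ in pts))
    return -slope

rng = random.Random(1)
times = [0.5 * i for i in range(20)]
conc = simulate_decay(0.3, 10.0, times, 0.1, rng)
k_hat = fit_rate(times, conc)
```

A log-linear fit weights measurement errors unevenly (late, low-concentration points count more after the log transform); a generative model with an explicit noise model, as in KInfer, avoids exactly this kind of distortion.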
Machine Learning for Fluid Mechanics
The field of fluid mechanics is rapidly advancing, driven by unprecedented
volumes of data from field measurements, experiments and large-scale
simulations at multiple spatiotemporal scales. Machine learning offers a wealth
of techniques to extract information from data that could be translated into
knowledge about the underlying fluid mechanics. Moreover, machine learning
algorithms can augment domain knowledge and automate tasks related to flow
control and optimization. This article presents an overview of the history,
current developments, and emerging opportunities of machine learning for fluid
mechanics. It outlines fundamental machine learning methodologies and discusses
their uses for understanding, modeling, optimizing, and controlling fluid
flows. The strengths and limitations of these methods are addressed from the
perspective of scientific inquiry that considers data as an inherent part of
modeling, experimentation, and simulation. Machine learning provides a powerful
information processing framework that can enrich, and possibly even transform,
current lines of fluid mechanics research and industrial applications.
Comment: To appear in the Annual Review of Fluid Mechanics, 202
Learning stable and predictive structures in kinetic systems: Benefits of a causal approach
Learning kinetic systems from data is one of the core challenges in many
fields. Identifying stable models is essential for the generalization
capabilities of data-driven inference. We introduce a computationally efficient
framework, called CausalKinetiX, that identifies structure from discrete time,
noisy observations, generated from heterogeneous experiments. The algorithm
assumes the existence of an underlying, invariant kinetic model, a key
criterion for reproducible research. Results on both simulated and real-world
examples suggest that learning the structure of kinetic systems benefits from a
causal perspective. The identified variables and models allow for a concise
description of the dynamics across multiple experimental settings and can be
used for prediction in unseen experiments. We observe significant improvements
compared to well established approaches focusing solely on predictive
performance, especially for out-of-sample generalization.
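The invariance idea behind CausalKinetiX can be illustrated with a toy regression version. The actual method scores candidate ODE-based models for kinetic systems; the sketch below shows only the underlying principle: a causal predictor earns a coefficient that stays stable across heterogeneous experimental environments, while a spurious predictor's coefficient shifts with the environment. All variable names and the data-generating setup are illustrative.

```python
import random

def coef(x, y):
    # One-predictor least-squares slope (no intercept, for brevity).
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def instability(datasets, var):
    # Variance of the fitted coefficient across experimental environments:
    # an invariant (causal) predictor should get a stable coefficient.
    coefs = [coef(d[var], d["y"]) for d in datasets]
    m = sum(coefs) / len(coefs)
    return sum((c - m) ** 2 for c in coefs) / len(coefs)

rng = random.Random(0)
envs = []
for s in (0.5, 1.5, 3.0):  # environment-specific intervention strength
    x1 = [rng.gauss(0, 1) for _ in range(200)]
    y = [2.0 * v + rng.gauss(0, 0.1) for v in x1]          # x1 causes y
    x2 = [s * yi + rng.gauss(0, 0.1) for yi in y]          # x2 is an effect of y
    envs.append({"x1": x1, "x2": x2, "y": y})

# Rank candidate predictors by coefficient stability across environments.
ranked = sorted(["x1", "x2"], key=lambda v: instability(envs, v))
```

Within any single environment, x2 predicts y at least as well as x1 does; only pooling heterogeneous experiments exposes it as non-invariant, which is the stability criterion the paper exploits.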
A survey on utilization of data mining approaches for dermatological (skin) diseases prediction
Due to recent technology advances, large volumes of medical data are obtained. These data contain valuable information, so data mining techniques can be used to extract useful patterns. This paper is intended to introduce data mining and its various techniques, and to survey the available literature on medical data mining. We focus mainly on the application of data mining to skin diseases. A categorization has been provided based on the different data mining techniques, and the utility of the various data mining methodologies is highlighted. Generally, association mining is suitable for extracting rules; it has been used especially in cancer diagnosis. Classification is a robust method in medical mining, and in this paper we summarize its different uses in dermatology. It is one of the most important methods for diagnosis of erythemato-squamous diseases, with approaches including Neural Networks, Genetic Algorithms, and fuzzy classification. Clustering is a useful method in medical image mining; the purpose of clustering techniques is to find a structure in the given data by finding similarities according to data characteristics, and clustering has some applications in dermatology. Besides introducing the different mining methods, we investigate some challenges that exist in mining skin data.
Finding undetected protein associations in cell signaling by belief propagation
External information propagates in the cell mainly through signaling cascades
and transcriptional activation, allowing it to react to a wide spectrum of
environmental changes. High throughput experiments identify numerous molecular
components of such cascades that may, however, interact through unknown
partners. Some of them may be detected using data coming from the integration
of a protein-protein interaction network and mRNA expression profiles. This
inference problem can be mapped onto the problem of finding appropriate optimal
connected subgraphs of a network defined by these datasets. The optimization
procedure turns out to be computationally intractable in general. Here we
present a new distributed algorithm for this task, inspired by statistical
physics, and apply this scheme to alpha-factor and drug-perturbation data in
yeast. We identify the role of the COS8 protein, a member of a gene family of
previously unknown function, and validate the results by genetic experiments.
The algorithm we present is especially suited for very large datasets, can run
in parallel, and can be adapted to other problems in systems biology. On
renowned benchmarks it outperforms other algorithms in the field.
Comment: 6 pages, 3 figures, 1 table, Supporting Informatio
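The combinatorial problem being solved can be made concrete. Given node prizes (from expression data) and edge costs (from the interaction network), one seeks a connected subgraph maximizing total prize minus total edge cost. The paper solves this with a distributed max-sum/belief-propagation scheme; the greedy sketch below (with made-up node names and weights) only illustrates the objective, and on this toy instance it deliberately misses the better solution that reaches D through the low-prize node C, the kind of local trap that motivates a global message-passing method.

```python
def grow_subgraph(prizes, edges, root):
    # Greedy stand-in for the prize-collecting connected-subgraph problem:
    # repeatedly absorb the frontier node with the largest positive
    # prize-minus-edge-cost gain, keeping the subgraph connected.
    adj = {}
    for (u, v), c in edges.items():
        adj.setdefault(u, []).append((v, c))
        adj.setdefault(v, []).append((u, c))
    chosen = {root}
    score = prizes[root]
    while True:
        best = None
        for u in chosen:
            for v, c in adj.get(u, []):
                if v not in chosen:
                    gain = prizes[v] - c
                    if gain > 0 and (best is None or gain > best[0]):
                        best = (gain, v)
        if best is None:
            return chosen, score
        score += best[0]
        chosen.add(best[1])

prizes = {"A": 1.0, "B": 5.0, "C": 0.1, "D": 4.0}
edges = {("A", "B"): 1.0, ("B", "C"): 1.0, ("C", "D"): 1.0, ("A", "D"): 5.0}
nodes, score = grow_subgraph(prizes, edges, "A")
```

Here the greedy rule stops at {A, B} with score 5.0, even though paying for the unprofitable hop through C would unlock D's prize (total 5.1 + 1.0 = wait; prize sum 10.1 minus cost 3.0 = 7.1); a global optimizer over connected subgraphs can see that trade, which is the point of the belief-propagation formulation.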