Incremental dimension reduction of tensors with random index
We present an incremental, scalable and efficient dimension reduction
technique for tensors that is based on sparse random linear coding. Data is
stored in a compactified representation with fixed size, which makes memory
requirements low and predictable. Component encoding and decoding are performed
on-line without computationally expensive re-analysis of the data set. The
range of tensor indices can be extended dynamically without modifying the
component representation. This idea originates from a mathematical model of
semantic memory and a method known as random indexing in natural language
processing. We generalize the random-indexing algorithm to tensors and present
signal-to-noise-ratio simulations for representations of vectors and matrices.
We also present a mathematical analysis of the approximate orthogonality of
high-dimensional ternary vectors, which is a property that underpins this and
other similar random-coding approaches to dimension reduction. To further
demonstrate the properties of random indexing we present results of a synonym
identification task. The method presented here has some similarities with
random projection and Tucker decomposition, but it performs well only at high
dimensionality (n > 10^3). Random indexing is useful for a range of complex
practical problems, e.g., in natural language processing, data mining, pattern
recognition, event detection, graph searching and search engines. Prototype
software is provided. It supports encoding and decoding of tensors of order >=
1 in a unified framework, i.e., vectors, matrices and higher order tensors.
Comment: 36 pages, 9 figures
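A minimal sketch of the random-indexing idea for order-1 tensors (vectors), based only on what the abstract states: each component is assigned a sparse ternary index vector, a fixed-size representation is updated incrementally, and decoding is an approximate inner-product read-out. The dimensionality, sparsity, class and function names below are illustrative assumptions, not the authors' prototype software.

```python
import numpy as np

def ternary_index_vector(dim=10_000, nnz=20, rng=None):
    """Sparse random ternary index vector: half the nonzeros are +1, half are -1."""
    rng = rng or np.random.default_rng()
    v = np.zeros(dim)
    idx = rng.choice(dim, size=nnz, replace=False)
    v[idx[: nnz // 2]] = 1.0
    v[idx[nnz // 2:]] = -1.0
    return v

class RandomIndexVector:
    """Fixed-size distributed representation of a high-dimensional vector."""
    def __init__(self, dim=10_000, nnz=20, seed=0):
        self.dim, self.nnz = dim, nnz
        self.rng = np.random.default_rng(seed)
        self.index = {}                 # component id -> its ternary index vector
        self.state = np.zeros(dim)      # compact representation, size is fixed

    def _index_for(self, component):
        # New components (a dynamically extended index range) can be added
        # at any time without modifying the stored state.
        if component not in self.index:
            self.index[component] = ternary_index_vector(self.dim, self.nnz, self.rng)
        return self.index[component]

    def encode(self, component, value):
        """On-line update: add value times the component's index vector."""
        self.state += value * self._index_for(component)

    def decode(self, component):
        """Approximate read-out; noise comes from near-orthogonality, not exactness."""
        u = self._index_for(component)
        return self.state @ u / (u @ u)

# Usage: store two components, then read one back approximately.
ri = RandomIndexVector()
ri.encode("alpha", 3.0)
ri.encode("beta", -1.5)
print(round(ri.decode("alpha"), 2))   # close to 3.0, plus random-coding noise
```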
The Inferred Cardiogenic Gene Regulatory Network in the Mammalian Heart
Cardiac development is a complex, multiscale process encompassing cell fate adoption, differentiation and morphogenesis. To elucidate pathways underlying this process, a recently developed algorithm to reverse engineer gene regulatory networks was applied to time-course microarray data obtained from the developing mouse heart. Approximately 200 genes of interest were input into the algorithm to generate putative network topologies that are capable of explaining the experimental data via model simulation. To cull specious network interactions, thousands of putative networks are merged and filtered to generate scale-free, hierarchical networks that are statistically significant and biologically relevant. The networks are validated with known gene interactions and used to predict regulatory pathways important for the developing mammalian heart. Areas under the precision-recall curve and the receiver operating characteristic curve are 9% and 58%, respectively. Of the top 10 ranked predicted interactions, 4 have already been validated. The algorithm is further tested using a network enriched with known interactions and another depleted of them. The inferred networks contained more interactions for the enriched network than for the depleted one. In all test cases, maximum performance of the algorithm was achieved when the purely data-driven method of network inference was combined with a data-independent, functional-based association method. Lastly, the network generated from the list of approximately 200 genes of interest was expanded using gene-profile uniqueness metrics to include approximately 900 additional known mouse genes and to form the most likely cardiogenic gene regulatory network. The resultant network supports known regulatory interactions and contains several novel cardiogenic regulatory interactions. The method outlined herein provides an informative approach to network inference and leads to clear testable hypotheses related to gene regulation.
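The reported area-under-curve figures can in principle be reproduced from a ranked list of predicted interactions scored against a gold standard; a brief sketch using scikit-learn, where the gene names, scores, and known-interaction set are invented for illustration and are not data from the paper:

```python
from sklearn.metrics import average_precision_score, roc_auc_score

# Hypothetical scored predictions: (regulator, target, confidence score).
predictions = [
    ("Gata4",  "Nkx2-5", 0.92),
    ("Tbx5",   "Gata4",  0.71),
    ("Mef2c",  "Myh6",   0.64),
    ("Hand2",  "Tbx5",   0.40),
    ("Nkx2-5", "Mef2c",  0.15),
]
# Hypothetical gold standard of known regulatory interactions.
known = {("Gata4", "Nkx2-5"), ("Mef2c", "Myh6")}

scores = [s for _, _, s in predictions]
labels = [1 if (a, b) in known else 0 for a, b, _ in predictions]

print("AUPR :", round(average_precision_score(labels, scores), 3))
print("AUROC:", round(roc_auc_score(labels, scores), 3))
```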
A Quantitative Study of Pure Parallel Processes
In this paper, we study the interleaving -- or pure merge -- operator that
most often characterizes parallelism in concurrency theory. This operator is a
principal cause of the so-called combinatorial explosion that makes the
analysis of process behaviours (e.g., by model-checking) very hard, at least
from the point of view of computational complexity. The originality of our approach is
to study this combinatorial explosion phenomenon on average, relying on
advanced analytic combinatorics techniques. We study various measures that
contribute to a better understanding of the process behaviours represented as
plane rooted trees: the number of runs (corresponding to the width of the
trees), the expected total size of the trees as well as their overall shape.
Two practical outcomes of our quantitative study are also presented: (1) a
linear-time algorithm to compute the probability of a concurrent run prefix,
and (2) an efficient algorithm for uniform random sampling of concurrent runs.
These provide interesting responses to the combinatorial explosion problem.
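For the special case where each parallel process is a purely sequential run of actions, uniform sampling of a concurrent run admits a simple incremental scheme: at each step, take the next action from process i with probability proportional to the number of actions it has left. The sketch below illustrates that idea only; it is not the paper's algorithm for general plane rooted trees, and the process and action names are invented.

```python
import random

def sample_interleaving(processes, rng=random):
    """Uniformly sample one interleaving (concurrent run) of sequential processes.

    Choosing a process with probability proportional to its remaining length
    yields the uniform distribution over all merges of the sequences.
    """
    remaining = [list(p) for p in processes]
    run = []
    while any(remaining):
        weights = [len(p) for p in remaining]
        i = rng.choices(range(len(remaining)), weights=weights)[0]
        run.append(remaining[i].pop(0))
    return run

# Usage: two small sequential processes composed in parallel (pure merge).
print(sample_interleaving([["a1", "a2"], ["b1", "b2", "b3"]]))
```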
Novel metrics for evaluating the functional coherence of protein groups via protein semantic network
Metrics are presented for assessing the overall functional coherence of a group of proteins based on the associated biomedical literature.
Significance Testing Against the Random Model for Scoring Models on Top k Predictions
Performance at top k predictions, where instances are ranked by a (learned) scoring model, has
been used as an evaluation metric in machine learning in various settings, for example where the entire
corpus is unknown (e.g., the web) or where the results are to be used by a person with limited time or
resources (e.g., ranking financial news stories where the investor only has time to look at relatively
few stories per day). This evaluation metric is primarily used to report whether the performance
of a given method is significantly better than other (baseline) methods. It has not, however, been
used to show whether the result is significant when compared to the simplest of baselines: the
random model. If no models outperform the random model at a given confidence level, then the
results may not be worth reporting. This paper introduces a technique to perform an analysis of the
expected performance of the top k predictions from the random model given k and a p-value on an
evaluation dataset D. The technique is based on the realization that the distribution of the number
of positives seen in the top k predictions follows a hypergeometric distribution, which has
well-defined statistical density functions. As this distribution is discrete, we show that parametric
estimations based on a binomial distribution are almost always in complete agreement with the
discrete distribution and that, if they differ, an interpolation of the discrete bounds gets very close
to the parametric estimations. The technique is demonstrated on results from three prior published
works, in which it clearly shows that even though performance is greatly increased (sometimes over
100%) with respect to the expected performance of the random model (at p = 0.5), these results,
although qualitatively impressive, are not always as significant (p = 0.1) as the improvements
might suggest. The technique is used to show, given k, both how
many positive instances are needed to achieve a specific significance threshold and how
significant a given top k performance is. The technique, when used in a more global setting, is able
to identify the crossover points, with respect to k, at which a method becomes significant for a given
p. Lastly, the technique is used to generate a complete confidence curve, which shows a general
trend over all k and visually shows where a method is significantly better than the random model
over all values of k.
Information Systems Working Papers Series
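A minimal sketch of the core computation described here, using SciPy's hypergeometric distribution: for a dataset of N instances of which P are positive, the p-value for seeing at least x positives in the top k predictions of a random ranking is a hypergeometric tail probability. The function names and the numbers in the usage example are illustrative, not values from the paper.

```python
from scipy.stats import hypergeom

def top_k_p_value(N, P, k, x):
    """P(at least x positives in the top k) under the random model."""
    # sf(x - 1) = P(X >= x) for the discrete hypergeometric distribution.
    return hypergeom.sf(x - 1, N, P, k)

def positives_needed(N, P, k, p_value):
    """Smallest number of positives in the top k that reaches the threshold."""
    for x in range(k + 1):
        if top_k_p_value(N, P, k, x) <= p_value:
            return x
    return None

# Usage: 10,000 instances, 100 of them positive, looking at the top 50.
print(top_k_p_value(N=10_000, P=100, k=50, x=3))            # significance of 3 positives
print(positives_needed(N=10_000, P=100, k=50, p_value=0.05))
```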
Making Presentation Math Computable
This open-access book addresses the issue of translating mathematical expressions from LaTeX to the syntax of Computer Algebra Systems (CAS). Over the past decades, especially in the domain of Science, Technology, Engineering, and Mathematics (STEM), LaTeX has become the de facto standard to typeset mathematical formulae in publications. Since scientists are generally required to publish their work, LaTeX has become an integral part of today's publishing workflow. On the other hand, modern research increasingly relies on CAS to simplify, manipulate, compute, and visualize mathematics. However, existing LaTeX import functions in CAS are limited to simple arithmetic expressions and are, therefore, insufficient for most use cases. Consequently, the workflow of experimenting and publishing in the sciences often includes time-consuming and error-prone manual conversions between presentational LaTeX and computational CAS formats. To address the lack of a reliable and comprehensive translation tool between LaTeX and CAS, this thesis makes the following three contributions. First, it provides an approach to semantically enhance LaTeX expressions with sufficient semantic information for translations into CAS syntaxes. Second, it demonstrates the first context-aware LaTeX to CAS translation framework, LaCASt. Third, the thesis provides a novel approach to evaluate the performance of LaTeX to CAS translations on large-scale datasets with an automatic verification of equations in digital mathematical libraries. This is an open access book.
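To give a sense of the problem space (this is not the LaCASt framework the book describes), general-purpose CAS importers already handle simple presentational LaTeX; a sketch with SymPy's LaTeX parser, which needs the optional antlr4 runtime installed:

```python
from sympy import simplify
from sympy.parsing.latex import parse_latex   # requires antlr4-python3-runtime

# A simple arithmetic expression survives the round trip ...
expr = parse_latex(r"\frac{x^2 - 1}{x - 1}")
print(simplify(expr))   # x + 1

# ... but presentational LaTeX for special functions is where plain parsing
# stops being enough: markup such as \operatorname{Li}_2(x) carries no CAS
# semantics by itself and needs the kind of semantic enrichment the book
# describes before it can be translated reliably.
```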
Systems approaches to drug repositioning
PhD Thesis
Drug discovery has overall become less fruitful and more costly, despite vastly increased
biomedical knowledge and evolving approaches to Research and Development (R&D).
One complementary approach to drug discovery is that of drug repositioning which
focusses on identifying novel uses for existing drugs. By focussing on existing drugs
that have already reached the market, drug repositioning has the potential to both
reduce the timeframe and cost of getting a disease treatment to those that need it.
Many marketed examples of repositioned drugs have been found via serendipitous or
rational observations, highlighting the need for more systematic methodologies.
Systems approaches have the potential to enable the development of novel methods to
understand the action of therapeutic compounds, but require an integrative approach
to biological data. Integrated networks can facilitate systems-level analyses by combining
multiple sources of evidence to provide a rich description of drugs, their targets and
their interactions. Classically, such networks can be mined manually where a skilled
person can identify portions of the graph that are indicative of relationships between
drugs and highlight possible repositioning opportunities. However, this approach is
not scalable. Automated procedures are required to mine integrated networks systematically
for these subgraphs and bring them to the attention of the user. The aim
of this project was the development of novel computational methods to identify new
therapeutic uses for existing drugs (with particular focus on active small molecules)
using data integration.
A framework for integrating disparate data relevant to drug repositioning, Drug Repositioning
Network Integration Framework (DReNInF) was developed as part of this
work. This framework includes a high-level ontology, Drug Repositioning Network
Integration Ontology (DReNInO), to aid integration and subsequent mining; a suite
of parsers; and a generic semantic graph integration platform. This framework enables
the production of integrated networks maintaining strict semantics that are important
in, but not exclusive to, drug repositioning. The DReNInF is then used to create Drug Repositioning Network Integration (DReNIn), a semantically-rich Resource Description
Framework (RDF) dataset. A Web-based front end was developed, which includes
a SPARQL Protocol and RDF Query Language (SPARQL) endpoint for querying this
dataset.
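As a minimal illustration of querying such an RDF dataset through SPARQL, the sketch below uses rdflib; the namespace URI, predicates, and triples are invented for the example and are not the DReNIn schema.

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/drug-repositioning/")  # hypothetical vocabulary

g = Graph()
g.add((EX.aspirin, EX.targets, EX.PTGS2))
g.add((EX.PTGS2, EX.associatedWith, EX.inflammation))

# Find drug -> target -> disease paths, the kind of pattern mined for repositioning.
query = """
PREFIX ex: <http://example.org/drug-repositioning/>
SELECT ?drug ?target ?disease WHERE {
    ?drug   ex:targets        ?target .
    ?target ex:associatedWith ?disease .
}
"""
for drug, target, disease in g.query(query):
    print(drug, target, disease)
```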
To automate the mining of drug repositioning datasets, a formal framework for the
definition of semantic subgraphs was established and a method for Drug Repositioning
Semantic Mining (DReSMin) was developed. DReSMin is an algorithm for mining
semantically-rich networks for occurrences of a given semantic subgraph. This algorithm
allows instances of complex semantic subgraphs that contain data about putative
drug repositioning opportunities to be identified in a computationally tractable
fashion, scaling close to linearly with network data.
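A hedged sketch of the underlying idea, mining a typed (semantic) network for occurrences of a small semantic subgraph, here via networkx's labelled subgraph isomorphism matcher. It only illustrates the concept; it is not the DReSMin algorithm or its scaling behaviour, and the node types and relation labels are invented.

```python
import networkx as nx
from networkx.algorithms.isomorphism import DiGraphMatcher

# Integrated network: nodes carry a semantic type, edges a relation label.
net = nx.DiGraph()
net.add_node("drugA", type="Drug")
net.add_node("protP", type="Protein")
net.add_node("disX",  type="Disease")
net.add_edge("drugA", "protP", relation="binds")
net.add_edge("protP", "disX",  relation="associatedWith")

# Semantic subgraph: Drug -binds-> Protein -associatedWith-> Disease.
pattern = nx.DiGraph()
pattern.add_node("d", type="Drug")
pattern.add_node("p", type="Protein")
pattern.add_node("x", type="Disease")
pattern.add_edge("d", "p", relation="binds")
pattern.add_edge("p", "x", relation="associatedWith")

matcher = DiGraphMatcher(
    net, pattern,
    node_match=lambda a, b: a["type"] == b["type"],
    edge_match=lambda a, b: a["relation"] == b["relation"],
)
for mapping in matcher.subgraph_isomorphisms_iter():
    print(mapping)   # e.g. {'drugA': 'd', 'protP': 'p', 'disX': 'x'}
```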
The ability of DReSMin to identify novel Drug-Target (D-T) associations was investigated.
9,643,061 putative D-T interactions were identified and ranked, with a strong
correlation observed between highly scored associations and those supported by the literature.
The 20 top-ranked associations were analysed in more detail, with 14 found
to be novel and six supported by the literature. It was also shown that
this approach prioritises known D-T interactions better than other state-of-the-art
methodologies.
The ability of DReSMin to identify novel Drug-Disease (Dr-D) indications was also
investigated. As target-based approaches are utilised heavily in the field of drug discovery,
it is necessary to have a systematic method to rank Gene-Disease (G-D) associations.
Although methods already exist to collect, integrate and score these associations,
these scores are often not a reliable reflection of expert knowledge. Therefore, an
integrated data-driven approach to drug repositioning was developed using a Bayesian
statistics approach and applied to rank 309,885 G-D associations using existing knowledge.
Ranked associations were then integrated with other biological data to produce
a semantically-rich drug discovery network. Using this network it was shown that
diseases of the central nervous system (CNS) provide an area of interest. The network
was then systematically mined for semantic subgraphs that capture novel Dr-D relations.
275,934 Dr-D associations were identified and ranked, with those more likely to
be side-effects filtered out.
Work presented here includes novel tools and algorithms to enable research within
the field of drug repositioning. DReNIn, for example, includes data that previous
comparable datasets relevant to drug repositioning have neglected, such as clinical
trial data and drug indications. Furthermore, the dataset may be easily extended
using DReNInF to include future data as and when it becomes available, such as G-D
association directionality (i.e. is the mutation a loss-of-function or gain-of-function).
Unlike other algorithms and approaches developed for drug repositioning, DReSMin
can be used to infer any types of associations captured in the target semantic network.
Moreover, the approaches presented here should be more generically applicable to
other fields that require algorithms for the integration and mining of semantically rich
networks.
Engineering and Physical Sciences Research Council (EPSRC) and GS
Categorization of interestingness measures for knowledge extraction
Finding interesting association rules is an important and active research
field in data mining. The algorithms of the Apriori family are based on two
rule extraction measures, support and confidence. Although these two measures
have the virtue of being algorithmically fast, they generate a prohibitive
number of rules most of which are redundant and irrelevant. It is therefore
necessary to use further measures which filter out uninteresting rules. Many
survey studies have since examined interestingness measures from several
points of view, and various studies have sought to identify "good" properties
of rule extraction measures; these properties have been assessed on 61 measures.
The purpose of this paper is twofold: first, to extend the number of measures
and properties studied, in addition to formalizing the properties proposed in
the literature; and second, in
the light of this formal study, to categorize the studied measures. This paper
thus identifies categories of measures in order to help users efficiently
select one or more appropriate measures during the knowledge extraction
process. The evaluation of the properties on the 61 measures enabled us to
identify 7 classes of measures, obtained using two different clustering techniques.
Comment: 34 pages, 4 figures
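As a concrete reminder of the two Apriori-style measures the paper starts from, and of one of the many further measures it categorizes, the short sketch below computes support, confidence, and lift of a rule over a toy transaction set; the items and numbers are invented for illustration.

```python
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "milk"},
    {"beer", "chips"},
    {"bread", "butter", "beer"},
]

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Estimated P(consequent | antecedent), the second Apriori measure."""
    return support(antecedent | consequent) / support(antecedent)

def lift(antecedent, consequent):
    """One of the further interestingness measures: >1 suggests positive dependence."""
    return confidence(antecedent, consequent) / support(consequent)

rule = ({"bread"}, {"butter"})
print("support   ", support(rule[0] | rule[1]))   # 0.6
print("confidence", confidence(*rule))            # 0.75
print("lift      ", round(lift(*rule), 3))        # 1.25
```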