8,562 research outputs found
Effective pattern discovery for text mining
Many data mining techniques have been proposed for mining useful patterns in text documents. However, how to effectively use and update discovered patterns remains an open research issue, especially in the domain of text mining. Because most existing text mining methods adopt term-based approaches, they all suffer from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern-based (or phrase-based) approaches should perform better than term-based ones, but many experiments have not supported this hypothesis. This paper presents an innovative technique, effective pattern discovery, which includes the processes of pattern deploying and pattern evolving, to improve the effectiveness of using and updating discovered patterns for finding relevant and interesting information. Substantial experiments on the RCV1 data collection and TREC topics demonstrate that the proposed solution achieves encouraging performance.
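As a rough illustration of the pattern-based approach the abstract contrasts with term-based methods, here is a minimal sketch of frequent term-pattern (itemset) mining over documents. This is not the paper's pattern deploying/evolving technique; the function name, the whitespace tokenization, and the support threshold are illustrative assumptions.

```python
from itertools import combinations
from collections import Counter

def frequent_term_patterns(documents, min_support=2, max_size=2):
    """Count term co-occurrence patterns (itemsets) across documents and
    keep those that appear in at least `min_support` documents."""
    counts = Counter()
    for doc in documents:
        # Deduplicate terms per document so each document supports
        # a pattern at most once (document-level support).
        terms = sorted(set(doc.lower().split()))
        for size in range(1, max_size + 1):
            for pattern in combinations(terms, size):
                counts[pattern] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

docs = ["data mining of text",
        "text mining patterns",
        "mining useful patterns in text"]
patterns = frequent_term_patterns(docs)
```

On these three toy documents, the pair `("mining", "text")` is supported by all of them, which is exactly the kind of multi-term pattern a term-based model would miss.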
Information profiles for DNA pattern discovery
Finite-context modeling is a powerful tool for compressing and hence for
representing DNA sequences. We describe an algorithm to detect genomic
regularities, within a blind discovery strategy. The algorithm uses information
profiles built using suitable combinations of finite-context models. We used
the genome of the fission yeast Schizosaccharomyces pombe strain 972 h- for
illustration, unveilling locations of low information content, which are
usually associated with DNA regions of potential biological interest.Comment: Full version of DCC 2014 paper "Information profiles for DNA pattern
discovery
Multiple Hypothesis Testing in Pattern Discovery
The problem of multiple hypothesis testing arises when there are more than
one hypothesis to be tested simultaneously for statistical significance. This
is a very common situation in many data mining applications. For instance,
assessing simultaneously the significance of all frequent itemsets of a single
dataset entails a host of hypotheses, one for each itemset. A multiple
hypothesis testing method is needed to control the number of false positives
(Type I error). Our contribution in this paper is to extend the multiple
hypothesis framework to be used with a generic data mining algorithm. We
provide a method that provably controls the family-wise error rate (FWER, the
probability of at least one false positive) in the strong sense. We evaluate
the performance of our solution on both real and generated data. The results
show that our method controls the FWER while maintaining the power of the test.
Comment: 28 pages
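The strong-sense FWER control described above can be illustrated with Holm's step-down procedure, a standard method that controls the probability of at least one false positive among m simultaneous tests. This is a generic textbook sketch, not the paper's data-mining-specific method.

```python
def holm_reject(p_values, alpha=0.05):
    """Holm's step-down procedure: controls the family-wise error rate
    (FWER) in the strong sense for len(p_values) simultaneous tests.
    Returns a list of booleans: True where the hypothesis is rejected."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # The k-th smallest p-value is compared against alpha / (m - k),
        # with k = rank counted from 0.
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test is retained, all larger p-values are too
    return reject
```

For example, with p-values (0.001, 0.04, 0.03) at alpha = 0.05, only the first hypothesis is rejected: 0.001 passes 0.05/3, but the second-smallest p-value 0.03 fails 0.05/2.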
An Algorithm for Pattern Discovery in Time Series
We present a new algorithm for discovering patterns in time series and other
sequential data. We exhibit a reliable procedure for building the minimal set
of hidden, Markovian states that is statistically capable of producing the
behavior exhibited in the data -- the underlying process's causal states.
Unlike conventional methods for fitting hidden Markov models (HMMs) to data,
our algorithm makes no assumptions about the process's causal architecture (the
number of hidden states and their transition structure), but rather infers it
from the data. It starts with assumptions of minimal structure and introduces
complexity only when the data demand it. Moreover, the causal states it infers
have important predictive optimality properties that conventional HMM states
lack. We introduce the algorithm, review the theory behind it, prove its
asymptotic reliability, use large deviation theory to estimate its rate of
convergence, and compare it to other algorithms which also construct HMMs from
data. We also illustrate its behavior on an example process, and report
selected numerical results from an implementation.
Comment: 26 pages, 5 figures, 5 tables; http://www.santafe.edu/projects/CompMech. Added discussion of algorithm parameters; improved treatment of convergence and time complexity; added comparison to an older method.
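The core idea above, grouping histories that predict the same future into causal states, can be sketched very crudely: estimate the next-symbol distribution conditioned on each length-L history and merge histories whose distributions are close. This is only a toy approximation of causal-state reconstruction; the function name, the total-variation merge criterion, and the tolerance are illustrative assumptions, not the paper's algorithm.

```python
from collections import defaultdict, Counter

def causal_state_sketch(sequence, history_len=1, tol=0.1):
    """Group length-L histories whose empirical next-symbol distributions
    differ by at most `tol` in total variation; each group of histories
    approximates one causal state."""
    nexts = defaultdict(Counter)
    for i in range(len(sequence) - history_len):
        history = sequence[i:i + history_len]
        nexts[history][sequence[i + history_len]] += 1
    # Normalize counts into conditional next-symbol distributions.
    dists = {}
    for h, counts in nexts.items():
        total = sum(counts.values())
        dists[h] = {s: n / total for s, n in counts.items()}
    states = []  # each state is a list of histories with similar futures
    for h, d in dists.items():
        for state in states:
            rep = dists[state[0]]
            symbols = set(d) | set(rep)
            tv = 0.5 * sum(abs(d.get(s, 0) - rep.get(s, 0)) for s in symbols)
            if tv <= tol:
                state.append(h)
                break
        else:
            states.append([h])
    return states
```

On the period-2 sequence "ababab", the histories "a" and "b" predict different next symbols, so two states emerge; on "aaaa" a single state suffices, mirroring how the algorithm introduces complexity only when the data demand it.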
Parallel Pattern Discovery
An interesting research problem in dataset analysis is the discovery of patterns. Patterns can show how a dataset was formed and how it repeats itself. Due to the fast growth of data collections, there is a need for algorithms that scale with the data. In this thesis we examine how to parallelize an existing algorithm using three ideas: generalization, decomposition, and reification. We apply these ideas to SPEXS, a pattern discovery algorithm, and derive a new parallel algorithm, SPEXS2, which we also implement. We also analyze several problems that arose while implementing the generic algorithm. The ideas described here could be used to generalize and parallelize other algorithms as well.
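The decomposition idea mentioned above can be sketched generically: split the candidate-pattern space into independent tasks and evaluate them concurrently. This is a minimal illustration, not SPEXS2 itself; the thread-based executor, the DNA alphabet, and the overlapping-count definition are assumptions made for the example.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def count_occurrences(pattern, text):
    # Count possibly overlapping occurrences of `pattern` in `text`.
    return sum(text.startswith(pattern, i) for i in range(len(text)))

def parallel_pattern_counts(text, alphabet="ACGT", length=2, workers=4):
    """Decompose the fixed-length pattern space into one independent
    task per candidate pattern and evaluate the tasks concurrently."""
    patterns = ["".join(p) for p in product(alphabet, repeat=length)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pairs = pool.map(lambda p: (p, count_occurrences(p, text)), patterns)
    return dict(pairs)

counts = parallel_pattern_counts("ACGTACGT")
```

Because each pattern's count is independent of the others, the tasks need no shared state, which is what makes this decomposition scale across workers.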
Pattern matching and pattern discovery algorithms for protein topologies
We describe algorithms for pattern matching and pattern
learning in TOPS diagrams (formal descriptions of protein topologies).
These problems can be reduced to checking for subgraph isomorphism
and finding maximal common subgraphs in a restricted class of ordered
graphs. We have developed a subgraph isomorphism algorithm for
ordered graphs, which performs well on the given set of data. The
maximal common subgraph problem is then solved by repeated
subgraph extension and isomorphism checking. Despite its
apparent inefficiency, this approach yields an algorithm whose running
time is proportional to the number of graphs in the input set, and it
remains practical on the given set of data. As a result we obtain fast
methods which can be used for building a database of protein
topological motifs, and for the comparison of a given protein of known
secondary structure against a motif database.
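The restriction to ordered graphs mentioned above is what keeps subgraph-isomorphism checking tractable: since the vertices carry a fixed linear order, only order-preserving (strictly increasing) vertex maps need to be tried. Here is a brute-force sketch of that idea; the edge representation and function name are assumptions for illustration, not the paper's actual TOPS algorithm.

```python
from itertools import combinations

def is_subgraph_isomorphic(sub_edges, sub_n, host_edges, host_n):
    """Brute-force subgraph isomorphism for ordered graphs: because the
    vertex order must be preserved, every candidate map is a strictly
    increasing injection, i.e. a combination of host vertices."""
    host = set(host_edges)
    for mapping in combinations(range(host_n), sub_n):
        # mapping[u] is the host vertex assigned to sub-graph vertex u;
        # combinations are increasing, so vertex order is preserved.
        if all((mapping[u], mapping[v]) in host for u, v in sub_edges):
            return True
    return False
```

Restricting the search to combinations shrinks the candidate maps from all injections (host_n permute sub_n) to host_n choose sub_n, which is the kind of saving that makes the ordered-graph case practical.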