A Comparison of the Quality of Rule Induction from Inconsistent Data Sets and Incomplete Data Sets
In data mining, decision rules induced from known examples are used to classify unseen cases. There are various rule induction algorithms, such as LEM1 (Learning from Examples Module version 1), LEM2 (Learning from Examples Module version 2) and MLEM2 (Modified Learning from Examples Module version 2). In the real world, many data sets are imperfect, either inconsistent or incomplete. The idea of lower and upper approximations or, more generally, probabilistic approximations, provides an effective way to induce rules from inconsistent and incomplete data sets, but the accuracy of rule sets induced from imperfect data sets is expected to be lower. The objective of this project is to investigate which kind of imperfect data set (inconsistent or incomplete) is worse in terms of the quality of rule induction. In this project, experiments were conducted on eight inconsistent data sets and eight incomplete data sets with lost values. We implemented the MLEM2 algorithm to induce certain and possible rules from inconsistent data sets, and implemented the local probabilistic version of the MLEM2 algorithm to induce certain and possible rules from incomplete data sets. A program called Rule Checker was also developed to classify unseen cases with the induced rules and measure the classification error rate. Ten-fold cross-validation was carried out and the average error rate was used as the criterion for comparison. Mann-Whitney nonparametric tests were performed to compare, separately for certain and possible rules, incompleteness with inconsistency. The results show that there is no significant difference between inconsistent and incomplete data sets in terms of the quality of rule induction.
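The lower and upper approximations underlying certain and possible rules can be sketched on a toy decision table. This is only an illustration of the basic rough-set construction; the data and names are invented, and the actual MLEM2 implementation works with attribute-value blocks rather than this naive grouping.

```python
# Sketch: lower/upper approximations of a concept in an inconsistent
# decision table, via indiscernibility classes (illustrative only).
from collections import defaultdict

def approximations(cases, attributes, decision):
    """Return {decision_value: (lower, upper)} as sets of case indices."""
    # Group cases with identical condition-attribute values.
    blocks = defaultdict(set)
    for i, c in enumerate(cases):
        blocks[tuple(c[a] for a in attributes)].add(i)
    concepts = defaultdict(set)
    for i, c in enumerate(cases):
        concepts[c[decision]].add(i)
    result = {}
    for d, concept in concepts.items():
        lower, upper = set(), set()
        for block in blocks.values():
            if block <= concept:       # entirely inside the concept
                lower |= block
            if block & concept:        # overlaps the concept
                upper |= block
        result[d] = (lower, upper)
    return result

table = [
    {"temp": "high", "headache": "yes", "flu": "yes"},
    {"temp": "high", "headache": "yes", "flu": "no"},   # conflicts with case 0
    {"temp": "normal", "headache": "no", "flu": "no"},
]
lo, up = approximations(table, ["temp", "headache"], "flu")["yes"]
print(sorted(lo), sorted(up))  # prints: [] [0, 1]
```

Certain rules are induced from the lower approximation (empty here, because of the conflict) and possible rules from the upper one.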
Knowledge acquisition for expert systems: inducing modular rules from examples
Knowledge acquisition for expert systems is notoriously difficult, often demanding an enormous effort on the part of the domain expert, who is essentially expected to spell out everything he knows about the domain. The task is non-trivial and can be time-consuming and tedious. Machine learning research, particularly into automatic rule induction from examples, may provide a way of easing this burden.
Arguably, the most popular and successful rule induction algorithm in general use today is Quinlan's ID3. ID3 induces rules in the form of decision trees. However, the research reported in this thesis identifies some major limitations of a decision tree representation. Decision trees can be incomprehensible, but more importantly, there are rules which cannot be represented by trees. Ideally, induced rules should be modular and should capture the essence of causality, avoiding irrelevance and redundancy.
The information theoretic approach employed in ID3 is examined in detail and some of its weaknesses identified. A new algorithm is developed which, by avoiding these weaknesses, induces rules which are modular rather than decision trees. This algorithm forms the basis of a new rule induction program, PRISM.
Given an ideal training set, PRISM induces a complete and correct set of maximally general rules. The program and its results are described using training sets from two domains, contact lens fitting and a chess endgame. Induction from incomplete training sets is discussed and the performance of PRISM is compared with that of ID3 with particular reference to predictive power.
A series of experiments is described, in which PRISM and ID3 were applied to training sets of different sizes and their predictive power calculated. The results show that PRISM generally performs better than ID3 in these two domains, inducing fewer, more general rules, which classify a similar number of instances correctly and significantly fewer incorrectly.
Four Lessons in Versatility or How Query Languages Adapt to the Web
Exposing not only human-centered information but also machine-processable data on the Web is one of the commonalities of recent Web trends. It has enabled a new kind of applications and businesses where the data is used in ways not foreseen by the data providers. Yet this exposition has fractured the Web into islands of data, each in a different Web format: some providers choose XML, others RDF, again others JSON or OWL for their data, even in similar domains. This fracturing stifles innovation, as application builders have to cope not with one Web stack (e.g., XML technology) but with several, each of considerable complexity. With Xcerpt we have developed a rule- and pattern-based query language that aims to shield application builders from much of this complexity: in a single query language, XML and RDF data can be accessed, processed, combined, and re-published. Though the need for combined access to XML and RDF data has been recognized in previous work (including the W3C’s GRDDL), our approach differs in four main aspects: (1) We provide a single language (rather than two separate or embedded languages), thus minimizing the conceptual overhead of dealing with disparate data formats. (2) Both the declarative (logic-based) and the operational semantics are unified in that they apply to querying XML and RDF in the same way. (3) We show that the resulting query language can be implemented reusing traditional database technology, if desirable. Nevertheless, we also give a unified evaluation approach based on interval labelings of graphs that is at least as fast as existing approaches for tree-shaped XML data, yet provides linear time and space querying also for many RDF graphs. We believe that Web query languages are the right tool for declarative data access in Web applications and that Xcerpt is a significant step towards more convenient, yet highly efficient, data access in a “Web of Data”.
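The core idea of one query interface over both XML and RDF can be illustrated by normalizing both formats into (subject, property, object) triples and matching a single pattern against either. Xcerpt's real syntax and semantics are far richer than this; every name below is invented for the sketch.

```python
# Illustrative only: one wildcard-pattern query over XML and
# RDF-style data, both viewed as sets of triples.
import xml.etree.ElementTree as ET

def xml_triples(text):
    """Flatten an XML document into parent-tag/child-tag/text triples."""
    root = ET.fromstring(text)
    triples, stack = set(), [root]
    while stack:
        node = stack.pop()
        for child in node:
            if child.text and child.text.strip():
                triples.add((node.tag, child.tag, child.text.strip()))
            stack.append(child)
    return triples

def query(triples, pattern):
    """pattern is an (s, p, o) triple; None acts as a wildcard."""
    return {t for t in triples
            if all(q is None or q == v for q, v in zip(pattern, t))}

xml_doc = "<book><title>Xcerpt</title><year>2011</year></book>"
rdf_data = {("book1", "title", "Xcerpt"), ("book1", "year", "2011")}

# The same pattern works on both data sources.
for source in (xml_triples(xml_doc), rdf_data):
    print(query(source, (None, "title", None)))
```

The point of the sketch is only the uniformity: one pattern language, two data formats, one evaluation procedure.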
Inconsistency-tolerant Query Answering in Ontology-based Data Access
Ontology-based data access (OBDA) is receiving great attention as a new paradigm for managing information systems through semantic technologies. According to this paradigm, a Description Logic ontology provides an abstract and formal representation of the domain of interest to the information system, and is used as a sophisticated schema for accessing the data and formulating queries over them. In this paper, we address the problem of dealing with inconsistencies in OBDA. Our general goal is both to study DL semantic frameworks that are inconsistency-tolerant, and to devise techniques for answering unions of conjunctive queries under such inconsistency-tolerant semantics. Our work is inspired by the approaches to consistent query answering in databases, which are based on the idea of living with inconsistencies in the database, but trying to obtain only consistent information during query answering, by relying on the notion of database repair. We first adapt the notion of database repair to our context, and show that, according to such a notion, inconsistency-tolerant query answering is intractable, even for very simple DLs. Therefore, we propose a different repair-based semantics, with the goal of reaching a good compromise between the expressive power of the semantics and the computational complexity of inconsistency-tolerant query answering. Indeed, we show that query answering under the new semantics is first-order rewritable in OBDA, even if the ontology is expressed in one of the most expressive members of the DL-Lite family.
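The repair idea can be illustrated on a toy ABox with a single functionality constraint. This brute-force sketch enumerates all maximal consistent subsets (repairs) and keeps only answers that hold in every repair; the paper's point is precisely that such enumeration is intractable in general and should be replaced by first-order rewriting. All data and names are invented.

```python
# Sketch of repair-based (consistent) query answering: an answer is
# certain only if it holds in every maximal consistent subset.
from itertools import combinations

def consistent(facts, functional):
    """Check that no functional property has two different values."""
    seen = {}
    for s, p, o in facts:
        if p in functional and seen.setdefault((s, p), o) != o:
            return False
    return True

def repairs(facts, functional):
    """All maximal consistent subsets of the ABox (brute force)."""
    facts, found = list(facts), []
    for size in range(len(facts), -1, -1):
        for subset in combinations(facts, size):
            s = set(subset)
            # Keep s only if no strictly larger repair contains it.
            if consistent(s, functional) and not any(s < r for r in found):
                found.append(s)
    return found

def certain_answers(facts, functional, pattern):
    """Triples matching the pattern in *every* repair."""
    def matches(fs):
        return {t for t in fs
                if all(q is None or q == v for q, v in zip(pattern, t))}
    reps = repairs(facts, functional)
    result = matches(reps[0])
    for r in reps[1:]:
        result &= matches(r)
    return result

abox = {("alice", "hasAge", "30"), ("alice", "hasAge", "31"),
        ("alice", "worksAt", "acme")}
ans = certain_answers(abox, {"hasAge"}, ("alice", None, None))
print(ans)  # only the worksAt fact holds in every repair
```

The two conflicting age facts each survive in exactly one repair, so neither is a certain answer, while the uncontested fact survives in both.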
Attribute Exploration of Gene Regulatory Processes
This thesis aims at the logical analysis of discrete processes, in particular those generated by gene regulatory networks. States, transitions and operators from temporal logics are expressed in the language of Formal Concept Analysis. The attribute exploration algorithm enables an expert or a computer program to validate a minimal and complete set of implications, e.g. by comparing predictions derived from the literature with observed data. Here, these rules represent temporal dependencies within gene regulatory networks, including coexpression of genes, reachability of states, invariants and possible causal relationships. This new approach is embedded into the theory of universal coalgebras, particularly automata, Kripke structures and labelled transition systems. A comparison with the temporal expressivity of Description Logics is made. The main theoretical results concern the integration of background knowledge into the successive exploration of the defined data structures (formal contexts). Applying the method, a Boolean network from the literature modelling sporulation of Bacillus subtilis is examined. Finally, we developed an asynchronous Boolean network for extracellular matrix formation and destruction in the context of rheumatoid arthritis.
Comment: 111 pages, 9 figures, file size 2.1 MB, PhD thesis, University of Jena, Germany, Faculty of Mathematics and Computer Science, 2011. Online available at http://www.db-thueringen.de/servlets/DocumentServlet?id=1960
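The asynchronous Boolean network formalism used in the thesis can be sketched in a few lines: each step updates a single gene, so one state may have several successors. The two-gene toggle switch below is a standard textbook example, not the B. subtilis or rheumatoid arthritis model from the thesis.

```python
# Minimal sketch of an asynchronous Boolean network: update one
# component at a time, collecting all distinct successor states.
def async_successors(state, rules):
    """state: tuple of 0/1 gene values; rules: update functions."""
    succs = set()
    for i, f in enumerate(rules):
        new = f(state)
        if new != state[i]:
            succs.add(state[:i] + (new,) + state[i + 1:])
    return succs or {state}  # fixed point if nothing changes

# Two mutually repressing genes: x1 = not x2, x2 = not x1.
rules = [lambda s: 1 - s[1], lambda s: 1 - s[0]]
s11 = async_successors((1, 1), rules)
s10 = async_successors((1, 0), rules)
print(s11)  # prints: two successors, {(0, 1), (1, 0)}
print(s10)  # prints: {(1, 0)} -- a steady state
```

Properties such as reachability of states or invariants, as mentioned in the abstract, are then statements about the transition graph generated by iterating this successor function.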
Dominance-based Rough Set Approach, basic ideas and main trends
The Dominance-based Rough Set Approach (DRSA) has been proposed as a machine learning and knowledge discovery methodology to handle Multiple Criteria Decision Aiding (MCDA). Due to its capacity to ask the decision maker (DM) for simple preference information and to supply easily understandable and explainable recommendations, DRSA has gained much interest over the years and is now one of the most appreciated MCDA approaches. In fact, it has also been applied beyond the MCDA domain, as a general knowledge discovery and data mining methodology for the analysis of monotonic (and also non-monotonic) data. In this contribution, we recall the basic principles and main concepts of DRSA, with a general overview of its developments and software. We also present a historical reconstruction of the genesis of the methodology, with a specific focus on the contribution of Roman Słowiński.
Comment: This research was partially supported by TAILOR, a project funded by the European Union (EU) Horizon 2020 research and innovation programme under GA No 952215. This submission is a preprint of a book chapter accepted by Springer, with very few minor differences of a technical nature.
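The basic DRSA construction replaces indiscernibility with dominance: an object certainly belongs to the upward union of classes "at least t" if every object dominating it on all criteria also belongs to that union. A minimal sketch on invented student data:

```python
# Sketch of a DRSA lower approximation of an upward union of classes
# (illustrative data; criterion and class names are invented).
def dominates(x, y, criteria):
    """x dominates y if x is at least as good on every criterion."""
    return all(x[c] >= y[c] for c in criteria)

def lower_upward(objects, criteria, decision, t):
    """Objects certainly in the union 'decision >= t'."""
    union = [o for o in objects if o[decision] >= t]
    lower = []
    for x in union:
        dominating = [y for y in objects if dominates(y, x, criteria)]
        if all(y[decision] >= t for y in dominating):
            lower.append(x)
    return lower

students = [
    {"math": 3, "physics": 3, "grade": 3},
    {"math": 2, "physics": 3, "grade": 2},
    {"math": 2, "physics": 2, "grade": 2},
    {"math": 3, "physics": 3, "grade": 2},  # dominates student 0, lower grade
]
lo = lower_upward(students, ["math", "physics"], "grade", 3)
print(len(lo))  # prints: 0 -- the dominance inconsistency excludes student 0
```

The fourth student is at least as good as the first on both criteria yet has a worse grade, a monotonicity violation, so no object certainly belongs to the class "grade >= 3"; DRSA's decision rules are induced from exactly such approximations.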