    A Parameterized Algorithm for Exploring Concept Lattices

    Abstract. Kuznetsov showed that Formal Concept Analysis (FCA) is a natural framework for learning from positive and negative examples. Indeed, the result of learning from positive examples (respectively negative examples) is a set of frequent concepts with respect to a minimal support, whose extents contain only positive examples (respectively negative examples). In terms of association rules, this learning can be seen as searching for the premises of exact rules whose consequence is fixed. When augmented with statistical indicators such as confidence and support, this approach makes it possible to extract various kinds of concept-based rules that take exceptions into account. FCA treats attributes as an unordered set. When the attributes of the context are ordered, Conceptual Scaling allows the related taxonomy to be taken into account by producing a context completed with all attributes deduced from the taxonomy. The drawback of that method is that concept intents contain redundant information. In a previous work, we proposed an algorithm based on Bordat’s algorithm to find frequent concepts in a context with a taxonomy. In that algorithm, the taxonomy is taken into account during the computation so as to remove all redundancy from the intents. In this article, we propose a parameterized generalization of that algorithm for learning rules in the presence of a taxonomy. By simply changing one component, the parameterized algorithm can compute various kinds of concept-based rules. We present applications of the parameterized algorithm to finding positive and negative rules.
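
    The learning scheme summarized in the abstract (keeping frequent concepts whose extents contain no counter-examples) can be illustrated with a small, self-contained sketch. The Python snippet below is a minimal illustration on assumed toy data, not the paper's parameterized algorithm: it enumerates candidate premises as closures (intents) of subsets of the positive examples, then keeps those that meet a minimal support and reject all negative examples. The context, the example sets, and all helper names are hypothetical.

    ```python
    from itertools import combinations

    # Hypothetical toy formal context: objects mapped to their attribute sets.
    context = {
        "g1": {"a", "b", "c"},
        "g2": {"a", "b"},
        "g3": {"a", "c"},
        "g4": {"b", "c"},
    }
    positives = {"g1", "g2"}   # positive examples
    negatives = {"g3", "g4"}   # negative examples
    min_support = 2            # minimal number of positive examples covered


    def extent(attrs, objects):
        """Objects (from the given set) having every attribute in attrs."""
        return {g for g in objects if attrs <= context[g]}


    def intent(objs):
        """Attributes shared by every object in objs."""
        result = None
        for g in objs:
            result = set(context[g]) if result is None else result & context[g]
        return result if result is not None else set().union(*context.values())


    # Candidate premises: intents of subsets of positive examples.
    # Keep a premise if it is frequent on the positives and its extent
    # contains no negative example (an exact, concept-based positive rule).
    positive_rules = set()
    for k in range(1, len(positives) + 1):
        for subset in combinations(sorted(positives), k):
            premise = frozenset(intent(set(subset)))
            if (len(extent(premise, positives)) >= min_support
                    and not extent(premise, negatives)):
                positive_rules.add(premise)

    for premise in sorted(positive_rules, key=sorted):
        print(sorted(premise), "-> positive")
    ```

    Swapping the roles of the positive and negative sets in this sketch yields negative rules in the same way; the paper's contribution is to obtain such variants by changing a single component of one parameterized algorithm while handling the taxonomy without redundant intents.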