
    Enhancing multi-class classification in FARC-HD fuzzy classifier: on the synergy between n-dimensional overlap functions and decomposition strategies

    There are many real-world classification problems involving multiple classes, e.g., in bioinformatics, computer vision or medicine. These problems are generally more difficult than their binary counterparts. In this scenario, decomposition strategies usually improve the performance of classifiers. Hence, in this paper we aim to improve the behaviour of the FARC-HD fuzzy classifier in multi-class classification problems using decomposition strategies, specifically the One-vs-One (OVO) and One-vs-All (OVA) strategies. However, when these strategies are applied to FARC-HD, a problem emerges due to the low confidence values provided by the fuzzy reasoning method. This undesirable behaviour stems from the use of the product t-norm when computing the matching and association degrees, which yields low values that also depend on the number of antecedents of the fuzzy rules. As a result, robust aggregation strategies in OVO, such as weighted voting, obtain poor results with this fuzzy classifier. In order to solve these problems, we propose to adapt the inference system of FARC-HD, replacing the product t-norm with overlap functions. To do so, we define n-dimensional overlap functions. These new functions allow the base classifiers to produce outputs that are better suited to the subsequent aggregation in OVO and OVA schemes. Furthermore, we propose a new aggregation strategy for OVO that addresses the weighted-voting problem caused by the unsuitable confidences FARC-HD provides for this aggregation method. The quality of our new approach is analyzed using twenty datasets and the conclusions are supported by a proper statistical analysis. In order to check the usefulness of our proposal, we carry out a comparison against some of the state-of-the-art fuzzy classifiers. Experimental results show the competitiveness of our method. This work was supported in part by the Spanish Ministry of Science and Technology under projects TIN2011-28488, TIN-2012-33856 and TIN-2013-40765-P and the Andalusian Research Plan P10-TIC-6858 and P11-TIC-7765.
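    The confidence-scaling issue described above can be made concrete with a small sketch. The Python snippet below contrasts the product t-norm with the geometric mean, one standard example of an n-dimensional overlap function; whether this particular function is the one adopted in the paper is not stated in the abstract, so treat it as an illustrative assumption.

        import numpy as np

        def product_tnorm(memberships):
            # Classical matching degree: the product t-norm over the antecedent
            # membership degrees shrinks quickly as the rule gets longer.
            return float(np.prod(memberships))

        def geometric_mean_overlap(memberships):
            # Geometric mean, a standard n-dimensional overlap function:
            # it is 0 iff some degree is 0, 1 iff all degrees are 1, and its
            # scale does not collapse with the number of antecedents.
            m = np.asarray(memberships, dtype=float)
            return float(m.prod() ** (1.0 / len(m)))

        # A rule with four antecedents, each matched with degree 0.8
        degrees = [0.8, 0.8, 0.8, 0.8]
        print(product_tnorm(degrees))           # 0.4096, low and length-dependent
        print(geometric_mean_overlap(degrees))  # 0.8, length-independent scale

    With the product t-norm, the confidence handed to the OVO/OVA aggregation depends strongly on rule length, which is exactly what hampers weighted voting; an overlap function keeps the base-classifier outputs on a comparable scale.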

    Constructing interval-valued fuzzy material implication functions derived from general interval-valued grouping functions

    Grouping functions and their dual counterparts, overlap functions, have drawn the attention of many authors, mainly because they constitute a richer class of operators than other types of aggregation functions. Grouping functions are a useful theoretical tool in various problems, such as decision making based on fuzzy preference relations. In pairwise comparisons, for instance, they allow one to express the amount of evidence in favor of either of two given alternatives. Recently, several generalizations of grouping functions were proposed, such as (i) n-dimensional grouping functions and the more flexible general grouping functions, which allow their application to n-dimensional problems, and (ii) n-dimensional and general interval-valued grouping functions, which handle uncertainty in the definition of the membership functions in real-life problems. Taking into account the importance of interval-valued fuzzy implication functions in application problems under uncertainty, such as fuzzy inference mechanisms, this paper introduces a new class of interval-valued fuzzy material implication functions. We study their properties, characterizations and construction methods, and provide examples. Supported by CNPq (301618/2019-4, 311429/2020-3), FAPERGS (19/2551-0001660-3), UFERSA, the Spanish Ministry of Science and Technology (TIN2016-77356-P, PID2019-108392GB-I00 (MCIN/AEI/10.13039/501100011033)) and Navarra de Servicios y Tecnologías, S.A. (NASERTIC).
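    As a rough illustration of how an interval-valued implication can be derived from a grouping function, the sketch below uses the classical material-implication form I(X, Y) = G(N(X), Y), with the probabilistic sum as the grouping function, the standard negation, and an endpoint-wise interval extension. All of these are textbook choices made only for illustration; the paper works with general interval-valued grouping functions, which are strictly broader.

        from typing import NamedTuple

        class Interval(NamedTuple):
            # A closed subinterval [lo, hi] of [0, 1].
            lo: float
            hi: float

        def iv_negation(x: Interval) -> Interval:
            # Interval extension of the standard negation N(t) = 1 - t.
            return Interval(1.0 - x.hi, 1.0 - x.lo)

        def grouping_prob_sum(a: float, b: float) -> float:
            # Probabilistic sum, a classical grouping function:
            # equals 0 iff a = b = 0 and equals 1 iff a = 1 or b = 1.
            return 1.0 - (1.0 - a) * (1.0 - b)

        def iv_grouping(x: Interval, y: Interval) -> Interval:
            # Endpoint-wise (best interval) extension; valid because the
            # grouping function is increasing in both arguments.
            return Interval(grouping_prob_sum(x.lo, y.lo),
                            grouping_prob_sum(x.hi, y.hi))

        def iv_material_implication(x: Interval, y: Interval) -> Interval:
            # Material-implication-style construction I(X, Y) = G(N(X), Y).
            return iv_grouping(iv_negation(x), y)

        print(iv_material_implication(Interval(1.0, 1.0), Interval(0.0, 0.0)))  # Interval(lo=0.0, hi=0.0)
        print(iv_material_implication(Interval(0.2, 0.4), Interval(0.5, 0.7)))  # roughly Interval(lo=0.8, hi=0.94)

    The interval width of the output reflects the uncertainty carried by the input intervals, which is the point of moving from point-valued to interval-valued operators.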

    QCBA: Postoptimization of Quantitative Attributes in Classifiers based on Association Rules

    The need to prediscretize numeric attributes before they can be used in association rule learning is a source of inefficiencies in the resulting classifier. This paper describes several new rule-tuning steps that aim to recover information lost during the discretization of numeric (quantitative) attributes, and a new rule-pruning strategy that further reduces the size of the classification models. We demonstrate the effectiveness of the proposed methods on the postoptimization of models generated by three state-of-the-art association rule classification algorithms: Classification based on Associations (Liu, 1998), Interpretable Decision Sets (Lakkaraju et al., 2016), and Scalable Bayesian Rule Lists (Yang, 2017). Benchmarks on 22 datasets from the UCI repository show that the postoptimized models are consistently smaller, typically by about 50%, and have better classification performance on most datasets.
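    To give a feel for what such a rule-tuning step can look like, the snippet below sketches an interval "trimming" operation: a literal whose boundaries come from prediscretization is tightened to the raw numeric values of the instances the rule actually covers. This is a loose illustration inspired by the abstract, not the exact QCBA procedure; the function name and data layout are assumptions.

        import numpy as np

        def trim_literal(interval, values, covered_mask):
            # Shrink a discretization-induced interval [lo, hi) to the tightest
            # range still spanning the raw values of the covered instances.
            lo, hi = interval
            inside = covered_mask & (values >= lo) & (values < hi)
            if not inside.any():
                return interval  # rule covers nothing here; leave the literal as is
            return float(values[inside].min()), float(values[inside].max())

        # Toy example: the discretizer produced Age in [30, 40), but the rule's
        # covered instances only span ages 33..37, so the literal can be tightened.
        ages = np.array([25.0, 33.0, 35.0, 37.0, 41.0])
        covered = np.array([False, True, True, True, False])
        print(trim_literal((30.0, 40.0), ages, covered))  # (33.0, 37.0)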