88 research outputs found

    Applying d-XChoquet integrals in classification problems

    Get PDF
    Several generalizations of the Choquet integral have been applied in the Fuzzy Reasoning Method (FRM) of Fuzzy Rule-Based Classification Systems (FRBCSs) to improve its performance. To achieve that goal, researchers have sought to make those generalizations more flexible by restricting the requirements on the functions used in their constructions and by relaxing the monotonicity of the integral. This is the case of CT-integrals, CC-integrals, CF-integrals, CF1F2-integrals, and dCF-integrals, which obtained good performance in classification algorithms, more specifically in the fuzzy association rule-based classification method for high-dimensional problems (FARC-HD). Thereafter, with the introduction of Choquet integrals based on restricted dissimilarity functions (RDFs) in place of the standard difference, a new generalization became possible: the d-XChoquet (d-XC) integrals, which are ordered directionally increasing functions and, depending on the adopted RDF, may also be pre-aggregation functions. These integrals have been applied in multi-criteria decision-making problems and in a motor-imagery brain-computer interface framework. In the present paper, we introduce a new FRM based on the d-XC integral family and analyze its performance on 33 different datasets from the literature. Supported by Navarra de Servicios y Tecnologías, S.A. (NASERTIC), CNPq (301618/2019-4, 305805/2021-5), FAPERGS (19/2551-0001660-3), and the Spanish Ministry of Science and Technology (TIN2016-77356-P, PID2019-108392GB-I00 (MCIN/AEI/10.13039/501100011033)).
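As a rough illustration of the construction this abstract describes, the sketch below replaces the standard difference in the discrete Choquet integral with a pluggable dissimilarity function. The function name, the dict-based fuzzy measure, and the sample RDF `delta(a, b) = |a - b|` are illustrative assumptions, not the paper's actual definitions; with that choice of `delta`, the standard Choquet integral is recovered.

```python
def d_choquet(x, mu, delta):
    """Choquet-like integral with a dissimilarity function replacing
    the standard difference (a sketch of the d-Choquet idea).

    x     : sequence of inputs in [0, 1]
    mu    : dict mapping frozensets of input indices to [0, 1],
            with mu(empty set) = 0 and mu(all indices) = 1
    delta : dissimilarity function delta(a, b); delta(a, b) = |a - b|
            recovers the standard Choquet integral
    """
    n = len(x)
    # Sort indices so inputs are visited in non-decreasing order.
    order = sorted(range(n), key=lambda i: x[i])
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        # Coalition of sources whose input is >= the k-th smallest value.
        coalition = frozenset(order[k:])
        total += delta(x[i], prev) * mu[coalition]
        prev = x[i]
    return total
```

Swapping in a different RDF (e.g. a squared difference) changes the aggregation behavior while keeping the same measure-weighted, coalition-by-coalition structure.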

    Data-informed fuzzy measures for fuzzy integration of intervals and fuzzy numbers

    Get PDF
    The fuzzy integral (FI) with respect to a fuzzy measure (FM) is a powerful means of aggregating information. The most popular FIs are the Choquet and Sugeno, and most research focuses on these two variants. The arena of the FM is much more populated, including numerically derived FMs such as the Sugeno λ-measure and decomposable measure, expert-defined FMs, and data-informed FMs. The drawback of numerically derived and expert-defined FMs is that one must know something about the relative values of the input sources. However, there are many problems where this information is unavailable, such as crowdsourcing. This paper focuses on data-informed FMs, i.e., those FMs that are computed by an algorithm that analyzes some property of the input data itself, gleaning the importance of each input source from the data it provides. The original instantiation of a data-informed FM is the agreement FM, which assigns high confidence to combinations of sources that numerically agree with one another. This paper extends our previous work in data-informed FMs by proposing the uniqueness measure and the additive measure of agreement for interval-valued evidence. We then extend data-informed FMs to fuzzy number (FN)-valued inputs. We demonstrate the proposed FMs by aggregating interval and FN evidence with the Choquet and Sugeno FIs for both synthetic and real-world data.
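For context on the numerically derived FMs the abstract contrasts with data-informed ones, the Sugeno λ-measure is fully determined by the singleton densities g_i once λ is found as the root of 1 + λ = Π_i (1 + λ·g_i) with λ > −1, λ ≠ 0. The bisection-based solver below is a minimal sketch (the function name and tolerances are my own, and it assumes all densities lie strictly in (0, 1)):

```python
def sugeno_lambda(densities, iters=200, eps=1e-12):
    """Find the Sugeno lambda-measure parameter by bisection (sketch).

    lambda is the unique root > -1 (other than 0) of
        1 + lam = prod_i (1 + lam * g_i),
    where g_i in (0, 1) are the singleton densities.  lam = 0
    (an additive measure) occurs exactly when sum(g_i) = 1.
    """
    def f(lam):
        prod = 1.0
        for g in densities:
            prod *= 1.0 + lam * g
        return prod - (1.0 + lam)

    s = sum(densities)
    if abs(s - 1.0) < eps:
        return 0.0  # densities already sum to 1: additive measure
    if s > 1.0:
        lo, hi = -1.0 + eps, -eps  # root lies in (-1, 0)
    else:
        lo, hi = eps, 1.0          # root lies in (0, inf); grow hi
        while f(hi) < 0.0:
            hi *= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (f(lo) > 0.0) == (f(mid) > 0.0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With λ in hand, the measure of any coalition follows recursively from mu(A ∪ {i}) = mu(A) + g_i + λ·g_i·mu(A).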

    Computation of Choquet integral for finite sets: Notes on a ChatGPT-driven experience

    Get PDF
    The Choquet integral, credited to Gustave Choquet in 1954, initially found its roots in decision making under uncertainty following Schmeidler's pioneering work in this field. Surprisingly, it was not until the 1990s that this integral gained recognition in the realm of multi-criteria decision aid. Nowadays, the Choquet integral boasts numerous generalizations and serves as a focal point for intensive research and development across various domains. Here we share our journey of utilizing ChatGPT as a helpful assistant to delve into the computation of the discrete Choquet integral using Mathematica. Additionally, we have documented our ChatGPT experience by crafting a Beamer presentation with its assistance. The ultimate aim of this exercise is to pave the way for the application of the discrete Choquet integral in the context of N-soft sets.
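The abstract's own computations are in Mathematica; as a language-neutral reference point, the discrete Choquet integral of inputs x with respect to a fuzzy measure mu can be sketched in a few lines (the dict-based measure representation here is an assumption for illustration):

```python
def choquet(x, mu):
    """Discrete Choquet integral of x w.r.t. fuzzy measure mu (sketch).

    mu maps frozensets of input indices to [0, 1], with
    mu(empty set) = 0 and mu(all indices) = 1.
    """
    n = len(x)
    # Visit inputs in non-decreasing order.
    order = sorted(range(n), key=lambda i: x[i])
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        # Coalition of sources whose input is >= the k-th smallest value.
        coalition = frozenset(order[k:])
        total += (x[i] - prev) * mu[coalition]
        prev = x[i]
    return total
```

When mu is additive, this reduces to an ordinary weighted mean, which is a handy sanity check for any implementation.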

    Aggregation functions based on penalties

    Full text link
    This article studies a large class of averaging aggregation functions based on minimizing a distance from the vector of inputs, or equivalently, minimizing a penalty imposed for deviations of individual inputs from the aggregated value. We provide a systematization of various types of penalty-based aggregation functions, and show how many special cases arise as a result. We show how new aggregation functions can be constructed either analytically or numerically and provide many examples. We establish a connection with the maximum likelihood principle, and present tools for averaging experimental noisy data with distinct noise distributions.
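The core idea above can be demonstrated numerically: pick a penalty function and return the output value that minimizes the total penalty. The grid-search minimizer below is a deliberately naive sketch (function name, grid resolution, and the restriction of the search to [min(x), max(x)] are my assumptions); a squared-deviation penalty recovers the arithmetic mean and an absolute-deviation penalty recovers a median, two of the classical special cases.

```python
def penalty_aggregate(xs, penalty, grid_steps=10001):
    """Aggregate xs by minimizing the total penalty over a grid (sketch).

    penalty(x_i, y) is the cost of input x_i deviating from output y.
    For averaging penalties, the minimizer lies within the input range.
    """
    lo, hi = min(xs), max(xs)
    best_y, best_p = lo, float("inf")
    for k in range(grid_steps):
        y = lo + (hi - lo) * k / (grid_steps - 1)
        p = sum(penalty(x, y) for x in xs)
        if p < best_p:
            best_y, best_p = y, p
    return best_y
```

In practice one would use an analytical minimizer or a proper one-dimensional optimizer rather than a grid, but the sketch makes the "aggregation as penalty minimization" viewpoint concrete.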

    Efficient Data Driven Multi Source Fusion

    Get PDF
    Data/information fusion is an integral component of many existing and emerging applications; e.g., remote sensing, smart cars, Internet of Things (IoT), and Big Data, to name a few. While fusion aims to achieve better results than what any one individual input can provide, often the challenge is to determine the underlying mathematics for aggregation suitable for an application. In this dissertation, I focus on the following three aspects of aggregation: (i) efficient data-driven learning and optimization, (ii) extensions and new aggregation methods, and (iii) feature- and decision-level fusion for machine learning with applications to signal and image processing. The Choquet integral (ChI), a powerful nonlinear aggregation operator, is a parametric way (with respect to the fuzzy measure (FM)) to generate a wealth of aggregation operators. The FM has 2^N variables and N(2^(N-1)) monotonicity constraints for N inputs. As a result, learning the ChI parameters from data quickly becomes impractical for most applications. Herein, I propose a scalable learning procedure (which is linear with respect to training sample size) for the ChI that identifies and optimizes only data-supported variables. As such, the computational complexity of the learning algorithm is proportional to the complexity of the solver used. This method also includes an imputation framework to obtain scalar values for data-unsupported (aka missing) variables and a compression algorithm (lossy or lossless) for the learned variables. I also propose a genetic algorithm (GA) to optimize the ChI for non-convex, multi-modal, and/or analytical objective functions. This algorithm introduces two operators that automatically preserve the constraints; therefore, there is no need to explicitly enforce the constraints as traditional GA algorithms require. In addition, this algorithm provides an efficient representation of the search space with the minimal set of vertices.
    Furthermore, I study different strategies for extending the fuzzy integral to missing data, and I propose a goal programming framework to aggregate inputs from heterogeneous sources for ChI learning. Last, my work in remote sensing involves band group selection based on visual clustering and feature-level fusion based on Lp-norm multiple kernel learning in hyperspectral image processing to enhance pixel-level classification.
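The constraint count that motivates the dissertation's scalable learner is easy to make concrete: each monotonicity constraint compares a coalition A with a one-element extension A ∪ {i}, giving N·2^(N-1) inequalities mu(A) ≤ mu(A ∪ {i}). The enumerator below is an illustrative sketch (its name is my own):

```python
from itertools import combinations

def monotonicity_constraints(n):
    """List the monotonicity constraint pairs of a fuzzy measure
    on n inputs: (A, A | {i}) with mu(A) <= mu(A | {i}).

    There are exactly n * 2**(n - 1) such pairs, which is why
    exhaustive ChI learning becomes impractical as n grows.
    """
    universe = range(n)
    pairs = []
    for r in range(n):  # subsets of size 0 .. n-1
        for subset in combinations(universe, r):
            a = frozenset(subset)
            for i in universe:
                if i not in a:
                    pairs.append((a, a | {i}))
    return pairs
```

Even at N = 20 inputs this is over ten million constraints, which illustrates why the learning procedure above restricts attention to data-supported variables.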

    Fuzzy measures on the Gene Ontology for gene product similarity

    Get PDF
    pre-print
    One of the most important objects in bioinformatics is a gene product (protein or RNA). For many gene products, functional information is summarized in a set of Gene Ontology (GO) annotations. For these genes, it is reasonable to include similarity measures based on the terms found in the GO or another taxonomy. In this paper, we introduce several novel measures for computing the similarity of two gene products annotated with GO terms. The fuzzy measure similarity (FMS) has the advantage that it takes into consideration the context of both complete sets of annotation terms when computing the similarity between two gene products. When the two gene products are not annotated by common taxonomy terms, we propose a method that avoids a zero similarity result. To account for the variations in annotation reliability, we propose a similarity measure based on the Choquet integral. These similarity measures provide extra tools for the biologist in search of functional information for gene products. Initial testing on a group of 194 sequences representing three protein families shows a higher correlation of the FMS and Choquet similarities with BLAST sequence similarities than that of traditional similarity measures such as pairwise average or pairwise maximum.

    Defining Bonferroni means over lattices

    Full text link
    In the face of mass amounts of information and the need for transparent and fair decision processes, aggregation functions are essential for summarizing data and providing overall evaluations. Although families such as weighted means and medians have been well studied, there are still applications for which no existing aggregation functions can capture the decision makers' preferences. Furthermore, extensions of aggregation functions to lattices are often needed to model operations on L-fuzzy sets, interval-valued and intuitionistic fuzzy sets. In such cases, the aggregation properties need to be considered in light of the lattice structure, as otherwise counterintuitive or unreliable behavior may result. The Bonferroni mean has recently received attention in the fuzzy sets and decision making community as it is able to model useful notions such as mandatory requirements. Here, we consider its associated penalty function to extend the generalized Bonferroni mean to lattices. We show that different notions of dissimilarity on lattices can lead to alternative expressions.
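For readers unfamiliar with the operator this abstract extends, the classical (real-valued) Bonferroni mean of positive inputs is B^{p,q}(x) = ((1/(n(n-1))) Σ_{i≠j} x_i^p x_j^q)^{1/(p+q)}. A direct sketch (function name mine, real-valued only, not the lattice extension the paper develops):

```python
def bonferroni_mean(xs, p, q):
    """Classical Bonferroni mean B^{p,q} of non-negative inputs (sketch).

    Averages the products x_i**p * x_j**q over all ordered pairs
    with i != j, then takes the (p + q)-th root.
    """
    n = len(xs)
    s = sum(xs[i] ** p * xs[j] ** q
            for i in range(n) for j in range(n) if i != j)
    return (s / (n * (n - 1))) ** (1.0 / (p + q))
```

The "mandatory requirement" behavior mentioned above is visible with two inputs and p = q = 1: if either input is zero, every cross-product vanishes and the mean is zero, regardless of the other input.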