
    Extension of the fuzzy integral for general fuzzy set-valued information

    The fuzzy integral (FI) is an extremely flexible aggregation operator. It is used in numerous applications, such as image processing, multicriteria decision making, skeletal age-at-death estimation, and multisource (e.g., feature, algorithm, sensor, and confidence) fusion. To date, a few works have appeared on the topic of generalizing Sugeno's original real-valued integrand and fuzzy measure (FM) to the case of higher-order uncertain information (both integrand and measure). For the most part, these extensions are motivated by, and are consistent with, Zadeh's extension principle (EP). Namely, existing extensions focus on fuzzy number (FN), i.e., convex and normal, fuzzy set (FS)-valued integrands. Herein, we put forth a new definition, called the generalized FI (gFI), and an efficient algorithm for its calculation for FS-valued integrands. In addition, we compare the gFI, numerically and theoretically, with our non-EP-based FI extension, the nondirect FI (NDFI). Examples are investigated in the areas of skeletal age-at-death estimation in forensic anthropology and multisource fusion. These applications help demonstrate the need for, and benefit of, the proposed work. In particular, we show that there is no one supreme technique. Instead, multiple extensions are of benefit in different contexts and applications.
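
    For reference, the classical real-valued Sugeno integral that such extensions generalize can be written as follows; this is the standard textbook formulation, not notation taken from the paper itself:

        \int h \circ g = \max_{i=1}^{N} \min\big( h(x_{\pi(i)}),\; g(A_{\pi(i)}) \big),
        \qquad A_{\pi(i)} = \{ x_{\pi(1)}, \dots, x_{\pi(i)} \},

        where \pi sorts the integrand so that h(x_{\pi(1)}) \ge \dots \ge h(x_{\pi(N)}).

    The gFI and NDFI described above replace the real-valued h(x_i) with fuzzy-set-valued evidence, which is where the EP-based and non-EP-based treatments diverge.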

    A penalty-based aggregation operator for non-convex intervals

    In the case of real-valued inputs, averaging aggregation functions have been studied extensively, with results arising in fields including probability and statistics, fuzzy decision making, and various sciences. Although much of the behavior of aggregation functions when combining standard fuzzy membership values is well established, extensions to interval-valued fuzzy sets, hesitant fuzzy sets, and other new domains pose a number of difficulties. The aggregation of non-convex or discontinuous intervals is usually approached in line with the extension principle, i.e., by aggregating all real-valued input vectors lying within the interval boundaries and taking the union as the final output. Although this is consistent with the aggregation of convex interval inputs, in the non-convex case such operators are not idempotent and may produce outputs that do not faithfully summarize or represent the set of inputs. After giving an overview of the treatment of non-convex intervals and their associated interpretations, we propose a novel extension of the arithmetic mean, based on penalty functions, that provides a representative output and satisfies idempotency.
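
    To make the penalty-based construction concrete: in the standard penalty-based framework for averaging functions (generic notation, not necessarily the paper's), the aggregate is the minimizer of an accumulated penalty,

        y^{*} = \arg\min_{y} \sum_{i=1}^{n} P(x_i, y),

    where the classical arithmetic mean is recovered for P(x, y) = (x - y)^2. The contribution described above amounts to choosing a penalty that acts on non-convex interval inputs directly, so that the minimizer is idempotent and representative of the input set.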

    SPFI: shape-preserving Choquet fuzzy integral for non-normal fuzzy set-valued evidence

    Information or data aggregation is an important part of nearly all analysis problems, as summarizing inputs from multiple sources is a ubiquitous goal. In this paper, we propose a method for nonlinear aggregation of data inputs that take the form of non-normal fuzzy sets. The proposed shape-preserving fuzzy integral (SPFI) is designed to overcome a well-known weakness of the previously proposed sub-normal fuzzy integral (SuFI): the SuFI output is constrained to have maximum membership equal to the minimum of the maximum memberships of the inputs, so if one input has a small height, the output is constrained to that height. The proposed SPFI does not suffer from this weakness and, furthermore, preserves in the output the shape of the input sets; that is, the output looks like the inputs. The SPFI method is based on the well-known Choquet fuzzy integral with respect to a capacity measure, i.e., a fuzzy measure. We demonstrate SPFI on synthetic and real-world data, comparing it to SuFI and the non-direct fuzzy integral (NDFI).
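
    SPFI is built on the discrete Choquet integral with respect to a fuzzy measure. A minimal sketch of that base operator for scalar inputs follows; the function names and dictionary encoding of the measure are illustrative, not taken from the paper:

        def choquet_integral(h, g):
            """Discrete Choquet integral of inputs h (dict: source -> value)
            with respect to a fuzzy measure g (dict: frozenset -> value)."""
            sources = sorted(h, key=h.get, reverse=True)  # sort values descending
            total, prev_g, subset = 0.0, 0.0, set()
            for s in sources:
                subset.add(s)
                g_curr = g[frozenset(subset)]
                total += h[s] * (g_curr - prev_g)  # weight is the measure increment
                prev_g = g_curr
            return total

        h = {"a": 0.9, "b": 0.4}
        g = {frozenset(): 0.0, frozenset({"a"}): 0.6,
             frozenset({"b"}): 0.5, frozenset({"a", "b"}): 1.0}
        print(choquet_integral(h, g))  # 0.9*0.6 + 0.4*(1.0 - 0.6) = 0.70

    The shape-preserving behaviour of SPFI concerns how fuzzy-set-valued inputs are carried through this scalar operator; the Choquet machinery itself is unchanged.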

    Data-informed fuzzy measures for fuzzy integration of intervals and fuzzy numbers

    The fuzzy integral (FI) with respect to a fuzzy measure (FM) is a powerful means of aggregating information. The most popular FIs are the Choquet and Sugeno, and most research focuses on these two variants. The arena of the FM is much more populated, including numerically derived FMs such as the Sugeno λ-measure and decomposable measures, expert-defined FMs, and data-informed FMs. The drawback of numerically derived and expert-defined FMs is that one must know something about the relative values of the input sources. However, there are many problems where this information is unavailable, such as crowdsourcing. This paper focuses on data-informed FMs, i.e., FMs computed by an algorithm that analyzes some property of the input data itself, gleaning the importance of each input source from the data it provides. The original instantiation of a data-informed FM is the agreement FM, which assigns high confidence to combinations of sources that numerically agree with one another. This paper builds upon our previous work in data-informed FMs by proposing the uniqueness measure and the additive measure of agreement for interval-valued evidence. We then extend data-informed FMs to fuzzy number (FN)-valued inputs. We demonstrate the proposed FMs by aggregating interval and FN evidence with the Choquet and Sugeno FIs for both synthetic and real-world data.
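
    One of the numerically derived FMs named above, the Sugeno λ-measure, is fully determined by per-source densities g_i: λ is the unique root greater than -1 (other than 0) of λ + 1 = Π_i (1 + λ g_i), and subset measures compose as g(A ∪ {x}) = g(A) + g_x + λ g(A) g_x. A minimal bisection-based sketch, with illustrative names:

        def sugeno_lambda(densities, iters=200):
            """Root of prod(1 + lam*g_i) = 1 + lam with lam > -1, lam != 0."""
            s = sum(densities)
            if abs(s - 1.0) < 1e-9:
                return 0.0  # densities already sum to 1: additive measure
            def f(lam):
                p = 1.0
                for g in densities:
                    p *= 1.0 + lam * g
                return p - (1.0 + lam)
            if s > 1.0:
                lo, hi = -1.0 + 1e-9, -1e-9   # root lies in (-1, 0)
            else:
                lo, hi = 1e-9, 1.0            # root lies in (0, inf)
                while f(hi) < 0.0:
                    hi *= 2.0                 # expand until the sign changes
            for _ in range(iters):            # plain bisection
                mid = 0.5 * (lo + hi)
                if (f(lo) > 0.0) == (f(mid) > 0.0):
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        def lambda_measure(subset_densities, lam):
            """Fold g(A u {x}) = g(A) + g_x + lam*g(A)*g_x over a subset."""
            m = 0.0
            for g in subset_densities:
                m = m + g + lam * m * g
            return m

        lam = sugeno_lambda([0.3, 0.4, 0.2])
        print(lam, lambda_measure([0.3, 0.4, 0.2], lam))  # full set measures ~1.0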

    Construction of aggregation operators with noble reinforcement

    This paper examines disjunctive aggregation operators used in various recommender systems. A specific requirement in these systems is the property of noble reinforcement: allowing a collection of high-valued arguments to reinforce each other while avoiding reinforcement of low-valued arguments. We present a new construction of Lipschitz-continuous aggregation operators with the noble reinforcement property, together with its refinements.
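
    A paraphrase of the property in generic notation, with a denoting a hypothetical reinforcement threshold and f disjunctive (i.e., f(x) \ge \max_i x_i):

        \max_i x_i < a \;\Rightarrow\; f(x_1, \dots, x_n) = \max_i x_i
        \quad \text{(low arguments do not reinforce),}

        \min_i x_i \ge a \;\Rightarrow\; f(x_1, \dots, x_n) \ge \max_i x_i
        \quad \text{(high arguments may reinforce each other).}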

    A review of convex approaches for control, observation and safety of linear parameter varying and Takagi-Sugeno systems

    This paper provides a review of the concept of convex systems based on Takagi-Sugeno, linear parameter varying (LPV), and quasi-LPV modeling. These paradigms are capable of hiding the nonlinearities by means of an equivalent description that uses a set of linear models interpolated by appropriately defined weighting functions. Convex systems have become very popular since they allow applying extended linear techniques based on linear matrix inequalities (LMIs) to complex nonlinear systems. This survey aims at providing the reader with a significant overview of the existing LMI-based techniques for convex systems in the fields of control, observation, and safety. Firstly, a detailed review of stability, feedback, tracking, and model predictive control (MPC) convex controllers is considered. Secondly, the problem of state estimation is addressed through the design of proportional, proportional-integral, unknown input, and descriptor observers. Finally, safety of convex systems is discussed by describing popular techniques for fault diagnosis and fault-tolerant control (FTC).
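
    As a concrete instance of the convex structure described above, a Takagi-Sugeno model interpolates r linear submodels through convex weighting functions, and quadratic stability of the unforced system then reduces to a finite set of LMIs; these are standard results rather than formulas specific to this survey:

        \dot{x}(t) = \sum_{i=1}^{r} h_i(z(t)) \big( A_i x(t) + B_i u(t) \big),
        \qquad h_i(z) \ge 0, \quad \sum_{i=1}^{r} h_i(z) = 1,

        \exists P \succ 0 : \quad A_i^{\top} P + P A_i \prec 0, \quad i = 1, \dots, r.

    A common quadratic Lyapunov function V(x) = x^{\top} P x satisfying the second line certifies stability for every admissible trajectory of the weighting functions, which is what makes the LMI toolbox applicable to the underlying nonlinear system.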

    Efficient Data Driven Multi Source Fusion

    Data/information fusion is an integral component of many existing and emerging applications, e.g., remote sensing, smart cars, the Internet of Things (IoT), and Big Data, to name a few. While fusion aims to achieve better results than any one individual input can provide, often the challenge is to determine the underlying mathematics for aggregation suitable for an application. In this dissertation, I focus on the following three aspects of aggregation: (i) efficient data-driven learning and optimization, (ii) extensions and new aggregation methods, and (iii) feature- and decision-level fusion for machine learning, with applications to signal and image processing. The Choquet integral (ChI), a powerful nonlinear aggregation operator, is a parametric way (with respect to the fuzzy measure (FM)) to generate a wealth of aggregation operators. The FM has 2^N variables and N(2^(N-1)) constraints for N inputs. As a result, learning the ChI parameters from data quickly becomes impractical for most applications. Herein, I propose a scalable learning procedure (linear with respect to training sample size) for the ChI that identifies and optimizes only data-supported variables; the computational complexity of the learning algorithm is therefore proportional to the complexity of the solver used. This method also includes an imputation framework to obtain scalar values for data-unsupported (aka missing) variables and a compression algorithm (lossy or lossless) for the learned variables. I also propose a genetic algorithm (GA) to optimize the ChI for non-convex, multi-modal, and/or analytical objective functions. This algorithm introduces two operators that automatically preserve the constraints, so there is no need to explicitly enforce them as traditional GAs require. In addition, this algorithm provides an efficient representation of the search space with a minimal set of vertices. Furthermore, I study different strategies for extending the fuzzy integral to missing data, and I propose a goal programming framework to aggregate inputs from heterogeneous sources for ChI learning. Last, my work in remote sensing involves visual-clustering-based band group selection and Lp-norm multiple kernel learning based feature-level fusion in hyperspectral image processing to enhance pixel-level classification.
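
    The scalability claim above rests on the fact that each observation touches only the FM variables on its sort-induced chain, at most N of the 2^N in total. A sketch of how the data-supported variables could be enumerated (illustrative code, not the dissertation's implementation):

        def data_supported_variables(samples):
            """Collect the fuzzy-measure variables (encoded as frozensets)
            touched by a training set; each N-input sample adds at most N
            of the 2^N variables, so growth is linear in sample count."""
            supported = set()
            for x in samples:                      # x: dict, source -> value
                order = sorted(x, key=x.get, reverse=True)
                chain = set()
                for s in order:                    # walk the sort-induced chain
                    chain.add(s)
                    supported.add(frozenset(chain))
            return supported

        samples = [{"a": 0.9, "b": 0.2, "c": 0.5, "d": 0.1},
                   {"a": 0.1, "b": 0.8, "c": 0.3, "d": 0.7},
                   {"a": 0.6, "b": 0.6, "c": 0.9, "d": 0.2}]
        print(len(data_supported_variables(samples)))  # 8 of the 2^4 = 16 variables

    Variables never touched by the data are the data-unsupported ones referred to above; they are the targets of the imputation framework rather than the optimizer.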

    A Similarity Measure Based on Bidirectional Subsethood for Intervals

    With a growing number of areas leveraging interval-valued data, including the modelling of human uncertainty (e.g., in cyber security), the capacity to accurately and systematically compare intervals for reasoning and computation is increasingly important. In practice, well-established set-theoretic similarity measures such as the Jaccard and Sørensen-Dice measures are commonly used, while axiomatically a wide breadth of possible measures has been theoretically explored. This paper identifies, articulates, and addresses an inherent and so far undiscussed limitation of popular measures: their tendency to be subject to aliasing, i.e., returning the same similarity value for very different sets of intervals. This risks counter-intuitive results and poor automated reasoning in real-world applications that depend on systematically comparing interval-valued system variables or states. Given this, we introduce new axioms establishing desirable properties for robust similarity measures, followed by putting forward a novel set-theoretic similarity measure based on the concept of bidirectional subsethood, which satisfies both the traditional and new axioms. The proposed measure is designed to be sensitive to variation in the size of intervals, thus avoiding aliasing. The paper provides a detailed theoretical exploration of the new measure and systematically demonstrates its behaviour using an extensive set of synthetic and real-world data. Specifically, the measure is shown to return robust outputs that follow intuition, which is essential for real-world applications. For example, we show that it is bounded below and above by the Jaccard and Sørensen-Dice similarity measures, respectively (when the minimum t-norm is used). Finally, we show that a dissimilarity or distance measure that satisfies the properties of a metric can easily be derived from the proposed similarity measure.
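
    For crisp intervals, the comparison above can be made concrete. Assuming, as a simplification of the paper's fuzzy-set definition, that the bidirectional subsethood measure reduces under the minimum t-norm to the smaller of the two directional subsethoods |A ∩ B|/|A| and |A ∩ B|/|B|, a short sketch against Jaccard and Sørensen-Dice:

        def overlap(a, b):
            """Length of the intersection of two closed intervals (lo, hi)."""
            return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

        def length(a):
            return a[1] - a[0]

        def jaccard(a, b):
            return overlap(a, b) / (length(a) + length(b) - overlap(a, b))

        def dice(a, b):
            return 2.0 * overlap(a, b) / (length(a) + length(b))

        def bidirectional_subsethood(a, b):
            o = overlap(a, b)  # min of |A∩B|/|A| and |A∩B|/|B|
            return min(o / length(a), o / length(b))

        a, b = (0.0, 2.0), (1.0, 4.0)
        print(jaccard(a, b), bidirectional_subsethood(a, b), dice(a, b))
        # 0.25 <= 0.333... <= 0.4, consistent with the Jaccard/Dice bounds above

    This crisp reduction equals |A ∩ B| / max(|A|, |B|), which always lies between the Jaccard and Dice values, matching the boundedness property reported in the abstract; degenerate (zero-length) intervals would need separate handling.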