55 research outputs found

    Orness for Idempotent Aggregation Functions

    Aggregation functions are mathematical operators that merge given data in order to obtain a global value that preserves the information given by the data as much as possible. In most practical applications, this value is expected to lie between the infimum and the supremum of the given data, which is guaranteed only when the aggregation functions are idempotent. Ordered weighted averaging (OWA) operators are particular cases of this kind of function, with the particularity that the obtained global value depends neither on the source nor on the expert that provides each datum, but only on the set of values. They have been classified by means of the orness, a measurement of the proximity of an OWA operator to the OR operator. In this paper, the concept of orness is extended to the framework of idempotent aggregation functions defined both on the real unit interval and on a complete lattice with a local finiteness condition. This work has been partially supported by the research projects MTM2015-63608-P of the Spanish Government and IT974-16 of the Basque Government.
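
    The abstract above does not spell out the orness formula; the minimal sketch below uses Yager's standard definition for an OWA weight vector w = (w_1, ..., w_n), namely orness(w) = (1/(n-1)) * sum_i (n - i) * w_i, where w_1 weighs the largest input. The weight vectors in the example are illustrative only.

    def orness(weights):
        # Yager's orness: 1 for the OR (maximum) operator, 0 for the AND (minimum)
        # operator, and 0.5 for the arithmetic mean.
        n = len(weights)
        return sum((n - i) * w for i, w in enumerate(weights, start=1)) / (n - 1)

    def owa(weights, values):
        # OWA aggregation: weights are applied to the inputs sorted in descending order.
        return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

    print(orness([1.0, 0.0, 0.0]))   # OR operator      -> 1.0
    print(orness([0.0, 0.0, 1.0]))   # AND operator     -> 0.0
    print(orness([1/3, 1/3, 1/3]))   # arithmetic mean  -> 0.5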

    Efficient Data Driven Multi Source Fusion

    Data/information fusion is an integral component of many existing and emerging applications; e.g., remote sensing, smart cars, the Internet of Things (IoT), and Big Data, to name a few. While fusion aims to achieve better results than any one individual input can provide, the challenge is often to determine the underlying mathematics for aggregation suitable for an application. In this dissertation, I focus on the following three aspects of aggregation: (i) efficient data-driven learning and optimization, (ii) extensions and new aggregation methods, and (iii) feature- and decision-level fusion for machine learning with applications to signal and image processing. The Choquet integral (ChI), a powerful nonlinear aggregation operator, is a parametric way (with respect to the fuzzy measure (FM)) to generate a wealth of aggregation operators. The FM has 2^N variables and N(2^N − 1) constraints for N inputs. As a result, learning the ChI parameters from data quickly becomes impractical for most applications. Herein, I propose a scalable learning procedure (linear with respect to training sample size) for the ChI that identifies and optimizes only data-supported variables. As such, the computational complexity of the learning algorithm is proportional to the complexity of the solver used. This method also includes an imputation framework to obtain scalar values for data-unsupported (i.e., missing) variables and a compression algorithm (lossy or lossless) for the learned variables. I also propose a genetic algorithm (GA) to optimize the ChI for non-convex, multi-modal, and/or analytical objective functions. This algorithm introduces two operators that automatically preserve the constraints; therefore, there is no need to enforce the constraints explicitly, as is required by traditional GAs. In addition, this algorithm provides an efficient representation of the search space with the minimal set of vertices. Furthermore, I study different strategies for extending the fuzzy integral to missing data, and I propose a goal programming framework to aggregate inputs from heterogeneous sources for ChI learning. Last, my work in remote sensing involves visual clustering-based band group selection and Lp-norm multiple kernel learning-based feature-level fusion in hyperspectral image processing to enhance pixel-level classification.
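
    As a concrete reference point for the ChI discussion above, the sketch below implements the standard discrete Choquet integral for N = 3 inputs, with the FM stored over all 2^N subsets. It is not the dissertation's scalable learning procedure; the source names, FM values, and inputs are made up for illustration.

    def choquet(values, mu):
        # values: dict mapping each source to its input; mu: dict mapping frozensets
        # of sources to FM values, with mu({}) = 0, mu(all sources) = 1, and
        # mu(A) <= mu(B) whenever A is a subset of B (the monotonicity constraints).
        order = sorted(values, key=values.get, reverse=True)  # sources by input, largest first
        result = 0.0
        for i, src in enumerate(order):
            v_i = values[src]
            v_next = values[order[i + 1]] if i + 1 < len(order) else 0.0
            result += (v_i - v_next) * mu[frozenset(order[:i + 1])]
        return result

    # Hypothetical FM over 3 sources: 2^3 = 8 values in total, which is why the
    # number of FM variables explodes as N grows and data-driven pruning matters.
    mu = {
        frozenset(): 0.0,
        frozenset({"s1"}): 0.4, frozenset({"s2"}): 0.3, frozenset({"s3"}): 0.2,
        frozenset({"s1", "s2"}): 0.6, frozenset({"s1", "s3"}): 0.7,
        frozenset({"s2", "s3"}): 0.5,
        frozenset({"s1", "s2", "s3"}): 1.0,
    }
    print(choquet({"s1": 0.9, "s2": 0.5, "s3": 0.2}, mu))  # 0.54, between the min and max inputs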