3 research outputs found

    Multi-criteria based learning of the Choquet integral using Goal programming

    In this paper, we explore a new way to learn an aggregation operator for fusion based on a combination of one or more labeled training data sets and information from one or more experts. One problem with learning an aggregation from training data alone is that it often results in solutions that are overly complex and expensive to implement. It also runs the risk of over-fitting, and the quality of the solution depends in large part on the size and diversity of the data employed. On the other hand, learning an aggregation based only on expert opinion can be overly subjective and may not yield the desired performance for a given task. To overcome these shortcomings, we explore a new way to combine both of these important sources. However, conflict between data sets, experts, or a combination of the two can (and often does) occur and must be addressed. Herein, weighted goal programming, an approach from multi-criteria decision making (MCDM), is employed to learn the fuzzy measure (FM) relative to the Choquet integral (CI) for data/information fusion. This framework provides a way to set the priority order of any number and combination of these two sources. Furthermore, it provides a mechanism to preserve the monotonicity constraints of the FM. We demonstrate results from synthetic experiments across a range of scenarios involving different combinations of conflicting sources.
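    To make the idea concrete, the sketch below shows how a weighted goal-programming problem of this flavor could be posed as a linear program: each labeled sample and each expert opinion contributes a goal with weighted deviation variables, and the FM monotonicity constraints are kept as linear inequalities. This is a minimal sketch, not the paper's formulation; the toy data, the expert goal on g({0}), the priority weights, and the use of scipy's linprog as the solver are all assumptions.

```python
# Minimal sketch (assumptions noted above) of learning a fuzzy measure (FM)
# for the Choquet integral (CI) by weighted goal programming, for N = 3 inputs.
import itertools
import numpy as np
from scipy.optimize import linprog

N = 3
# FM variables: one per nonempty proper subset of {0,1,2}; g(empty)=0, g(full)=1.
subsets = [frozenset(s) for r in range(1, N)
           for s in itertools.combinations(range(N), r)]
idx = {s: i for i, s in enumerate(subsets)}          # column index of each g(A)
n_g = len(subsets)                                   # 6 FM variables for N = 3

def ci_row(x):
    """Coefficients of the CI on the FM variables, plus a constant term.
    CI_g(x) = sum_k (x_(k) - x_(k+1)) * g(A_k), A_k = indices of the k largest x."""
    order = np.argsort(-np.asarray(x))
    row, const = np.zeros(n_g), 0.0
    for k in range(1, N + 1):
        A = frozenset(order[:k])
        diff = x[order[k - 1]] - (x[order[k]] if k < N else 0.0)
        if A == frozenset(range(N)):
            const += diff * 1.0                      # g(full set) is fixed at 1
        else:
            row[idx[A]] += diff
    return row, const

# Hypothetical labeled data and one expert goal: g({0}) should be about 0.7.
X = np.array([[0.9, 0.2, 0.4], [0.1, 0.8, 0.3], [0.5, 0.5, 0.9]])
y = np.array([0.7, 0.5, 0.8])
expert_goal = (frozenset({0}), 0.7)
w_data, w_expert = 1.0, 0.5                          # priority weights (assumed)

# Columns: [g variables | d+_i, d-_i per sample | e+, e- for the expert goal].
n_dev = 2 * len(y) + 2
c = np.concatenate([np.zeros(n_g),
                    np.full(2 * len(y), w_data), [w_expert, w_expert]])

A_eq, b_eq = [], []
for i, (x, t) in enumerate(zip(X, y)):               # CI(x_i) + d-_i - d+_i = y_i
    row, const = ci_row(x)
    dev = np.zeros(n_dev); dev[2 * i], dev[2 * i + 1] = -1.0, 1.0
    A_eq.append(np.concatenate([row, dev])); b_eq.append(t - const)
row = np.zeros(n_g + n_dev)                          # g({0}) + e- - e+ = 0.7
row[idx[expert_goal[0]]] = 1.0; row[n_g + 2 * len(y)] = -1.0; row[-1] = 1.0
A_eq.append(row); b_eq.append(expert_goal[1])

A_ub, b_ub = [], []                                  # monotonicity: g(A) <= g(B) for A subset of B
for A in subsets:
    for B in subsets:
        if A < B:
            row = np.zeros(n_g + n_dev)
            row[idx[A]], row[idx[B]] = 1.0, -1.0
            A_ub.append(row); b_ub.append(0.0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * n_g + [(0, None)] * n_dev)
print({tuple(sorted(s)): round(res.x[i], 3) for s, i in idx.items()})
```

    Raising w_expert relative to w_data would pull the learned FM toward the expert's opinion when the two sources conflict, which is the priority-ordering mechanism the abstract refers to.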

    Efficient Data Driven Multi Source Fusion

    Data/information fusion is an integral component of many existing and emerging applications, e.g., remote sensing, smart cars, the Internet of Things (IoT), and Big Data, to name a few. While fusion aims to achieve better results than any one individual input can provide, the challenge is often to determine the underlying mathematics of aggregation suitable for an application. In this dissertation, I focus on the following three aspects of aggregation: (i) efficient data-driven learning and optimization, (ii) extensions and new aggregation methods, and (iii) feature- and decision-level fusion for machine learning, with applications to signal and image processing. The Choquet integral (ChI), a powerful nonlinear aggregation operator, is a parametric way (with respect to the fuzzy measure (FM)) to generate a wealth of aggregation operators. The FM has 2^N variables and N(2^(N-1)) monotonicity constraints for N inputs. As a result, learning the ChI parameters from data quickly becomes impractical for most applications. Herein, I propose a scalable learning procedure (linear with respect to training sample size) for the ChI that identifies and optimizes only data-supported variables. As such, the computational complexity of the learning algorithm is proportional to the complexity of the solver used. This method also includes an imputation framework to obtain scalar values for data-unsupported (aka missing) variables and a compression algorithm (lossy or lossless) for the learned variables. I also propose a genetic algorithm (GA) to optimize the ChI for non-convex, multi-modal, and/or analytical objective functions. This algorithm introduces two operators that automatically preserve the constraints; therefore, there is no need to explicitly enforce the constraints, as is required by traditional GAs. In addition, this algorithm provides an efficient representation of the search space with the minimal set of vertices. Furthermore, I study different strategies for extending the fuzzy integral to missing data, and I propose a goal programming framework to aggregate inputs from heterogeneous sources for ChI learning. Last, my work in remote sensing involves visual-clustering-based band group selection and Lp-norm multiple kernel learning based feature-level fusion in hyperspectral image processing to enhance pixel-level classification.
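    As a rough illustration of why a data-supported learning procedure can scale linearly with the number of training samples, the sketch below (hypothetical code, not the dissertation's implementation) enumerates the FM variables that a set of samples actually touches: each sample's Choquet integral involves only the N variables along its sort-induced chain, so at most N times the number of samples out of the 2^N variables are data-supported and need to be optimized.

```python
# Minimal sketch of identifying data-supported FM variables for the Choquet
# integral. Each sample touches only the chain of subsets induced by sorting
# its inputs, so the union over samples is small even when 2^N is huge.
import numpy as np

def data_supported_variables(X):
    """Return the set of FM variables (frozensets of input indices) that
    appear in the Choquet integral of at least one sample in X."""
    supported = set()
    for x in X:
        order = np.argsort(-np.asarray(x))       # input indices, largest value first
        for k in range(1, len(x) + 1):           # chain: top-1, top-2, ..., all inputs
            supported.add(frozenset(order[:k]))
    return supported

# Hypothetical example: N = 10 inputs gives 2**10 = 1024 FM variables in total,
# but 5 samples can touch at most 5 * 10 = 50 of them.
rng = np.random.default_rng(0)
X = rng.random((5, 10))
vars_used = data_supported_variables(X)
print(f"{len(vars_used)} of {2**10} FM variables are data-supported")
# The remaining (data-unsupported) variables never influence the training
# error; the abstract describes imputing scalar values for them afterward.
```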