    Dominance Measuring Approach using Stochastic Weights

    In this paper we propose an approach to ranking alternatives in multicriteria decision-making problems when there is imprecision concerning the alternative performances, the component utility functions and the weights. We assume the decision maker's preferences are represented by an additive multi-attribute utility function, in which weights are modeled by independent normal variables, the performance of each alternative in each attribute is an interval value, and a class of utility functions is available for each attribute. The approach we propose is based on dominance measures, which are computed in a similar way as when the imprecision concerning weights is modeled by uniform distributions or by an ordinal relation. We show how the approach can be applied when the imprecision concerning weights is represented by normal distributions. Extensions to other distributions, such as the truncated normal or the beta, are feasible using Monte Carlo simulation techniques.
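    As a sketch of the dominance-measure idea, the pairwise measure below is estimated by Monte Carlo sampling of normally distributed weights. The utility intervals, weight means and standard deviations are hypothetical, and comparing the pessimistic utility of one alternative against the optimistic utility of another is just one simple dominance criterion, not necessarily the paper's exact definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem: 3 alternatives, 2 attributes.
# Attribute utilities are interval-valued [u_low, u_high].
u_low = np.array([[0.2, 0.6], [0.4, 0.3], [0.5, 0.5]])
u_high = np.array([[0.4, 0.8], [0.6, 0.5], [0.7, 0.6]])

# Weights modeled as independent normal variables.
w_mean = np.array([0.6, 0.4])
w_std = np.array([0.1, 0.1])

def dominance_probability(i, j, n_samples=10_000):
    """Estimate how often the pessimistic utility of alternative i
    exceeds the optimistic utility of alternative j when weights are
    drawn from the normal model and renormalized to sum to 1."""
    w = rng.normal(w_mean, w_std, size=(n_samples, len(w_mean)))
    w = np.clip(w, 0.0, None)              # discard negative mass
    w /= w.sum(axis=1, keepdims=True)
    ui_min = w @ u_low[i]                  # worst case for i
    uj_max = w @ u_high[j]                 # best case for j
    return float(np.mean(ui_min >= uj_max))
```

    Ranking the alternatives by, for example, their total dominance over the others then follows the general scheme the abstract describes; extending this to truncated-normal or beta weights only changes the sampling line.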

    Choosing Attribute Weights for Item Dissimilarity using Clickstream Data with an Application to a Product Catalog Map

    In content- and knowledge-based recommender systems, a measure of (dis)similarity between items is often used. Frequently, this measure is based on the attributes of the items. However, which attributes are important to the users of the system remains an open question. In this paper, we present an approach to determining attribute weights in a dissimilarity measure using clickstream data of an e-commerce website. We count how many times each product is sold and estimate a Poisson regression model on these counts. The estimates of this model are then used to determine the attribute weights in the dissimilarity measure. We show an application of this approach on a product catalog of MP3 players provided by Compare Group, owner of the Dutch price comparison site http://www.vergelijk.nl, and show how the dissimilarity measure can be used to improve 2D product catalog visualizations.
    Keywords: dissimilarity measure; attribute weights; clickstream data; comparison
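    The weighting step can be sketched as a Poisson regression of sales counts on product attributes. The counts and binary attribute indicators below are invented, and the plain gradient-ascent fit stands in for the GLM routine a real application would use.

```python
import numpy as np

# Invented data: each row describes a product by two binary attribute
# indicators; y counts how often the product was sold, as derived
# from clickstream data.
X = np.array([[1, 0], [0, 1], [1, 1], [0, 0], [1, 1]], dtype=float)
y = np.array([5.0, 2.0, 9.0, 1.0, 8.0])

# Poisson regression log E[y] = b0 + X @ b, fitted by gradient ascent
# on the log-likelihood (a stand-in for a proper GLM solver).
X1 = np.column_stack([np.ones(len(X)), X])
beta = np.zeros(X1.shape[1])
for _ in range(10_000):
    mu = np.exp(X1 @ beta)              # model-implied sales rates
    beta += 5e-3 * X1.T @ (y - mu)      # score function of the Poisson GLM

# Attribute weights for the dissimilarity measure: magnitude of the
# estimated effect of each attribute on sales.
weights = np.abs(beta[1:])
```

    Attributes whose presence changes expected sales more receive larger weights, which is the intuition behind using purchase behavior to calibrate the dissimilarity measure.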

    Using a modified DEA model to estimate the importance of objectives. An application to agricultural economics.

    This paper shows a connection between Data Envelopment Analysis (DEA) and the methodology proposed by Sumpsi et al. (1997) to estimate the weights of objectives for decision makers in a multiple attribute approach. This connection gives rise to a modified DEA model that estimates not only efficiency measures but also preference weights, by radially projecting each unit onto a linear combination of the elements of the payoff matrix (which is obtained by standard multicriteria methods). For users of Multiple Attribute Decision Analysis, the basic contribution of this paper is a new interpretation of the methodology of Sumpsi et al. (1997) in terms of efficiency. We also propose a modified procedure to calculate an efficient payoff matrix and a procedure to estimate weights through a radial projection rather than a distance minimization. For DEA users, we provide a modified DEA procedure that calculates preference weights and efficiency measures without depending on the other observations in the dataset. The methodology is applied to an agricultural case study in Spain.
    Keywords: Multicriteria Decision Making; Goal Programming; Weights; Preferences; Data Envelopment Analysis
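    The radial projection onto the payoff matrix can be written as a small linear program. The payoff matrix and observed objective values below are hypothetical, and this output-oriented formulation (maximize the radial expansion factor subject to a convex combination of the payoff-matrix rows) is one plausible reading of the model, solved here with SciPy.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical payoff matrix E: row j holds the value of every
# objective when objective j alone is optimized (all maximized).
E = np.array([[10.0, 2.0],
              [4.0, 8.0]])
f = np.array([6.0, 4.0])      # observed objective values of one unit

# Output-oriented radial projection: maximize theta such that some
# convex combination of the payoff-matrix rows dominates theta * f.
# Decision variables: [theta, lam_1, lam_2].
n = len(E)
c = np.concatenate([[-1.0], np.zeros(n)])   # linprog minimizes, so -theta
A_ub = np.column_stack([f, -E.T])           # theta*f_k <= sum_j lam_j*E[j,k]
b_ub = np.zeros(len(f))
A_eq = np.array([[0.0] + [1.0] * n])        # lam sums to 1 (convex combination)
b_eq = np.array([1.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (1 + n))
theta, lam = res.x[0], res.x[1:]
efficiency = 1.0 / theta    # radial efficiency of the unit
# lam plays the role of the preference weights over the objectives
```

    For these numbers both constraints bind at the optimum, giving theta = 1.2 (efficiency 1/1.2) with lam = (8/15, 7/15). Note that only the payoff matrix enters the program, which matches the abstract's point that the procedure does not depend on the other observations.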

    Fully supervised training of Gaussian radial basis function networks in WEKA

    Radial basis function networks are a type of feedforward network with a long history in machine learning. In spite of this, there is relatively little literature on how to train them so that accurate predictions are obtained. A common strategy is to train the hidden layer of the network using k-means clustering and the output layer using supervised learning. However, Wettschereck and Dietterich found that supervised training of hidden layer parameters can improve predictive performance. They investigated learning center locations, local variances of the basis functions, and attribute weights in a supervised manner. This document discusses supervised training of Gaussian radial basis function networks in the WEKA machine learning software. More specifically, we discuss the RBFClassifier and RBFRegressor classes available as part of the RBFNetwork package for WEKA 3.7 and consider (a) learning of center locations and one global variance parameter, (b) learning of center locations and one local variance parameter per basis function, and (c) learning center locations with per-attribute local variance parameters. We also consider learning attribute weights jointly with the other parameters.
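    Case (a), learning center locations and one global variance parameter jointly with the output weights, can be sketched as plain gradient descent on squared error. The toy data, learning rate, and network size below are illustrative and unrelated to the WEKA implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D regression problem.
X = np.linspace(-3, 3, 60)[:, None]
y = np.sin(X[:, 0])

K = 8                          # number of Gaussian basis functions
centers = rng.uniform(-3, 3, size=(K, 1))
log_s2 = 0.0                   # log of the single global variance
w = np.zeros(K)                # output-layer weights

lr = 0.05
for _ in range(2000):
    d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)   # (n, K)
    P = np.exp(-d2 / (2 * np.exp(log_s2)))                # activations
    err = P @ w - y
    G = err[:, None] * w[None] * P     # shared chain-rule factor
    # fully supervised: simultaneous gradient steps on all parameters
    w_grad = P.T @ err / len(X)
    c_grad = (G[..., None] * (X[:, None, :] - centers[None])
              / np.exp(log_s2)).sum(0) / len(X)
    s_grad = (G * d2).sum() / (2 * np.exp(log_s2) * len(X))
    w = w - lr * w_grad
    centers = centers - lr * c_grad
    log_s2 = log_s2 - lr * s_grad

d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
mse = float(np.mean((np.exp(-d2 / (2 * np.exp(log_s2))) @ w - y) ** 2))
```

    Cases (b) and (c) replace the scalar log_s2 with a per-basis-function vector or a per-attribute matrix; the gradient structure stays the same.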

    Using interval weights in MADM problems

    The choice of a weights vector in multiple attribute decision making (MADM) problems has generated an extensive literature, and a large number of methods have been proposed for this task. In some situations the decision maker (DM) may not be willing or able to provide exact values of the weights, but this difficulty can be avoided by allowing the DM some variability in the weights. In this paper we propose a model where the weights are not fixed but can take any value from certain intervals, so the score of each alternative is the maximum value that the weighted mean can reach when the weights belong to those intervals. We provide a closed-form expression for the scores achieved by the alternatives, so that they can be ranked without solving the proposed model, and apply the new method to an MADM problem taken from the literature.
    This work is part of the research project MEC-FEDER Grant ECO2016-77900-P.
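    Under the assumption that the weights must sum to one while each lies in its interval, the maximum of the weighted mean has a simple greedy form, which is one way to evaluate the kind of closed-form score the abstract mentions; the numbers in the example are invented.

```python
# Score of one alternative when each weight w_i may vary in
# [lo[i], hi[i]] and the weights must sum to 1: start every weight at
# its lower bound, then greedily assign the remaining mass to the
# attributes with the largest values.
def interval_weight_score(values, lo, hi):
    assert sum(lo) <= 1.0 <= sum(hi), "intervals must admit weights summing to 1"
    w = list(lo)
    budget = 1.0 - sum(lo)
    for i in sorted(range(len(values)), key=lambda i: -values[i]):
        extra = min(hi[i] - lo[i], budget)
        w[i] += extra
        budget -= extra
    return sum(wi * vi for wi, vi in zip(w, values))

# Invented example: the best score these intervals allow is reached
# at weights (0.6, 0.2, 0.2).
score = interval_weight_score([0.9, 0.5, 0.2],
                              lo=[0.2, 0.2, 0.2],
                              hi=[0.6, 0.6, 0.6])
```

    Ranking the alternatives then amounts to computing this score for each one, with no optimization model to solve per alternative.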

    A Decision tree-based attribute weighting filter for naive Bayes

    The naive Bayes classifier continues to be a popular learning algorithm for data mining applications due to its simplicity and linear run-time. Many enhancements to the basic algorithm have been proposed to help mitigate its primary weakness: the assumption that attributes are independent given the class. All of them improve the performance of naive Bayes at the expense (to a greater or lesser degree) of execution time and/or simplicity of the final model. In this paper we present a simple filter method for setting attribute weights for use with naive Bayes. Experimental results show that naive Bayes with attribute weights rarely degrades the quality of the model compared to standard naive Bayes and, in many cases, improves it dramatically. The main advantages of this method compared to other approaches for improving naive Bayes are its run-time complexity and the fact that it maintains the simplicity of the final model.
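    Attribute-weighted naive Bayes itself is easy to state: each class-conditional likelihood is raised to the attribute's weight. The sketch below uses invented binary data and arbitrary weights; the paper's contribution is deriving the weights from a decision tree, which is not reproduced here.

```python
import numpy as np

# Invented binary training data: rows are instances, columns attributes.
X = np.array([[1, 0], [1, 1], [0, 1], [0, 0], [1, 0]])
y = np.array([1, 1, 0, 0, 1])
weights = np.array([1.0, 0.5])   # illustrative attribute weights

def predict(x):
    """Weighted naive Bayes: score(c) = P(c) * prod_i P(x_i|c)**w_i,
    with Laplace-smoothed Bernoulli likelihoods."""
    scores = []
    for c in (0, 1):
        Xc = X[y == c]
        prior = len(Xc) / len(X)
        p1 = (Xc.sum(axis=0) + 1) / (len(Xc) + 2)   # P(x_i = 1 | c)
        lik = np.where(x == 1, p1, 1 - p1) ** weights
        scores.append(prior * lik.prod())
    return int(np.argmax(scores))
```

    With every weight equal to 1 this reduces to standard naive Bayes, and a weight of 0 removes an attribute entirely, so the final model keeps the same simple product form.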