2,663 research outputs found

    Choosing Attribute Weights for Item Dissimilarity using Clickstream Data with an Application to a Product Catalog Map

    Content- and knowledge-based recommender systems often use a measure of (dis)similarity between items, frequently based on the attributes of those items. Which attributes are important to the users of the system, however, remains an open question. In this paper, we present an approach to determine attribute weights in a dissimilarity measure using clickstream data from an e-commerce website. We count how many times products are sold and estimate a Poisson regression model on these counts. The estimates of this model are then used to determine the attribute weights in the dissimilarity measure. We apply the approach to a product catalog of MP3 players provided by Compare Group, owner of the Dutch price comparison site http://www.vergelijk.nl, and show how the dissimilarity measure can be used to improve 2D product catalog visualizations.
    Keywords: dissimilarity measure; attribute weights; clickstream data; comparison
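
    As a rough illustration of the pipeline described above, the sketch below fits a Poisson regression to simulated sales counts and reuses the absolute, normalized coefficients as attribute weights in a weighted L1 dissimilarity. The toy data, the scikit-learn PoissonRegressor, and the absolute-coefficient weighting are illustrative assumptions, not the authors' exact specification.

        # Hedged sketch: Poisson regression on sales counts; coefficients become attribute weights.
        import numpy as np
        from sklearn.linear_model import PoissonRegressor

        # Toy catalog: rows are products, columns are standardized attribute values (assumed).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 4))
        y = rng.poisson(lam=np.exp(1.5 + 0.8 * X[:, 0] - 0.3 * X[:, 2]))  # simulated sales counts

        model = PoissonRegressor(alpha=1e-4).fit(X, y)

        # Assumption: attributes whose coefficients move sales more get larger weights.
        w = np.abs(model.coef_)
        w = w / w.sum()

        def weighted_dissimilarity(a, b, w):
            """Weighted L1 dissimilarity between two attribute vectors."""
            return float(np.sum(w * np.abs(a - b)))

        print("attribute weights:", np.round(w, 3))
        print("d(product 0, product 1) =", round(weighted_dissimilarity(X[0], X[1], w), 3))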

    A decision tree-based attribute weighting filter for naive Bayes

    The naive Bayes classifier continues to be a popular learning algorithm for data mining applications due to its simplicity and linear run-time. Many enhancements to the basic algorithm have been proposed to help mitigate its primary weakness: the assumption that attributes are independent given the class. All of them improve the performance of naive Bayes at the expense (to a greater or lesser degree) of execution time and/or simplicity of the final model. In this paper we present a simple filter method for setting attribute weights for use with naive Bayes. Experimental results show that naive Bayes with attribute weights rarely degrades the quality of the model compared to standard naive Bayes and, in many cases, improves it dramatically. The main advantages of this method compared to other approaches for improving naive Bayes are its run-time complexity and the fact that it maintains the simplicity of the final model.
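
    The abstract does not spell out the filter itself; one plausible reading, sketched below, derives each attribute's weight from the depth at which a decision tree first tests it (shallower is taken to mean more important) and then scales that attribute's log-likelihood contribution inside a Gaussian naive Bayes model. The 1/sqrt(depth+1) weighting, the single unbagged tree, and the Gaussian likelihood are illustrative assumptions, not necessarily the paper's filter.

        # Hedged sketch: tree-depth-based attribute weights inside a Gaussian naive Bayes model.
        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_iris(return_X_y=True)
        tree = DecisionTreeClassifier(random_state=0).fit(X, y)

        def min_test_depths(tree, n_features):
            """Minimum depth at which each feature is tested (inf if never tested)."""
            t = tree.tree_
            depths = np.full(n_features, np.inf)
            stack = [(0, 0)]                       # (node id, depth)
            while stack:
                node, d = stack.pop()
                f = t.feature[node]
                if f >= 0:                         # internal (splitting) node
                    depths[f] = min(depths[f], d)
                    stack.append((t.children_left[node], d + 1))
                    stack.append((t.children_right[node], d + 1))
            return depths

        depths = min_test_depths(tree, X.shape[1])
        w = 1.0 / np.sqrt(depths + 1.0)            # assumed scheme; unused features get weight 0

        # Weighted Gaussian naive Bayes: each attribute's log-likelihood is scaled by its weight.
        classes = np.unique(y)
        mu = np.array([X[y == c].mean(axis=0) for c in classes])
        var = np.array([X[y == c].var(axis=0) + 1e-9 for c in classes])
        prior = np.array([np.mean(y == c) for c in classes])

        def predict(x):
            loglik = -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)  # (n_classes, n_features)
            return classes[np.argmax(np.log(prior) + loglik @ w)]

        acc = np.mean([predict(x) == t for x, t in zip(X, y)])
        print("weights:", np.round(w, 2), "training accuracy:", round(float(acc), 3))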

    Fully supervised training of Gaussian radial basis function networks in WEKA

    Radial basis function networks are a type of feedforward network with a long history in machine learning. In spite of this, there is relatively little literature on how to train them so that accurate predictions are obtained. A common strategy is to train the hidden layer of the network using k-means clustering and the output layer using supervised learning. However, Wettschereck and Dietterich found that supervised training of hidden-layer parameters can improve predictive performance. They investigated supervised learning of center locations, local variances of the basis functions, and attribute weights. This document discusses supervised training of Gaussian radial basis function networks in the WEKA machine learning software. More specifically, we discuss the RBFClassifier and RBFRegressor classes available as part of the RBFNetwork package for WEKA 3.7 and consider (a) learning of center locations and one global variance parameter, (b) learning of center locations and one local variance parameter per basis function, and (c) learning of center locations with per-attribute local variance parameters. We also consider learning attribute weights jointly with the other parameters.
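
    A compact numpy sketch of fully supervised training in the spirit of case (a): center locations, one global variance, and the output-layer weights are adjusted jointly by gradient descent on squared error. This is not the WEKA RBFRegressor implementation; the network size, learning rate, and log-parameterized bandwidth are illustrative choices.

        # Hedged sketch: Gaussian RBF regressor with centers, one global bandwidth, and output
        # weights all trained jointly by gradient descent on mean squared error.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(200, 1))
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

        K, lr, epochs = 8, 0.05, 2000                         # illustrative hyperparameters
        C = X[rng.choice(len(X), K, replace=False)].copy()    # centers initialized from the data
        log_sigma, w, b = np.log(1.0), np.zeros(K), 0.0       # global bandwidth, output weights, bias

        for _ in range(epochs):
            sigma = np.exp(log_sigma)
            D2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)   # squared distances, shape (N, K)
            Phi = np.exp(-D2 / (2 * sigma ** 2))                  # Gaussian basis activations
            err = Phi @ w + b - y                                 # residuals, shape (N,)

            coef = err[:, None] * w[None, :] * Phi                # shared gradient term, shape (N, K)
            g_w = 2 * Phi.T @ err / len(X)
            g_b = 2 * err.mean()
            g_C = 2 * (coef[:, :, None] * (X[:, None, :] - C[None, :, :])).mean(0) / sigma ** 2
            g_ls = 2 * (coef * D2).sum() / (len(X) * sigma ** 2)

            w, b, C, log_sigma = w - lr * g_w, b - lr * g_b, C - lr * g_C, log_sigma - lr * g_ls

        sigma = np.exp(log_sigma)
        Phi = np.exp(-((X[:, None, :] - C[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))
        print("training RMSE:", round(float(np.sqrt(np.mean((Phi @ w + b - y) ** 2))), 3))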

    Determination of Attribute Weights for Recommender Systems Based on Product Popularity

    Content- and knowledge-based recommender systems often use a measure of (dis)similarity between products, frequently based on the attributes of the products. Which attributes are important to the users of the system, however, remains an open question. In this paper, we present two approaches to determine attribute weights in a dissimilarity measure based on product popularity. We count how many times products are sold and, based on this, create two models to determine attribute weights: a Poisson regression model and a novel boosting model minimizing Poisson deviance. We evaluate these two models in two ways: a clickstream analysis on four different product catalogs and a user experiment. The clickstream analysis shows that for each product catalog the standard equal-weights model is outperformed by at least one of the weighting models. The user experiment shows that users seem to have a different notion of product similarity in an experimental context.
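
    For the second model, boosting under a Poisson deviance loss can be approximated with off-the-shelf tooling: the sketch below fits a gradient-boosted regressor with a Poisson loss to simulated sales counts and reads off permutation importances as candidate attribute weights, next to the equal-weights baseline. Treating permutation importance as the attribute weight is an assumption for illustration, not the authors' boosting formulation.

        # Hedged sketch: gradient boosting with a Poisson loss on sales counts; permutation
        # importances (an assumption) serve as attribute weights next to equal weights.
        import numpy as np
        from sklearn.ensemble import HistGradientBoostingRegressor
        from sklearn.inspection import permutation_importance

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 5))                                        # toy product attributes
        y = rng.poisson(lam=np.exp(1.0 + 0.9 * X[:, 0] - 0.5 * X[:, 3]))     # simulated sales counts

        boost = HistGradientBoostingRegressor(loss="poisson", random_state=0).fit(X, y)
        imp = permutation_importance(boost, X, y, n_repeats=10, random_state=0)

        w_boost = np.clip(imp.importances_mean, 0, None)
        w_boost = w_boost / w_boost.sum()
        w_equal = np.full(X.shape[1], 1 / X.shape[1])

        print("boosting-based weights:", np.round(w_boost, 3))
        print("equal-weights baseline:", np.round(w_equal, 3))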

    An interval-valued intuitionistic fuzzy multiattribute group decision making framework with incomplete preference over alternatives

    This article proposes a framework to handle multiattribute group decision making problems with incomplete pairwise comparison preference over decision alternatives, where qualitative and quantitative attribute values are furnished as linguistic variables and crisp numbers, respectively. Attribute assessments are then converted to interval-valued intuitionistic fuzzy numbers (IVIFNs) to characterize fuzziness and uncertainty in the evaluation process. Group consistency and inconsistency indices are introduced for the incomplete pairwise comparison preference relations on alternatives provided by the decision-makers (DMs). By minimizing the group inconsistency index under certain constraints, an auxiliary linear programming model is developed to obtain unified attribute weights and an interval-valued intuitionistic fuzzy positive ideal solution (IVIFPIS). The attribute weights are subsequently employed to calculate distances between alternatives and the IVIFPIS for ranking the alternatives. An illustrative example is provided to demonstrate the applicability and effectiveness of the method.
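
    Assuming the attribute weights have already been obtained from the linear programming model, the ranking step can be illustrated as follows: each alternative's IVIFN ratings are compared to an assumed ideal solution ([1,1],[0,0]) using one commonly used normalized Hamming distance, and alternatives are ranked by their weighted distance. The ratings, weights, ideal solution, and distance definition below are all illustrative assumptions rather than the article's exact model.

        # Hedged sketch: rank alternatives by weighted distance of their IVIFN ratings to an
        # interval-valued intuitionistic fuzzy positive ideal solution (IVIFPIS).
        import numpy as np

        # Each rating is ([mu_lo, mu_hi], [nu_lo, nu_hi]); axis 0 = alternatives, axis 1 = attributes.
        ratings = np.array([
            [[[0.4, 0.5], [0.3, 0.4]], [[0.5, 0.6], [0.2, 0.3]], [[0.6, 0.7], [0.1, 0.2]]],
            [[[0.5, 0.6], [0.2, 0.3]], [[0.4, 0.6], [0.3, 0.4]], [[0.5, 0.6], [0.3, 0.4]]],
            [[[0.6, 0.7], [0.1, 0.3]], [[0.5, 0.7], [0.1, 0.2]], [[0.4, 0.5], [0.2, 0.4]]],
        ])                                               # shape: (3 alternatives, 3 attributes, 2, 2)
        weights = np.array([0.40, 0.35, 0.25])           # assumed output of the weighting model
        ideal = np.array([[1.0, 1.0], [0.0, 0.0]])       # common IVIFPIS choice for benefit attributes

        def ivifn_distance(a, b):
            """One common normalized Hamming distance between two IVIFNs."""
            return 0.25 * np.abs(a - b).sum()

        # Weighted distance of each alternative to the ideal solution; smaller is better.
        d = np.array([
            sum(w * ivifn_distance(ratings[i, j], ideal) for j, w in enumerate(weights))
            for i in range(ratings.shape[0])
        ])
        print("distances:", np.round(d, 3), " ranking (best first):", np.argsort(d) + 1)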

    A mathematical programming approach to multi-attribute decision making with interval-valued intuitionistic fuzzy assessment information

    This article proposes an approach to handle multi-attribute decision making (MADM) problems under the interval-valued intuitionistic fuzzy environment, in which both the assessments of alternatives on attributes (hereafter referred to as attribute values) and the attribute weights are provided as interval-valued intuitionistic fuzzy numbers (IVIFNs). The notion of relative closeness is extended to interval values to accommodate IVIFN decision data, and fractional programming models are developed based on the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method to determine a relative closeness interval, in which attribute weights are determined independently for each alternative. By employing a series of optimization models, a quadratic program is established to obtain a unified attribute weight vector, with which the individual IVIFN attribute values are aggregated into relative closeness intervals to the ideal solution for final ranking. An illustrative supplier selection problem is employed to demonstrate how to apply the proposed procedure.
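
    For orientation, the sketch below shows the crisp TOPSIS relative closeness that the article generalizes to interval values and IVIFN data; it is not the fractional or quadratic programming model itself. The decision matrix, attribute weights, and benefit-type attributes are invented for illustration.

        # Hedged sketch: crisp TOPSIS relative closeness (higher is better) for a toy supplier
        # selection problem with three benefit-type attributes.
        import numpy as np

        X = np.array([[7.0, 9.0, 9.0],      # suppliers (rows) scored on three attributes (columns)
                      [8.0, 7.0, 8.0],
                      [9.0, 6.0, 8.5],
                      [6.0, 7.0, 9.5]])
        w = np.array([0.5, 0.3, 0.2])       # assumed attribute weights

        R = X / np.linalg.norm(X, axis=0)                  # vector-normalized decision matrix
        V = R * w                                          # weighted normalized matrix
        v_pos, v_neg = V.max(axis=0), V.min(axis=0)        # positive / negative ideal solutions

        d_pos = np.linalg.norm(V - v_pos, axis=1)          # distance to the positive ideal
        d_neg = np.linalg.norm(V - v_neg, axis=1)          # distance to the negative ideal
        closeness = d_neg / (d_pos + d_neg)                # relative closeness coefficient

        print("relative closeness:", np.round(closeness, 3))
        print("ranking (best first):", np.argsort(-closeness) + 1)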

    The influence of rankings on attribute weights in multi-attribute decision tasks

    This paper investigates two alternative mechanisms through which rankings may influence attribute weights. While the choice of sorting attribute may serve as a sign of relevance (conversational norms mechanism), consumers could also deduce attribute importance from the ease of processing (fluency mechanism). We show that rankings only influence the weight of less familiar attributes. Using eye-movement data, we found that the sorting attribute corresponds with a decrease in attention, which is not compatible with the conversational norms mechanism. Using a cognitive load manipulation, we provide evidence for "ease of comparison" as an explanatory factor.