Subgroup Preference Neural Network
Subgroup label ranking, which aims to rank groups of labels using a single ranking model, is a new problem in preference learning. This paper introduces the Subgroup Preference Neural Network (SGPNN), which combines multiple networks that have different activation functions, learning rates, and output layers into one artificial neural network (ANN) to discover the hidden relations between the subgroups' multi-labels. The SGPNN is a feedforward (FF), partially connected network that has a single middle layer and uses a stairstep (SS) multi-valued activation function to enhance the prediction probability and accelerate the ranking convergence. The novel structure of the proposed SGPNN places a multi-activation-function neuron (MAFN) in the middle layer to rank each subgroup independently. The SGPNN uses gradient ascent to maximize the Spearman ranking correlation between the groups of labels. Each label is represented by an output neuron that has a single SS function. The proposed SGPNN, trained on a conjoint dataset, outperforms other label ranking methods that use each dataset individually. The proposed SGPNN achieves an average accuracy of 91.4% using the conjoint dataset, compared to supervised clustering, decision trees, multilayer perceptron label ranking, and label ranking forests, which achieve average accuracies of 60%, 84.8%, 69.2%, and 73%, respectively, using the individual datasets.
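The Spearman ranking correlation that the SGPNN maximizes by gradient ascent has a simple closed form for complete rankings. The sketch below is illustrative only (not the authors' implementation) and assumes both rankings are complete, tie-free vectors of rank positions:

```python
def spearman_rho(rank_true, rank_pred):
    """Spearman rank correlation between two complete rankings.

    Both arguments are sequences of rank positions (1..n) for the same
    n labels.  For tie-free rankings:
        rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))
    where d_i is the rank difference for label i.
    """
    n = len(rank_true)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_true, rank_pred))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Perfect agreement scores 1.0; a complete reversal scores -1.0.
print(spearman_rho([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
print(spearman_rho([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0
```

Because rho is bounded in [-1, 1] and equals 1 only for a perfect ranking, maximizing it per subgroup is a natural training objective for a ranking network.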
Preference Neural Network
This paper proposes a preference neural network (PNN) to address the problem
of indifference preference orders with a new activation function. PNN also
solves the multi-label ranking problem, where labels may have indifference
preference orders or subgroups may be equally ranked. PNN follows a multi-layer
feedforward architecture with fully connected neurons. Each neuron contains a
novel smooth stairstep activation function based on the number of preference
orders. PNN inputs represent data features and output neurons represent label
indexes. The proposed PNN is evaluated using a new preference mining dataset that
contains repeated label values, which have not been experimented with before. PNN
outperforms five previously proposed methods for strict label ranking in terms
of accurate results with high computational efficiency.
Comment: The current content is inappropriate and requires to be
comprehensively reviewed again
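A smooth stairstep activation with one plateau per preference order can be built, for example, as a sum of shifted sigmoids. The sketch below is a hypothetical illustration of that idea, not the PNN's actual formulation; the `steepness` parameter and the integer step positions are assumptions:

```python
import math

def smooth_stairstep(x, n_orders, steepness=10.0):
    """Hypothetical smooth stairstep: a sum of n_orders - 1 shifted
    sigmoids, giving plateaus near each integer level 0 .. n_orders - 1.

    Each term 1 / (1 + exp(-steepness * (x - k))) contributes one smooth
    transition at x = k; larger steepness makes the steps sharper while
    keeping the function differentiable for gradient-based training.
    """
    return sum(1.0 / (1.0 + math.exp(-steepness * (x - k)))
               for k in range(1, n_orders))
```

With `n_orders = 4`, inputs well below 1 map near level 0, inputs between the steps map near levels 1 and 2, and large inputs saturate near level 3, so each output neuron can express one of four discrete rank positions while remaining differentiable.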
Mixture-Based Probabilistic Graphical Models for the Label Ranking Problem
The goal of the Label Ranking (LR) problem is to learn preference models that predict the
preferred ranking of class labels for a given unlabeled instance. Different well-known machine
learning algorithms have been adapted to deal with the LR problem. In particular, fine-tuned
instance-based algorithms (e.g., k-nearest neighbors) and model-based algorithms (e.g., decision
trees) have performed remarkably well in tackling the LR problem. Probabilistic Graphical Models
(PGMs, e.g., Bayesian networks) have not been considered to deal with this problem because of the
difficulty of modeling permutations in that framework. In this paper, we propose a Hidden Naive
Bayes classifier (HNB) to cope with the LR problem. By introducing a hidden variable, we can design a
hybrid Bayesian network in which several types of distributions can be combined: multinomial for
discrete variables, Gaussian for numerical variables, and Mallows for permutations. We consider two
kinds of probabilistic models: one based on a Naive Bayes graphical structure (where only univariate
probability distributions are estimated for each state of the hidden variable) and another where we
allow interactions among the predictive attributes (using a multivariate Gaussian distribution for the
parameter estimation). The experimental evaluation shows that our proposals are competitive with
the state-of-the-art algorithms in both accuracy and CPU time requirements.
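The Mallows model used above for permutations assigns each ranking a probability that decays exponentially with its Kendall tau distance from a central ranking. A brute-force Python sketch (illustrative only; practical implementations use the closed-form normalization constant instead of enumerating all permutations):

```python
import math
from itertools import permutations

def kendall_tau(pi, sigma):
    """Number of discordant label pairs between two rank vectors."""
    n = len(pi)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if (pi[i] - pi[j]) * (sigma[i] - sigma[j]) < 0)

def mallows_prob(pi, center, theta):
    """P(pi) proportional to exp(-theta * d(pi, center)), with the
    normalization constant computed by enumerating all permutations,
    which is only feasible for the tiny n used in this illustration."""
    labels = sorted(center)
    z = sum(math.exp(-theta * kendall_tau(list(s), center))
            for s in permutations(labels))
    return math.exp(-theta * kendall_tau(pi, center)) / z
```

For any spread parameter theta > 0, the central ranking is the single most probable permutation, which is what makes the model a natural building block for ranking-valued variables in a Bayesian network.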