
    Ordinal Regression by Extended Binary Classification

    We present a reduction framework from ordinal regression to binary classification based on extended examples. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranking rule from the binary classifier. A weighted 0/1 loss of the binary classifier then bounds the mislabeling cost of the ranking rule. Our framework not only allows good ordinal regression algorithms to be designed on top of well-tuned binary classification approaches, but also yields new generalization bounds for ordinal regression from known bounds for binary classification. In addition, our framework unifies many existing ordinal regression algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in both training speed and generalization performance over existing algorithms, which demonstrates the usefulness of our framework.
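    The three-step reduction above can be sketched in a few lines. This is a minimal illustration, not the paper's exact algorithm: it omits the per-example cost weights, uses plain logistic regression as the binary learner, and encodes the threshold index as an appended one-hot vector; the function names and the toy data are my own.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def to_extended(X, y, K):
        """Step 1: turn each (x, y) with y in {1..K} into K-1 extended
        binary examples ((x, k), [y > k]) for thresholds k = 1..K-1."""
        rows, labels = [], []
        for x, yi in zip(X, y):
            for k in range(1, K):
                thresh = np.zeros(K - 1)
                thresh[k - 1] = 1.0          # one-hot encoding of threshold k
                rows.append(np.concatenate([x, thresh]))
                labels.append(1 if yi > k else 0)
        return np.array(rows), np.array(labels)

    def rank_rule(clf, X, K):
        """Step 3: r(x) = 1 + number of thresholds the binary classifier
        says x exceeds."""
        preds = np.zeros(len(X), dtype=int)
        for k in range(1, K):
            thresh = np.zeros(K - 1)
            thresh[k - 1] = 1.0
            Xk = np.hstack([X, np.tile(thresh, (len(X), 1))])
            preds += clf.predict(Xk).astype(int)
        return preds + 1

    # toy data: 3 ordered classes, separable along the first feature
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 2))
    y = np.digitize(X[:, 0], [-0.5, 0.5]) + 1   # labels in {1, 2, 3}

    Xe, ze = to_extended(X, y, K=3)             # step 1
    clf = LogisticRegression().fit(Xe, ze)      # step 2: any binary learner
    print(rank_rule(clf, X[:5], K=3))           # step 3
    ```

    Any off-the-shelf binary classifier can replace `LogisticRegression` in step 2, which is the point of the reduction.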

    Incremental Sparse Bayesian Ordinal Regression

    Ordinal Regression (OR) aims to model the ordering information between different data categories, which is a crucial topic in multi-label learning. An important class of approaches to OR models the problem as a linear combination of basis functions that map features to a high-dimensional non-linear space. However, most basis-function-based algorithms are time-consuming. We propose an incremental sparse Bayesian approach to OR tasks and introduce an algorithm to sequentially learn the relevant basis functions in the ordinal scenario. Our method, called Incremental Sparse Bayesian Ordinal Regression (ISBOR), automatically optimizes the hyper-parameters via the type-II maximum likelihood method. By exploiting fast marginal likelihood optimization, ISBOR avoids large matrix inversions, the main bottleneck in applying basis-function-based algorithms to OR tasks on large-scale datasets. We show that ISBOR makes accurate predictions with parsimonious basis functions while offering automatic estimates of the prediction uncertainty. Extensive experiments on synthetic and real-world datasets demonstrate the efficiency and effectiveness of ISBOR compared to other basis-function-based OR approaches.

    Penalized Regression with Ordinal Predictors

    Ordered categorical predictors are a common case in regression modeling. In contrast to the case of ordinal response variables, ordinal predictors have been largely neglected in the literature. In this article penalized regression techniques are proposed. Based on dummy coding, two types of penalization are explicitly developed: the first imposes a difference penalty, the second is a ridge-type refitting procedure. A Bayesian motivation as well as alternative ways of derivation are provided. Simulation studies and real-world data serve for illustration and to compare the approach to methods often seen in practice, namely linear regression on the group labels and pure dummy coding. The proposed regression techniques turn out to be highly competitive. On the basis of GLMs the concept is generalized to the case of non-normal outcomes by performing penalized likelihood estimation. The paper is a preprint of an article published in the International Statistical Review. Please use the journal version for citation.
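    The difference-penalty idea can be sketched for a Gaussian outcome: penalize the squared differences between coefficients of adjacent levels of the ordered predictor, so neighbouring dummy coefficients are shrunk toward each other rather than toward zero. This is a minimal sketch under assumed details (plain dummy coding without a reference category, penalized least squares rather than the paper's penalized likelihood for GLMs); all names and the toy data are my own.

    ```python
    import numpy as np

    def difference_penalty(num_levels):
        """First-order difference matrix D with ||D b||^2 = sum_j (b_{j+1} - b_j)^2."""
        D = np.zeros((num_levels - 1, num_levels))
        for j in range(num_levels - 1):
            D[j, j], D[j, j + 1] = -1.0, 1.0
        return D

    def fit_ordinal_penalized(X, y, lam):
        """Penalized least squares: argmin_b ||y - X b||^2 + lam * ||D b||^2,
        where the columns of X are dummies for the ordered levels."""
        D = difference_penalty(X.shape[1])
        return np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)

    # toy example: a 5-level ordinal predictor with a smooth, increasing effect
    rng = np.random.default_rng(1)
    levels = rng.integers(0, 5, size=200)
    X = np.eye(5)[levels]                    # dummy coding of the levels
    y = 0.5 * levels + rng.normal(scale=0.3, size=200)

    beta = fit_ordinal_penalized(X, y, lam=5.0)
    ```

    With `lam = 0` this reduces to per-level means; increasing `lam` flattens the coefficient profile toward a constant, which is exactly the smoothness prior the difference penalty encodes.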

    Ordinal Ridge Regression with Categorical Predictors

    In multi-category response models, the categories are often ordered. For ordinal response models, the usual likelihood approach becomes unstable when the predictor space is ill-conditioned or when the number of parameters to be estimated is large relative to the sample size. The likelihood estimates do not exist when the number of observations is less than the number of parameters. The same problem arises if constraints on the order of intercept values are not met during the iterative fitting procedure. Proportional odds models are most commonly used for ordinal responses. In this paper, penalized likelihood with a quadratic penalty is used to address these issues, with a special focus on proportional odds models. To avoid large differences between the parameter values of consecutive categories of an ordinal predictor, the differences between the parameters of adjacent categories are penalized. The considered penalized likelihood function penalizes the parameter estimates or the differences between them according to the type of predictor. Mean squared error of the parameter estimates, deviance of the fitted probabilities, and prediction error for ridge regression are compared with the usual likelihood estimates in a simulation study and an application.

    Rank-consistent Ordinal Regression for Neural Networks

    In many real-world prediction tasks, class labels include information about the relative ordering between labels, which is not captured by commonly used loss functions such as multi-category cross-entropy. Recently, ordinal regression frameworks have been adopted by the deep learning community to take such ordering information into account. Using a framework that transforms ordinal targets into binary classification subtasks, neural networks were equipped with ordinal regression capabilities. However, this method suffers from inconsistencies among the different binary classifiers. We hypothesize that addressing the inconsistency issue in these binary classification task-based neural networks improves predictive performance. To test this hypothesis, we propose the COnsistent RAnk Logits (CORAL) framework with strong theoretical guarantees for rank-monotonicity and consistent confidence scores. Moreover, the proposed method is architecture-agnostic and can extend arbitrary state-of-the-art deep neural network classifiers for ordinal regression tasks. The empirical evaluation of the proposed rank-consistent method on a range of face-image datasets for age prediction shows a substantial reduction of the prediction error compared to the reference ordinal regression network.
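    The rank-consistency mechanism can be illustrated numerically. In CORAL, the K-1 binary subtasks share a single learned logit per input and differ only by task-specific biases; because the sigmoid is monotone, decreasing biases give decreasing probabilities P(y > k), so the binary decisions can never contradict each other. The sketch below shows only this inference-time property with hand-set logits and biases; the actual method trains this as the last layer of a deep network with a weighted binary cross-entropy loss.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def coral_predict(g, biases):
        """CORAL-style output: one shared logit g(x) per example plus K-1
        task-specific biases, P(y > k) = sigmoid(g + b_k). Because every
        subtask shares g, sorting the biases in decreasing order makes the
        K-1 probabilities non-increasing in k (rank-monotonicity)."""
        probs = sigmoid(g[:, None] + np.sort(biases)[::-1][None, :])
        labels = 1 + (probs > 0.5).sum(axis=1)   # rank = 1 + #{k : P(y>k) > 0.5}
        return probs, labels

    g = np.array([-2.0, 0.1, 3.0])               # shared logits for 3 inputs
    biases = np.array([1.5, 0.0, -1.5])          # K-1 = 3 thresholds -> K = 4 ranks
    probs, labels = coral_predict(g, biases)
    # each row of probs is non-increasing, so the implied rank is consistent
    ```

    In the earlier binary-subtask framework each task has its own weight vector, so P(y > 2) can exceed P(y > 1) for some inputs; the shared-logit constraint is what rules this out.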

    Regularized Ordinal Regression and the ordinalNet R Package

    Regularization techniques such as the lasso (Tibshirani 1996) and elastic net (Zou and Hastie 2005) can be used to improve regression model coefficient estimation and prediction accuracy, as well as to perform variable selection. Ordinal regression models are widely used in applications where regularization could be beneficial; however, these models are not included in many popular software packages for regularized regression. We propose a coordinate descent algorithm to fit a broad class of ordinal regression models with an elastic net penalty. Furthermore, we demonstrate that each model in this class generalizes to a more flexible form, for instance to accommodate unordered categorical data. We introduce an elastic net penalty class that applies to both model forms. Additionally, this penalty can be used to shrink a non-ordinal model toward its ordinal counterpart. Finally, we introduce the R package ordinalNet, which implements the algorithm for this model class.

    Using rank data to estimate health state utility models

    In this paper we report the estimation of conditional logistic regression models for the Health Utilities Index Mark 2 and the SF-6D, using ordinal preference data. The results are compared to the conventional regression models estimated from standard gamble data, and to the observed mean standard gamble health state valuations. For both the HUI2 and the SF-6D, the models estimated using ordinal data are broadly comparable to the models estimated on standard gamble data, and the predictive performance of these models is close to that of the standard gamble models. Our research indicates that ordinal data have the potential to provide useful insights into community health state preferences. However, important questions remain.