
    Uncertainty with ordinal likelihood information

    The final publication is available at Springer via http://dx.doi.org/10.1007/s00355-012-0689-8. We present a model that is closely related to the so-called models of choice under complete uncertainty, in which the agent has no information about the probability of the outcomes. There are two approaches within these models: the state space-based approach, which takes into account the possible states of nature and the correspondence between states and outcomes; and the set-based approach, which ignores such information and thereby avoids certain difficulties arising from the state space-based approach. Kelsey (Int Econ Rev 34:297–308, 1993) incorporates into a state space-based framework the assumption that the agent has ordinal information about the likelihood of the states. This paper incorporates the same assumption into a set-based framework, thus filling a theoretical gap in the literature. Compared to the set-based models of choice under complete uncertainty, we introduce information about the ordinal likelihood of the outcomes; compared to Kelsey's approach, we retain the advantages of describing uncertain environments from the set-based perspective. We present an axiomatic study that includes adaptations of some of the axioms found in the related literature, and we characterize some rules featuring different combinations of information about the ordinal likelihood of the outcomes and information about their desirability. We acknowledge financial support from the Spanish Ministry of Science and Technology (Projects ECO2008-04756, ECO2009-11213 and ECO2009-12836, and the Ramón y Cajal program), the Junta de Castilla y León (Project VA092A08), FEDER, and the Barcelona Economics Program of CREA.

    Uncertainty with ordinal likelihood information

    This paper proposes a new framework of choice under uncertainty, where the only information available to the decision maker concerns the ordinal likelihood of the different outcomes each action generates. This contrasts both with the classical models, where the potential outcomes of each action have an associated probability distribution, and with the more recent complete uncertainty models, where the agent has no information whatsoever about the probability of the outcomes, even of an ordinal nature. We present an impossibility result in our framework, and some ways to circumvent it that result in different ranking rules. The authors acknowledge financial support from the Spanish Ministerio de Educación y Ciencia (Projects SEC2003-08105 and SEJ2006-11510) and from the Junta de Castilla y León (Project VA040A05).

    Incremental Sparse Bayesian Ordinal Regression

    Ordinal Regression (OR) aims to model the ordering information between different data categories, and is a crucial topic in multi-label learning. An important class of approaches models OR as a linear combination of basis functions that map features into a high-dimensional non-linear space. However, most basis function-based algorithms are time-consuming. We propose an incremental sparse Bayesian approach to OR tasks and introduce an algorithm that sequentially learns the relevant basis functions in the ordinal scenario. Our method, called Incremental Sparse Bayesian Ordinal Regression (ISBOR), automatically optimizes the hyper-parameters via the type-II maximum likelihood method. By exploiting fast marginal likelihood optimization, ISBOR avoids large matrix inversions, which are the main bottleneck in applying basis function-based algorithms to OR tasks on large-scale datasets. We show that ISBOR makes accurate predictions with parsimonious basis functions while offering automatic estimates of the prediction uncertainty. Extensive experiments on synthetic and real-world datasets demonstrate the efficiency and effectiveness of ISBOR compared to other basis function-based OR approaches.
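
    As a point of reference for the cumulative-link construction that basis function-based OR methods share, the sketch below shows the generic ordinal likelihood with ordered cut-points. It is an illustration only, not the ISBOR algorithm (the sparse prior, incremental basis selection and type-II maximum likelihood step are omitted), and all names and values in it are ours.

```python
# Minimal sketch of the cumulative-link ordinal likelihood that basis
# function-based OR methods build on. Generic illustration, not ISBOR itself.
import numpy as np

def ordinal_probs(f, thresholds):
    """P(y = k | f) for latent scores f and increasing cut-points b_1 < ... < b_{K-1}."""
    # P(y <= k | f) = sigmoid(b_k - f); class probabilities are differences of CDFs.
    inner = 1.0 / (1.0 + np.exp(f[:, None] - thresholds[None, :]))
    cdf = np.hstack([np.zeros((len(f), 1)), inner, np.ones((len(f), 1))])
    return np.diff(cdf, axis=1)

# Toy usage: latent scores from a linear basis expansion, 3 ordered classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))                 # 5 instances, 2 basis features
w = np.array([1.0, -0.5])                   # made-up weights
probs = ordinal_probs(X @ w, thresholds=np.array([-0.5, 0.5]))
print(probs)                                # each row sums to 1
```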

    Fears and realisations of employment insecurity

    We investigate the validity of subjective data on expectations of job loss and on the probability of re-employment consequent on job loss, by examining associations between expectations and realisations. We find that subjective expectations data reveal private information about subsequent job loss, and that the expectations data perform better with numerical descriptors than with ordinal verbal descriptors. On average, employees overestimate the chance of losing their job, while they underestimate the difficulty of finding another job as good as the currently held one. We recommend that survey items on employment insecurity be explicit about each risk investigated, and utilise a cardinal probability scale with discrete numerical descriptors.

    Clustering South African households based on their asset status using latent variable models

    The Agincourt Health and Demographic Surveillance System has, since 2001, conducted a biannual household asset survey in order to quantify household socio-economic status (SES) in a rural population living in northeast South Africa. The survey contains binary, ordinal and nominal items. In the absence of income or expenditure data, the SES landscape in the study population is explored and described by clustering the households into homogeneous groups based on their asset status. A model-based approach to clustering the Agincourt households, based on latent variable models, is proposed. For binary or ordinal items, item response theory models are employed. For nominal survey items, a factor analysis model, similar in nature to a multinomial probit model, is used. Both model types have an underlying latent variable structure; this similarity is exploited and the models are combined to produce a hybrid model capable of handling mixed data types. Further, a mixture of the hybrid models is considered to provide clustering capabilities within the context of mixed binary, ordinal and nominal response data. The proposed model is termed a mixture of factor analyzers for mixed data (MFA-MD). The MFA-MD model is applied to the survey data to cluster the Agincourt households into homogeneous groups. The model is estimated within the Bayesian paradigm, using a Markov chain Monte Carlo algorithm. Intuitive groupings result, providing insight into the different socio-economic strata within the Agincourt region. Comment: Published at http://dx.doi.org/10.1214/14-AOAS726 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
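
    As a toy illustration of the shared latent-variable structure the hybrid model exploits, the sketch below generates a binary, an ordinal and a nominal item from one latent score per household. It is our own simplified reading, not the MFA-MD model or its MCMC estimation (in particular, the mixture over factor analyzers that produces the clustering is omitted), and every parameter value is made up.

```python
# Toy generative sketch of the shared latent-variable idea behind MFA-MD:
# one latent score per household drives binary, ordinal and nominal items.
# Illustrative only; not the authors' model or estimation procedure.
import numpy as np

rng = np.random.default_rng(1)
n = 6
z = rng.normal(size=n)                      # latent socio-economic score

# Binary item: probit-style threshold on a noisy latent response.
y_bin = (0.8 * z + rng.normal(size=n) > 0.0).astype(int)

# Ordinal item: the same construction with several ordered cut-points.
cuts = np.array([-0.7, 0.3, 1.2])
y_ord = np.searchsorted(cuts, 1.1 * z + rng.normal(size=n))

# Nominal item: multinomial-probit style, one latent utility per category,
# each loading differently on z; the observed category is the argmax.
loadings = np.array([0.0, 0.9, -0.6])
util = loadings[None, :] * z[:, None] + rng.normal(size=(n, 3))
y_nom = util.argmax(axis=1)

print(np.c_[np.round(z, 2), y_bin, y_ord, y_nom])
```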

    Semi-parametric analysis of multi-rater data

    Datasets that are subjectively labeled by a number of experts are becoming more common in tasks such as biological text annotation, where class definitions are necessarily somewhat subjective. Standard classification and regression models are not suited to multiple labels, so a pre-processing step (normally assigning the majority class) is typically performed. We propose Bayesian models for classification and ordinal regression that naturally incorporate multiple expert opinions in defining predictive distributions. The models make use of Gaussian process priors, resulting in great flexibility and particular suitability to text-based problems, where the number of covariates can be far greater than the number of data instances. We show that using all labels rather than just the majority improves performance on a recent biological dataset.
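
    The key modelling idea, keeping every expert label in the likelihood of a shared latent function rather than collapsing to a majority vote, can be sketched as below. This is an illustration under our own assumptions, not the authors' models or inference procedure: the kernel, parameters and data are made up, and only a log joint density is shown.

```python
# Sketch: every expert label enters the likelihood of a shared latent
# function f with a GP prior, instead of a majority-vote pre-processing step.
import numpy as np

def rbf_kernel(X, lengthscale=1.0, var=1.0, jitter=1e-6):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / lengthscale**2) + jitter * np.eye(len(X))

def log_joint(f, X, labels):
    """log p(f) + sum over raters and instances of log p(y_ri | f_i).

    labels : 0/1 expert labels, shape (n_raters, n_instances);
             np.nan where a rater did not label an instance.
    """
    K = rbf_kernel(X)
    log_prior = -0.5 * f @ np.linalg.solve(K, f)          # up to a constant
    p = 1.0 / (1.0 + np.exp(-f))                          # Bernoulli link
    mask = ~np.isnan(labels)
    y = np.where(mask, labels, 0.0)
    log_lik = np.where(mask, y * np.log(p) + (1 - y) * np.log1p(-p), 0.0).sum()
    return log_prior + log_lik

# Toy usage: 4 instances, 3 raters (one missing label).
X = np.array([[0.0], [0.5], [1.0], [2.0]])
labels = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [0, 0, np.nan, 1]], dtype=float)
print(log_joint(np.zeros(4), X, labels))
```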

    Using ordinal logistic regression to evaluate the performance of laser-Doppler predictions of burn-healing time

    Background: Laser-Doppler imaging (LDI) of cutaneous blood flow is beginning to be used by burn surgeons to predict the healing time of burn wounds; the predicted healing time determines whether a wound is treated with dressings or surgery. In this paper, we present a statistical analysis of the performance of the technique. Methods: We used data from a study carried out by five burn centers: LDI was done once between days 2 and 5 post burn, and healing was assessed at both 14 days and 21 days post burn. Random-effects ordinal logistic regression and other models, such as the continuation-ratio model, were used to model healing time as a function of the LDI data and of demographic and wound-history variables. Statistical methods were also used to study the false-color palette, which enables the laser-Doppler imager to be used by clinicians as a decision-support tool. Results: Over 90% of diagnoses were correct. Related questions addressed were which blood-flow summary statistic performed best, and whether, given the blood-flow measurements, demographic and observational variables (age, sex, race, % total body surface area burned (%TBSA), site and cause of burn, day of LDI scan, burn center) had any additional predictive power. Mean laser-Doppler flux over a wound area was the best statistic, and, given the same mean flux, women recover slightly more slowly than men. Further, the likely degradation in predictive performance on moving to a patient group with larger %TBSA than those in the data sample was studied and shown to be small. Conclusion: Modeling healing time is a complex statistical problem, with random effects due to multiple burn areas per individual, and censoring caused by patients missing hospital visits and undergoing surgery. This analysis applies state-of-the-art statistical methods such as the bootstrap and permutation tests to a medical problem of topical interest. New medical findings are that age and %TBSA are not important predictors of healing time when the LDI results are known, whereas gender does influence recovery time, even when blood flow is controlled for. Regarding the palette, an optimum three-color palette can be chosen 'automatically', but the optimum choice of a five-color palette cannot be made solely by optimizing the percentage of correct diagnoses.
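
    For readers who want to try a comparable analysis, the sketch below fits a plain proportional-odds (ordinal logistic) regression to synthetic data standing in for mean LDI flux, using the OrderedModel class from statsmodels. It is a minimal fixed-effects illustration, not the paper's random-effects or continuation-ratio models, and the data-generating values are invented.

```python
# Minimal proportional-odds (ordinal logistic) regression on synthetic data
# standing in for mean LDI flux; a plain fixed-effects illustration only.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(42)
n = 300
flux = rng.gamma(shape=3.0, scale=100.0, size=n)     # fake mean flux values
latent = 0.01 * flux + rng.logistic(size=n)          # higher flux -> faster healing
healing = pd.Series(pd.cut(latent, bins=[-np.inf, 1.5, 3.0, np.inf],
                           labels=["> 21 days", "14-21 days", "<= 14 days"],
                           ordered=True))

model = OrderedModel(healing, pd.DataFrame({"flux": flux}), distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```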

    Heuristic Voting as Ordinal Dominance Strategies

    Decision making under uncertainty is a key component of many AI settings, and in particular of voting scenarios where strategic agents are trying to reach a joint decision. The common approach to handling uncertainty is to maximize expected utility, which requires a cardinal utility function as well as detailed probabilistic information. However, such probabilities are often not easy to estimate or apply. To this end, we present a framework that allows "shades of gray" of likelihood without probabilities. Specifically, we create a hierarchy of sets of world states based on a prospective poll, with inner sets containing more likely outcomes. This hierarchy of likelihoods allows us to define what we term ordinally-dominated strategies. We use this approach to justify various known voting heuristics as bounded-rational strategies. Comment: This is the full version of paper #6080 accepted to AAAI'1
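
    The sketch below is a toy reading of the general idea: dominance checks carried out within the sets of a nested likelihood hierarchy, innermost (most likely) set first. It is our own simplified illustration, not necessarily the paper's exact definitions, and the utilities, states and tie-breaking rule are made up.

```python
# Toy illustration of dominance checks over a nested hierarchy of state sets
# (inner sets = more likely states). Simplified reading; utilities are made up.
from typing import Callable, Iterable, List

def dominates(a: str, b: str, states: Iterable[str],
              utility: Callable[[str, str], float]) -> bool:
    """a weakly beats b in every state of `states`, strictly in at least one."""
    us = [(utility(a, s), utility(b, s)) for s in states]
    return all(ua >= ub for ua, ub in us) and any(ua > ub for ua, ub in us)

def undominated(actions: List[str], hierarchy: List[List[str]],
                utility: Callable[[str, str], float]) -> List[str]:
    """Drop actions dominated within the innermost (most likely) set,
    then keep filtering while moving outward through the hierarchy."""
    survivors = list(actions)
    for states in hierarchy:                      # innermost set first
        survivors = [a for a in survivors
                     if not any(dominates(b, a, states, utility)
                                for b in survivors if b != a)]
    return survivors

# Toy usage: two candidate "votes" evaluated against a poll-based hierarchy.
u = {("vote_x", "x_wins"): 1.0, ("vote_x", "y_wins"): 0.0, ("vote_x", "tie"): 0.5,
     ("vote_y", "x_wins"): 0.0, ("vote_y", "y_wins"): 1.0, ("vote_y", "tie"): 0.4}
utility = lambda a, s: u[(a, s)]
hierarchy = [["x_wins", "tie"], ["x_wins", "tie", "y_wins"]]
print(undominated(["vote_x", "vote_y"], hierarchy, utility))
```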

    Bayesian Item Response Modeling in R with brms and Stan

    Item Response Theory (IRT) is widely applied in the human sciences to model persons' responses on a set of items measuring one or more latent constructs. While several R packages have been developed that implement IRT models, each tends to be restricted to a prespecified class of models. Further, most implementations are frequentist, while the availability of Bayesian methods remains comparably limited. We demonstrate how to use the R package brms together with the probabilistic programming language Stan to specify and fit a wide range of Bayesian IRT models using flexible and intuitive multilevel formula syntax. Further, item and person parameters can be related in either a linear or a non-linear manner. Various distributions for categorical, ordinal, and continuous responses are supported. Users may even define their own custom response distribution for use in the presented framework. Common IRT model classes that can be specified natively in the presented framework include 1PL and 2PL logistic models, optionally also containing guessing parameters; graded response and partial credit ordinal models; and drift diffusion models of response times coupled with binary decisions. Posterior distributions of item and person parameters can be conveniently extracted and post-processed. Model fit can be evaluated and compared using Bayes factors and efficient cross-validation procedures. Comment: 54 pages, 16 figures, 3 tables
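
    As a reminder of the model classes mentioned above, the sketch below computes 1PL and 2PL response probabilities, optionally with a guessing parameter. It illustrates the models only, not the brms or Stan interface, and the person and item parameter values are made up.

```python
# Tiny sketch of the 1PL and 2PL item response functions; model classes only,
# not the brms/Stan interface. Parameter values below are made up.
import numpy as np

def irt_prob(theta, beta, alpha=1.0, gamma=0.0):
    """P(correct) for person ability theta and item difficulty beta.

    alpha : item discrimination (alpha = 1 gives the 1PL / Rasch model)
    gamma : guessing parameter (gamma > 0 adds a lower asymptote)
    """
    logistic = 1.0 / (1.0 + np.exp(-alpha * (theta - beta)))
    return gamma + (1.0 - gamma) * logistic

theta = np.array([-1.0, 0.0, 1.5])                      # person abilities
print(irt_prob(theta, beta=0.5))                        # 1PL
print(irt_prob(theta, beta=0.5, alpha=1.7))             # 2PL
print(irt_prob(theta, beta=0.5, alpha=1.7, gamma=0.2))  # with guessing
```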