
    Assortment optimisation under a general discrete choice model: A tight analysis of revenue-ordered assortments

    The assortment problem in revenue management is the problem of deciding which subset of products to offer to consumers in order to maximise revenue. A simple and natural strategy is to select the best assortment out of all those that are constructed by fixing a threshold revenue π and then choosing all products with revenue at least π. This is known as the revenue-ordered assortments strategy. In this paper we study the approximation guarantees provided by revenue-ordered assortments when customers are rational in the following sense: the probability of selecting a specific product from the set being offered cannot increase if the set is enlarged. This rationality assumption, known as regularity, is satisfied by almost all discrete choice models considered in the revenue management and choice theory literature, and in particular by random utility models. The bounds we obtain are tight and improve on recent results in that direction, such as for the Mixed Multinomial Logit model by Rusmevichientong et al. (2014). An appealing feature of our analysis is its simplicity, as it relies only on the regularity condition. We also draw a connection between assortment optimisation and two pricing problems called unit demand envy-free pricing and Stackelberg minimum spanning tree: these problems can be restated as assortment problems under discrete choice models satisfying the regularity condition, and revenue-ordered assortments then correspond to the well-studied uniform pricing heuristic. When specialised to that setting, the general bounds we establish for revenue-ordered assortments match and unify the best known results on uniform pricing
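    As a concrete illustration of the heuristic, the following minimal Python sketch evaluates every revenue-ordered assortment under a multinomial logit (MNL) choice model. The MNL model and all numbers here are hypothetical stand-ins; the paper's analysis applies to any choice model satisfying regularity, not just MNL.

```python
# A minimal sketch of the revenue-ordered assortments heuristic.
# The choice model is a plain MNL with hypothetical utilities.
import math

revenues = [10.0, 8.0, 5.0, 3.0]    # revenue r_i of each product (hypothetical)
utilities = [0.5, 1.0, 1.5, 2.0]    # hypothetical MNL utilities
U0 = 1.0                            # utility of the no-purchase option

def expected_revenue(assortment):
    """Expected revenue of offering `assortment` under the MNL model."""
    denom = math.exp(U0) + sum(math.exp(utilities[i]) for i in assortment)
    return sum(revenues[i] * math.exp(utilities[i]) for i in assortment) / denom

def best_revenue_ordered_assortment():
    """Try every threshold pi drawn from the distinct revenues and keep
    the assortment {i : r_i >= pi} with the highest expected revenue."""
    best_set, best_val = None, float("-inf")
    for pi in sorted(set(revenues)):
        assortment = [i for i, r in enumerate(revenues) if r >= pi]
        val = expected_revenue(assortment)
        if val > best_val:
            best_set, best_val = assortment, val
    return best_set, best_val

print(best_revenue_ordered_assortment())
```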

    Childbearing intentions in a low fertility context: the case of Romania

    This paper applies the Theory of Planned Behaviour (TPB) to identify the predictors of fertility intentions in Romania, a low-fertility country. We analyse how attitudes, subjective norms and perceived behavioural control relate to the intention to have a child among childless individuals and one-child parents. Principal axis factor analysis confirms which items proposed by the Generations and Gender Survey (GGS 2005) act as valid and reliable measures of the suggested theoretical socio-psychological factors. Four parity-specific logistic regression models are applied to evaluate the relationship between the socio-psychological factors and childbearing intentions. Social pressure emerges as the most important aspect of fertility decision-making among childless individuals and one-child parents, and positive attitudes towards childbearing are a strong component in planning for a child. This paper also underlines the importance of region-specific factors when studying childbearing intentions: planning for a second child differs significantly across the development regions, which reflect the cultural and socio-economic divisions of the Romanian territory
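    For readers unfamiliar with the modelling step, here is a minimal sketch of one parity-specific logistic regression of the kind described: the intention to have a child is regressed on the three TPB factor scores. The data are simulated and the coefficients are hypothetical stand-ins, not the actual GGS items or results.

```python
# A minimal sketch: logistic regression of childbearing intention on
# three TPB factor scores, using simulated (hypothetical) data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
attitudes = rng.normal(size=n)   # factor scores, as would come from the
norms = rng.normal(size=n)       # principal axis factor analysis step
control = rng.normal(size=n)
logit = 0.4 * attitudes + 0.9 * norms + 0.3 * control   # invented effects
intention = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = intends a child

X = sm.add_constant(np.column_stack([attitudes, norms, control]))
model = sm.Logit(intention, X).fit(disp=False)
print(model.summary(xname=["const", "attitudes", "norms", "control"]))
```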

    Inferring from an imprecise Plackett–Luce model : application to label ranking

    Learning ranking models is a difficult task in which data may be scarce and cautious predictions desirable. To address these issues, we explore the extension of the popular parametric probabilistic Plackett–Luce model, often used to model rankings, to the imprecise setting where estimated parameters are set-valued. In particular, we study how to achieve cautious or conservative inference with it, and illustrate its application to label ranking, a specific supervised learning task
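    The precise Plackett–Luce model being extended assigns each item a positive strength and builds a ranking's probability stage by stage, as the following minimal sketch shows. The strengths are hypothetical; in the imprecise setting studied in the paper, each strength would instead be set-valued.

```python
# A minimal sketch of the (precise) Plackett-Luce model: the probability
# of a full ranking is the product of successive choice probabilities
# over the items still remaining.
def plackett_luce_prob(ranking, strengths):
    """Probability of observing `ranking` (item ids, best first) under a
    Plackett-Luce model with the given positive strengths."""
    prob, remaining = 1.0, list(ranking)
    for item in ranking:
        prob *= strengths[item] / sum(strengths[j] for j in remaining)
        remaining.remove(item)
    return prob

strengths = {"a": 3.0, "b": 2.0, "c": 1.0}   # hypothetical parameters
print(plackett_luce_prob(["a", "b", "c"], strengths))  # 3/6 * 2/3 = 1/3
```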

    Study protocol: developing a decision system for inclusive housing: applying a systematic, mixed-method quasi-experimental design

    Background: Identifying the housing preferences of people with complex disabilities is a much-needed but under-developed area of practice and scholarship. Despite the recognition that housing is a social determinant of health and quality of life, there is an absence of empirical methodologies that can practically and systematically involve consumers in this complex service delivery and housing design market. A rigorous process for making effective and consistent development decisions is needed to ensure resources are used effectively and the needs of consumers with complex disability are properly met. Methods/Design: This 3-year project aims to identify how the public and private housing market in Australia can better respond to the needs of people with complex disabilities whilst simultaneously achieving key corporate objectives. First, using the Customer Relationship Management framework, qualitative (Nominal Group Technique) and quantitative (Discrete Choice Experiment) methods will be used to quantify the housing preferences of consumers and their carers. A systematic mixed-method, quasi-experimental design will then be used to quantify the development priorities of other key stakeholders (e.g., architects, developers, Government housing services) in relation to inclusive housing for people with complex disabilities. Stakeholders randomly assigned to Group 1 (experimental group) will participate in a series of focus groups employing the Analytical Hierarchical Process (AHP) methodology. Stakeholders randomly assigned to Group 2 (control group) will participate in focus groups employing existing decision-making processes for inclusive housing development (e.g., Risk, Opportunity, Cost, Benefit considerations). Using comparative stakeholder analysis, this research design will enable the AHP methodology (a proposed tool to guide inclusive housing development decisions) to be tested. Discussion: It is anticipated that the findings of this study will enable stakeholders to incorporate consumer housing preferences into commercial decisions. Housing designers and developers will benefit from the creation of a parsimonious set of consumer-led housing preferences by which to make informed investments in future housing and contribute to future housing policy. The research design has not been applied in the Australian research context or elsewhere, and will provide a much-needed blueprint for market investment to develop viable, consumer-directed inclusive housing options for people with complex disability
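    For orientation, the core computation inside an AHP session turns a pairwise-comparison matrix into a priority vector. Below is a minimal sketch using the common geometric-mean approximation of the principal eigenvector; the criteria and judgments are invented for illustration and are not taken from the protocol.

```python
# A minimal sketch of the AHP priority-vector step. Entry [i][j] of the
# hypothetical comparison matrix says how strongly criterion i is
# preferred to criterion j on Saaty's 1-9 scale.
import math

criteria = ["accessibility", "location", "cost"]   # hypothetical criteria
comparisons = [
    [1.0, 3.0, 5.0],   # accessibility vs. each criterion
    [1/3, 1.0, 2.0],   # location
    [1/5, 1/2, 1.0],   # cost
]

def ahp_priorities(matrix):
    """Row geometric means, normalised to sum to 1 (a standard
    approximation of the principal eigenvector)."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

for name, w in zip(criteria, ahp_priorities(comparisons)):
    print(f"{name}: {w:.3f}")
```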

    Temporal processes in prime–mask interaction: Assessing perceptual consequences of masked information

    Visual backward masking is frequently used to study the temporal dynamics of visual perception. These dynamics may include the temporal features of conscious percepts, as suggested, for instance, by the asynchronous-updating model (Neumann, 1982) and perceptual-retouch theory (Bachmann, 1994). These models predict that the perceptual latency of a visual backward mask is shorter than that of a comparable reference stimulus not preceded by a masked stimulus. The prediction has been confirmed by studies using temporal-order judgments: for certain asynchronies between mask and reference stimulus, temporal-order reversals are quite frequent (e.g., Scharlau & Neumann, 2003a). However, it may be argued that these reversals were due to a response bias in favour of the mask rather than to true temporal-perceptual effects. I introduce two measures for assessing latency effects that (1) are not prone to such a response bias, (2) allow the latency gain to be quantified, and (3) extend the perceptual evidence from order reversals to duration/interval perception, that is, demonstrate that the perceived interval between a mask and a reference stimulus may be shortened as well as prolonged by the presence of a masked stimulus. Consequences for theories of visual masking such as the asynchronous-updating, perceptual-retouch, and reentrant models are discussed

    Crises and collective socio-economic phenomena: simple models and challenges

    Financial and economic history is strewn with bubbles and crashes, booms and busts, crises and upheavals of all sorts. Understanding the origin of these events is arguably one of the most important problems in economic theory. In this paper, we review recent efforts to include heterogeneities and interactions in models of decision. We argue that the Random Field Ising model (RFIM) indeed provides a unifying framework to account for many collective socio-economic phenomena that lead to sudden ruptures and crises. We discuss different models that can capture potentially destabilising self-referential feedback loops, induced either by herding, i.e. reference to peers, or trending, i.e. reference to the past, and account for some of the phenomenology missing in the standard models. We discuss some empirically testable predictions of these models, for example robust signatures of RFIM-like herding effects, or the logarithmic decay of spatial correlations of voting patterns. One of the most striking results, inspired by statistical physics methods, is that Adam Smith's invisible hand can fail badly at solving simple coordination problems. We also stress the issue of time-scales, which can be extremely long in some cases and can prevent socially optimal equilibria from being reached. As a theoretical challenge, the study of decision rules that violate so-called "detailed balance" is needed to decide whether conclusions based on current models (which all assume detailed balance) are indeed robust and generic
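    To convey the flavour of the framework, here is a minimal mean-field RFIM sketch: binary agents with random private preferences imitate the average opinion while a common incentive is slowly increased. All parameter values are illustrative; the review's models are richer, but even this toy version can display the abrupt, crisis-like jumps discussed above.

```python
# A minimal mean-field RFIM sketch. Agents hold binary opinions
# s_i = +/-1, feel a private field h_i and a common incentive F, and
# imitate the average opinion with strength J (all values hypothetical).
import random

N, J = 1000, 1.2                                 # agents, imitation strength
h = [random.gauss(0.0, 1.0) for _ in range(N)]   # idiosyncratic fields
s = [-1] * N                                     # everyone starts at "no"

def relax(F):
    """Zero-temperature dynamics: flip agents until no one wants to move."""
    changed = True
    while changed:
        changed = False
        m = sum(s) / N                           # current average opinion
        for i in range(N):
            new = 1 if J * m + h[i] + F > 0 else -1
            if new != s[i]:
                s[i], changed = new, True

for F in [x / 10 for x in range(-20, 21)]:       # slowly sweep the incentive
    relax(F)
    print(f"F={F:+.1f}  average opinion m={sum(s)/N:+.3f}")
```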

    Intuitive geometry and visuospatial working memory in children showing symptoms of nonverbal learning disabilities.

    Visuospatial working memory (VSWM) and intuitive geometry were examined in two groups aged 11-13, one with children displaying symptoms of nonverbal learning disability (NLD; n = 16), and the other, a control group without learning disabilities (n = 16). The two groups were matched for general verbal abilities, age, gender, and socioeconomic level. The children were presented with simple storage and complex-span tasks involving VSWM and with the intuitive geometry task devised by Dehaene, Izard, Pica, and Spelke (2006). Results revealed that the two groups differed in the intuitive geometry task. Differences were particularly evident in Euclidean geometry and in geometrical transformations. Moreover, NLD children underperformed controls to a greater extent on complex-span tasks than on simple storage tasks, and VSWM differences were able to account for group differences in geometry. Finally, a discriminant function analysis confirmed the crucial role of complex-span tasks involving VSWM in distinguishing between the two groups. Results are discussed with reference to the relationship between VSWM and mathematics difficulties in nonverbal learning disabilities

    Rescaling quality of life values from discrete choice experiments for use as QALYs: a cautionary tale

    Background: Researchers are increasingly investigating the potential for ordinal tasks such as ranking and discrete choice experiments to estimate QALY health state values. However, the assumptions of random utility theory, which underpin the statistical models used to provide these estimates, have received insufficient attention. In particular, the assumptions made about the decisions between living states and the death state are not satisfied, at least for some people. Estimated values are likely to be incorrectly anchored with respect to death (zero) in such circumstances. Methods: Data from the Investigating Choice Experiments for the preferences of older people CAPability instrument (ICECAP) valuation exercise were analysed. The values (previously anchored to the worst possible state) were rescaled using an ordinal model proposed previously to estimate QALY-like values. Bootstrapping was conducted to vary artificially the proportion of people who conformed to the conventional random utility model underpinning the analyses. Results: Only 26% of respondents conformed unequivocally to the assumptions of conventional random utility theory. At least 14% of respondents unequivocally violated the assumptions. Varying the relative proportions of conforming respondents in sensitivity analyses led to large changes in the estimated QALY values, particularly for lower-valued states. As a result these values could be either positive (considered to be better than death) or negative (considered to be worse than death). Conclusion: Use of a statistical model such as conditional (multinomial) regression to anchor quality of life values from ordinal data to death is inappropriate in the presence of respondents who do not conform to the assumptions of conventional random utility theory. This is clearest when estimating values for that group of respondents observed in valuation samples who refuse to consider any living state to be worse than death: in such circumstances the model cannot be estimated. Only a valuation task requiring respondents to make choices in which both length and quality of life vary can produce estimates that properly reflect the preferences of all respondents
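    The anchoring step at issue can be stated in a few lines: values estimated on an arbitrary latent scale are shifted and rescaled so that death sits at zero and the best state at one. The following minimal sketch, with hypothetical numbers, shows how states below the estimated position of death become negative, and why that estimate drives the sign flips seen in the sensitivity analyses.

```python
# A minimal sketch of the anchoring arithmetic: latent values (here on a
# worst-state = 0, best-state = 1 scale) are rescaled so that death = 0
# and the best state = 1, as QALYs require. All numbers are hypothetical.
latent_values = {"best": 1.00, "mild": 0.70, "severe": 0.15, "worst": 0.00}
death_latent = 0.25   # hypothetical estimated position of death

def to_qaly_scale(v, d):
    """Anchor so death maps to 0 and the best state to 1."""
    return (v - d) / (1.0 - d)

for state, v in latent_values.items():
    print(f"{state}: {to_qaly_scale(v, death_latent):+.3f}")
# States below the estimated death point come out negative ("worse than
# death"); shifting `death_latent` flips those signs, which is the
# sensitivity the bootstrap analyses expose.
```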