
    Perspectives on Preference Aggregation

    For centuries, the mathematical aggregation of preferences by groups, organizations or society has received keen interdisciplinary attention. Extensive 20th century theoretical work in Economics and Political Science highlighted that competing notions of “rational social choice” intrinsically contradict each other. This led some researchers to consider coherent “democratic decision making” a mathematical impossibility. Recent empirical work in Psychology qualifies that view. This nontechnical review sketches a quantitative research paradigm for the behavioral investigation of mathematical social choice rules on real ballot, experimental choice, or attitudinal survey data. The paper poses a series of open questions. Classical work sometimes makes assumptions about voter preferences that are descriptively invalid. Do such technical assumptions lead the theory astray? How can empirical work inform the formulation of meaningful theoretical primitives? Classical “impossibility results” leverage the fact that certain desirable mathematical properties logically cannot hold universally in all conceivable electorates. Do these properties nonetheless hold in empirical distributions of preferences? Will future behavioral analyses continue to contradict the expectations of established theory? Under what conditions, and why, do competing consensus methods yield identical outcomes?
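    The core tension can be made concrete with the classic Condorcet paradox. The sketch below is illustrative (not code from the paper): three voters with individually rational, transitive rankings nonetheless produce a cyclic majority preference.

```python
# Three voters with transitive individual preferences over A, B, C.
ballots = [("A", "B", "C"),  # voter 1: A > B > C
           ("B", "C", "A"),  # voter 2: B > C > A
           ("C", "A", "B")]  # voter 3: C > A > B

def majority_prefers(x, y, ballots):
    """True if a strict majority ranks x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

# Pairwise majority relation: A beats B, B beats C, yet C beats A.
print(majority_prefers("A", "B", ballots))  # True
print(majority_prefers("B", "C", ballots))  # True
print(majority_prefers("C", "A", ballots))  # True -> majority cycle
```

    No individual voter is intransitive; the cycle is a property of the aggregate alone, which is exactly the phenomenon the impossibility literature builds on.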


    Behavioral Social Choice: Probabilistic Models, Statistical Inference, and Applications

    Behavioral Social Choice looks at the probabilistic foundations of collective decision-making rules. The authors challenge much of the existing theoretical wisdom about social choice processes and seek to restore faith in the possibility of democratic decision-making. In particular, they argue that worries about the supposed prevalence of majority rule cycles, which would preclude groups from reaching a final decision about which alternative they prefer, have been greatly overstated. In practice, majority rule can be expected to work well in most real-world settings. Furthermore, if there is a problem, they show that it is more likely to be one of sample estimates missing the majority winner in a close contest (e.g., Bush-Gore) than one of cycling. The authors also provide new mathematical tools to estimate the prevalence of cycles as a function of sample size, and insights into how alternative model specifications can change our estimates of social orderings.
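    The sampling concern can be illustrated with a toy Monte Carlo sketch (the electorate split below is hypothetical, not taken from the book): in a close two-candidate contest, even a fairly large poll frequently misses the true majority winner.

```python
import random

random.seed(0)

# Hypothetical close electorate (Bush-Gore style):
# candidate A is preferred by 50.5% of voters.
p_A = 0.505
population_winner = "A"

def sample_winner(n):
    """Majority winner in a random sample of n ballots."""
    votes_A = sum(random.random() < p_A for _ in range(n))
    return "A" if votes_A > n / 2 else "B"

# Estimate how often a poll of size n misses the true majority winner.
n, trials = 1000, 2000
misses = sum(sample_winner(n) != population_winner for _ in range(trials))
print(f"miss rate at n={n}: {misses / trials:.2f}")
```

    With a 50.5/49.5 split, a 1,000-ballot sample picks the wrong majority winner well over a third of the time, a far more common failure than a genuine cycle.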

    Beyond Statistical Similarity: Rethinking Metrics for Deep Generative Models in Engineering Design

    Deep generative models such as Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), Diffusion Models, and Transformers have shown great promise in a variety of applications, including image and speech synthesis, natural language processing, and drug discovery. However, when applied to engineering design problems, evaluating the performance of these models can be challenging, as traditional statistical metrics based on likelihood may not fully capture the requirements of engineering applications. This paper doubles as a review and practical guide to evaluation metrics for deep generative models (DGMs) in engineering design. We first summarize the well-accepted “classic” evaluation metrics for deep generative models grounded in machine learning theory. Using case studies, we then highlight why these metrics seldom translate well to design problems but see frequent use due to the lack of established alternatives. Next, we curate a set of design-specific metrics which have been proposed across different research communities and can be used for evaluating deep generative models. These metrics focus on unique requirements in design and engineering, such as constraint satisfaction, functional performance, novelty, and conditioning. Throughout our discussion, we apply the metrics to models trained on simple-to-visualize two-dimensional example problems. Finally, we evaluate four deep generative models on a bicycle frame design problem and a structural topology generation problem. In particular, we showcase the use of the proposed metrics to quantify performance target achievement, design novelty, and geometric constraints. We publicly release the code for the datasets, models, and metrics used throughout the paper at https://decode.mit.edu/projects/metrics/.
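    As a rough illustration of the kind of design-specific metrics discussed, the sketch below computes constraint satisfaction and novelty directly from samples. The toy constraint (unit disk) and the random "generated" and "training" sets are hypothetical, not the paper's benchmarks:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 2D design space: a design (x, y) is feasible iff it lies inside
# the unit disk (a stand-in for a real engineering constraint).
def satisfies_constraint(designs):
    return np.linalg.norm(designs, axis=1) <= 1.0

generated = rng.uniform(-1.2, 1.2, size=(1000, 2))  # model samples
training = rng.uniform(-1.0, 1.0, size=(500, 2))    # training data

# Constraint satisfaction: fraction of generated designs that are valid.
csr = satisfies_constraint(generated).mean()

# Novelty: distance from each generated design to its nearest
# training design (higher = farther from anything seen in training).
dists = np.linalg.norm(generated[:, None, :] - training[None, :, :], axis=-1)
novelty = dists.min(axis=1).mean()

print(f"constraint satisfaction: {csr:.2f}, mean novelty: {novelty:.3f}")
```

    Unlike likelihood, both quantities are directly interpretable to a designer: one is a validity rate, the other a distance in design space.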

    Testing Transitivity of Preferences on Two-Alternative Forced Choice Data

    As Duncan Luce and other prominent scholars have pointed out on several occasions, testing algebraic models against empirical data raises difficult conceptual, mathematical, and statistical challenges. Empirical data often result from statistical sampling processes, whereas algebraic theories are nonprobabilistic. Many probabilistic specifications lead to statistical boundary problems and are subject to nontrivial order constrained statistical inference. The present paper discusses Luce's challenge for a particularly prominent axiom: transitivity. The axiom of transitivity is a central component in many algebraic theories of preference and choice. We offer the currently most complete solution to the challenge in the case of transitivity of binary preference on the theory side and two-alternative forced choice on the empirical side, explicitly for up to five, and implicitly for up to seven, choice alternatives. We also discuss the relationship between our proposed solution and weak stochastic transitivity. We recommend abandoning the latter as a model of transitive individual preferences.
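    Weak stochastic transitivity, the property the authors advise against using as a model of transitive preference, is easy to state operationally. A minimal sketch with hypothetical choice proportions from a two-alternative forced choice task:

```python
# Weak stochastic transitivity (WST): if P(a chosen over b) >= 1/2 and
# P(b chosen over c) >= 1/2, then P(a chosen over c) >= 1/2.
# Hypothetical binary choice proportions (not empirical data):
p = {("a", "b"): 0.65, ("b", "c"): 0.60, ("a", "c"): 0.45}

def violates_wst(p, a, b, c):
    """True if the proportions for (a, b, c) violate WST."""
    return p[(a, b)] >= 0.5 and p[(b, c)] >= 0.5 and p[(a, c)] < 0.5

print(violates_wst(p, "a", "b", "c"))  # True: this pattern violates WST
```

    Note that WST is a constraint on aggregate choice proportions; as the abstract argues, violating it need not imply that any individual preference state is intransitive.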

    Learning from Invalid Data: On Constraint Satisfaction in Generative Models

    Generative models have demonstrated impressive results in vision, language, and speech. However, even with massive datasets, they struggle with precision, generating physically invalid or factually incorrect data. This is particularly problematic when the generated data must satisfy constraints, for example, to meet product specifications in engineering design or to adhere to the laws of physics in a natural scene. To improve precision while preserving diversity and fidelity, we propose a novel training mechanism that leverages datasets of constraint-violating data points, which we consider invalid. Our approach minimizes the divergence between the generative distribution and the valid prior while maximizing the divergence with the invalid distribution. We demonstrate how generative models like GANs and DDPMs that we augment to train with invalid data vastly outperform their standard counterparts that train solely on valid data points. For example, our training procedure generates up to 98% fewer invalid samples on 2D densities, improves connectivity and stability four-fold on a stacking block problem, and improves constraint satisfaction by 15% on a structural topology optimization benchmark in engineering design. We also analyze how the quality of the invalid data affects the learning procedure and the generalization properties of models. Finally, we demonstrate significant improvements in sample efficiency, showing that a tenfold increase in valid samples leads to a negligible difference in constraint satisfaction, while less than 10% invalid samples lead to a tenfold improvement. Our proposed mechanism offers a promising solution for improving precision in generative models while preserving diversity and fidelity, particularly in domains where constraint satisfaction is critical and data is limited, such as engineering design, robotics, and medicine.
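    The training idea can be caricatured in one dimension. The sketch below is a loose toy analogue of the described objective, assuming a categorical model, a hand-picked weight lam, and synthetic valid/invalid bins (none of these come from the paper): gradient ascent increases likelihood on valid data while decreasing it on invalid data, driving probability mass off the constraint-violating region.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D setup: the model is a categorical distribution over 10 bins.
# Valid data occupy bins 0-6; invalid (constraint-violating) data
# occupy bins 7-9.
valid = rng.integers(0, 7, size=500)
invalid = rng.integers(7, 10, size=500)

logits = np.zeros(10)
lam, lr = 0.5, 0.1  # hypothetical invalid-data weight and step size

for _ in range(200):
    probs = np.exp(logits) / np.exp(logits).sum()
    # Gradient of the objective  E_valid[log p] - lam * E_invalid[log p]
    # with respect to the softmax logits (empirical freq minus model prob).
    grad_valid = np.bincount(valid, minlength=10) / len(valid) - probs
    grad_invalid = np.bincount(invalid, minlength=10) / len(invalid) - probs
    logits += lr * (grad_valid - lam * grad_invalid)

probs = np.exp(logits) / np.exp(logits).sum()
print(f"mass on invalid bins: {probs[7:].sum():.3f}")  # pushed toward 0
```

    With lam = 0, this reduces to ordinary maximum likelihood on valid data; the lam > 0 term is what actively repels the model from the invalid region.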

    Economic irrationality is optimal during noisy decision making

    According to normative theories, reward-maximizing agents should have consistent preferences. Thus, when faced with alternatives A, B, and C, an individual preferring A to B and B to C should prefer A to C. However, it has been widely argued that humans can incur losses by violating this axiom of transitivity, despite strong evolutionary pressure for reward-maximizing choices. Here, adopting a biologically plausible computational framework, we show that intransitive (and thus economically irrational) choices paradoxically improve accuracy (and subsequent economic rewards) when decision formation is corrupted by internal neural noise. Over three experiments, we show that humans accumulate evidence over time using a “selective integration” policy that discards information about alternatives with momentarily lower value. This policy predicts violations of the axiom of transitivity when three equally valued alternatives differ circularly in their number of winning samples. We confirm this prediction in a fourth experiment reporting significant violations of weak stochastic transitivity in human observers. Crucially, we show that relying on selective integration protects choices against “late” noise that otherwise corrupts decision formation beyond the sensory stage. Indeed, we report that individuals with higher late noise relied more strongly on selective integration. These findings suggest that violations of rational choice theory reflect adaptive computations that have evolved in response to irreducible noise during neural information processing.
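    The selective-integration policy and its predicted intransitivity can be sketched in a few lines. The weighting scheme and sample values below are illustrative assumptions, not the authors' fitted model: because the three alternatives have equal total value but differ circularly in how many samples each wins, discounting the momentarily lower sample produces a choice cycle.

```python
# Selective integration in pairwise choice: at each time step the
# momentarily lower-valued sample is down-weighted by w < 1 before
# being added to its alternative's accumulator.

def selective_choice(x, y, w=0.2):
    """Return True if alternative x is chosen over y."""
    acc_x = acc_y = 0.0
    for sx, sy in zip(x, y):
        if sx >= sy:
            acc_x += sx
            acc_y += w * sy   # momentary loser is discounted
        else:
            acc_x += w * sx
            acc_y += sy
    return acc_x > acc_y

# Three alternatives with equal total value (13) that differ circularly
# in how many samples each wins against the next.
A, B, C = [6, 6, 1], [5, 5, 3], [7, 4, 2]

print(selective_choice(A, B))  # True:  A > B
print(selective_choice(B, C))  # True:  B > C
print(selective_choice(C, A))  # True:  C > A  -> intransitive cycle
```

    With full integration (w = 1) the equal totals leave the chooser indifferent on every pair; the cycle appears only once the lower momentary sample is discounted.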