Parameterized complexity results for agenda safety in judgment aggregation
Abstract Many problems arising in computational social choice are of high computational complexity, and some are located at higher levels of the Polynomial Hierarchy. We argue that a parameterized complexity analysis provides significant insight into the factors contributing to the complexity of these problems, and can lead to practically useful algorithms. As a case study, we consider the problem of agenda safety in judgment aggregation, consider several natural parameters for this problem, and determine the parameterized complexity for each of these. Our analysis is aimed at obtaining fixed-parameter tractable (fpt) algorithms that use a small number of calls to a SAT solver. We hope that this work may initiate a structured parameterized complexity investigation of problems arising in the field of computational social choice that are located at higher levels of the Polynomial Hierarchy. A by-product of our case study is the development of complexity-theoretic techniques to provide lower bounds on the number of SAT calls needed by fpt-algorithms to solve certain problems.
The Complexity Landscape of Outcome Determination in Judgment Aggregation
We provide a comprehensive analysis of the computational complexity of the outcome determination problem for the most important aggregation rules proposed in the literature on logic-based judgment aggregation. Judgment aggregation is a powerful and flexible framework for studying problems of collective decision making that has attracted interest in a range of disciplines, including Legal Theory, Philosophy, Economics, Political Science, and Artificial Intelligence. The problem of computing the outcome for a given list of individual judgments to be aggregated into a single collective judgment is the most fundamental algorithmic challenge arising in this context. Our analysis applies to several different variants of the basic framework of judgment aggregation that have been discussed in the literature, as well as to a new framework that encompasses all existing such frameworks in terms of expressive power and representational succinctness.
Toward a Theory of Public Entrepreneurship
This paper explores innovation, experimentation, and creativity in the public domain and in the public interest. Researchers in various disciplines have studied public entrepreneurship, but there is little work in management and economics on the nature, incentives, constraints and boundaries of entrepreneurship directed to public ends. We identify a framework for analyzing public entrepreneurship and its relationship to private entrepreneurial behavior. We submit that public and private entrepreneurship share essential features but differ critically regarding the definition and measurement of objectives, the nature of the selection environment, and the opportunities for rent-seeking. We describe four levels of analysis for studying public entrepreneurship, provide examples, and suggest new research directions.
Keywords: entrepreneurship, public administration, political economy, institutions, transaction costs
Large Language Models as Subpopulation Representative Models: A Review
Of the many commercial and scientific opportunities provided by large language models (LLMs; including OpenAI's ChatGPT, Meta's LLaMA, and Anthropic's Claude), one of the more intriguing applications has been the simulation of human behavior and opinion. LLMs have been used to generate human simulacra to serve as experimental participants, survey respondents, or other independent agents, with outcomes that often closely parallel the observed behavior of their genuine human counterparts. Here, we specifically consider the feasibility of using LLMs to estimate subpopulation representative models (SRMs). SRMs could provide an alternate or complementary way to measure public opinion among demographic, geographic, or political segments of the population. However, the introduction of new technology to the socio-technical infrastructure does not come without risk. We provide an overview of behavior elicitation techniques for LLMs, and a survey of existing SRM implementations. We offer frameworks for the analysis, development, and practical implementation of LLMs as SRMs, consider potential risks, and suggest directions for future work.
Epistemically Detrimental Dissent in Climate Science
Dissent, criticism and controversy are integral to scientific practice, especially when we consider science as a communal enterprise. However, not every form of dissent is acceptable in science. The aim of this paper is to characterize what constitutes the kind of dissent that impedes the growth of knowledge, in other words epistemically detrimental dissent (EDD), and apply that analysis to climate science. I argue that the intrusion of non-epistemic considerations is inescapable in climate science and other policy-relevant sciences. As such there is a need to look beyond the presence of non-epistemic factors (such as non-epistemic risks and economic interests) when evaluating cases of dissent in policy-relevant science. I make the claim that the stable factors in the production of EDD are the presence of skewed research and the effective dissemination of this ‘research’ to the public; the intrusion of non-epistemic values and considerations is only a contingent enabling factor.
Probabilistic Decision Tools for Determining Impacts of Agricultural Development Policy on Household Nutrition
Governments around the world have agreed to end hunger and food insecurity and to improve global nutrition, largely through changes to agriculture and food systems. However, they are faced with considerable uncertainty when making policy decisions, since any agricultural changes will influence social and biophysical systems, which could yield either positive or negative nutrition outcomes. We outline a holistic probability modeling approach with Bayesian Network (BN) models for nutritional impacts resulting from agricultural development policy. The approach includes the elicitation of expert knowledge for impact model development, including sensitivity analysis and value of information calculations. It aims at a generalizable methodology that can be applied in a wide range of contexts. To showcase this approach, we develop an impact model of Vision 2040, Uganda's development strategy, which, among other objectives, seeks to transform the country's agricultural landscape from traditional systems to large-scale commercial agriculture. Model results suggest that Vision 2040 is likely to have negative outcomes for the rural livelihoods it intends to support; it may have no appreciable influence on household hunger but, by influencing preferences for and access to quality nutritional foods, may increase the prevalence of micronutrient deficiency. The results highlight the trade-offs that must be negotiated when making decisions regarding agriculture for nutrition, and the capacity of BNs to make these trade-offs explicit. The work illustrates the value of BNs for supporting evidence-based agricultural development decisions.
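The probabilistic approach the abstract describes can be illustrated with a minimal Monte Carlo sketch in plain Python (no BN library): uncertain inputs are drawn from expert-elicited ranges and propagated to an outcome distribution. The variable names, ranges, and the 0.5 adequacy threshold below are hypothetical stand-ins, not values from the Uganda model.

```python
import random

random.seed(42)

def simulate_policy_impact(n_runs=10_000):
    """Monte Carlo sketch of a probabilistic policy-impact model:
    draw uncertain inputs, propagate them to an outcome, and report
    distribution summaries of the kind used for trade-off analysis."""
    outcomes = []
    for _ in range(n_runs):
        # Hypothetical expert-elicited ranges, not values from the Uganda model
        income_change = random.uniform(-0.2, 0.4)  # relative change in household income
        food_access = random.uniform(0.3, 0.9)     # access to diverse foods (0..1 scale)
        diet_quality = food_access * (1.0 + income_change)
        outcomes.append(diet_quality)
    outcomes.sort()
    return {
        "p5": outcomes[int(0.05 * n_runs)],
        "median": outcomes[n_runs // 2],
        "p95": outcomes[int(0.95 * n_runs)],
        # probability that diet quality falls below a hypothetical adequacy threshold
        "p_inadequate": sum(o < 0.5 for o in outcomes) / n_runs,
    }

summary = simulate_policy_impact()
```

In a full BN model, the independent uniform draws would be replaced by conditional probability tables linking policy levers to social and biophysical variables, and value-of-information calculations would identify which uncertain inputs most affect the decision.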
Choice set formation for outdoor destinations: the role of motivations and preference discrimination in site selection for the management of public expenditures on protected areas
Effective public expenditure currently dominates the management focus of many protected areas. This calls for explicit modeling of constraints and motivations that, respectively, obstruct and stimulate visits to selected outdoor destinations. Choice set formation is the result of screening and/or inclusion of specific sites (alternatives) to form the set of sites considered in real choices. Evidence shows that the omission of a structural representation of choice set formation is harmful to econometric inference. Yet, the literature has largely ignored the underlying behavioral phenomenon. We show, using a discrete choice experiment involving selection among seven recreational sites in an Italian national park, that choice set formation is behaviorally relevant, even after controlling for preference discrimination. Motivations (why visit?) are important determinants of preliminary site screening for choice set inclusion, as well as site selection, justifying the additional value of such a modeling extension.
Process-oriented risk assessment methodology for manufacturing process evaluation
A process-oriented risk assessment methodology is proposed. Risks involved in a process and the corresponding risk factors are identified through an objectives-oriented risk identification approach and evaluated qualitatively in the Process FMEA (PFMEA). The critical risks of the PFMEA are then incorporated into the process model for further quantitative analysis using simulation techniques. Using the proposed methodology as a decision-making tool, alternative scenarios are developed and evaluated against the developed risk measures. The risk measure values produced by the simulation are normalized and aggregated to form a global risk indicator used to rank the alternative processes by desirability. The methodology is illustrated with a case study drawn from parts manufacturing but is applicable to a wide range of other processes.
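The normalization-and-aggregation step can be sketched as follows. The risk measures, scenario values, and weights are hypothetical, and min-max normalization with a weighted sum stands in for whatever specific scheme the methodology prescribes.

```python
def global_risk_indicator(scenarios, weights):
    """Min-max normalize each risk measure across scenarios, then compute a
    weighted aggregate per scenario (lower indicator = more desirable).

    scenarios: dict scenario-name -> dict of risk-measure values
    weights:   dict risk-measure -> relative importance (assumed to sum to 1)
    """
    measures = list(weights)
    lo = {m: min(s[m] for s in scenarios.values()) for m in measures}
    hi = {m: max(s[m] for s in scenarios.values()) for m in measures}

    def norm(m, v):
        # Guard against a measure that is identical across all scenarios
        return 0.0 if hi[m] == lo[m] else (v - lo[m]) / (hi[m] - lo[m])

    return {
        name: sum(weights[m] * norm(m, vals[m]) for m in measures)
        for name, vals in scenarios.items()
    }

# Hypothetical simulation outputs for three alternative process scenarios
scenarios = {
    "A": {"delay": 12.0, "defect_rate": 0.04, "cost_overrun": 0.10},
    "B": {"delay": 8.0,  "defect_rate": 0.06, "cost_overrun": 0.05},
    "C": {"delay": 15.0, "defect_rate": 0.02, "cost_overrun": 0.20},
}
weights = {"delay": 0.4, "defect_rate": 0.4, "cost_overrun": 0.2}
ranking = sorted(global_risk_indicator(scenarios, weights).items(),
                 key=lambda kv: kv[1])  # best (lowest-risk) scenario first
```

Because min-max normalization is relative to the scenario set, adding or removing a scenario can change the ranking; a fixed reference scale would avoid that sensitivity.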
PEER Testbed Study on a Laboratory Building: Exercising Seismic Performance Assessment
From 2002 to 2004 (years five and six of a ten-year funding cycle), the PEER Center organized the majority of its research around six testbeds. Two buildings, two bridges, a campus, and a transportation network were selected as case studies to “exercise” the PEER performance-based earthquake engineering methodology. All projects involved interdisciplinary teams of researchers, each producing data to be used by other colleagues in their research. The testbeds demonstrated that it is possible to create the data necessary to populate the PEER performance-based framing equation, linking the hazard analysis, the structural analysis, the development of damage measures, loss analysis, and decision variables.
This report describes one of the building testbeds—the UC Science Building. The project was chosen to focus attention on the consequences of losses of laboratory contents, particularly downtime. The UC Science testbed evaluated the earthquake hazard and the structural performance of a well-designed, recently built reinforced concrete laboratory building using the OpenSees platform. Researchers conducted shake table tests on samples of critical laboratory contents in order to develop fragility curves used to analyze the probability of losses due to equipment failure. The UC Science testbed undertook an extreme case in performance assessment—linking the performance of contents to operational failure. The research shows the interdependence of building structure, systems, and contents in performance assessment, and highlights where further research is needed.
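A fragility curve of the kind fitted from such shake-table data is conventionally modeled as a lognormal CDF in the ground-motion intensity measure. The sketch below assumes that standard form; the median and dispersion values are hypothetical, not the testbed's fitted parameters.

```python
from math import erf, log, sqrt

def fragility(im, median, beta):
    """Lognormal fragility curve: probability of reaching or exceeding a
    damage state given intensity measure `im` (e.g. peak floor acceleration
    in g). `median` is the IM at which failure probability is 50%; `beta`
    is the lognormal dispersion, both normally fit to test data."""
    if im <= 0:
        return 0.0
    # Standard-normal CDF of the log-standardized intensity, via erf
    z = log(im / median) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical parameters for a piece of bench-top laboratory equipment
p_fail = fragility(im=0.8, median=0.8, beta=0.5)  # at the median IM, P = 0.5
```

In a loss analysis, curves like this are evaluated against the hazard and structural-response results to estimate the probability of contents failure, and hence downtime, at each hazard level.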
The Executive Summary provides a short description of the overall testbed research program, while the main body of the report includes summary chapters from individual researchers. More extensive research reports are cited in the reference section of each chapter.