
    Complexity of Terminating Preference Elicitation

    Complexity theory is a useful tool for studying computational issues surrounding the elicitation of preferences, as well as the strategic manipulation of elections that aggregate the preferences of multiple agents. We study here the complexity of determining when we can terminate eliciting preferences, and prove that the complexity depends on the elicitation strategy. We show, for instance, that it may be better from a computational perspective to elicit all preferences from one agent at a time than to elicit individual preferences from multiple agents. We also study the connection between the strategic manipulation of an election and preference elicitation. We show that what we can manipulate affects the computational complexity of manipulation. In particular, we prove that there are voting rules which are easy to manipulate if we can change all of an agent's vote, but computationally intractable if we can change only some of the agent's preferences. This suggests that, as with preference elicitation, a fine-grained view of manipulation may be informative. Finally, we study the connection between predicting the winner of an election and preference elicitation. Based on this connection, we identify a voting rule for which it is computationally difficult to compute the probability of a candidate winning given a probability distribution over the votes. Comment: 7th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2008).
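
    As a small illustration of when elicitation can stop (a sketch of the general idea, not the paper's construction), consider the plurality rule with unweighted votes: elicitation can terminate as soon as the leader's margin over every rival exceeds the number of votes not yet elicited. The function and candidate names below are illustrative.

```python
# Minimal sketch: deciding whether preference elicitation can terminate early
# under plurality with unweighted votes.  Once no remaining vote can change the
# winner, elicitation may stop.

from collections import Counter

def can_terminate_plurality(elicited_top_choices, num_unelicited, candidates):
    """elicited_top_choices: each already-elicited agent's favourite candidate."""
    scores = Counter({c: 0 for c in candidates})
    scores.update(elicited_top_choices)
    leader, leader_score = scores.most_common(1)[0]
    # In the worst case every remaining agent votes for the same rival.
    for c in candidates:
        if c != leader and scores[c] + num_unelicited >= leader_score:
            return False, None   # the winner is still undetermined
    return True, leader          # no remaining vote can change the winner

# Example: 6 of 8 votes elicited; "a" already has an unassailable lead.
print(can_terminate_plurality(["a", "a", "a", "b", "a", "c"],
                              num_unelicited=2, candidates=["a", "b", "c"]))
```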

    Where are the really hard manipulation problems? The phase transition in manipulating the veto rule

    Voting is a simple mechanism to aggregate the preferences of agents. Many voting rules have been shown to be NP-hard to manipulate. However, a number of recent theoretical results suggest that this complexity may arise only in the worst case, since manipulation is often easy in practice. In this paper, we show that empirical studies are useful in improving our understanding of this issue. We demonstrate that there is a smooth transition in the probability that a coalition can elect a desired candidate using the veto rule as the size of the manipulating coalition increases. We show that a rescaled probability curve displays a simple and universal form independent of the size of the problem. We argue that manipulation of the veto rule is asymptotically easy for many independent and identically distributed votes, even when the coalition of manipulators is critical in size. Based on this argument, we identify a situation in which manipulation is computationally hard: when votes are highly correlated and the election is "hung". We show, however, that even a single uncorrelated voter is enough to make manipulation easy again. Comment: Proceedings of the Twenty-first International Joint Conference on Artificial Intelligence (IJCAI-09).
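
    A minimal sketch (illustrative names, not the paper's experimental setup) of the standard greedy check for whether an unweighted coalition can make a preferred candidate win under veto: manipulators never veto the preferred candidate and always veto whichever rival currently has the fewest vetoes, which maximises the minimum veto count among the rivals.

```python
# Coalition manipulation under the veto rule with unweighted votes:
# the winner is the candidate with the fewest vetoes.

import heapq

def coalition_can_make_win(p, nonmanip_vetoes, coalition_size, candidates):
    """nonmanip_vetoes: dict candidate -> number of vetoes from the fixed voters."""
    rivals = [(nonmanip_vetoes[c], c) for c in candidates if c != p]
    heapq.heapify(rivals)
    for _ in range(coalition_size):
        v, c = heapq.heappop(rivals)      # rival with the fewest vetoes so far
        heapq.heappush(rivals, (v + 1, c))
    fewest_rival_vetoes = min(v for v, _ in rivals)
    # p wins (as a co-winner) iff no rival ends up with strictly fewer vetoes.
    return nonmanip_vetoes[p] <= fewest_rival_vetoes

# Example: 3 manipulators, rivals b and c currently tied with p = "a".
print(coalition_can_make_win("a", {"a": 2, "b": 2, "c": 2}, 3, ["a", "b", "c"]))
```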

    Cooperative Negotiation in Autonomic Systems using Incremental Utility Elicitation

    Decentralized resource allocation is a key problem for large-scale autonomic (or self-managing) computing systems. Motivated by a data center scenario, we explore efficient techniques for resolving resource conflicts via cooperative negotiation. Rather than computing in advance the functional dependence of each element's utility upon the amount of resource it receives, which could be prohibitively expensive, each element's utility is elicited incrementally. Such incremental utility elicitation strategies require the evaluation of only a small set of sampled utility function points, yet they find near-optimal allocations with respect to a minimax regret criterion. We describe preliminary computational experiments that illustrate the benefit of our approach. Comment: Appears in Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence (UAI 2003).
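
    As a rough illustration of the minimax-regret criterion mentioned above (not the paper's negotiation protocol), suppose each element's utility for each resource amount is only known to lie in an interval derived from the sampled points; we can then pick the allocation whose worst-case regret against any rival allocation is smallest, assuming the interval bounds are independent. All names and numbers below are illustrative.

```python
# Minimax-regret allocation with partially elicited (interval-valued) utilities.

from itertools import product

def allocations(n_elements, units):
    """All ways to split `units` indivisible resource units among n elements."""
    for split in product(range(units + 1), repeat=n_elements):
        if sum(split) == units:
            yield split

def pairwise_max_regret(a, b, lo, hi):
    # Worst case over utilities consistent with the bounds (bounds assumed independent).
    return sum(0 if ai == bi else hi[i][bi] - lo[i][ai]
               for i, (ai, bi) in enumerate(zip(a, b)))

def minimax_regret_allocation(lo, hi, units):
    allocs = list(allocations(len(lo), units))
    return min(allocs,
               key=lambda a: max(pairwise_max_regret(a, b, lo, hi) for b in allocs))

# Two elements, 3 units; lo[i][x] / hi[i][x] bound element i's utility for x units.
lo = [[0, 2, 3, 4], [0, 1, 3, 6]]
hi = [[0, 3, 4, 5], [0, 2, 5, 8]]
print(minimax_regret_allocation(lo, hi, units=3))
```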

    Emotionalism within People-Oriented Software Design

    In designing most software applications, much effort is placed on the functional goals, which make a software system useful. However, failing to consider emotional goals, which make a software system pleasurable to use, can result in disappointment and system rejection even if the utilitarian goals are well implemented. Although several studies have emphasized the importance of people's emotional goals in developing software, there is little advice on how to address these goals in the software system development process. This paper proposes a theoretically sound and practical method that combines theories and techniques from software engineering, requirements engineering, and decision making. The outcome of this study is the Emotional Goal Systematic Analysis Technique (EG-SAT), which facilitates the process of finding software system capabilities to address emotional goals in software design. EG-SAT is an easy-to-learn and easy-to-use technique that helps analysts gain insights into how to address people's emotional goals. To demonstrate the method in use, a two-part evaluation is conducted. First, EG-SAT is used to analyse the emotional goals of potential users of a mobile learning application that provides information about low carbon living for tradespeople and professionals in the building industry in Australia. The results of using EG-SAT in this case study are compared with a professionally developed baseline. Second, we ran a semi-controlled experiment in which 12 participants were asked to apply EG-SAT and another technique to part of our case study. The outcomes show that EG-SAT helped participants both to analyse emotional goals and to gain valuable insights about the functional and non-functional goals for addressing people's emotional goals.

    Iterative Judgment Aggregation

    Judgment aggregation problems form a class of collective decision-making problems represented in an abstract way, subsuming some well-known problems such as voting. A collective decision can be reached in many ways, but a direct one-step aggregation of individual decisions is arguably the most studied. Another way to reach collective decisions is by iterative consensus building: allowing each decision-maker to change their individual decision in response to the choices of the other agents until a consensus is reached. Iterative consensus building has so far only been studied for voting problems. Here we propose an iterative judgment aggregation algorithm, based on movements in an undirected graph, and we study for which instances it terminates with a consensus. We also compare the computational complexity of our iterative procedure with that of related judgment aggregation operators.
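
    A toy loop can illustrate what "iterative consensus building" means here; this is emphatically not the paper's graph-based procedure, just a sketch in which agents adopt the issue-wise majority whenever it passes a consistency check, until consensus or stagnation.

```python
# Toy iterative consensus building on binary judgments (NOT the paper's algorithm).

def issue_majority(profile):
    n = len(profile)
    return tuple(int(sum(j[i] for j in profile) * 2 > n)
                 for i in range(len(profile[0])))

def iterate_to_consensus(profile, is_consistent, max_rounds=100):
    profile = [tuple(j) for j in profile]
    for _ in range(max_rounds):
        majority = issue_majority(profile)
        changed = False
        for k, judgment in enumerate(profile):
            if judgment != majority and is_consistent(majority):
                profile[k] = majority
                changed = True
        if len(set(profile)) == 1:
            return profile[0]          # consensus reached
        if not changed:
            return None                # stuck: no agent is willing to move
    return None

# Three agents judging the issues (p, q, "p and q"); a judgment is consistent
# when the third entry equals the conjunction of the first two.
consistent = lambda j: j[2] == int(j[0] and j[1])
print(iterate_to_consensus([(1, 1, 1), (1, 0, 0), (1, 1, 1)], consistent))  # -> (1, 1, 1)
```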

    Just Sort It! A Simple and Effective Approach to Active Preference Learning

    We address the problem of learning a ranking by using adaptively chosen pairwise comparisons. Our goal is to recover the ranking accurately while sampling the comparisons sparingly. If all comparison outcomes are consistent with the ranking, the optimal solution is to use an efficient sorting algorithm, such as Quicksort. But how do sorting algorithms behave if some comparison outcomes are inconsistent with the ranking? We give favorable guarantees for Quicksort under the popular Bradley-Terry model, under natural assumptions on the parameters. Furthermore, we empirically demonstrate that sorting algorithms lead to a very simple and effective active learning strategy: repeatedly sort the items. This strategy performs as well as state-of-the-art methods (and much better than random sampling) at a minuscule fraction of the computational cost. Comment: Accepted at ICML 2017.
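
    A minimal sketch of the idea (illustrative, not the authors' code): run Quicksort with comparisons drawn from a Bradley-Terry model, and use repeated sorting as the active sampling strategy; how the collected comparisons are aggregated into a final ranking is omitted here.

```python
# Quicksort with noisy Bradley-Terry comparisons; repeated sorting as active learning.

import math
import random

def bt_compare(theta_i, theta_j):
    """Noisy comparison: True if item i 'beats' item j under Bradley-Terry."""
    p_i_wins = 1.0 / (1.0 + math.exp(theta_j - theta_i))
    return random.random() < p_i_wins

def noisy_quicksort(items, theta):
    if len(items) <= 1:
        return list(items)
    pivot, rest = items[0], items[1:]
    better = [x for x in rest if bt_compare(theta[x], theta[pivot])]
    worse = [x for x in rest if x not in better]
    return noisy_quicksort(better, theta) + [pivot] + noisy_quicksort(worse, theta)

# Items with latent Bradley-Terry strengths (hypothetical values).
theta = {"a": 2.0, "b": 1.0, "c": 0.0, "d": -1.0}
items = list(theta)
for _ in range(3):                     # each pass issues adaptively chosen comparisons
    items = noisy_quicksort(items, theta)
print(items)                           # likely close to ['a', 'b', 'c', 'd']
```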

    Determining Possible and Necessary Winners Given Partial Orders

    Usually, a voting rule requires agents to give their preferences as linear orders. However, in some cases it is impractical for an agent to give a linear order over all the alternatives, and it has been suggested to let agents submit partial orders instead. Then, given a voting rule, a profile of partial orders, and an alternative (candidate) c, two important questions arise: first, is it still possible for c to win, and second, is c guaranteed to win? These are the possible winner and necessary winner problems, respectively. Each of these two problems is further divided into two sub-problems: determining whether c is a unique winner (that is, c is the only winner), or determining whether c is a co-winner (that is, c is in the set of winners). We consider the setting where the number of alternatives is unbounded and the votes are unweighted. We completely characterize the complexity of the possible/necessary winner problems for the following common voting rules: a class of positional scoring rules (including Borda), Copeland, maximin, Bucklin, ranked pairs, voting trees, and plurality with runoff.
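
    For tiny instances, the two definitions can be checked by brute force over all completions of the partial votes, which makes the problem statement concrete even though the paper's point is the complexity of these problems in general. The sketch below uses Borda with hypothetical votes; names are illustrative.

```python
# Brute-force possible/necessary winner check (co-winner semantics) under Borda.

from itertools import permutations, product

def extensions(partial, candidates):
    """All linear orders consistent with a set of (a, b) pairs meaning a is preferred to b."""
    for order in permutations(candidates):
        pos = {c: i for i, c in enumerate(order)}
        if all(pos[a] < pos[b] for a, b in partial):
            yield order

def borda_cowinners(profile, candidates):
    m = len(candidates)
    score = {c: 0 for c in candidates}
    for order in profile:
        for i, c in enumerate(order):
            score[c] += m - 1 - i
    best = max(score.values())
    return {c for c in candidates if score[c] == best}

def possible_and_necessary(c, partial_votes, candidates):
    completions = product(*(list(extensions(p, candidates)) for p in partial_votes))
    possible, necessary = False, True
    for profile in completions:
        winners = borda_cowinners(profile, candidates)
        possible = possible or c in winners
        necessary = necessary and c in winners
    return possible, necessary

cands = ["a", "b", "c"]
votes = [{("a", "b")}, {("b", "c"), ("a", "c")}]   # two partial orders
print(possible_and_necessary("b", votes, cands))   # -> (True, False)
```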

    ACon: A learning-based approach to deal with uncertainty in contextual requirements at runtime

    Context: Runtime uncertainty, such as an unpredictable operational environment and the failure of sensors that gather environmental data, is a well-known challenge for adaptive systems. Objective: To execute requirements that depend on context correctly, the system needs up-to-date knowledge about the context relevant to such requirements. Techniques to cope with uncertainty in contextual requirements are currently underrepresented. In this paper we present ACon (Adaptation of Contextual requirements), a data-mining approach to deal with runtime uncertainty affecting contextual requirements. Method: ACon uses feedback loops to maintain up-to-date knowledge about the context in which contextual requirements are valid at runtime. Upon detecting that contextual requirements are affected by runtime uncertainty, ACon analyses and mines the contextual data to (re-)operationalize the context and thereby update the information about contextual requirements. Results: We evaluate ACon in an empirical study of an activity scheduling system used by a crew of four rowers in a wild and unpredictable environment, using a complex monitoring infrastructure. Our study focused on evaluating the data-mining part of ACon and analysed the sensor data collected onboard from 46 sensors, with 90,748 measurements per sensor. Conclusion: ACon is an important step towards dealing with uncertainty affecting contextual requirements at runtime while considering end-user interaction. ACon supports systems in analysing the environment to adapt contextual requirements, and complements existing requirements monitoring approaches by keeping the requirements monitoring specification up-to-date. Consequently, it avoids manual analysis that is usually costly in today's complex system environments.
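
    The mining step can be pictured with an off-the-shelf learner (the concrete technique and data inside ACon may differ): learn, from logged sensor readings labelled with whether a contextual requirement held, a small decision tree whose rules re-operationalize the context. The sensor names and readings below are hypothetical.

```python
# Hypothetical sketch of (re-)operationalizing a context condition from sensor logs.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical log: [wind_speed, wave_height, battery_level] per observation.
X = np.array([[5, 0.5, 80], [25, 2.5, 60], [12, 1.0, 90],
              [30, 3.0, 40], [8, 0.7, 70], [22, 2.0, 55]])
# 1 = the contextual requirement "activity scheduling is safe" held, 0 = it did not.
y = np.array([1, 0, 1, 0, 1, 0])

model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(model, feature_names=["wind_speed", "wave_height", "battery_level"]))

# At runtime, the learned rules replace the outdated context condition.
print(model.predict([[10, 0.8, 75]]))   # e.g. [1]: requirement considered valid
```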

    Information and Decision Theoretic Approaches to Problems in Active Diagnosis.

    In applications such as active learning or disease/fault diagnosis, one often encounters the problem of identifying an unknown object while minimizing the number of "yes" or "no" questions (queries) posed about that object. This problem has been commonly referred to as object/entity identification or active diagnosis in the literature. In this thesis, we consider several extensions of this fundamental problem that are motivated by practical considerations in real-world, time-critical identification tasks such as emergency response. First, we consider the problem where the objects are partitioned into groups, and the goal is to identify only the group to which the object belongs. We then consider the case where the cost of identifying an object grows exponentially in the number of queries. To address these problems, we show that a standard algorithm for object identification, known as the splitting algorithm or generalized binary search (GBS), may be viewed as a generalization of Shannon-Fano coding. We then extend this result to the group-based and the exponential-cost settings, leading to new, improved algorithms. We then study the problem of active diagnosis under persistent query noise. Previous work in this area either assumed that the noise is independent or that the underlying query noise distribution is completely known. We make no such assumptions, and introduce an algorithm that returns a ranked list of objects such that the expected rank of the true object is optimized. Finally, we study the problem of active diagnosis where multiple objects are present, as in disease/fault diagnosis. Current algorithms in this area have exponential time complexity, making them slow and intractable. We address this issue by proposing an extension of our rank-based approach to the multiple-object scenario, where we optimize the area under the ROC curve (AUC) of the rank-based output. The AUC criterion allows us to make a simplifying assumption that significantly reduces the complexity of active diagnosis (from exponential to near quadratic), with little or no compromise in performance quality. Further, we demonstrate the performance of the proposed algorithms through extensive experiments on both synthetic and real-world datasets. Ph.D., Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/91606/1/gowtham_1.pd
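
    The splitting algorithm / generalized binary search mentioned above can be sketched in a few lines for the noise-free case: always ask the query that splits the remaining probability mass most evenly, then discard objects inconsistent with the answer. The objects and queries below are made up for illustration.

```python
# Generalized binary search (GBS) for noise-free object identification.

def gbs_identify(prior, query_table, answer_oracle):
    """prior: dict object -> probability;
    query_table: dict query -> dict object -> 0/1 answer;
    answer_oracle(query): the 0/1 answer for the hidden true object."""
    alive = dict(prior)
    history = []
    while len(alive) > 1:
        total = sum(alive.values())
        def imbalance(q):
            yes_mass = sum(p for o, p in alive.items() if query_table[q][o] == 1)
            return abs(2 * yes_mass - total)
        query = min(query_table, key=imbalance)
        if imbalance(query) == total:      # no query distinguishes the survivors
            break
        answer = answer_oracle(query)
        history.append(query)
        alive = {o: p for o, p in alive.items() if query_table[query][o] == answer}
    return max(alive, key=alive.get), history

# Four equally likely objects, three yes/no queries; the hidden object is "o3".
prior = {"o1": 0.25, "o2": 0.25, "o3": 0.25, "o4": 0.25}
queries = {"q1": {"o1": 1, "o2": 1, "o3": 0, "o4": 0},
           "q2": {"o1": 1, "o2": 0, "o3": 1, "o4": 0},
           "q3": {"o1": 0, "o2": 0, "o3": 0, "o4": 1}}
truth = lambda q: queries[q]["o3"]
print(gbs_identify(prior, queries, truth))   # -> ('o3', ['q1', 'q2'])
```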

    Rank Pruning for Dominance Queries in CP-Nets

    Conditional preference networks (CP-nets) are a graphical representation of a person's (conditional) preferences over a set of discrete variables. In this paper, we introduce a novel method of quantifying preference for any given outcome based on a CP-net representation of a user's preferences. We demonstrate that these values are useful for reasoning about user preferences. In particular, they allow us to order (any subset of) the possible outcomes in accordance with the user's preferences. Further, these values can be used to improve the efficiency of outcome dominance testing; that is, given a pair of outcomes, we can determine more efficiently which one the user prefers. Through experimental results, we show that this method is more effective than existing techniques for improving the efficiency of dominance testing. We show that the above results also hold for CP-nets that express indifference between variable values. Comment: 58 pages, 8 figures.
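
    Dominance testing, which the rank-based values above are designed to speed up, is usually defined via improving flipping sequences; a brute-force search over such flips for a small, hypothetical binary CP-net looks like the sketch below (the pruning method itself is not shown).

```python
# Dominance testing in a binary CP-net via breadth-first search over improving flips:
# o1 dominates o2 iff o1 is reachable from o2 by repeatedly flipping one variable
# to its more preferred value given the current values of its parents.

from collections import deque

# Hypothetical CP-net: cpt[var] = (parents, rule), where rule maps the tuple of
# parent values to the preferred value of var.
cpt = {
    "A": ((), {(): 1}),                    # A=1 is unconditionally preferred
    "B": (("A",), {(0,): 0, (1,): 1}),     # B prefers to match A
    "C": (("B",), {(0,): 1, (1,): 0}),     # C prefers to differ from B
}
ORDER = list(cpt)

def improving_flips(outcome):
    values = dict(zip(ORDER, outcome))
    for i, var in enumerate(ORDER):
        parents, rule = cpt[var]
        preferred = rule[tuple(values[p] for p in parents)]
        if values[var] != preferred:
            yield outcome[:i] + (preferred,) + outcome[i + 1:]

def dominates(better, worse):
    """True iff `better` is reachable from `worse` by improving flips."""
    frontier, seen = deque([worse]), {worse}
    while frontier:
        current = frontier.popleft()
        if current == better:
            return True
        for nxt in improving_flips(current):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Outcomes are (A, B, C) tuples.
print(dominates((1, 1, 0), (0, 0, 1)))   # -> True: reachable by improving flips
```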