11 research outputs found

    Planning with Multiple Biases

    Recent work has considered theoretical models for the behavior of agents with specific behavioral biases: rather than making decisions that optimize a given payoff function, the agent behaves inefficiently because its decisions suffer from an underlying bias. These approaches have generally considered an agent who experiences a single behavioral bias, studying the effect of this bias on the outcome. In general, however, decision-making can and will be affected by multiple biases operating at the same time. How do multiple biases interact to produce the overall outcome? Here we consider decisions in the presence of a pair of biases exhibiting an intuitively natural interaction: present bias -- the tendency to value costs incurred in the present too highly -- and sunk-cost bias -- the tendency to incorporate costs experienced in the past into one's plans for the future. We propose a theoretical model for planning with this pair of biases, and we show how certain natural behavioral phenomena can arise in our model only when agents exhibit both biases. As part of our model we differentiate between agents that are aware of their biases (sophisticated) and agents that are unaware of them (naive). Interestingly, we show that the interaction between the two biases is quite complex: in some cases they mitigate each other's effects, while in other cases they might amplify each other. We obtain a number of further results as well, including the fact that the planning problem in our model for an agent experiencing and aware of both biases is computationally hard in general, though tractable under more relaxed assumptions.
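    The interaction described in this abstract can be illustrated with a toy continue-or-abandon rule. This is only a sketch, not the paper's formal model: the parameters `b` (present bias) and `lam` (sunk-cost bias) and the quit-penalty formalization are assumptions made for illustration.

```python
def continues(c_now, c_later, reward, sunk, b=1.0, lam=0.0):
    """Does a biased agent continue a partially completed task?

    b   >= 1: present bias -- today's cost is inflated by factor b.
    lam >= 0: sunk-cost bias -- abandoning "wastes" the lam * sunk
              already paid, modeled as a perceived penalty for quitting.
    (Toy formalization for illustration, not the paper's model.)
    """
    perceived_continue = b * c_now + c_later - reward
    perceived_abandon = lam * sunk
    return perceived_continue <= perceived_abandon

# An unbiased agent (b=1, lam=0) continues: remaining cost 3+2 < reward 6.
# A present-biased agent (b=2) quits: 2*3 + 2 > 6.
# Adding sunk-cost bias (lam=1, sunk=4) makes it continue again --
# one bias mitigating the other, as the abstract describes.
```

With these toy numbers, present bias alone causes abandonment while the sunk-cost term counteracts it, matching the mitigation effect mentioned above.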

    Time-inconsistent Planning: Simple Motivation Is Hard to Find

    With the introduction of the graph-theoretic time-inconsistent planning model due to Kleinberg and Oren, it has become possible to investigate the computational complexity of how a task designer can best support a present-biased agent in completing the task. In this paper, we study the complexity of finding a choice reduction for the agent; that is, how to remove edges and vertices from the task graph such that a present-biased agent will remain motivated to reach his target even for a limited reward. While this problem is NP-complete in general, this is not necessarily true for instances which occur in practice, or for solutions which are of interest to task designers. For instance, a task designer may desire to find the best task graph which is not too complicated. We therefore investigate the problem of finding simple motivating subgraphs. These are structures where the agent will modify his plan at most k times along the way. We quantify this simplicity in the time-inconsistency model as a structural parameter: the number of branching vertices (vertices with out-degree at least 2) in a minimal motivating subgraph. Our results are as follows: we give a linear-time algorithm for finding an optimal motivating path, i.e. when k = 0. On the negative side, we show that finding a simple motivating subgraph is NP-complete even if we allow only a single branching vertex -- revealing that simple motivating subgraphs are indeed hard to find. However, we give a pseudo-polynomial algorithm for the case when k is fixed and edge weights are rational, which might be a reasonable assumption in practice. Comment: An extended abstract of this paper is accepted at AAAI 202
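    As background, the Kleinberg-Oren style of present-biased traversal referenced above can be sketched in a few lines: at each node the agent inflates the immediate edge cost by a bias factor b but evaluates the remaining shortest distance at face value, then greedily follows the seemingly cheapest continuation. The graph encoding and helper names below are illustrative assumptions, not code from the paper.

```python
import heapq

def shortest_dists(adj, t):
    # Dijkstra from the target over reversed edges: dist[v] is the true
    # remaining cost from v to t.  adj maps node -> [(successor, weight)].
    rev = {v: [] for v in adj}
    for v, edges in adj.items():
        for u, w in edges:
            rev.setdefault(u, []).append((v, w))
    dist = {v: float("inf") for v in rev}
    dist[t] = 0.0
    pq = [(0.0, t)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist[v]:
            continue
        for u, w in rev[v]:
            if d + w < dist[u]:
                dist[u] = d + w
                heapq.heappush(pq, (d + w, u))
    return dist

def biased_walk(adj, s, t, b):
    # Path taken by a present-biased agent with bias b >= 1: the next
    # edge's cost counts b-fold, the rest of the trip at face value.
    dist = shortest_dists(adj, t)
    path, v = [s], s
    while v != t:
        v = min(adj[v], key=lambda e: b * e[1] + dist[e[0]])[0]
        path.append(v)
    return path

# Hypothetical task graph: the direct edge s->t costs 8; the detour via a
# starts cheap (1) but ends expensive (10).  An unbiased agent goes
# direct; a biased agent (b=2) is lured onto the detour and pays 11.
tasks = {"s": [("a", 1), ("t", 8)], "a": [("t", 10)], "t": []}
```

This is exactly the kind of inefficient plan revision that motivating subgraphs are designed to prune away.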

    Mitigating Procrastination in Spatial Crowdsourcing Via Efficient Scheduling Algorithm

    Several works related to spatial crowdsourcing consider settings in which task executers are to perform tasks within stipulated deadlines. Even when deadlines are set, in practice a majority of the task executers may submit their tasks as late as possible. This situation, where task executers delay their task submission, is termed procrastination in behavioural economics. In many applications, such late submission of tasks is problematic for task providers. Here, both classes of participating agents (task providers and task executers) are affected by the procrastination issue. The literature does not address how to prevent this procrastination within the deadline in the spatial crowdsourcing scenario. A procrastination-aware scheduling scheme has been proposed in a bipartite graph setting, but balanced distribution of jobs (task and job will be used synonymously) across the different slots (also termed schedules) is not considered there. In this paper, we propose a randomized algorithm for procrastination-aware scheduling of jobs in the spatial crowdsourcing scenario. Our algorithm ensures that the jobs remain balanced across the different schedules. We compare our scheme with the existing algorithm through extensive simulation, and in terms of balancing effect our proposed algorithm outperforms the existing one. We also show analytically that our proposed algorithm maintains the balanced distribution.
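    The abstract does not spell out the algorithm, but the flavor of randomized load balancing across time slots can be illustrated with the classical "two-choice" rule, which keeps slot loads far more even than a single uniform random choice. All names below are hypothetical and this is a generic sketch, not the paper's algorithm.

```python
import random

def balanced_schedule(jobs, num_slots, rng=random):
    """Two-choice randomized assignment of jobs to time slots: each job
    samples two slots and joins the less loaded one (balls-into-bins).
    Illustrative sketch only, not the paper's algorithm."""
    slots = [[] for _ in range(num_slots)]
    for job in jobs:
        a, b = rng.randrange(num_slots), rng.randrange(num_slots)
        target = a if len(slots[a]) <= len(slots[b]) else b
        slots[target].append(job)
    return slots
```

With 100 jobs and 10 slots, every job lands in exactly one slot and the maximum slot load stays close to the average of 10.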

    Chunking Tasks for Present-Biased Agents

    Everyone puts things off sometimes. How can we combat this tendency to procrastinate? A well-known technique used by instructors is to break up a large project into more manageable chunks. But how should this be done best? Here we study the process of chunking using the graph-theoretic model of present bias introduced by Kleinberg and Oren (2014). We first analyze how to optimally chunk single edges within a task graph, given a limited number of chunks. We show that for edges on the shortest path, the optimal chunking makes initial chunks easy and later chunks progressively harder. For edges not on the shortest path, optimal chunking is significantly more complex, but we provide an efficient algorithm that chunks the edge optimally. We then use our optimal edge-chunking algorithm to optimally chunk task graphs. We show that with a linear number of chunks on each edge, the biased agent's cost can be exponentially lowered, to within a constant factor of the true cheapest path. Finally, we extend our model to the case where a task designer must chunk a graph for multiple types of agents simultaneously. The problem grows significantly more complex with even two types of agents, but we provide optimal graph chunking algorithms for two types. Our work highlights the efficacy of chunking as a means to combat present bias. Comment: Published in Economics and Computation 202
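    Why chunking helps a present-biased agent can be seen in a few lines: under the standard present-bias evaluation, only the first chunk is inflated by the bias factor b, so splitting an edge shrinks the inflated portion. The uniform split below is purely for illustration; as the abstract notes, the optimal split is non-uniform (easy first, harder later).

```python
def perceived_start_cost(chunks, b):
    """Perceived cost, for a present-biased agent with bias b, of
    beginning a task split into the given chunk sizes: the first chunk
    is inflated by b, the remaining chunks are taken at face value."""
    return b * chunks[0] + sum(chunks[1:])

# Splitting a cost-10 edge into equal chunks, with bias b = 3:
b = 3.0
whole = perceived_start_cost([10.0], b)         # unchunked: 3 * 10 = 30
halves = perceived_start_cost([5.0, 5.0], b)    # 3 * 5 + 5 = 20
fifths = perceived_start_cost([2.0] * 5, b)     # 3 * 2 + 8 = 14
```

The true cost is 10 in every case, but the perceived hurdle drops from 30 toward 10 as the first chunk shrinks, which is why chunking can keep a biased agent motivated.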

    Exploiting graph structures for computational efficiency

    Coping with NP-hard graph problems by doing better than simple brute force is a field of significant practical importance, and one which has also sparked wide theoretical interest. One route to coping with such hard graph problems is to exploit structures which can be found in the input data or in the witness for a solution. In the framework of parameterized complexity, we attempt to quantify such structures by defining numbers which describe "how structured" the graph is. We then give a fine-grained classification of the computational complexity, where not only the input size but also the structural measure in question comes into play. There are a number of structural measures called width parameters, which include treewidth, clique-width, and mim-width. These width parameters can be compared by how many classes of graphs have bounded width. In general there is a tradeoff: if more graph classes have bounded width, then fewer problems can be efficiently solved with the aid of a small width; and if a width is bounded for only a few graph classes, then it is easier to design algorithms which exploit the structure described by the width parameter. For each of the mentioned width parameters, there are known meta-theorems describing algorithmic results for a wide array of graph problems. Hence, showing that decompositions with bounded width can be found for a certain graph class yields algorithmic results for the given class. In the current thesis, we show that several graph classes have bounded width measures, which thus gives algorithmic consequences. Algorithms which are FPT or XP parameterized by width parameters exploit structure of the input graph. However, it is also possible to exploit structures that are required of a witness to the solution. We use this perspective to give a handful of polynomial-time algorithms for NP-hard problems whenever the witness belongs to certain graph classes.
    It is also possible to combine structures of the input graph with structures of the solution witnesses in order to obtain parameterized algorithms, when each structure individually is provably insufficient to do so under standard complexity assumptions. We give an example of this in the final chapter of the thesis.
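    As a concrete instance of exploiting input structure, a problem that is NP-hard on general graphs often becomes easy on a restricted class. The sketch below solves maximum independent set in linear time on trees (which have treewidth 1) by dynamic programming over subtrees; the function name and adjacency encoding are illustrative, not taken from the thesis.

```python
def tree_max_independent_set(adj, root=0):
    """Size of a maximum independent set of a tree, in linear time.
    adj maps each vertex to its list of neighbours (undirected tree)."""
    # take[v] / skip[v]: best set size in v's subtree with v in / out.
    take = {v: 1 for v in adj}
    skip = {v: 0 for v in adj}
    # Iterative DFS to get an order with children before parents.
    order, parent, stack = [], {root: None}, [root]
    while stack:
        v = stack.pop()
        order.append(v)
        for u in adj[v]:
            if u != parent[v]:
                parent[u] = v
                stack.append(u)
    for v in reversed(order):  # leaves first
        for u in adj[v]:
            if u != parent[v]:
                take[v] += skip[u]                  # child must be out
                skip[v] += max(take[u], skip[u])    # child free to choose
    return max(take[root], skip[root])
```

On a path with five vertices the answer is 3 (every other vertex), and on a star with three leaves it is 3 (all the leaves) -- the tree structure is what makes this polynomial, exactly the kind of structural win the thesis studies for broader width parameters.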