Monotone threshold representations
Motivated by the literature on "choice overload", we study a boundedly rational agent whose choice behavior admits a "monotone threshold representation": there is an underlying rational benchmark, corresponding to maximization of a utility function, from which the agent's choices depart in a menu-dependent manner. The severity of the departure is quantified by a threshold map, which is monotone with respect to set inclusion. We derive an axiomatic characterization of the model, extending familiar characterizations of rational choice. We classify monotone threshold representations as a special case of Simon's theory of "satisficing", but as strictly more general than both Tyson's (2008) "expansive satisficing" model and the semiorder-based choice models of Fishburn (1975) and Luce (1956). We axiomatically characterize the difference, providing novel foundations for these models.
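As a rough illustration of the representation the abstract describes, the following sketch uses one standard formalization (my own reading, not necessarily the paper's exact definitions): from menu A, the agent chooses any alternative whose utility is within theta(A) of the menu's maximum, where the threshold map theta is monotone under set inclusion, so larger menus permit larger departures from rationality. The utilities and the specific theta below are hypothetical.

```python
# Illustrative sketch of a monotone threshold representation (one standard
# formalization, assumed here; utilities and theta are hypothetical).

def choices(menu, u, theta):
    """Alternatives in `menu` within theta(menu) of the best utility."""
    best = max(u[x] for x in menu)
    return {x for x in menu if u[x] >= best - theta(menu)}

# Hypothetical utilities over three alternatives.
u = {"a": 3.0, "b": 2.5, "c": 1.0}

# A monotone threshold map: weakly larger menus get a weakly larger
# threshold, capturing "choice overload".
def theta(menu):
    return 0.3 * (len(menu) - 1)

# From {a, b} only "a" is chosen (2.5 < 3.0 - 0.3), but enlarging the
# menu to {a, b, c} raises the threshold and "b" becomes choosable too.
print(choices({"a", "b"}, u, theta))
print(choices({"a", "b", "c"}, u, theta))
```

Note how the enlarged menu makes a previously unchosen alternative acceptable, the menu-dependent departure from utility maximization that the axioms discipline.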
Innovation Adoption by Forward-Looking Social Learners
Motivated by the rise of social media, we build a model studying the effect of an economy's potential for social learning on the adoption of innovations of uncertain quality. Provided consumers are forward-looking (i.e., recognize the value of waiting for information), equilibrium dynamics depend non-trivially on qualitative and quantitative features of the informational environment. We identify informational environments that are subject to a saturation effect, whereby increased opportunities for social learning can slow down adoption and learning and do not increase consumer welfare. We also suggest a novel, purely informational explanation for different commonly observed adoption curves (S-shaped vs. concave).
Innovation Adoption by Forward-Looking Social Learners
We build a model studying the effect of an economy's potential for social learning on the adoption of innovations of uncertain quality. Provided consumers are forward-looking (i.e., recognize the value of waiting for information), we show how quantitative and qualitative features of the learning environment affect observed adoption dynamics, welfare, and the speed of learning. Our analysis has two main implications. First, we identify environments that are subject to a "saturation effect," whereby increased opportunities for social learning can slow down adoption and learning and do not increase consumer welfare, possibly even being harmful. Second, we show how differences in the learning environment translate into observable differences in adoption dynamics, suggesting a purely informational channel for two commonly documented adoption patterns: S-shaped and concave curves.
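The distinction between the two adoption shapes can be made concrete with a purely mechanical sketch (my simplification, not the paper's equilibrium model): with a constant per-period adoption hazard the cumulative adoption curve is concave, while a hazard that rises over time, e.g. as accumulated social information makes waiting less valuable, produces an S-shape. The specific hazard paths below are arbitrary.

```python
# Stylized adoption curves (hazard paths are arbitrary illustrations).

def adoption_curve(hazards):
    """Cumulative adoption share given per-period adoption hazards."""
    remaining, curve = 1.0, []
    for h in hazards:
        remaining *= (1 - h)        # non-adopters who still wait
        curve.append(1 - remaining)  # cumulative adopters
    return curve

# Constant hazard: adoption decelerates from the start (concave curve).
concave = adoption_curve([0.15] * 30)

# Rising hazard (capped): slow start, acceleration, then saturation
# (S-shaped curve), as if adoption picks up once information accumulates.
s_shaped = adoption_curve([min(0.3, 0.01 * 1.25 ** t) for t in range(30)])
```

The point of the sketch is only that the time path of the adoption hazard, which in the paper is driven by the informational environment, fully determines which of the two documented shapes emerges.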
Dynamic Random Utility
Under dynamic random utility, an agent (or population of agents) solves a dynamic decision problem subject to evolving private information. We analyze the fully general and non-parametric model, axiomatically characterizing the implied dynamic stochastic choice behavior. A key new feature relative to static or i.i.d. versions of the model is that when private information displays serial correlation, choices appear history dependent: different sequences of past choices reflect different private information of the agent, and hence typically lead to different distributions of current choices. Our axiomatization imposes discipline on the form of history dependence that can arise under arbitrary serial correlation. Dynamic stochastic choice data lets us distinguish central models that coincide in static domains, in particular private information in the form of utility shocks vs. learning, and study inherently dynamic phenomena such as choice persistence. We relate our model to specifications of utility shocks widely used in empirical work, highlighting new modeling tradeoffs in the dynamic discrete choice literature. Finally, we extend our characterization to allow past consumption to directly affect the agent's utility process, accommodating models of habit formation and experimentation.
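The history-dependence mechanism can be seen in a minimal simulation (my own toy parametrization, not the paper's general model): each period the agent picks the option with the higher utility shock, and because the shock is serially correlated, the period-2 choice distribution differs sharply across period-1 choice histories even though every choice is a fresh maximization.

```python
# Toy simulation: serially correlated taste shocks make choices look
# history dependent (parameters are arbitrary illustrations).
import random

random.seed(0)
N = 20_000
chose_b_given_a = chose_b_given_b = 0
count_a = count_b = 0

for _ in range(N):
    # Persistent taste shock for option "b" ("a" is normalized to 0).
    eps1 = random.gauss(0.0, 1.0)
    eps2 = 0.9 * eps1 + random.gauss(0.0, 0.4)  # serial correlation
    first = "b" if eps1 > 0 else "a"
    second_is_b = eps2 > 0
    if first == "a":
        count_a += 1
        chose_b_given_a += second_is_b
    else:
        count_b += 1
        chose_b_given_b += second_is_b

# Agents who chose "b" before are far more likely to choose "b" again:
# past choices reveal persistent private information.
print(chose_b_given_a / count_a, chose_b_given_b / count_b)
```

With i.i.d. shocks (replace `0.9` by `0.0`) the two conditional frequencies would coincide, which is why dynamic data can separate models that static data cannot.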
Welfare Comparisons for Biased Learning
We study robust welfare comparisons of learning biases, i.e., deviations from correct Bayesian updating. Given a true signal distribution, we deem one bias more harmful than another if it yields lower objective expected payoffs in all decision problems. We characterize this ranking in static (one signal) and dynamic (many signals) settings. While the static characterization compares posteriors signal-by-signal, the dynamic characterization employs an "efficiency index" quantifying the speed of belief convergence. Our results yield welfare-founded quantifications of the severity of well-documented biases. Moreover, the static and dynamic rankings can conflict, and "smaller" biases can be worse in dynamic settings.
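A toy illustration of the "speed of belief convergence" idea (the setup and numbers are mine, not the paper's formal framework): an agent who underreacts, updating with a dampened log-likelihood ratio, drifts toward the true state on the same signal stream as a correct Bayesian, but at a slower rate.

```python
# Toy comparison of a correct Bayesian vs. an underreacting updater
# (binary state, i.i.d. binary signals; all numbers are illustrative).
import math
import random

random.seed(1)

def log_posterior_odds(signals, dampening):
    """Final log-odds of state H after updating on `signals`.

    Signals satisfy P(s=1 | H) = 0.7 and P(s=1 | L) = 0.3; a dampening
    factor below 1 models underreaction to each signal.
    """
    llr_one = math.log(0.7 / 0.3)
    lpo = 0.0
    for s in signals:
        lpo += dampening * (llr_one if s == 1 else -llr_one)
    return lpo

# Draw 200 signals from the high state H.
signals = [1 if random.random() < 0.7 else 0 for _ in range(200)]

bayes = log_posterior_odds(signals, dampening=1.0)
biased = log_posterior_odds(signals, dampening=0.5)
# Both drift toward H, but the biased agent's log-odds grow at half the
# rate: a lower speed of belief convergence on the same data.
print(bayes, biased)
```

This also hints at why static and dynamic rankings can come apart: a bias that distorts single-signal posteriors only mildly can still compound over many signals into a much slower, hence costlier, convergence rate.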
Dispersed Behavior and Perceptions in Assortative Societies
Motivated by the fact that people's perceptions of their societies are routinely incorrect, we study the possibility and implications of misperception in social interactions. We focus on coordination games with assortative interactions, where agents with higher types (e.g., wealth, political attitudes) are more likely than lower types to interact with other high types. Assortativity creates scope for misperception, because what agents observe in their local interactions need not be representative of society as a whole. To model this, we define a tractable solution concept, "local perception equilibrium" (LPE), that describes possible behavior and perceptions when agents' beliefs are derived only from their local interactions. We show that there is a unique form of misperception that can persist in any environment: This is assortativity neglect, where all agents believe the people they interact with to be a representative sample of society as a whole. Relative to the case with correct perceptions, assortativity neglect generates two mutually reinforcing departures: A "false consensus effect," whereby agents' perceptions of average characteristics in the population are increasing in their own type; and more "dispersed" behavior in society, which adversely affects welfare due to increased miscoordination. Finally, we propose a comparative notion of when one society is more assortative than another and show that more assortative societies are characterized precisely by greater action dispersion and a more severe false consensus effect, and as a result, greater assortativity has an ambiguous effect on welfare.
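The false consensus mechanism can be sketched in a few lines (my own toy construction, not the paper's equilibrium analysis): if interactions are assortative and each agent treats their local sample as representative of society, then perceived population averages are increasing in the agent's own type.

```python
# Toy sketch of assortativity neglect producing a false consensus effect
# (population, interaction window, and types are arbitrary illustrations).
import statistics

types = list(range(101))  # population types 0..100

def local_sample(t, width=20):
    # Assortative interaction: type t only meets types within `width` of t.
    return [s for s in types if abs(s - t) <= width]

# Under assortativity neglect, each agent reads the local mean as the
# population mean, so higher types perceive a higher "average" society.
perceived = {t: statistics.mean(local_sample(t)) for t in (20, 50, 80)}
print(perceived)
```

In this sketch each agent's perceived mean equals their own type, while the true population mean is 50, so low types underestimate and high types overestimate average characteristics, the monotone misperception pattern the abstract calls a false consensus effect.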
Misinterpreting Others and the Fragility of Social Learning
We study to what extent information aggregation in social learning environments is robust to slight misperceptions of others' characteristics (e.g., tastes or risk attitudes). We consider a population of agents who obtain information about the state of the world both from initial private signals and by observing a random sample of other agents' actions over time, where agents' actions depend not only on their beliefs about the state but also on their idiosyncratic types. When agents are correct about the type distribution in the population, they learn the true state in the long run. By contrast, our first main result shows that even arbitrarily small amounts of misperception can generate extreme breakdowns of information aggregation, where in the long run all agents incorrectly assign probability 1 to some fixed state of the world, regardless of the true underlying state. This stark discontinuous departure from the correctly specified benchmark motivates independent analysis of information aggregation under misperception. Our second main result shows that any misperception of the type distribution gives rise to a specific failure of information aggregation where agents' long-run beliefs and behavior vary only coarsely with the state, and we provide systematic predictions for how the nature of misperception shapes these coarse long-run outcomes. Finally, we show that how sensitive information aggregation is to misperception depends on how rich agents' payoff-relevant uncertainty is. A design implication is that information aggregation can be improved through interventions aimed at simplifying the agents' learning environment.
Misinterpreting Others and the Fragility of Social Learning
We exhibit a natural environment, social learning among heterogeneous agents, where even slight misperceptions can have a large negative impact on long-run learning outcomes. We consider a population of agents who obtain information about the state of the world both from initial private signals and by observing a random sample of other agents' actions over time, where agents' actions depend not only on their beliefs about the state but also on their idiosyncratic types (e.g., tastes or risk attitudes). When agents are correct about the type distribution in the population, they learn the true state in the long run. By contrast, we show, first, that even arbitrarily small amounts of misperception about the type distribution can generate extreme breakdowns of information aggregation, where in the long run all agents incorrectly assign probability 1 to some fixed state of the world, regardless of the true underlying state. Second, any misperception of the type distribution leads long-run beliefs and behavior to vary only coarsely with the state, and we provide systematic predictions for how the nature of misperception shapes these coarse long-run outcomes. Third, we show that how fragile information aggregation is against misperception depends on the richness of agents' payoff-relevant uncertainty; a design implication is that information aggregation can be improved by simplifying agents' learning environment. The key feature behind our findings is that agents' belief-updating becomes "decoupled" from the true state over time. We point to other environments where this feature is present and leads to similar fragility results.
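The decoupling logic can be illustrated with a back-of-the-envelope calculation (my own construction, deliberately exaggerated for visibility; the papers above establish the effect for arbitrarily small misperceptions in an equilibrium model): an observer who updates on others' binary actions using misperceived action probabilities can have an expected belief drift pointing to the same state no matter which state is true.

```python
# Stylized "decoupling" calculation: inference under misperceived action
# probabilities (all numbers are hypothetical and exaggerated).
import math

def expected_llr_drift(p_true, q_perceived_H, q_perceived_L):
    """Expected per-observation change in the log-odds of state H.

    The observer updates using (mis)perceived probabilities of action 1
    in each state, while actions are in fact 1 with probability p_true.
    """
    return (p_true * math.log(q_perceived_H / q_perceived_L)
            + (1 - p_true) * math.log((1 - q_perceived_H) / (1 - q_perceived_L)))

# True action frequencies: P(a=1 | H) = 0.6, P(a=1 | L) = 0.4.
# A misperceived type mix makes the observer expect far more extreme
# behavior: perceived P(a=1 | H) = 0.99, perceived P(a=1 | L) = 0.5.
drift_if_H = expected_llr_drift(0.6, 0.99, 0.5)
drift_if_L = expected_llr_drift(0.4, 0.99, 0.5)

# Both drifts are negative: beliefs converge on state L regardless of
# the true state, a stylized version of the learning breakdown.
print(drift_if_H, drift_if_L)
```

The sketch omits the feedback at the heart of the papers, that observed actions themselves depend on (mis)updated beliefs, but it shows the basic decoupling: with misperceived inference weights, the direction of belief drift need not track the truth.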
Dispersed Behavior and Perceptions in Assortative Societies
We take an equilibrium-based approach to study the interplay between behavior and misperceptions in coordination games with assortative interactions. Our focus is assortativity neglect, where agents fail to take into account the extent of assortativity in society. We show, first, that assortativity neglect amplifies action dispersion, both in fixed societies and by exacerbating the effect of social changes. Second, unlike other misperceptions, assortativity neglect is a misperception that agents can rationalize in any true environment. Finally, assortativity neglect provides a lens through which to understand how empirically documented misperceptions about distributions of population characteristics (e.g., income inequality) vary across societies.