22 research outputs found
Innovation Adoption by Forward-Looking Social Learners
Motivated by the rise of social media, we build a model studying the effect of an economy's potential for social learning on the adoption of innovations of uncertain quality. Provided consumers are forward-looking (i.e., recognize the value of waiting for information), equilibrium dynamics depend non-trivially on qualitative and quantitative features of the informational environment. We identify informational environments that are subject to a saturation effect, whereby increased opportunities for social learning can slow down adoption and learning and do not increase consumer welfare. We also suggest a novel, purely informational explanation for different commonly observed adoption curves (S-shaped vs. concave).
Innovation Adoption by Forward-Looking Social Learners
We build a model studying the effect of an economy's potential for social learning on the adoption of innovations of uncertain quality. Provided consumers are forward-looking (i.e., recognize the value of waiting for information), we show how quantitative and qualitative features of the learning environment affect observed adoption dynamics, welfare, and the speed of learning. Our analysis has two main implications. First, we identify environments that are subject to a "saturation effect," whereby increased opportunities for social learning can slow down adoption and learning and do not increase consumer welfare, possibly even being harmful. Second, we show how differences in the learning environment translate into observable differences in adoption dynamics, suggesting a purely informational channel for two commonly documented adoption patterns: S-shaped and concave curves.
Reputation Building Under Uncertain Monitoring
We study a canonical model of reputation building between a long-run player and a sequence of short-run opponents, in which the long-run player is privately informed about an uncertain state that determines the monitoring structure of the repeated stage game. We present necessary and sufficient conditions (on the monitoring structure and the type space) for reputation building in this setting. Specifically, in contrast to the previous literature, with only stationary commitment types, reputation building is generally not possible and is highly sensitive to the inclusion of other commitment types. However, with the inclusion of appropriate dynamic commitment types, reputation building can again be sustained while maintaining robustness to the inclusion of other arbitrary types.
Essays in Dynamic Games
This dissertation presents three independent essays. Chapter 1, which is joint work with Mira Frick, studies a model of innovation adoption by a large population of long-lived consumers who face stochastic opportunities to adopt an innovation of uncertain quality. We study how the potential for social learning in an economy affects consumers' informational incentives and how these in turn shape the aggregate adoption dynamics of an innovation. For a class of Poisson learning processes, we establish the existence and uniqueness of equilibria. In line with empirical findings, equilibrium adoption patterns are either S-shaped or feature successions of concave bursts. In the former case, our analysis predicts a novel saturation effect: Due to informational free-riding, increased opportunities for social learning necessarily lead to temporary slow-downs in learning and do not produce welfare gains.
Welfare Comparisons for Biased Learning
We study robust welfare comparisons of learning biases, i.e., deviations from correct Bayesian updating. Given a true signal distribution, we deem one bias more harmful than another if it yields lower objective expected payoffs in all decision problems. We characterize this ranking in static (one signal) and dynamic (many signals) settings. While the static characterization compares posteriors signal-by-signal, the dynamic characterization employs an "efficiency index" quantifying the speed of belief convergence. Our results yield welfare-founded quantifications of the severity of well-documented biases. Moreover, the static and dynamic rankings can disagree, and "smaller" biases can be worse in dynamic settings.
Stability and Robustness in Misspecified Learning Models
We present an approach to analyzing learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. Our main results provide general criteria to determine, without the need to explicitly analyze learning dynamics, when beliefs in a given environment converge to some long-run belief either locally or globally (i.e., from some or all initial beliefs). The key ingredient underlying these criteria is a novel "prediction accuracy" ordering over subjective models that refines existing comparisons based on Kullback-Leibler divergence. We show that these criteria can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to identify and analyze a natural class of environments, including costly information acquisition and sequential social learning, where, unlike in most settings studied so far, long-run beliefs can fail to be robust to the details of the true data-generating process or agents' perception thereof. In particular, even if agents learn the truth when correctly specified, vanishingly small amounts of misspecification can lead to extreme failures of learning.
Misinterpreting Others and the Fragility of Social Learning
We study to what extent information aggregation in social learning environments is robust to slight misperceptions of others' characteristics (e.g., tastes or risk attitudes). We consider a population of agents who obtain information about the state of the world both from initial private signals and by observing a random sample of other agents' actions over time, where agents' actions depend not only on their beliefs about the state but also on their idiosyncratic types. When agents are correct about the type distribution in the population, they learn the true state in the long run. By contrast, our first main result shows that even arbitrarily small amounts of misperception can generate extreme breakdowns of information aggregation, where in the long run all agents incorrectly assign probability 1 to some fixed state of the world, regardless of the true underlying state. This stark discontinuous departure from the correctly specified benchmark motivates independent analysis of information aggregation under misperception. Our second main result shows that any misperception of the type distribution gives rise to a specific failure of information aggregation in which agents' long-run beliefs and behavior vary only coarsely with the state, and we provide systematic predictions for how the nature of misperception shapes these coarse long-run outcomes. Finally, we show that how sensitive information aggregation is to misperception depends on how rich agents' payoff-relevant uncertainty is. A design implication is that information aggregation can be improved through interventions aimed at simplifying the agents' learning environment.
Misinterpreting Others and the Fragility of Social Learning
We exhibit a natural environment, social learning among heterogeneous agents, where even slight misperceptions can have a large negative impact on long-run learning outcomes. We consider a population of agents who obtain information about the state of the world both from initial private signals and by observing a random sample of other agents' actions over time, where agents' actions depend not only on their beliefs about the state but also on their idiosyncratic types (e.g., tastes or risk attitudes). When agents are correct about the type distribution in the population, they learn the true state in the long run. By contrast, we show, first, that even arbitrarily small amounts of misperception about the type distribution can generate extreme breakdowns of information aggregation, where in the long run all agents incorrectly assign probability 1 to some fixed state of the world, regardless of the true underlying state. Second, any misperception of the type distribution leads long-run beliefs and behavior to vary only coarsely with the state, and we provide systematic predictions for how the nature of misperception shapes these coarse long-run outcomes. Third, we show that how fragile information aggregation is to misperception depends on the richness of agents' payoff-relevant uncertainty; a design implication is that information aggregation can be improved by simplifying agents' learning environment. The key feature behind our findings is that agents' belief-updating becomes "decoupled" from the true state over time. We point to other environments where this feature is present and leads to similar fragility results.
Competing for Talent
In many labor markets, e.g., for lawyers, consultants, MBA students, and professional sports players, workers are offered and sign long-term contracts even though waiting could reveal significant information about their capabilities. This phenomenon is called unraveling. We examine the link between wage bargaining and unraveling. Two firms, an incumbent and an entrant, compete to hire a worker of unknown talent. Informational frictions prevent the incumbent from always observing the entrant's arrival, inducing unraveling in all equilibria. We analyze the extent of unraveling, surplus shares, the average talent of employed workers, and the distribution of wages within and across firms.