184 research outputs found

    Huge electron-hole exchange interaction in aluminum nitride

    Optical spectroscopy is performed on c-plane homoepitaxial aluminum nitride (AlN) films. The temperature dependence of the polarization-resolved photoluminescence spectra reveals the exciton fine structure. The experimental results demonstrate that the electron-hole exchange interaction energy in AlN is j = 6.8 meV, the largest value among typical III-V and II-VI compound semiconductors. We propose the effective interatomic distance as a criterion for the electron-hole exchange interaction energy, revealing a universal rule. This study should encourage potential applications of excitonic optoelectronic devices in nitride semiconductors, similar to those based on II-VI compound semiconductors.

    Dispersed Behavior and Perceptions in Assortative Societies

    We formulate a model of social interactions and misinferences by agents who neglect assortativity in their society, mistakenly believing that they interact with a representative sample of the population. A key component of our approach is the interplay between this bias and agents' strategic incentives. We highlight a mechanism through which assortativity neglect, combined with strategic complementarities in agents' behavior, drives up action dispersion in society (e.g., socioeconomic disparities in education investment). We also suggest that the combination of assortativity neglect and strategic incentives may be relevant in understanding empirically documented misperceptions of income inequality and political attitude polarization.

    Belief Convergence under Misspecified Learning: A Martingale Approach

    We present an approach to analyze learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. We introduce a novel "prediction accuracy" order over subjective models, and observe that this makes it possible to partially restore standard martingale convergence arguments that apply under correctly specified learning. Based on this, we derive general conditions to determine when beliefs in a given environment converge to some long-run belief either locally or globally (i.e., from some or all initial beliefs). We show that these conditions can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to analyze environments where learning is "slow," such as costly information acquisition and sequential social learning. In such environments, we illustrate that even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can generate extreme failures of learning.
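    The classical benchmark this paper builds on can be illustrated concretely: under passive i.i.d. learning, a misspecified Bayesian's belief concentrates on the subjective model closest to the truth in Kullback-Leibler divergence (Berk's result), even when no model is correct. A minimal sketch in Python, with hypothetical parameters not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# True data-generating process: i.i.d. Bernoulli signals with p_true.
p_true = 0.5
signals = rng.random(5000) < p_true

# The agent's (misspecified) subjective models: neither matches p_true.
models = {"A": 0.7, "B": 0.4}

# Bayesian updating in log space, starting from a uniform prior.
log_post = {m: 0.0 for m in models}
for s in signals:
    for m, p in models.items():
        log_post[m] += np.log(p if s else 1 - p)

# Normalize to posterior probabilities.
mx = max(log_post.values())
weights = {m: np.exp(lp - mx) for m, lp in log_post.items()}
total = sum(weights.values())
posterior = {m: w / total for m, w in weights.items()}

print(posterior)
```

    Here KL(0.5 ‖ 0.4) ≈ 0.020 nats is smaller than KL(0.5 ‖ 0.7) ≈ 0.087 nats, so the posterior concentrates on model B even though neither model matches the truth.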

    Welfare Comparisons for Biased Learning

    We study robust welfare comparisons of learning biases, i.e., deviations from correct Bayesian updating. Given a true signal distribution, we deem one bias more harmful than another if it yields lower objective expected payoffs in all decision problems. We characterize this ranking in static (one signal) and dynamic (many signals) settings. While the static characterization compares posteriors signal-by-signal, the dynamic characterization employs an "efficiency index" quantifying the speed of belief convergence. Our results yield welfare-founded quantifications of the severity of well-documented biases. Moreover, the static and dynamic rankings can conflict, and "smaller" biases can be worse in dynamic settings.

    Dispersed Behavior and Perceptions in Assortative Societies

    Motivated by the fact that people's perceptions of their societies are routinely incorrect, we study the possibility and implications of misperception in social interactions. We focus on coordination games with assortative interactions, where agents with higher types (e.g., wealth, political attitudes) are more likely than lower types to interact with other high types. Assortativity creates scope for misperception, because what agents observe in their local interactions need not be representative of society as a whole. To model this, we define a tractable solution concept, "local perception equilibrium" (LPE), that describes possible behavior and perceptions when agents' beliefs are derived only from their local interactions. We show that there is a unique form of misperception that can persist in any environment: this is assortativity neglect, where all agents believe the people they interact with to be a representative sample of society as a whole. Relative to the case with correct perceptions, assortativity neglect generates two mutually reinforcing departures: a "false consensus effect," whereby agents' perceptions of average characteristics in the population are increasing in their own type; and more "dispersed" behavior in society, which adversely affects welfare due to increased miscoordination. Finally, we propose a comparative notion of when one society is more assortative than another and show that more assortative societies are characterized precisely by greater action dispersion and a more severe false consensus effect; as a result, greater assortativity has an ambiguous effect on welfare.
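    The false consensus mechanism lends itself to a quick simulation. In the sketch below (all parameters hypothetical, not from the paper), each agent meets a type-biased sample but treats it as representative; the perceived population mean then increases in the agent's own type, with slope governed by the assortativity weight:

```python
import numpy as np

rng = np.random.default_rng(1)

n_agents, n_meetings = 1000, 200
types = rng.normal(0.0, 1.0, n_agents)   # e.g., wealth

# Assortative matching: with probability a, meet someone of a similar
# type (own type plus small noise); otherwise meet a uniform random agent.
a = 0.6
perceived_mean = np.empty(n_agents)
for i in range(n_agents):
    similar = types[i] + rng.normal(0.0, 0.3, n_meetings)
    random_ = rng.choice(types, n_meetings)
    mask = rng.random(n_meetings) < a
    sample = np.where(mask, similar, random_)
    # Assortativity neglect: treat the local sample as representative.
    perceived_mean[i] = sample.mean()

# False consensus effect: perceived population mean rises in own type,
# with slope approximately equal to the assortativity weight a.
slope = np.polyfit(types, perceived_mean, 1)[0]
print(f"slope of perceived mean in own type: {slope:.2f}")
```

    The true population mean is zero for everyone, so any positive slope is pure misperception; it also suggests why a more assortative society (larger a) exhibits a more severe false consensus effect.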

    Stability and Robustness in Misspecified Learning Models

    We present an approach to analyze learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. Our main results provide general criteria to determine, without the need to explicitly analyze learning dynamics, when beliefs in a given environment converge to some long-run belief either locally or globally (i.e., from some or all initial beliefs). The key ingredient underlying these criteria is a novel "prediction accuracy" ordering over subjective models that refines existing comparisons based on Kullback-Leibler divergence. We show that these criteria can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to identify and analyze a natural class of environments, including costly information acquisition and sequential social learning, where, unlike in most settings the literature has focused on so far, long-run beliefs can fail to be robust to the details of the true data-generating process or agents' perception thereof. In particular, even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can lead to extreme failures of learning.

    Misinterpreting Others and the Fragility of Social Learning

    We study to what extent information aggregation in social learning environments is robust to slight misperceptions of others' characteristics (e.g., tastes or risk attitudes). We consider a population of agents who obtain information about the state of the world both from initial private signals and by observing a random sample of other agents' actions over time, where agents' actions depend not only on their beliefs about the state but also on their idiosyncratic types. When agents are correct about the type distribution in the population, they learn the true state in the long run. By contrast, our first main result shows that even arbitrarily small amounts of misperception can generate extreme breakdowns of information aggregation, where in the long run all agents incorrectly assign probability 1 to some fixed state of the world, regardless of the true underlying state. This stark discontinuous departure from the correctly specified benchmark motivates independent analysis of information aggregation under misperception. Our second main result shows that any misperception of the type distribution gives rise to a specific failure of information aggregation where agents' long-run beliefs and behavior vary only coarsely with the state, and we provide systematic predictions for how the nature of misperception shapes these coarse long-run outcomes. Finally, we show that how sensitive information aggregation is to misperception depends on how rich agents' payoff-relevant uncertainty is. A design implication is that information aggregation can be improved through interventions aimed at simplifying the agents' learning environment.

    Misinterpreting Others and the Fragility of Social Learning

    We exhibit a natural environment, social learning among heterogeneous agents, where even slight misperceptions can have a large negative impact on long-run learning outcomes. We consider a population of agents who obtain information about the state of the world both from initial private signals and by observing a random sample of other agents' actions over time, where agents' actions depend not only on their beliefs about the state but also on their idiosyncratic types (e.g., tastes or risk attitudes). When agents are correct about the type distribution in the population, they learn the true state in the long run. By contrast, we show, first, that even arbitrarily small amounts of misperception about the type distribution can generate extreme breakdowns of information aggregation, where in the long run all agents incorrectly assign probability 1 to some fixed state of the world, regardless of the true underlying state. Second, any misperception of the type distribution leads long-run beliefs and behavior to vary only coarsely with the state, and we provide systematic predictions for how the nature of misperception shapes these coarse long-run outcomes. Third, we show that how fragile information aggregation is to misperception depends on the richness of agents' payoff-relevant uncertainty; a design implication is that information aggregation can be improved by simplifying agents' learning environment. The key feature behind our findings is that agents' belief-updating becomes "decoupled" from the true state over time. We point to other environments where this feature is present and leads to similar fragility results.

    Dispersed Behavior and Perceptions in Assortative Societies

    We take an equilibrium-based approach to study the interplay between behavior and misperceptions in coordination games with assortative interactions. Our focus is assortativity neglect, where agents fail to take into account the extent of assortativity in society. We show, first, that assortativity neglect amplifies action dispersion, both in fixed societies and by exacerbating the effect of social changes. Second, unlike other misperceptions, assortativity neglect is a misperception that agents can rationalize in any true environment. Finally, assortativity neglect provides a lens through which to understand how empirically documented misperceptions about distributions of population characteristics (e.g., income inequality) vary across societies.

    Learning Efficiency of Multi-Agent Information Structures

    We study settings in which, prior to playing an incomplete information game, players observe many draws of private signals about the state from some information structure. Signals are i.i.d. across draws, but may display arbitrary correlation across players. For each information structure, we define a simple learning efficiency index, which considers only the statistical distance between the worst-informed player's marginal signal distributions in different states. We show, first, that this index characterizes the speed of common learning (Cripps, Ely, Mailath, and Samuelson, 2008): in particular, the speed at which players achieve approximate common knowledge of the state coincides with the slowest player's speed of individual learning, and does not depend on the correlation across players' signals. Second, we build on this characterization to provide a ranking over information structures: we show that, with sufficiently many signal draws, information structures with a higher learning efficiency index lead to better equilibrium outcomes, robustly for a rich class of games and objective functions. We discuss implications of our results for constrained information design in games and for the question of when information structures are complements vs. substitutes.
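    The abstract does not spell out which statistical distance the index uses, but its structure can be sketched. In the toy calculation below, Chernoff information, which governs the exponential rate at which i.i.d. draws separate two hypotheses, serves as a hypothetical stand-in for that distance (it is not necessarily the paper's definition); the index then depends only on the worst-informed player's marginals, not on cross-player correlation:

```python
import numpy as np

def chernoff_information(p, q):
    """Chernoff information between two finite-support distributions:
    max over s in (0,1) of -log(sum_x p(x)^s * q(x)^(1-s)),
    approximated here on a grid of s values."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    s = np.linspace(0.01, 0.99, 99)
    mix = (p[None, :] ** s[:, None]) * (q[None, :] ** (1 - s[:, None]))
    return (-np.log(mix.sum(axis=1))).max()

# Marginal signal distributions of each player under states 0 and 1
# (a hypothetical two-player, binary-signal information structure).
marginals = {
    "player1": ([0.8, 0.2], [0.2, 0.8]),   # well informed
    "player2": ([0.6, 0.4], [0.4, 0.6]),   # worse informed
}

# Index: the distance for the worst-informed player only; correlation
# across players' signals does not enter.
index = min(chernoff_information(p0, p1) for p0, p1 in marginals.values())
print(f"learning efficiency index: {index:.3f}")
```

    In this example the index is pinned down by player2's weak marginals (about 0.020 nats per draw), matching the characterization that common learning proceeds at the slowest player's individual learning rate.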