103 research outputs found

    Gradual adjustment and equilibrium uniqueness under noisy monitoring

    We study the implications of flexible adjustment in strategic interactions using a class of finite-horizon models in continuous time. Players take costly actions to affect the evolution of state variables that are commonly observable and perturbed by Brownian noise. The values of these state variables influence players' terminal payoffs at the deadline, as well as their flow payoffs. In contrast to the static case, the equilibrium is unique under a general class of terminal payoff functions. Our characterization of the equilibrium builds on recent developments in the theory of backward stochastic differential equations (BSDEs). We use this tool to analyze applications, including team production, hold-up problems, and dynamic contests. In a team production model, the unique equilibrium selects an efficient outcome when frictions vanish.

    A Study on PWM Bypass Capacity Control of Scroll Compressors

    A pulse width modulation (PWM) bypass capacity control technique for air conditioners has been developed to achieve high scroll compressor efficiency over a wide capacity range, and we investigated the technique's fundamental characteristics in this study. With this capacity control technique, the refrigerant flow rate is controlled by periodically switching between a full-load mode, in which the refrigerant is compressed and discharged out of the compressor, and an unload mode, in which the refrigerant is not compressed so that no compression power is needed. In this study, we measured the pressure and input power of a compressor using the control technique to clarify the technique's dynamic characteristics. We also developed a numerical simulation model to predict the dynamic behavior of the refrigerant. The calculated pressures were found to be in good agreement with the measured pressures and were used to estimate the relation between the PWM period and the efficiency or the discharge flow rate. Since the PWM period could not be determined from the demanded load capacity alone, the estimations were used to determine it. Using this period, we carried out performance tests of a compressor with PWM bypass capacity control. The results showed that the capacity reached 30% at the lower limit of the rotational speed and that the calculated efficiencies agreed well with the experimental findings.
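    The duty-ratio arithmetic behind this kind of PWM capacity control can be sketched in a few lines. This is an illustrative simplification, not the paper's simulation model, and the flow-rate value below is a made-up placeholder:

    ```python
    # Sketch of PWM bypass capacity control: within each PWM period the
    # compressor alternates between a full-load mode (refrigerant compressed
    # and discharged) and an unload mode (bypassed, so flow is ~0), so the
    # time-averaged flow rate is set by the duty ratio.

    def average_flow_rate(full_load_flow: float, duty_ratio: float) -> float:
        """Average refrigerant flow over one PWM period.

        full_load_flow: flow rate in full-load mode (placeholder units, kg/s)
        duty_ratio: fraction of the period spent in full-load mode (0..1)
        """
        if not 0.0 <= duty_ratio <= 1.0:
            raise ValueError("duty_ratio must be in [0, 1]")
        return full_load_flow * duty_ratio

    # e.g., running 30% of each period in full-load mode:
    print(round(average_flow_rate(0.05, 0.3), 6))  # 0.015
    ```

    The point of the abstract's estimation step is that the PWM period itself (how fast the modes alternate) is a free parameter beyond this duty ratio, and must be chosen with the efficiency/flow-rate trade-off in mind.
    
    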

    Dynamic Random Utility

    Under dynamic random utility, an agent (or population of agents) solves a dynamic decision problem subject to evolving private information. We analyze the fully general and non-parametric model, axiomatically characterizing the implied dynamic stochastic choice behavior. A key new feature relative to static or i.i.d. versions of the model is that when private information displays serial correlation, choices appear history dependent: different sequences of past choices reflect different private information of the agent, and hence typically lead to different distributions of current choices. Our axiomatization imposes discipline on the form of history dependence that can arise under arbitrary serial correlation. Dynamic stochastic choice data lets us distinguish central models that coincide in static domains, in particular private information in the form of utility shocks vs. learning, and to study inherently dynamic phenomena such as choice persistence. We relate our model to specifications of utility shocks widely used in empirical work, highlighting new modeling tradeoffs in the dynamic discrete choice literature. Finally, we extend our characterization to allow past consumption to directly affect the agent's utility process, accommodating models of habit formation and experimentation.

    Welfare Comparisons for Biased Learning

    We study robust welfare comparisons of learning biases, i.e., deviations from correct Bayesian updating. Given a true signal distribution, we deem one bias more harmful than another if it yields lower objective expected payoffs in all decision problems. We characterize this ranking in static (one signal) and dynamic (many signals) settings. While the static characterization compares posteriors signal-by-signal, the dynamic characterization employs an "efficiency index" quantifying the speed of belief convergence. Our results yield welfare-founded quantifications of the severity of well-documented biases. Moreover, the static and dynamic rankings can conflict, and "smaller" biases can be worse in dynamic settings.

    Dispersed Behavior and Perceptions in Assortative Societies

    Motivated by the fact that people's perceptions of their societies are routinely incorrect, we study the possibility and implications of misperception in social interactions. We focus on coordination games with assortative interactions, where agents with higher types (e.g., wealth, political attitudes) are more likely than lower types to interact with other high types. Assortativity creates scope for misperception, because what agents observe in their local interactions need not be representative of society as a whole. To model this, we define a tractable solution concept, "local perception equilibrium" (LPE), that describes possible behavior and perceptions when agents' beliefs are derived only from their local interactions. We show that there is a unique form of misperception that can persist in any environment: This is assortativity neglect, where all agents believe the people they interact with to be a representative sample of society as a whole. Relative to the case with correct perceptions, assortativity neglect generates two mutually reinforcing departures: A "false consensus effect," whereby agents' perceptions of average characteristics in the population are increasing in their own type; and more "dispersed" behavior in society, which adversely affects welfare due to increased miscoordination. Finally, we propose a comparative notion of when one society is more assortative than another and show that more assortative societies are characterized precisely by greater action dispersion and a more severe false consensus effect, and as a result, greater assortativity has an ambiguous effect on welfare.

    A Random Ensemble of Encrypted models for Enhancing Robustness against Adversarial Examples

    Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs). In addition, AEs have adversarial transferability, which means AEs generated for a source model can fool another black-box model (target model) with a non-trivial probability. In previous studies, it was confirmed that the vision transformer (ViT) is more robust against adversarial transferability than convolutional neural network (CNN) models such as ConvMixer, and moreover, an encrypted ViT is more robust than a ViT without any encryption. In this article, we propose a random ensemble of encrypted ViT models to achieve much more robust models. In experiments, the proposed scheme is verified to be more robust against not only black-box attacks but also white-box ones than conventional methods.
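    The ensemble-selection idea can be sketched abstractly: at each query, one model is drawn at random, so an attacker crafting adversarial examples cannot target a fixed model. This is only a structural sketch under assumed interfaces; the models below are stand-in functions, not encrypted ViTs:

    ```python
    import random

    # Minimal sketch of a random ensemble: each prediction is served by a
    # model drawn uniformly at random from the pool. In the paper's setting
    # the pool members would be differently-encrypted ViT models; here they
    # are hypothetical placeholder callables.

    class RandomEnsemble:
        def __init__(self, models):
            self.models = list(models)

        def predict(self, x):
            model = random.choice(self.models)  # fresh random pick per query
            return model(x)

    # usage with stand-in "models" (plain functions for illustration):
    ensemble = RandomEnsemble([lambda x: x + 1, lambda x: x + 1])
    print(ensemble.predict(1))  # 2
    ```

    The randomness is the defense: gradient-based white-box attacks and transfer-based black-box attacks are both computed against some fixed model, which need not be the one that answers the next query.
    
    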

    Dispersed Behavior and Perceptions in Assortative Societies

    We formulate a model of social interactions and misinferences by agents who neglect assortativity in their society, mistakenly believing that they interact with a representative sample of the population. A key component of our approach is the interplay between this bias and agents' strategic incentives. We highlight a mechanism through which assortativity neglect, combined with strategic complementarities in agents' behavior, drives up action dispersion in society (e.g., socioeconomic disparities in education investment). We also suggest that the combination of assortativity neglect and strategic incentives may be relevant in understanding empirically documented misperceptions of income inequality and political attitude polarization.

    Belief Convergence under Misspecified Learning: A Martingale Approach

    We present an approach to analyze learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. We introduce a novel "prediction accuracy" order over subjective models, and observe that this makes it possible to partially restore standard martingale convergence arguments that apply under correctly specified learning. Based on this, we derive general conditions to determine when beliefs in a given environment converge to some long-run belief either locally or globally (i.e., from some or all initial beliefs). We show that these conditions can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to analyze environments where learning is "slow," such as costly information acquisition and sequential social learning. In such environments, we illustrate that even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can generate extreme failures of learning.

    Stability and Robustness in Misspecified Learning Models

    We present an approach to analyze learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. Our main results provide general criteria to determine, without the need to explicitly analyze learning dynamics, when beliefs in a given environment converge to some long-run belief either locally or globally (i.e., from some or all initial beliefs). The key ingredient underlying these criteria is a novel "prediction accuracy" ordering over subjective models that refines existing comparisons based on Kullback-Leibler divergence. We show that these criteria can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to identify and analyze a natural class of environments, including costly information acquisition and sequential social learning, where, unlike in most settings the literature has focused on so far, long-run beliefs can fail to be robust to the details of the true data generating process or agents' perception thereof. In particular, even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can lead to extreme failures of learning.
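    The Kullback-Leibler comparison that the paper's "prediction accuracy" ordering refines can be made concrete with a toy computation. This is only an illustration of the baseline KL comparison, not of the paper's ordering itself; the distributions are made-up examples:

    ```python
    import math

    # Compare two subjective models by the KL divergence of their predicted
    # signal distributions q from the true distribution p: a model with
    # smaller KL(p || q) fits the observed signals better in the long run.

    def kl_divergence(p, q):
        """KL(p || q) for finite distributions given as probability lists."""
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    true_dist = [0.5, 0.5]   # true signal distribution
    model_a   = [0.6, 0.4]   # mildly misspecified subjective model
    model_b   = [0.9, 0.1]   # severely misspecified subjective model

    # model_a is the better fit under the KL comparison:
    print(kl_divergence(true_dist, model_a) < kl_divergence(true_dist, model_b))  # True
    ```

    The abstract's point is that this KL comparison alone is too coarse for some convergence questions, which is what motivates the finer prediction-accuracy ordering.
    
    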