    Learning in hidden Markov models with bounded memory

    This paper explores the role of memory in decision-making in dynamic environments. We examine the inference problem faced by an agent with bounded memory who receives a sequence of signals from a hidden Markov model. We show that the optimal symmetric memory rule may be deterministic. This result contrasts sharply with the findings of Hellman and Cover (1970) and Wilson (2004) and resolves, in the context of a hidden Markov model, an open question posed by Kalai and Solan (2003).
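
    To make the setting concrete, here is a minimal Python sketch (not the paper's construction): a two-state hidden Markov chain emits noisy binary signals, and an agent with only a few memory states updates them with a deterministic, symmetric rule and guesses the hidden state from memory alone. The transition and signal probabilities and the saturating-counter rule are illustrative assumptions.

    ```python
    # Minimal sketch of a bounded-memory agent tracking a two-state HMM.
    # All parameter values below are illustrative assumptions.
    import random

    P_STAY = 0.95        # prob. the hidden state persists each period (assumed)
    P_CORRECT = 0.8      # prob. the signal matches the hidden state (assumed)
    T = 100_000

    def step_state(s):
        return s if random.random() < P_STAY else 1 - s

    def draw_signal(s):
        return s if random.random() < P_CORRECT else 1 - s

    # Deterministic, symmetric 4-state memory rule: memory in {0,1,2,3} acts as
    # a saturating counter; signal 1 moves it up, signal 0 moves it down.
    def update_memory(m, signal):
        return min(m + 1, 3) if signal == 1 else max(m - 1, 0)

    def guess(m):
        return 1 if m >= 2 else 0   # decode the hidden state from memory alone

    state, memory, correct = 0, 0, 0
    for _ in range(T):
        state = step_state(state)
        memory = update_memory(memory, draw_signal(state))
        correct += (guess(memory) == state)

    print(f"long-run accuracy of the 4-state deterministic rule: {correct / T:.3f}")
    ```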

    Dynamic Information Design Under Constrained Communication Rules

    An information designer wishes to persuade agents to invest in a project of unknown quality. To do so, she must induce investment and collect feedback from these investments. Motivated by data regulations and simplicity concerns, our designer faces communication constraints. These constraints hinder her without benefiting the agents: they impose an upper bound on the induced belief spread, limiting persuasion. Nevertheless, two-rating systems (direct recommendations) are the optimal design when experimentation is needed to generate information, and they approximate the designer's first-best payoff for specific feedback structures. When the designer has altruistic motives, constrained rules significantly decrease welfare.
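
    As a stripped-down illustration of the "belief spread" that the communication constraints cap, the Python sketch below computes the pair of posterior beliefs induced by a binary ("two-rating") recommendation rule in a static, one-shot version of the problem. The prior and recommendation probabilities are assumed values; the paper's dynamic experimentation and feedback structures are not modeled here.

    ```python
    # Posterior beliefs induced by a two-rating (invest / pass) recommendation rule.
    # Prior and recommendation probabilities are illustrative assumptions.
    prior_good = 0.3          # prior probability the project is of high quality
    p_rec_given_good = 0.9    # prob. of an "invest" rating when quality is high
    p_rec_given_bad = 0.2     # prob. of an "invest" rating when quality is low

    # Total probability of an "invest" rating.
    p_rec = prior_good * p_rec_given_good + (1 - prior_good) * p_rec_given_bad

    # Posterior beliefs induced by each rating (Bayes' rule).
    posterior_after_invest = prior_good * p_rec_given_good / p_rec
    posterior_after_pass = prior_good * (1 - p_rec_given_good) / (1 - p_rec)

    print(f"belief after 'invest' rating: {posterior_after_invest:.3f}")
    print(f"belief after 'pass' rating:   {posterior_after_pass:.3f}")
    print(f"induced belief spread:        {posterior_after_invest - posterior_after_pass:.3f}")
    ```

    A constraint of the kind the paper studies would bound how far these two posteriors can be pushed apart, which is exactly what limits persuasion.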

    The Value of (Bounded) Memory in a Changing World

    This paper explores the value of memory in decision-making in dynamic environments. We examine the decision problem faced by an agent with bounded memory who receives a sequence of signals from a partially observable Markov decision process. We characterize environments in which the optimal memory consists of only two states. In addition, we show that the marginal value of additional memory states need not be positive, and may even be negative in the absence of free disposal.
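
    Reusing the simulation idea from the earlier sketch, the Python snippet below shows one way to put a number on the marginal value of extra memory states: simulate an assumed two-state changing world and compare the long-run payoff of saturating-counter rules with k memory states. It is an illustrative comparison under assumed parameters, not the paper's characterization (which shows this marginal value need not be positive).

    ```python
    # Comparing long-run payoffs of k-state memory rules in a changing world.
    # Environment parameters and the counter rule are illustrative assumptions.
    import random

    P_STAY, P_CORRECT, T = 0.9, 0.75, 200_000   # assumed environment parameters

    def average_payoff(k, seed=0):
        """Long-run fraction of periods in which a k-state counter rule matches the state."""
        rng = random.Random(seed)
        state, memory, payoff = 0, 0, 0
        for _ in range(T):
            state = state if rng.random() < P_STAY else 1 - state
            signal = state if rng.random() < P_CORRECT else 1 - state
            memory = min(memory + 1, k - 1) if signal == 1 else max(memory - 1, 0)
            action = 1 if memory >= k / 2 else 0   # act on the memory state alone
            payoff += (action == state)
        return payoff / T

    for k in (2, 3, 4, 6):
        print(f"{k}-state memory rule: average payoff {average_payoff(k):.3f}")
    ```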
