8,686 research outputs found

    Imperfect knowledge, inflation expectations, and monetary policy

    Get PDF
    This paper investigates the role that imperfect knowledge about the structure of the economy plays in the formation of expectations, macroeconomic dynamics, and the efficient formulation of monetary policy. Economic agents rely on an adaptive learning technology to form expectations and to update continuously their beliefs regarding the dynamic structure of the economy based on incoming data. The process of perpetual learning introduces an additional layer of dynamic interaction between monetary policy and economic outcomes. We find that policies that would be efficient under rational expectations can perform poorly when knowledge is imperfect. In particular, policies that fail to maintain tight control over inflation are prone to episodes in which the public's expectations of inflation become uncoupled from the policy objective and stagflation results, in a pattern similar to that experienced in the United States during the 1970s. Our results highlight the value of effective communication of a central bank's inflation objective and of continued vigilance against inflation in anchoring inflation expectations and fostering macroeconomic stability.
    July 2003
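The "perpetual learning" mechanism described in this abstract is commonly modelled in the literature as constant-gain recursive least squares. A minimal Python sketch of one belief update (the specific regression, gain value, and law of motion below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def constant_gain_update(beta, R, x, y, gain=0.02):
    """One step of constant-gain ('perpetual') recursive least squares.

    beta: agents' current coefficient estimates (beliefs)
    R:    current second-moment matrix estimate
    x:    regressor vector for this period
    y:    realized outcome (e.g. inflation)
    A constant gain discounts old data, so learning never stops.
    """
    R = R + gain * (np.outer(x, x) - R)                          # moment matrix
    beta = beta + gain * np.linalg.solve(R, x) * (y - x @ beta)  # belief update
    return beta, R

# Agents regress inflation on a constant and lagged inflation; the "true"
# law of motion (2 + 0.5 * lag + noise) is an illustrative assumption.
beta, R = np.zeros(2), np.eye(2)
rng = np.random.default_rng(0)
pi_prev = 2.0
for _ in range(500):
    x = np.array([1.0, pi_prev])
    pi = 2.0 + 0.5 * pi_prev + rng.normal(0.0, 0.1)
    beta, R = constant_gain_update(beta, R, x, pi)
    pi_prev = pi
```

Because old observations are discounted at a constant rate, the estimated coefficients keep drifting with incoming data rather than converging, which is what allows expectations to become uncoupled from the policy objective.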


    Learning Opposites with Evolving Rules

    Full text link
    The idea of opposition-based learning was introduced 10 years ago. Since then a noteworthy group of researchers has used notions of oppositeness to improve existing optimization and learning algorithms. Among others, evolutionary algorithms, reinforcement agents, and neural networks have reportedly been extended into opposition-based versions to become faster and/or more accurate. However, most works still use a simple notion of opposites, namely linear (or type-I) opposition, which for each x ∈ [a, b] assigns its opposite as x̆_I = a + b − x. This, of course, is a very naive estimate of the actual or true (non-linear) opposite x̆_II, which has been called the type-II opposite in the literature. In the absence of any knowledge about a function y = f(x) that we need to approximate, there seems to be no alternative to the naivety of type-I opposition if one intends to utilize oppositional concepts. But the question is: if we can obtain some accuracy increase and time savings by using the naive opposite estimate x̆_I, as reported throughout the literature, what would we gain, in terms of even higher accuracy and further reduction in computational complexity, if we generated and employed true opposites? This work introduces an approach to approximating type-II opposites using evolving fuzzy rules after first performing opposition mining. We show with multiple examples that learning true opposites is possible when we mine the opposites from the training data to subsequently approximate x̆_II = f(x, y).
    Comment: Accepted for publication in The 2015 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2015), August 2-5, 2015, Istanbul, Turkey
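The type-I opposite has a one-line implementation, while a type-II opposite must be estimated from data. A minimal Python sketch of both, using a nearest-neighbour lookup as a simplified stand-in for the paper's evolving fuzzy rules (the test function and the mining scheme below are illustrative assumptions):

```python
import numpy as np

def type1_opposite(x, a, b):
    """Linear (type-I) opposite on [a, b]: x_I = a + b - x."""
    return a + b - x

def mine_type2_opposites(xs, ys):
    """Opposition mining: for each sample, find the sample whose output is
    closest to the reflection of its own output over [y_min, y_max]."""
    y_min, y_max = ys.min(), ys.max()
    opposites = []
    for y in ys:
        target = y_min + y_max - y               # opposite output value
        j = int(np.argmin(np.abs(ys - target)))  # sample realizing it
        opposites.append(xs[j])
    return np.array(opposites)

# Example: y = f(x) = x**2 on [0, 2]. The true opposite of x is the input
# whose output is the reflection of f(x) over the output range.
a, b = 0.0, 2.0
xs = np.linspace(a, b, 201)
ys = xs ** 2
x2 = mine_type2_opposites(xs, ys)

i = int(np.argmin(np.abs(xs - 0.5)))
print(type1_opposite(0.5, a, b))  # 1.5
print(x2[i])                      # close to sqrt(3.75) ≈ 1.94, not 1.5
```

The gap between 1.5 and roughly 1.94 is exactly the difference between the naive linear opposite and the function-dependent type-II opposite that the paper's fuzzy rules are trained to approximate.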

    Scaling Configuration of Energy Harvesting Sensors with Reinforcement Learning

    Full text link
    With the advent of the Internet of Things (IoT), an increasing number of energy harvesting methods are being used to supplement or supplant battery based sensors. Energy harvesting sensors need to be configured according to the application, hardware, and environmental conditions to maximize their usefulness. As of today, the configuration of sensors is either manual or heuristics based, requiring valuable domain expertise. Reinforcement learning (RL) is a promising approach to automate configuration and efficiently scale IoT deployments, but it is not yet adopted in practice. We propose solutions to bridge this gap: reduce the training phase of RL so that nodes are operational within a short time after deployment and reduce the computational requirements to scale to large deployments. We focus on configuration of the sampling rate of indoor solar panel based energy harvesting sensors. We created a simulator based on 3 months of data collected from 5 sensor nodes subject to different lighting conditions. Our simulation results show that RL can effectively learn energy availability patterns and configure the sampling rate of the sensor nodes to maximize the sensing data while ensuring that energy storage is not depleted. The nodes can be operational within the first day by using our methods. We show that it is possible to reduce the number of RL policies by using a single policy for nodes that share similar lighting conditions.
    Comment: 7 pages, 5 figures
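The core idea, an agent choosing a sampling rate to maximize sensed data without depleting storage, can be sketched with tabular Q-learning. Everything below (state discretization, energy numbers, reward, light profile) is an illustrative assumption, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

RATES = [1, 5, 10]        # candidate sampling rates (actions), samples/hour
E_PER_SAMPLE = 0.5        # assumed energy cost per sample
CAPACITY = 50.0           # assumed storage capacity

def harvest(hour):
    """Toy indoor-light profile: energy arrives mostly in 'office hours'."""
    return 4.0 if 8 <= hour < 18 else 0.2

# Q[hour of day, battery bucket (0-10), action]
q = np.zeros((24, 11, len(RATES)))
alpha, gamma, eps = 0.1, 0.95, 0.1

battery = CAPACITY / 2
for step in range(20000):
    hour = step % 24
    bucket = int(battery / CAPACITY * 10)
    if rng.random() < eps:                        # epsilon-greedy exploration
        a = int(rng.integers(len(RATES)))
    else:
        a = int(np.argmax(q[hour, bucket]))
    cost = RATES[a] * E_PER_SAMPLE
    depleted = cost > battery
    # Reward: more samples is better, but draining the storage is penalized.
    reward = -10.0 if depleted else float(RATES[a])
    battery = min(CAPACITY, max(0.0, battery - cost) + harvest(hour))
    nh, nb = (hour + 1) % 24, int(battery / CAPACITY * 10)
    q[hour, bucket, a] += alpha * (reward + gamma * q[nh, nb].max()
                                   - q[hour, bucket, a])
```

A single table like this could, as the abstract suggests, be shared across nodes that see similar lighting conditions, which is one way the number of policies can be reduced.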

    Why and how people of limited intelligence become calendrical calculators

    Get PDF
    Calendrical calculation is the rare talent of naming the days of the week for dates in the past and future. Calendrical savants are people with low measured intelligence who have this talent. This paper reviews evidence and speculation about why people become calendrical savants and how they answer date questions. Most savants are known to have intensively studied the calendar and show superior memory for calendrical information. As a result, they may answer date questions either from recalling calendars or by using strategies that exploit calendrical regularities. While people of average or superior intelligence may become calendrical calculators through internalising formulae, the arithmetical demands of the formulae make them unlikely as bases for the talents of calendrical savants. We attempt to identify the methods used by a sample of 10 savants. None rely on an internalised formula. Some use strategies based on calendrical regularities, probably in conjunction with memory for a range of years. For the rest, a decision between use of regularities and recall of calendars cannot be made.