    Macroeconomic Implications of Near Rational Behavior: an Application to the Italian Phillips Curve

    New-Keynesian macroeconomic models typically conclude that long-run unemployment gravitates around the NAIRU, regardless of the nominal inflation rate. By contrast, the model of Akerlof, Dickens and Perry (2000) (ADP) predicts that excessively low inflation may result in a situation where unemployment is high relative to the social optimum. This paper investigates whether ADP-type short- and long-run Phillips curves may suit the Italian economy. Firstly, we estimated a short-run non-accelerationist Phillips curve (i.e. one where the coefficient on expected inflation depends on inflation and is generally less than unity) on Italian post-war data. Based on these results, we then simulated the long-run Phillips curve and ran robustness checks using a rival cointegration approach. We have two main results. First, the Italian short-run Phillips curve is indeed non-accelerationist. Second, our estimates indicate that in Italy a long-run trade-off between inflation and unemployment cannot be ruled out at low and moderate inflation rates.
    Near-rationality; Non-accelerationist Phillips Curve; Natural rate of unemployment
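    The kind of estimation the abstract describes can be sketched in miniature. The snippet below is illustrative only: it fits an assumed linear short-run Phillips curve (pi = a + b*pi_e + c*u) by least squares on synthetic data, where the coefficient b < 1 on expected inflation mimics the non-accelerationist case; the paper's actual ADP specification lets that coefficient vary with inflation, and all values here are made up.

```python
import numpy as np

# Illustrative sketch: estimate a short-run Phillips curve of the assumed form
#   pi_t = a + b * pi_e_t + c * u_t + noise
# on synthetic data. A true b = 0.8 < 1 mimics a non-accelerationist
# coefficient on expected inflation; the recovered estimate stays below unity.
rng = np.random.default_rng(0)
n = 500
pi_e = rng.normal(5.0, 2.0, n)   # expected inflation (percent), synthetic
u = rng.normal(8.0, 1.0, n)      # unemployment rate (percent), synthetic
pi = 1.0 + 0.8 * pi_e - 0.5 * u + rng.normal(0.0, 0.1, n)

X = np.column_stack([np.ones(n), pi_e, u])
beta, *_ = np.linalg.lstsq(X, pi, rcond=None)
a_hat, b_hat, c_hat = beta
print(f"coefficient on expected inflation: {b_hat:.3f}")  # near 0.8, below unity
```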

    Money Illusion and Rational Expectations: New Evidence from Well Known Survey Data

    This paper provides further evidence in favor of less than fully rational expectations by making use of two instruments, one quite well known and the other more novel: survey data on inflation expectations and Smooth Transition Error Correction Models (STECMs). We use the so-called ‘probabilistic approach’ to derive a quantitative measure of expected inflation from qualitative survey data for France, Italy and the UK. The United States is also included by means of the Michigan Survey of Consumers’ expectations series. First, we perform the standard tests to assess the ‘degree of rationality’ of consumers’ inflation forecasts. We then specify a STECM of the forecast error, and we quantify the strategic stickiness in the long-run adjustment process of expectations stemming from money illusion. Our evidence is that consumers’ expectations do not generally conform to the prescriptions of the rational expectations hypothesis. In particular, we find that the adjustment process towards the long-run equilibrium is highly nonlinear and asymmetric with respect to the size of past forecast errors. We interpret these findings as supporting the money illusion hypothesis.
    Nonlinear error correction; inflation expectations; sticky expectations
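    The size-dependent adjustment at the heart of a smooth-transition model can be sketched with a toy transition function. This is our simplification, not the paper's estimated STECM: the exponential transition G(z) = 1 - exp(-gamma * z^2), common in smooth-transition time-series models, is near 0 for small past forecast errors (sticky, slow correction) and near 1 for large ones (fast correction); gamma and the two regime speeds below are arbitrary illustrative values.

```python
import math

def transition(z, gamma=2.0):
    # Exponential smooth-transition function: 0 at z = 0, approaches 1 as
    # the (squared) past forecast error z grows.
    return 1.0 - math.exp(-gamma * z ** 2)

def adjustment_speed(z, slow=0.05, fast=0.60, gamma=2.0):
    # Error-correction coefficient interpolating between a slow regime
    # (small errors tolerated, consistent with money illusion) and a fast
    # regime (large errors corrected quickly).
    g = transition(z, gamma)
    return slow + (fast - slow) * g

print(adjustment_speed(0.1))  # small past error: close to the slow speed
print(adjustment_speed(3.0))  # large past error: close to the fast speed
```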

    A multi-exit recirculating optical packet buffer

    We propose a new type of recirculating buffer, the multi-exit buffer (MEB), for use in asynchronous optical packet switches with statistical multiplexing, operating at speeds of 40-100 Gb/s. We demonstrate that this type of buffer dramatically reduces packet loss for a given buffer depth, thus reducing the buffer depth requirements and the overall cost of optical packet switching. Physical-layer simulation results show that it is possible to build this type of buffer with currently available active components. A hybrid optoelectronic control system is proposed, which allows control of the MEB with a minimum number of active components.
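    The MEB itself is a hardware design, but the motivation (packet loss falls steeply with buffer depth, so a buffer that needs less depth is cheaper) can be sketched with the textbook M/M/1/K loss formula. This is a generic queueing illustration, not the paper's traffic model; the load and depths below are arbitrary examples.

```python
def mm1k_loss(rho, k):
    # Blocking probability of an M/M/1 queue with room for k packets:
    # P_loss = (1 - rho) * rho**k / (1 - rho**(k + 1)) for rho != 1.
    if rho == 1.0:
        return 1.0 / (k + 1)
    return (1.0 - rho) * rho ** k / (1.0 - rho ** (k + 1))

rho = 0.8  # offered load (example value)
for depth in (2, 4, 8, 16):
    print(depth, mm1k_loss(rho, depth))  # loss shrinks as depth grows
```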

    Optimal mobility-aware admission control in content delivery networks

    This paper addresses the problem of mobility management in Content Delivery Networks (CDNs). We introduce a CDN architecture in which admission control is performed at mobility-aware access routers. We formulate a Markov Modulated Poisson Decision Process for access control that captures the bursty nature of packetized data traffic together with the heterogeneity of multimedia services. We optimize performance parameters, such as the blocking probabilities and the overall utilization, and study the structural properties of the optimal solutions. Heuristics are proposed to circumvent the computational difficulties of the optimal solution when several classes of multimedia traffic are considered.
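    Optimal admission policies in loss systems often have a threshold ("trunk reservation") structure, which can be evaluated in closed form. The sketch below is our illustration, not the paper's MMPP decision process: a birth-death occupancy chain with capacity C where a lower-priority class is admitted only while occupancy is below a threshold T, yielding per-class blocking probabilities. All rates are arbitrary example values.

```python
def blocking(lam1, lam2, mu, C, T):
    # Stationary occupancy distribution of the birth-death chain: arrivals
    # at rate lam1 + lam2 below the threshold T, lam1 only at or above it;
    # departures at rate n * mu in state n.
    p = [1.0]
    for n in range(C):
        lam = lam1 + (lam2 if n < T else 0.0)
        p.append(p[-1] * lam / ((n + 1) * mu))
    z = sum(p)
    p = [x / z for x in p]
    b1 = p[C]           # high-priority class blocked only when the system is full
    b2 = sum(p[T:])     # low-priority class blocked at or above the threshold
    return b1, b2

b1, b2 = blocking(lam1=4.0, lam2=4.0, mu=1.0, C=10, T=7)
print(b1, b2)  # reservation protects class 1: b1 < b2
```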

    Self-* overload control for distributed web systems

    Unexpected increases in demand and, above all, flash crowds are considered the bane of every web application, as they may cause intolerable delays or even service unavailability. Proper quality-of-service policies must guarantee rapid reactivity and responsiveness even in such critical situations. Previous solutions fail to meet common performance requirements when the system has to face sudden and unpredictable surges of traffic. Indeed, they often rely on a proper setting of key parameters, which requires laborious manual tuning and prevents fast adaptation of the control policies. We contribute an original Self-* Overload Control (SOC) policy. It allows the system to self-configure a dynamic constraint on the rate of admitted sessions in order to respect service level agreements and maximize resource utilization at the same time. Our policy does not require any prior information on the incoming traffic or manual configuration of key parameters. We ran extensive simulations under a wide range of operating conditions, showing that SOC rapidly adapts to time-varying traffic and self-optimizes resource utilization. It admits as many new sessions as possible in observance of the agreements, even under intense workload variations. We compared our algorithm to previously proposed approaches, highlighting its more stable behavior and better performance.
    Comment: The full version of this paper, titled "Self-* through self-learning: overload control for distributed web systems", has been published in Computer Networks, Elsevier. The simulator used for the evaluation of the proposed algorithm is available for download at: http://www.dsi.uniroma1.it/~novella/qos_web
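    The core idea of a self-configuring admission constraint (adapt the admitted-session rate to a latency target without manual tuning) can be sketched as a simple control loop. This is not the SOC algorithm, only a toy AIMD controller driven by an M/M/1 latency model; the service rate, SLA, and gains below are arbitrary example values.

```python
MU = 100.0   # service rate of the toy server model (sessions/s)
SLA = 0.05   # response-time target (s)

def latency(lam, mu=MU):
    # Mean response time of an M/M/1 queue (valid for lam < mu).
    return 1.0 / (mu - lam)

lam = 10.0   # admitted session rate, adapted online with no prior traffic knowledge
for _ in range(300):
    if latency(lam) > SLA:
        lam *= 0.8       # multiplicative decrease on an SLA violation
    else:
        lam += 1.0       # additive increase while the SLA is respected
print(round(lam, 1))  # oscillates in a band around the SLA-feasible rate mu - 1/SLA = 80
```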

    Phenotypic mixing and hiding may contribute to memory in viral quasispecies

    Background. In a number of recent experiments with foot-and-mouth disease virus, a deleterious mutant was found to avoid extinction and remain in the population for long periods of time. This observation was called quasispecies memory. The origin of quasispecies memory is not fully understood. Results. We propose and analyze a simple model of complementation between the wild-type virus and a mutant that has an impaired ability of cell entry. The mutant will go extinct unless it is recreated from the wild type through mutations. However, with phenotypic mixing-and-hiding as a mechanism of complementation, the time to extinction in the absence of mutations increases with increasing multiplicity of infection (m.o.i.). The mutant's frequency at equilibrium under selection-mutation balance also increases with increasing m.o.i. At high m.o.i., a large fraction of mutant genomes are encapsidated with wild-type protein, which enables them to infect cells as efficiently as wild-type virions and thus raises their fitness to the wild-type level. Moreover, even at low m.o.i. the equilibrium frequency of the mutant is higher than predicted by the standard quasispecies model, because a fraction of mutant virions generated from wild-type parents will also be encapsidated by wild-type protein. Conclusions. Our model predicts that phenotypic hiding will strongly influence the population dynamics of viruses, particularly at high m.o.i., and will also have important effects on the mutation-selection balance at low m.o.i. The delay in mutant extinction and the increase in mutant frequencies at equilibrium may, at least in part, explain memory in quasispecies populations.
    Comment: 10 pages, PDF, as published by BM
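    The qualitative claim (hiding raises the equilibrium mutant frequency) can be reproduced in a toy deterministic model. This is our simplification, not the authors' model: a fraction h of mutant virions is encapsidated in wild-type protein and thereby hidden from selection, shrinking the mutant's effective selection coefficient from s to s*(1 - h); higher m.o.i. is assumed to mean higher h, and all parameter values are arbitrary.

```python
def equilibrium_mutant_freq(mu, s, h, generations=50000):
    # Iterate selection against the exposed mutant fraction, then one-way
    # mutation from the wild type, until the frequency settles near the
    # mutation-selection balance m* ~ mu / (s * (1 - h)).
    m = 0.0
    w_m = 1.0 - s * (1.0 - h)   # effective mutant fitness under hiding
    for _ in range(generations):
        m = m * w_m / (m * w_m + (1.0 - m))   # selection step
        m = m + mu * (1.0 - m)                # mutation step
    return m

low = equilibrium_mutant_freq(mu=1e-4, s=0.1, h=0.1)   # low m.o.i.: little hiding
high = equilibrium_mutant_freq(mu=1e-4, s=0.1, h=0.9)  # high m.o.i.: mostly hidden
print(low, high)  # hiding sustains the mutant at a much higher frequency
```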

    Fundamental limits of failure identifiability by Boolean Network Tomography

    Boolean network tomography is a powerful tool to infer the state (working/failed) of individual nodes from path-level measurements obtained by edge nodes. We consider the problem of optimizing the capability of identifying network failures through the design of monitoring schemes. Finding an optimal solution is NP-hard, and a large body of work has been devoted to heuristic approaches providing lower bounds. Unlike previous works, we provide upper bounds on the maximum number of identifiable nodes, given the number of monitoring paths and different constraints on the network topology, the routing scheme, and the maximum path length. The proposed upper bounds represent a fundamental limit on the identifiability of failures via Boolean network tomography. This analysis provides insights on how to design topologies and related monitoring schemes to achieve maximum identifiability under various network settings. Through analysis and experiments we demonstrate the tightness of the bounds and the efficacy of the design insights for engineered as well as real networks.
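    The identifiability notion behind this line of work can be shown on a toy single-failure case (our simplification, not the paper's general setting): a monitoring path fails iff it traverses the failed node, so each node's "signature" is the set of paths crossing it, and a single failed node is identifiable iff its signature is non-empty and distinct from every other node's. The four-node example below is hypothetical.

```python
def identifiable_nodes(paths, nodes):
    # Signature of node v: indices of the monitoring paths that traverse v.
    sig = {v: frozenset(i for i, p in enumerate(paths) if v in p) for v in nodes}
    # v is identifiable iff its signature is non-empty and unique.
    return {v for v in nodes
            if sig[v] and sum(1 for u in nodes if sig[u] == sig[v]) == 1}

# Hypothetical example: three monitoring paths over nodes a, b, c; node d is
# traversed by no path and therefore cannot be identified.
paths = [{"a", "b"}, {"b", "c"}, {"a", "c"}]
nodes = {"a", "b", "c", "d"}
print(sorted(identifiable_nodes(paths, nodes)))  # ['a', 'b', 'c']
```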