
    Unemployment insurance and the business cycle: prolong benefit entitlements in bad times?

    The aim of this paper is to study the optimal duration of unemployment benefit entitlement across the business cycle. We ask whether the entitlement duration should be prolonged in bad times and shortened in good times. Because of consumption smoothing, such a countercyclical policy can be welfare-enhancing as long as it does not affect labor market adjustment too severely, or even helps to reduce inefficiencies there. If, however, the labor market is already quite inflexible, procyclical behavior may be preferable. In a calibrated dynamic business cycle framework, we find that a countercyclical benefit entitlement duration may be preferable in the US but not in Europe.
    Keywords: unemployment insurance; entitlement duration; business cycle

    Incorporating Labour Market Frictions into an Optimising-Based Monetary Policy Model

    This paper examines the effects of introducing a non-Walrasian labour market into the "New Neoclassical Synthesis" framework. A dynamic stochastic general equilibrium model is formulated, solved, and calibrated in order to evaluate its ability to replicate the main features of the euro area economy. This framework allows us to study the effects of labour market rigidities, nominal rigidities, and other frictions in accounting for the impact of monetary policy, technology, public spending, and preference shocks. Our simulations show that: (i) real rigidities complement but do not supplant nominal rigidities; (ii) the Beveridge and Phillips relations are reproduced; (iii) hours worked are too sensitive an adjustment variable; and (iv) real wage dynamics remain procyclical.
    Keywords: DSGE models; nominal rigidities; real rigidities; labour market; endogenous persistence; euro area

    Monetary policy and inflationary shocks under imperfect credibility

    This paper quantifies the deterioration of achievable stabilization outcomes when monetary policy operates under imperfect credibility and weak anchoring of long-term expectations. Within a medium-scale DSGE model, we introduce, through a simple signal extraction problem, an imperfect-knowledge configuration in which price and wage setters wrongly doubt the determination of the central bank to leave its long-term inflation objective unchanged in the face of inflationary shocks. The magnitude of private sector learning has been calibrated to match the volatility of US inflation expectations at long horizons. Given such illustrative calibrations, we find that the cost of maintaining a given inflation volatility under weak credibility could amount to 0.25 pp of output gap standard deviation.
    JEL Classification: E4, E5, F4. Keywords: imperfect credibility; monetary policy; signal extraction
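    The learning mechanism behind this abstract rests on a standard signal extraction problem. The paper's actual model is not reproduced here, but the idea can be illustrated with a minimal one-dimensional Kalman filter in which agents observe a constant inflation objective through noise and gradually re-anchor their belief; all names and parameter values below are illustrative assumptions, not the paper's calibration:

    ```python
    def kalman_updates(signals, prior_mean, prior_var, noise_var):
        """One-dimensional Kalman filter for a constant hidden state:
        agents see s_t = pi_star + eta_t and update their belief about
        pi_star after each observation."""
        mean, var = prior_mean, prior_var
        beliefs = []
        for s in signals:
            gain = var / (var + noise_var)    # Kalman gain
            mean = mean + gain * (s - mean)   # posterior mean moves toward the signal
            var = (1.0 - gain) * var          # posterior variance shrinks
            beliefs.append(mean)
        return beliefs
    ```

    For instance, with a prior belief of 0, prior variance 1, noise variance 1, and four noiseless signals of 2.0, the belief path is [1.0, 1.333..., 1.5, 1.6]: expectations drift toward the true objective, but only gradually, which is the sense in which long-term expectations are weakly anchored.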

    Loop Quasi-Invariant Chunk Motion by peeling with statement composition

    Several techniques for analysis and transformation are used in compilers. Among them, the peeling of loops for hoisting quasi-invariants can be used to optimize generated code, or simply to ease developers' lives. In this paper, we introduce a new concept of dependency analysis borrowed from the field of Implicit Computational Complexity (ICC), allowing us to work with composed statements called chunks to detect more quasi-invariants. Based on an optimization idea formulated for a WHILE language, we provide a transformation method, reusing ICC concepts and techniques, for compilers. This new analysis computes an invariance degree for each statement or chunk of statements by building a new kind of dependency graph, finds the maximum or worst dependency graph for loops, and recognizes whether an entire block is quasi-invariant or not. This block could be an inner loop, in which case the computational complexity of the overall program can be decreased. We have already implemented a proof of concept on a toy C parser, analysing and transforming the AST representation. In this paper, we introduce the theory around this concept and present a prototype analysis pass implemented on LLVM. In the near future, we will implement the corresponding transformation and provide benchmark comparisons.
    Comment: In Proceedings DICE-FOPARA 2017, arXiv:1704.0516
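    The abstract does not give the analysis itself, but the idea of an invariance degree can be sketched as a small fixpoint computation over a dependency graph: each statement's degree is the number of loop peelings after which its value stabilizes, and a (transitive) dependency on the loop counter makes a statement truly variant. The encoding below, with statements as (target, read-set) pairs, is an illustrative assumption, not the paper's representation:

    ```python
    INF = float("inf")

    def invariance_degrees(statements, variant_vars):
        """statements: ordered (target, reads) pairs forming the loop body.
        variant_vars: names that change every iteration (e.g. the counter).
        Returns, per statement, the number of peelings after which its
        value is stable; INF marks a truly variant statement."""
        n = len(statements)
        writer = {t: i for i, (t, _) in enumerate(statements)}  # last writer wins
        deg = [1] * n
        changed = True
        while changed:
            changed = False
            for i, (_, reads) in enumerate(statements):
                d = 1
                for v in reads:
                    if v in variant_vars:
                        d = INF
                    elif v in writer:
                        j = writer[v]
                        # a read of an earlier statement sees this iteration's
                        # value; a read of a later one sees the previous iteration's
                        dv = deg[j] if j < i else deg[j] + 1
                        d = max(d, dv)
                if d > n:            # degree exceeding body size: loop-carried cycle
                    d = INF
                if d != deg[i]:
                    deg[i] = d
                    changed = True
        return deg
    ```

    For the body `u = t + c; t = a * b; y = u + i` with loop counter `i`, the degrees come out as [2, 1, INF]: `t` stabilizes after one peeling, `u` (which reads the previous iteration's `t`) after two, and `y` is variant, so peeling the loop twice lets `t` and `u` be hoisted.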

    Government expenditures and unemployment: A DSGE perspective

    In a New Keynesian DSGE model with labor market frictions and liquidity-constrained consumers, aggregate unemployment is likely to increase in response to a non-persistent government spending shock. Furthermore, the group of asset-holding households reacts very differently from the group of liquidity-constrained consumers: the unemployment rate is likely to decrease for asset-holding households while it increases among liquidity-constrained consumers. The main driver of our results is the marginal utility of consumption, which moves in opposite directions for the two types. Regarding the model's parameters, we find that the size of the fiscal (unemployment) multiplier increases with (i) highly sticky prices, (ii) high degrees of risk aversion, (iii) low convexity in labor disutility, (iv) high replacement rates, and (v) debt-financed expenditures.
    Keywords: search and matching; government spending shocks; unemployment

    On Quasi-Interpretations, Blind Abstractions and Implicit Complexity

    Quasi-interpretations are a technique to guarantee complexity bounds on first-order functional programs: together with termination orderings, they give in particular a sufficient condition for a program to be executable in polynomial time, called here the P-criterion. We study properties of the programs satisfying the P-criterion in order to better understand its intensional expressive power. Given a program on binary lists, its blind abstraction is the nondeterministic program obtained by replacing lists by their lengths (natural numbers). A program is blindly polynomial if its blind abstraction terminates in polynomial time. We show that all programs satisfying a variant of the P-criterion are in fact blindly polynomial. Then we give two extensions of the P-criterion: one by relaxing the termination ordering condition, and the other (the bounded value property) giving a necessary and sufficient condition for a program to be polynomial-time executable with memoisation.
    Comment: 18 pages
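    The blind abstraction can be made concrete on a toy example; the encoding below is an illustrative assumption, not the paper's syntax. A deterministic program on binary lists is replaced by a nondeterministic program on their lengths, where branches that used to inspect a bit become a choice, so the abstraction over-approximates the original: the length of the real output always lies in the abstract result set.

    ```python
    def f(bits):
        """Toy first-order program on binary lists: a leading 0 is
        dropped, a leading 1 produces two output bits."""
        if not bits:
            return []
        if bits[0] == 0:
            return f(bits[1:])
        return [0, 0] + f(bits[1:])

    def f_blind(n):
        """Blind abstraction of f: lists become lengths, and the two
        branches matching on 0 vs 1 become a nondeterministic choice,
        so the result is the set of all possible output lengths."""
        if n == 0:
            return {0}
        prev = f_blind(n - 1)
        return prev | {r + 2 for r in prev}
    ```

    For example, `f([1, 0, 1])` returns a list of length 4, and `f_blind(3)` yields {0, 2, 4, 6}, which contains it; `f_blind` still runs in polynomial time in n here, so this `f` is blindly polynomial in the abstract's sense.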

    On the Strategic Use of Debt and Capacity in Imperfectly Competitive Product Markets

    In capital-intensive industries, firms face complicated multi-stage financing, investment, and production decisions under the watchful eye of existing and potential industry rivals. We consider a two-stage simplification of this environment. In the first stage, an incumbent firm benefits from two first-mover advantages by precommitting to a debt financing policy and a capacity investment policy. In the second stage, the incumbent and a single-stage rival simultaneously choose production levels and realize stochastic profits. We characterize the incumbent's first-stage debt and capacity choices as factors in the production of an intermediate good we call "output deterrence." In our two-factor deterrence model, we show that the incumbent chooses a unique capacity policy and a threshold debt policy to achieve the optimal level of deterrence, coinciding with full Stackelberg leadership. When we remove the incumbent's first-mover advantage in capacity, the full Stackelberg level of deterrence is still achievable, albeit with a higher level of debt than the threshold. In contrast, when we remove the incumbent's first-mover advantage in debt, the Stackelberg level of deterrence may no longer be achievable and the incumbent may suffer a dead-weight loss. Evidence on the telecommunications industry shows that firms have increased their leverage in a manner consistent with deterring potential rivals following the 1996 deregulation.
    Keywords: industrial organization; deregulation; deterrence; capital structure; capacity; telecommunications

    Optimal monetary policy in an estimated DSGE for the euro area

    The objective of this paper is to examine the main features of optimal monetary policy within a micro-founded macroeconometric framework. First, using Bayesian techniques, we estimate a medium-scale closed economy DSGE for the euro area. Then, we study the properties of the Ramsey allocation through impulse response, variance decomposition, and counterfactual analysis. In particular, we show that controlling for the zero lower bound constraint does not seem to limit the stabilization properties of optimal monetary policy. We also present simple monetary policy rules which can "approximate" and implement the Ramsey allocation reasonably well. Such optimal simple operational rules seem to react specifically to nominal wage inflation. Overall, the Ramsey policy together with its simple rule approximations seems to deliver consistent policy messages and may constitute useful normative benchmarks within medium- to large-scale estimated DSGE frameworks. However, this normative analysis based on estimated models reinforces the need to improve the economic micro-foundation and the econometric identification of the structural disturbances.
    JEL Classification: E4, E5. Keywords: Bayesian estimation; DSGE models; monetary policy; welfare calculations

    Towards a monetary policy evaluation framework

    Advances in the development of Dynamic Stochastic General Equilibrium (DSGE) models towards medium-scale structural frameworks with satisfying data coherence have considerably enhanced the range of analytical tools well-suited for monetary policy evaluation. The present paper intends to make a step forward in this direction: using US data over the Volcker-Greenspan sample, we perform a DSGE-VAR estimation of a medium-scale DSGE model very close to the Smets and Wouters [2007] specification, where monetary policy is set according to a Ramsey-planner decision problem. These results are then contrasted with the DSGE-VAR estimation of the same model featuring a Taylor-type interest rate rule. Our results show in particular that the restrictions imposed by the welfare-maximizing Ramsey policy deteriorate the empirical performance with respect to a Taylor rule specification. However, it turns out that, along selected conditional dimensions, and notably for productivity shocks, the Ramsey policy and the estimated Taylor rule deliver similar economic propagation.
    JEL Classification: E4, E5, F4. Keywords: Bayesian estimation; DSGE models; optimal monetary policy

    QROSS-checking RESCUE models

