
    Teaching to do economics with the computer

    This paper presents the course "Doing Economics with the Computer", which we have taught since 1999 at the University of Bern, Switzerland. We designed the course to introduce sophomores, playfully and painlessly, to computational economics. Computational methods are usually used in economics to analyze complex problems that are impossible (or very difficult) to solve analytically. Our course, however, looks only at economic models that can (easily) be solved analytically. This approach has two advantages. First, by relying on economic theory students have met in their first year, we can introduce numerical methods at an early stage; this encourages students to use computational methods later in their academic careers when they encounter difficult problems. Second, the comparison with the analytical solution convincingly shows both the power and the limits of numerical methods. Our course introduces students to three types of software: a spreadsheet with a simple optimizer (Excel with Solver), numerical computation (Matlab) and symbolic computation (Maple). The course consists of 10 sessions, each taught as a 3-hour lecture. In the first part of each session we present the economic problem, sometimes its analytical solution, and introduce the software used. The second part, in the computer lab, starts the numerical implementation with step-by-step guidance: students work on exercises with clearly defined questions and precise instructions for their implementation. The third part is a workshop in which students work in groups on exercises with clearly defined questions but no help with the implementation; this part teaches students how to handle numerical questions practically in a well-defined framework. The fourth part of a session is a graded take-home assignment in which students are asked to answer general economic questions; it teaches them how to translate a general economic question into a numerical task and the numerical result back into an economically meaningful answer. A short debriefing in the following week is part five and completes each session.
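    The course's central exercise, solving a model numerically and checking the answer against the known analytical solution, can be sketched in a few lines. The abstract names Excel, Matlab and Maple; the sketch below uses Python with SciPy instead, and the Cobb-Douglas consumer problem with these particular parameter values is an illustrative assumption, not an example taken from the course.

```python
import numpy as np
from scipy.optimize import minimize

# Cobb-Douglas utility maximization: max x^a * y^(1-a)  s.t.  px*x + py*y = m.
a, px, py, m = 0.3, 2.0, 1.0, 100.0

# Analytical solution (standard first-year result): spend budget share a on x.
x_analytical = a * m / px
y_analytical = (1 - a) * m / py

# Numerical solution: substitute the budget constraint and minimize -utility.
def neg_utility(x):
    y = (m - px * x[0]) / py
    return -(x[0] ** a) * (y ** (1 - a))

res = minimize(neg_utility, x0=[1.0], bounds=[(1e-6, m / px - 1e-6)])
x_numerical = res.x[0]

print("analytical:", x_analytical, " numerical:", x_numerical)
```

Comparing `x_numerical` with `x_analytical` is exactly the kind of confrontation the abstract describes: the numerical answer should match the closed form to optimizer tolerance, and any discrepancy exposes the limits of the numerical method.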

    Wage Rigidity and Monetary Union

    We compare monetary union to flexible exchange rates in an asymmetric, three-country model with active monetary policy. Unlike the traditional OCA literature, we find that countries with high nominal wage rigidities benefit from monetary union, especially when they join other, similarly rigid countries. Countries with relatively more flexible wages lose when they form a union with countries whose wages are more rigid. We study France, Germany and the UK and find that wage asymmetries across these three countries dominate other types of asymmetries (in shocks, monetary policy, etc.) in welfare comparisons. We also find that, if the UK had a substantially higher degree of wage flexibility than France and Germany, its participation in EMU would be costly.
    Keywords: monetary union; wage rigidity; asymmetry; multi-country model

    An Optimum-Currency-Area Odyssey

    The theory of optimum currency areas was conceived and developed in three highly influential papers, written by Mundell (1961), McKinnon (1963) and Kenen (1969). Those authors identified characteristics that potential members of a monetary union should ideally possess in order to make it feasible to surrender a nationally tailored monetary policy and the adjustment of the exchange rate of a national currency. We trace the development of optimum-currency-area theory, which, after a flurry of research in the 1960s, was relegated to intellectual purgatory for about 20 years. We then discuss factors that led to a renewed interest in the subject, beginning in the early 1990s. Milton Friedman plays a pivotal role in our narrative: Friedman's work on monetary integration in the early 1950s presaged subsequent optimum-currency-area contributions; Mundell's classic formulation of an optimal currency area was aimed, in part, at refuting Friedman's "strong" case for floating exchange rates; and Friedman's work on the role of monetary policy helped revive interest in optimum-currency-area analysis. The paper concludes with a discussion of recent analytical work, using New Keynesian models, which promises to fulfil the unfinished agenda set out by the original contributors to the optimum-currency-area literature: providing a consistent framework in which a country's characteristics can be used to determine its optimal exchange-rate regime.
    Keywords: optimum currency areas; exchange-rate regimes; New Keynesian models

    Some Fiscal Implications of Monetary Policy

    We study the implications of alternative monetary targeting procedures for real interest rates and economic activity. We find that countercyclical monetary policy rules lead to higher real interest rates, higher average tax rates and lower output, but lower variability of tax rates and consumption, relative to procyclical rules. For a country with a high level of public debt (e.g. Italy), the adoption of a countercyclical procedure such as interest-rate pegging may conceivably raise public debt servicing costs by more than half a percentage point of GNP. Our analysis suggests that the current debate on the targeting procedures of the European Central Bank ought to be broadened to include a discussion of the fiscal implications of monetary policy.

    Truth and Robustness in Cross-country Growth Regressions

    The work of Levine and Renelt (1992) and Sala-i-Martin (1997a, b), which attempted to test the robustness of various determinants of growth rates of per capita GDP among countries using two variants of Edward Leamer's extreme-bounds analysis, is reexamined. In a realistic Monte Carlo experiment, in which the universe of potential determinants is drawn from those in Levine and Renelt's study, both versions of the extreme-bounds analysis are evaluated for their ability to recover the true specification. Levine and Renelt's method is shown to have low size and extremely low power: nothing is robust. Sala-i-Martin's method is shown to have high size and high power: it is undiscriminating. Both methods are compared to a cross-sectional version of the general-to-specific search methodology associated with the LSE approach to econometrics, which is shown to have size near nominal size and high power. Sala-i-Martin's method and the general-to-specific method are then applied to the actual data from the original two studies. The results are consistent with the Monte Carlo results and suggest that the factors that most affect differences in growth rates are ones that are beyond the control of policymakers.
    Keywords: growth; cross-country growth regressions; extreme-bounds analysis; general-to-specific specification search
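    The Levine-Renelt variant of extreme-bounds analysis can be sketched on synthetic data: regress growth on a focus variable plus every small subset of candidate controls, and call the variable "robust" only if its coefficient keeps the same sign and stays significant in every specification. The sketch below is a minimal Python illustration of that idea, not the papers' exact procedure; the data-generating process, the |t| >= 2 cutoff, and the limit of three controls per regression are illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Synthetic cross-section: growth truly depends on z; c1..c4 are irrelevant.
z = rng.normal(size=n)
controls = rng.normal(size=(n, 4))
growth = 0.5 * z + rng.normal(size=n)

def t_stat_on_z(y, z, extra):
    """OLS t-statistic on z, with a constant and the given extra regressors."""
    X = np.column_stack([np.ones(len(y)), z] + list(extra))
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

# Extreme-bounds check: z survives only if its t-stat keeps the same sign and
# |t| >= 2 across every regression with up to 3 of the candidate controls.
t_stats = [
    t_stat_on_z(growth, z, [controls[:, j] for j in combo])
    for k in range(4)
    for combo in itertools.combinations(range(4), k)
]
robust = all(t > 2 for t in t_stats) or all(t < -2 for t in t_stats)
print("z robust under extreme-bounds criterion:", robust)
```

The paper's Monte Carlo point can be reproduced by repeating this over many draws and counting how often truly relevant variables survive (power) and how often irrelevant ones do (size).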

    The Allocation of Resources under Uncertainty

    We study the effects of uncertainty on the allocation of resources in the standard, static, general-equilibrium, two-sector, two-factor model. The elasticity of substitution in production versus that in consumption plays a key role in determining whether uncertainty attracts or repels resources. Risk aversion matters, but to a smaller extent, while factor endowments and factor intensities play a more limited role.
    Keywords: uncertainty; general equilibrium; two-sector model