    Higher Order Unification Revisited: Complete Sets of Transformations

    In this paper, we reexamine the problem of general higher-order unification and develop an approach based on the method of transformations on systems of terms, which has its roots in Herbrand's thesis and which was developed by Martelli and Montanari in the context of first-order unification. This method provides an abstract and mathematically elegant means of analyzing the invariant properties of unification in various settings by providing a clean separation of the logical issues from the specification of procedural information. Our major contribution is three-fold. First, we have extended the Herbrand-Martelli-Montanari method of transformations on systems to higher-order unification and pre-unification; second, we have used this formalism to provide a more direct proof of the completeness of a method for higher-order unification than has previously been available; and, finally, we have shown the completeness of the strategy of eager variable elimination. In addition, this analysis provides another justification of the design of Huet's procedure, and shows how its basic principles work in a more general setting. Finally, it is hoped that this presentation might form a good introduction to higher-order unification for those readers unfamiliar with the field.
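    At the first-order level, the transformation-based method the abstract builds on can be sketched as a small rewriting loop over a system of equations. The sketch below shows the classical Martelli-Montanari rules (Delete, Conflict, Decompose, Swap, Eliminate with occurs check) under an assumed tuple encoding of terms; it is an illustration of the first-order method, not the paper's higher-order procedure.

```python
# Sketch of first-order unification as transformations on a system of
# equations (Martelli-Montanari style). The term encoding is an assumption:
# ('var', name) for variables, ('fn', name, [args]) for function applications.

def vars_of(t):
    """Set of variable names occurring in a term."""
    if t[0] == 'var':
        return {t[1]}
    out = set()
    for a in t[2]:
        out |= vars_of(a)
    return out

def apply_sub(t, x, s):
    """Replace every occurrence of variable x in t by term s."""
    if t[0] == 'var':
        return s if t[1] == x else t
    return ('fn', t[1], [apply_sub(a, x, s) for a in t[2]])

def unify(eqs):
    """Transform the system eqs (a list of (lhs, rhs) pairs) until solved;
    return the substitution as a dict, or None on failure."""
    eqs = list(eqs)
    sub = {}
    while eqs:
        s, t = eqs.pop()
        if s == t:                                  # Delete: trivial equation
            continue
        if s[0] == 'fn' and t[0] == 'fn':
            if s[1] != t[1] or len(s[2]) != len(t[2]):
                return None                         # Conflict: symbol clash
            eqs.extend(zip(s[2], t[2]))             # Decompose into arguments
        elif s[0] == 'fn':                          # Swap: put the variable first
            eqs.append((t, s))
        else:                                       # Eliminate variable s
            x = s[1]
            if x in vars_of(t):
                return None                         # Occurs check fails
            eqs = [(apply_sub(l, x, t), apply_sub(r, x, t)) for l, r in eqs]
            sub = {y: apply_sub(u, x, t) for y, u in sub.items()}
            sub[x] = t
    return sub
```

    For example, unifying f(x, g(a)) with f(g(y), g(y)) yields the solved system x = g(a), y = a. In the higher-order setting the abstract describes, the same system-transformation view is kept, but the rules must additionally handle beta-reduction and flexible-rigid pairs.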

    (Leftmost-Outermost) Beta Reduction is Invariant, Indeed

    Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is the lambda-calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of the lambda-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating the lambda-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modeled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the lambda-calculus. The size explosion problem seems to imply that this is not possible: since both calculi have the same notion of normal form, evaluation in the LSC is exponentially longer than in the lambda-calculus. We resolve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to beta-redexes, i.e. the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties.
    Comment: arXiv admin note: substantial text overlap with arXiv:1405.331
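    The size explosion problem mentioned in the abstract can be observed on a textbook term family (an illustration assumed here, not necessarily the paper's construction): applying delta = \x. x x to a variable n times normalizes in n beta steps to a term of size 2^(n+1) - 1. The demo below reduces innermost-first for compactness; by confluence the normal form, and hence the exponential output size, is the same under leftmost-outermost evaluation, though the step count differs.

```python
# Size explosion: n beta steps producing an exponentially large output.
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg).

DELTA = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))  # delta = \x. x x

def build(n):
    """delta (delta (... (delta y) ...)) with n applications of delta."""
    t = ('var', 'y')
    for _ in range(n):
        t = ('app', DELTA, t)
    return t

def size(t):
    """Number of nodes in a term."""
    if t[0] == 'var':
        return 1
    if t[0] == 'lam':
        return 1 + size(t[2])
    return 1 + size(t[1]) + size(t[2])

def subst(t, x, s):
    # Plain substitution; no nested binders occur in this family,
    # so variable capture cannot arise.
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def nf(t):
    """Normalize innermost-first, counting beta steps; returns (nf, steps)."""
    if t[0] == 'app':
        f, kf = nf(t[1])
        a, ka = nf(t[2])
        if f[0] == 'lam':
            r, kr = nf(subst(f[2], f[1], a))
            return r, kf + ka + kr + 1
        return ('app', f, a), kf + ka
    if t[0] == 'lam':
        b, kb = nf(t[2])
        return ('lam', t[1], b), kb
    return t, 0
```

    For n = 10, normalization takes 10 beta steps and returns a term of size 2^11 - 1 = 2047. This is why a cost model that charges a beta step as constant time cannot be justified by explicitly writing out the result, and why the paper must pass through sharing.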

    Back to Keynes?

    After a brief review of classical, Keynesian, New Classical and New Keynesian theories of macroeconomic policy, we assess whether New Keynesian Economics captures the quintessential features stressed by J.M. Keynes. Particular attention is paid to Keynesian features omitted in New Keynesian workhorses such as the micro-founded Keynesian multiplier and the New Keynesian Phillips curve. These theories capture wage and price sluggishness and aggregate demand externalities by departing from a competitive framework, and give a key role to expectations. The main deficiencies, however, are the inability to predict a pro-cyclical real wage in the face of demand shocks; the absence of inventories, credit constraints and bankruptcies in explaining the business cycle; and the lack of any effect of the nominal as well as the real interest rate on aggregate demand. Furthermore, they fail to allow for quantity rationing and to model unemployment as a catastrophic event. The macroeconomics based on the New Keynesian Phillips curve has quite a way to go before the quintessential Keynesian features are captured.
    Keywords: Keynesian economics, New Keynesian Phillips curve, monopolistic competition, nominal wage rigidity, welfare, pro-cyclical real wage, inventories, liquidity, bankruptcy, unemployment, monetary policy