
    Order-Sorted Equational Computation

    The expressive power of many-sorted equational logic can be greatly enhanced by allowing for subsorts and multiple function declarations. In this paper we study some computational aspects of such a logic. We start with a self-contained introduction to order-sorted equational logic, including initial algebra semantics and deduction rules. We then present a theory of order-sorted term rewriting and show that the key results for unsorted rewriting extend to sort-decreasing rewriting. We continue with a review of order-sorted unification and prove the basic results. In the second part of the paper we study hierarchical order-sorted specifications with strict partial functions. We define the appropriate homomorphisms for strict algebras and show that every strict algebra is base isomorphic to a strict algebra with at most one error element. For strict specifications, we show that their categories of strict algebras have initial objects. We validate our approach to partial functions by proving that completely defined total functions can be defined as partial without changing the initial algebra semantics. Finally, we provide decidable sufficient criteria for the consistency and strictness of ground confluent rewriting systems.
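    The subsort and sort-decreasing notions mentioned above can be sketched in a few lines. The sort hierarchy, function declarations, and helper names below are invented for illustration; the paper's formal definitions are considerably more general.

```python
# Toy order-sorted signature: a subsort relation plus multiple
# function declarations per symbol. All names here are illustrative.

# Subsort declarations: Nat <= Int, Int <= Rat
SUBSORTS = {("Nat", "Int"), ("Int", "Rat")}

def leq(s, t):
    """Reflexive-transitive closure of the subsort relation."""
    if s == t:
        return True
    return any(a == s and leq(b, t) for (a, b) in SUBSORTS)

# Multiple declarations for one symbol: 'neg' works on Int and on Rat.
DECLS = {
    "neg": [(("Int",), "Int"), (("Rat",), "Rat")],
    "abs": [(("Int",), "Nat")],
}

def least_sort(sym, arg_sorts):
    """Pick the least result sort among the applicable declarations."""
    results = [res for (args, res) in DECLS[sym]
               if all(leq(a, d) for a, d in zip(arg_sorts, args))]
    # A sort is least if it is below the most other candidates.
    results.sort(key=lambda r: sum(leq(r, o) for o in results), reverse=True)
    return results[0] if results else None

def sort_decreasing(lhs_sort, rhs_sort):
    """A rule l -> r is sort decreasing if sort(r) <= sort(l)."""
    return leq(rhs_sort, lhs_sort)
```

    For instance, a rule whose left-hand side has sort Int and whose right-hand side has sort Nat is sort decreasing, so rewriting never moves a term up the sort hierarchy.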

    Progress Report: 1991-1994


    A lender-based theory of collateral

    We consider an imperfectly competitive loan market in which a local relationship lender has an information advantage vis-à-vis distant transaction lenders. Competitive pressure from the transaction lenders prevents the local lender from extracting the full surplus from projects, so that she inefficiently rejects marginally profitable projects. Collateral mitigates the inefficiency by increasing the local lender’s payoff from precisely those marginal projects that she inefficiently rejects. The model predicts that, controlling for observable borrower risk, collateralized loans are more likely to default ex post, which is consistent with the empirical evidence. The model also predicts that borrowers for whom local lenders have a relatively smaller information advantage face higher collateral requirements, and that technological innovations that narrow the information advantage of local lenders, such as small business credit scoring, lead to a greater use of collateral in lending relationships. JEL classification: D82; G21. Keywords: Collateral; Soft information; Loan market competition; Relationship lending.

    Rule-Based Software Verification and Correction

    The increasing complexity of software systems has led to the development of sophisticated formal methodologies for verifying and correcting data and programs. In general, establishing whether a program behaves correctly w.r.t. the original programmer's intention, or checking the consistency and correctness of a large set of data, are non-trivial tasks, as witnessed by many case studies in the literature. In this dissertation, we face two challenging problems of verification and correction: specifically, the verification and correction of declarative programs, and the verification and correction of Web sites (i.e. large collections of semistructured data). Firstly, we propose a general correction scheme for automatically correcting declarative, rule-based programs which exploits a combination of bottom-up as well as top-down inductive learning techniques. Our hybrid methodology is able to infer program corrections that are hard, or even impossible, to obtain with a simpler, automatic top-down or bottom-up learner. Moreover, the scheme is also particularized to some well-known declarative programming paradigms, namely the functional logic and the functional programming paradigms. Secondly, we formalize a framework for the automated verification of Web sites which can be used to specify integrity conditions for a given Web site, and then automatically check whether these conditions are fulfilled. We provide a rule-based, formal specification language which allows us to define syntactic as well as semantic properties of the Web site. Then, we formalize a verification technique which detects both incorrect/forbidden patterns as well as lack of information, that is, incomplete/missing Web pages. Useful information is gathered during the verification process which can be used to repair the Web site. Thus, after a verification phase, one can also semi-automatically infer possible corrections in order to fix the Web site. The methodology is based on a novel rewriting-based approach.
    Ballis, D. (2005). Rule-Based Software Verification and Correction [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/194
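    The Web-site verification idea described above can be caricatured with a tiny sketch: pages as nested dictionaries, one correctness rule (a forbidden pattern) and one completeness rule (information that must occur elsewhere). The rule shapes and page data are hypothetical and much simpler than the thesis's rule-based specification language.

```python
# Hypothetical miniature of rule-based Web site verification.
# Pages are semistructured terms (nested dicts/lists); rules flag
# forbidden patterns (incorrectness) and missing ones (incompleteness).

SITE = [
    {"page": "members", "member": [{"name": "Ada"}, {"name": "Alan"}]},
    {"page": "projects", "project": [{"title": "Verify", "lead": "Ada"}]},
]

def find(term, key):
    """Collect every value bound to `key` anywhere in a nested term."""
    hits = []
    if isinstance(term, dict):
        for k, v in term.items():
            if k == key:
                hits.append(v)
            hits.extend(find(v, key))
    elif isinstance(term, list):
        for item in term:
            hits.extend(find(item, key))
    return hits

def forbidden(site, key, value):
    """Correctness rule: no page may contain key = value."""
    return [p["page"] for p in site if value in find(p, key)]

def completeness(site, src_key, dst_key):
    """Completeness rule: every src_key value must occur as some dst_key.
    The returned set is the missing information to be repaired."""
    sources = {v for p in site for v in find(p, src_key) if isinstance(v, str)}
    targets = {v for p in site for v in find(p, dst_key) if isinstance(v, str)}
    return sources - targets
```

    On this toy site, requiring every member name to appear as some project lead reports Alan as missing information, which is exactly the kind of output a repair phase could act on.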

    CEO Compensation and Private Information: An Optimal Contracting Perspective

    We consider the joint optimal design of CEOs’ severance pay and on-the-job pay in a model in which the CEO has interim private information about the likely success of his strategy. The board faces a tradeoff between reducing the likelihood that the firm forgoes an efficient strategy change and limiting the CEO’s informational rents. The optimal truthtelling mechanism takes a simple form: it consists of fixed severance pay and high-powered, non-linear on-the-job pay, such as a bonus scheme or option grant. Our model makes testable predictions linking CEOs’ severance pay and on-the-job pay to each other as well as to the firm’s external business environment, firm size, and corporate governance.

    Trade, wages and productivity

    We develop a new general equilibrium model of trade with heterogeneous firms, variable demand elasticities and endogenously determined wages. Trade integration favours wage convergence, boosts competition, and forces the least efficient firms to leave the market, thereby affecting aggregate productivity. Since wage and productivity responses are endogenous, our model is well suited to studying the impact of trade integration on aggregate productivity and factor prices. Using Canada-US interregional trade data, we first estimate a system of theory-based gravity equations under the general equilibrium constraints generated by the model. Doing so allows us to measure 'border effects' and to decompose them into a 'pure' border effect, relative and absolute wage effects, and a selection effect. Using the estimated parameter values, we then quantify the impact of removing the Canada-US border on wages, productivity, mark-ups, the share of exporters, the mass of varieties produced and consumed, and thus welfare. Finally, we provide a similar quantification with respect to regional population changes.
    Keywords: heterogeneous firms; gravity equations; general equilibrium; monopolistic competition; variable demand elasticities
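    The gravity-equation step can be illustrated with a deliberately simplified sketch: an ordinary least-squares regression of log trade flows on log distance and a cross-border dummy, on simulated data. The paper's actual estimation imposes general equilibrium constraints and decomposes the border effect further; the dummy coefficient below is only the naive analogue of a 'border effect'.

```python
# Naive gravity regression on simulated region-pair trade flows:
#   log X_ij = a + b * log(dist_ij) + c * border_ij + e_ij
# The data-generating parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500

log_dist = rng.uniform(4.0, 8.0, n)           # log distance between pairs
border = rng.integers(0, 2, n).astype(float)  # 1 if the pair crosses the border
# "True" simulated parameters: distance elasticity -1.1, border effect -1.5.
log_flow = 10.0 - 1.1 * log_dist - 1.5 * border + 0.1 * rng.normal(size=n)

X = np.column_stack([np.ones(n), log_dist, border])
beta_hat, *_ = np.linalg.lstsq(X, log_flow, rcond=None)
intercept, dist_elasticity, border_effect = beta_hat
```

    A large negative coefficient on the dummy means that, distance held fixed, cross-border pairs trade much less than within-country pairs.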

    Pure subtype systems: a type theory for extensible software

    This thesis presents a novel approach to type theory called “pure subtype systems”, and a core calculus called DEEP which is based on that approach. DEEP is capable of modeling a number of interesting language techniques that have been proposed in the literature, including mixin modules, virtual classes, feature-oriented programming, and partial evaluation. The design of DEEP was motivated by two well-known problems: “the expression problem”, and “the tag elimination problem.” The expression problem is concerned with the design of an interpreter that is extensible, and requires an advanced module system. The tag elimination problem is concerned with the design of an interpreter that is efficient, and requires an advanced partial evaluator. We present a solution in DEEP that solves both problems simultaneously, which has never been done before. These two problems serve as an “acid test” for advanced type theories, because they make heavy demands on the static type system. Our solution in DEEP makes use of the following capabilities. (1) Virtual types are type definitions within a module that can be extended by clients of the module. (2) Type definitions may be mutually recursive. (3) Higher-order subtyping and bounded quantification are used to represent partial information about types. (4) Dependent types and singleton types provide increased type precision. The combination of recursive types, virtual types, dependent types, higher-order subtyping, and bounded quantification is highly non-trivial. We introduce “pure subtype systems” as a way of managing this complexity. Pure subtype systems eliminate the distinction between types and objects; every term can behave as either a type or an object depending on context. A subtype relation is defined over all terms, and subtyping, rather than typing, forms the basis of the theory. 
We show that higher-order subtyping is strong enough to completely subsume the traditional type relation, and we provide practical algorithms for type checking and for finding minimal types. The cost of using pure subtype systems lies in the complexity of the meta-theory. Unfortunately, we are unable to establish some basic meta-theoretic properties, such as type safety and transitivity elimination, although we have made some progress towards these goals. We formulate the subtype relation as an abstract reduction system, and we show that the type theory is sound if the reduction system is confluent. We can prove that reductions are locally confluent, but a proof of global confluence remains elusive. In summary, pure subtype systems represent a new and interesting approach to type theory. This thesis describes the basic properties of pure subtype systems, and provides concrete examples of how they can be applied. The DEEP calculus demonstrates that our approach has a number of real-world practical applications in areas that have proved to be quite difficult for traditional type theories to handle. However, the ultimate soundness of the technique remains an open question.
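    The reduction-system view of the meta-theory can be illustrated on a finite toy relation, where Newman's lemma applies: a terminating, locally confluent system is confluent. The relation below is invented and is not the DEEP subtype relation itself.

```python
# A finite, terminating abstract reduction system as a successor map.
# In such a system two elements are joinable exactly when they share a
# normal form, so local confluence can be checked by intersecting
# normal-form sets (Newman's lemma then gives global confluence).

STEPS = {"a": {"b", "c"}, "b": {"d"}, "c": {"d"}, "d": set()}

def normal_forms(steps, x):
    """All normal forms reachable from x (terminating system assumed)."""
    succ = steps[x]
    if not succ:
        return {x}
    nf = set()
    for y in succ:
        nf |= normal_forms(steps, y)
    return nf

def locally_confluent(steps):
    """Every peak b <- a -> c must be joinable at a common reduct."""
    for succ in steps.values():
        for b in succ:
            for c in succ:
                if not (normal_forms(steps, b) & normal_forms(steps, c)):
                    return False
    return True

def confluent(steps):
    """In a terminating system, confluence = unique normal forms."""
    return all(len(normal_forms(steps, x)) == 1 for x in steps)
```

    The diamond a -> {b, c} -> d passes both checks; dropping the joining element d would make the system fail them, which is the finite analogue of the global-confluence question left open in the thesis.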

    Essays in panel data econometrics with cross-sectional dependence

    The behavior of economic agents is characterized by interdependencies that arise from common shocks, strategic interactions or spill-over effects. Developing new econometric methodologies for inference in panel data with cross-sectional dependence is a common theme of this thesis. Another theme is econometric models that allow for heterogeneity across individual observations. Each chapter takes a different approach towards modeling and estimating panels with cross-sectional dependence and heterogeneity. In all chapters, the perspective is one where both the time series and the cross-sectional dimension are large. The first chapter develops a methodology for semiparametric panel data models with heterogeneous nonparametric covariate effects as well as unobserved time and individual-specific effects that may depend on the covariates in an arbitrary way. To model the covariate effects parsimoniously, we impose a dimensionality reducing common component structure on them. In the theoretical part of the chapter, we derive the asymptotic theory of the proposed procedure. In particular, we provide the convergence rates and the asymptotic distribution of our estimators. The asymptotic analysis is complemented by a Monte Carlo experiment that documents the small sample properties of our estimator. The second chapter investigates the effects of fragmentation in equity markets on the quality of trading outcomes. It uses a unique data set that reports the location and volume of trading on the FTSE 100 and 250 companies from 2008 to 2011 at the weekly frequency. This period coincided with a great deal of turbulence in the UK equity markets which had multiple causes that need to be controlled for. To achieve this, we use the common correlated effects estimator for large heterogeneous panels that approximates the unobserved factors with cross-sectional averages. We extend this estimator to quantile regression to analyze the whole conditional distribution of market quality. 
We find that both fragmentation in visible order books and dark trading that is offered outside the visible order book lower volatility. But dark trading increases the variability of volatility and trading volumes. Visible fragmentation has the opposite effect on the variability of volatility, in particular at the upper quantiles of the conditional distribution. The third chapter develops an estimator for heterogeneous panels with discrete outcomes in a setting where the individual units are subject to unobserved common shocks. Like the estimator in chapter 2, the proposed estimator belongs to the class of common correlated effects estimators and it assumes that the unobserved factors are contained in the span of the observed factors and the cross-sectional averages of the regressors. The proposed estimator can be computed by estimating binary response models applied to a regression that is augmented with the cross-sectional averages of the individual-specific regressors. The asymptotic properties of this approach are documented as both the time series and the cross-section tend to infinity. In particular, I show that both the estimators of the individual-specific coefficients and the mean group estimator are consistent and asymptotically normal. The small-sample behavior of the mean group estimator is assessed in a Monte Carlo experiment. The methodology is applied to the question of how funding costs in corporate bond markets affect the conditional probability of issuing a corporate bond.
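    The common correlated effects idea that runs through these chapters can be sketched on simulated data: approximate the unobserved factor with cross-sectional averages, run unit-by-unit regressions on the augmented design, and average the slopes (the mean group estimator). The simulation design below is illustrative only and uses a linear outcome rather than the thesis's quantile or binary-response variants.

```python
# Minimal common correlated effects (CCE) mean group estimator on
# simulated heterogeneous panel data with one unobserved common shock.
import numpy as np

rng = np.random.default_rng(1)
N, T = 50, 200                                   # large N and large T

factor = rng.normal(size=T)                      # unobserved common shock
beta = rng.normal(1.0, 0.2, size=N)              # heterogeneous slopes, mean 1.0
load_x = rng.normal(1.0, 0.3, size=N)            # factor loadings in x
load_y = rng.normal(1.0, 0.3, size=N)            # factor loadings in y

x = load_x[:, None] * factor + rng.normal(size=(N, T))
y = beta[:, None] * x + load_y[:, None] * factor + 0.5 * rng.normal(size=(N, T))

# Cross-sectional averages proxy the span of the unobserved factor.
xbar, ybar = x.mean(axis=0), y.mean(axis=0)

slopes = []
for i in range(N):
    Z = np.column_stack([np.ones(T), x[i], xbar, ybar])  # augmented design
    coef, *_ = np.linalg.lstsq(Z, y[i], rcond=None)
    slopes.append(coef[1])                               # unit-specific slope

mean_group = float(np.mean(slopes))              # estimates E[beta] = 1.0
```

    Omitting the averages from the design would bias each unit's slope, because x and y load on the same unobserved factor; the augmentation absorbs it.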

    Advanced macroeconomics: an easy guide

    Macroeconomic policy is one of the most important policy domains, and the tools of macroeconomics are among the most valuable for policy makers. Yet there has been, up to now, a wide gulf between the level at which macroeconomics is taught at the undergraduate level and the level at which it is practiced. At the same time, doctoral-level textbooks are usually not targeted at a policy audience, making advanced macroeconomics less accessible to current and aspiring practitioners. This book, born out of the Masters course the authors taught for many years at the Harvard Kennedy School, fills this gap. It introduces the tools of dynamic optimization in the context of economic growth, and then applies them to a wide range of policy questions – ranging from pensions, consumption, investment and finance, to the most recent developments in fiscal and monetary policy. It does so with the requisite rigor, but also with a light touch, and an unyielding focus on their application to policy-making, as befits the authors’ own practical experience.

    Advanced Macroeconomics

    Macroeconomic policy is one of the most important policy domains, and the tools of macroeconomics are among the most valuable for policy makers. Yet there has been, up to now, a wide gulf between the level at which macroeconomics is taught at the undergraduate level and the level at which it is practiced. At the same time, doctoral-level textbooks are usually not targeted at a policy audience, making advanced macroeconomics less accessible to current and aspiring practitioners. This book, born out of the Masters course the authors taught for many years at the Harvard Kennedy School, fills this gap. It introduces the tools of dynamic optimization in the context of economic growth, and then applies them to a wide range of policy questions – ranging from pensions, consumption, investment and finance, to the most recent developments in fiscal and monetary policy. It does so with the requisite rigor, but also with a light touch, and an unyielding focus on their application to policy-making, as befits the authors’ own practical experience. Advanced Macroeconomics: An Easy Guide is bound to become a great resource for graduate and advanced undergraduate students, and practitioners alike.