
    ECONOMETRICS AND REALITY

    Starting from a realist ontology, the economic methodologist Tony Lawson argues that econometrics is a failed project. The philosopher of science Nancy Cartwright, apparently more sympathetic to econometrics and likewise writing from a realist perspective, nonetheless argues for conditions of applicability so stringent that she must seriously doubt the usefulness of econometrics. In this paper, I reconsider Lawson's and Cartwright's analyses and argue that realism supports, rather than undermines, econometrics properly interpreted and executed.

    The Theoretical Argument for Disproving Asymptotic Upper-Bounds on the Accuracy of Part-of-Speech Tagging Algorithms: Adopting a Linguistics, Rule-Based Approach

    This paper takes a deep dive into a particular area of the interdisciplinary domain of Computational Linguistics: Part-of-Speech Tagging algorithms. The author relies primarily on scholarly Computer Science and Linguistics papers to describe previous approaches to this task and the often-hypothesized asymptotic accuracy rate of around 98% by which this task is allegedly bound. However, after further research into why the accuracy of previous algorithms has behaved in this asymptotic manner, the author identifies valid and empirically backed reasons why the accuracy of previous approaches does not necessarily reflect any general asymptotic bound on the task of automated Part-of-Speech Tagging. In response, a theoretical argument is proposed to circumvent the shortcomings of previous approaches: abandoning the flawed status quo of training machine learning algorithms and predictive models on outdated corpora, the paper instead walks the reader from conception through implementation of a rule-based algorithm with roots in both practical and theoretical Linguistics. While the resulting algorithm is only a prototype that cannot currently be verified to achieve a tagging accuracy above 98%, its multi-tiered methodology, meant to mirror aspects of human cognition in Natural Language Understanding, is intended to serve as a theoretical blueprint for a new and more reliable way to address the challenges of Part-of-Speech Tagging, and to provide much-needed advances in the popular area of Natural Language Processing.
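
    A purely illustrative sketch of what a multi-tiered, rule-based tagger could look like; this is not the author's prototype (whose rules are not given in the abstract), and the lexicon, suffix rules, and default tag below are all assumed for illustration:

```python
# Purely illustrative sketch of a tiered, rule-based POS tagger.
# This is NOT the paper's algorithm; the lexicon, suffix rules, and
# default tag are invented here to show the layered-rules idea.

LEXICON = {  # tier 1: known words (closed-class items especially)
    "the": "DET", "a": "DET", "and": "CONJ",
    "dog": "NOUN", "run": "VERB",
}

SUFFIX_RULES = [  # tier 2: morphological cues, checked in order
    ("ing", "VERB"), ("ed", "VERB"), ("ly", "ADV"), ("ness", "NOUN"),
]

def tag(token: str) -> str:
    word = token.lower()
    if word in LEXICON:               # tier 1: lexicon lookup
        return LEXICON[word]
    for suffix, pos in SUFFIX_RULES:  # tier 2: suffix heuristics
        if word.endswith(suffix):
            return pos
    return "NOUN"                     # tier 3: default to the open class
                                      # most unknown words belong to

if __name__ == "__main__":
    print([(t, tag(t)) for t in "The dog quickly jumped".split()])
    # [('The', 'DET'), ('dog', 'NOUN'), ('quickly', 'ADV'), ('jumped', 'VERB')]
```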

    New Directions in Compensation Research: Synergies, Risk, and Survival

    We describe and use two theoretical frameworks, the resource-based view of the firm and institutional theory, as lenses for examining three promising areas of compensation research. First, we examine the nature of the relationship between pay and effectiveness. Does pay typically have a main effect or, instead, does the relationship depend on other human resource activities and organization characteristics? If the latter is true, then there are synergies between pay and these other factors, and conclusions drawn from main-effects models may be misleading. Second, we discuss a relatively neglected issue in pay research: the concept of risk as it applies to investments in pay programs. Although firms and researchers tend to focus on expected returns from compensation interventions, analysis of the risk, or variability, associated with these returns may be essential for effective decision-making. Finally, pay program survival, which has been virtually ignored in systematic pay research, is investigated. Survival appears to have important consequences for estimating pay plan risk and returns, and is also integral to the discussion of pay synergies. Based upon our two theoretical frameworks, we suggest specific research directions for pay program synergies, risk, and survival.
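
    To make the abstract's expected-return-versus-risk distinction concrete, here is a toy calculation that is not drawn from the paper; all program names and figures are invented:

```python
# Toy illustration (all figures invented, not from the paper): two
# hypothetical pay programs with the same expected return but very
# different risk, i.e. variability of returns across sites.
from statistics import mean, stdev

# Hypothetical annual returns (%) of each program across five plants
program_a = [4.8, 5.1, 5.0, 4.9, 5.2]     # stable incentive plan
program_b = [12.0, -3.0, 9.5, -1.5, 8.0]  # volatile incentive plan

for name, returns in [("A", program_a), ("B", program_b)]:
    print(f"Program {name}: expected return {mean(returns):.2f}%, "
          f"risk (std dev) {stdev(returns):.2f}%")

# Both programs have a 5.00% expected return, so an analysis of
# expected returns alone treats them as equivalent; the standard
# deviation exposes the risk dimension the abstract argues is neglected.
```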

    The Admissibility of TrueAllele: A Computerized DNA Interpretation System

    Why experiments matter

    Experimentation is traditionally considered a privileged means of confirmation. However, why and how experiments form a better confirmatory source relative to other strategies is unclear, and recent discussions have identified experiments with various modeling strategies on the one hand and with ‘natural’ experiments on the other. We argue that experiments aiming to test theories are best understood as controlled investigations of specimens. ‘Control’ involves repeated, fine-grained causal manipulation of focal properties; this capacity generates rich knowledge of the object investigated. ‘Specimenhood’ involves possessing relevant properties given the investigative target and the hypothesis in question; specimens are thus representative members of the class of systems to which a hypothesis refers. It is in virtue of both control and specimenhood that experiments provide powerful confirmatory evidence. This explains the distinctive power of experiments: although modelers exert extensive control, they do not exert it over specimens; although natural experiments utilize specimens, control is diminished.