
    Chinese Numeratives and the Mass/Count Distinction

    This study investigates the mass/count distinction for lexical nouns and how this distinction is formalized morphosyntactically in language. English is one language in which a grammaticized mass/count distinction can be seen--though there are varying explanations regarding what this distinction actually signifies. Chinese, on the other hand, is a language that might appear to lack a formalized mass/count distinction. However, I postulate that Mandarin Chinese does in fact have a syntactic-distributional diagnostic available for teasing apart mass nouns from count nouns. The diagnostic that I propose for finding a mass/count distinction among lexical nouns in Mandarin lies in the distribution of two measure words, xie and dian. More specifically, I hypothesize that the partitive measures xie and dian have different distributions with lexical nouns. The first, xie, is postulated to be compatible with all nouns, regardless of mass or count status. The second, dian, is hypothesized to be more selective, compatible with mass nouns but not with count nouns. Thus, one might say that English realizes the mass/count distinction in a much more elaborate formal system than does Chinese, but both languages nonetheless manifest the distinction. The results of my study suggest that there are grounds for claiming that xie and dian adhere at least in part to the distributions I hypothesized, though the pattern was not as strong in some cases as I had originally expected; indeed, other variables--notably the size of the referent--may influence the acceptability of these measure words with particular nouns. Follow-up research with more tightly controlled stimuli is needed to determine how reliable the xie/dian diagnostic truly is as a means of illuminating a mass/count distinction among lexical nouns in Mandarin Chinese.
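    The proposed diagnostic reduces to a simple decision rule over speakers' acceptability judgments. Below is a minimal sketch in Python of that rule; the nouns and judgments are illustrative placeholders, not data from the study.

        # Sketch of the hypothesized xie/dian diagnostic for Mandarin nouns.
        # The judgments below are illustrative placeholders, not study data.
        def classify_noun(accepts_xie: bool, accepts_dian: bool) -> str:
            """Apply the hypothesized distribution: xie combines with all
            nouns, dian only with mass nouns."""
            if accepts_xie and accepts_dian:
                return "mass"
            if accepts_xie and not accepts_dian:
                return "count"
            return "unclassified"  # pattern the hypothesis does not predict

        # Hypothetical judgments for two nouns
        judgments = {
            "shui 'water'": (True, True),   # expected mass behaviour
            "shu 'book'": (True, False),    # expected count behaviour
        }
        for noun, (xie_ok, dian_ok) in judgments.items():
            print(noun, "->", classify_noun(xie_ok, dian_ok))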

    If We Build It, Will They Legislate? Empirically Testing the Potential of the Nondelegation Doctrine to Curb Congressional Abdication

    A widely held view for why the Supreme Court would be right to revive the nondelegation doctrine is that Congress has perverse incentives to abdicate its legislative role and evade accountability through the use of delegations, either expressly delineated or implied through statutory imprecision, and that enforcement of the nondelegation doctrine would correct for those incentives. We call this the Field of Dreams Theory—if we build the nondelegation doctrine, Congress will legislate. Unlike originalist arguments for the revival of the nondelegation doctrine, this theory has widespread appeal and is instrumental to the Court's project of gaining popular acceptance of a greater judicial role in policing congressional decisions regarding delegation. But is it true? In this article, we comprehensively test the theory at the state level, using two original datasets: one comprising all laws passed by state legislatures and the other comprising all nondelegation decisions in the state supreme courts. Using a variety of measures and methods, and in contrast with the one existing study on the subject, we do observe at least some statistically measurable decrease in delegation, if only by certain measures. However, when put in context, these findings are underwhelming compared to the predictions of the Field of Dreams Theory. For instance, we observe that, even where it exists, this effect is substantively small and on par with a number of other factors that influence delegation—our best estimate is that nondelegation cases explain about 1.5 percent of the variation in delegation. Moreover, we also find some evidence that is directly contrary to the Field of Dreams Theory: enforcement of the nondelegation doctrine actually leads to more implied delegation in the form of vague and precatory statutory language. These findings have direct relevance to contemporary debates and cases entertaining a revitalization of the nondelegation doctrine in the federal courts. First, the finding that enforcement of the doctrine can prospectively decrease legislative delegation suggests that there may be something to the Field of Dreams Theory, although that in turn raises the stakes of debates over whether less delegation would actually be good for public welfare. Second, the weakness of that effect, both in an absolute sense and relative to other factors, undermines overblown claims that the nondelegation doctrine could fundamentally transform how government works. And finally, our finding that judicial decisions enforcing the nondelegation doctrine can sometimes lead to more implied delegation through imprecise statutory language suggests that there may be unintended consequences to giving the nondelegation doctrine a new lease on life.
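    The headline figure, about 1.5 percent of the variation in delegation explained, is an R-squared-style quantity. A minimal sketch of how such an incremental-variance estimate might be computed, assuming a hypothetical panel dataset (the file name and column names below are illustrative stand-ins, not the authors' actual data):

        # Sketch: estimating the share of variation in delegation explained
        # by nondelegation enforcement. The dataset, file name, and column
        # names are hypothetical stand-ins, not the authors' data.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("state_legislation_panel.csv")  # hypothetical file

        # Baseline: delegation measure on state and year effects only
        base = smf.ols("delegation_score ~ C(state) + C(year)", data=df).fit()
        # Add an indicator for a prior nondelegation enforcement decision
        full = smf.ols(
            "delegation_score ~ C(state) + C(year) + nondelegation_case",
            data=df,
        ).fit()

        # Incremental share of variation attributable to the doctrine
        print("added R^2:", full.rsquared - base.rsquared)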

    Taxonomy of Risks posed by Language Models

    Responsible innovation on large-scale Language Models (LMs) requires foresight into and in-depth understanding of the risks these models may pose. This paper develops a comprehensive taxonomy of ethical and social risks associated with LMs. We identify twenty-one risks, drawing on expertise and literature from computer science, linguistics, and the social sciences. We situate these risks in our taxonomy of six risk areas: I. Discrimination, Hate speech and Exclusion, II. Information Hazards, III. Misinformation Harms, IV. Malicious Uses, V. Human-Computer Interaction Harms, and VI. Environmental and Socioeconomic harms. For risks that have already been observed in LMs, the causal mechanism leading to harm, evidence of the risk, and approaches to risk mitigation are discussed. We further describe and analyse risks that have not yet been observed but are anticipated based on assessments of other language technologies, and situate these in the same taxonomy. We underscore that it is the responsibility of organizations to engage with the mitigations we discuss throughout the paper. We close by highlighting challenges and directions for further research on risk evaluation and mitigation, with the goal of ensuring that language models are developed responsibly.
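    The six risk areas form a natural top level for a machine-readable taxonomy. A minimal sketch, using only the area names given in the abstract; the individual risk entries are deliberately left empty rather than guessed:

        # Sketch: the paper's six risk areas as a nested data structure.
        # The twenty-one specific risks would populate the lists; they are
        # left empty here rather than reconstructed from memory.
        RISK_AREAS: dict[str, list[str]] = {
            "I. Discrimination, Hate speech and Exclusion": [],
            "II. Information Hazards": [],
            "III. Misinformation Harms": [],
            "IV. Malicious Uses": [],
            "V. Human-Computer Interaction Harms": [],
            "VI. Environmental and Socioeconomic harms": [],
        }

        def register_risk(area: str, risk: str) -> None:
            """File an identified risk under its area of the taxonomy."""
            RISK_AREAS[area].append(risk)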

    Detecting grammatical errors with treebank-induced, probabilistic parsers

    Today's grammar checkers often use hand-crafted rule systems that define acceptable language. The development of such rule systems is labour-intensive and has to be repeated for each language. At the same time, grammars automatically induced from syntactically annotated corpora (treebanks) are successfully employed in other applications, for example in text understanding and machine translation. At first glance, treebank-induced grammars seem unsuitable for grammar checking, as they massively over-generate and fail to reject ungrammatical input due to their high robustness. We present three new methods for judging the grammaticality of a sentence with probabilistic, treebank-induced grammars, demonstrating that such grammars can be successfully applied to automatically judge the grammaticality of an input string. Our best-performing method exploits the differences between parse results for grammars trained on grammatical and ungrammatical treebanks. The second approach builds an estimator of the probability of the most likely parse using grammatical training data that has previously been parsed and annotated with parse probabilities. If the estimated probability of an input sentence (whose grammaticality is to be judged by the system) is higher by a certain amount than the actual parse probability, the sentence is flagged as ungrammatical. The third approach extracts discriminative parse tree fragments in the form of CFG rules from parsed grammatical and ungrammatical corpora and trains a binary classifier to distinguish grammatical from ungrammatical sentences. The three approaches are evaluated on a large test set of grammatical and ungrammatical sentences. The ungrammatical test set is generated automatically by inserting common grammatical errors into the British National Corpus. The results are compared to two traditional approaches: one that uses a hand-crafted, discriminative grammar, the XLE ParGram English LFG, and one based on part-of-speech n-grams. In addition, the baseline methods and the new methods are combined in a machine learning-based framework, yielding further improvements.
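    The second approach amounts to a threshold test on parse probabilities. A minimal sketch of that decision rule, stated over log-probabilities; the function name, margin value, and example numbers are illustrative assumptions, not taken from the thesis:

        # Sketch of the second approach's decision rule: flag a sentence as
        # ungrammatical when its best parse is much less probable than the
        # estimator (trained on grammatical text) predicts it should be.
        # Margin and example values are illustrative assumptions.
        def is_ungrammatical(estimated_logprob: float,
                             actual_logprob: float,
                             margin: float = 2.0) -> bool:
            return (estimated_logprob - actual_logprob) > margin

        # Example: the estimator predicts log P = -40.0 for a sentence of
        # this shape, but the most likely parse found scores log P = -45.5.
        print(is_ungrammatical(-40.0, -45.5))  # True -> flagged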

    Cognitive lexicon


    Contributions to accounting, auditing and internal control : essays in honour of professor Teija Laitinen

    Note on languages: text in Finnish and English. Authors: Bürkland Sirle, Gullkvist Benita, Kallio Minna, Back Barbro, Kihn Lili, Näsi Salme, Koskela Merja, Pilke Nina, Laaksonen Pirjo, Jyrinki Henna, Morton Anja, Myllymäki Emma-Riikka, Jokipii Annukka, Niskanen Mervi, Virtanen Aila. Peer reviewed.

    Legal Ethics: The Integrity Thesis

