19 research outputs found

    An Analysis Of Punctuation Errors Of Three Hundred Freshmen Students Of Prairie View College, 1944-1945

    Statement of the Problem: The problems of punctuation in the written form of language appear to have caused the teachers of Prairie View College much concern. There have been many criticisms of college students who continually grow worse in punctuating essay-type examinations, compositions, periodical reports, letters, and investigative papers. This study deals with the punctuation errors of students beginning college work in English. The problems that confront this research are: (1) What are the punctuation needs of the average Prairie View student? (2) What alterations in the English curriculum should be made to satisfy these needs? The following questions are pertinent in answering the above problems for freshman students: (1) Are the punctuation errors of Prairie View students numerous enough to suggest special remedial attention? (2) Is there any relationship between a student's background and his ability to punctuate? (3) Is the present program in English adequate to take care of the punctuation needs of the students?

    Source of Data: The data for this study were obtained from the work in individual folders that are kept by the teachers of the English department for each student in their classes. At the beginning of the school term, the students give their folders to their teachers. As assignments are written up by the students, they are put in the folders and kept after the student has seen his grade and checked his errors. Examinations are filed the same way. The student's work as it appears in the folder has already been corrected by the English teachers. The writer studied the punctuation errors, including those marked by the teacher and others, appearing in three hundred folders: 197 folders from the freshman class which entered college in September 1944 and 103 folders from the freshman class which entered college in February 1945. The folders contain letters, lists of sentences, compositions, tests, periodical reports, and Biblical stories, as prescribed by the English department. In the selection of this material only those punctuation marks receiving the most frequent usage were considered: namely, the period, the comma, the colon, the semicolon, the question mark, the quotation marks, and the hyphen.
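    The core of such a study is a tally of teacher-marked errors per punctuation mark and per student. As a rough modern illustration only (the record layout, field names, and the aggregation itself are my assumptions, not the thesis's procedure), a minimal Python sketch might look like this:

    ```python
    from collections import Counter, defaultdict

    # The seven marks examined in the study.
    MARKS = {"period", "comma", "colon", "semicolon",
             "question mark", "quotation marks", "hyphen"}

    def tally_errors(annotations):
        """Aggregate hypothetical (student_id, mark) error annotations,
        one entry per error a teacher marked in a student's folder."""
        by_mark = Counter()
        by_student = defaultdict(Counter)
        for student_id, mark in annotations:
            if mark in MARKS:  # ignore marks outside the study's scope
                by_mark[mark] += 1
                by_student[student_id][mark] += 1
        return by_mark, by_student

    # Illustrative usage with made-up data.
    sample = [("s001", "comma"), ("s001", "comma"), ("s002", "semicolon")]
    totals, per_student = tally_errors(sample)
    print(totals.most_common())  # [('comma', 2), ('semicolon', 1)]
    ```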

    Information-Based Aspects of Punctuation


    Functional punctuation in secondary-school English

    Thesis (M.A.)--Boston University, 1945. This item was digitized by the Internet Archive.

    Computer-aided analysis of English punctuation on a parsed corpus: the special case of comma

    Ankara: Department of Computer Engineering and Information Science and the Institute of Engineering and Science of Bilkent University, 1996. Thesis (Master's), Bilkent University, 1996. Includes bibliographical references (leaves 51-56). Punctuation, an orthographical component of language, has usually been ignored by most research in computational linguistics over the years. One reason for this is the overall difficulty of the subject; another is the absence of a good theory. On the other hand, both ‘conventional’ and computational linguistics have paid increasing attention to punctuation in recent years, because it has been realized that true understanding and processing of written language will be almost impossible if punctuation marks are not taken into account. Except for the lists of rules given in style manuals or usage books, we know little about punctuation. These books give us information about how we should punctuate, but they are generally silent about actual punctuation practice. This thesis contains the details of a computer-aided experiment to investigate English punctuation practice, for the special case of the comma (the most significant punctuation mark), in a parsed corpus. The experiment attempts to classify the various uses of the comma according to the syntax patterns in which it occurs. The corpus (the Penn Treebank) consists of syntactically annotated sentences with no part-of-speech tag information about individual words, and this ideally seems to be enough to classify ‘structural’ punctuation marks. Bayraktar, Murat (M.S.).
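    The classification the thesis describes groups commas by the syntactic context in which they occur. As a rough illustration (not the author's code; the use of NLTK's Penn Treebank sample and the exact grouping criterion are my assumptions), the sketch below tallies, for every comma, the label of its parent constituent together with the labels of its sibling constituents:

    ```python
    from collections import Counter
    import nltk
    from nltk.corpus import treebank

    # nltk.download("treebank")  # fetch the Penn Treebank sample on first run

    def comma_patterns(parsed_sentences):
        """For each comma, record (parent label, labels of the parent's
        children), e.g. ('NP', ('NP', ',', 'NP', ',', 'CC', 'NP'))."""
        patterns = Counter()
        for tree in parsed_sentences:
            # Skip preterminals (height 2); we want phrase-level contexts.
            for node in tree.subtrees(lambda t: t.height() > 2):
                labels = tuple(c.label() if isinstance(c, nltk.Tree) else c
                               for c in node)
                if "," in labels:
                    patterns[(node.label(), labels)] += 1
        return patterns

    patterns = comma_patterns(treebank.parsed_sents())
    for (parent, children), count in patterns.most_common(10):
        print(f"{count:5d}  {parent} -> {' '.join(children)}")
    ```

    A frequent pattern such as NP -> NP , NP , CC NP would then be a candidate for a list-separating use of the comma, while a pattern with a fronted SBAR followed by a comma would suggest a clause-setting-off use.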

    Current Approaches to Punctuation in Computational Linguistics

    Some recent studies in computational linguistics have aimed to take advantage of the various cues presented by punctuation marks. This short survey is intended to summarise these research efforts and, additionally, to outline a current perspective on the usage and functions of punctuation marks. We conclude by presenting an information-based framework for punctuation, influenced by treatments of several related phenomena in computational linguistics. © 1997 Kluwer Academic Publishers.

    Maximum Entropy Models For Natural Language Ambiguity Resolution

    This thesis demonstrates that several important kinds of natural language ambiguities can be resolved to state-of-the-art accuracies using a single statistical modeling technique based on the principle of maximum entropy. We discuss the problems of sentence boundary detection, part-of-speech tagging, prepositional phrase attachment, natural language parsing, and text categorization under the maximum entropy framework. In practice, we have found that maximum entropy models offer the following advantages. State-of-the-art Accuracy: the probability models for all of the tasks discussed perform at or near state-of-the-art accuracies, or outperform competing learning algorithms when trained and tested under similar conditions; methods which outperform those presented here require much more supervision in the form of additional human involvement or additional supporting resources. Knowledge-Poor Features: the facts used to model the data, or features, are linguistically very simple, or knowledge-poor, yet they succeed in approximating complex linguistic relationships. Reusable Software Technology: the mathematics of the maximum entropy framework are essentially independent of any particular task, and a single software implementation can be used for all of the probability models in this thesis. The experiments in this thesis suggest that experimenters can obtain state-of-the-art accuracies on a wide range of natural language tasks, with little task-specific effort, by using maximum entropy probability models.
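    A multinomial maximum entropy model is equivalent to multinomial logistic regression, so the shape of the approach can be sketched with off-the-shelf tools. The example below is a minimal, hypothetical sentence-boundary classifier using scikit-learn's LogisticRegression over a few knowledge-poor contextual features around each period candidate; the feature set, the toy training data, and the choice of scikit-learn are my illustrative assumptions, not the implementation described in the thesis:

    ```python
    # Maximum entropy (multinomial logistic regression) sketch for the
    # sentence-boundary task: is a given '.' the end of a sentence?
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression

    def features(prev_word, next_word):
        """Knowledge-poor features: the word left of the '.' (period stripped)
        and the word to its right."""
        return {
            "prev": prev_word.lower(),
            "prev_short": len(prev_word) <= 3,   # abbreviations tend to be short
            "next_cap": next_word[:1].isupper(),
        }

    # Tiny made-up training set: (word before '.', word after '.', is_boundary).
    train = [
        ("Mr", "Smith", False), ("Dr", "Brown", False), ("Inc", "said", False),
        ("home", "The", True), ("today", "She", True), ("done", "Next", True),
    ]

    vec = DictVectorizer()
    X = vec.fit_transform(features(p, n) for p, n, _ in train)
    y = [label for _, _, label in train]

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Classify a new period candidate.
    x_new = vec.transform([features("etc", "However")])
    print(model.predict(x_new), model.predict_proba(x_new))
    ```

    With six hand-made examples this is not a meaningful model; the point is only the pipeline: simple, largely binary features feeding a single log-linear model that could be reused across the tasks the abstract lists.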