    Syntax-directed documentation for PL360

    PL360 is a phrase-structured programming language which provides the facilities of a symbolic machine language for the IBM 360 computers. An automatic process, syntax-directed documentation, is described which acquires programming documentation through the syntactical analysis of a program, followed by the interrogation of the originating programmer. This documentation can be dispensed through reports or file query replies when other programmers later need to know the program structure and its details. A key principle of the programming documentation process is that it is managed solely on the basis of the syntax of programs.
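
    A minimal sketch of the loop that abstract describes, assuming modern stand-ins throughout: Python's own parser takes the place of a PL360 syntax analyzer, and every name below is illustrative rather than from the original system. The program is syntactically analyzed, the originating programmer is interrogated about each construct found, and later file queries are answered from the stored replies.

    import ast

    def collect_units(source: str):
        """Yield the syntactic units (here: function definitions) found by parsing."""
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.FunctionDef):
                yield node.name

    def interrogate(units):
        """Ask the originating programmer to document each unit the parser found."""
        return {name: input(f"Describe the purpose of '{name}': ") for name in units}

    def query(docs: dict, name: str) -> str:
        """Answer a later file query about a documented unit."""
        return docs.get(name, "no documentation recorded")

    program = "def swap(a, b):\n    return b, a\n"
    documentation = interrogate(collect_units(program))
    print(query(documentation, "swap"))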

    XML technology assisted research paper abstract writing

    Given its briefness, inherent complexity and massive and critical use in scientific discourse, the research paper abstract (RPA) is a text type particularly interesting for both linguistic modelling (writing and reading) and automatic processing (generation and parsing). Even though the current literature in these fields is large and promising, there are still various gaps to fill, especially in the domain of the interplay between linguistic modelling and the development of applications for the solution of communication problems. Our purpose here is to present the RedACTe Project's approach to the design of software oriented to rhetorical and linguistic assistance in RPA writing.

    Exploiting Deep Semantics and Compositionality of Natural Language for Human-Robot-Interaction

    We develop a natural language interface for human-robot interaction that implements reasoning about deep semantics in natural language. To realize the required deep analysis, we employ methods from cognitive linguistics, namely the modular and compositional framework of Embodied Construction Grammar (ECG) [Feldman, 2009]. Using ECG, robots are able to solve fine-grained reference resolution problems and other issues related to deep semantics and compositionality of natural language. This also includes verbal interaction with humans to clarify commands and queries that are too ambiguous to be executed safely. We implement our NLU framework as a ROS package and present proof-of-concept scenarios with different robots, as well as a survey of the state of the art.
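
    A toy sketch of the clarification behaviour described above, not the authors' ECG/ROS implementation: when a referring expression matches more than one object in the robot's world model, the robot asks back instead of acting. The world model, attribute names and phrasing are all invented for illustration.

    # Invented world model: two cups that differ only in colour.
    WORLD = [
        {"id": "cup-1", "type": "cup", "color": "red"},
        {"id": "cup-2", "type": "cup", "color": "blue"},
    ]

    def resolve(expr: dict) -> list:
        """Return every object satisfying all features of the referring expression."""
        return [o for o in WORLD if all(o.get(k) == v for k, v in expr.items())]

    def execute_or_clarify(expr: dict) -> str:
        candidates = resolve(expr)
        if len(candidates) == 1:
            return f"grasping {candidates[0]['id']}"          # unambiguous: act
        if not candidates:
            return "I cannot find such an object."
        options = " or ".join(o["color"] for o in candidates)
        return f"Which one do you mean: the {options} one?"   # ambiguous: ask back

    print(execute_or_clarify({"type": "cup"}))                  # triggers a question
    print(execute_or_clarify({"type": "cup", "color": "red"}))  # executes safely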

    E/Valuating new media in language development

    This paper addresses the need for a new approach to the educational evaluation of software that falls under the rubric "new media" or "multimedia" as distinct from previous generations of Computer-Assisted Language Learning (CALL) software. The authors argue that present approaches to CALL software evaluation are not appropriate for a new genre of CALL software distinguished by its shared assumptions about language learning and teaching as well as by its technical design. The paper sketches a research-based program called "E/Valuation" that aims to assist language educators to answer questions about the educational effectiveness of recent multimedia language learning software. The authors suggest that such a program needs to take into account not only the nature of the new media and its potential to promote language learning in novel ways, but also current professional knowledge about language learning and teaching.

    Flexibility and Interaction at a Distance: A Mixed-Model Environment For Language Learning

    This article reports on the process of design and development of two language courses for university students at beginning levels of competence. Following a preliminary experience in a low-tech environment for distance language learning and teaching, and a thorough review of the available literature, we identified two major challenges that would need to be addressed in our design: (1) a necessity to build sufficient flexibility into the materials to cater to a variety of learners' styles, interests and skill levels, therefore sustaining learners' motivation; and (2) a need to design materials that would present the necessary requisites of authenticity and interactivity identified in the examined literature, in spite of the reduced opportunities for face-to-face communication. In response to these considerations, we designed and developed learning materials and tasks to be distributed on CD-ROM, complemented by a WebCT component for added interactivity and task authenticity. Although only part of the original design was implemented, and further research is needed to assess the impact of our environment on learning outcomes, the results of preliminary evaluations are encouraging.

    Synthesizing Program Input Grammars

    We present an algorithm for synthesizing a context-free grammar encoding the language of valid program inputs from a set of input examples and blackbox access to the program. Our algorithm addresses shortcomings of existing grammar inference algorithms, which both severely overgeneralize and are prohibitively slow. Our implementation, GLADE, leverages the grammar synthesized by our algorithm to fuzz test programs with structured inputs. We show that GLADE substantially increases the incremental coverage on valid inputs compared to two baseline fuzzers.
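
    A minimal sketch of the general idea, not the GLADE algorithm itself (whose generalization and checking steps are far more refined): grow a set of accepted inputs from a seed example by proposing candidate expansions and keeping only those the blackbox program accepts. The accepts oracle below is an invented stand-in for the program under test.

    import re

    def accepts(s: str) -> bool:
        """Invented stand-in for blackbox access: accepts runs of a's then b's."""
        return bool(re.fullmatch(r"a+b+", s))

    def generalize(seed: str):
        """Propose candidates by repeating each substring of the seed in place."""
        for i in range(len(seed)):
            for j in range(i + 1, len(seed) + 1):
                yield seed[:j] + seed[i:j] + seed[j:]

    def synthesize(seed: str, rounds: int = 2) -> set:
        """Oracle-checked expansion: keep only candidates the program accepts."""
        language = {seed}
        for _ in range(rounds):
            language |= {c for s in language for c in generalize(s) if accepts(c)}
        return language

    print(sorted(synthesize("ab")))  # 'ab', 'aab', 'abb', 'aabb', ... all accepted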

    A comparative evaluation of deep and shallow approaches to the automatic detection of common grammatical errors

    This paper compares a deep and a shallow processing approach to the problem of classifying a sentence as grammatically well-formed or ill-formed. The deep processing approach uses the XLE LFG parser and English grammar: two versions are presented, one which uses the XLE directly to perform the classification, and another one which uses a decision tree trained on features consisting of the XLE’s output statistics. The shallow processing approach predicts grammaticality based on n-gram frequency statistics: we present two versions, one which uses frequency thresholds and one which uses a decision tree trained on the frequencies of the rarest n-grams in the input sentence. We find that the use of a decision tree improves on the basic approach only for the deep parser-based approach. We also show that combining both the shallow and deep decision tree features is effective. Our evaluation is carried out using a large test set of grammatical and ungrammatical sentences. The ungrammatical test set is generated automatically by inserting grammatical errors into well-formed BNC sentences.
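
    A toy sketch of the shallow frequency-threshold variant described above: score a sentence by the corpus frequency of its rarest n-gram and flag it as ill-formed below a threshold. The miniature corpus and the threshold are invented; the paper derives its statistics from large corpora such as the BNC.

    from collections import Counter

    CORPUS = "the cat sat on the mat . the dog sat on the rug .".split()

    def ngram_counts(tokens, n: int = 2) -> Counter:
        """Count all n-grams (here bigrams) in the reference corpus."""
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    COUNTS = ngram_counts(CORPUS)

    def rarest_ngram_freq(sentence: str, n: int = 2) -> int:
        """Frequency of the sentence's rarest n-gram in the reference corpus."""
        toks = sentence.lower().split()
        grams = [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
        return min((COUNTS[g] for g in grams), default=0)

    def is_grammatical(sentence: str, threshold: int = 1) -> bool:
        return rarest_ngram_freq(sentence) >= threshold

    print(is_grammatical("the cat sat on the rug"))  # True: every bigram attested
    print(is_grammatical("cat the sat rug on"))      # False: unseen bigrams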

    Transducer for Auto-Convert of Archaic to Present Day English for Machine Readable Text: A Support for Computer Assisted Language Learning

    There exist some English literary works in which archaic words are still used; these are relatively distinct from Present Day English (PDE). Some archaic words have undergone regular changing patterns: for instance, archaic modal verbs like mightst, darest and wouldst, whose -st ending historically disappears, resulting in might, dare and would (wouldst > would). Other archaic words, however, undergo distinct processes, resulting in unpredictable patterns, and the occurrence frequency of archaic English pronouns like thee ‘you’, thy ‘your’ and thyself ‘yourself’ is quite high. Students who are non-native speakers of English may face many difficulties when they encounter English texts that include these kinds of archaic words. How might a computer help such students? This paper aims to provide support from the perspective of Computer Assisted Language Learning (CALL). It proposes designs of lexicon transducers using Local Grammar Graphs (LGG) for the automatic conversion of archaic words to PDE in a machine readable literary text. The transducer is applied to a machine readable text taken from Sir Walter Scott’s Ivanhoe. The archaic words in the corpus can be converted automatically to PDE. The transducer also allows the presentation of both forms (archaic and PDE), the PDE lexicons only, or the original (archaic) form only. This will help students understand English literary works better. All the linguistic resources here are machine readable, ready to use, maintainable and open for further development. The method might also be adopted to build lexicon transducers for other languages.
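
    A minimal sketch of such a lexicon transducer in plain Python rather than the paper's Local Grammar Graphs, with an invented three-word irregular lexicon: unpredictable forms are looked up directly, the regular -st modal ending is stripped by rule, and the three presentation modes mentioned above are supported.

    import re

    IRREGULAR = {"thee": "you", "thy": "your", "thyself": "yourself"}
    MODAL_ST = re.compile(r"^(might|dare|would)st$")  # mightst > might, etc.

    def to_pde(word: str) -> str:
        """Convert one archaic word to Present Day English, if a rule applies."""
        w = word.lower()
        if w in IRREGULAR:
            return IRREGULAR[w]
        m = MODAL_ST.match(w)
        return m.group(1) if m else word

    def convert(text: str, mode: str = "pde") -> str:
        """Render a text in one of three modes: 'pde', 'both' or 'archaic'."""
        out = []
        for word in text.split():
            pde = to_pde(word)
            if mode == "both" and pde != word:
                out.append(f"{word} ({pde})")  # show archaic and PDE together
            elif mode == "archaic":
                out.append(word)               # leave the original untouched
            else:
                out.append(pde)                # PDE-only presentation
        return " ".join(out)

    print(convert("wouldst thou give thy word", mode="both"))
    # -> wouldst (would) thou give thy (your) word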