172 research outputs found

    Mission Analysis Program for Solar Electric Propulsion (MAPSEP). Volume 3: Program manual

    The internal structure of MAPSEP is described. Topics discussed include: macrologic, variable definition, subroutines, and logical flow. Information is given to facilitate modifications to the models and algorithms of MAPSEP.

    Language Learning Tasks and Automatic Analysis of Learner Language: Connecting FLTL and NLP design of ICALL materials supporting use in real-life instruction

    This thesis studies the application of Natural Language Processing to Foreign Language Teaching and Learning, within the research area of Intelligent Computer-Assisted Language Learning (ICALL). In particular, we investigate the design, the implementation, and the use of ICALL materials to provide learners of foreign languages, particularly English, with automated feedback. We argue that the successful integration of ICALL materials demands a design process considering both pedagogical and computational requirements as equally important. Our investigation pursues two goals. The first one is to integrate into task design insights from Second Language Acquisition and Foreign Language Teaching and Learning with insights from computational linguistic modelling. The second goal is to facilitate the integration of ICALL materials in real-world instruction settings, as opposed to research or lab-oriented instruction settings, by empowering teachers with the methodology and the technology to autonomously author such materials. To achieve the first goal, we propose an ICALL material design process that combines basic principles of Task-Based Language Instruction and Task-Based Test Design with the specification requirements of Natural Language Processing. The relation between pedagogical and computational requirements is elucidated by exploring (i) the formal features of foreign language learning activities, (ii) the complexity and variability of learner language, and (iii) the feasibility of applying computational techniques for the automatic analysis and evaluation of learner responses. To achieve the second goal, we propose an automatic feedback generation strategy that enables teachers to customise the computational resources required to automatically correct ICALL activities without the need for programming skills. This proposal is instantiated and evaluated in real-world instruction settings involving teachers and learners in secondary education.
Our work contributes methodologically and empirically to the ICALL field, with a novel approach to the design of materials that highlights the cross-disciplinary and iterative nature of the task. Our findings reveal the strength of characterising tasks both from the perspective of Foreign Language Teaching and Learning and from the perspective of Computational Linguistics as a means to clarify the nature of learning activities. Such a characterisation allows us to identify ICALL materials which are both pedagogically meaningful and computationally feasible. Our results show that teachers can characterise, author and employ ICALL materials as part of their instruction programme, and that the underlying computational machinery can provide the required automatic processing with sufficient efficiency. The authoring tool and the accompanying methodology become a crucial instrument for ICALL research and practice: teachers are able to design activities for their students to carry out without relying on an expert in Natural Language Processing. Last but not least, our results show that teachers value the experience very positively, both as a means to engage in technology integration and as a means to better apprehend the nature of their instruction task. Moreover, our results show that learners are motivated by the opportunity of using a technology that enhances their learning experience.
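The automatic feedback strategy described above pairs learner responses with teacher-authored targets. As a minimal illustrative sketch (not the thesis's actual system; all names are invented here), a feedback generator can diff a learner response against the closest target answer and report token-level mismatches:

```python
# Minimal sketch of target-based feedback generation: diff the learner's
# response against the most similar teacher-authored target answer and
# turn the token-level edits into simple corrective hints.
# This is an illustration only, not the system described in the thesis.
import difflib

def feedback(response: str, targets: list[str]) -> str:
    """Return corrective hints by diffing the response against the closest target."""
    resp_tokens = response.lower().split()
    # Pick the target answer most similar to the learner's response.
    best = max(targets, key=lambda t: difflib.SequenceMatcher(
        None, resp_tokens, t.lower().split()).ratio())
    matcher = difflib.SequenceMatcher(None, resp_tokens, best.lower().split())
    hints = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "replace":
            hints.append(f"check '{' '.join(resp_tokens[i1:i2])}'")
        elif op == "delete":
            hints.append(f"'{' '.join(resp_tokens[i1:i2])}' seems extra")
        elif op == "insert":
            hints.append(f"something is missing before token {i1}")
    return "Correct!" if not hints else "; ".join(hints)

print(feedback("she go to school", ["she goes to school"]))  # → check 'go'
```

A real ICALL system would of course use linguistic analysis rather than surface diffing, but the sketch shows the teacher-authored target answers playing the role of the customisable computational resource.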

    The dawn of the human-machine era: a forecast of new and emerging language technologies

    New language technologies are coming, thanks to the huge and competing private investment fuelling rapid progress; we can either understand and foresee their effects, or be taken by surprise and spend our time trying to catch up. This report sketches out some transformative new technologies that are likely to fundamentally change our use of language. Some of these may feel unrealistically futuristic or far-fetched, but a central purpose of this report - and the wider LITHME network - is to illustrate that these are mostly just the logical development and maturation of technologies currently in prototype. But will everyone benefit from all these shiny new gadgets? Throughout this report we emphasise a range of groups who will be disadvantaged and issues of inequality. Important issues of security and privacy will accompany new language technologies. A further caution is to re-emphasise the current limitations of AI. Looking ahead, we see many intriguing opportunities and new capabilities, but a range of other uncertainties and inequalities. New devices will enable new ways to talk, to translate, to remember, and to learn. But advances in technology will reproduce existing inequalities among those who cannot afford these devices, among the world's smaller languages, and especially for sign languages. Debates over privacy and security will flare and crackle with every new immersive gadget. We will move together into this curious new world with a mix of excitement and apprehension - reacting, debating, sharing and disagreeing as we always do. Plug in, as the human-machine era dawns.

    Mission Analysis Program for Solar Electric Propulsion (MAPSEP). Volume 3: Program manual for earth orbital MAPSEP

    A revised user's manual for the computer program MAPSEP is presented. Major changes from the interplanetary version of MAPSEP are summarized. The changes are intended to provide a basic capability to analyze anticipated solar electric missions, and a foundation for future, more complex modifications. For Vol. III, N75-16589

    IST Austria Thesis

    Designing and verifying concurrent programs is a notoriously challenging, time consuming, and error prone task, even for experts. This is due to the sheer number of possible interleavings of a concurrent program, all of which have to be tracked and accounted for in a formal proof. Inventing an inductive invariant that captures all interleavings of a low-level implementation is theoretically possible, but practically intractable. We develop a refinement-based verification framework that provides mechanisms to simplify proof construction by decomposing the verification task into smaller subtasks. In a first line of work, we present a foundation for refinement reasoning over structured concurrent programs. We introduce layered concurrent programs as a compact notation to represent multi-layer refinement proofs. A layered concurrent program specifies a sequence of connected concurrent programs, from most concrete to most abstract, such that common parts of different programs are written exactly once. Each program in this sequence is expressed as a structured concurrent program, i.e., a program over (potentially recursive) procedures, imperative control flow, gated atomic actions, structured parallelism, and asynchronous concurrency. This is in contrast to existing refinement-based verifiers, which represent concurrent systems as flat transition relations. We present a powerful refinement proof rule that decomposes refinement checking over structured programs into modular verification conditions. Refinement checking is supported by a new form of modular, parameterized invariants, called yield invariants, and a linear permission system to enhance local reasoning. In a second line of work, we present two new reduction-based program transformations that target asynchronous programs. These transformations reduce the number of interleavings that need to be considered, thus reducing the complexity of invariants.
Synchronization simplifies the verification of asynchronous programs by introducing the fiction, for proof purposes, that asynchronous operations complete synchronously. Synchronization summarizes an asynchronous computation as an immediate atomic effect. Inductive sequentialization establishes sequential reductions that capture every behavior of the original program up to reordering of coarse-grained commutative actions. A sequential reduction of a concurrent program is easy to reason about since it corresponds to a simple execution of the program in an idealized synchronous environment, where processes act in a fixed order and at the same speed. Our approach is implemented in the CIVL verifier, which has been successfully used for the verification of several complex concurrent programs. In our methodology, the overall correctness of a program is established piecemeal by focusing on the invariant required for each refinement step separately. While the programmer does the creative work of specifying the chain of programs and the inductive invariant justifying each link in the chain, the tool automatically constructs the verification conditions underlying each refinement step.
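The core refinement obligation the abstract describes - that a concrete concurrent program only exhibits behaviors allowed by its atomic abstraction - can be made concrete with a brute-force sketch (this is an illustration, not CIVL itself; all names here are invented). Two threads increment a shared counter via a separate load and store, and we check every interleaving against an abstraction where the increment is a single atomic action:

```python
# Brute-force sketch of refinement checking: enumerate every interleaving
# of a low-level program and require that each final state is also
# reachable by the atomic abstraction. Illustrative only, not CIVL.

def interleavings(threads):
    """Yield all merge orders that preserve each thread's program order."""
    threads = [t for t in threads if t]
    if not threads:
        yield []
        return
    for i, t in enumerate(threads):
        rest = threads[:i] + [t[1:]] + threads[i + 1:]
        for tail in interleavings(rest):
            yield [t[0]] + tail

def run(ops):
    """Execute one interleaving over a fresh shared state; return final x."""
    state = {"x": 0, "tmp": {}}
    for op in ops:
        op(state)
    return state["x"]

def load(tid):        # read shared x into a thread-local temporary
    def op(s): s["tmp"][tid] = s["x"]
    return op

def store(tid):       # write back temporary + 1 (not atomic with the load)
    def op(s): s["x"] = s["tmp"][tid] + 1
    return op

def atomic_inc(tid):  # the abstraction: a single gated atomic action
    def op(s): s["x"] = s["x"] + 1
    return op

low  = [[load(0), store(0)], [load(1), store(1)]]   # concrete program
high = [[atomic_inc(0)], [atomic_inc(1)]]           # atomic abstraction

low_finals  = {run(i) for i in interleavings(low)}
high_finals = {run(i) for i in interleavings(high)}
print(low_finals <= high_finals)  # False: the lost update reaches x == 1
```

The check fails exactly because of the lost-update interleaving, which is the kind of counterexample a refinement proof rule must rule out (e.g. by protecting the load/store pair with a lock, at which point the concrete program does refine the atomic increment). Enumerating interleavings obviously does not scale, which is precisely what the reduction-based transformations above address.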

    Proceedings

    Proceedings of the NODALIDA 2009 workshop Constraint Grammar and robust parsing. Editors: Eckhard Bick, Kristin Hagen, Kaili Müürisep and Trond Trosterud. NEALT Proceedings Series, Vol. 8 (2009), 33 pages. © 2009 The editors and contributors. Published by Northern European Association for Language Technology (NEALT) http://omilia.uio.no/nealt . Electronically published at Tartu University Library (Estonia) http://hdl.handle.net/10062/14180

    Symbolic Crosschecking of Data-Parallel Floating Point Code

    In this thesis we present a symbolic execution-based technique for cross-checking programs accelerated using SIMD or OpenCL against an unaccelerated version, as well as a technique for detecting data races in OpenCL programs. Our techniques are implemented in KLEE-CL, a symbolic execution engine based on KLEE that supports symbolic reasoning on the equivalence between expressions involving both integer and floating-point operations. While the current generation of constraint solvers provides good support for integer arithmetic, there is little support available for floating-point arithmetic, due to the complexity inherent in such computations. The key insight behind our approach is that floating-point values are only reliably equal if they are essentially built by the same operations. This allows us to use an algorithm based on symbolic expression matching augmented with canonicalisation rules to determine path equivalence. Under symbolic execution, we have to verify equivalence along every feasible control-flow path. We reduce the branching factor of this process by aggressively merging conditionals, if-converting branches into select operations via an aggressive phi-node folding transformation. To support the Intel Streaming SIMD Extension (SSE) instruction set, we lower SSE instructions to equivalent generic vector operations, which in turn are interpreted in terms of primitive integer and floating-point operations. To support OpenCL programs, we symbolically model the OpenCL environment using an OpenCL runtime library targeted to symbolic execution. We detect data races by keeping track of all memory accesses using a memory log, and reporting a race whenever we detect that two accesses conflict. By representing the memory log symbolically, we are also able to detect races associated with symbolically indexed accesses of memory objects.
We used KLEE-CL to find a number of issues in a variety of open source projects that use SSE and OpenCL, including mismatches between implementations, memory errors, race conditions and compiler bugs.
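The key insight - floating-point values are reliably equal only when built by the same operations - can be illustrated with a toy rendition of symbolic expression matching with canonicalisation (this is not KLEE-CL's implementation; the tree encoding and rules here are invented for illustration). Commutative operands are put in a canonical order, but no numeric rewriting such as reassociation is attempted, since that is unsound for floating point:

```python
# Toy symbolic expression matching: two floating-point expressions are
# judged equivalent only if they canonicalise to the identical tree.
# Expressions are tuples like ("add", a, b); leaves are variable names.
# Illustrative sketch only, not KLEE-CL's actual representation.

def canon(expr):
    """Canonicalise: recurse, then sort operands of commutative operators."""
    if isinstance(expr, str):         # a leaf variable
        return expr
    op, *args = expr
    args = [canon(a) for a in args]
    if op in ("add", "mul"):          # commutative: fix a canonical order
        args.sort(key=repr)
    return (op, *args)

def equivalent(e1, e2):
    return canon(e1) == canon(e2)

scalar = ("add", ("mul", "a", "b"), "c")   # a*b + c, scalar path
simd   = ("add", "c", ("mul", "b", "a"))   # c + b*a, vectorised path
print(equivalent(scalar, simd))            # True: same ops, reordered

# Reassociation is deliberately NOT matched, since (a+b)+c may differ
# from a+(b+c) in floating point:
print(equivalent(("add", ("add", "a", "b"), "c"),
                 ("add", "a", ("add", "b", "c"))))  # False
```

The conservative rule set makes matching sound for floating point: it never equates two expressions whose IEEE 754 results could differ, at the cost of occasionally rejecting genuinely equal ones.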

    RECONFIGURABLE COMPUTING: NETWORK INTERFACE CONTROLLER AREA NETWORK (CAN)

    In current embedded computer system development, the methodologies have experienced significant changes due to the advancement in reconfigurable computing technologies. The availability of large capacity programmable logic devices such as field-programmable gate arrays (FPGA) and high-level hardware synthesis tools allows embedded system designers to explore various hardware/software partitioning options in order to obtain the most optimum solution. A type of hardware synthesis tool that is gaining significant footing in the industry is Handel-C, a programming language based on the syntax of C but able to produce gate-level information that can be placed and routed onto an FPGA. Controller Area Network (CAN) is an example of an embedded system application widely used in modern automobiles and gaining popularity in manufacturing environments where high-speed and robust networking is needed. CAN was designed on a very simple yet effective protocol where messages are identified by their own unique identifiers. Message collisions are handled through a non-destructive arbitration process, eliminating message re-transmission and unnecessary network overloading. A project to design and implement a version of CAN is presented in this dissertation. The project was performed based on hardware/software co-design methodology with the utilisation of the above-mentioned reconfigurable computing technologies: FPGA and Handel-C. This dissertation describes the concepts of hardware/software co-design and reconfigurable computing, the details of the CAN protocol, the fundamentals of Handel-C, the design ideas considered, and the actual implementation of the system.
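The non-destructive arbitration the abstract mentions works bit by bit: all contending nodes transmit their identifiers MSB-first onto a wired-AND bus where 0 is dominant, so the lowest identifier wins and the losers simply back off to retry later, without corrupting the winning frame. A small behavioural model (independent of the dissertation's Handel-C implementation; written here in Python for illustration) captures the rule:

```python
# Behavioural model of CAN bitwise arbitration: nodes transmit their
# 11-bit identifiers MSB-first; the wired-AND bus makes 0 dominant, so
# the lowest identifier wins and losers drop out non-destructively.
# Illustrative sketch, not the dissertation's Handel-C design.

def arbitrate(identifiers, bits=11):
    """Return the winning identifier among nodes that start transmitting together."""
    contenders = list(identifiers)
    for bit in range(bits - 1, -1, -1):          # MSB first
        # The bus carries a dominant 0 if any contender drives 0.
        bus = min((ident >> bit) & 1 for ident in contenders)
        # A node sending recessive 1 while reading dominant 0 loses and backs off.
        contenders = [i for i in contenders if (i >> bit) & 1 == bus]
    assert len(contenders) == 1, "CAN identifiers on one bus must be unique"
    return contenders[0]

print(hex(arbitrate([0x65A, 0x65B, 0x123])))  # → 0x123, the lowest identifier
```

Because losing nodes stop transmitting the moment they read a dominant bit they did not send, the winner's frame goes through undisturbed - this is why CAN needs no collision-induced re-transmission.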