40 research outputs found

    Online Educational Outcomes Could Exceed Those of the Traditional Classroom

    An axiom of online education is that teachers should not mechanically translate existing courses into an online format. If so, how should new or ongoing courses be reshaped for the online environment, and why? The answers come both from the opportunities offered by the structure of online education and from a body of research in cognitive psychology and cognitive science that provides insight into the way people actually learn. Freed from the time and space constraints inherent in face-to-face higher-education settings, as well as from the deeply ingrained expectations of both teachers and students, online education provides a more flexible palette upon which evidence-based ideas about learning can be integrated into course structure and design. As a result, online education can potentially deliver learning experiences and outcomes superior to those of typical face-to-face classrooms. The ability to integrate experiences that stimulate real, long-lasting learning represents one of online education's greatest potential benefits.

    Variables and Parameters as References and Containers

    Most designers of object-based languages adopt a reference model of variables without explicit justification, despite its wide-ranging consequences. This paper argues that the traditional container model of variables is more efficient than the reference model, nearly as flexible, and more appropriate for parallel and distributed systems. The topics addressed are object lifetime and its implications for storage management; dynamic typing and its implications for object representation; aliasing and its implications for interference between operations; parameter passing and its implications for communication; and sharing and its implications for contention. We present our experience with the container model in a prototype parallel language. Neither model is always better than the other, and the choice of model should not be left to default. Computing Reviews Categories and Subject Descriptors: D.3.2 [Programming Languages]: Language Classifications --- concurrent, distributed and parallel…

    An Interface Between Object-Oriented Systems

    The description `object-oriented' may apply to both programming languages and operating systems. However, creating an interface between an object-oriented programming language and an object-oriented operating system is not necessarily a straightforward task. Chrysalis++ is a C++ interface to the Chrysalis operating system for the BBN Butterfly Parallel Processor. The development of Chrysalis++ highlights strengths and weaknesses of C++ and the problems of adapting a language based on a conventional memory model to a shared-memory parallel processor. This work was supported by United States Army Engineering Topographic Laboratories research contract number DACA76-85-C-0001 and National Science Foundation Coordinated Experimental Research grant number DCR-8320136. We thank the Xerox Corporation University Grants Program for providing equipment used in the preparation of this paper. 1 Introduction The Chrysalis operating system for the Butterfly Parallel Processor presents an object-oriented…

    Translating an Existing Scientific Application from C to Dataparallel C

    This report describes the translation of an existing sequential scientific program, written in the C programming language, into the parallel programming language Dataparallel C. The resulting Dataparallel C program is able to use a network of workstations as though it were a single high-performance parallel computer. We describe the amount of effort required to translate the existing scientific program and compare the timing results of the translated program with those of the sequential program. We conclude with some recommendations for parallel programming language designers, implementors of parallel programming languages, and application programmers.

    Architectural Adaptability in Parallel Programming via Control Abstraction

    Parallel programming involves finding the potential parallelism in an application, choosing an algorithm, and mapping it to the architecture at hand. Since a typical algorithm has much more potential parallelism than any single architecture can effectively exploit, we usually program the parallelism that the available control constructs easily express and that the given architecture efficiently exploits. This approach produces programs that exhibit much less parallelism than the original algorithm and whose performance depends entirely on the underlying architecture. To port such a program to a new architecture, we must rewrite the program to remove any ineffective parallelism and to recover any lost parallelism appropriate for the new machine. In this paper we show how to adapt a parallel program to different architectures using control abstraction. With control abstraction we can define and use a rich variety of control constructs to represent an algorithm's potential parallelism. Since control abstraction separates the definition of a construct from its implementation, a construct may have several different implementations, each exploiting a different subset of the parallelism admitted by the construct. By selecting an implementation for each control construct using annotations, we can vary the parallelism we choose to exploit without otherwise changing the source code. This approach produces programs that exhibit most, if not all, of the potential parallelism in an algorithm, and whose performance can be tuned for a specific architecture simply by choosing among the various implementations for the control constructs in use.