    Pervasive Parallel And Distributed Computing In A Liberal Arts College Curriculum

    We present a model for incorporating parallel and distributed computing (PDC) throughout an undergraduate CS curriculum. Our curriculum is designed to introduce students to PDC topics early and to expose them to these topics repeatedly across a wide variety of CS courses. The key to our approach is a required intermediate-level course that serves as an introduction to computer systems and parallel computing. It is required of every CS major and minor and is a prerequisite to upper-level courses that expand on PDC topics in different contexts. With the addition of this new course, we can easily make room in upper-level courses to add and expand PDC topics. The goal of our curricular design is to ensure that every graduating CS major has both breadth and depth of exposure to parallel and distributed computing. Our curriculum is designed for the constraints of a small liberal arts college; however, many of its ideas are applicable to any undergraduate CS curriculum.

    Incorporating the NSF/TCPP Curriculum Recommendations in a Liberal Arts Setting

    This paper examines the integration of the NSF/TCPP Core Curriculum Recommendations in a liberal arts undergraduate setting. We examine how parallel and distributed computing concepts can be incorporated across the breadth of the undergraduate curriculum. As a model of such an integration, changes are proposed to Data Structures and to Design and Analysis of Algorithms. The changes to Design and Analysis of Algorithms were implemented, and the results were compared to previous iterations of that course taught by the same instructor. Student feedback shows that introducing these topics made the course more engaging and provided an adequate introduction to the material.
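
    The abstract does not include code, but as a hedged illustration of the kind of PDC topic that fits naturally into an algorithms course, here is a minimal divide-and-conquer parallel sum using Java's fork/join framework. The class, threshold, and data are ours, not the authors'; this is a sketch of the general technique rather than anything from the paper.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

/** Divide-and-conquer array sum, parallelized with Java's fork/join framework. */
class ParallelSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000; // below this, sum sequentially
    private final long[] data;
    private final int lo, hi;

    ParallelSum(long[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {            // base case: sequential sum
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        ParallelSum left = new ParallelSum(data, lo, mid);
        ParallelSum right = new ParallelSum(data, mid, hi);
        left.fork();                           // run left half asynchronously
        long rightSum = right.compute();       // compute right half in this thread
        return left.join() + rightSum;         // combine results
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        java.util.Arrays.fill(data, 1L);
        long total = ForkJoinPool.commonPool().invoke(new ParallelSum(data, 0, data.length));
        System.out.println(total); // prints 1000000
    }
}
```

    The same divide-combine recurrence students already analyze for mergesort carries over directly, which is one reason such examples slot cleanly into an existing algorithms syllabus.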

    Building scalable software systems in the multicore era

    Software systems face two challenges today: growing complexity and increasing parallelism in the underlying computational models. The problem of increased complexity is often addressed by dividing systems into modules in a way that permits analysis of each module in isolation. The problem of insufficient concurrency is often addressed by dividing system execution into tasks in a way that permits execution of each task in isolation. The key challenge in software design is to manage the explicit and implicit dependences between modules that decrease modularity; the key challenge for concurrency is to manage the explicit and implicit dependences between tasks that decrease parallelism. Even though these challenges are strikingly similar, current software design practices and languages do not take advantage of the similarity: the modularity and concurrency goals are typically pursued independently, and progress toward one does not naturally contribute to the other. My position is that, for programmers who are not formally and rigorously trained in concurrency, the safest and most productive path to scalable software is to improve its modularity, using programming language features and design practices that reconcile the modularity and concurrency goals. I briefly discuss the preliminary efforts of my group, though we have only touched the tip of the iceberg.
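
    As a toy illustration of this position (ours, not the paper's), consider a pipeline whose modules are side-effect-free functions over immutable values. Once the modules share no mutable state, the decomposition that gives modularity also makes task-level parallelism safe; the names below are hypothetical.

```java
import java.util.List;
import java.util.stream.Collectors;

public class ModularPipeline {
    // Immutable values: modules can share them without synchronization.
    record Document(String id, String text) {}
    record Summary(String id, int words) {}

    // Each stage is a pure function; it depends only on its input,
    // so stages can be analyzed, and executed, in isolation.
    static Document normalize(Document d) {
        return new Document(d.id(), d.text().trim().toLowerCase());
    }

    static Summary summarize(Document d) {
        return new Summary(d.id(), d.text().split("\\s+").length);
    }

    public static void main(String[] args) {
        List<Document> docs = List.of(
            new Document("a", "  Hello Concurrent World "),
            new Document("b", "  Modularity helps parallelism "));

        // Because the modules share no mutable state, switching from
        // stream() to parallelStream() changes performance, not meaning.
        List<Summary> summaries = docs.parallelStream()
            .map(ModularPipeline::normalize)
            .map(ModularPipeline::summarize)
            .collect(Collectors.toList());

        summaries.forEach(s -> System.out.println(s.id() + ": " + s.words()));
    }
}
```

    The design choice being illustrated is that the dependence structure constraining analysis and the dependence structure constraining scheduling are the same structure, so reducing one reduces the other.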

    A model-driven approach to teaching concurrency

    We present an undergraduate course on concurrent programming in which formal models are used at different stages of the learning process. The main practical difference from other approaches is that the ability to develop correct concurrent software rests on a systematic transformation of formal models of inter-process interaction (so-called shared resources), rather than on the specific constructs of some programming language. Using a resource-centric rather than a language-centric approach has benefits for both teachers and students. Besides the obvious advantage of being independent of the programming language, the models help in the early validation of concurrent software designs, provide students and teachers with a lingua franca that greatly simplifies communication in the classroom and during supervision, and help in the automatic generation of tests for the practical assignments. This method has been in use, with slight variations, for some 15 years, surviving changes in the programming language and course length. In this article, we describe the components and structure of the current incarnation of the course, which uses Java as the target language, and some tools used to support our method. We provide a detailed description, from a teaching perspective, of the different outcomes that the model-driven approach delivers (validation of the initial design, automatic generation of tests, and mechanical generation of code). A critical discussion of the perceived advantages and risks of our approach follows, including some proposals on how these risks can be minimized. We include a statistical analysis showing that our method has a positive impact on students' ability to understand concurrency and to generate correct code.
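
    The abstract does not show what a shared resource looks like once transformed to code. A minimal sketch, assuming the usual monitor-style encoding in Java (our example, not material from the course), is a bounded buffer whose preconditions in the formal model ("not full", "not empty") become guards that block callers until they hold:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * A shared resource in the monitor style: the model's preconditions
 * become guards checked before each operation. Illustrative sketch.
 */
public class BoundedBuffer<E> {
    private final Deque<E> items = new ArrayDeque<>();
    private final int capacity;

    public BoundedBuffer(int capacity) { this.capacity = capacity; }

    // Precondition in the model: buffer not full.
    public synchronized void put(E e) throws InterruptedException {
        while (items.size() == capacity) wait(); // guard from the formal model
        items.addLast(e);
        notifyAll();                             // state changed: re-check guards
    }

    // Precondition in the model: buffer not empty.
    public synchronized E take() throws InterruptedException {
        while (items.isEmpty()) wait();
        E e = items.removeFirst();
        notifyAll();
        return e;
    }
}
```

    Because the guards come from the model rather than from the language, the same resource specification could be re-targeted to a different language, which is the portability the abstract claims for the approach.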

    A learning experience toward the understanding of abstraction-level interactions in parallel applications

    In the curriculum of a Computer Engineering program, concepts like parallelism, concurrency, consistency, and atomicity are usually addressed in separate courses because of their depth and breadth. Isolating these concepts in separate courses helps students not only to focus on specific aspects, but also to experience the reality of working with modern computer systems, where the concepts live at different abstraction levels. However, this isolation carries the risk of suggesting to students that these concepts, and by extension the different abstraction levels of a system, do not interact. This paper proposes a learning experience that showcases the interactions between the abstraction levels addressed in laboratory sessions of different courses. The driving example is a parallel ray tracer. Across the courses, students implement and assemble components of this application, from the algorithmic level of the tracer down to the assembly instructions required to guarantee atomicity. Each lab focuses on a single abstraction level but shows students its interactions with the other levels. Technical results and an analysis of student surveys validate the proposed experience and confirm that students come away with a more integrated view of the system.
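
    The labs span from the ray-tracing algorithm down to atomic instructions. As a hedged illustration of that lowest level (our code, not the paper's), a lock-free counter in Java shows how a high-level "atomic update" bottoms out in the processor's compare-and-swap instruction:

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Lock-free accumulation of rendered-pixel counts. The
 * compareAndSet call maps to the hardware's atomic compare-and-swap
 * (e.g., LOCK CMPXCHG on x86), the kind of instruction-level detail
 * a systems lab would expose beneath the high-level API.
 */
public class PixelCounter {
    private final AtomicLong rendered = new AtomicLong();

    public void recordPixels(long n) {
        long prev, next;
        do {
            prev = rendered.get();   // read the current value
            next = prev + n;         // compute the intended update
        } while (!rendered.compareAndSet(prev, next)); // retry if another
                                                       // thread intervened
    }

    public long total() { return rendered.get(); }
}
```

    Tracing one such call from library method to machine instruction is exactly the kind of cross-level interaction the proposed experience is designed to make visible.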

    Cross Teaching Parallelism and Ray Tracing: A Project-based Approach to Teaching Applied Parallel Computing

    Massively parallel Graphics Processing Unit (GPU) hardware has become increasingly powerful, available, and affordable. Software tools have also advanced to the point that programmers can write general-purpose parallel programs that take advantage of the large number of compute cores available in the hardware. With hundreds of compute cores available on a single device, program performance can increase by orders of magnitude. We believe that introducing students to the concepts of parallel programming for massively parallel hardware is of increasing importance in an undergraduate computer science curriculum. Furthermore, we believe that students learn best when given projects that reflect real problems in computer science. This paper describes the experience of integrating two undergraduate computer science courses to enhance student learning of parallel computing concepts. In this cross-teaching experience, we structured the integration so that students studying parallel computing worked with students studying advanced rendering for approximately 30% of the quarter-long courses. Working in teams on a joint project, both groups of students saw parallelization applied to an existing software project, with both the benefits and the complications exposed early in the curriculum of both courses. Motivating projects and performance gains are discussed, as well as student survey data on the effectiveness of the learning outcomes. Both the performance results and the survey data indicate a positive gain from the cross-teaching experience.
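
    The paper targets GPUs; purely as a sketch of the underlying idea, that each pixel of a ray-traced image can be computed independently, here is the same data-parallel structure on the CPU using Java parallel streams. The class name and toy shading function are ours; a real tracer would intersect scene geometry and shade the hit point.

```java
import java.util.stream.IntStream;

/** Embarrassingly parallel rendering loop: one independent task per pixel. */
public class TinyTracer {
    static final int WIDTH = 640, HEIGHT = 480;

    // Stand-in for tracing a ray through pixel (x, y).
    static int tracePixel(int x, int y) {
        double u = x / (double) WIDTH, v = y / (double) HEIGHT;
        int r = (int) (255 * u), g = (int) (255 * v), b = 128;
        return (r << 16) | (g << 8) | b; // pack as 0xRRGGBB
    }

    public static void main(String[] args) {
        int[] frame = new int[WIDTH * HEIGHT];
        // Each pixel is independent, so the loop parallelizes without locks;
        // on a GPU the same structure becomes one thread per pixel.
        IntStream.range(0, frame.length).parallel()
                 .forEach(i -> frame[i] = tracePixel(i % WIDTH, i / WIDTH));
        System.out.println("rendered " + frame.length + " pixels");
    }
}
```

    The absence of cross-pixel dependences is what makes ray tracing such an effective joint project: the rendering students supply tracePixel, and the parallelism students supply the scheduling around it.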

    Research and Education in Computational Science and Engineering

    Over the past two decades, the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society, and the CSE community is at the core of this transformation. However, a combination of disruptive developments, including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers, is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade. (Comment: major revision, to appear in SIAM Review.)

    Implementing atomic actions in Ada 95

    Atomic actions are an important dynamic structuring technique that aids the construction of fault-tolerant concurrent systems. Although they were developed some years ago, none of the well-known commercially available programming languages directly supports their use. This paper summarizes software fault tolerance techniques for concurrent systems, evaluates the Ada 95 programming language from the perspective of its support for software fault tolerance, and shows how Ada 95 can be used to implement software fault tolerance techniques. In particular, it shows how packages, protected objects, requeue, exceptions, asynchronous transfer of control, tagged types, and controlled types can be used as building blocks from which to construct atomic actions with forward and backward error recovery that are resilient to deserter tasks and task abortion.
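
    The paper builds atomic actions from Ada 95 features; as a language-neutral illustration of just the backward-error-recovery idea (a Java sketch of ours, not the paper's Ada), state is checkpointed on entry to the action and restored if the action fails. A full atomic action generalizes this so that several cooperating tasks roll back together.

```java
/**
 * Backward error recovery in miniature: checkpoint state before the
 * action, restore it if the action raises an exception.
 * Illustrative sketch; the paper constructs the real mechanism in Ada 95.
 */
public class RecoverableAccount {
    private long balance;

    public RecoverableAccount(long balance) { this.balance = balance; }

    /** Runs the action atomically with respect to failure: all or nothing. */
    public synchronized void atomically(Runnable action) {
        long checkpoint = balance;      // save the recovery point on entry
        try {
            action.run();               // attempt forward progress
        } catch (RuntimeException e) {
            balance = checkpoint;       // backward error recovery: roll back
            throw e;                    // then signal the failure outward
        }
    }

    public synchronized void withdraw(long amount) {
        balance -= amount;
        if (balance < 0) throw new IllegalStateException("overdrawn");
    }

    public synchronized long balance() { return balance; }
}
```

    A caller would wrap updates, for example account.atomically(() -> account.withdraw(100)), so that a failed withdrawal leaves the balance exactly as it was.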