
    Pervasive Parallel And Distributed Computing In A Liberal Arts College Curriculum

    We present a model for incorporating parallel and distributed computing (PDC) throughout an undergraduate CS curriculum. Our curriculum is designed to introduce students early to parallel and distributed computing topics and to expose students to these topics repeatedly in the context of a wide variety of CS courses. The key to our approach is the development of a required intermediate-level course that serves as an introduction to computer systems and parallel computing. The course is required for every CS major and minor and is a prerequisite to upper-level courses that expand on parallel and distributed computing topics in different contexts. With the addition of this new course, we can readily make room in upper-level courses to add and expand parallel and distributed computing topics. The goal of our curricular design is to ensure that every graduating CS major has exposure to parallel and distributed computing, with both breadth and depth of coverage. Our curriculum is designed specifically for the constraints of a small liberal arts college; however, many of its ideas and much of its design are applicable to any undergraduate CS curriculum.

    On the use of a reflective architecture to augment Database Management Systems

    The Database Management System (DBMS) used to be a commodity software component, with well-known standard interfaces and semantics. However, the performance and reliability expectations being placed on DBMSs have increased the demand for a variety of add-ons that augment the functionality of the database in a wide range of deployment scenarios, offering support for features such as clustering, replication, and self-management, among others. The effectiveness of such extensions largely rests on closely matching the actual needs of applications, and hence on a wide range of trade-offs and configuration options outside the scope of traditional client interfaces. A well-known software engineering approach to systems with such requirements is reflection. Unfortunately, standard reflective interfaces in DBMSs are very limited (for instance, they often do not support the desired range of atomicity guarantees in a distributed setting). Some of these limitations may be circumvented by implementing reflective features as a wrapper to the DBMS server. Unfortunately, this solution comes at the expense of a large development effort and a significant performance penalty. In this paper we propose a general-purpose DBMS reflection architecture and interface that supports multiple extensions while admitting efficient implementations. We illustrate the usefulness of our proposal with concrete examples, and evaluate its cost and performance under different implementation strategies.
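    To make the wrapper-based style of reflection contrasted in the abstract concrete, here is a minimal Python sketch that intercepts statement execution on a standard DB-API connection and exposes before/after hooks to extensions such as logging or replication. The ReflectiveConnection class and hook names are illustrative assumptions, not the paper's proposed interface.

        import sqlite3
        from typing import Callable, List

        class ReflectiveCursor:
            """Cursor wrapper exposing statement execution to registered hooks.

            A minimal sketch of reflection implemented as a wrapper around the
            client interface (Python DB-API via sqlite3); hook points and class
            names are illustrative, not the paper's architecture."""

            def __init__(self, cursor, before_hooks, after_hooks):
                self._cursor = cursor
                self._before = before_hooks
                self._after = after_hooks

            def execute(self, sql, params=()):
                for hook in self._before:      # e.g. rewrite SQL, log, replicate
                    sql, params = hook(sql, params)
                result = self._cursor.execute(sql, params)
                for hook in self._after:       # e.g. capture write sets
                    hook(sql, params)
                return result

            def __getattr__(self, name):       # delegate everything else
                return getattr(self._cursor, name)

        class ReflectiveConnection:
            def __init__(self, conn):
                self._conn = conn
                self.before_hooks: List[Callable] = []
                self.after_hooks: List[Callable] = []

            def cursor(self):
                return ReflectiveCursor(self._conn.cursor(),
                                        self.before_hooks, self.after_hooks)

            def __getattr__(self, name):
                return getattr(self._conn, name)

        if __name__ == "__main__":
            conn = ReflectiveConnection(sqlite3.connect(":memory:"))
            conn.before_hooks.append(lambda sql, p: (sql, p))   # identity hook
            cur = conn.cursor()
            cur.execute("CREATE TABLE t (x INTEGER)")
            cur.execute("INSERT INTO t VALUES (?)", (1,))
            print(cur.execute("SELECT x FROM t").fetchall())

    A wrapper of this kind can only observe what crosses the client interface, which is exactly the limitation (development effort, performance, no access to internal atomicity guarantees) that motivates an in-server reflection architecture.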

    Analytical considerations for transactional cache protocols

    Since the early nineties, transactional cache protocols have been intensively studied in the context of client-server database systems. Research has developed a variety of protocols and compared different aspects of their quality using simulation systems and semi-standardized benchmarks. Unfortunately, none of the related publications substantiated their experimental findings with thorough analytical considerations. We try to close this gap, at least partially, by presenting comprehensive and highly accurate analytical formulas for quality aspects of two important transactional cache protocols. We consider the non-adaptive variants of the "Callback Read Protocol" (CBR) and the "Optimistic Concurrency Control Protocol" (OCC). The paper studies their cache filling size and the number of messages they produce for the so-called UNIFORM workload. In many cases the cache filling size may differ considerably from a given maximum cache size, a phenomenon that has been overlooked by former publications. Moreover, for OCC we also give a highly accurate formula that forecasts the transaction abort rate. All formulas are compared against corresponding simulation results in order to validate their correctness.
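    As a rough illustration of why the cache filling size can stay below the configured maximum, the sketch below estimates the expected number of distinct pages a client touches after k uniformly random page requests with the standard occupancy formula D(1 - (1 - 1/D)^k) and caps it at the cache capacity. The parameter values and the use of this formula are illustrative assumptions, not the paper's derivation.

        def expected_distinct_pages(db_pages: int, accesses: int) -> float:
            """Expected number of distinct pages touched after `accesses`
            uniformly random page requests over a database of `db_pages` pages
            (standard occupancy formula; an illustration, not the paper's model)."""
            return db_pages * (1.0 - (1.0 - 1.0 / db_pages) ** accesses)

        if __name__ == "__main__":
            db_pages = 10_000        # database size in pages (assumed)
            cache_capacity = 2_000   # maximum client cache size in pages (assumed)
            for accesses in (500, 2_000, 5_000, 20_000):
                filled = min(expected_distinct_pages(db_pages, accesses), cache_capacity)
                print(f"after {accesses:6d} accesses: ~{filled:7.1f} pages cached "
                      f"(capacity {cache_capacity})")

    Under this toy model the cache only approaches its maximum size once a client has issued enough requests to touch that many distinct pages, which hints at why fill size and maximum size should be analyzed separately.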

    Global memory management in client-server systems

    Ankara: Department of Computer Engineering and Information Science and the Institute of Engineering and Science of Bilkent University, 1995. Thesis (Master's), Bilkent University, 1995. Includes bibliographical references (leaves 79-81). This thesis presents two techniques to improve the performance of global memory management in client-server systems. The proposed memory management techniques, called "Dropping Sent Pages" and "Forwarding Sent Pages", extend the previously proposed techniques called "Forwarding", "Hate Hints", and "Sending Dropped Pages". The aim of all these techniques is to increase the portion of the database available in global memory and thus to reduce disk I/O. The performance of the proposed techniques is evaluated using a basic page-server client-server simulation model. The results obtained under different workloads show that a memory management algorithm applying the proposed techniques can exhibit better performance than algorithms based on the previous methods. Türkan, Yasemin. M.S.
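    As a rough illustration of the idea behind techniques such as "Dropping Sent Pages", the following Python sketch has a server buffer prefer to evict pages it has already shipped to a client, so that the combined (global) memory of server and clients holds more distinct pages. The class names and eviction policy are hypothetical simplifications, not the thesis's algorithms.

        from collections import OrderedDict

        class ServerBuffer:
            """Toy server buffer that prefers to evict pages already sent to
            clients, in the spirit of 'Dropping Sent Pages' (a hypothetical
            simplification, not the thesis's technique)."""

            def __init__(self, capacity: int):
                self.capacity = capacity
                self.pages = OrderedDict()   # page_id -> data, in LRU order
                self.sent = set()            # pages known to be cached at a client

            def _evict_one(self):
                # Prefer evicting a page some client already holds in its cache.
                for pid in list(self.pages):
                    if pid in self.sent:
                        self.pages.pop(pid)
                        return
                # Otherwise fall back to plain LRU eviction.
                self.pages.popitem(last=False)

            def serve(self, page_id, read_from_disk):
                """Return page data to a client and remember that the client caches it."""
                if page_id in self.pages:
                    self.pages.move_to_end(page_id)      # LRU touch
                    data = self.pages[page_id]
                else:
                    data = read_from_disk(page_id)
                    if len(self.pages) >= self.capacity:
                        self._evict_one()
                    self.pages[page_id] = data
                self.sent.add(page_id)                   # page is now duplicated at a client
                return data

        if __name__ == "__main__":
            buf = ServerBuffer(capacity=3)
            disk = lambda pid: f"page-{pid}"
            for pid in [1, 2, 3, 1, 4, 5]:
                buf.serve(pid, disk)
            print(sorted(buf.pages))   # pages the server still keeps in its own buffer

    Evicting already-sent pages first reduces duplication between server and client buffers, which is the common goal of the techniques listed in the abstract.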

    Transactions Processing Subsystems for Databases Based On ARIES Write-Ahead Logging for The Client-Server Architecture Approach

    This paper proposes a formal framework specification for an advanced recovery mechanism that operates in a client-server architecture while addressing atomicity and consistency issues. Recovery is a particularly pressing issue in such a dominant architecture. The paper addresses it in the client-server context using extensions of the original ARIES algorithm and concepts from Software Transactional Memory. The approach has been implemented and tested for correctness and applicability.
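    To make the write-ahead logging discipline behind ARIES concrete, the following minimal Python sketch appends redo log records with log sequence numbers (LSNs) before page updates take effect, and replays them during recovery. It illustrates only the basic WAL/redo idea, not the paper's client-server extensions; all names and structures are illustrative assumptions.

        from dataclasses import dataclass, field
        from typing import Dict, List, Tuple

        @dataclass
        class LogRecord:
            lsn: int            # log sequence number
            txn: int            # transaction id
            page: str           # page identifier
            value: int          # after-image written to the page (redo information)

        @dataclass
        class MiniWAL:
            """Minimal ARIES-style redo logging: log first, then update; replay on
            restart. An illustrative sketch, not the paper's protocol."""
            log: List[LogRecord] = field(default_factory=list)
            pages: Dict[str, Tuple[int, int]] = field(default_factory=dict)  # page -> (pageLSN, value)
            next_lsn: int = 1

            def write(self, txn: int, page: str, value: int) -> None:
                rec = LogRecord(self.next_lsn, txn, page, value)
                self.log.append(rec)                 # WAL rule: log the update first
                self.next_lsn += 1
                self.pages[page] = (rec.lsn, value)  # then apply it to the buffered page

            def redo(self, durable_pages: Dict[str, Tuple[int, int]]) -> Dict[str, Tuple[int, int]]:
                """Replay logged updates whose effects are missing from durable pages."""
                pages = dict(durable_pages)
                for rec in self.log:
                    page_lsn = pages.get(rec.page, (0, None))[0]
                    if page_lsn < rec.lsn:           # idempotent redo test on pageLSN
                        pages[rec.page] = (rec.lsn, rec.value)
                return pages

        if __name__ == "__main__":
            wal = MiniWAL()
            wal.write(txn=1, page="A", value=10)
            wal.write(txn=1, page="B", value=20)
            # Crash before page B reached disk: only page A is durable.
            recovered = wal.redo(durable_pages={"A": (1, 10)})
            print(recovered)   # both A and B restored via the log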

    The Database Architectures Research Group at CWI

    The Database research group at CWI was established in 1985. It has steadily grown from two PhD students to a group of 17 people as of the end of 2011. The group is supported by a scientific programmer and a system engineer who keep our machines running. In this short note, we look back at our past and highlight the multitude of topics being addressed.