
    A model and framework for reliable build systems

    Reliable and fast builds are essential for rapid turnaround during development and testing. Popular existing build systems rely on correct manual specification of build dependencies, which can lead to invalid build outputs and nondeterminism. We outline the challenges of developing reliable build systems and explore the design space for their implementation, with a focus on non-distributed, incremental, parallel build systems. We define a general model for resources accessed by build tasks and show its correspondence to the implementation technique of minimum-information libraries: APIs that return only the information the application actually plans to use. We also summarize preliminary experimental results from several prototype build managers.
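    To make the minimum-information idea concrete, here is a hedged Python sketch, illustrative only and not code from the paper; the TracedFS name and its API are invented for the example. It answers only the narrow question a build task asks, records the answer, and detects staleness by replaying the recorded answers.

```python
# Hedged sketch, not code from the paper: a "minimum-information" file API.
# A full directory listing would make a build task depend on every entry;
# answering only the narrow question asked makes the recorded dependency
# exactly the fact the task used, and staleness is checked by replaying
# the recorded answers. The TracedFS name and API are invented.

import os

class TracedFS:
    def __init__(self, root):
        self.root = root
        self.observed = {}  # (question, argument) -> recorded answer

    def exists(self, name):
        # Narrow query: the task depends on the existence of `name` only,
        # not on the rest of the directory's contents.
        answer = os.path.exists(os.path.join(self.root, name))
        self.observed[("exists", name)] = answer
        return answer

    def is_stale(self):
        # Rerun the task iff any answer it observed has since changed.
        return any(
            os.path.exists(os.path.join(self.root, name)) != answer
            for (_, name), answer in self.observed.items()
        )
```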

    Proceedings of the ECSCW'95 Workshop on the Role of Version Control in CSCW Applications

    The workshop entitled "The Role of Version Control in Computer Supported Cooperative Work Applications" was held on September 10, 1995, in Stockholm, Sweden, in conjunction with the ECSCW'95 conference. Version control, that is, the ability to manage relationships between successive instances of artifacts, to organize those instances into meaningful structures, and to support navigation and other operations on those structures, is an important problem in CSCW applications. It has long been recognized as a critical issue for inherently cooperative tasks such as software engineering, technical documentation, and authoring. The primary challenge for versioning in these areas is to support opportunistic, open-ended design processes, which require the preservation of historical perspectives in the design process, the reuse of previous designs, and the exploitation of alternative designs. The primary goal of this workshop was to bring together a diverse group of individuals interested in examining the role of versioning in Computer Supported Cooperative Work. Participation was encouraged from members of the research community currently investigating the versioning process in CSCW as well as from application designers and developers who are familiar with the real-world requirements for versioning in CSCW. Both groups were represented at the workshop, resulting in an exchange of ideas and information that helped to familiarize developers with the most recent research results in the area and to provide researchers with an updated view of the needs and challenges faced by application developers. In preparing for this workshop, the organizers were able to build upon the results of their previous workshop, "The Workshop on Versioning in Hypertext", held in conjunction with the ECHT'94 conference. The following section of this report contains a summary in which the workshop organizers report the major results of the workshop. The summary is followed by a section that contains the position papers that were accepted to the workshop. The position papers provide more detailed information describing recent research efforts of the workshop participants as well as current challenges being encountered in the development of CSCW applications. A list of workshop participants is provided at the end of the report. The organizers would like to thank all of the participants for their contributions, which were, of course, vital to the success of the workshop. We would also like to thank the ECSCW'95 conference organizers for providing a forum in which this workshop was possible.

    Implementing Azure Active Directory Integration with an Existing Cloud Service

    Training Simulator (TraSim) is an online, web-based platform for holding crisis management exercises. It simulates epidemics and other exceptional situations to test the functionality of an organization's operating instructions in the hour of need. The main objective of this thesis is to further develop the service by delegating its existing authentication and user provisioning mechanisms to a centralized, cloud-based Identity and Access Management (IAM) service. Making use of a centralized access control service is widely known as a Single Sign-On (SSO) implementation, which brings multiple benefits such as increased security, reduced administrative overhead and improved user experience. The objective originates from a customer organization's request to enable SSO for TraSim. Of the wide range of available IAM services, the research focuses on integrating TraSim with Azure Active Directory (AD), since it is considered an industry standard and is already utilized by the customer. Nevertheless, the complexity of the integration is kept as low as possible to retain compatibility with services other than Azure AD. While every integration is a unique undertaking, with countless software stacks a service can build on and multiple IAM services to choose from, this thesis aims to provide a general guideline for approaching a similar assignment. Conducting the study required an extensive search and evaluation of the available literature on terms such as IAM, client-server communication, SSO, cloud services and AD. The literature review is combined with an introduction to the basic technologies that TraSim is built with, to justify the choice of OpenID Connect as the authentication protocol and why it was implemented using the mozilla-django-oidc library. The literature consists of multiple online articles, publications and the official documentation of the utilized technologies. The research uses a constructive approach, as it focuses on developing and testing a new feature that is merged into the source code of an already existing piece of software.
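    As a rough illustration of the kind of integration the thesis describes, the following Django settings fragment wires mozilla-django-oidc to Azure AD's v2.0 endpoints. It is a minimal sketch, not the TraSim configuration (which the abstract does not reproduce); the tenant ID and client credentials are placeholders.

```python
# settings.py (sketch): route Django authentication through Azure AD via
# the mozilla-django-oidc library. Placeholder values are marked below.

AUTHENTICATION_BACKENDS = [
    "mozilla_django_oidc.auth.OIDCAuthenticationBackend",
    "django.contrib.auth.backends.ModelBackend",  # keep local accounts usable
]

TENANT = "<azure-ad-tenant-id>"                      # placeholder
OIDC_RP_CLIENT_ID = "<app-registration-client-id>"   # placeholder
OIDC_RP_CLIENT_SECRET = "<app-registration-secret>"  # placeholder
OIDC_RP_SIGN_ALGO = "RS256"  # Azure AD signs ID tokens with RS256

OIDC_OP_AUTHORIZATION_ENDPOINT = (
    f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/authorize"
)
OIDC_OP_TOKEN_ENDPOINT = (
    f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token"
)
OIDC_OP_USER_ENDPOINT = "https://graph.microsoft.com/oidc/userinfo"
OIDC_OP_JWKS_ENDPOINT = (
    f"https://login.microsoftonline.com/{TENANT}/discovery/v2.0/keys"
)

# urls.py then mounts the library's login/callback views, e.g.:
#   path("oidc/", include("mozilla_django_oidc.urls"))
```

    Keeping the default ModelBackend alongside the OIDC backend is one way to retain compatibility with non-Azure logins, in line with the thesis's stated goal of not binding the service to a single IAM provider.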

    Design and implementation of a transaction manager for a relational database

    Multi-user database management systems are in great demand because of the information requirements of our modern industrial society. A clear requirement is that database resources be shared by many users at the same time. Transaction management aims to manage concurrent database access by multiple users while preserving the consistency of the database. In this thesis a single-user relational database management system, REQUIEM, is used as a vehicle to investigate improved methods for achieving this. A module, called the REQUIEM Transaction Manager (RTM), is built on top of the original REQUIEM to achieve a multi-user database management system. The design work of the present thesis is founded upon various techniques for transaction management proposed in the published literature, which are critically assessed, and upon a mechanism which combines appealing features from the existing methodologies. The problems of transaction management considered in this thesis are: 1. concurrency control, 2. granularity control, 3. deadlock control, and 4. recovery control. The RTM is also compared with the transaction management facilities of conventional commercial systems such as DB2, INGRES and ORACLE.
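    The abstract does not reproduce the RTM's code, but the first and third of the problems it lists (concurrency control and deadlock control) are commonly solved together by strict two-phase locking plus a wait-for graph. Here is a hedged Python sketch of that textbook combination; the LockManager class is invented for the example.

```python
# Hedged sketch, not the RTM's code: exclusive locks under strict two-phase
# locking, with a wait-for graph to detect deadlocks. acquire() returns
# True on success, False if the caller should block and retry, and raises
# when waiting would close a cycle (the classic deadlock-control choice of
# aborting the requester).

class LockManager:
    def __init__(self):
        self.holders = {}    # data item -> set of transactions holding it
        self.waits_for = {}  # transaction -> set of transactions it waits on

    def acquire(self, txn, item):
        blockers = self.holders.get(item, set()) - {txn}
        if blockers:
            self.waits_for.setdefault(txn, set()).update(blockers)
            if self._cycle_from(txn):
                self.waits_for.pop(txn, None)
                raise RuntimeError(f"deadlock: abort transaction {txn}")
            return False  # caller blocks and retries later
        self.holders.setdefault(item, set()).add(txn)
        self.waits_for.pop(txn, None)  # no longer waiting
        return True

    def release_all(self, txn):
        # Strict 2PL: every lock is held until commit/abort, then released.
        for held in self.holders.values():
            held.discard(txn)
        self.waits_for.pop(txn, None)

    def _cycle_from(self, txn):
        # Depth-first search: does any wait-for path lead back to txn?
        seen, stack = set(), [txn]
        while stack:
            for nxt in self.waits_for.get(stack.pop(), ()):
                if nxt == txn:
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False
```

    Exposing only release_all, rather than per-item release, is what makes the scheme strict: holding every lock until commit or abort keeps schedules serializable and simplifies recovery.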

    Distributed Operating Systems

    Distributed operating systems have many aspects in common with centralized ones, but they also differ in certain ways. This paper is intended as an introduction to distributed operating systems, and especially to current university research about them. After a discussion of what constitutes a distributed operating system and how it is distinguished from a computer network, various key design issues are discussed. Then several examples of current research projects are examined in some detail, namely, the Cambridge Distributed Computing System, Amoeba, V, and Eden. © 1985, ACM. All rights reserved.

    Decentralized information flow control for databases

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 177-194). By David Andrew Schultz.

    Privacy and integrity concerns have been mounting in recent years as sensitive data such as medical records, social network records, and corporate and government secrets are increasingly being stored in online systems. The rate of high-profile breaches has illustrated that current techniques are inadequate for protecting sensitive information. Many of these breaches involve databases that handle information for a multitude of individuals, but databases don't provide practical tools to protect those individuals from each other, so that task is relegated to the application. This dissertation describes a system that improves security in a principled way by extending the database system and the application platform to support information flow control. Information flow control has been gaining traction as a practical way to protect information in the contexts of programming languages and operating systems. Recent research advocates the decentralized model for information flow control (DIFC), since it provides the necessary expressiveness to protect data for many individuals with varied security concerns. However, despite the fact that most applications implicated in breaches rely on relational databases, there have been no prior comprehensive attempts to extend DIFC to a database system. This dissertation introduces IFDB, which is a database management system that supports DIFC with minimal overhead. IFDB pioneers the Query by Label model, which provides applications with a simple way to delineate constraints on the confidentiality and integrity of the data they obtain from the database. This dissertation also defines new abstractions for managing information flows in a database and proposes new ways to address covert channels. Finally, the IFDB implementation and case studies with real applications demonstrate that database support for DIFC improves security, is easy for developers to use, and has good performance.
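    The abstract does not spell out Query by Label's semantics, so the following hedged Python sketch shows only the standard DIFC flow rule that systems in this line of work build on; the Label class and the tag names are invented for the example.

```python
# Hedged sketch: the standard DIFC flow rule, not IFDB's Query by Label
# implementation. A label pairs secrecy tags (the data must not leak past
# them) with integrity tags (endorsements it carries); flow from src to dst
# is safe iff no secrecy is dropped and no integrity is gained. The Label
# class and the tag names below are invented for the example.

from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    secrecy: frozenset = frozenset()
    integrity: frozenset = frozenset()

def can_flow(src: Label, dst: Label) -> bool:
    # dst must be at least as secret, and at most as trusted, as src.
    return src.secrecy <= dst.secrecy and dst.integrity <= src.integrity

# A row tagged with a patient's secrecy tag may flow to a session holding
# that tag, but not to an unlabeled (public) session:
patient_row = Label(secrecy=frozenset({"alice_medical"}))
alice_session = Label(secrecy=frozenset({"alice_medical"}))
public_session = Label()
assert can_flow(patient_row, alice_session)
assert not can_flow(patient_row, public_session)
```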

    The Replica Consistency Problem in Data Grids

    Fast and reliable data access is a crucial aspect of distributed computing and is often achieved using data replication techniques. In Grid architectures, data are replicated on many nodes of the Grid, and users usually access the "best" replica in terms of availability and network latency. When replicas are modifiable, a change made to one replica breaks consistency with the other replicas, which, at that point, become stale. Replica synchronisation protocols exist and are applied in several distributed architectures, for example in distributed databases. Grid middleware solutions provide well-established support for replicating data. Nevertheless, replicas are still considered read-only, and no support is provided to the user for updating a replica while maintaining consistency with the other replicas. In this thesis, done in collaboration with the Italian National Institute of Nuclear Physics (INFN) and the European Organisation for Nuclear Research (CERN), we study the replica consistency problem in Grid computing and propose a service, called CONStanza, that is able to synchronise both files and heterogeneous (different vendors) databases in a Grid environment. We analyse and implement a specific use case that arises in high-energy physics, where conditions databases are replicated using databases of different makes. We provide detailed performance results, and show how CONStanza can be used together with Oracle Streams to provide multi-tier replication of conditions databases using Oracle and MySQL databases.
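    CONStanza's internals are not given in the abstract; as a hedged sketch of the general shape of the problem, the following Python fragment shows single-master, log-shipping synchronisation, in which each replica applies the suffix of an ordered change log it has not yet seen through a per-backend adapter. All class names are invented for the example.

```python
# Hedged sketch, not CONStanza's implementation: single-master log shipping,
# the general shape of keeping a writable master consistent with copies on
# heterogeneous backends. Each change is appended to an ordered log; a
# replica applies only the suffix it has not yet seen through a per-backend
# adapter, so a lagging replica is stale but never diverges.

class Master:
    def __init__(self):
        self.log = []  # ordered change log: (sequence_no, key, value)

    def write(self, key, value):
        self.log.append((len(self.log), key, value))

class Replica:
    def __init__(self, apply_change):
        # apply_change adapts a change to the backend (Oracle, MySQL, file...)
        self.apply_change = apply_change
        self.applied = 0  # sequence number of the next entry to apply

    def sync(self, master):
        # Pull and apply only the log suffix this replica has missed.
        for seq, key, value in master.log[self.applied:]:
            self.apply_change(key, value)
            self.applied = seq + 1
```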

    Transactional Client-Server Cache Consistency: Alternatives and Performance

    Client-server database systems based on a page server model can exploit client memory resources by caching copies of pages across transaction boundaries. Caching reduces the need to obtain data from servers or other sites on the network. In order to ensure that such caching does not result in the violation of transaction semantics, a cache consistency maintenance algorithm is required. Many such algorithms have been proposed in the literature and, as all provide the same functionality, performance is a primary concern in choosing among them. In this paper we provide a taxonomy that describes the design space for transactional cache consistency maintenance algorithms and show how proposed algorithms relate to one another. We then investigate the performance of six of these algorithms, and use these results to examine the tradeoffs inherent in the design choices identified in the taxonomy. The insight gained in this manner is then used to reflect upon the characteristics of other algorithms that have been proposed. The results show that the interactions among dimensions of the design space can impact performance in many ways, and that classifications of algorithms as simply "pessimistic" or "optimistic" do not accurately characterize the similarities and differences among the many possible cache consistency algorithms. (Also cross-referenced as UMIACS-TR-95-84.)
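    As a hedged illustration of one point in the design space the taxonomy covers (not one of the six algorithms evaluated in the paper), the following Python sketch shows server-driven invalidation: the server tracks which clients cache each page and drops remote copies when a committing transaction writes the page. Class names are invented for the example.

```python
# Hedged sketch, not one of the six algorithms the paper evaluates:
# server-driven invalidation, one point in the taxonomy's design space.
# The server remembers which clients cache each page and tells them to
# drop their copies when a committing transaction writes that page, so a
# stale copy can never violate transaction semantics.

class PageServer:
    def __init__(self):
        self.cachers = {}  # page_id -> set of clients caching a copy

    def register(self, client, page_id):
        self.cachers.setdefault(page_id, set()).add(client)

    def commit_write(self, writer, page_id):
        # Invalidate every remote copy except the writer's own.
        for client in self.cachers.get(page_id, set()) - {writer}:
            client.invalidate(page_id)
        self.cachers[page_id] = {writer}

class Client:
    def __init__(self, server):
        self.server = server
        self.cache = {}  # page_id -> page contents

    def read(self, page_id, fetch):
        # Cache across transaction boundaries; register so the server can
        # call back with invalidations.
        if page_id not in self.cache:
            self.cache[page_id] = fetch(page_id)
            self.server.register(self, page_id)
        return self.cache[page_id]

    def invalidate(self, page_id):
        self.cache.pop(page_id, None)
```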

    Contention management for distributed data replication

    PhD Thesis. Optimistic replication schemes provide distributed applications with access to shared data at lower latencies and greater availability. This is achieved by allowing clients to replicate shared data and execute actions locally. A consequence of this scheme is that it raises issues regarding shared data consistency. An action executed by a client may sometimes produce shared data that conflicts with other replicas and, as a consequence, with the subsequent actions caused by the conflicting action. This requires a client to roll back to the action that caused the conflicting data and to execute some exception handling. This can be achieved by relying on the application layer to either ignore or handle shared data inconsistencies when they are discovered during the reconciliation phase of an optimistic protocol. Inconsistency of shared data has an impact on the causality relationship across client actions. In protocol design, it is desirable to preserve the property of causality between different actions occurring across a distributed application. Without application-level knowledge, we must assume an action causes all subsequent actions at the same client. With application knowledge, we can significantly ease the protocol burden of provisioning causal ordering, as we can identify which actions do not cause other actions (even if they precede them). This, in turn, makes it possible for the client to roll back to past actions and change them without having to alter subsequent actions. Unfortunately, increased instances of application-level causal relations between actions lead to a significant protocol overhead. Therefore, minimizing the rollback associated with conflicting actions, while preserving causality, is desirable for lower exception handling in the application layer. In this thesis, we present a framework that utilizes causality to create a scheduler that can inform a contention management scheme to reduce the rollback associated with conflicting access to shared data. Our framework uses a backoff contention management scheme to preserve causality for those optimistic replication systems with high causality requirements, without the need for application-layer knowledge. We present experiments which demonstrate that our framework reduces clients' rollback and, more importantly, that the overall throughput of the system improves when the contention management scheme is used with applications that require causality to be preserved across all actions.
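    The thesis's scheduler is not reproduced in the abstract. As a hedged sketch of its two ingredients, the following Python fragment shows vector clocks, the standard device for tracking the causality relation, alongside a randomized exponential backoff helper of the kind a backoff contention manager uses; the function names are invented for the example.

```python
# Hedged sketch, not the thesis's scheduler: vector clocks give causality
# (a happens before b iff a's clock is componentwise <= b's and strictly
# less somewhere); actions whose clocks are incomparable are concurrent,
# and only those need contention management, here shown as randomized
# exponential backoff. Function names are invented.

import random
import time

def happens_before(a, b):
    # a and b map replica/client ids to event counters.
    keys = set(a) | set(b)
    leq = all(a.get(k, 0) <= b.get(k, 0) for k in keys)
    strict = any(a.get(k, 0) < b.get(k, 0) for k in keys)
    return leq and strict

def concurrent(a, b):
    return not happens_before(a, b) and not happens_before(b, a)

def with_backoff(attempt, max_tries=5, base_delay=0.01):
    # Retry a conflicting action after randomized, exponentially growing
    # delays so contending clients stop colliding with each other.
    for i in range(max_tries):
        if attempt():
            return True
        time.sleep(random.uniform(0, base_delay * (2 ** i)))
    return False
```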

    An Introduction to Database Systems

    This textbook introduces the basic concepts of database systems. These concepts are presented through numerous examples in modeling and design. The material in this book is geared to an introductory course in database systems offered at the junior or senior level of Computer Science. It could also be used in a first-year graduate course in database systems, focusing on a selection of the advanced topics in the later chapters.