
    Integrity Constraint Checking in Federated Databases

    A federated database comprises multiple interconnected databases that cooperate in an autonomous fashion. Global integrity constraints are very useful in federated databases, but the lack of global queries, global transaction mechanisms, and global concurrency control renders traditional constraint management techniques inapplicable. The paper presents a threefold contribution to integrity constraint checking in federated databases: (1) the problem of constraint checking in a federated database environment is clearly formulated; (2) a family of cooperative protocols for constraint checking is presented; (3) the differences across protocols in the family are analyzed with respect to system requirements, properties guaranteed, and costs involved. Thus, we provide a suite of protocol options for environments with differing system capabilities and integrity requirements.
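    The abstract does not reproduce the protocols themselves. As a rough, hypothetical illustration of the general style of cooperative checking, the Python sketch below (in the spirit of quota-based schemes such as the demarcation protocol, and not necessarily one of the paper's protocols) lets a site accept small updates with a purely local test and fall back to a single message exchange only when its local quota is insufficient; the constraint, site names, and interface are invented for this example.

    # Hypothetical sketch, not one of the paper's actual protocols: two
    # autonomous sites cooperate to check an assumed global constraint
    # "total reservations <= CAPACITY" without any global transaction.

    CAPACITY = 100

    class Site:
        def __init__(self, name, quota, reserved=0):
            self.name = name
            self.quota = quota        # local limit; quotas always sum to CAPACITY
            self.reserved = reserved  # this site's current local contribution
            self.partner = None       # the cooperating remote site

        def give_slack(self, needed):
            # The partner asks to raise its quota by `needed`; agree only if we
            # can lower ours by the same amount without cutting into our own
            # reservations, so that the quotas still sum to CAPACITY.
            if self.quota - needed >= self.reserved:
                self.quota -= needed
                return True
            return False

        def try_reserve(self, amount):
            new_local = self.reserved + amount
            # Local sufficient test: within my quota, no message is needed.
            if new_local <= self.quota:
                self.reserved = new_local
                return True
            # Cooperative step: ask the partner to cede the missing slack.
            if self.partner.give_slack(new_local - self.quota):
                self.quota = new_local
                self.reserved = new_local
                return True
            return False  # reject: the global constraint could be violated

    a, b = Site("A", quota=50), Site("B", quota=50, reserved=30)
    a.partner, b.partner = b, a
    print(a.try_reserve(40))   # True: fits in A's quota, no communication
    print(a.try_reserve(40))   # False: B cannot cede 30 more units of quota

    The local test is what keeps communication down: updates that stay within the local quota require no messages at all, which is the kind of cost difference the paper's analysis of the protocol family is concerned with.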

    Traffic Optimization in Data Center and Software-Defined Programmable Networks

    The abstract is provided in the attachment.

    Decentralized provenance-aware publishing with nanopublications

    Publication and archival of scientific results is still commonly considered the responsibility of classical publishing companies. Classical forms of publishing, however, which center around printed narrative articles, no longer seem well suited in the digital age. In particular, there currently exist no efficient, reliable, and agreed-upon methods for publishing scientific datasets, which have become increasingly important for science. In this article, we propose to design scientific data publishing as a web-based bottom-up process, without top-down control of central authorities such as publishing companies. Based on a novel combination of existing concepts and technologies, we present a server network to decentrally store and archive data in the form of nanopublications, an RDF-based format to represent scientific data. We show how this approach allows researchers to publish, retrieve, verify, and recombine datasets of nanopublications in a reliable and trustworthy manner, and we argue that this architecture could be used as a low-level data publication layer to serve the Semantic Web in general. Our evaluation of the current network shows that this system is efficient and reliable.
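    A nanopublication conventionally bundles a small assertion with its provenance and publication metadata, and the verifiability described above is commonly achieved with content-based (hash-derived) identifiers. The following Python sketch is a loose, hypothetical illustration of that idea only; it is not the actual nanopublication RDF model or the trusty-URI algorithm, and all URIs and field names are invented.

    # Loose illustration, not the real nanopublication/trusty-URI scheme:
    # bundle an assertion with provenance and publication info, and embed a
    # hash of the canonicalized content in the identifier so that any
    # consumer can re-verify whatever copy it retrieved.

    import hashlib
    import json

    def make_nanopub(assertion, provenance, pubinfo):
        body = {"assertion": assertion, "provenance": provenance, "pubinfo": pubinfo}
        # Canonicalize (sorted keys, fixed separators) so the hash is reproducible.
        canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
        digest = hashlib.sha256(canonical.encode()).hexdigest()
        return {"id": "http://example.org/np/" + digest, **body}

    def verify(np):
        body = {k: np[k] for k in ("assertion", "provenance", "pubinfo")}
        canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
        return np["id"].endswith(hashlib.sha256(canonical.encode()).hexdigest())

    np = make_nanopub(
        assertion=[["ex:gene123", "ex:associatedWith", "ex:disease456"]],
        provenance=[["assertion", "prov:wasDerivedFrom", "ex:study789"]],
        pubinfo=[["this", "dct:creator", "ex:researcherA"]],
    )
    print(verify(np))   # True: the content matches the hash in the identifier

    Because the identifier is derived from the content, any server in a decentralized network can serve a copy and the consumer can still check that it is unmodified, which is what makes bottom-up archiving trustworthy.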

    Fault tolerant architectures for integrated aircraft electronics systems

    Work on possible architectures for future flight control computer systems is described. Ada for Fault-Tolerant Systems, the NETS Network Error-Tolerant System architecture, and voting in asynchronous systems are covered.
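    Of the topics listed, voting is the most self-contained; as a generic illustration (not the specific NETS or asynchronous-voting scheme covered by the report), a majority voter over redundant channels can be sketched as follows, with the channel readings invented for the example.

    # Generic majority voter over redundant channels; this is an invented
    # illustration, not the voting scheme described in the report.

    from collections import Counter

    def majority_vote(readings):
        # Return the value reported by a strict majority of channels, or
        # None if the channels disagree too much to out-vote a fault.
        value, count = Counter(readings).most_common(1)[0]
        return value if count > len(readings) / 2 else None

    # Three redundant channels, one of which has failed.
    print(majority_vote([42.0, 42.0, 17.5]))   # 42.0: the faulty channel is out-voted
    print(majority_vote([42.0, 17.5, 99.9]))   # None: no majority, flag a fault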

    Towards self-optimizing frameworks for collaborative systems

    Two important performance metrics in collaborative systems are local and remote response times. For certain classes of applications, a new system can meet response-time requirements better than existing systems without requiring hardware, network, or user-interface changes. This self-optimizing system improves response times by automatically making runtime adjustments to three aspects of a collaborative application. One of these aspects is the collaboration architecture. Previous work has shown that dynamically switching architectures at runtime can improve response times; however, no previous work performs the switch automatically. The thesis shows that (a) another important performance parameter is whether multicast or unicast is used to transmit commands, and (b) response times can be noticeably better with multicast than with unicast when transmission costs are high. Traditional architectures, however, support only unicast: a computer that processes input commands must also transmit commands to all other computers. To support multicast, a new bi-architecture model of collaborative systems is introduced in which two separate architectures govern the processing and transmission tasks that each computer must perform. The thesis also shows that another important performance aspect is the order in which a computer performs these tasks. These tasks can be scheduled sequentially or concurrently on a single core, or in parallel on multiple cores. As the thesis shows, existing single-core policies trade off noticeable improvements in local (remote) response times for noticeable degradations in remote (local) response times. A new lazy policy for scheduling these tasks on a single core is introduced that trades off an unnoticeable degradation in the performance of some users for a much larger, noticeable improvement in the performance of others. The thesis also shows that on multi-core devices, the tasks should always be scheduled on separate cores. The self-optimizing system adjusts the processing architecture, communication architecture, and scheduling policy based on response-time predictions given by a new analytical model. Both the analytical model and the self-optimizing system are validated through simulations and experiments in practical scenarios.
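    The unicast-versus-multicast effect in point (b) can be seen with a toy back-of-the-envelope calculation; the model, the transmit-then-process ordering, and the numbers below are invented for illustration and are not the dissertation's analytical model.

    # Toy model, not the dissertation's analytical model: compare local and
    # remote response times when the inputting computer must unicast a
    # command to every peer versus handing it to multicast once.

    def response_times(n_peers, proc=10.0, send=8.0, net=30.0, multicast=False):
        # proc: time to process the command locally (ms)
        # send: per-destination cost of pushing the command onto the network (ms)
        # net:  one-way network latency to a remote peer (ms)
        tx = send if multicast else send * n_peers   # transmission work at the source
        local = tx + proc                            # source transmits, then processes
        remote = tx + net + proc                     # the last peer waits for its copy
        return local, remote

    for peers in (2, 8):
        print(peers, "peers  unicast:", response_times(peers),
              "  multicast:", response_times(peers, multicast=True))

    Even this crude model reproduces the qualitative claim in (b): the gap between unicast and multicast grows with the number of peers and with the per-destination transmission cost.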

    Protocols for Integrity Constraint Checking in Federated Databases

    A federated database comprises multiple interconnected database systems that primarily operate independently but cooperate to a certain extent. Global integrity constraints can be very useful in federated databases, but the lack of global queries, global transaction mechanisms, and global concurrency control renders traditional constraint management techniques inapplicable. This paper presents a threefold contribution to integrity constraint checking in federated databases: (1) The problem of constraint checking in a federated database environment is clearly formulated. (2) A family of protocols for constraint checking is presented. (3) The differences across protocols in the family are analyzed with respect to system requirements, properties guaranteed by the protocols, and processing and communication costs. Thus, our work yields a suite of options from which a protocol can be chosen to suit the system capabilities and integrity requirements of a particular federated database environment.

    The Dynamics of Architecture-Governance Configurations: An Assemblage Theory Approach

    Research on digital infrastructures and platforms studies large-scale systems that are characterized by constant evolution, loosely defined boundaries, and growing complexity. This research demonstrates that evolution is driven by tensions (between stability and change), which are in turn determined by the systems’ architecture and governance structures. This paper argues that architecture and governance are intrinsically related and conceptualizes them as a unified entity that we call an architecture-governance (A-G) configuration. We focus on the dynamics of A-G configurations, i.e., how architecture and governance interact and, in combination, shape the evolution of digital infrastructures while, at the same time, changing as emergent outcomes of that evolution. Toward this end, this paper applies assemblage theory as a lens for conducting a longitudinal study of an electronic prescription infrastructure. We identify three overall A-G configurations corresponding to different phases of the evolution of the infrastructure. This paper makes three contributions. First, we theorize the A-G configuration as an intertwined intermediate-scale entity that represents the form of the infrastructure and simultaneously constitutes an assemblage in its own right. Second, we demonstrate how an A-G configuration and its infrastructure coevolved through a series of interacting stabilization and destabilization processes operating within and across levels. Finally, we argue that the tensions driving the evolution of infrastructures are also dynamic and that, accordingly, the focus of study should be on the processes of stabilization and destabilization rather than on stability and change themselves.