20 research outputs found

    Enterprise Information Integration Using a Peer to Peer Approach

    Get PDF
    The integration of enterprise information systems has unique requirements and frequently poses problems to business partners. We discuss specific integration issues for micro-sized enterprises in the special case of independent sales agencies and their suppliers. We argue that the enterprise information systems of those independent enterprises are technically best represented by equal peers. Therefore, we have designed the Peer-to-Peer (P2P) integration architecture VIANA for the integration of enterprise information systems. Its architecture provides materializing P2P integration using optimistic replication and is applicable to inter- and intraorganizational integration scenarios. Integration is accomplished by the propagation of write operations between peers. We argue that this type of integration can be realized with no alteration of the participating information systems.
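
    As a rough illustration of the propagation idea in this abstract, the sketch below shows equal peers applying a write locally and forwarding it to their neighbors, with operation IDs for deduplication. All names (Peer, WriteOp) are hypothetical, not the actual VIANA API, and real propagation would run asynchronously in the background.

```python
# Hypothetical sketch of write-operation propagation between equal peers,
# in the spirit of the VIANA architecture summarized above.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class WriteOp:
    op_id: str        # globally unique id, so peers can deduplicate
    key: str
    value: str

@dataclass
class Peer:
    name: str
    store: dict = field(default_factory=dict)
    seen: set = field(default_factory=set)
    neighbors: list = field(default_factory=list)

    def write(self, key: str, value: str) -> None:
        """Apply a local write, then propagate it optimistically."""
        op = WriteOp(op_id=f"{self.name}:{len(self.seen)}", key=key, value=value)
        self._apply(op)

    def _apply(self, op: WriteOp) -> None:
        if op.op_id in self.seen:       # already replicated here
            return
        self.seen.add(op.op_id)
        self.store[op.key] = op.value   # materialize the write locally
        for peer in self.neighbors:     # background propagation, simplified
            peer._apply(op)             # to a synchronous call here

# Two enterprise information systems as equal peers:
a, b = Peer("agency"), Peer("supplier")
a.neighbors.append(b); b.neighbors.append(a)
a.write("order/42", "confirmed")
assert b.store["order/42"] == "confirmed"
```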

    Minimal replication cost for availability

    Get PDF

    Byzantine quorum systems

    Full text link

    Gateway selection in multi-hop wireless networks

    Get PDF
    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 61-63). This thesis describes the implementation of MultiNAT, an application that attempts to provide the benefits of client multi-homing while requiring minimal client configuration, and the evaluation of a novel link-selection algorithm, called AvMet, that significantly outperforms earlier multi-homing methods. The main motivation behind MultiNAT is the growing popularity of cheap broadband Internet connections, which are still not reliable enough for important applications. The increasing prevalence of wireless networks, with their attendant unpredictability and high rates of loss, further exacerbates the situation. Recent work has shown that multi-homing can increase both Internet performance and the end-to-end availability of Internet services. Most previous solutions have required complicated client configuration or have routed packets through dedicated overlay networks; MultiNAT attempts to provide a simpler solution. MultiNAT automatically forwards connection attempts over all local interfaces and uses the resulting connection establishment times along with link-selection metrics to select which interface to use. MultiNAT is able to sustain transfer speeds in excess of 4 megabytes per second while imposing only an extra 150 microseconds of latency per packet. MultiNAT supports a variety of link-selection metrics, each with its own strengths and weaknesses. The MONET race-based scheme works well in wired networks but is misled by the unpredictable nature of wireless losses. The ETT metric performs relatively well at finding high-throughput paths in multi-hop wireless networks but can be incorrect under heavy load. Unfortunately, neither of these metrics addresses end-to-end performance when packets traverse both wired and wireless networks. To fill this need, we propose AvMet, a link-selection scheme that tracks past connection history in order to improve current predictions. We evaluate AvMet on a variety of network configurations and find that AvMet is not misled by wireless losses. AvMet is able to outperform existing predictors in all network configurations and can improve end-to-end availability by up to half an order of magnitude. By Rohit Navalgund Rao, M.Eng.
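
    The toy sketch below illustrates the general shape of race-based link selection with history smoothing, as the abstract describes for MultiNAT and AvMet. The scoring rule, interface names, and latency numbers are illustrative assumptions, not the thesis's actual algorithm.

```python
# Toy race-based link selection with a history-smoothed score, loosely
# in the spirit of MultiNAT/AvMet as summarized above.
import random

HISTORY: dict[str, list[float]] = {}   # per-interface past connect times

def race(interfaces: list[str]) -> str:
    """Attempt a connection on every interface, record the establishment
    time, and pick the interface with the best history-smoothed score."""
    scores = {}
    for iface in interfaces:
        t = simulate_connect(iface)      # stand-in for a real TCP connect
        past = HISTORY.setdefault(iface, [])
        past.append(t)
        # Average over recent history so one lucky (or unlucky) wireless
        # sample does not dominate the decision.
        scores[iface] = sum(past[-10:]) / len(past[-10:])
    return min(scores, key=scores.get)

def simulate_connect(iface: str) -> float:
    base = {"eth0": 0.020, "wlan0": 0.015}[iface]
    jitter = random.expovariate(1 / 0.030) if iface == "wlan0" else 0.0
    return base + jitter                 # wireless losses are bursty

random.seed(1)
for _ in range(20):
    choice = race(["eth0", "wlan0"])
print("selected:", choice)  # history steers selection toward the stable link
```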

    Resilient overlay networks

    Get PDF

    Optimistic replication

    Get PDF
    Data replication is a key technology in distributed data sharing systems, enabling higher availability and performance. This paper surveys optimistic replication algorithms that allow replica contents to diverge in the short term, in order to support concurrent work practices and to tolerate failures in low-quality communication links. The importance of such techniques is increasing as collaboration through wide-area and mobile networks becomes popular. Optimistic replication techniques are different from traditional “pessimistic” ones. Instead of synchronous replica coordination, an optimistic algorithm propagates changes in the background, discovers conflicts after they happen, and reaches agreement on the final contents incrementally. We explore the solution space for optimistic replication algorithms. This paper identifies key challenges facing optimistic replication systems (ordering operations, detecting and resolving conflicts, propagating changes efficiently, and bounding replica divergence) and provides a comprehensive survey of techniques developed for addressing these challenges.
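
    One standard technique for the conflict-detection challenge this survey covers is the version vector, which lets replicas decide after the fact whether two updates are causally ordered or concurrent. A minimal sketch, not taken from the paper:

```python
# Version-vector comparison: each vector maps replica_id -> update count.
def compare(vv_a: dict, vv_b: dict) -> str:
    keys = set(vv_a) | set(vv_b)
    a_ge = all(vv_a.get(k, 0) >= vv_b.get(k, 0) for k in keys)
    b_ge = all(vv_b.get(k, 0) >= vv_a.get(k, 0) for k in keys)
    if a_ge and b_ge:
        return "equal"
    if a_ge:
        return "a dominates"   # b's update is already reflected in a
    if b_ge:
        return "b dominates"
    return "concurrent"        # neither saw the other: a conflict

# Replicas x and y both updated independently -> conflict detected:
print(compare({"x": 2, "y": 1}, {"x": 1, "y": 2}))   # concurrent
print(compare({"x": 2, "y": 1}, {"x": 1, "y": 1}))   # a dominates
```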

    Detouring and replication for fast and reliable internet-scale stream processing

    Full text link
    iFlow is a replication-based system that can achieve both fast and reliable processing of high-volume data streams on the Internet scale. iFlow uses a low degree of replication in conjunction with detouring techniques to overcome network congestion and outages. Computation over iFlow can be expressed as a graph of operators. To cope with varying system conditions, these operators continually migrate in a manner that improves performance and availability at the same time. In this paper, we first provide an overview of our iFlow system. Next, we detail how our detouring technique works in the face of network failures to provide high availability for time-critical applications. The paper also includes a description of our implementation and preliminary evaluation results demonstrating that iFlow outperforms previous solutions with less overhead. Finally, the paper concludes with our plans for enhancing replication and detouring capabilities.
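
    A much-simplified sketch of the detouring idea follows: when the direct overlay link between two nodes fails or degrades, traffic is forwarded through the cheapest one-hop relay instead. The topology, latencies, and names are made up for illustration; iFlow's actual mechanism is more involved.

```python
# One-hop detour selection around a failed direct overlay link.
LATENCY = {                    # measured one-way latencies (ms); None = outage
    ("src", "dst"): None,
    ("src", "relay1"): 30, ("relay1", "dst"): 40,
    ("src", "relay2"): 25, ("relay2", "dst"): 90,
}

def best_path(src: str, dst: str, relays: list[str]):
    """Prefer the direct link; fall back to the cheapest one-hop detour."""
    candidates = []
    direct = LATENCY.get((src, dst))
    if direct is not None:
        candidates.append((direct, [src, dst]))
    for r in relays:
        a, b = LATENCY.get((src, r)), LATENCY.get((r, dst))
        if a is not None and b is not None:
            candidates.append((a + b, [src, r, dst]))
    return min(candidates, default=None)

print(best_path("src", "dst", ["relay1", "relay2"]))
# -> (70, ['src', 'relay1', 'dst']): the detour masks the direct-link outage
```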

    The viability of IS enhanced knowledge sharing in mission-critical command and control centers

    Get PDF
    Engineering processes such as the maintenance of mission-critical infrastructures are highly unpredictable processes that are vital for everyday life, as well as for national security goals. These processes are categorized as Emergent Knowledge Processes (EKP), organizational processes that are characterized by a changing set of actors, distributed knowledge bases, and emergent knowledge sharing activities where the process itself has no predetermined structure. The research described here utilizes the telecommunications network fault diagnosis process as a specific example of an EKP. The field site chosen for this research is a global undersea telecommunication network where nodes are staffed by trained personnel responsible for maintaining local equipment using Network Management Systems. The overall network coordination responsibilities are handled by a centralized command and control center, or Network Management Center. A formal case study is performed in this global telecommunications network to evaluate the design of an Alarm Correlation Tool (ACT). This work defines a design methodology for an Information System (IS) that can support complex engineering diagnosis processes. As such, a Decision Support System design model is used to iterate through a number of design theories that guide design decisions. Utilizing the model iterations, it is found that IS design theories such as Decision Support Systems (DSS), Expert Systems (ES) and Knowledge Management Systems (KMS) design theories, do not produce systems appropriate for supporting complex engineering processes. A design theory for systems that support EKPs is substituted as the project\u27s driving theory during the final iterations of the DSS Design Model. This design theory poses the use of naive users to support the design process as one of its key principles. The EKP design theory principles are evaluated and addressed to provide feedback to this recently introduced Information System Design Theory. The research effort shows that use of the EKP design theory is also insufficient in designing complex engineering systems. As a result, the main contribution of this work is to augment design theory with a methodology that revolves around the analysis of the knowledge management and control environment as a driving force behind IS design. Finally, the research results show that a model-based knowledge captunng algorithm provides an appropriate vehicle to capture and manipulate experiential engineering knowledge. In addition, it is found that the proposed DSS Design Model assists in the refinement of highly complex system designs. The results also show that the EKP design theory is not sufficient to address all the challenges posed by systems that must support mission-critical infrastructures

    Implementation and evaluation of publicly verifiable proofs of data replication and retrievability for cloud storage

    Get PDF
    Cloud storage is a mature and widely used cloud technology. Underlying it are proofs of retrievability for files stored on the server, which allow the client to remotely check whether its files are correctly stored on the cloud server. In 2020, Gritti proposed the first publicly verifiable Proofs of Retrievability and Reliability (P-PORR). P-PORR combines Proofs of Retrievability (POR) with Verifiable Delay Functions (VDF), providing fast verification and allowing anyone, not just the client, to verify the server's behavior. We built a realistic cloud test environment and implemented and tested P-PORR in it. This paper describes the implementation process and analyzes the performance of P-PORR from multiple perspectives, such as the total time spent before files are uploaded to the server, verification time, and financial cost. We discuss the implications of the results from the standpoint of both the client and the server. The results show that P-PORR performs similarly to another PORR scheme with private verification, called Mirror, and even outperforms Mirror when processing small files. Therefore, P-PORR is suitable for cloud storage services. In the long run, this paper provides a practical application instance of VDF and a performance comparison benchmark for later PORR research.
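
    To convey only the general shape of a retrievability audit, the sketch below spot-checks randomly chosen file blocks against digests held by the verifier. It is a deliberately naive stand-in: it omits the VDF and all of the cryptographic machinery that makes P-PORR publicly verifiable and compact, and none of the names come from Gritti's protocol.

```python
# Naive challenge-response spot-check: probabilistic evidence that the
# server still holds the file, via random block challenges.
import hashlib, os, random

random.seed(0)
BLOCK = 4096

def digest(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

# Setup: the verifier keeps one digest per block before the file is uploaded.
blocks = [os.urandom(BLOCK) for _ in range(64)]   # the outsourced file
tags = [digest(b) for b in blocks]                # verifier's checkpoints

def audit(server_blocks, tags, challenges=8) -> bool:
    """Challenge random block indices; the server must return blocks
    matching the stored digests."""
    for i in random.sample(range(len(tags)), challenges):
        if digest(server_blocks[i]) != tags[i]:
            return False
    return True

print(audit(blocks, tags))          # honest server -> True
blocks[7] = os.urandom(BLOCK)       # server corrupts one block
print(all(audit(blocks, tags) for _ in range(50)))  # almost surely caught -> False
```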