
    Rethinking Consistency Management in Real-time Collaborative Editing Systems

    Networked computer systems offer much to support collaborative editing of shared documents among users. The goal of real-time collaborative editing systems (RTCES) is to increase concurrent access to shared documents by allowing multiple users to contribute to and/or track changes to them; yet existing systems either limit concurrent access through exclusive locking or enable it with concurrency control algorithms such as operational transformation (OT). Unfortunately, such OT-based schemes are costly with respect to communication and computation. Further, existing systems are often specialized in their functionality and require users to adopt new, unfamiliar software to enable collaboration. This research discusses our work in improving consistency management in RTCES. We have developed a set of deadlock-free multi-granular dynamic locking algorithms and data structures that maximize concurrent access to shared documents while minimizing communication cost. These algorithms provide a high level of service for concurrent access to the shared document and integrate merge-based or OT-based consistency maintenance policies locally among a subset of the users within a subsection of the document, thus reducing the communication costs of maintaining consistency. Additionally, we have developed client-server and P2P implementations of our hierarchical document management algorithms. Simulation results indicate that our approach achieves significant communication and computation cost savings. We have also developed a hierarchical reduction algorithm that can minimize the space required by RTCES, and this algorithm may be pipelined through our document tree. Further, we have developed an architecture that allows a heterogeneous set of client editing software to connect with a heterogeneous set of server document repositories via Web services. This architecture supports our algorithms and does not require client or server technologies to be modified, so it can accommodate existing, favored editing and repository tools. Finally, we have developed a prototype benchmark system of our architecture that is responsive to users' actions and minimizes communication costs.
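
    The abstract does not give the locking algorithm itself, so the following is only a minimal sketch of the standard multi-granularity idea it builds on: exclusive locks on subsections of a document tree, with "intention" marks left on ancestor nodes so that edits in unrelated subsections never conflict. Class names, the no-wait policy, and the node layout are illustrative assumptions, not the dissertation's data structures.

```python
# Illustrative sketch only: multi-granular locking on a document tree.
# Not the authors' algorithm; requests that cannot be granted fail
# immediately (no waiting), which trivially avoids deadlock here.
from collections import Counter

class DocNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self.exclusive = None        # user holding an exclusive lock on this node
        self.intention = Counter()   # users holding locks somewhere in this subtree
        if parent:
            parent.children.append(self)

    def _ancestors(self):
        node = self.parent
        while node:
            yield node
            node = node.parent

    def try_lock(self, user):
        """Grant an exclusive lock on this subsection if nothing conflicting
        is held above or below it."""
        if self.exclusive == user:
            return True
        if self.exclusive:                               # locked by someone else
            return False
        if any(u != user and n > 0 for u, n in self.intention.items()):
            return False                                 # a lock exists below us
        for a in self._ancestors():
            if a.exclusive and a.exclusive != user:
                return False                             # a lock exists above us
        self.exclusive = user
        for a in self._ancestors():                      # leave intention marks upward
            a.intention[user] += 1
        return True

    def unlock(self, user):
        if self.exclusive == user:
            self.exclusive = None
            for a in self._ancestors():
                a.intention[user] -= 1

# Two users edit different sections concurrently; a whole-document lock is refused.
doc = DocNode("document")
sec1 = DocNode("section-1", doc)
sec2 = DocNode("section-2", doc)
assert sec1.try_lock("alice")
assert sec2.try_lock("bob")
assert not doc.try_lock("carol")
```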

    Mitigating Turnover with Code Review Recommendation: Balancing Expertise, Workload, and Knowledge Distribution

    Developer turnover is inevitable on software projects and leads to knowledge loss, a reduction in productivity, and an increase in defects. Mitigation strategies to deal with turnover tend to disrupt and increase workloads for developers. In this work, we suggest that through code review recommendation we can distribute knowledge and mitigate turnover with minimal impact on the development process. We evaluate review recommenders in the context of ensuring expertise during review (Expertise), reducing the review workload of the core team (CoreWorkload), and reducing the Files at Risk to turnover (FaR). We find that prior work that assigns reviewers based on file ownership concentrates knowledge on a small group of core developers, increasing the risk of knowledge loss from turnover by up to 65%. We propose learning and retention aware review recommenders that, when combined, are effective at reducing the risk of turnover by 29%, but they unacceptably reduce the overall expertise during reviews by 26%. We develop the Sophia recommender, which suggests experts when none of the files under review are hoarded by developers but distributes knowledge when files are at risk. In this way, we are able to simultaneously increase expertise during review (ΔExpertise of 6%), with a negligible impact on workload (ΔCoreWorkload of 0.09%), and reduce the files at risk (ΔFaR of -28%). Sophia is integrated into GitHub pull requests, allowing developers to select an appropriate expert or “learner” based on the context of the review. We release the Sophia bot as well as the code and data for replication purposes.
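
    A small sketch of the selection rule the abstract describes: suggest an expert when none of the changed files are hoarded, otherwise suggest a learner so the review spreads knowledge. The knowledge model, threshold, and names below are hypothetical assumptions for illustration, not Sophia's implementation.

```python
# Hypothetical sketch of an expertise-vs-knowledge-distribution reviewer choice.
def recommend_reviewer(files, knowledge, author):
    """files: paths under review; knowledge: file -> set of developers who know it.
    Returns (developer, reason)."""
    # A file is "hoarded" if at most one person besides the author knows it.
    hoarded = [f for f in files if len(knowledge.get(f, set()) - {author}) <= 1]

    candidates = {d for devs in knowledge.values() for d in devs if d != author}
    expertise = {d: sum(d in knowledge.get(f, set()) for f in files)
                 for d in candidates}

    if not hoarded:
        # Nothing at risk: maximize expertise during review.
        return max(expertise, key=expertise.get), "expert"

    # Files at risk: prefer a capable developer who does NOT yet know them,
    # so reviewing distributes the knowledge.
    learners = [d for d in candidates
                if not any(d in knowledge.get(f, set()) for f in hoarded)]
    pool = learners or list(candidates)
    return max(pool, key=lambda d: expertise[d]), "learner"

# utils.py is known only to its author, so a learner is recommended.
knowledge = {"utils.py": {"alice"}, "api.py": {"alice", "bob", "carol"}}
print(recommend_reviewer(["utils.py", "api.py"], knowledge, author="alice"))
```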

    Testability of a swarm robot using a system of systems approach and discrete event simulation

    A simulation framework using discrete event system specification (DEVS) and data encoded with Extensible Markup Language (XML) is presented to support agent-in-the-loop (AIL) simulations for large, complex, and distributed systems. A System of Systems (SoS) approach organizes the complex systems hierarchically. AIL simulations provide a necessary step in maintaining model continuity methods to achieve a greater degree of accuracy in systems analysis. The proposed SoS approach enables the simulation and analysis of these independent and cooperative systems by concentrating on the data transferred among systems to achieve interoperability, instead of requiring the software modeling of global state spaces. The information exchanged is wrapped in XML to facilitate system integration and interoperability. A Groundscout is deployed as a real agent working cooperatively with virtual agents to form a robotic swarm in an example threat detection scenario. This scenario demonstrates the AIL framework's ability to successfully test a swarm robot for individual performance and swarm behavior. Results of the testing process show that increasing the robot team size increases the rate of successfully investigating a threat, while critical violations of the algorithm remain low despite packet loss.
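
    To make the "wrapped in XML" interoperability point concrete, here is a minimal sketch of a message a real robot and a virtual agent could both produce and consume. The tag names and fields are assumptions; the paper's actual schema is not given in the abstract.

```python
# Illustrative XML wrapping of agent data exchanged between real and virtual agents.
import xml.etree.ElementTree as ET

def wrap_report(robot_id, position, threat_detected):
    msg = ET.Element("agentReport", robot=str(robot_id))
    ET.SubElement(msg, "position", x=str(position[0]), y=str(position[1]))
    ET.SubElement(msg, "threat").text = "true" if threat_detected else "false"
    return ET.tostring(msg, encoding="unicode")

def unwrap_report(xml_text):
    msg = ET.fromstring(xml_text)
    pos = msg.find("position")
    return {
        "robot": msg.get("robot"),
        "position": (float(pos.get("x")), float(pos.get("y"))),
        "threat": msg.find("threat").text == "true",
    }

# A real Groundscout and a virtual agent only need to agree on this format,
# not on each other's internal state spaces.
wire = wrap_report(3, (12.5, 7.0), True)
print(unwrap_report(wire))
```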

    Distributed Web Service Coordination for Collaboration Applications and Biological Workflows

    In this dissertation work, we have investigated the main research thrust of decentralized coordination of workflows over web services. To address distributed workflow coordination, we first developed “Web Coordination Bonds” as a capable set of dependency modeling primitives that enable each web service to manage its own dependencies. Web bond primitives are as powerful as extended Petri nets and have sufficient modeling and expressive capabilities to model workflow dependencies. We have designed and prototyped our “Web Service Coordination Management Middleware” (WSCMM) system, which enhances the current web services infrastructure to accommodate web bond enabled web services. Finally, based on the core concepts of web coordination bonds and WSCMM, we have developed the “BondFlow” system, which allows easy configuration of distributed coordination of workflows. The footprint of the BondFlow runtime is 24 KB, and the additional third-party software packages, a SOAP client and an XML parser, account for 115 KB.
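
    The abstract does not define the bond primitives themselves, so the following is only a toy sketch of the decentralization idea: each service keeps its own dependency "bonds" and triggers the services bonded to it when it completes, with no central workflow engine. The bond semantics and names here are simplified assumptions, not the actual Web Coordination Bond primitives.

```python
# Toy sketch: decentralized workflow coordination via per-service dependencies.
class Service:
    def __init__(self, name, action):
        self.name = name
        self.action = action
        self.bonds = []                      # services that depend on this one

    def bond_to(self, other):
        """Declare that `other` should run after this service completes."""
        self.bonds.append(other)

    def invoke(self, payload):
        result = self.action(payload)
        print(f"{self.name} completed")
        for dependent in self.bonds:         # each service fires its own bonds
            dependent.invoke(result)
        return result

# A small ordering workflow coordinated without a central engine.
validate = Service("validate-order", lambda p: {**p, "valid": True})
charge   = Service("charge-card",   lambda p: {**p, "charged": True})
ship     = Service("ship-order",    lambda p: {**p, "shipped": True})
validate.bond_to(charge)
charge.bond_to(ship)
validate.invoke({"order": 42})
```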

    The performance and locality tradeoff in BitTorrent-like P2P file-sharing systems

    The recent surge of large-scale peer-to-peer (P2P) applications has brought huge amounts of P2P traffic, which significantly changes the Internet traffic pattern and increases the traffic-relay cost at Internet Service Providers (ISPs). To alleviate the stress on networks, localized peer selection has been proposed, which advocates neighbor selection within the same network (AS or ISP) to reduce cross-ISP traffic. Nevertheless, localized peer selection may degrade the downloading speed at the peers, creating a non-negligible tradeoff between downloading performance and traffic localization in the P2P system. Aiming at effective peer selection strategies that achieve any desired Pareto optimum in the face of this tradeoff, in this paper we characterize the performance and locality tradeoff as a multi-objective b-matching optimization problem. In particular, we first present a generic maximum weight b-matching model that characterizes the tit-for-tat in BitTorrent-like peer selection. We then introduce multiple optimization objectives into the model, which effectively characterize the performance and locality tradeoff as simultaneous objectives to optimize. We also design fully distributed peer selection algorithms that can effectively achieve any desired Pareto optimum of the global multi-objective optimization, which represents a desired tradeoff point between performance and locality in the entire system. Our models and algorithms are supported by rigorous analysis and extensive simulations. Published in Proceedings of the IEEE International Conference on Communications (ICC 2010), Cape Town, South Africa, 23-27 May 2010. ©2010 IEEE.
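
    A simplified, centralized sketch of the tradeoff: each peer picks at most b neighbors by maximizing a weighted sum of a performance term (peer bandwidth) and a locality term (same ISP), one common way to scalarize two objectives. The paper's actual contribution is a distributed multi-objective b-matching; this greedy version, with assumed field names and weights, only shows how the weight moves the outcome along the performance/locality tradeoff.

```python
# Illustrative greedy peer selection over a scalarized performance/locality objective.
def select_neighbors(me, peers, b, alpha):
    """peers: dicts with 'id', 'bandwidth' (KB/s), 'isp'.
    alpha in [0, 1]: 1 favors raw performance, 0 favors locality."""
    max_bw = max(p["bandwidth"] for p in peers) or 1
    def score(p):
        performance = p["bandwidth"] / max_bw           # normalized to [0, 1]
        locality = 1.0 if p["isp"] == me["isp"] else 0.0
        return alpha * performance + (1 - alpha) * locality
    return sorted(peers, key=score, reverse=True)[:b]   # keep the best b neighbors

me = {"id": "p0", "isp": "AS100"}
peers = [
    {"id": "p1", "bandwidth": 900, "isp": "AS200"},
    {"id": "p2", "bandwidth": 400, "isp": "AS100"},
    {"id": "p3", "bandwidth": 300, "isp": "AS100"},
    {"id": "p4", "bandwidth": 800, "isp": "AS300"},
]
print([p["id"] for p in select_neighbors(me, peers, b=2, alpha=0.9)])  # performance-heavy
print([p["id"] for p in select_neighbors(me, peers, b=2, alpha=0.2)])  # locality-heavy
```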

    A modeling and verification approach to the design of distributed IMA architectures using TTEthernet

    Integrated Modular Avionics (IMA) architectures complemented with Time-Triggered Ethernet (TTEthernet) provide a strong platform to support the design and deployment of distributed avionic software systems. The complexity of the design and continuous integration of such systems can be managed using a model-based methodology. In this paper, we build on our extension of the AADL modeling language to model TTEthernet-based distributed systems and leverage model transformations to enable verification of the system models produced with this methodology. In particular, we propose to transform the system models into a model suitable for simulation with DEVS. We illustrate the proposed approach using an example of a navigation and guidance system, and we use this example to show the verification of the contention-freedom property of the TTEthernet schedule.
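
    For intuition about the property being verified, here is a small sketch of a contention-freedom check: no two time-triggered frames may overlap on the same physical link within their hyperperiod. The schedule format (link, offset, duration, period) is an assumption for illustration; the paper performs this verification on a DEVS simulation model derived from the AADL description, not with a standalone checker like this.

```python
# Illustrative check that a periodic TTEthernet-style schedule is contention-free.
from math import gcd

def frames_conflict(f1, f2):
    """Two frames conflict if they share a link and some pair of their periodic
    transmission windows overlaps within the common hyperperiod."""
    if f1["link"] != f2["link"]:
        return False
    hyper = f1["period"] * f2["period"] // gcd(f1["period"], f2["period"])
    starts1 = range(f1["offset"], hyper, f1["period"])
    starts2 = range(f2["offset"], hyper, f2["period"])
    return any(s1 < s2 + f2["duration"] and s2 < s1 + f1["duration"]
               for s1 in starts1 for s2 in starts2)

def contention_free(schedule):
    return not any(frames_conflict(a, b)
                   for i, a in enumerate(schedule) for b in schedule[i + 1:])

# Hypothetical schedule for two links of a navigation/guidance network (times in microseconds).
schedule = [
    {"link": "sw1->nav", "offset": 0,   "duration": 50, "period": 1000},
    {"link": "sw1->nav", "offset": 100, "duration": 50, "period": 500},
    {"link": "sw1->gdc", "offset": 0,   "duration": 50, "period": 500},
]
print(contention_free(schedule))  # True: windows on the shared link never overlap
```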

    Energy-efficient media access control in wireless ad hoc networks


    OSIF: A Framework To Instrument, Validate, and Analyze Simulations

    In most existing simulators, the outputs of a simulation run consist either of a simulation report generated at the end of the run and summarizing the statistics of interest, or of a (set of) trace file(s) containing raw data samples produced and saved regularly during the run for later post-processing. In this paper, we address issues related to the management of these data and their on-line processing, such as: (i) the instrumentation code is mixed in with the modeling code; (ii) the amount of data to be stored may be enormous, and often a significant part of these data are useless, while their collection may consume a significant amount of computing resources; and (iii) it is difficult to have confidence in the treatment applied to the data and then make comparisons between studies, since each user (model developer) builds their own ad hoc instrumentation and data processing. We propose OSIF, a new component-based instrumentation framework designed to solve the above-mentioned issues. OSIF is based on several mature software engineering techniques and frameworks, such as COSMOS, Fractal and its ADL, and AOP.
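
    A toy sketch of the separation the paper argues for: the model emits raw observations through a probe, while the on-line processing (here, a running mean) lives outside the modeling code, so only a reduced statistic is kept instead of a huge trace file. OSIF itself is component-based (COSMOS, Fractal, AOP) and written against a different stack; this observer is only an illustrative analogy with assumed names.

```python
# Illustrative separation of instrumentation (probe) from modeling code.
class RunningMeanProbe:
    """On-line processing: keeps a single statistic, not every raw sample."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def observe(self, value):
        self.count += 1
        self.mean += (value - self.mean) / self.count

class QueueModel:
    """Modeling code: emits observations but knows nothing about their processing."""
    def __init__(self, probe):
        self.probe = probe
        self.queue_length = 0

    def arrival(self):
        self.queue_length += 1
        self.probe.observe(self.queue_length)

    def departure(self):
        self.queue_length = max(0, self.queue_length - 1)
        self.probe.observe(self.queue_length)

probe = RunningMeanProbe()
model = QueueModel(probe)
for event in ["arrival", "arrival", "departure", "arrival"]:
    getattr(model, event)()
print(f"mean queue length: {probe.mean:.2f} over {probe.count} samples")
```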