Proceedings of the ECSCW'95 Workshop on the Role of Version Control in CSCW Applications
The workshop entitled "The Role of Version Control in Computer Supported Cooperative Work Applications" was held on September 10, 1995 in Stockholm, Sweden in conjunction with the ECSCW'95 conference. Version control, the ability to manage relationships between successive instances of artifacts, organize those instances into meaningful structures, and support navigation and other operations on those structures, is an important problem in CSCW applications. It has long been recognized as a critical issue for inherently cooperative tasks such as software engineering, technical documentation, and authoring. The primary challenge for versioning in these areas is to support opportunistic, open-ended design processes requiring the preservation of historical perspectives in the design process, the reuse of previous designs, and the exploitation of alternative designs.
The primary goal of this workshop was to bring together a diverse group of individuals interested in examining the role of versioning in Computer Supported Cooperative Work. Participation was encouraged from members of the research community currently investigating the versioning process in CSCW as well as from application designers and developers who are familiar with the real-world requirements for versioning in CSCW. Both groups were represented at the workshop, resulting in an exchange of ideas and information that helped familiarize developers with the most recent research results in the area and provide researchers with an updated view of the needs and challenges faced by application developers. In preparing for this workshop, the organizers were able to build upon the results of their previous one, "The Workshop on Versioning in Hypertext," held in conjunction with the ECHT'94 conference. The following section of this report contains a summary in which the workshop organizers report the major results of the workshop. The summary is followed by a section containing the position papers that were accepted to the workshop. The position papers provide more detailed information describing the recent research efforts of the workshop participants as well as current challenges being encountered in the development of CSCW applications. A list of workshop participants is provided at the end of the report.
The organizers would like to thank all of the participants for their contributions, which were, of course, vital to the success of the workshop. We would also like to thank the ECSCW'95 conference organizers for providing a forum in which this workshop was possible.
Framework for Real-Time Collaboration on Extensive Data Types Using Strong Eventual Consistency
Real-time collaboration is a special case of collaboration in which users work on the same artefact simultaneously and are aware of each other's changes in real time. Shared data should remain available and consistent while being physically distributed across several systems. Strong Consistency is one approach, which enforces a total order on operations using mechanisms such as locking; this, however, introduces a bottleneck. In the last decade, concurrency-control algorithms have been studied with the goal of keeping all replicas convergent without locking or synchronization. Operational Transformation and Conflict-free Replicated Data Types (CRDTs) are widely used for this purpose. However, the complexity of these strategies makes them hard to integrate into large software, such as modeling editors, especially for complex data types like graphs; current implementations only support linear data such as text. In this thesis, we present CollabServer, a framework for building collaborative environments. It features a CRDT implementation for complex data types such as graphs and makes it possible to build other data structures.
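To make the Strong Eventual Consistency idea concrete, here is a minimal, textbook state-based CRDT (a grow-only counter) in Python. It is an illustrative sketch only, not CollabServer's actual graph CRDT:

```python
# Minimal sketch of a state-based CRDT (a grow-only counter), illustrating
# Strong Eventual Consistency: replicas converge by merging states, with no
# locking or coordination.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> local increment count

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Element-wise maximum: commutative, associative, and idempotent,
        # so replicas converge regardless of merge order or repetition.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

# Two replicas diverge, then converge after exchanging states.
a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 5
```

The merge function being a join (least upper bound) over states is what removes the need for locking: replicas can apply updates locally and reconcile later in any order.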
A Comparison of Incremental Triple Graph Grammar Tools
Triple Graph Grammars (TGGs) are a graph-based and visual technique for specifying bidirectional model transformation. TGGs can be used to transform models from scratch (in the batch mode), but the real potential of TGGs lies in propagating updates incrementally. Existing TGG tools differ considerably in their incremental mode concerning underlying algorithms, user-oriented aspects, incremental update capabilities, and formal properties. Indeed, the different foci, strengths, and weaknesses of current TGG tools in the incremental mode are difficult to discern, especially for non-developers. In this paper, we close this gap by (i) identifying a set of criteria for a qualitative comparison of TGG tools in the incremental mode, (ii) comparing three prominent incremental TGG tools with regard to these criteria, and (iii) conducting a quantitative comparison by means of runtime measurements.
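A tiny sketch may help make the batch/incremental distinction concrete. The model, rule, and correspondence map below are hypothetical simplifications, not the data structures of any of the compared tools:

```python
# Illustrative sketch: a correspondence map links source elements to their
# target counterparts, so an update to one source element can be propagated
# incrementally instead of re-running the whole batch transformation.

def batch_transform(source_model, rule):
    """Transform every source element from scratch (batch mode)."""
    correspondence = {}
    target_model = {}
    for sid, element in source_model.items():
        target_model[sid] = rule(element)
        correspondence[sid] = sid  # remember which target came from which source
    return target_model, correspondence

def propagate_update(source_model, target_model, correspondence, changed_id, rule):
    """Re-transform only the changed element (incremental mode)."""
    target_model[correspondence[changed_id]] = rule(source_model[changed_id])

rule = lambda e: e.upper()
source = {1: "class", 2: "table"}
target, corr = batch_transform(source, rule)

source[2] = "view"                               # a single source-side update...
propagate_update(source, target, corr, 2, rule)  # ...touches one target element
assert target == {1: "CLASS", 2: "VIEW"}
```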
An Evolutionary Algorithm to Optimize Log/Restore Operations within Optimistic Simulation Platforms
In this work we address state recoverability in advanced optimistic simulation systems by proposing an evolutionary algorithm that optimizes at run-time the parameters associated with state log/restore activities. Optimization takes place by adaptively selecting, for each simulation object, both (i) the best-suited log mode (incremental vs. non-incremental) and (ii) the corresponding optimal value of the log interval. Our performance optimization approach allows us to cope indirectly with hidden effects (e.g., locality) as well as with cross-object effects arising from the variation of log/restore parameters across different simulation objects (e.g., rollback thrashing). Neither of these is captured by literature solutions based on analytical models of the overhead associated with log/restore tasks. In more detail, our evolutionary algorithm dynamically adjusts the log/restore parameters of distinct simulation objects as a whole, toward a well-suited configuration. In this way, we prevent negative effects on performance due to biasing the optimization towards individual simulation objects, which may yield reduced gains (or even a decrease) in performance precisely because of the aforementioned hidden and/or cross-object phenomena. We also present an application-transparent implementation of the evolutionary algorithm within the ROme OpTimistic Simulator (ROOT-Sim), an open source, general-purpose simulation environment designed according to the optimistic synchronization paradigm.
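As a rough illustration of the approach (not ROOT-Sim's actual code), the following sketch evolves the per-object (log mode, log interval) pairs as a single genome, so the fitness reflects the configuration as a whole rather than per-object optima; the cost model is a placeholder for overhead a real system would measure online:

```python
# Hedged sketch: an evolutionary loop that tunes log/restore parameters for
# all simulation objects *as a whole*, so cross-object effects show up in
# the fitness. The cost function is a stand-in, not a measured overhead.

import random

N_OBJECTS = 4
LOG_MODES = ("incremental", "full")

def random_config():
    # One gene per simulation object: (log mode, log interval).
    return [(random.choice(LOG_MODES), random.randint(1, 50))
            for _ in range(N_OBJECTS)]

def fitness(config):
    # Placeholder cost model: full logs cost more per checkpoint; longer
    # intervals make restores (rollbacks) more expensive.
    cost = 0.0
    for mode, interval in config:
        log_cost = (1.0 if mode == "incremental" else 3.0) / interval
        restore_cost = 0.2 * interval
        cost += log_cost + restore_cost
    return -cost  # higher fitness = lower overhead

def mutate(config):
    child = list(config)
    i = random.randrange(N_OBJECTS)
    mode, interval = child[i]
    child[i] = (random.choice(LOG_MODES),
                max(1, interval + random.randint(-5, 5)))
    return child

population = [random_config() for _ in range(20)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
```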
Supporting the grow-and-prune model for evolving software product lines
Software Product Lines (SPLs) aim at supporting the development of a whole family of software products through systematic reuse of shared assets. To this end, SPL development is separated into two interrelated processes: (1) domain engineering (DE), where the scope and variability of the system are defined and reusable core assets are developed; and (2) application engineering (AE), where products are derived by selecting core assets and resolving variability. Evolution in SPLs is considered more challenging than in traditional systems, as both core assets and products need to co-evolve. The so-called grow-and-prune model has shown great flexibility for incrementally evolving an SPL: products are allowed to grow, and the product functionalities deemed useful are later pruned by refactoring and merging them back into the reusable SPL core-asset base. This thesis aims at supporting the grow-and-prune model in both initiating and enacting the pruning. Initiating the pruning requires SPL engineers to conduct customization analysis, i.e., analyzing how products have changed the core assets, with the aim of identifying interesting product customizations to be ported to the core-asset base. However, existing tools do not fulfill engineers' needs for this practice. To address this issue, this thesis elaborates on SPL engineers' needs when conducting customization analysis and proposes a data-warehouse approach to help them in the analysis. Once the interesting customizations have been identified, the pruning needs to be enacted. This means that product code needs to be ported to the core-asset realm, while products are upgraded with newer functionalities and bug fixes available in newer core-asset releases. Synchronizing both parties through sync paths is therefore required. However, state-of-the-art tools are not tailored to SPL sync paths, which hinders synchronizing core assets and products. To address this issue, this thesis proposes to leverage existing version control systems (i.e., git/GitHub) to provide sync operations as first-class constructs.
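As a rough illustration of a sync path on top of plain git (the branch names and commit id below are hypothetical, and the thesis proposes richer first-class operations than this), pruning ports a product commit back to the core-asset branch, after which the product is upgraded with the newer core-asset release:

```python
# Hedged sketch of the "sync path" idea using plain git commands via
# subprocess. Branch names and the commit id are hypothetical placeholders.

import subprocess

def git(*args):
    subprocess.run(["git", *args], check=True)

# Prune: port an interesting product customization into the core-asset base.
git("checkout", "core-assets")
git("cherry-pick", "abc1234")          # hypothetical commit from product-a

# Upgrade: bring the product up to date with the newer core-asset release.
git("checkout", "product-a")
git("merge", "core-assets")
```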
Enabling One-Phase Commit (1PC) Protocol for Web Service Atomic Transaction (WS-AT)
Business transactions (a.k.a. business conversations) are series of message exchanges that occur between software applications coordinating to achieve a business objective. Web services have proven to be a promising technology for supporting business transactions. A business transaction can be either long-running or short-lived. A transaction, whether in a database or a web-service paradigm, has an "all-or-nothing" property: it either succeeds or fails. Web Service Atomic Transaction (WS-AT) is a specification, developed by OASIS, a standards development organization, that currently supports the Two-Phase Commit (2PC) protocol for short-lived transactions. However, not all business process scenarios require 2PC; in some cases a One-Phase Commit (1PC) would be sufficient. Unfortunately, WS-AT currently does not support the 1PC optimization.
The ideal scenario for using 1PC instead of 2PC is when there is only a single participant. A short-lived transaction involving only one participant can commit without the initial "prepare" phase, so there is no overhead to check whether the participant is prepared to either commit or roll back. This research focuses on designing a mechanism that adds 1PC support to WS-AT, implemented using the JBoss Transaction API. As part of this thesis, a 1PC mechanism for the single-participant scenario was implemented; it reduces the overhead of the web service transaction process and improves performance in terms of execution time. The implementation was evaluated on three different business process scenarios in a controlled experiment as a presence-or-absence test. Evaluation results show that the 1PC mechanism has a lower mean execution time and performed significantly better than the 2PC mechanism. Based on the contributions made by this thesis, we recommend that OASIS consider including the 1PC mechanism as part of the WS-AT specification.
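A minimal sketch of the optimization's control flow (illustrative only, not the JBoss Transaction API): with a single participant, the coordinator can skip the prepare round-trip entirely:

```python
# Illustrative coordinator applying the one-phase-commit optimization when
# the transaction has a single participant, skipping 2PC's "prepare" phase.

class Participant:
    def prepare(self):  # vote yes/no in 2PC's first phase
        return True
    def commit(self):
        print("committed")
    def rollback(self):
        print("rolled back")

def complete_transaction(participants):
    if len(participants) == 1:
        # 1PC: a single participant can decide alone; no prepare phase needed.
        participants[0].commit()
        return
    # 2PC: all participants must vote yes before anyone commits.
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
    else:
        for p in participants:
            p.rollback()

complete_transaction([Participant()])   # takes the 1PC fast path
```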
Knowledge-based approach to flexible workflow management systems
This thesis was submitted for the degree of Doctor of Philosophy awarded by the Korea Advanced Institute of Science and Technology (KAIST). Today's business environments are dynamic and uncertain. To effectively support business processes in such contexts, workflow management systems must be able to adapt. In this dissertation, the workflow concept is redefined and represented with a set of business rules. Business rules play a central role in organizational workflows in the context of cooperation among actors. To achieve business goals, they constrain the flow of work, the use of resources, and the responsibility mapping between tasks and actors using the role concept. Business rules are explicitly modeled in the Knowledge-based Workflow Model (KWM) using frames.
To increase the adaptability of a workflow management system, KWM has several distinctive features. First, it increases the expressiveness of the workflow model so that exception-handling rules and responsibility-mapping rules between tasks and actors, as well as task-scheduling rules, are explicitly modeled. Second, the formal definition of KWM enables one to define and analyze the correctness of a workflow schema. The knowledge-based approach enables more powerful analysis of a workflow schema, including checking the consistency and compactness of routing rules as well as the terminality of a workflow. Third, adaptability is increased by a change propagation mechanism that assures the correctness of a workflow after its schema is modified. Change propagation rules for the modification primitives are provided to manage workflow evolution. In addition, metarules that control the rules in KWM are used to handle exceptions that occur in a running workflow instance. Workflow participants can easily change the workflow schema of a workflow instance with the support of extra rules and a metarule.
Based on KWM, K-WFMS (Knowledge-based WorkFlow Management System) has been implemented in a client/server architecture. An inference shell for knowledge-based systems is employed for the enactment of business rules and integrated with database systems. A real application based on the KWM architecture has shown that system performance can increase notably by reducing the number of rules and facts used in the course of workflow enactment.
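A hedged sketch of the rule-plus-metarule idea (the slot names and helper below are hypothetical, and KWM represents rules as frames rather than Python dicts): a workflow step is driven by an explicit business rule, and a metarule can override that rule to handle an exception in a running instance:

```python
# Hypothetical frame-like business rule: a task routed by a condition, with
# responsibility mapped to a role. A metarule overrides it during exceptions.

review_rule = {
    "name": "review-order",
    "role": "reviewer",             # responsibility mapping: task -> role
    "condition": lambda order: order["amount"] > 1000,
    "action": "route-to-manager",
}

def apply_metarule(rule, instance):
    # Metarule: if the manager is unavailable for this running instance,
    # reroute without touching the schema-level rule itself.
    if instance.get("exception") == "manager-unavailable":
        return {**rule, "action": "route-to-deputy"}
    return rule

instance = {"amount": 2500, "exception": "manager-unavailable"}
active = apply_metarule(review_rule, instance)
if active["condition"](instance):
    print(active["role"], "->", active["action"])  # reviewer -> route-to-deputy
```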
FixMiner: Mining Relevant Fix Patterns for Automated Program Repair
Patching is a common activity in software development. It is generally performed on a source code base to address bugs or add new functionalities. In this context, given the recurrence of bugs across projects, the associated similar patches can be leveraged to extract generic fix actions. While the literature includes various approaches leveraging similarity among patches to guide program repair, these approaches often do not yield fix patterns that are tractable and reusable as actionable input to APR systems. In this paper, we propose a systematic and automated approach to mining relevant and actionable fix patterns based on an iterative clustering strategy applied to atomic changes within patches. The goal of FixMiner is thus to infer separate and reusable fix patterns that can be leveraged in other patch generation systems. Our technique, FixMiner, leverages the Rich Edit Script, a specialized tree structure of the edit scripts that captures the AST-level context of code changes. FixMiner uses a different tree representation of Rich Edit Scripts for each round of clustering to identify similar changes: abstract syntax trees, edit action trees, and code context trees. We have evaluated FixMiner on thousands of software patches collected from open source projects. Preliminary results show that we are able to mine accurate patterns, efficiently exploiting the change information in Rich Edit Scripts. We further integrated the mined patterns into an automated program repair prototype, PARFixMiner, with which we are able to correctly fix 26 bugs of the Defects4J benchmark. Beyond this quantitative performance, we show that the mined fix patterns are sufficiently relevant to produce patches with a high probability of correctness: 81% of PARFixMiner's generated plausible patches are correct.
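A simplified sketch of the iterative clustering idea (not FixMiner's actual implementation, and the string-valued representations below are hypothetical stand-ins for real tree comparisons): patches are grouped by increasingly concrete tree representations, each round refining the previous round's clusters:

```python
# Sketch of iterative clustering over three tree representations of a patch:
# AST shape, then edit actions, then code context. Each round splits the
# previous round's clusters into finer-grained candidate fix patterns.

from collections import defaultdict

patches = [
    {"ast": "IfStmt", "actions": "insert-guard", "context": "null-check"},
    {"ast": "IfStmt", "actions": "insert-guard", "context": "bounds-check"},
    {"ast": "IfStmt", "actions": "update-cond", "context": "null-check"},
    {"ast": "MethodCall", "actions": "update-arg", "context": "cast"},
]

def cluster(items, key):
    groups = defaultdict(list)
    for item in items:
        groups[key(item)].append(item)
    return list(groups.values())

# Round 1: cluster by AST shape; rounds 2 and 3 split each cluster further.
clusters = cluster(patches, lambda p: p["ast"])
for k in ("actions", "context"):
    clusters = [sub for c in clusters for sub in cluster(c, lambda p, k=k: p[k])]

# Each final cluster is a candidate fix pattern shared by similar patches.
for c in clusters:
    print(len(c), c[0]["ast"], c[0]["actions"], c[0]["context"])
```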