301 research outputs found

    Proceedings of the ECSCW'95 Workshop on the Role of Version Control in CSCW Applications

    The workshop entitled "The Role of Version Control in Computer Supported Cooperative Work Applications" was held on September 10, 1995 in Stockholm, Sweden in conjunction with the ECSCW'95 conference. Version control, the ability to manage relationships between successive instances of artifacts, organize those instances into meaningful structures, and support navigation and other operations on those structures, is an important problem in CSCW applications. It has long been recognized as a critical issue for inherently cooperative tasks such as software engineering, technical documentation, and authoring. The primary challenge for versioning in these areas is to support opportunistic, open-ended design processes requiring the preservation of historical perspectives in the design process, the reuse of previous designs, and the exploitation of alternative designs. The primary goal of this workshop was to bring together a diverse group of individuals interested in examining the role of versioning in Computer Supported Cooperative Work. Participation was encouraged from members of the research community currently investigating the versioning process in CSCW as well as application designers and developers who are familiar with the real-world requirements for versioning in CSCW. Both groups were represented at the workshop resulting in an exchange of ideas and information that helped to familiarize developers with the most recent research results in the area, and to provide researchers with an updated view of the needs and challenges faced by application developers. In preparing for this workshop, the organizers were able to build upon the results of their previous one entitled "The Workshop on Versioning in Hypertext" held in conjunction with the ECHT'94 conference. The following section of this report contains a summary in which the workshop organizers report the major results of the workshop. 
The summary is followed by a section that contains the position papers that were accepted to the workshop. The position papers provide more detailed information describing recent research efforts of the workshop participants as well as current challenges that are being encountered in the development of CSCW applications. A list of workshop participants is provided at the end of the report. The organizers would like to thank all of the participants for their contributions, which were, of course, vital to the success of the workshop. We would also like to thank the ECSCW'95 conference organizers for providing a forum in which this workshop was possible.

    A Configuration Management System for Software Product Lines

    Software product line engineering (SPLE) is a methodology for developing a family of software products in a particular domain by systematic reuse of shared code in order to improve product quality and reduce development time and cost. Currently, there are no software configuration management (SCM) tools that support software product line evolution. Conventional SCM tools are designed to support single-product development. The use of conventional SCM tools forces developers to treat a software product line as a single software project by introducing new programming language constructs or using conditional compilation. We propose a research configuration management prototype called Molhado SPL that is designed specifically to support the evolution of software product lines. Molhado SPL addresses the evolution problem at the configuration level instead of at the code level. We studied the types of operations needed to support the evolution of software product lines and proposed a versioning model and eight cases of change propagation. Molhado SPL supports independent evolution of core assets and products, the sharing of code, the tracking of relationships between products and shared code, and the eight cases of change propagation. Molhado SPL consists of four layers, with each layer providing a different type of service. At the heart of Molhado SPL are the versioning model, component object, shared component object, and project objects that allow for independent evolution of products and shared artifacts, for sharing, and for supporting change propagation. Furthermore, they allow product-specific changes to shared code without interfering with the core asset that is shared. Products can also introduce product-specific assets that only exist in that product. In order for Molhado SPL to support product lines, we implemented XML merging, feature model editing and debugging, and version-aware XML documents.
To support merging of XML documents, we implemented a 3-way XML document merging algorithm that uses versioned data structures, change detection, and node identity. To support software product line derivation and modeling, we implemented support for feature models, including editing and debugging. Finally, we created the version-aware XML document framework to support collaborative editing of XML documents without requiring a version repository. The version history is embedded in the documents using XML namespaces, so that the documents remain valid under the XML specification. The version-aware XML framework can also be used to export documents from the Molhado SPL repository to be edited outside and to import back the change history made to the document. We evaluated Molhado SPL with two product lines: a document product line and a graph data structures product line. This evaluation showed that Molhado SPL supports independent evolution of products and core assets and the eight change propagation cases. We did not evaluate Molhado SPL in terms of scalability or usability. The main contributions of this dissertation research are: 1) Molhado SPL, which supports the evolution of product lines, 2) a fast 3-way XML merge algorithm, 3) a version-aware XML document framework, and 4) a feature model editor and debugger.
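    The identity-based 3-way merge described above can be illustrated with a small sketch. The following is a hypothetical, simplified decision rule over flat maps of node identities to values, not the actual Molhado SPL algorithm: the common base version arbitrates when the two edited versions disagree.

```python
# A minimal sketch of a 3-way merge decision rule (illustrative only,
# not the Molhado SPL implementation): each node is keyed by a stable
# identity, and the base version arbitrates between the two edits.

def three_way_merge(base, left, right):
    """Merge two dicts of {node_id: value} against a common base.

    Returns (merged, conflicts); a missing key means the node was deleted.
    """
    merged, conflicts = {}, []
    for node in set(base) | set(left) | set(right):
        b, l, r = base.get(node), left.get(node), right.get(node)
        if l == r:                       # both sides agree (or both deleted)
            if l is not None:
                merged[node] = l
        elif l == b:                     # only the right side changed
            if r is not None:
                merged[node] = r
        elif r == b:                     # only the left side changed
            if l is not None:
                merged[node] = l
        else:                            # both changed differently: conflict
            conflicts.append(node)
    return merged, conflicts

merged, conflicts = three_way_merge(
    base={"a": 1, "b": 2, "c": 3},
    left={"a": 1, "b": 9, "c": 3},       # edited b
    right={"a": 1, "b": 2},              # deleted c
)
# merged == {"a": 1, "b": 9}; conflicts == []
```

    Stable node identities are what make this tractable for XML: without them, a merge must guess which elements correspond across versions, which is where generic text merging breaks down.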

    A macro-micro system architecture analysis framework applied to Smart Grid meter data management systems by Sooraj Prasannan.

    Thesis (S.M. in System Design and Management)--Massachusetts Institute of Technology, Engineering Systems Division, 2010. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 109-111). This thesis proposes a framework for architectural analysis of a system at the Macro and Micro levels. The framework consists of two phases -- Formulation and Analysis. Formulation is made up of three steps -- Identifying the System Boundary, Identifying the Object-Process System levels using the Object-Process Methodology (OPM), and creating the Dependency Matrix using a Design Structure Matrix (DSM). Analysis is composed of two steps -- Macro-Level and Micro-Level Analysis. Macro-Level analysis identifies the system modules and their interdependencies based on the OPM and DSM clustering analysis and Visibility-Dependency Signature Analysis. Micro-Level analysis identifies the central components in the system based on the connectivity metrics of Indegree centrality, Outdegree centrality, Visibility, and Dependency. The conclusions are drawn by simultaneously interpreting the results derived from the Macro-Level and Micro-Level Analysis. Macro-Analysis is vital for comprehending system scalability and functionality. The modules and their interactions influence the scalability of the system, while the absence of certain modules within a system might indicate missing system functionality. Micro-Analysis classifies the components in the system based on connectivity and can be used to guide redesign/design efforts. Understanding how the redesign of a particular node will affect the entire system helps in planning and implementation.
On the other hand, design modification/enhancement of nodes with low connectivity can be achieved without affecting the performance or architecture of the entire system. Identifying the highly central nodes also helps the system architect understand whether the system has enough redundancy built in to withstand the failure of the central nodes. Potential system bottlenecks can also be identified using the micro-level analysis. The proposed framework is applied to two industry-leading Smart Grid Meter Data Management Systems. Meter Data Management Systems are the central repository of meter data in the Smart Grid Information Technology Layer. Exponential growth is expected in managing electrical meter data, and technology firms are very interested in finding ways to leverage the Smart Grid Information Technology market. The thesis compares the two Meter Data Management System architectures and proposes a generic Meter Data Management System by combining the strengths of the two architectures while identifying areas of collaboration between firms to leverage this generic architecture. S.M. in System Design and Management.
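    The degree-based connectivity metrics used in the Micro-Level analysis can be sketched directly from a DSM. The following is an illustrative computation over a hypothetical binary DSM (entry [i][j] == 1 meaning component i depends on component j); the Visibility and Dependency metrics extend these counts to transitive reachability, which this sketch omits.

```python
# Indegree and outdegree centrality from a binary Design Structure Matrix.
# Outdegree of i  = how many components i depends on (row sum).
# Indegree of j   = how many components depend on j (column sum).

def degree_centralities(dsm):
    n = len(dsm)
    outdegree = [sum(row) for row in dsm]
    indegree = [sum(dsm[i][j] for i in range(n)) for j in range(n)]
    return indegree, outdegree

# Hypothetical 3-component system: 0 -> 1, 0 -> 2, 1 -> 2
dsm = [
    [0, 1, 1],
    [0, 0, 1],
    [0, 0, 0],
]
indeg, outdeg = degree_centralities(dsm)
# indeg == [0, 1, 2]; outdeg == [2, 1, 0]
```

    In this toy system, component 2 has the highest indegree, so it is the kind of highly central node whose failure or redesign would ripple through the system, while component 0 depends on everything and depends on nothing else depending on it.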

    Design and Implementation of a Message Standardization System

    This thesis describes designing and implementing an extension to an existing standardization tool that allows configuring and saving diagnostic messages of an automation system and allows users to save their changes concurrently with each other. The existing tool is used to configure and save XML template configurations. The XML configurations contain definitions that are similar between automation system configurations. Importing the standards into a system reduces repetitive work. The standardization tool has a limitation in saving changes to standards when more than one user tries to save their changes to the same standard version. New saving logic is to be created to allow more than one user to edit the same standard version at the same time. First, the target system and the usage of the tool are introduced. Then the goals of the thesis are presented. Next, the concurrency issues are reviewed, and the current saving logic is presented. Two solutions are described, and one is chosen for further design and implementation. The design of the standardization tool and the new saving logic are introduced next. Then, the implementation is evaluated and further implementation ideas are presented. Last, the conclusions of the thesis are presented.
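    One common way to let several users edit the same standard version safely is optimistic concurrency control. The sketch below is an assumption for illustration, not the thesis's actual saving logic: each version carries a revision counter, and a save based on a stale revision is rejected so the caller can re-merge.

```python
# A minimal optimistic-concurrency sketch (hypothetical, not the tool's
# actual saving logic): a save succeeds only if it was based on the
# latest stored revision; otherwise the caller must reload and re-merge.

class StandardVersion:
    def __init__(self, content):
        self.content = content
        self.revision = 0

    def save(self, new_content, base_revision):
        """Apply a change only if it was based on the current revision."""
        if base_revision != self.revision:
            return False                 # concurrent edit detected
        self.content = new_content
        self.revision += 1
        return True

std = StandardVersion("<template/>")
rev = std.revision
std.save("<template changed='1'/>", rev)   # first save succeeds
std.save("<template changed='2'/>", rev)   # stale save is rejected
```

    The alternative, pessimistic locking, blocks the second user entirely; optimistic checking keeps both users editing and defers conflict handling to save time, which suits the merge-based approach the thesis describes.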

    Parallel Rendering and Large Data Visualization

    We are living in the big data age: an ever-increasing amount of data is being produced through data acquisition and computer simulations. While large-scale analysis and simulations have received significant attention for cloud and high-performance computing, software to efficiently visualise large data sets is struggling to keep up. Visualization has proven to be an efficient tool for understanding data; in particular, visual analysis is a powerful tool to gain intuitive insight into the spatial structure and relations of 3D data sets. Large-scale visualization setups are becoming ever more affordable, and high-resolution tiled display walls are in reach even for small institutions. Virtual reality has arrived in the consumer space, making it accessible to a large audience. This thesis addresses these developments by advancing the field of parallel rendering. We formalise the design of system software for large data visualization through parallel rendering, provide a reference implementation of a parallel rendering framework, introduce novel algorithms to accelerate the rendering of large amounts of data, and validate this research and development with new applications for large data visualization. Applications built using our framework enable domain scientists and large data engineers to better extract meaning from their data, making it feasible to explore more data and enabling the use of high-fidelity visualization installations to see more detail of the data. Comment: PhD thesis.
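    A basic building block of parallel rendering is distributing the work across render processes. The sketch below illustrates one standard decomposition, sort-first (screen-space) splitting; it is a generic illustration of the technique, not code from the thesis's framework, and the tile shapes are a simplifying assumption.

```python
# A sketch of sort-first parallel rendering decomposition: the viewport
# is split into vertical tiles, one per render worker; each worker
# renders only the geometry falling in its tile, and the tiles are
# composited side by side into the final frame.

def split_viewport(width, height, n_workers):
    """Divide the screen into n vertical tiles as (x, y, width, height)."""
    base, extra = divmod(width, n_workers)
    tiles, x = [], 0
    for i in range(n_workers):
        w = base + (1 if i < extra else 0)   # spread remainder pixels
        tiles.append((x, 0, w, height))
        x += w
    return tiles

tiles = split_viewport(1920, 1080, 3)
# tiles == [(0, 0, 640, 1080), (640, 0, 640, 1080), (1280, 0, 640, 1080)]
```

    Sort-first keeps compositing trivial (tiles never overlap) but load-balances poorly when geometry clusters in one tile; sort-last instead splits the data across workers and composites with depth tests, trading cheap distribution for a more expensive compositing step.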