    Ontology-based domain modelling for consistent content change management

    Ontology-based modelling of multi-formatted software application content is a challenging area in content management. When the number of software content units is large and the content is in a continuous process of change, content change management becomes important. Managing content in this context requires targeted access and manipulation methods. We present a novel approach to deal with model-driven content-centric information systems and access to their content. At the core of our approach is an ontology-based semantic annotation technique for diversely formatted content that can improve the accuracy of access and systems evolution. Domain ontologies represent domain-specific concepts and conform to metamodels. Different ontologies - from application domain ontologies to software ontologies - capture and model the different properties and perspectives on a software content unit. Interdependencies between the domain ontologies, the artifacts and the content are captured through a trace model. The annotation traces are formalised, and a graph-based system is selected for their representation.
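
    The abstract does not spell out how annotation traces are stored, but the core idea of a graph that links ontology concepts to the content units they annotate can be sketched as follows; the class name, concept labels and the change-impact query are illustrative assumptions rather than the authors' model.

```python
# Illustrative sketch: annotation traces between ontology concepts and content
# units kept as a small directed graph (the names here are hypothetical).
from collections import defaultdict

class TraceGraph:
    def __init__(self):
        # concept -> set of content units annotated with that concept
        self._annotates = defaultdict(set)

    def annotate(self, concept, content_unit):
        """Record a trace: a domain-ontology concept annotates a content unit."""
        self._annotates[concept].add(content_unit)

    def affected_content(self, concept):
        """Content units to revisit when the concept changes in the ontology."""
        return sorted(self._annotates[concept])

traces = TraceGraph()
traces.annotate("Invoice", "manual/billing.html")
traces.annotate("Invoice", "src/billing/InvoiceService.java")
traces.annotate("Customer", "manual/accounts.html")
print(traces.affected_content("Invoice"))
```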

    COMAND - A Distributed Configuration Management Framework

    Software development is becoming an increasingly distributed process, which urgently needs supporting tools in the fields of configuration management, software process/workflow management, communication and problem tracking. In this paper we present a new distributed software configuration management framework, COMAND. It offers high availability through replication and a mechanism to easily change and adapt the project structure to new business needs. To better understand and formally prove some properties of COMAND, we have modeled it in a formal technique based on distributed graph transformations. This formalism provides an intuitive rule-based description technique, mainly for the dynamic behavior of the system on an abstract level. We use it here to model the replication subsystem.
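
    COMAND's actual transformation rules are not given in the abstract; the toy sketch below only illustrates what a rule-based graph transformation step for replication might look like, and the "replicate" rule, node labels and match condition are assumptions made for illustration.

```python
# Toy graph-transformation step: a hypothetical "replicate" rule adds a
# replica node for every master node that does not yet have one.
graph = {
    "nodes": {"n1": "master", "n2": "master", "n3": "replica"},
    "edges": {("n1", "n3")},  # n1 is already replicated
}

def apply_replicate_rule(g):
    """LHS: a master node with no outgoing edge to a replica.
    RHS: the same master plus a fresh replica node and a connecting edge."""
    counter = len(g["nodes"])
    for node, label in list(g["nodes"].items()):
        has_replica = any(src == node and g["nodes"][dst] == "replica"
                          for src, dst in g["edges"])
        if label == "master" and not has_replica:
            counter += 1
            replica = f"n{counter}"
            g["nodes"][replica] = "replica"
            g["edges"].add((node, replica))

apply_replicate_rule(graph)
print(graph["nodes"], graph["edges"])
```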

    Runtime protection via dataflow flattening

    Software running on an open architecture, such as the PC, is vulnerable to inspection and modification. Since software may process valuable or sensitive information, many defenses against data analysis and modification have been proposed. This paper complements existing work and focuses on hiding data location throughout program execution. To achieve this, we combine three techniques: (i) periodic reordering of the heap, (ii) migrating local variables from the stack to the heap and (iii) pointer scrambling. By essentially flattening the dataflow graph of the program, the techniques serve to complicate static dataflow analysis and dynamic data tracking. Our methodology can be viewed as a data-oriented analogue of control-flow flattening techniques. Dataflow flattening is useful in practical scenarios like DRM, information-flow protection, and exploit resistance. Our prototype implementation compiles C programs into a binary for which every access to the heap is redirected through a memory management unit. Stack-based variables may be migrated to the heap, while pointer accesses and arithmetic may be scrambled and redirected. We evaluate our approach experimentally on the SPEC CPU2006 benchmark suite.
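
    The prototype described above operates on compiled C binaries; the short Python sketch below merely models the underlying idea of routing every data access through an indirection table whose layout can be reshuffled and whose handles are scrambled, with all names invented for illustration.

```python
# Conceptual model only (not the authors' C-level implementation): data lives
# behind an indirection table, handles handed to the program are XOR-scrambled,
# and the underlying storage can be reshuffled without invalidating handles.
import random

class MMU:
    def __init__(self, key=0x5A5A):
        self._key = key      # scrambling key for handles ("pointer scrambling")
        self._table = {}     # scrambled handle -> current slot index
        self._heap = []      # actual storage slots

    def alloc(self, value):
        self._heap.append(value)
        handle = (len(self._heap) - 1) ^ self._key
        self._table[handle] = len(self._heap) - 1
        return handle

    def load(self, handle):
        return self._heap[self._table[handle]]

    def store(self, handle, value):
        self._heap[self._table[handle]] = value

    def reorder(self):
        """Periodic heap reordering: permute the slots and patch the table."""
        perm = list(range(len(self._heap)))
        random.shuffle(perm)
        new_heap = [None] * len(self._heap)
        for old, new in enumerate(perm):
            new_heap[new] = self._heap[old]
        self._heap = new_heap
        self._table = {h: perm[idx] for h, idx in self._table.items()}

mmu = MMU()
p = mmu.alloc(42)    # the program only ever sees the scrambled handle
mmu.reorder()        # data moves, but the handle stays valid
print(mmu.load(p))   # -> 42
```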

    Software reliability through fault-avoidance and fault-tolerance

    Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide an additional theoretical and empirical basis for estimating the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.

    An Automated Method for Identifying Inconsistencies within Diagrammatic Software Requirements Specifications

    The development of large-scale, composite software in a geographically distributed environment is an evolutionary process. In such evolving systems, striving for consistency is complicated by many factors: development participants have different locations, skills, responsibilities, roles, opinions, languages, terminology, and degrees of abstraction. This naturally leads to many partial specifications, or viewpoints. These multiple views on the system being developed usually overlap, and that overlap gives rise to the potential for inconsistency. Existing CASE tools do not efficiently manage inconsistencies in a distributed development environment for large-scale projects. Based on the ViewPoints framework, the WHERE (Web-Based Hypertext Environment for requirements Evolution) toolkit aims to tackle inconsistency management issues within geographically distributed software development projects and thereby helps produce more robust software and support the software assurance process. The long-term goal of the WHERE tools is inconsistency analysis and management in requirements specifications. A framework based on Graph Grammar theory and the TCMJAVA toolkit is proposed to detect inconsistencies among viewpoints. This systematic approach uses three basic operations (UNION, DIFFERENCE, INTERSECTION) to study the static behavior of graphic and tabular notations; from these operations, subgraph Query, Selection, Merge, and Replacement operations can be derived. The approach uses graph PRODUCTIONS (rewriting rules) to study the dynamic transformations of graphs, and we discuss the feasibility of implementing these operations. We also present the process of porting the original TCM (Toolkit for Conceptual Modeling) project from C++ to the Java programming language in this thesis. A scenario based on the NASA International Space Station specification is discussed to show the applicability of our approach. Finally, conclusions and future work regarding inconsistency management in the WHERE project are summarized.
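
    As a rough illustration of the three basic operations, two overlapping viewpoints can be compared as edge sets; the edge labels below are invented and do not follow the WHERE/TCM notation.

```python
# Illustrative only: two viewpoints represented as edge sets, compared with the
# three basic set operations to flag possible inconsistencies.
viewpoint_a = {("Order", "contains", "Item"), ("Order", "paidBy", "Customer")}
viewpoint_b = {("Order", "contains", "Item"), ("Order", "paidBy", "Account")}

union        = viewpoint_a | viewpoint_b   # everything either viewpoint asserts
intersection = viewpoint_a & viewpoint_b   # statements both viewpoints agree on
difference   = viewpoint_a - viewpoint_b   # candidates for inconsistency review

for edge in sorted(difference | (viewpoint_b - viewpoint_a)):
    print("review:", edge)
```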

    Conversion from Tree to Graph Representation of Requirements

    A procedure and software to implement the procedure have been devised to enable conversion from a tree representation to a graph representation of the requirements governing the development and design of an engineering system. The need for this procedure and software and for other requirements-management tools arises as follows: In systems-engineering circles, it is well known that requirements-management capability improves the likelihood of success in the team-based development of complex systems involving multiple technological disciplines. It is especially desirable to be able to visualize (in order to identify and manage) requirements early in the system-design process, when errors can be corrected most easily and inexpensively.
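
    The abstract does not describe the conversion algorithm itself; the following sketch shows one plausible reading, in which a requirement that appears under several branches of the tree is folded into a single shared graph node, and both the example tree and the merge rule are assumptions.

```python
# Hypothetical sketch: a requirements tree in which the same requirement can
# appear under several parents is folded into a graph with one node per
# requirement (the tree below and the merge rule are illustrative).
requirement_tree = {
    "System": ["Propulsion", "Avionics"],
    "Propulsion": ["REQ-7 mass budget"],
    "Avionics": ["REQ-7 mass budget", "REQ-12 fault detection"],
}

def tree_to_graph(tree):
    """Return an adjacency map where duplicated leaves become one shared node."""
    graph = {}
    for parent, children in tree.items():
        graph.setdefault(parent, set())
        for child in children:
            graph.setdefault(child, set())
            graph[parent].add(child)   # duplicates collapse because keys are shared
    return graph

graph = tree_to_graph(requirement_tree)
print(graph["Avionics"])   # REQ-7 is now the same node reached from both parents
```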

    Generating Requirement Dependency Graph Based on Class Dependency

    A set of software requirements is an important element of software development. Engineers recognize that requirements are interrelated: the interconnections between requirements indicate interdependences between them. These interdependences are crucial in decision-making processes of requirements engineering, such as requirement change management, release planning, and requirements management. Researchers have focused on visualizing dependencies between requirements, analyzing the impact of changes in software by using changes to UML class diagrams, and predicting bug occurrences based on dependencies between requirements. Previous studies assumed that the requirements dependency information was pre-built by requirements engineers during an earlier development process. This paper introduces a method that builds a requirements dependency model. The model is built from realization associations between requirements and classes in the system design, together with dependencies between the classes. The modeling process uses semantic similarity between the requirements and the classes: a class is said to have a realization association with a requirement if and only if their semantic similarity is higher than a certain threshold. The output of the method was compared with the output produced by human annotators, and the method's reliability was measured as the level of agreement between the method and the annotators using the kappa statistic. The preliminary result shows fair agreement (kappa = 0.37) between the method and the annotators when generating the requirements dependency graph.
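
    A minimal sketch of the pipeline described above is given below; simple token-overlap (Jaccard) similarity stands in for whatever semantic similarity measure the paper actually uses, and the requirements, classes, threshold and names are illustrative assumptions.

```python
# Sketch: link requirements to classes by similarity, then lift class
# dependencies to requirement dependencies (all data here is invented).
requirements = {
    "R1": "user can register an account",
    "R2": "system sends a confirmation email after registration",
}
classes = {
    "AccountService": "register account user password",
    "MailSender": "send confirmation email message",
}
class_dependencies = {("AccountService", "MailSender")}  # from the design model
THRESHOLD = 0.2

def similarity(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# Step 1: realization associations (requirement -> classes above the threshold).
realizes = {r: {c for c, text in classes.items()
                if similarity(rtext, text) > THRESHOLD}
            for r, rtext in requirements.items()}

# Step 2: R1 depends on R2 if a class realizing R1 depends on a class realizing R2.
req_dependencies = {(r1, r2)
                    for r1 in requirements for r2 in requirements if r1 != r2
                    for (c1, c2) in class_dependencies
                    if c1 in realizes[r1] and c2 in realizes[r2]}
print(realizes)
print(req_dependencies)   # {('R1', 'R2')}
```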

    Supporting Information Systems Analysis Through Conceptual Model Query – The Diagramed Model Query Language (DMQL)

    Analyzing conceptual models such as process models, data models, or organizational charts is useful for several purposes in information systems engineering (e.g., for business process improvement, compliance management, model-driven software development, and software alignment). To analyze conceptual models structurally and semantically, so-called model query languages have been put forth. Model query languages take a model pattern and conceptual models as input and return all subsections of the models that match this pattern. Existing model query languages typically focus on a single modeling language and/or application area (such as analysis of the execution semantics of process models), are restricted in their expressive power for representing model structures, and/or abstain from graphical pattern specification. Because these restrictions may hamper query languages from propagating into practice, we close this gap by proposing a modeling-language-spanning structural model query language based on flexible graph search that, hence, provides high structural expressive power. To address ease of use, it allows one to specify model queries using a diagram. In this paper, we present the syntax and semantics of the diagramed model query language (DMQL), a corresponding search algorithm, an implementation as a modeling tool prototype, and a performance evaluation.
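
    DMQL's own search algorithm is not reproduced here; the naive sketch below only illustrates the general notion of a structural model query, finding all assignments of pattern nodes to model nodes that preserve the pattern's edges, with the example model and pattern being assumptions.

```python
# Naive structural query in the spirit described above (not DMQL's algorithm):
# enumerate bindings of pattern nodes to model nodes and keep those that
# preserve every pattern edge.
from itertools import permutations

model_nodes = {"start", "approve", "pay", "reject"}
model_edges = {("start", "approve"), ("approve", "pay"), ("approve", "reject")}

pattern_nodes = ["A", "B"]          # query: any two nodes connected by an edge
pattern_edges = {("A", "B")}

matches = []
for assignment in permutations(model_nodes, len(pattern_nodes)):
    binding = dict(zip(pattern_nodes, assignment))
    if all((binding[u], binding[v]) in model_edges for u, v in pattern_edges):
        matches.append(binding)

print(matches)   # every edge of the model, as a binding of A and B
```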

    Migrating microservices to graph database

    Microservice architecture is a popular approach to structuring web backend services. Another emerging trend, after a period of hibernation, is utilizing modern graph database management systems for managing complex, richly connected data. The two approaches have rarely been used in tandem, as microservices emphasize modularization and decoupling of services, while graph data models favor data integration. In this study, literature on microservices and graph databases is reviewed and a synthesis between the two paradigms is presented. Based on the theoretical discussion, a software architecture combining the two elements is formulated and implemented using microservices serving content metadata at Yleisradio, the Finnish national broadcasting company. The architecture design follows the Design Science Research Process model. Finally, the renewed system is evaluated using quantitative and qualitative metrics. The performance of the system is measured using automated API queries and load tests, and the new system is compared to an earlier version based on a PostgreSQL database. The tests gave a slight indication that the renewed system performed better for complex queries, where a large number of relations were traversed, but worse in terms of throughput under heavy load. Based on these findings, a number of performance-enhancing optimizations to the system are introduced. Observations and perspectives are also gathered in a project retrospective session. It is concluded that the resulting architecture holds promise for managing complex, relation-rich data in a safe manner: the different domains of the knowledge graph are decoupled into distinct named graphs managed by different microservices.
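
    The thesis targets Yleisradio's metadata services; the minimal sketch below only illustrates the closing architectural point, that each microservice owns a distinct named graph inside a shared knowledge graph, and the service names, graph identifiers and triples are invented.

```python
# Minimal illustration: one shared knowledge graph partitioned into named
# graphs, each writable only by its owning microservice.
knowledge_graph = {}   # named graph identifier -> set of (subject, predicate, object)

owners = {"graph:programmes": "programme-service",
          "graph:subtitles": "subtitle-service"}

def write(service, named_graph, triple):
    """A service may only write into the named graph it owns."""
    if owners.get(named_graph) != service:
        raise PermissionError(f"{service} does not own {named_graph}")
    knowledge_graph.setdefault(named_graph, set()).add(triple)

write("programme-service", "graph:programmes",
      ("prog:123", "dc:title", "Evening News"))
print(knowledge_graph)
```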