
    A process model in platform independent and neutral formal representation for design engineering automation

    An engineering design process, as part of product development (PD), needs to satisfy ever-changing customer demands by striking a balance between time, cost and quality. To achieve faster lead times, improved quality and reduced PD costs for increased profits, automation methods have been developed with the help of virtual engineering. There are various methods of achieving Design Engineering Automation (DEA) with Computer-Aided (CAx) tools such as CAD/CAE/CAM, Product Lifecycle Management (PLM) and Knowledge Based Engineering (KBE). For example, Computer-Aided Design (CAD) tools enable Geometry Automation (GA), while PLM systems allow product knowledge to be shared and exchanged throughout the PD lifecycle. Traditional automation methods are specific to individual products, hard-coded, and bound by proprietary tool formats. Moreover, existing CAx tools and PLM systems offer bespoke islands of automation, in contrast to KBE. KBE as a design method captures complete design intent by including re-usable geometric and non-geometric product knowledge as well as engineering process knowledge for DEA, covering processes such as mechanical design, analysis and manufacturing. An extensive literature review identified a research gap: there is no generic and structured method of knowledge modelling, both informal and formal, of the mechanical design process with manufacturing knowledge (DFM/DFA) as part of model-based systems engineering (MBSE) for DEA with a KBE approach. In particular, there is no structured knowledge-modelling technique that provides a standardised way to use platform-independent, neutral formal standards for DEA with generative modelling of the mechanical product design process and DFM while preserving semantics. A neutral formal representation in a computer- or machine-understandable format enables open-standard usage. This thesis contributes to knowledge by addressing this gap in two steps:

    • In the first step, a coherent process model, GPM-DEA, is developed as part of MBSE for modelling mechanical design with manufacturing knowledge. It takes a hybrid approach, building on the strengths of existing modelling standards such as IDEF0, UML and SysML, with additional constructs defined by the author's metamodel. The structured process model is highly granular, with complex interdependencies such as activity, object, function and rule associations, and it captures the effect of the process model on the product at both the component and geometric-attribute levels.

    • In the second step, a method is provided to map the schema of the process model to equivalent platform-independent, neutral formal standards as an OWL/SWRL ontology, with system development in the Protégé tool. This enables machine interpretability with semantic clarity for DEA with generative modelling, by building queries and reasoning on a set of generic SWRL functions developed by the author.

    Model development has been performed with the aid of literature analysis and pilot use cases. Experimental verification with test use cases has confirmed that reasoning and querying over the formal axioms generate accurate results. Other key strengths are that the knowledge base is generic, scalable and extensible, and hence supports re-usability and wider design-space exploration. The generative modelling capability allows the model to generate activities and objects from the functional requirements of the mechanical design process with DFM/DFA, together with rules based on logic. Through an application programming interface, a platform-specific DEA system, such as a KBE tool, a CAD tool enabling GA, or a web page incorporating engineering knowledge for decision support, can consume the relevant part of the knowledge base.
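    The OWL/SWRL mechanism described above can be illustrated with a minimal sketch in Python using the owlready2 library and the Pellet reasoner. All class, property and individual names below are hypothetical stand-ins, not the actual GPM-DEA schema; the point is only how a generic SWRL rule lets a reasoner derive design activities from functional requirements.

        # Hypothetical sketch: names are illustrative, not the GPM-DEA ontology.
        # Requires: pip install owlready2, plus a Java runtime for Pellet.
        from owlready2 import *

        onto = get_ontology("http://example.org/gpm-dea-demo.owl")

        with onto:
            class FunctionalRequirement(Thing): pass
            class DesignActivity(Thing): pass
            class TriggeredActivity(DesignActivity): pass   # inferred by the rule
            class requiresActivity(FunctionalRequirement >> DesignActivity): pass

            # Generic SWRL rule: any activity linked to a stated functional
            # requirement is classified as a triggered activity.
            rule = Imp()
            rule.set_as_rule(
                "FunctionalRequirement(?r), requiresActivity(?r, ?a) "
                "-> TriggeredActivity(?a)"
            )

            req = FunctionalRequirement("transmit_torque")
            act = DesignActivity("size_shaft_diameter")
            req.requiresActivity.append(act)

        sync_reasoner_pellet()                       # fires the SWRL rule
        print(onto.TriggeredActivity.instances())    # [...size_shaft_diameter]

    A platform-specific consumer, such as the KBE or CAD tool the abstract mentions, would then query such inferred individuals through an API rather than re-encoding the design logic itself.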

    MDA-Based Reverse Engineering


    An analysis of safety evidence management with the Structured Assurance Case Metamodel

    SACM (Structured Assurance Case Metamodel) is a standard for assurance case specification and exchange. It consists of an argumentation metamodel and an evidence metamodel for justifying that a system satisfies certain requirements. For the assurance of safety-critical systems, SACM can be used to manage safety evidence and to specify safety cases. The standard is a promising initiative towards harmonizing and improving system assurance practices, but its suitability for safety evidence management needs to be studied further. To this end, this paper studies how SACM 1.1 supports this activity, judged against requirements from industry and from prior work. We have analysed the notion of evidence in SACM, its evidence lifecycle, the classes and associations of the evidence metamodel, and the link between this metamodel and the argumentation one. As a result, we have identified several improvement opportunities and extension possibilities in SACM.
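    As a rough illustration of the kind of structure an evidence metamodel prescribes, the following Python sketch models evidence items with a lifecycle and their link to argumentation elements. These dataclasses are a deliberate simplification invented for this listing; they are not the actual SACM 1.1 classes or associations.

        # Illustrative simplification only; not the real SACM 1.1 metamodel.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class LifecycleEvent:
            kind: str          # e.g. "collected", "appraised", "revoked"
            timestamp: str

        @dataclass
        class EvidenceItem:
            identifier: str
            description: str
            events: List[LifecycleEvent] = field(default_factory=list)

        @dataclass
        class Claim:           # stands in for the argumentation side
            statement: str
            supported_by: List[EvidenceItem] = field(default_factory=list)

        # A safety claim traced to one managed piece of evidence.
        report = EvidenceItem("EV-042", "Unit test report for the braking module")
        report.events.append(LifecycleEvent("collected", "2014-03-01"))
        claim = Claim("The braking module meets its timing requirement",
                      supported_by=[report])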

    Semantic Model Alignment for Business Process Integration

    Business process models describe an enterprise's way of conducting business and thereby form the basis for shaping the organization and engineering the appropriate supporting, or even enabling, IT. A major task in working with models is their analysis and comparison for the purpose of aligning them. Models can differ semantically not only in the modeling languages used but, even more so, in how the natural language labeling the model elements has been applied, so correctly identifying the intended meaning of a legacy model is a non-trivial task that so far has only been solved by humans. Particularly during reorganizations, the set-up of B2B collaborations, or mergers and acquisitions, the semantic analysis of models of different origin that need to be consolidated is a manual effort that is not only tedious and error-prone but also time-consuming, costly and often repetitive. To facilitate the automation of this task by means of IT, this thesis presents the new method of Semantic Model Alignment. Its application makes it possible to extract and formalize the semantics of models, relating them on the basis of the modeling language used and determining similarities on the basis of the natural language used in model element labels. The resulting alignment supports model-based semantic business process integration. The research followed a design-science oriented approach, and the method was created together with all its enabling artifacts. These results were published as the research progressed and are presented in this thesis through a selection of peer-reviewed publications that comprehensively describe the various aspects.
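    The label-matching part of such an alignment can be sketched in a few lines of Python. The token normalisation and Jaccard similarity below are common baseline choices assumed for this sketch, not the thesis's actual method, which additionally exploits the semantics of the modeling language.

        # Baseline sketch: align model elements by label similarity.
        import re

        STOP = {"the", "a", "an", "of", "to", "and", "for"}

        def tokens(label):
            """Lowercase, split on non-letters, drop stop words."""
            return frozenset(t for t in re.split(r"[^a-z]+", label.lower())
                             if t and t not in STOP)

        def similarity(a, b):
            """Jaccard similarity of the two label token sets."""
            ta, tb = tokens(a), tokens(b)
            return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

        def align(elements_a, elements_b, threshold=0.5):
            """Pair up elements whose labels are sufficiently similar."""
            return [(x, y) for x in elements_a for y in elements_b
                    if similarity(x, y) >= threshold]

        print(align(["Check order", "Ship goods"],
                    ["Order check", "Goods shipping"]))
        # [('Check order', 'Order check')]

    Note that "Ship goods" and "Goods shipping" fail to match: morphological variation defeats naive token matching, which is exactly why a richer semantic treatment of labels is needed.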

    Quality of process modeling using BPMN: a model-driven approach

    Dissertation for obtaining the degree of Doctor in Informatics Engineering. Context: the BPMN 2.0 specification contains the rules regarding the correct usage of the language's constructs, and practitioners have also proposed best practices for producing better BPMN models. However, those rules are expressed in natural language, sometimes yielding ambiguous interpretations and, therefore, flaws in the produced BPMN models. Objective: ensuring the correctness of BPMN models is critical for the automation of processes. Errors in BPMN model specifications should therefore be detected and corrected at design time, since faults detected at later stages of process development can be more costly and harder to correct; we thus need to assess the quality of BPMN models in a rigorous and systematic way. Method: we follow a model-driven approach for the formalization and empirical validation of BPMN well-formedness rules and BPMN measures for enhancing the quality of BPMN models. Results: rule mining of the BPMN specification, as well as of recently published BPMN works, allowed the gathering of more than a hundred BPMN well-formedness and best-practice rules. Furthermore, we derived a set of BPMN measures aimed at providing process modelers with information regarding the correctness of BPMN models. Both the BPMN rules and the BPMN measures were empirically validated on samples of BPMN models. Limitations: this work does not cover control-flow formal properties in BPMN models, since they have been extensively discussed in other process modeling research. Conclusion: we intend to contribute to improving BPMN modeling tools through the formalization of well-formedness rules and BPMN measures to be incorporated in those tools, in order to enhance the quality of process modeling outcomes.
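    To show what formalizing such a rule buys, here is a small Python sketch that makes one standard BPMN 2.0 well-formedness constraint executable: a start event must have no incoming sequence flows. The dictionary-based model representation is an assumption of the sketch; real tooling would operate on the BPMN 2.0 metamodel.

        # One BPMN well-formedness rule made executable over a toy model.
        def check_start_events(nodes, flows):
            """Return ids of start events that wrongly have incoming flows."""
            starts = {nid for nid, kind in nodes.items() if kind == "startEvent"}
            return sorted({tgt for _, tgt in flows if tgt in starts})

        nodes = {"s1": "startEvent", "t1": "task", "e1": "endEvent"}
        flows = [("s1", "t1"), ("t1", "e1"), ("e1", "s1")]  # last flow is illegal

        print(check_start_events(nodes, flows))  # ['s1'] -> rule violation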

    Engineering Agile Big-Data Systems

    To be effective, data-intensive systems require extensive ongoing customisation to reflect changing user requirements, organisational policies, and the structure and interpretation of the data they hold. Manual customisation is expensive, time-consuming, and error-prone. In large complex systems, the value of the data can be such that exhaustive testing is necessary before any new feature can be added to the existing design. In most cases, the precise details of requirements, policies and data will change during the lifetime of the system, forcing a choice between expensive modification and continued operation with an inefficient design. Engineering Agile Big-Data Systems outlines an approach to dealing with these problems in software and data engineering, describing a methodology for aligning these processes throughout product lifecycles. It discusses tools which can be used to achieve these goals, and, in a number of case studies, shows how the tools and methodology have been used to improve a variety of academic and business systems.

    Traceability of Requirements and Software Architecture for Change Management

    Software systems are becoming ever more complex. Their requirements change continuously, and new requirements emerge frequently. New and/or modified requirements are integrated with the existing ones, and adaptations are made to the architecture and source code of the system. This process of integrating new or modified requirements and adapting the software system is called change management. The size and complexity of software systems make change management costly and time-consuming. To reduce the cost of changes, it is important to apply change management as early as possible in the software development cycle. Requirements traceability is considered crucial in change management for establishing and maintaining consistency between software development artifacts. It is the ability to link requirements back to stakeholders' rationales and forward to corresponding design artifacts, code, and test cases. When changes to the requirements of a software system are proposed, the impact of these changes on other requirements, design elements and source code should be traced in order to determine which parts of the system have to change. Determining the impact of changes on development artifacts is called change impact analysis, and it is applicable to many artifacts, such as requirements documents, detailed designs, source code and test cases. Our focus is change impact analysis in requirements and software architecture.

    The need for change impact analysis is observed in both requirements and software architecture. When a change is introduced to a requirement, the requirements engineer needs to find out whether any requirement related to the changed one is impacted. After determining the impacted requirements, the software architect needs to identify the impacted architectural elements by tracing the changed requirements to the software architecture. Manually tracing impacted requirements and architectural elements from changed requirements is hard, expensive and error-prone. There are tools and approaches that automate change impact analysis, such as IBM Rational RequisitePro and DOORS, but in most of them traces are simple relations whose semantics is not considered. Because of this lack of semantics, all requirements and architectural elements directly or indirectly traced from a changed requirement become candidate impacts, and the requirements engineer has to inspect all of them to identify the actual changes.

    In this thesis we address the following problems that arise in performing change impact analysis for requirements and software architecture. Explosion of impacts in requirements after a change in requirements: in practice, requirements documents are often textual artifacts with implicit structure, most relations among requirements are not given explicitly, and most tools and approaches lack a precise definition of requirements relations. Without the semantics of these relations, change impact analysis may produce a high number of false-positive and false-negative impacted requirements, a requirements engineer may have to analyze all requirements in the document for a single change, and the actual impact of a change may be missed. Manual, expensive and error-prone trace establishment: considerable research has been devoted to relating requirements and design artifacts to source code, but less attention has been paid to relating Requirements (R) with Architecture (A) using well-defined trace semantics. Designing an architecture based on requirements is a problem-solving process that relies on human experience and creativity and is mainly manual, so the software architect may need to assign traces between R&A by hand; manual trace assignment is time-consuming, expensive and error-prone, and the assigned traces may be incomplete or invalid. Explosion of impacts in software architecture after a change in requirements: due to the lack of semantics of traces between R&A, change impact analysis may produce a high number of false-positive and false-negative impacted architectural elements, and a software architect may have to analyze all architectural elements for a single requirements change.

    In this thesis we propose an approach that reduces the explosion of impacts in R&A. The approach employs the semantic information of traces and is supported by tools. We consider that every relation between software development artifacts, or between elements in these artifacts, can play the role of a trace for a given traceability purpose such as change impact analysis. We choose Model Driven Engineering (MDE) as the solution platform for our approach: MDE provides a uniform treatment of software artifacts (e.g. requirements documents, software design and test documents) as models, and it enables the use of different formalisms to reason about development artifacts described as models. To give requirements documents an explicit structure and to treat requirements, architecture and traces in a uniform way, we use metamodels and models with formally defined semantics. The thesis provides the following contributions:

    • A modeling language for the definition of requirements models with formal semantics. The language is defined according to MDE principles by means of a metamodel and is based on a survey of the most commonly found requirement types and relation types. With this language, the requirements engineer can explicitly specify the requirements and the relations among them. The semantics of these entities is given in First-Order Logic (FOL) and enables two activities: inferring new relations among requirements from the initial set of relations, and automatically checking requirements models for consistency of the relations. The Tool for Requirements Inferencing and Consistency Checking (TRIC) was developed to support both activities. The defined semantics is also used in a technique for change impact analysis in requirements models.

    • A change impact analysis technique for requirements that uses the semantics of requirements relations and of requirements change types. The technique aims at solving the problem of the explosion of impacts in requirements when the semantics of requirements relations is missing. A classification of requirements changes, based on the structure of a textual requirement, is given and formalized, with the semantics of the change types expressed in FOL. Three activities are supported: the requirements engineer proposes changes according to the change classification before implementing them; the engineer then identifies the propagation of the changes to related requirements, where the change alternatives in the propagation are determined from the semantics of change types and requirements relations; finally, possible contradicting changes are identified. TRIC is extended to support these activities: the tool automatically determines change propagation paths, checks the consistency of the changes, and suggests alternatives for implementing the change.

    • A technique for establishing traces between R&A by using architecture verification and the semantics of traces. We use a trace metamodel with commonly used trace types whose semantics is formalized in FOL. Software architectures are expressed in the Architecture Analysis and Design Language (AADL), which is given a formal semantics in Maude; the Maude tool set allows simulation and verification of architectures. Traces are established in two ways. The first uses architecture verification: a given requirement is reformulated as a property in terms of the architecture; the architecture is executed, producing a state space that simulates the behavior of the system at the architectural level; the property derived from the requirement is checked by the Maude model checker; and traces are generated between the requirement and the architectural components used in verifying the property. The second uses the requirements relations together with the semantics of traces: requirements relations are reflected in the connections among the traced architectural elements, so new traces are inferred from existing ones. We use the semantics of requirements relations and traces both to generate/validate traces and to generate/validate requirements relations. Tool support provides (1) generation/validation of traces by using requirements relations and/or verification of the architecture, and (2) generation/validation of requirements relations by using traces.

    • A change impact analysis technique for software architecture that uses architecture verification and the semantics of traces between R&A. After a requirements change, the software architect needs to identify the impacted architectural elements. The technique is semi-automatic, requires the participation of the software architect, and has two parts. The first part identifies the architectural elements that implement the system properties to which the proposed requirements changes are introduced: with the formal semantics of requirements relations and traces, we identify which parts of the software architecture are impacted by a proposed change in requirements, and TRIC has been extended to determine candidate impacted architectural elements. The second part proposes possible changes to the software architecture when it does not satisfy the new and/or changed requirements. It is based on architecture verification: if the requirements are not satisfied, the verification produces a counterexample, which is used together with a classification of architectural changes to propose changes to the architecture. These changes produce a new version of the architecture that possibly satisfies the new or changed requirements.
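    The benefit of typed, semantics-bearing relations over plain traces can be shown with a small Python sketch. The relation types, the propagation policy and the requirement ids below are invented for illustration; TRIC's actual semantics is defined in first-order logic.

        # Toy impact propagation over typed requirements relations.
        from collections import deque

        RELS = [                       # (source, relation, target)
            ("R1", "refines", "R2"),
            ("R2", "requires", "R3"),
            ("R3", "refines", "R4"),
            ("R5", "requires", "R1"),
        ]

        # Illustrative policy: deleting a requirement impacts whatever
        # requires or refines it, so those inverse edges are followed.
        PROPAGATES = {"delete": {"requires", "refines"}}

        def impacted(changed, change_type):
            follow = PROPAGATES[change_type]
            hits, queue = set(), deque([changed])
            while queue:
                cur = queue.popleft()
                for src, rel, tgt in RELS:
                    if tgt == cur and rel in follow and src not in hits:
                        hits.add(src)
                        queue.append(src)
            return sorted(hits)

        print(impacted("R2", "delete"))  # ['R1', 'R5'], not all of R1..R5

    Because only the relation types that a given change type actually affects are followed, the candidate impact set stays small, which is precisely the "explosion of impacts" the thesis targets.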

    Automated Testing: Requirements Propagation via Model Transformation in Embedded Software

    Testing is the most common activity for validating software systems and plays a key role in the software development process. In general, the testing phase takes around 40-70% of the effort, time and cost of a project. The area has been researched for a long time, and while many researchers have found ways to reduce the time and cost of the testing process, a number of important related issues, such as generating test cases from UCM scenarios and validating them, still need to be researched. As a result, ensuring that embedded software behaves correctly is non-trivial, especially when testing with limited resources and seeking compliance with safety-critical software standards. It thus becomes imperative to adopt an approach or methodology based on tools and best engineering practices to improve the testing process. This research addresses the problem of testing embedded software with limited resources as follows. First, a reverse-engineering technique is applied to legacy software tests, aiming to discover a feasible transformation from the test layer to the test-requirement layer. The feasibility of transforming the legacy test cases into an abstract model is shown, along with a forward-engineering process to regenerate the test cases in a selected test language. Second, a new model-driven testing technique based on different granularity levels (MDTGL) is introduced to generate test cases. The new approach uses models to manage the complexity of the system under test (SUT); automatic model transformation is applied to automate test-case development, a tedious, error-prone, and recurrent software development task. Third, the model transformations that automate the development of test cases in the MDTGL methodology are validated against an industrial testing process using an embedded software specification. To enable the validation, a set of timed and functional requirements is introduced, and two case studies are run on an embedded system to generate test cases. The effectiveness of the two testing approaches is determined and contrasted with respect to the generation of test cases and the correctness of the generated workflow. Compared to several existing techniques, the new approach generated useful and effective test cases with far fewer resources in terms of time and labor. Finally, to enhance the applicability of MDTGL, the methodology is extended with a trace model that records traceability links among the generated testing artifacts. These traceability links, often mandated by software development standards, enable support for traceability visualization, model-based coverage analysis and result evaluation.
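    The final model-to-text step of such a transformation chain can be sketched briefly in Python. The scenario-model structure and the emitted test style below are assumptions made for illustration; they are not the MDTGL notation or its generated test language.

        # Sketch: derive a test-case stub from an abstract scenario model.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Step:
            stimulus: str      # input applied to the system under test
            expected: str      # observable response to assert

        @dataclass
        class Scenario:
            name: str
            steps: List[Step]

        def to_test_case(s):
            """Emit a unittest-style test function from a scenario model."""
            body = "\n".join(
                f"    sut.apply({st.stimulus!r})\n"
                f"    assert sut.observe() == {st.expected!r}"
                for st in s.steps
            )
            return f"def test_{s.name}(sut):\n{body}"

        scenario = Scenario("brake_engaged",
                            [Step("brake_pedal=1", "brake_light=on")])
        print(to_test_case(scenario))

    Traceability, as the abstract notes, would then be recorded by keeping a link from each generated test function back to the scenario and requirement it was derived from.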