
    Constraint-Based Automatic SBST Generation for RISC-V Processor Families

    Software-Based Self-Tests (SBST) allow at-speed, native online testing of processors by running software programs on the processor core, requiring no Design-for-Testability (DfT) infrastructure. Creating such SBST programs often requires time-consuming manual labour that is expensive and demands in-depth knowledge of the processor's architecture to target hard-to-test faults. In contrast, encoding the SBST generation task as a Bounded Model Checking (BMC) problem allows sophisticated, state-of-the-art BMC solvers to generate an SBST automatically. Constraints for the BMC problem are encoded in a circuit called the Validity Checker Module (VCM) and applied during SBST generation. In this paper, we present a VCM architecture and a constraint set for building SBSTs that make minimal assumptions about the firmware and target hard-to-test faults in the ALU and register file of multiple scalar, in-order RISC-V processor families. The VCM architecture consists of a processor-specific mapping layer and a generic constraint set connected via a well-defined interface. The generic constraint set enforces the desired SBST behaviour, including control over the processor's pipeline state, memory accesses (and with that the executed instructions), register state, and fault propagation. Using a generic constraint set allows rapid SBST generation for new RISC-V processor families while keeping the generic constraints untouched. Lastly, we evaluate the approach on two RISC-V processor families, the DarkRISCV and a proprietary industrial core, showing the portability and strength of the approach and how rapidly new processors can be targeted.
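
    The split between a processor-specific mapping layer and a generic constraint set can be pictured with a short sketch. The sketch below is purely illustrative: the actual VCM is a hardware circuit applied inside a BMC solver, and all names here (CoreMapping, GenericConstraints, the signal accessors, the test-region bounds) are hypothetical.

from abc import ABC, abstractmethod

class CoreMapping(ABC):
    """Hypothetical processor-specific layer: adapts core-internal signals
    to the fixed interface expected by the generic constraints."""

    @abstractmethod
    def instruction_is_valid(self, state: dict) -> bool:
        """True if the fetched instruction is well-formed for this core."""

    @abstractmethod
    def memory_access_in_range(self, state: dict, lo: int, hi: int) -> bool:
        """True if any memory access in `state` stays inside [lo, hi)."""

class GenericConstraints:
    """Core-independent constraint set, written only against CoreMapping."""

    def __init__(self, mapping: CoreMapping, test_lo: int, test_hi: int):
        self.mapping = mapping
        self.test_lo, self.test_hi = test_lo, test_hi

    def holds(self, state: dict) -> bool:
        # Restrict the solver to well-formed instructions and to memory
        # accesses that stay inside the dedicated test region.
        return (self.mapping.instruction_is_valid(state)
                and self.mapping.memory_access_in_range(
                        state, self.test_lo, self.test_hi))

    In this picture, retargeting a new RISC-V family amounts to writing a new CoreMapping subclass while GenericConstraints stays untouched, which mirrors the portability argument of the paper.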

    Traceability of Requirements and Software Architecture for Change Management

    Present-day software systems are becoming increasingly complex. Their requirements change continuously, and new requirements emerge frequently. New and modified requirements are integrated with the existing ones, and adaptations are made to the architecture and source code of the system. This process of integrating new or modified requirements and adapting the software system is called change management. The size and complexity of software systems make change management costly and time-consuming. To reduce the cost of changes, it is important to apply change management as early as possible in the software development cycle. Requirements traceability is considered crucial in change management for establishing and maintaining consistency between software development artifacts. It is the ability to link requirements back to stakeholders' rationales and forward to corresponding design artifacts, code, and test cases. When changes to the requirements of a software system are proposed, the impact of these changes on other requirements, design elements, and source code should be traced in order to determine the parts of the system that must be changed. Determining the impact of changes on development artifacts is called change impact analysis. It is applicable to many artifacts, such as requirements documents, detailed designs, source code, and test cases. Our focus is change impact analysis in requirements and software architecture.

    The need for change impact analysis is observed in both requirements and software architecture. When a change is introduced to a requirement, the requirements engineer needs to find out whether any related requirement is impacted. After determining the impacted requirements, the software architect needs to identify the impacted architectural elements by tracing the changed requirements to the software architecture. Manually tracing impacted requirements and architectural elements from changed requirements is hard, expensive, and error prone. Tools such as IBM Rational RequisitePro and DOORS automate change impact analysis, but in most of these tools traces are simple relations whose semantics is not considered. Because of this lack of semantics, all requirements and architectural elements directly or indirectly traced from a changed requirement become candidate impacts, and the requirements engineer has to inspect all of them to identify the actual changes, if any.

    In this thesis we address the following problems that arise in performing change impact analysis for requirements and software architecture. The first problem is the explosion of impacts in requirements after a change in requirements. In practice, requirements documents are often textual artifacts with an implicit structure, and most relations among requirements are not given explicitly; most tools and approaches lack a precise definition of these relations. Due to this lack of semantics of requirements relations, change impact analysis may produce a high number of false positive and false negative impacted requirements, so a requirements engineer may have to analyze all requirements in the document for a single change, and the actual impact of the change may be overlooked. The second problem is manual, expensive, and error-prone trace establishment. Considerable research has been devoted to relating requirements and design artifacts to source code, but less attention has been paid to relating Requirements (R) with Architecture (A) using well-defined trace semantics. Designing an architecture based on requirements is a problem-solving process that relies on human experience and creativity and is mainly manual, so the software architect may need to assign traces between R&A by hand. Manual trace assignment is time-consuming, expensive, and error prone, and the assigned traces may be incomplete or invalid. The third problem is the explosion of impacts in software architecture after a change in requirements. Due to the lack of semantics of traces between R&A, change impact analysis may produce a high number of false positive and false negative impacted architectural elements, and a software architect may have to analyze all architectural elements for a single requirements change.

    In this thesis we propose an approach that reduces the explosion of impacts in R&A. The approach employs semantic information of traces and is supported by tools. We consider that every relation between software development artifacts, or between elements in these artifacts, can play the role of a trace for a certain traceability purpose such as change impact analysis. We choose Model Driven Engineering (MDE) as the solution platform for our approach. MDE provides a uniform treatment of software artifacts (e.g. requirements documents, software designs, and test documents) as models, and it enables using different formalisms to reason about development artifacts described as models. To give an explicit structure to requirements documents and to treat requirements, architecture, and traces uniformly, we use metamodels and models with formally defined semantics. The thesis provides the following contributions.

    First, a modeling language for the definition of requirements models with formal semantics. The language is defined according to MDE principles by means of a metamodel and is based on a survey of the most commonly found requirement types and relation types. With this language, the requirements engineer can explicitly specify requirements and the relations among them. The semantics of these entities is given in First-Order Logic (FOL) and enables two activities: new relations among requirements can be inferred from the initial set of relations, and requirements models can be automatically checked for consistency of the relations. The Tool for Requirements Inferencing and Consistency Checking (TRIC) was developed to support both activities, and the defined semantics is used in a technique for change impact analysis in requirements models.

    Second, a change impact analysis technique for requirements that uses the semantics of requirements relations and requirements change types. The technique aims to solve the problem of the explosion of impacts in requirements when the semantics of requirements relations is missing. A classification of requirements changes based on the structure of a textual requirement is given and formalized, with the semantics of the change types expressed in FOL. We support three activities for impact analysis. First, the requirements engineer proposes changes according to the change classification before implementing the actual changes. Second, the requirements engineer identifies the propagation of the changes to related requirements; the change alternatives in the propagation are determined from the semantics of change types and requirements relations. Third, possible contradicting changes are identified. We extend TRIC to support these activities: the tool automatically determines the change propagation paths, checks the consistency of the changes, and suggests alternatives for implementing the change.

    Third, a technique for establishing traces between R&A by using architecture verification and the semantics of traces. Since manually establishing traces between R&A is hard, expensive, and error prone, we present an approach that establishes traces by using architecture verification together with the semantics of requirements relations and traces. We use a trace metamodel with commonly used trace types, with the semantics of traces formalized in FOL. Software architectures are expressed in the Architecture Analysis and Design Language (AADL), which is given a formal semantics in Maude; the Maude tool set allows simulation and verification of architectures. The first way to establish traces is architecture verification: a given requirement is reformulated as a property in terms of the architecture, the architecture is executed to produce a state space that simulates the behavior of the system at the architectural level, and the property derived from the requirement is checked by the Maude model checker. Traces are then generated between the requirement and the architectural components used in the verification of the property. The second way to establish traces uses requirements relations together with the semantics of traces: requirements relations are reflected in the connections among the traced architectural elements, so new traces can be inferred from existing ones. We use the semantics of requirements relations and traces both to generate and validate traces and to generate and validate requirements relations. Tool support provides (1) generation and validation of traces by using requirements relations and/or architecture verification, and (2) generation and validation of requirements relations by using traces.

    Fourth, a change impact analysis technique for software architecture that uses architecture verification and the semantics of traces between R&A. The software architect needs to identify the impacted architectural elements after a requirements change. The technique is semi-automatic and requires the participation of the software architect. It has two parts. The first part identifies the architectural elements that implement the system properties to which the proposed requirements changes are introduced: with the formal semantics of requirements relations and traces, we identify which parts of the software architecture are impacted by a proposed change in requirements, and we have extended TRIC to determine candidate impacted architectural elements. The second part proposes possible changes to the software architecture when it does not satisfy the new or changed requirements. This part is based on architecture verification: if the requirements are not satisfied, verification produces a counterexample, which is used together with a classification of architectural changes to propose changes to the software architecture. These changes produce a new version of the architecture that possibly satisfies the new or changed requirements.
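
    The idea of inferring impact from the semantics of requirements relations can be illustrated with a minimal sketch. The relation vocabulary, the choice of which relation types propagate a change, and all identifiers below are simplifying assumptions for illustration, not the thesis's actual FOL semantics or change-type classification.

from collections import deque

# Typed relations (source, relation, target); the vocabulary is assumed.
RELATIONS = [
    ("R1", "refines", "R2"),
    ("R2", "requires", "R3"),
    ("R4", "conflicts", "R2"),
]

# Assumption: only these relation types propagate a change from target to source.
PROPAGATING = {"refines", "requires"}

def impacted(changed: str) -> set:
    """Requirements reachable from `changed` along propagating relations."""
    seen = set()
    queue = deque([changed])
    while queue:
        current = queue.popleft()
        for src, rel, dst in RELATIONS:
            if rel in PROPAGATING and dst == current and src not in seen:
                seen.add(src)
                queue.append(src)
    return seen

print(impacted("R3"))  # {'R1', 'R2'}: R2 requires R3, and R1 refines R2

    The point of the sketch is that typed, semantics-bearing relations prune the candidate set: the conflicts relation from R4 is not followed, whereas a tool with untyped traces would flag R4 as a candidate impact as well.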

    Software network analyzer for computer network performance measurement planning over heterogeneous services in higher educational institutes

    In the 21st century, the convergence of technologies and services in heterogeneous environments has given rise to multiple traffic types. This scenario affects the computer networks that support learning systems in higher education institutes. Deploying various services produces different types of content and quality, so higher education institutes need a sound computer network infrastructure to support them. A capable network requires i) higher bandwidth; ii) a proper network design; and iii) high-performance communication devices and servers. At present, however, there is no software that helps network administrators plan and determine the network's ability to support services when a new service is introduced. The current approach is to plan network performance by prediction and to measure the actual network with hardware or software network analyzers. This approach suffers from several problems: predictions made without software support are often inaccurate, and most hardware/software network analyzers on a real network are limited by the available network interfaces. To overcome these problems, network administrators need software that can both plan and measure additional network resources. This study therefore presents a novel approach to measuring and planning network performance management in a heterogeneous environment. The main objectives of this research are to i) identify the approaches used and problems occurring in Malaysian higher education institutes; ii) create a simulation model able to plan and measure network performance for various services; iii) develop a software network analyzer based on the simulation model design; and iv) evaluate the simulation model and the developed software against a real network. These objectives are achieved by i) conducting a survey; ii) selecting appropriate mathematical formulas to create the simulation model; iii) selecting an appropriate modeling application for the software development; and iv) applying verification and validation techniques to the simulation model and evaluating the software network analyzer. The survey results show that most network administrators use hardware/software network analyzers to measure network performance during the operational phase. Undersized bandwidth capacity leads to high network utilization, which can cause network congestion and service failures in higher education institutes. We create suitable models to evaluate network performance using Little's Law and queuing theory, so that the model behaves similarly to a hardware/software network analyzer. To ensure accurate results from the simulation model, we verify and validate it against data from lab experiments and a real network environment. The software network analyzer is developed from the simulation model architecture and evaluated using a qualitative technique. The resulting prototype provides a good approximation of the functionality observed in a real network environment and approximates performance within a minimal error rate. It can significantly enhance the analysis of computer network performance and the proposal of solutions. Future work is to extend the software network analyzer to plan, suggest, and analyze computer network performance over wireless transmission (WLAN, WWAN, and WiMAX) and the IPv6 protocol.

    We investigate how preparation and planning phases can be applied in a heterogeneous environment to better utilize network resources. The development of our software network analyzer prototype is modeled on the Fluke OptiView Network Analyzer. Before developing any software prototype, the following criteria should be defined: i) establish the prototype objectives; ii) define the prototype functionality; iii) develop the prototype; and iv) evaluate the prototype. The prototype consists of two phases: analyzing computer network performance under optimum conditions and under non-optimum conditions. It was developed to measure and predict network activity offline. We use a qualitative technique to assess whether the prototype is able to plan, propose, and analyze computer network performance. The evaluation is based on focus groups at the University of Kuala Lumpur (UniKL). Six evaluators were selected to complete the survey and interview tasks: three with experience in the industrial sector only, and three with experience in both the academic and industrial sectors. All evaluators have between 6 and 13 years of experience in network management. Each evaluator completed the following tasks: an acceptance test, a performance test, a loading test, a network responsiveness test, and a repetition test. A common strategy is to design, test, and evaluate, and then modify the design based on analysis of the prototype.
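
    The kind of model the abstract refers to can be sketched with textbook queuing theory. The following minimal example, assuming a single M/M/1 queue with hypothetical arrival and service rates, applies Little's Law (L = lambda * W) to derive the usual steady-state metrics; the thesis's actual simulation model is not described at this level of detail.

def mm1_metrics(lam: float, mu: float) -> dict:
    """Steady-state M/M/1 metrics; requires lam < mu for a stable queue."""
    if lam >= mu:
        raise ValueError("unstable: arrival rate must be below service rate")
    rho = lam / mu              # link/server utilization
    L = rho / (1 - rho)         # mean number of packets in the system
    W = L / lam                 # mean time in system (Little's Law: L = lam * W)
    Wq = W - 1 / mu             # mean waiting time in the queue
    Lq = lam * Wq               # mean queue length (Little's Law again)
    return {"utilization": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

# e.g. 800 packets/s offered to a link that can serve 1000 packets/s
for name, value in mm1_metrics(800, 1000).items():
    print(f"{name}: {value:.4f}")

    Running the example with these rates gives a utilization of 0.8 and a mean system time of 5 ms, illustrating how undersized capacity drives up utilization and delay, which is exactly the congestion effect the survey results describe.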

    Towards a Tool-based Development Methodology for Pervasive Computing Applications

    Despite much progress, developing a pervasive computing application remains challenging because of a lack of conceptual frameworks and supporting tools. The challenge involves coping with heterogeneous devices, overcoming the intricacies of distributed systems technologies, working out an architecture for the application, encoding it in a program, writing specific code to test the application, and finally deploying it. This paper presents a design language and a tool suite covering the development life-cycle of a pervasive computing application. The design language makes it possible to define a taxonomy of area-specific building blocks, abstracting over their heterogeneity. It also includes a layer for defining the architecture of an application, following an architectural pattern commonly used in the pervasive computing domain. Our underlying methodology assigns roles to the stakeholders, providing a separation of concerns. Our tool suite includes a compiler that takes design artifacts written in our language as input and generates a programming framework supporting the subsequent development stages, namely implementation, testing, and deployment. Our methodology has been applied to a wide spectrum of areas. Based on these experiments, we assess our approach against three criteria: expressiveness, usability, and productivity.
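
    The generated-framework idea can be pictured roughly as follows. This sketch only mirrors the outline of the approach: a compiler would emit skeletons from the taxonomy and architecture descriptions, and developers fill in the abstract methods. All class and method names here are hypothetical, not the paper's actual language or generated API.

from abc import ABC, abstractmethod

class TemperatureSensor(ABC):
    """Skeleton a compiler might emit for a taxonomy entry."""

    @abstractmethod
    def get_temperature(self) -> float:
        """Developer-supplied access to the concrete device."""

class RoomRegulator(ABC):
    """Skeleton for an application component wired to the sensor."""

    def __init__(self, sensor: TemperatureSensor):
        self.sensor = sensor

    @abstractmethod
    def on_temperature(self, value: float) -> None:
        """Developer-supplied application logic."""

    def poll(self) -> None:
        # Generated plumbing: fetch a reading and dispatch it to user code.
        self.on_temperature(self.sensor.get_temperature())

    The separation of concerns described in the paper shows up here as the split between generated plumbing (poll) and developer-supplied logic (the abstract methods), which also gives a natural seam for substituting test doubles for real devices.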

    Architecting specifications for test case generation

    The Specification and Description Language (SDL), together with its associated tool sets, can be used to generate Tree and Tabular Combined Notation (TTCN) test cases. Surprisingly, little documentation exists on the optimal way to specify systems so that they are best suited to test generation. This paper elaborates on the different tool-supported approaches that can be taken for test case generation and highlights their advantages and disadvantages. A rule-based SDL specification style is then presented that facilitates the automatic generation of tests.

    Boundary Objects and their Use in Agile Systems Engineering

    Agile methods are increasingly being introduced in automotive companies in an attempt to make system development more efficient and flexible. The adoption of agile practices influences communication between stakeholders, but also makes companies rethink the management of artifacts and documentation such as requirements, safety compliance documents, and architecture models. Practitioners aim to reduce irrelevant documentation, but face a lack of guidance in determining which artifacts are needed and how they should be managed. This paper presents artifacts, challenges, guidelines, and practices for the continuous management of systems engineering artifacts in the automotive domain, based on a theoretical and empirical understanding of the topic. In collaboration with 53 practitioners from six automotive companies, we conducted a design-science study involving interviews, a questionnaire, focus groups, and practical data analysis of a systems engineering tool. The guidelines suggest distinguishing between artifacts that are shared among different actors in a company (boundary objects) and those that are used within a team (locally relevant artifacts). We propose an analysis approach to identify boundary objects and three practices for managing systems engineering artifacts in industry.

    Experiences modelling and using object-oriented telecommunication service frameworks in SDL

    This paper describes experiences in using SDL and its associated tools to create telecommunication services by producing and specialising object-oriented frameworks. The chosen approach recognises the need for the rapid creation of validated telecommunication services and introduces two stages to service creation: first, a software expert produces a service framework; second, a telecommunications 'business consultant' specialises the framework by means of graphical tools to rapidly produce services. The focus here is on the underlying technology required. In particular, the advantages and disadvantages of SDL and its tools for this purpose are highlighted.