Development of a client interface for a methodology independent object-oriented CASE tool : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Science at Massey University
The overall aim of the research presented in this thesis is the development of a prototype CASE Tool user interface that supports the use of arbitrary methodology notations for the construction of small-scale diagrams. This research is part of the larger CASE Tool project, MOOT (Massey's Object Oriented Tool). MOOT is a meta-system with a client-server architecture that provides a framework within which the semantics and syntax of methodologies can be described. The CASE Tool user interface is implemented in Java so it is as portable as possible and has a consistent look and feel. It has been designed as a client to the rest of the MOOT system (which acts as a server). A communications protocol has been designed to support the interaction between the CASE Tool client and a MOOT server. The user interface design of MOOT must support all possible graphical notations. No assumptions about the types of notations that a software engineer may use can be made. MOOT therefore provides a specification language called NDL for the definition of a methodology's syntax. Hence, the MOOT CASE Tool client described in this thesis is a shell that is parameterised by NDL specifications. The flexibility provided by such a high level of abstraction presents significant challenges in terms of designing effective human-computer interaction mechanisms for the MOOT user interface. Functional and non-functional requirements of the client user interface have been identified and applied during the construction of the prototype. A notation specification that defines the syntax for Coad and Yourdon OOA/OOD has been written in NDL and used as a test case. The thesis includes the iterative evaluation and extension of NDL resulting from the prototype development. The prototype has shown that the current approach to NDL is efficacious, and that the syntax and semantics of a methodology description can successfully be separated. 
The developed prototype has shown that it is possible to build a simple, non-intrusive, and efficient, yet flexible, usable, and helpful interface for meta-CASE tools. The development of the CASE Tool client, through its generic, methodology-independent design, has provided a pilot with which future ideas may be explored.
Fuzzy Logic-Based System for the Estimation of the Usability Level in User Tests
Starting from the challenge of obtaining the usability level in numerical and linguistic terms within a user test, based on the three attributes that define usability, the development of a system based on fuzzy logic is proposed for estimating the output usability level of a user test according to the ISO 9241-11 standard. The attributes are effectiveness, efficiency, and satisfaction, which have different metrics and numerical scales. For the development of the system, five methodological stages were defined: characterization of the structure of a usability test, definition of membership functions for the inputs and outputs of the system, design of the inference rules that relate the inputs and outputs, design and implementation of the fuzzy system, and development of the case study. The proposed system was implemented using FCL (Fuzzy Control Language) and the jFuzzyLogic API; it takes as inputs the values calculated for the attributes of effectiveness, efficiency, and satisfaction, and produces the usability level as output, considering the membership functions of the inputs and outputs as well as a set of inference rules defined by a panel of experts. As a case study, the proposed fuzzy system was validated using the results of a usability test with five users conducted on the Sigma Electrónica website. From the results obtained in the case study, it could be concluded that the implemented system is adequate for obtaining a usability level in numerical and linguistic terms in conventional usability tests carried out in a usability laboratory, considering the attributes of ISO 9241-11.
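The abstract does not reproduce the FCL rule base itself; the following is a minimal Python sketch of the same fuzzy-inference idea, with hypothetical triangular membership functions and a deliberately simplified rule set and defuzzification (not the paper's jFuzzyLogic implementation):

```python
# Illustrative fuzzy estimation of usability from effectiveness,
# efficiency, and satisfaction, all normalised to the range 0..1.
# Membership functions, rules, and defuzzification are hypothetical
# simplifications, not the paper's FCL/jFuzzyLogic rule base.

def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic terms shared by all three input attributes.
TERMS = {
    "low":    lambda x: tri(x, -0.5, 0.0, 0.5),
    "medium": lambda x: tri(x, 0.0, 0.5, 1.0),
    "high":   lambda x: tri(x, 0.5, 1.0, 1.5),
}

# Representative output value per linguistic term (Sugeno-style).
OUT = {"low": 0.2, "medium": 0.5, "high": 0.8}

def usability(effectiveness, efficiency, satisfaction):
    inputs = (effectiveness, efficiency, satisfaction)
    num = den = 0.0
    # One rule per term: usability is <term> if all inputs are <term>
    # (min as AND); a real rule base relates mixed terms, set by experts.
    for term, mf in TERMS.items():
        strength = min(mf(v) for v in inputs)
        num += strength * OUT[term]
        den += strength
    return num / den if den else 0.5

level = usability(0.9, 0.8, 0.85)  # high inputs give a high score
```

The linguistic label can be read back from the strongest firing term, which is how a numerical score and a linguistic judgement can be reported together.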
Elaborating validation scenarios based on the context analysis and combinatorial method: Example of the power-efficiency framework Innometrics
This research project was carried out with the support of Russian Science Foundation Grant No. 19-19-00623. The preliminary task of a project consists of the definition of the scenarios that will guide further development work and validate the results. In this paper, we present an approach for the systematic generation of validation scenarios using a specifically developed taxonomy and combinatorial testing. We applied this approach to our research project for the development of the energy-efficiency evaluation framework named Innometrics. We described in detail all steps for taxonomy creation, generation of abstract validation scenarios, and identification of relevant industrial and academic case studies. We created a taxonomy of the target computer systems and then elaborated test cases using combinatorial testing. The classification criteria were the type of the system, its purpose, enabling hardware components and connectivity technologies, basic design patterns, programming language, and development lifecycle. The combinatorial testing resulted in 13 cases for one-way test coverage, which was considered enough to create a comprehensive test suite. We defined a case study for each particular scenario. These case studies represent real industrial, educational, and open-source software development projects that will be used in further work on the Innometrics project.
Ciancarini P.; Kruglov A.; Sadovykh A.; Succi G.; Zuev E.
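The one-way ("each choice") coverage criterion mentioned above can be sketched in a few lines of Python; the parameter names and values below are invented for illustration and are not the paper's taxonomy:

```python
# Sketch of one-way ("each choice") combinatorial test generation:
# every value of every parameter appears in at least one test case,
# so the suite size equals the largest value list, not the product.

def each_choice(parameters):
    """parameters: dict name -> list of values. Returns list of test cases."""
    width = max(len(values) for values in parameters.values())
    cases = []
    for i in range(width):
        # Cycle shorter value lists so every row is a complete test case.
        cases.append({name: values[i % len(values)]
                      for name, values in parameters.items()})
    return cases

# Hypothetical classification criteria, not the Innometrics taxonomy.
taxonomy = {
    "system type": ["embedded", "desktop", "web", "mobile"],
    "language": ["C", "Java", "Python"],
    "lifecycle": ["agile", "waterfall"],
}
suite = each_choice(taxonomy)  # one-way coverage needs max(#values) cases
```

With this criterion the suite grows with the largest classification axis, which is how a rich taxonomy can still collapse to a small number of concrete case studies (13 in the paper).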
Investigation into an improved modular rule-based testing framework for business rules
Rule testing in scheduling applications is a complex and potentially costly business problem. This thesis reports the outcome of research undertaken to develop a system to describe and test scheduling rules against a set of scheduling data. The overall intention of the research was to reduce commercial scheduling costs by minimizing human domain expert interaction within the scheduling process.
This thesis reports the outcome of research initiated following a consultancy project to develop a system to test driver schedules against the legal driving rules in force in the UK and the EU. One of the greatest challenges faced was interpreting the driving rules and translating them into the chosen programming language; this part of the project required considerable programming, testing, and debugging effort. A potential problem then arises if the Department of Transport or the European Union alters the driving rules: considerable software development is likely to be required to support the new rule set.
The approach considered takes into account the need for a modular software component that can be used not just in transport scheduling systems concerned with legal driving rules, but can also be integrated into other systems that need to test temporal rules. The integration of the rule testing component into existing systems is key to making the proposed solution reusable.
The research outcome proposes an alternative approach to rule definition, similar to that of RuleML but with the addition of rule metadata to support the description of rules of a temporal nature. The rules can be serialised and deserialised between XML (eXtensible Markup Language) and objects within an object-oriented environment (in this case .NET with C#), providing a means of transmitting the rules over a communication infrastructure. The rule objects can then be compiled into an executable software library, allowing the rules to be tested more rapidly than traditional interpreted rules. Additional support functionality is also defined to provide a means of effectively integrating the rule testing engine into existing applications.
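As a rough illustration of the serialisation idea (the actual implementation uses .NET with C#, and the XML vocabulary below is invented for the sketch), a temporal rule with metadata can round-trip between an object and XML like this:

```python
# Illustrative round trip of a temporal rule between an object and XML,
# in the spirit of the RuleML-like format described above. The element
# and attribute names here are hypothetical, not the thesis's schema.

import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class TemporalRule:
    name: str
    window_hours: int    # temporal metadata: the evaluation window
    max_duty_hours: int  # constraint tested against a work plan

    def to_xml(self):
        rule = ET.Element("rule", name=self.name)
        meta = ET.SubElement(rule, "metadata")
        ET.SubElement(meta, "window", unit="hours").text = str(self.window_hours)
        ET.SubElement(rule, "maxDuty", unit="hours").text = str(self.max_duty_hours)
        return ET.tostring(rule, encoding="unicode")

    @classmethod
    def from_xml(cls, text):
        rule = ET.fromstring(text)
        return cls(rule.get("name"),
                   int(rule.findtext("metadata/window")),
                   int(rule.findtext("maxDuty")))

rule = TemporalRule("eu-daily-driving", window_hours=24, max_duty_hours=9)
assert TemporalRule.from_xml(rule.to_xml()) == rule
```

The deserialised objects correspond to the stage at which the thesis compiles rules into an executable library rather than interpreting the XML directly.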
Following the construction of a rule testing engine designed to meet the given requirements, a series of tests was undertaken to determine the effectiveness of the proposed approach. This led to the implementation of improvements in the caching of constructed work plans to further improve performance. Tests were also carried out into the application of the proposed solution within alternative scheduling domains, and to analyse the difference in computational performance and memory usage across system architectures, software frameworks and operating systems, with the support of Mono.
Future work following on from this thesis is likely to include the development of graphical design tools for the creation of the rules, improvements to the work plan construction algorithm, parallelisation of elements of the process to take better advantage of multi-core processors, and off-loading of the rule testing process onto dedicated or generic computational processors.
Automated requirements-driven testing of embedded systems based on use case specifications and timed automata
The complexity of embedded software in safety-critical domains, such as automotive and avionics, has significantly increased over the years. For most embedded systems, standards require system testing to explicitly demonstrate that the software meets its functional and safety requirements. In these domains, system test cases are often manually derived from functional requirements in natural language plus other design artefacts, like UML statecharts. The definition of system test cases is therefore time-consuming and error-prone, especially given the quickly rising complexity of embedded systems.
The benefits of automatic test generation are widely acknowledged today, but existing approaches often require behavioural models that tend to be complex and expensive to produce, and are thus often not part of development practice.
The work proposed in this dissertation focusses on the automated generation of test cases for testing the compliance between software and its functional and timing requirements. This dissertation is inspired by contexts where functional and timing requirements are expressed by means of use case specifications and timed automata, respectively. This is the development context of our industrial partner, IEE, an automotive company located in Luxembourg, which provided the case study used to validate the approach and tool described in this dissertation.
This dissertation presents five main contributions: (1) a set of guidelines for the definition of functional and timing requirements to enable the automated generation of system test cases; (2) a technique for the automated generation of functional test cases from requirements elicited in the form of use case specifications following a prescribed template and natural-language restrictions; (3) a technique that reuses the automatically generated functional test cases to generate timeliness test cases from minimal models of the timing requirements of the system; (4) a technique for the automated generation of oracles for non-deterministic systems whose specifications are expressed by means of timed automata (in this context, such oracles are necessary to evaluate the results of the generated timeliness test cases); and (5) the evaluation of the applicability and effectiveness of the proposed guidelines and techniques on an industrial case study, a representative automotive embedded system developed by IEE.
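As a simplified illustration of the oracle idea in contribution (4), an oracle can replay an observed timed trace against a specification automaton and check its clock guards; the single-clock, deterministic automaton below is invented for the sketch (the dissertation's technique additionally handles non-deterministic specifications, which require tracking sets of possible states):

```python
# Minimal timed-automaton oracle sketch: replay an observed timed trace
# of (event, timestamp) pairs against a one-clock specification and
# report whether it conforms. The automaton itself is hypothetical.

# Transitions: (state, event) -> (guard_lo, guard_hi, reset, next_state).
# The guard requires the clock value to lie in [guard_lo, guard_hi];
# reset=True zeroes the clock on taking the transition.
SPEC = {
    ("idle", "press"):    (0.0, float("inf"), True, "armed"),
    ("armed", "fire"):    (0.0, 2.0,          True, "idle"),   # within 2 s
    ("armed", "timeout"): (2.0, float("inf"), True, "idle"),
}

def conforms(trace):
    state, clock_zero, last = "idle", 0.0, 0.0
    for event, t in trace:
        assert t >= last, "timestamps must be monotone"
        last = t
        key = (state, event)
        if key not in SPEC:
            return False                 # unexpected event in this state
        lo, hi, reset, nxt = SPEC[key]
        clock = t - clock_zero
        if not (lo <= clock <= hi):
            return False                 # timing requirement violated
        if reset:
            clock_zero = t
        state = nxt
    return True

assert conforms([("press", 0.0), ("fire", 1.5)])
assert not conforms([("press", 0.0), ("fire", 3.0)])  # fired too late
```

Such a verdict function is exactly what the generated timeliness test cases need, since the pass/fail decision depends on when outputs occur, not only on which outputs occur.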
Development of a Requirements Classification Scheme for Automated Support of Software Development
Computer hardware technology has advanced to the point where it is feasible to develop software systems that exceed one million lines of code. Software engineering, the discipline concerned with developing these systems, typically includes the following phases in the software development cycle: requirements phase, design phase, implementation phase, test phase and the installation and checkout phase [IEEE, 1983]. This paper is concerned with a classification scheme to support knowledge-based analysis of software requirements. Because requirements in these large systems can become quite numerous and complex, it is a very difficult task to analyze them in order to identify potential problems, i.e., problems that may arise in later phases, such as incompleteness, conflict, ambiguity, and absence of testability. Current software productivity tools such as CASE (Computer-Aided Software Engineering), executable requirements languages, prototyping tools, and test harnesses provide much assistance in later phases of software development, but not much for the requirements phase. It is well-known in software engineering that the most costly and profound software errors are likely to occur at the requirements definition phase, early in the software development cycle. The productivity tools mentioned earlier either work from code which has been developed for the system, or work with user requirements statements which have been translated into a restricted, formal language. There are several problems with translating user requirements into a formal language very early in the project. First, it is difficult for the user to understand requirements as they have been translated and interpreted by the systems analyst. Second, the systems analyst may have made wrong assumptions or interpretations in the process of translation.
Finally, formal languages do not have any provision for ambiguity, and early in the requirements phase, ambiguities may exist, to be worked out as more is learned about the system.
IEEE Standard 1500 Compliance Verification for Embedded Cores
Core-based design and reuse are the two key elements for an efficient system-on-chip (SoC) development. Unfortunately, they also introduce new challenges in SoC testing, such as core test reuse and the need for a common test infrastructure working with cores originating from different vendors. The IEEE 1500 Standard for Embedded Core Testing addresses these issues by proposing a flexible hardware test wrapper architecture for embedded cores, together with a core test language (CTL) used to describe the implemented wrapper functionalities. Several intellectual property providers have already announced IEEE Standard 1500 compliance in both existing and future design blocks. In this paper, we address the problem of guaranteeing the compliance of a wrapper architecture and its CTL description to the IEEE Standard 1500. This step is mandatory to fully trust the wrapper functionalities in applying the test sequences to the core. We present a systematic methodology to build a verification framework for IEEE Standard 1500 compliant cores, allowing core providers and/or integrators to verify the compliance of their products (sold or purchased) to the standard.
Testing conformance of a deterministic implementation against a non-deterministic stream X-machine
Stream X-machines are a formalisation of extended finite state machines that have been used to specify systems. One of the great benefits of using stream X-machines, for the purpose of specification, is the associated test generation technique, which produces a test that is guaranteed to determine correctness under certain design for test conditions. This test generation algorithm has recently been extended to the case where the specification is non-deterministic. However, the algorithms for testing from a non-deterministic stream X-machine currently have limitations: either they test for equivalence rather than conformance, or they restrict the source of non-determinism allowed in the specification. This paper introduces a new test generation algorithm that overcomes both of these limitations, for situations where the implementation is known to be deterministic.
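To make the stream X-machine model concrete, here is a minimal Python sketch: a finite control whose transitions are partial processing functions that consume an input symbol, emit an output symbol, and update a memory value. The bounded-counter machine below is invented for illustration and is not taken from the paper:

```python
# Minimal stream X-machine sketch: finite control plus a memory value.
# Each transition carries a partial processing function; if it applies
# to the current (memory, input symbol) pair it returns (output, memory').

def inc(mem, sym):
    # Applicable only below the bound of 3 (hypothetical constraint).
    if sym == "+" and mem < 3:
        return ("ok", mem + 1)
    return None

def dec(mem, sym):
    if sym == "-" and mem > 0:
        return ("ok", mem - 1)
    return None

# state -> list of (processing function, next state)
MACHINE = {"s": [(inc, "s"), (dec, "s")]}

def run(machine, state, mem, stream):
    outputs = []
    for sym in stream:
        for fn, nxt in machine[state]:
            result = fn(mem, sym)
            if result is not None:       # deterministic: first applicable arc
                out, mem = result
                outputs.append(out)
                state, matched = nxt, True
                break
        else:
            return outputs, "blocked"    # no transition applicable
    return outputs, "accepted"

assert run(MACHINE, "s", 0, "++-")[1] == "accepted"
assert run(MACHINE, "s", 0, "-")[1] == "blocked"
```

Infeasible paths arise when a sequence of processing functions can never all apply in turn (e.g. "-" from an empty counter), which is exactly what the controllability condition discussed in these papers rules out.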
Static Analysis of Functional Programs
In this paper, the static analysis of programs in the functional programming language Miranda is described, based on two graph models. A new control-flow graph model of Miranda definitions is presented, together with a model comprising four classes of callgraphs. Standard software metrics are applicable to these models. A Miranda front end for Prometrix, a tool for the automated analysis of flowgraphs and callgraphs, has been developed. This front end produces the flowgraph and callgraph representations of Miranda programs. Some features of the metric analyser are illustrated with an example program. The tool provides promising access to standard metrics on functional programs.
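As an example of the standard metrics such flowgraph models support, cyclomatic complexity can be computed directly from a flowgraph's edge list; the graph below is a hypothetical flowgraph of a single definition, not one of the paper's models:

```python
# Cyclomatic complexity V(G) = E - N + 2 for a single connected
# flowgraph, one of the standard metrics applicable to flowgraph
# models of program definitions.

def cyclomatic_complexity(edges):
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# entry -> guard -> (then | else) -> exit : one decision, so V(G) = 2.
flowgraph = [("entry", "guard"), ("guard", "then"), ("guard", "else"),
             ("then", "exit"), ("else", "exit")]
assert cyclomatic_complexity(flowgraph) == 2
```

The same computation applies whether the flowgraph comes from an imperative procedure or, as here, from a graph model of a functional definition, which is what makes such front ends useful.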
Testing a deterministic implementation against a non-controllable non-deterministic stream X-machine
A stream X-machine is a type of extended finite state machine with an associated development approach that consists of building a system from a set of trusted components. One of the great benefits of using stream X-machines for the purpose of specification is the existence of test generation techniques that produce test suites guaranteed to determine correctness as long as certain well-defined conditions hold. One of the conditions traditionally assumed to hold is controllability, which insists that all paths through the stream X-machine are feasible. This restrictive condition has recently been weakened for testing from a deterministic stream X-machine. This paper shows how controllability can be replaced by a weaker condition when testing a deterministic system against a non-deterministic stream X-machine, and therefore develops a new, more general test generation algorithm for testing from a non-deterministic stream X-machine.