
    Automatic Acceptance Test Case Generation From Essential Use Cases

    Requirements validation is a crucial process to determine whether client-stakeholders’ needs and expectations of a product are sufficiently correct and complete. Various requirements validation techniques have been used to evaluate the correctness and quality of requirements, but most of these techniques are tedious, expensive and time-consuming. Accordingly, most project members are reluctant to invest their time and effort in the requirements validation process. Moreover, automated tool support that promotes effective collaboration between the client-stakeholders and the engineers is still lacking. In this paper, we describe a novel approach that combines prototyping and test-based requirements techniques to improve the requirements validation process and promote better communication and collaboration between requirements engineers and client-stakeholders. To demonstrate the potential of this prototype tool, we also present three types of evaluation conducted on it: a usability survey, a three-tool comparison analysis and expert reviews.
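    As a sketch of the general idea only (the abstract does not describe the tool's internals), the following Python snippet shows how an essential use case, written as alternating user intentions and system responsibilities, could be turned into a Gherkin-style acceptance test skeleton; the data structure, names and output format below are assumptions, not the paper's actual design:

from dataclasses import dataclass
from typing import List


@dataclass
class EucStep:
    actor: str        # "user" or "system"
    description: str  # abstract interaction, e.g. "identify self"


def generate_acceptance_test(name: str, steps: List[EucStep]) -> str:
    """Turn an essential use case into a Gherkin-style acceptance test skeleton."""
    lines = [f"Scenario: {name}"]
    for step in steps:
        # User intentions become "When" steps, system responsibilities become "Then" steps.
        keyword = "When" if step.actor == "user" else "Then"
        lines.append(f"  {keyword} {step.description}")
    return "\n".join(lines)


if __name__ == "__main__":
    withdraw_cash = [
        EucStep("user", "the customer identifies themself"),
        EucStep("system", "the system verifies the identity"),
        EucStep("user", "the customer chooses an amount"),
        EucStep("system", "the system dispenses the cash"),
    ]
    print(generate_acceptance_test("Withdraw cash", withdraw_cash))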

    SensorCloud: Towards the Interdisciplinary Development of a Trustworthy Platform for Globally Interconnected Sensors and Actuators

    Although Cloud Computing promises to lower IT costs and increase users' productivity in everyday life, the unattractive aspect of this new technology is that the user no longer owns all the devices which process personal data. To lower scepticism, the project SensorCloud investigates techniques to understand and compensate for these adoption barriers in a scenario consisting of cloud applications that utilize sensors and actuators placed in private places. This work provides an interdisciplinary overview of the social and technical core research challenges for the trustworthy integration of sensor and actuator devices with the Cloud Computing paradigm. Most importantly, these challenges include i) ease of development, ii) security and privacy, and iii) social dimensions of a cloud-based system which integrates into private life. When these challenges are tackled in the development of future cloud systems, the attractiveness of new use cases in a sensor-enabled world will be considerably increased for users who currently do not trust the Cloud. Comment: 14 pages, 3 figures, published as a technical report of the Department of Computer Science of RWTH Aachen University

    A Taxonomy for Requirements Engineering and Software Test Alignment

    Requirements Engineering and Software Testing are mature areas and have seen a lot of research. Nevertheless, their interactions have been sparsely explored beyond the concept of traceability. To fill this gap, we propose a definition of requirements engineering and software test (REST) alignment, a taxonomy that characterizes the methods linking the respective areas, and a process to assess alignment. The taxonomy can support researchers in identifying new opportunities for investigation, as well as practitioners in comparing alignment methods and evaluating alignment, or the lack thereof. We constructed the REST taxonomy by analyzing alignment methods published in the literature, iteratively validating the emerging dimensions. The resulting concept of an information dyad characterizes the exchange of information required for any alignment to take place. We demonstrate the use of the taxonomy by applying it to five in-depth cases and illustrate angles of analysis on a set of thirteen alignment methods. In addition, we developed an assessment framework (REST-bench), applied it in an industrial assessment, and showed that, with low effort, it can identify opportunities to improve REST alignment. Although we expect that the taxonomy can be further refined, we believe that the information dyad is a valid and useful construct for understanding alignment.
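    The abstract does not spell out the exact structure of an information dyad, but a minimal Python sketch of the idea, a pair of artefacts from the requirements and testing sides plus the information exchanged between them, could look as follows; all field names and example values are illustrative assumptions, not the taxonomy's exact terms:

from dataclasses import dataclass


@dataclass(frozen=True)
class Artefact:
    side: str   # "requirements" or "testing"
    name: str   # e.g. "use case UC-12", "system test suite ST-3"
    owner: str  # role responsible for the artefact


@dataclass(frozen=True)
class InformationDyad:
    source: Artefact   # where the information originates
    target: Artefact   # where the information is consumed
    information: str   # what is exchanged, e.g. "acceptance criteria"
    mechanism: str     # how it is exchanged, e.g. "traceability links, review meetings"


if __name__ == "__main__":
    dyad = InformationDyad(
        source=Artefact("requirements", "use case UC-12", "requirements engineer"),
        target=Artefact("testing", "system test suite ST-3", "test lead"),
        information="acceptance criteria",
        mechanism="traceability links maintained in the requirements tool",
    )
    print(dyad)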

    The genotype-phenotype relationship in multicellular pattern-generating models - the neglected role of pattern descriptors

    Background: A deep understanding of what causes the phenotypic variation arising from biological patterning processes cannot be claimed before we are able to recreate this variation by mathematical models capable of generating genotype-phenotype maps in a causally cohesive way. However, the concept of pattern in a multicellular context implies that what matters is not the state of every single cell, but certain emergent qualities of the total cell aggregate. Thus, in order to set up a genotype-phenotype map in such a spatiotemporal pattern setting, one is actually forced to establish new pattern descriptors and derive their relations to the parameters of the original model. A pattern descriptor is a variable that describes and quantifies a certain qualitative feature of the pattern, for example the degree to which certain macroscopic structures are present. There is today no general procedure for relating a set of patterns and their characteristic features to the functional relationships, parameter values and initial values of an original pattern-generating model. Here we present a new, generic approach for explorative analysis of complex patterning models that focuses on the essential pattern features and their relations to the model parameters. The approach is illustrated with an existing model for Delta-Notch lateral inhibition over a two-dimensional lattice. Results: By combining computer simulations according to a succession of statistical experimental designs, computer graphics, automatic image analysis, human sensory descriptive analysis and multivariate data modelling, we derive a pattern descriptor model of those macroscopic, emergent aspects of the patterns that we consider of interest. The pattern descriptor model relates the values of the new, dedicated pattern descriptors to the parameter values of the original model, for example by predicting the parameter values leading to particular patterns, and provides insights that would have been hard to obtain by traditional methods. Conclusion: The results suggest that our approach may qualify as a general procedure for how to discover and relate relevant features and characteristics of emergent patterns to the functional relationships, parameter values and initial values of an underlying pattern-generating mathematical model.
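    As a rough sketch of the workflow only (using a toy lattice update and made-up descriptors rather than the paper's Delta-Notch model and sensory-analysis descriptors), the following Python snippet simulates a lateral-inhibition-like pattern for a given parameter value and reduces it to a few scalar pattern descriptors, which could then be tabulated against the model parameters as a statistical design would require:

import numpy as np


def simulate_lateral_inhibition(inhibition_strength: float, size: int = 30,
                                steps: int = 500, seed: int = 0) -> np.ndarray:
    """Toy stand-in for a lateral-inhibition model: each cell relaxes toward a
    production level repressed by the mean activity of its four lattice neighbours."""
    rng = np.random.default_rng(seed)
    d = rng.random((size, size))
    for _ in range(steps):
        neighbours = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
                      np.roll(d, 1, 1) + np.roll(d, -1, 1)) / 4.0
        target = 1.0 / (1.0 + inhibition_strength * neighbours ** 2)
        d += 0.2 * (target - d)
    return d


def pattern_descriptors(d: np.ndarray) -> dict:
    """Reduce the lattice to emergent, macroscopic features of the pattern."""
    high = d > 0.5
    neighbours_high = (np.roll(high, 1, 0) | np.roll(high, -1, 0) |
                       np.roll(high, 1, 1) | np.roll(high, -1, 1))
    return {
        "fraction_high": float(high.mean()),                        # overall salt-and-pepper density
        "isolated_high": float((high & ~neighbours_high).mean()),   # lateral-inhibition signature
    }


if __name__ == "__main__":
    # Sweep one model parameter and tabulate the descriptors for later regression.
    for strength in (2.0, 6.0, 12.0):
        print(strength, pattern_descriptors(simulate_lateral_inhibition(strength)))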

    Methodology for Testing RFID Applications

    Radio Frequency Identification (RFID) is a promising technology for process automation and beyond, capable of identifying objects without the need for a line of sight. However, the trend towards automatic identification of objects also increases the demand for high-quality RFID applications. Therefore, research on testing RFID systems and methodical approaches for testing are needed. This thesis presents a novel methodology for the system-level test of RFID applications. The approach, called ITERA, allows for the automatic generation of tests, defines a semantic model of the RFID system and provides a test environment for RFID applications. The method introduced can be used to gradually transform use cases into a semi-formal test specification. Test cases are then systematically generated and executed in the test environment. It applies the principle of model-based testing from a black-box perspective in combination with a virtual environment for automatic test execution. The presence of RFID tags in an area monitored by an RFID reader can be modelled by time-based sets using set theory and discrete events. Furthermore, the proposed description and semantics can be used to specify RFID systems and their applications, which might also be used for purposes other than testing. The approach uses the Unified Modelling Language to model the characteristics of the system under test. Based on the ITERA meta model, test execution paths are extracted directly from activity diagrams and RFID-specific test cases are generated. The approach introduced in this thesis reduces the effort of RFID application testing through systematic test case generation and automatic test execution. In combination with the meta model and by considering additional parameters, such as unreliability factors, it not only covers functional testing aspects but also increases confidence in the robustness of the tested application. Combined with the instantly available virtual readers, it has the potential to speed up the development process and decrease costs, even during the early development phases. ITERA can be used for highly automated and reproducible tests and, because of the instantly available virtual readers, even before the real environment is deployed. Furthermore, total control of the RFID environment makes it possible to test applications which would be difficult to test manually. This thesis explains the motivation and objectives of this new RFID application test methodology. Based on an RFID system analysis, it proposes a practical solution to the identified issues. Further, it gives a literature review of testing fundamentals, model-based test case generation, the typical components of an RFID system and RFID standards used in industry. Integrative Test-Methodology for RFID Applications (ITERA) - Project: Eurostars!5516 ITERA, FKZ 01QE1105
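    As an illustrative sketch only (not the ITERA implementation), the following Python snippet shows the idea of modelling tag presence at a reader as a time-based set driven by discrete enter/leave events, the kind of inventory a virtual reader could report to the application under test; all names and the event format are assumptions:

from dataclasses import dataclass
from typing import List, Set


@dataclass(frozen=True)
class TagEvent:
    time: float  # timestamp of the event
    tag_id: str  # identifier (e.g. EPC) of the tag
    kind: str    # "enter" or "leave" the reader's field


def tags_present(events: List[TagEvent], at_time: float) -> Set[str]:
    """Set of tag ids inside the monitored area at a given point in time."""
    present: Set[str] = set()
    for ev in sorted(events, key=lambda e: e.time):
        if ev.time > at_time:
            break
        if ev.kind == "enter":
            present.add(ev.tag_id)
        elif ev.kind == "leave":
            present.discard(ev.tag_id)
    return present


if __name__ == "__main__":
    events = [
        TagEvent(0.0, "EPC-001", "enter"),
        TagEvent(1.5, "EPC-002", "enter"),
        TagEvent(2.0, "EPC-001", "leave"),
    ]
    # A generated test case could assert the expected inventory of a virtual reader:
    assert tags_present(events, at_time=1.8) == {"EPC-001", "EPC-002"}
    assert tags_present(events, at_time=2.5) == {"EPC-002"}
    print("virtual reader inventory checks passed")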