
    Characterisation schema for selecting software testing techniques

    The importance of properly selecting testing techniques is widely accepted in the software engineering community. However, there are two main reasons why the selections currently made by software developers cannot be considered well founded. Firstly, developers have limited knowledge of the techniques available: there are many techniques with which the average developer is unfamiliar. Secondly, the information available on existing testing techniques is mostly procedural (focused on how to use the technique), whereas there is almost no pragmatic information (focused on the results of using the technique). The open problem addressed in this research is precisely how to help developers improve their selection of software testing techniques. A testing technique characterisation schema is proposed to achieve this objective. By instantiating the schema for multiple techniques, a repository containing information on testing techniques can be built. The schema describes testing techniques systematically, focusing mainly on their pragmatic aspects, which leads to more objective selections. The proposed characterisation schema is composed of a non-flat set of attributes, grouped around the elements of the testing process to which they refer; these elements are in turn grouped around the stages of the testing process. This logical grouping makes the schema information coherent. An empirical and iterative process was followed to arrive at the schema. The process was empirical because testing techniques are not founded on a solid theoretical basis, so the schema needs to be grounded not only in testing theory but also in what the people involved in software testing know about the techniques. It was iterative because an initial schema based on existing testing theory was created and then gradually refined with the knowledge of developers, researchers and experts in the area. The completed characterisation schema was evaluated in two different ways. Firstly, the schema was verified empirically by instantiating it for multiple testing techniques. Secondly, an experiment was carried out in which the repository created during that verification was used to select testing techniques for different projects. Finally, the original contribution of this research is a conceptual tool that developers can use to select the testing techniques for a software project systematically and objectively.
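    As a purely illustrative sketch (not drawn from the thesis itself), the following Python fragment shows how a characterisation schema with attributes grouped around testing-process elements could be instantiated for a technique and queried from a repository; all attribute names and values below are invented.

```python
from dataclasses import dataclass, field

@dataclass
class TechniqueCharacterisation:
    """One instantiation of the schema; attribute groups are hypothetical."""
    name: str
    technique: dict = field(default_factory=dict)  # procedural aspects, e.g. test-case generation cost
    results: dict = field(default_factory=dict)    # pragmatic aspects, e.g. defect types typically found
    project: dict = field(default_factory=dict)    # context attributes, e.g. type of software

class Repository:
    """A repository of characterised techniques supporting objective selection."""
    def __init__(self):
        self._techniques = []

    def add(self, characterisation: TechniqueCharacterisation) -> None:
        self._techniques.append(characterisation)

    def select(self, **criteria):
        """Return techniques whose project attributes match every criterion."""
        return [t for t in self._techniques
                if all(t.project.get(k) == v for k, v in criteria.items())]

repo = Repository()
repo.add(TechniqueCharacterisation(
    name="Boundary value analysis",
    technique={"test_case_cost": "low"},
    results={"defect_types": ["boundary errors"]},
    project={"software_type": "batch"},
))
print([t.name for t in repo.select(software_type="batch")])
```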

    Exposing the myth: object-relational impedance mismatch is a wicked problem

    Addressing problems of software integration is a fact of life for those involved in software development. The popularity of both object and relational technologies means that they will inevitably be used together. However, the combination of these two technologies introduces problems, referred to collectively as the object-relational impedance mismatch. A mismatch is addressed using one or more mapping strategies, typically embodied in a pattern. A strategy is concerned with the correspondence between the schema of a relational database and an object-oriented program. Such strategies are employed in mapping tools such as Hibernate and TopLink, and reinforce the received wisdom that the problem of object-relational impedance mismatch has been solved. In this paper, we observe that it is not clear whether each strategy, as one possible solution, addresses the cause or a symptom of a mismatch. We argue that the problem is not tame and easily resolved; rather, it is complex and wicked. We introduce a catalogue of problem themes that demonstrates the complex nature of the problem and provides a way both to talk about the problem and to understand its complexity. In the future, as software systems become more complex and more connected, it will be important to learn from past endeavours. Our catalogue of problem themes represents a shift in thinking about the problem of object-relational impedance mismatch, from issues of implementation towards an analysis of cause and effect. Such a shift has implications for those involved in the design of current and future software architectures. Because we have questioned the received wisdom, we are now in a position to work toward an appropriate solution to the problem of object-relational impedance mismatch.
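    For readers unfamiliar with what a mapping strategy looks like in practice, the following Python sketch shows one simple strategy (a class mapped to a single relational table); the class, table and mapper are invented examples, not taken from the paper.

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Customer:
    """An object whose state must be reconciled with a relational row."""
    id: int
    name: str

class CustomerMapper:
    """Maps Customer objects to rows, hiding SQL behind an object interface."""
    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS customer (id INTEGER PRIMARY KEY, name TEXT)")

    def insert(self, c: Customer) -> None:
        self.conn.execute("INSERT INTO customer (id, name) VALUES (?, ?)", (c.id, c.name))

    def find(self, customer_id: int) -> Customer:
        row = self.conn.execute(
            "SELECT id, name FROM customer WHERE id = ?", (customer_id,)
        ).fetchone()
        return Customer(*row)

conn = sqlite3.connect(":memory:")
mapper = CustomerMapper(conn)
mapper.insert(Customer(1, "Ada"))
print(mapper.find(1))
```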

    Tools and Technologies for Enabling Characterisation in Synthetic Biology

    Synthetic Biology represents a movement to utilise biological organisms for novel applications through the use of rigorous engineering principles. These principles rely on a solid, well-versed understanding of the underlying biological components and functions relevant to the application. To achieve this understanding, reliable behavioural and contextual information is required (more commonly known as characterisation data). Focusing on lowering the barrier of entry for current research facilities to perform characterisation assays regularly and easily will directly improve the communal knowledge base for Synthetic Biology and enable the further application of rational engineering principles. Whilst characterisation remains a fundamental principle of Synthetic Biology research, the high time costs, subjective measurement protocols, and ambiguous data analysis specifications deter the regular performance of characterisation assays. Vitally, this prevents the valid application of many of the key Synthetic Biology processes that have been derived to improve research yield (with regard to solving application problems) and directly prevents the intended goal of addressing the ad hoc nature of modern research from being realised. Designing new technologies and tools to facilitate rapid ‘hands-off’ characterisation assays for research facilities will improve the uptake of characterisation within the research pipeline. Two core problem areas were identified that limit current characterisation attempts in conventional research, and the primary aim of this investigation was to overcome them in order to promote regular characterisation. The first issue preventing the regular use of characterisation assays is the user-intensive nature of the methodologies and technologies available to researchers. There is currently no standardised characterisation equipment for assaying samples, and the methodologies are heavily dependent on the researcher and their application for successful and complete characterisation. This study proposed a novel high-throughput solution to the characterisation problem that was capable of low-cost, concurrent, and rapid characterisation of simple biological DNA elements. By combining in vitro transcription-translation with microfluidics, a potent solution to the characterisation problem was proposed. By exploiting a completely in vitro approach together with the excellent control afforded by microfluidic technologies, a prototype platform for high-throughput characterisation was developed. The second issue is the lack of flexible, versatile software designed specifically for the data handling needs that are quickly arising within the characterisation speciality. The lack of general solutions in this area is problematic because of the increasing amount of data that is both required and generated for characterisation output to be considered rigorous and of value. To alleviate this issue, a novel framework for laboratory data handling was developed that employs a plugin strategy for data submission and analysis. A plugin strategy improves the shelf life of data handling software by allowing it to grow with the needs of the speciality; it also makes it easier for well-documented processing and analysis standards to emerge that are available to all researchers. Finally, the software provided a powerful and flexible data storage schema that allowed all currently conceivable characterisation data types to be stored in a well-documented manner. The two solutions identified within this study increase the number of enabling tools and technologies available to researchers within Synthetic Biology, which in turn will increase the uptake of regular characterisation. Consequently, this will potentially improve the lateral transfer of knowledge between research projects and reduce the need to perform ad hoc experiments to investigate facets of the fundamental biological components being utilised.
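    As an illustrative sketch of the plugin strategy described above (not the framework's actual API; the plugin names and registry interface are assumptions), a registry-based design might look like this in Python:

```python
from typing import Callable, Dict, List

# Registry of analysis plugins; new analyses are added without changing core code.
ANALYSIS_PLUGINS: Dict[str, Callable[[List[float]], dict]] = {}

def register_plugin(name: str):
    """Decorator that adds an analysis routine to the registry under a documented name."""
    def decorator(func):
        ANALYSIS_PLUGINS[name] = func
        return func
    return decorator

@register_plugin("mean_fluorescence")
def mean_fluorescence(samples: List[float]) -> dict:
    # Each plugin owns and documents its own processing standard.
    return {"mean": sum(samples) / len(samples)}

def analyse(plugin_name: str, samples: List[float]) -> dict:
    """Dispatch a characterisation data set to the named analysis plugin."""
    return ANALYSIS_PLUGINS[plugin_name](samples)

print(analyse("mean_fluorescence", [0.8, 1.1, 0.95]))
```

    A design like this lets processing standards accumulate as shared, versioned plugins rather than ad hoc per-experiment scripts.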

    A Comparison of State-Based Modelling Tools for Model Validation

    In model-based testing, one of the biggest decisions taken before modelling is the choice of modelling language and of the model analysis tool used to model the system under investigation. UML, Alloy and Z are examples of popular state-based modelling languages. In the literature, there has been research on the similarities and differences between modelling languages. However, we believe that, in addition to recognising the expressive power of modelling languages, it is crucial to identify the capabilities and weaknesses of the analysis tools that parse and analyse models written in these languages. To explore this area, we have chosen four model analysis tools (USE, Alloy Analyzer, ZLive and ProZ) and observed how the modelling and validation stages of MBT are handled by these tools for the same system. Through this experiment, we not only concretise the tasks that form the modelling and validation stages of the MBT process, but also reveal how efficiently these tasks are carried out in the different tools.
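    The tools above each use their own notation; as a language-neutral illustration of what a state-based model and a simple validation check involve, the following Python sketch enumerates reachable states and checks an invariant (the system and invariant are invented for illustration and do not come from the paper):

```python
# A small state-based model of a hypothetical door controller.
# Keys are (state, event) pairs; values are successor states.
TRANSITIONS = {
    ("closed", "open_cmd"): "open",
    ("open", "close_cmd"): "closed",
    ("closed", "lock_cmd"): "locked",
    ("locked", "unlock_cmd"): "closed",
}

def reachable_states(initial: str = "closed") -> set:
    """Explore every state reachable from the initial state."""
    seen, frontier = {initial}, [initial]
    while frontier:
        state = frontier.pop()
        for (src, _event), dst in TRANSITIONS.items():
            if src == state and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

# Validation: the model must never reach a forbidden state.
FORBIDDEN = {"open_while_locked"}
assert not (reachable_states() & FORBIDDEN)
print(reachable_states())
```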

    Invest to Save: Report and Recommendations of the NSF-DELOS Working Group on Digital Archiving and Preservation

    Digital archiving and preservation are important areas for research and development, but there is no agreed-upon set of priorities or coherent plan for research in this area. Research projects in this area tend to be small and driven by particular institutional problems or concerns. As a consequence, proposed solutions from experimental projects and prototypes tend not to scale to millions of digital objects, nor do the results from disparate projects readily build on each other. It is also unclear whether it is worthwhile to seek general solutions or whether different strategies are needed for different types of digital objects and collections. The lack of coordination in both research and development means that there are some areas where researchers are reinventing the wheel while other areas are neglected. Digital archiving and preservation is an area that will benefit from an exercise in analysis, priority setting, and planning for future research. The working group aims to survey current research activities, identify gaps, and develop a white paper proposing future research directions in the area of digital preservation. Some of the potential areas for research include repository architectures and interoperability among digital archives; automated tools for capture, ingest, and normalization of digital objects; and harmonization of preservation formats and metadata. There may also be opportunities for development of commercial products in the areas of mass storage systems, repositories and repository management systems, and data management software and tools.

    Strategies for the intelligent selection of components

    It is becoming common to build applications as component-intensive systems: a mixture of fresh code and existing components. For application developers, the selection of components to incorporate is key to overall system quality, so they want the 'best'. For each selection task, the application developer will define requirements for the ideal component and use them to select the most suitable one. While many software selection processes exist, there is a lack of repeatable, usable, flexible, automated processes with tool support. This investigation has focussed on finding and implementing strategies to enhance the selection of software components. The study was built around four research elements, targeting characterisation, process, strategies and evaluation. A post-positivist methodology was used, with the Spiral Development Model (SDM) structuring the investigation. Data for the study were generated using a range of qualitative and quantitative methods, including a survey, a range of case studies, and quasi-experiments focusing on the specific tuning of tools and techniques. Evaluation and review are integral to the SDM: a Goal-Question-Metric (GQM)-based approach was applied to every Spiral.
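    As an illustrative sketch of requirement-driven component selection (the criteria, weights and scores are invented and do not represent the thesis's actual process), candidate components could be ranked with a simple weighted scoring:

```python
# Hypothetical requirements for the ideal component, weighted by importance.
requirements = {"reliability": 0.5, "licence_cost": 0.2, "documentation": 0.3}

# Hypothetical candidate components scored 1-5 against each requirement.
candidates = {
    "ComponentA": {"reliability": 4, "licence_cost": 5, "documentation": 2},
    "ComponentB": {"reliability": 5, "licence_cost": 2, "documentation": 4},
}

def score(component_scores: dict, weights: dict) -> float:
    """Weighted sum of per-requirement scores (higher is better)."""
    return sum(weights[req] * component_scores[req] for req in weights)

ranked = sorted(candidates, key=lambda c: score(candidates[c], requirements), reverse=True)
print(ranked)  # best-matching component first
```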
