Maintenance of Automated Test Suites in Industry: An Empirical study on Visual GUI Testing
Context: Verification and validation (V&V) activities make up 20 to 50
percent of the total development costs of a software system in practice. Test
automation is proposed to lower these V&V costs but available research only
provides limited empirical data from industrial practice about the maintenance
costs of automated tests and what factors affect these costs. In particular,
these costs and factors are unknown for automated GUI-based testing.
Objective: This paper addresses this lack of knowledge through analysis of
the costs and factors associated with the maintenance of automated GUI-based
tests in industrial practice.
Method: An empirical study at two companies, Siemens and Saab, is reported, in
which interviews about, and empirical work with, Visual GUI Testing were
performed to acquire data about the technique's maintenance costs and
feasibility.
Results: 13 factors are observed that affect maintenance, e.g. tester
knowledge/experience and test case complexity. Further, statistical analysis
shows that developing new test scripts is costlier than maintaining existing
ones, but also that frequent maintenance is less costly than infrequent,
big-bang maintenance. In addition, a cost model, based on previous work, is
presented that estimates
the time to positive return on investment (ROI) of test automation compared to
manual testing.
Conclusions: It is concluded that test automation can lower overall software
development costs of a project whilst also having positive effects on software
quality. However, maintenance costs can still be considerable, and the less
time a company currently spends on manual testing, the longer it takes to reach
positive economic ROI after automation.
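The break-even relationship described in this abstract can be sketched as a simple calculation. This is a minimal illustration, not the paper's actual cost model; the function name and all cost figures are invented for the example.

```python
# Simplified sketch of a break-even model for test automation ROI.
# Parameter names and values are illustrative, not the paper's model.

def months_to_positive_roi(dev_cost, maint_cost_per_month, manual_cost_per_month):
    """Months until cumulative automation cost drops below cumulative manual cost.

    dev_cost: one-off cost of developing the automated suite.
    maint_cost_per_month: recurring maintenance cost of the suite.
    manual_cost_per_month: cost of the manual testing it replaces.
    """
    savings = manual_cost_per_month - maint_cost_per_month
    if savings <= 0:
        return None  # automation never pays off
    return dev_cost / savings

# A team that spends little on manual testing breaks even later:
print(months_to_positive_roi(1200, 40, 100))  # → 20.0 months
print(months_to_positive_roi(1200, 40, 400))  # ≈ 3.33 months
```

The second call illustrates the abstract's conclusion: the larger the manual-testing effort being replaced, the sooner automation pays for itself.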
A question of trust: can we build an evidence base to gain trust in systematic review automation technologies?
Background: Although many aspects of systematic reviews use computational tools, systematic reviewers have been reluctant to adopt machine learning tools.
Discussion: The reasons for the slow adoption of machine learning tools into systematic reviews are likely multifactorial. We focus on the current absence of trust in automation and on set-up challenges as major barriers to adoption. It is important that reviews produced using automation tools are considered non-inferior or superior to current practice. However, this standard alone is unlikely to lead to widespread adoption. As with many technologies, it is important that reviewers see 'others' in the review community using automation tools. Adoption will also be slow if the automation tools are not compatible with the workflows and tasks currently used to produce reviews. Many automation tools being developed for systematic reviews address classification problems. Therefore, the evidence that these tools are non-inferior or superior can be presented using methods similar to diagnostic test evaluations, i.e. precision and recall compared with a human reviewer. However, the assessment of automation tools does present unique challenges for investigators and systematic reviewers, including the need to clarify which metrics are of interest to the systematic review community and the unique documentation challenges of reproducible software experiments.
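The diagnostic-style evaluation mentioned above can be sketched concretely: treat the human reviewer's include/exclude decisions as the reference and score the tool against them. The decisions below are invented for illustration.

```python
# Sketch: evaluating an automation tool's screening decisions against a
# human reviewer, in the style of a diagnostic test evaluation.

def precision_recall(human, tool):
    """human/tool: parallel lists of include (1) / exclude (0) decisions."""
    tp = sum(1 for h, t in zip(human, tool) if h == 1 and t == 1)
    fp = sum(1 for h, t in zip(human, tool) if h == 0 and t == 1)
    fn = sum(1 for h, t in zip(human, tool) if h == 1 and t == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

human = [1, 1, 0, 0, 1, 0, 1, 0]  # reviewer's screening decisions
tool  = [1, 0, 0, 1, 1, 0, 1, 0]  # tool's decisions on the same records
p, r = precision_recall(human, tool)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75
```

For screening tasks, recall is usually the metric of greater interest, since a missed relevant study (a false negative) is costlier to a review than an extra record to screen.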
Conclusion: We discuss adoption barriers with the goal of giving tool developers guidance on how to design and report such evaluations, and end users a basis for assessing their validity. Further, we discuss approaches to formatting and announcing publicly available datasets suitable for the assessment of automation technologies and tools. Making these resources available will increase trust that tools are non-inferior or superior to current practice. Finally, we note that, even with evidence that automation tools are non-inferior or superior to current practice, substantial set-up challenges remain for mainstream integration of automation into the systematic review process.
The Oracle Problem in Software Testing: A Survey
Testing involves examining the behaviour of a system in order to discover potential faults. Given an input for a system, the challenge of distinguishing the corresponding desired, correct behaviour from potentially incorrect behaviour is called the "test oracle problem". Test oracle automation is important to remove a current bottleneck that inhibits greater overall test automation. Without test oracle automation, the human has to determine whether observed behaviour is correct. The literature on test oracles has introduced techniques for oracle automation, including modelling, specifications, contract-driven development and metamorphic testing. When none of these is completely adequate, the final source of test oracle information remains the human, who may be aware of informal specifications, expectations, norms and domain-specific information that provide informal oracle guidance. All forms of test oracles, even the humble human, involve challenges of reducing cost and increasing benefit. This paper provides a comprehensive survey of current approaches to the test oracle problem and an analysis of trends in this important area of software testing research and practice.
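Of the oracle-automation techniques the survey names, metamorphic testing is easy to illustrate: instead of knowing the exact expected output, we check a relation that must hold between outputs of related inputs. The program under test and the relation below are a standard textbook example, not taken from the survey.

```python
# Sketch of metamorphic testing: we never state what sin(x) should be,
# only a relation that any correct implementation must satisfy.
import math

def program_under_test(x):
    return math.sin(x)  # stand-in for a system with no direct oracle

def check_metamorphic_relation(x):
    # Relation: sin(pi - x) == sin(x), up to floating-point tolerance.
    return math.isclose(program_under_test(math.pi - x),
                        program_under_test(x), abs_tol=1e-9)

# The relation acts as a partial oracle over many inputs:
assert all(check_metamorphic_relation(x / 10) for x in range(31))
print("metamorphic relation holds")
```

A buggy implementation is unlikely to satisfy such relations across many inputs, which is what makes them useful when no exact expected output is available.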
AUnit - a testing framework for alloy
Writing declarative models of software designs and analyzing them to detect defects is an effective methodology for developing more dependable software systems. However, writing such models correctly can be challenging for practitioners who may not be proficient in declarative programming, and the models themselves may be buggy. We introduce the foundations of a novel test automation framework, AUnit, which we envision for testing declarative models written in Alloy -- a first-order, relational language that is supported by its SAT-based analyzer. We take inspiration from the success of the family of xUnit frameworks that are used widely in practice for test automation, albeit for imperative or object-oriented programs. The key novelty of our work is to define a basis for unit testing for Alloy, specifically, to define the concepts of test case and test coverage as well as coverage criteria for declarative models. We reduce the problems of declarative test execution and coverage computation to partial evaluation without requiring SAT solving. Our vision is to blend how developers write unit tests in commonly used programming languages with how Alloy users formulate their models in Alloy, thereby facilitating the development and testing of Alloy models for both new Alloy users as well as experts. We illustrate our ideas using a small but complex Alloy model. While we focus on Alloy, our ideas generalize to other declarative languages (such as Z, B, ASM).
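The core idea — a test case is a concrete instance, and "execution" is direct evaluation of a declarative constraint on that instance, with no SAT solving — can be sketched outside Alloy. The sketch below uses Python rather than Alloy, and the acyclicity constraint and instances are invented for illustration; they are not the paper's example model.

```python
# AUnit-style idea in Python: evaluate a declarative constraint directly
# on concrete instances (the "test cases") instead of solving for instances.

def acyclic(edges):
    """Declarative constraint: the successor relation contains no cycle."""
    for start in {a for a, _ in edges}:
        seen, node = set(), start
        while True:
            succs = [b for a, b in edges if a == node]
            if not succs:
                break  # chain ends without revisiting a node
            node = succs[0]
            if node in seen or node == start:
                return False  # revisited a node: cycle found
            seen.add(node)
    return True

# Unit tests: concrete instances with expected valuations of the constraint.
assert acyclic({("a", "b"), ("b", "c")}) is True
assert acyclic({("a", "b"), ("b", "a")}) is False
print("all AUnit-style tests passed")
```

Each assertion plays the role of an AUnit test case: a fixed valuation of the model's relations plus the expected truth value of the constraint under test.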
An 'on-demand' Data Communication Architecture for Supplying Multiple Applications from a Single Data Source: An Industrial Application Case Study
A key aspect of automation is the manipulation of feedback sensor data for the automated control of particular process actuators. Often in practice this data can be reused for other applications, such as the live update of a graphical user interface, a fault detection application or a business intelligence process performance engine in real time. In order for this data to be reused effectively, an appropriate data communication architecture must be utilised to provide such functionality. This architecture must accommodate the dependencies of the system and sustain the required data transmission speed to ensure stability and data integrity. Such an architecture is presented in this paper, which shows how the data needs of multiple applications are satisfied from a single source of data. It shows how the flexibility of this architecture enables the integration of additional data sources as the data dependencies grow. This research is based on the development of a fully integrated automation system for the test of fuel controls used on civil transport aircraft engines.
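The single-source, multi-consumer pattern this abstract describes is, at its simplest, a publish/subscribe fan-out. The sketch below is a minimal illustration of that pattern only; the class name, the callbacks, and the `fuel_flow` reading are invented and are not the paper's architecture.

```python
# Minimal publish/subscribe sketch: one sensor stream fanned out to
# several applications (control loop, GUI, fault detection, ...).

class SensorHub:
    """Distributes each reading from a single source to all subscribers."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, reading):
        for callback in self._subscribers:
            callback(reading)

hub = SensorHub()
control_log, ui_log = [], []
hub.subscribe(control_log.append)  # e.g. the automated control loop
hub.subscribe(ui_log.append)       # e.g. the live GUI update
hub.publish({"sensor": "fuel_flow", "value": 12.7})
print(control_log == ui_log == [{"sensor": "fuel_flow", "value": 12.7}])  # True
```

Adding a new consumer (or, symmetrically, wrapping several hubs behind one interface to add data sources) requires no change to the existing producers, which is the flexibility the abstract emphasises.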
Overcoming non-determinism in testing smart devices: how to build models of device behaviour
Justification of smart instruments has become an important topic in the nuclear industry. In practice, however, the publicly available artefacts are often the only source of information about the device. Therefore, in many cases independent black-box testing may be the only way to increase the confidence in the device. In this paper we provide a set of recommendations, which we consider to be the best practices for performing black-box assessments. We present our method of testing smart instruments, in which we use the publicly available artefacts only. We present a test harness and describe a method of test automation. We focus on the analysis of test results, which is made particularly complex by the inherent non-determinism in the testing of analogue devices. In the paper we analyse the sources of non-determinism, which for instance may arise from inaccuracy in an analogue measurement made by the device when two alternative actions are possible. We propose three alternative ideas on how to build models of device behaviour, which can cope with this kind of non-determinism. We compare and contrast these three solutions, and express our recommendations. Finally, we use a case study, in which a black-box assessment of two similar smart instruments is performed to illustrate the differences between the solutions.
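One simple way to cope with the non-determinism described above is a model that, whenever the analogue input lies within the device's measurement tolerance of a trip threshold, accepts either action as correct. This is a sketch of that single idea, not any of the paper's three proposed models; the threshold, tolerance, and action names are invented for illustration.

```python
# Sketch of a tolerant oracle for an analogue device: near the trip
# threshold, measurement inaccuracy means either action may be correct.

THRESHOLD = 5.0   # trip point of a hypothetical smart instrument
TOLERANCE = 0.05  # assumed inaccuracy of its analogue measurement

def allowed_actions(analogue_input):
    """Set of device actions the model accepts for a given input."""
    if analogue_input < THRESHOLD - TOLERANCE:
        return {"normal"}
    if analogue_input > THRESHOLD + TOLERANCE:
        return {"trip"}
    return {"normal", "trip"}  # within tolerance: both are acceptable

assert allowed_actions(4.9) == {"normal"}
assert allowed_actions(5.1) == {"trip"}
assert allowed_actions(5.02) == {"normal", "trip"}
print("non-deterministic oracle checks passed")
```

A test then fails only when the observed action falls outside the allowed set, so legitimate measurement jitter near the threshold no longer produces spurious failures.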
Mentoring Activities for the Competency Certification Institution (LSP) of SMK Al-Muhtadin
The problem faced by newly established professional certification institutions, especially LSP Al-Muhtadin, located in Cipayung, Depok, West Java, is a lack of knowledge of institutional management. This mentoring activity therefore aims to improve the skills of vocational teachers as assessors in carrying out competency test assessments for students majoring in office administration automation (OTKP) and computer network engineering (TKJ), in accordance with BNSP standards and administrative management. The methods used are lectures, practice, and question-and-answer sessions. The activities enhanced the teachers' and administrators' knowledge and understanding of the material, in both theory and practice, and increased their abilities in administrative management and in the competency test assessment process.
A Bootstrap Theory: the SEMAT Kernel Itself as Runnable Software
The SEMAT kernel is a thoroughly thought-out generic framework for Software
Engineering system development in practice. But one should be able to test its
characteristics by means of a no less generic theory matching the SEMAT kernel.
This paper claims that such a matching theory is attainable and describes its
main principles. The conceptual starting point is the robustness of the Kernel
alphas to variations in the nature of the software system, viz. to software
automation, distribution and self-evolution. From these and from observed
Kernel properties follows the proposed bootstrap principle: a software system
theory should itself be runnable software. Thus, the kernel alphas can be
viewed as a top-level ontology, indeed the Essence of Software Engineering.
Among the interesting consequences of this bootstrap theory, the observable
system characteristics can now be formally tested. For instance, one can check
the system completeness, viz. that the software system modules fulfill each one
of the system requirements.
Comment: 8 pages; 2 figures; preprint of a paper accepted for the GTSE'2014 Workshop, within the ICSE'2014 Conference.
Implementation of a Neural Network to Control Inputs and Outputs in IoT-Based Automatic Plant Watering and Fertilizing
Agriculture is an important sector in human life. However, in practice, agriculture still faces many challenges, such as difficulty in optimally controlling the watering and fertilizing of crops. To overcome this problem, an automatic plant watering and fertilizing system was developed as an alternative solution. This system can help farmers control watering and fertilizing automatically and optimally based on soil and plant conditions measured by sensors. In practice, automation systems for watering and fertilizing usually still use simple rules based on farmers' experience or theory. Therefore, implementing a neural network in such a system can help predict plants' irrigation needs accurately and control watering and fertilizing automatically. To prove the effectiveness of the proposed method, testing was carried out using the Neuroph Studio application. From the test results, the total error of the system's control output is less than 0.01 of the desired output value. This result indicates that the neural network is an effective choice as a learning method. In addition, by using IoT technology, the automation system can be connected to the internet, so it can be accessed remotely and monitored in real time. This makes it easier for users to control the automation system and monitor the state of the plants.
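The mapping such a system learns, from sensor readings to a control output, can be illustrated with a tiny hand-built network. The weights below are fixed by hand purely to show the shape of the computation; the paper trains its network in Neuroph Studio, and the function name and normalisation are invented for this sketch.

```python
# Minimal sketch: a single sigmoid neuron mapping sensor readings
# (soil moisture, air temperature) to a watering-duration output.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def watering_output(moisture, temperature, w=(-6.0, 2.0, 2.0)):
    """Hand-picked weights: drier soil / hotter air -> longer watering.

    Inputs are normalised to [0, 1]; the output in [0, 1] scales pump time.
    """
    w_moist, w_temp, bias = w
    return sigmoid(w_moist * moisture + w_temp * temperature + bias)

dry = watering_output(moisture=0.1, temperature=0.8)
wet = watering_output(moisture=0.9, temperature=0.3)
print(dry > wet)  # True: dry, hot conditions call for more watering
```

A trained network replaces the hand-picked weights with learned ones, which is what lets it outperform the simple hand-written rules the abstract mentions.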