
    Maintenance of Automated Test Suites in Industry: An Empirical study on Visual GUI Testing

    Context: Verification and validation (V&V) activities make up 20 to 50 percent of the total development costs of a software system in practice. Test automation is proposed to lower these V&V costs, but available research provides only limited empirical data from industrial practice about the maintenance costs of automated tests and the factors that affect these costs. In particular, these costs and factors are unknown for automated GUI-based testing. Objective: This paper addresses this lack of knowledge through analysis of the costs and factors associated with the maintenance of automated GUI-based tests in industrial practice. Method: An empirical study at two companies, Siemens and Saab, is reported in which interviews about, and empirical work with, Visual GUI Testing were performed to acquire data about the technique's maintenance costs and feasibility. Results: 13 factors are observed to affect maintenance, e.g., tester knowledge/experience and test case complexity. Further, statistical analysis shows that developing new test scripts is costlier than maintenance, but also that frequent maintenance is less costly than infrequent, big-bang maintenance. In addition, a cost model, based on previous work, is presented that estimates the time to positive return on investment (ROI) of test automation compared to manual testing. Conclusions: It is concluded that test automation can lower the overall software development costs of a project whilst also having positive effects on software quality. However, maintenance costs can still be considerable, and the less time a company currently spends on manual testing, the more time is required before positive economic ROI is reached after automation.
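    The paper's cost model itself is not reproduced in the abstract, but the break-even intuition behind such ROI estimates can be sketched in a few lines. All cost figures below are assumptions for illustration, not values from the study.

```python
# A minimal break-even sketch of the automation-ROI idea described above;
# not the paper's actual cost model, and all cost figures are assumed.

def cycles_to_positive_roi(dev_cost: float,
                           maint_cost_per_cycle: float,
                           manual_cost_per_cycle: float) -> float:
    """Test cycles until cumulative automation cost undercuts manual cost,
    i.e. the smallest n with dev_cost + n * maint < n * manual."""
    saving_per_cycle = manual_cost_per_cycle - maint_cost_per_cycle
    if saving_per_cycle <= 0:
        return float("inf")  # automation never pays off at these costs
    return dev_cost / saving_per_cycle

# Example: 40 h to develop the scripts, 2 h of maintenance per cycle,
# replacing 6 h of manual testing per cycle -> break-even after 10 cycles.
print(cycles_to_positive_roi(40.0, 2.0, 6.0))
```

    Consistent with the paper's conclusion, a smaller manual_cost_per_cycle shrinks the per-cycle saving and pushes the break-even point further out.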

    Automated Web Application Testing Using Improvement Selenium

    The basis of this project is to develop an automated web application testing tool through enhancement of Selenium, an open-source testing tool. The tool should be able to handle automated tests run by a group of testers. Current testing tools are proprietary, and not all test engineers are expert in, or have deep knowledge of, the programming required to create automated test cases. The main problem in testing is that it is time-consuming, as some testers still test manually. The scope of the study is to develop a tool for software testers who focus on functional testing, able to execute test cases on different platforms: Windows, Linux and Mac. Each platform hosts different browsers: Firefox, Internet Explorer and Chrome. Qualitative research was done in this project by interviewing several software test engineers who are familiar with automated testing, and a framework was designed to fulfil the requirements and address the problems currently faced. The framework is developed and coded using the Eclipse IDE, Apache ANT and TestNG. As a result, the testing tool combines Selenium Grid and TestNG to create a template that software testers can use easily. The framework is compatible with the browsers and platforms commonly used by Internet users.
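    The combination the project describes, Selenium Grid dispatching one test to several browser/platform nodes, can be illustrated with a short sketch. The paper's framework itself is built in Java with Eclipse, Apache ANT and TestNG; this analogue uses Selenium's Python bindings, and the hub URL and target site are assumptions.

```python
from selenium import webdriver

HUB_URL = "http://localhost:4444/wd/hub"  # assumed local Selenium Grid hub

def options_for(browser: str):
    # Grid routes each session to a node offering the requested browser.
    return {"chrome": webdriver.ChromeOptions,
            "firefox": webdriver.FirefoxOptions}[browser]()

def check_homepage(browser: str) -> None:
    driver = webdriver.Remote(command_executor=HUB_URL,
                              options=options_for(browser))
    try:
        driver.get("https://example.org")
        assert "Example" in driver.title  # simple functional check
    finally:
        driver.quit()

# The same functional test runs unchanged against every registered node.
for browser in ("chrome", "firefox"):
    check_homepage(browser)
```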

    The perceived usability of automated testing tools for mobile applications

    Mobile application development is a fast-emerging area in software development. The testing of mobile applications is very significant, and many tools are available for testing such applications, particularly for Android and iOS. This paper presents the most frequently used automated testing tools for mobile applications. In this study, we found that Android app developers use automated testing tools such as JUnit, MonkeyTalk, Robotium, Appium, and Robolectric. However, they often prefer to test their apps manually, whereas Windows app developers prefer to use in-house tools such as Visual Studio and Microsoft Test Manager. Both Android and Windows app developers face many challenges, such as time constraints, compatibility issues, lack of exposure, and cumbersome tools. Software testing tools are key assets of a project that can help improve productivity and software quality. A survey method was used to assess the perceived usability of automated testing tools, with forty (40) respondents as participants. The results indicate that JUnit has the highest perceived usability. The study's results will benefit practitioners and researchers in the research and development of usable automated testing tools for mobile applications.

    Evaluation of image comparison algorithms as test oracles

    Black-box testing of software-intensive embedded systems such as TVs is performed via their graphical user interfaces (GUIs). A series of user events is triggered to automate these tests. Meanwhile, a test oracle is needed that decides whether tests pass or fail by differentiating between correct and incorrect system behavior. Image comparison tools are commonly used for this purpose. These tools compare the GUI screen observed during tests with a previously recorded snapshot of a reference GUI screen. In this work, we evaluated 9 image comparison tools in an industrial case study. We collected 1000 pairs of reference and runtime GUI images during test activities performed on a real TV system and labeled these image pairs as passed and failed tests. In addition, we categorized the data set according to various effects observed on the images, such as pixel shifting, hue/saturation differences, and scaling. This data set was then used to compare the tools in terms of accuracy and performance. We observed that results depend on tool parameters and on the image effects that take place. We identified the best tool, and its parameter set, for the collected data set.
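    A pass/fail image-comparison oracle of the kind evaluated here can be sketched with Pillow. The tolerance values are illustrative, not the best-performing parameter set identified in the study.

```python
from PIL import Image, ImageChops

def screens_match(reference_path: str, runtime_path: str,
                  pixel_tolerance: int = 16,
                  max_diff_ratio: float = 0.01) -> bool:
    """Pass if at most max_diff_ratio of pixels differ by more than
    pixel_tolerance on any RGB channel."""
    ref = Image.open(reference_path).convert("RGB")
    run = Image.open(runtime_path).convert("RGB").resize(ref.size)
    diff = ImageChops.difference(ref, run)
    changed = sum(1 for px in diff.getdata() if max(px) > pixel_tolerance)
    return changed / (ref.size[0] * ref.size[1]) <= max_diff_ratio

# Oracle usage: the test passes only if the observed GUI screen matches
# the previously recorded reference snapshot.
# assert screens_match("reference.png", "runtime.png")
```

    A naive per-pixel comparison like this is exactly what the categorized effects stress: a one-pixel shift or a slight hue change can flip the verdict, which is why tool parameters mattered in the evaluation.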

    Trade-off between automated and manual testing: A production possibility curve cost model

    Testing is always important in Software Quality Assurance (SQA) activities and is a key cost multiplier in software development. The decision of whether or not to automate a test case is critical. In this paper we discuss the possibility of test automation in relation to the trade-off between manual and automated test cases. We propose a production-possibility-frontier-based technique to determine the split between automated and manual tests within cost constraints. Our objective is to identify to what extent a testing process can be automated. A cost model is proposed for deciding the proportion of automated and manual testing. The objective is to find the best possible combination of the two, where producing more of one type of testing means forgoing some of the other.
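    A toy version of the production possibility frontier can make the trade-off concrete: with a fixed testing budget, every automated test case produced forgoes some number of manual test executions. All cost figures are assumed for illustration.

```python
BUDGET = 120.0        # total testing hours available (assumed)
COST_AUTOMATED = 3.0  # hours to script one automated test case (assumed)
COST_MANUAL = 1.0     # hours to execute one test case manually (assumed)

def frontier(budget: float, cost_auto: float, cost_manual: float):
    """Maximum manual tests still affordable at each automated-test count."""
    points = []
    n_auto = 0
    while n_auto * cost_auto <= budget:
        remaining = budget - n_auto * cost_auto
        points.append((n_auto, int(remaining // cost_manual)))
        n_auto += 1
    return points

for n_auto, n_manual in frontier(BUDGET, COST_AUTOMATED, COST_MANUAL)[::10]:
    print(f"{n_auto:3d} automated tests leave room for {n_manual:3d} manual runs")
```

    Points on the frontier are the efficient mixes; a cost model like the paper's then picks a proportion among them subject to the project's constraints.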

    Creating GUI testing tools using accessibility technologies

    Since manual black-box testing of GUI-based APplications (GAPs) …
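    The technique named in the title, driving a GUI through platform accessibility APIs rather than screen coordinates, can be sketched with pywinauto, whose "uia" backend rides Windows UI Automation; the application and control names here are illustrative, not from the paper.

```python
from pywinauto import Application

# Launch the target application and attach to its top-level window.
app = Application(backend="uia").start("notepad.exe")
win = app.window(title_re=".*Notepad")
win.wait("ready", timeout=10)

# The window is located via accessibility metadata (name, control type),
# so the script keeps working across pixel-level rendering changes.
win.type_keys("hello from an accessibility-driven test", with_spaces=True)
win.close()  # teardown; handling of the save-changes dialog is omitted
```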

    Meeting quality standards for mobile application development in businesses: A framework for cross-platform testing

    How do you test the same application developed for multiple mobile platforms in an effective way? Companies offering apps have to develop the same features across several platforms in order to reach the majority of potential users. However, verifying that these apps work as intended across a set of heterogeneous devices and operating systems is not trivial. Manual testing can be performed, but this is time-consuming, repetitive and error-prone. Automated tools exist in the form of frameworks such as Frank and Robotium; however, they lack the ability to run repeated tests across multiple heterogeneous devices. This article presents an extensible architecture and conceptual prototype that showcases and combines parallel cross-platform test execution with performance measurements. In so doing, this work contributes to the quality-assurance process by automating parts of a regression test for mobile cross-platform applications.
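    The parallel fan-out this architecture describes, one test definition executed concurrently against several heterogeneous targets with timing captured per target, can be sketched as below. Device names and the run_on_device hook are placeholders for a real platform driver such as Appium.

```python
import time
from concurrent.futures import ThreadPoolExecutor

DEVICES = ["android-phone", "android-tablet", "ios-phone"]  # assumed targets

def run_on_device(device: str) -> tuple:
    """Run the shared test against one target and measure its duration."""
    start = time.perf_counter()
    passed = True  # placeholder: invoke the platform driver here
    return device, passed, time.perf_counter() - start

# One worker per device gives the parallel cross-platform execution.
with ThreadPoolExecutor(max_workers=len(DEVICES)) as pool:
    for device, passed, seconds in pool.map(run_on_device, DEVICES):
        print(f"{device}: {'PASS' if passed else 'FAIL'} in {seconds:.3f}s")
```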

    Software Testing: An Analysis of the Impacts of Test Automation on Software’s Cost, Quality and Time

    Software testing is an essential yet time-consuming and tedious task in the software development cycle, despite the availability of highly capable quality assurance teams and tools. Test automation is widely utilised within the software industry to provide increased testing capacity and to ensure high product quality and reliability. This thesis specifically addresses automated testing whereby test cases are written manually and executed automatically. Test automation has its benefits, drawbacks, and impacts on different stages of development. Furthermore, there is often a disconnect between non-technical and technical roles, where non-technical roles (e.g., management) predominantly strive to reduce costs and delivery time, whereas technical roles are often driven by quality and completeness. Although it is widely understood that there are challenges with adopting and using automated testing, there is a lack of evidence to understand the different attitudes toward automated testing, focusing specifically on why it is not adopted. In this thesis, the author has surveyed practitioners in different roles within the software industry to determine common trends and draw conclusions. A two-stage approach is presented, comprising a comprehensive descriptive analysis and the use of Principal Component Analysis (PCA). In total, 81 participants were asked a series of 22 questions, and their responses were compared across job role types and experience levels. In summary, 6 key findings are presented, covering expertise, time, cost, tools and techniques, utilisation, organisation and capacity.
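    The second analysis stage, Principal Component Analysis over the survey responses, can be sketched with scikit-learn. The synthetic Likert-style matrix below merely stands in for the 81 participants by 22 questions described above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(81, 22))  # synthetic 1-5 Likert answers

# Standardize each question, then project onto the principal components.
scaled = StandardScaler().fit_transform(responses)
pca = PCA(n_components=5).fit(scaled)

# Components explaining a large share of variance suggest question groups
# answered similarly, e.g. cost-driven versus quality-driven attitudes.
print(pca.explained_variance_ratio_)
```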
