
    Model checking user interfaces

    User interfaces are crucial to the success of most software projects. As software grows in complexity, the user interface grows in complexity as well, leading to bugs that may be difficult to find through testing. In this paper we use automated model checking to verify user interfaces against a formal specification. We present an algorithm for the automated abstraction of the user interface model of a given system, which uses asynchronous and interleaving composition of a number of programs. The technique successfully verified the user interface of a case study and brings us one step closer to push-button verification.
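
    The paper's abstraction and composition algorithm is not reproduced here; as a rough sketch of the underlying idea, the following Java code shows an explicit-state reachability check over a hypothetical UI model, with the state representation, transition interface, and safety property all assumed purely for illustration.

        import java.util.*;
        import java.util.function.Predicate;

        // Explicit-state reachability check over an abstracted UI model.
        // The UiState record, Transitions interface and safety property are
        // hypothetical; the paper's abstraction algorithm is not reproduced.
        public class UiModelChecker {

            // A UI state is reduced here to a screen name plus its enabled widgets.
            record UiState(String screen, Set<String> enabledWidgets) {}

            interface Transitions {
                // Successor states reachable from s by one user action.
                List<UiState> successors(UiState s);
            }

            // Breadth-first search over all reachable states; returns a violating
            // state if the safety property fails somewhere, or empty if it holds.
            static Optional<UiState> check(UiState initial, Transitions t,
                                           Predicate<UiState> safe) {
                Set<UiState> visited = new HashSet<>();
                Deque<UiState> frontier = new ArrayDeque<>();
                frontier.add(initial);
                while (!frontier.isEmpty()) {
                    UiState s = frontier.poll();
                    if (!visited.add(s)) continue;
                    if (!safe.test(s)) return Optional.of(s);
                    frontier.addAll(t.successors(s));
                }
                return Optional.empty();
            }
        }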

    An Automated Application Testing System for the Android OS

    The article discusses the development of an in-house framework for automated user interface testing on the Android OS. Existing approaches to automated application testing on other platforms are analysed, and a method for developing a custom automated testing framework for Android is proposed. The results make it possible to quickly create automated tests for an application's user interface.
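
    The article's own framework is not reproduced here. Purely as an illustration of what an automated Android UI test looks like, the sketch below uses the standard Espresso API; MainActivity and the R.id.username, R.id.login_button, and R.id.greeting identifiers are hypothetical placeholders.

        import androidx.test.ext.junit.rules.ActivityScenarioRule;
        import androidx.test.ext.junit.runners.AndroidJUnit4;
        import org.junit.Rule;
        import org.junit.Test;
        import org.junit.runner.RunWith;

        import static androidx.test.espresso.Espresso.onView;
        import static androidx.test.espresso.action.ViewActions.click;
        import static androidx.test.espresso.action.ViewActions.typeText;
        import static androidx.test.espresso.assertion.ViewAssertions.matches;
        import static androidx.test.espresso.matcher.ViewMatchers.withId;
        import static androidx.test.espresso.matcher.ViewMatchers.withText;

        // Illustrative Espresso test; MainActivity and the view ids are placeholders.
        @RunWith(AndroidJUnit4.class)
        public class LoginScreenTest {

            @Rule
            public ActivityScenarioRule<MainActivity> rule =
                    new ActivityScenarioRule<>(MainActivity.class);

            @Test
            public void loginButtonShowsGreeting() {
                onView(withId(R.id.username)).perform(typeText("alice"));
                onView(withId(R.id.login_button)).perform(click());
                onView(withId(R.id.greeting)).check(matches(withText("Hello, alice")));
            }
        }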

    A Comparative Analysis of Automated Testing Tools for Web-Based Applications

    Many automated testing tools are available on the market today, but not all of them support every type of test. The purpose of this study is to compare automated testing tools for graphical user interfaces (GUIs) based on predetermined parameters and test cases. The study uses Selenium WebDriver and Katalon Studio as the tools under comparison, and the parameters are test case execution time, delay, image-based testing, scrolling, and documentation of execution results. The DiaryMe application is the object of the research, and the Software Testing Life Cycle (STLC) method is used for the testing phases. Based on the analysis and testing carried out, the researchers recommend Katalon Studio as the more suitable tool, in terms of ease of use and of running test cases, for testing web-based applications.
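
    The study's actual DiaryMe test cases are not available here; the following sketch only illustrates, under assumed page URLs and element ids, how a Selenium WebDriver test case can be written in Java and its execution time measured, one of the comparison parameters listed above.

        import org.openqa.selenium.By;
        import org.openqa.selenium.WebDriver;
        import org.openqa.selenium.chrome.ChromeDriver;

        // Illustrative Selenium WebDriver test case with a simple execution-time
        // measurement; the URL and element locators are placeholders, not the
        // actual DiaryMe test cases used in the study.
        public class LoginTimingTest {
            public static void main(String[] args) {
                WebDriver driver = new ChromeDriver();
                try {
                    long start = System.currentTimeMillis();

                    driver.get("https://example.com/login");          // placeholder URL
                    driver.findElement(By.id("email")).sendKeys("user@example.com");
                    driver.findElement(By.id("password")).sendKeys("secret");
                    driver.findElement(By.id("submit")).click();

                    long elapsed = System.currentTimeMillis() - start;
                    System.out.println("Test case execution time: " + elapsed + " ms");
                } finally {
                    driver.quit();
                }
            }
        }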

    NASA Tech Briefs, August 2007

    Topics include: Program Merges SAR Data on Terrain and Vegetation Heights; Using G(exp 4)FETs as a Data Router for In-Plane Crossing of Signal Paths; Two Algorithms for Processing Electronic Nose Data; Radiation-Tolerant Dual Data Bus; General-Purpose Front End for Real-Time Data Processing; Nanocomposite Photoelectrochemical Cells; Ultracapacitor-Powered Cordless Drill; Cumulative Timers for Microprocessors; Photocatalytic/Magnetic Composite Particles; Separation and Sealing of a Sample Container Using Brazing; Automated Aerial Refueling Hitches a Ride on AFF; Cobra Probes Containing Replaceable Thermocouples; High-Speed Noninvasive Eye-Tracking System; Detergent-Specific Membrane Protein Crystallization Screens; Evaporation-Cooled Protective Suits for Firefighters; Plasmonic Antenna Coupling for QWIPs; Electronic Tongue Containing Redox and Conductivity Sensors; Improved Heat-Stress Algorithm; A Method of Partly Automated Testing of Software; Rover Wheel-Actuated Tool Interface; and Second-Generation Electronic Nose.

    Guiding Random Graphical and Natural User Interface Testing Through Domain Knowledge

    Users have access to a diverse set of interfaces that can be used to interact with software. Tools exist for automatically generating test data for an application, but the data required by each user interface is complex. Generating realistic data similar to that of a user is difficult. The environment in which an application runs may also limit the data available, or updates to an operating system can break support for tools that generate test data. Consequently, applications exist for which there are no automated methods of generating test data similar to that which a user would provide through real usage of a user interface. With no automated method of generating data, the cost of testing increases and there is an increased chance of bugs being released into production code. In this thesis, we investigate techniques which aim to mimic users, observing how stored user interactions can be split to generate data targeted at specific states of an application, or to generate different subareas of the data structure provided by a user interface. To reduce the cost of gathering and labelling graphical user interface data, we look at generating randomised screenshots of applications, which can be automatically labelled and used in the training stage of a machine learning model. These trained models could guide a randomised approach to generating tests, achieving a significantly higher branch coverage than an unguided random approach. However, for natural user interfaces, which allow interaction through body tracking, we could not learn such a model from generated data. We find that models derived from real user data can generate tests with a significantly higher branch coverage than a purely random tester for both natural and graphical user interfaces. Our approaches use no feedback from an application during test generation; consequently, the models are “generating data in the dark”. Despite this, these models can still generate tests with a higher coverage than random testing, but there may be a benefit to inferring the current state of an application and using this to guide data generation.
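
    As a rough illustration of the guided-versus-unguided distinction described above, the sketch below contrasts uniform random action selection with selection weighted by a learned distribution. The action names and weights are invented for the example (in the thesis they would come from models trained on user data or labelled screenshots), and no feedback from the application is used, mirroring the "generating data in the dark" setting.

        import java.util.*;

        // Unguided random testing picks actions uniformly; model-guided testing
        // samples from a probability distribution learned offline. The weights
        // here are hard-coded stand-ins for a trained model's output.
        public class GuidedRandomTester {
            static final List<String> ACTIONS = List.of("click", "type", "scroll", "swipe");

            // Hypothetical learned weights over actions for the current context.
            static final double[] LEARNED_WEIGHTS = {0.55, 0.25, 0.15, 0.05};

            static String uniformAction(Random rng) {
                return ACTIONS.get(rng.nextInt(ACTIONS.size()));
            }

            static String guidedAction(Random rng) {
                double r = rng.nextDouble(), acc = 0.0;
                for (int i = 0; i < ACTIONS.size(); i++) {
                    acc += LEARNED_WEIGHTS[i];
                    if (r < acc) return ACTIONS.get(i);
                }
                return ACTIONS.get(ACTIONS.size() - 1);
            }

            public static void main(String[] args) {
                Random rng = new Random(42);
                for (int i = 0; i < 5; i++) {
                    System.out.printf("unguided: %-7s guided: %s%n",
                            uniformAction(rng), guidedAction(rng));
                }
            }
        }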

    JWalk: a tool for lazy, systematic testing of java classes by design introspection and user interaction

    Popular software testing tools, such as JUnit, allow frequent retesting of modified code; yet the manually created test scripts are often seriously incomplete. A unit-testing tool called JWalk has therefore been developed to address the need for systematic unit testing within the context of agile methods. The tool operates directly on the compiled code for Java classes and uses a new lazy method for inducing the changing design of a class on the fly. This is achieved partly through introspection, using Java’s reflection capability, and partly through interaction with the user, constructing and saving test oracles on the fly. Predictive rules reduce the number of oracle values that must be confirmed by the tester. Without human intervention, JWalk performs bounded exhaustive exploration of the class’s method protocols and may be directed to explore the space of algebraic constructions, or the intended design state-space of the tested class. With some human interaction, JWalk performs up to the equivalent of fully automated state-based testing, from a specification that was acquired incrementally.
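
    JWalk's lazy specification algorithm, saved oracles, and predictive rules are not reproduced here; the sketch below only illustrates the kind of design introspection it builds on, using Java reflection to enumerate a class's zero-argument public method protocol and perform a bounded exhaustive exploration of call sequences.

        import java.lang.reflect.Method;
        import java.lang.reflect.Modifier;
        import java.util.ArrayList;
        import java.util.List;

        // Bounded exhaustive exploration of a class's zero-argument public method
        // protocol via reflection, loosely in the spirit of JWalk's design
        // introspection; JWalk's lazy specification, saved oracles and predictive
        // rules are not reproduced here.
        public class BoundedProtocolExplorer {

            static void explore(Class<?> cls, int bound) throws Exception {
                List<Method> protocol = new ArrayList<>();
                for (Method m : cls.getDeclaredMethods()) {
                    if (m.getParameterCount() == 0 && Modifier.isPublic(m.getModifiers())) {
                        protocol.add(m);
                    }
                }
                enumerate(cls, protocol, new ArrayList<>(), bound);
            }

            // Enumerates every method sequence up to the length bound, replays each
            // one on a fresh instance, and prints the resulting object as a
            // candidate oracle value for a human to confirm.
            static void enumerate(Class<?> cls, List<Method> protocol,
                                  List<Method> sequence, int remaining) throws Exception {
                Object fresh = cls.getDeclaredConstructor().newInstance();
                for (Method m : sequence) m.invoke(fresh);
                System.out.println(sequence.stream().map(Method::getName).toList()
                        + " -> " + fresh);
                if (remaining == 0) return;
                for (Method m : protocol) {
                    sequence.add(m);
                    enumerate(cls, protocol, sequence, remaining - 1);
                    sequence.remove(sequence.size() - 1);
                }
            }
        }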

    Semi Automated Partial Credit Grading of Programming Assignments

    The grading of student programs is a time-consuming process. As class sizes continue to grow, especially in entry-level courses, manually grading student programs has become an even more daunting challenge. Further increasing the difficulty of grading are the needs of graphical and interactive programs, such as those used as part of the UNH Computer Science curriculum (and in various textbooks). There are existing tools that support the grading of introductory programming assignments (TAME and Web-CAT). There are also frameworks that can be used to test student code (JUnit, Tester, and TestNG). While these programs and frameworks are helpful, they have little or no support for programs that use real data structures or that have interactive or graphical features. In addition, the automated tests in all these tools provide only “all or nothing” evaluation, which is a significant limitation in many circumstances. Moreover, there is little or no support for dynamic alteration of grading criteria, which means that refactoring of test classes after deployment is not easily done. Our goal is to create a framework that addresses these weaknesses. This framework needs to: 1. Support assignments that have interactive and graphical components. 2. Handle data structures in student programs such as lists, stacks, trees, and hash tables. 3. Be able to assign partial credit automatically when the instructor can predict errors in advance. 4. Provide additional answer-clustering information to help graders identify and assign consistent partial credit for incorrect output that was not predefined. Most importantly, these tools, collectively called RPM (short for Rapid Program Management), should interface effectively with our current grading support framework without requiring large amounts of rewriting or refactoring of test code.
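
    The RPM framework itself is not shown in the abstract; the following sketch merely illustrates the third requirement, assigning partial credit automatically from instructor-predicted wrong answers, with all names, outputs, and credit values invented for the example.

        import java.util.Map;

        // Minimal sketch of the "predicted error" idea: full credit for the correct
        // answer, instructor-defined partial credit for anticipated wrong answers,
        // and zero credit (left for answer clustering and manual review) otherwise.
        // The RPM framework itself is not shown; names and values are illustrative.
        public class PartialCreditCheck {

            static final String EXPECTED = "[1, 2, 3]";
            // Anticipated incorrect outputs mapped to the fraction of credit awarded.
            static final Map<String, Double> PREDICTED_ERRORS = Map.of(
                    "[3, 2, 1]", 0.5,   // e.g. list built in reverse order
                    "[1, 2]",    0.25); // e.g. off-by-one when copying elements

            static double grade(String studentOutput) {
                if (EXPECTED.equals(studentOutput)) return 1.0;
                return PREDICTED_ERRORS.getOrDefault(studentOutput, 0.0);
            }

            public static void main(String[] args) {
                System.out.println(grade("[1, 2, 3]")); // 1.0
                System.out.println(grade("[3, 2, 1]")); // 0.5
                System.out.println(grade("[7]"));       // 0.0 -> left for clustering
            }
        }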