A Unified Approach to Regression Testing for Mobile Apps
Mobile applications are used daily all over the world and have become essential in our personal and professional lives. Because mobile applications are updated frequently, developers must perform regression testing to ensure their quality. In addition, the mobile application market has grown rapidly, allowing anyone to write and publish an application without appropriate validation. The growth in the number of mobile apps, and in their added functionality and complexity, has created a need for regression testing. In this dissertation, we adapted the FSMWeb [14] approach for selective regression testing to allow for selective regression testing of mobile apps. We applied rules to classify the app's original test suite into obsolete, retestable, and reusable tests based on the types of changes to the model of the app, and added new tests to cover portions that had not been tested.
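The classification step described above can be sketched in a few lines. This is a simplified illustration, not the dissertation's actual rule set: it assumes each test is represented by the set of model transitions (edges) it traverses, and that the model diff yields sets of deleted and modified edges.

```python
def classify_tests(tests, deleted_edges, modified_edges):
    """Partition a test suite by the kinds of model changes each test touches.

    tests: dict mapping test name -> list of model transitions it covers.
    deleted_edges / modified_edges: sets of transitions removed or changed
    in the new model version.

    A test traversing a deleted transition is obsolete; one traversing a
    modified (but not deleted) transition must be rerun (retestable);
    all remaining tests are reusable as-is.
    """
    obsolete, retestable, reusable = [], [], []
    for name, edges in tests.items():
        covered = set(edges)
        if covered & deleted_edges:
            obsolete.append(name)
        elif covered & modified_edges:
            retestable.append(name)
        else:
            reusable.append(name)
    return obsolete, retestable, reusable


# Hypothetical app model: t1 crosses a deleted transition, t2 a modified one.
tests = {
    "t1": ["login->home", "home->cart"],
    "t2": ["login->home", "home->settings"],
    "t3": ["login->home", "home->help"],
}
print(classify_tests(tests, {"home->cart"}, {"home->settings"}))
```

Tests not falling into any changed region (like `t3` here) need no re-execution, which is the source of the savings that selective regression testing offers.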
As regression test suites change, we want to ensure that the tests required to satisfy the testing criteria are included, but also that redundant tests are removed, so as not to bloat the regression test suite. In the dissertation, we developed a test case minimization approach for FSMApp, based on concept analysis, that removes redundant test cases.
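The core redundancy check behind such minimization can be illustrated with a subsumption test: a test whose coverage is a subset of another test's coverage adds nothing. Note this is a simplification of concept analysis, which builds a full concept lattice over the test-by-requirement relation; the subset check below captures only the redundancy-removal idea.

```python
def minimize(tests):
    """Drop any test whose coverage set is a subset of another kept test's
    coverage. tests: dict mapping test name -> set of covered elements.
    Processing larger tests first ensures a subsumed test meets its
    subsumer before being considered."""
    by_size = sorted(tests, key=lambda n: len(tests[n]), reverse=True)
    kept = []
    for name in by_size:
        if not any(tests[name] <= tests[k] for k in kept):
            kept.append(name)
    return sorted(kept)


# Hypothetical coverage sets: t2 is fully subsumed by t1 and is removed.
suite = {
    "t1": {"e1", "e2", "e3"},
    "t2": {"e2", "e3"},
    "t3": {"e3", "e4"},
}
print(minimize(suite))
```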
Next, we proposed an approach to prioritize test cases for mobile apps. Naturally, it is desirable to select those test cases that are most likely to reveal defects in the app under test. We prioritized test paths for mobile apps based on input complexity, since more inputs may be associated with more complex functionality, which in turn tends to be more fault-prone.
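Input-complexity prioritization, in its simplest reading, amounts to ordering test paths by descending input count. The dissertation's actual complexity measure may weigh inputs differently; the sketch below assumes a plain count.

```python
def prioritize_by_input_complexity(test_paths):
    """Order test paths by descending number of inputs: paths exercising
    more inputs are assumed to reach more complex, more fault-prone
    functionality, so they run first."""
    return sorted(test_paths, key=lambda p: len(p["inputs"]), reverse=True)


# Hypothetical test paths for a shopping app.
paths = [
    {"name": "browse",   "inputs": ["category"]},
    {"name": "checkout", "inputs": ["address", "card", "cvv", "zip"]},
    {"name": "search",   "inputs": ["query", "filter"]},
]
print([p["name"] for p in prioritize_by_input_complexity(paths)])
# → ['checkout', 'search', 'browse']
```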
As noted above, regression testing is an important activity in software maintenance and enhancement, and combining several regression testing techniques can lead to a more efficient and effective regression test suite. In this dissertation, we presented guidelines for combining regression testing approaches in a systematic way. We outlined all possible situations that can occur and showed how each of them influences which combination to use.
Finally, we validated the newly proposed regression testing approaches for mobile apps, and the guidelines for combining regression testing approaches, via a case study. The results show that the FSMApp approaches are applicable, efficient, and effective.
The Usefulness of Automated GUI Testing for HTML5-Based Hybrid Mobile Applications: A Case Study
While the first research papers on GUI test automation date back to the 1990s, its use in the software industry is still relatively rare. Traditionally, GUIs have been tested only manually, and test automation has focused on the lower levels of testing, such as unit testing and integration testing. The main reason for this is the complexity of GUIs compared to lower-level software components. However, manual testing of GUIs is a tedious and time-consuming process that involves repetitive and dull tasks, since the same tests must be executed on every testing iteration of the software under test.
The main goal of GUI test automation is to automate those steps, thereby improving the cost-efficiency of the testing process and freeing testers' time for more meaningful and useful tasks. Previous research on GUI test automation reveals contradictory results: some studies have found GUI test automation to be both beneficial and cost-efficient, while others have reached the exact opposite conclusion.
The contradictory results from previous research, and the lack of clarity about the benefits, challenges, limitations, and impediments of GUI test automation, were the main drivers for this thesis. Its main goal was to study whether the use of GUI test automation can be beneficial and cost-efficient. The research was conducted as a combination of a literature review on the subject and a case study of three HTML5-based hybrid mobile application projects in the mobile development unit of one of the biggest IT companies in Finland.
Systems and Methods for Measuring and Improving End-User Application Performance on Mobile Devices
In today's rapidly growing smartphone society, the time users spend on their smartphones continues to grow, and mobile applications are becoming the primary medium for providing services and content to users. With such fast-paced growth in smartphone usage, cellular carriers and internet service providers continuously upgrade their infrastructure to the latest technologies and expand their capacity to improve the performance and reliability of their networks and to satisfy exploding user demand for mobile data. On the other side of the spectrum, content providers and e-commerce companies adopt the latest protocols and techniques to provide smooth and feature-rich user experiences in their applications.
To ensure a good quality of experience, monitoring how applications perform on users' devices is necessary. Often, network and content providers lack such visibility into end-user application performance. In this dissertation, we demonstrate that having visibility into end-user perceived performance, through system design for efficient and coordinated active and passive measurements of end-user application and network performance, is crucial for detecting, diagnosing, and addressing performance problems on mobile devices. My dissertation consists of three projects supporting this statement. First, to provide continuous monitoring on smartphones with constrained resources operating in a highly dynamic mobile environment, we devise efficient, adaptive, and coordinated systems, as a platform, for active and passive measurements of end-user performance. Second, using this platform and other passive data collection techniques, we conduct an in-depth user trial of mobile multipath to understand how Multipath TCP (MPTCP) performs in practice. Our measurement study reveals several limitations of MPTCP, and based on the insights gained, we propose two different schemes to address them. Last, we show how to provide visibility into end-user application performance for internet providers, and in particular home WiFi routers, by passively monitoring users' traffic and utilizing per-app models that map various network quality of service (QoS) metrics to application performance.
PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/146014/1/ashnik_1.pd
Leveraging the Power of Crowds: Automated Test Report Processing for The Maintenance of Mobile Applications
Crowdsourcing is an emerging distributed problem-solving model combining human and machine computation. It collects intelligence and knowledge from a large and diverse workforce to complete complex tasks. In the software engineering domain, crowdsourced techniques have been adopted to facilitate various tasks, such as design, testing, debugging, and development. Specifically, in crowdsourced testing, crowd workers are given testing tasks to perform and submit their feedback in the form of test reports. One of the key advantages of crowdsourced testing is that it provides software engineers with domain knowledge and feedback from a large number of real users. Because these users run diverse software and hardware configurations, engineers can find bugs that are not caught by traditional quality assurance techniques. Such benefits are particularly valuable for mobile application testing, which requires rapid development-and-deployment iterations and must support diverse execution environments. However, crowdsourced testing naturally generates an overwhelming number of test reports, and inspecting such a large number of reports becomes a time-consuming yet inevitable task. This dissertation presents a series of techniques, tools, and experiments to assist in crowdsourced report processing. These techniques improve the task in multiple ways: 1. prioritizing crowdsourced reports to help engineers find as many unique bugs as possible, as quickly as possible; 2. grouping crowdsourced reports to help engineers identify the representative ones in a short time; 3. summarizing duplicate reports to give engineers a concise and accurate understanding of a group of reports. In the first step, I present a text-analysis-based technique to prioritize test reports for manual inspection.
This technique leverages two key strategies: (1) a diversity strategy to help developers inspect a wide variety of test reports and avoid duplicates and wasted effort on falsely classified faulty behavior, and (2) a risk-assessment strategy to help developers identify test reports that may be more likely to be fault-revealing based on past observations. Together, these two strategies form our technique for prioritizing test reports in crowdsourced testing. Moreover, in the mobile testing domain, test reports often consist of more screenshots and shorter descriptive text, so text-analysis-based techniques may be ineffective or inapplicable. The shortage and ambiguity of natural-language text, combined with the well-defined screenshots of activity views within mobile applications, motivated me to propose a novel technique that uses image understanding for multi-objective test-report prioritization. This technique employs Spatial Pyramid Matching (SPM) to measure the similarity of screenshots, and applies natural-language processing to measure the distance between the text of test reports. Next, I design and implement CTRAS, a novel approach that leverages duplicates to enrich the content of bug descriptions and improve the efficiency of inspecting these reports. CTRAS automatically aggregates duplicates based on both textual information and screenshots, and further summarizes the duplicate test reports into a comprehensive and comprehensible report. I validate all of these techniques on industrial data in collaboration with several companies. The results show that my techniques can improve both the efficiency and effectiveness of crowdsourced test report processing. I also suggest settings for different usage scenarios and discuss future research directions.
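The duplicate-detection idea behind CTRAS, combining a text distance with a screenshot similarity, can be sketched as below. This is a deliberately simplified stand-in: Jaccard word overlap replaces the NLP distance, and a plain histogram intersection replaces SPM (which additionally pools features over a spatial pyramid of image regions); the weights and threshold are illustrative assumptions.

```python
from collections import Counter


def text_similarity(a, b):
    """Jaccard similarity over word sets -- a lightweight stand-in for the
    natural-language distance computed on report descriptions."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0


def histogram_intersection(h1, h2):
    """Normalized histogram intersection over visual-feature counts -- a
    stand-in for Spatial Pyramid Matching on screenshots."""
    common = sum(min(h1.get(k, 0), h2.get(k, 0)) for k in set(h1) | set(h2))
    total = max(sum(h1.values()), sum(h2.values()))
    return common / total if total else 0.0


def is_duplicate(r1, r2, w_text=0.5, w_img=0.5, threshold=0.6):
    """Weighted combination of text and screenshot similarity; reports
    scoring above the threshold are grouped as duplicates."""
    score = (w_text * text_similarity(r1["text"], r2["text"])
             + w_img * histogram_intersection(r1["hist"], r2["hist"]))
    return score >= threshold


# Two hypothetical crowdsourced reports of the same crash.
r1 = {"text": "app crashes on login button", "hist": Counter(a=5, b=3)}
r2 = {"text": "crashes when tapping login button", "hist": Counter(a=5, b=2)}
print(is_duplicate(r1, r2))
# → True
```

Grouping then reduces to clustering reports under this pairwise predicate, after which one summary can be generated per cluster.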
Dispersity-Based Test Case Prioritization
Existing test case prioritization (TCP) techniques have limitations when applied to real-world projects, because they require certain information to be available before they can be applied. For example, the family of input-based TCP techniques relies on test case values or test script strings; other techniques use test coverage, test history, program structure, or requirements information. Existing techniques also cannot guarantee to always be more effective than random prioritization (RP), which has no preconditions. As a result, RP remains the most applicable and most fundamental TCP technique. In this thesis, we propose a new TCP technique and study its effectiveness, actual execution time for failure detection, efficiency, and applicability.
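A dispersity-based ordering is commonly realized as a greedy max-min ("farthest point first") scheme over test inputs; the abstract does not state the thesis's exact algorithm, so treat this sketch and its distance function as illustrative assumptions.

```python
def dispersity_prioritize(tests, distance):
    """Greedy max-min ordering: repeatedly pick the remaining test whose
    minimum distance to the already-selected tests is largest, so early
    tests spread widely across the input space. Requires no coverage or
    history data -- only a distance between test inputs."""
    remaining = list(tests)
    ordered = [remaining.pop(0)]          # seed with the first test
    while remaining:
        best = max(remaining,
                   key=lambda t: min(distance(t, s) for s in ordered))
        remaining.remove(best)
        ordered.append(best)
    return ordered


# Toy numeric test inputs; distance is absolute difference.
tests = [0, 1, 2, 9, 10]
print(dispersity_prioritize(tests, lambda a, b: abs(a - b)))
# → [0, 10, 2, 1, 9]
```

The design appeal is the same as RP's: no precondition on coverage or history, but with early tests intentionally dispersed rather than randomly chosen.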
Reinforcement Learning-Based Test Case Generation with Test Suite Prioritization for Android Application Testing
This dissertation introduces a hybrid strategy for automated testing of Android applications that combines reinforcement learning and test suite prioritization. These approaches aim to improve the effectiveness of the testing process by employing reinforcement learning algorithms, namely Q-learning and SARSA (State-Action-Reward-State-Action), for automated test case generation. The studies provide compelling evidence that reinforcement learning techniques hold great potential for generating test cases that consistently achieve high code coverage; however, the generated test cases may not always be in the optimal order. In this study, novel test case prioritization methods are developed, leveraging pairwise event-interaction coverage, application state coverage, and application activity coverage, so as to optimize the rates of code coverage specifically for SARSA-generated test cases. Additionally, test suite prioritization techniques are introduced based on UI element coverage, test case cost, and test case complexity to further enhance the ordering of SARSA-generated test cases. Empirical investigations demonstrate that applying the proposed test suite prioritization techniques to the test suites generated by SARSA improved the rates of code coverage over the original and random orderings of the test cases.
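The generation loop can be sketched with standard tabular Q-learning over app events; the state encoding, reward, and hyperparameters below are illustrative assumptions, not the dissertation's implementation (SARSA would differ only in updating with the action actually taken next rather than the max).

```python
import random


def q_learning_episode(q, actions, step, alpha=0.5, gamma=0.9,
                       epsilon=0.2, max_events=20):
    """One Q-learning episode over app events: epsilon-greedy action
    selection, an environment reward, and the standard Q update.
    q maps (state, action) -> value; step(state, action) returns
    (next_state, reward). The recorded event trace is the test case."""
    state = "start"
    trace = []
    for _ in range(max_events):
        if random.random() < epsilon:                          # explore
            action = random.choice(actions)
        else:                                                  # exploit
            action = max(actions, key=lambda a: q.get((state, a), 0.0))
        next_state, reward = step(state, action)
        best_next = max(q.get((next_state, a), 0.0) for a in actions)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        trace.append(action)
        state = next_state
    return trace


# Toy environment: reward 1.0 the first time a synthetic state is reached,
# mimicking a coverage-driven reward for discovering new app states.
visited = set()
def step(state, action):
    next_state = (state + "/" + action)[-40:]   # bounded synthetic state id
    reward = 1.0 if next_state not in visited else 0.0
    visited.add(next_state)
    return next_state, reward


random.seed(0)
q = {}
trace = q_learning_episode(q, ["tap", "swipe", "back"], step)
print(len(trace), len(q) > 0)
```

Each episode yields one event sequence (a test case); the prioritization techniques described above then reorder the resulting suite by coverage, cost, or complexity criteria.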
Gene Regulatory Network Analysis and Web-based Application Development
Microarray data is a valuable source for gene regulatory network analysis. Using earthworm microarray data analysis as an example, this dissertation demonstrates that a bioinformatics-guided reverse engineering approach can be applied to time-series data to uncover the underlying molecular mechanism. My network reconstruction results reinforce previous findings that certain neurotransmitter pathways are the target of two chemicals, carbaryl and RDX. This study also concludes that perturbations to these pathways by sublethal concentrations of the two chemicals were temporary, and that earthworms were capable of fully recovering. Moreover, differential networks (DNs) analysis indicates that many pathways other than those related to synaptic and neuronal activities were altered during the exposure phase.
A novel differential networks (DNs) approach is developed in this dissertation to connect pathway perturbation with toxicity threshold setting from Live Cell Array (LCA) data. Findings from this proof-of-concept study suggest that the DNs approach has great potential to provide a novel and sensitive tool for threshold setting in chemical risk assessment. In addition, a web-based tool, "Web-BLOM", was developed for the reconstruction of gene regulatory networks from time-series gene expression profiles, including microarray and LCA data. This tool consists of several modular components: a database, the gene network reconstruction model, and a user interface. The Bayesian Learning and Optimization Model (BLOM), originally implemented in MATLAB, was adopted by Web-BLOM to provide online reconstruction of large-scale gene regulatory networks. Compared to other network reconstruction models, BLOM can infer larger networks with comparable accuracy, identifies hub genes, and is much more computationally efficient.