A survey of the use of crowdsourcing in software engineering
The term 'crowdsourcing' was initially introduced in 2006 to describe an emerging distributed problem-solving model carried out by online workers. Since then it has been widely studied and practiced to support software engineering. In this paper we provide a comprehensive survey of the use of crowdsourcing in software engineering, seeking to cover all literature on this topic. We first review the definitions of crowdsourcing and derive our definition of Crowdsourced Software Engineering, together with its taxonomy. We then summarise industrial crowdsourcing practice in software engineering and corresponding case studies. We further analyse the software engineering domains, tasks and applications for crowdsourcing, and the platforms and stakeholders involved in realising Crowdsourced Software Engineering solutions. We conclude by exposing trends, open issues and opportunities for future research on Crowdsourced Software Engineering.
A Survey on the Usability and User Experience of the Open Community Web Portals
Web-based portals enable a new communication paradigm that can provide a variety of benefits and support to both customers and companies. Customers have continuous access to services, information, support, and payments on the portal, with the possibility of personalisation. This paper presents a survey of the usability and user experience studies relevant to open community web portals and information sharing platforms. The objective of the work presented in this paper was to produce an overview of how the literature reports on usability in relation to information sharing web portals. A systematic mapping method has been applied to identify and quantify primary studies focusing on the usability and user experience of open community web portals.
Data-Driven Usability Refactoring: Tools and Challenges
Usability has long been recognized as an important software quality attribute, and it has become essential in web application development and maintenance. However, it is still hard to integrate usability evaluation and improvement practices into the software development process. Moreover, these practices are usually unaffordable for small to medium-sized companies. In this position paper we propose an approach and tools that allow the crowd of web users to participate in the process of usability evaluation and repair. Since we use the refactoring technique for usability improvement, we introduce the notion of “data-driven refactoring”: using data from the mass of users to learn about refactoring opportunities, as well as about refactoring effectiveness. This creates an improvement cycle in which some refactorings may be discarded while others are introduced, depending on their evaluated success. The paper also discusses some of the challenges that we foresee ahead.
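The improvement cycle described above resembles an A/B-style evaluation loop over deployed refactorings. The following Python sketch illustrates one plausible realisation, assuming task completion time as the success metric; all names and thresholds here (Refactoring, task_time, min_gain) are illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Refactoring:
    """One candidate usability refactoring under evaluation by the crowd."""
    name: str
    baseline_samples: list = field(default_factory=list)  # e.g., task times (s) on the original UI
    variant_samples: list = field(default_factory=list)   # task times on the refactored UI

    def record(self, refactored: bool, task_time: float) -> None:
        (self.variant_samples if refactored else self.baseline_samples).append(task_time)

    def verdict(self, min_samples: int = 30, min_gain: float = 0.05) -> str:
        """Keep the refactoring only if it improves the metric by a margin."""
        if min(len(self.baseline_samples), len(self.variant_samples)) < min_samples:
            return "undecided"
        gain = 1 - mean(self.variant_samples) / mean(self.baseline_samples)
        return "keep" if gain >= min_gain else "discard"

In this loop, "discard" removes a refactoring from rotation while "keep" promotes it to all users, closing the data-driven improvement cycle the paper describes.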
Leveraging the Power of Crowds: Automated Test Report Processing for the Maintenance of Mobile Applications
Crowdsourcing is an emerging distributed problem-solving model combining human and machine computation. It collects intelligence and knowledge from a large and diverse workforce to complete complex tasks. In the software engineering domain, crowdsourced techniques have been adopted to facilitate various tasks, such as design, testing, debugging, and development. Specifically, in crowdsourced testing, crowd workers are given testing tasks to perform and submit their feedback in the form of test reports. One of the key advantages of crowdsourced testing is that it provides software engineers with domain knowledge and feedback from a large number of real users. Based on the diverse software and hardware settings of these users, engineers can detect bugs that are not caught by traditional quality assurance techniques. Such benefits are particularly valuable for mobile application testing, which requires rapid development-and-deployment iterations and support for diverse execution environments. However, crowdsourced testing naturally generates an overwhelming number of test reports, and inspecting such a large number of reports becomes a time-consuming yet inevitable task.
This dissertation presents a series of techniques, tools and experiments to assist in crowdsourced report processing. These techniques are designed to improve this task in three aspects: (1) prioritizing crowdsourced reports to assist engineers in finding as many unique bugs as possible, as quickly as possible; (2) grouping crowdsourced reports to assist engineers in identifying the representative ones in a short time; and (3) summarizing duplicate reports to provide engineers with a concise and accurate understanding of a group of reports.
In the first step, I present a text-analysis-based technique to prioritize test reports for manual inspection. This technique leverages two key strategies: (1) a diversity strategy to help developers inspect a wide variety of test reports and to avoid duplicates and wasted effort on falsely classified faulty behavior, and (2) a risk-assessment strategy to help developers identify test reports that may be more likely to be fault-revealing based on past observations. Together, these two strategies form our technique to prioritize test reports in crowdsourced testing.
Moreover, in the mobile testing domain, test reports often consist of more screenshots and shorter descriptive text, so text-analysis-based techniques may be ineffective or inapplicable. The shortage and ambiguity of natural-language text and the well-defined screenshots of activity views within mobile applications motivate me to propose a novel technique based on image understanding for multi-objective test-report prioritization. This technique employs Spatial Pyramid Matching (SPM) to measure the similarity of screenshots, and applies natural-language processing to measure the distance between the texts of test reports.
Next, I design and implement CTRAS, a novel approach that leverages duplicates to enrich the content of bug descriptions and improve the efficiency of inspecting these reports. CTRAS automatically aggregates duplicates based on both textual information and screenshots, and further summarizes the duplicate test reports into a comprehensive and comprehensible report. I validate all of these techniques on industrial data by collaborating with several companies.
The results show that my techniques can improve both the efficiency and effectiveness of crowdsourced test report processing. I also suggest settings for different usage scenarios and discuss future research directions.
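To make the diversity-plus-risk idea concrete, here is a minimal Python sketch of greedy test-report prioritization in the spirit of the two strategies described above. Reports are reduced to token sets, diversity is the max-min Jaccard distance to already-selected reports, and risk is a simple keyword heuristic; the keyword list, weights, and representation are illustrative assumptions, not the dissertation's exact technique.

RISK_TERMS = {"crash", "freeze", "exception", "anr", "leak"}  # assumed risk keywords

def jaccard_distance(a: set, b: set) -> float:
    """1 - |A∩B|/|A∪B|; 0 for two empty reports."""
    return 1 - len(a & b) / len(a | b) if a | b else 0.0

def risk(tokens: set) -> float:
    """Fraction of risk keywords mentioned in the report."""
    return len(tokens & RISK_TERMS) / len(RISK_TERMS)

def prioritize(reports: list[set], w: float = 0.5) -> list[int]:
    """Greedily order report indices by a weighted diversity + risk score."""
    remaining, order = set(range(len(reports))), []
    while remaining:
        def score(i: int) -> float:
            # Diversity: distance to the closest already-selected report.
            div = min((jaccard_distance(reports[i], reports[j]) for j in order),
                      default=1.0)
            return w * div + (1 - w) * risk(reports[i])
        best = max(remaining, key=score)
        order.append(best)
        remaining.remove(best)
    return order

Inspecting reports in the returned order front-loads both novel and likely fault-revealing reports, which is the stated goal of the prioritization step.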
Multi-objective Search-based Mobile Testing
Despite the tremendous popularity of mobile applications, mobile testing still relies heavily on manual testing. This thesis presents mobile test automation approaches based on multi-objective search. We introduce three approaches: Sapienz (for native Android app testing), Octopuz (for hybrid/web JavaScript app testing) and Polariz (for using crowdsourcing to support search-based mobile testing). These three approaches represent the primary scientific and technical contributions of the thesis. Since crowdsourcing is, itself, an emerging research area, and less well understood than search-based software engineering, the thesis also provides the first comprehensive survey on the use of crowdsourcing in software testing (in particular) and in software engineering (more generally). This survey represents a secondary contribution.
Sapienz is an approach to Android testing that uses multi-objective search-based testing to automatically explore and optimise test sequences, minimising their length while simultaneously maximising their coverage and fault revelation. The results of empirical studies demonstrate that Sapienz significantly outperforms both the state-of-the-art technique Dynodroid and the widely used tool Android Monkey on all three objectives. When applied to the top 1,000 Google Play apps, Sapienz found 558 unique, previously unknown crashes.
Octopuz reuses the Sapienz multi-objective search approach for automated JavaScript testing, investigating whether it can replicate Sapienz's success on JavaScript testing. Experimental results on 10 real-world JavaScript apps provide evidence that Octopuz significantly outperforms the state of the art (and the current state of practice) in automated JavaScript testing.
Polariz is an approach that combines human (crowd) intelligence with machine (computational search) intelligence for mobile testing. It uses a platform that enables crowdsourced mobile testing of any app, via any terminal client, and by any crowd of workers. It generates replicable test scripts based on manual test traces produced by the crowd workforce, and automatically extracts from these traces motif events that can be used to improve search-based mobile testing approaches such as Sapienz.
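Sapienz optimises three objectives at once: sequence length (minimised), coverage and faults revealed (both maximised). The Python sketch below shows the Pareto-dominance relation that multi-objective searches of this kind typically build on (e.g., in NSGA-II-style algorithms); the Fitness fields are illustrative assumptions, not Sapienz's internal representation.

from typing import NamedTuple

class Fitness(NamedTuple):
    coverage: float  # maximise
    crashes: int     # maximise
    length: int      # minimise

def dominates(a: Fitness, b: Fitness) -> bool:
    """True if a is no worse than b on every objective and strictly better on one."""
    no_worse = (a.coverage >= b.coverage and a.crashes >= b.crashes
                and a.length <= b.length)
    better = (a.coverage > b.coverage or a.crashes > b.crashes
              or a.length < b.length)
    return no_worse and better

def pareto_front(population: list[Fitness]) -> list[Fitness]:
    """Keep the non-dominated trade-offs between coverage, crashes and length."""
    return [p for p in population
            if not any(dominates(q, p) for q in population)]

Because no single test sequence is best on all three objectives, the search maintains this Pareto front rather than a single winner, which is what "multi-objective" means in practice.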
Automatic detection of usability smells in web applications
Usability assessment of web applications continues to be an expensive and often neglected practice. While large companies can spare resources for studying and improving the usability of their products, smaller businesses often divert theirs elsewhere. To help these cases, researchers have devised automatic approaches for user interaction analysis, and there are commercial services that offer automated usability statistics at relatively low fees. However, most existing approaches still fall short of specifying usability problems concretely enough to identify and suggest solutions. In this work we describe usability smells of user interaction, i.e., hints of usability problems in running web applications, and the process by which they can be identified by analyzing user interaction events. We also describe USF, a tool that implements the process in a fully automated way with minimal setup effort. USF analyses user interaction events on the fly, discovers usability smells, and reports them together with a concrete solution in the form of a usability refactoring, providing usability advice for deployed web applications.
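As a flavour of what detecting a usability smell from an interaction event stream can look like, here is a minimal Python sketch for one well-known smell: repeated rapid clicks on the same element ("rage clicks"), a hint that a control looks actionable but gives no feedback. The event schema is an assumption for illustration; USF's actual smell catalogue and analysis pipeline are richer than this.

from collections import deque

def rage_clicks(events, n: int = 3, window_ms: int = 1000):
    """Yield (target, timestamp) whenever n clicks hit one target within window_ms."""
    recent: dict[str, deque] = {}
    for e in events:  # assumed shape: {"type": "click", "target": "#btn", "t": 1200}
        if e["type"] != "click":
            continue
        q = recent.setdefault(e["target"], deque(maxlen=n))
        q.append(e["t"])
        if len(q) == n and q[-1] - q[0] <= window_ms:
            yield e["target"], e["t"]

Each detected smell would then be paired with a suggested usability refactoring (e.g., adding progress feedback to the unresponsive control), which is the reporting step the paper describes.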
Supporting Web-based and Crowdsourced Evaluations of Data Visualizations
User studies play a vital role in data visualization research: they quantitatively measure the strengths and weaknesses of different visualization techniques, provide insight into what makes one technique more effective than another, and are used to validate research contributions in the field of information visualization. For example, a new algorithm, visual encoding, or interaction technique is not considered a contribution unless it has been validated as better than the state of the art and its competing alternatives, or as useful to its intended users. However, conducting user studies is challenging, time consuming, and expensive.
User studies generally require careful experimental design, iterative refinement, recruitment of study participants, careful management of participants during the studies, accurate collection of user responses, and expertise in the statistical analysis of study results. Several variables must be taken into consideration, since they can affect study outcomes if not carefully managed. Hence, conducting a user study successfully can take several weeks to months.
In this dissertation, we investigated how to design an online framework that reduces the overhead involved in conducting controlled user studies of web-based visualizations. Our main goal was to lower the cost of evaluating data visualizations quantitatively through user studies. To this end, we explored the design and implementation of an open-source framework and online service (VisUnit) that allows visualization designers to easily configure user studies for their web-based data visualizations, deploy the studies online, collect user responses, and analyze incoming results automatically. This allows evaluations to be done more easily, cheaply, and frequently, to rapidly test hypotheses about visualization designs.
We evaluated the effectiveness of our framework (VisUnit) by showing that it can be used to replicate 84% of 101 controlled user studies published in IEEE Information Visualization conferences between 1995 and 2015. We evaluated the efficiency of VisUnit by showing that graduate students can use it to design sample user studies in less than an hour.
Our contributions are two-fold: first, a flexible design and implementation that facilitates the creation of a wide range of user studies with limited effort; second, an evaluation of that design showing that it can replicate a wide range of published user studies, reduce the time evaluators spend on studies, and support new research.
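To illustrate what "configuring a user study" for such a framework might involve, here is a short Python sketch of a declarative study specification with counterbalanced condition ordering. The API shown is hypothetical, not VisUnit's actual interface; the file paths and prompt are invented for illustration.

from dataclasses import dataclass

@dataclass
class StudyConfig:
    visualizations: list[str]   # URLs of the web-based visualizations under test
    task_prompt: str            # the question shown to participants on each trial
    trials_per_condition: int

def latin_square(conditions: list[str]) -> list[list[str]]:
    """Rotate condition order per participant group to counter order effects."""
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

config = StudyConfig(
    visualizations=["/vis/barchart.html", "/vis/treemap.html"],
    task_prompt="Find the category with the highest value.",
    trials_per_condition=10,
)
orders = latin_square(config.visualizations)  # one condition order per group

A declarative specification like this is what lets a framework automate deployment, response collection, and analysis, since the study structure is machine-readable rather than hand-coded per experiment.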