Leveraging the Power of Crowds: Automated Test Report Processing for The Maintenance of Mobile Applications
Crowdsourcing is an emerging distributed problem-solving model combining human and machine computation. It collects intelligence and knowledge from a large and diverse workforce to complete complex tasks. In the software engineering domain, crowdsourced techniques have been adopted to facilitate various tasks, such as design, testing, debugging, and development. Specifically, in crowdsourced testing, crowd workers are given testing tasks to perform and submit their feedback in the form of test reports. One of the key advantages of crowdsourced testing is that it provides software engineers with domain knowledge and feedback from a large number of real users. Thanks to the diverse software and hardware settings of these users, engineers can uncover bugs that are not caught by traditional quality assurance techniques. Such benefits are particularly valuable for mobile application testing, which requires rapid development-and-deployment iterations and support for diverse execution environments. However, crowdsourced testing naturally generates an overwhelming number of crowdsourced test reports, and inspecting such a large number of reports becomes a time-consuming yet inevitable task. This dissertation presents a series of techniques, tools, and experiments to assist in crowdsourced report processing. These techniques are designed to improve this task in multiple aspects: 1. prioritizing crowdsourced reports to assist engineers in finding as many unique bugs as possible, as quickly as possible; 2. grouping crowdsourced reports to assist engineers in identifying the representative ones in a short time; and 3. summarizing duplicate reports to provide engineers with a concise and accurate understanding of a group of reports. In the first step, I present a text-analysis-based technique to prioritize test reports for manual inspection.
This technique leverages two key strategies: (1) a diversity strategy to help developers inspect a wide variety of test reports and to avoid duplicates and wasted effort on falsely classified faulty behavior, and (2) a risk-assessment strategy to help developers identify test reports that may be more likely to be fault-revealing based on past observations. Together, these two strategies form our technique to prioritize test reports in crowdsourced testing. Moreover, in the mobile testing domain, test reports often consist of more screenshots and shorter descriptive text, and thus text-analysis-based techniques may be ineffective or inapplicable. The shortage and ambiguity of natural-language text information and the well-defined screenshots of activity views within mobile applications motivate me to propose a novel technique that uses image understanding for multi-objective test-report prioritization. This technique employs Spatial Pyramid Matching (SPM) to measure the similarity of screenshots, and applies natural-language-processing techniques to measure the distance between the text of test reports. Next, I design and implement CTRAS: a novel approach to leveraging duplicates to enrich the content of bug descriptions and improve the efficiency of inspecting these reports. CTRAS is capable of automatically aggregating duplicates based on both textual information and screenshots, and further summarizes the duplicate test reports into a comprehensive and comprehensible report. I validate all of these techniques on industrial data by collaborating with several companies. The results show that my techniques can improve both the efficiency and effectiveness of crowdsourced test report processing. Also, I suggest settings for different usage scenarios and discuss future research directions.
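As an illustration of the multi-objective idea above, the sketch below combines a simplified spatial-pyramid screenshot similarity with a bag-of-words text distance. This is not the dissertation's implementation: the pyramid weighting, grayscale-histogram features, and whitespace tokenization are all simplifying assumptions made here for illustration.

```python
# Minimal sketch: screenshot similarity via a simplified spatial pyramid,
# plus a cosine-based distance over report text.
import numpy as np
from collections import Counter

def _cell_hist(img, i, j, cells, bins):
    """Normalized grayscale histogram of one pyramid cell."""
    h, w = img.shape
    cell = img[i * h // cells:(i + 1) * h // cells,
               j * w // cells:(j + 1) * w // cells]
    hist, _ = np.histogram(cell, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)  # normalize so scores are comparable

def spm_similarity(img_a, img_b, levels=2, bins=8):
    """Weighted histogram intersection over a spatial pyramid of two
    grayscale screenshots (2-D arrays of pixel values in [0, 255])."""
    score = 0.0
    for level in range(levels + 1):
        cells = 2 ** level
        weight = 1.0 / (2 ** (levels - level))  # finer grids weigh more
        for i in range(cells):
            for j in range(cells):
                a = _cell_hist(img_a, i, j, cells, bins)
                b = _cell_hist(img_b, i, j, cells, bins)
                score += weight * np.minimum(a, b).sum()
    return score

def text_distance(text_a, text_b):
    """1 - cosine similarity over whitespace-tokenized word counts."""
    ca = Counter(text_a.lower().split())
    cb = Counter(text_b.lower().split())
    dot = sum(ca[w] * cb[w] for w in set(ca) & set(cb))
    norm = (sum(v * v for v in ca.values()) ** 0.5) * \
           (sum(v * v for v in cb.values()) ** 0.5)
    return 1.0 - (dot / norm if norm else 0.0)
```

A prioritizer could then greedily pick the next report that maximizes distance (textual and visual) to the reports already inspected, which is one way to realize the diversity strategy described above.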
Given Enough Eyeballs, all Bugs are Shallow - A Literature Review for the Use of Crowdsourcing in Software Testing
Over the last few years, the use of crowdsourcing has gained a lot of attention in the domain of software engineering. One key aspect of software development is the testing of software. Literature suggests that crowdsourced software testing (CST) is a reliable and feasible tool for many kinds of testing. Research in CST has made great strides; however, it is mostly unstructured and not linked to traditional software testing practice and terminology. By conducting a literature review of traditional and crowdsourced software testing literature, this paper delivers two major contributions. First, it synthesizes the fields of crowdsourcing research and traditional software testing. Second, the paper gives a comprehensive overview of findings in CST research and provides a classification into different software testing types.
PREM: Prestige Network Enhanced Developer-Task Matching for Crowdsourced Software Development
Many software organizations are turning to crowdsourcing to augment their software production. In current crowdsourcing practice, it is common to see a massive number of tasks posted on software crowdsourcing platforms, with little guidance for task selection. Considering that crowd developers may vary greatly in expertise, inappropriate developer-task matching will harm the quality of the deliverables. It is also not time-efficient for developers to discover their most appropriate tasks from vast open call requests. We propose an approach called PREM, aiming to appropriately match developers and tasks. PREM automatically learns from developers’ historical task data. In addition to task preference, PREM accounts for the competitive nature of crowdsourcing by constructing a prestige network of developers. This distinguishes our approach from previous developer recommendation methods that are based on task and/or individual features. Experiments are conducted on 3 TopCoder datasets with 9,191 tasks in total. Our experimental results show that reasonable accuracies are achievable (63%, 46%, and 36% for the 3 datasets respectively, when matching 5 developers to each task) and that the constructed prestige network can help improve the matching results.
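The abstract does not detail how the prestige network is built or scored, so the following is a hedged sketch of one plausible reading: each lost head-to-head contest adds an edge from the losing developer to the winner, and prestige is computed with a PageRank-style power iteration over that network. The edge rule, damping factor, and scoring here are illustrative assumptions, not PREM's actual algorithm.

```python
# Sketch of a prestige network over competition outcomes, scored by a
# PageRank-style power iteration.
import numpy as np

def prestige_scores(outcomes, n_devs, damping=0.85, iters=50):
    """outcomes: list of (loser, winner) developer-index pairs from
    head-to-head task contests. Returns one prestige score per developer."""
    adj = np.zeros((n_devs, n_devs))
    for loser, winner in outcomes:
        adj[loser, winner] += 1.0  # losing "endorses" the winner's prestige
    # Row-normalize into a transition matrix; developers who never lost
    # (dangling rows) spread their weight uniformly.
    row_sums = adj.sum(axis=1, keepdims=True)
    trans = np.where(row_sums > 0,
                     adj / np.where(row_sums == 0, 1, row_sums),
                     1.0 / n_devs)
    scores = np.full(n_devs, 1.0 / n_devs)
    for _ in range(iters):
        scores = (1 - damping) / n_devs + damping * (scores @ trans)
    return scores
```

Under this reading, a matcher would combine such prestige scores with learned task preferences when ranking candidate developers for a posted task.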
A Machine Learning Approach for Classifying Textual Data in Crowdsourcing
Crowdsourcing represents an innovative approach that allows companies to engage a diverse network of people over the internet and use their collective creativity, expertise, or workforce for completing tasks that have previously been performed by dedicated employees or contractors. However, the process of reviewing and filtering the large number of solutions, ideas, or pieces of feedback submitted by a crowd is a persistent challenge. Identifying valuable inputs and separating them from low-quality contributions that cannot be used by the companies is time-consuming and cost-intensive. In this study, we build upon the principles of text mining and machine learning to partially automate this process. Our results show that it is possible to explain and predict the quality of crowdsourced contributions based on a set of textual features. We use these textual features to train and evaluate a classification algorithm capable of automatically filtering textual contributions in crowdsourcing.
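As a rough illustration of that pipeline (the study's actual feature set and learning algorithm are not given in the abstract), the sketch below derives a few simple textual features from a contribution and fits a tiny logistic-regression classifier to label contributions as usable or not. All feature choices and hyperparameters here are assumptions.

```python
# Sketch: textual features + logistic regression for filtering
# crowdsourced contributions.
import numpy as np

def textual_features(text):
    """A tiny, hypothetical feature set for one contribution."""
    words = text.split()
    n = max(len(words), 1)
    return np.array([
        float(len(words)),                            # length in words
        len(set(w.lower() for w in words)) / n,       # lexical diversity
        sum(len(w) for w in words) / n,               # mean word length
    ])

def train_logreg(X, y, lr=0.01, epochs=5000):
    """Batch gradient descent on the logistic loss; returns weights
    (last entry is the bias)."""
    X = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))    # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)    # gradient step
    return w

def predict(w, x):
    """True if the contribution is classified as high quality."""
    x = np.append(x, 1.0)
    return 1.0 / (1.0 + np.exp(-(x @ w))) >= 0.5
```

In practice one would use richer features (readability, spelling, domain terms) and a library classifier, but the structure, feature extraction followed by supervised training, matches what the abstract describes.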
Evolving technologies for disaster management in U.S. Cities
Thesis (M.C.P.)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 63-65). The rapid development of modern technology has increased access to and reliance on sophisticated communication and real-time technology. These technologies, which have become embedded within everyday life, have significant implications for government agencies, particularly within the field of disaster management. This paper draws on the evolution of disaster research, the history of disaster management in the US, literature on emerging uses of social media technology, and interviews with 24 emergency management offices in the US to examine three questions: 1) What types of technology are cities currently using in disaster management? 2) Which factors are most influential in determining how cities select emergency management technology? and 3) How can future technology development better address the needs of emergency managers? Several conclusions and observations emerged from analysis of the current literature and interview data. First, technology is primarily used by city disaster management agencies in the preparedness and response phases of the disaster cycle. These technologies can be grouped into communications, data management, and simulation technologies. Cities are already operating on web-based platforms and are, in many cases, tentatively experimenting with the use of social media as a one-way broadcasting system rather than a bi-directional platform to gather information from the general public. Second, while various factors impact technology adoption, funding, the support of a political champion, and legal concerns stand out in particular. In addition to these adoption factors, cities are also currently facing a number of challenges including general interoperability, changing government-public relations, and increasingly mobile populations.
Future technology development must address these issues through the development and adoption of open standards, strengthening data-integration capacities. Cities must also better leverage both existing and new forms of communication to build the level of trust needed to both reduce vulnerability and increase resilience. by Vanessa Mei-Yee Ng. M.C.P.
Continuous, Evolutionary and Large-Scale: A New Perspective for Automated Mobile App Testing
Mobile app development involves a unique set of challenges, including device fragmentation and rapidly evolving platforms, making testing a difficult task. The design space for a comprehensive mobile testing strategy includes features, inputs, potential contextual app states, and large combinations of devices and underlying platforms. Therefore, automated testing is an essential activity of the development process. However, the current state of the art in automated testing tools for mobile apps has limitations that have driven a preference for manual testing in practice. As of today, there is no comprehensive automated solution for mobile testing that overcomes fundamental issues such as automated oracles, history awareness in test cases, or automated evolution of test cases. In this perspective paper, we survey the current state of the art in terms of the frameworks, tools, and services available to developers to aid in mobile testing, highlighting present shortcomings. Next, we provide commentary on current key challenges that restrict the possibility of a comprehensive, effective, and practical automated testing solution. Finally, we offer our vision of a comprehensive mobile app testing framework, complete with a research agenda, that is succinctly summarized along three principles: Continuous, Evolutionary, and Large-scale (CEL).
Comment: 12 pages, accepted to the Proceedings of the 33rd IEEE International Conference on Software Maintenance and Evolution (ICSME'17).