The Crowd in Requirements Engineering: The Landscape and Challenges
Crowd-based requirements engineering (CrowdRE) could significantly change RE. Performing RE activities such as elicitation with the crowd of stakeholders turns RE into a participatory effort, leads to more accurate requirements, and ultimately boosts software quality. Although any stakeholder in the crowd can contribute, CrowdRE emphasizes one stakeholder group whose role is often trivialized: users. CrowdRE empowers the management of requirements, such as their prioritization and segmentation, in a dynamic, evolved style through collecting and harnessing a continuous flow of user feedback and monitoring data on the usage context. To analyze the large amount of data obtained from the crowd, automated approaches are key. This article presents current research topics in CrowdRE; discusses the benefits, challenges, and lessons learned from projects and experiments; and assesses how to apply the methods and tools in industrial contexts. This article is part of a special issue on Crowdsourcing for Software Engineering.
Considering Human Aspects on Strategies for Designing and Managing Distributed Human Computation
A human computation system can be viewed as a distributed system in which the
processors are humans, called workers. Such systems harness the cognitive power
of a group of workers connected to the Internet to execute relatively simple
tasks, whose solutions, once grouped, solve a problem that systems equipped
with only machines could not solve satisfactorily. Examples of such systems are
Amazon Mechanical Turk and the Zooniverse platform. A human computation
application comprises a group of tasks, each of which can be performed by one
worker. Tasks may have dependencies on one another. In this study, we
propose a theoretical framework to analyze this type of application from a
distributed systems point of view. Our framework is established on three
dimensions that represent different perspectives from which human computation
applications can be approached: quality-of-service requirements, design and
management strategies, and human aspects. Using this framework, we review
human computation from the perspective of programmers seeking to improve the
design of human computation applications and managers seeking to increase the
effectiveness of human computation infrastructures in running such
applications. In doing so, besides integrating and organizing what has been
done in this direction, we also put into perspective the fact that the human
aspects of the workers in such systems introduce new challenges in terms of,
for example, task assignment, dependency management, and fault prevention and
tolerance. We discuss how they are related to distributed systems and other
areas of knowledge. Comment: 3 figures, 1 table.
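The challenge this abstract highlights, assigning interdependent tasks while accounting for human aspects of the workers, can be illustrated with a toy sketch. The `can_do` predicate, the data shapes, and all names below are assumptions for illustration, not the authors' framework:

```python
def assign(tasks, deps, completed, workers, can_do):
    """Dependency-aware task assignment sketch for a human computation
    system. Only tasks whose dependencies are completed become eligible,
    and `can_do(worker, task)` models a human aspect such as skill or
    preference (an assumption; the paper treats such aspects conceptually).
    Each worker is given at most one task at a time."""
    # Eligible tasks: not yet done, and all dependencies completed.
    ready = [t for t in tasks
             if t not in completed
             and all(d in completed for d in deps.get(t, []))]
    assignment, busy = {}, set()
    for task in ready:
        # Pick the first free worker able to perform this task.
        worker = next((w for w in workers
                       if w not in busy and can_do(w, task)), None)
        if worker is not None:
            assignment[task] = worker
            busy.add(worker)
    return assignment
```

With `deps = {"b": ["a"], "c": ["a"]}`, only `"a"` is assignable at first; once `"a"` is completed, `"b"` and `"c"` become eligible and are spread across free workers.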
A Review on the Applications of Crowdsourcing in Human Pathology
The advent of digital pathology has introduced new avenues of diagnostic
medicine. Among them, crowdsourcing has attracted researchers' attention in
recent years, allowing them to engage thousands of untrained individuals in
research and diagnosis. While several articles exist in this regard, prior
works have not collectively documented them. We, therefore, aim to review the
applications of crowdsourcing in human pathology in a semi-systematic manner.
We first introduce a novel method for conducting a systematic search of the
literature. Using this method, we collect hundreds of articles and screen
them against a pre-defined set of criteria. Furthermore, we crowdsource part
of the screening process to examine another potential application of
crowdsourcing. Finally, we review the selected articles and characterize the
prior uses of crowdsourcing in pathology.
Collaborating with the Crowd for Software Requirements Engineering: A Literature Review
Requirements engineering (RE) represents a decisive success factor in software development. The novel approach of crowd-based RE seeks to overcome shortcomings of traditional RE practices such as the resource intensiveness and selection bias of stakeholder workshops or interviews. Two streams of research on crowd-based RE can be observed in the literature: data-driven approaches that extract requirements from user feedback or analytics data, and collaborative approaches in which requirements are collectively developed by a crowd of software users. As yet, research surveying the state of crowd-based RE does not put particular emphasis on collaborative approaches, despite collaborative crowdsourcing being particularly suited for joint ideation and complex problem-solving tasks. Addressing this gap, we conduct a structured literature review to identify the RE activities supported by collaborative crowd-based approaches. Our research provides a systematic overview of the domain of collaborative crowd-based RE and guides researchers and practitioners in increasing user involvement in RE.
Organizational memory: the role of business intelligence to leverage the application of collective knowledge
Nowadays, the major challenge for managers is making sound decisions in a turbulent environment where it is hard to tell good information from bad, since actions resulting from wrong decisions may threaten the organization's survival. That is why managers try to avoid making wrong decisions.
To address this, managers should effectively use the collective knowledge and experience shared through Organizational Memory (OM) to reduce the rate of unsuccessful decision making. In this sense, Business Intelligence (BI) tools allow managers to improve the effectiveness of decision making and problem solving.
In light of these motivations, the aim of this chapter is to understand the role of BI systems in effectively supporting OM in the real context of a crowdsourcing academic initiative called CrowdUM. This work is financed by Fundos FEDER through the Programa Operacional Fatores de Competitividade - COMPETE and Fundos Nacionais through FCT – Fundação para a Ciência e Tecnologia under the Project: FCOMP-01-0124-FEDER-02267
Design of Automatic User Identification Framework in Crowdsourcing Requirements Engineering: User Mapping and System Architecture
Requirements elicitation is the initial stage of requirements engineering, in which information is collected from users. The process is significantly determined by the quality and quantity of the information collected. Crowdsourcing is a method of gathering information from many users. The number and variety of users in crowdsourcing are both an advantage and a challenge in the elicitation process. This study proposes a framework for user identification that consists of user mapping and a system architecture. The identification process consists of eight main states: defining the context, determining the user target and scope, determining data sources, collecting user data, pre-processing the data, selecting features, classifying the data, and identifying users. The result of this study is an initial step toward an automated tool for user identification to elicit requirements through crowdsourcing. The framework produces a user classification, which can be used to apply the appropriate information-gathering method in the elicitation process.
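The eight main states of the identification process can be sketched as a simple sequential pipeline. The stage names follow the abstract, but the function bodies and data shapes are placeholders (assumptions), since the abstract specifies only the flow, not the logic of each stage:

```python
# Illustrative sketch of the eight-state user-identification pipeline.
# Stage order follows the abstract; implementations are caller-supplied.

STAGES = [
    "define_context",             # 1. defining context
    "determine_target_and_scope", # 2. user target and scope determination
    "determine_data_sources",     # 3. data source determination
    "collect_user_data",          # 4. user data collection
    "preprocess_data",            # 5. data pre-processing
    "select_features",            # 6. feature selection
    "classify_data",              # 7. data classification
    "identify_users",             # 8. user identification
]

def run_pipeline(raw_input, stage_impls):
    """Thread the data through each stage in order; `stage_impls` maps a
    stage name to a callable (hypothetical interface, for illustration)."""
    data = raw_input
    for name in STAGES:
        data = stage_impls[name](data)
    return data

# Trivial stage implementations that just record which stage ran:
impls = {name: (lambda d, n=name: d + [n]) for name in STAGES}
result = run_pipeline([], impls)
```

Real implementations would replace the trivial stages with, e.g., feedback-channel scraping for data collection and a trained model for the classification stage.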
Translating Video Recordings of Mobile App Usages into Replayable Scenarios
Screen recordings of mobile applications are easy to obtain and capture a
wealth of information pertinent to software developers (e.g., bugs or feature
requests), making them a popular mechanism for crowdsourced app feedback. Thus,
these videos are becoming a common artifact that developers must manage. In
light of unique mobile development constraints, including swift release cycles
and rapidly evolving platforms, automated techniques for analyzing all types of
rich software artifacts provide benefit to mobile developers. Unfortunately,
automatically analyzing screen recordings presents serious challenges, due to
their graphical nature, compared to other types of (textual) artifacts. To
address these challenges, this paper introduces V2S, a lightweight, automated
approach for translating video recordings of Android app usages into replayable
scenarios. V2S is based primarily on computer vision techniques and adapts
recent solutions for object detection and image classification to detect and
classify user actions captured in a video, and convert these into a replayable
test scenario. We performed an extensive evaluation of V2S involving 175 videos
depicting 3,534 GUI-based actions collected from users exercising features and
reproducing bugs from over 80 popular Android apps. Our results illustrate that
V2S can accurately replay scenarios from screen recordings, and is capable of
reproducing 89% of our collected videos with minimal overhead. A case
study with three industrial partners illustrates the potential usefulness of
V2S from the viewpoint of developers. Comment: In proceedings of the 42nd
International Conference on Software Engineering (ICSE'20), 13 pages.
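The overall frame-to-action flow described in this abstract can be sketched as follows. The `detect_touch` and `classify_action` callables stand in for the paper's object-detection and image-classification models, and the emitted command format is illustrative only; all of these are assumptions, not V2S's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str   # e.g. "tap", "long_tap", "swipe"
    x: int
    y: int
    frame: int  # frame index where the gesture started

def frames_to_actions(frames, detect_touch, classify_action):
    """Sketch of a V2S-style pipeline: detect a touch indicator per frame,
    group consecutive detections into one gesture, classify the gesture,
    and emit an Action. The two callables are hypothetical stand-ins for
    the detection and classification models."""
    actions, group = [], []
    for i, frame in enumerate(frames):
        touch = detect_touch(frame)          # returns (x, y) or None
        if touch is not None:
            group.append((i, touch))
        elif group:
            first_i, (x, y) = group[0]
            actions.append(Action(classify_action(group), x, y, first_i))
            group = []
    if group:                                # flush a trailing gesture
        first_i, (x, y) = group[0]
        actions.append(Action(classify_action(group), x, y, first_i))
    return actions

def to_replayable(actions):
    """Render actions as adb-style input commands (illustrative format)."""
    return [f"input tap {a.x} {a.y}" if a.kind == "tap"
            else f"# {a.kind} at ({a.x},{a.y})" for a in actions]
```

A short touch streak would be classified as a tap and rendered as a single `input tap x y` command, so the recorded scenario can be replayed on a device.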