
    Canary: Extracting Requirements-Related Information from Online Discussions

    Online discussions about software applications generate a large amount of requirements-related information. This information could usefully be applied in requirements engineering; however, there are currently few systematic approaches for extracting it. To address this gap, we propose Canary, an approach for extracting and querying requirements-related information in online discussions. The highlight of our approach is a high-level query language that combines aspects of both requirements and discussion in online forums. We give the semantics of the query language in terms of relational databases and SQL. We demonstrate the usefulness of the language with examples on real data extracted from online discussions. Our approach relies on human annotations of online discussions. We highlight the subtleties involved in interpreting the content of online discussions and the assumptions and choices we made to address them effectively. We demonstrate the feasibility of generating high-quality annotations by obtaining them from lay Amazon Mechanical Turk users.
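The abstract above grounds the query language's semantics in relational databases and SQL. A minimal sketch of that idea, using an invented annotation schema (the table names, columns, and labels below are illustrative, not Canary's actual schema):

```python
import sqlite3

# Hypothetical store: forum posts plus human annotations that label
# each post with a requirements-related category.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE post (id INTEGER PRIMARY KEY, author TEXT, body TEXT);
CREATE TABLE annotation (
    id INTEGER PRIMARY KEY,
    post_id INTEGER REFERENCES post(id),
    label TEXT  -- e.g., 'feature', 'bug', 'support', 'rebuttal'
);
""")
conn.executemany("INSERT INTO post VALUES (?, ?, ?)", [
    (1, "alice", "The app should sync offline."),
    (2, "bob", "Offline sync drains the battery."),
])
conn.executemany("INSERT INTO annotation VALUES (?, ?, ?)", [
    (1, 1, "feature"),
    (2, 2, "rebuttal"),
])

# A query in the spirit of "find posts annotated as feature requests":
rows = conn.execute("""
    SELECT p.author, p.body
    FROM post p JOIN annotation a ON a.post_id = p.id
    WHERE a.label = 'feature'
""").fetchall()
print(rows)  # [('alice', 'The app should sync offline.')]
```

Giving the semantics in SQL means each high-level query can be understood as exactly this kind of join over posts and their annotations.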

    Enhancing Creativity as Innovation via Asynchronous Crowdwork

    Synchronous, face-to-face interactions such as brainstorming are considered essential for creative tasks (the old normal). However, face-to-face interactions are difficult to arrange because of the diverse locations and conflicting availability of people - a challenge made more prominent by work-from-home practices during the COVID-19 pandemic (the new normal). In addition, face-to-face interactions are susceptible to cognitive interference. We employ crowdsourcing as an avenue to investigate creativity in asynchronous, online interactions. We choose product ideation, a natural task for the crowd since it requires human insight and creativity into what product features would be novel and useful. We compare the performance of solo crowd workers with asynchronous teams of crowd workers formed without prior coordination. Our findings suggest that, first, crowd teamwork yields fewer but more creative ideas than solo crowdwork. Second, cognitive interference, known to inhibit creativity in face-to-face teams, may not be significant in crowd teams. Third, teamwork promotes better achievement emotions for crowd workers. These findings provide a basis for trading off creativity, quantity, and worker happiness in setting up crowdsourcing workflows for product ideation.

    SoSharP: Recommending Sharing Policies in Multiuser Privacy Scenarios


    Canary: an Interactive and Query-Based Approach to Extract Requirements from Online Forums

    Interactions among stakeholders and engineers are key to requirements engineering (RE). Increasingly, such interactions take place online, producing large quantities of qualitative (natural language) and quantitative (e.g., votes) data. Although online forums are a rich source of requirements-related information, extracting that information can be nontrivial. We propose Canary, a tool-assisted approach, to facilitate systematic extraction of requirements-related information from online forums via high-level queries. Canary (1) adds structure to natural language content on online forums using an annotation schema combining requirements and argumentation ontologies, (2) stores the structured data in a relational database, and (3) compiles high-level queries in Canary syntax to SQL queries that can be run on the relational database. We demonstrate the key steps in the Canary workflow, including (1) extracting raw data from online forums, (2) applying annotations to the raw data, and (3) compiling and running interesting Canary queries that leverage the social aspect of the data.
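Step (3) above, compiling high-level queries to SQL, can be sketched as a simple translation function. The surface syntax, schema, and parameters here are invented for illustration; Canary's actual grammar is richer and also covers argumentation relations:

```python
def compile_query(label: str, min_votes: int = 0) -> str:
    """Translate a hypothetical high-level query such as
    'feature WITH votes >= 5' into SQL over an annotation store
    with illustrative tables post(author, body, votes) and
    annotation(post_id, label)."""
    return (
        "SELECT p.author, p.body, p.votes\n"
        "FROM post p JOIN annotation a ON a.post_id = p.id\n"
        f"WHERE a.label = '{label}' AND p.votes >= {min_votes}\n"
        "ORDER BY p.votes DESC"
    )

sql = compile_query("feature", min_votes=5)
print(sql)
```

The vote threshold and ordering illustrate the "social aspect" mentioned above: quantitative forum signals become ordinary SQL predicates after compilation.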

    Do Differences in Values Influence Disagreements in Online Discussions?

    Disagreements are common in online discussions. Disagreement may foster collaboration and improve the quality of a discussion under some conditions. Although there exist methods for recognizing disagreement, a deeper understanding of the factors that influence disagreement is lacking in the literature. We investigate the hypothesis that differences in personal values are indicative of disagreement in online discussions. We show how state-of-the-art models can be used for estimating values in online discussions and how the estimated values can be aggregated into value profiles. We evaluate the estimated value profiles based on human-annotated agreement labels. We find that the dissimilarity of value profiles correlates with disagreement in specific cases. We also find that including value information in agreement prediction improves performance. Comment: Accepted as main paper at EMNLP 202
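The aggregation and dissimilarity steps described above can be sketched as follows, assuming per-comment value estimates are score vectors over a fixed value set. Averaging and cosine distance are one plausible realization, not necessarily the paper's exact method, and the 3-dimensional vectors are invented:

```python
import math

def value_profile(scores):
    """Aggregate per-comment value estimates (each a vector of scores
    over a fixed set of values) into one profile by averaging."""
    n = len(scores)
    return [sum(col) / n for col in zip(*scores)]

def dissimilarity(p, q):
    """Cosine distance between two value profiles (0 = identical)."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return 1 - dot / norm

# Illustrative profiles for two discussion participants.
alice = value_profile([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]])
bob = value_profile([[0.1, 0.8, 0.7], [0.0, 0.9, 0.6]])
print(dissimilarity(alice, bob))  # large distance: disagreement more likely
```

A scalar like this can then be fed into an agreement-prediction model alongside text features, which is one way the "including value information" step could be realized.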

    Crowd-Informed Goal Models

    A topic of recent interest is how to apply crowdsourced information toward producing better software requirements. A research question that has received little attention so far is how to leverage crowdsourced information toward creating better-informed models of requirements. In this paper, we contribute a method following which information in online discussions may be leveraged toward constructing goal models. A salient feature of our method is that it applies high-level queries to draw out potentially relevant information from discussions. We also give a subjective logic-based method for deriving an ordering of the goals based on the amount of supporting and rebutting evidence in the discussions. Such an ordering can potentially be applied toward prioritizing goals for implementation. © 2018 IEEE
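The subjective logic-based ordering described above can be sketched with standard binomial opinions: counts of supporting and rebutting evidence map to belief, disbelief, and uncertainty, and goals are ranked by projected probability. The mapping follows the usual subjective-logic evidence formula; the goal names and counts are invented:

```python
def opinion(support: int, rebut: int, base_rate: float = 0.5, W: int = 2):
    """Binomial subjective-logic opinion from evidence counts.
    Returns (belief, disbelief, uncertainty, projected probability),
    where belief + disbelief + uncertainty = 1 and W is the
    non-informative prior weight (standard choice W = 2)."""
    total = support + rebut + W
    b = support / total
    d = rebut / total
    u = W / total
    return b, d, u, b + base_rate * u

# Illustrative goals with (supporting, rebutting) evidence counts
# drawn from an annotated discussion.
goals = {"offline sync": (8, 1), "dark mode": (3, 3), "ads": (1, 7)}
ranked = sorted(goals, key=lambda g: opinion(*goals[g])[3], reverse=True)
print(ranked)  # ['offline sync', 'dark mode', 'ads']
```

The uncertainty term keeps sparsely discussed goals from being ranked as confidently as well-evidenced ones, which is what makes this ordering usable for prioritization.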

    Reason Against the Machine: Future Directions for Mass Online Deliberation

    Designers of online deliberative platforms aim to counter the degrading quality of online debates. Support technologies such as machine learning and natural language processing open avenues for widening the circle of people involved in deliberation, moving from small groups to "crowd" scale. Numerous design features of large-scale online discussion systems allow larger numbers of people to discuss shared problems, enhance critical thinking, and formulate solutions. We review the transdisciplinary literature on the design of digital mass deliberation platforms and examine the commonly featured design aspects (e.g., argumentation support, automated facilitation, and gamification) that attempt to facilitate scaling up. We find that the literature is largely focused on developing technical fixes for scaling up deliberation, but may neglect the more nuanced requirements of high-quality deliberation. Current design research is carried out with a small, atypical segment of the world's population, and much research is still needed on how to facilitate and accommodate different genders or cultures in deliberation, how to deal with the implications of pre-existing social inequalities, how to build motivation and self-efficacy in certain groups, and how to deal with differences in cognitive abilities and cultural or linguistic differences. Few studies bridge disciplines between deliberative theory, design, and engineering. As a result, scaling up deliberation will likely advance in separate systemic silos. We make design and process recommendations to correct this course and suggest avenues for future research.