
Canary: methodology for using social media to inform requirements modeling

Online discussions about software applications generate a large amount of requirements-related information. Social media serves as an extensive repository of user interaction around software applications: users discuss application features and express their sentiments about them in both qualitative (usually natural-language) and quantitative (for example, votes) ways. This information can potentially be applied in requirements engineering (RE); however, there are currently few systematic approaches for extracting it. To address this gap, I applied a three-fold research approach: exploring aspects of social media that can be useful to RE; developing Canary, a methodology for query-based extraction of RE-related information from social media; and devising a systematic methodology for enriching established goal models with information extracted using Canary queries.

First, I present a study of interaction among users of Google Maps on the forum Reddit. I highlight artifacts in these interactions that are relevant to requirements. I discuss goal modeling as an archetypal requirements modeling approach and use it as a basis for enhancing requirements modeling with notions that capture user interaction. To support my observations, I systematically collect, annotate, and present empirical data on the structure and value of online discussion about software applications.

Second, I present Canary, an approach for extracting and querying requirements-related information in online discussions. The highlight of my approach is a high-level query language that combines aspects of both requirements and discussion in online forums. I give the semantics of the query language in terms of relational databases and SQL, and I demonstrate the usefulness of the language with examples on real data extracted from online discussions. My approach relies on human annotations of online discussions. I highlight the subtleties involved in interpreting the content of online discussions and the assumptions and choices I made to address them effectively. I demonstrate the feasibility of generating high-accuracy annotations by obtaining them from lay Amazon Mechanical Turk users.

A topic of recent interest is how to apply crowdsourced information toward producing better software requirements; a question that has received little attention so far is how to leverage such information toward creating better-informed models of requirements. Third, I contribute a method following which information in online discussions may be leveraged toward constructing goal models. A salient feature of my method is that it applies high-level queries to draw out potentially relevant information from discussions. I also give a subjective logic-based method for deriving an ordering of the goals based on the amount of supporting and rebutting evidence in the discussions; such an ordering can potentially be applied toward prioritizing goals for implementation.
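Since the abstract grounds the query language's semantics in relational databases and SQL, a minimal sketch of that idea is shown below. The schema, table names, and sample data are my own illustrative assumptions, not the actual Canary design; the point is only how a high-level question such as "which features are mentioned positively?" could compile down to SQL over annotated posts.

```python
import sqlite3

# Hypothetical relational schema for annotated discussion posts.
# Table and column names are illustrative, not Canary's actual design.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE post (id INTEGER PRIMARY KEY, author TEXT, body TEXT);
CREATE TABLE annotation (
    post_id   INTEGER REFERENCES post(id),
    feature   TEXT,     -- application feature mentioned in the post
    sentiment TEXT      -- 'positive' | 'negative' | 'neutral'
);
""")

conn.executemany("INSERT INTO post VALUES (?, ?, ?)", [
    (1, "u1", "Offline maps are great"),
    (2, "u2", "Offline maps keep crashing"),
    (3, "u3", "Love the turn-by-turn navigation"),
])
conn.executemany("INSERT INTO annotation VALUES (?, ?, ?)", [
    (1, "offline maps", "positive"),
    (2, "offline maps", "negative"),
    (3, "navigation", "positive"),
])

# A high-level query like "features ordered by positive mentions"
# could translate to aggregation over the annotation table:
for feature, n in conn.execute("""
    SELECT feature, COUNT(*) AS positive_mentions
    FROM annotation
    WHERE sentiment = 'positive'
    GROUP BY feature
    ORDER BY positive_mentions DESC
"""):
    print(feature, n)
```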
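On the crowdsourcing side, a common way to obtain high-accuracy labels from lay annotators is to collect several judgments per item and aggregate them. The snippet below is a generic majority-vote aggregation sketch under that assumption; the labels and items are hypothetical, and the thesis may use a different aggregation procedure.

```python
from collections import Counter

# Hypothetical raw judgments: item id -> labels from several Turkers.
judgments = {
    "post-1": ["feature", "feature", "other"],
    "post-2": ["sentiment", "sentiment", "sentiment"],
    "post-3": ["feature", "other", "other"],
}

def majority_label(labels):
    """Return the most frequent label and its agreement ratio."""
    (label, count), = Counter(labels).most_common(1)
    return label, count / len(labels)

for item, labels in judgments.items():
    label, agreement = majority_label(labels)
    print(f"{item}: {label} (agreement {agreement:.2f})")
```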
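For the goal-ordering step, the sketch below assumes the standard binomial-opinion mapping from evidence counts in Jøsang's subjective logic and ranks goals by expected probability. The goal names and the supporting/rebutting counts are hypothetical, and the thesis's exact operators may differ.

```python
def opinion(r, s, W=2.0, a=0.5):
    """Map supporting (r) and rebutting (s) evidence counts to a
    binomial opinion (belief, disbelief, uncertainty) plus its
    expected probability E = b + a*u, per Josang's subjective logic."""
    k = r + s + W
    b, d, u = r / k, s / k, W / k
    return b, d, u, b + a * u

# Hypothetical goals with (supporting, rebutting) post counts
# drawn out of the discussion by Canary-style queries.
goals = {
    "offline maps": (14, 3),
    "lane guidance": (5, 4),
    "voice search": (2, 0),
}

ranking = sorted(goals, key=lambda g: opinion(*goals[g])[3], reverse=True)
for g in ranking:
    b, d, u, e = opinion(*goals[g])
    print(f"{g}: belief={b:.2f} disbelief={d:.2f} "
          f"uncertainty={u:.2f} E={e:.2f}")
```

Note how the uncertainty term matters for prioritization: a goal with little evidence (2 supporting, 0 rebutting) keeps high uncertainty, so it can rank above a contested goal (5 vs. 4) yet below a well-supported one (14 vs. 3).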