
    RoseMatcher: Identifying the Impact of User Reviews on App Updates

    Release planning for mobile apps has recently become an area of active research. Prior research concentrated on analysing apps based on their release notes in the App Store, or on tracking user reviews to support app evolution with issue trackers. However, the Apple App Store, as a platform through which development teams communicate with users, has not been studied for detecting the relevance between release notes and user reviews. In this paper, we introduce RoseMatcher, an automatic approach to match relevant user reviews with app release notes and to identify matched pairs with high confidence. We collected 944 release notes and 1,046,862 user reviews from 5 mobile apps in the Apple App Store as research data, and evaluated the effectiveness and accuracy of RoseMatcher. Our evaluation shows that RoseMatcher reaches a hit ratio of 0.718 for identifying relevant matched pairs. We further conducted manual labelling and content analysis on 984 relevant matched pairs, and defined 8 roles user reviews play in app updates according to the relationship between release notes and user reviews in the relevant matched pairs. The study results show that release notes tend to respond to and resolve feature requests, bug reports, and complaints raised in user reviews, while user reviews in turn tend to give positive, negative, and constructive feedback on app updates. Additionally, in the time dimension, the relevant reviews of a release note tend to be posted within a short period before and after its release; in the matched pairs, the time interval between the posting of release notes and user reviews reaches a maximum of three years and an average of one year. These findings indicate that development teams do adopt user reviews when updating apps, and that users show interest in app release notes. Comment: 18 pages, 7 figures
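The abstract does not specify how RoseMatcher scores a release note against a review, so the following is only a minimal illustrative sketch of the general idea of identifying relevant matched pairs: score each (release note, review) pair by bag-of-words cosine similarity and keep pairs above a threshold. The function names and the threshold value are assumptions, not the paper's actual method.

```python
import math
import re
from collections import Counter

def tokens(text):
    # Lowercased word tokens; real systems would also stem and drop stopwords.
    return re.findall(r"[a-z]+", text.lower())

def cosine(a, b):
    # Cosine similarity between two bag-of-words vectors.
    va, vb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_pairs(release_notes, reviews, threshold=0.3):
    # Keep (note, review, score) triples whose similarity clears the threshold,
    # best matches first.
    pairs = []
    for note in release_notes:
        for review in reviews:
            score = cosine(note, review)
            if score >= threshold:
                pairs.append((note, review, score))
    return sorted(pairs, key=lambda p: -p[2])

notes = ["Fixed the crash on photo upload"]
reviews = ["The app crash on photo upload is terrible",
           "Love the new dark mode"]
print(match_pairs(notes, reviews))
```

With these toy inputs, only the crash-related review clears the threshold; the dark-mode review shares too few tokens with the note.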

    User Review-Based Change File Localization for Mobile Applications

    In current mobile app development, novel and emerging DevOps practices (e.g., Continuous Delivery, Continuous Integration, and user feedback analysis) and tools are becoming more widespread. For instance, the integration of user feedback (provided in the form of user reviews) into the software release cycle represents a valuable asset for the maintenance and evolution of mobile apps. To fully exploit this asset, it is highly desirable for developers to establish semantic links between user reviews and the software artefacts to be changed (e.g., source code and documentation), and thus to localize the files that may need to change to address the user feedback. In this paper, we propose RISING (Review Integration via claSsification, clusterIng, and linkiNG), an automated approach to support the continuous integration of user feedback via classification, clustering, and linking of user reviews. RISING leverages domain-specific constraint information and semi-supervised learning to group user reviews into multiple fine-grained clusters concerning similar user requests. Then, by combining the textual information from both commit messages and source code, it automatically localizes potential change files to accommodate the users' requests. Our empirical studies demonstrate that the proposed approach outperforms the state-of-the-art baseline in terms of clustering and localization accuracy, and thus produces more reliable results. Comment: 15 pages, 3 figures, 8 tables
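The localization step described above can be sketched at a toy level: rank candidate files by the textual overlap between a review cluster's request and each file's history of commit messages. The token-overlap score here is a simplifying assumption standing in for RISING's actual combined similarity over commit messages and source code; the file names and data are invented for illustration.

```python
import re

def tok(text):
    # Set of lowercased word tokens.
    return set(re.findall(r"[a-z]+", text.lower()))

def localize(cluster_request, file_history):
    # file_history maps a file path to the concatenated commit messages
    # that touched it. Rank files by shared tokens with the request.
    query = tok(cluster_request)
    scores = {path: len(query & tok(msgs)) for path, msgs in file_history.items()}
    return sorted(scores, key=scores.get, reverse=True)

history = {
    "ui/LoginActivity.java": "fix login button crash, add login retry",
    "net/SyncService.java": "improve background sync scheduling",
}
print(localize("login crash after update", history))
```

A real implementation would weight terms (e.g., TF-IDF), include identifiers mined from the source files themselves, and return scores alongside paths so developers can judge confidence.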

    Overcoming Language Dichotomies: Toward Effective Program Comprehension for Mobile App Development

    Mobile devices and platforms have become an established target for modern software developers due to performant hardware and a large and growing user base numbering in the billions. Despite their popularity, the software development process for mobile apps comes with a set of unique, domain-specific challenges rooted in program comprehension. Many of these challenges stem from developer difficulties in reasoning about different representations of a program, a phenomenon we define as a "language dichotomy". In this paper, we reflect upon the various language dichotomies that contribute to open problems in program comprehension and development for mobile apps. Furthermore, to help guide the research community towards effective solutions for these problems, we provide a roadmap of directions for future work. Comment: Invited Keynote Paper for the 26th IEEE/ACM International Conference on Program Comprehension (ICPC'18)

    Disparity between the Programmatic Views and the User Perceptions of Mobile Apps

    User perception in a mobile-app ecosystem is represented as user ratings of apps. Unfortunately, user ratings are often biased and do not reflect the actual usability of an app. To address the challenges associated with the selection and ranking of apps, we need a comprehensive and holistic view of an app's behavior. In this paper, we present and evaluate the Trust based Rating and Ranking (TRR) approach. It relies solely on an app's internal view, built from programmatic artifacts. We compute a trust tuple (Belief, Disbelief, Uncertainty - B, D, U) for each app based on the internal view and use it to rank-order apps offering similar functionality. The apps used for empirically evaluating the TRR approach were collected from the Google Play Store. Our experiments compare the TRR ranking with the user review-based ranking present in the Google Play Store. Although there are disparities between the two rankings, a slightly deeper investigation indicates an underlying similarity between the two alternatives.
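The abstract does not give the formula behind the (B, D, U) tuple. One common convention for such tuples is subjective logic, where an opinion with B + D + U = 1 is derived from positive and negative evidence counts and apps are ranked by probability expectation. The sketch below follows that convention purely as an assumption about how a trust tuple could drive rank-ordering; the evidence counts are invented.

```python
def trust_tuple(positive, negative, base=2):
    # Subjective-logic style opinion from evidence counts (an assumption;
    # the paper's exact formulation is not given in the abstract).
    # The `base` term keeps some uncertainty even with much evidence.
    total = positive + negative + base
    belief = positive / total
    disbelief = negative / total
    uncertainty = base / total
    return belief, disbelief, uncertainty

def expected_trust(b, d, u, prior=0.5):
    # Probability expectation E = B + prior * U, used to rank-order apps.
    return b + prior * u

# Hypothetical evidence per app: (positive signals, negative signals).
apps = {"AppA": (40, 5), "AppB": (10, 10)}
ranked = sorted(
    apps,
    key=lambda a: expected_trust(*trust_tuple(*apps[a])),
    reverse=True,
)
print(ranked)
```

Note that B + D + U always sums to 1, so the tuple cleanly separates "evidence for", "evidence against", and "not enough evidence", which a single star rating cannot express.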

    Translating Video Recordings of Mobile App Usages into Replayable Scenarios

    Screen recordings of mobile applications are easy to obtain and capture a wealth of information pertinent to software developers (e.g., bugs or feature requests), making them a popular mechanism for crowdsourced app feedback. Thus, these videos are becoming a common artifact that developers must manage. In light of unique mobile development constraints, including swift release cycles and rapidly evolving platforms, automated techniques for analyzing all types of rich software artifacts benefit mobile developers. Unfortunately, automatically analyzing screen recordings presents serious challenges, due to their graphical nature, compared to other types of (textual) artifacts. To address these challenges, this paper introduces V2S, a lightweight, automated approach for translating video recordings of Android app usages into replayable scenarios. V2S is based primarily on computer vision techniques and adapts recent solutions for object detection and image classification to detect and classify user actions captured in a video, and to convert these into a replayable test scenario. We performed an extensive evaluation of V2S involving 175 videos depicting 3,534 GUI-based actions collected from users exercising features and reproducing bugs from over 80 popular Android apps. Our results illustrate that V2S can accurately replay scenarios from screen recordings, and is capable of reproducing ≈89% of our collected videos with minimal overhead. A case study with three industrial partners illustrates the potential usefulness of V2S from the viewpoint of developers. Comment: In proceedings of the 42nd International Conference on Software Engineering (ICSE'20), 13 pages
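The last stage of such a pipeline — turning classified user actions into something replayable — can be illustrated with a toy translator that emits standard `adb shell input` commands. The action dictionary schema here is an assumption for illustration, not V2S's actual intermediate format.

```python
def to_adb_commands(actions):
    # Translate detected GUI actions into adb `input` commands that,
    # run in order against a connected device, replay the scenario.
    cmds = []
    for act in actions:
        if act["type"] == "tap":
            cmds.append(f"adb shell input tap {act['x']} {act['y']}")
        elif act["type"] == "swipe":
            cmds.append(
                "adb shell input swipe "
                f"{act['x1']} {act['y1']} {act['x2']} {act['y2']} {act['ms']}"
            )
    return cmds

# Hypothetical output of the detection/classification stage.
detected = [
    {"type": "tap", "x": 540, "y": 960},
    {"type": "swipe", "x1": 540, "y1": 1500, "x2": 540, "y2": 500, "ms": 300},
]
print("\n".join(to_adb_commands(detected)))
```

The hard part, of course, is upstream: recovering accurate touch coordinates and action types from raw video frames, which is where the object detection and image classification models do their work.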