
    Extracting Information from App User Reviews to Facilitate Software Development Activities

    For app developers, it is important to continuously evaluate the needs and expectations of their users in order to improve app quality. User reviews submitted to app marketplaces are regarded as a useful information source for reassessing evolving user needs. The large volume of user reviews received every day, however, requires automatic methods to find such information in them. Text classification models can be used to categorize review information into types such as feature requests and bug reports, while automatic app feature extraction from user reviews can help in summarizing users' sentiments at the level of app features. For classifying review information, we perform experiments comparing the performance of simple models using only lexical features to models with rich linguistic features and models built on deep learning architectures, i.e., Convolutional Neural Networks (CNNs). To investigate factors influencing the performance of automatic app feature extraction methods, i.e., rule-based and supervised machine learning, we first establish a baseline in a single experimental setting and then compare the performances in different experimental settings (i.e., varying annotated datasets and evaluation methods).
    Since the performance of supervised feature extraction methods is more sensitive than that of rule-based methods to (1) the guidelines used to annotate app features in user reviews and (2) the size of the annotated data, we investigate their impact on the performance of supervised feature extraction models and suggest new annotation guidelines that have the potential to improve feature extraction performance. To make the research results of the thesis project applicable for non-experts as well, we developed a proof-of-concept tool for comparing competing apps. The tool combines review classification and app feature extraction methods and has been evaluated by ten developers from industry, who perceived it as useful for improving app quality.
    https://www.ester.ee/record=b529379
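    The simplest of the review-classification baselines mentioned above, a model using only lexical features, can be illustrated with a minimal sketch. This assumes scikit-learn is available; the label set and example reviews are purely illustrative and not taken from the thesis's datasets or its exact pipeline.

```python
# Minimal sketch of a lexical-feature review classifier: TF-IDF over word
# n-grams fed to a linear SVM. Labels and example reviews are illustrative only.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

reviews = [
    "The app crashes every time I open the camera",
    "Please add a dark mode option",
    "Great app, works perfectly",
    "Login fails after the latest update",
]
labels = ["bug_report", "feature_request", "other", "bug_report"]

classifier = Pipeline([
    # Word unigrams and bigrams as the only (lexical) features
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
    ("svm", LinearSVC()),
])
classifier.fit(reviews, labels)

print(classifier.predict(["The app freezes when uploading photos"]))
```

    Richer models of the kind compared in the thesis would replace or augment the TF-IDF features with linguistic features (e.g., part-of-speech or dependency information) or swap the linear classifier for a CNN over word embeddings.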

    User Review-Based Change File Localization for Mobile Applications

    In current mobile app development, novel and emerging DevOps practices (e.g., Continuous Delivery, Continuous Integration, and user feedback analysis) and tools are becoming more widespread. For instance, the integration of user feedback (provided in the form of user reviews) into the software release cycle represents a valuable asset for the maintenance and evolution of mobile apps. To fully make use of these assets, it is highly desirable for developers to establish semantic links between the user reviews and the software artefacts to be changed (e.g., source code and documentation), and thus to localize the potential files to change for addressing the user feedback. In this paper, we propose RISING (Review Integration via claSsification, clusterIng, and linkiNG), an automated approach to support the continuous integration of user feedback via classification, clustering, and linking of user reviews. RISING leverages domain-specific constraint information and semi-supervised learning to group user reviews into multiple fine-grained clusters concerning similar users' requests. Then, by combining the textual information from both commit messages and source code, it automatically localizes potential change files to accommodate the users' requests. Our empirical studies demonstrate that the proposed approach outperforms the state-of-the-art baseline work in terms of clustering and localization accuracy, and thus produces more reliable results.
    Comment: 15 pages, 3 figures, 8 tables
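    The linking idea described above can be conveyed with a much simplified sketch: rank candidate files by the textual similarity between a user review and the commit messages that previously touched each file. This is only an illustration of the general idea, not RISING's actual implementation (which also exploits source code text and domain-specific constraints for clustering); the file paths, commit messages, and review below are hypothetical.

```python
# Simplified illustration of review-to-change-file linking: score each file by
# the cosine similarity between a review and the commit messages that touched it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical commit history: file path -> concatenated commit messages
commit_messages = {
    "app/src/login/LoginActivity.java": "fix login crash after token refresh",
    "app/src/camera/CameraFragment.java": "improve camera startup time",
    "app/src/ui/SettingsScreen.java": "add notification settings toggle",
}

review = "The app crashes whenever I try to log in after updating"

files = list(commit_messages)
corpus = [commit_messages[f] for f in files] + [review]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(corpus)

# Similarity between the review (last row) and each file's commit messages
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for path, score in sorted(zip(files, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {path}")
```

    In this toy example the login-related file would rank highest because its commit messages share vocabulary with the review; RISING additionally clusters similar reviews first, so that each cluster (rather than each individual review) is linked to candidate change files.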