
    Overcoming Language Dichotomies: Toward Effective Program Comprehension for Mobile App Development

    Mobile devices and platforms have become an established target for modern software developers due to performant hardware and a large and growing user base numbering in the billions. Despite their popularity, the software development process for mobile apps comes with a set of unique, domain-specific challenges rooted in program comprehension. Many of these challenges stem from developer difficulties in reasoning about different representations of a program, a phenomenon we define as a "language dichotomy". In this paper, we reflect upon the various language dichotomies that contribute to open problems in program comprehension and development for mobile apps. Furthermore, to help guide the research community towards effective solutions for these problems, we provide a roadmap of directions for future work.

    Comment: Invited Keynote Paper for the 26th IEEE/ACM International Conference on Program Comprehension (ICPC'18)

    LLM for Test Script Generation and Migration: Challenges, Capabilities, and Opportunities

    This paper investigates the application of large language models (LLMs) to mobile application test script generation. Test script generation is a vital component of software testing, enabling efficient and reliable automation of repetitive test tasks. However, existing generation approaches often encounter limitations, such as difficulties in accurately capturing and reproducing test scripts across diverse devices, platforms, and applications. These challenges arise from differences in screen sizes, input modalities, platform behaviors, API inconsistencies, and application architectures. Overcoming these limitations is crucial for achieving robust and comprehensive test automation. By leveraging the capabilities of LLMs, we aim to address these challenges and explore their potential as a versatile tool for test automation. We investigate how well LLMs can adapt to diverse devices and systems while accurately capturing and generating test scripts. Additionally, we evaluate their cross-platform generation capabilities by assessing their ability to handle operating system variations and platform-specific behaviors. Furthermore, we explore the application of LLMs to cross-app migration, where they generate test scripts for different applications and software environments based on existing scripts. Throughout the investigation, we analyze their adaptability to various user interfaces, app architectures, and interaction patterns, ensuring accurate script generation and compatibility. The findings of this research contribute to the understanding of LLMs' capabilities in test automation. Ultimately, this research aims to enhance software testing practices, empowering app developers to achieve higher levels of software quality and development efficiency.

    Comment: Accepted by the 23rd IEEE International Conference on Software Quality, Reliability, and Security (QRS 2023)
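
    The cross-app migration scenario this abstract describes can be sketched as a single prompt-and-complete step: give the LLM the existing script plus a description of the target app's UI, and ask it to rewrite each step. The sketch below is illustrative only; the function names, prompt wording, and the llm_complete stub are assumptions, not the paper's implementation.

    # Illustrative sketch of LLM-based cross-app test script migration.
    # llm_complete is a placeholder for any chat-completion API call;
    # nothing here is taken from the paper's actual code.

    def llm_complete(prompt: str) -> str:
        """Placeholder: wire this up to an LLM provider of your choice."""
        raise NotImplementedError

    def migrate_test_script(source_script: str, source_app: str,
                            target_app: str, target_ui_dump: str) -> str:
        """Ask the LLM to adapt an existing test script to a different app,
        grounding it in the target app's current UI hierarchy."""
        prompt = (
            "You migrate mobile UI test scripts between apps.\n"
            f"Source app: {source_app}\nTarget app: {target_app}\n"
            f"Existing script:\n{source_script}\n\n"
            f"Target app UI hierarchy dump:\n{target_ui_dump}\n\n"
            "Rewrite the script so each step targets the equivalent widget "
            "in the target app, and flag any step with no counterpart."
        )
        return llm_complete(prompt)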

    An Analysis of the Microsoft 365 Cloud Migration Process, its Alternatives, and Results

    This study follows the decision-making process of comparing a traditional business software stack to cloud alternatives, comparing different cloud platforms, and planning a migration. It addresses specific workloads of an example company in the financial services industry and how the tools in a Microsoft 365 subscription support that work. The process of transferring an existing Exchange server and its users to Azure is thoroughly detailed, as is the logic behind certain crucial decisions that are part of that procedure. A calculation of real-world savings is also provided. The resulting paper is usable as both a reference and a guide for making responsible plans on the subject of cloud migration.
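
    The study's savings calculation is not reproduced here, but the shape of such a comparison is simple multi-year arithmetic: capital plus operating costs for the on-premises stack against a per-user subscription. Every figure below is a placeholder assumption for illustration, not a number taken from the study.

    # Rough shape of an on-premises vs. Microsoft 365 cost comparison.
    # All amounts are invented placeholders, not the study's data.
    YEARS = 5
    USERS = 50

    # On-premises Exchange: hardware, licensing, power, and maintenance.
    onprem_capex = 12_000                  # server hardware, one refresh cycle
    onprem_opex_per_year = 4_000 + 3_000   # licensing/CALs + power/maintenance
    onprem_total = onprem_capex + onprem_opex_per_year * YEARS

    # Microsoft 365: flat per-user subscription, no server hardware.
    m365_per_user_month = 12.50            # illustrative per-user rate
    m365_total = m365_per_user_month * USERS * 12 * YEARS

    print(f"On-prem {YEARS}-year cost: ${onprem_total:,.0f}")
    print(f"M365 {YEARS}-year cost:    ${m365_total:,.0f}")
    print(f"Estimated savings:     ${onprem_total - m365_total:,.0f}")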

    Make LLM a Testing Expert: Bringing Human-like Interaction to Mobile GUI Testing via Functionality-aware Decisions

    Automated Graphical User Interface (GUI) testing plays a crucial role in ensuring app quality, especially as mobile applications have become an integral part of our daily lives. Despite the growing popularity of learning-based techniques in automated GUI testing, owing to their ability to generate human-like interactions, they still suffer from several limitations, such as low testing coverage, inadequate generalization capabilities, and heavy reliance on training data. Inspired by the success of Large Language Models (LLMs) like ChatGPT in natural language understanding and question answering, we formulate the mobile GUI testing problem as a Q&A task. We propose GPTDroid, which asks the LLM to chat with the mobile app: GUI page information is passed to the LLM to elicit testing scripts, the scripts are executed, and the app's feedback is passed back to the LLM, iterating the whole process. Within this framework, we have also introduced a functionality-aware memory prompting mechanism that equips the LLM with the ability to retain testing knowledge of the whole process and conduct long-term, functionality-based reasoning to guide exploration. We evaluate it on 93 apps from Google Play and demonstrate that it outperforms the best baseline by 32% in activity coverage and detects 31% more bugs at a faster rate. Moreover, GPTDroid identifies 53 new bugs on Google Play, of which 35 have been confirmed and fixed.

    Comment: Accepted by IEEE/ACM International Conference on Software Engineering 2024 (ICSE 2024). arXiv admin note: substantial text overlap with arXiv:2305.0943
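
    The prompt-execute-feedback loop with a functionality-aware memory that this abstract describes can be sketched minimally as follows. Class and function names, the memory window, and the prompt wording are assumptions for illustration; this is not the GPTDroid source code.

    # Minimal sketch of an iterative LLM-driven GUI-testing loop with a
    # functionality-aware memory, as described in the abstract above.
    # AppUnderTest and ask_llm are placeholder stubs, not GPTDroid's API.
    from typing import List, Tuple

    def ask_llm(prompt: str) -> str:
        """Placeholder for a chat-completion call to an LLM."""
        raise NotImplementedError

    class AppUnderTest:
        """Placeholder device/app driver (e.g., backed by UIAutomator/ADB)."""
        def gui_page(self) -> str:
            """Return the current page's widgets and texts as text."""
            raise NotImplementedError
        def execute(self, script: str) -> str:
            """Run a test action and return the app's feedback."""
            raise NotImplementedError

    def gptdroid_style_loop(app: AppUnderTest, max_steps: int = 100) -> None:
        memory: List[Tuple[str, str]] = []  # (action, feedback) history
        for _ in range(max_steps):
            page = app.gui_page()
            tested = "\n".join(f"- {a} -> {f}" for a, f in memory[-20:])
            prompt = (
                "You are testing a mobile app. Current GUI page:\n"
                f"{page}\n"
                f"Functionalities explored so far:\n{tested}\n"
                "Answer with the next test action, preferring untested "
                "functionality."
            )
            action = ask_llm(prompt)            # LLM answers the Q&A prompt
            feedback = app.execute(action)      # feedback fed back next turn
            memory.append((action, feedback))   # retain testing knowledge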

    Designing mobile language learning with Arabic speaking migrants

    Learning the language is crucial to being included in a new society. For migrants, the smartphone is a commonly used device for staying connected, and it could also be used for language learning purposes. This research concerns mobile literacy with newly arrived Arabic-speaking migrants to Sweden and the use of mobile learning as a means for integration. The purpose is to investigate how mobile technology can be designed to support migrants' language learning process. The research concerns technology development, where versions of a mobile application (app) are explored from a bottom-up perspective with Arabic-speaking migrants. A qualitative method approach is applied, built on design principles focusing on the construction of situated artefacts and the evaluation of performance. The results show that intuitive design and engaging content with connections to everyday social situations play important parts in sustaining motivation to engage with an app.