
    Improving the Efficiency of Mobile User Interface Development through Semantic and Data-Driven Analyses

    With millions of mobile applications available on Google Play and Apple's App Store, the smartphone has become a necessity in our lives. People can access a wide variety of services through mobile applications, for which user interfaces (UIs) serve as an important proxy. A well-designed UI makes an application easy, practical, and efficient to use. However, due to the rapid pace of application iteration and the shortage of UI designers, developers must design and implement UIs in a short time. As a result, they may be unaware of, or compromise on, important usability and accessibility factors while developing the user interfaces of mobile applications. Efficient and useful tools are therefore needed to improve the efficiency of user interface development. In this thesis, I propose three techniques to improve the efficiency of designing and developing user interfaces through semantic and data-driven analyses. First, I propose a UI design search engine that helps designers and developers quickly create trendy and practical UI designs by exposing them to the UIs of real applications. I collected a large-scale UI design dataset by automatically exploring the UIs of top-downloaded Android applications, and designed an image-autoencoder-based search engine to enable finer-grained UI design search. Second, while studying how real UIs are implemented, I found that existing applications suffer from a severe accessibility issue: missing labels for image-based buttons. This issue prevents blind users from accessing key functionality on UIs. Because blind users rely on screen readers to read UI content, developers must set appropriate labels for image-based buttons. I therefore propose LabelDroid, which automatically generates labels (i.e., content descriptions) for image-based buttons while developers implement UIs.
    Finally, the above techniques all require view hierarchy information, which records the bounds and types of the contained elements, yet such metadata is not always available: for example, UIs on design-sharing platforms carry no metadata about their elements. To generalize these techniques to a broader scope, I conducted the first large-scale empirical study evaluating how well existing object detection methods detect elements in UIs. Drawing on the unique characteristics of UI elements and UIs, I propose a hybrid method that boosts the accuracy and precision of UI element detection. Such a fundamental method can benefit many downstream applications, such as UI design search, UI code generation, and UI testing. In conclusion, I propose three techniques that improve the efficiency of designing and developing mobile application user interfaces through semantic and data-driven analyses. These methods could readily generalize to a broader scope, such as the user interfaces of desktop applications and websites. I expect my proposed techniques and the resulting understanding of user interfaces to facilitate future research.
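    The retrieval core of such an autoencoder-based design search can be illustrated with a minimal sketch. The `encode` function below (average pooling plus flattening) is a hypothetical stand-in for the trained autoencoder's bottleneck encoder, which the thesis does not specify here; the cosine-similarity ranking over embeddings is the generic retrieval step.

```python
import numpy as np

def encode(screenshot):
    """Stand-in for the trained autoencoder's encoder: 8x8 average
    pooling plus flattening. In the real system this would be the
    bottleneck activation of an autoencoder trained on UI screenshots."""
    h, w = screenshot.shape
    cropped = screenshot[: h - h % 8, : w - w % 8]
    pooled = cropped.reshape(cropped.shape[0] // 8, 8,
                             cropped.shape[1] // 8, 8).mean(axis=(1, 3))
    return pooled.ravel()

def search(query_vec, gallery_vecs, top_k=3):
    """Rank gallery UI embeddings by cosine similarity to the query."""
    q = query_vec / (np.linalg.norm(query_vec) + 1e-12)
    g = gallery_vecs / (np.linalg.norm(gallery_vecs, axis=1,
                                       keepdims=True) + 1e-12)
    return np.argsort(-(g @ q))[:top_k]
```

    A query screenshot is encoded once and compared against the precomputed gallery embeddings, so search cost is independent of the encoder.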

    Owl Eyes: Spotting UI Display Issues via Visual Understanding

    The graphical user interface (GUI) provides a visual bridge through which a software application and its end users interact. With advances in technology and aesthetics, GUI visual effects have become increasingly attractive, but this complexity poses a great challenge to GUI implementation. According to our pilot study of crowdtesting bug reports, display issues such as text overlap, blurred screens, and missing images frequently occur during GUI rendering on different devices due to software or hardware compatibility problems. They harm app usability and result in a poor user experience. To detect these issues, we propose OwlEye, a novel approach based on deep learning for modelling the visual information in GUI screenshots. OwlEye detects GUIs with display issues and also locates the detailed region of each issue in the given GUI to guide developers in fixing the bug. We manually constructed a large-scale labelled dataset of 4,470 GUI screenshots with UI display issues and developed a heuristics-based data augmentation method to boost OwlEye's performance. The evaluation demonstrates that OwlEye achieves 85% precision and 84% recall in detecting UI display issues, and 90% accuracy in localizing them. We also evaluated OwlEye on popular Android apps from Google Play and F-Droid, and successfully uncovered 57 previously undetected UI display issues, 26 of which have been confirmed or fixed so far.
    Comment: Accepted to 35th IEEE/ACM International Conference on Automated Software Engineering (ASE 20
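    The abstract describes localization only at a high level. A common way to turn a screenshot classifier's attention into an issue region is to threshold a per-pixel activation map and take the bounding box of the salient area; the sketch below assumes such a heatmap is available and is an illustration, not OwlEye's actual implementation.

```python
import numpy as np

def localize_issue(heatmap, threshold=0.5):
    """Return the bounding box (top, left, bottom, right) of the most
    salient display-issue region in a per-pixel issue-likelihood map
    (e.g., a class activation map from the screenshot classifier),
    or None if nothing is salient."""
    if heatmap.max() <= 0:
        return None
    # Keep pixels at or above a fraction of the peak activation.
    mask = heatmap >= threshold * heatmap.max()
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return int(rows[0]), int(cols[0]), int(rows[-1] + 1), int(cols[-1] + 1)
```

    The returned box can be drawn over the screenshot to point developers at the region to inspect.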

    Prompt Sapper: A LLM-Empowered Production Tool for Building AI Chains

    The emergence of foundation models, such as large language models (LLMs) like GPT-4 and text-to-image models like DALL-E, has opened up numerous possibilities across various domains. People can now use natural language (i.e., prompts) to communicate with AI to perform tasks. While people can use foundation models through chatbots (e.g., ChatGPT), chat, regardless of the capabilities of the underlying models, is not a production tool for building reusable AI services. Frameworks like LangChain enable LLM-based application development but require substantial programming knowledge and thus pose a barrier. To mitigate this, we propose the concept of the AI chain and bring the best principles and practices accumulated in software engineering over decades into AI chain engineering, to systematise its methodology. We also develop a no-code integrated development environment, Prompt Sapper, which naturally embodies these AI chain engineering principles and patterns in the process of building AI chains, thereby improving their performance and quality. With Prompt Sapper, AI chain engineers can compose prompt-based AI services on top of foundation models through chat-based requirement analysis and visual programming. Our user study evaluated and demonstrated the efficiency and correctness of Prompt Sapper.
    Comment: 23 pages, 5 figures, accepted to TOSEM 202

    Let's Chat to Find the APIs: Connecting Human, LLM and Knowledge Graph through AI Chain

    API recommendation methods have evolved from literal and semantic keyword matching to query expansion and query clarification. The latest query clarification methods are knowledge graph (KG)-based, but their limitations include out-of-vocabulary (OOV) failures and rigid question templates. To address these limitations, we propose a novel knowledge-guided query clarification approach for API recommendation that leverages a large language model (LLM) guided by a KG. We use the LLM as a neural knowledge base to overcome OOV failures, generating fluent and appropriate clarification questions and options. We also leverage the structured API knowledge and entity relationships stored in the KG to filter out noise, and transfer the optimal clarification path from the KG to the LLM, increasing the efficiency of the clarification process. Our approach is designed as an AI chain of five steps, each handled by a separate LLM call, to improve the accuracy, efficiency, and fluency of query clarification in API recommendation. We verified the usefulness of each unit in our AI chain; all received scores close to a perfect 5. Compared to the baselines, our approach significantly improves MRR, by up to 63.9% when the query statement is covered by the KG and 37.2% when it is not. Ablation experiments reveal that the guidance of knowledge in the KG and the knowledge-guided pathfinding strategy are crucial to our approach's performance, contributing 19.0% and 22.2% increases in MAP, respectively. Our approach demonstrates a way to bridge the gap between KGs and LLMs, effectively compensating for the weaknesses of each with the strengths of the other.
    Comment: Accepted on ASE'202
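    The five concrete steps of the clarification chain are not enumerated in this abstract, so the sketch below shows only the generic "one LLM call per step" structure of an AI chain; the step names and the `llm` callable are placeholders, not the paper's actual prompts.

```python
def run_ai_chain(steps, llm, user_query):
    """Run an AI chain: each step is a (name, prompt_template) pair
    handled by its own LLM call, with the previous step's output fed
    into the next step's {input} slot. `llm` is any callable mapping
    a prompt string to a completion string."""
    state = user_query
    trace = []  # keep per-step outputs for inspection/debugging
    for name, template in steps:
        state = llm(template.format(input=state))
        trace.append((name, state))
    return state, trace
```

    Because each step is a separate call, intermediate outputs can be validated or post-processed (e.g., filtered against the KG) before being passed on, which is what makes the chain more controllable than a single monolithic prompt.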

    Let's Discover More API Relations: A Large Language Model-based AI Chain for Unsupervised API Relation Inference

    APIs have intricate relations that can be described in text and represented as knowledge graphs to aid software engineering tasks. Existing relation extraction methods have limitations, such as a limited API text corpus and sensitivity to the characteristics of the input text. To address these limitations, we propose using large language models (LLMs) (e.g., GPT-3.5) as a neural knowledge base for API relation inference. This approach leverages the entire Web used to pre-train LLMs as a knowledge base and is insensitive to the context and complexity of input texts. To ensure accurate inference, we design our analytic flow as an AI chain with three AI modules: API FQN Parser, API Knowledge Extractor, and API Relation Decider. The accuracies of the API FQN Parser and API Relation Decider modules are 0.81 and 0.83, respectively. Using the generative capacity of the LLM and our approach's inference capability, we achieve an average F1 of 0.76 across the three datasets, significantly higher than the state-of-the-art method's average F1 of 0.40. Compared to a CoT-based method, our AI chain design improves inference reliability by 67%, and the AI-crowd-intelligence strategy enhances the robustness of our approach by 26%.
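    The AI-crowd-intelligence strategy mentioned above can be sketched as majority voting over repeated samples of a stochastic inference chain. The `infer_relation` callable and its interface are hypothetical stand-ins for the paper's LLM-backed modules; only the voting idea is illustrated.

```python
from collections import Counter

def crowd_vote(infer_relation, api_pair, n_samples=5):
    """Sample the stochastic LLM-backed relation-inference chain
    several times for the same API pair and keep the majority answer,
    smoothing out occasional inconsistent completions."""
    votes = [infer_relation(api_pair) for _ in range(n_samples)]
    label, _ = Counter(votes).most_common(1)[0]
    return label
```

    With temperature-based sampling, individual completions disagree occasionally; aggregating several of them trades extra LLM calls for more stable labels.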

    Neottia maolanensis, a replacement name for Neottia bifida M.N.Wang (Orchidaceae)

    According to Article 53.1 of the International Code of Nomenclature for Algae, Fungi, and Plants (Shenzhen Code), Neottia bifida M.N.Wang (as 'bifidus'; PhytoKeys 229: 222, 2023) is an illegitimate name; hence, the new name Neottia maolanensis M.N.Wang is proposed here.

    Releasing 9.6 wt% of H2 from Mg(NH2)2–3LiH–NH3BH3 through mechanochemical reaction

    Ball milling a mixture of Mg(NH2)2, LiH, and NH3BH3 (AB) in a molar ratio of 1:3:1 results in the direct liberation of 9.6 wt% H2 (11 equiv. H), which is superior to binary systems such as LiH–AB (6 equiv. H), AB–Mg(NH2)2 (no H2 release), and LiH–Mg(NH2)2 (4 equiv. H). The overall dehydrogenation is a three-step process: LiH first reacts with AB to yield LiNH2BH3; LiNH2BH3 then reacts with Mg(NH2)2 to form LiMgBN3H3; and LiMgBN3H3 subsequently reacts with an additional 2 equivalents of LiH to form Li3BN2 and MgNH, together with hydrogen.
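    The three-step pathway described above can be written schematically. The abstract does not state the hydrogen released per step, so the H2 coefficients in steps 2 and 3 are left symbolic rather than guessed:

```latex
\begin{align*}
\mathrm{LiH} + \mathrm{NH_3BH_3} &\rightarrow \mathrm{LiNH_2BH_3} + \mathrm{H_2}\\
\mathrm{LiNH_2BH_3} + \mathrm{Mg(NH_2)_2} &\rightarrow \mathrm{LiMgBN_3H_3} + x\,\mathrm{H_2}\\
\mathrm{LiMgBN_3H_3} + 2\,\mathrm{LiH} &\rightarrow \mathrm{Li_3BN_2} + \mathrm{MgNH} + y\,\mathrm{H_2}
\end{align*}
```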

    Structure and morphology of microporous carbon membrane materials derived from poly(phthalazinone ether sulfone ketone)

    A novel polymeric precursor, poly(phthalazinone ether sulfone ketone) (PPESK), was used to prepare microporous carbon membranes by carbonization at 950 °C. The structure and morphology of the microporous carbon membrane materials were characterized by X-ray diffraction, Raman spectrometry, transmission electron microscopy, scanning electron microscopy, and nitrogen adsorption techniques. The results show that PPESK is a promising carbon membrane precursor material that develops a well-defined microporosity after carbonization. The pore structure of carbon membranes derived from PPESK consists of two kinds of pores: ultramicropores centered at 0.56 nm and supermicropores centered at 0.77 nm. Graphitic and turbostratic carbon structures coexist in the as-prepared carbon membranes, whose interlayer d-spacing, microcrystal size La, and stacking height Lc are 0.357, 3.91, and 4.39 nm, respectively. For PPESK, oxidative stabilization prior to carbonization benefits the preparation of carbon membranes with high gas separation performance, shifting the pore size distribution to smaller pore widths and inhibiting the growth of crystallites in the carbon matrix.