510 research outputs found

    Supporting Multiple Stakeholders in Agile Development

    Agile software development practices require several stakeholders with different kinds of expertise to collaborate while specifying requirements, designing and modeling software, and verifying whether developers have implemented requirements correctly. We studied 112 requirements engineering (RE) tools from academia and the features of 13 actively maintained behavior-driven development (BDD) tools, which support various stakeholders in specifying and verifying application behavior. Overall, we found a growing tool specialization targeted towards specific types of stakeholders. With BDD tools in particular, we found no adequate support for non-technical stakeholders: they are required to use an integrated development environment (IDE) that is not adapted to their expertise. We argue that employing separate tools for requirements specification, modeling, implementation, and verification is counter-productive for agile development. Such an approach makes it difficult to manage the associated artifacts and to support rapid implementation and feedback loops. To avoid dispersing requirements and other software-related artifacts among separate tools, to establish traceability between requirements and the application source code, and to streamline a collaborative software development workflow, we propose to adapt an IDE as an agile development platform. With our approach, we provide in-IDE graphical interfaces that support non-technical stakeholders in creating and maintaining requirements concurrently with the implementation. With these graphical interfaces, we also guide non-technical stakeholders through the object-oriented design process and support them in verifying the modeled behavior. This approach has two advantages: (i) compared with employing separate tools, creating and maintaining requirements directly within a development platform eliminates the need to recover trace links, and (ii) various natively created artifacts can be composed into stakeholder-specific interactive live in-IDE documentation. These advantages have a direct impact on how various stakeholders collaborate with each other and allow for rapid feedback, which is much desired in agile practices. We exemplify our approach using the Glamorous Toolkit IDE; the discussed building blocks can be implemented in any IDE with a sufficiently rich graphical engine and reflective capabilities.
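
    To make the collaboration model concrete, the following sketch shows the kind of executable Given/When/Then specification that BDD tooling revolves around: a scenario written in business language is mapped onto step functions that exercise the implementation. It is a minimal, library-free Python approximation of what dedicated BDD tools (and the in-IDE interfaces proposed above) provide; the banking scenario, step patterns, and DepositAccount class are illustrative assumptions, not artifacts from the paper.

```python
# Minimal sketch of an executable Given/When/Then specification.
# The scenario, step patterns, and domain object are illustrative only.
import re

class DepositAccount:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

STEPS = []  # (compiled pattern, handler) pairs

def step(pattern):
    def register(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return register

@step(r"an account with a balance of (\d+) CHF")
def given_account(ctx, balance):
    ctx["account"] = DepositAccount(int(balance))

@step(r"the customer deposits (\d+) CHF")
def when_deposit(ctx, amount):
    ctx["account"].deposit(int(amount))

@step(r"the balance is (\d+) CHF")
def then_balance(ctx, expected):
    assert ctx["account"].balance == int(expected)

SCENARIO = """
Given an account with a balance of 100 CHF
When the customer deposits 50 CHF
Then the balance is 150 CHF
"""

def run(scenario):
    ctx = {}
    for line in filter(None, map(str.strip, scenario.splitlines())):
        text = line.split(" ", 1)[1]          # drop the Given/When/Then keyword
        for pattern, handler in STEPS:
            match = pattern.fullmatch(text)
            if match:
                handler(ctx, *match.groups())
                break
        else:
            raise ValueError(f"No step definition for: {line}")

if __name__ == "__main__":
    run(SCENARIO)
    print("Scenario passed")
```

    In a setup like the one proposed above, the scenario text would be authored and maintained by non-technical stakeholders through dedicated in-IDE views, while developers keep the step definitions next to the production code.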

    Development of GUI test coverage analysis and enforcement tools

    Integrated Master's thesis. Informatics and Computing Engineering. Faculdade de Engenharia. Universidade do Porto. 200

    Test Cases Evolution of Mobile Applications: Model Driven Approach

    Mobile application developers, with the large freedom given to them, focus on satisfying market requirements and pleasing consumers' desires. They are forced to be creative and productive in a short period of time, and as a result billions of powerful mobile applications are displayed every day. Therefore, every mobile application needs to change continually and evolve incrementally in order to survive and preserve its ranking among the top applications in the market. Mobile app testers hold a heavy responsibility: the intrinsically agile, swift change of mobile apps pushes them to be meticulous, to be aware that things can be different at any time, and to be prepared for unpredicted crashes. Consequently, generating test cases from scratch and selecting each time the overridden or overloaded test cases is a tedious operation. In software testing, the time allocated for testing and correcting defects is a significant part of every software development effort (regularly half the time); this time can be reduced by introducing tools and adopting new testing methods. In the field of mobile development, new concerns should be taken into account; among the most important are the heterogeneity of execution environments and the fragmentation of terminals, which have different impacts on functionality, performance, and connectivity. This project studies the evolution of mobile applications and its impact on the evolution of test cases from their creation until their expiration stage. A detailed case study of a native open-source Android application is provided, describing many aspects of design, development, and testing, in addition to an analysis of the process of mobile app evolution. The project is based on a model-driven engineering approach in which the models are serialized using the standard XMI, and it presents a protocol for the adaptation of test cases under certain restrictions.
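
    As a rough illustration of the model-driven idea described above, the sketch below propagates a change in an application model, serialized as XMI, into a test-case model so that existing test cases can be adapted instead of recreated from scratch. The XMI snippets, attribute names, and the rename-based adaptation rule are assumptions made for the example; the thesis defines its own metamodels and adaptation protocol.

```python
# Sketch: propagating a model change (a renamed widget id) into a test-case model.
# The XMI snippets and attribute names are illustrative assumptions,
# not the metamodel used in the thesis.
import xml.etree.ElementTree as ET

APP_MODEL_V2 = """<xmi:XMI xmlns:xmi="http://www.omg.org/XMI">
  <widget oldId="btn_login" id="btn_sign_in" type="Button"/>
  <widget id="txt_user" type="EditText"/>
</xmi:XMI>"""

TEST_MODEL = """<xmi:XMI xmlns:xmi="http://www.omg.org/XMI">
  <step action="type" target="txt_user" value="alice"/>
  <step action="tap" target="btn_login"/>
</xmi:XMI>"""

def build_rename_map(app_model_xml):
    """Collect widgets whose identifier changed between model versions."""
    root = ET.fromstring(app_model_xml)
    return {w.get("oldId"): w.get("id")
            for w in root.iter("widget") if w.get("oldId")}

def adapt_test_model(test_model_xml, renames):
    """Rewrite stale widget references in the test-case model."""
    root = ET.fromstring(test_model_xml)
    for step in root.iter("step"):
        target = step.get("target")
        if target in renames:
            step.set("target", renames[target])
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    renames = build_rename_map(APP_MODEL_V2)
    print(adapt_test_model(TEST_MODEL, renames))
```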

    Automated blackbox GUI specifications enhancement and test data generation

    Applications with a Graphical User Interface (GUI) front-end are ubiquitous nowadays. While automated model-based approaches have been shown to be effective in testing such applications, most existing techniques produce many infeasible event sequences used as GUI test cases. This happens primarily because the behavioral specifications of the GUI under test are ignored. In this dissertation we present an automated framework that reveals an important set of state-based constraints among GUI events based on infeasible (i.e., unexecutable or partially executable) test cases of a GUI test suite. GUIDiVa, an iterative algorithm at the core of our framework, enumerates all possible constraint violations as potential reasons for test case failure on the failed event of an infeasible test case. It then selects and adds the most promising constraints of each iteration to a final set, based on the Validity Weight of the constraints. The results of empirical studies on both seeded and nine non-trivial open-source study subjects show that our framework is capable of capturing important aspects of GUI behavior in the form of state-based event constraints, while considerably reducing the number of infeasible test cases. The second part of this dissertation deals with the problem of automatic generation of relevant test data for parameterized GUI events (i.e., events associated with widgets that accept user inputs, such as textboxes and textareas). Current techniques either manipulate the source code of the application under test (AUT) to generate the test data, or blindly use a set of random string values. We propose a novel way to generate the test data by exploiting the information provided in the GUI structure to extract a set of key identifiers for each parameterized GUI widget. These identifiers are used to compose appropriate online search phrases and collect relevant test data from the Internet. The results of an empirical study on five GUI-based applications show that the proposed approach is applicable and results in the execution of some hard-to-cover branches in the subject programs. The proposed technique works from a black-box perspective and is entirely independent of GUI modeling and event sequence generation; thus it does not require source code access and offers the possibility of being integrated with existing GUI testing frameworks.
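
    The following sketch illustrates the general shape of an iterative constraint-selection loop like the one GUIDiVa performs: enumerate candidate state-based constraints that could explain why the failed event of each infeasible test case was not executable, then repeatedly add the highest-scoring candidate to the final set. The toy event alphabet, the 'requires' constraint form, and the scoring proxy (support among infeasible runs minus contradictions in feasible runs) are assumptions for illustration; the dissertation defines its own constraint types and Validity Weight.

```python
# Sketch of an iterative constraint-selection loop in the spirit of GUIDiVa.
# Candidate constraints and the validity-weight proxy are assumptions.
from collections import Counter

# Each infeasible test case: (executed prefix, failed event).
INFEASIBLE = [
    (("open_file", "edit"), "save"),
    (("open_file",), "save"),
    (("new_file", "edit"), "print"),
]
FEASIBLE = [("open_file", "edit", "save"), ("new_file", "edit", "save")]

def candidate_constraints(prefix, failed_event):
    """Enumerate simple state-based explanations for the failure."""
    # 'requires(e, f)': event e is only enabled after f has occurred.
    return {("requires", failed_event, e)
            for e in ("open_file", "new_file", "edit", "save", "print")
            if e not in prefix and e != failed_event}

def violated_in_feasible(constraint, runs):
    """Count feasible runs where the event fired without its prerequisite."""
    _, event, prereq = constraint
    return sum(1 for run in runs
               for i, e in enumerate(run)
               if e == event and prereq not in run[:i])

def select_constraints(infeasible, feasible, rounds=2):
    chosen = set()
    support = Counter()
    for prefix, failed in infeasible:
        support.update(candidate_constraints(prefix, failed))
    for _ in range(rounds):
        best = max((c for c in support if c not in chosen),
                   key=lambda c: support[c] - violated_in_feasible(c, feasible),
                   default=None)
        if best is None:
            break
        chosen.add(best)
    return chosen

if __name__ == "__main__":
    for constraint in select_constraints(INFEASIBLE, FEASIBLE):
        print(constraint)
```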

    Automating Software Development for Mobile Computing Platforms

    Mobile devices such as smartphones and tablets have become ubiquitous in today's computing landscape. These devices have ushered in entirely new populations of users, and mobile operating systems are now outpacing more traditional desktop systems in terms of market share. The applications that run on these mobile devices (often referred to as "apps") have become a primary means of computing for millions of users and, as such, have garnered immense developer interest. These apps allow for unique, personal software experiences through touch-based UIs and a complex assortment of sensors. However, designing and implementing high-quality mobile apps can be a difficult process. This is primarily due to challenges unique to mobile development, including change-prone APIs and platform fragmentation, to name a few. In this dissertation we develop techniques that aid developers in overcoming these challenges by automating and improving current software design and testing practices for mobile apps. More specifically, we first introduce a technique, called Gvt, that improves the quality of graphical user interfaces (GUIs) for mobile apps by automatically detecting instances where a GUI was not implemented to its intended specifications. Gvt does this by constructing hierarchical models of mobile GUIs from metadata associated with both graphical mock-ups (i.e., created by designers using photo-editing software) and running instances of the GUI from the corresponding implementation. Second, we develop an approach that completely automates prototyping of GUIs for mobile apps. This approach, called ReDraw, is able to transform an image of a mobile app GUI into runnable code by detecting discrete GUI components using computer vision techniques, classifying these components into proper functional categories (e.g., button, dropdown menu) using a Convolutional Neural Network (CNN), and assembling these components into realistic code. Finally, we design a novel approach for automated testing of mobile apps, called CrashScope, that explores a given Android app using systematic input generation with the intrinsic goal of triggering crashes. The GUI-based input generation engine is driven by a combination of static and dynamic analyses that create a model of an app's GUI, and targets common, empirically derived root causes of crashes in Android apps. We illustrate that the techniques presented in this dissertation represent significant advancements in mobile development processes through a series of empirical investigations, user studies, and industrial case studies that demonstrate the effectiveness of these approaches and the benefit they provide developers.
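
    As an illustration of the classification step attributed to ReDraw, the sketch below defines a small convolutional network that maps cropped GUI-component images to widget categories. The architecture, input size, and category list are assumptions chosen to keep the example short; they are not ReDraw's actual network or label set, and the model here is untrained.

```python
# Sketch: classifying cropped GUI-component images into widget categories,
# in the spirit of ReDraw's CNN step. Architecture and categories are
# illustrative assumptions, not ReDraw's actual network.
import torch
import torch.nn as nn

CATEGORIES = ["Button", "EditText", "ImageView", "TextView", "Spinner"]

class ComponentClassifier(nn.Module):
    def __init__(self, num_classes=len(CATEGORIES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 64x64 input -> two 2x2 poolings -> 16x16 feature maps with 32 channels.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

if __name__ == "__main__":
    model = ComponentClassifier()
    crops = torch.rand(4, 3, 64, 64)          # a batch of component crops
    predictions = model(crops).argmax(dim=1)  # untrained, so labels are arbitrary
    print([CATEGORIES[int(i)] for i in predictions])
```

    In the full pipeline described above, the detected and classified components would then be assembled into a GUI hierarchy and emitted as code.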

    Answering questions about archived, annotated meetings

    Retrieving information from archived meetings is a new domain of information retrieval that has received increasing attention in the past few years. Search in spontaneous spoken conversations is recognized as more difficult than text-based document retrieval because meeting discussions contain two levels of information: the content itself, i.e. what topics are discussed, and the argumentation process, i.e. what conflicts are resolved and what decisions are made. To capture the richness of information in meetings, current research focuses on recording meetings in Smart-Rooms, transcribing meeting discussions into text, and annotating the discussions with higher-level semantic structures to allow for efficient access to the data. However, it is not yet clear what type of user interface is best suited for searching and browsing such archived, annotated meetings. Content-based retrieval with keyword search is too naive and does not take into account the semantic annotations on the data. The objective of this thesis is to assess the feasibility and usefulness of a natural language interface to meeting archives that allows users to ask complex questions about meetings and retrieve episodes of meeting discussions based on semantic annotations. The particular issues that we address are: the need for argumentative annotation to answer questions about meetings; the linguistic and domain-specific natural language understanding techniques required to interpret such questions; and the use of visual overviews of meeting annotations to guide users in formulating questions. To meet these objectives, we have annotated meetings with argumentative structure and built a prototype of a natural language understanding engine that interprets questions based on those annotations. Further, we have performed two sets of user experiments to study what questions users ask when faced with a natural language interface to annotated meeting archives. For this, we used a simulation method called Wizard of Oz to enable users to express questions in their own terms without being influenced by limitations in speech recognition technology. Our experimental results show that it is technically feasible to annotate meetings and implement a deep-linguistic NLU engine for questions about meetings, but in practice users do not consistently take advantage of these features; instead they often search for keywords in meetings. When visual overviews of the available annotations are provided, users refer to those annotations in their questions, but the questions remain simple. Users search with a breadth-first approach, asking questions in sequence instead of a single complex question. We conclude that natural language interfaces to meeting archives are useful, but that more experimental work is needed to find ways to encourage users to take advantage of the expressive power of natural language when asking questions about meetings.
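
    A toy version of answering questions over annotated meeting episodes is sketched below: cue words in the question are mapped to argumentative annotation types, and only episodes carrying those annotations are returned, falling back to a keyword-style search over all episodes when no cue matches. The episode data, annotation labels, and cue lists are invented for the example; the thesis relies on a much deeper linguistic analysis and its own annotation scheme.

```python
# Sketch: routing a user question to annotated meeting episodes by matching
# question cues to argumentative annotation types. All data is illustrative.
EPISODES = [
    {"time": "00:04", "annotation": "issue",    "text": "Remote control battery life is too short."},
    {"time": "00:12", "annotation": "decision", "text": "Use a rechargeable battery in the final design."},
    {"time": "00:19", "annotation": "rejected", "text": "Solar charging was dismissed as too costly."},
]

CUES = {
    "decision": ["decide", "decided", "decision", "agree", "agreed"],
    "issue":    ["problem", "issue", "conflict", "disagree"],
    "rejected": ["reject", "rejected", "dismissed", "alternative"],
}

def interpret(question):
    """Return the annotation types the question most plausibly refers to."""
    q = question.lower()
    types = [label for label, cues in CUES.items() if any(c in q for c in cues)]
    return types or list(CUES)          # fall back to all types (keyword-style search)

def answer(question):
    wanted = interpret(question)
    return [e for e in EPISODES if e["annotation"] in wanted]

if __name__ == "__main__":
    for episode in answer("What did the group decide about the battery?"):
        print(episode["time"], episode["text"])
```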

    Improving the Efficiency of Mobile User Interface Development through Semantic and Data-Driven Analyses

    With millions of mobile applications available from Google Play and Apple's App Store, the smartphone has become a necessity in our lives. People can access a wide variety of services through mobile applications, among which user interfaces (UIs) serve as an important proxy. A well-designed UI makes an application easy, practical, and efficient to use. However, due to the rapid application iteration speed and the shortage of UI designers, developers are required to design UIs and implement them in a short time. As a result, they may be unaware of, or compromise on, important factors related to usability and accessibility while developing the user interfaces of mobile applications. Therefore, efficient and useful tools are needed to enhance the efficiency of user interface development. In this thesis, I propose three techniques to improve the efficiency of designing and developing user interfaces through semantic and data-driven analyses. First, I propose a UI design search engine to help designers and developers quickly create trendy and practical UI designs by exposing them to UI designs from real applications. I collected a large-scale UI design dataset by automatically exploring UIs from top-downloaded Android applications, and designed an image autoencoder-based UI design engine to enable finer-grained UI design search. Second, while studying how real UIs are implemented, I found that existing applications have a severe accessibility issue: image-based buttons often lack labels. This issue hinders blind users from accessing key functionalities on UIs. Because blind users rely on screen readers to read the content of UIs, developers need to set appropriate labels for image-based buttons. Therefore, I propose LabelDroid, which automatically generates labels (i.e., the content description) of image-based buttons while developers implement UIs. Finally, since the above techniques all require view-hierarchy information, which contains the bounds and types of the contained elements, it is essential to generalize them to a broader scope; for example, UIs on design-sharing platforms do not have any metadata about their elements. To this end, I conducted the first large-scale empirical study evaluating existing object detection methods for detecting elements in UIs. By understanding the unique characteristics of UI elements and UIs, I propose a hybrid method that boosts the accuracy and precision of detecting elements on user interfaces. Such a fundamental method can benefit many downstream applications, such as UI design search, UI code generation, and UI testing. In conclusion, I propose three techniques to enhance the efficiency of designing and developing user interfaces for mobile applications through semantic and data-driven analyses. These methods can easily generalize to a broader scope, such as the user interfaces of desktop applications and websites, and I expect the proposed techniques and the resulting understanding of user interfaces to facilitate follow-up research.
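
    To give a feel for the retrieval side of the proposed UI design search engine, the sketch below ranks a small gallery of UI screenshots by cosine similarity between embedding vectors, assuming each screenshot has already been encoded into a fixed-length vector by the encoder half of a trained autoencoder. The gallery entries, vector values, and file names are made up for the example.

```python
# Sketch: the retrieval step of an autoencoder-based UI design search engine.
# Embeddings are assumed to come from a trained encoder; values are made up.
import numpy as np

# screenshot id -> embedding produced by the (assumed) trained encoder
GALLERY = {
    "news_feed.png":  np.array([0.9, 0.1, 0.3, 0.0]),
    "login_form.png": np.array([0.1, 0.8, 0.1, 0.4]),
    "settings.png":   np.array([0.2, 0.2, 0.9, 0.1]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def search(query_embedding, top_k=2):
    """Rank gallery UIs by embedding similarity to the query design."""
    ranked = sorted(GALLERY.items(),
                    key=lambda item: cosine(query_embedding, item[1]),
                    reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    query = np.array([0.15, 0.75, 0.05, 0.5])   # embedding of the query screenshot
    for name, _ in search(query):
        print(name)
```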