20 research outputs found

    Improving program comprehension by answering questions (keynote)


    GUI Testing Using Computer Vision

    Testing a GUI's visual behavior typically requires human testers to interact with the GUI and to observe whether the expected results of interaction are presented. This paper presents a new approach to GUI testing that uses computer vision to help testers automate their tasks. Testers can write a visual test script that uses images to specify which GUI components to interact with and what visual feedback to observe. Testers can also generate visual test scripts by demonstration. By recording both input events and screen images, it is possible to extract the images of components interacted with and the visual feedback seen by the demonstrator, and generate a visual test script automatically. We show that a variety of GUI behavior can be tested using this approach. Also, we show how this approach can facilitate good testing practices such as unit testing, regression testing, and test-driven development.
    Funding: National Science Foundation (U.S.) (Grant No. IIS-0447800); Quanta Computer (Firm) (TParty project).
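    The core operation behind such visual test scripts is locating a target image on the screen. A minimal sketch of exhaustive template matching over a pixel grid, with a visual assertion in the spirit of the paper (function names are illustrative, not the paper's implementation):

    ```python
    def find_template(screen, template):
        """Return (row, col) of the first exact match of `template`
        inside `screen`, or None. Both are 2D lists of pixel values."""
        H, W = len(screen), len(screen[0])
        h, w = len(template), len(template[0])
        for r in range(H - h + 1):
            for c in range(W - w + 1):
                if all(screen[r + i][c + j] == template[i][j]
                       for i in range(h) for j in range(w)):
                    return (r, c)
        return None

    def assert_visible(screen, template):
        """A visual assertion: after interacting with the GUI, check
        that the expected feedback image appears somewhere on screen."""
        assert find_template(screen, template) is not None, "feedback not shown"
    ```

    A real implementation would use fuzzy matching (e.g. normalized cross-correlation) rather than exact pixel equality, since rendering varies across platforms.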

    My IoT Puzzle: Debugging IF-THEN Rules Through the Jigsaw Metaphor

    End users can nowadays define applications in the format of IF-THEN rules to personalize their IoT devices and online services. Along with the possibility to compose such applications, however, comes the need to debug them, e.g., to avoid unpredictable and dangerous behaviors. In this context, different questions are still unexplored: which visual languages are most appropriate for debugging IF-THEN rules? Which information do end users need to understand, identify, and correct errors? To answer these questions, we first conducted a literature analysis by reviewing previous works on end-user debugging, with the aim of extracting design guidelines. Then, we developed My IoT Puzzle, a tool to compose and debug IF-THEN rules based on the Jigsaw metaphor. My IoT Puzzle interactively assists users in the debugging process with different real-time feedback, and it allows the resolution of conflicts by providing textual and graphical explanations. An exploratory study with 6 participants preliminarily confirms the effectiveness of our approach, showing that the usage of the Jigsaw metaphor, along with real-time feedback and explanations, helps users understand and fix conflicts among IF-THEN rules.
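    One kind of conflict that real-time feedback of this sort can surface is two rules that fire on the same trigger but command contradictory actions on the same device. A minimal sketch of that check (the rule representation and names are assumptions, not the tool's actual model):

    ```python
    def find_conflicts(rules):
        """Each rule is a (trigger, device, action) tuple. Flag pairs
        of rules that fire on the same trigger but command different
        actions on the same device."""
        conflicts = []
        for i in range(len(rules)):
            for j in range(i + 1, len(rules)):
                t1, d1, a1 = rules[i]
                t2, d2, a2 = rules[j]
                if t1 == t2 and d1 == d2 and a1 != a2:
                    conflicts.append((rules[i], rules[j]))
        return conflicts
    ```

    A debugging tool would then explain each flagged pair to the user, e.g. textually ("both rules fire at sunset but disagree about the lamp") or graphically, as My IoT Puzzle does with the jigsaw pieces.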

    TurKit: Human Computation Algorithms on Mechanical Turk

    Mechanical Turk (MTurk) provides an on-demand source of human computation. This provides a tremendous opportunity to explore algorithms that incorporate human computation as a function call. However, various systems challenges make this difficult in practice, and most uses of MTurk post large numbers of independent tasks. TurKit is a toolkit for prototyping and exploring algorithmic human computation, while maintaining a straightforward imperative programming style. We present the crash-and-rerun programming model that makes TurKit possible, along with a variety of applications for human computation algorithms. We also present case studies of TurKit used for real experiments across different fields.
    Funding: Xerox Corporation; National Science Foundation (U.S.) (Grant No. IIS-0447800); Quanta Computer; Massachusetts Institute of Technology Center for Collective Intelligence.
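    The idea behind crash-and-rerun is that a script may crash or be restarted at any time and is simply re-executed from the top; expensive or nondeterministic steps (like posting an MTurk task) are wrapped so their recorded results are replayed instead of re-executed. The real TurKit is JavaScript; this Python sketch and its names are illustrative only:

    ```python
    import json
    import os

    class CrashAndRerun:
        """Minimal sketch of the crash-and-rerun model: the result of
        each `once` call is appended to a persistent trace; when the
        same script runs again, recorded results are replayed in order
        instead of re-executing the wrapped step."""
        def __init__(self, path="trace.json"):
            self.path = path
            self.trace = []
            self.pos = 0
            if os.path.exists(path):
                with open(path) as f:
                    self.trace = json.load(f)

        def once(self, fn):
            if self.pos < len(self.trace):   # replay a recorded step
                result = self.trace[self.pos]
            else:                            # execute and record it
                result = fn()
                self.trace.append(result)
                with open(self.path, "w") as f:
                    json.dump(self.trace, f)
            self.pos += 1
            return result
    ```

    With this model an imperative script can safely loop, branch, and crash: each side-effecting step runs at most once across all reruns, which is what lets TurKit treat a human task as an ordinary function call.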

    How Do Web-Active End-User Programmers Forage?

    Web-active end-user programmers spend substantial time and cognitive effort seeking information while debugging web mashups, which are platforms for creating web applications by combining data and functionality from two or more different sources. Debugging on these platforms is challenging, as end-user programmers need to forage within the mashup environment to find bugs and on the web to forage for the solution to those bugs. To understand the foraging behavior of end-user programmers when debugging, we used information foraging theory. Information foraging theory helps understand how users forage for information and has been successfully used to understand and model user behavior when foraging through documents, the web, user interfaces, and programming environments. Through the lens of information foraging theory, we analyzed the data from a controlled lab study of eight web-active end-user programmers. The programmers completed two debugging tasks using the Yahoo! Pipes web mashup environment. On analyzing the data, we identified three types of cues: clear, fuzzy, and elusive. Clear cues helped participants to find and fix bugs with ease, while fuzzy and elusive cues led to useless foraging. We also identified the strategies used by the participants when finding and fixing bugs. Our results give us a better understanding of the programming behavior of web-active end users and can inform researchers and professionals about how to create better support for the debugging process. Further, this study methodology can be adapted by researchers to understand other aspects of programming such as implementing, reusing, and maintaining code.

    Swarm Debugging: the Collective Intelligence on Interactive Debugging

    One of the most important tasks in software maintenance is debugging. To start an interactive debugging session, developers usually set breakpoints in an integrated development environment and navigate through different paths in their debuggers. We started our work by asking what debugging information is useful to share among developers, and we study two pieces of information: breakpoints (and their locations) and sessions (debugging paths). To answer our question, we introduce the Swarm Debugging concept to frame the sharing of debugging information, the Swarm Debugging Infrastructure (SDI) with which practitioners and researchers can collect and share data about developers’ interactive debugging sessions, and the Swarm Debugging Global View (GV) to display debugging paths. Using the SDI, we conducted a large study with professional developers to understand how developers set breakpoints. Using the GV, we also analyzed professional developers in two studies and collected data about their debugging sessions. Our observations and the answers to our research questions suggest that sharing and visualizing debugging data can support debugging activities.
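    To illustrate the kind of data such an infrastructure collects, a minimal sketch of aggregating shared breakpoint events into the most frequently chosen source locations, the sort of summary a shared debugging view could display (the event format and names are assumptions, not SDI's actual schema):

    ```python
    from collections import Counter

    def breakpoint_hotspots(events, top=3):
        """Each event is a (developer, file, line) tuple recording one
        breakpoint set by one developer. Return the `top` most common
        (file, line) locations with their counts, so later developers
        can see where others paused before them."""
        counts = Counter((file, line) for _dev, file, line in events)
        return counts.most_common(top)
    ```

    Aggregating across developers is what turns individually cheap events into collective intelligence: a location where many developers independently set breakpoints is a strong hint about where faults tend to surface.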
