
    ScreenTrack: Using a Visual History of a Computer Screen to Retrieve Documents and Web Pages

    Computers are used for various purposes, so frequent context switching is inevitable. In this setting, retrieving the documents, files, and web pages that have been used for a task can be a challenge. While modern applications provide a history of recent documents for users to resume work, this is not sufficient to retrieve all the digital resources relevant to a given primary document. The histories currently available do not take into account the complex dependencies among resources across applications. To address this problem, we tested the idea of using a visual history of a computer screen to retrieve digital resources within a few days of their use through the development of ScreenTrack. ScreenTrack is software that captures screenshots of a computer at regular intervals. It then generates a time-lapse video from the captured screenshots and lets users retrieve a recently opened document or web page from a screenshot after recognizing the resource by its appearance. A controlled user study found that participants were able to retrieve requested information more quickly with ScreenTrack than under the baseline condition with existing tools. A follow-up study showed that the participants used ScreenTrack to retrieve previously used resources and to recover the context for task resumption. Comment: CHI 2020, 10 pages, 7 figures.
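    The abstract describes two steps: capturing screenshots at regular intervals and retrieving a resource from the frame captured at a given moment. A minimal Python sketch of that capture-and-lookup loop follows; the names `capture_history`, `frame_at`, and the injected `grab` callable are all hypothetical (the actual ScreenTrack tool renders a time-lapse video and reopens documents, which is not shown here):

    ```python
    import time
    from typing import Any, Callable, List, Optional, Tuple

    # (capture timestamp, screenshot data) -- the data type depends on the
    # screen-grabbing backend, so it is left generic here.
    Frame = Tuple[float, Any]

    def capture_history(grab: Callable[[], Any], interval_s: float, n_frames: int) -> List[Frame]:
        """Capture n_frames screenshots at a fixed interval, timestamping each one."""
        frames: List[Frame] = []
        for _ in range(n_frames):
            frames.append((time.time(), grab()))
            time.sleep(interval_s)
        return frames

    def frame_at(frames: List[Frame], t: float) -> Optional[Frame]:
        """Return the most recent frame captured at or before time t, if any."""
        candidates = [f for f in frames if f[0] <= t]
        return candidates[-1] if candidates else None
    ```

    In practice `grab` would wrap a screen-capture library; injecting it as a parameter keeps the history logic testable without a display.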

    Inference of development activities from interaction with uninstrumented applications

    Studying developers’ behavior in software development tasks is crucial for designing effective techniques and tools to support developers’ daily work. In modern software development, developers frequently use different applications including IDEs, web browsers, documentation software (such as Office Word, Excel, and PDF applications), and other tools to complete their tasks. This creates significant challenges in collecting and analyzing developers’ behavior data. Researchers usually instrument the software tools to log developers’ behavior for further studies. This is feasible for studies on development activities using specific software tools. However, instrumenting all software tools commonly used in real work settings is difficult and requires significant human effort. Furthermore, the collected behavior data consist of low-level and fine-grained event sequences, which must be abstracted into high-level development activities for further analysis. This abstraction is often performed manually or based on simple heuristics. In this paper, we propose an approach to address these two challenges in collecting and analyzing developers’ behavior data. First, we use our ActivitySpace framework to improve the generalizability of the data collection. ActivitySpace uses operating-system-level instrumentation to track developer interactions with a wide range of applications in real work settings. Second, we use a machine learning approach to reduce the human effort needed to abstract low-level behavior data. Specifically, considering the sequential nature of the interaction data, we propose a Conditional Random Field (CRF) based approach to segment and label the developers’ low-level actions into a set of basic yet meaningful development activities.
To validate the generalizability of the proposed data collection approach, we deployed the ActivitySpace framework at an industry partner’s company and collected one week of real working data from ten professional developers across three actual software projects. The experiment with the collected data confirms that, given initial human-labeled training data, the CRF model can be trained to infer development activities from low-level actions with reasonable accuracy within and across developers and software projects. This suggests that the machine learning approach is promising in reducing the human effort required for behavior data analysis. This work was partially supported by NSFC Program (No. 61602403 and 61572426).
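    The abstract's core idea is that a linear-chain CRF assigns each low-level action an activity label while favoring contiguous segments of the same activity. The decoding side of that idea (Viterbi search over emission and transition scores) can be sketched in plain Python; the activity labels, action names, and hand-set scores below are illustrative assumptions, not values from the paper, and a real CRF would learn these scores from the labeled training data rather than hard-code them:

    ```python
    LABELS = ["coding", "browsing", "documenting"]  # example activity labels (assumed)

    # Emission scores: how compatible each low-level action is with each activity
    # (hand-set for illustration; a trained CRF derives these from features).
    EMIT = {
        "edit_file": {"coding": 2.0, "browsing": 0.1, "documenting": 0.5},
        "open_url":  {"coding": 0.2, "browsing": 2.0, "documenting": 0.3},
        "edit_doc":  {"coding": 0.3, "browsing": 0.2, "documenting": 2.0},
    }

    def trans(prev: str, cur: str) -> float:
        """Transition score: staying in the same activity is favored,
        which encourages segmenting the stream into contiguous activities."""
        return 1.0 if prev == cur else 0.0

    def viterbi(actions):
        """Return the highest-scoring activity label sequence for the actions."""
        # best[label] = (score of best path ending in label, that path)
        best = {y: (EMIT[actions[0]][y], [y]) for y in LABELS}
        for a in actions[1:]:
            nxt = {}
            for y in LABELS:
                score, path = max(
                    ((best[p][0] + trans(p, y) + EMIT[a][y], best[p][1]) for p in LABELS),
                    key=lambda t: t[0],
                )
                nxt[y] = (score, path + [y])
            best = nxt
        return max(best.values(), key=lambda t: t[0])[1]
    ```

    For example, `viterbi(["edit_file", "edit_file", "open_url"])` yields `["coding", "coding", "browsing"]`: the same-label transition bonus keeps the first two actions in one segment, and the emission score flips the label when the action changes character.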

    Comparing Episodic and Semantic Interfaces for Task Boundary Identification

    Multi-tasking is a common activity for computer users. Many recent approaches to supporting a user in multi-tasking require the user to indicate the start and (at least implicitly) end points of tasks manually. Although there has been some work aimed at inferring the boundaries of a user’s tasks, it is not yet robust enough to replace the manual approach. Unfortunately, with the manual approach, a user can sometimes forget to identify a task boundary, leading to erroneous information being associated with a task or appropriate information being missed. These problems degrade the effectiveness of the multi-tasking support. In this thesis, we describe two interfaces we designed to support task boundary identification. One interface stresses the use of episodic memory for recalling the boundary of a task; the other stresses the use of semantic memory. We investigate these interfaces in the context of software development. We report on an exploratory study of the use of these two interfaces by twelve programmers. We found that the programmers determined task boundaries more accurately with the episodic memory-based interface and that this interface was also strongly preferred.
