
    ScreenTrack: Using a Visual History of a Computer Screen to Retrieve Documents and Web Pages

    Computers are used for various purposes, so frequent context switching is inevitable. In this setting, retrieving the documents, files, and web pages that have been used for a task can be a challenge. While modern applications provide a history of recent documents for users to resume work, this is not sufficient to retrieve all the digital resources relevant to a given primary document. The histories currently available do not take into account the complex dependencies among resources across applications. To address this problem, we tested the idea of using a visual history of a computer screen to retrieve digital resources within a few days of their use through the development of ScreenTrack. ScreenTrack is software that captures screenshots of a computer at regular intervals. It then generates a time-lapse video from the captured screenshots and lets users retrieve a recently opened document or web page from a screenshot after recognizing the resource by its appearance. A controlled user study found that participants were able to retrieve requested information more quickly with ScreenTrack than under the baseline condition with existing tools. A follow-up study showed that the participants used ScreenTrack to retrieve previously used resources and to recover the context for task resumption. Comment: CHI 2020, 10 pages, 7 figures.
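The capture-and-retrieve loop described above can be sketched as a small data structure: screenshots are recorded at fixed intervals together with the resources open at that moment, and picking a frame from the time-lapse returns those resources. This is an illustrative sketch, not ScreenTrack's implementation; the `Frame` fields and the nearest-frame retrieval rule are assumptions, and the screen capture itself is left out.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Frame:
    """One captured screenshot plus the resources open at capture time."""
    timestamp: float
    image_path: str        # where the screenshot file was written
    open_resources: list   # e.g. document paths and page URLs

@dataclass
class VisualHistory:
    """Append-only screenshot history; frames double as a retrieval index."""
    frames: list = field(default_factory=list)

    def record(self, image_path, open_resources, now=None):
        """Called by a periodic capture timer (capture itself is out of scope)."""
        ts = now if now is not None else time.time()
        self.frames.append(Frame(ts, image_path, list(open_resources)))

    def resources_at(self, timestamp):
        """Return the resources visible in the frame nearest the chosen time,
        i.e. what a user gets back after recognizing a screenshot."""
        if not self.frames:
            return []
        nearest = min(self.frames, key=lambda f: abs(f.timestamp - timestamp))
        return nearest.open_resources
```

A user scrubbing the time-lapse to a moment they recognize corresponds to calling `resources_at` with that moment's timestamp.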

    Why are smartphones disruptive? An empirical study of smartphone use in real-life contexts

    Notifications are one of the core functionalities of smartphones. Previous research suggests they can be a major disruption to the professional and private lives of users. This paper presents evidence from a mixed-methods study using first-person wearable video cameras, comprising 200 h of first-person audio-visual footage and self-confrontation interviews covering 1130 unique smartphone interactions (N = 37 users), to situate and analyse the disruptiveness of notifications in real-world contexts. We show how smartphone interactions are driven by a complex set of routines and habits users develop over time. We furthermore observe that while the duration of interactions varies, the intervals between interactions remain largely invariant across different activity and location contexts, and whether users are alone or in company. Importantly, we find that 89% of smartphone interactions are initiated by users, not by notifications. Overall, this suggests that the disruptiveness of smartphones is rooted in learned user behaviours rather than in the devices themselves.

    Interrupted by Your Pupil: An Interruption Management System Based on Pupil Dilation

    Interruptions are prevalent in everyday life and can be very disruptive. An important factor that affects the level of disruptiveness is the timing of the interruption: interruptions at low-workload moments are known to be less disruptive than interruptions at high-workload moments. In this study, we developed a task-independent interruption management system (IMS) that interrupts users at low-workload moments in order to minimize the disruptiveness of interruptions. The IMS identifies low-workload moments in real time by measuring users' pupil dilation, which is a well-known indicator of workload. Using an experimental setup, we showed that the IMS succeeded in finding the optimal moments for interruptions and marginally improved performance. Because our IMS is task-independent (it does not require a task analysis), it can be broadly applied.
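A minimal sketch of such a defer-to-low-workload policy might look as follows. The smoothing window and the 5%-over-baseline threshold are illustrative assumptions, not the parameters used in the study, and real pupil data would also need luminance correction.

```python
from collections import deque

class PupilIMS:
    """Defer interruptions until the smoothed pupil diameter drops below a
    baseline-relative threshold, a rough proxy for low momentary workload."""

    def __init__(self, baseline_mm, window=5, threshold=1.05):
        self.baseline = baseline_mm   # resting pupil diameter for this user
        self.threshold = threshold    # allow interrupts below 105% of baseline
        self.samples = deque(maxlen=window)

    def add_sample(self, diameter_mm):
        """Feed one eye-tracker reading."""
        self.samples.append(diameter_mm)

    def ok_to_interrupt(self):
        """True only when the recent average indicates low workload."""
        if len(self.samples) < self.samples.maxlen:
            return False              # not enough data yet
        mean = sum(self.samples) / len(self.samples)
        return mean < self.baseline * self.threshold
```

An interruption manager would queue pending notifications and release them only while `ok_to_interrupt()` holds.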

    Diminished Control in Crowdsourcing: An Investigation of Crowdworker Multitasking Behavior

    Obtaining high-quality data from crowds can be difficult if contributors do not give tasks sufficient attention. Attention checks are often used to mitigate this problem, but, because the roots of inattention are poorly understood, checks often compel attentive contributors to complete unnecessary work. We investigated a potential source of inattentiveness during crowdwork: multitasking. We found that workers switched to other tasks every five minutes, on average. There were indications that increasing switch frequency negatively affected performance. To address this, we tested an intervention that encouraged workers to stay focused on our task after multitasking was detected. We found that our intervention reduced the frequency of task-switching. It also improves on existing attention checks because it does not place additional demands on workers who are already focused. Our approach shows that crowds can help to overcome some of the limitations of laboratory studies by affording access to naturalistic multitasking behavior.
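The intervention logic (nudge only workers whose recent switch frequency is excessive, leaving focused workers alone) might be sketched as below. The five-minute window echoes the observed average switch interval, but the exact thresholds are assumptions, not the paper's parameters.

```python
class FocusMonitor:
    """Nudge only workers whose recent task-switching exceeds a threshold,
    leaving already-focused workers undisturbed (unlike a blanket check)."""

    def __init__(self, max_switches=2, window_s=300):
        self.max_switches = max_switches  # switches tolerated per window
        self.window_s = window_s          # sliding window in seconds
        self.switch_times = []

    def on_blur(self, t):
        """Record the time (seconds) at which the task window lost focus."""
        self.switch_times.append(t)

    def should_nudge(self, now):
        """True when switches inside the sliding window exceed the limit."""
        recent = [t for t in self.switch_times if now - t <= self.window_s]
        return len(recent) > self.max_switches
```

In a web task, `on_blur` would be wired to the page's focus-loss events, and `should_nudge` checked when focus returns.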

    Understanding and Developing Models for Detecting and Differentiating Breakpoints during Interactive Tasks

    The ability to detect and differentiate breakpoints during task execution is critical for enabling defer-to-breakpoint policies within interruption management. In this work, we examine the feasibility of building statistical models that can detect and differentiate three granularities (types) of perceptually meaningful breakpoints during task execution, without having to recognize the underlying tasks. We collected ecological samples of task execution data, and asked observers to review the interaction in the collected videos and identify any perceived breakpoints and their type. Statistical methods were applied to learn models that map features of the interaction to each type of breakpoint. Results showed that the models were able to detect and differentiate breakpoints with reasonably high accuracy across tasks. Among many uses, our resulting models can enable interruption management systems to better realize defer-to-breakpoint policies for interactive, free-form tasks.
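As a toy illustration of the feature-to-breakpoint mapping: the paper learns statistical models from observer-labelled videos, whereas the rules, features, and thresholds below are purely hypothetical stand-ins showing how interaction features could be mapped to breakpoint granularities.

```python
def classify_breakpoint(pause_s, app_switched, doc_closed):
    """Map interaction features to a breakpoint granularity.
    Rules and thresholds are illustrative stand-ins for the learned models."""
    if doc_closed and app_switched:
        return "coarse"   # a whole task unit just ended
    if app_switched or pause_s >= 10:
        return "medium"   # transition between subtasks
    if pause_s >= 2:
        return "fine"     # brief lull within a subtask
    return "none"
```

A defer-to-breakpoint policy would then match pending interruptions to the coarsest acceptable granularity before delivering them.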

    Improving Usability of Mobile Applications Through Speculation and Distraction Minimization

    We live in a world where mobile computing systems are increasingly integrated with our day-to-day activities. People use mobile applications virtually everywhere they go, executing them on mobile devices such as smartphones, tablets, and smart watches. People commonly interact with mobile applications while performing other primary tasks such as walking and driving (e.g., using turn-by-turn directions while driving a car). Unfortunately, as an application becomes more mobile, it can experience resource scarcity (e.g., poor wireless connectivity) that is atypical in a traditional desktop environment. When critical resources become scarce, the usability of the mobile application deteriorates significantly. In this dissertation, I create system support that enables users to interact smoothly with mobile applications when wireless network connectivity is poor and when the user's attention is limited. First, I show that speculative execution can mitigate user-perceived delays in application responsiveness caused by high-latency wireless network connectivity. I focus on cloud-based gaming, because the smooth usability of such applications is highly dependent on low latency. User studies have shown that players are sensitive to as little as 60 ms of additional latency and are aggravated at latencies in excess of 100 ms. For cloud-based gaming, which relies on powerful servers to generate high-graphics-quality gaming content, a slow network frustrates the user, who must wait a long time to see input actions reflected in the game. I show that by predicting the user's future gaming inputs and by performing visual misprediction compensation at the client, cloud-based gaming can maintain good usability even with 120 ms of network latency.
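The input-speculation idea can be illustrated with a first-order Markov predictor: guess the player's next input as the most frequent successor of the current one, so the server can render the predicted frame ahead of time and the client can compensate if the guess was wrong. This is a deliberately simple stand-in for the dissertation's predictor, not its actual method.

```python
from collections import Counter, defaultdict

class InputPredictor:
    """First-order Markov predictor: guess the next game input as the most
    frequent successor of the current one."""

    def __init__(self):
        self.successors = defaultdict(Counter)
        self.last = None

    def observe(self, inp):
        """Record an actual input as it arrives from the player."""
        if self.last is not None:
            self.successors[self.last][inp] += 1
        self.last = inp

    def predict(self):
        """Speculate on the next input so a frame can be rendered early."""
        counts = self.successors.get(self.last)
        if not counts:
            return self.last   # fall back to repeating the last input
        return counts.most_common(1)[0][0]
```

When the prediction misses, the speculative frame must be corrected at the client, which is where visual misprediction compensation comes in.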
Next, I show that the usability of mobile applications in an attention-limited environment (i.e., driving a vehicle) can be improved by automatically checking whether interfaces meet best-practice guidelines and by adding attention-aware scheduling of application interactions. When a user is driving, any application that demands too much attention is an unsafe distraction. I first develop a model checker that systematically explores all reachable screens for an application and determines whether the application conforms to best-practice vehicular UI guidelines. I find that even well-known vehicular applications (e.g., Google Maps and TomTom) can often demand too much of the driver's attention. Next, I consider the case where applications run in the background and initiate interactions with the driver. I show that by quantifying the driver's available attention and the attention demand of an interaction, real-time scheduling can be used to prevent attention overload in varying driving conditions.
PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/136989/1/kyminlee_1.pd
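The attention-aware scheduling idea might be sketched as a simple admission test: serve the most urgent background interactions first, and admit each one only while the cumulative attention demand stays within the driver's current budget. The attention units and the greedy policy are illustrative assumptions, not the dissertation's scheduler.

```python
def schedule(interactions, available_attention):
    """Admit background interactions most-urgent-first while their cumulative
    attention demand fits the driver's current budget; defer the rest.
    Each interaction is a (name, demand, urgency) tuple."""
    admitted, deferred, used = [], [], 0.0
    for name, demand, urgency in sorted(interactions, key=lambda i: -i[2]):
        if used + demand <= available_attention:
            admitted.append(name)
            used += demand
        else:
            deferred.append(name)
    return admitted, deferred
```

In heavy traffic the budget shrinks, so more interactions are deferred until driving conditions ease.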

    Data cleaning techniques for software engineering data sets

    Data quality is an important issue which has been addressed and recognised in research communities such as data warehousing, data mining and information systems. It has been agreed that poor data quality will impact the quality of results of analyses and that it will therefore impact on decisions made on the basis of these results. Empirical software engineering has neglected the issue of data quality to some extent. This fact poses the question of how researchers in empirical software engineering can trust their results without addressing the quality of the analysed data. One widely accepted definition for data quality describes it as 'fitness for purpose', and the issue of poor data quality can be addressed either by introducing preventative measures or by applying means to cope with data quality issues. The research presented in this thesis addresses the latter, with a special focus on noise handling. Three noise handling techniques, which utilise decision trees, are proposed for application to software engineering data sets. Each technique represents a noise handling approach: robust filtering, where training and test sets are the same; predictive filtering, where training and test sets are different; and filtering and polish, where noisy instances are corrected. The techniques were first evaluated in two different investigations by applying them to a large real-world software engineering data set. In the first investigation, the techniques' ability to improve predictive accuracy at differing noise levels was tested. All three techniques improved predictive accuracy in comparison to the do-nothing approach. Filtering and polish was the most successful technique in improving predictive accuracy. The second investigation, utilising the large real-world software engineering data set, tested the techniques' ability to identify instances with implausible values. These instances were flagged for the purpose of evaluation before applying the three techniques.
Robust filtering and predictive filtering decreased the number of instances with implausible values, but substantially decreased the size of the data set too. The filtering and polish technique actually increased the number of implausible values, but it did not reduce the size of the data set. Since the data set contained historical software project data, it was not possible to know the real extent of noise detected. This led to the production of simulated software engineering data sets, which were modelled on the real data set used in the previous evaluations to ensure domain-specific characteristics. These simulated versions of the data set were then injected with noise, such that the real extent of the noise was known. After the noise injection, the three noise handling techniques were applied to allow evaluation. This procedure of simulating software engineering data sets combined the incorporation of domain-specific characteristics of the real world with control over the simulated data. This is seen as a special strength of this evaluation approach. The results of the evaluation of the simulation showed that none of the techniques performed well. Robust filtering and filtering and polish performed very poorly, and based on the results of this evaluation they would not be recommended for the task of noise reduction. The predictive filtering technique was the best-performing technique in this evaluation, but it did not perform significantly well either. An exhaustive systematic literature review has been carried out investigating to what extent the empirical software engineering community has considered data quality. The findings showed that the issue of data quality has been largely neglected by the empirical software engineering community. The work in this thesis highlights an important gap in empirical software engineering. It provided clarification and distinctions of the terms noise and outliers.
Noise and outliers overlap, but they are fundamentally different. Since noise and outliers are often treated the same in noise handling techniques, a clarification of the two terms was necessary. To investigate the capabilities of noise handling techniques, a single investigation was deemed insufficient. The reasons for this are that the distinction between noise and outliers is not trivial, and that the investigated noise cleaning techniques are derived from traditional noise handling techniques in which noise and outliers are combined. Therefore three investigations were undertaken to assess the effectiveness of the three presented noise handling techniques. Each investigation should be seen as part of a multi-pronged approach. This thesis also highlights possible shortcomings of current automated noise handling techniques. The poor performance of the three techniques led to the conclusion that noise handling should be integrated into a data cleaning process where the input of domain knowledge and the replicability of the data cleaning process are ensured.
EThOS - Electronic Theses Online Service, United Kingdom
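Two of the three techniques can be sketched generically. `fit` below stands in for the thesis's decision-tree learners, and the tiny majority-vote learner exists only to make the sketch self-contained; predictive filtering (the variant where training and test sets differ, e.g. via cross-validation folds) is omitted for brevity.

```python
from collections import Counter, defaultdict

def majority_fit(rows, labels):
    """Tiny stand-in learner: majority label per value of the first feature.
    The thesis uses decision trees here."""
    by_value = defaultdict(Counter)
    for r, y in zip(rows, labels):
        by_value[r[0]][y] += 1
    table = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
    overall = Counter(labels).most_common(1)[0][0]
    return lambda r: table.get(r[0], overall)

def robust_filter(rows, labels, fit):
    """Robust filtering: train on the full set, then drop every instance
    the resulting model misclassifies (training and test sets are the same)."""
    predict = fit(rows, labels)
    return [(r, y) for r, y in zip(rows, labels) if predict(r) == y]

def filter_and_polish(rows, labels, fit):
    """Filtering and polish: keep every instance, but replace labels the
    model disagrees with by its prediction instead of discarding the row."""
    predict = fit(rows, labels)
    return [(r, y if predict(r) == y else predict(r))
            for r, y in zip(rows, labels)]
```

The contrast matches the evaluation above: filtering shrinks the data set, while polishing preserves its size at the risk of introducing wrong corrections.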