3,028 research outputs found
Diminished Control in Crowdsourcing: An Investigation of Crowdworker Multitasking Behavior
Obtaining high-quality data from crowds can be difficult if contributors do not give tasks sufficient attention. Attention checks are often used to mitigate this problem, but, because the roots of inattention are poorly understood, checks often compel attentive contributors to complete unnecessary work. We investigated a potential source of inattentiveness during crowdwork: multitasking. We found that workers switched to other tasks every five minutes, on average. There were indications that more frequent switching negatively affected performance. To address this, we tested an intervention that encouraged workers to stay focused on our task after multitasking was detected. We found that our intervention reduced the frequency of task-switching. It also improves on existing attention checks because it does not place additional demands on workers who are already focused. Our approach shows that crowds can help to overcome some of the limitations of laboratory studies by affording access to naturalistic multitasking behavior.
Looking Up Information in Email: Feedback on Visit Durations Discourages Distractions
Data entry often involves looking up information from email. Task switching to email can be disruptive, and people can get distracted and forget to return to their primary task. In this paper, we investigate whether giving people feedback on how long they have been away has any effect on the duration and number of their switches. An online experiment was conducted in which participants had to enter numeric codes into an online spreadsheet. They had to look up these codes in an email sent to their personal email account upon starting the experiment. People who were shown how long they had been away made shorter switches, were faster to complete the task and made fewer data entry errors. This suggests that feedback on switching duration may make people more aware of their switching behaviour, and assist users in maintaining focus on their main task.
Watching movies on Netflix: Investigating the effect of screen size on viewer immersion
Film and television content is moving out of the living room and onto mobile devices - viewers are now watching when and where it suits them, on devices of differing sizes. This freedom is convenient, but could lead to differing experiences across devices. Larger screens are often believed to be favourable, e.g. for watching films or sporting events. This is partially supported in the literature, which shows that larger screens lead to greater presence and more intense physiological responses. However, a more broadly-defined measure of experience, such as that of immersion from computer games research, has not been studied. In this study, 19 participants watched content on three different screens and reported their immersion level via questionnaire. Results showed that the 4.5-inch phone screen elicited lower immersion scores when compared to the 13-inch laptop and 30-inch monitor, but there was no difference when comparing the two larger screens. This suggests that very small screens lead to reduced immersion, but after a certain size the effect is less pronounced.
Home is Where the Lab is: A Comparison of Online and Lab Data From a Time-sensitive Study of Interruption
While experiments have been run online for some time with positive results, there are still outstanding questions about the kinds of tasks that can be successfully deployed to remotely situated online participants. Some tasks, such as menu selection, have worked well, but these do not represent the gamut of tasks that interest HCI researchers. In particular, we wondered whether long-lasting, time-sensitive tasks that require continuous concentration could work successfully online, given the confounding effects that might accompany the online deployment of such a task. We ran an archetypal interruption experiment both online and in the lab to investigate whether studies demonstrating such characteristics might be more vulnerable to a loss of control than the short, time-insensitive studies that are representative of the majority of previous online studies. Statistical comparisons showed no significant differences in performance on a number of dimensions. However, there were issues with data quality that stemmed from participants misunderstanding the task. Our findings suggest that long-lasting experiments using time-sensitive performance measures can be run online, but that care must be taken when introducing participants to experimental procedures.
Assessing the Viability of Online Interruption Studies
Researchers have been collecting data online since the early days of the Internet and, as technology improves, increasing numbers of traditional experiments are being run online. However, there are still questions about the kinds of experiments that work online, particularly experiments with time-sensitive performance measures. We are interested in one time-sensitive measure in particular: the time taken to resume a task following an interruption. We ran participants through an archetypal interruption study online and in the lab. Statistical comparisons showed no significant differences in the time it took to resume following an interruption. However, there were issues with data quality that stemmed from participant confusion about the task. Our findings have implications for experiments that assess time-sensitive performance measures in tasks that require continuous attention.
Batching, Error Checking and Data Collecting: Understanding Data Entry in a Financial Office
Data entry is a core computing activity performed by office workers every day. Prior research on this topic has tended to study data entry in controlled lab environments. In this paper, we interviewed nine financial administrators from two large universities to learn about their practices for conducting data entry work. We found that financial information often has to be retrieved from multiple electronic and paper sources, and involves briefly keeping items in memory when switching between sources. Interviewees reported that they batched many data entry tasks into a single session to complete the work quickly, and mitigated the risk of data entry errors through time-consuming double-checking practices. However, prior lab studies suggest that double-checking is a poor strategy, as it takes time and people are poor at spotting errors. This work has implications for how future data entry research should be conducted.
Shortlinks and tiny keyboards: a systematic exploration of design trade-offs in link shortening services
Link-shortening services save space and make the manual entry of URLs less onerous. Short links are often included on printed materials so that people using mobile devices can quickly enter URLs. Although mobile transcription is a common use-case, link-shortening services generate output that is poorly suited to entry on mobile devices: links often contain numbers and capital letters that require time-consuming mode switches on touch screen keyboards. With the aid of computational modeling, we identified problems with the output of a link-shortening service, bit.ly. Based on the results of this modeling, we hypothesized that longer links that are optimized for input on mobile keyboards would improve link entry speeds compared to shorter links that required keyboard mode switches. We conducted a human performance study that confirmed this hypothesis. Finally, we applied our method to a selection of different non-word mobile data-entry tasks. This work illustrates the need for service design to fit the constraints of the devices people use to consume services.
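To make the modelling idea concrete, below is a minimal keystroke-level-style sketch of how mode switches can dominate link entry time on a touch keyboard. It is not the authors' model: the timing constants, the layer-switch penalty, and both example links are illustrative assumptions introduced here.

# Sketch of a keystroke-level-style estimate of link entry time on a touch keyboard.
# All constants and example links are assumptions for illustration, not values from the paper.

KEY_TIME = 0.40          # assumed seconds per character tap
MODE_SWITCH_TIME = 0.60  # assumed extra seconds to change keyboard layer (shift/numeric)

def estimated_entry_time(link: str) -> float:
    """One tap per character, plus a penalty whenever the character needs a
    different keyboard layer (uppercase or digit) than the previous one."""
    def layer(ch: str) -> str:
        if ch.isdigit():
            return "numeric"
        if ch.isupper():
            return "shift"
        return "base"

    time, previous_layer = 0.0, "base"
    for ch in link:
        current_layer = layer(ch)
        if current_layer != previous_layer:
            time += MODE_SWITCH_TIME
        time += KEY_TIME
        previous_layer = current_layer
    return time

# Hypothetical short link with digits and capitals vs. a longer all-lowercase link.
print(estimated_entry_time("bit.ly/2xY7Qz"))      # 13 taps, 6 layer switches -> 8.8 s
print(estimated_entry_time("bit.ly/orangetree"))  # 17 taps, 0 layer switches -> 6.8 s

Under these assumed costs, the longer lowercase link is quicker to enter than the shorter mixed-character one, which is the trade-off the study examines.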
Research Methods for HCI: Understanding People Using Interactive Technologies
This course will provide an introduction to methods used in Human-Computer Interaction (HCI) research. An equal focus will be given to both the quantitative and qualitative research traditions used to understand people and interactional contexts. We shall discuss these major philosophical traditions along with their contemporary framings (e.g., in-the-wild research and Interaction Science). By the end of the course attendees will have a detailed understanding of how to select and apply methods to address a range of problems that are of concern to contemporary HCI researchers.
Understanding people: A course on qualitative and quantitative HCI research methods
This course will provide an introduction to methods used in Human-Computer Interaction (HCI) research. An equal focus will be given to both the quantitative and qualitative research traditions used to understand people and interactional contexts. We shall discuss these major research traditions along with their contemporary framings (e.g., in-the-wild research and Interaction Science). By the end of the course attendees will have a detailed understanding of how to select and apply methods to address a range of problems that are of concern to contemporary HCI researchers.
Providing Self-Aware Systems with Reflexivity
We propose a new type of self-aware system inspired by ideas from higher-order theories of consciousness. First, we discuss the crucial distinction between introspection and reflexion. Then, we focus on computational reflexion as a mechanism by which a computer program can inspect its own code at every stage of the computation. Finally, we provide a formal definition and a proof-of-concept implementation of computational reflexion, viewed as an enriched form of program interpretation and a way to dynamically "augment" a computational process.
Comment: 12 pages plus bibliography, appendices with code description, code of the proof-of-concept implementation, and examples of execution
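As a loose illustration of the general idea (not the authors' formal construction or their proof-of-concept code), the sketch below uses Python's sys.settrace hook so that a running program can observe each line of its own execution. The traced computation function and the printed report are assumptions made for this example.

# Illustrative sketch: a program observing its own execution line by line
# via Python's tracing hook. Not the paper's definition of computational reflexion.
import sys
import linecache

def reflexive_trace(frame, event, arg):
    """Report each source line as it is executed in traced frames."""
    if event == "line":
        filename = frame.f_code.co_filename
        lineno = frame.f_lineno
        source = linecache.getline(filename, lineno).strip()
        print(f"executing line {lineno}: {source}")
    return reflexive_trace  # keep tracing within this frame

def computation(n: int) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

sys.settrace(reflexive_trace)   # start observing newly created frames
result = computation(3)
sys.settrace(None)              # stop observing
print("result:", result)

Run as a script, this prints every executed source line of computation before the final result, which conveys in miniature how a process might monitor its own unfolding computation.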
