A Study of Snippet Length and Informativeness: Behaviour, Performance and User Experience
The design and presentation of a Search Engine Results Page (SERP) has been subject to much research. With many contemporary aspects of the SERP now under scrutiny, work still remains in investigating more traditional SERP components, such as the result summary. Prior studies have examined a variety of different aspects of result summaries, but in this paper we investigate the influence of result summary length on search behaviour, performance and user experience. To this end, we designed and conducted a within-subjects experiment using the TREC AQUAINT news collection with 53 participants. Using Kullback-Leibler distance as a measure of information gain, we examined result summaries of different lengths and selected four conditions where the change in information gain was the greatest: (i) title only; (ii) title plus one snippet; (iii) title plus two snippets; and (iv) title plus four snippets. Findings show that participants broadly preferred longer result summaries, as they were perceived to be more informative. However, their performance in terms of correctly identifying relevant documents was similar across all four conditions. Furthermore, although participants felt that longer summaries were more informative, empirical observations suggest otherwise: given longer summaries, participants were more likely to click on relevant items, but also more likely to click on non-relevant items. These findings show, first, that longer is not necessarily better, though participants perceived it to be so; and second, that there is a positive relationship between the length and informativeness of summaries and their attractiveness (i.e. clickthrough rates). They highlight tensions between perception and performance that need to be taken into account when designing result summaries.
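The snippet-selection criterion mentioned above (Kullback-Leibler distance as a proxy for information gain) can be sketched in a few lines. This is a minimal illustration rather than the paper's actual implementation; the unigram language models, tokenisation and smoothing scheme are assumptions:

```python
import math
from collections import Counter

def kl_information_gain(summary_tokens, collection_tokens, eps=1e-9):
    """Kullback-Leibler distance between a summary's unigram term
    distribution and the collection's background distribution.
    Higher values suggest the summary carries more information than
    the collection's background language model would predict."""
    p = Counter(summary_tokens)
    q = Counter(collection_tokens)
    p_total = sum(p.values())
    q_total = sum(q.values())
    vocab = len(q) + 1  # crude additive-smoothing adjustment for unseen terms
    score = 0.0
    for term, count in p.items():
        p_t = count / p_total
        q_t = (q.get(term, 0) + eps) / (q_total + eps * vocab)
        score += p_t * math.log(p_t / q_t)
    return score
```

Under this sketch, a summary that repeats only high-frequency collection terms scores near zero, while one introducing terms rare in the collection scores higher, which is the sense in which adding snippets can increase information gain.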
Imagining Artificial Intelligence Applications with People with Visual Disabilities Using Tactile Ideation
There has been a surge in artificial intelligence (AI) technologies co-opted by or designed for people with visual disabilities. Researchers and engineers have pushed technical boundaries in areas such as computer vision, natural language processing, location inference, and wearable computing. But what do people with visual disabilities imagine as their own technological future? To explore this question, we developed and carried out tactile ideation workshops with participants in the UK and India. Our participants generated a large and diverse set of ideas, most focusing on ways to meet needs related to social interaction. In some cases, this was a matter of recognizing people. In other cases, they wanted to be able to participate in social situations without foregrounding their disability. It was striking that this finding was consistent across the UK and India despite substantial cultural and infrastructural differences. In this paper, we describe a new technique for working with people with visual disabilities to imagine new technologies that are tuned to their needs and aspirations. Based on our experience with these workshops, we provide a set of social dimensions to consider in the design of new AI technologies: social participation, social navigation, social maintenance, and social independence. We offer these social dimensions as a starting point to foreground users' social needs and desires as a more deliberate consideration in assistive technology design.
Hacking Blind Navigation
Independent navigation in unfamiliar and complex environments is a major challenge for blind people. This challenge motivates a multi-disciplinary effort in the CHI community aimed at developing assistive technologies to support the orientation and mobility of blind people, spanning related disciplines such as accessible computing, cognitive sciences, computer vision, and ubiquitous computing. This workshop intends to bring these communities together to increase awareness of recent advances in blind navigation assistive technologies, benefit from diverse perspectives and expertise, discuss open research challenges, and explore avenues for multi-disciplinary collaborations. Interactions are fostered through a panel on Open Challenges and Avenues for Interdisciplinary Collaboration, Minute-Madness presentations, and a Hands-On Session where workshop participants can hack (design or prototype) new solutions to tackle open research challenges. An expected outcome is the emergence of new collaborations and research directions that can result in novel assistive technologies to support independent blind navigation.
Disability Design and Innovation in Low Resource Settings: Addressing Inequality through HCI
Approximately 15% of the world's population has a disability, and 80% live in low-resource settings, often in situations of severe social isolation. Technology is often inaccessible or inappropriately designed, and hence unable to fully respond to the needs of people with disabilities living in low-resource settings. A lack of awareness of technology also contributes to limited access. This workshop will be a call to arms for researchers in HCI to engage with people with disabilities in low-resource settings to understand their needs and to design technology that is both accessible and culturally appropriate. We will achieve this through the sharing of research experiences and the exploration of challenges encountered when planning HCI4D studies featuring participants with disabilities. Thanks to the contributions of all attendees, we will build a roadmap to support researchers aiming to leverage post-colonial and participatory approaches for the development of accessible and empowering technology with truly global ambitions.
Effects of valent image-based secondary tasks on verbal working memory
Two experiments examined whether exposure to emotionally valent image-based secondary tasks introduced at different points of a free recall working memory (WM) task impairs memory performance. Images from the International Affective Picture System (IAPS) varied in the degree of negative or positive valence (mild, moderate, strong) and were positioned at low, moderate and high WM load points, with participants rating them based upon perceived valence. As predicted, and based on previous research and theory, the higher the degree of negative (Experiment 1) and positive (Experiment 2) valence and the higher the WM load when a secondary task was introduced, the greater the impairment to recall. Secondary task images with strong negative valence were more disruptive than negative images with lower valence at moderate and high WM load task points involving encoding and/or rehearsal of primary task words (Experiment 1). This was not the case for secondary tasks involving positive images (Experiment 2), although participant valence ratings for positive IAPS images classified as moderate and strong were in fact very similar. Implications are discussed in relation to research and theory on task interruption and attentional narrowing and literature concerning the effects of emotive stimuli on cognition.
What makes re-finding information difficult? A study of email re-finding
Re-finding information that has been seen or accessed before is a task which can be relatively straightforward, but it can often be extremely challenging, time-consuming and frustrating. Little is known, however, about what makes one re-finding task harder or easier than another. We performed a user study to learn about the contextual factors that influence users' perception of task difficulty in the context of re-finding email messages. 21 participants were issued re-finding tasks to perform on their own personal collections. The participants' responses to questions about the tasks, combined with demographic data and collection statistics for the experimental population, provide a rich basis to investigate the variables that can influence the perception of difficulty. A logistic regression model was developed to examine the relationships between variables and determine whether any factors were associated with perceived task difficulty. The model reveals strong relationships between difficulty and the time elapsed since a message was read, remembering when the sought-after email was sent, remembering other recipients of the email, the experience of the user, and the user's filing strategy. We discuss what these findings mean for the design of re-finding interfaces and future re-finding research.
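The modelling step described above (a logistic regression relating contextual factors to perceived task difficulty) can be sketched as follows. The feature names are paraphrased from the abstract, the data is synthetic, and nothing here reproduces the paper's actual model or coefficients:

```python
# Sketch of a logistic regression over contextual re-finding factors.
# Features and effect directions are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
# Hypothetical features: days since the message was read, whether the user
# remembers when it was sent (0/1), remembers other recipients (0/1),
# years of email experience, and whether the user files messages (0/1).
X = np.column_stack([
    rng.integers(0, 365, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
    rng.integers(0, 10, n),
    rng.integers(0, 2, n),
]).astype(float)
# Synthetic label: tasks get harder as time passes, easier with memory cues.
logit = 0.01 * X[:, 0] - 1.0 * X[:, 1] - 0.8 * X[:, 2] - 0.1 * X[:, 3] - 0.5 * X[:, 4]
y = (logit + rng.normal(0, 1, n) > 0).astype(int)  # 1 = perceived difficult

model = LogisticRegression(max_iter=1000).fit(X, y)
# The sign and magnitude of each coefficient indicate how a factor is
# associated with perceived difficulty, which is how such a model is
# typically interpreted in studies like this one.
coefficients = dict(zip(
    ["days_since_read", "remembers_sent_time", "remembers_recipients",
     "experience_years", "uses_filing"],
    model.coef_[0],
))
```

On real study data one would of course also check model fit and significance before interpreting the coefficients.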
Studying How Health Literacy Influences Attention during Online Information Seeking
Health literacy affects how people understand health information and, therefore, should be considered by search engines in health searches. In this work, we analyze how the level of health literacy is related to the eye movements of users searching the web for health information. We performed a user study with 30 participants who were asked to search online in the context of three work task situations defined by the authors. Their eye interactions with the Search Result Pages and the Result Pages were logged using an eye-tracker and later analyzed. When searching online for health information, people with adequate health literacy spend more time and have more fixations on Search Result Pages. On this type of page, they also pay more attention to the results' hyperlinks and snippets, and click on more results. On Result Pages, users with adequate health literacy spend more time analyzing textual content than people with lower health literacy. We found statistical differences in terms of clicks, fixations, and time spent that could be used as a starting point for further research. To the best of our knowledge, this is the first work to use an eye-tracker to explore how users with different levels of health literacy search online for health-related information. As traditional instruments are too intrusive to be used by search engines, an automatic prediction of health literacy would be very useful for this type of system.
Disability-first Dataset Creation: Lessons from Constructing a Dataset for Teachable Object Recognition with Blind and Low Vision Data Collectors
Artificial Intelligence (AI) for accessibility is a rapidly growing area, requiring datasets that are inclusive of the disabled users that assistive technology aims to serve. We offer insights from a multi-disciplinary project that constructed a dataset for teachable object recognition with people who are blind or low vision. Teachable object recognition enables users to teach a model objects that are of interest to them, e.g., their white cane or own sunglasses, by providing example images or videos of objects. In this paper, we make the following contributions: 1) a disability-first procedure to support blind and low vision data collectors to produce good quality data, using video rather than images; 2) a validation and evolution of this procedure through a series of data collection phases; and 3) a set of questions to orient researchers involved in creating datasets toward reflecting on the needs of their participant community.
Beyond “yesterday’s tomorrow”: future-focused mobile interaction design by and for emergent users
Mobile and ubiquitous computing researchers have long envisioned future worlds for users in developed regions. Steered by such visions, they have innovated devices and services exploring the value of alternative propositions with and for individuals, groups and communities. Meanwhile, such radical and long-term explorations are uncommon for what have been termed emergent users; users, that is, for whom advanced technologies are just within grasp. Rather, a driving assumption is that today’s high-end mobile technologies will “trickle down” to these user groups in due course. In this paper, we open the debate about what mobile technologies might be like if emergent users were directly involved in creating their visions for the future 5–10 years from now. To do this, we report on a set of envisioning workshops in India, South Africa and Kenya that provide a roadmap for valued, effective devices and services for these regions in the next decade. © 2016, The Author(s)