Building Cognition-Aware Systems: A Mobile Toolkit for Extracting Time-of-Day Fluctuations of Cognitive Performance
People’s alertness fluctuates across the day: at some times we are highly focused, while at others we feel unable to concentrate. So far, extracting such fluctuation patterns has been time- and cost-intensive. Using an in-the-wild approach with 12 participants, we evaluated three cognitive tasks regarding their adequacy as a mobile and economical assessment tool for diurnal changes in mental performance. Participants completed the five-minute test battery on their smartphones multiple times a day over a period of 1–2 weeks. Our results show that people’s circadian rhythm can be obtained under unregulated, non-laboratory conditions. Along with this validation study, we release our test battery as an open-source library for future work towards cognition-aware systems, as well as a tool for psychological and medical research. We discuss ways of integrating the toolkit and possibilities for implicitly measuring performance variations in common applications. The ability to detect systematic patterns in alertness levels will allow cognition-aware systems to provide in-situ assistance in accordance with users’ current cognitive capabilities and limitations.
The Consistency of Crossmodal Synchrony Perception across the Visual, Auditory, and Tactile Senses
Machulla T-K, Di Luca M, Ernst MO. The Consistency of Crossmodal Synchrony Perception across the Visual, Auditory, and Tactile Senses. Journal of Experimental Psychology: Human Perception and Performance. 2016;42(7):1026–1038.
Understanding Perception of Human Augmentation: A Mixed-Method Study
Technologies that help users overcome their limitations and integrate with the human body are often termed “human augmentations”. Such technologies are now available on the consumer market, potentially supporting people in their everyday activities. To date, however, there is no systematic understanding of how human augmentations are perceived. To address this gap and build an understanding of how to design positive experiences with human augmentations, we conducted a mixed-method study of the perception of augmented humans (AHs). We conducted two scenario-based studies: interviews (n = 16) and an online study (n = 506) with participants from four countries. Each scenario features one of three augmentation categories (sensory, motor, or cognitive) and specifies whether the augmented person has a disability. Overall, the results show that both the type of augmentation and the presence of a disability affected user attitudes towards AHs. We derive design dimensions for creating technological augmentations for a diverse and global audience.
Pixel Memories: Do Lifelog Summaries Fail to Enhance Memory but Offer Privacy-Aware Memory Assessments?
We explore the metaphorical “daily memory pill” concept: a brief pictorial lifelog recap aimed at reviving and preserving memories. Leveraging psychological strategies, we explore the potential of such summaries to boost autobiographical memory. We developed an automated lifelogging memory prosthesis and a research protocol (Automated Memory Validation, “AMV”) for conducting privacy-aware, in-situ evaluations. We conducted a real-world lifelogging experiment over one month (n = 11). We also designed a browser, “Pixel Memories”, for browsing one week’s worth of lifelogs. The results suggest that daily timelapse summaries, while not yielding significant memory augmentation effects, also do not lead to memory degradation. Participants’ confidence in recalled content remained unaltered, but the study highlights the challenge of users’ overestimation of memory accuracy. Our core contributions, the AMV protocol and the “Pixel Memories” browser, advance our understanding of memory augmentations and offer a privacy-preserving method for evaluating future ubicomp systems.
Towards Inclusive Conversations in Virtual Reality for People with Visual Impairments
Current mainstream social Virtual Reality (VR) spaces pose barriers to the equal participation of people with visual impairments (PVI) in social interactions. At present, VR is first and primarily a visual medium, with a strong emphasis on the visual design of the VR scene and the available avatars. If social communication cues, such as non-verbal communication, are available at all, they are often not provided in a form accessible to PVI. Yet such cues are essential for successfully participating in social interactions and for experiencing a conversation in VR as realistic. Here, we summarize previous research on specific requirements for social VR spaces to be accessible to PVI. We describe how people with visual impairments recognize and identify potential conversational partners and how non-verbal communication works between PVI and sighted people. Our goal is to provide an overview of valuable features that can be implemented for inclusive conversations in a social VR space.
Visual Impairment Sensitization: Co-Designing a Virtual Reality Tool with Sensitization Instructors
Mixed Reality as Assistive Technology: Guidelines Based on an Assessment of Residual Functional Vision in Persons with Low Vision
Residual visual capabilities and the associated phenomenological experience can differ significantly between persons with similar visual acuity and the same diagnosis. There is substantial variance in the situations and tasks that persons with low vision find challenging. Smartglasses provide the opportunity to present individualized visual feedback targeted to each user’s requirements. Here, we interviewed nine persons with low vision to gain insight into their subjective perceptual experience associated with factors such as illumination, color, contrast, and movement, as well as context factors. Further, we contribute a collection of everyday activities that rely on visual perception, as well as strategies participants employ in their everyday lives. We find that our participants rely on their residual vision as the dominant sense in many different everyday activities. They prefer vision to other modalities whenever they can perceive the information visually, which highlights the need for assistive devices with visual feedback.
