    Visual Search on a Mobile Device While Walking

    Previous studies have examined the effects of walking and user interface elements on mobile task performance using physical target selection tasks. The current study investigated the effect of walking and user interface elements on visual search on a mobile device, isolating the effects on perceptual and cognitive processes. The effects of object size, contrast, and target location were examined while participants walked or stood. A serial visual search task using T and L shapes was conducted on a mobile device, controlling for the involvement of physical target selection. The results showed that walking, larger object size, and target positions in the outer area of the display increased visual search response times. This suggests that walking degrades performance not only on physical tasks but also on cognitive processes while interacting with a mobile user interface. In addition, the results suggest that placing major content and call-to-action items in the inner area of the display is likely to improve task performance on a mobile device.
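
    As a rough illustration of the kind of display this abstract describes, the Python sketch below (not taken from the paper) generates one T-among-L trial with the three factors mentioned: object size, contrast, and an inner versus outer target position. The screen dimensions, inner-area bounds, item count, and dictionary fields are illustrative assumptions.

        import random

        SCREEN_W, SCREEN_H = 360, 640      # assumed portrait phone viewport, in px
        INNER = (90, 160, 270, 480)        # assumed bounding box of the "inner area"

        def sample_position(size_px, inside_inner):
            """Sample an item's top-left corner either inside or outside the inner area."""
            while True:
                x = random.randint(0, SCREEN_W - size_px)
                y = random.randint(0, SCREEN_H - size_px)
                in_inner = (INNER[0] <= x and x + size_px <= INNER[2]
                            and INNER[1] <= y and y + size_px <= INNER[3])
                if in_inner == inside_inner:
                    return x, y

        def make_trial(n_items=12, size_px=24, contrast=0.5, target_inner=True):
            """Build one display: a rotated 'T' target among 'L' distractors."""
            items = []
            for i in range(n_items):
                is_target = (i == 0)
                # The target respects the inner/outer condition; distractors land anywhere.
                x, y = sample_position(size_px,
                                       target_inner if is_target else random.random() < 0.5)
                items.append({"shape": "T" if is_target else "L",
                              "x": x, "y": y,
                              "size": size_px,
                              "contrast": contrast,
                              "rotation": random.choice([0, 90, 180, 270])})
            random.shuffle(items)
            return items

        print(make_trial(target_inner=False)[:2])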

    Mining user activity as a context source for search and retrieval

    In information retrieval it is generally accepted that a better understanding of users' context can help the search process, either at indexing time by including more metadata or at retrieval time by better modelling the user context. In this work we explore how activity recognition from tri-axial accelerometers can be employed to model a user's activity as a means of enabling context-aware information retrieval. We discuss how user activity can be gathered automatically as a context source from a wearable mobile device, and we evaluate the accuracy of our proposed user activity recognition algorithm. Our technique can recognise four kinds of activities, which can be used to model part of an individual's current context. We discuss promising experimental results, possible approaches to improve our algorithms, and the impact of this work on modelling user context toward enhanced search and retrieval.
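
    The abstract does not describe the recognition algorithm itself; the Python sketch below only illustrates one common approach to accelerometer-based activity recognition: windowed feature extraction followed by a nearest-centroid classifier. The window length, feature set, activity labels, and centroid values are all assumptions made for illustration, not the paper's method.

        import numpy as np

        WINDOW = 128  # samples per window (assumed ~2.5 s at ~50 Hz)

        def window_features(acc):
            """acc: (WINDOW, 3) array of x/y/z samples in g -> small feature vector."""
            mag = np.linalg.norm(acc, axis=1)      # movement intensity per sample
            return np.array([
                mag.mean(), mag.std(),             # overall energy and variability
                *acc.mean(axis=0),                 # per-axis means (posture / gravity)
                *acc.std(axis=0),                  # per-axis variability
            ])

        def nearest_centroid(features, centroids):
            """Assign the window to the closest activity centroid."""
            labels = list(centroids)
            dists = [np.linalg.norm(features - centroids[label]) for label in labels]
            return labels[int(np.argmin(dists))]

        # Made-up centroids for four illustrative activities (8 features each).
        centroids = {
            "sitting":  np.array([1.0, 0.02, 0.0, 0.0, 1.0, 0.01, 0.01, 0.01]),
            "standing": np.array([1.0, 0.05, 0.0, 1.0, 0.0, 0.02, 0.02, 0.02]),
            "walking":  np.array([1.1, 0.30, 0.0, 1.0, 0.1, 0.25, 0.30, 0.20]),
            "running":  np.array([1.4, 0.90, 0.0, 1.0, 0.2, 0.70, 0.90, 0.60]),
        }
        window = np.random.normal([0.0, 1.0, 0.1], 0.3, size=(WINDOW, 3))
        print(nearest_centroid(window_features(window), centroids))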

    Finding new music: a diary study of everyday encounters with novel songs

    This paper explores how we, as individuals, purposefully or serendipitously encounter 'new music' (that is, music that we haven't heard before) and relates these behaviours to music information retrieval activities such as music searching and music discovery via recommender systems. Forty-one participants took part in a three-day diary study, in which they recorded all incidents that brought them into contact with new music. The diaries were analysed using a Grounded Theory approach. The results of this analysis are discussed with respect to location, time, and whether the music encounter was actively sought or occurred passively. Based on these results, we outline design implications for music information retrieval software and suggest an extension of 'laid back' searching.

    Investigating Performance and Usage of Input Methods for Soft Keyboard Hotkeys

    Touch-based devices, despite their mainstream availability, do not support a unified and efficient command selection mechanism available on every platform and application. We advocate that hotkeys, conventionally used as a shortcut mechanism on desktop computers, could be generalized as a command selection mechanism for touch-based devices, even for keyboard-less applications. In this paper, we investigate the performance and usage of soft keyboard shortcuts, or hotkeys (abbreviated SoftCuts), through two studies comparing different input methods across sitting, standing, and walking conditions. Our results suggest that SoftCuts are not only appreciated by participants but also support rapid command selection with different devices and hand configurations. We also did not find evidence that walking deters their performance when using the Once input method.
    Comment: 17+2 pages, published at Mobile HCI 202

    Assessing the quality of steady-state visual-evoked potentials for moving humans using a mobile electroencephalogram headset.

    Recent advances in mobile electroencephalogram (EEG) systems, featuring non-prep dry electrodes and wireless telemetry, have enabled and promoted the applications of mobile brain-computer interfaces (BCIs) in our daily life. Since the brain may behave differently while people are actively situated in ecologically valid environments versus highly controlled laboratory environments, it remains unclear how well current laboratory-oriented BCI demonstrations can be translated into operational BCIs for users with naturalistic movements. Understanding the inherent links between natural human behaviors and brain activities is the key to ensuring the applicability and stability of mobile BCIs. This study aims to assess the quality of steady-state visual-evoked potentials (SSVEPs), one of the promising channels for functioning BCI systems, recorded using a mobile EEG system under challenging recording conditions such as walking. To systematically explore the effects of walking locomotion on the SSVEPs, this study instructed subjects to stand or walk on a treadmill running at speeds of 1, 2, and 3 miles per hour (MPH) while concurrently perceiving visual flickers (11 and 12 Hz). Empirical results showed that the SSVEP amplitude tended to deteriorate when subjects switched from standing to walking. Such SSVEP suppression could be attributed to the walking locomotion, leading to distinctly deteriorated SSVEP detectability from standing (84.87 ± 13.55%) to walking (1 MPH: 83.03 ± 13.24%, 2 MPH: 79.47 ± 13.53%, and 3 MPH: 75.26 ± 17.89%). These findings not only demonstrate the applicability and limitations of SSVEPs recorded from freely behaving humans in realistic environments, but also provide useful methods and techniques for boosting the translation of BCI technology from laboratory demonstrations to practical applications.
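
    The abstract does not specify the detection method used in the study; the Python sketch below only illustrates a generic frequency-domain way of choosing between the two flicker rates mentioned (11 and 12 Hz) from a single EEG channel, by comparing FFT power at each candidate frequency and one harmonic. The sampling rate, trial length, and single-channel setup are assumptions for illustration.

        import numpy as np

        FS = 250                 # assumed EEG sampling rate (Hz)
        TARGETS = [11.0, 12.0]   # flicker frequencies reported in the abstract

        def detect_ssvep(eeg, fs=FS, targets=TARGETS, harmonics=2):
            """eeg: 1-D array from one occipital channel. Return the target frequency
            whose power at the fundamental plus harmonics is largest."""
            windowed = eeg * np.hanning(len(eeg))
            spectrum = np.abs(np.fft.rfft(windowed)) ** 2
            freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
            scores = []
            for f in targets:
                score = sum(spectrum[np.argmin(np.abs(freqs - h * f))]
                            for h in range(1, harmonics + 1))
                scores.append(score)
            return targets[int(np.argmax(scores))]

        # Synthetic 4-second trial: a 12 Hz component buried in noise.
        t = np.arange(4 * FS) / FS
        trial = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(t.size)
        print(detect_ssvep(trial))   # expected: 12.0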

    Exploring haptic interfacing with a mobile robot without visual feedback

    Search and rescue scenarios are often complicated by low- or no-visibility conditions. The lack of visual feedback hampers orientation and causes significant stress for human rescue workers. The Guardians project [1] pioneered a group of autonomous mobile robots assisting a human rescue worker operating within close range. Trials were held with firefighters of South Yorkshire Fire and Rescue. It became clear that the subjects were by no means prepared to give up their procedural routines and the feeling of security they provide: they simply ignored instructions that contradicted their routines.

    SeeReader: An (Almost) Eyes-Free Mobile Rich Document Viewer

    Reading documents on mobile devices is challenging. Not only are screens small and difficult to read, but navigating an environment with limited visual attention can be difficult and potentially dangerous. Reading content aloud using text-to-speech (TTS) processing can mitigate these problems, but only for content that does not include rich visual information. In this paper, we introduce a new technique, SeeReader, that combines TTS with automatic content recognition and document presentation control, allowing users to listen to documents while also being notified of important visual content. Together, these services allow users to read rich documents on mobile devices while maintaining awareness of their visual environment.
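
    The following is a conceptual Python sketch, not the SeeReader implementation, of the reading loop the abstract implies: textual blocks are spoken via TTS while visually rich blocks trigger a notification so the user can glance at the screen. The block structure and the speak()/notify() stand-ins are invented for illustration.

        from dataclasses import dataclass

        @dataclass
        class Block:
            kind: str      # "text" or "figure"
            content: str   # text to speak, or a caption describing the visual content

        def speak(text):          # stand-in for a real TTS call on the device
            print(f"[TTS] {text}")

        def notify(caption):      # stand-in for a visual/haptic alert
            print(f"[ALERT] Visual content ahead: {caption}")

        def read_document(blocks):
            for block in blocks:
                if block.kind == "text":
                    speak(block.content)
                else:
                    notify(block.content)   # let the user look at the screen here

        read_document([
            Block("text", "Results are summarised in the next figure."),
            Block("figure", "Figure 2: response times by walking condition."),
            Block("text", "Response times increased while walking."),
        ])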