
    Factors influencing visual attention switch in multi-display user interfaces: a survey

    Multi-display User Interfaces (MDUIs) enable people to take advantage of the distinct characteristics of different display categories. For example, combining mobile and large displays within the same system lets users interact with user interface elements locally while simultaneously having a large display space on which to show data. Although there is a large potential gain in performance and comfort, there is at least one main drawback that can override the benefits of MDUIs: the visual and physical separation between displays requires users to perform visual attention switches between displays. In this paper, we present a survey and analysis of existing data and classifications to identify factors that can affect visual attention switch in MDUIs. Our analysis and taxonomy bring attention to the often ignored implications of visual attention switch and collect existing evidence to facilitate research and implementation of effective MDUIs.

    Cross-device gaze-supported point-to-point content transfer

    Within a pervasive computing environment, we see content on shared displays that we wish to acquire and use in a specific way, i.e., with an application on a personal device, transferring it from point to point. The eyes as input can indicate intention to interact with a service, providing implicit pointing as a result. In this paper we investigate the use of gaze and manual input for the positioning of gaze-acquired content on personal devices. We evaluate two main techniques: (1) Gaze Positioning, where content is transferred using gaze, with manual input to confirm actions; and (2) Manual Positioning, where content is selected with gaze but final positioning is performed by manual input, involving a switch of modalities from gaze to manual input. A first user study compares these techniques applied to direct and indirect manual input configurations: a tablet with touch input and a laptop with mouse input. A second study evaluated our techniques in an application scenario involving distractor targets. Our overall results showed general acceptance and understanding of all conditions, although there were clear individual user preferences dependent on familiarity with, and preference toward, gaze, touch, or mouse input.

    Expanding the bounds of seated virtual workspaces

    Mixed Reality (MR), Augmented Reality (AR) and Virtual Reality (VR) headsets can improve upon existing physical multi-display environments by rendering large, ergonomic virtual display spaces whenever and wherever they are needed. However, given the physical and ergonomic limitations of neck movement, users may need assistance to view these display spaces comfortably. Through two studies, we developed new ways of minimising the physical effort and discomfort of viewing such display spaces. We first explored how the mapping between gaze angle and display position could be manipulated, helping users view wider display spaces than currently possible within an acceptable and comfortable range of neck movement. We then compared our implicit control of display position based on head orientation against explicit user control, finding significant benefits in terms of user preference, workload and comfort for implicit control. Our novel techniques create new opportunities for productive work by leveraging MR headsets to create interactive wide virtual workspaces with improved comfort and usability. These workspaces are flexible and can be used on-the-go, e.g., to improve remote working or make better use of commuter journeys.
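The manipulated mapping between gaze angle and display position described above can be pictured as a simple amplification function. The sketch below is an illustrative assumption only (a constant gain with clamping; the function name, gain value and workspace width are hypothetical), not the mapping evaluated in the paper:

```python
def display_yaw(head_yaw_deg, gain=2.0, workspace_width_deg=120.0):
    """Map a physical head yaw (degrees) to a yaw in the virtual workspace.

    With gain=2.0, turning the head 30 degrees pans the view 60 degrees,
    so a 120-degree-wide virtual display space is reachable within a
    comfortable +/-30-degree range of neck movement.
    """
    amplified = head_yaw_deg * gain
    half = workspace_width_deg / 2
    # Clamp so the view never pans past the edges of the workspace.
    return max(-half, min(half, amplified))
```

Under this kind of mapping, implicit control would continuously update the displayed region from head orientation, whereas explicit control would let the user reposition the workspace on demand.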

    The ASPECTA toolkit: affordable Full Coverage Displays

    Full Coverage Displays (FCDs) cover the interior surface of an entire room with pixels. FCDs make possible many new kinds of immersive display experiences - but current technology for building FCDs is expensive and complex, and software support for developing full-coverage applications is limited. To address these problems, we introduce ASPECTA, a hardware configuration and software toolkit that provide a low-cost and easy-to-use solution for creating full coverage systems. We outline ASPECTA's (minimal) hardware requirements and describe the toolkit's architecture, development API, server implementation, and configuration tool; we also provide a full example of how the toolkit can be used. We performed two evaluations of the toolkit: a case study of a research system built with ASPECTA, and a laboratory study that tested the effectiveness of the API. Our evaluations, as well as multiple examples of ASPECTA in use, show how ASPECTA can simplify configuration and development while dramatically reducing the cost for creators of applications that take advantage of full-coverage displays.

    Cross-display attention switching in mobile interaction with large displays

    Mobile devices equipped with features such as cameras, network connectivity and media players are increasingly being used for different tasks such as web browsing, document reading and photography. While the portability of mobile devices makes them desirable for pervasive access to information, their small screen real estate often restricts the amount of information that can be displayed and manipulated on them. Large displays, on the other hand, have become commonplace in many outdoor as well as indoor environments. While they provide an efficient way of presenting and disseminating information, they offer little support for digital interactivity or physical accessibility. Researchers argue that mobile phones provide an efficient and portable way of interacting with large displays, and that the latter can overcome the limitations of the small screens of mobile devices by providing a larger presentation and interaction space. However, distributing user interface (UI) elements across a mobile device and a large display can cause switching of visual attention, and that may affect task performance. This thesis specifically explores how the switching of visual attention across a handheld mobile device and a vertical large display can affect a single user's task performance during mobile interaction with large displays. It introduces a taxonomy based on the factors associated with the visual arrangement of Multi Display User Interfaces (MDUIs) that can influence visual attention switching during interaction with MDUIs. It presents an empirical analysis of the effects of different distributions of input and output across mobile and large displays on the user's task performance, subjective workload and preference in a multiple-widget selection task, and in visual search tasks with maps, texts and photos.
Experimental results show that selection of multiple widgets replicated on the mobile device as well as on the large display is faster than selection of widgets shown only on the large display, despite the cost of initial attention switching in the former. On the other hand, a hybrid UI configuration in which the visual output is distributed across the mobile and large displays is the worst, or equivalent to the worst, configuration in all the visual search tasks. A mobile device-controlled large display configuration performs best in the map search task and equal to best (i.e., tied with a mobile-only configuration) in the text- and photo-search tasks.

    Variant signal peptides of vaccine antigen, FHbp, impair processing affecting surface localization and antibody-mediated killing in most meningococcal isolates

    Copyright © 2019 da Silva, Karlyshev, Oldfield, Wooldridge, Bayliss, Ryan and Griffin. The meningococcal lipoprotein, Factor H binding protein (FHbp), is the sole antigen of the Trumenba vaccine (Pfizer) and one of four antigens of the Bexsero vaccine (GSK) targeting Neisseria meningitidis serogroup B isolates. Lipidation of FHbp is assumed to occur for all isolates. We show that the majority of a collection of United Kingdom isolates (1742/1895) carry non-synonymous single nucleotide polymorphisms (SNPs) in the signal peptide (SP) of FHbp. A single SNP, common to all, alters a polar amino acid and abolishes processing: lipidation and SP cleavage. Whilst some of the FHbp precursor is retained in the cytoplasm due to reduced binding to SecA, remarkably, some is translocated and further surface-localized by Slam. Thus we show that Slam is not lipoprotein-specific. In a panel of isolates tested, the overall reduced surface localization of the precursor FHbp, compared to isolates with an intact SP, corresponded with decreased susceptibility to antibody-mediated killing. Our findings shed new light on the canonical pathway for lipoprotein processing and translocation, of direct relevance for lipoprotein-based vaccines in development and in particular for Trumenba.

    Full coverage displays for non-immersive applications

    Full Coverage Displays (FCDs), which cover the interior surface of a room with display pixels, can create novel user interfaces that take advantage of natural aspects of human perception and memory which we make use of in our everyday lives. However, past research has generally focused on FCDs for immersive experiences; the required hardware is generally prohibitively expensive for the average potential user, configuration is complicated for developers and end users, and building applications which conform to the room layout is often difficult. The goals of this thesis are: to create an affordable, easy-to-use (for developers and end users) FCD toolkit for non-immersive applications; to establish efficient pointing techniques in FCD environments; and to explore suitable ways to direct attention to out-of-view targets in FCDs. In this thesis I initially present and evaluate my own "ASPECTA Toolkit", which was designed to meet the above requirements. Users during the main evaluation were generally positive about their experiences, all completing the task in less than three hours. Further evaluation was carried out through interviews with researchers who used ASPECTA in their own work. These revealed similarly positive results, with feedback from users driving improvements to the toolkit. For my exploration of pointing techniques, Mouse and Ray-Cast approaches were chosen as most appropriate for FCDs. An evaluation showed that the Ray-Cast approach was fastest overall, while the mouse-based approach showed a small advantage in the front hemisphere of the room. For attention redirection I implemented and evaluated a set of four visual techniques. The results suggest that techniques which are static and lead all the way to the target may have an advantage, and that the cognitive processing time of a technique is an important consideration. Acknowledgement: "This work was supported by the EPSRC (grant number EP/L505079/1) and SurfNet (NSERC)."
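Ray-Cast pointing in an FCD reduces to intersecting a pointing ray with the room's surfaces. The sketch below illustrates that geometry under stated assumptions (a cubic room centred at the origin; the function name and room model are hypothetical, not ASPECTA's actual API):

```python
def raycast_to_wall(origin, direction, half_extent=2.0):
    """Return the point where a pointing ray first hits a wall of a cubic
    room centred at the origin (walls at +/-half_extent on each axis),
    or None if the ray direction is degenerate."""
    best_t = None
    best_hit = None
    for axis in range(3):
        d = direction[axis]
        if d == 0:
            continue  # ray is parallel to this pair of walls
        for plane in (-half_extent, half_extent):
            t = (plane - origin[axis]) / d
            if t <= 1e-9:
                continue  # wall plane is behind the pointer
            hit = [origin[i] + t * direction[i] for i in range(3)]
            # Keep the nearest hit that lies on the room's surface.
            if all(abs(hit[i]) <= half_extent + 1e-9 for i in range(3)):
                if best_t is None or t < best_t:
                    best_t, best_hit = t, hit
    return tuple(best_hit) if best_hit is not None else None
```

A real implementation would additionally map the hit point to pixel coordinates on the projector or display segment covering that wall region.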

    Understanding Mode and Modality Transfer in Unistroke Gesture Input

    Unistroke gestures are an attractive input method with an extensive research history, but one challenge with their usage is that the gestures are not always self-revealing. To help users gain expertise with these gestures, interaction designers often deploy a guided novice mode -- where users can rely on recognizing visual UI elements to perform a gestural command. Once a user knows the gesture and associated command, they can perform it without guidance, thus relying on recall. The primary aim of my thesis is to obtain a comprehensive understanding of why, when, and how users transfer from guided modes or modalities to potentially more efficient, or novel, methods of interaction -- through symbolic-abstract unistroke gestures. The goal of my work is not only to study user behaviour from novice to more efficient interaction mechanisms, but also to expand the concept of intermodal transfer to different contexts. We garner this understanding by empirically evaluating three different use cases of mode and/or modality transitions. Leveraging marking menus, the first study investigates whether designers should force expertise transfer by penalizing use of the guided mode, in an effort to encourage use of the recall mode. Second, we investigate how well users can transfer skills between modalities, particularly when it is impractical to present guidance in the target or recall modality. Lastly, we assess how well users' pre-existing spatial knowledge of an input method (the QWERTY keyboard layout) transfers to performance in a new modality. Applying lessons from these three assessments, we segment intermodal transfer into three possible characterizations -- beyond the traditional novice-to-expert contextualization. This is followed by a series of implications and potential areas of future exploration arising from our work.

    Cross-Device Taxonomy: Survey, Opportunities and Challenges of Interactions Spanning Across Multiple Devices

    Designing interfaces or applications that move beyond the bounds of a single device screen enables new ways to engage with digital content. Research addressing the opportunities and challenges of interacting with multiple devices in concert remains a continued focus in HCI. To inform the future research agenda of this field, we contribute an analysis and taxonomy of a corpus of 510 papers in the cross-device computing domain. For both new and experienced researchers in the field we provide: an overview, historic trends and unified terminology of cross-device research; discussion of major and under-explored application areas; a mapping of enabling technologies; a synthesis of key interaction techniques spanning multiple devices; and a review of common evaluation strategies. We close with a discussion of open issues. Our taxonomy aims to create a unified terminology and common understanding for researchers in order to facilitate and stimulate future cross-device research.