37 research outputs found

    Factors influencing visual attention switch in multi-display user interfaces: a survey

    Multi-display User Interfaces (MDUIs) enable people to take advantage of the different characteristics of different display categories. For example, combining mobile and large displays within the same system enables users to interact with user interface elements locally while simultaneously having a large display space to show data. Although there is a large potential gain in performance and comfort, there is at least one main drawback that can override the benefits of MDUIs: the visual and physical separation between displays requires that users perform visual attention switches between displays. In this paper, we present a survey and analysis of existing data and classifications to identify factors that can affect visual attention switch in MDUIs. Our analysis and taxonomy bring attention to the often ignored implications of visual attention switch and collect existing evidence to facilitate research and implementation of effective MDUIs.

    Full coverage displays for non-immersive applications

    Full Coverage Displays (FCDs), which cover the interior surface of a room with display pixels, can create novel user interfaces that take advantage of natural aspects of human perception and memory that we rely on in our everyday lives. However, past research has generally focused on FCDs for immersive experiences; the required hardware is generally prohibitively expensive for the average potential user; configuration is complicated for developers and end users; and building applications that conform to the room layout is often difficult. The goals of this thesis are: to create an affordable, easy-to-use (for developers and end users) FCD toolkit for non-immersive applications; to establish efficient pointing techniques in FCD environments; and to explore suitable ways to direct attention to out-of-view targets in FCDs. In this thesis I initially present and evaluate my own "ASPECTA Toolkit", which was designed to meet the above requirements. Participants in the main evaluation were generally positive about their experiences, all completing the task in less than three hours. Further evaluation was carried out through interviews with researchers who used ASPECTA in their own work. These revealed similarly positive results, with feedback from users driving improvements to the toolkit. For my exploration into pointing techniques, Mouse and Ray-Cast approaches were chosen as most appropriate for FCDs. An evaluation showed that the Ray-Cast approach was fastest overall, while the mouse-based approach showed a small advantage in the front hemisphere of the room. For attention redirection I implemented and evaluated a set of four visual techniques. The results suggest that techniques which are static and lead all the way to the target may have an advantage, and that the cognitive processing time of a technique is an important consideration. "This work was supported by the EPSRC (grant number EP/L505079/1) and SurfNet (NSERC)." - Acknowledgement
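    The Ray-Cast pointing approach evaluated above can be illustrated with a minimal sketch: a pointing ray is cast from the user's device and intersected with the room's walls to find the targeted surface point. This is an illustrative reconstruction, not code from the ASPECTA Toolkit; the axis-aligned room model and all function names are assumptions.

    ```python
    import math

    def raycast_room(origin, direction, room_size):
        """Return the point where a ray from `origin` along `direction`
        first hits one of the six interior walls of an axis-aligned room
        spanning (0, 0, 0) to room_size."""
        norm = math.sqrt(sum(c * c for c in direction))
        d = [c / norm for c in direction]
        best_t, hit = math.inf, None
        for axis in range(3):
            if abs(d[axis]) < 1e-9:
                continue  # ray runs parallel to this pair of walls
            for wall in (0.0, room_size[axis]):
                t = (wall - origin[axis]) / d[axis]
                if t <= 1e-9:
                    continue  # that wall lies behind the pointer
                p = [origin[a] + t * d[a] for a in range(3)]
                inside = all(
                    0.0 <= p[a] <= room_size[a]
                    for a in range(3) if a != axis
                )
                if inside and t < best_t:
                    best_t, hit = t, p
        return hit

    # Pointing from the room centre straight at the far wall of a 4 x 3 x 2.5 m room:
    print(raycast_room((2.0, 1.5, 1.25), (0.0, 0.0, 1.0), (4.0, 3.0, 2.5)))
    # → [2.0, 1.5, 2.5]
    ```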

    The disentanglement of the neural and experiential complexity of self-generated thoughts: A user's guide to combining experience sampling with neuroimaging data

    Human cognition is not limited to the processing of events in the external environment, and the covert nature of certain aspects of the stream of consciousness (e.g. experiences such as mind-wandering) poses a methodological challenge. Although research has shown that we spend a substantial amount of time focused on thoughts and feelings that are intrinsically generated, evaluating such internal states purely on psychological grounds can be restrictive. In this review of the different methods used to examine patterns of ongoing thought, we emphasise how triangulation between neuroimaging techniques and self-reported information is important for the development of a more empirically grounded account of ongoing thought. Specifically, we show how imaging techniques have provided critical information regarding the presence of covert states and can help in the attempt to identify different aspects of experience.

    The effect of interior bezel presence and width on magnitude judgement

    © The Authors, 2014. This is the authors' version of the work. It is posted here by permission for your personal use. Not for redistribution. First published in print by the Canadian Human-Computer Communications Society, and also in electronic form by ACM: Wallace, J. R., Vogel, D., & Lank, E. (2014). The effect of interior bezel presence and width on magnitude judgement. In Proceedings of Graphics Interface 2014 (pp. 175–182). Montreal, Quebec, Canada: Canadian Information Processing Society. Large displays are often constructed by tiling multiple small displays, creating visual discontinuities from inner bezels that may affect human perception of data. Our work investigates how bezels impact magnitude judgement, a fundamental aspect of perception. We describe two studies that control for bezel presence, bezel width, and user-to-display distance. Our findings yield three implications for the design of tiled displays. Bezels wider than 0.5 cm introduce a 4-7% increase in judgement error from a distance, which we simplify to a 5% rule of thumb when assessing display hardware. Length judgements made at arm's length are most affected by wider bezels and are an important use case to consider. At arm's length, bezel compensation techniques provide only a limited benefit in terms of judgement accuracy.
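    The paper's 5% rule of thumb can be expressed as a small helper for budgeting judgement error when assessing tiled-display hardware. This is an illustrative sketch of the reported finding (a 4-7% error increase for bezels wider than 0.5 cm, simplified to 5%), not code from the study; the function name is an assumption.

    ```python
    def magnitude_error_allowance(bezel_width_cm):
        """Return the extra magnitude-judgement error (as a fraction) to
        budget for when visualisations span interior bezels of the given
        width, per the 5% rule of thumb for viewing from a distance."""
        return 0.05 if bezel_width_cm > 0.5 else 0.0

    print(magnitude_error_allowance(0.3))  # thin bezel: no extra allowance
    print(magnitude_error_allowance(1.2))  # wide bezel: budget ~5% extra error
    ```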

    Interacting "Through the Display"

    The increasing availability of displays at lower cost has led to their proliferation in our everyday lives. Additionally, mobile devices are readily at hand and have been proposed as interaction devices for external screens. However, prior work considered only their input mechanisms, without taking into account three additional factors in environments hosting several displays: first, a connection needs to be established to the desired target display (modality); second, screens in the environment may be re-arranged (flexibility); and third, displays may be out of the user's reach (distance). In our research we aim to overcome the problems resulting from these characteristics. The overall goal is a new interaction model that allows for (1) a non-modal connection mechanism for impromptu use of various displays in the environment, (2) interaction on and across displays in highly flexible environments, and (3) interaction at variable distances. In this work we propose a new interaction model, called through-the-display interaction, which enables users to interact with remote content on their personal device in an absolute and direct fashion. To gain a better understanding of the effects of the additional characteristics, we implemented two prototypes, each of which investigates a different distance to the target display: LucidDisplay allows users to place their mobile device directly on top of a larger external screen, while MobileVue enables users to interact with an external screen at a distance. In each of these prototypes we analyzed the effects on the remaining two criteria, namely the modality of the connection mechanism and the flexibility of the environment. With the findings gained in this initial phase we designed Shoot & Copy, a system that detects screens purely based on their visual content. Users aim their personal device's camera at the target display, which then appears as live video in the viewfinder.
    To select an item, users take a picture, which is analyzed to determine the targeted region. We further extended this approach to multiple displays by using a centralized component serving as a gateway to the display environment. In Tap & Drop we refined this prototype to support real-time feedback. Instead of taking pictures, users can now aim their mobile device at the display and start interacting immediately. In doing so, we broke up the rigid sequential interaction of content selection and content manipulation. Both prototypes allow for (1) connections in a non-modal way (i.e., aim at the display and start interacting with it) from the user's point of view and (2) fully flexible environments (i.e., the mobile device tracks itself with respect to displays in the environment). However, the wide-angle lenses, and thus large fields of view, of current mobile devices still do not allow for variable distances. In Touch Projector, we overcome this limitation by introducing zooming in combination with temporarily freezing the video image. Based on our extensions to the taxonomy of mobile device interaction on external displays, we created a refined model of interacting through the display for mobile use. It enables users to interact impromptu without explicitly establishing a connection to the target display (non-modal). As the mobile device tracks itself with respect to displays in the environment, the model further allows for full flexibility of the environment (i.e., displays can be re-arranged without affecting the interaction). And above all, users can interact with external displays at variable distances, regardless of their actual size, without any loss of accuracy.
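    The Shoot & Copy selection step, mapping a point in the captured camera image to a position on the remote display, can be sketched as follows. For simplicity the detected display is assumed to appear as an axis-aligned rectangle in the viewfinder (a real implementation would need a full projective mapping), and all names are illustrative rather than the prototype's actual API.

    ```python
    def camera_to_display(tap, screen_rect, display_px):
        """Map a tapped viewfinder point to absolute display pixels.

        tap         -- (x, y) in camera-image coordinates
        screen_rect -- (left, top, right, bottom) of the detected display
                       region in the camera image
        display_px  -- (width, height) of the remote display in pixels
        """
        left, top, right, bottom = screen_rect
        u = (tap[0] - left) / (right - left)  # normalised horizontal position
        v = (tap[1] - top) / (bottom - top)   # normalised vertical position
        if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
            return None                       # tap missed the display
        return (round(u * display_px[0]), round(v * display_px[1]))

    # Display occupies pixels 100..500 x 80..380 of the photo; remote screen is 1920x1080:
    print(camera_to_display((300, 230), (100, 80, 500, 380), (1920, 1080)))
    # → (960, 540)
    ```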

    Cross-display attention switching in mobile interaction with large displays

    Mobile devices equipped with features such as cameras, network connectivity and media players are increasingly being used for tasks such as web browsing, document reading and photography. While the portability of mobile devices makes them desirable for pervasive access to information, their small screen real estate often restricts the amount of information that can be displayed and manipulated on them. Large displays, on the other hand, have become commonplace in many outdoor as well as indoor environments. While they provide an efficient way of presenting and disseminating information, they offer little support for digital interactivity or physical accessibility. Researchers argue that mobile phones provide an efficient and portable way of interacting with large displays, and that the latter can overcome the limitations of small mobile screens by providing a larger presentation and interaction space. However, distributing user interface (UI) elements across a mobile device and a large display can cause switches of visual attention, which may affect task performance. This thesis specifically explores how the switching of visual attention across a handheld mobile device and a vertical large display can affect a single user's task performance during mobile interaction with large displays. It introduces a taxonomy of the factors associated with the visual arrangement of Multi-Display User Interfaces (MDUIs) that can influence visual attention switching during interaction with MDUIs. It presents an empirical analysis of the effects of different distributions of input and output across mobile and large displays on the user's task performance, subjective workload and preference in a multiple-widget selection task and in visual search tasks with maps, texts and photos.
    Experimental results show that selecting multiple widgets replicated on both the mobile device and the large display is faster than selecting widgets shown only on the large display, despite the cost of initial attention switching in the former. On the other hand, a hybrid UI configuration in which visual output is distributed across the mobile and large displays is the worst, or equivalent to the worst, configuration in all the visual search tasks. A mobile-device-controlled large display configuration performs best in the map search task and ties for best (with a mobile-only configuration) in the text and photo search tasks.

    Mind-Wandering Experiences in Ageing: Neurocognitive Processes and Other Influencing Factors

    The ability to self-generate thoughts in imagination is a central aspect of the human experience. Mind-wandering episodes are multifaceted and heterogeneous in terms of their content, form (e.g. modality, level of detail), and behavioural outcomes. Older adults' neurocognitive profile shows impairments in functions highly linked to the generation and management of such episodes, namely episodic memory, attentional control, and abilities associated with the recruitment of the default mode network (DMN). Robust findings have documented a decrease in the frequency of mind-wandering with increasing age. However, age-related changes in thought content, and how these relate to the cerebral organisation of the brain, have largely been neglected. This PhD project aimed to: (i) investigate older adults' neurocognitive profile alongside the complexities of mind-wandering, and importantly (ii) explore the impact of moderating factors on thought content as we grow older. Converging behavioural and neuroimaging methods were employed to provide a comprehensive account of self-generated thoughts. The first two chapters combined self-reports with electrophysiological and fMRI connectivity data, and demonstrated associations between changes in the recruitment of the DMN and age-related changes in self-generated thoughts. Subsequent experimental chapters considered the influence of key factors believed to shape the content of thoughts. Examining the influence of culture revealed that native French speakers favoured self-reflection and engaged in more positively oriented thoughts than native English speakers. In addition, the manipulation of task difficulty encouraged verbal rehearsal, and meta-awareness mainly targeted the temporal characteristics of thoughts. Finally, after a 4-week meditation intervention, there was a reduction in both negative and past-oriented thoughts.
    Throughout, behavioural measures demonstrated older adults' bias toward deliberate on-task thoughts, with evidence of a decrease in negatively oriented thoughts, stable rates of positively oriented thoughts, and an increase in visual thoughts and task-related interference. Overall, the systematic use of convergent behavioural and neuroimaging methodology has provided a more in-depth understanding of mind-wandering experiences in ageing, where previously only the frequency of these episodes had been considered.

    Cross-Device Taxonomy: Survey, Opportunities and Challenges of Interactions Spanning Across Multiple Devices

    Designing interfaces or applications that move beyond the bounds of a single device screen enables new ways to engage with digital content. Addressing the opportunities and challenges of interactions with multiple devices in concert remains a continued focus of HCI research. To inform the future research agenda of this field, we contribute an analysis and taxonomy of a corpus of 510 papers in the cross-device computing domain. For both new and experienced researchers in the field we provide: an overview, historic trends and unified terminology of cross-device research; a discussion of major and under-explored application areas; a mapping of enabling technologies; a synthesis of key interaction techniques spanning multiple devices; and a review of common evaluation strategies. We close with a discussion of open issues. Our taxonomy aims to create a unified terminology and common understanding for researchers in order to facilitate and stimulate future cross-device research.

    Wisewrds: Bridge Employment & Intergenerational Knowledge Transfer

    Paid employment following retirement is a growing phenomenon known as bridge employment. With a predicted rise in the aging population and no mandatory retirement age, this form of employment is expected to see an upward trend. This research explores the concept of part-time bridge employment at OCAD University. The university is an age-integrated workplace that includes mature adults (Baby Boomers) and young adults (Millennials). Through a user-centered design approach, potential challenges and opportunities afforded by inter-age knowledge transfer for faculty members (young and mature) and students are examined. Mentorship is used as a strategy for workload reduction at the institution. A web app, Wisewrds, is designed to facilitate part-time bridge employment for mature adults and to support young adults seeking informal mentorship. An online platform, a flexible working environment and support for intergenerational knowledge transfer through a mentoring service are proposed to the University as enabling conditions for part-time bridge employment.