
    Route previewing results in altered gaze behaviour, increased self-confidence and improved stepping safety in both young and older adults during adaptive locomotion

    Older adults at risk of falls tend to look away prematurely from targets for safe foot placement in order to view future hazards, a behaviour associated with increased anxiety and stepping inaccuracies. We aimed to determine the effectiveness of route previewing in reducing anxiety and optimising the gaze behaviour and stepping performance of young and older adults. Nine younger and nine older adults completed six walks across three task complexities over two sessions. Each trial used either an isolated stepping target or a target followed by one or two obstacles. Participants began each trial with their eyes closed; on hearing a signal, they either opened their eyes and immediately initiated walking (go trials) or stood previewing the route for 10 s before starting (preview trials). Kinematic data were collected using a Vicon motion analysis system, and gaze behaviour was recorded using a Dikablis eye tracker. On average, both older and younger adults fixated the target for significantly longer during walking when they had previewed the route than when they had not. Self-confidence scores were also significantly higher following preview trials than go trials. Stepping performance significantly improved following route previewing: mediolateral foot-placement variability was reduced in both groups, and anterior/posterior foot-placement error was reduced in older adults only. These findings implicate route previewing as a potential intervention to increase self-confidence and reduce the risk of tripping in older adults.
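
    The two stepping measures reported here reduce to simple statistics over per-trial foot placements. Below is a minimal sketch of how they might be computed, assuming the placement coordinates have already been extracted from the Vicon recordings; the array layout and function name are illustrative, not the study's actual analysis code:

```python
import numpy as np

def stepping_measures(placements, target):
    """Stepping-accuracy measures from repeated trials.

    placements : (n, 2) foot-placement coordinates per trial,
                 columns = (anterior/posterior, medial/lateral), in mm
    target     : (2,) stepping-target coordinates, in mm
    """
    placements = np.asarray(placements, dtype=float)
    # Medial/lateral foot-placement variability: spread of the ML
    # coordinate across trials.
    ml_variability = placements[:, 1].std(ddof=1)
    # Anterior/posterior foot-placement error: mean absolute distance
    # from the target along the direction of travel.
    ap_error = np.abs(placements[:, 0] - target[0]).mean()
    return ml_variability, ap_error
```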

    How to improve learning from video, using an eye tracker

    The initial trigger for this research on learning from video was the availability of log files from users of video material. The video modality is seen as attractive because it is associated with the relaxed mood of watching TV. The experiments in this research aim to gain more insight into the viewing patterns of students watching video. Students received an awareness instruction about possible alternative viewing behaviors to see whether this would enhance their learning effects. We found that:
    - the learning effects of students with a narrow viewing repertoire were smaller than those of students with a broad viewing repertoire or strategic viewers;
    - students with some basic knowledge of the topics covered in the videos benefited most from the use of possible alternative viewing behaviors, and students with low prior knowledge benefited the least;
    - the knowledge gain of students with low prior knowledge disappeared after a few weeks; knowledge construction seems worse when doing two things at the same time;
    - media players could offer more options to help students search for the content they want to view again;
    - there was no correlation between pervasive personality traits and the viewing behavior of students.
    The right use of video in higher education will lead to students and teachers who are more aware of their learning and teaching behavior, to better videos, to enhanced media players, and, finally, to higher learning effects that let users improve their learning from video.
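
    Since the work starts from player log files, the narrow/broad repertoire distinction can be pictured as a small log-classification step. A hedged sketch with an invented event vocabulary and threshold (the abstract does not specify the log format):

```python
from collections import Counter

def classify_repertoire(events, threshold=5):
    """Label a student's viewing behaviour from player log events.

    events : list of event names such as "play", "pause", "seek_back",
             "seek_forward" or "replay" (an invented vocabulary).
    """
    counts = Counter(events)
    # Every action other than plain playback counts as non-linear viewing.
    non_linear = sum(n for name, n in counts.items() if name != "play")
    return "broad" if non_linear >= threshold else "narrow"

# A viewer who pauses and rewinds repeatedly is classified as "broad".
log = ["play", "pause", "seek_back", "play", "pause", "replay",
       "play", "seek_back", "play"]
print(classify_repertoire(log))  # -> broad
```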

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) further weight is given to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
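
    The spoke-shift control condition is a small geometric operation: each rectangle moves ±1 degree of visual angle along the line joining it to fixation. A minimal sketch of that displacement; the viewing distance and pixel density are illustrative assumptions, not values from the paper:

```python
import numpy as np

def shift_along_spoke(pos, fixation, sign, viewing_distance_mm=570.0,
                      px_per_mm=3.0):
    """Move a rectangle +/-1 deg of visual angle along its spoke.

    pos, fixation : (x, y) screen positions in pixels; sign is +1 or -1
    (outward or inward along the spoke from central fixation).
    """
    # One degree of visual angle expressed as an on-screen distance.
    shift_px = viewing_distance_mm * np.tan(np.radians(1.0)) * px_per_mm
    spoke = np.asarray(pos, float) - np.asarray(fixation, float)
    spoke /= np.linalg.norm(spoke)  # unit vector pointing away from fixation
    return np.asarray(pos, float) + sign * shift_px * spoke
```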

    Visualization and Human-Machine Interaction

    The digital age offers many challenges in the field of visualization. Visual imagery has been used effectively to communicate messages through the ages, expressing both abstract and concrete ideas. Today, visualization has ever-expanding applications in science, engineering, education, medicine, entertainment and many other areas. Different areas of research contribute to innovation in the field of interactive visualization, such as data science, visual technology, the Internet of Things and many more. Among them, two areas of renowned importance are Augmented Reality and Visual Analytics. This thesis presents my research in the fields of visualization and human-machine interaction. The purpose of the proposed work is to investigate existing solutions in the area of Augmented Reality (AR) for maintenance. A smaller section of this thesis presents a minor research project on an equally important theme, Visual Analytics. Overall, the main goal is to identify the most important existing problems and then to design and develop innovative solutions to address them. The maintenance application domain was chosen because it is historically one of the first fields of application for Augmented Reality and it presents most of the common and important challenges that arise in AR, as described in chapter 2. Since one of the main problems in AR application deployment is the reconfigurability of the application, a framework has been designed and developed that allows the user to create, deploy and update AR applications in real time. Furthermore, the research focused on the problems related to hands-free interaction, investigating the area of speech-recognition interfaces and designing innovative solutions to address the problems of intuitiveness and robustness of the interface. On the other hand, the area of Visual Analytics has been investigated: among the different areas of research, multidimensional data visualization, similarly to AR, poses specific problems related to the interaction between the user and the machine. An analysis of the existing solutions has been carried out in order to identify their limitations and to point out possible improvements. Since this analysis identifies the scatterplot as a well-established visualization tool worthy of further research, different techniques for adapting it to multidimensional data are analyzed. A multidimensional scatterplot has been designed and developed in order to perform a comparison with another multidimensional visualization tool, ScatterDice. The first chapters of the thesis describe my investigations in the area of Augmented Reality for maintenance. Chapter 1 provides definitions for the most important terms and an introduction to AR. The second chapter focuses on maintenance, explaining the motivations that led to choosing this application domain; moreover, the analysis of open problems and related work is described, along with the methodology adopted to design and develop the proposed solutions. The third chapter illustrates how the adopted methodology has been applied to address the problems described in the previous one. Chapter 4 describes the methodology adopted to carry out the tests and outlines the experimental results, whereas the fifth chapter draws conclusions and points out possible future developments. Chapter 6 describes the analysis and research work performed in the field of Visual Analytics, more specifically on multidimensional data visualizations.

    Overall, this thesis illustrates how the proposed solutions address common problems of visualization and human-machine interaction, such as interface design, robustness of the interface and acceptance of new technology, whereas other problems are related to the specific research domain, such as pose tracking and reconfigurability of the procedure for the AR domain.
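
    As a concrete picture of the Visual Analytics strand: the canonical way to adapt the scatterplot to multidimensional data is the scatterplot matrix, the representation that ScatterDice itself navigates. A generic sketch (not the thesis's implementation) using pandas and matplotlib:

```python
import numpy as np
import pandas as pd
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt

# Synthetic 4-dimensional data standing in for a real dataset.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 4)), columns=list("ABCD"))

# One pairwise scatterplot per dimension pair; the diagonal shows each
# dimension's distribution, so all 2-D projections are visible at once.
scatter_matrix(df, figsize=(6, 6), diagonal="hist")
plt.show()
```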

    Description and application of the correlation between gaze and hand for the different hand events occurring during interaction with tablets

    People’s activities naturally involve the coordination of gaze and hand. Research in Human-Computer Interaction (HCI) endeavours to enable users to exploit this multimodality for enhanced interaction. With the abundance of touch-screen devices, direct manipulation of an interface has become a dominant interaction technique. Although touch-enabled devices are prolific in both public and private spaces, interactions with these devices do not fully utilise the benefits of the correlation between gaze and hand. Touch-enabled devices do not employ the richness of the continuous manual activity above their display surface for interaction, and a lot of information expressed by users through their hand movements is ignored. This thesis aims to investigate the correlation between gaze and hand during natural interaction with touch-enabled devices in order to address these issues. To do so, we set three objectives. Firstly, we seek to describe the correlation between gaze and hand in order to understand how they operate together: what is the spatial and temporal relationship between these modalities when users interact with touch-enabled devices? Secondly, we want to know the role of some of the inherent factors brought by the interaction with touch-enabled devices in the correlation between gaze and hand, because identifying what modulates the correlation is crucial to designing more efficient applications: what are the impacts of individual differences, task characteristics and the features of the on-screen targets? Thirdly, as we want to see whether additional information related to the user can be extracted from the correlation between gaze and hand, we investigate the latter for the detection of users’ cognitive state while they interact with touch-enabled devices: can the correlation reveal the users’ hesitation? To meet these objectives, we devised two data collections for gaze and hand. The first data collection covers manual interaction on-screen; the second focuses instead on manual interaction in the air. We dissect the correlation between gaze and hand using three common hand events users perform while interacting with touch-enabled devices: taps, stationary hand events, and the motion between taps and stationary hand events. We use a tablet as the touch-enabled device because of its medium size and the ease of integrating both eye- and hand-tracking sensors. We study the correlation between gaze and hand for tap events by collecting gaze-estimation data and taps on the tablet in the context of Internet-related tasks, representative of typical activities executed using tablets. The correlation is described in the spatial and temporal dimensions. Individual differences and the effects of task nature and target type are also investigated. To study the correlation between gaze and hand when the hand is stationary, we conducted a data collection in the context of a Memory Game, chosen to generate enough cognitive load during play while requiring the hand to leave the tablet’s surface. We introduce and evaluate three detection algorithms, inspired by eye tracking, based on the analogy between gaze and hand patterns. Afterwards, spatial comparisons between gaze and hand are analysed to describe the correlation. We study the effects of task difficulty and how the participants’ hesitation influences the correlation.

    Since there is no certain way of knowing when a participant hesitates, we approximate hesitation by the failure to match a pair of already-seen tiles. We study the correlation between gaze and hand during hand motion between taps and stationary hand events using the same data-collection context as above. We first align gaze and hand data in time and report the correlation coefficients on both the X and Y axes. After considering the general case, we examine the impact of the different factors involved in the context: participants, task difficulty, and the duration and type of the hand motion. Our results show that the correlation between gaze and hand, throughout the interaction, is stronger in the horizontal dimension of the tablet than in its vertical dimension, and that it varies widely across users, especially spatially. We also confirm that the eyes lead the hand for target acquisition. Moreover, we find that the correlation between gaze and hand when the hand is in the air above the tablet’s surface depends on where users look on the tablet. We also show that the correlation between gaze and hand during stationary hand events can indicate the users’ indecision, and that while the hand is moving, the correlation depends on different factors, such as the difficulty of the task performed on the tablet and the nature of the event before and after the motion.
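
    The per-axis correlation reported above comes down to time-aligning the two streams and computing one coefficient per tablet axis. A minimal sketch, assuming timestamped gaze and hand samples; the resampling rate is an illustrative choice, not the thesis's parameter:

```python
import numpy as np
from scipy.stats import pearsonr

def gaze_hand_correlation(t_gaze, gaze_xy, t_hand, hand_xy, hz=60.0):
    """Pearson correlation between gaze and hand, per tablet axis.

    t_gaze, t_hand   : sample timestamps in seconds (1-D arrays)
    gaze_xy, hand_xy : (n, 2) positions in tablet coordinates
    """
    gaze_xy, hand_xy = np.asarray(gaze_xy), np.asarray(hand_xy)
    # Resample both streams onto a common clock so samples correspond.
    t0, t1 = max(t_gaze[0], t_hand[0]), min(t_gaze[-1], t_hand[-1])
    t = np.arange(t0, t1, 1.0 / hz)
    result = {}
    for axis, name in ((0, "x"), (1, "y")):
        g = np.interp(t, t_gaze, gaze_xy[:, axis])
        h = np.interp(t, t_hand, hand_xy[:, axis])
        result[name], _ = pearsonr(g, h)
    return result  # e.g. {"x": 0.8, "y": 0.5}
```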

    AirConstellations: In-Air Device Formations for Cross-Device Interaction via Multiple Spatially-Aware Armatures

    AirConstellations supports a unique semi-fixed style of cross-device interaction via multiple self-spatially-aware armatures to which users can easily attach (or detach) tablets and other devices. In particular, AirConstellations affords highly flexible and dynamic device formations in which users can bring multiple devices together in the air - with 2-5 armatures poseable in 7DoF within the same workspace - to suit the demands of their current task, social situation, app scenario, or mobility needs. This affords an interaction metaphor where relative orientation, proximity, attaching (or detaching) devices, and continuous movement into and out of ad-hoc ensembles can drive context-sensitive interactions. Yet all devices remain self-stable in useful configurations even when released in mid-air. We explore flexible physical arrangement, feedforward of transition options, and layering of devices in the air across a variety of multi-device app scenarios. These include video conferencing with flexible arrangement of the person-space of multiple remote participants around a shared task-space, layered and tiled device formations with overview+detail and shared-to-personal transitions, and flexible composition of UI panels and tool palettes across devices for productivity applications. A preliminary interview study highlights user reactions to AirConstellations, such as its potential for minimally disruptive device formations, easier physical transitions, and balancing "seeing and being seen" in remote work.
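
    The formation-driven behaviour can be imagined as a small pose-classification step over the armatures' spatial data. A speculative sketch (the paper does not publish such logic); the thresholds, the shared-normal assumption, and the formation labels are all invented for illustration:

```python
import numpy as np

def classify_pair(pos_a, pos_b, facing_normal, near_m=0.6, depth_share=0.7):
    """Label the spatial relationship between two in-air devices.

    pos_a, pos_b  : (3,) device centres in workspace coordinates (metres)
    facing_normal : (3,) unit vector both displays roughly share
    """
    offset = np.asarray(pos_b, float) - np.asarray(pos_a, float)
    distance = np.linalg.norm(offset)
    if distance > near_m:          # too far apart to act as one ensemble
        return "independent"
    # Share of the offset lying along the viewing direction: a large
    # share means one display sits in front of the other.
    along_view = abs(np.dot(offset / distance, facing_normal))
    return "layered" if along_view > depth_share else "tiled"
```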

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, has confronted users with an ever-growing amount of data, with even terabytes of imaging data created within a day. With the possibility of gentler and higher-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed into or embedded in full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh data and large volumetric data containing multiple views, timepoints, and colour channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies where scenery provides tangible benefit in developmental and systems biology: with Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets via eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas for moving towards virtual reality-based laser ablation and perform a user study in order to gain insight into performance, acceptance and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation, and finally we present sciview, an ImageJ2/Fiji plugin that makes the features of scenery available to a wider audience.

    Contents: Abstract; Foreword and Acknowledgements; Overview and Contributions
    Part I, Introduction: 1. Fluorescence Microscopy; 2. Introduction to Visual Processing; 3. A Short Introduction to Cross Reality; 4. Eye Tracking and Gaze-based Interaction
    Part II, VR and AR for Systems Biology: 5. scenery: VR/AR for Systems Biology; 6. Rendering; 7. Input Handling and Integration of External Hardware; 8. Distributed Rendering; 9. Miscellaneous Subsystems; 10. Future Development Directions
    Part III, Case Studies: 11. Bionic Tracking: Using Eye Tracking for Cell Tracking; 12. Towards Interactive Virtual Reality Laser Ablation; 13. Rendering the Adaptive Particle Representation; 14. sciview: Integrating scenery into ImageJ2 & Fiji
    Part IV, Conclusion: 15. Conclusions and Outlook
    Backmatter and appendices: A. Questionnaire for VR Ablation User Study; B. Full Correlations in VR Ablation Questionnaire; C. Questionnaire for Bionic Tracking User Study; List of Tables; List of Figures; Bibliography; Selbstständigkeitserklärung (declaration of authorship)
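
    The Bionic Tracking idea can be pictured independently of scenery's own API: cast the gaze ray from the headset into the volume and snap each timepoint's sample to the brightest voxel along it. A schematic sketch in Python with invented parameters; it is not scenery code, which targets the Java VM:

```python
import numpy as np

def track_point(volume, gaze_origin, gaze_dir, step=1.0, max_steps=500):
    """Brightest voxel along a gaze ray through a 3-D intensity volume.

    volume      : 3-D array indexed (z, y, x)
    gaze_origin : (3,) ray origin in voxel coordinates
    gaze_dir    : (3,) unit direction of the gaze ray
    """
    best, best_val = None, -np.inf
    pos = np.asarray(gaze_origin, dtype=float)
    gaze_dir = np.asarray(gaze_dir, dtype=float)
    for _ in range(max_steps):
        idx = tuple(np.round(pos).astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, volume.shape)):
            break  # the ray has left the volume
        if volume[idx] > best_val:
            best, best_val = idx, volume[idx]
        pos += step * gaze_dir
    return best  # candidate cell position for this timepoint
```

    Linking such per-timepoint candidates over time would then yield a cell track; a full system would add smoothing and user confirmation that this sketch omits.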