9 research outputs found

    Explorative Study on Asymmetric Sketch Interactions for Object Retrieval in Virtual Reality

    Drawing tools for Virtual Reality (VR) enable users to model 3D designs from within the virtual environment itself. These tools employ sketching and sculpting techniques known from desktop-based interfaces and apply them to hand-based controller interaction. While these techniques allow for mid-air sketching of basic shapes, it remains difficult for users to create detailed and comprehensive 3D models. Our work focuses on supporting the user in designing the virtual environment around them by enhancing sketch-based interfaces with a supporting system for interactive model retrieval. An immersed user can query a database of detailed 3D models and place them within the virtual environment through sketching. To understand supportive sketching within a virtual environment, we conducted an explorative comparison of asymmetric methods of sketch interaction, i.e., 3D mid-air sketching, 2D sketching on a virtual tablet, 2D sketching on a fixed virtual whiteboard, and 2D sketching on a real tablet. Our work shows that different patterns emerge when users interact with 3D sketches rather than 2D sketches to compensate for different results from the retrieval system. In particular, users adopt different strategies when drawing on canvases of different sizes or when using a physical device instead of a virtual canvas. While we pose our work as a retrieval problem for 3D models of chairs, our results can be extrapolated to other sketching tasks for virtual environments.
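    To make the retrieval step more concrete, below is a minimal sketch of how a sketch query might be matched against a model database via embedding nearest-neighbour search. The encoder, the embedding size, and the database contents are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical setup: each database model has a precomputed feature vector;
# the user's sketch is encoded into the same space (assumed, for illustration).
EMBED_DIM = 128
rng = np.random.default_rng(0)
model_db = {f"chair_{i:03d}": rng.normal(size=EMBED_DIM) for i in range(500)}

def encode_sketch(sketch_strokes: np.ndarray) -> np.ndarray:
    """Placeholder encoder: a real system would use a trained network that
    maps 2D/3D stroke data into the shared embedding space."""
    flat = np.resize(sketch_strokes.flatten(), EMBED_DIM)
    return flat / (np.linalg.norm(flat) + 1e-9)

def retrieve(sketch_strokes: np.ndarray, k: int = 5) -> list[str]:
    """Return the k database models most similar to the sketch (cosine similarity)."""
    query = encode_sketch(sketch_strokes)
    names = list(model_db)
    feats = np.stack([model_db[n] for n in names])
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    best = np.argsort(feats @ query)[::-1][:k]
    return [names[i] for i in best]

# Example: a fake mid-air sketch as a sequence of 3D stroke points.
print(retrieve(rng.normal(size=(200, 3))))
```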

    Investigating Precise Control in Spatial Interactions: Proxemics, Kinesthetics, and Analytics

    Augmented and Virtual Reality (AR/VR) technologies have reshaped the way in which we perceive the virtual world. In fact, recent technological advancements provide experiences that make the physical and virtual worlds almost indistinguishable. However, the physical world affords subtle sensorimotor cues which we subconsciously utilize to perform simple and complex tasks in our daily lives. The lack of this affordance in existing AR/VR systems hinders their mainstream adoption over conventional 2D user interfaces. As a case in point, existing spatial user interfaces (SUI) lack the intuition to perform tasks in a manner that is perceptually familiar to the physical world. The broader goal of this dissertation lies in facilitating an intuitive spatial manipulation experience, specifically for motor control. We begin by investigating the role of proximity to an action on precise motor control in spatial tasks. We do so by introducing a new SUI called the Clock-Maker's Work-Space (CMWS), with the goal of enabling precise actions close to the body, akin to the physical world. On evaluating our setup in comparison to conventional mixed-reality interfaces, we find CMWS to afford precise actions for bi-manual spatial tasks. We further compare our SUI with a physical manipulation task and observe similarities in user behavior across both tasks. We subsequently narrow our focus to studying precise spatial rotation. We utilize haptics, specifically force feedback (kinesthetics), to augment fine motor control in spatial rotational tasks. By designing three kinesthetic rotation metaphors, we evaluate precise rotational control with and without haptic feedback for 3D shape manipulation. Our results show that haptics-based rotation algorithms allow for precise motor control in 3D space and also help reduce hand fatigue. In order to understand precise control in its truest form, we investigate orthopedic surgery training from the perspective of analyzing bone-drilling tasks. We designed a hybrid physical-virtual simulator for bone-drilling training and collected physical data for analyzing precise drilling actions. We also developed a Laplacian-based performance metric to help expert surgeons evaluate residents' training progress across successive years of orthopedic residency.
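    The abstract does not specify the Laplacian-based metric, so the sketch below shows one plausible reading under stated assumptions: applying a discrete Laplacian (second difference) to a recorded drill-tip trajectory and scoring smoothness, where lower values indicate steadier drilling. The sampling rate and scoring formula are assumptions for illustration only, not the dissertation's metric.

```python
import numpy as np

def laplacian_smoothness(trajectory: np.ndarray, dt: float = 0.01) -> float:
    """Score a drill-tip trajectory (N x 3 positions sampled every dt seconds)
    by the mean squared discrete Laplacian (second difference) of the path.
    Lower scores mean smoother, steadier drilling (assumed formulation)."""
    # Discrete Laplacian along time: x[i-1] - 2*x[i] + x[i+1]
    lap = trajectory[:-2] - 2.0 * trajectory[1:-1] + trajectory[2:]
    accel = lap / dt**2                      # approximate acceleration
    return float(np.mean(np.sum(accel**2, axis=1)))

# Example: compare a jittery trajectory with a smoother one.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 200)
ideal = np.stack([t, np.zeros_like(t), -0.5 * t], axis=1)      # straight descent
novice = ideal + rng.normal(scale=0.002, size=ideal.shape)     # more hand tremor
expert = ideal + rng.normal(scale=0.0005, size=ideal.shape)    # less hand tremor
print("novice:", laplacian_smoothness(novice))
print("expert:", laplacian_smoothness(expert))
```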

    A Tangible User Interface for Interactive Data Visualisation

    Information visualisation (infovis) tools are integral for the analysis of large abstract data, where interactive processes are adopted to explore data, investigate hypotheses and detect patterns. New post-WIMP technologies exist beyond windows, icons, menus and pointing, such as tangible user interfaces (TUIs). TUIs expand on the affordance of physical objects and surfaces to better exploit motor and perceptual abilities and allow for the direct manipulation of data. TUIs have rarely been studied in the field of infovis. The overall aim of this thesis is to design, develop and evaluate a TUI for infovis, using expression quantitative trait loci (eQTL) as a case study. The research began by eliciting eQTL analysis requirements, which identified high-level tasks and themes for quantitative genetics and eQTL that were explored in a graphical prototype. The main contributions of this thesis are as follows. First, a rich set of interface design options for touch and an interactive surface with exclusively tangible objects were explored for the infovis case study. This work includes characterising touch and tangible interactions to understand how best to use them at various levels of metaphoric representation and embodiment. These designs were then compared to identify a set of options for a TUI that exploits the advantages of touch and tangible interaction. Existing research shows computer vision commonly utilised as the TUI technology of choice. This thesis contributes a rigorous technical evaluation of another promising technology, micro-controllers and sensors, as well as computer vision. However, the findings showed that some sensors used with micro-controllers are lacking in capability, so computer vision was adopted for the development of the TUI. The majority of TUIs for infovis are presented as technical developments or design case studies, but lack formal evaluation. The last contribution of this thesis is a quantitative and qualitative comparison of the TUI and touch UI for the infovis case study. Participants adopted more effective strategies to explore patterns and performed fewer unnecessary analyses with the TUI, which led to significantly faster performance. Contrary to common belief, bimanual interactions were infrequently used for both interfaces, while epistemic actions were strongly promoted for the TUI and contributed to participants’ efficient exploration strategies.
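    As an illustration of how tracked tangible objects might drive an infovis filter, here is a minimal sketch that maps token positions on an interactive surface to eQTL filter parameters. The surface dimensions, token roles, and filter semantics are invented for the example and do not describe the thesis prototype.

```python
from dataclasses import dataclass

SURFACE_W, SURFACE_H = 1920, 1080   # assumed tabletop resolution in pixels

@dataclass
class Token:
    """A tracked tangible object: an id plus its position on the surface."""
    token_id: str
    x: float
    y: float

def token_to_filter(token: Token) -> dict:
    """Map a token's surface position to a data filter. Horizontal position
    selects a chromosome, vertical position a significance threshold; both
    mappings are illustrative assumptions."""
    chromosome = 1 + int((token.x / SURFACE_W) * 22)          # chromosomes 1-23
    neg_log_p = round((1.0 - token.y / SURFACE_H) * 10.0, 2)  # 0 (bottom) .. 10 (top)
    return {"token": token.token_id,
            "chromosome": min(chromosome, 23),
            "min_neg_log_p": neg_log_p}

# Example: two tangibles placed on the surface by the analyst.
for tok in [Token("filter_A", 250, 200), Token("filter_B", 1500, 900)]:
    print(token_to_filter(tok))
```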

    How a Diverse Research Ecosystem Has Generated New Rehabilitation Technologies: Review of NIDILRR’s Rehabilitation Engineering Research Centers

    Over 50 million United States citizens (1 in 6 people in the US) have a developmental, acquired, or degenerative disability. The average US citizen can expect to live 20% of his or her life with a disability. Rehabilitation technologies play a major role in improving the quality of life for people with a disability, yet widespread and highly challenging needs remain. Within the US, a major effort aimed at the creation and evaluation of rehabilitation technology has been the Rehabilitation Engineering Research Centers (RERCs) sponsored by the National Institute on Disability, Independent Living, and Rehabilitation Research. As envisioned at their conception by a panel of the National Academy of Sciences in 1970, these centers were intended to take a “total approach to rehabilitation”, combining medicine, engineering, and related science, to improve the quality of life of individuals with a disability. Here, we review the scope, achievements, and ongoing projects of an unbiased sample of 19 currently active or recently terminated RERCs. Specifically, for each center, we briefly explain the needs it targets, summarize key historical advances, identify emerging innovations, and consider future directions. Our assessment from this review is that the RERC program indeed involves a multidisciplinary approach, with 36 professional fields involved, although 70% of research and development staff are in engineering fields, 23% in clinical fields, and only 7% in basic science fields; significantly, 11% of the professional staff have a disability related to their research. We observe that the RERC program has substantially diversified the scope of its work since the 1970s, addressing more types of disabilities using more technologies, and, in particular, often now focusing on information technologies. RERC work also now often views users as integrated into an interdependent society through technologies that both people with and without disabilities co-use (such as the internet, wireless communication, and architecture). In addition, RERC research has evolved to view users as able to improve outcomes through learning, exercise, and plasticity (rather than being static), which can be optimally timed. We provide examples of rehabilitation technology innovation produced by the RERCs that illustrate this increasingly diversifying scope and evolving perspective. We conclude by discussing growth opportunities and possible future directions of the RERC program.

    Measuring user experience for virtual reality

    In recent years, Virtual Reality (VR) and 3D User Interfaces (3DUI) have seen a drastic increase in popularity, especially in terms of consumer-ready hardware and software. These technologies have the potential to create new experiences that combine the advantages of reality and virtuality. While the technology for input as well as output devices is market-ready, only a few solutions for everyday VR - online shopping, games, or movies - exist, and empirical knowledge about performance and user preferences is lacking. All this makes the development and design of human-centered user interfaces for VR a great challenge. This thesis investigates the evaluation and design of interactive VR experiences. We introduce the Virtual Reality User Experience (VRUX) model based on VR-specific external factors and evaluation metrics such as task performance and user preference. Based on our novel UX evaluation approach, we contribute by exploring the following directions: shopping in virtual environments, as well as text entry and menu control in the context of everyday VR. Along with this, we summarize our findings as design spaces and guidelines for choosing optimal interfaces and controls in VR.
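    A small sketch of how per-condition task performance and preference data could be aggregated when comparing VR interfaces. The condition names, trial values, and rating scale are assumptions and are not taken from the VRUX studies.

```python
import statistics
from collections import defaultdict

# Hypothetical trial log: (interface condition, task completion time in s,
# preference rating on a 1-7 scale). Values are made up for illustration.
trials = [
    ("controller_menu", 12.4, 5), ("controller_menu", 11.1, 6),
    ("gaze_menu",       15.8, 4), ("gaze_menu",       14.9, 3),
    ("hand_menu",       10.2, 6), ("hand_menu",        9.7, 7),
]

def summarise(trials):
    """Group trials by condition and report mean completion time and preference."""
    by_condition = defaultdict(list)
    for condition, time_s, rating in trials:
        by_condition[condition].append((time_s, rating))
    summary = {}
    for condition, rows in by_condition.items():
        summary[condition] = {
            "mean_time_s": round(statistics.mean(t for t, _ in rows), 2),
            "mean_preference": round(statistics.mean(r for _, r in rows), 2),
            "n": len(rows),
        }
    return summary

for condition, stats in summarise(trials).items():
    print(condition, stats)
```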

    Design-led approach for transferring the embodied skills of puppet stop-motion animators into haptic workspaces

    This design-led research investigates the transfer of puppet stop-motion animators’ embodied skills from the physical workspace into a digital environment. The approach is to create a digital workspace that evokes an embodied animating experience and allows puppet stop-motion animators to work in it unencumbered. The insights and outcomes of the practical explorations are discussed from the perspective of embodied cognition. The digital workspace employs haptic technology, an advanced multi-modal interface technology capable of invoking the tactile, kinaesthetic and proprioceptive senses. The overall aim of this research is to contribute design considerations and strategies to the Human-Computer Interaction design community for developing haptic workspaces that can seamlessly transfer and accommodate the rich embodied knowledge of non-digital skillful practitioners. Following an experiential design methodology, a series of design studies in collaboration with puppet stop-motion animators led to the development of a haptic workspace prototype for producing stop-motion animations. Each design study practically explored the transfer of different aspects of the puppet stop-motion animation practice into the haptic workspace. Beginning with an initial haptic workspace prototype, its design was refined in each study with the addition of new functionalities and new interaction metaphors, which were always developed with the aim of creating and maintaining an embodied animating experience. The method of multiple streams of reflection was proposed as an important design tool for identifying, understanding and articulating design insights, empirical results and contextual considerations throughout the design studies. This thesis documents the development of the haptic workspace prototype and discusses the collected design insights and empirical results from the perspective of embodied cognition. In addition, it describes and reviews the design methodology that was adopted as an appropriate approach towards the design of the haptic workspace prototype.
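    To give a concrete flavour of the kind of force feedback such a haptic workspace relies on, below is a minimal sketch of a spring-based force loop that resists the hand as it penetrates a virtual surface. The device interface (get_position, set_force) is a hypothetical placeholder, not the API of the prototype's haptic hardware, and the stiffness value is assumed.

```python
import numpy as np

STIFFNESS = 400.0      # N/m, assumed virtual spring constant
SURFACE_Z = 0.0        # virtual floor height in metres

def contact_force(tool_pos: np.ndarray) -> np.ndarray:
    """Simple penalty-based haptic rendering: push the tool back out of the
    virtual surface with a spring force proportional to penetration depth."""
    penetration = SURFACE_Z - tool_pos[2]
    if penetration <= 0.0:
        return np.zeros(3)                 # no contact, no force
    return np.array([0.0, 0.0, STIFFNESS * penetration])

def haptic_loop(device, steps: int = 1000):
    """One possible servo loop; `device` is a hypothetical object exposing
    get_position() -> (x, y, z) in metres and set_force((fx, fy, fz)) in
    newtons. Real haptic devices run such loops at roughly 1 kHz."""
    for _ in range(steps):
        pos = np.asarray(device.get_position(), dtype=float)
        device.set_force(tuple(contact_force(pos)))

# Example with a stand-in device that dips below the virtual surface.
class FakeDevice:
    def __init__(self):
        self.z = 0.002
    def get_position(self):
        self.z -= 0.001
        return (0.0, 0.0, self.z)
    def set_force(self, f):
        print("force:", f)

haptic_loop(FakeDevice(), steps=5)
```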

    The significance of silence. Long gaps attenuate the preference for ‘yes’ responses in conversation.

    In conversation, negative responses to invitations, requests, offers and the like more often occur with a delay; conversation analysts speak of them as dispreferred. Here we examine the contrasting cognitive load that ‘yes’ and ‘no’ responses impose, either when given relatively fast (300 ms) or delayed (1000 ms). Participants heard mini-dialogues, with turns extracted from a spoken corpus, while having their EEG recorded. We find that a fast ‘no’ evokes an N400 effect relative to a fast ‘yes’; however, this contrast is not present for delayed responses. This shows that an immediate response is expected to be positive, but this expectation disappears as the response time lengthens, because in ordinary conversation the probability of a ‘no’ has by then increased. Additionally, however, ‘no’ responses elicit a late frontal positivity both when they are fast and when they are delayed. Thus, regardless of the latency of the response, a ‘no’ response is associated with a late positivity, since a negative response is always dispreferred and may require an account. Together these results show that negative responses to social actions exact a higher cognitive load, especially when they are least expected, that is, when given as an immediate response.
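    As a rough illustration of how the reported N400 contrast could be quantified, the sketch below averages single-trial EEG epochs per condition and compares mean amplitude in a 300-500 ms window for a single channel. The sampling rate, window, and synthetic data are assumptions for illustration, not the study's analysis pipeline.

```python
import numpy as np

FS = 500                      # assumed sampling rate (Hz)
EPOCH_START = -0.2            # epoch start relative to response onset (s)
N400_WINDOW = (0.300, 0.500)  # classic N400 latency range (s)

def mean_amplitude(epochs: np.ndarray, window=N400_WINDOW) -> float:
    """epochs: (n_trials, n_samples) single-channel data in microvolts.
    Average over trials, then over the chosen latency window."""
    start = int((window[0] - EPOCH_START) * FS)
    stop = int((window[1] - EPOCH_START) * FS)
    erp = epochs.mean(axis=0)             # average across trials -> ERP
    return float(erp[start:stop].mean())  # mean amplitude in the window

# Synthetic example: fast 'no' trials carry an extra negativity around 400 ms.
rng = np.random.default_rng(2)
n_samples = int(1.0 * FS)                          # epochs span -200..800 ms
t = np.arange(n_samples) / FS + EPOCH_START
fast_yes = rng.normal(scale=2.0, size=(40, n_samples))
fast_no = rng.normal(scale=2.0, size=(40, n_samples)) \
          - 3.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))

n400_effect = mean_amplitude(fast_no) - mean_amplitude(fast_yes)
print(f"N400 effect (no - yes): {n400_effect:.2f} microvolts")
```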

    Task Allocation in Foraging Robot Swarms: The Role of Information Sharing

    Autonomous task allocation is a desirable feature of robot swarms that collect and deliver items in scenarios where congestion, caused by accumulated items or robots, can temporarily interfere with swarm behaviour. In such settings, self-regulation of the workforce can prevent unnecessary energy consumption. We explore two types of self-regulation: non-social, where robots become idle upon experiencing congestion, and social, where robots broadcast information about congestion to their teammates in order to socially inhibit foraging. We show that while both types of self-regulation can lead to improved energy efficiency and increase the amount of resource collected, the speed with which information about congestion flows through a swarm affects the scalability of these algorithms.
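    Below is a minimal simulation sketch of the two self-regulation policies described above: under the non-social policy a robot idles after it experiences congestion itself, while under the social policy it also broadcasts congestion so that teammates inhibit their own foraging. The congestion probability, idle duration, and inhibition strength are illustrative assumptions rather than parameters from the paper.

```python
import random

IDLE_STEPS = 20          # assumed rest period after detecting congestion
CONGESTION_PROB = 0.05   # assumed per-step chance of hitting congestion

class Robot:
    def __init__(self, rid, social):
        self.rid = rid
        self.social = social     # True: broadcast congestion to teammates
        self.idle_for = 0
        self.collected = 0

    def step(self, swarm):
        if self.idle_for > 0:            # individually or socially inhibited
            self.idle_for -= 1
            return
        if random.random() < CONGESTION_PROB:
            self.idle_for = IDLE_STEPS   # non-social self-regulation
            if self.social:              # social inhibition of teammates
                for other in swarm:
                    if other is not self:
                        other.idle_for = max(other.idle_for, IDLE_STEPS // 2)
        else:
            self.collected += 1          # successful foraging step

def run(social, n_robots=10, steps=500, seed=3):
    random.seed(seed)
    swarm = [Robot(i, social) for i in range(n_robots)]
    for _ in range(steps):
        for robot in swarm:
            robot.step(swarm)
    return sum(r.collected for r in swarm)

print("non-social:", run(social=False))
print("social:    ", run(social=True))
```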