
    User-Defined Gestures with Physical Props in Virtual Reality

    ©2021 Association for Computing Machinery. This is the author’s version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of the ACM on Human-Computer Interaction, https://doi.org/10.1145/3486954.
    When interacting with virtual reality (VR) applications like CAD and open-world games, people may want to use gestures as a means of leveraging their knowledge from the physical world. However, people may prefer physical props over handheld controllers to input gestures in VR. We present an elicitation study where 21 participants chose from 95 props to perform manipulative gestures for 20 CAD-like and open-world game-like referents. When analyzing this data, we found existing methods for elicitation studies were insufficient to describe gestures with props, or to measure agreement with prop selection (i.e., agreement between sets of items). We proceeded by describing gestures as context-free grammars, capturing how different props were used in similar roles in a given gesture. We present gesture and prop agreement scores using a generalized agreement score that we developed to compare multiple selections rather than a single selection. We found that props were selected based on their resemblance to virtual objects and the actions they afforded; that gesture and prop agreement depended on the referent, with some referents leading to similar gesture choices, while others led to similar prop choices; and that a small set of carefully chosen props can support multiple gestures.
    Funding: NSERC, Discovery Grant 2016-04422 || NSERC, Discovery Grant 2019-06589 || NSERC, Discovery Accelerator Grant 492970-2016 || NSERC, CREATE Saskatchewan-Waterloo Games User Research (SWaGUR) Grant 479724-2016 || Ontario Ministry of Colleges and Universities, Ontario Early Researcher Award ER15-11-18
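
    As a rough illustration of what a set-based agreement measure can look like, the sketch below scores agreement for one referent as the mean pairwise Jaccard overlap between participants' selection sets. This is a hypothetical construction for intuition only; the generalized score developed in the paper may be defined differently.

        from itertools import combinations

        def jaccard(a: set, b: set) -> float:
            """Overlap between two selection sets (1.0 = identical, 0.0 = disjoint)."""
            return len(a & b) / len(a | b) if a | b else 1.0

        def set_agreement(selections: list[set]) -> float:
            """Mean pairwise Jaccard similarity over all participants' selections
            for one referent; generalizes single-choice agreement to sets of items."""
            pairs = list(combinations(selections, 2))
            if not pairs:
                return 1.0
            return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

        # Example: three participants' (hypothetical) prop choices for one referent
        print(set_agreement([{"ball", "rod"}, {"ball"}, {"ball", "plate"}]))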

    An Overview of Wearable Haptic Technologies and Their Performance in Virtual Object Exploration.

    We often interact with our environment through manual handling of objects and exploration of their properties. Object properties (OP), such as texture, stiffness, size, shape, temperature, weight, and orientation, provide the information necessary to successfully perform interactions. The human haptic perception system plays a key role in this. As virtual reality (VR) has been a growing field of interest with many applications, adding haptic feedback to virtual experiences is another step towards more realistic virtual interactions. However, integrating haptics in a realistic manner requires complex technological solutions and actual user-testing in virtual environments (VEs) for verification. This review provides a comprehensive overview of recent wearable haptic devices (HDs), categorized by the OP exploration for which they have been verified in a VE. We found 13 studies that specifically addressed user-testing of wearable HDs in healthy subjects. We map and discuss the different technological solutions for different OP explorations, which are useful for the design of future haptic object interactions in VR, and provide recommendations for future work.

    Architectural and Urban Spatial Digital Simulations

    This study concerns the digital tools and simulation methods necessary for the description, conception, perception, and analysis of spatial architectural and urban design. The purpose of the study is to categorise, analyse, and describe the influence of digital simulation tools and methods on architectural and urban design. The study analyses techniques, applications, and research in the field of digital simulations of architectural/urban ensembles, while also addressing the benefits of their use for both scientific and spatial perception of architectural/urban design.

    EagleView: A Video Analysis Tool for Visualising and Querying Spatial Interactions of People and Devices

    To study and understand group collaborations involving multiple handheld devices and large interactive displays, researchers frequently analyse video recordings of interaction studies to interpret people's interactions with each other and/or devices. Advances in ubicomp technologies allow researchers to record spatial information through sensors in addition to video material. However, the volume of video data and the high number of coding parameters involved in such an interaction analysis make this a time-consuming and labour-intensive process. We designed EagleView, which provides analysts with real-time visualisations during playback of videos and an accompanying data stream of tracked interactions. Real-time visualisations take into account key proxemic dimensions, such as distance and orientation. Overview visualisations show people's position and movement over longer periods of time. EagleView also allows the user to query people's interactions with an easy-to-use visual interface. Results are highlighted on the video player's timeline, enabling quick review of relevant instances. Our evaluation with expert users showed that EagleView is easy to learn and use, and that its visualisations allow analysts to gain insights into collaborative activities.
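
    For intuition, the sketch below derives the two proxemic dimensions named above, distance and orientation, from tracked 2D positions and a heading angle. The function name and data layout are illustrative assumptions, not EagleView's actual API.

        import math

        def proxemics(p1, heading1, p2):
            """Distance between two tracked entities, and how far entity 1's
            heading deviates from directly facing entity 2 (0 deg = facing it)."""
            dx, dy = p2[0] - p1[0], p2[1] - p1[1]
            distance = math.hypot(dx, dy)
            bearing = math.degrees(math.atan2(dy, dx))
            deviation = abs((bearing - heading1 + 180) % 360 - 180)
            return distance, deviation

        # Person at (0, 0) facing 90 degrees; tablet at (1, 1)
        d, facing = proxemics((0.0, 0.0), 90.0, (1.0, 1.0))
        print(f"distance={d:.2f} m, facing deviation={facing:.0f} deg")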

    Clothing-Integrated Human-Technology Interaction

    Due to the different disabilities of people and versatile use environments, the current handheld and screen-based digital devices on the market are not suitable for all consumers and all situations. Thus, there is an urgent need for human-technology interaction solutions in which the required input actions to digital devices are simple, easy to perform, and instinctive, allowing the whole of society to effortlessly interact with the surrounding technology. In passive ultra-high frequency (UHF) radio frequency identification (RFID) systems, the tag consists only of an antenna and a simple integrated circuit (IC). The tag draws all the power it needs from the RFID reader and can thus be integrated into clothing seamlessly and maintenance-free. This thesis shows that by integrating passive UHF RFID technology into clothing, body movements and gestures can be monitored by tracking the individual IDs and backscattered signals of the tags. Electro-textiles and embroidery with conductive thread are found to be suitable options for the manufacturing and materials of such garments. The thesis establishes several RFID-based interface solutions, multiple types of inputs through RFID platforms, and ways of controlling the surroundings and communicating through RFID-based on/off functions. The developed intelligent clothing is envisioned to provide versatile applications for assistive technology, entertainment, ambient assisted living, and comfort and safety in work environments, to name a few examples.
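
    As a rough sketch of the sensing idea, the code below flags an input event when a tag's backscattered signal strength (RSSI) drops sharply relative to its recent baseline, as might happen when a garment-integrated tag is covered or bent by a body movement. The threshold, window size, and data layout are illustrative assumptions, not the thesis's implementation.

        def detect_tag_events(rssi_log, drop_db=6.0, window=5):
            """Flag tags whose backscattered signal drops sharply below the
            running baseline of the previous `window` readings.
            rssi_log: dict mapping tag ID -> list of RSSI readings in dBm."""
            events = []
            for tag_id, readings in rssi_log.items():
                for i in range(window, len(readings)):
                    baseline = sum(readings[i - window:i]) / window
                    if baseline - readings[i] >= drop_db:
                        events.append((tag_id, i))
                        break  # report only the first event per tag
            return events

        log = {"sleeve-tag-1": [-48, -47, -49, -48, -47, -62],
               "chest-tag-2": [-55, -54, -56, -55, -54, -55]}
        print(detect_tag_events(log))  # [('sleeve-tag-1', 5)]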

    Investigating New Forms of Single-handed Physical Phone Interaction with Finger Dexterity

    With phones becoming more powerful and such an essential part of our lives, manufacturers are creating new device forms and interactions to better support even more diverse functions. A common goal is to enable a larger input space and expand the input vocabulary using new physical phone interactions beyond touchscreen input. This thesis explores how our hand and finger dexterity can expand physical phone interactions. To understand how we can physically manipulate a phone using the fine motor skills of the fingers, we identify and evaluate single-handed "dexterous gestures". Four manipulations are defined: shift, spin (yaw axis), rotate (roll axis), and flip (pitch axis), with a formative survey showing all except flip have been performed for various reasons. A controlled experiment examines the speed, behaviour, and preference of these manipulations in the form of dexterous gestures, considering two directions and two movement magnitudes. Using a heuristic recognizer for spin, rotate, and flip, a one-week usability experiment finds that increased practice and familiarity improve the speed and comfort of dexterous gestures. With the confirmation that users can loosen their grip and perform gestures with finger dexterity, we investigate the performance of one-handed touch input on the side of a mobile phone. An experiment examines grip change and subjective preference when reaching for side targets using different fingers. Two follow-up experiments examine taps and flicks using the thumb and index finger in a new two-dimensional input space. We simulate a side-touch sensor with a combination of capacitive sensing and motion tracking to distinguish touches on the lower, middle, or upper edges. We further focus on physical phone interaction with a new phone form factor by exploring and evaluating single-handed folding interactions suitable for "modern flip phones": smartphones with a bendable full-screen touch display. Three categories of interactions are identified: only-fold, touch-enhanced fold, and fold-enhanced touch, in which gestures are created using fold direction, fold magnitude, and touch position. A prototype evaluation device is built to resemble current flip phones, but with a modified spring system that enables folding in both directions. A study investigates performance and preference for 30 fold gestures, revealing which are most promising. Overall, our exploration shows that users can loosen their grip to physically interact with phones in new ways, and that these interactions could be practically integrated into daily phone applications.
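
    The heuristic recognizer is not specified in the abstract; the sketch below shows one plausible heuristic that classifies spin, rotate, and flip by the axis with the largest accumulated rotation over a window of gyroscope samples. The axis mapping follows the definitions above (spin: yaw, rotate: roll, flip: pitch); the threshold and data layout are illustrative assumptions, not the thesis's implementation.

        def classify_manipulation(gyro_samples, min_deg=120.0):
            """Classify a dexterous gesture by the axis with the largest
            accumulated rotation: pitch -> flip, yaw -> spin, roll -> rotate.
            gyro_samples: list of (pitch, yaw, roll) angular deltas in degrees."""
            totals = [sum(abs(s[axis]) for s in gyro_samples) for axis in range(3)]
            axis = max(range(3), key=lambda a: totals[a])
            if totals[axis] < min_deg:
                return None  # movement too small to count as a gesture
            return ("flip", "spin", "rotate")[axis]

        # A quick half-turn about the roll axis
        samples = [(1.0, 2.0, 45.0)] * 4
        print(classify_manipulation(samples))  # rotate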

    Research Activities Report of the Research Institute of Electrical Communication, Tohoku University, No. 29 (FY 2022)

    Departmental bulletin paper (紀要類)

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face several open challenges, including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly in swarm robots with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation control can be performed by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., BristleBots) for artistic creation. In particular, we combine bio-inspired techniques (i.e., flocking and foraging) with learning-based control strategies (using artificial neural networks) for adaptive control of multiple robots. We first review how learning-based control and networked dynamical systems can be used to assign distributed, decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We proceed by presenting a novel flocking control method for UAV swarms using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy, and each UAV performs actions based on the local information it collects. In addition, to avoid collisions among UAVs and guarantee flocking and navigation, a reward function is designed that combines global flocking maintenance, a mutual reward, and a collision penalty. We adapt the deep deterministic policy gradient (DDPG) algorithm with centralized training and decentralized execution to obtain the flocking control policy, using actor-critic networks and a global state-space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walks to control the communication between a team of robots with swarming behavior for musical creation.
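
    As a rough sketch of how the described reward terms could be composed per UAV, the code below combines a cohesion (mutual) term, a leader-following term, and a collision penalty. All weights, distance thresholds, and function names are illustrative assumptions, not the thesis's actual formulation.

        import math

        def flocking_reward(pos, neighbors, leader, d_safe=1.0, d_coh=5.0,
                            w_coh=1.0, w_lead=0.5, w_col=10.0):
            """Per-UAV reward: stay cohesive with neighbors, track the leader,
            and avoid collisions. pos, leader, and neighbors are 2D points."""
            reward = 0.0
            for n in neighbors:
                d = math.dist(pos, n)
                if d < d_safe:
                    reward -= w_col                    # collision penalty
                reward -= w_coh * max(0.0, d - d_coh)  # cohesion (mutual) term
            reward -= w_lead * math.dist(pos, leader)  # leader-following term
            return reward

        # One UAV with a too-close neighbor and a distant one
        print(flocking_reward((0, 0), [(0.5, 0), (6, 0)], leader=(2, 0)))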