
    Interaction Methods for Smart Glasses: A Survey

    Since the launch of Google Glass in 2014, smart glasses have mainly been designed to support micro-interactions. The ultimate goal of turning them into a full augmented reality interface has not yet been attained because their controls remain cumbersome. Augmented reality involves superimposing interactive computer graphics onto physical objects in the real world. This survey reviews current research issues in the area of human-computer interaction for smart glasses. It first studies the smart glasses available on the market and then investigates the interaction methods proposed in the wide body of literature. The interaction methods can be classified into hand-held, touch, and touchless input; this paper mainly focuses on touch and touchless input. Touch input can be further divided into on-device and on-body, while touchless input can be classified into hands-free and freehand. Next, we summarize the existing research efforts and trends, in which touch and touchless input are evaluated against a total of eight interaction goals. Finally, we discuss several key design challenges and the possibility of multi-modal input for smart glasses.
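The classification described in the abstract can be sketched as a small tree; everything beyond the category names themselves (the data structure and the traversal helper) is illustrative, not from the survey:

```python
# Illustrative sketch of the survey's interaction-method taxonomy.
# The category names come from the abstract; the nested-dict encoding
# and the traversal function below are assumptions for illustration.
TAXONOMY = {
    "hand-held": {},
    "touch": {
        "on-device": {},
        "on-body": {},
    },
    "touchless": {
        "hands-free": {},
        "freehand": {},
    },
}

def leaf_categories(tree, prefix=()):
    """Yield the full path to every leaf category in the taxonomy."""
    for name, children in tree.items():
        path = prefix + (name,)
        if children:
            yield from leaf_categories(children, path)
        else:
            yield path

for path in leaf_categories(TAXONOMY):
    print(" > ".join(path))
```

Traversing the tree enumerates the five leaf categories the survey evaluates (hand-held, two touch variants, two touchless variants).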

    Novel Interaction Techniques for Mobile Augmented Reality Applications: A Systematic Literature Review

    This study reviews the research on interaction techniques and methods that could be applied in mobile augmented reality scenarios. The review focuses on the most recent advances and especially considers the use of head-mounted displays. In the review process, we have followed a systematic approach, which makes the review transparent, repeatable, and less prone to human error than if it were conducted in a more traditional manner. The main research subjects covered in the review are head orientation and gaze tracking, gestures and body-part tracking, and multimodality, as far as these subjects relate to human-computer interaction. Beyond these, a number of other areas of interest are also discussed.

    EdgeGlass: Exploring Tapping Performance on Smart Glasses while Sitting and Walking

    Currently, smart glasses support touch sensing only through a front-mounted touchpad; touches on the top, front, and bottom sides of a glasses-mounted touchpad have not yet been explored. We made a customized touch sensor (length: 5-6 cm, height: 1 cm, width: 0.5 cm) that senses on its top, front, and bottom surfaces, using capacitive touch sensing technology (MPR121 chips) with an electrode size of ~4.5 mm square, which is typical of modern touchscreens. The resulting hardware system consists of a total of 48 separate touch sensors. We investigated the interaction technique in both sitting and walking situations, using single-finger sequential tapping and pair-finger simultaneous tapping. We divided each side into three equal target areas, and this separation yields a total of 36 tap combinations. Our quantitative results showed that in the walking condition, pair-finger simultaneous taps were faster and less error-prone than single-finger sequential taps, whereas in the sitting condition, single-finger sequential taps were slower but less error-prone than pair-finger simultaneous taps. Single-finger sequential taps were also slower but much less error-prone while sitting than while walking. Interestingly, pair-finger simultaneous taps performed similarly, in terms of both error rate and completion time, in the sitting and walking conditions. The mental, physical, performance, and effort dimensions of workload were unaffected by tapping type and body pose.
    For temporal demand, the mean temporal (time pressure) workload was higher for single-finger sequential tapping than for pair-finger simultaneous tapping, but body pose did not affect temporal workload for either tapping type. For frustration, participants reported a higher mean workload for single-finger sequential tapping than for pair-finger simultaneous tapping, and, between body poses, walking produced a higher mean frustration workload than sitting. The subjective measure of overall workload during the performance study showed no significant difference for either independent variable: body pose (sitting vs. walking) or tapping type (single-finger sequential vs. pair-finger simultaneous).
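The 36 combinations described above follow from choosing 2 of the 9 tap targets (3 touch surfaces × 3 target areas each). A minimal sketch, with surface and area labels assumed for illustration (the abstract specifies only the counts):

```python
from itertools import combinations, product

# Hypothetical labels for the three touch surfaces and the three
# target areas per surface; the study gives the counts, not the names.
SIDES = ["top", "front", "bottom"]
AREAS = ["left", "middle", "right"]

# 9 distinct targets on the glasses-mounted touchpad.
targets = [f"{side}-{area}" for side, area in product(SIDES, AREAS)]

# Each trial taps two distinct targets, either sequentially with one
# finger or simultaneously with a pair of fingers: C(9, 2) = 36.
pairs = list(combinations(targets, 2))
print(len(targets), len(pairs))  # 9 36
```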

    Computational interaction techniques for 3D selection, manipulation and navigation in immersive VR

    3D interaction provides a natural interplay for HCI. Many techniques involving diverse sets of hardware and software components have been proposed, which has generated an explosion of Interaction Techniques (ITes), Interactive Tasks (ITas) and input devices, thus increasing the heterogeneity of tools in 3D User Interfaces (3DUIs). Moreover, most of those techniques are based on general formulations that fail to fully exploit human capabilities for interaction. This is because while 3D interaction enables naturalness, it also produces complexity and limitations when using 3DUIs. In this thesis, we aim to generate approaches that better exploit human capabilities for interaction by combining human factors, mathematical formalizations and computational methods. Our approach focuses on exploring the close coupling between specific ITes and ITas while addressing common issues of 3D interaction. We specifically focus on the stages of interaction within Basic Interaction Tasks (BITas), i.e., data input, manipulation, navigation and selection. Common limitations of these tasks are: (1) the complexity of mapping generation for input devices, (2) fatigue in mid-air object manipulation, (3) space constraints in VR navigation; and (4) low accuracy in 3D mid-air selection. Along with two chapters of introduction and background, this thesis presents five main works. Chapter 3 focuses on the design of mid-air gesture mappings based on human tacit knowledge. Chapter 4 presents a solution to address user fatigue in mid-air object manipulation. Chapter 5 addresses space limitations in VR navigation. Chapter 6 describes an analysis and a correction method to address drift effects involved in scale-adaptive VR navigation; and Chapter 7 presents a hybrid 3D/2D technique that allows for precise selection of virtual objects in highly dense environments (e.g., point clouds).
    Finally, we conclude by discussing how the contributions obtained from this exploration provide techniques and guidelines to design more natural 3DUIs.


    Exploring Hand-Based Haptic Interfaces for Mobile Interaction Design

    Visual attention is crucial in mobile environments, not only for staying aware of dynamic situations, but also for safety reasons. However, current mobile interaction design forces the user to focus on the visual interface of the handheld device, thus limiting the user's ability to process visual information from their environment. In response to these issues, a common solution is to encode information with on-device vibrotactile feedback. However, the vibration is transitory and is often difficult to perceive when mobile. Another approach is to make visual interfaces even more dominant with smart glasses, which enable head-up interaction on their see-through interface. Yet, their input methods raise many concerns regarding social acceptability, preventing them from being widely adopted. There is a need to derive feasible interaction techniques for mobile use while maintaining the user's situational awareness, and this thesis argues that solutions could be derived through the exploration of hand-based haptic interfaces. The objective of this research is to provide multimodal interaction for users to better interact with information while maintaining proper attention to the environment in mobile scenarios. Three research areas were identified. The first is developing expressive haptic stimuli, in which the research investigates how static haptic stimuli could be derived. The second is designing mobile spatial interaction with the user's surroundings as content, which manifests situations in which visual attention to the environment is most needed. The last is interacting with the always-on visual interface on smart glasses, the seemingly ideal solution for mobile applications. The three areas extend along the axis of the demand of visual attention on the interface, from non-visual to always-on visual interfaces. 
    Interactive prototypes were constructed and deployed in studies for each research area, including two shape-changing mechanisms feasible for augmenting mobile devices and a spatial-sensing haptic glove featuring mid-air hand-gestural interaction with haptic support. The findings across the three research areas highlight the immediate benefits of incorporating hand-based haptic interfaces into applications. First, shape-changing interfaces can provide static and continuous haptic stimuli for mobile communication. Secondly, enabling direct interaction with real-world landmarks through a haptic glove, while leaving visual attention on the surroundings, can result in a more immersive experience. Lastly, users of smart glasses can benefit from the unobtrusive hand-gestural interaction enabled by the glove's isolated tracking technique. Overall, this work calls for mobile interaction design to consider haptic stimuli beyond on-device vibration, and mobile hardware solutions beyond the handheld form factor. It also invites designers to consider, from an interaction design perspective, how to confront the competition for cognitive resources among multiple tasks.

    From seen to unseen: Designing keyboard-less interfaces for text entry on the constrained screen real estate of Augmented Reality headsets

    Text input is a very challenging task in the constrained screen real estate of Augmented Reality headsets. Typical keyboards spread over multiple lines and occupy a significant portion of the screen. In this article, we explore the feasibility of single-line text entry systems for smartglasses. We first design FITE, a dynamic keyboard in which the characters are positioned according to their probability given the current input. However, the dynamic layout leads to mediocre text input speed and low accuracy. We then introduce HIBEY, a fixed one-line solution that further reduces screen real estate usage by hiding the layout. Despite its hidden layout, HIBEY surprisingly performs much better than FITE, achieving a mean text entry rate of 9.95 words per minute (WPM) with 96.06% accuracy, which is comparable to other state-of-the-art approaches. After 8 days, participants achieve an average of 13.19 WPM. In addition, HIBEY occupies only 13.14% of the screen real estate at the edge region, which is 62.80% smaller than the default keyboard layout on Microsoft HoloLens.
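FITE's core idea, ordering candidate characters by their estimated probability given the current input, can be sketched with a toy bigram model. The corpus, the model, and the ranking function here are illustrative assumptions, not the authors' implementation, which would use a proper language model:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for a real language model (an assumption;
# FITE's actual probability source is not specified here).
corpus = ["hello", "help", "hibey", "headset", "holo", "hand"]

# Count character bigrams: bigrams[prev][next] = frequency.
bigrams = defaultdict(Counter)
for word in corpus:
    for prev, nxt in zip(word, word[1:]):
        bigrams[prev][nxt] += 1

def ranked_keys(prefix, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Order the keyboard's characters by estimated likelihood of being
    typed next, so likelier keys can occupy the easiest positions."""
    if not prefix:
        return list(alphabet)
    counts = bigrams[prefix[-1]]
    return sorted(alphabet, key=lambda c: -counts[c])

# After typing "he", the likeliest next characters come first.
print(ranked_keys("he")[:3])
```

A dynamic layout like this repositions keys on every keystroke, which is exactly the property the study found to hurt accuracy: users cannot build muscle memory for a layout that keeps moving.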

    A Body-and-Mind-Centric Approach to Wearable Personal Assistants
