
    Interaction Methods for Smart Glasses : A Survey

    Since the launch of Google Glass in 2014, smart glasses have mainly been designed to support micro-interactions. Their ultimate goal of becoming an augmented reality interface has not yet been attained due to an encumbrance of controls. Augmented reality involves superimposing interactive computer graphics onto physical objects in the real world. This survey reviews current research issues in the area of human-computer interaction for smart glasses. It first studies the smart glasses available on the market and then investigates the interaction methods proposed in the wide body of literature. The interaction methods can be classified into hand-held, touch, and touchless input. This paper focuses mainly on touch and touchless input. Touch input can be further divided into on-device and on-body input, while touchless input can be classified into hands-free and freehand input. Next, we summarize the existing research efforts and trends, in which touch and touchless input are evaluated against a total of eight interaction goals. Finally, we discuss several key design challenges and the possibility of multi-modal input for smart glasses. Peer reviewed.
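The survey's classification of smart-glasses input can be sketched as a small data structure. The concrete example methods under each leaf below (voice, frame tap, and so on) are illustrative assumptions, not an exhaustive list taken from the survey:

```python
from typing import Optional

# The survey's taxonomy of smart-glasses input, as a nested mapping.
# Example methods under each leaf are illustrative assumptions.
TAXONOMY = {
    "hand-held": ["dedicated controller", "smartphone as controller"],
    "touch": {
        "on-device": ["frame tap", "temple swipe"],
        "on-body": ["palm touch", "forearm tap"],
    },
    "touchless": {
        "hands-free": ["voice", "head movement", "gaze"],
        "freehand": ["mid-air hand gesture"],
    },
}

def methods(category: str, subcategory: Optional[str] = None) -> list:
    """Return the example methods filed under a category (and subcategory)."""
    node = TAXONOMY[category]
    if isinstance(node, dict):
        return node[subcategory]
    return node
```

A lookup such as `methods("touchless", "hands-free")` then returns the hands-free examples, mirroring the two-level split the survey describes.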

    Exploring Hand-Based Haptic Interfaces for Mobile Interaction Design

    Visual attention is crucial in mobile environments, not only for staying aware of dynamic situations, but also for safety reasons. However, current mobile interaction design forces the user to focus on the visual interface of the handheld device, thus limiting the user's ability to process visual information from their environment. In response to these issues, a common solution is to encode information with on-device vibrotactile feedback. However, the vibration is transitory and is often difficult to perceive when mobile. Another approach is to make visual interfaces even more dominant with smart glasses, which enable head-up interaction on their see-through interface. Yet, their input methods raise many concerns regarding social acceptability, preventing them from being widely adopted. There is a need to derive feasible interaction techniques for mobile use while maintaining the user's situational awareness, and this thesis argues that solutions could be derived through the exploration of hand-based haptic interfaces. The objective of this research is to provide multimodal interaction for users to better interact with information while maintaining proper attention to the environment in mobile scenarios. Three research areas were identified. The first is developing expressive haptic stimuli, in which the research investigates how static haptic stimuli could be derived. The second is designing mobile spatial interaction with the user's surroundings as content, which manifests situations in which visual attention to the environment is most needed. The last is interacting with the always-on visual interface on smart glasses, the seemingly ideal solution for mobile applications. The three areas extend along the axis of the demand of visual attention on the interface, from non-visual to always-on visual interfaces. 
Interactive prototypes were constructed and deployed in studies for each research area, including two shape-changing mechanisms feasible for augmenting mobile devices and a spatial-sensing haptic glove featuring mid-air hand-gestural interaction with haptic support. The findings across the three research areas highlight the immediate benefits of incorporating hand-based haptic interfaces into applications. First, shape-changing interfaces can provide static and continuous haptic stimuli for mobile communication. Secondly, enabling direct interaction with real-world landmarks through a haptic glove, while leaving visual attention on the surroundings, can result in a more immersive experience. Lastly, users of smart glasses can benefit from the unobtrusive hand-gestural interaction enabled by the isolated tracking technique of a haptic glove. Overall, this work calls for mobile interaction design to consider haptic stimuli beyond on-device vibration, and mobile hardware solutions beyond the handheld form factor. It also invites designers to consider how to confront the competition for cognitive resources among multiple tasks from an interaction design perspective.

    XR Input Error Mediation for Hand-Based Input: Task and Context Influences a User's Preference

    Many XR devices use bare-hand gestures to reduce the need for handheld controllers. Such gestures, however, lead to false-positive and false-negative recognition errors, which detract from the user experience. While mediation techniques enable users to overcome recognition errors by clarifying their intentions via UI elements, little research has explored how mediation techniques should be designed in XR and how a user's task and context may affect their design preferences. This research presents empirical studies of the impact of user-perceived error costs on users' preferences among three mediation technique designs, under different simulated scenarios inspired by real-life tasks. Based on a large-scale crowd-sourced survey and an immersive VR-based user study, our results suggest that the varying contexts within each task type can affect users' perceived error costs, leading to different preferred mediation techniques. We further discuss the implications of these results for future XR interaction design. Comment: IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 202
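As a rough sketch of the kind of mediation the paper studies, a recognizer's output can be gated on its confidence, escalating to a confirmation UI when the perceived error cost is high. The threshold values and the three-way outcome below are assumptions for illustration, not the paper's actual mediation designs:

```python
from dataclasses import dataclass

@dataclass
class Recognition:
    gesture: str
    confidence: float  # recognizer score in [0, 1]

def mediate(rec: Recognition, error_cost: float, base_threshold: float = 0.7) -> str:
    """Decide whether to commit, confirm, or ignore a recognized gesture.

    A higher perceived error cost raises the confidence required to
    auto-commit, reflecting the finding that task context shifts which
    mediation technique users prefer.
    """
    threshold = min(0.99, base_threshold + 0.2 * error_cost)
    if rec.confidence >= threshold:
        return "commit"   # likely true positive: act immediately
    if rec.confidence >= threshold - 0.3:
        return "confirm"  # ambiguous: clarify intent via a UI element
    return "ignore"       # likely false positive: drop silently
```

With a low error cost, a 0.95-confidence pinch commits directly, while the same pinch at 0.65 triggers a confirmation step.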

    An Empirical Evaluation On Vibrotactile Feedback For Wristband System

    With the rapid development of mobile computing, wearable wrist-worn devices are becoming more and more popular. However, the vibrotactile feedback patterns of most current wrist-worn devices are too simple to enable effective interaction in non-visual scenarios. In this paper, we propose a wristband system with four vibrating motors placed at different positions in the wristband, providing multiple vibration patterns that transmit multi-semantic information to users in eyes-free scenarios. After a comparative analysis of nine patterns in a pilot experiment, five vibrotactile patterns were used in the main experiments (positional up and down, horizontal diagonal, clockwise circular, and total vibration). Two experiments with the same 12 participants followed the same procedure, one in the lab and one outdoors. According to the experimental results, users can effectively distinguish the five patterns both in the lab and outdoors, with approximately 90% accuracy (except for the clockwise circular vibration in the outdoor experiment), showing that these five vibration patterns can be used to convey multi-semantic information. The system can be applied to eyes-free interaction scenarios for wrist-worn devices. Comment: 10 pages
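The five patterns could be encoded as timed activations of the four motors. The motor layout (0=top, 1=bottom, 2=left, 3=right), the step timings, and the pattern spellings below are assumptions for illustration; the paper's exact encoding is not given here:

```python
# Each pattern is a sequence of (active_motor_set, duration_ms) steps.
# Motor indices: 0=top, 1=bottom, 2=left, 3=right (assumed layout).
PATTERNS = {
    "up":                  [({0}, 300)],
    "down":                [({1}, 300)],
    "horizontal_diagonal": [({2}, 150), ({3}, 150)],
    "clockwise_circular":  [({0}, 150), ({3}, 150), ({1}, 150), ({2}, 150)],
    "total":               [({0, 1, 2, 3}, 300)],
}

def play(pattern_name: str) -> int:
    """Simulate playback of a pattern; returns total duration in ms.

    A real driver would switch the listed motors on for each step's
    duration; the hardware call is omitted in this sketch.
    """
    total = 0
    for motors, duration_ms in PATTERNS[pattern_name]:
        # drive(motors, duration_ms)  # hypothetical hardware call
        total += duration_ms
    return total
```

The sequential steps are what distinguish, e.g., the clockwise circular pattern from total vibration, where all four motors fire at once.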

    Emerging ExG-based NUI Inputs in Extended Realities : A Bottom-up Survey

    Incremental and quantitative improvements of two-way interactions with extended realities (XR) are contributing toward a qualitative leap into a state of XR ecosystems being efficient, user-friendly, and widely adopted. However, there are multiple barriers on the way toward the omnipresence of XR, among them the computational and power limitations of portable hardware, the social acceptance of novel interaction protocols, and the usability and efficiency of interfaces. In this article, we overview and analyse novel natural user interfaces based on sensing electrical bio-signals that can be leveraged to tackle the challenges of XR input interactions. Electroencephalography-based brain-machine interfaces that enable thought-only hands-free interaction, myoelectric input methods that track body gestures employing electromyography, and gaze-tracking electrooculography input interfaces are examples of electrical bio-signal sensing technologies united under the collective concept of ExG. ExG signal acquisition modalities provide a way to interact with computing systems using natural, intuitive actions, enriching interactions with XR. This survey provides a bottom-up overview starting from (i) underlying biological aspects and signal acquisition techniques, (ii) ExG hardware solutions, (iii) ExG-enabled applications, (iv) a discussion of the social acceptance of such applications and technologies, as well as (v) research challenges, application directions, and open problems, evidencing the benefits that ExG-based natural user interface inputs can introduce to the area of XR. Peer reviewed.
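The ExG umbrella described above can be summarized as a small lookup table; the descriptions paraphrase the abstract, and the helper function is an illustrative addition:

```python
# The three ExG modalities the survey unites, keyed by acronym.
EXG_MODALITIES = {
    "EEG": {
        "measures": "brain electrical activity",
        "interface": "brain-machine interface",
        "example_input": "thought-only hands-free interaction",
    },
    "EMG": {
        "measures": "muscle electrical activity",
        "interface": "myoelectric input",
        "example_input": "body gesture tracking",
    },
    "EOG": {
        "measures": "eye-movement electrical potentials",
        "interface": "electrooculography gaze tracking",
        "example_input": "gaze-based pointing",
    },
}

def modalities_for(need: str) -> list:
    """Return the modalities whose interface description mentions `need`."""
    return [m for m, info in EXG_MODALITIES.items() if need in info["interface"]]
```

For instance, querying for "gaze" singles out EOG, the modality the survey associates with gaze-tracking input.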

    Augmented reality selection through smart glasses

    The smart glasses market continues to grow. This growth opens the possibility that smart glasses may one day play as active a role in people's daily lives as smartphones do today. Several interaction methods for smart glasses have been studied, but it is not yet clear which method is best for interacting with virtual objects. This research covers studies on the different interaction methods for augmented reality applications, highlighting interaction techniques for smart glasses along with the advantages and disadvantages of each. An indoor Augmented Reality prototype implementing three different interaction methods was developed. Users' preferences and their willingness to perform each interaction method in public were studied. In addition, the reaction time, the time between the detection of a marker and the user's interaction with it, was measured. An outdoor Augmented Reality application was also developed to understand how the challenges differ between indoor and outdoor Augmented Reality applications. The discussion shows that users feel more comfortable using an interaction method similar to one they already use. However, the solution combining two interaction methods, the smart glasses' tap function and head movement, achieved results close to those of the controller. It is important to highlight that participants had no learning phase: the results reported refer to their first and only use of each interaction method. This suggests that the future of smart glasses interaction may be a merge of different interaction techniques.
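The reaction-time measure described above, the interval from marker detection to user interaction, can be sketched as a simple timer; the class and method names are assumptions for illustration:

```python
import time

class ReactionTimer:
    """Measures the interval between a marker being detected and the
    user's first interaction with it."""

    def __init__(self):
        self._detected_at = None

    def on_marker_detected(self):
        # Monotonic clock: immune to system clock adjustments.
        self._detected_at = time.monotonic()

    def on_user_interaction(self) -> float:
        """Return the reaction time in seconds since the last detection."""
        if self._detected_at is None:
            raise RuntimeError("no marker detection recorded")
        return time.monotonic() - self._detected_at
```

The prototype would call `on_marker_detected()` when its tracker recognizes a marker and `on_user_interaction()` when the chosen input method fires.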

    Midair Gestural Techniques for Translation Tasks in Large-Display Interaction

    Midair gestural interaction has gained a lot of attention over the past decades, with numerous attempts to apply midair gestural interfaces to large displays (and TVs), interactive walls, and smart meeting rooms. These attempts, reviewed in numerous studies, utilized differing gestural techniques for the same action, making them inherently incomparable, which in turn makes it difficult to summarize recommendations for the development of midair gestural interaction applications. Therefore, the aim was to take a closer look at one common action, translation, defined as dragging (or moving) an entity to a predefined target position while retaining the entity's size and rotation. We compared the performance and subjective experiences (participants = 30) of four midair gestural techniques (fist, palm, pinch, and sideways) in the repetitive translation of 2D objects over short and long distances on a large display. The results showed statistically significant differences in movement time and error rate favoring translation by palm over pinch and sideways at both distances. Further, the fist and sideways gestural techniques showed good performance at short and long distances, respectively. We summarize the implications of the results for the design of midair gestural interfaces, which would be useful for interaction designers and gesture recognition researchers. Peer reviewed.
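The translation task the study compares can be sketched as a small drag session: a gesture recognizer emits grab, move, and release events, and the technique (fist, palm, pinch, or sideways) only changes how "grab" is detected. The class, event names, and tolerance value are assumptions for illustration:

```python
class TranslationSession:
    """A drag-to-target interaction: grab an object, move it with the
    hand, release it, and score success by distance to the target."""

    def __init__(self):
        self.held = None       # object currently being dragged
        self.position = None   # current (x, y) of the held object

    def grab(self, obj, pos):
        self.held = obj
        self.position = pos

    def move(self, pos):
        if self.held is not None:
            self.position = pos  # object follows the hand while held

    def release(self, target, tolerance=0.05):
        """Drop the object; True if it landed within tolerance of target."""
        if self.held is None:
            return False
        dx = self.position[0] - target[0]
        dy = self.position[1] - target[1]
        ok = (dx * dx + dy * dy) ** 0.5 <= tolerance
        self.held = None
        return ok
```

Errors of the kind the study measures then map onto this sketch as releases outside the tolerance, or grabs/releases the recognizer fires at the wrong moment.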