Examining the sense of agency in human-computer interaction
Humans are agents: we feel that we control the course of events in our everyday life. This experience is known as the Sense of Agency (SoA). It is crucial not only in our daily life, but also in our interaction with technology. When we manipulate a user interface (e.g., a computer or smartphone), we expect the system to respond to our input commands with feedback, as we desire to feel in charge of the interaction. If this interplay elicits a SoA, the user perceives an instinctive feeling of “I am controlling this”. Although research in Human-Computer Interaction (HCI) pursues the design of intuitive and responsive systems, most current studies have focused mainly on interaction techniques (e.g., software-hardware) and User Experience (UX) (e.g., comfort, usability), and very little has been investigated in terms of the SoA, i.e., the conscious experience of being in control of the interaction. In this thesis, we present an experimental exploration of the role of the SoA in interaction paradigms typical of HCI. After two chapters of introduction and related work, we describe a series of studies that explore how agency is implicated in interaction with systems through human senses such as vision, audio, touch and smell. Chapter 3 explores the SoA in mid-air haptic interaction through touchless actions. Chapter 4 then examines agency modulation through smell and its application to olfactory interfaces. Chapter 5 describes two novel timing techniques based on auditory and haptic cues that provide alternatives to the traditional Libet clock. Finally, we conclude with a discussion chapter that highlights the importance of our SoA during interactions with technology, as well as the implications of the results for the design of user interfaces.
Investigating Real-time Touchless Hand Interaction and Machine Learning Agents in Immersive Learning Environments
The recent surge in the adoption of new technologies and innovations in connectivity, interaction technology, and artificial realities can fundamentally change the digital world. eXtended Reality (XR), with its potential to bridge virtual and real environments, creates new possibilities to develop more engaging and productive learning experiences. Evidence is emerging that this sophisticated technology offers new ways to improve the learning process for better student interaction and engagement. Recently, immersive technology has garnered much attention as an interactive technology that facilitates direct interaction with virtual objects in the real world. Furthermore, these virtual objects can be surrogates for real-world teaching resources, allowing for virtual labs. Thus, XR could enable learning experiences that would not be possible in impoverished educational systems worldwide. Interestingly, concepts such as virtual hand interaction and techniques such as machine learning are still not widely investigated in immersive learning. Hand interaction technologies in virtual environments can support the kinesthetic learning pedagogical approach, and the need for their touchless nature has increased exceptionally in the post-COVID world. By implementing and evaluating real-time hand interaction technology for kinesthetic learning and machine learning agents for self-guided learning, this research has addressed these underutilized technologies to demonstrate the efficiency of immersive learning. This thesis has explored different hand-tracking APIs and devices to integrate real-time hand interaction techniques. These hand interaction techniques and integrated machine learning agents using reinforcement learning are evaluated with different display devices to test compatibility. The proposed approach aims to provide self-guided, more productive, and interactive learning experiences.
Further, this research has investigated ethics, privacy, and security issues in XR and covered the future of immersive learning in the Metaverse.
Using a Bayesian Framework to Develop 3D Gestural Input Systems Based on Expertise and Exposure in Anesthesia
Interactions with a keyboard and mouse fall short of human capabilities; what the technological revolution still lacks is a surge of new, natural ways of interacting with computers. In-air gestures are a promising input modality as they are expressive, natural, and both easy and quick to use. It is known that gestural systems should be developed within a particular context, as gesture choice is context-dependent; however, there is little research investigating other individual factors that may influence gesture choice, such as expertise and exposure. Anesthesia providers’ hands have been linked to bacterial transmission; therefore, this research investigates the context of gestural technology for anesthetic tasks. The objective of this research is to understand how expertise and exposure influence gestural behavior, and to develop Bayesian statistical models that can accurately predict how users would choose intuitive gestures in anesthesia based on expertise and exposure. Expertise and exposure may influence individuals’ gesture responses; however, there is little or no work investigating how these factors influence intuitive gesture choice, or how to use this information to predict intuitive gestures for system design. If researchers can capture users’ gesture variability within a particular context based on expertise and exposure, then statistical models can be developed to predict how users may gesturally respond to a computer system, and those predictions can be used to design a gestural system that anticipates a user’s response and thus affords intuitiveness to multiple user groups. This allows designers to understand the end user more completely and to implement intuitive gesture systems based on expected natural responses.
Ultimately, this dissertation seeks to investigate the human factors challenges associated with gestural system development within a specific context and to offer statistical approaches to understanding and predicting human behavior in a gestural system. Two experimental studies and two Bayesian analyses were completed in this dissertation. The first experimental study investigated the effect of expertise within the context of anesthesiology. Its main finding was that domain expertise is influential when developing 3D gestural systems, as novices and experts differ both in their intuitive gesture-function mappings and in their reaction times to generate an intuitive mapping. The second study investigated the effect of exposure when controlling a computer-based presentation and found a learning effect of gestural control: participants were significantly faster at generating intuitive mappings as they gained exposure to the system. The two Bayesian analyses took the form of Bayesian multinomial logistic regression models in which intuitive gesture choice was predicted from the contextual task and either expertise or exposure. The Bayesian analyses generated posterior predictive probabilities for all combinations of task, expertise level, and exposure level, and showed that gesture choice can be predicted to some degree. This work provides further insights into how 3D gestural input systems should be designed and how Bayesian statistics can be used to model human behavior.
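The core modeling idea above, predicting a categorical gesture choice from task and expertise with a multinomial logistic regression, can be sketched minimally. Everything below is illustrative and not the dissertation's actual model: the gesture classes, features, and data are hypothetical, and a MAP fit under a Gaussian prior (equivalent to ridge-penalized maximum likelihood) stands in for the full posterior inference a Bayesian analysis would use.

```python
import numpy as np

# Hypothetical setup: predict which of 3 gestures a user chooses
# from features [bias, task (0/1), expert (0/1)].
rng = np.random.default_rng(0)

X = np.array([[1, t, e] for t in (0, 1) for e in (0, 1)] * 25, dtype=float)
true_W = np.array([[0.0,  0.5, -0.5],     # (classes, features): toy
                   [1.0, -1.0,  0.0],     # "ground truth" used only to
                   [-0.5, 0.5,  1.0]])    # simulate choice data
logits = X @ true_W.T
p_true = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in p_true])

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    ez = np.exp(z)
    return ez / ez.sum(axis=1, keepdims=True)

# MAP fit: gradient descent on the negative log-posterior
# (multinomial cross-entropy + Gaussian prior with precision lam).
W = np.zeros((3, 3))
lam = 1.0
onehot = np.eye(3)[y]
for _ in range(2000):
    P = softmax(X @ W.T)
    grad = (P - onehot).T @ X / len(X) + lam * W / len(X)
    W -= 0.5 * grad

# Predictive probabilities for a novice vs. an expert on task 1,
# analogous to the per-group predictions described in the abstract.
novice = softmax(np.array([[1.0, 1.0, 0.0]]) @ W.T)[0]
expert = softmax(np.array([[1.0, 1.0, 1.0]]) @ W.T)[0]
print("novice:", novice.round(2), "expert:", expert.round(2))
```

A full Bayesian treatment would sample the posterior over `W` (e.g., with MCMC) and average the softmax over those samples to get genuine posterior predictive probabilities; the MAP point estimate here only approximates that.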
An investigation of mid-air gesture interaction for older adults
Older adults (60+) face a natural and gradual decline in cognitive, sensory and motor functions, which often underlies the difficulties older users face when interacting with computers. For that reason, the investigation and design of age-inclusive input methods for computer interaction is much needed and relevant in an ageing population. Advances in motion-sensing technologies and mid-air gesture interaction have reinvented how individuals can interact with computer interfaces, and this input modality is often deemed more “natural” and “intuitive” than traditional input devices such as the mouse. Although explored in gaming and entertainment, the suitability of mid-air gesture interaction for older users in particular is still little known. The purpose of this research is to investigate the potential of mid-air gesture interaction to facilitate computer use for older users, and to address the challenges that older adults may face when interacting with gestures in mid-air. This doctoral research is presented as a collection of papers that, together, develop the topic of ageing and computer interaction through mid-air gestures. The starting point for this research was to establish how older users differ from younger users and to focus on the challenges faced by older adults when using mid-air gesture interaction. Once these challenges were identified, this work aimed to explore a series of usability challenges and opportunities to further develop age-inclusive interfaces based on mid-air gesture interaction. Through a series of empirical studies, this research intends to provide recommendations for designing mid-air gesture interaction that better takes into consideration the needs and skills of the older population, and aims to contribute to the advance of age-friendly interfaces.
Challenges in passenger use of mixed reality headsets in cars and other transportation
This paper examines key challenges in supporting passenger use of augmented and virtual reality headsets in transit. These headsets will allow passengers to break free from the restraints of physical displays placed in constrained environments such as cars, trains and planes. Moreover, they have the potential to allow passengers to make better use of their time by making travel more productive and enjoyable, supporting both privacy and immersion. However, there are significant barriers to headset usage by passengers in transit contexts. These barriers range from impediments that would entirely prevent safe usage and function (e.g. motion sickness) to those that might impair adoption (e.g. social acceptability). We identify the key challenges that need to be overcome and discuss the resolutions and research required to facilitate adoption and realize the potential advantages of using mixed reality headsets in transit.