1,503 research outputs found
Breaking Virtual Barriers: Investigating Virtual Reality for Enhanced Educational Engagement
Virtual reality (VR) is an innovative technology that has regained popularity in recent years. In the field of education, VR has been introduced as a tool to enhance learning experiences. This thesis explores how VR is used from the perspectives of educators and learners. The research employed a mixed-methods approach, including surveying and interviewing educators and conducting empirical studies to examine engagement, usability, and user behaviour within VR. The results revealed that educators are interested in using VR for a wide range of scenarios, including thought exercises, virtual field trips, and simulations. However, they face several barriers to incorporating VR into their practice, such as cost, lack of training, and technical challenges. A subsequent study found that virtual reality can no longer be assumed to be more engaging than desktop equivalents: engagement levels were similar in both VR and non-VR environments, suggesting that the novelty effect of VR may be less pronounced than previously assumed. A study of a VR mind-mapping artifact, VERITAS, demonstrated that complex interactions are possible on low-cost VR devices, making VR accessible to educators and students. Analysis of user behaviour within this artifact showed that quantifiable strategies emerge, contributing to the understanding of how to design collaborative VR experiences. This thesis provides insights into how end-users in the education space perceive and use VR. The findings suggest that while educators are interested in using VR, they face barriers to adoption. The research highlights the need to design VR experiences with an understanding of existing pedagogy, experiences that are engaging and that apply careful thought to complex interactions, particularly for collaboration.
This research contributes to the understanding of the potential of VR in education and provides recommendations for educators and designers to enhance learning experiences using VR.
Shared task representation for human–robot collaborative navigation: the collaborative search case
© The Author(s) 2023. Recent research in Human-Robot Collaboration (HRC) has spread into many specialised sub-fields. Many show considerable advances, but the human-robot collaborative navigation (HRCN) field seems stuck on implicit collaboration settings, hypothetical or simulated task-allocation problems, shared autonomy, or having the human act as a manager. This work takes a step forward by presenting an end-to-end system capable of handling real-world human-robot collaborative navigation tasks. The system uses the Social Reward Sources (SRS) model, a knowledge representation that tackles task allocation and path planning simultaneously; proposes a multi-agent Monte Carlo Tree Search (MCTS) planner for human-robot teams; presents collaborative search as a testbed for HRCN; and studies the use of smartphones for communication in this setting. Detailed experiments demonstrate the viability of the approach, explore the collaboration roles adopted by the human-robot team, and test the acceptability and utility of different communication interface designs. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work was supported under the Spanish State Research Agency through the Maria de Maeztu Seal of Excellence to IRI (MDM-2016-0656) and the ROCOTRANSP project (PID2019-106702RB-C21 / AEI / 10.13039/501100011033), the European research grant TERRINet (H2020-INFRAIA-2017-1-730994), and by JST Moonshot R&D Grant Number JPMJMS2011-85. Peer reviewed. Postprint (published version).
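The abstract names a multi-agent MCTS planner but does not specify it; as a hedged illustration of the core MCTS loop it builds on (selection via UCB1, expansion, random rollout, backpropagation), the sketch below applies single-agent MCTS to a toy 1-D corridor-search task. The domain, constants, and parameters are invented for illustration and are not the paper's planner.

```python
import math
import random

# Hypothetical toy domain (not from the paper): an agent searches a 1-D
# corridor of N cells for a target cell, moving one cell left or right
# per step.
N, TARGET = 10, 7

class Node:
    """One node of the MCTS search tree."""
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}   # action -> Node
        self.visits = 0
        self.value = 0.0     # sum of rollout returns

def actions(pos):
    """Legal moves that stay inside the corridor."""
    return [a for a in (-1, 1) if 0 <= pos + a < N]

def rollout(pos, depth=20):
    """Random playout from `pos`; reward 1.0 at the target, minus a small step cost."""
    for step in range(depth):
        if pos == TARGET:
            return 1.0 - 0.01 * step
        pos += random.choice(actions(pos))
    return 0.0

def mcts(root_pos, iters=2000, c=1.4):
    root = Node(root_pos)
    for _ in range(iters):
        node = root
        # 1. Selection: follow UCB1 while the node is fully expanded.
        while node.children and all(a in node.children for a in actions(node.state)):
            parent = node
            node = max(parent.children.values(),
                       key=lambda n: n.value / n.visits
                       + c * math.sqrt(math.log(parent.visits) / n.visits))
        # 2. Expansion: add one untried action, if any remain.
        untried = [a for a in actions(node.state) if a not in node.children]
        if untried:
            a = random.choice(untried)
            node.children[a] = Node(node.state + a, node)
            node = node.children[a]
        # 3. Simulation: estimate the value of the new node.
        reward = rollout(node.state)
        # 4. Backpropagation: update statistics up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Final decision: the most-visited action at the root.
    return max(root.children, key=lambda a: root.children[a].visits)
```

A multi-agent variant such as the paper's would additionally plan over joint human-robot actions and fold the SRS reward model into the rollout, which this sketch does not attempt.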
A Taxonomy of Freehand Grasping Patterns in Virtual Reality
Grasping is the most natural and primary interaction paradigm people perform every day; it allows us to pick up and manipulate the objects around us, such as when drinking a cup of coffee or writing with a pen. Grasping has been highly explored in real environments, to understand and structure the way people grasp and interact with objects by presenting categories, models, and theories of grasping. Due to the complexity of the human hand, classifying grasping knowledge to provide meaningful insights is a challenging task, which has led researchers to develop grasp taxonomies that provide guidelines for emerging grasping work (such as in anthropology, robotics, and hand surgery) in a systematic way.
While this body of work exists for real grasping, how grasping transfers to virtual environments remains unexplored. The emerging development of robust hand-tracking sensors for virtual devices now allows the development of grasp models that enable VR to simulate real grasping interactions. However, present work has not yet explored the differences and nuances between virtual and real object grasping, which means that virtual systems building grasping models on real grasping knowledge may make assumptions, yet to be proven true or false, about the way users intuitively grasp and interact with virtual objects.
To address this, this thesis presents the first user elicitation studies to explore grasping patterns directly in VR. The first study presents the main similarities and differences between real and virtual object grasping; the second furthers this by exploring how virtual object shape influences grasping patterns; the third focuses on visual thermal cues and how they influence grasp metrics; and the fourth focuses on understanding other object characteristics, such as stability and complexity, and how they influence grasps in VR. To provide structured insights on grasping interactions in VR, the results are synthesized in the first VR Taxonomy of Grasp Types, developed following current methods for developing grasping and HCI taxonomies and then iterated to present an updated and more complete taxonomy.
Results show that users appear to mimic real grasping behaviour in VR; however, they also have difficulty estimating object size and generally use a lower variability of grasp types. The taxonomy shows that only five grasps account for the majority of grasp data in VR, which computer systems can exploit to achieve natural and intuitive interactions at lower computational cost. Further, findings show that virtual object characteristics such as shape, stability, and complexity, as well as visual cues for temperature, influence grasp metrics such as aperture, category, type, location, and dimension. These changes in grasping patterns, together with virtual object categorisation methods, can inform design decisions when developing intuitive interactions, virtual objects, and environments, taking a step forward towards natural grasping interaction in VR.
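The finding that a handful of grasp types dominate is what makes cheap runtime classification plausible. As a purely illustrative sketch (the feature names, thresholds, and labels below are invented for this listing and are not the thesis's taxonomy or metrics), a system might map a few tracked-hand features to a coarse grasp label with a shallow decision rule:

```python
def classify_grasp(aperture_cm, mean_finger_curl, thumb_opposed):
    """Map three hypothetical tracked-hand features to a coarse grasp label.

    aperture_cm      -- thumb-to-index distance in centimetres
    mean_finger_curl -- average finger flexion, 0.0 (open) to 1.0 (closed)
    thumb_opposed    -- whether the thumb opposes the fingers
    All thresholds and labels are invented for illustration.
    """
    if aperture_cm < 2.0 and thumb_opposed:
        return "precision-pinch"
    if mean_finger_curl > 0.8:
        return "power-grip" if thumb_opposed else "hook"
    if aperture_cm > 10.0:
        return "open-palm"
    return "tripod"
```

Because only a few branches are evaluated per frame, such a classifier costs almost nothing at runtime, which is the practical appeal of a compact taxonomy over a full hand-pose model.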
A Comprehensive Review of Data-Driven Co-Speech Gesture Generation
Gestures that accompany speech are an essential part of natural and efficient embodied human communication. The automatic generation of such co-speech gestures is a long-standing problem in computer animation and is considered an enabling technology in film, games, virtual social spaces, and for interaction with social robots. The problem is made challenging by the idiosyncratic and non-periodic nature of human co-speech gesture motion, and by the great diversity of communicative functions that gestures encompass. Gesture generation has seen surging interest recently, owing to the emergence of more and larger datasets of human gesture motion, combined with strides in deep-learning-based generative models that benefit from the growing availability of data. This review article summarizes co-speech gesture generation research, with a particular focus on deep generative models. First, we articulate the theory describing human gesticulation and how it complements speech. Next, we briefly discuss rule-based and classical statistical gesture synthesis, before delving into deep learning approaches. We employ the choice of input modalities as an organizing principle, examining systems that generate gestures from audio, text, and non-linguistic input. We also chronicle the evolution of the related training datasets in terms of size, diversity, motion quality, and collection method. Finally, we identify key research challenges in gesture generation, including data availability and quality; producing human-like motion; grounding the gesture in the co-occurring speech, in interaction with other speakers, and in the environment; performing gesture evaluation; and integration of gesture synthesis into applications. We highlight recent approaches to tackling the various key challenges, as well as the limitations of these approaches, and point toward areas of future development.
Comment: Accepted for EUROGRAPHICS 202
Exploring Compassion-Driven Interaction: Bridging Buddhist Theory and Contemplative Practice Through Arts-led Research-through-Design
Compassion cultivation focuses on developing a genuine concern for others and a willingness to alleviate their suffering. As understandings of the benefits of compassion cultivation on wellbeing have evolved, an increasing interest in designing technologies for this context has followed. However, while scientific research focuses on measuring and evaluating compassion, designerly understandings of compassion that inform human-computer interaction have been less explored.
We are currently confronted with huge global challenges, and our entanglement with technology brings paradoxes and existential tensions related to wellbeing and human flourishing. Viewing technologies as mediators of values and morality, human-computer interaction has a stake in shaping our possible futures. A shift in the field towards welcoming a plurality of worldviews invites opportunities to authentically integrate knowledge from ancient wisdom traditions into how and why we design. This research aims to advance understandings of compassion cultivation for designing technologies by developing novel research approaches inspired by Buddhist philosophy and practice.
This thesis draws upon an arts-led research-through-design approach and spiritual practice. The findings and insights from the studies contribute primarily to the areas of soma design, first-person research, and design for wellbeing. The main contributions to knowledge are design guidelines emerging from three case studies: Understanding Tonglen, Wish Happiness, and Inner Suchness, comprising one autoethnography and two concept-driven design artefacts for public exhibition. While in the act of researching, the contemplative practitioner-researcher, a research persona, emerged to support authentic engagement and embodied understandings of the dynamic unfolding processes of the practice. A contemplative framework to train self-observation and the concept of the designerly gaze were developed to help investigate the phenomenon.
Chimpanzee (Pan troglodytes) cognitive mechanisms for joint action and virtual environment navigation
Chimpanzees have demonstrated across several experimental studies and field observations that they can successfully work together. The cognitive mechanisms that chimpanzees employ for joint action, however, remain unclear. A key component of human coordination is the ability to represent not only one’s own role, but also the role of a partner. In the first two studies presented, I report evidence that chimpanzees may also represent a partner’s actions during joint action. First, I present evidence that chimpanzees accommodate an experimenter’s actions when passing an object, possibly incorporating another’s actions into their own action plans. Second, I present evidence that chimpanzees learn about a partner’s actions, which may facilitate their ability to produce those actions themselves in a partial role-reversal task. Another open question about chimpanzee joint action is the motivation behind choosing to work together or alone. To investigate whether physical effort influences chimpanzees’ apparatus choices, I present evidence from a task in which chimpanzees chose between high- and low-effort puzzle-box apparatuses. Chimpanzees showed no preference for either apparatus. There is also a spatial component to joint action, and how the action space is represented may affect perspective taking and how others’ actions are represented. In the final experiment, I examined chimpanzees’ spatial frames of reference in a virtual environment task. The results showed that some subjects used a simple landmark as an allocentric cue, but not more distal landmarks. Learning how chimpanzees represent virtual spaces, and whether they can conceive of alternative perspectives, is an important first step towards virtual cooperative games with captive primates.
The results of this thesis suggest that chimpanzees understand the role of their partner during joint action, may not reduce their own effort, are sometimes able to use simple virtual landmarks, and can find out-of-sight food in a virtual environment.
Moving usable security research out of the lab: evaluating the use of VR studies for real-world authentication research
Empirical evaluations of real-world research artefacts that derive results from observations and experiments are a core aspect of usable security research. Expert interviews conducted as part of this thesis revealed that the costs associated with developing and maintaining physical research artefacts often amplify the challenges of human-centred usability and security research. On top of that, ethical and legal barriers often make usability and security research in the field infeasible. Researchers have begun simulating real-life conditions in the lab to improve ecological validity; however, studies of this type are still restricted to what can be replicated in physical laboratory settings. Furthermore, subjects for user studies evaluating hardware prototypes have historically been recruited mainly from local areas. The human-centred research communities have recognised and partially addressed these challenges using online studies, such as surveys, that allow for the recruitment of large and diverse samples as well as learning about user behaviour. However, human-centred security research involving hardware prototypes is often concerned with human factors and their impact on the prototypes’ usability and security, which cannot be studied using traditional online surveys.
To work towards addressing the current challenges and facilitating research in this space, this thesis explores if – and how – virtual reality (VR) studies can be used for real-world usability and security research. It first validates the feasibility and then demonstrates the use of VR studies for human-centred usability and security research through six empirical studies, including remote and lab VR studies as well as video prototypes as part of online surveys.
It was found that VR-based usability and security evaluations of authentication prototypes, where users provide touch, mid-air, and eye-gaze input, largely match the findings from the original real-world evaluations. This thesis further investigated the effectiveness of VR studies by exploring three core topics in the authentication domain. First, the challenges around in-the-wild shoulder surfing studies were addressed: two novel VR shoulder surfing methods were implemented to contribute towards realistic shoulder surfing research and to explore the use of VR studies for security evaluations. This was found to allow researchers to bridge the methodological gap between lab and field studies. Second, the ethical and legal barriers to conducting in situ usability research on authentication systems were addressed. It was found that VR studies can represent plausible authentication environments and that a prototype’s in situ usability evaluation results deviate from traditional lab evaluations. Finally, this thesis contributes a novel evaluation method for remotely studying interactive VR replicas of real-world prototypes, allowing researchers to move experiments that involve hardware prototypes out of physical laboratories and potentially increase a sample’s diversity and size.
The thesis concludes by discussing the implications of using VR studies for prototype usability and security evaluations. It lays the foundation for establishing VR studies as a powerful, well-evaluated research method and sets out their methodological advantages and disadvantages.
Hand interaction designs in mixed and augmented reality head mounted display: a scoping review and classification
Mixed reality took its first step towards democratization in 2017 with the launch of a first generation of commercial devices. As a new medium, one of its challenges is to develop interactions that use its endowed spatial awareness and body tracking. More specifically, at the crossroads of artificial intelligence and human-computer interaction, the goal is to go beyond the Window, Icon, Menu, Pointer (WIMP) paradigm humans mainly use on desktop computers. Hand interactions, either as a standalone modality or as a component of a multimodal interface, are among the most popular and best-supported techniques across mixed reality prototypes and commercial devices. In this context, this paper presents a scoping literature review of hand interactions in mixed reality. The goal of this review is to identify recent findings on the design of hand interactions and on the place of artificial intelligence in their development and behavior. The review highlights the main interaction techniques and their technical requirements between 2017 and 2022, and presents the Metaphor-behavior taxonomy to classify those interactions.
New interactive interface design for STEM museums: a case study in VR immersive technology
Novel technologies are used to develop new museum exhibits, aiming to attract visitors’ attention. However, using new technology is not always successful, perhaps because the design of a new exhibit was inappropriate or because users were unfamiliar with interacting with a new device. As a result, choosing an appropriate technology to create a unique interactive display is critical. Following technology best practices helps the designer reduce failures.
This research uses virtual reality (VR) immersive technology as a case study to explore how to design a new interactive exhibit for science, technology, engineering and mathematics (STEM) museums. VR has seen increased use in Thai museums, but people are unfamiliar with it and few use it daily. VR also raises health concerns, such as motion sickness, and the virtual reality head-mounted display (VR HMD) restricts social interaction, which is essential for museum visitors. This research focuses on improving how VR is deployed in STEM museums by proposing a framework for designing a new VR exhibit that supports social interaction. The research question is: how do we create a new interactive display using VR immersive technology while supporting visitor social interaction? The investigation uses mixed methods to construct the proposed framework, including a theoretical review, a museum observational study, and an experimental study. An in-the-wild study and a workshop were conducted to evaluate the proposed framework.
The suggested framework provides guidelines for designing a new VR exhibit. The framework has two main parts. The first covers factors for assessing whether VR technology suits a proposed exhibit. The second comprises the essential components for designing a new VR exhibit: Content Design, Action Design, Social Interaction Design, System Design, and Safety and Health.
Various studies were conducted to answer the research question. First, a museum observational study led to an understanding of the characteristics of interactive exhibits in STEM museums, the patterns of social interaction, the range of immersive technology that museums use, and the practice of using VR technology in STEM museums. Next, a study of alternative designs for an interactive exhibit investigated the effect of tangible, gesture, and VR technologies on the user experience. It determined the factors that make the user experience differ and suggested six aspects to consider when choosing technology.
Third, a study of social interaction design in VR for museums explored methods to connect players (single player, symmetric connection between two VR HMDs, and asymmetric connection between a VR HMD and a PC) to provide social interaction while playing the VR exhibit, and investigated social features and social mechanics for visitors to communicate and exchange knowledge. It found that the symmetric connection provides better social interaction than the others; however, the asymmetric connection is also a way for visitors to exchange knowledge. The study recommends mixing symmetric and asymmetric connections when deploying VR exhibits in a museum. This was confirmed by the in-the-wild research, which validated the framework and indicated that it helped staff manage the VR exhibit and provided a co-presence and co-player experience. Fourth, a study of the content design of a display in the virtual environment examined the effect of 2D versus 3D content on visitors' learning and memory. It showed that 2D and 3D content design did not influence how well visitors gained knowledge and remembered the exhibit’s story; however, the 3D view offers more immersion and emotion than the 2D view. The research proposes using 3D content to evoke a player’s emotion: content for a VR exhibit should deliver experience rather than text-based learning. Furthermore, the qualitative feedback from each study provided insight into designing the user experience.
Evaluation of the proposed framework is the last part of this research. An in-the-wild study was conducted to validate the proposed framework in museums. Two VR exhibits were adjusted to match the components suggested by the proposed framework and were deployed in the museum to gather visitors' feedback. The exhibits received positive feedback, and visitors approved of using VR technology in the museum. User feedback from a workshop evaluating the helpfulness of the framework showed that the framework's components are appropriate and that the framework is practical when designing a new VR exhibit, particularly for people unfamiliar with VR technology. In addition, the proposed framework may be applied to emerging technologies to create novel exhibits.