167 research outputs found

    A Review of Multimodal Interaction Technique in Augmented Reality Environment

    Augmented Reality (AR) supports several types of interaction techniques, such as 3D interaction, natural interaction, tangible interaction, spatial-awareness interaction, and multimodal interaction. Interaction in AR typically relies on a unimodal technique, which allows the user to interact with AR content through only one modality, such as gesture, speech, or touch. The combination of more than one modality is called multimodal interaction. Multimodal interaction can make human-computer interaction more efficient and can improve the user experience, because many issues arise when users rely on a single modality in an AR environment, such as the "fat finger" problem. Recent research shows that multimodal interfaces (MMIs) have been explored in AR environments and applied across various domains. This paper presents an empirical study of key aspects and issues in multimodal interaction for augmented reality, covering interaction techniques and system frameworks. We address two questions: which interaction techniques have been used to perform multimodal interaction in AR environments, and which integrated components are applied in multimodal AR frameworks. Analysing these two questions to identify trends in the multimodal field is the main contribution of this paper. We found that gesture, speech, and touch are the modalities most frequently used to manipulate virtual objects. Most of the integrated components in MMI AR frameworks are discussed only at the level of the framework's conceptual components or the information-centred design between components. Finally, we conclude by suggesting directions for future work in this field.
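The gesture-plus-speech combination the review identifies can be illustrated with a minimal "put-that-there"-style fusion sketch: pair each spoken command with the object most recently indicated by a pointing gesture. The event names, object ids, and the time window below are assumptions for illustration, not details from the paper.

```python
from dataclasses import dataclass

@dataclass
class Event:
    modality: str   # "speech" or "gesture" (illustrative labels)
    payload: str    # spoken verb, or id of the pointed-at virtual object
    t: float        # timestamp in seconds

def fuse(events, window=1.5):
    """Pair each speech command with the nearest-in-time gesture target."""
    commands = []
    gestures = [e for e in events if e.modality == "gesture"]
    for s in (e for e in events if e.modality == "speech"):
        near = [g for g in gestures if abs(g.t - s.t) <= window]
        if near:
            target = min(near, key=lambda g: abs(g.t - s.t))
            commands.append((s.payload, target.payload))
    return commands

events = [
    Event("gesture", "chair_3", 0.2),
    Event("speech", "rotate", 0.9),
    Event("gesture", "table_1", 4.0),
    Event("speech", "delete", 4.3),
]
print(fuse(events))  # [('rotate', 'chair_3'), ('delete', 'table_1')]
```

Real MMI frameworks fuse modalities with richer temporal and semantic models; this time-window pairing only sketches the basic idea of integrating two input streams.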

    Understanding Context to Capture when Reconstructing Meaningful Spaces for Remote Instruction and Connecting in XR

    Recent technological advances are enabling HCI researchers to explore interaction possibilities for remote XR collaboration using high-fidelity reconstructions of physical activity spaces. However, creating these reconstructions often lacks user involvement, with an overt focus on capturing sensory context that does not necessarily augment an informal social experience. This work seeks to understand the social context that can be important to capture when reconstructing spaces for XR applications in informal instructional scenarios. Our study involved the evaluation of an XR remote-guidance prototype by 8 intergenerational groups of closely related gardeners, using reconstructions of personally meaningful spaces in their gardens. Our findings contextualize physical objects and areas with various motivations related to gardening, and detail perceptions of XR that might affect the use of reconstructions for remote interaction. We discuss implications for user involvement in creating reconstructions that better translate real-world experience, encourage reflection, incorporate privacy considerations, and preserve shared experiences with XR as a medium for informal intergenerational activities.

    Comment: 26 pages, 5 figures, 4 tables

    Is mobile-game based learning effective for international adults learning Maltese?

    The EULALIA (Enhancing University Language courses with an App powered by game-based learning and tangible user interface activities) project aimed to enhance the learning methodologies of four university language courses for Erasmus students in Italy, Malta, Poland, and Spain by developing innovative and effective learning tools based on mobile and game-based learning paradigms and the use of tangible user interfaces. This study focuses on Malta, providing an in-depth view of the impact of game-based applications on international adults learning Maltese as a second language (ML2). The findings encourage international adult students to learn ML2 through a game-based application as an aid to increasing cultural awareness and improving communication with locals. The methodology used pre- and post-surveys on a test group comprising 28 pre-surveyed and 9 post-surveyed ML2 adult learners who used the app, and a reference group of 24 pre-surveyed and 23 post-surveyed ML2 learners who did not. According to the participants, game-based learning did not improve cognitive function, even though the learners were more engaged in language activities and could therefore process and absorb a wider range of information. The research found that game-based learning did not have a statistically significant effect on adult learners' language proficiency or digital skills. (Peer-reviewed)

    A Scoping Review on Tangible and Spatial Awareness Interaction Technique in Mobile Augmented Reality-Authoring Tool in Kitchen

    The interaction paradigm has changed with emerging technologies: mobile augmented reality and spatially aware mobile devices. The traditional way of designing a kitchen can cause mistakes in measurement, and the user's mental image often differs from the kitchen advisor's sketch design due to the limitations of human imagination. Using mobile augmented reality technology, the user can overlay the virtual kitchen design so that it fits the actual kitchen environment. An interaction technique is required to allow the user to change the characteristics of the virtual kitchen to suit their needs. Thus, this paper proposes tangible and spatial-awareness interaction techniques for a kitchen-design authoring tool. A scoping review of previous research on tangible and spatial-awareness interaction is presented. The proposed techniques are based on this review of existing interaction techniques and on interview sessions with Malaysian kitchen designers, conducted to understand kitchen design elements and workflow. The proposed techniques will be further refined through heuristic evaluation with augmented reality experts.

    Accessibility of Health Data Representations for Older Adults: Challenges and Opportunities for Design

    Health data from consumer off-the-shelf wearable devices are often conveyed to users through visual data representations and analyses. However, these are not always accessible to people with disabilities or to older people due to low vision, cognitive impairments, or literacy issues. Because of trade-offs between aesthetic predominance and information overload, real-time user feedback may not be conveyed easily from sensor devices through visual cues such as graphs and text. These difficulties may hinder understanding of critical data. Additional auditory and tactile feedback can provide immediate and accessible cues from these wearable devices, but it is first necessary to understand the limitations of existing data representations. To avoid greater cognitive and visual overload, auditory and haptic cues can be designed to complement, replace, or reinforce visual cues. In this paper, we outline the challenges in existing data representations and the evidence needed to enhance the accessibility of health information from personal sensing devices used to monitor parameters such as blood pressure, sleep, activity, and heart rate. With innovative and inclusive user feedback, users will be more likely to engage and interact with new devices and their own data.

    Designing physical-digital artefacts for the public realm

    The exploration of new types of everyday interactions enabled by the increasing integration of digital technologies with the physical world is a major research direction for interaction design research (Dourish, 2004), and a focus on materials and materiality is also of growing significance, e.g.: Internet of Things; interactive architecture; the intersection of craft and technology. Increasingly, designer-researchers from a range of material-focused creative design disciplines are starting to address these themes. Previous studies indicate that new approaches, methods and concepts are required to investigate the evolving field of physical-digital synthesis in the built environment. Addressing this, the thesis asks one central question: What resources for design research can help practitioners and researchers from multiple creative design disciplines improve the design of physical-digital artefacts located in the public realm? A detailed Scoping Study explored experimental research methods for this thesis and produced an overview of physical-digital artefacts in outdoor public space. This scoping influenced the subsequent research: an in-depth field study of the design culture and practices of fifty material-focused designer-researchers; four case studies of physical-digital artefacts in outdoor public spaces; a formative creative design workshop with fourteen participants to test the findings from the research. The chief contribution of this thesis to interaction design research is the development of two resources for design research (the Experiential Framework and the Conceptual Materials for Design Research) and the practical application of these new tools as a method for design research in a simulated ‘real-world’ creative workshop setting. Both resources are intended to co-exist and be integrated with established design research methods and emerging approaches. 
Hence, the outputs from this thesis are intended to support designer-researchers from a range of creative design backgrounds in conceptualising and designing physical-digital artefacts for urban outdoor public spaces that provide richer interaction paradigms for future city dwellers.

    Augmented Reality Interfaces for Procedural Tasks

    Procedural tasks involve people performing established sequences of activities while interacting with objects in the physical environment to accomplish particular goals. These tasks span almost all aspects of human life and vary greatly in their complexity. For some simple tasks, little cognitive assistance is required beyond an initial learning session in which a person follows one-time compact directions, or even intuition, to master a sequence of activities. In the case of complex tasks, procedural assistance may be continually required, even for the most experienced users. Approaches for rendering this assistance employ a wide range of written, audible, and computer-based technologies. This dissertation explores an approach in which procedural task assistance is rendered using augmented reality. Augmented reality integrates virtual content with a user's natural view of the environment, combining real and virtual objects interactively, and aligning them with each other. Our thesis is that an augmented reality interface can allow individuals to perform procedural tasks more quickly while exerting less effort and making fewer errors than other forms of assistance. This thesis is supported by several significant contributions yielded during the exploration of the following research themes: What aspects of AR are applicable and beneficial to the procedural task problem? In answering this question, we developed two prototype AR interfaces that improve procedural task accomplishment. The first prototype was designed to assist mechanics carrying out maintenance procedures under field conditions. An evaluation involving professional mechanics showed our prototype reduced the time required to locate procedural tasks and resulted in fewer head movements while transitioning between tasks. Following up on this work, we constructed another prototype that focuses on providing assistance in the underexplored psychomotor phases of procedural tasks. 
This prototype presents dynamic and prescriptive forms of instruction and was evaluated using a demanding and realistic alignment task. This evaluation revealed that the AR prototype allowed participants to complete the alignment more quickly and accurately than when using an enhanced version of currently employed documentation systems. How does the user interact with an AR application assisting with procedural tasks? The application of AR to the procedural task problem poses unique user interaction challenges. To meet these challenges, we present and evaluate a novel class of user interfaces that leverage naturally occurring and otherwise unused affordances in the native environment to provide a tangible user interface for augmented reality applications. This class of techniques, which we call Opportunistic Controls, combines hand gestures, overlaid virtual widgets, and passive haptics to form an interface that proved effective and intuitive during quantitative evaluation. Our evaluation of these techniques includes a qualitative exploration of various preferences and heuristics for Opportunistic Control-based designs.
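The Opportunistic Controls idea above can be sketched minimally: overlay a virtual widget on an unused physical feature (say, a raised bolt head) so that touching it yields passive haptic feedback. The positions, radius, and callback API below are assumptions for illustration, not the dissertation's actual implementation.

```python
import math

class OpportunisticControl:
    """A virtual button anchored to a physical affordance in the workspace."""
    def __init__(self, name, position, radius, on_press):
        self.name = name
        self.position = position  # 3D point of the physical affordance (metres)
        self.radius = radius      # touch tolerance around the affordance
        self.on_press = on_press  # callback fired on a press

    def update(self, fingertip):
        """Fire the callback when the tracked fingertip enters the touch zone."""
        if math.dist(fingertip, self.position) <= self.radius:
            self.on_press(self.name)
            return True
        return False

pressed = []
# Hypothetical control: a "next step" button on a bolt head at (0.30, 0.10, 0.05).
bolt_button = OpportunisticControl("next_step", (0.30, 0.10, 0.05), 0.02,
                                   pressed.append)
bolt_button.update((0.50, 0.10, 0.05))   # fingertip far away: no press
bolt_button.update((0.305, 0.10, 0.05))  # fingertip on the bolt: press fires
print(pressed)  # ['next_step']
```

A real system would add hand tracking, debouncing, and hysteresis so a lingering fingertip does not fire repeatedly; this sketch shows only the core affordance-to-widget mapping.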

    Improving command selection in smart environments by exploiting spatial constancy

    With a steadily increasing number of digital devices, our environments are becoming smarter: we can now use our tablets to control our TV, access our recipe database while cooking, and remotely turn lights on and off. Currently, this Human-Environment Interaction (HEI) is limited to in-place interfaces, where people have to walk up to a mounted set of switches and buttons, and navigation-based interaction, where people have to navigate on-screen menus, for example on a smartphone, tablet, or TV screen. Unfortunately, there are numerous scenarios in which neither of these two interaction paradigms provides fast and convenient access to digital artifacts and system commands. People, for example, might not want to touch an interaction device because their hands are dirty from cooking: they want device-free interaction. Or people might not want to look at a screen because it would interrupt their current task: they want system-feedback-free interaction. Currently, there is no interaction paradigm for smart environments that allows for these kinds of interactions. In my dissertation, I introduce Room-based Interaction to solve this problem of HEI. With room-based interaction, people associate digital artifacts and system commands with real-world objects in the environment and point toward these real-world proxy objects to select the associated digital artifact. The design of room-based interaction is informed by a theoretical analysis of navigation- and pointing-based selection techniques, in which I investigated the cognitive systems involved in executing a selection.
An evaluation of room-based interaction in three user studies and a comparison with existing HEI techniques revealed that room-based interaction solves many shortcomings of existing HEI techniques: the use of real-world proxy objects makes it easy for people to learn the interaction technique and to perform accurate pointing gestures, and it allows for system-feedback-free interaction; the use of the environment as a flat input space makes selections fast; and the use of mid-air full-arm pointing gestures allows for device-free interaction and increases awareness of others' interactions with the environment. Overall, I present an alternative selection paradigm for smart environments that is superior to existing techniques in many common HEI scenarios. This new paradigm can make HEI more user-friendly, broaden the use cases of smart environments, and increase their acceptance among average users.
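The core selection step of room-based interaction, resolving a pointing gesture to the nearest real-world proxy object, can be sketched as an angular nearest-neighbour test. The proxy names, positions, commands, and the angular threshold below are illustrative assumptions, not values from the dissertation.

```python
import math

# Hypothetical proxy registry: real-world object -> (3D position, bound command).
PROXIES = {
    "lamp":    ((1.0, 2.0, 0.5), "lights/toggle"),
    "speaker": ((-2.0, 1.5, 1.0), "music/play_pause"),
    "window":  ((0.0, 3.0, 1.2), "blinds/toggle"),
}

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def select_command(origin, direction, max_angle_deg=15.0):
    """Return the command of the proxy closest to the pointing ray,
    or None if nothing lies within the angular threshold."""
    direction = _normalize(direction)
    best, best_angle = None, max_angle_deg
    for name, (pos, command) in PROXIES.items():
        to_proxy = _normalize(tuple(p - o for p, o in zip(pos, origin)))
        cos_a = max(-1.0, min(1.0, sum(a * b for a, b in zip(direction, to_proxy))))
        angle = math.degrees(math.acos(cos_a))
        if angle < best_angle:
            best, best_angle = command, angle
    return best

# Pointing from the room's origin straight at the lamp selects its command.
print(select_command((0.0, 0.0, 0.0), (1.0, 2.0, 0.5)))  # lights/toggle
```

The angular threshold plays the role of pointing tolerance: the dissertation's finding that proxy objects support accurate pointing suggests such a cone can be fairly narrow in practice, though the 15-degree value here is only a placeholder.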

    Virtual Reality

    At present, virtual reality is influencing information organization and management and even changing the design principles of information systems, making them adapt to application requirements. This book aims to provide a broader perspective on the development and application of virtual reality. The first part, "Virtual Reality Visualization and Vision", covers new developments in virtual reality visualization of 3D scenarios, virtual reality and vision, and high-fidelity immersive virtual reality, including tracking, rendering, and display subsystems. The second part, "Virtual Reality in Robot Technology", presents applications of virtual reality in a remote rehabilitation-robot-based evaluation method and in adaptive walking of multi-legged robots on unstructured terrain. The third part, "Industrial and Construction Applications", addresses product design, the space industry, building information modeling, and construction and maintenance using virtual reality. The last part, "Culture and Life of Human", describes applications in cultural life and multimedia technology.