4,519 research outputs found

    Haptically enabled interactivity and immersive virtual assembly

    Full text link
    Virtual training systems are attracting paramount attention from manufacturing industries due to their potential advantages over conventional training practices for tasks such as general assembly. Within this virtual training realm for general assembly, a haptically enabled interactive and immersive virtual reality (HIVEx) system is presented. The idea is to imitate real assembly training scenarios by providing comprehensive user interaction and by enforcing physical constraints within the virtual environment through the use of haptics technology. The developed system employs a modular approach, providing flexibility of reconfiguration and scalability as well as better utilization of current multi-core computer architectures. The user interacts with the system using a haptics device and a data glove while fully immersed in the virtual environment with depth perception. An evaluation module, incorporated into the system, automatically logs and evaluates information throughout the simulation, reporting user performance and improvement over time. A ruggedized portable version of the system, with full system capabilities, is also developed and presented, allowing easy relocation between different factory environments. A number of training scenarios have been developed with varying degrees of complexity to exploit the potential of the presented system. The presented system can be employed for teaching and training of existing assembly processes as well as for the design of new optimised assembly operations. Furthermore, it can assist in optimizing existing practices by evaluating the effectiveness and the level of knowledge transfer involved in the process. Within the aforementioned conceptual framework, a working prototype is developed.
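The evaluation module is not specified in detail in this abstract; as an illustrative sketch only, automatic logging of assembly steps with a simple per-session performance summary might look as follows (all class, method, and step names are hypothetical, not taken from the HIVEx system):

```python
import time

class AssemblySessionLog:
    """Minimal sketch of an evaluation module that logs assembly steps
    and reports per-session performance (all names are hypothetical)."""

    def __init__(self):
        self.events = []  # (step_name, timestamp, error_flag)

    def record(self, step, error=False, timestamp=None):
        # Timestamp defaults to wall-clock time; an explicit value can be
        # passed in for testing or replaying recorded sessions.
        self.events.append(
            (step, timestamp if timestamp is not None else time.time(), error)
        )

    def summary(self):
        # Aggregate step count, error count, and total duration.
        if not self.events:
            return {"steps": 0, "errors": 0, "duration": 0.0}
        times = [t for _, t, _ in self.events]
        return {
            "steps": len(self.events),
            "errors": sum(1 for *_, err in self.events if err),
            "duration": max(times) - min(times),
        }

log = AssemblySessionLog()
log.record("pick_bolt", timestamp=0.0)
log.record("align_bolt", error=True, timestamp=2.5)
log.record("fasten_bolt", timestamp=6.0)
print(log.summary())  # {'steps': 3, 'errors': 1, 'duration': 6.0}
```

Comparing such summaries across repeated sessions would show improvement over time, which is the kind of longitudinal evaluation the abstract describes.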

    Haptically enabled interactive virtual assembly training system development and evaluation

    Full text link
    Virtual training systems are attracting paramount attention from manufacturing industries due to their potential advantages over conventional training practices. Significant cost savings can be realized due to shorter development times for different training scenarios as well as reuse of existing engineering (math) models. In addition, the use of computer-based virtual reality (VR) training systems can shorten the time span from computer-aided product design to commercial production due to non-reliance on hardware parts for training. Within the aforementioned conceptual framework, a haptically enabled interactive and immersive virtual reality (HIVEx) system is presented. Unlike existing VR systems, the presented idea tries to imitate real physical training scenarios by providing comprehensive user interaction, constrained within the physical limitations of the real world. These physical constraints are imposed by the haptics devices in the virtual environment. As a result, in contrast to existing VR systems, which are generally capable of providing knowledge about assembly sequences only, the proposed system also helps in cognitive learning and procedural skill development due to its highly interactive physical nature.

    Agents for educational games and simulations

    Get PDF
    This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulations (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.

    Tangible user interfaces : past, present and future directions

    Get PDF
    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real, non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from the cognitive sciences, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    Intelligent tutoring in virtual reality for highly dynamic pedestrian safety training

    Get PDF
    This thesis presents the design, implementation, and evaluation of an Intelligent Tutoring System (ITS) with a Virtual Reality (VR) interface for child pedestrian safety training. This system enables children to train practical skills in a safe and realistic virtual environment without the time and space dependencies of traditional roadside training. The system also employs Domain and Student Modelling techniques to analyze user data during training automatically and to provide appropriate instructions and feedback, greatly reducing the traditional requirement of constant monitoring by teaching personnel. Compared to previous work, especially this second aspect is a principal novelty for the domain. To achieve this, a novel Domain and Student Modelling method was developed, in addition to a modular and extensible virtual environment for the target domain. While the Domain and Student Modelling framework is designed to handle the highly dynamic nature of training in traffic and the ill-defined characteristics of pedestrian tasks, the modular virtual environment supports different interaction methods and a simple and efficient way to create and adapt exercises. The thesis is complemented by two user studies with elementary school children. These studies attest to great overall user acceptance and the system's potential for improving key pedestrian skills through autonomous learning. Last but not least, the thesis presents experiments with different forms of VR input and provides directions for future work.
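The abstract does not specify how the Student Modelling component updates its estimate of a learner's skill. As an illustration only (not the thesis's actual method), Bayesian Knowledge Tracing (BKT) is one common student-modelling technique; a minimal sketch with hypothetical parameter values:

```python
def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """One Bayesian Knowledge Tracing step: update the probability that
    the student has mastered a skill after one observed attempt.
    slip/guess/learn values here are illustrative, not calibrated."""
    if correct:
        # P(mastered | correct answer), via Bayes' rule
        cond = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        # P(mastered | incorrect answer)
        cond = p_know * slip / (p_know * slip + (1 - p_know) * (1 - slip))
    # Account for the chance of learning during this attempt
    return cond + (1 - cond) * learn

# Track mastery of a hypothetical skill, e.g. "look both ways before crossing"
p = 0.3
for observed_correct in [True, True, False, True]:
    p = bkt_update(p, observed_correct)
```

An ITS would use such a mastery estimate to decide when to give feedback and which exercise to present next.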

    User-based gesture vocabulary for form creation during a product design process

    Get PDF
    There are inconsistencies between the nature of conceptual design and the functionalities of the computational systems supporting it, which disrupt the designers’ process by focusing on technology rather than on designers’ needs. A need was identified for the elicitation of hand gestures appropriate for the requirements of conceptual design, rather than gestures chosen arbitrarily or for ease of implementation. The aim of this thesis is to identify natural and intuitive hand gestures for conceptual design, performed by designers (3rd and 4th year product design engineering students and recent graduates) working on their own, without instruction and without limitations imposed by the facilitating technology. This was done via a user-centred study with 44 participants, in which 1785 gestures were collected. Gestures were explored as the sole means for shape creation and manipulation in virtual 3D space. Gestures were identified, described in writing, sketched, coded based on the taxonomy used, categorised based on hand form and the path travelled, and variants identified. They were then statistically analysed to ascertain agreement rates between the participants, the significance of the agreement, and the likelihood of the number of repetitions in each category occurring by chance. The most frequently used and statistically significant gestures formed the consensus vocabulary for conceptual design. The effect of the shape of the manipulated object on the gesture performed was also observed, as was whether the sequence of gestures participants proposed differed from established CAD solid modelling practices. The vocabulary was evaluated by non-designer participants, both theoretically and in the VR environment, and the outcomes showed that the majority of gestures were appropriate and easy to perform. Participants selected their preferred gestures for each activity, and a variant of the vocabulary for conceptual design was created as an outcome that aims to ensure extensive training is not required, extending the ability to design beyond trained designers only.
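The abstract does not state which agreement measure was used; a common choice in gesture elicitation studies is the agreement rate of Vatavu and Wobbrock (2015), sketched here as an illustration with hypothetical gesture labels:

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR for one referent (Vatavu & Wobbrock, 2015):
    AR = |P|/(|P|-1) * sum((|Pi|/|P|)**2) - 1/(|P|-1),
    where P is the set of all proposals for the referent and the Pi are
    groups of identical proposals."""
    n = len(proposals)
    if n < 2:
        return 0.0
    groups = Counter(proposals)
    s = sum((count / n) ** 2 for count in groups.values())
    return n / (n - 1) * s - 1 / (n - 1)

# Example: 5 participants propose gestures for "extrude a face";
# 4 propose a pinch-and-pull gesture, 1 proposes a grab gesture.
print(agreement_rate(["pinch", "pinch", "pinch", "pinch", "grab"]))  # ≈ 0.6
```

Referents with high agreement rates are the ones that would enter a consensus vocabulary like the one described above.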

    Directional adposition use in English, Swedish and Finnish

    Get PDF
    Directional adpositions such as to the left of describe where a Figure is in relation to a Ground. English and Swedish directional adpositions refer to the location of a Figure in relation to a Ground, whether both are static or in motion. In contrast, the Finnish directional adpositions edellĂ€ (in front of) and jĂ€ljessĂ€ (behind) solely describe the location of a moving Figure in relation to a moving Ground (Nikanne, 2003). When using directional adpositions, a frame of reference must be assumed to interpret their meaning. For example, the meaning of to the left of in English can be based on a relative (speaker- or listener-based) reference frame or an intrinsic (object-based) reference frame (Levinson, 1996). When a Figure and a Ground are both in motion, it is possible for a Figure to be described as being behind or in front of the Ground even if neither has intrinsic features. As shown by Walker (in preparation), there are good reasons to assume that in the latter case a motion-based reference frame is involved. This means that if Finnish speakers were to use edellĂ€ (in front of) and jĂ€ljessĂ€ (behind) more frequently in situations where both the Figure and Ground are in motion, a difference in reference frame use between Finnish on the one hand and English and Swedish on the other could be expected. We asked native English, Swedish and Finnish speakers to select adpositions from a language-specific list to describe the location of a Figure relative to a Ground when both were shown to be moving on a computer screen. We were interested in any differences between Finnish, English and Swedish speakers. All languages showed a predominant use of directional spatial adpositions referring to the lexical concepts TO THE LEFT OF, TO THE RIGHT OF, ABOVE and BELOW. There were no differences between the languages in directional adposition use or reference frame use, including reference frame use based on motion. 
    We conclude that despite differences in the grammars of the languages involved, and potential differences in reference frame system use, the three languages investigated encode Figure location in relation to Ground location in a similar way when both are in motion. Levinson, S. C. (1996). Frames of reference and Molyneux's question: Crosslinguistic evidence. In P. Bloom, M. A. Peterson, L. Nadel & M. F. Garrett (Eds.), Language and Space (pp. 109-170). Massachusetts: MIT Press. Nikanne, U. (2003). How Finnish postpositions see the axis system. In E. van der Zee & J. Slack (Eds.), Representing direction in language and space. Oxford, UK: Oxford University Press. Walker, C. (in preparation). Motion encoding in language: the use of spatial locatives in a motion context. Unpublished doctoral dissertation, University of Lincoln, Lincoln, United Kingdom.
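The abstract does not name the statistical test behind the "no differences" finding. One standard way to test frequency differences like these is a Pearson chi-square test on a language-by-adposition contingency table; the sketch below computes the bare statistic in Python (in practice one would use a library routine such as scipy.stats.chi2_contingency, and all counts shown are hypothetical):

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for a contingency table
    (rows = languages, columns = adposition categories)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts of TO THE LEFT OF / TO THE RIGHT OF / ABOVE / BELOW
# choices per language (not the study's data):
counts = [
    [30, 28, 22, 20],  # English
    [29, 27, 24, 20],  # Swedish
    [31, 26, 23, 20],  # Finnish
]
print(chi_square_stat(counts))
```

A statistic near zero, compared against the chi-square distribution with (rows-1)(columns-1) degrees of freedom, would be consistent with the reported absence of cross-language differences.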

    Bare-handed 3D drawing in augmented reality

    Get PDF
    Head-mounted augmented reality (AR) enables embodied in situ drawing in three dimensions (3D). We explore 3D drawing interactions based on uninstrumented, unencumbered (bare) hands that preserve the user's ability to freely navigate and interact with the physical environment. We derive three alternative interaction techniques supporting bare-handed drawing in AR from the literature and by analysing several envisaged use cases. The three interaction techniques are evaluated in a controlled user study examining three distinct drawing tasks: planar drawing, path description, and 3D object reconstruction. The results indicate that continuous freehand drawing supports faster line creation than the control-point-based alternatives, although with reduced accuracy. User preferences for the different techniques are mixed and vary considerably between the different tasks, highlighting the value of diverse and flexible interactions. The combined effectiveness of these three drawing techniques is illustrated in an example application of 3D AR drawing.
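The accuracy measure behind the freehand-versus-control-point comparison is not defined in this abstract; one plausible metric, sketched here purely as an illustration, is the mean deviation of sampled stroke points from a target line segment:

```python
import math

def point_segment_dist(p, a, b):
    """Distance from 3D point p to the line segment from a to b."""
    ab = tuple(bi - ai for ai, bi in zip(a, b))
    ap = tuple(pi - ai for ai, pi in zip(a, p))
    denom = sum(c * c for c in ab)
    # Project p onto the segment, clamping to the endpoints
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(u * v for u, v in zip(ap, ab)) / denom))
    closest = tuple(ai + t * ci for ai, ci in zip(a, ab))
    return math.dist(p, closest)

def mean_deviation(stroke, a, b):
    """Mean distance of drawn stroke samples from the target segment."""
    return sum(point_segment_dist(p, a, b) for p in stroke) / len(stroke)

# Hypothetical hand-tracked samples (metres) for a target segment
# from (0,0,0) to (1,0,0):
stroke = [(0.0, 0.01, 0.0), (0.5, -0.02, 0.0), (1.0, 0.01, 0.0)]
print(mean_deviation(stroke, (0, 0, 0), (1, 0, 0)))  # ≈ 0.0133
```

Under a metric like this, the faster freehand strokes would show a larger mean deviation than the smoother control-point curves, matching the trade-off the study reports.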