    Online gamification devices as extensions of the educational printed book

    In recent years there have been several commercial products designated as "augmented books". These use gamification and augmented reality technologies to provide the reader with additional layers of information, thereby fostering the use of the book in new ways. In this article we describe part of the research and outcomes of the Portuguese project CHIC – C3, aimed at designing and developing a platform for managing the production of digital content connected with printed books. Furthermore, we developed a model for the gamification of digital content based on the printed book, aimed mainly at educational purposes. A proof of concept for the model was built in the form of a companion platform, supported by the Moodle LMS and fully integrated with the main CHIC website. Readers were able to access the platform, engage in several content-related games, and interact with other readers.

    Enhancement and extension of the printed book: an online gamification model to complement educational textbooks

    Conference held in Bari, Italy, 3-5 November 2021. Despite the significant increase in the use of digital devices and in access to e-books at younger ages, the printed book remains very important. Nowadays, although many communication processes and information exchanges rely on digital support, the importance of printed paper is acknowledged in many contexts. Both paper and digital media have unique advantages: digital media integrate audiovisual and interactive resources, while the paper book supports interactions such as the tactile and kinesthetic feedback given to both hands. In recent years there have been several commercial products designated as "augmented books", which use augmented reality technologies to provide the reader with additional layers of information, thereby fostering the use of the book in new ways. In this concept paper we describe part of the research and outcomes of project CHIC – C3, aimed at designing and developing a platform for managing the production of digital content connected with printed books. Furthermore, we propose a model for the gamification of digital content based on the printed book, aimed mainly at educational purposes. A proof of concept for the model was built in the form of a companion platform, supported by the Moodle LMS and fully integrated with the main CHIC website. Readers are able to access the platform, engage in several content-related games, and interact with other readers.
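
    As an illustration of how such a Moodle-backed companion platform could expose its gamified activities, the sketch below queries a course through Moodle's standard web services REST API (function core_course_get_contents). It is not the CHIC project's code; the site URL, token, and course id are hypothetical placeholders.

        # Illustrative sketch only: fetch the activities of a book-companion Moodle course
        # via Moodle's standard web services REST API. The URL, token, and course id are
        # hypothetical placeholders, not values from the CHIC project.
        import requests

        MOODLE_URL = "https://moodle.example.org/webservice/rest/server.php"  # hypothetical site
        WS_TOKEN = "YOUR_WEB_SERVICE_TOKEN"   # issued by the Moodle administrator
        COURSE_ID = 42                        # hypothetical companion course for one printed book

        def get_companion_activities(course_id: int) -> list[dict]:
            """Return the activities (e.g. quizzes, games) of the companion course."""
            params = {
                "wstoken": WS_TOKEN,
                "wsfunction": "core_course_get_contents",  # standard Moodle WS function
                "moodlewsrestformat": "json",
                "courseid": course_id,
            }
            response = requests.get(MOODLE_URL, params=params, timeout=10)
            response.raise_for_status()
            activities = []
            for section in response.json():            # e.g. one section per book chapter
                for module in section.get("modules", []):
                    activities.append({"section": section["name"],
                                       "activity": module["name"],
                                       "type": module["modname"]})
            return activities

        if __name__ == "__main__":
            for item in get_companion_activities(COURSE_ID):
                print(f'{item["section"]}: {item["activity"]} ({item["type"]})')

    A reader following a link or code printed in the book could then be routed to the matching activity in the list returned above.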

    Vision based 3D Gesture Tracking using Augmented Reality and Virtual Reality for Improved Learning Applications

    3D gesture recognition and tracking for augmented reality and virtual reality have attracted significant research interest because of advances in smartphone technology. By interacting with 3D objects in augmented and virtual reality, users gain a better understanding of the subject matter, although customized hardware support is often required and overall experimental performance needs to be satisfactory. This research surveys current vision-based 3D gesture architectures for augmented and virtual reality. Its core goal is to analyse methods and frameworks, and their experimental performance, for recognising and tracking hand gestures and interacting with virtual objects on smartphones. Experimental evaluation of existing methods is organised into three categories: hardware requirements, documentation before the actual experiment, and datasets. These categories are expected to ensure robust validation for practical use of 3D gesture tracking in augmented and virtual reality. The hardware setup covers types of gloves, fingerprint readers, and sensors. Documentation covers classroom setup manuals, questionnaires, recordings for improvement, and stress-test applications. The last part of the experimental section covers the datasets used by existing research. This comprehensive illustration of methods, frameworks, and experimental aspects can contribute significantly to 3D gesture recognition and tracking for augmented and virtual reality.
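
    None of the surveyed systems is reproduced here, but the following minimal sketch shows the kind of vision-based 3D hand-tracking loop such work builds on: MediaPipe Hands estimates 21 three-dimensional landmarks per hand from ordinary RGB frames, which a smartphone AR/VR application could then feed into gesture classification. The camera index and confidence threshold are arbitrary example values.

        # Minimal sketch of a generic vision-based 3D hand-tracking loop (not taken from
        # the surveyed works): MediaPipe Hands estimates 21 3D landmarks per hand from an
        # ordinary RGB camera, the kind of input a smartphone AR/VR app would provide.
        import cv2
        import mediapipe as mp

        hands = mp.solutions.hands.Hands(
            static_image_mode=False,        # video mode: track landmarks across frames
            max_num_hands=1,
            min_detection_confidence=0.5,
        )

        capture = cv2.VideoCapture(0)       # default camera; a phone app would use its own camera API
        while capture.isOpened():
            ok, frame_bgr = capture.read()
            if not ok:
                break
            frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
            result = hands.process(frame_rgb)
            if result.multi_hand_landmarks:
                landmarks = result.multi_hand_landmarks[0].landmark
                tip = landmarks[mp.solutions.hands.HandLandmark.INDEX_FINGER_TIP]
                # x, y are normalised image coordinates; z is relative depth from the wrist
                print(f"index tip: x={tip.x:.2f} y={tip.y:.2f} z={tip.z:.2f}")
            if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
                break
        capture.release()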

    Pre-define rotation amplitudes object rotation in handheld augmented reality

    Interaction is an important topic because it concerns the interface through which the end user communicates with the augmented reality (AR) system. In handheld AR interfaces, traditional interaction techniques are not suitable for some AR applications because of the distinct attributes of handheld devices, which here refers to smartphones and tablets. Current interaction techniques in handheld AR are the touch-based, mid-air gesture-based, and device-based techniques, which have led to wide discussion in related research areas. However, this paper focuses on the device-based interaction technique because previous studies have shown it to be more suitable and robust in several respects. A novel device-based 3D object rotation technique is proposed to address the current difficulty of performing 3DOF rotation of a 3D object, with the goal of producing more precise and faster rotation. The rotation amplitudes per second must therefore be determined before full implementation. This paper discusses the implementation in depth and provides a guideline for those working on device-based interaction.
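
    As a rough illustration of the idea (not the paper's implementation), the sketch below maps a handheld device's tilt to 3DOF object rotation using a pre-defined rotation amplitude per second: the tilt chooses the axis and direction, and the amplitude fixes how far the object turns per unit of time. The read_tilt_direction function and the 90 degrees-per-second amplitude are hypothetical placeholders.

        # Illustrative sketch (not the paper's implementation) of device-based 3DOF rotation
        # with a pre-defined rotation amplitude per second: the handheld device's movement
        # selects the axis and sign, while the amplitude fixes how fast the object turns.
        import time
        import numpy as np

        AMPLITUDE_DEG_PER_S = 90.0   # pre-defined rotation amplitude (assumed example value)

        def axis_rotation(axis: str, angle_rad: float) -> np.ndarray:
            """Rotation matrix about a single principal axis."""
            c, s = np.cos(angle_rad), np.sin(angle_rad)
            if axis == "x":
                return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
            if axis == "y":
                return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
            return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

        def read_tilt_direction() -> tuple[str, float]:
            """Hypothetical placeholder for the device's motion sensor: returns the axis
            the user is tilting around and the sign of the tilt (+1 or -1)."""
            return "y", 1.0

        object_orientation = np.eye(3)     # current rotation of the 3D object
        previous = time.monotonic()
        for _ in range(5):                 # a few iterations of the interaction loop
            now = time.monotonic()
            dt, previous = now - previous, now
            axis, sign = read_tilt_direction()
            angle = np.radians(AMPLITUDE_DEG_PER_S) * sign * dt   # amplitude scaled by elapsed time
            object_orientation = axis_rotation(axis, angle) @ object_orientation
            time.sleep(0.05)
        print(object_orientation)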

    End-to-End Multiview Gesture Recognition for Autonomous Car Parking System

    The use of hand gestures can be the most intuitive human-machine interaction medium. Early approaches to hand gesture recognition used device-based methods, which rely on mechanical or optical sensors attached to a glove or markers and hinder natural human-machine communication. Vision-based methods, on the other hand, are not restrictive and allow more spontaneous communication without the need for an intermediary between human and machine. Vision-based gesture recognition has therefore been a popular area of research for the past thirty years. Hand gesture recognition finds application in many areas, particularly the automotive industry, where advanced automotive human-machine interface (HMI) designers are using gesture recognition to improve driver and vehicle safety. However, technology advances go beyond active/passive safety and into convenience and comfort. In this context, one of America's big three automakers has partnered with the Centre of Pattern Analysis and Machine Intelligence (CPAMI) at the University of Waterloo to investigate expanding their product segment through machine learning, providing increased driver convenience and comfort with the particular application of hand gesture recognition for autonomous car parking. In this thesis, we leverage state-of-the-art deep learning and optimization techniques to develop a vision-based multiview dynamic hand gesture recognizer for a self-parking system. We propose a 3DCNN gesture model architecture that we train on a publicly available hand gesture database. We apply transfer learning methods to fine-tune the pre-trained gesture model on custom-made data, which significantly improved the proposed system's performance in a real-world environment. We adapt the architecture of the end-to-end solution to expand the state-of-the-art video classifier from a single image input (fed by a monocular camera) to a multiview 360° feed provided by a six-camera module. Finally, we optimize the proposed solution to work on a resource-limited embedded platform (Nvidia Jetson TX2) used by automakers for vehicle-based features, without sacrificing the accuracy, robustness, or real-time functionality of the system.
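
    To make the multiview idea concrete, the sketch below shows a minimal 3D-CNN video classifier that runs the same backbone on every camera view and averages the per-view logits (late fusion). It is not the thesis's architecture; the layer sizes, six-view input, and ten gesture classes are assumed example values.

        # Minimal sketch (not the thesis's architecture) of a 3D-CNN gesture classifier
        # that handles a multiview feed by running the same backbone on every camera view
        # and averaging the per-view logits. Shapes and class counts are assumed values.
        import torch
        import torch.nn as nn

        class Gesture3DCNN(nn.Module):
            def __init__(self, num_classes: int = 10):
                super().__init__()
                self.backbone = nn.Sequential(
                    nn.Conv3d(3, 16, kernel_size=3, padding=1),   # per-view input: (B, 3, T, H, W)
                    nn.ReLU(),
                    nn.MaxPool3d(2),
                    nn.Conv3d(16, 32, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1),                       # global spatio-temporal pooling
                )
                self.classifier = nn.Linear(32, num_classes)

            def forward(self, clips: torch.Tensor) -> torch.Tensor:
                # clips: (batch, views, channels, frames, height, width)
                b, v, c, t, h, w = clips.shape
                features = self.backbone(clips.reshape(b * v, c, t, h, w)).flatten(1)
                logits = self.classifier(features).reshape(b, v, -1)
                return logits.mean(dim=1)          # late fusion: average the per-view predictions

        # Example: a batch of 2 clips, 6 camera views, 16 RGB frames of 64x64 pixels each.
        model = Gesture3DCNN(num_classes=10)
        dummy = torch.randn(2, 6, 3, 16, 64, 64)
        print(model(dummy).shape)                  # -> torch.Size([2, 10])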

    Natural and intuitive gesture interaction for 3D object manipulation in conceptual design

    Gesture interaction with three-dimensional (3D) representations is increasingly explored; however, there is little research on the nature of the gestures used. A study was conducted to explore the gestures designers perform naturally and intuitively while interacting with 3D objects during conceptual design. The findings demonstrate that different designers perform similar gestures for the same activities, and that their interaction with a 3D representation on a 2D screen is consistent with what would be expected if a physical object were suspended in the air in front of them.