5 research outputs found

    Remote laboratory experiments in a virtual immersive learning environment

    The Virtual Immersive Learning (VIL) test bench implements a virtual collaborative immersive environment, capable of integrating natural contexts and typical gestures that may occur during traditional lectures, enhanced with advanced experimental sessions. The system architecture is described, along with the motivations and the most significant hardware and software choices adopted for its implementation. The novelty of the approach essentially lies in its capability of embedding functionalities that stem from various research results (mainly carried out within the VICOM national project) and "putting the pieces together" in a well-integrated framework. These features, along with its high portability, good flexibility, and, above all, low cost, make the approach appropriate for educational and training purposes, mainly concerning measurements on telecommunication systems, at universities and research centers as well as enterprises. Moreover, the methodology can be employed for remote access to and sharing of costly measurement equipment in many different activities. The immersive characteristics of the framework are illustrated, along with performance measurements related to a specific application.

    On Inter-referential Awareness in Collaborative Augmented Reality

    For successful collaboration to occur, a workspace must support inter-referential awareness: the ability for one participant to refer to a set of artifacts in the environment, and for that reference to be correctly interpreted by others. While referring to objects in our everyday environment is a straightforward task, the non-tangible nature of digital artifacts presents new interaction challenges. Augmented reality (AR) is inextricably linked to the physical world, and it is natural to believe that re-integrating physical artifacts into the workspace makes referencing tasks easier; however, we find that these environments combine the referencing challenges of several computing disciplines, which compound across scenarios. This dissertation presents our studies of this form of awareness in collaborative AR environments. It stems from our research in developing mixed reality environments for molecular modeling, where we explored spatial and multi-modal referencing techniques. To encapsulate the myriad factors found in collaborative AR, we present a generic theoretical framework and apply it to analyze this domain. Because referencing is a very human-centric activity, we present the results of an exploratory study examining how participants behave and generate references to physical and virtual content in co-located and remote scenarios; we found that participants refer to content using both physical and virtual techniques, and that shared video is highly effective in disambiguating references in remote environments. Implementing user feedback from this study, a follow-up study explores how the environment can passively support referencing, where we discovered the role that virtual referencing plays during collaboration. A third study was conducted to better understand the effectiveness of giving and interpreting references using a virtual pointer; the results suggest that participants need a viewpoint parallel with the arrow vector (strengthening the argument for shared viewpoints), and highlight the importance of shadows in non-stereoscopic environments. Our contributions include a framework for analyzing the domain of inter-referential awareness, the development of novel referencing techniques, the presentation and analysis of our findings from multiple user studies, and a set of guidelines to help designers support this form of awareness.
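
    The third study's finding that references made with a virtual pointer are best interpreted from a viewpoint parallel with the arrow vector comes down to a simple geometric check. A minimal sketch, not taken from the dissertation (the function name and example vectors are invented for illustration):

        import math

        def viewpoint_alignment(view_dir, arrow_dir):
            """Cosine of the angle between a viewer's gaze direction and the
            pointer's arrow vector: 1.0 means parallel, 0.0 means orthogonal."""
            dot = sum(v * a for v, a in zip(view_dir, arrow_dir))
            norm_v = math.sqrt(sum(v * v for v in view_dir))
            norm_a = math.sqrt(sum(a * a for a in arrow_dir))
            return dot / (norm_v * norm_a)

        # A viewer looking almost along the arrow scores near 1.0, the condition
        # under which the study found references easiest to interpret.
        print(viewpoint_alignment((0.0, 0.1, 1.0), (0.0, 0.0, 1.0)))  # ~0.995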

    Real-time immersive human-computer interaction based on tracking and recognition of dynamic hand gestures

    With the fast development and ever-growing use of computer-based technologies, human-computer interaction (HCI) plays an increasingly pivotal role. In virtual reality (VR), HCI technologies provide not only a better understanding of three-dimensional shapes and spaces, but also sensory immersion and physical interaction. With hand-based HCI being a key modality for object manipulation and gesture-based communication, it is challenging to provide users with a natural, intuitive, effortless, precise, real-time method for HCI based on dynamic hand gestures, due to the complexity of hand postures formed by multiple joints with high degrees of freedom, the speed of hand movements with highly variable trajectories and rapid direction changes, and the precision required for interaction between hands and objects in the virtual world. Presented in this thesis is the design and development of a novel real-time HCI system based on a unique combination of a pair of data gloves with fibre-optic curvature sensors to acquire finger joint angles, a hybrid inertial/ultrasonic tracking system to capture hand position and orientation, and a stereoscopic display system to provide immersive visual feedback. The potential and effectiveness of the proposed system are demonstrated through a number of applications, namely hand-gesture-based virtual object manipulation and visualisation, direct sign writing, and finger spelling. For virtual object manipulation and visualisation, the system allows a user to select, translate, rotate, scale, release, and visualise virtual objects (presented using graphics and volume data) in three-dimensional space using natural hand gestures in real time. For direct sign writing, the system immediately displays the SignWriting symbols signed by a user using three different signing sequences and a range of complex hand gestures, which consist of various combinations of hand postures (with each finger open, half-bent, closed, in adduction or abduction), eight hand orientations in horizontal/vertical planes, three palm-facing directions, and various hand movements (which can have eight directions in horizontal/vertical planes, and can be repetitive, straight/curved, clockwise/anti-clockwise). The development includes a special visual interface that gives not only a stereoscopic view of hand gestures and movements, but also structured visual feedback for each stage of the signing sequence. An excellent basis is therefore formed to develop a full HCI based on all human gestures by integrating the proposed system with facial expression and body posture recognition methods. Furthermore, for finger spelling, the system is shown to recognise five vowels signed by two hands using British Sign Language in real time.
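
    The hand postures described above (each finger open, half-bent, or closed) suggest that the glove's curvature readings can be mapped to discrete finger states by thresholding. The following sketch is illustrative only, assuming per-finger flexion angles in degrees; the threshold values and all names are invented, not the thesis's actual method:

        from dataclasses import dataclass

        # Hypothetical thresholds (degrees); the thesis does not report its values.
        HALF_BENT_MIN = 30.0
        CLOSED_MIN = 70.0

        @dataclass
        class FingerReading:
            name: str
            flexion_deg: float  # aggregate curvature from the glove sensor

        def finger_state(reading: FingerReading) -> str:
            """Map a raw flexion angle onto the open / half-bent / closed states."""
            if reading.flexion_deg >= CLOSED_MIN:
                return "closed"
            if reading.flexion_deg >= HALF_BENT_MIN:
                return "half-bent"
            return "open"

        frame = [FingerReading("thumb", 12.0), FingerReading("index", 45.0),
                 FingerReading("middle", 88.0), FingerReading("ring", 85.0),
                 FingerReading("little", 80.0)]
        print(tuple(finger_state(r) for r in frame))
        # ('open', 'half-bent', 'closed', 'closed', 'closed')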

    Dance training and feedback system using wearable wireless sensors

    Teaching and learning the art of human body motion practices such as dance are interesting activities, usually performed at traditional training centres. Nowadays, learning the art of dance is becoming a challenging proposition with a huge commitment of time and energy. In recent times there has been vast advancement in computing and sensing technologies, and they are easily accessible. Based on these observations, we propose a wireless sensor-based dance training and feedback system that is convenient, flexible, and portable. This system is unique in providing prompt feedback, with various teaching and learning flexibilities for both trainees and trainers. In this thesis, an architectural framework for a generic body movement training system, proposed in [1], is tuned and expanded to develop a dance training and feedback system. The proposed feedback system and its prototype implementation are the main contributions of this thesis. The proposed teaching and learning tool presents a method for generating meaningful feedback by capturing and analyzing the motion data in real time. The usage of the proposed system is demonstrated using Tap dance. Performance metrics are devised to evaluate the performance, and a weighted scoring scheme is applied to compute it. The functionalities of the feedback system are illustrated using suitable scenarios. A combination of quantitative and qualitative feedback can be generated and presented to the trainees in three different forms: textual, graphical, and audio. The system also accommodates the varying teaching styles and preferences of different trainers. We believe that such two-end customization is a unique feature of the proposed system. With further tuning, we expect it will be a useful tool for teaching and learning dance at the beginner's level. The original print copy of this thesis may be available here: http://wizard.unbc.ca/record=b180584
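
    The weighted scoring scheme mentioned above can be pictured as a weighted average of per-metric scores, with the weights left to the trainer (echoing the system's two-end customization). A minimal sketch; the metric names, weights, and scores are invented for illustration and are not from the thesis:

        def weighted_score(metrics, weights):
            """Combine per-metric scores in [0, 1] into one 0-100 performance score."""
            assert set(metrics) == set(weights), "every metric needs a weight"
            total = sum(weights.values())
            return 100.0 * sum(metrics[m] * weights[m] for m in metrics) / total

        trainee = {"timing": 0.82, "step_accuracy": 0.74, "posture": 0.91}
        trainer_weights = {"timing": 0.5, "step_accuracy": 0.3, "posture": 0.2}
        print(f"Performance: {weighted_score(trainee, trainer_weights):.1f}/100")  # 81.4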

    Implementation, use and analysis of open source learning management system “Moodle” and e-learning for the deaf in Jordan

    When learning mathematics, deaf children of primary school age experience difficulties due to their disability. In Jordan, little research has been undertaken to understand the problems facing deaf children and their teachers. Frequently, children are educated in special schools for the deaf; the majority of deaf children tend not to be integrated into mainstream education, although efforts are made to incorporate them into the system. Teachers in the mainstream education system rarely have the knowledge and experience to enable deaf students to reach their full potential. The methodological approach used in this research is a mixed one, consisting of action research and Human-Computer Interaction (HCI) research. The target group was deaf children aged nine years (in the third grade) and their teachers in Jordanian schools. Mathematics was chosen as the main focus of this study because it is a universal subject with its own concepts and rules, and at this level the teachers in the school have sufficient knowledge and experience to teach mathematics topics competently. In order to obtain a better understanding of the problems faced by teachers and deaf children in learning mathematics, semi-structured interviews were undertaken and questionnaires distributed to teachers. The main aim at that stage of the research was to explore the current use and status of e-learning environments and LMSs within the Jordanian schools for the deaf. In later stages of this research, semi-structured interviews and questionnaires were used again to ascertain the effectiveness, usability, and readiness of the adopted e-learning environment “Moodle”. Finally, pre-tests and post-tests were used to assess the effectiveness of the e-learning environment and LMS. It is important to note that the children were not worked with directly; they participated only as test subjects. Based on the requirements and recommendations of the teachers of the deaf, a key requirements scheme was developed. Four open source e-learning environments and LMSs were evaluated against the developed key requirements, using a software engineering approach. The outcome of that evaluation was the adoption of an open source e-learning environment and LMS called “Moodle”. Moodle was presented to the teachers for testing and was found to be the most suitable e-learning environment and LMS to be adapted for use by deaf children in Jordan, based on the teachers' requirements. Moodle was then presented to the deaf children to use during this research. After use, the activities of the deaf children and their teachers were recorded and analysed in terms of Human-Computer Interaction (HCI). The analysis covers readiness, usability, user satisfaction, ease of use, learnability, outcome/future use, content, collaboration & communication tools, and functionality.
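
    The key-requirements evaluation described above amounts to scoring each candidate LMS against weighted requirements and ranking the totals. A hedged sketch of such a matrix; the requirement names, weights, the candidate "LMS-B", and all scores are invented for illustration and are not the study's data:

        # Requirement -> weight (importance elicited from the teachers of the deaf).
        REQUIREMENTS = {"accessibility": 3, "arabic_support": 3,
                        "visual_content": 2, "ease_of_use": 2}

        def rank(candidates):
            """Rank candidates by the weighted sum of per-requirement scores (0-5)."""
            totals = {name: sum(scores[req] * w for req, w in REQUIREMENTS.items())
                      for name, scores in candidates.items()}
            return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

        candidates = {
            "Moodle": {"accessibility": 4, "arabic_support": 5,
                       "visual_content": 4, "ease_of_use": 4},
            "LMS-B":  {"accessibility": 3, "arabic_support": 2,
                       "visual_content": 3, "ease_of_use": 4},
        }
        for name, total in rank(candidates):
            print(name, total)  # Moodle 43, LMS-B 29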