
    Comparing Free Hand Menu Techniques for Distant Displays using Linear, Marking and Finger-Count Menus

    Distant displays such as interactive public displays (IPD) or interactive television (ITV) require new interaction techniques, as traditional input devices may be limited or missing in these contexts. Free-hand interaction, sensed with computer vision techniques, is a promising alternative. This paper presents the adaptation of three menu techniques to free-hand interaction: the Linear menu, the Marking menu and the Finger-Count menu. The first study, based on a Wizard-of-Oz protocol, focuses on Finger-Count postures in front of interactive televisions and public displays. It reveals that participants choose the most efficient gestures neither before nor after the experiment. The results are used to develop a Finger-Count recognizer. The second experiment shows that all techniques achieve satisfactory accuracy; it also shows that Finger-Count imposes a higher mental demand than the other techniques.
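
    A minimal sketch of a Finger-Count-style recognizer: the number of extended fingers selects the menu item. It assumes MediaPipe-style 21-point hand landmarks and a simple distance heuristic; both are illustrative assumptions, not the recognizer the paper derives from its study data.

```python
import math

FINGERTIPS = [4, 8, 12, 16, 20]   # thumb, index, middle, ring, pinky tips
PIP_JOINTS = [3, 6, 10, 14, 18]   # the joint below each tip
WRIST = 0

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def count_extended_fingers(landmarks):
    """landmarks: list of (x, y) points in image coordinates."""
    wrist = landmarks[WRIST]
    count = 0
    for tip, pip in zip(FINGERTIPS, PIP_JOINTS):
        # A finger counts as extended when its tip lies farther from the
        # wrist than the joint below it (crude but workable heuristic).
        if dist(landmarks[tip], wrist) > dist(landmarks[pip], wrist):
            count += 1
    return count

def select_menu_item(landmarks, menu_items):
    """Return the menu item matching the finger count, or None."""
    n = count_extended_fingers(landmarks)
    return menu_items[n - 1] if 1 <= n <= len(menu_items) else None
```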

    Application of augmented reality and robotic technology in broadcasting: A survey

    As an innovation technique, Augmented Reality (AR) has been gradually deployed in the broadcast, videography and cinematography industries. Virtual graphics generated by AR are dynamic and overlap on the surface of the environment so that the original appearance can be greatly enhanced in comparison with traditional broadcasting. In addition, AR enables broadcasters to interact with augmented virtual 3D models on a broadcasting scene in order to enhance the performance of broadcasting. Recently, advanced robotic technologies have been deployed in a camera shooting system to create a robotic cameraman so that the performance of AR broadcasting could be further improved, which is highlighted in the paper
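
    The core step the survey describes is overlaying virtual graphics on a real camera frame. As a hedged illustration (not any system from the survey), the sketch below warps a graphic onto a planar surface via a homography with OpenCV; the destination corners and function names are assumptions for demonstration.

```python
import cv2
import numpy as np

def overlay_graphic(frame, graphic, dst_corners):
    """Warp `graphic` onto `frame` at the quad dst_corners
    (4x2 array, clockwise from top-left, in frame pixels)."""
    h, w = graphic.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, np.float32(dst_corners))
    size = (frame.shape[1], frame.shape[0])
    warped = cv2.warpPerspective(graphic, H, size)
    # Warp an all-white mask the same way to know which pixels to replace.
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H, size)
    frame[mask > 0] = warped[mask > 0]
    return frame
```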

    A New Model for an Augmented Reality Based Content (ARBC) (A Case Study on the Palestinian Curriculum)

    The digital age has pushed people to use technology: distributed learning is found everywhere, and both teachers and students need new strategies based on technologies such as mobile phones, PCs and iPads. These devices provide a suitable base and infrastructure for augmented reality. It is no longer acceptable to present content in a traditional way when we live in a digital age filled with tech-savvy students. This paper introduces a new model for determining, using and evaluating augmented reality based content. I focus on the Palestinian curriculum and on how to convert static images and shapes into dynamic content. I divided the model into five steps: analyze, determine, produce, use and evaluate. The model takes into account educational theories such as constructivism and connectivism, and it places students at the core of the content and the learning process.

    Enhanced life-size holographic telepresence framework with real-time three-dimensional reconstruction for dynamic scene

    Three-dimensional (3D) reconstruction captures and reproduces a 3D representation of a real object or scene. 3D telepresence allows a user to feel the presence of a remote user transferred as a digital representation. Holographic display is one alternative for discarding the restrictions of wearable hardware; it utilizes light diffraction to display 3D images to viewers. However, capturing a life-size or full-body human in real time is still challenging because it involves a dynamic scene: the object to be reconstructed is constantly moving and changing shape, and it requires multiple capturing views. The volume of life-size data multiplies as more depth cameras are added, which drives up computation time, especially for dynamic scenes, and transferring high volumes of 3D images over a network in real time can cause lag and latency. Hence, the aim of this research is to enhance a life-size holographic telepresence framework with real-time 3D reconstruction for dynamic scenes. Three stages were carried out. In the first stage, real-time 3D reconstruction using the Marching Squares algorithm is combined with data acquisition of dynamic scenes captured by a life-size setup of multiple Red Green Blue-Depth (RGB-D) cameras. The second stage transmits the data acquired from the multiple RGB-D cameras in real time and performs double compression for the life-size holographic telepresence. The third stage evaluates the life-size holographic telepresence framework integrated with the real-time 3D reconstruction of dynamic scenes. The findings show that enhancing the framework with real-time 3D reconstruction reduces computation time and improves the 3D representation of the remote user in a dynamic scene. With double compression, the life-size 3D representation is smooth, and the delay and latency during frame synchronization in remote communications are minimized.
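
    For reference, here is a standalone, hedged sketch of the Marching Squares algorithm the first stage relies on (not the thesis's implementation): it walks a 2D scalar grid and emits iso-contour line segments per cell. The iso value and the simplified treatment of ambiguous saddle cells are assumptions.

```python
import numpy as np

def interp(p1, p2, v1, v2, iso):
    """Linearly interpolate the iso-crossing point along one cell edge."""
    t = (iso - v1) / (v2 - v1)
    return (p1[0] + t * (p2[0] - p1[0]), p1[1] + t * (p2[1] - p1[1]))

def marching_squares(grid, iso=0.5):
    segments = []
    rows, cols = grid.shape
    for i in range(rows - 1):
        for j in range(cols - 1):
            # Corner positions and values, counter-clockwise from top-left.
            corners = [(j, i), (j + 1, i), (j + 1, i + 1), (j, i + 1)]
            vals = [grid[i, j], grid[i, j + 1],
                    grid[i + 1, j + 1], grid[i + 1, j]]
            # Find iso-crossings on each of the four cell edges.
            pts = []
            for k in range(4):
                v1, v2 = vals[k], vals[(k + 1) % 4]
                if (v1 < iso) != (v2 < iso):
                    pts.append(interp(corners[k], corners[(k + 1) % 4],
                                      v1, v2, iso))
            # Two crossings -> one segment (saddle cases simplified away).
            if len(pts) == 2:
                segments.append((pts[0], pts[1]))
    return segments
```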

    User Interface for ARTable and Microsoft Hololens

    This thesis focuses on the usability of a mixed reality head-mounted display, the Microsoft HoloLens, in a human-robot collaborative workspace, the ARTable. Use of the headset is demonstrated with a user interface that helps regular workers understand the ARTable system better and faster. It allows learned programs to be visualized spatially, without the need to run the robot itself. The user is guided through individual programs by 3D animation and by the device's voice, which gives a clear idea of what would happen if the program were run directly on the robot. The solution also provides interactive guidance for the user when programming the robot. Using a mixed reality display additionally makes it possible to visualize valuable spatial information, such as the robot's perception, for example by highlighting the objects the robot has detected.

    Hand-Controlled User Interfacing for Head-Mounted Augmented Reality Learning Environments

    With the rapid expansion of technology and hardware availability within the field of Augmented Reality, building and deploying Augmented Reality learning environments has become more logistically viable than ever before. In this paper, we focus on the development of a new mobile learning experience for a museum that combines multiple technologies to provide additional human-computer interaction possibilities, both to reduce barriers to entry for end-users and to provide natural interaction methods. Using our method, we implemented a new approach to gesture-based Augmented Reality interactions by combining two devices, a Leap Motion and a Microsoft HoloLens (1st generation), via an intermediary device on a local-area network. This was carried out with the intention of comparing the method against alternative forms of Augmented Reality to determine which implementation has the largest impact on adult learners' ability to retain information. A control group was used to establish baseline data on memory retention without Augmented Reality technology, along with three focus groups exploring the different methods and locations. Results found that adult learners retain the most overall information when educated through a traditional lecture, with a statistically significant difference between the methods; however, the use of Augmented Reality resulted in a slower rate of knowledge decay between testing intervals. This contrasts with existing research, as the adult learners did not respond to the technology in the way that child and teenage audiences previously have, which suggests that prior research may not be generalisable to all audiences.
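
    A minimal sketch of the intermediary-device pattern described above, assuming a simple UDP relay: a PC reads Leap Motion hand data and forwards it over the local network to the HoloLens. The payload shape, address, rate, and the read_leap_frame() stub are hypothetical illustrations, not the authors' protocol.

```python
import json
import socket
import time

HOLOLENS_ADDR = ("192.168.1.50", 9000)  # placeholder HoloLens IP and port

def read_leap_frame():
    """Stub for the Leap Motion read; returns palm position and pinch."""
    return {"palm": [0.0, 120.0, -30.0], "pinch": 0.8}

def relay_loop():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        frame = read_leap_frame()
        frame["t"] = time.time()  # timestamp lets the receiver drop stale packets
        sock.sendto(json.dumps(frame).encode("utf-8"), HOLOLENS_ADDR)
        time.sleep(1 / 60)  # roughly match a 60 Hz tracking rate
```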

    From seen to unseen: Designing keyboard-less interfaces for text entry on the constrained screen real estate of Augmented Reality headsets

    Text input is a very challenging task in the constrained screen real estate of Augmented Reality headsets: typical keyboards spread over multiple lines and occupy a significant portion of the screen. In this article, we explore the feasibility of single-line text entry systems for smartglasses. We first design FITE, a dynamic keyboard in which the characters are positioned according to their probability given the current input. However, the dynamic layout leads to mediocre text input speed and low accuracy. We then introduce HIBEY, a fixed one-line solution that further decreases screen real-estate usage by hiding the layout. Despite its hidden layout, HIBEY performs much better than FITE, achieving a mean text entry rate of 9.95 words per minute (WPM) with 96.06% accuracy, which is comparable to other state-of-the-art approaches. After 8 days, participants achieve an average of 13.19 WPM. In addition, HIBEY occupies only 13.14% of the screen real estate at the edge region, which is 62.80% smaller than the default keyboard layout on Microsoft HoloLens.
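
    A small sketch of the idea behind FITE's dynamic layout, assuming a toy character-bigram model as a stand-in for whatever language model the authors used: the most probable next characters are placed first (closest to the cursor) on the single-line keyboard.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count character bigrams; '^' marks the start of a word."""
    model = defaultdict(Counter)
    for word in corpus:
        word = "^" + word
        for a, b in zip(word, word[1:]):
            model[a][b] += 1
    return model

def layout_for(prefix, model, slots=8):
    """Return up to `slots` characters, most probable first; the
    single-line keyboard lays them out left to right in this order."""
    prev = prefix[-1] if prefix else "^"
    return [c for c, _ in model[prev].most_common(slots)]

model = train_bigrams(["hello", "help", "hold", "world", "word"])
print(layout_for("he", model))  # characters likely to follow 'e'
```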

    A virtual reality environment for training operators for assembly tasks involving human-cobot interactions

    The introduction of collaborative robots in industry requires new training methods. Users without experience in collaborative tasks with robots feel insecure at the beginning, reducing their productivity until they become familiar with this type of process. To address this problem, the training must take place in an environment where the user feels comfortable working with the cobot, so that these insecurities can be overcome. This thesis aims to define a training method for users to get used to working with collaborative robots; the method covers any kind of robot and task. In addition, this research looks for training that takes place outside the production line, so as not to affect the productivity of the plant. The document presents the background of training methods in industry and new trends in the field, such as virtual reality and the evolution of human-robot interaction. A patent landscape is included to evaluate the current state of research and development in these fields. The thesis proposes an interactive and immersive virtual reality training based on WebGL. It consists of a simulation in which the operator interacts in real time with a cobot executing a collaborative task. Because the simulation is built on WebGL, it can be accessed directly from the browser, without restrictions on the virtual reality equipment. The scenario presents the assembly of a box in collaboration with ABB's YuMi cobot. The tools, models and techniques used for the implementation are described. Taking advantage of the properties of virtual reality to facilitate learning the task, the simulation offers a user assistance system, which is explained in detail. The method was tested on a group of engineering students who ran the simulation in order to evaluate the effectiveness of this proposal for helping operators learn collaborative tasks. The results show greater acceptance and confidence among users performing the task with the cobot after the simulation, while they learned the entire process of the task. It is therefore concluded that the proposed method is valid for user training in collaborative tasks. It is hoped that this work will serve as a basis for future research into the incorporation of WebGL and virtual reality in the training of industrial processes.

    Development of a virtual pet game using oculus rift and leap motion technologies.

    Recent emerging technology with a Virtual Reality (VR) aspect is both research-driven and commercially successful. Within it, one of the most advantageous devices is the Oculus Rift headset, whose light weight, low cost and high quality have the potential to bring novel VR theory into practice. Furthermore, the Leap Motion has emerged as a high-precision bare-hand tracker that supports VR integration. The combination of these technologies is therefore promising in many application areas. Gaming is one of them, particularly the life-simulation genre, because gaming not only bridges users to familiar technology but also gives them full immersion in the synthetic world. Among the many successful simulation games, the digital pet-raising genre has proven itself in the gaming community, as have advances in motion-controlled games. This motivated the development of a virtual reality pet game. This research therefore develops a prototype pet-raising game using the Unity game engine, based on Leap Motion and Oculus Rift technologies. The prototype contains a variety of pet interactions, including feeding, cleaning, throw-catch and trick training, to exercise the hand-motion control of the Leap Motion, and it is played from a first-person perspective to give full VR immersion. The game is then evaluated via quantitative research approaches aiming to investigate interactive technologies such as the Leap Motion. The Kinectimals game, based on Xbox Kinect technology, was selected for comparison because of the similarity of its motion controls. Two experiments with similar procedures, one on the developed game and one on Kinectimals, were conducted to collect objective measures such as duration, task completion and failure rate, plus participants' subjective reports on three questionnaires: the After-Scenario Questionnaire (ASQ), the IBM Computer System Usability Questionnaire (CSUQ) and the NASA Task Load Index (NASA-TLX). The questionnaires included the standard questions plus additional questions designed specifically for the prototype. Compared to Kinectimals, the game achieved highly acceptable scores for workload and for satisfaction with information and interface quality. The final prototype received much positive feedback, with no simulator or motion sickness during long play sessions, and its interface design was well received. Moreover, besides the rich game content, hand gestures including the fist, the face-up hand and throw-grab activities were the most reliable with the Leap Motion; however, hand-tracking issues were identified due to a lack of robustness, particularly for dynamic gestures. The main contribution is to make VR more accessible to ordinary people through gaming and to show how to bring immersion to a specific game genre. Although some games and applications based on this combination of technologies exist, rigorous experiments verifying their feasibility are limited, which makes this research worth carrying out. It is hoped that the experimental findings contribute to promoting the pet-game genre within a VR setting, in particular the roles of immersion and motion control.
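
    A hedged sketch of the throw-grab interaction mentioned above: an object is held while the hand is closed and is thrown when the hand opens at sufficient palm speed. The grab-strength input, the thresholds and the function names are illustrative assumptions, not the prototype's actual logic or tuning.

```python
def hand_closed(grab_strength, threshold=0.8):
    """grab_strength in [0, 1], as reported by typical hand trackers."""
    return grab_strength >= threshold

def update_throw(state, grab_strength, palm_velocity, min_speed=1.2):
    """state: {'holding': bool}; palm_velocity: (vx, vy, vz) in m/s.
    Returns the throw velocity when a throw is detected, else None."""
    closed = hand_closed(grab_strength)
    if not state["holding"] and closed:
        state["holding"] = True          # closed hand picks the object up
    elif state["holding"] and not closed:
        state["holding"] = False         # hand opened: release the object
        speed = sum(v * v for v in palm_velocity) ** 0.5
        if speed >= min_speed:
            return palm_velocity         # fast release counts as a throw
    return None
```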