252 research outputs found

    Analyzing Interaction for Automated Adaptation – First Steps in the IAAA Project

    Because of an aging society and the growing relevance of computer-based systems in many areas of everyday life, personalization of software systems is becoming increasingly important for preventing usage errors and creating a good user experience. However, personalization is typically a time-consuming and costly process when done through manual configuration. Automated adaptation to specific users’ needs is therefore a useful way to reduce the effort required. The IAAA project focuses on the analysis of user interaction capabilities and the implementation of automated adaptations based on them. The success of these endeavors, however, relies strongly on a careful selection of interaction modalities as well as profound knowledge of the target group’s general interaction behavior. Therefore, as a first step in the project, an extensive task-based user observation with thorough involvement of the actual target group was conducted in order to determine the input devices and modalities that would, in a second step, become the subject of the first prototype implementations. This paper discusses the general objectives of the IAAA project, describes the methodology and aims behind the user observation, and presents its results.

    Investigating Real-time Touchless Hand Interaction and Machine Learning Agents in Immersive Learning Environments

    The recent surge in the adoption of new technologies and innovations in connectivity, interaction technology, and artificial realities can fundamentally change the digital world. eXtended Reality (XR), with its potential to bridge virtual and real environments, creates new possibilities to develop more engaging and productive learning experiences. Evidence is emerging that this sophisticated technology offers new ways to improve the learning process for better student interaction and engagement. Recently, immersive technology has garnered much attention as an interactive technology that facilitates direct interaction with virtual objects in the real world. Furthermore, these virtual objects can be surrogates for real-world teaching resources, allowing for virtual labs. Thus, XR could enable learning experiences that would not be possible in impoverished educational systems worldwide. Interestingly, concepts such as virtual hand interaction and techniques such as machine learning are still not widely investigated in immersive learning. Hand interaction technologies in virtual environments can support the kinesthetic learning pedagogical approach, and the need for their touchless nature has increased exceptionally in the post-COVID world. By implementing and evaluating real-time hand interaction technology for kinesthetic learning and machine learning agents for self-guided learning, this research addresses these underutilized technologies to demonstrate the efficiency of immersive learning. This thesis explores different hand-tracking APIs and devices to integrate real-time hand interaction techniques. These hand interaction techniques, together with machine learning agents trained through reinforcement learning, are evaluated on different display devices to test compatibility. The proposed approach aims to provide self-guided, more productive, and interactive learning experiences. Further, this research investigates ethics, privacy, and security issues in XR and covers the future of immersive learning in the Metaverse.
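    As a concrete illustration of the real-time touchless hand interaction the thesis investigates, below is a minimal Python sketch of pinch-gesture detection from webcam frames. It assumes MediaPipe Hands as the tracking API and an illustrative distance threshold; the thesis explored several hand-tracking APIs and devices, so this is only one plausible instance, not its actual implementation.

```python
# Minimal sketch: real-time touchless pinch detection from webcam frames.
# Assumes MediaPipe Hands as the tracking API (one of several options);
# the threshold and the pinch gesture itself are illustrative choices.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def is_pinch(hand_landmarks, threshold=0.05):
    """True when thumb tip (landmark 4) and index tip (landmark 8) are
    close in normalized image coordinates -- a simple 'select' gesture."""
    thumb = hand_landmarks.landmark[4]
    index = hand_landmarks.landmark[8]
    dist = ((thumb.x - index.x) ** 2 + (thumb.y - index.y) ** 2) ** 0.5
    return dist < threshold

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures frames as BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                if is_pinch(hand):
                    print("pinch detected")  # e.g. grab a virtual object here
cap.release()
```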

    Shopping Using Gesture-Driven Interaction

    Hand gestures recognition using 3D-CNN

    Since the emergence of computer systems, one of the aspects that has most helped their rise in popularity has been the simplification of user-computer communication, commonly known as the user interface. Nowadays, the vanguard in this field is the set of techniques called touchless which, as the name indicates, involve communication that does not require touching any hardware, relying instead on audio or video. This project addresses the recognition of dynamic hand gestures using RGB-D (color and depth) sequences recorded with a Kinect sensor. To do so, I have used a technique that combines computer vision and deep learning known as a 3D Convolutional Neural Network. My solution is inspired by the one proposed by Molchanov et al. in their work [1], where spatial and temporal data augmentation techniques are used. I have worked with two different datasets. The first is a prepared dataset, with which an accuracy of nearly 65% was obtained. The second (referred to as the Telepresence Dataset) was self-made; with it, I did not get positive results.
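    To make the technique concrete, here is a minimal PyTorch sketch of a 3D CNN classifying short RGB-D gesture clips, loosely in the spirit of the Molchanov et al.-inspired approach described above. The layer sizes, 16-frame clip length, 64x64 resolution, and 25-class output are illustrative assumptions, not the project's actual configuration.

```python
# Minimal sketch of a 3D CNN for short RGB-D gesture clips. All sizes
# below (channels, clip length, resolution, class count) are assumptions
# for illustration; convolutions run over time as well as space.
import torch
import torch.nn as nn

class Gesture3DCNN(nn.Module):
    def __init__(self, num_classes=25):
        super().__init__()
        self.features = nn.Sequential(
            # Input: (batch, 4, 16, 64, 64) -- RGB-D channels, 16 frames, 64x64.
            nn.Conv3d(4, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),  # halves time and space: (16, 8, 32, 32)
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),  # (32, 4, 16, 16)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 16 * 16, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, clips):  # clips: (N, 4, 16, 64, 64)
        return self.classifier(self.features(clips))

# Shape sanity check on a random batch of two clips.
logits = Gesture3DCNN()(torch.randn(2, 4, 16, 64, 64))
print(logits.shape)  # torch.Size([2, 25])
```

    In the paper's setup, spatial augmentation (e.g. random crops or flips of each frame) and temporal augmentation (e.g. resampling the clip's frames) would be applied to the training clips before they reach a network of this kind.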

    Enhancing touchless interaction with the Leap Motion using a haptic glove

    Designing Touchless Gestural Interfaces for Public Displays

    In the last decade, many authors have investigated touchless and gestural interaction as a novel tool for interacting with computers. Moreover, technological innovations have allowed interactive displays to be installed in private and public places. However, interactivity is usually implemented through touchscreens, whereas technologies able to recognize body gestures are adopted far more rarely, especially in integration with commercial public displays. Nowadays, the opportunity to investigate touchless interfaces for such systems has become concrete and is being studied by many researchers. Indeed, this interaction modality makes it possible to overcome several issues that cannot be solved by touch-based solutions, e.g. keeping a high level of hygiene on the screen surface, as well as providing big displays with interactive capabilities. The main goal of this thesis is to describe the design process for implementing touchless gestural interfaces for public displays. This implies the need to overcome several typical issues of both public displays (e.g. interaction blindness, immediate usability) and touchless interfaces (e.g. communicating touchless interactivity). To this end, a novel Avatar-based Touchless Gestural Interface (ABaToGI) has been developed; its design process is described in the thesis, along with the user studies conducted for its evaluation. Moreover, the thesis analyzes how the presence of the Avatar may affect user interactions in terms of perceived cognitive workload, and whether it can foster bimanual interactions. Then, as ABaToGI was designed for public displays, it was installed in an actual deployment in order to be evaluated in the wild (i.e. not in a lab setting). The resulting outcomes, together with the previously described studies, have been used to introduce a set of design guidelines for developing future touchless gestural interfaces, with a particular focus on Avatar-based ones. The results of this thesis also provide a basis for the future research directions that conclude the work.

    A Thematic and Reference Analysis of Touchless Technologies

    The purpose of this research is to explore the utility and current state of touchless technologies. Five categories of technologies were identified by collecting and reviewing the literature: facial/biometric recognition, gesture recognition, touchless sensing, personal devices, and voice recognition. A thematic analysis was conducted to evaluate the advantages and disadvantages of the five categories, and a reference analysis was conducted to determine the similarities between articles in each category. Touchless sensing was shown to have the most advantages and the least similar references; gesture recognition was the opposite. Comparing the two analyses shows that more reliable technology types are more beneficial and more diverse.
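    As an illustration of one way such a reference analysis could quantify similarity, the sketch below scores the Jaccard overlap between two articles' reference lists. Both the metric choice and the sample citation keys are assumptions for illustration, not the authors' stated method.

```python
# Hypothetical sketch: score reference-list similarity between two
# articles with Jaccard overlap. The citation keys are made-up examples.
def jaccard(refs_a: set[str], refs_b: set[str]) -> float:
    """Fraction of references the two articles share."""
    if not refs_a and not refs_b:
        return 0.0
    return len(refs_a & refs_b) / len(refs_a | refs_b)

article_1 = {"smith2018", "lee2020", "garcia2019"}
article_2 = {"lee2020", "garcia2019", "chen2021"}
print(f"{jaccard(article_1, article_2):.2f}")  # 0.50
```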