25 research outputs found

    Review of constraints on vision-based gesture recognition for human–computer interaction

    The ability of computers to recognise hand gestures visually is essential for progress in human-computer interaction. Gesture recognition has applications ranging from sign language to medical assistance to virtual reality. However, it is extremely challenging, not only because of its diverse contexts, multiple interpretations, and spatio-temporal variations, but also because of the complex non-rigid properties of the hand. This study surveys the major constraints on vision-based gesture recognition arising in detection and pre-processing, representation and feature extraction, and recognition. Current challenges are explored in detail.

    The usability of an augmented reality map application on the Microsoft Hololens 2

    Abstract. Augmented reality (AR) has seen rapid progress in recent years, especially from a consumer standpoint. Hardware and software alike are becoming better, cheaper, and more widely available. As the technology becomes more mainstream, we will see adaptations of many applications currently used on personal computers and smartphones. This thesis explores one such adaptation by developing and studying the usability and effectiveness of a map application running on one of the most modern AR headsets available to consumers, the Microsoft HoloLens 2. We chose the cross-platform game engine Unity to develop the application: it let us build reliably and quickly, its third-party packages offer plenty of ready-to-use assets and code, and both group members had previous experience with it. While planning the application we studied research papers to understand what makes a good AR application. With the application ready for testing, we recruited test subjects from family members to give us feedback on the efficiency and usability of the system as a whole. The test subjects performed tasks inside the application but were also free to explore it as much as they liked. After the test, they filled out a questionnaire and participated in an interview, which were then analyzed further. From the questionnaire and interview answers we drew several conclusions. Firstly, the system in its current state provides no additional value compared to traditional browser- or mobile-based map applications; it is also inconvenient, hard to use, and unintuitive. Despite these shortcomings, the test subjects saw future potential in the system and found it useful and fun to use. The findings suggest that even if the application were developed further, the experience as a whole would still be lacking, as AR technology is not quite ready for mainstream adoption.

    Move, hold and touch: A framework for Tangible gesture interactive systems

    © 2015 by the authors. Technology is spreading into our everyday world, and digital interaction beyond the screen, with real objects, lets us take advantage of our natural manipulative and communicative skills. Tangible gesture interaction exploits these skills by bridging two popular domains in human-computer interaction: tangible interaction and gestural interaction. In this paper, we present the Tangible Gesture Interaction Framework (TGIF) for classifying and guiding work in this field. We propose a classification of gestures according to three relationships with objects: move, hold, and touch. Following this classification, we analyzed previous work in the literature to derive guidelines and common practices for designing and building new tangible gesture interactive systems. We describe four interactive systems as application examples of the TGIF guidelines, and we discuss the descriptive, evaluative, and generative power of TGIF.

    Integrated multimodal interaction framework for virtual reality foot reflexology stress therapy

    Frameworks in interaction research have taken varying forms across numerous studies and have been applied for either specific or general purposes in several domains. Previous studies have highlighted virtual reality (VR) in stress therapy and revealed the potential of foot reflexology therapy using VR technology. However, the interaction framework for foot reflexology through virtual reality requires further investigation. This study presents the design and evaluation of an integrated multimodal interaction framework for virtual reality foot reflexology stress therapy. The components of the proposed framework were identified from the literature review and previous research, and included design principles, technology, structural components, multimodal interaction architecture, and segment composition. The proposed framework was then validated through expert reviews. This was followed by prototype development, which explored the effectiveness of the virtual reality foot reflexology therapy application on relaxation and stress relief using the Smith Relaxation States Inventory (SRSI-3). A pre- and post-test quasi-experiment was employed for the evaluation. The findings revealed that Virtual Reality Foot Reflexology Stress Therapy (VR-FRST) effectively evokes the relaxation state categories of transcendence, mindfulness, positive energy, and basic relaxation, and also reduces users' stress. This research provides a concise, organized, practical, and validated integrated multimodal interaction framework for the design and development of foot reflexology therapy in a virtual environment, contributing to interaction design for virtual reality developers and to complementary therapy for alternative medicine practitioners.

    An aesthetics of touch: investigating the language of design relating to form

    How well can designers communicate qualities of touch? This paper presents evidence that they have some capability to do so, much of which appears to have been learned, but that at present they make limited use of such language. Interviews with graduate designer-makers suggest that they are aware of and value the importance of touch and materiality in their work, but lack a vocabulary to match their detailed explanations of other aspects, such as their intent or selection of materials. We believe more attention should be paid to the verbal dialogue that happens in the design process, particularly as other researchers show that even making-based learning has a strong verbal element. However, verbal language alone does not appear adequate for a comprehensive language of touch. Graduate designer-makers' descriptive practices combined non-verbal manipulation with verbal accounts. We thus argue that haptic vocabularies do not simply describe material qualities but are situated competences that physically demonstrate the presence of haptic qualities. Such competences are more important than verbal vocabularies in isolation. Design support for developing and extending haptic competences must take this wide range of considerations into account to comprehensively improve designers' capabilities.

    The composer as technologist: an investigation into compositional process

    This work presents an investigation into compositional process, in which a study of musical gesture, certain areas of cognitive musicology, computer vision technologies, and object-oriented programming provides the basis for the composer (the author) to assume the role of a technologist and acquire the knowledge and skills to that end. In particular, it focuses on the development of a video gesture-recognition heuristic and its application to the compositional problems posed. The result is an interactive musical work, with score, for violin and electronics that supports the research findings. In addition, the investigative approach to developing technology that solves musical problems, exploring practical composition and aesthetic challenges, is detailed.

    Hybrid algorithm for hand gesture recognition using local Gabor filter and Mel-frequency cepstral coefficients

    A hand gesture is a movement of the hands that conveys meaning when communicating with other people. However, using hand gestures as a medium for communication requires correct recognition of the intended pose, and for this reason hand gesture recognition is an active area of research in the vision community. Various algorithms have been proposed for gesture recognition, but they are not optimally designed for accuracy. Accuracy is the most important parameter for any recognition system, compared to other significant parameters, yet increasing accuracy tends to degrade the others; in particular, it drives the algorithm toward high complexity.
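    To illustrate the kind of feature extraction this abstract names, the sketch below applies a small bank of 2-D Gabor filters to a toy image. It is a minimal illustration under assumed parameters (kernel size, wavelength, orientations); it is not the paper's actual hybrid algorithm, which additionally combines Mel-frequency cepstral coefficients.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5):
    """Real part of a 2-D Gabor kernel at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates into the filter's orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    # Gaussian envelope times a cosine carrier along the rotated x-axis.
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def gabor_features(image, orientations=4):
    """Mean response magnitude for each orientation in the filter bank."""
    feats = []
    for k in range(orientations):
        kern = gabor_kernel(theta=k * np.pi / orientations)
        resp = convolve2d(image, kern, mode="same", boundary="symm")
        feats.append(np.abs(resp).mean())
    return np.array(feats)

# Toy "hand" image: a bright vertical bar on a dark background.
img = np.zeros((32, 32))
img[:, 14:18] = 1.0
f = gabor_features(img)   # one scalar feature per orientation
```

    In a full pipeline these responses would feed a classifier; the toy image and the mean-magnitude pooling are assumptions made only to keep the example self-contained.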

    Context-aware gestural interaction in the smart environments of the ubiquitous computing era

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy.
    Technology is becoming pervasive, and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Recently, researchers have begun to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interaction. Many issues remain open in this emerging domain; in particular, there is a lack of common guidelines for the coherent implementation of gestural interfaces. This research investigates gestural interactions between humans and smart environments. It proposes a novel framework for the high-level organization of context information, conceived to support a novel approach that uses functional gestures to reduce gesture ambiguity, shrink gesture taxonomies, and improve usability. To validate this framework, a proof-of-concept prototype has been developed, implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests were conducted to assess the gesture recognition accuracy and the usability of interfaces developed following the proposed framework. The results show that the method provides accurate gesture recognition from very different viewpoints, and the usability tests yielded high scores. Context information was further investigated by tackling the problem of user status, understood here as human activity, for which a technique based on an innovative application of electromyography is proposed. The tests show that the proposed technique achieves good activity recognition accuracy. Context is also treated as system status: in ubiquitous computing, the system can adopt different paradigms, namely wearable, environmental, and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms. Moreover, it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms.
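    The thesis does not detail its electromyography technique here, but the general idea of EMG-based activity recognition can be sketched with a standard amplitude feature: windowed root-mean-square (RMS) of the raw signal, thresholded to flag muscle activity. The sampling rate, window length, threshold, and synthetic signal below are all assumed values for illustration, not the thesis's method.

```python
import numpy as np

def emg_rms_features(signal, fs=1000, win_ms=200):
    """Windowed RMS of one raw EMG channel: a classic amplitude
    feature for detecting muscle activity."""
    win = int(fs * win_ms / 1000)          # samples per window
    n = len(signal) // win                 # whole windows only
    windows = signal[:n * win].reshape(n, win)
    return np.sqrt((windows ** 2).mean(axis=1))

# Synthetic channel at 1 kHz: 1 s of rest (low-variance noise)
# followed by 1 s of "activity" (high-variance noise).
rng = np.random.default_rng(0)
rest = rng.normal(0.0, 0.05, 1000)
active = rng.normal(0.0, 0.5, 1000)
rms = emg_rms_features(np.concatenate([rest, active]))
is_active = rms > 0.2   # hypothetical threshold for this toy scale
```

    A real system would replace the fixed threshold with a trained classifier over several such features, but the windowing step shown here is common to most EMG pipelines.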

    Image processing techniques for mixed reality and biometry

    2013 - 2014. This thesis work is focused on two applicative fields of image processing research which, for different reasons, have become particularly active in the last decade: mixed reality and biometry. Though the image processing techniques involved in these two research areas are often different, they share the key objective of recognizing salient features typically captured through imaging devices. Enabling technologies for augmented/mixed reality have been improved and refined throughout the last years, and more recently they seem to have finally passed the demo stage and become ready for practical industrial and commercial applications. In this regard, a crucial role will likely be played by the new generation of smartphones and tablets, equipped with an arsenal of sensors and connections and enough processing power to become the most portable and affordable AR platform ever. Within this context, techniques like gesture recognition, by means of simple, light, and robust capture hardware and advanced computer vision techniques, may play an important role in providing a natural and robust way to control software applications and to enhance on-the-field operational capabilities. The research described in this thesis is targeted toward advanced visualization and interaction strategies aimed at improving the operative range and robustness of mixed reality applications, particularly for demanding industrial environments... [edited by Author]

    A Framework For Abstracting, Designing And Building Tangible Gesture Interactive Systems

    This thesis discusses tangible gesture interaction, a novel paradigm for interacting with computers that blends concepts from the more popular fields of tangible interaction and gesture interaction. Taking advantage of the innate human abilities to manipulate physical objects and to communicate through gestures, tangible gesture interaction is particularly interesting for interacting in smart environments, bringing interaction with computers beyond the screen and back to the real world. Since tangible gesture interaction is a relatively new field of research, this thesis presents a conceptual framework that aims to support future work in the field. The Tangible Gesture Interaction Framework provides support on three levels. First, it helps in reflecting, from a theoretical point of view, on the different types of tangible gestures that can be designed: physically, through a taxonomy based on three components (move, hold, and touch) and additional attributes, and semantically, through a taxonomy of the semantic constructs that can be used to associate meaning with tangible gestures. Second, it helps in conceiving new tangible gesture interactive systems and designing new interactions based on gestures with objects, through dedicated guidelines for tangible gesture definition and common practices for different application domains. Third, it helps in building new tangible gesture interactive systems, supporting the choice among four different technological approaches (embedded and embodied, wearable, environmental, or hybrid) and providing general guidance for each. As an application of this framework, this thesis also presents seven tangible gesture interactive systems for three application domains: interacting with the in-vehicle infotainment system (IVIS) of a car, emotional and interpersonal communication, and interaction in a smart home.
    For the first application domain, four different systems that use gestures on the steering wheel as the means of interaction with the IVIS have been designed, developed, and evaluated. For the second application domain, an anthropomorphic lamp able to recognize gestures that humans typically perform for interpersonal communication has been conceived and developed. A second system, based on smart t-shirts, recognizes when two people hug and rewards the gesture with an exchange of digital information. Finally, a smart watch for recognizing gestures performed with objects held in the hand in the context of the smart home has been investigated. The analysis of existing systems found in the literature and of the systems developed during this thesis shows that the framework has good descriptive and evaluative power. The applications developed during this thesis show that the proposed framework also has good generative power.