7 research outputs found

    A Gesture Driven 3D Interface

    3D visualisation systems have a wide variety of applications, particularly as teaching and simulation tools in a broad range of fields. Manipulating 3D objects with conventional input devices, however, can be awkward and impractical, and while various less common interfaces for 3D manipulation do exist, many are expensive and out of reach for the common user. We present a camera-based application in which the user manipulates and translates objects in a 3D visualisation system in real time using hand gestures. We also derive a highly usable gesture vocabulary from a Wizard of Oz usability experiment, yielding a simple and intuitive hand-gesture interface for interacting with 3D objects.
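    The abstract gives no implementation details, but its core interaction, dragging a 3D object with a tracked hand, can be sketched. The snippet below is a hypothetical Python illustration: it assumes an external tracker that reports a normalised 2D palm position and a grab flag each frame, and none of the names come from the paper.

        import numpy as np

        class GestureDrag:
            """Map frame-to-frame palm displacement to 3D object translation."""

            def __init__(self, gain=2.0):
                self.gain = gain      # camera-space to world-space scale
                self.prev = None      # palm position on the previous frame

            def update(self, palm_xy, grabbing, object_pos):
                # palm_xy: (x, y) in [0, 1] from a hand tracker (assumed input).
                if not grabbing:
                    self.prev = None  # gesture released: stop dragging
                    return object_pos
                if self.prev is not None:
                    dx, dy = np.subtract(palm_xy, self.prev)
                    # Translate in the camera-parallel plane; depth would need
                    # an extra cue such as apparent hand size.
                    object_pos = object_pos + self.gain * np.array([dx, -dy, 0.0])
                self.prev = tuple(palm_xy)
                return object_pos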

    Hand Gesture Interaction with Human-Computer

    Hand gestures are an important modality for human-computer interaction. Compared to many existing interfaces, hand gestures have the advantages of being easy to use, natural, and intuitive. Successful applications of hand gesture recognition include computer game control, human-robot interaction, and sign language recognition, to name a few. Vision-based recognition systems can give computers the capability of understanding and responding to hand gestures. The paper gives an overview of the field of hand gesture interaction with computers and describes the early stages of a project on gestural command sets, an issue that has often been neglected. We have built a first prototype for exploring the use of pie and marking menus in gesture-based interaction; the purpose is to study whether such menus could, with practice, support the development of autonomous gestural command sets. The scenario is remote control of home appliances such as TV sets and DVD players, which could later be extended to the more general scenario of ubiquitous computing in everyday situations. Some early observations are reported, mainly concerning user fatigue and the precision of gestures. Future work is discussed, such as introducing flow menus to reduce fatigue and control menus for continuous control functions; the computer vision algorithms will also have to be developed further.
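    The abstract names pie and marking menus but does not show how a gesture selects an item. A minimal sketch of the usual mechanism, mapping a stroke's direction onto one of N equal angular sectors, is given below; the item labels, dead-zone size, and function name are illustrative assumptions, not the prototype's code.

        import math

        def pie_menu_selection(start, end, items, dead_zone=20.0):
            """Return the pie-menu item selected by a stroke from `start`
            to `end` (pixel coordinates, y growing downwards). Sector 0 is
            centred on 'up'; strokes shorter than `dead_zone` pixels
            select nothing."""
            dx, dy = end[0] - start[0], end[1] - start[1]
            if math.hypot(dx, dy) < dead_zone:
                return None                      # too short to be intentional
            angle = math.degrees(math.atan2(dx, -dy)) % 360  # 0 = up, clockwise
            sector = 360.0 / len(items)
            return items[int(((angle + sector / 2) % 360) // sector)]

        # Hypothetical 8-item remote-control menu:
        items = ["Power", "Vol+", "Ch+", "Menu", "Mute", "Vol-", "Ch-", "Back"]
        print(pie_menu_selection((100, 100), (100, 40), items))  # upward stroke -> 'Power'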

    Lightweight palm and finger tracking for real-time 3D gesture control

    We present a novel technique implementing barehanded interaction with virtual 3D content by employing a time-of-flight camera. The system improves on existing 3D multi-touch systems by working regardless of lighting conditions and by supplying a working volume large enough for multiple users; previous systems were limited either by environmental requirements, working volume, or the computational resources necessary for real-time operation. By employing a time-of-flight camera, the system reliably recognizes gestures at the finger level in real time at more than 50 fps on commodity computer hardware, using our newly developed precision hand- and finger-tracking algorithm. Building on this algorithm, the system performs gesture recognition with simple constraint modeling over statistical aggregations of the hand appearances in a working volume of more than 8 cubic meters. Two iterations of user tests were performed on a prototype system, demonstrating the feasibility and usability of the approach and providing first insights into the acceptance of true barehanded touch-based 3D interaction.
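    The tracking algorithm itself is not reproduced in the abstract. The sketch below only illustrates why a time-of-flight camera sidesteps lighting conditions: segmentation can threshold metric depth directly instead of relying on colour. The thresholds and the crude fingertip heuristic are assumptions for the example, not the paper's method.

        import numpy as np

        def segment_hand(depth, near=0.4, far=1.2):
            """Segment a hand from an (H, W) depth image in metres and
            return a rough palm centroid plus fingertip candidates."""
            mask = (depth > near) & (depth < far)  # points inside the volume
            if not mask.any():
                return None, None
            ys, xs = np.nonzero(mask)
            palm = np.array([xs.mean(), ys.mean()])    # blob centroid as palm
            pts = np.stack([xs, ys], axis=1).astype(float)
            dist = np.linalg.norm(pts - palm, axis=1)
            tips = pts[np.argsort(dist)[-5:]]          # pixels furthest from palm
            return palm, tips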

    Prosody and Kinesics Based Co-analysis Towards Continuous Gesture Recognition

    The aim of this study is to develop a multimodal co-analysis framework for continuous gesture recognition by exploiting the prosodic and kinesic manifestations of natural communication. Using this framework, a co-analysis pattern between correlating components is obtained; the pattern is then clustered using K-means to determine how well it distinguishes the gestures. The features that differentiate the proposed approach from other models are its lower susceptibility to idiosyncrasies, its scalability, and its simplicity. The experiment was performed on the Multimodal Annotated Gesture Corpus (MAGEC), which we created for the research community studying non-verbal communication, particularly gestures.
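    The abstract specifies K-means clustering of the co-analysis pattern. As a self-contained illustration, plain K-means over per-segment feature vectors is sketched below; the idea of concatenating prosodic and kinesic statistics into one vector per gesture segment is an assumption for the example, not the paper's exact feature set.

        import numpy as np

        def kmeans(features, k=3, iters=50, seed=0):
            """Cluster an (N, D) array of per-segment feature vectors
            (e.g. pitch statistics concatenated with hand-velocity
            statistics) into k groups with plain K-means."""
            rng = np.random.default_rng(seed)
            centers = features[rng.choice(len(features), k, replace=False)]
            for _ in range(iters):
                # Assign each segment to its nearest cluster centre.
                labels = np.linalg.norm(
                    features[:, None, :] - centers[None, :, :], axis=2).argmin(axis=1)
                new = np.array([features[labels == j].mean(axis=0)
                                if (labels == j).any() else centers[j]
                                for j in range(k)])
                if np.allclose(new, centers):
                    break
                centers = new
            return labels, centers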

    Hand Gestures as Interaction and Control in Virtual Reality Using Google Cardboard and Leap Motion

    Virtual reality is one of today's fastest-developing technologies, and many developers are competing to build virtual-reality applications. Interaction within virtual environments is still limited, however, which motivates hand-gesture-based interaction and control. Using Leap Motion to capture hand movement in real time, the user can interact directly with the virtual environment, while Google Cardboard serves as the user's main camera for experiencing virtual reality on a smartphone. Leap Motion interaction in virtual reality was tested by comparing it against a widely known input device, the gamepad, in a set of interaction scenarios followed by a post-test questionnaire. The tests showed that users found the gamepad easier to use than Leap Motion when interacting with the virtual environment, mainly because they were more familiar with the gamepad.
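    For readers unfamiliar with the device, the classic Leap Motion SDK (v2) exposed per-frame hand data through simple polling. The Python sketch below reads the palm position and a grab gesture; the grab threshold and returned fields are illustrative choices, and the study itself ran Leap Motion inside a smartphone VR setup rather than through this desktop API.

        import Leap  # classic Leap Motion SDK v2 Python bindings (assumed installed)

        def poll_hand(controller, grab_threshold=0.8):
            """Return the first tracked hand's palm position (mm, Leap
            coordinates) and whether it is making a grab gesture."""
            frame = controller.frame()
            if frame.hands.is_empty:
                return None
            hand = frame.hands[0]
            pos = hand.palm_position
            return {
                "palm_mm": (pos.x, pos.y, pos.z),
                "grabbing": hand.grab_strength > grab_threshold,
            }

        controller = Leap.Controller()
        state = poll_hand(controller)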

    Direct interaction with large displays through monocular computer vision

    Large displays are everywhere, and have been shown to provide higher productivity gains and user satisfaction than traditional desktop monitors. The computer mouse remains the most common input tool for interacting with these larger displays, and much effort has gone into making this interaction more natural and intuitive. The use of computer vision for this purpose has been well researched, as it provides freedom and mobility to users and allows them to interact at a distance. Interaction that relies on monocular computer vision, however, has not been well researched, particularly for recovering depth information. This thesis investigates the feasibility of using monocular computer vision to allow bare-hand interaction with large display systems from a distance. By taking into account the location of the user and the available interaction area, a dynamic virtual touchscreen can be estimated between the display and the user. In the process, theories and techniques that make interaction with a computer display as easy as pointing at real-world objects are explored. Studies were conducted to investigate how humans naturally point at objects with their hands and to examine the inadequacies of existing pointing systems, and the models that underpin the pointing strategies used in many previous interactive systems were formalized. A proof-of-concept prototype was built and evaluated through several user studies. The results suggest that it is possible to support natural user interaction with large displays using low-cost monocular computer vision; furthermore, the models developed and lessons learnt in this research can help designers build more accurate and natural interactive systems that exploit humans' natural pointing behaviours.
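    The abstract describes pointing models without giving equations. One model consistent with the description is eye-fingertip ray-casting, in which the cursor lies where the line from the user's eye through the fingertip meets the display plane; the geometry is sketched below with assumed world coordinates, not taken from the thesis.

        import numpy as np

        def pointing_target(eye, fingertip, plane_point, plane_normal):
            """Intersect the eye-through-fingertip ray with the display
            plane; returns the world-space cursor point or None."""
            eye, tip = np.asarray(eye, float), np.asarray(fingertip, float)
            n, p0 = np.asarray(plane_normal, float), np.asarray(plane_point, float)
            d = tip - eye                     # ray direction
            denom = n.dot(d)
            if abs(denom) < 1e-9:
                return None                   # ray parallel to the display
            t = n.dot(p0 - eye) / denom
            return eye + t * d if t > 0 else None

        # Display plane z = 0; user ~2 m away, pointing at the centre.
        print(pointing_target(eye=[0, 1.7, 2.0], fingertip=[0, 1.5, 1.5],
                              plane_point=[0, 0, 0], plane_normal=[0, 0, 1]))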

    Pre-conference proceedings of the 3rd IFIP TC 13.6 HWID working conference

    The committees under IFIP include the Technical Committee TC13 on Human-Computer Interaction, within which the work of this volume has been conducted. TC13 aims to encourage theoretical and empirical human-science research to promote the design and evaluation of human-oriented ICT. Within TC13 there are several Working Groups concerned with different aspects of Human-Computer Interaction. The flagship event of TC13 is the biennial international conference INTERACT, at which both invited and contributed papers are presented; contributed papers are rigorously refereed, and the rejection rate is high. Publications arising from TC13 events appear as conference proceedings, such as the INTERACT proceedings, or as collections of selected and edited papers from working conferences and workshops. See http://www.ifip.org/ for the aims and scopes of TC13 and its associated Working Groups.