
    Intuitive human-device interaction for video control and feedback


    Do That, There: An Interaction Technique for Addressing In-Air Gesture Systems

    When users want to interact with an in-air gesture system, they must first address it. This involves finding where to gesture so that their actions can be sensed, and how to direct their input towards that system so that they do not also affect others or cause unwanted effects. This is an important problem [6] which lacks a practical solution. We present an interaction technique which uses multimodal feedback to help users address in-air gesture systems. The feedback tells them how (“do that”) and where (“there”) to gesture, using light, audio and tactile displays. By doing that there, users can direct their input to the system they wish to interact with, in a place where their gestures can be sensed. We discuss the design of our technique and three experiments investigating its use, finding that users can “do that” well (93.2%–99.9%) while accurately (51 mm–80 mm) and quickly (3.7 s) finding “there”.
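
As a sketch of the “there” part of such feedback, a hand position can be mapped to a feedback intensity that rises as the hand approaches the sensed zone and saturates inside it. The function below is a minimal illustration assuming a circular sensing zone; the names and the linear ramp are illustrative, not the paper's actual feedback design.

```python
import math

def address_feedback(hand_xy, zone_center, zone_radius):
    """Map a 2-D hand position to a feedback intensity in [0, 1].

    Illustrative sketch: intensity is 1.0 inside the sensing zone and falls
    off linearly to 0.0 at twice the zone radius. Parameter names and the
    ramp shape are assumptions, not the paper's design.
    """
    dist = math.hypot(hand_xy[0] - zone_center[0], hand_xy[1] - zone_center[1])
    if dist <= zone_radius:
        return 1.0  # inside the sensed area: full light/audio/tactile feedback
    # outside: fade out over one additional radius
    return max(0.0, 1.0 - (dist - zone_radius) / zone_radius)
```

Such a scalar could drive the brightness of a light display or the amplitude of audio/tactile feedback as the user searches for the sensing zone.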

    Interactive exploration of historic information via gesture recognition

    Developers of interactive exhibits often struggle to find appropriate input devices that enable intuitive control and permit visitors to engage effectively with the content. Recently, motion-sensing input devices like the Microsoft Kinect or Panasonic D-Imager have become available, enabling gesture-based control of computer systems. These devices present an attractive input option for exhibits, since users can interact with their hands and are not required to physically touch any part of the system. In this thesis we investigate techniques to enable the raw data coming from these types of devices to be used to control an interactive exhibit. Object recognition and tracking techniques are used to analyse the user's hand, from which movements and clicks are detected. To show the effectiveness of the techniques, the gesture system is used to control an interactive system designed to inform the public about iconic buildings in the centre of Norwich, UK. We evaluate two methods of making selections in the test environment. At the time of experimentation, the technologies were relatively new to the image processing environment. As a result of the research presented in this thesis, the techniques and methods used have been detailed and published [3] at the VSMM (Virtual Systems and Multimedia) 2012 conference with the intention of further advancing the area.
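
One common touchless selection method is dwell-based selection, where hovering over a target for long enough triggers a click. The abstract does not specify which two selection methods the thesis evaluated, so the sketch below is purely illustrative, assuming a tracked hand position has already been resolved to the target currently under the hand.

```python
import time

class DwellSelector:
    """Select a target once the hand has hovered over it for long enough.

    Illustrative sketch of dwell-based touchless selection; the dwell
    duration and the injectable clock are assumptions for testability.
    """

    def __init__(self, dwell_seconds=1.0, clock=time.monotonic):
        self.dwell_seconds = dwell_seconds
        self.clock = clock
        self._target = None   # target currently being dwelled on
        self._since = None    # when the hand first settled on it

    def update(self, hovered_target):
        """Feed the target under the hand each frame (or None).

        Returns the target once the dwell time elapses, else None."""
        now = self.clock()
        if hovered_target != self._target:
            self._target = hovered_target
            self._since = now
            return None
        if hovered_target is not None and now - self._since >= self.dwell_seconds:
            self._since = now  # re-arm so the selection does not repeat every frame
            return hovered_target
        return None
```

A competing method in such evaluations is often a discrete gesture (e.g. a push), which trades the dwell delay for a higher risk of accidental activation.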

    Hand Gesture-based Process Modeling for Updatable Processes

    The increasing popularity of process models leads to the need for alternative interaction methods to view and manipulate them. One major research field is gesture-based manipulation. Although there is prior work in this area, it utilizes only two dimensions for gesture recognition. The objective of this work is to introduce a system that manipulates process models using a three-dimensional hand-gesture input interface utilizing the RGB-D camera of the Microsoft Kinect. With this, an input interface can be created that is more natural and thus easier to learn and use than its two-dimensional counterpart. This work therefore discusses how gestures are recognized as well as technical implementation aspects (e.g., how process models are painted, accessed and manipulated). Furthermore, it explains the problems arising from the use of the Kinect as a hand-tracking system and shows which steps have been taken to solve them.
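
A push-toward-the-camera “click” is one simple gesture that the third (depth) dimension makes possible. The detector below is a hypothetical sketch assuming a short history of hand depth readings in millimetres from an RGB-D sensor; it is not the system's actual recognizer.

```python
def detect_push(z_history, threshold_mm=80, window=5):
    """Detect a 'push' click from the hand's depth trace.

    Illustrative sketch: fires when the hand has moved toward the camera
    (depth decreasing) by at least threshold_mm within the last `window`
    samples. Threshold and window size are assumed values.
    """
    if len(z_history) < window:
        return False
    recent = z_history[-window:]
    return (recent[0] - min(recent)) >= threshold_mm
```

In practice such a detector would be combined with smoothing and a refractory period so that sensor noise and the hand's return motion do not trigger repeated clicks.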

    Spatial Programming for Industrial Robots through Task Demonstration

    We present an intuitive system for the programming of industrial robots using markerless gesture recognition and mobile augmented reality, in terms of programming by demonstration. The approach covers gesture-based task definition and adaptation by human demonstration, as well as task evaluation through augmented reality. A 3D motion tracking system and a handheld device establish the basis for the presented spatial programming system. In this publication, we present a prototype toward the programming of an assembly sequence consisting of several pick-and-place tasks. A scene reconstruction provides pose estimation of known objects with the help of the handheld's 2D camera. The programmer is thus able to define the program through natural bare-hand manipulation of these objects, aided by direct visual feedback in the augmented reality application. The program can be adapted by gestures and subsequently transmitted to an arbitrary industrial robot controller using a unified interface. Finally, we discuss an application of the presented spatial programming approach toward robot-based welding tasks.
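
A demonstrated assembly sequence of pick-and-place tasks could be represented as a simple data structure before being flattened into controller commands. The model below is hypothetical; the paper's unified robot-controller interface is not specified in the abstract, so the command vocabulary here is an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class PickAndPlaceTask:
    """One demonstrated step: an object id plus 6-DoF pick and place poses.

    Hypothetical data model; poses are (x, y, z, roll, pitch, yaw) tuples
    in the robot base frame.
    """
    object_id: str
    pick_pose: tuple
    place_pose: tuple

@dataclass
class AssemblyProgram:
    tasks: list = field(default_factory=list)

    def add(self, task):
        self.tasks.append(task)

    def to_commands(self):
        """Flatten the demonstrated sequence into generic controller commands."""
        cmds = []
        for t in self.tasks:
            cmds.append(("move_to", t.pick_pose))
            cmds.append(("grasp", t.object_id))
            cmds.append(("move_to", t.place_pose))
            cmds.append(("release", t.object_id))
        return cmds
```

Keeping the demonstrated tasks in such an intermediate form is what allows later gesture-based adaptation (reordering or re-posing a task) before transmission to a specific controller.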

    TiFEE : an input event-handling framework with touchless device support

    Integrated master's thesis. Informatics and Computing Engineering. Universidade do Porto, Faculdade de Engenharia. 201

    MolecularRift, a Gesture Based Interaction Tool for Controlling Molecules in 3-D

    Visualization of molecular models is a vital part of modern drug design. Improved visualization methods increase conceptual understanding and enable faster and better decision making. The introduction of virtual reality goggles such as the Oculus Rift has opened new opportunities for such visualizations. A new interactive visualization tool (MolecularRift), which lets the user experience molecular models in a virtual reality environment, was developed in collaboration with AstraZeneca. In an attempt to create a more natural way to interact with the tool, users can steer and control molecules through hand gestures. The gestures are recorded using depth data from a Microsoft Kinect v2 sensor and interpreted using per-pixel algorithms, which operate only on the captured frames, thus freeing the user from additional devices such as a mouse, keyboard, touchpad or even piezoresistive gloves. MolecularRift was developed from a usability perspective using an iterative development process and test-group evaluations. The iterations allowed an agile process in which features could easily be evaluated to monitor behavior and performance, resulting in a user-optimized tool. We conclude with reflections on virtual reality's capabilities in chemistry and possibilities for future projects.

    Virtual reality is the future. New technologies are constantly being developed, and as computing capacity improves we find new ways to use them together. We have developed a new interactive visualization tool (MolecularRift) that lets the user experience molecular models in a virtual reality environment. Today's pharmaceutical industry is in constant need of new methods for visualizing potential drugs in 3-D. Several tools already exist for visualizing molecules in 3-D stereo. Our newly developed virtual reality techniques give drug developers the opportunity to “step into” the molecular structures and experience them in an entirely new way.
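
Per-pixel processing of depth frames can be sketched as a simple nearest-object segmentation, assuming the gesturing hand is the object closest to the sensor. This is an illustrative sketch, not MolecularRift's actual algorithm; the depth-band width is an assumed parameter.

```python
import numpy as np

def segment_hand(depth_frame, band_mm=120):
    """Per-pixel hand mask from a depth frame (values in mm, 0 = no data).

    Illustrative sketch: keeps every valid pixel within band_mm of the
    nearest valid depth, on the assumption that the hand is the closest
    object to the sensor while the user is gesturing.
    """
    valid = depth_frame > 0
    if not valid.any():
        return np.zeros(depth_frame.shape, dtype=bool)
    nearest = depth_frame[valid].min()
    return valid & (depth_frame <= nearest + band_mm)
```

The resulting boolean mask can then feed contour or fingertip analysis on each captured frame, which is what lets the interaction work without any worn or held device.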

    Implementation of a Smart Ward System in a Hi-Tech Hospital by Using a Kinect Sensor Camera

    With the evolution of sophisticated image-capturing devices, the potential of image processing is being harnessed in various commercial applications. This paper presents a unique idea of using image processing in the field of health monitoring. Research has been carried out in communication and networking for the design of smart ward systems used in health monitoring, which empower hospital staff to focus more on patients' dynamic movement by enabling data collection at the bedside and removing the need for duplication and double-handling. The proposed work is designed for detecting and recognizing gestures obtained through the Kinect camera. Training includes data collection and feature extraction. The trained data is then classified using k-nearest neighbors, support vector machines and artificial neural networks. To adopt the best classifier, this paper compares the accuracy of all the above techniques. The mode-selection operation has been tested with the three classifiers, and the support vector machine proved the best among them. The evaluation is based on various performance metrics such as classification effectiveness, accuracy and recognition rate.
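
Of the three classifiers compared, k-nearest neighbors is the simplest to sketch. The minimal implementation below (Euclidean distance, majority vote over the k nearest training samples) is illustrative only; the paper's feature extraction and its SVM and neural-network models are not shown here.

```python
import numpy as np

def knn_predict(train_X, train_y, X, k=3):
    """Minimal k-nearest-neighbors classifier.

    Illustrative sketch: for each query row in X, find the k training
    samples closest in Euclidean distance and return the majority label.
    """
    preds = []
    for x in X:
        dists = np.linalg.norm(train_X - x, axis=1)     # distance to every training sample
        nearest = train_y[np.argsort(dists)[:k]]        # labels of the k nearest
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])         # majority vote
    return np.array(preds)

def accuracy(y_true, y_pred):
    """Fraction of correct predictions, the metric used to compare classifiers."""
    return float(np.mean(y_true == y_pred))
```

Running all candidate classifiers over the same held-out gesture features and comparing this accuracy score is the model-selection procedure the abstract describes, with the SVM winning in the paper's experiments.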