13 research outputs found

    Human-computer interaction based on hand gestures using RGB-D sensors

    In this paper we present a new method for hand gesture recognition based on an RGB-D sensor. The proposed approach takes advantage of depth information to cope with the most common problems of traditional video-based hand segmentation methods: cluttered backgrounds and occlusions. The algorithm also uses colour and semantic information to accurately identify any number of hands present in the image. Ten different static hand gestures are recognised, including all different combinations of spread fingers. Additionally, movements of an open hand are tracked and six dynamic gestures are identified. The main advantage of our approach is that the user's hands may be at any position in the image, without the need to wear any specific clothing or additional devices. Moreover, the whole method can be executed without any initial training or calibration. Experiments carried out with different users and in different environments prove the accuracy and robustness of the method, which, additionally, can run in real time.
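
    The core idea of depth-assisted segmentation can be illustrated in a few lines. The sketch below is a hypothetical minimal version, not the authors' algorithm: it keeps only pixels inside an assumed interaction depth band, which suppresses the cluttered backgrounds that defeat colour-only methods; the thresholds and area cut-off are illustrative guesses.

```python
# Minimal sketch of depth-band hand segmentation (illustrative, not the
# paper's method). Thresholds and the blob-area cut-off are assumptions.
import cv2
import numpy as np

def segment_hands(depth_mm: np.ndarray, near: int = 400, far: int = 900):
    """Return contours of candidate hand regions in a depth image (mm)."""
    # Keep only pixels inside the expected interaction volume.
    mask = cv2.inRange(depth_mm, near, far)
    # Remove speckle noise typical of consumer RGB-D sensors.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Discard blobs too small to plausibly be a hand.
    return [c for c in contours if cv2.contourArea(c) > 2000]
```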

    Wearable Structured Light System in Non-Rigid Configuration

    Traditionally, structured light methods have been studied in rigid configurations, in which the position and orientation between the light emitter and the camera are fixed and known beforehand. In this paper we break with this rigidity and present a new structured light system in a non-rigid configuration. The system is composed of a wearable standard perspective camera and a simple laser emitter. Our non-rigid configuration permits free motion of the light emitter with respect to the camera. The point-based pattern emitted by the laser allows us to easily establish correspondences between the image from the camera and a virtual one generated from the light emitter. Using these correspondences, our method computes the rotation and translation (up to scale) of the planes of the scene onto which the point pattern is projected, and reconstructs them. This constitutes a very useful tool for navigation applications in indoor environments, which are mainly composed of planar surfaces.
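
    For points on a single scene plane seen in two views, such correspondences induce a homography whose decomposition yields exactly a rotation, a translation up to scale, and the plane normal. The sketch below shows the standard OpenCV route; it is not the authors' implementation, and the intrinsics and correspondences are synthetic placeholders.

```python
# Sketch: recover plane pose (R, t up to scale, normal n) from point
# correspondences between two views of a plane. All values are placeholders.
import cv2
import numpy as np

K = np.array([[525.0, 0.0, 320.0],   # assumed camera intrinsics
              [0.0, 525.0, 240.0],
              [0.0,   0.0,   1.0]])

# Synthetic plane-induced homography and correspondences.
H_true = np.array([[1.02,  0.01,  5.0],
                   [-0.01, 0.99, -3.0],
                   [1e-5,  2e-5,  1.0]])
pts_emitter = np.random.rand(20, 2) * 640
hom = np.hstack([pts_emitter, np.ones((20, 1))]) @ H_true.T
pts_camera = hom[:, :2] / hom[:, 2:]

# Points on one plane are related by a homography.
H, _ = cv2.findHomography(pts_emitter, pts_camera, cv2.RANSAC)

# Decomposition returns up to four (R, t/d, n) candidates; a real system
# must pick the physically valid one via positive-depth constraints.
n_sol, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
```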

    Concept for a VR system for intuitive modelling through natural interaction

    From the introduction: "Creative ideas are the basis on which companies adapt their products to changing market conditions, exploit new opportunities and survive in the market (Shalley et al. 2004). Organisations therefore try to create working conditions that foster creativity, by shaping their corporate culture and work environments accordingly or by providing special tools. One tool frequently used in the product development process is the parametric, associative CAD (Computer-Aided Design) system. Efficient operation of such extensive WIMP (Window, Icon, Menu, Pointing) based software must be learned through extensive training and practised regularly in order to maintain modelling skills. The cognitive effort required for its operation often impairs the creativity of a design engineer (Chandrasegaran et al. 2013), especially among less practised user groups. At the transition between the concept phase and the early design phase (work section 5 of VDI 2221), work is therefore carried out mainly with sketches and not yet in the parametric CAD system (VDI 2223 2004). In the spirit of 'frontloading', however, it would be desirable to be able to apply the first computer-aided design methods early on when creating preliminary designs. The use of virtual reality (VR) opens up the possibility of developing intuitive interaction methods, which may enable new modelling strategies. These allow the product developer to handle virtual models naturally. Because such operation is intuitive compared with the parametric, associative CAD system, creativity during the rough shaping of preliminary designs would be less constrained. ...

    Sensors and Technologies in Spain: State-of-the-Art

    The aim of this special issue was to provide a comprehensive view of the state of the art of sensor technology in Spain. Different problems drive the emergence and development of new sensor technologies and, vice versa, the emergence of new sensors facilitates the solution of existing real-world problems. [...]

    Recognition and interpretation of gestures with the Leap device

    With the aim of finding a more intuitive relationship between people and computers, recent years have seen great advances in the study and application of human-computer interaction (HCI). This area includes speech recognition, the touch screens of smartphones and tablets, and gesture recognition, which reached the general public with the market launch of several devices in the entertainment field. In this context, the Leap Motion device went on sale at the beginning of 2013 and has brought about a small revolution in the world of HCI. It stands out for its high precision, small size and low cost. Unlike other devices, which track a whole body in a wider environment at a distance of a metre or more, the Leap tracks fingers and hands exclusively. This Master's thesis presents a study of the Leap, analysing the development possibilities it offers and implementing gestures that are simple to perform yet precise, in order to make the system robust. A gestural vocabulary is also developed and applied to a practical case: an induction hob.
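
    A gesture that is "simple to perform yet precise" usually reduces to a small geometric test on tracked fingertip positions. The sketch below is a hypothetical example of the kind of rule such a vocabulary might contain, a pinch detector with hysteresis; the thresholds, names, and the (x, y, z) millimetre convention are assumptions, not the thesis' design.

```python
# Hypothetical pinch-gesture test on tracked fingertips (illustrative only).
# Positions are assumed to be (x, y, z) tuples in millimetres, as a hand
# tracker such as the Leap might deliver them.
import math

PINCH_ON_MM = 25.0    # enter the gesture below this distance
PINCH_OFF_MM = 40.0   # leave it only above this one

def update_pinch(thumb_tip, index_tip, was_pinching: bool) -> bool:
    d = math.dist(thumb_tip, index_tip)
    # Hysteresis: a clearly smaller distance is needed to start the gesture
    # than to end it, so the state does not flicker near the threshold.
    return d < PINCH_ON_MM if not was_pinching else d < PINCH_OFF_MM

# Example: fingertips about 20 mm apart start a pinch.
state = update_pinch((0, 0, 0), (14, 14, 5), was_pinching=False)
```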

    Systematic literature review of hand gestures used in human computer interaction interfaces

    Gestures, widely accepted as humans' natural mode of interaction with their surroundings, have been considered for use in human-computer interfaces since the early 1980s. They have been explored and implemented, with a range of success and maturity levels, in a variety of fields, facilitated by a multitude of technologies. Underpinning gesture theory, however, focuses on gestures performed simultaneously with speech, and the majority of gesture-based interfaces are supported by other modes of interaction. This article reports the results of a systematic review undertaken to identify characteristics of touchless/in-air hand gestures used in interaction interfaces. 148 articles reporting on gesture-based interaction interfaces were reviewed, identified through searches of engineering and science databases (Engineering Village, ProQuest, Science Direct, Scopus and Web of Science). The goal of the review was to map the field of gesture-based interfaces, investigate patterns in gesture use, and identify common combinations of gestures for different combinations of applications and technologies. The review found the community to be disparate, with little evidence of building upon prior work and no evident fundamental framework of gesture-based interaction. However, the findings can help inform future developments and provide valuable information about the benefits and drawbacks of different approaches. It was further found that the nature and appropriateness of the gestures used was not a primary factor in gesture elicitation when designing gesture-based systems, and that ease of technology implementation often took precedence.

    Development of an active vision system for robot inspection of complex objects

    Dissertation for the integrated master's degree in Mechanical Engineering (specialisation in Mechatronic Systems). The dissertation presented here falls within the scope of the IntVis4Insp project between the University of Minho and the company Neadvance. It focuses on the development of a 3D hand tracking system capable of extracting the hand position and orientation in order to prepare a manipulator for automatic inspection of leather pieces. The work starts with a literature review of the two main approaches to collecting the data needed for 3D hand tracking: glove-based methods and vision-based methods. The former rely on some kind of support mounted on the hand that holds all the sensors needed to measure the desired parameters, while the latter rely on one or more cameras to capture the hands and track their position and configuration through computer vision algorithms. The vision-based method selected for this work was Openpose. For each recorded image, this application can locate 21 keypoints on each hand that together form a skeleton of the hands. Openpose is used within the tracking system developed throughout this dissertation; its output feeds a more complete pipeline in which the location of those hand keypoints is crucial to track the hands in videos of the demonstrated movements. These videos were recorded with an RGB-D camera, the Microsoft Kinect, which provides a depth value for every RGB pixel recorded. With the depth information and the 2D location of the hand keypoints in the images, it was possible to obtain the 3D world coordinates of these points using the pinhole camera model. To define the hand position, one point is selected among the 21 for each hand; for the hand orientation, however, it was necessary to develop an auxiliary method called the "Iterative Pose Estimation Method" (ITP), which estimates the complete 3D pose of the hands. This method relies only on the 2D locations of every hand keypoint and the 3D world coordinates of the wrists to estimate the correct 3D world coordinates of all the remaining points on the hand. This solves the problems related to hand occlusions that are prone to happen when only one camera is used to record the inspection videos. Once the world locations of all the points on the hands are accurately estimated, their orientation can be defined by selecting any three points forming a plane.
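
    The back-projection step described above follows directly from the pinhole model: a pixel (u, v) with measured depth Z lifts to camera coordinates X = (u - cx) Z / fx, Y = (v - cy) Z / fy. A minimal sketch under assumed intrinsics (typical Kinect-like placeholder values, not those calibrated in the dissertation):

```python
# Sketch of pinhole back-projection: lift a 2D hand keypoint with known
# depth to 3D camera coordinates. Intrinsics are placeholder values.
import numpy as np

fx, fy = 525.0, 525.0   # assumed focal lengths (pixels)
cx, cy = 319.5, 239.5   # assumed principal point

def backproject(u: float, v: float, depth_m: float) -> np.ndarray:
    """Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# e.g. a wrist keypoint at pixel (400, 300) seen 0.8 m from the camera
wrist_3d = backproject(400, 300, 0.8)
```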

    User-based gesture vocabulary for form creation during a product design process

    There are inconsistencies between the nature of conceptual design and the functionalities of the computational systems supporting it, which disrupt the designers' process by focusing on technology rather than designers' needs. A need was identified for the elicitation of hand gestures appropriate to the requirements of conceptual design, rather than gestures chosen arbitrarily or for ease of implementation. The aim of this thesis is to identify natural and intuitive hand gestures for conceptual design, performed by designers (3rd and 4th year product design engineering students and recent graduates) working on their own, without instruction and without limitations imposed by the facilitating technology. This was done via a user-centred study including 44 participants, from whom 1785 gestures were collected. Gestures were explored as the sole means for shape creation and manipulation in virtual 3D space. They were identified, described in writing, sketched, coded based on the taxonomy used, categorised based on hand form and the path travelled, and their variants identified. They were then statistically analysed to ascertain agreement rates between the participants, the significance of the agreement, and the likelihood of the number of repetitions for each category occurring by chance. The most frequently used and statistically significant gestures formed the consensus vocabulary for conceptual design. The effect of the shape of the manipulated object on the gesture performed was also observed, as was whether the sequence of gestures participants proposed differed from established CAD solid modelling practices. The vocabulary was evaluated by non-designer participants, both theoretically and in a VR environment, and the outcomes showed that the majority of gestures were appropriate and easy to perform. Participants selected their preferred gestures for each activity, and a variant of the vocabulary for conceptual design was created as an outcome. It aims to ensure that extensive training is not required, extending the ability to design beyond trained designers only.
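
    A common way to quantify inter-participant agreement in such elicitation studies is the agreement rate of Vatavu and Wobbrock (2015). The sketch below computes it for one referent; the thesis may use a different measure, and the gesture labels and counts are made up for illustration.

```python
# Sketch: agreement rate AR(r) for one referent, following Vatavu &
# Wobbrock (2015). Labels and counts below are invented examples.
from collections import Counter

def agreement_rate(proposals):
    """AR(r) = |P|/(|P|-1) * sum((|Pi|/|P|)^2) - 1/(|P|-1)."""
    n = len(proposals)
    if n < 2:
        return 1.0  # a single proposal trivially agrees with itself
    share = sum((count / n) ** 2 for count in Counter(proposals).values())
    return (n / (n - 1)) * share - 1 / (n - 1)

# Made-up example: 44 participants proposing gestures for one referent.
labels = ["pull"] * 25 + ["pinch-drag"] * 12 + ["point"] * 7
print(round(agreement_rate(labels), 3))  # ~0.409
```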