8 research outputs found

    Wieldy Finger and Hand Motion Detection for Human Computer Interaction

    We have developed a gesture-based interface for human-computer interaction within the field of computer vision. Earlier systems have relied on costlier devices to interact effectively with computers; instead, we have built a webcam-based gesture input system. Our goal was to propose a low-cost, wieldy object-detection technique that uses blobs to detect fingers and to report their count. In addition, we have also implemented hand gesture recognition
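
    As a rough illustration of the webcam blob-counting idea described above (not the authors' code), the following sketch uses OpenCV with an assumed HSV skin-color range and SimpleBlobDetector settings; all thresholds are placeholder guesses that would need tuning.

```python
# Minimal sketch of webcam blob-based finger counting (illustrative only).
# Assumes OpenCV (cv2); the skin-color range and blob-size threshold are guesses.
import cv2
import numpy as np

def count_finger_blobs(frame_bgr):
    """Threshold skin-like pixels and count the resulting blobs as fingertips."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))   # rough skin range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 255            # detect bright (skin-mask) blobs
    params.filterByArea = True
    params.minArea = 200              # ignore tiny noise blobs
    params.filterByCircularity = False
    params.filterByConvexity = False
    params.filterByInertia = False
    detector = cv2.SimpleBlobDetector_create(params)
    return len(detector.detect(mask))

cap = cv2.VideoCapture(0)             # default webcam
ok, frame = cap.read()
if ok:
    print("blobs counted as fingers:", count_finger_blobs(frame))
cap.release()
```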

    Tiny hand gesture recognition without localization via a deep convolutional network

    Visual hand-gesture recognition is increasingly desired for human-computer interaction interfaces. In many applications, hands occupy only about 10% of the image, whereas most of it contains background, the human face, and the human body. Spatial localization of the hands in such scenarios can be a challenging task, and ground-truth bounding boxes need to be provided for training, which are usually not accessible. However, the location of the hand is not a requirement when the criterion is simply recognizing a gesture to command a consumer electronics device, such as a mobile phone or a TV. In this paper, a deep convolutional neural network is proposed to directly classify hand gestures in images without any segmentation or detection stage that would discard the irrelevant non-hand areas. The designed hand-gesture recognition network can classify seven kinds of hand gestures in a user-independent manner and in real time, achieving an accuracy of 97.1% on a dataset with simple backgrounds and 85.3% on a dataset with complex backgrounds
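
    To make the "classification without localization" idea concrete, here is a minimal sketch of a CNN that maps a whole frame directly to 7 gesture classes; the architecture, input size, and layer widths are assumptions for illustration, not the network proposed in the paper.

```python
# Minimal sketch: classify the entire frame into 7 gesture classes,
# with no hand-detection or segmentation stage (illustrative architecture only).
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# A whole 96x96 RGB frame goes in; no bounding box is required.
logits = GestureNet()(torch.randn(1, 3, 96, 96))
print(logits.shape)  # torch.Size([1, 7])
```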

    Review on Classification Methods used in Image based Sign Language Recognition System

    Sign language is the way of communication among Deaf and Dumb people through the expression of signs. This paper presents a review of sign language recognition systems that aim to provide a means of communication for Deaf and Dumb people, focusing on image-based approaches. Signs take the form of hand gestures, and these gestures are identified from images as well as videos. Gestures are identified and classified according to the features of the gesture image, such as shape, rotation, angle, pixels, and hand movement. Features are obtained by various feature extraction methods and classified by various machine learning methods. The main purpose of this paper is to review the classification methods used in similar image-based hand gesture recognition systems. The paper also compares various systems on the basis of their classification methods and accuracy rates
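
    The generic pipeline such reviews compare (hand-crafted gesture features fed to a machine-learning classifier) can be sketched as below; the data is synthetic and the feature names are placeholders, so the reported accuracies are meaningless and the snippet only shows the structure of the comparison.

```python
# Illustrative pipeline only: feature vectors -> compare two classifiers.
# X is synthetic; in a real system the columns would be shape, rotation,
# angle, pixel-based descriptors, hand movement, etc.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))        # 300 gesture images, 5 hand-crafted features
y = rng.integers(0, 10, size=300)    # 10 hypothetical sign classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, clf in [("SVM", SVC()), ("k-NN", KNeighborsClassifier())]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```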

    Hand gesture recognition using color and depth images enhanced with hand angular pose data

    No full text

    Real-Time Depth-Based Hand Detection and Tracking

    This paper illustrates a hand detection and tracking method that operates in real time on depth data. To detect a hand region, we propose a classifier that combines boosting and a cascade structure. The classifier uses depth-difference features at both the detection and the learning stages. The features of each candidate segment are computed by subtracting the averages of the depth values of subblocks from the central depth value of the segment. The features are selectively employed according to their discriminating power when constructing the classifier. To predict a hand region in a successive frame, a seed point in the next frame is determined, and starting from that seed point a region-growing scheme is applied to obtain the hand region. To determine the central point of the hand, we propose the so-called Depth Adaptive Mean Shift (DAM-Shift) algorithm, a variant of CAM-Shift (Bradski, 1998) in which the size of the search disk varies according to the depth of the hand. We have evaluated the proposed hand detection and tracking algorithm by comparing it against the existing AdaBoost (Friedman et al., 2000) qualitatively and quantitatively, and have analyzed the tracking accuracy through performance tests in various situations
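
    A rough sketch of the two ideas described above, under assumptions of my own (grid size, reference radius, fake depth values); it is not the authors' implementation, but shows (1) depth-difference features for a candidate segment and (2) a search-disk radius that scales inversely with hand depth, as in the DAM-Shift idea.

```python
# Illustrative only: depth-difference features and depth-adaptive search radius.
import numpy as np

def depth_difference_features(depth_patch: np.ndarray, grid: int = 4) -> np.ndarray:
    """Subtract each subblock's mean depth from the patch's central depth value."""
    h, w = depth_patch.shape
    center = depth_patch[h // 2, w // 2]
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = depth_patch[i * h // grid:(i + 1) * h // grid,
                                j * w // grid:(j + 1) * w // grid]
            feats.append(center - block.mean())
    return np.array(feats)

def search_radius(hand_depth_mm: float, radius_at_1m_px: float = 40.0) -> float:
    """Disk radius grows as the hand comes closer (apparent size ~ 1 / depth)."""
    return radius_at_1m_px * 1000.0 / max(hand_depth_mm, 1.0)

patch = np.random.default_rng(0).uniform(500, 900, size=(32, 32))  # fake depth map (mm)
print(depth_difference_features(patch).shape)   # (16,) for a 4x4 subblock grid
print(search_radius(800.0))                     # 50.0 px: larger disk for a closer hand
```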

    A Survey of Applications and Human Motion Recognition with Microsoft Kinect

    Microsoft Kinect, a low-cost motion-sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands without any other peripheral equipment. As such, it has attracted intense interest in research and development on Kinect technology. In this paper, we present a comprehensive survey of Kinect applications and of the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review applications of Kinect technology in a variety of areas, including healthcare, education and the performing arts, robotics, sign language recognition, retail services, workplace safety training, and 3D reconstruction. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor together with the depth-sensing technologies used, and review the literature on human motion recognition techniques used in Kinect applications. We provide a classification of motion recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers investigating better methods for human motion recognition and for lower-level computer vision tasks such as segmentation, object detection, and human pose estimation

    Recognition of the Libras alphabet using the Kinect sensor and visual markers (Reconhecimento do alfabeto de Libras usando sensor Kinect e marcadores visuais)

    Undergraduate thesis (monografia), Universidade de Brasília, Faculdade de Tecnologia, Undergraduate Program in Control and Automation Engineering, 2014. This work presents the development of a solution for recognizing the manual alphabet of the Brazilian Sign Language (Libras). Due to the high complexity and similarity of the signs, the video-based recognition system demands high precision and efficiency. To achieve the desired performance, the Kinect sensor, used for depth analysis and image segmentation, is combined with a high-resolution RGB camera and a glove with visual markers, which together enable the tracking of finger positions during the execution of the signs. This configuration allows the extraction of the morphological features that describe a hand sign and their representation as a 12-dimensional vector based on the relative distances and angles between the markers. For recognition, a 12-dimensional vector representing its sign is created for each of the 26 letters of the manual alphabet
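
    The recognition step described above (one stored 12-dimensional template per letter, observations assigned to the closest template) can be sketched as follows; the template values here are synthetic and the nearest-neighbor rule and Euclidean distance are assumptions for illustration, not necessarily the thesis's exact matching criterion.

```python
# Illustrative sketch: one 12-D template vector per letter (distances/angles
# between glove markers), and nearest-template classification. Synthetic data.
import numpy as np

LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
rng = np.random.default_rng(0)
templates = {c: rng.normal(size=12) for c in LETTERS}   # one 12-D vector per letter

def recognize(observed: np.ndarray) -> str:
    """Return the letter whose template is closest in Euclidean distance."""
    return min(templates, key=lambda c: np.linalg.norm(observed - templates[c]))

sample = templates["G"] + rng.normal(scale=0.05, size=12)  # noisy observation of "G"
print(recognize(sample))  # expected: G
```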