
    Real Time SIBI Sign Language Recognition Based on K-Nearest Neighbor

    People with disabilities have the same right to communicate as anyone else, both with hearing people and with other people with disabilities. Deaf people communicate using sign language, and because few hearing people understand it, communication between the two groups is difficult; a system that helps bridge this gap is needed. In this paper, we propose sign language recognition for Sistem Isyarat Bahasa Indonesia (SIBI) using a Leap Motion controller and the K-Nearest Neighbor method. The Leap Motion controller provides the coordinates of each bone in the hand. As input, we use the Euclidean distance between the distal coordinate of each bone and the position of the palm. These distance features are used as training and testing data for the K-Nearest Neighbor method. The experimental results show a best accuracy of 0.78 and an error of 0.22 with the proposed parameter K = 5.
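
    The distance-feature pipeline the abstract describes maps naturally onto a few lines of scikit-learn. The sketch below is a minimal illustration, not the authors' code: the bone count, the 26-class labeling, and the synthetic coordinates are assumptions standing in for real Leap Motion output.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        N_BONES = 20  # Leap Motion reports 4 bones for each of the 5 fingers

        def distance_features(bone_distals, palm):
            # Euclidean distance from each bone's distal end to the palm center.
            return np.linalg.norm(bone_distals - palm, axis=1)

        # Synthetic stand-in data; in practice these come from the Leap Motion API.
        rng = np.random.default_rng(0)
        frames = rng.normal(size=(300, N_BONES, 3))  # distal-joint coordinates
        palms = rng.normal(size=(300, 3))            # palm-center positions
        labels = rng.integers(0, 26, size=300)       # hypothetical one class per letter

        X = np.stack([distance_features(f, p) for f, p in zip(frames, palms)])

        knn = KNeighborsClassifier(n_neighbors=5)    # K = 5 as in the abstract
        knn.fit(X[:225], labels[:225])
        print("held-out accuracy:", knn.score(X[225:], labels[225:]))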

    New Method for Optimization of License Plate Recognition system with Use of Edge Detection and Connected Component

    License plate recognition plays an important role in traffic monitoring and parking management systems. In this paper, a fast, real-time method is proposed that is well suited to locating tilted and poor-quality plates. In the proposed method, the image is first converted to binary form using an adaptive threshold. Then, using edge detection and morphology operations, the location of the plate number is identified. Finally, if the plate is tilted, the tilt is corrected. The method was tested on a dataset from another paper whose images vary in background, distance, and angle of view, and the correct plate extraction rate reached 98.66%.
    Comment: 3rd IEEE International Conference on Computer and Knowledge Engineering (ICCKE 2013), October 31 & November 1, 2013, Ferdowsi University, Mashhad
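
    The threshold, edge-detection and morphology steps named in the abstract can be sketched with standard OpenCV calls. The following is a generic illustration, not the authors' implementation; the threshold parameters, kernel size, aspect-ratio bounds, and the "car.jpg" path are placeholder assumptions.

        import cv2

        def locate_plate(gray):
            # Binarize with an adaptive threshold, then use edges + morphology to
            # merge the plate characters into one plate-shaped connected region.
            binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                           cv2.THRESH_BINARY_INV, 21, 10)
            edges = cv2.Canny(binary, 50, 150)
            kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
            closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
            contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            for c in sorted(contours, key=cv2.contourArea, reverse=True):
                (cx, cy), (w, h), angle = cv2.minAreaRect(c)
                if min(w, h) > 0 and 2.0 < max(w, h) / min(w, h) < 6.0:
                    return (cx, cy), angle  # plate-like aspect ratio found
            return None

        gray = cv2.imread("car.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
        if gray is not None and (hit := locate_plate(gray)) is not None:
            center, angle = hit
            # Tilt removal: rotate so the plate's bounding box becomes axis-aligned.
            M = cv2.getRotationMatrix2D(center, angle, 1.0)
            deskewed = cv2.warpAffine(gray, M, gray.shape[::-1])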

    Detection of major ASL sign types in continuous signing for ASL recognition

    In American Sign Language (ASL), as in other signed languages, different classes of signs (e.g., lexical signs, fingerspelled signs, and classifier constructions) have different internal structural properties. Continuous sign recognition accuracy can be improved through the use of distinct recognition strategies, as well as different training datasets, for each class of signs. For these strategies to be applied, continuous signing video needs to be segmented into parts corresponding to particular classes of signs. In this paper we present a multiple-instance-learning-based segmentation system that accurately labels 91.27% of the video frames of 500 continuous utterances (including 7 different subjects) from the publicly accessible NCSLGR corpus (Neidle and Vogler, 2012). The system uses novel feature descriptors derived from both motion and shape statistics of the regions of high local motion, and it does not require a hand tracker.
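
    The frame-labeling idea can be illustrated with a much simpler stand-in: an ordinary supervised per-frame classifier over crude motion statistics, rather than the paper's multiple-instance learning and its actual feature descriptors. Everything below (the feature choice, the synthetic flow fields, the three sign-type classes) is assumed for illustration only.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def motion_shape_stats(flow_mag, mask):
            # Crude per-frame descriptor: motion statistics inside the
            # high-motion region, plus the region's extent.
            region = flow_mag[mask]
            if region.size == 0:
                return np.zeros(4)
            return np.array([region.mean(), region.std(),
                             region.max(), mask.mean()])

        # Synthetic stand-in for per-frame optical-flow magnitudes.
        rng = np.random.default_rng(1)
        flows = rng.random(size=(1000, 60, 80))
        masks = flows > 0.7                      # "high local motion" regions
        X = np.stack([motion_shape_stats(f, m) for f, m in zip(flows, masks)])
        y = rng.integers(0, 3, size=1000)        # lexical / fingerspelled / classifier

        clf = RandomForestClassifier().fit(X[:800], y[:800])
        print("frame labeling accuracy:", clf.score(X[800:], y[800:]))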

    A Survey on Metric Learning for Feature Vectors and Structured Data

    The need for appropriate ways to measure the distance or similarity between data is ubiquitous in machine learning, pattern recognition and data mining, but handcrafting good metrics for specific problems is generally difficult. This has led to the emergence of metric learning, which aims at automatically learning a metric from data and has attracted a lot of interest in machine learning and related fields over the past ten years. This survey paper proposes a systematic review of the metric learning literature, highlighting the pros and cons of each approach. We pay particular attention to Mahalanobis distance metric learning, a well-studied and successful framework, but additionally present a wide range of methods that have recently emerged as powerful alternatives, including nonlinear metric learning, similarity learning and local metric learning. Recent trends and extensions, such as semi-supervised metric learning, metric learning for histogram data and the derivation of generalization guarantees, are also covered. Finally, this survey addresses metric learning for structured data, in particular edit distance learning, and attempts to give an overview of the remaining challenges in metric learning for the years to come.
    Comment: Technical report, 59 pages. Changes in v2: fixed typos and improved presentation. Changes in v3: fixed typos. Changes in v4: fixed typos and added new methods.
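
    As one concrete instance of the Mahalanobis family the survey highlights, the sketch below uses scikit-learn's Neighborhood Components Analysis, chosen here for availability rather than because the survey singles it out. It learns a linear map L and shows that the induced metric d_M(x, x')^2 = (x - x')^T L^T L (x - x') equals the squared Euclidean distance between the transformed points.

        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.neighbors import NeighborhoodComponentsAnalysis

        X, y = load_iris(return_X_y=True)

        # NCA learns a linear map L; the learned Mahalanobis matrix is M = L^T L.
        nca = NeighborhoodComponentsAnalysis(random_state=0).fit(X, y)
        L = nca.components_
        M = L.T @ L

        def mahalanobis(a, b, M):
            d = a - b
            return float(np.sqrt(d @ M @ d))

        # Equivalently, the Euclidean distance between transformed points:
        print(mahalanobis(X[0], X[1], M))
        print(np.linalg.norm(L @ X[0] - L @ X[1]))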

    Tree Edit Distance Learning via Adaptive Symbol Embeddings

    Metric learning aims to improve classification accuracy by learning a distance measure which brings data points from the same class closer together and pushes data points from different classes further apart. Recent research has demonstrated that metric learning approaches can also be applied to trees, such as molecular structures, abstract syntax trees of computer programs, or syntax trees of natural language, by learning the cost function of an edit distance, i.e. the costs of replacing, deleting, or inserting nodes in a tree. However, learning such costs directly may yield an edit distance which violates metric axioms, is challenging to interpret, and may not generalize well. In this contribution, we propose a novel metric learning approach for trees which we call embedding edit distance learning (BEDL) and which learns an edit distance indirectly by embedding the tree nodes as vectors, such that the Euclidean distance between those vectors supports class discrimination. We learn such embeddings by reducing the distance to prototypical trees from the same class and increasing the distance to prototypical trees from different classes. In our experiments, we show that BEDL improves upon the state of the art in metric learning for trees on six benchmark data sets, ranging from computer science and biomedical data to a natural-language-processing data set containing over 300,000 nodes.
    Comment: Paper at the International Conference on Machine Learning (2018), 2018-07-10 to 2018-07-15 in Stockholm, Sweden
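
    The cost construction at the heart of BEDL, edit costs derived from symbol embeddings, can be shown with a small dynamic program. The sketch below simplifies trees to sequences (a plain Levenshtein-style recurrence rather than a true tree edit distance) and uses hand-picked toy embeddings and an assumed insertion/deletion cost eps; BEDL would learn the embeddings from prototypical examples instead.

        import numpy as np

        def embedding_edit_distance(xs, ys, emb, eps=1.0):
            # Replacement cost = ||emb[a] - emb[b]||; insert/delete cost = eps.
            n, m = len(xs), len(ys)
            D = np.zeros((n + 1, m + 1))
            D[1:, 0] = np.arange(1, n + 1) * eps
            D[0, 1:] = np.arange(1, m + 1) * eps
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    rep = np.linalg.norm(emb[xs[i - 1]] - emb[ys[j - 1]])
                    D[i, j] = min(D[i - 1, j - 1] + rep,   # replace
                                  D[i - 1, j] + eps,       # delete
                                  D[i, j - 1] + eps)       # insert
            return D[n, m]

        # Toy embeddings: "b" and "c" are close, so swapping them is cheap.
        emb = {"a": np.array([0.0, 0.0]), "b": np.array([1.0, 0.0]),
               "c": np.array([0.9, 0.1])}
        print(embedding_edit_distance("abc", "acc", emb))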

    Automated Bangla sign language translation system for alphabets by means of MobileNet

    Individuals with hearing and speaking impairments communicate using sign language. Movements of the hands and body and facial expressions are the means by which people who are unable to hear or speak can communicate. Bangla sign alphabets are formed with one- or two-hand movements, and certain features differentiate the signs. To detect and recognize the signs, it is necessary to analyze their shape and compare their features. This paper proposes a model and builds a computer system that can recognize Bangla Sign Language alphabets and translate them into the corresponding Bangla letters by means of a deep convolutional neural network (CNN). The CNN is introduced in this model in the form of a pre-trained model called "MobileNet", which produced an average accuracy of 95.71% in recognizing 36 Bangla Sign Language alphabets.
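
    A transfer-learning setup of this kind is straightforward in Keras. The sketch below is an assumed reconstruction, not the paper's configuration: the input size, the frozen backbone, and the single dense head are illustrative choices; only the MobileNet backbone and the 36 output classes come from the abstract.

        import tensorflow as tf

        # MobileNet backbone pretrained on ImageNet, with a new 36-way head
        # for the Bangla sign alphabets.
        base = tf.keras.applications.MobileNet(
            weights="imagenet", include_top=False, input_shape=(224, 224, 3))
        base.trainable = False  # feature extraction; fine-tuning is optional

        model = tf.keras.Sequential([
            base,
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(36, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.summary()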

    Analysis of Image Classification Deep Learning Algorithm

    This study explores the use of TensorFlow 2 and Python for image classification problems. Image classification is an important area of computer vision, with real-world applications such as object detection and recognition, medical imaging, and autonomous driving. This work examines TensorFlow 2 and its image classification capabilities and demonstrates how to construct an image classification model using Python and TensorFlow 2. As an engineering task, a Convolutional Neural Network (CNN) is applied to image classification on the German and Chinese traffic sign datasets. Ultimately, this work provides step-by-step guidance for creating an image classification model using TensorFlow 2 and Python, while also showcasing its potential to tackle image classification problems across various domains.
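
    A minimal TensorFlow 2 classifier along these lines might look as follows. The layer sizes and input resolution are illustrative, and the 43-class output matches the German GTSRB traffic sign benchmark rather than anything specified in the abstract.

        import tensorflow as tf

        NUM_CLASSES = 43  # the German GTSRB benchmark has 43 sign classes

        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation="relu",
                                   input_shape=(32, 32, 3)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(64, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # model.fit(train_images, train_labels, epochs=10, validation_split=0.2)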

    Deep Learning Approach For Sign Language Recognition

    Sign language is a method of communication based on hand movements used by people with hearing loss. Problems occur in communication between hearing people and people with hearing disorders because not everyone understands sign language, so a model for sign language recognition is needed. This study aims to build a model for hand sign language recognition using a deep learning approach. The model used is a Convolutional Neural Network (CNN). The model is tested on the ASL alphabet database consisting of 29 categories, where each category consists of 3,000 images, for a total of 87,000 hand-signal images of 200 x 200 pixels. First, the input images are resized to 32 x 32 pixels. The dataset is then split into 75% for training and 25% for validation. The test results indicate that the proposed model performs well, with an accuracy of 99%. The experiments also show that preprocessing the images with background correction can improve model performance.
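
    The preprocessing this abstract describes, resizing 200 x 200 images to 32 x 32 and splitting 75%/25%, is a few lines in practice. The sketch below uses synthetic stand-in images with made-up per-class counts; a model such as the CNN sketched after the previous abstract could then be trained on X_tr.

        import numpy as np
        import tensorflow as tf
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for the 200 x 200 RGB hand-signal images.
        rng = np.random.default_rng(0)
        images = rng.random(size=(116, 200, 200, 3)).astype("float32")
        labels = np.repeat(np.arange(29), 4)  # 29 categories, 4 samples each here

        # Resize to 32 x 32 as in the abstract, then split 75% / 25%.
        small = tf.image.resize(images, (32, 32)).numpy()
        X_tr, X_val, y_tr, y_val = train_test_split(
            small, labels, test_size=0.25, random_state=0, stratify=labels)
        print(X_tr.shape, X_val.shape)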

    Practical color-based motion capture

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 93-101). By Robert Yuanbo Wang.
    Motion capture systems track the 3-D pose of the human body and are widely used for high-quality content creation, gestural user input and virtual reality. However, these systems are rarely deployed in consumer applications due to their price and complexity. In this thesis, we propose a motion capture system built from commodity components that can be deployed in a matter of minutes. Our approach uses one or more webcams and a color garment to track either the user's upper body or hands for motion capture and user input. We demonstrate that custom-designed color garments can simplify difficult computer vision problems and lead to efficient and robust algorithms for hand and upper body tracking. Specifically, our highly descriptive color patterns alleviate ambiguities that are commonly encountered when tracking only silhouettes or edges, allowing us to employ a nearest-neighbor approach to track either the hands or the upper body at interactive rates. We also describe a robust color calibration system that enables our color-based tracking to work against cluttered backgrounds and under multiple illuminants. We demonstrate our system in several real-world indoor and outdoor settings and describe proof-of-concept applications enabled by our system that we hope will provide a foundation for new interactions in computer-aided design, animation control and augmented reality.
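
    The nearest-neighbor tracking idea can be caricatured as a database lookup: precompute appearance descriptors of the color garment paired with known poses, then match a query descriptor at runtime. The descriptor length, joint count, and random database below are assumptions for illustration, not the thesis's actual representation.

        import numpy as np
        from scipy.spatial import cKDTree

        # Hypothetical database: garment-appearance descriptors paired with
        # known 3-D poses (in practice built offline, e.g. from rendered examples).
        rng = np.random.default_rng(0)
        descriptors = rng.random(size=(10000, 64))   # per-image color-pattern codes
        poses = rng.random(size=(10000, 3 * 20))     # e.g. 20 joints x 3 coordinates

        tree = cKDTree(descriptors)

        def estimate_pose(query_descriptor):
            # Nearest-neighbor lookup: return the pose of the closest entry.
            _, idx = tree.query(query_descriptor)
            return poses[idx]

        print(estimate_pose(rng.random(64)).shape)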