
    New Method for Optimization of License Plate Recognition system with Use of Edge Detection and Connected Component

    License plate recognition plays an important role in traffic monitoring and parking management systems. In this paper, a fast, real-time method is proposed that is well suited to locating tilted and poor-quality plates. In the proposed method, the image is first converted into binary form using an adaptive threshold. Then, edge detection and morphology operations are used to locate the plate number. Finally, if the plate is tilted, the tilt is removed. This method has been tested on another paper's data set containing images with varying backgrounds, distances, and angles of view; the correct plate extraction rate reached 98.66%.
    Comment: 3rd IEEE International Conference on Computer and Knowledge Engineering (ICCKE 2013), October 31 & November 1, 2013, Ferdowsi University of Mashhad
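    The first step described above, binarization against a locally computed threshold, can be sketched in plain NumPy. The window size and offset below are illustrative defaults, not values taken from the paper:

```python
import numpy as np

def adaptive_threshold(img, block=15, c=5):
    """Binarize a grayscale image against the local mean of a block x block
    neighborhood (minimal sketch of adaptive thresholding; `block` and `c`
    are hypothetical parameters, not the paper's values)."""
    h, w = img.shape
    r = block // 2
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            # local window, clipped at the image border
            win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            # foreground if the pixel exceeds the local mean minus an offset
            out[y, x] = 255 if img[y, x] > win.mean() - c else 0
    return out
```

    Plate localization would then proceed by applying edge detection and morphological operations to this binary image and keeping connected components with plate-like shapes, as the abstract outlines.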

    Automatic recognition of fingerspelled words in British Sign Language

    We investigate the problem of recognizing words from video, fingerspelled using the British Sign Language (BSL) fingerspelling alphabet. This is a challenging task, since the BSL alphabet involves both hands occluding each other and contains signs that are ambiguous from the observer's viewpoint. The main contributions of our work are: (i) recognition based on hand shape alone, without requiring motion cues; (ii) robust visual features for hand shape recognition; (iii) scalability to large-lexicon recognition with no re-training. We report results on a dataset of 1,000 low-quality webcam videos of 100 words. The proposed method achieves a word recognition accuracy of 98.9%.

    Isolated Sign Language Characters Recognition

    People with normal hearing use spoken language to communicate with others. This method cannot be used by people who are hearing- or speech-impaired, and these two groups have difficulty communicating with each other using their own languages. Sign language is not easy to learn: there are many different sign languages, and few tutors are available. This study focuses on character recognition based on the manual alphabet. In general, the characters are divided into letters and numbers, and the letters are further grouped according to their gestures. Character recognition is performed by comparing a photograph of a character with a gesture dictionary developed in advance. The gesture dictionary was created using the normalized Euclidean distance, and classification was performed using the nearest-neighbor method and the sum of absolute errors. Overall, the accuracy of the proposed method was 96.36%.
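    The matching scheme described, a gesture dictionary queried by normalized Euclidean distance with nearest-neighbor classification, can be sketched as follows. The labels and feature vectors are invented for illustration:

```python
import numpy as np

def normalized_euclidean(a, b):
    # Euclidean distance between length-normalized feature vectors
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return np.linalg.norm(a - b)

def sum_absolute_error(a, b):
    # the paper's second criterion: sum of absolute differences
    return np.abs(a - b).sum()

def classify(query, dictionary, metric=normalized_euclidean):
    # nearest neighbor over a {label: reference feature vector} dictionary
    return min(dictionary, key=lambda label: metric(query, dictionary[label]))
```

    In practice the feature vectors would come from the photographed hand shapes; here the dictionary is a plain mapping from character label to reference vector.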

    Automatic Indian Sign Language Recognition for Continuous Video Sequence

    Sign language recognition has become an active area of research. This paper describes a novel approach to a system that automatically recognizes the different alphabets of Indian Sign Language in video sequences. The proposed system comprises four major modules: data acquisition, pre-processing, feature extraction, and classification. The pre-processing stage involves skin filtering and histogram matching, after which eigenvector-based feature extraction and an eigenvalue-weighted Euclidean distance classification technique are used. 24 different alphabets were considered in this paper, and a 96% recognition rate was obtained.
    Keywords: Eigen value, Eigen vector, Euclidean Distance (ED), Human Computer Interaction, Indian Sign Language (ISL), Skin Filtering.
    Cite as: Joyeeta Singh, Karen Das, "Automatic Indian Sign Language Recognition for Continuous Video Sequence", ADBU J.Engg.Tech., 2(1) (2015) 0021105 (5pp)
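    The eigenvalue-weighted Euclidean distance used in the classification stage weights each squared component difference by the eigenvalue of its principal component. A minimal sketch, with the projection step reduced to plain PCA and the test vectors invented for illustration:

```python
import numpy as np

def eigen_projection(X):
    """Mean-centre the training images (rows of X) and return their principal
    eigenvalues/eigenvectors -- a minimal PCA sketch of feature extraction."""
    mean = X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X - mean, rowvar=False))
    order = np.argsort(eigvals)[::-1]          # largest eigenvalue first
    return mean, eigvals[order], eigvecs[:, order]

def eigen_weighted_distance(p, q, eigvals):
    # Euclidean distance with each squared component difference weighted
    # by the eigenvalue of the corresponding principal component
    return float(np.sqrt(np.sum(eigvals * (p - q) ** 2)))
```

    Classification would then assign a query its nearest training sample under this weighted distance, so components with larger eigenvalues (more variance) dominate the comparison.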

    Simultaneous Localization and Recognition of Dynamic Hand Gestures

    A framework for the simultaneous localization and recognition of dynamic hand gestures is proposed. At the core of this framework is a dynamic space-time warping (DSTW) algorithm that aligns a pair of query and model gestures in both space and time. For every frame of the query sequence, feature detectors generate multiple hand region candidates. Dynamic programming is then used to compute both a global matching cost, which is used to recognize the query gesture, and a warping path, which aligns the query and model sequences in time and also finds the best hand candidate region in every query frame. The proposed framework includes translation-invariant recognition of gestures, a desirable property for many HCI systems. The performance of the approach is evaluated on a dataset of hand-signed digits gestured by people wearing short-sleeve shirts, in front of a background containing other non-hand skin-colored objects. The algorithm simultaneously localizes the gesturing hand and recognizes the hand-signed digit. Although DSTW is illustrated in a gesture recognition setting, the proposed algorithm is a general method for matching time series that allows multiple candidate feature vectors to be extracted at each time step.
    National Science Foundation (CNS-0202067, IIS-0308213, IIS-0329009); Office of Naval Research (N00014-03-1-0108)
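    The core DP recurrence of DSTW, matching a model sequence against a query in which every frame offers several candidate hand regions, can be sketched as below. This is a simplified version with unconstrained transitions and a plain Euclidean frame cost; the paper's exact transition rules and cost function are not reproduced:

```python
import numpy as np

def dstw(model, query_candidates):
    """Minimal dynamic space-time warping sketch.
    model: list of feature vectors, one per model frame.
    query_candidates: list per query frame of K candidate feature vectors
    (multiple hand-region hypotheses). Returns the global matching cost."""
    Tm, Tq = len(model), len(query_candidates)
    K = max(len(c) for c in query_candidates)
    # D[i, j, k]: best cost matching model[:i+1] with query[:j+1],
    # choosing candidate k at query frame j
    D = np.full((Tm, Tq, K), np.inf)
    for i in range(Tm):
        for j in range(Tq):
            for k, cand in enumerate(query_candidates[j]):
                cost = np.linalg.norm(np.asarray(model[i]) - np.asarray(cand))
                if i == 0 and j == 0:
                    prev = 0.0
                else:
                    opts = []
                    if i > 0:
                        opts.append(D[i - 1, j].min())   # advance model time
                    if j > 0:
                        opts.append(D[i, j - 1].min())   # advance query time
                    if i > 0 and j > 0:
                        opts.append(D[i - 1, j - 1].min())  # advance both
                    prev = min(opts)
                D[i, j, k] = cost + prev
    return float(D[-1, -1].min())
```

    Backtracking through `D` would recover both the time alignment and the chosen candidate per frame, which is how the framework localizes the hand while recognizing the gesture.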

    An image processing technique for the improvement of Sign2 using a dual camera approach

    A non-intrusive system for translating American Sign Language (ASL) into digital text is the focus of this thesis. Among the many techniques introduced for this purpose, this study pursues the comparatively less-explored path of an unobtrusive, user-friendly, and straightforward solution. Phase 1 of the Sign2 project used a single-camera approach to build such a translation system; the present investigation develops a solution that improves on the accuracy of those results while following the same methodology. The study is restricted to fingerspelling the ASL alphabet, so the only region of interest is the subject's hand, as opposed to the full ASL vocabulary, which involves a more complex range of physical movement and intricate gesticulation. Three subjects signed the ASL alphabet repetitively, and these recordings were later used as references to recognize the letters in words signed by the same subjects. Although the subject matter does not differ much from Phase 1, an additional camera is employed to achieve better accuracy. The reasoning behind this approach is to imitate human depth perception more closely: the best and most convincing information about the three-dimensional world comes from binocular vision, and the current approach exploits this theory by emulating one aspect of it, binocular disparity. The analysis shows improved precision in identifying the 'fist' letters.
    Owing to the small number of subjects and technical snags, the body of data is less comprehensive than intended, but this thesis provides a basic foundation for future study and lays out guidelines toward a more complete and successful translation system.
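    The depth cue that the dual-camera setup exploits is the standard rectified-stereo relation between disparity and distance. A minimal sketch, with illustrative numbers rather than the thesis's calibration values:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from binocular disparity: Z = f * B / d,
    assuming rectified, parallel cameras (illustrative sketch only)."""
    return focal_px * baseline_m / disparity_px
```

    A larger disparity between the two views of the hand means the hand is closer to the cameras, which is the single aspect of binocular vision this thesis emulates.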

    Continual Learning of Hand Gestures for Human-Robot Interaction

    In this paper, we present an efficient method for incrementally learning to classify static hand gestures. This method allows users to teach a robot to recognize new symbols in an incremental manner. In contrast to other works that use special sensors or external devices such as color or data gloves, our proposed approach uses a single RGB camera to perform static hand gesture recognition from 2D images. Furthermore, our system is able to incrementally learn up to 38 new symbols using only 5 samples for each old class, achieving a final average accuracy of over 90%. In addition, the incremental training time can be reduced to 10% of the time required when using all available data.
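    One common way to realize incremental learning with a few retained samples per old class is exemplar rehearsal combined with a nearest-class-mean classifier. The sketch below illustrates that general strategy under those assumptions; it is not the paper's specific model, and the class names and features are invented:

```python
import numpy as np

class IncrementalClassifier:
    """Rehearsal sketch: keep a handful of exemplars per old class and
    classify by nearest class mean (assumed strategy for illustration)."""

    def __init__(self, exemplars_per_class=5):
        self.k = exemplars_per_class
        self.memory = {}                     # label -> stored exemplar features

    def add_class(self, label, samples):
        # a new symbol arrives: retain at most k exemplars for rehearsal
        samples = np.asarray(samples, dtype=float)
        self.memory[label] = samples[: self.k]

    def predict(self, x):
        # assign the label whose exemplar mean is closest to the query
        x = np.asarray(x, dtype=float)
        means = {c: s.mean(axis=0) for c, s in self.memory.items()}
        return min(means, key=lambda c: np.linalg.norm(x - means[c]))
```

    Because only a few exemplars per old class are re-used when a new symbol is added, training cost stays far below retraining on the full dataset, which is the trade-off the abstract quantifies.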