
    New Method for Optimization of License Plate Recognition system with Use of Edge Detection and Connected Component

    License plate recognition plays an important role in traffic monitoring and parking management systems. In this paper, a fast, real-time method is proposed that is well suited to locating tilted and poor-quality plates. In the proposed method, the image is first converted into binary form using an adaptive threshold. Then, using edge detection and morphology operations, the location of the plate number is identified. Finally, if the plate is tilted, the tilt is corrected. The method has been tested on the data set of another paper, containing images with varying backgrounds, distances, and angles of view; the correct plate extraction rate reached 98.66%. Comment: 3rd IEEE International Conference on Computer and Knowledge Engineering (ICCKE 2013), October 31 & November 1, 2013, Ferdowsi University of Mashhad
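    The binarisation and component-grouping stages described in the abstract can be sketched as follows. This is a minimal illustrative implementation, assuming a local-mean adaptive threshold and 4-connected component labelling; it is not the paper's actual code, and the block size and offset constant are placeholder values:

```python
def adaptive_binarize(img, block=5, c=10):
    """Naive local-mean adaptive threshold (O(N * block^2), fine for a sketch).
    A pixel becomes foreground (1) when it is darker than the mean of its
    block x block neighbourhood by more than c. `img` is a 2-D list of
    grayscale values; edges are handled by clamping indices."""
    h, w = len(img), len(img[0])
    pad = block // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(yy, 0), h - 1)][min(max(xx, 0), w - 1)]
                    for yy in range(y - pad, y + pad + 1)
                    for xx in range(x - pad, x + pad + 1)]
            local_mean = sum(vals) / len(vals)
            out[y][x] = 1 if img[y][x] < local_mean - c else 0
    return out

def connected_components(binary):
    """4-connected component labelling via iterative flood fill, a stand-in
    for the morphology + connected-component stage that groups plate
    candidate regions. Returns (label grid, number of components)."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not labels[sy][sx]:
                count += 1
                stack = [(sy, sx)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and binary[y][x] and not labels[y][x]:
                        labels[y][x] = count
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return labels, count
```

    A production pipeline would replace both loops with library primitives (e.g. OpenCV's `cv2.adaptiveThreshold` and `cv2.connectedComponents`), but the logic is the same: threshold against a local mean, then group foreground pixels into candidate regions.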

    Facilitating American Sign Language learning for hearing parents of deaf children via mobile devices

    In the United States, between 90 and 95% of deaf children are born to hearing parents. In most circumstances, the birth of a deaf child is the first experience these parents have with American Sign Language (ASL) and the Deaf community. Parents learn ASL as a second language to provide their children with language models and to be able to communicate with their children more effectively, but they face significant challenges. To address these challenges, I have developed a mobile learning application, SMARTSign, to help parents of deaf children learn ASL vocabulary. I hypothesize that providing a method for parents to learn and practice ASL words associated with popular children's stories on their mobile phones would improve their ASL vocabulary and abilities more than if words were grouped by theme. I posit that parents who learn vocabulary associated with children's stories will use the application more, which will lead to more exposure to ASL and more learned vocabulary. My dissertation consists of three studies. First, I show that novices are able to reproduce signs presented on mobile devices with high accuracy regardless of source video resolution. Next, I interview hearing parents of deaf children to discover the difficulties they have with current methods for learning ASL. When asked which methods of presenting signs they preferred, participants were most interested in learning vocabulary associated with children's stories. Finally, I deploy SMARTSign to parents for four weeks. Participants learning story vocabulary used the application more often and had higher sign recognition scores than participants who learned vocabulary based on word types. The condition did not affect participants' ability to produce the signed vocabulary. Ph.D. dissertation. Committee Chair: Starner, Thad; Committee Member: Abowd, Gregory; Committee Member: Bruckman, Amy; Committee Member: Guzdial, Mark; Committee Member: Quinto-Pozos, David; Committee Member: Singleton, Jenn

    Data-driven machine translation for sign languages

    This thesis explores the application of data-driven machine translation (MT) to sign languages (SLs). The provision of an SL MT system can facilitate communication between Deaf and hearing people by translating information into the native and preferred language of the individual. We begin with an introduction to SLs, focussing on Irish Sign Language - the native language of the Deaf in Ireland. We describe their linguistics and mechanics, including similarities to and differences from spoken languages. Given the lack of a formalised written form of these languages, an outline of annotation formats is discussed, as well as the issue of data collection. We summarise previous approaches to SL MT, highlighting the pros and cons of each approach. Initial experiments in the novel area of example-based MT for SLs are discussed, and an overview of the problems that arise when automatically translating these manual-visual languages is given. Following this, we detail our data-driven approach, examining the MT system used and the modifications made for the treatment of SLs and their annotation. Through sets of automatically evaluated experiments in both language directions, we consider the merits of data-driven MT for SLs and outline the mainstream evaluation metrics used. To complete the translation into SLs, we discuss the addition and manual evaluation of a signing avatar for real SL output.
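    The mainstream automatic evaluation metrics the thesis refers to are typically string-overlap measures such as BLEU, computed between the system output and a reference annotation. A minimal smoothed-BLEU sketch over token lists (an illustration of the metric family, not the exact implementation or smoothing scheme used in the thesis) looks like this:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU against a single reference, with simple add-one
    smoothing so short sentences do not score zero. Combines modified
    n-gram precisions geometrically and applies the brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append((overlap + 1) / (total + 1))   # add-one smoothing
    # brevity penalty: punish candidates shorter than the reference
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

    An identical candidate and reference score 1.0; partial overlap scores strictly between 0 and 1. Production systems use multi-reference variants with standardised tokenisation.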

    Equal Accessibility for Sign Language under the Convention on the Rights of Persons with Disabilities


    Implementation, use and analysis of open source learning management system “Moodle” and e-learning for the deaf in Jordan

    When learning mathematics, deaf children of primary school age experience difficulties due to their disability. In Jordan, little research has been undertaken to understand the problems facing deaf children and their teachers. Frequently, children are educated in special schools for the deaf; the majority of deaf children tend not to be integrated into mainstream education, although efforts are made to incorporate them into the system. Teachers in the mainstream education system rarely have the knowledge and experience to enable deaf students to reach their full potential. The methodological approach used in this research is a mixed one, consisting of action research and Human-Computer Interaction (HCI) research. The target group was deaf children aged nine years (in the third grade) and their teachers in Jordanian schools. Mathematics was chosen as the main focus of this study because it is a universal subject with its own concepts and rules, and at this level the teachers in the school have sufficient knowledge and experience to teach mathematics topics competently. In order to obtain a better understanding of the problems faced by teachers and deaf children in learning mathematics, semi-structured interviews were undertaken and questionnaires distributed to teachers. The main aim at that stage of the research was to explore the current use and status of e-learning environments and LMSs within the Jordanian schools for the deaf. In later stages of this research, semi-structured interviews and questionnaires were used again to ascertain the effectiveness, usability and readiness of the adopted e-learning environment, “Moodle”. Finally, pre-tests and post-tests were used to assess the effectiveness of the e-learning environment and LMS. It is important to note that the intention was not to work with the children directly; rather, they took part as test subjects.
Based on the requirements and recommendations of the teachers of the deaf, a key requirements scheme was developed. Four open source e-learning environments and LMSs were evaluated against the developed key requirements. The evaluation was based on a software engineering approach. The outcome of that evaluation was the adoption of an open source e-learning environment and LMS called “Moodle”. Moodle was presented to the teachers for the purpose of testing it. It was found to be the most suitable e-learning environment and LMS to be adapted for use by deaf children in Jordan, based on the teachers' requirements. Moodle was then given to the deaf children to use during this research. Afterwards, the activities of the deaf children and their teachers were recorded and analysed in terms of Human-Computer Interaction (HCI). The analysis covered readiness, usability, user satisfaction, ease of use, learnability, outcome/future use, content, collaboration & communication tools, and functionality.

    Automatic recognition of Arabic alphabets sign language using deep learning

    Technological advancements are helping people with special needs overcome many communication obstacles. Deep learning and computer vision models are enabling unprecedented advances in human interaction. The Arabic language remains a rich research area. In this paper, different deep learning models were applied to test the accuracy and efficiency obtained in automatic Arabic sign language recognition. We provide a novel framework for the automatic detection of Arabic sign language, based on transfer learning applied to popular deep learning models for image processing. Specifically, we train AlexNet, VGGNet and GoogleNet/Inception models, and also test the efficiency of shallow learning approaches based on support vector machines (SVM) and nearest-neighbour algorithms as baselines. As a result, we propose a novel approach for the automatic recognition of Arabic alphabets in sign language based on the VGGNet architecture, which outperformed the other trained models. The proposed model achieves promising results in recognizing Arabic sign language, with an accuracy score of 97%. The suggested models are tested against a recent fully labelled dataset of Arabic sign language images. The dataset contains 54,049 images and is, to the best of our knowledge, the first large and comprehensive real dataset of Arabic sign language.
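    Of the shallow baselines the abstract mentions, nearest neighbours is simple enough to sketch directly. The following is an illustrative k-NN classifier over flattened image feature vectors; the paper's actual feature extraction, distance metric and value of k are not specified here, so all of those are assumptions:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Plain k-nearest-neighbour classification: find the k training
    vectors closest to `query` by Euclidean distance and return the
    majority label among them. `train` is a list of (vector, label)
    pairs, e.g. flattened grayscale images of handshapes."""
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    top_labels = [label for _, label in dists[:k]]
    return Counter(top_labels).most_common(1)[0][0]
```

    Such a baseline needs no training phase at all, which is why it is a common point of comparison for the deep models; its cost is paid at prediction time instead, since every query is compared against the whole training set.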

    Convention on the Rights of Persons with Disabilities

    The Convention on the Rights of Persons with Disabilities and its Optional Protocol (A/RES/61/106) was adopted on 13 December 2006 at the United Nations Headquarters in New York, was opened for signature on 30 March 2007, and entered into force on 3 May 2008. It is the first comprehensive human rights treaty of the 21st century and the first human rights convention to be open for signature by regional integration organizations. The Convention has particular significance for mine action as it details the rights of survivors of mines and explosive remnants of war (ERW). While the Convention does not identify new rights, it provides guidance on how to ensure that persons with disabilities can exercise their existing rights without discrimination. It provides a solid legal framework for the provision of assistance to survivors of mines and ERW.