5,263 research outputs found

    PARLOMA – A Novel Human-Robot Interaction System for Deaf-blind Remote Communication

    Deaf-blindness forces people to live in isolation. To date, no technological solution enables two (or more) Deaf-blind persons to communicate remotely with one another in tactile Sign Language (t-SL). When resorting to t-SL, Deaf-blind persons can communicate only with people physically present in the same place, because they must reciprocally explore each other's hands to exchange messages. We present a preliminary version of PARLOMA, a novel system that enables remote communication between Deaf-blind persons. It is composed of a low-cost depth sensor as the only input device, paired with a robotic hand as the output device. Any user can perform handshapes in front of the depth sensor; the system recognizes a set of handshapes, which are sent over the web and reproduced by an anthropomorphic robotic hand. PARLOMA can thus work as a "telephone" for Deaf-blind people and promises to dramatically improve their quality of life. PARLOMA has been designed in close collaboration with the main Italian Deaf-blind associations, in order to include end users in the design phase.
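The transmission step described above (recognized handshapes sent over the web to the remote robotic hand) could be sketched with a simple wire format. The label set and JSON framing below are purely illustrative assumptions, not PARLOMA's actual protocol:

```python
import json

# Hypothetical label set and message format for relaying a recognized
# handshape from the sender's depth-sensor pipeline to the robotic hand.
HANDSHAPES = ["flat", "fist", "point", "pinch"]  # illustrative labels only

def encode_handshape(label: str, timestamp_ms: int) -> bytes:
    """Pack a recognized handshape into a compact JSON frame for the wire."""
    if label not in HANDSHAPES:
        raise ValueError(f"unknown handshape: {label}")
    return json.dumps({"shape": label, "t": timestamp_ms}).encode("utf-8")

def decode_handshape(frame: bytes) -> tuple:
    """Unpack a frame on the receiving (robotic hand) side."""
    msg = json.loads(frame.decode("utf-8"))
    return msg["shape"], msg["t"]
```

On the receiving side, each decoded label would be mapped to a joint configuration of the anthropomorphic hand; that mapping is outside the scope of this sketch.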

    Indian Sign Language Numbers Recognition using Intel RealSense Camera

    Gesture-based interaction with devices has been a significant area of computer-science research for many years. The main idea behind this kind of interaction is to ease the user experience by providing a high degree of freedom and a more natural, interactive way of communicating with technology. Significant application areas of gesture recognition include video gaming, human-computer interaction, virtual reality, smart home appliances, medical systems, and robotics. With the availability of devices such as the Kinect, Leap Motion and Intel RealSense cameras, depth as well as color information has become accessible to the public at affordable cost. The Intel RealSense camera is a USB-powered controller with modest system requirements (Windows 8 or later). Like the Kinect and Leap Motion, it can track human body information; it was designed specifically to provide finer-grained information about different parts of the body, such as the face and hands, and to give users more natural and intuitive interaction with smart devices through features such as 3D avatars, high-quality 3D prints, high-quality graphics for gaming, and virtual reality. The main aim of this study is to analyze hand-tracking information and build a training model in order to decide whether this camera is suitable for sign language recognition. In this study, we extracted the joint information of 22 joint labels per hand and trained models to identify the Indian Sign Language (ISL) numbers 0-9. We found that a multi-class SVM model achieved higher accuracy (93.5%) than the decision tree and KNN models.
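As a rough illustration of how the 22 tracked joints per hand might be turned into a fixed-length feature vector for the SVM, decision-tree and KNN classifiers, the sketch below flattens wrist-relative joint coordinates. The wrist-centring and scale normalization are illustrative assumptions, not details taken from the study:

```python
import numpy as np

N_JOINTS = 22  # joint labels tracked per hand, as stated in the abstract

def joints_to_feature(joints: np.ndarray) -> np.ndarray:
    """Flatten 22 (x, y, z) joint positions into one feature vector,
    centred on the wrist (joint 0) for rough translation invariance
    and scaled by the farthest joint for rough size invariance."""
    assert joints.shape == (N_JOINTS, 3)
    centred = joints - joints[0]                  # wrist-relative coords
    scale = np.linalg.norm(centred, axis=1).max() or 1.0
    return (centred / scale).ravel()              # shape: (66,)
```

A classifier (multi-class SVM, decision tree, or KNN) would then be trained on one such 66-dimensional vector per labelled sample.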

    Heterogeneous hand gesture recognition using 3D dynamic skeletal data

    Hand gestures are the most natural and intuitive non-verbal communication medium for interacting with a computer, and research interest in them has recently grown. The identifiable hand-pose features provided by current inexpensive commercial depth cameras can be exploited in various gesture-recognition systems, especially for Human-Computer Interaction. In this paper, we focus on 3D dynamic gesture recognition systems that use hand pose information. Specifically, we use the natural structure of the hand topology (referred to below as hand skeletal data) to extract effective hand kinematic descriptors from the gesture sequence. Descriptors are then encoded in a statistical and temporal representation using, respectively, a Fisher kernel and a multi-level temporal pyramid. A linear SVM classifier is applied directly to the feature vector computed over the whole pre-segmented gesture to perform recognition. Furthermore, for early recognition from a continuous stream, we introduce a prior gesture-detection phase, achieved with a binary classifier, before the final gesture recognition. The proposed approach is evaluated on three hand gesture datasets containing, respectively, 10, 14 and 25 gestures with specific challenging tasks. We also conduct an experiment to assess the influence of depth-based hand pose estimation on our approach. Experimental results demonstrate the potential of the proposed solution for hand gesture recognition, including low-latency recognition. Comparative results with state-of-the-art methods are reported.
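The multi-level temporal pyramid mentioned above can be illustrated with a minimal pooling sketch: level l splits the descriptor sequence into 2^l segments and pools each, so coarse and fine temporal structure are both retained. Mean pooling and the level count here are illustrative choices, not the paper's exact configuration:

```python
import numpy as np

def temporal_pyramid(seq: np.ndarray, levels: int = 3) -> np.ndarray:
    """Pool a (T, D) descriptor sequence with a multi-level temporal
    pyramid: level l splits the sequence into 2**l segments and
    mean-pools each, concatenating all pooled segments."""
    parts = []
    for l in range(levels):
        for seg in np.array_split(seq, 2 ** l):
            parts.append(seg.mean(axis=0))
    return np.concatenate(parts)   # length D * (2**levels - 1)
```

The concatenated vector (here 7 segments for 3 levels) is what a linear SVM would consume; in the paper the per-segment representation is a Fisher-kernel encoding rather than a plain mean.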

    What About Inclusive Education and ICT in Italy: a Scoping Study

    Strategies and approaches to inclusion in the classroom are important in developing a high-quality, inclusive experience for students with Special Education Needs (SEN). Generally, strategies are not geared towards specific exceptionalities, but are instead designed to be implemented across exceptionality categories. Pavone (2014) and de Anna, Gaspari and Mura (2015) determined, through their systematic literature reviews and research results, that co-operation among staff, commitment and accountability to the teaching of all students, differentiation of instruction, and recognizing "that social interaction is the means through which student knowledge is developed" are key to the successful inclusion of students with SEN. This paper looks at the issue of school inclusion by referring to the most recent Italian laws on the inclusive education of students with special educational needs. Inclusive education means that all students attend and are welcomed by their neighbourhood schools in age-appropriate, regular classes and are supported to learn, contribute and participate in all aspects of the life of the school; it concerns how we develop and design our schools, classrooms, programs and activities so that all students learn and participate together. ICT should therefore be considered a key tool for promoting equity in educational opportunities, that is, using ICT to support the learning of learners with disabilities and special educational needs in inclusive settings within compulsory education. The paper also discusses how Italian teachers can realize good practices for inclusion through the use of ICT.

    An original framework for understanding human actions and body language by using deep neural networks

    The evolution of both Computer Vision (CV) and Artificial Neural Networks (ANNs) has allowed the development of efficient automatic systems for the analysis of people's behaviour. By studying hand movements it is possible to recognize gestures, often used by people to communicate information in a non-verbal way; these gestures can also be used to control or interact with devices without physically touching them. In particular, sign language and semaphoric hand gestures are the two foremost areas of interest, due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively, while the processing of body movements plays a key role in the action recognition and affective computing fields. The former is essential to understand how people act in an environment, while the latter tries to interpret people's emotions based on their poses and movements; both are essential in many computer vision applications, including event recognition and video surveillance. In this Ph.D. thesis, an original framework for understanding actions and body language is presented. The framework is composed of three main modules: the first proposes a method based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) for the recognition of sign language and semaphoric hand gestures; the second presents a solution based on 2D skeletons and two-branch stacked LSTM-RNNs for action recognition in video sequences; the last provides a solution for basic non-acted emotion recognition using 3D skeletons and Deep Neural Networks (DNNs). The performance of LSTM-RNNs is explored in depth, because their ability to model the long-term contextual information of temporal sequences makes them suitable for analysing body movements. All the modules were tested on challenging, well-known datasets, showing remarkable results compared to current literature methods.
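Why LSTMs suit long temporal sequences of body movements can be seen in a single cell update, sketched below in plain NumPy. This is the standard textbook LSTM formulation (gates stacked in one matrix product), not the thesis's exact implementation:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step over a sequence frame x: the gated cell state c
    carries long-range context across a gesture or action sequence,
    which plain RNNs struggle to retain."""
    z = W @ x + U @ h + b                    # stacked gate pre-activations
    i, f, o, g = np.split(z, 4)              # input, forget, output, candidate
    i, f, o = (1 / (1 + np.exp(-v)) for v in (i, f, o))  # sigmoid gates
    c = f * c + i * np.tanh(g)               # forget old / write new content
    h = o * np.tanh(c)                       # exposed hidden state
    return h, c
```

A gesture classifier would iterate `lstm_step` over all frames of a skeleton sequence and feed the final (or pooled) hidden state to a softmax layer; "stacked" LSTMs simply feed each layer's h sequence into the next layer.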

    Deep Learning Approach For Sign Language Recognition

    Sign language is a method of communication based on hand movements, used among people with hearing loss. Problems occur in communication between hearing people and people with hearing impairments, because not everyone understands sign language, so a model for sign language recognition is needed. This study aims to build a model for recognizing hand sign language using a deep learning approach. The model used is a Convolutional Neural Network (CNN). It is tested on the ASL alphabet database, consisting of 27 categories of 3,000 images each, for a total of 87,000 hand-signal images of 200 x 200 pixels. First, the input images are resized to 32 x 32 pixels. The dataset is then split into 75% for training and 25% for validation. The test results indicate that the proposed model performs well, with 99% accuracy. Experiment results also show that preprocessing images with background correction can improve model performance.
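The 75%/25% train/validation split described above can be sketched as follows; the helper name and the shuffling seed are illustrative, not taken from the study:

```python
import numpy as np

def train_val_split(images, labels, val_frac=0.25, seed=42):
    """Shuffle and split a dataset into training and validation parts,
    mirroring the 75/25 split described in the abstract."""
    idx = np.random.default_rng(seed).permutation(len(images))
    n_val = int(len(images) * val_frac)
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    return ((images[train_idx], labels[train_idx]),
            (images[val_idx], labels[val_idx]))
```

For the 87,000-image dataset this yields 65,250 training and 21,750 validation samples; the CNN itself (convolution, pooling and dense layers over the 32 x 32 inputs) would then be trained on the first part and evaluated on the second.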