6 research outputs found

    Application of Neural Networks in Sign Language Gesture Recognition

    Neural Networks have a wide range of applications in the area of gesture recognition. This research presents a method to recognize the English alphabet from A to Z in real time using Neural Networks, working from a database of signs performed by a signer in front of a camera. A feature vector giving the dimensions of each sign is calculated and processed using Neural Networks. The methodology is implemented in MATLAB 7.0.
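
    Beyond naming MATLAB 7.0, the abstract leaves the implementation open, so the pipeline it describes (a fixed-length feature vector of sign dimensions mapped by a neural network to one of the 26 letters) can only be sketched. The Python/scikit-learn example below is such a sketch: the feature length, the network size, and the placeholder training data are all illustrative assumptions, not details from the paper.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        N_FEATURES = 10   # assumed length of the per-sign "dimensions" vector
        N_CLASSES = 26    # letters A-Z

        rng = np.random.default_rng(0)

        # Placeholder data standing in for feature vectors measured from camera
        # images of a signer (the paper builds these from its sign database).
        X_train = rng.normal(size=(20 * N_CLASSES, N_FEATURES))
        y_train = np.repeat(np.arange(N_CLASSES), 20)   # labels 0..25 -> 'A'..'Z'

        # Small feedforward network; the hidden-layer size is an arbitrary choice.
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
        clf.fit(X_train, y_train)

        # At run time, each segmented sign yields one feature vector to classify.
        new_sign = rng.normal(size=(1, N_FEATURES))
        letter = chr(ord('A') + int(clf.predict(new_sign)[0]))
        print("Recognized letter:", letter)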

    Communicating through Distraction: A Study of Deaf Drivers and Their Communication Style in a Driving Environment

    This study will investigate the driving habits of deaf drivers and the ways in which they adapt to the driving experience. The lack of an auditory sense presents some unique challenges: while driving is a predominantly visual task, auditory stimulation is still a part of the driving experience. This study seeks to determine how deaf drivers cope in a driving environment despite hearing loss. The results will help inform policy that can make the driving experience safer.

    Generalized multi-stream hidden Markov models.

    For complex classification systems, data is usually gathered from multiple sources of information that have varying degrees of reliability. In fact, assuming that the different sources have the same relevance in describing all the data might lead to erroneous behavior. The classification error accumulates and can be more severe for temporal data, where each sample is represented by a sequence of observations. Thus, there is compelling evidence that learning algorithms should include a relevance weight for each source of information (stream) as a parameter that needs to be learned. In this dissertation, we assume that the multi-stream temporal data is generated by independent and synchronous streams. Using this assumption, we develop, implement, and test multi-stream continuous and discrete hidden Markov model (HMM) algorithms. For the discrete case, we propose two new approaches to generalize the baseline discrete HMM. The first combines unsupervised learning, feature discrimination, standard discrete HMMs, and weighted distances to learn the codebook with feature-dependent weights for each symbol. The second approach consists of modifying the HMM structure to include stream relevance weights, generalizing the standard discrete Baum-Welch learning algorithm, and deriving the necessary conditions to optimize all model parameters simultaneously. We also generalize the minimum classification error (MCE) discriminative training algorithm to include stream relevance weights. For the continuous HMM, we introduce a new approach that integrates the stream relevance weights in the objective function. Our approach is based on the linearization of the probability density function. Two variations are proposed: the mixture-level and state-level variations. As in the discrete case, we generalize the continuous Baum-Welch learning algorithm to accommodate these changes and derive the necessary conditions for updating the model parameters. We also generalize the MCE learning algorithm to derive the necessary conditions for the model parameters' update. The proposed discrete and continuous HMMs are tested on synthetic data sets. They are also validated on various applications including Australian Sign Language, audio classification, face classification, and, more extensively, the problem of landmine detection using ground-penetrating radar data. For all applications, we show that considerable improvement can be achieved compared to the baseline HMM and existing multi-stream HMM algorithms.
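
    The central idea of the abstract, a relevance weight per stream that scales how much that stream contributes to the emission probability, can be illustrated with a short sketch. The weighted log-linear combination of per-stream emissions and the scaled forward pass below follow a common multi-stream HMM convention and are assumptions here; the dissertation derives its own discrete and continuous (linearized) formulations together with the generalized Baum-Welch and MCE updates.

        import numpy as np

        def multistream_emission(B_list, weights, obs):
            """Combined emission probabilities for one multi-stream observation.

            B_list  : per-stream emission matrices, each (n_states, n_symbols_k)
            weights : (n_streams,) stream relevance weights (assumed to sum to 1)
            obs     : observed symbol index in each stream
            """
            log_b = sum(w * np.log(B[:, o]) for B, w, o in zip(B_list, weights, obs))
            return np.exp(log_b)

        def forward_loglik(pi, A, B_list, weights, obs_seq):
            """Log-likelihood of a multi-stream symbol sequence (scaled forward pass)."""
            alpha = pi * multistream_emission(B_list, weights, obs_seq[0])
            scale = alpha.sum()
            loglik = np.log(scale)
            alpha /= scale
            for obs in obs_seq[1:]:
                alpha = (alpha @ A) * multistream_emission(B_list, weights, obs)
                scale = alpha.sum()
                loglik += np.log(scale)
                alpha /= scale
            return loglik

        # Toy 2-state model with two streams of different alphabet sizes; the
        # second stream is given more relevance via its weight.
        pi = np.array([0.6, 0.4])
        A = np.array([[0.7, 0.3],
                      [0.4, 0.6]])
        B_list = [np.array([[0.8, 0.2], [0.3, 0.7]]),             # stream 1: 2 symbols
                  np.array([[0.5, 0.3, 0.2], [0.1, 0.2, 0.7]])]   # stream 2: 3 symbols
        weights = np.array([0.3, 0.7])
        obs_seq = [(0, 2), (1, 2), (1, 0)]   # one (stream 1, stream 2) pair per time step
        print("log-likelihood:", forward_loglik(pi, A, B_list, weights, obs_seq))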