
    ASL Recognition Quality Analysis Based on Sensory Gloves and MLP Neural Network

    A simulated human hand model was built using a virtual reality program that converts printed letters into a human hand figure representing American Sign Language (ASL); the program was built using forward and inverse kinematics equations of the human hand. The inputs to the simulation program are ordinary printed letters, and the outputs are the hand figures representing the corresponding ASL letters. In this research, a hardware system was designed to recognize the ASL manual alphabet using a glove-mounted sensor design and an artificial neural network, which enhances the recognition process and converts the ASL manual alphabet into printed letters. The hardware system uses flex sensors positioned on gloves to obtain finger joint angle data when each ASL letter is shown. In addition, the system uses a DAQ 6212 to interface the sensors with the PC. We trained and tested our hardware system on ASL manual-alphabet words and names; the recognition results have an accuracy of 90.19%, and the software system for converting printed English names and words into ASL has 100% accuracy.
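    The glove-to-letter pipeline described above (flex-sensor joint angles in, classified letters out) can be sketched with a small multilayer perceptron. Everything below is illustrative and assumes numpy: the angle profiles, three-letter vocabulary, and network size are invented stand-ins, not the paper's actual sensor data or architecture.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for flex-sensor data: 5 joint-angle readings (degrees)
    # per sample, three hypothetical letter classes with distinct profiles.
    centroids = np.array([[10, 80, 80, 80, 80],   # fist-like shape
                          [80, 10, 10, 10, 10],   # open-hand-like shape
                          [40, 40, 40, 40, 40]])  # curved-hand-like shape
    X = np.vstack([c + rng.normal(0, 3, (30, 5)) for c in centroids]) / 90.0
    y = np.repeat(np.arange(3), 30)
    T = np.eye(3)[y]                              # one-hot targets

    # One-hidden-layer MLP trained with batch gradient descent.
    W1 = rng.normal(0, 0.5, (5, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, (16, 3)); b2 = np.zeros(3)

    for _ in range(500):
        H = np.tanh(X @ W1 + b1)                  # hidden activations
        Z = H @ W2 + b2
        P = np.exp(Z - Z.max(1, keepdims=True))
        P /= P.sum(1, keepdims=True)              # softmax probabilities
        G = (P - T) / len(X)                      # softmax cross-entropy grad
        dW2 = H.T @ G; db2 = G.sum(0)
        dH = G @ W2.T * (1 - H ** 2)              # backprop through tanh
        dW1 = X.T @ dH; db1 = dH.sum(0)
        for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
            p -= 1.0 * g                          # learning rate 1.0

    H = np.tanh(X @ W1 + b1)
    accuracy = ((H @ W2 + b2).argmax(1) == y).mean()
    print(f"training accuracy: {accuracy:.2f}")
    ```

    On well-separated toy classes like these, the network converges quickly; a real glove system would train on recorded sensor readings per signer and evaluate on held-out samples.
    
    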

    IOBSERVER: species recognition via computer vision

    This paper is about the design of an automated computer vision system that is able to recognize the species of individual fish as they are classified on board a fishing vessel and to produce a report file with that information. This system is called iObserver and it is part of the project Life-iSEAS (Life programme). A first version of the system has been tested on the oceanographic vessel “Miguel Oliver”. At the time of writing, a more advanced prototype is being tested on board another oceanographic vessel, the “Vizconde de Eza”. We describe the hardware design and the algorithms used by the computer vision software. Peer Reviewed

    Sign Language Fingerspelling Classification from Depth and Color Images using a Deep Belief Network

    Automatic sign language recognition is an open problem that has received a lot of attention recently, not only because of its usefulness to signers, but also due to the numerous applications a sign classifier can have. In this article, we present a new feature extraction technique for hand pose recognition using depth and intensity images captured from a Microsoft Kinect sensor. We applied our technique to American Sign Language fingerspelling classification using a Deep Belief Network, for which our feature extraction technique is tailored. We evaluated our results on a multi-user data set with two scenarios: one with all known users and one with an unseen user. We achieved 99% recall and precision on the first, and 77% recall and 79% precision on the second. Our method is also capable of real-time sign classification and is adaptive to any environment or lighting intensity. Comment: Published in 2014 Canadian Conference on Computer and Robot Vision
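    A common preprocessing step when working with Kinect depth frames (not necessarily this paper's exact pipeline) is to segment the hand as the blob of pixels nearest the camera, then pass the masked depth and intensity channels on for feature extraction. A toy sketch with a synthetic depth map, assuming numpy; the frame size, distances, and 150 mm tolerance band are illustrative choices:

    ```python
    import numpy as np

    # Synthetic stand-in for a Kinect depth frame (millimetres); real
    # frames are 640x480, but an 8x8 toy grid shows the idea.
    depth = np.full((8, 8), 2000)          # background ~2 m away
    depth[2:5, 3:6] = 600                  # "hand" blob ~0.6 m away
    intensity = np.full((8, 8), 50)
    intensity[2:5, 3:6] = 200

    # Segment the hand: keep pixels within a band behind the nearest point.
    near = depth.min()
    mask = depth < near + 150              # 150 mm tolerance band

    # Stack masked depth and intensity into one two-channel feature image,
    # the form a classifier consuming both modalities might expect.
    features = np.stack([depth * mask, intensity * mask], axis=0)

    hand_pixels = int(mask.sum())
    print(hand_pixels)                     # pixels in the segmented blob
    ```

    The segmented two-channel image would then be cropped, resized, and fed to the feature extractor and classifier.
    
    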

    A Teacher in the Living Room? Educational Media for Babies, Toddlers, and Preschoolers

    Examines available research, and the arguments of proponents and critics, concerning electronic educational media use by young children. Examines educational claims made in marketing and provides recommendations for developing research and product standards.

    SymbolDesign: A User-centered Method to Design Pen-based Interfaces and Extend the Functionality of Pointer Input Devices

    A method called "SymbolDesign" is proposed that can be used to design user-centered interfaces for pen-based input devices. It can also extend the functionality of pointer input devices such as the traditional computer mouse or the Camera Mouse, a camera-based computer interface. Users can create their own interfaces by choosing single-stroke movement patterns that are convenient to draw with the selected input device and by mapping them to a desired set of commands. A pattern could be the trace of a moving finger detected with the Camera Mouse or a symbol drawn with an optical pen. The core of the SymbolDesign system is a dynamically created classifier, in the current implementation an artificial neural network. The architecture of the neural network automatically adjusts according to the complexity of the classification task. In experiments, subjects used the SymbolDesign method to design and test the interfaces they created, for example, to browse the web. The experiments demonstrated good recognition accuracy and responsiveness of the user interfaces. The method provided an easily-designed and easily-used computer input mechanism for people without physical limitations, and, with some modifications, has the potential to become a computer access tool for people with severe paralysis. National Science Foundation (IIS-0093367, IIS-0308213, IIS-0329009, EIA-0202067)
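    Before any classifier, neural network or otherwise, can compare single-stroke patterns drawn at different speeds and sizes, the stroke is typically resampled to a fixed number of points and normalized for position and scale. The sketch below shows that common preprocessing step, assuming numpy; it is a generic approach, not SymbolDesign's own implementation, and the stroke coordinates and 16-point resolution are invented:

    ```python
    import numpy as np

    def normalize_stroke(points, n=16):
        """Resample a single-stroke trace to n evenly spaced points,
        then centre it and scale it into a unit bounding box."""
        pts = np.asarray(points, dtype=float)
        # Cumulative arc length along the stroke.
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])
        # Evenly spaced arc-length positions, interpolated per coordinate.
        t = np.linspace(0, s[-1], n)
        out = np.column_stack([np.interp(t, s, pts[:, i]) for i in range(2)])
        out -= out.mean(axis=0)                   # translate to origin
        span = np.ptp(out, axis=0).max()
        return out / span if span > 0 else out    # scale to unit box

    # A crude "L"-shaped stroke sampled unevenly, as a camera or pen
    # might record it.
    stroke = [(0, 0), (0, 4), (0, 9), (1, 10), (5, 10)]
    resampled = normalize_stroke(stroke)
    print(resampled.shape)  # (16, 2)
    ```

    The fixed-length, normalized point sequence gives every pattern the same input dimensionality, which is what lets a classifier's architecture be sized from the pattern set.
    
    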

    Pickup usability dominates: a brief history of mobile text entry research and adoption

    Text entry on mobile devices (e.g. phones and PDAs) has been a research challenge since devices shrank below laptop size: mobile devices are simply too small to have a traditional full-size keyboard. There has been a profusion of research into text entry techniques for smaller keyboards and touch screens, some of which have become mainstream while others have not lived up to early expectations. As the mobile phone industry moves to mainstream touch screen interaction, we review the range of input techniques for mobiles, together with the evaluations that have taken place to assess their validity, from theoretical modelling through to formal usability experiments. We also report initial results on iPhone text entry speed.
