6 research outputs found

    Powerpoint Controller Using Speech Recognition

    During a presentation, it is hard to manage the slides because the presenter has to stand at the front of the room and often cannot reach the computer. Presenters need to pay attention both to their voice and to body language such as eye contact, facial expression, posture, gesture, and body orientation. Microsoft PowerPoint is a simple but very useful tool for creating digital presentations. Although it is simple to use, the application requires the presenter to control it directly, for example to start the slide show or move to the next slide. The purpose of this research is to minimize physical contact between the user and the computer during a presentation by controlling slide movement with voice. The research implements the Hidden Markov Model algorithm and the Sphinx-4 library
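    The slide-control logic described in the abstract can be sketched as follows. This is an illustrative assumption, not the paper's implementation: the command words and the `SlideController` class are hypothetical, and the speech front end (Sphinx-4 in the paper) is replaced by plain string input.

    ```python
    # Minimal sketch of voice-driven slide control. The recognition step
    # (Sphinx-4 / HMM in the paper) is stubbed out as plain text commands.

    class SlideController:
        """Tracks the current slide and applies recognized voice commands."""

        def __init__(self, total_slides):
            self.total = total_slides
            self.current = 0  # zero-based slide index

        def handle_command(self, command):
            """Map a recognized word to a slide action; unknown words are ignored."""
            if command == "next":
                self.current = min(self.current + 1, self.total - 1)
            elif command == "previous":
                self.current = max(self.current - 1, 0)
            elif command == "start":
                self.current = 0
            return self.current


    controller = SlideController(total_slides=10)
    for word in ["start", "next", "next", "previous"]:
        controller.handle_command(word)
    print(controller.current)  # 1
    ```

    Clamping at both ends keeps a stray "next" on the last slide or "previous" on the first from raising an error mid-presentation.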

    Implementation of Real-Time Static Hand Gesture Recognition Using Artificial Neural Network

    This paper implements static hand gesture recognition for the alphabetic signs “A” to “Z”, the numbers “0” to “9”, and additional punctuation marks such as “Period”, “Question Mark”, and “Space” in Sistem Isyarat Bahasa Indonesia (SIBI). Hand gestures are obtained by evaluating the contour representation from image segmentation of a glove worn by the user. Each gesture is then classified using an Artificial Neural Network (ANN) based on a training model previously built from 100 images per gesture. The accuracy rate of hand gesture translation is calculated to be 90%. In addition, speech translation recognizes NATO phonetic letters as the speech input for translation
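    The classification stage can be sketched as a small feedforward pass over a contour-feature vector. Everything concrete here is a toy assumption: the feature vector, the hand-picked weights, and the two-class label set stand in for the paper's ANN, which is trained on 100 images per gesture.

    ```python
    # Sketch of the ANN classification stage only: a single dense layer with a
    # sigmoid activation per gesture class, using hand-picked toy weights.
    import math

    def forward(features, weights, biases):
        """One dense layer followed by a sigmoid activation per class."""
        scores = []
        for w_row, b in zip(weights, biases):
            z = sum(f * w for f, w in zip(features, w_row)) + b
            scores.append(1.0 / (1.0 + math.exp(-z)))  # sigmoid
        return scores

    def classify(features, weights, biases, labels):
        """Return the label of the class with the highest activation."""
        scores = forward(features, weights, biases)
        return labels[scores.index(max(scores))]

    # Toy model: two gesture classes over three contour features.
    labels = ["A", "B"]
    weights = [[2.0, -1.0, 0.5], [-1.0, 2.0, 0.5]]
    biases = [0.0, 0.0]
    print(classify([1.0, 0.0, 0.0], weights, biases, labels))  # A
    ```

    In the real system the weights would come from training on the segmented glove images rather than being fixed by hand.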

    RED LEMON GROUP CHAT ANDROID APPLICATION USING TEXT-TO-SPEECH AND AUTOMATIC SPEECH RECOGNITION SYSTEM

    Communication is the most essential thing people do in daily activity, and there are many ways for people to communicate with each other. People commonly choose the easiest and most practical way to communicate, and using a smartphone is exactly that: it is easy to operate and can be carried anywhere. Meanwhile, much research claims that the mobile phone is one of several causes of traffic accidents, because people tend to keep using the smartphone while driving even though their concentration and physical capacity are limited. Nowadays many smartphones offer free applications for communicating by messaging or calls, each with its own distinct features. However, most of these features have not been able to solve the issue described above. Red Lemon is a group chat application that offers a smart solution for communicating even during activities that demand extra concentration. It can send messages and make calls using a speech recognition system, and it can deliver incoming messages without the user touching the phone using a text-to-speech system that reads out all received messages. Hence it is useful for people in many kinds of situations
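    The hands-free read-out flow can be sketched as a small handler that forwards every incoming message to a text-to-speech callback. This is a sketch under assumptions: the `MessageReader` class and its announcement format are illustrative, and the Android TTS engine the application would use is replaced by a stand-in callable.

    ```python
    # Sketch of reading incoming group-chat messages aloud. The real app would
    # hand the text to the platform TTS engine; here `speak` just records it.

    class MessageReader:
        """Forwards every received message to a text-to-speech callback."""

        def __init__(self, speak):
            self.speak = speak  # callable taking the text to voice

        def on_message(self, sender, text):
            # Announce the sender so a driver knows who wrote without looking.
            self.speak(f"Message from {sender}: {text}")


    spoken = []
    reader = MessageReader(speak=spoken.append)
    reader.on_message("Alice", "Running late")
    reader.on_message("Bob", "See you there")
    print(spoken[0])  # Message from Alice: Running late
    ```

    Keeping the TTS engine behind a plain callable makes the read-out logic testable without any audio hardware.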