38,431 research outputs found

    Adapting Speech Recognition in Augmented Reality for Mobile Devices in Outdoor Environments

    This paper describes the process of integrating automatic speech recognition (ASR) into a mobile application and explores the benefits and challenges of combining speech with augmented reality (AR) in outdoor environments. Augmented reality allows end-users to interact with the displayed information and perform tasks, while enriching the user's perception of the real world by adding virtual information to it. Speech is the most natural form of communication: it allows hands-free interaction and lets end-users quickly and easily access a range of available features. Speech recognition technology is available on most current mobile devices, but it typically relies on an Internet connection to receive transcripts from remote servers, e.g., Google speech recognition. In some outdoor environments, however, Internet access is unavailable or of poor quality. We integrated an off-line automatic speech recognition module, which requires no Internet connection, into an AR application for outdoor use. Currently, speech interaction is used within the application to access five features: taking a photo, shooting a film, communicating, messaging-related tasks, and requesting information, whether geographic, biometric, or climatic. The application provides solutions for managing and interacting with the mobile device while offering good usability. We compared the online and off-line speech recognition systems to assess their adequacy for these tasks. Both systems were tested under conditions commonly found in outdoor environments, such as varying Internet access quality, presence of noise, and distractions.
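    The online/off-line trade-off this abstract describes can be made concrete with a short sketch. The example below is not the authors' implementation: it uses the Python speech_recognition library, where recognize_google() requires Internet access and recognize_sphinx() decodes locally via CMU PocketSphinx, and the five command keywords are illustrative stand-ins for the application's features.

```python
# Minimal sketch of online ASR with off-line fallback, using the
# `speech_recognition` library (pip install SpeechRecognition pocketsphinx).
# The command keywords below are assumptions, not the paper's vocabulary.
import speech_recognition as sr

COMMANDS = {"photo": "take a photo", "film": "shoot a film",
            "call": "communicate", "message": "messaging",
            "info": "request information"}

def recognize(audio, recognizer, online=True):
    """Return a transcript, falling back to off-line ASR without Internet."""
    if online:
        try:
            return recognizer.recognize_google(audio)   # remote server
        except sr.RequestError:                         # no/poor Internet
            pass
    return recognizer.recognize_sphinx(audio)           # local decoding

def dispatch(transcript):
    """Map a transcript onto one of the five application features."""
    for keyword, feature in COMMANDS.items():
        if keyword in transcript.lower():
            return feature
    return None

if __name__ == "__main__":
    r = sr.Recognizer()
    with sr.Microphone() as source:                     # requires PyAudio
        r.adjust_for_ambient_noise(source)              # helps in outdoor noise
        audio = r.listen(source)
    print(dispatch(recognize(audio, r, online=True)))
```

    Note that recognize_sphinx() may raise sr.UnknownValueError on unintelligible speech; a real application would catch it and re-prompt the user.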

    Implementation of Backpropagation-Based Optical Character Recognition for Text-to-Speech on Android Devices

    Text-to-speech applications on mobile devices currently require the user to manually enter the words to be rendered as speech. In this study, we designed a word-input system for a text-to-speech application using digital image processing. The system lets users simply capture the words to be voiced, without typing them into the text input area. The method comprises image acquisition, image pre-processing, character segmentation, character recognition, and integration with a text-to-speech engine on the mobile device. Image acquisition is performed with the device's camera, which captures the words to be entered, and character recognition uses the backpropagation algorithm. The image processing system was successfully built and integrated with the Google Text-to-Speech engine. The character recognition system uses an artificial neural network (ANN) model and reaches an accuracy of 97.58%. The system recognizes several font types, namely Arial, Calibri, and Verdana. The mean recognition accuracy on the test samples was 94.7%, with shooting distances in the range of 3-8 cm and the camera held upright, facing the text.
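    For readers unfamiliar with the pipeline stages this abstract lists, here is a minimal sketch of pre-processing, contour-based character segmentation, and a one-hidden-layer network trained with plain backpropagation. It is an illustration under assumptions (OpenCV for segmentation; the 20x20 glyph size, layer sizes, and learning rate are invented), not the paper's actual configuration.

```python
# Sketch of the OCR pipeline: binarize, segment characters by contour,
# classify each glyph with a small sigmoid MLP trained via backpropagation.
import cv2
import numpy as np

def segment_characters(image_path, size=20):
    """Binarize the photo and cut out one normalized image per character."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = sorted(cv2.boundingRect(c) for c in contours)  # left-to-right
    return [cv2.resize(binary[y:y + h, x:x + w], (size, size)) / 255.0
            for x, y, w, h in boxes]

class MLP:
    """Minimal ANN trained with plain backpropagation (sigmoid units)."""
    def __init__(self, n_in=400, n_hidden=64, n_out=26, lr=0.1):
        rng = np.random.default_rng(0)
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_out))
        self.lr = lr

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(self, x):
        self.h = self._sigmoid(x @ self.W1)
        return self._sigmoid(self.h @ self.W2)

    def train_step(self, x, target):
        y = self.forward(x)
        d_out = (y - target) * y * (1 - y)            # output-layer delta
        d_hid = (d_out @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= self.lr * np.outer(self.h, d_out)  # backpropagate errors
        self.W1 -= self.lr * np.outer(x, d_hid)
        return y
```

    A recognized word would then be assembled from the per-glyph predictions (e.g., net.forward(glyph.flatten()) for each segmented glyph) and handed to a TTS engine such as Google Text-to-Speech.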

    On the Compression of Recurrent Neural Networks with an Application to LVCSR Acoustic Modeling for Embedded Speech Recognition

    We study the problem of compressing recurrent neural networks (RNNs). In particular, we focus on the compression of RNN acoustic models, motivated by the goal of building compact and accurate speech recognition systems that can run efficiently on mobile devices. In this work, we present a technique for general recurrent model compression that jointly compresses both recurrent and non-recurrent inter-layer weight matrices. We find that the proposed technique allows us to reduce the size of our Long Short-Term Memory (LSTM) acoustic model to a third of its original size with negligible loss in accuracy.
    Comment: Accepted at ICASSP 2016
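    The paper compresses recurrent and inter-layer matrices jointly; the sketch below shows only the generic low-rank idea that underlies such schemes, applied to a single weight matrix via truncated SVD. The shapes and rank are arbitrary placeholders, not the paper's settings.

```python
# Low-rank compression of one weight matrix via truncated SVD: W (m x n) is
# replaced by two factors holding m*r + r*n parameters instead of m*n.
import numpy as np

def compress(W, rank):
    """Return factors (A, B) with W ~ A @ B, keeping the top `rank` modes."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]          # m x r, singular values folded in
    B = Vt[:rank, :]                    # r x n
    return A, B

if __name__ == "__main__":
    m, n, r = 1024, 2048, 160           # invented LSTM-layer-like shapes
    W = np.random.randn(m, n)
    A, B = compress(W, r)
    original, compressed = W.size, A.size + B.size
    print(f"params: {original} -> {compressed} "
          f"({compressed / original:.0%} of original)")
    print("relative reconstruction error:",
          np.linalg.norm(W - A @ B) / np.linalg.norm(W))
```

    In practice the factors are learned or fine-tuned after truncation, since a random matrix (as here) has no low-rank structure to exploit; trained acoustic-model weights typically do.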

    DolphinAttack: Inaudible Voice Commands

    Speech recognition (SR) systems such as Siri or Google Now have become an increasingly popular human-computer interaction method and have turned various systems into voice controllable systems (VCS). Prior work on attacking VCS shows that hidden voice commands that are incomprehensible to people can control such systems. Hidden voice commands, though hidden, are nonetheless audible. In this work, we design a completely inaudible attack, DolphinAttack, that modulates voice commands on ultrasonic carriers (e.g., f > 20 kHz) to achieve inaudibility. By leveraging the nonlinearity of microphone circuits, the modulated low-frequency audio commands can be successfully demodulated, recovered, and, more importantly, interpreted by the speech recognition systems. We validate DolphinAttack on popular speech recognition systems, including Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana, and Alexa. By injecting a sequence of inaudible voice commands, we show a few proof-of-concept attacks, including activating Siri to initiate a FaceTime call on an iPhone, activating Google Now to switch the phone to airplane mode, and even manipulating the navigation system in an Audi automobile. We propose hardware and software defense solutions. We validate that it is feasible to detect DolphinAttack by classifying the audio using a support vector machine (SVM), and suggest re-designing voice controllable systems to be resilient to inaudible voice command attacks.
    Comment: 15 pages, 17 figures
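    The modulation/demodulation chain this abstract describes can be sketched at the signal level. This is an illustration of the DSP only, with invented parameters: the sample rate, carrier frequency, modulation depth, and a simple quadratic term standing in for the microphone circuit's nonlinearity are all assumptions.

```python
# Signal-level sketch of the attack's core idea: amplitude-modulate a
# baseband "voice" signal onto a >20 kHz carrier (inaudible), then let a
# quadratic microphone nonlinearity recover a baseband copy of it.
import numpy as np

fs = 192_000                                  # high rate to represent >20 kHz
t = np.arange(0, 0.1, 1 / fs)
voice = np.sin(2 * np.pi * 400 * t)           # stand-in for a voice command
carrier = np.cos(2 * np.pi * 25_000 * t)      # f_c > 20 kHz: inaudible

# Amplitude modulation: all transmitted energy sits around 25 kHz.
transmitted = (1 + 0.8 * voice) * carrier

# Nonlinear front end modeled as s + alpha * s^2: squaring the AM signal
# produces a baseband term proportional to the original voice.
received = transmitted + 0.1 * transmitted ** 2

# Crude low-pass filter (moving average) keeps only the audible band, where
# the demodulated command re-appears for the ASR system to decode.
kernel = np.ones(96) / 96                     # ~2 kHz cutoff at fs = 192 kHz
demodulated = np.convolve(received, kernel, mode="same")
print("correlation with original voice:",
      np.corrcoef(demodulated[2000:-2000], voice[2000:-2000])[0, 1])
```

    The high correlation printed at the end shows why the recovered baseband signal remains intelligible to an ASR decoder even though the transmitted signal contains no audible-band energy.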