
    Touchless Typing using Head Movement-based Gestures

    Physical contact-based typing interfaces are not suitable for people with upper limb disabilities such as quadriplegia. This paper therefore proposes a touchless typing interface that uses an on-screen QWERTY keyboard and a front-facing smartphone camera mounted on a stand. The keys of the keyboard are grouped into nine color-coded clusters. Users point to the letters they want to type simply by moving their head, and the camera records these head movements. The recorded gestures are then translated into a cluster sequence. The translation module is implemented using CNN-RNN, Conv3D, and a modified GRU-based model that uses a pre-trained embedding rich in head-pose features. The performance of these models was evaluated under four different scenarios on a dataset of 2,234 video sequences collected from 22 users. The modified GRU-based model outperforms the standard CNN-RNN and Conv3D models in three of the four scenarios. The results are encouraging and suggest promising directions for future research.
    Comment: The two lead authors contributed equally. The dataset and code are available upon request; please contact the last author.
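
    For illustration, here is a minimal sketch of how a GRU model of this kind might consume per-frame head-pose embeddings and emit one of the nine key clusters. The class name, layer sizes, and input encoder are assumptions for the example, not the paper's actual architecture.

    # Hypothetical sketch, loosely following the abstract: per-frame
    # head-pose embeddings feed a GRU, and the final hidden state is
    # mapped to one of the nine color-coded key clusters.
    import torch
    import torch.nn as nn

    class GestureToClusterGRU(nn.Module):
        def __init__(self, embed_dim=128, hidden_dim=256, num_clusters=9):
            super().__init__()
            self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
            self.classifier = nn.Linear(hidden_dim, num_clusters)

        def forward(self, pose_embeddings):
            # pose_embeddings: (batch, frames, embed_dim), e.g. produced
            # by a pre-trained head-pose encoder applied to each frame.
            _, last_hidden = self.gru(pose_embeddings)
            return self.classifier(last_hidden.squeeze(0))

    # Example: classify a 30-frame gesture clip into one of 9 clusters.
    model = GestureToClusterGRU()
    logits = model(torch.randn(1, 30, 128))
    predicted_cluster = logits.argmax(dim=-1)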

    TWO-HANDED TYPING METHOD ON AN ARBITRARY SURFACE

    A computing device may detect user input, such as finger movements resembling typing on an invisible virtual keyboard in the air or on any surface, to enable typing. The computing device may use sensors (e.g., accelerometers, cameras, piezoelectric sensors, etc.) to detect the user’s finger movements, such as the user’s fingers moving through the air and/or contacting a surface. The computing device may then decode (or, in other words, convert, interpret, analyze, etc.) the detected finger movements to identify corresponding inputs representative of characters (e.g., alphanumeric characters, national characters, special characters, etc.). To reduce input errors, the computing device may decode the detected finger movements, at least in part, based on contextual information, such as preceding characters, words, and/or the like entered via previously detected user inputs. Similarly, the computing device may apply machine learning techniques and adjust parameters, such as a signal-to-noise ratio, to improve input accuracy. In some examples, the computing device may implement specific recognition, prediction, and correction algorithms for the same purpose. In this way, the computing device may accommodate biases in finger movements that may be specific to the user entering the input.
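
    As a rough illustration of the contextual-decoding idea (not the publication's algorithm), the following sketch combines a sensor-derived likelihood for each candidate character with a prior from the preceding text. The function names and the toy context model are assumptions for the example.

    # Illustrative only: decode a finger movement into a character by
    # weighting sensor evidence with a contextual prior.
    def decode_keystroke(sensor_scores, preceding_text, context_model):
        """sensor_scores: dict mapping candidate characters to likelihoods
        derived from the detected movement; context_model(prefix, ch)
        returns a prior weight for ch given the text typed so far."""
        best_char, best_score = None, float("-inf")
        for ch, likelihood in sensor_scores.items():
            # Combine sensor evidence with context, e.g. a character n-gram.
            score = likelihood * context_model(preceding_text, ch)
            if score > best_score:
                best_char, best_score = ch, score
        return best_char

    # Toy context model: slightly prefer 'h' after 't' (as in "the").
    context = lambda prefix, ch: 2.0 if prefix.endswith("t") and ch == "h" else 1.0
    print(decode_keystroke({"h": 0.4, "b": 0.5}, "t", context))  # -> "h"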

    On the efficient representation and execution of deep acoustic models

    In this paper we present a simple and computationally efficient quantization scheme that enables us to reduce the resolution of the parameters of a neural network from 32-bit floating point values to 8-bit integer values. The proposed quantization scheme leads to significant memory savings and enables the use of optimized hardware instructions for integer arithmetic, thus significantly reducing the cost of inference. Finally, we propose a "quantization aware" training process that applies the proposed scheme during network training and find that it allows us to recover most of the loss in accuracy introduced by quantization. We validate the proposed techniques by applying them to a long short-term memory-based acoustic model on an open-ended large vocabulary speech recognition task.
    Comment: Accepted conference paper: The Annual Conference of the International Speech Communication Association (Interspeech), 2016.
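
    For a sense of the basic mechanics, here is a minimal sketch of uniform per-tensor linear quantization from 32-bit floats to 8-bit integers; the paper's exact scheme may differ in details such as rounding and range handling.

    # Minimal sketch: map float32 weights onto uint8 via a per-tensor
    # scale and offset, then dequantize to check the round-trip error.
    import numpy as np

    def quantize_uint8(weights):
        w_min, w_max = float(weights.min()), float(weights.max())
        scale = max((w_max - w_min) / 255.0, 1e-8)  # avoid divide-by-zero
        q = np.round((weights - w_min) / scale).astype(np.uint8)
        return q, scale, w_min

    def dequantize(q, scale, w_min):
        return q.astype(np.float32) * scale + w_min

    w = np.random.randn(4, 4).astype(np.float32)
    q, scale, w_min = quantize_uint8(w)
    print(np.max(np.abs(w - dequantize(q, scale, w_min))))  # at most ~scale / 2

    The "quantization aware" training mentioned in the abstract amounts to applying such a quantize/dequantize round trip during training so the network learns parameters that remain accurate after quantization.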