    Wearable Capacitive-based Wrist-worn Gesture Sensing System

    Gesture control plays an increasingly significant role in modern human-machine interaction. This paper presents an innovative method of gesture recognition using flexible capacitive pressure sensors attached to the user’s wrist, in contrast to computer-vision approaches and sensors mounted on the fingers. The method is based on the pressure variations around the wrist as the gesture changes, and flexible, ultrathin capacitive pressure sensors are deployed to capture these variations. Embedding the sensors on a flexible substrate and reading out their capacitance requires a reliable microcontroller-based approach for measuring small changes in the sensors’ capacitance. This paper addresses these challenges: the measured capacitance values are collected and processed by a program developed in LabVIEW, which reconstructs the gesture on a computer. Compared to conventional approaches, the wrist-worn sensing method offers a low-cost, lightweight, wearable prototype on the user’s body. The experimental results show the potential and benefits of this approach and confirm that the accuracy and the number of recognizable gestures can be improved by increasing the number of sensors.
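    The authors’ code is not included in the abstract, so the following Python sketch is only an illustration of the kind of template-matching step such a system could use: one frame of capacitance changes from a few wrist sensors is compared against per-gesture calibration templates. The sensor count, gesture names, and template values are all invented for the example.

        # Minimal sketch (not the paper's code): classify a wrist gesture from an
        # array of capacitive pressure readings (four hypothetical sensors spaced
        # around the wrist) by nearest-neighbour matching against calibration
        # templates recorded beforehand.
        import numpy as np

        # Hypothetical calibration templates: mean capacitance change (pF) per
        # sensor for each gesture.
        templates = {
            "fist":      np.array([0.82, 0.41, 0.15, 0.60]),
            "open_hand": np.array([0.10, 0.05, 0.02, 0.08]),
            "point":     np.array([0.55, 0.70, 0.20, 0.30]),
        }

        def classify(reading):
            """Return the gesture whose template is closest in Euclidean distance."""
            return min(templates, key=lambda g: np.linalg.norm(reading - templates[g]))

        # One frame of measured capacitance deltas from the microcontroller:
        print(classify(np.array([0.80, 0.44, 0.12, 0.58])))  # -> fist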

    An EMG Gesture Recognition System with Flexible High-Density Sensors and Brain-Inspired High-Dimensional Classifier

    EMG-based gesture recognition shows promise for human-machine interaction, but such systems are often afflicted by signal and electrode variability that degrades performance over time. We present an end-to-end system that combats this variability using a large-area, high-density sensor array and a robust classification algorithm. EMG electrodes are fabricated on a flexible substrate and interfaced to a custom wireless device for 64-channel signal acquisition and streaming. We use brain-inspired high-dimensional (HD) computing to process EMG features in one-shot learning. The HD algorithm is tolerant of noise and electrode misplacement and can quickly learn from a few gestures without gradient descent or back-propagation. We achieve an average classification accuracy of 96.64% for five gestures, with only 7% degradation when training and testing across different days. Our system maintains this accuracy when trained with only three trials of each gesture, and it demonstrates accuracy comparable to the state of the art when trained with a single trial.
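    As a rough illustration of the HD computing approach described above (a sketch, not the paper’s implementation), the Python snippet below encodes a 64-channel feature frame by binding random channel-ID hypervectors with quantised amplitude-level hypervectors, bundles the results into one vector, and classifies by dot-product similarity to one-shot prototypes. The dimensionality, level count, and synthetic examples are assumptions.

        # Hyperdimensional (HD) classification sketch: bind, bundle, compare.
        import numpy as np

        D, CHANNELS, LEVELS = 10000, 64, 8
        rng = np.random.default_rng(0)
        ids = rng.choice([-1, 1], size=(CHANNELS, D))   # random channel-ID hypervectors
        levels = rng.choice([-1, 1], size=(LEVELS, D))  # amplitude-level hypervectors

        def encode(frame):
            """Encode one 64-channel feature frame (values in [0, 1)) as a hypervector."""
            q = np.clip((frame * LEVELS).astype(int), 0, LEVELS - 1)  # quantise amplitudes
            return np.sign((ids * levels[q]).sum(axis=0))             # bind, then bundle

        # One-shot "training": each class prototype is the encoding of a single example.
        examples = {"fist": rng.random(CHANNELS), "open_hand": rng.random(CHANNELS)}
        prototypes = {g: encode(x) for g, x in examples.items()}

        def classify(frame):
            hv = encode(frame)
            return max(prototypes, key=lambda g: prototypes[g] @ hv)  # best match

        # A noisy repeat of the "fist" example should still match its prototype.
        noisy = np.clip(examples["fist"] + rng.normal(0, 0.05, CHANNELS), 0, 0.999)
        print(classify(noisy))  # -> fist (with high probability)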

    Paradise Lost Revisited: GM and the UAW in Historical Perspective

    Purpose: An analysis of the historical relationship between GM and the United Automobile Workers (UAW) from 1936 through GM’s bankruptcy in 2009. How can this relationship be explained from the viewpoint of evolving labor and industrial relations in the US?

    Design/methodology/approach: Historical and comparative analyses; secondary analysis.

    Findings: Over time the relationship has been a dynamic and flexible one. In the first decades, the UAW’s most important objective was recognition of the union by GM. From the second half of the 1940s until the 1970s, the attention of both parties shifted towards a dynamic wage policy. Finally, from the 1970s onwards, safeguarding job security became the UAW’s main objective, whereas GM tried to maximize its room for maneuver to transform its Fordist production system into a more flexible one.

    Research limitations/implications: The present study provides a starting point for further in-depth research into the historical relationship between GM and the UAW.

    Originality/value: A longitudinal approach to the development of the labor-management relationship between two opposing parties in differing economic and technological contexts.

    Incorporating Speech Recognition into a Natural User Interface

    The Augmented/Virtual Reality (AVR) Lab has been working to study the applicability of recent virtual and augmented reality hardware and software, including the Oculus Rift, HTC Vive, Microsoft HoloLens, and the Unity game engine, to KSC operations. My project in this lab was to integrate voice recognition and voice commands into an easy-to-modify system that can be added to an existing portion of a Natural User Interface (NUI). A NUI is an intuitive, simple-to-use interface incorporating visual, touch, and speech recognition. Speech recognition capability allows users to perform actions or make inquiries using only their voice; because only speech is needed to control an on-screen object or enact a digital action, any user can quickly become accustomed to the system.

    Multiple programs were tested for use in a speech command and recognition system. Sphinx4 translates speech to text using a Hidden Markov Model (HMM) based language model, an acoustic model, and a word dictionary, and runs on Java. PocketSphinx has similar functionality to Sphinx4 but runs on C. Neither program was ideal, however, as building a Java or C wrapper slowed performance. The most suitable system tested was the Unity engine’s Grammar Recognizer. A Context-Free Grammar (CFG) is written in an XML file to specify the structure of the phrases and words that the recognizer will accept. Using the Speech Recognition Grammar Specification (SRGS) 1.0 makes modifying the recognized combinations of words and phrases simple and quick, and semantic information can also be added to the XML file, allowing even more control over how spoken words and phrases are interpreted by Unity. Additionally, a CFG under SRGS 1.0 yields finite-state-machine (FSM) behavior, limiting the potential for incorrectly heard words or phrases; an example grammar is sketched below.

    The purpose of my project was to investigate options for a speech recognition system. To that end I attempted to integrate Sphinx4 into a user interface. Sphinx4 had great accuracy and was the only free program tested able to perform offline speech dictation. However, it had a limited dictionary of recognizable words, single-syllable words were almost impossible for it to hear, and since it ran on Java it could not be integrated into the Unity-based NUI. PocketSphinx ran much faster than Sphinx4, which would have made it ideal as a plugin to the Unity NUI; unfortunately, creating a C# wrapper for the C code made the program unusable with Unity, as the wrapper slowed code execution and class files became unreachable. The Unity Grammar Recognizer proved the ideal speech recognition interface: it is flexible in recognizing multiple variations of the same command, and it is the most accurate of the programs tested because it uses an XML grammar to specify speech structure rather than relying solely on a dictionary and language model. The Unity Grammar Recognizer will be used with the NUI for these reasons, and because it is written in C#, which further simplifies the integration.
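    For readers unfamiliar with SRGS 1.0, the snippet below shows a hypothetical grammar of the kind described; the rule names and phrases are invented, not taken from the project. Python’s xml.etree is used here only to confirm the grammar is well-formed XML before it would be handed to Unity’s Grammar Recognizer.

        # A hypothetical SRGS 1.0 grammar constraining recognition to
        # "<action> the <object>" phrases, e.g. "open the hatch".
        import xml.etree.ElementTree as ET

        SRGS_GRAMMAR = """
        <grammar version="1.0" xml:lang="en-US" root="command"
                 xmlns="http://www.w3.org/2001/06/grammar">
          <rule id="command" scope="public">
            <ruleref uri="#action"/> the <ruleref uri="#object"/>
          </rule>
          <rule id="action">
            <one-of><item>open</item><item>close</item><item>rotate</item></one-of>
          </rule>
          <rule id="object">
            <one-of><item>hatch</item><item>valve</item><item>panel</item></one-of>
          </rule>
        </grammar>
        """

        ET.fromstring(SRGS_GRAMMAR.strip())  # raises ParseError if malformed
        print("grammar is well-formed XML")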

    Development of a Voice-Controlled Human-Robot Interface

    The goal of this thesis is to develop a voice-controlled human-robot interface (HRI) which allows a person to control and communicate with a robot. Dragon NaturallySpeaking, a commercially available automatic speech recognition engine, was chosen for the development of the proposed HRI. To achieve this goal, the Dragon software is used to create custom commands (or macros) which must satisfy three tasks: (a) directly controlling the robot by voice, (b) writing a robot program by voice, and (c) developing an HRI which allows the human and robot to communicate with each other using speech. The key is to generate keystrokes upon recognizing speech, using the three available types of macro: step-by-step, macro recorder, and advanced scripting. Experiments were conducted in three phases to test the functionality of the developed macros in accomplishing all three tasks. The results showed that the advanced scripting macro is the only type that works for all three tasks. It is also the most suitable because it is quick and easy to create and can be used to develop flexible, natural voice commands. Since the output of a macro is a series of keystrokes forming the syntax of a robot program, macros developed with the Dragon software can be used to communicate with virtually any robot by adjusting the output keystrokes.
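    Dragon’s advanced scripting macros are written in a Visual Basic-style language, so the Python sketch below is only an analogy for the mechanism described: each recognized voice command expands to a keystroke string that forms a valid line of robot-program syntax. The command names and the MOVE/SPEED syntax are hypothetical stand-ins for whatever language the target robot accepts.

        # Sketch of the macro idea: spoken command + spoken number -> keystrokes
        # that spell out one line of a (hypothetical) robot program.
        def macro_output(command, value):
            syntax = {
                "move forward":  "MOVE LIN X+{v}\n",
                "move backward": "MOVE LIN X-{v}\n",
                "set speed":     "SPEED {v}\n",
            }
            return syntax[command].format(v=value)

        # e.g. the user says "move forward ten"; the macro types a program line:
        print(macro_output("move forward", 10), end="")  # -> MOVE LIN X+10

    Retargeting the interface to a different robot would then only require editing the keystroke strings in the table, which mirrors the abstract’s point that the macros can drive virtually any robot.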

    Cutting tool tracking and recognition based on infrared and visual imaging systems using principal component analysis (PCA) and discrete wavelet transform (DWT) combined with neural networks

    The implementation of computerised condition monitoring systems for detecting cutting tools’ correct installation and diagnosing their faults is of high importance in modern manufacturing industries. The primary function of a condition monitoring system is to check for the existence of the tool before starting any machining process and to ensure its health during operation. The aim of this study is to assess the detection of the tool’s existence in the spindle and of its health (i.e. normal or broken) using infrared and vision systems as a non-contact methodology. The application of Principal Component Analysis (PCA) and the Discrete Wavelet Transform (DWT) combined with neural networks is investigated on both types of data in order to establish an effective and reliable novel software program for tool tracking and health recognition. Infrared and visual cameras are used to locate and track the cutting tool during the machining process using suitable analysis and image-processing algorithms. The capabilities of PCA and DWT combined with neural networks are investigated for recognising the tool’s condition by comparing the characteristics of the tool to those of known conditions in the training set. The experimental results show high performance when using the infrared data in comparison to visual images for the selected image and signal processing algorithms.
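    The study’s software is not reproduced here; the sketch below is a minimal stand-in for the described pipeline, computing DWT features with PyWavelets, compressing them with scikit-learn’s PCA, and classifying tool condition with a small neural network. The wavelet, decomposition level, component count, and synthetic "infrared frames" are all assumptions made so the example runs end to end.

        # DWT features -> PCA compression -> neural-network classifier.
        import numpy as np
        import pywt
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(1)

        def dwt_features(frame):
            """Level-2 'db2' wavelet decomposition; keep the approximation band."""
            return pywt.wavedec2(frame, "db2", level=2)[0].ravel()

        # Synthetic stand-ins for labelled frames: 0 = normal, 1 = broken tool.
        frames = rng.random((40, 32, 32))
        frames[20:, 8:24, 8:24] += 1.0  # crude "broken tool" signature
        labels = np.repeat([0, 1], 20)

        X = PCA(n_components=8).fit_transform(
            np.array([dwt_features(f) for f in frames]))

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        clf.fit(X, labels)
        print("training accuracy:", clf.score(X, labels))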