
    Recognition of 3D arm movements using neural networks

    Get PDF
    There are many different approaches to the recognition of spatio-temporal patterns, each with its own merits and disadvantages. In this paper we present a neural-network-based approach to spatio-temporal pattern recognition. The effectiveness of this method is evaluated by recognizing 3D arm movements involved in Taiwanese Sign Language (TSL). International conference, 10–16 July 1999, Washington, DC, US (print proceedings).

    Recognition of elementary arm movements using orientation of a tri-axial accelerometer located near the wrist

    No full text
    In this paper we present a method for recognising three fundamental movements of the human arm (reach and retrieve, lift cup to mouth, rotation of the arm) by determining the orientation of a tri-axial accelerometer located near the wrist. Our objective is to detect the occurrence of such movements performed with the impaired arm of a stroke patient during normal daily activities, as a means to assess their rehabilitation. The method relies on accurately mapping transitions between predefined, standard orientations of the accelerometer to corresponding elementary arm movements. To evaluate the technique, kinematic data was collected from four healthy subjects and four stroke patients as they performed a number of activities involved in a representative activity of daily living, 'making-a-cup-of-tea'. Our experimental results show that the proposed method can independently recognise all three of the elementary upper limb movements investigated, with accuracies in the range of 91–99% for healthy subjects and 70–85% for stroke patients.
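The core idea above — estimating the sensor's orientation from the gravity component of a tri-axial accelerometer, quantising it to a small set of standard orientations, and mapping orientation *transitions* to elementary movements — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the transition table and angle step are hypothetical placeholders.

```python
import math

def orientation(ax, ay, az):
    """Estimate pitch and roll (degrees) from a static tri-axial
    accelerometer reading, treating the measured vector as gravity."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def quantize(angle, step=45):
    """Snap an angle to the nearest multiple of `step` degrees,
    giving a small set of standard orientations."""
    return round(angle / step) * step

# Hypothetical lookup table: transitions between standard
# (pitch, roll) orientations mapped to elementary movements.
TRANSITIONS = {
    ((0, 0), (-90, 0)): "lift cup to mouth",
    ((0, 0), (0, 90)): "rotation of the arm",
}

def classify(prev_reading, curr_reading):
    """Classify the movement implied by two accelerometer readings
    (in g) taken before and after the motion."""
    prev = tuple(quantize(a) for a in orientation(*prev_reading))
    curr = tuple(quantize(a) for a in orientation(*curr_reading))
    return TRANSITIONS.get((prev, curr), "unrecognised")
```

In practice the transition table would be built from the predefined standard orientations observed for each movement, and readings would be low-pass filtered first so that the gravity direction dominates.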

    Robustness Evaluation of Machine Learning Models for Robot Arm Action Recognition in Noisy Environments

    Full text link
    In the realm of robot action recognition, identifying distinct but spatially proximate arm movements using vision systems in noisy environments poses a significant challenge. This paper studies robot arm action recognition in noisy environments using machine learning techniques. Specifically, a vision system is used to track the robot's movements, followed by a deep learning model to extract the arm's key points. Through a comparative analysis of machine learning methods, the effectiveness and robustness of this model are assessed in noisy environments. A case study was conducted using the Tic-Tac-Toe game in a 3-by-3 grid environment, where the focus is to accurately identify the actions of the arms in selecting specific locations within this constrained environment. Experimental results show that our approach can achieve precise key point detection and action classification despite the addition of noise and uncertainties to the dataset.
    Comment: Accepted at ICASS
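The final step of such a pipeline — deciding which of the nine grid locations a noisy end-effector key point corresponds to — can be sketched simply. This is a hedged illustration under assumed board geometry (a 300-pixel board), not the paper's model: averaging several detections suppresses zero-mean noise before bucketing into a cell.

```python
import statistics

GRID = 3          # 3-by-3 playing grid
BOARD = 300.0     # hypothetical board size in pixels

def grid_cell(points):
    """Map a sequence of noisy (x, y) key-point detections to a
    (row, col) grid cell by averaging before bucketing."""
    x = statistics.fmean(p[0] for p in points)
    y = statistics.fmean(p[1] for p in points)
    cell_size = BOARD / GRID
    col = min(int(x / cell_size), GRID - 1)
    row = min(int(y / cell_size), GRID - 1)
    return row, col
```

The paper's actual classifiers operate on learned key-point features; this sketch only shows why temporal aggregation makes the grid decision robust to detection noise.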

    Dynamic Calibration of EMG Signals for Control of a Wearable Elbow Brace

    Get PDF
    Musculoskeletal injuries can severely inhibit the performance of activities of daily living, and rehabilitation is often required to regain function. Assistive devices for rehabilitation are one avenue explored to increase arm mobility, by guiding therapeutic exercises or assisting with motion. Electromyography (EMG) signals, which reflect muscle activity, may provide an intuitive interface between the patient and the device if appropriate classification models allow smart systems to relate these signals to the desired device motion. Unfortunately, pattern recognition models that classify motion accurately in constrained laboratory environments suffer large reductions in accuracy when used to detect dynamic, unconstrained movements. An understanding of how combinations of motion factors (limb positions, forces, velocities) in dynamic movements affect EMG, and of ways to use information about these motion factors in control systems, is lacking. The objectives of this thesis were to quantify how various motion factors affect arm muscle activations during dynamic motion, and to use these motion factors and EMG signals to detect interaction forces between the person and the environment during motion. To address these objectives, software was developed and implemented to collect a unique dataset of EMG signals while healthy individuals performed unconstrained arm motions with combinations of arm positions, interaction forces with the environment, velocities, and types of motion. The EMG signals were analysed and used to train classification models predicting characteristics (arm positions, force levels, and velocities) of intended motion. The results quantify how EMG features change significantly with variations in arm positions, interaction forces, and motion velocities.
    The results also show that pattern recognition models, usually used to detect movements, were able to detect intended characteristics of motion from EMG signals alone, even during complex activities of daily living. Arm position during elbow flexion–extension was predicted with 83.02% accuracy by a support vector machine model using EMG signal inputs. Prediction of force, the one motion characteristic that cannot be measured without impeding motion, improved from 76.85% to 79.17% accuracy during elbow flexion–extension when measurable arm position and velocity information was provided as additional input to a linear discriminant analysis model. Force prediction accuracy during an activity of daily living improved by 5.2 percentage points (from 59.38% to 64.58%) when motion speeds were included as an input to a linear discriminant analysis model in addition to EMG signals. Future work should expand on using motion characteristics and EMG signals to identify interactions between a person and the environment, in order to guide high-level tuning of control models working towards controlling wearable elbow braces during dynamic movements.
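The key design choice above — feeding measurable motion factors (arm position, velocity) to the classifier *alongside* EMG features — can be sketched with standard time-domain EMG features. This is an illustrative sketch, not the thesis's feature set or models; the feature definitions (mean absolute value, waveform length, zero crossings) are conventional in EMG pattern recognition.

```python
def emg_features(window, zc_threshold=0.01):
    """Standard time-domain EMG features for one analysis window:
    mean absolute value (MAV), waveform length (WL), and
    zero-crossing count (ZC, with a noise threshold)."""
    n = len(window)
    mav = sum(abs(s) for s in window) / n
    wl = sum(abs(b - a) for a, b in zip(window, window[1:]))
    zc = sum(
        1
        for a, b in zip(window, window[1:])
        if a * b < 0 and abs(a - b) > zc_threshold
    )
    return [mav, wl, zc]

def feature_vector(window, arm_position, velocity):
    """Append the measurable motion factors (arm position in
    degrees, velocity) to the EMG features, giving the augmented
    input that improved force prediction in the thesis."""
    return emg_features(window) + [arm_position, velocity]
```

The augmented vector would then be passed to any classifier (the thesis used linear discriminant analysis and support vector machines); the augmentation itself is just concatenation of known, measurable quantities with the EMG features.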

    SOVEREIGN: An Autonomous Neural System for Incrementally Learning Planned Action Sequences to Navigate Towards a Rewarded Goal

    Full text link
    How do reactive and planned behaviors interact in real time? How are sequences of such behaviors released at appropriate times during autonomous navigation to realize valued goals? Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration, and learned plans to reach goal objects once an environment becomes familiar. The SOVEREIGN (Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation) animat model embodies these capabilities, and is tested in a 3D virtual reality environment. SOVEREIGN includes several interacting subsystems which model complementary properties of cortical What and Where processing streams and which clarify similarities between mechanisms for navigation and arm movement control. As the animat explores an environment, visual inputs are processed by networks that are sensitive to visual form and motion in the What and Where streams, respectively. Position-invariant and size-invariant recognition categories are learned by real-time incremental learning in the What stream. Estimates of target position relative to the animat are computed in the Where stream, and can activate approach movements toward the target. Motion cues from animat locomotion can elicit head-orienting movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement are derived from interacting proprioceptive and visual cues. Movement sequences are stored within a motor working memory. Sequences of visual categories are stored in a sensory working memory. These working memories trigger learning of sensory and motor sequence categories, or plans, which together control planned movements. Predictively effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded.
    Selected planning chunks effect a gradual transition from variable reactive exploratory movements to efficient goal-oriented planned movement sequences. Volitional signals gate interactions between model subsystems and the release of overt behaviors. The model can control different motor sequences under different motivational states, and learns more efficient sequences to rewarded goals as exploration proceeds.
    Riverside Research Institute; Defense Advanced Research Projects Agency (N00014-92-J-4015); Air Force Office of Scientific Research (F49620-92-J-0225); National Science Foundation (IRI 90-24877, SBE-0345378); Office of Naval Research (N00014-92-J-1309, N00014-91-J-4100, N00014-01-1-0624); Pacific Sierra Research (PSR 91-6075-2)

    3D-Printed Hand Controlled by Arm Gestures to Verify the Robustness and Reliability of a Low Cost Surface Electromyography System

    Get PDF
    The study focuses on the development of a low-cost surface electromyography (sEMG) and 3D-printed hand gesture-recognition system. The complete system captures four (4) channels of EMG data through sEMG amplifier circuits interfaced to an Arduino prototyping board. This data is sent to a workstation, where a graphical user interface shows the pre-processed signal. The gestures are used to control the movements of the 3D-printed hand.
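The abstract does not say how the four channels are turned into gestures, so the sketch below shows one common low-cost approach — thresholding each channel's smoothed envelope and looking up the on/off pattern. The thresholds and gesture table are purely hypothetical placeholders, not the study's classifier.

```python
# Hypothetical per-channel activation thresholds for the four sEMG
# channels; in practice these would be calibrated per user.
THRESHOLDS = [0.30, 0.30, 0.25, 0.25]

# Hypothetical mapping from the pattern of active channels to a
# command for the 3D-printed hand.
GESTURES = {
    (1, 0, 0, 0): "close hand",
    (0, 1, 0, 0): "open hand",
    (1, 1, 0, 0): "pinch",
}

def classify_gesture(envelopes):
    """Threshold each channel's smoothed EMG envelope and look up
    the resulting on/off activation pattern; default to rest."""
    pattern = tuple(int(e > t) for e, t in zip(envelopes, THRESHOLDS))
    return GESTURES.get(pattern, "rest")
```

On the real system this decision could run on the workstation or on the Arduino itself; the lookup-table structure is the same either way.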