19 research outputs found

    Wearable vibrotactile stimulation for upper extremity rehabilitation in chronic stroke: clinical feasibility trial using the VTS Glove

    Full text link
    Objective: Evaluate the feasibility and potential impact on hand function of a wearable stimulation device (the VTS Glove), which provides mechanical, vibratory input to the affected limb of chronic stroke survivors. Methods: A double-blind, randomized, controlled feasibility study including sixteen chronic stroke survivors (mean age: 54; 1-13 years post-stroke) with diminished movement and tactile perception in their affected hand. Participants were given a wearable device to take home and asked to wear it for three hours daily over eight weeks. The device was either (1) the VTS Glove, which provided vibrotactile stimulation to the hand, or (2) an identical glove with vibration disabled. Participants were randomly assigned to the two conditions in equal numbers. Hand and arm function were measured weekly at home and in local physical therapy clinics. Results: Participants using the VTS Glove showed significant improvement on the Semmes-Weinstein monofilament exam, reduced Modified Ashworth scores in the fingers, and some increases in voluntary finger flexion and in elbow and shoulder range of motion. Conclusions: Vibrotactile stimulation applied to the affected limb may impact tactile perception, tone and spasticity, and voluntary range of motion. Wearable devices allow extended application and study of stimulation methods outside of a clinical setting.
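    The trial's headline comparison (active VTS Glove vs. vibration-disabled sham) can be illustrated with a short analysis sketch. This is not the authors' analysis: the change scores below are synthetic, and the choice of a Mann-Whitney U test for the small arms is an assumption.

```python
# Illustrative only: synthetic data and an assumed nonparametric test,
# not the published analysis.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical week-8 change scores on the Semmes-Weinstein monofilament
# exam (more negative = greater improvement), 8 participants per arm.
vts_glove = rng.normal(loc=-0.8, scale=0.5, size=8)   # active vibration
sham_glove = rng.normal(loc=-0.1, scale=0.5, size=8)  # vibration disabled

# With 8 participants per arm, a rank-based test avoids normality claims.
stat, p = mannwhitneyu(vts_glove, sham_glove, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
```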

    Activity Recognition of Assembly Tasks Using Body-Worn Microphones and Accelerometers

    Get PDF
    In order to provide relevant information to mobile users, such as workers engaging in the manual tasks of maintenance and assembly, a wearable computer requires information about the user's specific activities. This work focuses on the recognition of activities that are characterized by a hand motion and an accompanying sound. Suitable activities can be found in assembly and maintenance work. Here, we provide an initial exploration into the problem domain of continuous activity recognition using on-body sensing. We use a mock “wood workshop” assembly task to ground our investigation. We describe a method for the continuous recognition of activities (sawing, hammering, filing, drilling, grinding, sanding, opening a drawer, tightening a vise, and turning a screwdriver) using microphones and three-axis accelerometers mounted at two positions on the user's arms. Potentially “interesting” activities are segmented from continuous streams of data using an analysis of the sound intensity detected at the two different locations. Activity classification is then performed on these detected segments using linear discriminant analysis (LDA) on the sound channel and hidden Markov models (HMMs) on the acceleration data. Four different methods of classifier fusion are compared for improving these classifications. Using user-dependent training, we obtain continuous average recall and precision rates (for positive activities) of 78 percent and 74 percent, respectively. Using user-independent training (leave-one-out across five users), we obtain recall rates of 66 percent and precision rates of 63 percent. In isolation, these activities were recognized with accuracies of 98 percent, 87 percent, and 95 percent for the user-dependent, user-independent, and user-adapted cases, respectively.
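    The two-stream pipeline (intensity-based segmentation, LDA on sound, HMMs on acceleration, then fusion) can be sketched as below. The feature choices, thresholds, and use of scikit-learn and hmmlearn are assumptions for illustration, not the authors' implementation; the fusion shown is one simple score-level option among the several the paper compares.

```python
# Sketch under assumptions: scikit-learn/hmmlearn stand in for the
# authors' classifiers; thresholds and features are illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from hmmlearn import hmm

def segment_by_intensity(wrist_audio, arm_audio, frame=512, thresh=0.02):
    """Keep frames where the wrist mic is loud and louder than the
    upper-arm mic, suggesting activity near the hand (assumed heuristic)."""
    n = min(len(wrist_audio), len(arm_audio)) // frame
    kept = []
    for i in range(n):
        w = wrist_audio[i * frame:(i + 1) * frame]
        a = arm_audio[i * frame:(i + 1) * frame]
        if np.mean(w ** 2) > thresh ** 2 and np.mean(w ** 2) > np.mean(a ** 2):
            kept.append(i)
    return kept

# Sound channel: LDA over per-segment spectral features (assumed features).
sound_clf = LinearDiscriminantAnalysis()
# sound_clf.fit(train_sound_features, train_labels)

def make_hmms(train_accel_by_class, n_states=4):
    """Train one Gaussian HMM per activity class on acceleration sequences."""
    models = {}
    for label, sequences in train_accel_by_class.items():
        X = np.vstack(sequences)
        lengths = [len(s) for s in sequences]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
        m.fit(X, lengths)
        models[label] = m
    return models

def classify_segment(sound_feat, accel_seq, models):
    """Score-level fusion, one simple option among several: add LDA
    log-posteriors to (shifted) per-class HMM log-likelihoods."""
    lda_logp = sound_clf.predict_log_proba(sound_feat.reshape(1, -1))[0]
    hmm_logp = np.array([models[k].score(accel_seq) for k in sorted(models)])
    return int(np.argmax(lda_logp + (hmm_logp - hmm_logp.max())))
```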

    Symbiotic interfaces for wearable face recognition

    No full text
    We introduce a wearable face detection method that exploits constraints in face scale and orientation imposed by the proximity of participants in near social interactions. Using this method we describe a wearable system that perceives “social engagement,” i.e., when the wearer begins to interact with other individuals. One possible application is improving the interfaces of portable consumer electronics, such as cellular phones, to avoid interrupting the user during face-to-face interactions. Our experimental system proved >90% accurate when tested on wearable video data captured at a professional conference. Over three hundred individuals were captured, and the data was separated into independent training and test sets. A further goal is to incorporate a user interface into mobile machine recognition systems to improve performance. The user may provide real-time feedback to the system, or may subtly cue the system through typical daily activities, such as turning to face a speaker, as to when conditions for recognition are favorable.
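    The core scale constraint can be sketched with an off-the-shelf detector: in close social interaction the partner's face occupies a predictably large fraction of the frame, so detections outside that range can be rejected. The cascade file and the size bounds below are assumptions, not the paper's detector.

```python
# Sketch of the scale constraint, not the authors' detector: a nearby
# interlocutor's face spans a large, predictable fraction of the frame,
# so out-of-range detections are rejected. Cascade file and bounds are
# assumptions.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_engaged_faces(frame_gray, frame_width):
    # Assume a partner roughly 1-2 m away: face width about 15-45% of
    # the frame. Tight bounds suppress small, distant background faces.
    lo = int(0.15 * frame_width)
    hi = int(0.45 * frame_width)
    return cascade.detectMultiScale(
        frame_gray,
        scaleFactor=1.1,
        minNeighbors=5,
        minSize=(lo, lo),
        maxSize=(hi, hi),
    )
```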

    Learning Visual Models of Social Engagement

    No full text
    We introduce a face detector for wearable computers that exploits constraints in face scale and orientation imposed by the proximity of participants in near social interactions. Using this method we describe a wearable system that perceives “social engagement,” i.e., when the wearer begins to interact with other individuals. Our experimental system proved >90% accurate when tested on wearable video data captured at a professional conference. Over 300 individuals were captured during social engagement, and the data was separated into independent training and test sets. A metric for balancing the performance of face detection, localization, and recognition in the context of a wearable interface is discussed. Recognizing social engagement with a user’s wearable computer provides context data that can be useful in determining when the user is interruptible. In addition, social engagement detection may be incorporated into a user interface to improve the quality of mobile face recognition software. For example, the user may cue the face recognition system in a socially graceful way by turning slightly away and then toward a speaker when conditions for recognition are favorable.
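    The balancing metric is discussed in the paper but its formula is not reproduced here; a hypothetical weighted combination gives the flavor. The weights and component measures below are assumptions, not the published definition.

```python
# Hypothetical combined figure of merit; the weights and formula are
# assumptions, not the metric defined in the paper.
def engagement_score(det_f1, loc_iou, rec_acc, w=(0.4, 0.3, 0.3)):
    """Weighted sum of face-detection F1, mean localization IoU, and
    recognition accuracy, each in [0, 1]."""
    return w[0] * det_f1 + w[1] * loc_iou + w[2] * rec_acc

# Example: strong detection, weaker localization, moderate recognition.
print(engagement_score(det_f1=0.92, loc_iou=0.61, rec_acc=0.78))
```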

    Electronic Communication by Deaf Teenagers

    Get PDF
    We present a qualitative, exploratory study examining the space of electronic communication (e.g. instant messaging, short message service, email) by Deaf teenagers in the greater Atlanta metro area. We answer the basic questions of who, what, where, when, and how to understand Deaf teenagers' use of electronic, mobile communication technologies. Our findings reveal that both Deaf and hearing teens share similar communication goals, such as communicating quickly, effectively, and with a variety of people; distinctions between the two populations emerge from language differences. The teenagers' perspectives allow us to view electronic communication not from a technologist's point of view, but from the use-centric view of teenagers who are indifferent to the underlying infrastructure supporting this communication. This study suggests several unique features of the Deaf teens' communication as well as further research questions and directions for study.