    Visual recognition of American sign language using hidden Markov models

    Thesis (M.S.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1995. Includes bibliographical references (leaves 48-52). By Thad Eugene Starner.

    Calibration-Free Estimation of User-Specific Bending of a Head-Mountable Device

    Disclosed is an approach for determining camera rotation relative to an individual’s eye, which could then be used for applications such as calibration-free estimation of user-specific bending of a device (e.g., a head-mountable device (HMD)), among others. Due to biological conditions, an average gaze vector of an eye most often corresponds to a straight-ahead gaze by an individual. Therefore, the average gaze vector is often known with respect to a coordinate system of an individual’s eye. Additionally, when the HMD is worn by a user, the HMD uses an eye-facing camera to determine the average gaze vector of the user’s eye with respect to the eye-facing camera’s coordinate system. Given information about the average gaze vector with respect to multiple coordinate systems, the HMD uses this information as a basis for determining an orientation of the eye-facing camera when the HMD is worn by the user. By then comparing the determined orientation to a known orientation of the eye-facing camera when the HMD is unworn or otherwise not bent, the HMD determines the rotation of the eye-facing camera in three-dimensional space, which corresponds to the extent to which the HMD (e.g., the HMD’s frame) has bent from an unworn position to a worn position.
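
    As a rough illustration of the geometry involved, and not the disclosure's actual method, the following numpy sketch recovers the shortest-arc rotation that carries the factory-calibrated (unbent) gaze direction onto the average gaze vector measured while the HMD is worn. The gaze values are made-up examples, and a single vector pair cannot observe twist about the gaze axis, so this recovers only part of the full three-dimensional rotation described above.

        import numpy as np

        def shortest_arc_rotation(v_from, v_to):
            """Rotation matrix taking unit vector v_from onto v_to along the
            shortest arc (Rodrigues' formula); twist about the gaze axis is
            unobservable from a single vector pair and is implicitly zero."""
            a = v_from / np.linalg.norm(v_from)
            b = v_to / np.linalg.norm(v_to)
            v = np.cross(a, b)            # rotation axis scaled by sin(theta)
            c = float(np.dot(a, b))       # cos(theta)
            if np.isclose(c, -1.0):       # opposed vectors: no unique shortest arc
                raise ValueError("gaze vectors are antiparallel")
            K = np.array([[0.0, -v[2], v[1]],
                          [v[2], 0.0, -v[0]],
                          [-v[1], v[0], 0.0]])
            return np.eye(3) + K + (K @ K) / (1.0 + c)

        # Hypothetical values: average gaze in the eye-facing camera's frame while
        # worn, versus the straight-ahead direction expected for an unbent frame.
        gaze_worn = np.array([0.05, -0.02, 0.998])
        gaze_unbent = np.array([0.0, 0.0, 1.0])
        R_bend = shortest_arc_rotation(gaze_unbent, gaze_worn)
        cos_bend = np.clip((np.trace(R_bend) - 1.0) / 2.0, -1.0, 1.0)
        print(f"estimated frame bend: {np.degrees(np.arccos(cos_bend)):.2f} degrees")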

    Wearable computing and contextual awareness

    Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 1999. Includes bibliographical references (leaves 231-248).
    Computer hardware continues to shrink in size and increase in capability. This trend has allowed the prevailing concept of a computer to evolve from the mainframe to the minicomputer to the desktop. Just as the physical hardware changes, so does the use of the technology, tending towards more interactive and personal systems. Currently, another physical change is underway, placing computational power on the user's body. These wearable machines encourage new applications that were formerly infeasible and, correspondingly, will result in new usage patterns. This thesis suggests that the fundamental improvement offered by wearable computing is an increased sense of user context. I hypothesize that on-body systems can sense the user's context with little or no assistance from environmental infrastructure. These body-centered systems that "see" as the user sees and "hear" as the user hears provide a unique "first-person" viewpoint of the user's environment. By exploiting models recovered by these systems, interfaces are created which require minimal directed action or attention by the user. In addition, more traditional applications are augmented by the contextual information recovered by these systems.
    To investigate these issues, I provide perceptually sensible tools for recovering and modeling user context in a mobile, everyday environment. These tools include a downward-facing, camera-based system for establishing the location of the user; a tag-based object recognition system for augmented reality; and several on-body gesture recognition systems to identify various user tasks in constrained environments. To address the practicality of contextually-aware wearable computers, issues of power recovery, heat dissipation, and weight distribution are examined. In addition, I have encouraged a community of wearable computer users at the Media Lab through design, management, and support of hardware and software infrastructure. This unique community provides a heightened awareness of the use and social issues of wearable computing. As much as possible, the lessons from this experience will be conveyed in the thesis.
    By Thad Eugene Starner.
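
    Several of the gesture recognition systems mentioned here, like the ASL recognizer above, model gestures with hidden Markov models. As a generic illustration of that pattern rather than the thesis code, the sketch below trains one Gaussian HMM per gesture class and labels a new feature sequence by maximum log-likelihood; it uses the third-party hmmlearn package and synthetic stand-in features.

        import numpy as np
        from hmmlearn import hmm  # third-party package: pip install hmmlearn

        def train_gesture_models(sequences_by_label, n_states=4):
            """Fit one Gaussian HMM per gesture class; sequences_by_label maps
            label -> list of (T, D) feature sequences, e.g., hand positions
            tracked by a body-mounted camera."""
            models = {}
            for label, seqs in sequences_by_label.items():
                m = hmm.GaussianHMM(n_components=n_states,
                                    covariance_type="diag", n_iter=25)
                m.fit(np.vstack(seqs), lengths=[len(s) for s in seqs])
                models[label] = m
            return models

        def classify(models, seq):
            """Label a sequence by the class HMM with the highest log-likelihood."""
            return max(models, key=lambda label: models[label].score(seq))

        # Synthetic stand-in data: two "gestures" with different feature statistics.
        rng = np.random.default_rng(0)
        demo = {
            "wave":  [rng.normal(0.0, 1.0, size=(30, 3)) for _ in range(5)],
            "point": [rng.normal(2.0, 1.0, size=(30, 3)) for _ in range(5)],
        }
        models = train_gesture_models(demo)
        print(classify(models, rng.normal(2.0, 1.0, size=(30, 3))))  # expect "point"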

    Self-managed Speech Therapy

    Speech defects are typically addressed by having the patient or learner undergo several sessions with speech therapists, who apply specialized therapeutic tools. Speech therapies tend to be expensive, require the scheduling of appointments, and do not lend themselves easily to self-paced self-improvement. This disclosure presents techniques that automatically provide speech-improvement feedback, thereby enabling self-managed speech therapy. Given a speech utterance by a user, the techniques cause display of a sequence of images of speech-organ positions, e.g., tongue, lips, throat muscles, etc., that correspond to the actual utterance as well as to a targeted, ideal utterance. Further phonetic feedback is provided to the user via visual, tactile, spectrogram, or other modes, such that a speaker who is hard of hearing can work towards a target pronunciation. The techniques also apply to foreign language learning.
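
    A minimal sketch of the comparison step described above, under assumed details: a phoneme recognizer (not shown) transcribes the learner's utterance, the transcript is aligned against the target phoneme sequence, and each mismatch is mapped to a stored image of the correct speech-organ position. The phoneme labels and image filenames are hypothetical placeholders.

        from difflib import SequenceMatcher

        # Hypothetical phoneme-to-image mapping; a real system would cover the
        # full phone inventory of the target language.
        ARTICULATOR_IMAGES = {
            "TH": "tongue_between_teeth.png",
            "S":  "tongue_near_alveolar_ridge.png",
            "R":  "tongue_curled_back.png",
        }

        def pronunciation_feedback(actual, target):
            """Align recognized phonemes against the target pronunciation and
            return, per mismatch, the articulator image the learner should copy."""
            feedback = []
            matcher = SequenceMatcher(a=actual, b=target, autojunk=False)
            for op, _, _, b0, b1 in matcher.get_opcodes():
                if op != "equal":
                    for ph in target[b0:b1]:
                        feedback.append((ph, ARTICULATOR_IMAGES.get(ph, "unknown.png")))
            return feedback

        # Learner says "sink" (S IH NG K) while aiming for "think" (TH IH NG K).
        print(pronunciation_feedback(["S", "IH", "NG", "K"], ["TH", "IH", "NG", "K"]))
        # -> [('TH', 'tongue_between_teeth.png')]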

    GART: The Gesture and Activity Recognition Toolkit

    Presented at the 12th International Conference on Human-Computer Interaction, Beijing, China, July 2007. The original publication is available at www.springerlink.com. The Gesture and Activity Recognition Toolkit (GART) is a user interface toolkit designed to enable the development of gesture-based applications. GART provides an abstraction to machine learning algorithms suitable for modeling and recognizing different types of gestures. The toolkit also provides support for the data collection and the training process. In this paper, we present GART and its machine learning abstractions. Furthermore, we detail the components of the toolkit and present two example gesture recognition applications.
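
    The abstract describes GART's abstractions rather than its exact API, so the sketch below uses hypothetical names to show the shape of workflow such a toolkit hides from the application: add labeled sensor sequences, train, then recognize. scikit-learn stands in for the underlying machine learning layer.

        import numpy as np
        from sklearn.svm import SVC

        class GestureRecognizer:
            """Hypothetical GART-style facade: the application adds labeled
            examples, calls train(), and queries recognize(); the learner and
            the feature extraction stay hidden behind this interface."""

            def __init__(self):
                self.X, self.y = [], []
                self.clf = SVC(kernel="rbf")

            @staticmethod
            def _features(seq):
                # Collapse a variable-length (T, D) sequence to fixed statistics.
                seq = np.asarray(seq, dtype=float)
                return np.concatenate([seq.mean(axis=0), seq.std(axis=0)])

            def add_example(self, label, seq):
                self.X.append(self._features(seq))
                self.y.append(label)

            def train(self):
                self.clf.fit(np.vstack(self.X), self.y)

            def recognize(self, seq):
                return self.clf.predict(self._features(seq)[None, :])[0]

        # Usage with fabricated accelerometer-like sequences:
        rec, rng = GestureRecognizer(), np.random.default_rng(0)
        for label, mean in [("shake", 0.0), ("nod", 2.0)]:
            for _ in range(4):
                rec.add_example(label, rng.normal(mean, 1.0, size=(40, 3)))
        rec.train()
        print(rec.recognize(rng.normal(2.0, 1.0, size=(40, 3))))  # expect "nod"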

    Wearable vibrotactile stimulation for upper extremity rehabilitation in chronic stroke: clinical feasibility trial using the VTS Glove

    Objective: Evaluate the feasibility and potential impact on hand function of a wearable stimulation device (the VTS Glove), which provides mechanical, vibratory input to the affected limb of chronic stroke survivors.
    Methods: A double-blind, randomized, controlled feasibility study including sixteen chronic stroke survivors (mean age: 54; 1-13 years post-stroke) with diminished movement and tactile perception in their affected hand. Participants were given a wearable device to take home and asked to wear it for three hours daily over eight weeks. The device was either (1) the VTS Glove, which provided vibrotactile stimulation to the hand, or (2) an identical glove with vibration disabled. Participants were randomly assigned to the two conditions in equal numbers. Hand and arm function were measured weekly at home and in local physical therapy clinics.
    Results: Participants using the VTS Glove showed significantly improved Semmes-Weinstein monofilament exam results, reduced Modified Ashworth measures in the fingers, and some increase in voluntary finger flexion and in elbow and shoulder range of motion.
    Conclusions: Vibrotactile stimulation applied to the affected limb may impact tactile perception, tone and spasticity, and voluntary range of motion. Wearable devices allow extended application and study of stimulation methods outside of a clinical setting.

    Automated data gathering and training tool for personalized "Itchy Nose"

    In "Itchy Nose" we proposed a sensing technique for detecting finger movements on the nose for supporting subtle and discreet interaction. It uses the electrooculography sensors embedded in the frame of a pair of eyeglasses for data gathering and uses machine-learning technique to classify different gestures. Here we further propose an automated training and visualization tool for its classifier. This tool guides the user to make the gesture in proper timing and records the sensor data. It automatically picks the ground truth and trains a machine-learning classifier with it. With this tool, we can quickly create trained classifier that is personalized for the user and test various gestures.Postprin