
    A novel video game peripheral for detecting fine hand motion and providing haptic feedback

    Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 51-53). This thesis documents the design and implementation of a game controller glove that employs optical tracking technology to detect movement of the hand and fingers. The vision algorithm captures an image from a webcam in real time and determines the centroids of colored sections on a glove worn by the player, assigning each section a distinctive identifier that is associated with a 3D model retrieved from a preexisting library. A Vivitouch artificial muscle module is also mounted on top of the glove to provide vibratory haptic feedback to the user. The system has been user tested, and a number of potential use scenarios have been conceived for integrating the controller into various gaming applications. By Samantha N. Powers and Lauren K. Gust. S.B.
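    The abstract above describes the core vision step: per-frame centroid extraction of colored glove sections from a webcam image. The sketch below (Python with OpenCV) illustrates that step under stated assumptions; the HSV color ranges, section names, and camera index are placeholders, not values from the thesis.

```python
# A minimal sketch (not the authors' code) of the colored-glove tracking step:
# grab a webcam frame, isolate each colored glove section by HSV threshold,
# and report its centroid under a distinctive identifier.
import cv2
import numpy as np

# Hypothetical color ranges for the glove sections (HSV lower/upper bounds).
GLOVE_SECTIONS = {
    "thumb": ((100, 120, 70), (130, 255, 255)),   # blue patch (assumed)
    "index": ((40, 80, 70), (80, 255, 255)),      # green patch (assumed)
    "palm":  ((0, 120, 70), (10, 255, 255)),      # red patch (assumed)
}

def section_centroids(frame_bgr):
    """Return {section_id: (cx, cy)} for each colored section found in the frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    centroids = {}
    for section_id, (lo, hi) in GLOVE_SECTIONS.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] > 0:                           # section visible in this frame
            centroids[section_id] = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    return centroids

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)                      # default webcam
    ok, frame = cap.read()
    if ok:
        print(section_centroids(frame))            # e.g. {'index': (312.4, 188.9), ...}
    cap.release()
```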

    The Mole: a pressure-sensitive mouse

    The traditional mouse enables the positioning of a cursor in a 2D plane, as well as interaction with binary elements within that plane (e.g., buttons, links, icons). While this basic functionality is sufficient for interacting with every modern computing environment, it makes little use of the human hand's ability to perform complex multi-directional movements. Devices developed to capture these multi-directional capabilities typically lack the familiar form and function of the mouse. This thesis details the design and development of a pressure-sensitive device called The Mole. The Mole retains the familiar form and function of the mouse while passively measuring the magnitude of normal hand force (i.e., downward force normal to the 2D operating surface). The measurement of this force lends itself to the development of novel interactions, far beyond what is possible with a typical mouse. This thesis demonstrates two such interactions: the positioning of a cursor in 3D space, and the simultaneous manipulation of cursor position and graphic tool parameters.
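    As a rough illustration of the first interaction the abstract mentions, the sketch below maps ordinary 2D mouse deltas plus the measured normal hand force onto a 3D cursor position. The force thresholds, gain, and data structures are assumptions for illustration, not The Mole's actual firmware or driver.

```python
# A minimal sketch of the kind of mapping The Mole enables: 2D mouse deltas
# position the cursor in x/y, while the measured normal hand force sets depth (z).
from dataclasses import dataclass

@dataclass
class Cursor3D:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def update_cursor(cursor, dx, dy, force_newtons,
                  rest_force=1.0, max_force=8.0, z_range=(0.0, 1.0)):
    """Move the cursor in-plane by (dx, dy) and set depth from hand force.

    Forces at or below rest_force (the hand simply resting on the device)
    map to the near plane; max_force maps to the far plane. Values assumed.
    """
    cursor.x += dx
    cursor.y += dy
    t = (force_newtons - rest_force) / (max_force - rest_force)
    t = min(max(t, 0.0), 1.0)                      # clamp to [0, 1]
    z_near, z_far = z_range
    cursor.z = z_near + t * (z_far - z_near)
    return cursor

# Example: light resting pressure keeps the cursor on the near plane,
# a firm press pushes it toward the far plane.
c = Cursor3D()
print(update_cursor(c, dx=5, dy=-2, force_newtons=4.5))   # z = 0.5
```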

    Measurement of Three-Dimensional Welding Torch Orientation for Manual Arc Welding Process

    Methods and systems are provided herein for measuring 3D apparatus (e.g., manual tool or tool accessory) orientation. Example implementations use an auto-nulling algorithm that incorporates a quaternion-based unscented Kalman filter. Example implementations use a miniature inertial measurement unit endowed with a tri-axis gyro and a tri-axis accelerometer. The auto-nulling algorithm serves as an in-line calibration procedure to compensate for gyro drift, which has been verified to significantly improve estimation accuracy in three dimensions, especially in the heading estimation.
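    The sketch below illustrates the auto-nulling idea in isolation: while the IMU appears stationary, the tri-axis gyro output is folded into a running bias estimate, and bias-corrected rates are integrated into an orientation quaternion. It is not the quaternion-based unscented Kalman filter of the described system; the stillness thresholds and smoothing gain are assumptions.

```python
# A minimal sketch of gyro auto-nulling (in-line bias calibration) followed by
# first-order quaternion integration of the bias-corrected body rates.
import numpy as np

class AutoNullingGyro:
    def __init__(self, still_thresh=0.02, alpha=0.01):
        self.bias = np.zeros(3)                    # rad/s, current gyro bias estimate
        self.still_thresh = still_thresh           # "stationary" rate threshold (assumed)
        self.alpha = alpha                         # smoothing gain for bias updates (assumed)
        self.q = np.array([1.0, 0.0, 0.0, 0.0])    # orientation quaternion (w, x, y, z)

    def update(self, gyro, accel, dt):
        # Auto-nulling: refresh the bias only while the unit looks stationary
        # (rotation rate small and specific force close to 1 g).
        if (np.linalg.norm(gyro - self.bias) < self.still_thresh
                and abs(np.linalg.norm(accel) - 9.81) < 0.3):
            self.bias = (1 - self.alpha) * self.bias + self.alpha * gyro

        w = gyro - self.bias                       # bias-corrected body rate
        # First-order integration: q <- q + 0.5 * q * (0, w) * dt
        qw, qx, qy, qz = self.q
        wx, wy, wz = w
        dq = 0.5 * np.array([
            -qx * wx - qy * wy - qz * wz,
             qw * wx + qy * wz - qz * wy,
             qw * wy - qx * wz + qz * wx,
             qw * wz + qx * wy - qy * wx,
        ])
        self.q = self.q + dq * dt
        self.q /= np.linalg.norm(self.q)           # renormalize to unit length
        return self.q
```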

    Human factors in instructional augmented reality for intravehicular spaceflight activities and How gravity influences the setup of interfaces operated by direct object selection

    In human spaceflight, advanced user interfaces are becoming a promising means of facilitating human-machine interaction, enhancing and safeguarding the sequences of intravehicular space operations. Efforts to ease such operations have shown strong interest in novel forms of human-computer interaction such as Augmented Reality (AR). The work presented in this thesis is directed towards a user-driven design for AR-assisted space operations, iteratively solving issues arising from the problem space, which also includes consideration of the effect of altered gravity on handling such interfaces.

    Passive and active assistive writing devices in suppressing hand tremor

    Patients with hand tremor frequently experience difficulties in performing their daily tasks, especially handwriting. To avoid medication and surgical intervention, a non-invasive solution was presented to improve their writing capabilities. In this study, two novel hand-held devices were invented: TREMORX and the Active Assistive Writing Device (AAWD), based on passive and active elements respectively. For validation, a patient with tremor was assisted in using a normal pen and TREMORX to perform a handwriting task in sitting and standing postures. For the AAWD, the active suppressing element was a servo motor that controlled the hand tremor acting on the writing tool tip, and an accelerometer measured the parameter values needed for the feedback control signal. A classic Proportional (P) controller and a Proportional-Integral-Derivative (PID) controller were presented. The P controller was tuned with a meta-heuristic method by adjusting its parameters over several values to examine the response and robustness of the controller in suppressing the tremor. The evaluation was based on decreasing the coherence magnitude in the frequency response analysis. To optimise performance, two types of Evolutionary Algorithms (EA) were employed: the Genetic Algorithm (GA) and Particle Swarm Optimisation (PSO). These optimisation techniques were integrated into the PID controller system to generate optimum performance in controlling the tremor. For the simulation study, a parametric model representing the actual AAWD system was presented. The main objective of this analysis was to determine the optimum values of the PID parameters based on the EA optimisation techniques. The parameters determined by both optimisations were then injected into the experimental environment to test and evaluate the performance of the controllers. The findings of the study show that the PID controller under both EA optimisations provided excellent performance in suppressing the tremor signal acting on the AAWD in comparison to the classic pure P controller. Based on the fitness evaluation, GA optimisation enhanced the PID controller performance significantly more than PSO optimisation. The handwriting performance using both TREMORX and the AAWD was recorded, and visual inspection showed that legibility improved compared with normal handwriting devices. These outcomes provide an important contribution towards novel methods of suppressing hand tremor by means of handheld writing devices incorporating intelligent control techniques.
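    The sketch below shows the basic control loop the abstract describes: a discrete PID controller that turns the accelerometer-measured tip acceleration into a servo command, with a setpoint of zero (no tremor). The gains shown are placeholders; in the study they are the quantities tuned by GA and PSO against a fitness measure such as the coherence magnitude.

```python
# A minimal sketch (not the thesis implementation) of a discrete PID controller
# driving a servo command from the accelerometer signal at the writing tool tip.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        """Return the actuator command for one sample period."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example loop: the setpoint is zero tip acceleration (no tremor); each sample
# the accelerometer reading is fed back and a servo command is produced.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.001)       # illustrative gains, not tuned values
for accel_reading in [0.8, 0.6, 0.3, -0.2]:        # fake tremor samples (m/s^2)
    servo_cmd = pid.step(setpoint=0.0, measurement=accel_reading)
```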

    Arm-Hand-Finger Video Game Interaction

    Despite the growing popularity and expansion of video game interaction techniques and research in the area of hand gesture recognition, the application of hand gesture video game interaction using arm, hand, and finger motion has not been extensively explored. Most current gesture-based approaches to video game interaction neglect the use of the fingers for interaction, but including the fingers allows for more natural and unique interaction and merits further research. To implement arm, hand, and finger-based interaction for the video game domain, several problems must be solved, including gesture recognition, segmentation, hand visualization, and video game interaction that responds to arm, hand, and finger input. Solutions to each of these problems have been implemented. The potential of this interaction style is illustrated through the introduction of an arm, hand, and finger controlled video game system that responds to players' hand gestures. It includes a finger-gesture recognizer as well as a video game system employing various interaction styles, consisting of a first-person shooter game, a driving game, and a menu interaction system. Several users interacted with and played these games, and this form of interaction proved especially suitable for real-time interaction in first-person games. This is perhaps the first implementation of its kind for video game interaction. Based on test results, arm, hand, and finger interaction is a viable form of interaction that deserves further research. This implementation bridges the gap between existing gesture interaction methods and more advanced virtual reality techniques. It successfully combines the solutions to each problem mentioned above into a single, working video game system. This type of interaction has proved more intuitive than existing gesture controls in many situations and is also less complex to implement than a full virtual reality setup. It allows more control by using the hands' natural motion and allows each hand to interact independently. It can also be reliably implemented using today's technology. This implementation is a base system that can be greatly expanded on, and many possibilities for future work can be applied to this form of interaction.
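    As a minimal illustration of the interaction layer described above, the sketch below maps per-hand gesture labels onto game commands for the different game modes, with each hand handled independently. The gesture names, game modes, and actions are hypothetical examples, not the system's actual vocabulary.

```python
# A minimal sketch of dispatching recognized per-hand gestures to game actions.
GESTURE_ACTIONS = {
    "first_person_shooter": {
        "index_point": "aim",
        "trigger_curl": "fire",
        "open_palm": "reload",
    },
    "driving": {
        "fist": "brake",
        "open_palm": "accelerate",
        "tilt_left": "steer_left",
        "tilt_right": "steer_right",
    },
}

def dispatch(game_mode, left_gesture, right_gesture):
    """Translate per-hand gesture labels into game actions for one frame."""
    table = GESTURE_ACTIONS.get(game_mode, {})
    return {
        "left": table.get(left_gesture),     # None if nothing recognized this frame
        "right": table.get(right_gesture),
    }

# e.g. the right hand aims while the left hand reloads in the shooter mode
print(dispatch("first_person_shooter", "open_palm", "index_point"))
```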

    Architectures for Real-Time Automatic Sign Language Recognition on Resource-Constrained Device

    Powerful, handheld computing devices have proliferated among consumers in recent years. Combined with new cameras and sensors capable of detecting objects in three-dimensional space, new gesture-based paradigms of human-computer interaction are becoming available. One possible application of these developments is an automated sign language recognition system. This thesis reviews the existing body of work regarding computer recognition of sign language gestures, as well as the design of systems for speech recognition, a similar problem. Little work has been done to apply the well-known architectural patterns of speech recognition systems to the domain of sign language recognition. This work creates a functional prototype of such a system, applying three architectures seen in speech recognition systems and using a hidden Markov model classifier with 75-90% accuracy. A thorough search of the literature indicates that no cloud-based system has yet been created for sign language recognition, and this is the first implementation of its kind. Accordingly, there have been no empirical performance analyses of a cloud-based Automatic Sign Language Recognition (ASLR) system, which this research provides. The performance impact of each architecture, as well as of the data interchange format, is then measured in terms of response time, CPU, memory, and network usage across an increasing vocabulary of sign language gestures. The results discussed herein suggest that a partially offloaded client-server architecture, where feature extraction occurs on the client device and classification occurs in the cloud, is the ideal selection for all but the smallest vocabularies. Additionally, the results indicate that for the potentially large data sets transmitted for 3D gesture classification, a fast binary interchange protocol such as Protobuf has vastly superior performance to a text-based protocol such as JSON.
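    The sketch below illustrates the partially offloaded split the abstract recommends: the client reduces a window of 3D joint positions to a small feature vector, compares a compact binary encoding of it against JSON, and would send only that payload to a cloud classifier. The feature definition, struct-based packing (standing in for a Protobuf schema), and endpoint URL are assumptions for illustration.

```python
# A minimal sketch of client-side feature extraction plus a binary-vs-JSON
# payload comparison for a cloud-hosted sign language classifier.
import json
import struct
import urllib.request

import numpy as np

def extract_features(joint_frames):
    """Client-side feature extraction: per-joint mean and variance over a
    window of (num_frames, num_joints, 3) positions. Illustrative features only."""
    arr = np.asarray(joint_frames, dtype=np.float32)
    return np.concatenate([arr.mean(axis=0).ravel(), arr.var(axis=0).ravel()])

def pack_binary(feats):
    """Compact binary encoding: a uint32 count followed by float32 features."""
    return struct.pack(f"<I{len(feats)}f", len(feats), *feats)

def pack_json(feats):
    return json.dumps({"features": feats.tolist()}).encode("utf-8")

frames = np.random.rand(30, 20, 3)                 # fake 30-frame, 20-joint window
feats = extract_features(frames)
binary_payload = pack_binary(feats)
json_payload = pack_json(feats)
print(len(binary_payload), "bytes binary vs", len(json_payload), "bytes JSON")

# Classification is offloaded: only the small feature payload crosses the network.
# (Hypothetical endpoint; the thesis's actual service and schema are not shown.)
req = urllib.request.Request("https://aslr.example.com/classify",
                             data=binary_payload,
                             headers={"Content-Type": "application/octet-stream"})
# label = urllib.request.urlopen(req).read()       # uncomment against a real service
```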