
    A survey of haptics in serious gaming

    Serious gaming often requires a high level of realism for training and learning purposes. Haptic technology has proved useful in many applications, adding a perception modality complementary to audio and vision. It provides a novel user experience that enhances the immersion of virtual reality with a physical control layer. This survey focuses on haptic technology and its applications in serious gaming. Several categories of related applications are listed and discussed in detail, primarily where haptics acts as a cognitive aid or as a main component of serious game design. We categorize haptic devices into tactile, force-feedback, and hybrid ones to suit different haptic interfaces, followed by a description of common haptic gadgets in gaming. Haptic modeling methods, in particular available SDKs and libraries for commercial or academic use, are summarized. We also analyze existing research difficulties and technological bottlenecks in haptics and outline future research directions.

    A Wearable Control Interface for Tele-operated Robots

    Department of Mechanical Engineering. This thesis presents a wearable control interface for the intuitive control of tele-operated robots, which aims to overcome the limitations of conventional uni-directional control interfaces. The control interface is composed of a haptic control interface and a tele-operated display system. The haptic control interface can measure the user's motion while providing force feedback. Thus, the user can control a tele-operated robot arm by moving his/her arm into desired configurations while feeling the interaction forces between the robot and the environment. Immersive visual feedback is provided to the user through the tele-operated display system and a predictive display algorithm. An exoskeleton structure was designed as a candidate control interface structure, considering the workspace and anatomy of the human arm to ensure natural movement. The translational motion of the human shoulder joint and the singularity problem of exoskeleton structures were addressed by a tilted and vertically translating shoulder joint. The proposed design was analyzed using forward and inverse kinematics methods. Because shoulder elevation affects all of the joint angles, the angles were calculated by applying an inverse kinematics method iteratively. The proposed design was tested in experiments with a kinematic prototype. Two force-controllable cable-driven actuation mechanisms were developed for the actuation of haptic control interfaces. The mechanisms were designed to be lightweight and compact for high haptic transparency. One mechanism is an asymmetric cable-driven mechanism that simplifies the cable routing structure by replacing one tendon with a linear spring, which acts as an antagonistic force source to the other tendon.
High-performance force control was achieved by a rotary series elastic mechanism and a robust controller, which combines a proportional-differential (PD) controller optimized by a linear quadratic (LQ) method with a disturbance observer (DOB) and a zero phase error tracking (ZPET) feedforward filter. The other actuation mechanism is a series elastic tendon-sheath actuation mechanism. Unlike previously developed tendon-sheath actuation systems, the proposed mechanism can deliver the desired force even in multi-DOF systems by modeling the friction and compensating for it in a feedforward manner. The pretension change, which can be a significant threat to the safety of tendon-sheath actuation systems, is reduced by adopting series elastic elements on the motor side. Prototypes of the haptic control interfaces were developed with the proposed actuation mechanisms and tested in interaction with a virtual environment and in a tele-operation experiment. A visual feedback system was also developed by adding a head-mounted display (HMD) to the control interface. Inspired by a kinematic model of the human head-neck complex, a robot neck-camera system was built to capture the field of view in a desired orientation. To reduce the sickness caused by the time-varying bidirectional communication delay and the operation delay of the robot neck, a predictive display algorithm was developed based on the kinematic models of the human and robot neck-camera systems and the geometrical model of a camera. The performance of the developed system was tested in experiments with intentional delays.
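The controller structure described above combines PD feedback with a disturbance observer that cancels unmodeled torques. A minimal discrete-time sketch of that idea, not the thesis's actual controller: the gains, nominal inertia, and first-order DOB filter below are illustrative assumptions.

```python
class PDWithDOB:
    """PD force controller with a simple first-order disturbance observer."""

    def __init__(self, kp, kd, inertia, dob_gain, dt):
        self.kp, self.kd = kp, kd
        self.j = inertia        # nominal plant inertia (assumed model)
        self.g = dob_gain       # DOB low-pass filter gain, 0 < g < 1
        self.dt = dt
        self.prev_err = 0.0
        self.prev_vel = 0.0
        self.d_hat = 0.0        # current disturbance estimate

    def step(self, desired, measured, vel):
        # PD feedback on the force/torque error
        err = desired - measured
        u_fb = self.kp * err + self.kd * (err - self.prev_err) / self.dt
        self.prev_err = err
        # DOB: compare the commanded input with what the nominal
        # (inertia-only) model would need for the observed acceleration;
        # the low-pass-filtered residual is the disturbance estimate
        accel = (vel - self.prev_vel) / self.dt
        self.prev_vel = vel
        d_raw = (u_fb - self.d_hat) - self.j * accel
        self.d_hat += self.g * (d_raw - self.d_hat)
        # subtract the estimated disturbance from the command
        return u_fb - self.d_hat
```

The thesis additionally uses LQ-optimized gains and a ZPET feedforward filter; those are omitted here to keep the structure of the feedback path visible.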

    Wearable Hand Exoskeleton Systems for Virtual Reality and Rehabilitation

    Department of Mechanical Engineering. This thesis presents wearable hand exoskeleton systems that aim to overcome the limitations of conventional systems in terms of both wearability and portability. Because the hand receives diverse physical information and manipulates different types of objects, conventional systems contain many sensors and actuators and are both large and heavy. Thus, hand exoskeleton systems exhibiting high wearability and portability while measuring finger motions and delivering forces would be highly valuable. For VR, a wearable hand exoskeleton system with force-controllable actuator modules was developed to ensure free finger motion and force-mode control. The linkage structure ensures motion with three degrees of freedom (DOF) and provides a large fingertip workspace, so that the finger postures assumed when interacting with objects are appropriate. A series elastic actuator (SEA) comprising an actuator and an elastic element was used to fabricate compact actuator modules. Actuator friction was eliminated using a friction compensation algorithm. A proportional-differential (PD) controller, optimized by a linear quadratic (LQ) method and featuring a disturbance observer (DOB), was used to ensure accurate force-mode control even during motion. The force control performance of the actuator module was verified in force generation experiments including stationary and arbitrary end-effector motions. The forces applied to the fingertips, the principal parts of the hand that interact with objects, were kinematically analyzed via both simulations and experiments. To overcome the weak points of the previous system, a wearable hand exoskeleton system featuring finger motion measurement and force feedback was developed and evaluated in terms of user experience (UX). The finger structures for the thumb, index, and middle fingers, which play important roles when grasping objects, support the full range of motion (ROM).
The system estimates all joint angles of these three digits using a dedicated algorithm; measurement accuracy was experimentally evaluated to verify system performance. The UX performance was evaluated by 15 undergraduate students who completed questionnaires assessing usability and utilitarian value following trials conducted in the laboratory. All subjects were highly satisfied with both the usability and the utilitarian nature of the system, not only because control and feedback were intuitive but also because performance was accurate. For rehabilitation, a highly portable exoskeleton providing flexion/extension finger exercises was developed. The exoskeleton features two four-bar linkages reflecting the natural metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joint angles. During optimization, the design parameters were adjusted to reflect normal finger trajectories, which vary with finger length and finger joint ROM. To allow for passive physical impedance, a spring was installed to generate the forces that guide the fingers. The moments transmitted to the MCP and PIP joints were estimated via finite element method (FEM) analysis, and the cross-sectional areas of the links were manually designed with reference to the expected joint moments. Finger motion and force distribution experiments verified that the system guided the fingers effectively, allowed for the desired finger motions, and distributed the required moments to the joints (as revealed by FEM analysis). In summary, this thesis reports the development of hand exoskeleton systems for use in virtual reality (VR) environments and for hand rehabilitation.
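A series elastic actuator of the kind used above infers output force from the deflection of its elastic element, and the abstract also mentions feedforward friction compensation. A minimal sketch of both ideas; the function names, spring stiffness, and friction coefficients are illustrative assumptions, not the thesis's design values.

```python
def sea_force(theta_motor, theta_joint, k_spring=2.5):
    """Torque transmitted through the series spring (N*m).

    In an SEA, output torque = spring stiffness * deflection, so force
    sensing reduces to measuring motor- and joint-side positions.
    k_spring is an assumed stiffness in N*m/rad.
    """
    return k_spring * (theta_motor - theta_joint)


def friction_compensated_command(torque_desired, motor_vel,
                                 coulomb=0.05, viscous=0.01):
    """Add a feedforward Coulomb + viscous friction term to the command.

    The friction model and its coefficients are placeholders; in
    practice they would be identified experimentally.
    """
    direction = (motor_vel > 0) - (motor_vel < 0)  # sign of velocity
    return torque_desired + coulomb * direction + viscous * motor_vel
```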

    Touching on elements for a non-invasive sensory feedback system for use in a prosthetic hand

    Hand amputation results in the loss of motor and sensory functions, impacting activities of daily life and quality of life. Commercially available prosthetic hands restore motor function but lack sensory feedback, which is crucial for receiving real-time information about the state of the prosthesis when interacting with the external environment. As a supplement to the missing sensory feedback, the amputee needs to rely on visual and audio cues to operate the prosthetic hand, which can be mentally demanding. This thesis revolves around finding potential solutions contributing to an intuitive non-invasive sensory feedback system that could be cognitively less burdensome and enhance the sense of embodiment (the feeling that an artificial limb belongs to one's own body), increasing acceptance of wearing a prosthesis. A sensory feedback system contains sensors to detect signals applied to the prosthesis. The signals are encoded via signal processing to resemble the detected sensation, which is then delivered by actuators on the skin. Implementing commercial sensors in a prosthetic finger is challenging: because of the prosthetic finger's curvature, and because some prosthetic hands use a covering rubber glove, the sensor response would be inaccurate. This thesis shows that a pneumatic touch sensor integrated into a rubber glove eliminates these errors. The sensor provides a consistent reading independent of the incident angle of the stimulus, with a sensitivity of 0.82 kPa/N, a hysteresis error of 2.39±0.17%, and a linearity error of 2.95±0.40%. For intuitive tactile stimulation, it has been suggested that the feedback stimulus should be modality-matched, with the intention of providing a sensation easily associated with real touch on the prosthetic hand; e.g., pressure on the prosthetic finger should produce pressure on the residual limb. A stimulus should also be spatially matched (e.g., in position, size, and shape).
Electrotactile stimulation can produce varied sensations because it has several adjustable parameters, making it a good candidate for texture discrimination. A microphone can detect texture-elicited vibrations to be processed, and by varying, e.g., the median frequency of the electrical stimulation, the signal can be presented on the skin. Participants in a study using electrotactile feedback showed a median accuracy of 85% in differentiating between four textures. During active exploration, electrotactile and vibrotactile feedback provide spatially matched modality stimulations with continuous feedback, though the sensation may be displaced or dispersed over a larger area. When commonly used stimulation modalities were evaluated using the Rubber Hand Illusion, modalities resembling the intended sensation provided a more vivid illusion of ownership of the rubber hand. For potentially more intuitive sensory feedback, the stimulation can be somatotopically matched, where the stimulus is experienced as being applied at a site corresponding to the missing hand. This is possible for amputees who experience referred sensation on their residual stump; however, not all amputees do. Nonetheless, after a structured training period, it is possible to learn to associate touch with specific fingers, and the effect persisted after two weeks. This effect was evaluated on participants with intact limbs, so it remains to be evaluated for amputees. In conclusion, this thesis proposes sensory feedback systems that could be helpful in future prosthetic hands to (1) reduce their complexity and (2) enhance the sense of body ownership, strengthening the overall sense of embodiment as an addition to an intuitive control system.
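The pneumatic sensor characteristic reported above (0.82 kPa/N sensitivity) is linear, so contact force can be recovered by inverting it. A minimal sketch assuming a pure linear model with zero offset; the quoted hysteresis and linearity errors (around 2-3%) are ignored here, and the function names are illustrative.

```python
SENSITIVITY_KPA_PER_N = 0.82  # reported sensor sensitivity

def pressure_from_force(force_n):
    """Expected pressure change (kPa) for a given contact force (N)."""
    return SENSITIVITY_KPA_PER_N * force_n

def force_from_pressure(delta_p_kpa):
    """Contact force (N) estimated from the measured pressure change (kPa)."""
    return delta_p_kpa / SENSITIVITY_KPA_PER_N
```

In a real feedback chain the estimated force would then drive the skin actuator, with the residual 2-3% hysteresis and linearity errors setting a floor on accuracy.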

    Human-robot interaction for telemanipulation by small unmanned aerial systems

    This dissertation investigated the human-robot interaction (HRI) for the Mission Specialist role in a telemanipulating unmanned aerial system (UAS). The emergence of commercial unmanned aerial vehicle (UAV) platforms transformed the civil and environmental engineering industries through applications such as surveying, remote infrastructure inspection, and construction monitoring, which normally use UAVs for visual inspection only. Recent developments, however, suggest that performing physical interactions in dynamic environments will be important tasks for future UAS, particularly in applications such as environmental sampling and infrastructure testing. In all domains, the availability of a Mission Specialist to monitor the interaction and intervene when necessary is essential for successful deployments. Additionally, manual operation is the default mode for safety reasons; therefore, understanding Mission Specialist HRI is important for all small telemanipulating UAS in civil engineering, regardless of system autonomy and application. A 5-subject exploratory study and a 36-subject experimental study were conducted to evaluate variations of a dedicated, mobile Mission Specialist interface for aerial telemanipulation from a small UAV. The Shared Roles Model was used to model the UAS human-robot team, and the Mission Specialist and Pilot roles were informed by the current state of practice for manipulating UAVs. Three interface camera view designs were tested using a within-subjects design, which included an egocentric view (perspective from the manipulator), an exocentric view (perspective from the UAV), and a mixed egocentric-exocentric view. The experimental trials required Mission Specialist participants to complete a series of tasks with physical, visual, and verbal requirements.
Results from these studies found that subjects who preferred the exocentric condition performed tasks 50% faster when using their preferred interface; however, interface preferences did not affect performance for participants who preferred the mixed condition. This result led to a second finding: participants who preferred the exocentric condition were distracted by the egocentric view during the mixed condition, likely because of cognitive tunneling, and the data suggest tradeoffs between performance improvements and attentional costs when adding information in the form of multiple views to the Mission Specialist interface. Additionally, based on this empirical evaluation of multiple camera views, the exocentric view was recommended for use in a dedicated Mission Specialist telemanipulation interface. Contributions of this thesis include: i) the first focused HRI study of aerial telemanipulation; ii) an evaluative model for telemanipulation performance; iii) new recommendations for aerial telemanipulation interfacing; and iv) code, hardware designs, and system architectures contributed to the open-source UAV community. The evaluative model provides a detailed framework, a complement to the abstraction of the Shared Roles Model, that can be used to measure the effects of changes in the system, environment, operators, and interfacing factors on performance. The practical contributions of this work will expedite the use of manipulating UAV technologies by scientists, researchers, and stakeholders, particularly those in civil engineering, who will directly benefit from improved manipulating UAV performance.

    Information theoretic approach to tactile encoding and discrimination

    The human sense of touch integrates feedback from a multitude of touch receptors, but how this information is represented in the neural responses such that it can be extracted quickly and reliably is still largely an open question. At the same time, dexterous robots equipped with touch sensors are becoming more common, necessitating better methods for representing sequentially updated information and new control strategies that aid in extracting relevant features for object manipulation from the data. This thesis uses information theoretic methods for two main aims: First, the neural code for tactile processing in humans is analyzed with respect to how much information is transmitted about tactile features. Second, machine learning approaches are used in order to influence both what data is gathered by a robot and how it is represented by maximizing information theoretic quantities. The first part of this thesis contains an information theoretic analysis of data recorded from primary tactile neurons in the human peripheral somatosensory system. We examine the differences in information content of two coding schemes, namely spike timing and spike counts, along with their spatial and temporal characteristics. It is found that estimates of the neurons' information content based on the precise timing of spikes are considerably larger than for spike counts. Moreover, the information estimated based on the timing of the very first elicited spike is at least as high as that provided by spike counts, but in many cases considerably higher. This suggests that first spike latencies can serve as a powerful mechanism to transmit information quickly. However, in natural object manipulation tasks, different tactile impressions follow each other quickly, so we asked whether the hysteretic properties of the human fingertip affect neural responses and information transmission.
We find that past stimuli affect both the precise timing of spikes and spike counts of peripheral tactile neurons, resulting in increased neural noise and decreased information about ongoing stimuli. Interestingly, the first spike latencies of a subset of afferents convey information primarily about past stimulation, hinting at a mechanism to resolve ambiguity resulting from mechanical skin properties. The second part of this thesis focuses on using machine learning approaches in a robotics context in order to influence both what data is gathered and how it is represented by maximizing information theoretic quantities. During robotic object manipulation, often not all relevant object features are known, but have to be acquired from sensor data. Touch is an inherently active process and the question arises of how to best control the robot’s movements so as to maximize incoming information about the features of interest. To this end, we develop a framework that uses active learning to help with the sequential gathering of data samples by finding highly informative actions. The viability of this approach is demonstrated on a robotic hand-arm setup, where the task involves shaking bottles of different liquids in order to determine the liquid’s viscosity from tactile feedback only. The shaking frequency and the rotation angle of shaking are optimized online. Additionally, we consider the problem of how to better represent complex probability distributions that are sequentially updated, as approaches for minimizing uncertainty depend on an accurate representation of that uncertainty. A mixture of Gaussians representation is proposed and optimized using a deterministic sampling approach. We show how our method improves on similar approaches and demonstrate its usefulness in active learning scenarios. 
The results presented in this thesis highlight how information theory can provide a principled approach both for investigating how much information is contained in sensory data and for suggesting ways to optimize, either by using better representations or by actively influencing the environment.
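The comparisons above (spike timing versus spike counts) reduce to estimating mutual information between stimuli and discretized neural responses. A minimal plug-in (histogram) estimator as an illustration of the quantity involved; the variable names and the toy sequences in the test are assumptions, not the thesis's recordings.

```python
import math
from collections import Counter

def mutual_information(stimuli, responses):
    """Plug-in mutual information estimate, in bits, between two
    equal-length sequences of discrete symbols (e.g. stimulus labels
    and binned spike counts or first-spike latencies)."""
    n = len(stimuli)
    p_s = Counter(stimuli)            # marginal stimulus counts
    p_r = Counter(responses)          # marginal response counts
    p_sr = Counter(zip(stimuli, responses))  # joint counts
    mi = 0.0
    for (s, r), c in p_sr.items():
        # (c/n) * log2( p(s,r) / (p(s) * p(r)) ), with counts cancelled
        mi += (c / n) * math.log2(c * n / (p_s[s] * p_r[r]))
    return mi
```

For perfectly correlated sequences over two equiprobable symbols this yields 1 bit, and for independent sequences it approaches 0; note that plug-in estimates are biased upward for small samples, which is why analyses like those above typically apply bias corrections.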

    Assessment of fine motor performance in on-orbit servicing using telemanipulation and underwater simulation

    In a large number of assembly tasks, the success or the quality of the result depends not least on the fine motor performance of the worker. This applies not only to tasks on Earth but also to those in extraterrestrial space. Particularly in on-orbit servicing, that is, in repair and assembly tasks in orbit, human performance is affected by adverse external conditions, and errors in this environment are frequently associated with danger to the persons involved. In addition to strong temperature fluctuations, vacuum, and microgravity, high concentrations of harmful radiation prevail in orbit. For these reasons, humans are equipped with special spacesuits, the so-called Extravehicular Activity Spacesuits (EVA Spacesuits), during extravehicular activities (Thomas, 2006). While these suits protect against the adverse conditions in space, they also restrict the wearer's mobility and field of view. Beyond this restriction, the unfamiliar microgravity further impairs the organism and the human musculoskeletal system (Parrish, 1999; Thomas, 2006). These limitations can lead to losses in fine motor performance during the corresponding assembly and repair tasks in orbit. Moreover, despite extensive precautions, the costly and time-consuming extravehicular activities still carry a considerable safety risk for humans. For this reason, alternatives and means of support for astronauts' extravehicular activities are being sought (McCain, 1991). Telemanipulation systems could offer one way to support astronauts in on-orbit servicing (King, 2001).
Such systems fundamentally consist of an operator, a computer system, and a teleoperator (Hung, 2003), an arrangement that allows actions to be carried out in a remote environment. Consequently, repairs and maintenance work could be performed with suitable telemanipulation systems without a human having to be physically present in the remote environment. A telemanipulation system has already demonstrated its suitability for spaceflight during the Robotic Components Verification on the International Space Station (ROKVISS) of the German Aerospace Center (DLR) in cooperation with the European Space Agency (ESA) (Albu-Schäffer, 2006). The DLR lightweight robot provided a foundation for this setup, and these robots are used in several telemanipulation systems. A closer examination of such telemanipulation systems with regard to their suitability for supporting astronauts in on-orbit servicing is therefore worthwhile. It should be noted, however, that telemanipulation systems themselves exhibit factors that may impair user performance, e.g., a time delay in data transmission (Keshavarzpour, 2008).

    Tangible auditory interfaces : combining auditory displays and tangible interfaces

    Bovermann T. Tangible auditory interfaces: combining auditory displays and tangible interfaces. Bielefeld (Germany): Bielefeld University; 2009. This thesis investigates the capabilities of interconnecting Tangible User Interfaces and Auditory Displays. Tangible Auditory Interfaces (TAIs) utilise artificial physical objects as well as soundscapes to represent digital information. The interconnection of the two fields establishes a tight coupling between information and operation that builds on the human's familiarity with the incorporated interrelations. This work gives a formal introduction to TAIs and demonstrates their key features through seven proof-of-concept applications.