
    Full-hand electrotactile feedback using electronic skin and matrix electrodes for high-bandwidth human–machine interfacing

    Tactile feedback is relevant in a broad range of human–machine interaction systems (e.g. teleoperation, virtual reality and prosthetics). The available tactile feedback interfaces comprise only a few sensing and stimulation units, which limits the amount of information conveyed to the user. The present study describes a novel technology that relies on distributed sensing and stimulation to convey comprehensive tactile feedback to the user of a robotic end effector. The system comprises six flexible sensing arrays (57 sensors) integrated on the fingers and palm of a robotic hand, embedded electronics (64 recording channels), a multichannel stimulator and seven flexible electrodes (64 stimulation pads) placed on the volar side of the subject’s hand. The system was tested in seven subjects asked to recognize contact positions and identify contact sliding on the electronic skin, using a distributed anode configuration (DAC) and a single dedicated anode configuration. The experiments demonstrated that DAC resulted in substantially better performance. Using DAC, the system successfully translated the contact patterns into electrotactile profiles that the subjects could recognize with satisfactory accuracy (median {IQR} of 88.6 {11}% for static and 93.3 {5}% for dynamic patterns). The proposed system is an important step towards the development of high-density human–machine interfacing between the user and a robotic hand.
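
    As an illustration of how distributed sensing can drive distributed stimulation, the following is a minimal sketch that translates contact pressures from the 57-sensor electronic skin into a 64-pad electrotactile profile. The sensor-to-pad assignment, the threshold and the linear intensity scaling are assumptions made for illustration; the paper's actual mapping and the distributed anode configuration details are not given in this abstract.

```python
import numpy as np

N_SENSORS = 57   # sensing points on the robotic hand (from the abstract)
N_PADS = 64      # stimulation pads on the user's hand (from the abstract)

# Hypothetical sensor-to-pad assignment; the real mapping in the paper is
# anatomical (finger sensors to finger pads, palm sensors to palm pads).
sensor_to_pad = np.random.default_rng(0).integers(0, N_PADS, size=N_SENSORS)

def contact_to_profile(pressures, threshold=0.05, max_amp=1.0):
    """Translate normalized sensor pressures (0..1) into per-pad stimulation
    amplitudes (0..max_amp). Pads mapped from inactive sensors stay at zero."""
    profile = np.zeros(N_PADS)
    for s, p in enumerate(pressures):
        if p > threshold:                      # ignore noise-level contacts
            pad = sensor_to_pad[s]
            profile[pad] = max(profile[pad], p * max_amp)
    return profile

# Example: a single fingertip contact activates a small cluster of sensors.
pressures = np.zeros(N_SENSORS)
pressures[10:13] = [0.4, 0.9, 0.5]
print(contact_to_profile(pressures).nonzero()[0])   # indices of active pads
```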

    Touch- and Walkable Virtual Reality to Support Blind and Visually Impaired Peoples' Building Exploration in the Context of Orientation and Mobility

    Access to digital content and information is becoming increasingly important for successful participation in today's increasingly digitized civil society. Such information is mostly presented visually, which restricts access for blind and visually impaired people. The most fundamental barrier is often basic orientation and mobility (and consequently, social mobility), including gaining knowledge about unknown buildings before visiting them. To bridge such barriers, technological aids should be developed and deployed. A trade-off is needed between technologically low-threshold, widely disseminable aids and interactive-adaptive but complex systems. Adapting virtual reality (VR) technology spans a wide range of development and design decisions. The main benefits of VR technology are increased interactivity, updatability, and the possibility to explore virtual spaces and models as proxies of real ones without real-world hazards and without depending on the limited availability of sighted assistants. However, virtual objects and environments have no physicality. This thesis therefore investigates which VR interaction forms are reasonable (i.e., offer adequate dissemination potential) for making virtual representations of real buildings touchable or walkable in the context of orientation and mobility. Although there are already developments and evaluations of VR technology that are disjoint in content and technology, empirical evidence is lacking. Additionally, this thesis provides a survey of the different interactions. After considering human physiology, assistive media (e.g., tactile maps), and technological characteristics, the current state of the art of VR is introduced, and its application for blind and visually impaired users, and the way to get there, is discussed by introducing a novel taxonomy. In addition to the interaction itself, characteristics of the user and the device, the application context, and the user-centered development or evaluation are used as classifiers. The following chapters are motivated by explorative approaches, i.e., at 'small scale' (using so-called data gloves) and at 'large scale' (using avatar-controlled VR locomotion). They report empirical studies with blind and visually impaired users and give formative insight into how virtual objects within hands' reach can be grasped using haptic feedback and how different kinds of VR locomotion can be applied to explore virtual environments. From this, device-independent technological possibilities and challenges for further improvement are derived. Building on these findings, subsequent research can focus on aspects such as the specific design of interactive elements, temporally and spatially collaborative application scenarios, and the evaluation of an entire application workflow (i.e., scanning the real environment and exploring it virtually for training purposes, as well as designing the entire application in a long-term accessible manner).
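
    As a rough illustration of the classifier groups named above (interaction, user and device characteristics, application context, user-centered development/evaluation), here is a hypothetical record type for one taxonomy entry; the field names and example values are invented for illustration, not taken from the thesis.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaxonomyEntry:
    """One classified VR interaction approach, grouped by the classifier
    dimensions named in the abstract; values below are placeholders."""
    interaction: str                                                  # e.g. "touchable" or "walkable"
    user_characteristics: List[str] = field(default_factory=list)    # e.g. "blind", "low vision"
    device_characteristics: List[str] = field(default_factory=list)  # e.g. "data glove", "treadmill"
    application_context: str = ""                                     # e.g. "pre-visit building exploration"
    user_centered_evaluation: bool = False                            # evaluated with target users?

entry = TaxonomyEntry(
    interaction="walkable",
    user_characteristics=["blind"],
    device_characteristics=["avatar-controlled locomotion"],
    application_context="pre-visit building exploration",
    user_centered_evaluation=True,
)
print(entry)
```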

    Haptic and Audio-visual Stimuli: Enhancing Experiences and Interaction


    Human-Machine Interfaces using Distributed Sensing and Stimulation Systems

    As technology moves towards more natural human-machine interfaces (e.g. bionic limbs, teleoperation, virtual reality), it is necessary to develop a sensory feedback system in order to foster embodiment and achieve better immersion in the control system. Contemporary feedback interfaces presented in research use few sensors and stimulation units to feed back at most two discrete variables (e.g. grasping force and aperture), whereas the human sense of touch relies on a distributed network of mechanoreceptors providing a wide bandwidth of information. To provide this type of feedback, it is necessary to develop a distributed sensing system that can extract a wide range of information during the interaction between the robot and the environment. In addition, a distributed feedback interface is needed to deliver such information to the user. This thesis proposes the development of a distributed sensing system (e-skin) to acquire tactile sensation, a first integration of the distributed sensing system on a robotic hand, the development of a sensory feedback system that comprises the distributed sensing system and a distributed stimulation system, and finally the implementation of deep learning methods for the classification of tactile data. Its core focus is the development and testing of a sensory feedback system based on the latest distributed sensing and stimulation techniques. To this end, the thesis comprises two introductory chapters that describe the state of the art in the field, the objectives, the methodology used, and the contributions, as well as six studies that tackle the development of human-machine interfaces.
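
    To make the last point concrete, here is a minimal sketch of a deep learning classifier for tactile data, assuming the data arrives as small pressure frames (e.g. 8x8 taxel maps) and that a convolutional network is an acceptable stand-in for the methods used in the thesis; the frame size and class count are placeholders, not values from the abstract.

```python
import torch
import torch.nn as nn

class TactileCNN(nn.Module):
    """Toy classifier for tactile frames (1 x 8 x 8 pressure maps)."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 8x8 -> 4x4
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, n_classes),       # logits per contact class
        )

    def forward(self, x):
        return self.net(x)

# One forward pass on a random batch of 4 frames.
model = TactileCNN()
frames = torch.rand(4, 1, 8, 8)
print(model(frames).shape)   # torch.Size([4, 6])
```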

    Design and Effect of Continuous Wearable Tactile Displays

    Our sense of touch is one of our core senses and, while not as information rich as sight or hearing, it tethers us to reality. Our skin is the largest sensory organ in our body, and we rely on it so much that we don't think about it most of the time. Tactile displays - with the exception of actuators for notifications on smartphones and smartwatches - are currently understudied and underused: tactile cues are mostly used to notify the user of an incoming call or text message. Continuous displays in particular - displays that do not just send one notification but stay active for an extended period of time and continuously communicate information - are rarely studied. This thesis explores the use of our vibration perception to create continuous tactile displays. Transmitting a continuous stream of tactile information to a user in a wearable format can help elevate tactile displays from being mostly used for notifications to becoming more like additional senses, enabling us to perceive our environment in new ways. This work provides a serious step forward in the design, effect and use of continuous tactile displays in human-computer interaction. The main contributions include:

    Exploration of Continuous Wearable Tactile Interfaces: This thesis explores continuous tactile displays in different contexts and with different types of tactile information systems. The use cases span several application domains - sports, gaming and business. The different types of continuous tactile displays feature one- or multidimensional tactile patterns, temporal patterns and discrete tactile patterns.

    Automatic Generation of Personalized Vibration Patterns: This thesis describes a novel approach to designing vibrotactile patterns without expert knowledge by leveraging evolutionary algorithms to create personalized vibration patterns. It presents the design of a human-centered evolutionary algorithm that generates abstract vibration patterns. The algorithm was tested in a user study, which offered evidence that interactive generation of abstract vibration patterns is possible and yields diverse sets of vibration patterns that can be recognized with high accuracy.

    Passive Haptic Learning for Vibration Patterns: Previous studies in passive haptic learning have shown surprisingly strong results for learning Morse code. If these findings could be confirmed and generalized, learning a new tactile alphabet could be made easier and done in passing. This claim was therefore investigated in this thesis and needed to be corrected and contextualized. A user study was conducted to examine the effects of the interaction design and of distraction tasks on the ability to learn stimulus-stimulus associations with passive haptic learning. This thesis presents evidence that passive haptic learning of vibration patterns induces only a marginal learning effect and is not a feasible and efficient way to learn vibration patterns that include more than two vibrations.

    Influence of Reference Frames for Spatial Tactile Stimuli: Designing wearable tactile stimuli that contain spatial information can be a challenge due to the natural body movement of the wearer. An important consideration is therefore which reference frame to use for spatial cues. This thesis investigated allocentric versus egocentric reference frames on the wrist and compared them in a user study with respect to induced cognitive load, reaction time and accuracy. The results show that an allocentric reference frame drastically lowers cognitive load and slightly lowers reaction time while maintaining the same accuracy as an egocentric reference frame, making a strong case for the use of allocentric reference frames in tactile bracelets with several tactile actuators.
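
    To illustrate the reference frame distinction from the last contribution, the following is a minimal sketch for a hypothetical four-motor wrist bracelet: in the allocentric case the wrist orientation is compensated so a cue stays anchored to world directions, while in the egocentric case it is not. The motor count, layout and orientation source are assumptions, not details from the thesis.

```python
N_ACTUATORS = 4   # hypothetical bracelet with 4 evenly spaced vibration motors

def actuator_for_cue(cue_bearing_deg, wrist_heading_deg, allocentric=True):
    """Pick the motor that encodes a spatial cue.
    Allocentric: the cue is fixed to world directions (e.g. 0 deg = north),
    so the wrist orientation is compensated before choosing a motor.
    Egocentric: the cue is expressed relative to the wrist, so it is not."""
    angle = cue_bearing_deg - (wrist_heading_deg if allocentric else 0.0)
    sector = 360.0 / N_ACTUATORS
    return int(((angle % 360.0) + sector / 2) // sector) % N_ACTUATORS

# A "north" cue while the wrist points east (90 deg):
print(actuator_for_cue(0, 90, allocentric=True))   # compensated -> motor 3
print(actuator_for_cue(0, 90, allocentric=False))  # uncompensated -> motor 0
```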

    Context-aware gestural interaction in the smart environments of the ubiquitous computing era

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Technology is becoming pervasive, and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Recently, researchers have started to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interactions. Many issues are still open in this emerging domain and, in particular, there is a lack of common guidelines for the coherent implementation of gestural interfaces. This research investigates gestural interactions between humans and smart environments. It proposes a novel framework for the high-level organization of context information. The framework is conceived to support a novel approach that uses functional gestures to reduce gesture ambiguity and the number of gestures in taxonomies, and to improve usability. In order to validate this framework, a proof of concept has been developed: a prototype implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests were conducted to assess the gesture recognition accuracy and the usability of interfaces developed following the proposed framework. The results show that the method provides robust gesture recognition from very different viewpoints, while the usability tests yielded high scores. Further investigation of the context information addressed the problem of user status, understood here as human activity, and a technique based on an innovative application of electromyography is proposed. The tests show that the proposed technique achieves good activity recognition accuracy. The context is also treated as system status. In ubiquitous computing, the system can adopt different paradigms: wearable, environmental and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms. Moreover, it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms.
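
    As a small illustration of how functional gestures plus context can reduce gesture ambiguity, the sketch below resolves one generic gesture against the device the user currently points at; the gesture names, devices and commands are invented for illustration and are not the thesis' actual taxonomy.

```python
# One functional gesture ("activate") is reused across devices and resolved
# by context, so fewer distinct gestures are needed than one per command.
ACTION_TABLE = {
    ("activate", "lamp"):      "toggle_light",
    ("activate", "tv"):        "power_on_tv",
    ("adjust_up", "lamp"):     "increase_brightness",
    ("adjust_up", "speaker"):  "increase_volume",
}

def resolve(gesture, pointed_device):
    """Map a recognized functional gesture plus the device currently pointed
    at (deictic context) to a concrete command, or None if undefined."""
    return ACTION_TABLE.get((gesture, pointed_device))

print(resolve("activate", "lamp"))      # toggle_light
print(resolve("adjust_up", "speaker"))  # increase_volume
```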

    Limited Information Shared Control and its Applications to Large Vehicle Manipulators

    This dissertation deals with the shared control of a mobile working machine consisting of a utility vehicle and one or more hydraulic manipulators. Such machines are used for road maintenance tasks. The manipulator's working environment is unstructured, which makes determining a reference trajectory difficult or impossible. This work therefore proposes an approach that automates only the vehicle, while the human operator remains part of the system and controls the manipulator. Such partial automation of the overall system leads to a special class of human-machine interaction that has not yet been studied in the literature: shared control between two subsystems in which the automation has no information about the subsystem controlled by the human. Hence, this work introduces a systematic approach to shared control with limited information, which can support the human operator without measuring the references or the system states of the manipulator. In addition, a systematic design concept for this limited-information shared control is presented. For this design method, two new subclasses of so-called potential games are introduced, which enable a systematic computation of the parameters of the developed shared control without manual tuning. Finally, the developed shared control concept is applied to a large mobile working machine as an example in order to determine and assess its benefits. After analysis in simulations, the practical applicability of the method is examined in three experiments with human subjects on a simulator. The results show the superiority of the developed shared control concept over manual control and non-cooperative control with respect to both objective performance and the subjective ratings of the participants. This dissertation thus shows that shared control of mobile working machines based on the developed theoretical concepts is both helpful and practically applicable.
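
    The abstract refers to subclasses of potential games without defining them; as background, the following sketch checks the standard exact-potential condition (every unilateral change in a player's cost equals the change of a single potential function) on a toy two-player game whose payoff numbers are invented.

```python
import itertools
import numpy as np

# Toy 2x2 game: cost matrices for player 1 and player 2
# (rows: player 1's action, columns: player 2's action).
J1 = np.array([[2.0, 4.0],
               [1.0, 3.0]])
J2 = np.array([[3.0, 1.0],
               [5.0, 3.0]])
# Candidate potential: unilateral cost changes must equal potential changes.
P  = np.array([[4.0, 2.0],
               [3.0, 1.0]])

def is_exact_potential(J1, J2, P):
    """Check J_i(a_i', a_-i) - J_i(a_i, a_-i) == P(a_i', a_-i) - P(a_i, a_-i)
    for every unilateral deviation of each player."""
    for a1, a1p, a2 in itertools.product(range(2), repeat=3):
        if not np.isclose(J1[a1p, a2] - J1[a1, a2], P[a1p, a2] - P[a1, a2]):
            return False
    for a1, a2, a2p in itertools.product(range(2), repeat=3):
        if not np.isclose(J2[a1, a2p] - J2[a1, a2], P[a1, a2p] - P[a1, a2]):
            return False
    return True

print(is_exact_potential(J1, J2, P))   # True for this toy example
```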