
    Nonlinear Modeling and Control of Driving Interfaces and Continuum Robots for System Performance Gains

    With the rise of (semi)autonomous vehicles and continuum robotics technology and applications, there has been increasing interest in controller and haptic interface designs. The presence of nonlinearities in the vehicle dynamics is the main challenge in selecting control algorithms for real-time regulation and tracking of (semi)autonomous vehicles. Moreover, control of continuum structures with infinite dimensions proves difficult due to their complex dynamics and the soft, flexible nature of the manipulator body. Trajectory tracking and control of automotive and robotic systems require control algorithms that can effectively handle the nonlinearities of the system without approximation, as well as modeling uncertainties and input disturbances. Control strategies based on a linearized model are often inadequate for meeting precise performance requirements, so nonlinear techniques must be considered. Nonlinear control systems provide tools and methodologies for designing and realizing (semi)autonomous vehicles and continuum robots with extended specifications based on their operational mission profiles. This dissertation provides insight into various nonlinear controllers developed for (semi)autonomous vehicles and continuum robots as a guideline for future applications in the automotive and soft robotics fields. A comprehensive assessment of the approaches and control strategies, as well as insight into future areas of research in this field, is presented. First, two vehicle haptic interfaces, a robotic grip and a joystick, both accompanied by nonlinear sliding mode control, were developed and studied on a steer-by-wire platform integrated with a virtual reality driving environment.
An operator-in-the-loop evaluation with 30 human test subjects was used to investigate these haptic steering interfaces over a prescribed series of driving maneuvers through real-time data logging and post-test questionnaires. A conventional steering wheel with a robust sliding mode controller was used for all the driving events for comparison. Test subjects operated these interfaces on a given track comprising a double-lane-change maneuver and a country road driving event. Subjective and objective results demonstrate that the driver's experience can be enhanced by up to 75.3% with a robotic steering input when compared to the traditional steering wheel during extreme maneuvers such as high-speed driving and navigating sharp turns (e.g., hairpin turns). Second, a cellphone-inspired portable human-machine interface (HMI) that incorporates the directional control of the vehicle as well as the brake and throttle functionality into a single holistic device is presented. A nonlinear adaptive control technique and an optimal control approach based on driver intent were also proposed to accompany the mechatronic system for combined longitudinal and lateral vehicle guidance. Designed to assist drivers with disabilities by ergonomically eliminating extensive arm and leg movements, the device was tested on a driving simulator platform. Human test subjects evaluated the mechatronic system with various control configurations through obstacle avoidance and city road driving tests, and a conventional set of steering wheel and pedals was also utilized for comparison. Subjective and objective results from the tests demonstrate that the mobile driving interface with the proposed control scheme can enhance the driver's performance by up to 55.8% when compared to the traditional driving system during aggressive maneuvers.
The system's superior performance during certain vehicle maneuvers and the approval received from the participants demonstrate its potential as an alternative driving adaptation for disabled drivers. Third, a novel strategy is designed for trajectory control of a multi-section continuum robot in three-dimensional space to achieve accurate orientation, curvature, and section length tracking. The formulation connects the continuum manipulator's dynamic behavior to a virtual discrete-jointed robot whose degrees of freedom are directly mapped to those of a continuum robot section under the hypothesis of constant curvature. Based on this connection, a computed torque control architecture is developed for the virtual robot, for which inverse kinematics and dynamic equations are constructed and exploited, with appropriate transformations developed for implementation on the continuum robot. The control algorithm is validated in a realistic simulation and implemented on a six degree-of-freedom two-section OctArm continuum manipulator. Both simulation and experimental results show that the proposed method can manage simultaneous extension/contraction, bending, and torsion actions on multi-section continuum robots with good tracking performance (e.g., steady-state arc length and curvature tracking errors of 3.3 mm and 130 mm^-1, respectively). Last, semi-autonomous vehicles equipped with assistive control systems may experience degraded lateral behaviors when aggressive driver steering commands compete with high levels of autonomy. This challenge can be mitigated with effective operator intent recognition, which can configure automated systems in context-specific situations where the driver intends to perform a steering maneuver. Accordingly, an ensemble learning-based driver intent recognition strategy has been developed.
A nonlinear model predictive control algorithm was designed and implemented to generate haptic feedback for lateral vehicle guidance, assisting drivers in accomplishing their intended action. To validate the framework, operator-in-the-loop testing with 30 human subjects was conducted on a steer-by-wire platform with a virtual reality driving environment. The roadway scenarios included lane changes, obstacle avoidance, intersection turns, and a highway exit. The automated system with learning-based driver intent recognition was compared to both an automated system with a finite state machine-based driver intent estimator and an automated system without any driver intent prediction for all driving events. Test results demonstrate that semi-autonomous vehicle performance can be enhanced by up to 74.1% with a learning-based intent predictor. The proposed holistic framework, which integrates human intelligence, machine learning algorithms, and vehicle control, can help solve the driver-system conflict problem, leading to safer vehicle operations.
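The sliding mode controllers referenced throughout this abstract follow a common pattern that can be sketched minimally. The gains, the toy double-integrator error model, and the disturbance below are illustrative assumptions, not the dissertation's actual vehicle dynamics:

```python
import math

def smc_step(e, e_dot, lam=2.0, K=5.0, phi=0.1):
    """One step of a first-order sliding mode law.

    s = e_dot + lam*e defines the sliding surface; the control
    u = -K*sat(s/phi) pushes the state toward s = 0, with a
    boundary layer of width phi to avoid chattering.
    """
    s = e_dot + lam * e
    sat = max(-1.0, min(1.0, s / phi))  # saturation replaces sign()
    return -K * sat

# Simulate a toy double-integrator error model, e_ddot = u + d(t),
# with a bounded disturbance d that the controller must reject.
e, e_dot, dt = 1.0, 0.0, 0.001
for k in range(20000):
    u = smc_step(e, e_dot)
    d = 0.5 * math.sin(0.01 * k)  # bounded disturbance
    e_dot += (u + d) * dt
    e += e_dot * dt
print(abs(e) < 0.05)  # error driven near zero despite the disturbance
```

The switching gain K is chosen larger than the disturbance bound, which is what gives sliding mode control its robustness to the unmodeled effects the abstract emphasizes.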

    How the architecture of the CityCar enhances personal mobility and supporting industries

    Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 121-124). Growing populations, an increasing middle class, and rapid urbanization - for today's urban dweller, all of these escalating factors continue to contribute to problems of excessive energy use, road congestion, pollution due to carbon emissions, and inefficient personal transit. Considering that the average vehicle in a city weighs thousands of pounds, usually carries only one person per trip, and expends a significant proportion of its gasoline simply searching for resources such as parking, new efficient and intelligent modes of transportation are in need of exploration. This dissertation presents the design and development of an electric vehicle called the "CityCar" that confronts the aforementioned problems of urban mobility with a novel vehicle architecture. The assembly of the CityCar derives from a subset of "urban modular electric vehicle" (uMEV) components in which five core units are combined to create a variety of solutions for urban personal mobility. By drastically decreasing the granularity of the vehicle's subcomponents into larger interchangeable modules, the uMEV platform expands options for fleet customization while simultaneously addressing the complex relationship between automotive manufacturers and their suppliers through a responsibility shift among their respective subcomponents. Transforming the vehicle's anatomy from complex mechanically-dominant entities to electrically-dominant modular components enables unique design features within the uMEV fleet. The CityCar, for example, exploits technologies such as a folding chassis to reduce its footprint by 40% and Robot Wheels, each allotted between 72 and 120 degrees of rotation, to together enable a seven-foot turning circle.
At just over 1,000 pounds, its lightweight zero-emission electric platform, comprising significantly fewer parts, curbs the negative externalities that today's automobiles create in city environments. Additionally, the vehicle platform developed from the assembly of several core units empowers a consortium of suppliers to self-coordinate through a unique modular business model. Lastly, the CityCar-specific uMEV confronts problems within urban transit by providing a nimble folding mobility solution tailored specifically to crowded cities. Benefits, such as a 5:1 parking density and reduced maintenance demands, are especially reinforced in the context of shared personal transportation services like Mobility-on-Demand.
by William Lark, Jr., Ph.D.
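The seven-foot turning circle can be sanity-checked with basic all-wheel counter-steering geometry. The wheelbase and per-wheel steering angle below are assumed values for illustration, not the CityCar's actual specifications:

```python
import math

def turning_circle_ft(wheelbase_m, steer_deg):
    """Turning-circle diameter for symmetric four-wheel
    counter-steering: the rotation center sits midway along the
    wheelbase, so the radius is R = (wheelbase/2) / tan(steer)."""
    r_m = (wheelbase_m / 2.0) / math.tan(math.radians(steer_deg))
    return 2.0 * r_m * 3.28084  # diameter converted to feet

# Assumed numbers for illustration only (not from the thesis):
# a 1.5 m wheelbase and 36 degrees of steering per wheel.
print(round(turning_circle_ft(1.5, 36.0), 1))  # ≈ 6.8 ft
```

With plausible small-car dimensions, steering each wheel to roughly half its stated 72-degree minimum range already yields a circle near the seven-foot figure, which is far tighter than a conventional front-steered car can achieve.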

    How a Diverse Research Ecosystem Has Generated New Rehabilitation Technologies: Review of NIDILRR’s Rehabilitation Engineering Research Centers

    Over 50 million United States citizens (1 in 6 people in the US) have a developmental, acquired, or degenerative disability. The average US citizen can expect to live 20% of his or her life with a disability. Rehabilitation technologies play a major role in improving the quality of life for people with a disability, yet widespread and highly challenging needs remain. Within the US, a major effort aimed at the creation and evaluation of rehabilitation technology has been the Rehabilitation Engineering Research Centers (RERCs) sponsored by the National Institute on Disability, Independent Living, and Rehabilitation Research. As envisioned at their conception by a panel of the National Academy of Sciences in 1970, these centers were intended to take a "total approach to rehabilitation", combining medicine, engineering, and related science, to improve the quality of life of individuals with a disability. Here, we review the scope, achievements, and ongoing projects of an unbiased sample of 19 currently active or recently terminated RERCs. Specifically, for each center, we briefly explain the needs it targets, summarize key historical advances, identify emerging innovations, and consider future directions. Our assessment from this review is that the RERC program indeed involves a multidisciplinary approach, with 36 professional fields involved, although 70% of research and development staff are in engineering fields, 23% in clinical fields, and only 7% in basic science fields; significantly, 11% of the professional staff have a disability related to their research. We observe that the RERC program has substantially diversified the scope of its work since the 1970s, addressing more types of disabilities using more technologies, and, in particular, often now focusing on information technologies.
RERC work also now often views users as integrated into an interdependent society through technologies that people both with and without disabilities co-use (such as the internet, wireless communication, and architecture). In addition, RERC research has evolved to view users as able to improve outcomes through learning, exercise, and plasticity (rather than as static), and such interventions can be optimally timed. We provide examples of rehabilitation technology innovation produced by the RERCs that illustrate this increasingly diversifying scope and evolving perspective. We conclude by discussing growth opportunities and possible future directions of the RERC program.

    A LiDAR Based Semi-Autonomous Collision Avoidance System and the Development of a Hardware-in-the-Loop Simulator to Aid in Algorithm Development and Human Studies

    In this paper, the architecture and implementation of an embedded controller for a steering-based semi-autonomous collision avoidance system on a 1/10th-scale model is presented. In addition, the development of a 2D hardware-in-the-loop simulator with vehicle dynamics based on the bicycle model is described. The semi-autonomous collision avoidance software is fully contained onboard a single-board computer running embedded GNU/Linux. To eliminate any wired tethers that limit the system's abilities, the driver operates the vehicle from a user control station through a wireless Bluetooth interface. The user control station is outfitted with a game controller that provides standard steering wheel and pedal controls, along with a television monitor equipped with a wireless video receiver to provide a real-time driver's-perspective video feed. The hardware-in-the-loop simulator was developed to aid in the evaluation and further development of the semi-autonomous collision avoidance algorithms. In addition, a post-analysis tool was created to numerically and visually inspect the controller's responses. The ultimate goal of this project was to create a wireless 1/10th-scale collision avoidance research platform to facilitate human studies surrounding driver assistance and active safety systems in automobiles. This thesis is a continuation of work done by numerous Cal Poly undergraduate and graduate students.
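The kinematic bicycle model underlying the simulator's vehicle dynamics can be sketched as follows; the speed, steering angle, and wheelbase are hypothetical values for illustration, not the 1/10th-scale platform's parameters:

```python
import math

def bicycle_step(x, y, yaw, v, steer, wheelbase, dt):
    """One Euler step of the kinematic bicycle model: the vehicle is
    collapsed to a single front and rear wheel separated by the
    wheelbase, giving yaw rate = v * tan(steer) / wheelbase."""
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += v * math.tan(steer) / wheelbase * dt
    return x, y, yaw

# Drive with a constant left steer: the path closes into a circle
# of radius wheelbase / tan(steer).
x = y = yaw = 0.0
for _ in range(1000):
    x, y, yaw = bicycle_step(x, y, yaw, v=1.0, steer=0.3,
                             wheelbase=0.25, dt=0.01)
radius = 0.25 / math.tan(0.3)
print(round(radius, 3))  # turning radius in meters
```

The model ignores tire slip, which is why it suits a low-speed 2D hardware-in-the-loop setup: it is cheap enough to run in real time on an embedded board while still capturing steering-driven lateral motion.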

    A Framework For Abstracting, Designing And Building Tangible Gesture Interactive Systems

    This thesis discusses tangible gesture interaction, a novel paradigm for interacting with computers that blends concepts from the more popular fields of tangible interaction and gesture interaction. Taking advantage of humans' innate abilities to manipulate physical objects and to communicate through gestures, tangible gesture interaction is particularly interesting for interacting in smart environments, bringing interaction with computers beyond the screen and back to the real world. Since tangible gesture interaction is a relatively new field of research, this thesis presents a conceptual framework that aims to support future work in this field. The Tangible Gesture Interaction Framework provides support on three levels. First, it supports theoretical reflection on the different types of tangible gestures that can be designed: physically, through a taxonomy based on three components (move, hold, and touch) and additional attributes, and semantically, through a taxonomy of the semantic constructs that can be used to associate meaning with tangible gestures. Second, it helps in conceiving new tangible gesture interactive systems and designing new interactions based on gestures with objects, through dedicated guidelines for tangible gesture definition and common practices for different application domains. Third, it helps in building new tangible gesture interactive systems, supporting the choice among four different technological approaches (embedded and embodied, wearable, environmental, or hybrid) and providing general guidance for each approach. As an application of this framework, this thesis also presents seven tangible gesture interactive systems for three different application domains: interacting with the In-Vehicle Infotainment System (IVIS) of a car, emotional and interpersonal communication, and interaction in a smart home.
For the first application domain, four different systems that use gestures on the steering wheel as a means of interacting with the IVIS were designed, developed, and evaluated. For the second application domain, an anthropomorphic lamp able to recognize gestures that humans typically perform for interpersonal communication was conceived and developed. A second system, based on smart t-shirts, recognizes when two people hug and rewards the gesture with an exchange of digital information. Finally, a smart watch for recognizing gestures performed with objects held in the hand in the context of the smart home was investigated. The analysis of existing systems found in the literature and of the systems developed during this thesis shows that the framework has good descriptive and evaluative power. The applications developed during this thesis show that the proposed framework also has good generative power.
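The physical taxonomy's three components (move, hold, and touch) suggest a simple encoding. The sketch below, including the example gestures, is an illustrative assumption rather than the framework's reference implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TangibleGesture:
    """A tangible gesture described by the framework's three
    physical components: move, hold, and touch."""
    name: str
    move: bool   # the object is displaced or reoriented
    hold: bool   # the object is grasped during the gesture
    touch: bool  # the hand contacts the object's surface

    def components(self):
        return [c for c, used in
                [("move", self.move), ("hold", self.hold),
                 ("touch", self.touch)] if used]

# Hypothetical example gestures for the steering-wheel domain:
swipe = TangibleGesture("swipe on wheel rim",
                        move=False, hold=False, touch=True)
turn = TangibleGesture("rotate held object",
                       move=True, hold=True, touch=True)
print(swipe.components())  # ['touch']
```

Each combination of the three booleans corresponds to one class of tangible gesture in the taxonomy, which is what makes the component set useful both for describing existing systems and for enumerating unexplored design options.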

    Ctrl Shift: How Crip Alt Ctrl Designers Change the Game and Reimagine Access

    My journey as a disabled arts practitioner has been one of invention, hacking, and re-imagining what input systems could be. I have created my own modalities for creating work, rather than relying on commercially available options. This is a common practice within the disabled community, as individuals often modify and hack their surroundings to make them more usable. For example, ADAPT activists took sledgehammers to curbs and poured curb ramps with cement bags, ultimately leading to the widespread adoption of curb cuts as a standard architectural feature. As Yergeau notes, this type of "criptastic hacking" represents a creative resistance (Yergeau, 2012). My interfaces and art projects are a combination of science fiction world-building, technology prototyping, and experimentation with novel ways of experiencing the world that work for my ability. I have been building interactive objects for over 20 years, and my bespoke controller games are both pieces I find comfortable to play and conceptual proposals that I share with the games community to spark consideration of alternative ways of interacting with games culture. The interdisciplinary design research herein crosses a range of disciplines, drawing inspiration from radical forms of cognitive science, games studies, feminist studies, HCI, crip technoscience, radical science fiction, disability studies, and making practices. What has emerged through studying my own practice and the practices of others during this research is a criptastic design framework for creating playful experiences. My research aims to gain a deeper understanding of the ways that hacking and remaking the world manifest as modifications to the design process itself. I created four versions of a physical alt ctrl game and conducted a design study with disabled artists and alt ctrl game creators.
The game, Bot Party, was developed through a series of public exhibitions and explored the relationship between my criptastic bespoke interface design and embodied experiences of group play. Bot Party involves physical interaction among players in groups and helped me understand my own ways of designing, while the study looks at three other disabled designers to understand the ways in which their processes are similar to or different from my own. By conducting this work, I aim to contribute to the larger conversation within the games studies community about the importance of accessibility and inclusivity in game design. The results highlight the need for continued exploration and development in this area, specifically in design methods. The study's findings, as they relate to my own practice, revealed the importance of considering a set of values and design processes in relation to disability when creating games and playful experiences. With this perspective, I propose an initial framework that outlines possible key themes for disabled game designers. Using values as a starting point for creating deeply accessible games, this framework serves as a starting point for future research into accessible game design. It seeks to subvert the notion that accessibility is a list of UX best practices, audio descriptions, captions, and haptic additions, and moves towards embedding within game design the values and practices used by disabled designers from the outset of the creative process. Access can be a creative framework. An important point to make is that my efforts to do a PhD resist the academic ableism limiting the participation of people who are not from a normative background. The act of creating this PhD has eaten at the edge of my ability, and the research here was often conducted in pain under extremely trying circumstances. This perspective is relevant because it often informed my design choices and thinking.
Additionally, it was conducted at a university where I experienced active discrimination from members of staff who simply refused to believe in disabilities they could not see, and who in one case wrote that my disability was "self-ascribed." To work, I had to move outside the academy and seek out workshops that gave me accessible, ergonomic equipment, as is discussed in the Bot Party section. This bears mentioning because it reflects how threatening disabilities can be within academic settings and how even providing basic levels of accessibility remains a challenge for academic institutions. The above framework could benefit academia if used to redesign postgraduate academic research practices within the academy from a place of Crip-informed pedagogy. This is future work that this academic researcher hopes to explore in depth within their academic journey. It is important to note that much of the research most relevant to this thesis around disability studies and technology has emerged in recent years and, as a result, was included iteratively in the literature review. It informed the third study and my iterative design practice as part of the journey; however, I began this work before much of the writing in the literature review existed, including the creation of Bot Party's first iterations. Finding this scholarship and these authors has been a kinning. Kinship, according to Gavin Van Horn, "can be considered a noun…shared and storied relations and memories that inhere in people and places; or more metaphorical imaginings that unite us to faith traditions, cultures, countries, or the planet…Perhaps this kinship-in-action should be called kinning." (Horn et al., 2021) Kinning happened throughout this work, and this thesis served me as a place for discovery, contemplation, and empowerment. It is my hope that sections of it will serve this function for others within my community.
I found kinship with other authors working in the field of disability studies and technology, particularly with Alison Kafer, who offers a critique of Donna Haraway's cyborg in her book "Feminist, Queer, Crip" (Kafer, 2013). Kafer's work highlights the limitations of Haraway's cyborg as a figure of empowerment for marginalized bodies and identities, and instead advocates for a crip-queer-feminist perspective on technology and embodiment. I have also found resonance in the work of Aimi Hamraie and Kelly Fritsch, whose work in disability studies and HCI has been instrumental in shaping this research. Specifically, their concept of "crip technoscience" has been a key framework for understanding technology creation by disabled technologists (Hamraie and Fritsch, 2019). Overall, it is my hope that this thesis will serve as a generative resource for others within the community on this journey, particularly for those who are working towards a more inclusive and intersectional understanding of technology and embodiment.

    Evaluating Engineering Learning and Gender Neutrality for the Product Design of a Modular Robotic Kit

    The development of a system is informed by design factors in order to successfully support the intended usability through perceived affordances [1]. The theory of 'Human Centered Design' champions that these factors be derived from the users themselves. It is by exploiting these affordances that the boundary of technology is pushed, sometimes to invent new methods and sometimes to approach a problem from a new perspective. This thesis is an example in which we derive our design rationale from children in order to develop a gender-neutral modular robotic toy kit.

    HapticHead - Augmenting Reality via Tactile Cues

    Information overload is increasingly becoming a challenge in today's world. Humans have only a limited amount of attention to allocate among sensory channels and tend to miss or misjudge critical sensory information when multiple activities are going on at the same time. For example, people may miss the sound of an approaching car when walking across the street while looking at their smartphones. Some sensory channels may also be impaired due to congenital or acquired conditions. Among sensory channels, touch is often experienced as obtrusive, especially when it occurs unexpectedly. Since tactile actuators can simulate touch, targeted tactile stimuli can provide users of virtual reality and augmented reality environments with important information for navigation, guidance, alerts, and notifications. In this dissertation, a tactile user interface around the head, called HapticHead, is presented to relieve or replace a potentially impaired visual channel. It is a high-resolution, omnidirectional, vibrotactile display that presents general, 3D directional, and distance information through dynamic tactile patterns. The head is well suited for tactile feedback because it is sensitive to mechanical stimuli and provides a large spherical surface area that enables the display of precise 3D information and allows the user to intuitively rotate the head in the direction of a stimulus based on natural mapping. Basic research on tactile perception on the head and studies on various use cases of head-based tactile feedback are presented in this thesis.
Several investigations and user studies have been conducted on (a) the funneling illusion and localization accuracy of tactile stimuli around the head, (b) the ability of people to discriminate between different tactile patterns on the head, (c) approaches to designing tactile patterns for complex arrays of actuators, (d) increasing the immersion and presence level of virtual reality applications, and (e) assisting people with visual impairments in guidance and micro-navigation. In summary, tactile feedback around the head was found to be highly valuable as an additional information channel in various application scenarios. Most notable is the navigation of visually impaired individuals through a micro-navigation obstacle course, where the system is an order of magnitude more accurate than the previous state of the art, which used a tactile belt as a feedback modality. The HapticHead tactile user interface's ability to safely navigate people with visual impairments around obstacles and on stairs, with a mean deviation from the optimal path of less than 6 cm, may ultimately improve the quality of life for many people with visual impairments.
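A core operation of a display such as HapticHead is mapping a 3D target direction onto a discrete set of actuators around the head. One plausible sketch of that mapping, using an invented six-actuator layout rather than the actual HapticHead hardware, is:

```python
import math

def nearest_actuator(direction, actuators):
    """Pick the actuator whose position on the (unit-sphere) head
    best aligns with the 3D target direction, by maximum dot
    product of the normalized vectors."""
    def norm(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)
    d = norm(direction)
    return max(actuators,
               key=lambda name: sum(a * b for a, b in
                                    zip(norm(actuators[name]), d)))

# Hypothetical 6-actuator layout on a unit sphere around the head:
layout = {"front": (1, 0, 0), "back": (-1, 0, 0),
          "left": (0, 1, 0), "right": (0, -1, 0),
          "top": (0, 0, 1), "bottom": (0, 0, -1)}
print(nearest_actuator((0.9, 0.1, 0.2), layout))  # front
```

A real high-resolution display would interpolate intensity across several neighboring actuators (exploiting the funneling illusion studied in the thesis) rather than driving only the single nearest one.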