    Empowering and assisting natural human mobility: The simbiosis walker

    This paper presents the complete development of the Simbiosis Smart Walker. The device is equipped with a set of sensor subsystems to acquire user-machine interaction forces and the temporal evolution of the user's feet during gait. The authors present an adaptive filtering technique used to identify and separate the different components found in the human-machine interaction forces. This technique allowed the components related to navigational commands to be isolated and a fuzzy logic controller to be developed to guide the device. The Smart Walker was clinically validated at the Spinal Cord Injury Hospital of Toledo, Spain, where it was well accepted by spinal cord injury patients and clinical staff.
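
    The abstract names two algorithmic ingredients: an adaptive filter that separates slowly varying steering intent from gait-induced oscillations in the handle forces, and a fuzzy logic controller that turns the filtered forces into guidance commands. Below is a minimal Python sketch of both ideas; the LMS filter structure, the triangular memberships, and all constants and names are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def lms_cancel(force, gait_ref, mu=0.01, taps=8):
    """Adaptive LMS filter: estimate the gait-correlated component of the
    handle force from a cadence reference (e.g., the feet sensors) and
    subtract it, leaving the slowly varying steering intent.
    Hypothetical sketch, not the paper's exact filter."""
    w = np.zeros(taps)
    intent = np.zeros_like(force)
    for n in range(taps, len(force)):
        x = gait_ref[n - taps:n][::-1]   # most recent reference samples
        e = force[n] - w @ x             # residual after gait cancellation
        w += 2 * mu * e * x              # LMS weight update
        intent[n] = e
    return intent

def fuzzy_turn_rate(f_left, f_right):
    """Toy fuzzy rule base: map the left/right handle force difference to
    a turn-rate command using triangular memberships for
    {turn left, go straight, turn right} and centroid defuzzification."""
    d = f_right - f_left                       # newtons
    left = min(max(-d / 10.0, 0.0), 1.0)
    right = min(max(d / 10.0, 0.0), 1.0)
    straight = max(1.0 - left - right, 0.0)
    return (-0.5 * left + 0.5 * right) / (left + straight + right)  # rad/s
```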

    Touch- and Walkable Virtual Reality to Support Blind and Visually Impaired People's Building Exploration in the Context of Orientation and Mobility

    Access to digital content and information is becoming increasingly important for successful participation in today's increasingly digitized civil society. Such information is mostly presented visually, which restricts access for blind and visually impaired people. The most fundamental barrier is often basic orientation and mobility (and, consequently, social mobility), including gaining knowledge about unknown buildings before visiting them. To bridge such barriers, technological aids should be developed and deployed. A trade-off is needed between technologically low-threshold, accessible, and disseminable aids and interactive-adaptive but complex systems. The adaptation of virtual reality (VR) technology spans a wide range of development and design decisions. The main benefits of VR technology are increased interactivity, updatability, and the possibility to explore virtual spaces as proxies of real ones without real-world hazards or dependence on the limited availability of sighted assistants. However, virtual objects and environments have no physicality. This thesis therefore investigates which VR interaction forms are reasonable (i.e., offer adequate dissemination potential) for making virtual representations of real buildings touchable or walkable in the context of orientation and mobility. Although developments and evaluations of VR technology exist, they are disjoint in content and technology, and empirical evidence is lacking. This thesis additionally provides a survey of the different interactions. After considering human physiology, assistive media (e.g., tactile maps), and technological characteristics, the current state of the art of VR is introduced, and its application for blind and visually impaired users, and the way to get there, is discussed by introducing a novel taxonomy. In addition to the interaction itself, characteristics of the user and the device, the application context, and user-centered development and evaluation are used as classifiers. The following chapters are motivated by explorative approaches in the 'small scale' domain (using so-called data gloves) and the 'large scale' domain (using avatar-controlled VR locomotion). They present empirical studies with blind and visually impaired users and give formative insight into how virtual objects within hands' reach can be grasped with haptic feedback and how different kinds of VR locomotion can be applied to explore virtual environments. From these, device-independent technological possibilities, as well as challenges for further improvement, are derived. On the basis of this knowledge, subsequent research can focus on aspects such as the specific design of interactive elements, temporally and spatially collaborative application scenarios, and the evaluation of an entire application workflow (i.e., scanning the real environment and exploring it virtually for training purposes, as well as designing the entire application in a long-term accessible manner).

    Report of the Terrestrial Bodies Science Working Group. Volume 9: Complementary research and development

    Topics discussed include the need for: the conception and development of a wide spectrum of experiments, instruments, and vehicles in order to derive the proper return from an exploration program; the effective use of alternative methods of data acquisition involving ground-based, airborne, and near-Earth orbital techniques to supplement spacecraft missions; and continued reduction and analysis of existing data, including laboratory and theoretical studies, in order to benefit fully from experiments and to build on past programs toward a logical and efficient exploration of the solar system.

    Sample-Efficient Training of Robotic Guide Using Human Path Prediction Network

    Training a robot that engages with people is challenging, because involving people in a training process that requires numerous data samples is expensive. This paper proposes a human path prediction network (HPPN) and an evolution strategy-based robot training method that uses virtual human movements generated by the HPPN to compensate for this sample inefficiency. We applied the proposed method to the training of a robotic guide for visually impaired people, designed to collect multimodal human response data and to reflect such data when selecting the robot's actions. We collected 1,507 real-world episodes for training the HPPN and then generated over 100,000 virtual episodes for training the robot policy. User test results indicate that our trained robot accurately guides blindfolded participants along a goal path. In addition, because the reward was designed to pursue both guidance accuracy and human comfort during policy training, our robot improves the smoothness of human motion while maintaining the accuracy of the guidance. This sample-efficient training method is expected to be widely applicable to all robots and computing machinery that physically interact with humans.
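
    As a rough illustration of the pipeline the abstract describes (learn a human path predictor from real episodes, then train the policy by evolution strategies on virtual rollouts), here is a minimal sketch. The linear stand-ins for the HPPN and the policy, the reward weights, and every name are assumptions for illustration; the paper's HPPN is a learned neural network and its reward is designed around guidance accuracy and comfort.

```python
import numpy as np

rng = np.random.default_rng(0)

def hppn_step(hppn_w, action, human):
    """Stand-in for the human path prediction network (HPPN): predicts the
    next human position from the robot action and current human state.
    Here a linear map; in the paper, a network trained on real episodes."""
    return human + hppn_w @ np.concatenate([action, human])

def virtual_rollout(policy, hppn_w, steps=50):
    """One virtual episode: the HPPN replaces a real participant, so the
    policy can be trained on cheap synthetic interactions."""
    human, goal, prev_v = np.zeros(2), np.array([5.0, 0.0]), np.zeros(2)
    total = 0.0
    for _ in range(steps):
        action = policy @ np.concatenate([human, goal - human])
        nxt = hppn_step(hppn_w, action, human)
        v = nxt - human
        # reward: progress toward the goal minus a comfort (jerk) penalty
        total += -np.linalg.norm(goal - nxt) - 0.5 * np.linalg.norm(v - prev_v)
        human, prev_v = nxt, v
    return total

def es_train(hppn_w, iters=200, pop=32, sigma=0.1, lr=0.02):
    """Plain evolution-strategies loop over virtual episodes."""
    policy = np.zeros((2, 4))
    for _ in range(iters):
        eps = rng.standard_normal((pop, *policy.shape))
        r = np.array([virtual_rollout(policy + sigma * e, hppn_w) for e in eps])
        adv = (r - r.mean()) / (r.std() + 1e-8)
        policy += lr / (pop * sigma) * np.tensordot(adv, eps, axes=1)
    return policy
```

    In practice, hppn_w would first be fitted to the real-world episodes by supervised learning; only then would the ES loop train the policy entirely on virtual rollouts.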

    Soft Robotics - The Next Industrial Revolution?

    Haptic Interaction with a Guide Robot in Zero Visibility

    Search and rescue operations are often undertaken in dark and noisy environments in which the rescue team must rely on haptic feedback for exploration and safe exit. However, little attention has been paid specifically to haptic sensitivity in such contexts, or to the possibility of enhancing communicational proficiency in the haptic mode as a life-preserving measure. The potential of robot swarms for search and rescue was shown by the Guardians project (EU, 2006-2010); however, the project also revealed the problem of human-robot interaction in smoky (zero-visibility) and noisy conditions. The REINS project (UK, 2011-2015) focused on human-robot interaction in such conditions. This research, a body of work done as part of the REINS project, investigates the haptic interaction of a person with a guide robot in zero visibility. The thesis first reflects upon real-world scenarios in which people make use of the haptic sense to interact in zero visibility (such as interaction among firefighters and the symbiotic relationship between visually impaired people and guide dogs). In addition, it reflects on the sensitivity and trainability of the haptic sense as used for this interaction. The thesis presents an analysis and evaluation of the design of a physical interface (designed by the REINS project consortium) connecting the human and the robotic guide in poor visibility conditions. Finally, it lays a foundation for the design of test cases to evaluate human-robot haptic interaction, taking into consideration the two aspects of the interaction, namely locomotion guidance and environmental exploration.

    Modeling Three-Dimensional Interaction Tasks for Desktop Virtual Reality

    A virtual environment is an interactive, head-referenced computer display that gives a user the illusion of presence in real or imaginary worlds. The two most significant differences between a virtual environment and a more traditional interactive 3D computer graphics system are the extent of the user's sense of presence and the level of user participation that can be obtained in the virtual environment. Over the years, advances in computer display hardware and software have substantially progressed the realism of computer-generated images, which dramatically enhanced the user's sense of presence in virtual environments. Unfortunately, comparable progress in the user's interaction with virtual environments has not been observed. The scope of the thesis lies in the study of human-computer interaction that occurs in a desktop virtual environment. The objective is to develop and verify 3D interaction models that can be used to quantitatively describe users' performance in 3D pointing, steering, and object pursuit tasks, and, through analysis of the interaction models and experimental results, to gain a better understanding of users' movements in the virtual environment. The approach applied throughout the thesis is a modeling methodology composed of three procedures: identifying the variables involved in modeling a 3D interaction task; formulating and verifying the interaction model through user studies and statistical analysis; and applying the model to the evaluation of interaction techniques and input devices and to gaining insight into users' movements in the virtual environment. In the study of 3D pointing tasks, a two-component model is used to break the tasks into a ballistic phase and a correction phase, and a comparison is made between the real-world and virtual-world tasks in each phase. The results indicate that temporal differences arise in both phases, but the difference is significantly greater in the correction phase. This finding inspired the design of a methodology combining the two-component model with Fitts' law, which decomposes a pointing task into the ballistic and correction phases and decreases the index of difficulty of the task during the correction phase. The methodology allows for the development and evaluation of interaction techniques for 3D pointing tasks. For 3D steering tasks, the steering law, originally proposed to model 2D steering tasks, is adapted to 3D tasks by introducing three additional variables: path curvature, orientation, and haptic feedback. The new model suggests that a 3D ball-and-tunnel steering movement consists of a series of small, jerky sub-movements similar to the ballistic/correction movements observed in pointing. A new interaction model, based on Stevens' power law, is proposed and empirically verified for 3D object pursuit tasks. The results indicate that the power law can be used to model all three common interaction tasks, which suggests it may serve as a general law for modeling interaction tasks and also provides a way to quantitatively compare the tasks.
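
    For reference, the three laws the abstract builds on have these standard textbook forms (the thesis's 3D variants add variables such as path curvature, orientation, and haptic feedback, which are not reproduced here):

```latex
\begin{align}
  MT &= a + b \log_2\!\left(\tfrac{D}{W} + 1\right)
     && \text{Fitts' law (pointing): distance } D,\ \text{target width } W \\
  MT &= a + b \int_{C} \frac{ds}{W(s)}
     && \text{steering law: path } C \text{ with local width } W(s) \\
  \psi &= k\,\varphi^{\gamma}
     && \text{Stevens' power law: sensation } \psi \text{ vs. stimulus intensity } \varphi
\end{align}
```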