234 research outputs found

    A two party haptic guidance controller via a hard rein

    In disaster response operations such as indoor firefighting, thick smoke, noise in the oxygen masks, and clutter not only limit the environmental perception of human responders but also cause distress. An intelligent agent (man or machine) with full environmental perceptual capabilities is an alternative to enhance navigation in such unfavorable environments. Since haptic communication is the mode of communication least affected in such cases, we consider human demonstrations in which a guider uses a hard rein to lead blindfolded followers under auditory distraction to be a good paradigm for extracting the salient features of guiding with hard reins. Based on numerical simulations and experimental system identification of demonstrations from eight pairs of human subjects, we show that the relationship between the orientation difference between the follower and the guider and the lateral swing patterns of the hard rein by the guider can be explained by a novel 3rd-order autoregressive predictive controller. Moreover, by modeling the two-party voluntary movement dynamics using a virtual damped inertial model, we were able to model the mutual trust between the two parties. In the future, the novel controller extracted from human demonstrations can be tested in a human-robot interaction scenario to guide a visually impaired person in various applications such as firefighting, search and rescue, medical surgery, etc.
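
    As a rough illustration (the symbols and coefficients below are assumptions for exposition, not taken from the paper), a 3rd-order autoregressive predictive controller of the described form relates the guider's lateral rein swing to the recent history of the orientation difference:

        \theta_{\mathrm{swing}}(t) = \sum_{k=1}^{3} a_k \, \Delta\phi(t-k) + \varepsilon(t)

    where \Delta\phi is the follower-guider orientation difference, a_1, a_2, a_3 are the identified coefficients, and \varepsilon is a residual term.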

    Human Behavioral Metrics of a Predictive Model Emerging During Robot Assisted Following Without Visual Feedback

    Robot-assisted guiding is gaining increased interest due to many applications that involve moving in noisy, low-visibility environments. In such cases, haptic feedback is the most effective medium of communication. In this paper, we focus on perturbation-based haptic feedback, motivated by applications such as guide dogs for visually impaired people and potential robotic counterparts providing haptic feedback via reins to assist indoor firefighting in thick smoke. Since proprioceptive sensors like spindles and tendons are part of the muscles involved in the perturbation, haptic perception becomes a phenomenon coupled with spontaneous reflex muscle activity. The nature of this interplay, and how model-based sensory-motor integration evolves during haptic guiding, is not yet well understood. In this study, we asked human followers to hold the handle of a hard rein attached to a 1-DoF robotic arm that perturbed the hand to correct the follower's angular error. We found that human followers start with a 2nd-order reactive autoregressive following model and change it to a predictive model with training. The post-perturbation electromyography (EMG) activity exhibited a reduction in muscle co-contraction with training, accompanied by a reduction in the leftward/rightward asymmetry of a set of the followers' behavioural metrics. These results show that model-based prediction accounts for the internal coupling between proprioception and muscle activity during perturbation responses. Furthermore, they provide a firm foundation and measurement metrics for designing and evaluating robot-assisted haptic guiding of humans in low-visibility environments.
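
    To make the identification step concrete, the following is a minimal sketch (not the authors' code; all variable names and values are assumptions) of fitting a 2nd-order autoregressive following model from perturbation-response data by least squares:

        import numpy as np

        # Minimal sketch: identify y(t) = b1*u(t-1) + b2*u(t-2) + e(t), where u stands
        # for the haptic perturbation applied by the robotic arm and y for the
        # follower's angular response. All names and values here are assumptions.
        rng = np.random.default_rng(0)
        T = 500
        u = rng.standard_normal(T)                     # hypothetical perturbation sequence
        b_true = np.array([0.8, -0.3])                 # coefficients chosen for the demo
        y = np.zeros(T)
        for t in range(2, T):
            y[t] = b_true[0] * u[t - 1] + b_true[1] * u[t - 2] + 0.05 * rng.standard_normal()

        # Regression matrix of lagged inputs; coefficients via ordinary least squares.
        X = np.column_stack([u[1:T - 1], u[0:T - 2]])  # columns: u(t-1), u(t-2) for t = 2..T-1
        Y = y[2:T]
        b_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
        print("estimated coefficients:", b_hat)        # should be close to b_true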

    Identification of Haptic Based Guiding Using Hard Reins

    This paper presents identification of human-human interaction in which one person with limited auditory and visual perception of the environment (a follower) is guided by an agent with full perceptual capabilities (a guider) via a hard rein along a given path. We investigate several characterizations of the interaction between the guider and the follower, such as computational models that map states of the follower to actions of the guider, and the computational basis on which the guider modulates the force on the rein in response to the follower's trust level. Experimental system identification based on human demonstrations shows that the guider and the follower learn optimal, stable, state-dependent novel 3rd- and 2nd-order autoregressive predictive and reactive control policies, respectively. By modeling the follower's dynamics with a time-varying virtual damped inertial system, we found that the coefficient of virtual damping is most appropriate for explaining the follower's trust level at any given time. Moreover, we demonstrate the stability of the extracted guiding policy when implemented on a planar 1-DoF robotic arm. Our findings provide a theoretical basis for designing advanced human-robot interaction algorithms applicable to a variety of situations where a human requires the assistance of a robot to perceive the environment.
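
    As a hedged sketch (symbols assumed for exposition, not taken from the paper), a time-varying virtual damped inertial model of the follower can be written as

        m \, \ddot{x}(t) + c(t) \, \dot{x}(t) = f_{\mathrm{rein}}(t)

    where x is the follower's position, f_{\mathrm{rein}} is the force transmitted through the rein, m is a virtual inertia, and the time-varying damping coefficient c(t) is the quantity reported as most appropriate for explaining the follower's trust level.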

    Human-Aware Robot Navigation by Behavioral Metrics Based on Predictive Model

    Human-aware robot navigation is very important in many applications in human-robot shared environments. In some situations, people have to move with limited visual and auditory perception. In such cases, a robot can enhance the efficiency of navigation in noisy, low-visibility conditions, and haptics is the best way to communicate when other modalities are less reliable. We used a rein through which a 1-DoF robotic arm perturbs the human's arm to guide it to a desired point. The novelty of our work is in presenting behavioural metrics, based on a novel predictive model, for strategically positioning humans in a human-robot shared environment under low-visibility and noisy conditions. We found that humans start with a second-order reactive autoregressive following model and change it to a predictive model with training. This result should help us enhance humans' safety and comfort in robot-led navigation in shared environments.

    Cooperative Navigation for Mixed Human–Robot Teams Using Haptic Feedback

    In this paper, we present a novel cooperative navigation control for human–robot teams. Assuming that a human wants to reach a final location in a large environment with the help of a mobile robot, the robot must steer the human from the initial to the target position. The challenges posed by cooperative human–robot navigation are typically addressed by using haptic feedback via physical interaction. In contrast, in this paper we describe a different approach, in which the human–robot interaction is achieved via wearable vibrotactile armbands. In the proposed work, the subject is free to decide her/his own pace. A warning vibrational signal is generated by the haptic armbands when the robot detects a large deviation from the desired pose. The proposed method has been evaluated in a large indoor environment, where 15 blindfolded human subjects were asked to follow the haptic cues provided by the robot. The participants had to reach a target area while avoiding static and dynamic obstacles. Experimental results revealed that the blindfolded subjects were able to avoid the obstacles and safely reach the target in all of the performed trials. A comparison is provided between the results obtained with blindfolded users and experiments performed with sighted people.
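
    The following is a minimal sketch of the assumed cueing logic (not the authors' implementation; the threshold, function name, and cue mapping are hypothetical): the robot compares the subject's heading with the desired heading and triggers a warning vibration on one armband only when the deviation exceeds a tolerance, leaving the subject free to choose her/his own pace.

        import math

        DEVIATION_THRESHOLD = math.radians(15)          # hypothetical tolerance on heading error

        def haptic_cue(current_heading, desired_heading):
            """Return which armband (if any) should vibrate for this heading error."""
            error = math.atan2(math.sin(desired_heading - current_heading),
                               math.cos(desired_heading - current_heading))
            if abs(error) < DEVIATION_THRESHOLD:
                return None                             # within tolerance: no warning signal
            return "left" if error > 0 else "right"     # cue a turn toward the desired heading

        print(haptic_cue(0.0, math.radians(30)))        # -> 'left'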

    Context-aware learning for robot-assisted endovascular catheterization

    Endovascular intervention has become a mainstream treatment for cardiovascular diseases. However, multiple challenges remain, such as unwanted radiation exposure, limited two-dimensional image guidance, and insufficient force perception and haptic cues. Fast-evolving robot-assisted platforms improve the stability and accuracy of instrument manipulation, and the master-slave arrangement also removes radiation exposure for the operator. However, the integration of robotic systems into the current surgical workflow is still debatable, since repetitive, easy tasks have little value when executed by robotic teleoperation. Current systems offer very little autonomy; autonomous features could bring further benefits such as reduced cognitive workload and human error, safer and more consistent instrument manipulation, and the ability to incorporate various medical imaging and sensing modalities. This research proposes frameworks for automated catheterisation based on different machine-learning algorithms, including Learning-from-Demonstration, Reinforcement Learning, and Imitation Learning. These frameworks focus on integrating task context into the skill-learning process, thereby achieving better adaptation to different situations and safer tool-tissue interactions. Furthermore, the autonomous features were applied to a next-generation, MR-safe robotic catheterisation platform. The results provide important insights into improving catheter navigation through autonomous task planning and self-optimization with clinically relevant factors, and they motivate the design of intelligent, intuitive, and collaborative robots for use under non-ionizing imaging modalities.
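
    As a minimal sketch of what "integrating context" can mean (an assumed linear formulation for illustration, not the thesis framework), a policy can be conditioned on a task-context vector c in addition to the instrument state s, and fitted to demonstrated actions a by least squares, i.e. a linear form of behavioural cloning:

        import numpy as np

        # All dimensions, names, and synthetic data below are assumptions for the demo.
        rng = np.random.default_rng(1)
        n, s_dim, c_dim, a_dim = 200, 4, 2, 2
        S = rng.standard_normal((n, s_dim))            # hypothetical catheter-tip states
        C = rng.standard_normal((n, c_dim))            # hypothetical context features
        W_true = rng.standard_normal((s_dim + c_dim, a_dim))
        A = np.hstack([S, C]) @ W_true                 # synthetic demonstrated actions

        X = np.hstack([S, C])                          # policy input: state plus context
        W_hat, *_ = np.linalg.lstsq(X, A, rcond=None)  # fit the context-conditioned policy
        print(np.hstack([S[:1], C[:1]]) @ W_hat)       # predicted action for one new (s, c) pair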

    Enhancing interaction in mixed reality

    With continuous technological innovation, we observe mixed reality emerging from research labs into the mainstream. The arrival of capable mixed reality devices transforms how we are entertained, consume information, and interact with computing systems, with the most recent being able to present synthesized stimuli to any of the human senses and substantially blur the boundaries between the real and virtual worlds. In order to build expressive and practical mixed reality experiences, designers, developers, and stakeholders need to understand and meet the upcoming challenges of mixed reality. This research contributes a novel taxonomy for categorizing mixed reality experiences and guidelines for designing them. We present the results of seven studies examining the challenges and opportunities of mixed reality experiences, the impact of modalities and interaction techniques on the user experience, and how to enhance the experiences. We begin with a study determining user attitudes towards mixed reality in domestic and educational environments, followed by six research probes that each investigate an aspect of reality or virtuality. In the first, a levitating steerable projector enables us to investigate how the real world can be enhanced without instrumenting the user. We show that the presentation of in-situ instructions for navigational tasks leads to a significantly higher ability to observe and recall real-world landmarks. With the second probe, we enhance the perception of reality by superimposing information usually not visible to the human eye. By amplifying human vision, we enable users to perceive thermal radiation visually. Further, we examine the effect of substituting physical components with non-functional tangible proxies or entirely virtual representations. With the third research probe, we explore how to enhance virtuality to enable a user to input text on a physical keyboard while being immersed in the virtual world. Our prototype tracked the user's hands and keyboard to enable generic text input. Our analysis of text entry performance showed the importance and effect of different hand representations. We then investigate how to touch virtuality by simulating generic haptic feedback for virtual reality and show how tactile feedback through quadcopters can significantly increase the sense of presence. Our final research probe investigates the usability and input space of smartphones within mixed reality environments, pairing the user's smartphone as an input device with a secondary physical screen. Based on our learnings from these individual research probes, we developed a novel taxonomy for categorizing mixed reality experiences and guidelines for designing mixed reality experiences. The taxonomy is based on the human sensory system and human capabilities of articulation. We showcased its versatility and set our research probes into perspective by organizing them inside the taxonomic space. The design guidelines are divided into user-centered and technology-centered guidelines. It is our hope that these will contribute to the bright future of mixed reality systems while emphasizing the new underlying interaction paradigm.

    The potential of haptic interfaces for urban cyclists in China

    This thesis is the culmination of a two-year investigation into revolutionary mobile communication interface designs for cyclists in large Chinese cities. The research came about as a result of my interest in the growing trend of cyclists making phone calls while cycling in China. There is current discussion in China about whether legislation would be a good option for controlling mobile phone use while cycling. My analysis of website articles indicates, however, that fining cyclists for making mobile phone calls while cycling would be ineffective. In this sense, my research takes up that view: I hope to demonstrate that the problem can be addressed through product design rather than through stricter laws and changes to legislation. The project looks at why and how the phenomenon of cyclists making phone calls arose in modern China; what the implications, hidden problems, and potential opportunities are for the existing system; and how these problems can be addressed through design, with a view towards creating a better interface for cyclists to interact with other people and the traffic system while cycling in urban areas. The final design scenarios illustrate how cyclists can interact with these systems and stay connected with others, and how and why haptic interfaces can contribute to cyclists' safety in the broader traffic situation in China.