3 research outputs found

    Modeling Human-Robot Interaction in Three Dimensions

    This dissertation answers the question: can a small autonomous UAV change a person's movements by emulating animal behaviors? Human-robot interaction (HRI) has generally been limited to engagements with ground robots at human height or shorter, essentially working on the same two-dimensional plane, but this ignores potential interactions where the robot may be above the human, such as small unmanned aerial vehicles (sUAVs) for crowd control and evacuation, or underwater and space vehicles acting as assistants for divers or astronauts. The dissertation combines two approaches, behavioral robotics and HRI, to create a model of "Comfortable Distance" containing the information about human-human and human-ground robot interactions and extends it to three dimensions. Behavioral robotics guides the examination and transfer of relevant behaviors from animals, most notably mammals, birds, and flying insects, into a computational model that can be programmed in simulation and on a sUAV. The validated model of proxemics in three dimensions makes a fundamental contribution to human-robot interaction. The results also have significant benefit to the public safety community, leading to more effective evacuation and crowd control, and possibly saving lives. Three findings from this experiment were important with regard to sUAVs for evacuation: i) expressions focusing on the person, rather than the area, are good for decreasing time (by 7.5 seconds, p < .0001) and preference (by 17.4%, p < .0001), ii) personal defense behaviors are best for decreasing time of interaction (by about 4 seconds, p < .004), while site defense behaviors are best for increasing distance of interaction (by about 0.5 m, p < .003), and iii) Hediger's animal zones may be more applicable than Hall's human social zones when considering interactions with animal behaviors in sUAVs.
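    As a rough illustration of how a zone model like this might be programmed, the sketch below classifies a 3D robot-to-person offset against Hall-style distance bands. The zone boundaries, function names, and example values are hypothetical illustrations, not the dissertation's validated model, which fits Hediger-style animal zones empirically.

        import math

        # Hypothetical zone boundaries (metres), loosely modeled on Hall's
        # human social zones; the dissertation's 3D model and Hediger-style
        # animal zones would use different, empirically fit values.
        ZONES = [
            (0.45, "intimate"),
            (1.2,  "personal"),
            (3.6,  "social"),
            (7.6,  "public"),
        ]

        def comfort_zone(person_xyz, robot_xyz):
            """Classify the straight-line 3D distance between a person and
            a sUAV into a proxemic zone. Unlike 2D ground-robot proxemics,
            the vertical (z) offset contributes to the distance."""
            dx, dy, dz = (r - p for p, r in zip(person_xyz, robot_xyz))
            dist = math.sqrt(dx*dx + dy*dy + dz*dz)
            for limit, name in ZONES:
                if dist <= limit:
                    return name, dist
            return "beyond public", dist

        # A sUAV hovering 2 m above and 1 m ahead of a person at the origin:
        zone, d = comfort_zone((0, 0, 0), (1.0, 0.0, 2.0))
        print(f"{d:.2f} m -> {zone} zone")  # 2.24 m -> social zone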

    Optimising Outcomes of Human-Agent Collaboration using Trust Calibration

    As collaborative agents are implemented within everyday environments and the workforce, user trust in these agents becomes critical to consider. Trust affects user decision making, rendering it an essential component to consider when designing for successful Human-Agent Collaboration (HAC). The purpose of this work is to investigate the relationship between user trust and decision making, with the overall aim of providing a trust calibration methodology to achieve the goals and optimise the outcomes of HAC. Recommender systems are used as a testbed for investigation, offering insight into human collaboration in dyadic decision domains. Four studies are conducted, including in-person, online, and simulation experiments. The first study provides evidence of a relationship between user perception of a collaborative agent and trust. Outcomes of the second study demonstrate that initial trust can be used to predict task outcome during HAC, with Signal Detection Theory (SDT) introduced as a method to interpret user decision making in-task. The third study provides evidence to suggest that the implementation of different features within a single agent's interface influences user perception and trust, subsequently impacting outcomes of HAC. Finally, a computational trust calibration methodology harnessing a Partially Observable Markov Decision Process (POMDP) model and SDT is presented and assessed, providing an improved understanding of the mechanisms governing user trust and its relationship with decision making and collaborative task performance during HAC. The contributions from this work address important gaps within the HAC literature. The implications of the proposed methodology and its application to alternative domains are identified and discussed.
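    As a hedged sketch of how SDT can be applied to in-task decisions (the abstract names SDT but not this exact formulation, and the data values below are invented for illustration), the snippet computes the standard equal-variance sensitivity (d') and bias (c) statistics from a user's responses to agent recommendations:

        from statistics import NormalDist

        def sdt_measures(hits, misses, false_alarms, correct_rejections):
            """Equal-variance SDT statistics: sensitivity d' and response
            bias c, with a log-linear correction so that perfect (0 or 1)
            rates do not produce infinite z-scores."""
            z = NormalDist().inv_cdf
            hit_rate = (hits + 0.5) / (hits + misses + 1)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
            d_prime = z(hit_rate) - z(fa_rate)
            criterion = -0.5 * (z(hit_rate) + z(fa_rate))
            return d_prime, criterion

        # Illustrative numbers only: a user accepted 38 of 50 good
        # recommendations (hits) and 9 of 50 poor ones (false alarms).
        d, c = sdt_measures(hits=38, misses=12, false_alarms=9, correct_rejections=41)
        print(f"d' = {d:.2f}, criterion c = {c:.2f}")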

    Trust in Emergency Evacuation Robots

    ©2012 IEEE. Presented at the 10th IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR 2012), College Station, TX, Nov. 2012. DOI: 10.1109/SSRR.2012.6523903

    Would you trust a robot to lead you to safety in an emergency? What design would best attract your attention in a smoke-filled environment? How should the robot behave to best increase your trust? To answer these questions, we have created a three-dimensional environment to simulate an emergency and determine to what degree an individual will follow a robot to a variety of exits. Survey feedback and quantitative scenario results were gathered on two different robot designs. Fifteen volunteers completed a total of seven scenarios each: one without a robot and one with each robot pointing to each of three exits in the environment. Robots were followed by each volunteer in at least two scenarios. One-third of all volunteers followed the robot in each robot-guided scenario.
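    A small sketch of how the seven-scenario design described above can be enumerated (the robot and exit names are assumed for illustration; the paper does not publish code):

        from itertools import product

        robots = ["robot_A", "robot_B"]          # the two robot designs (names assumed)
        exits = ["exit_1", "exit_2", "exit_3"]   # the three exits in the environment

        # One baseline run without a robot, plus one run for each
        # robot/exit pairing: 1 + 2 * 3 = 7 scenarios per volunteer.
        scenarios = [("no_robot", None)] + list(product(robots, exits))
        assert len(scenarios) == 7
        print(scenarios)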