
    Would You Trust a (Faulty) Robot? : Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust

    How do mistakes made by a robot affect its trustworthiness and acceptance in human-robot collaboration? We investigate how the perception of erroneous robot behavior may influence human interaction choices and the willingness to cooperate with the robot by following a number of its unusual requests. For this purpose, we conducted an experiment in which participants interacted with a home companion robot in one of two experimental conditions: (1) the correct mode or (2) the faulty mode. Our findings reveal that, while significantly affecting subjective perceptions of the robot and assessments of its reliability and trustworthiness, the robot's performance does not seem to substantially influence participants' decisions to (not) comply with its requests. However, our results further suggest that the nature of the task requested by the robot, e.g. whether its effects are revocable as opposed to irrevocable, has a significant impact on participants' willingness to follow its instructions.

    Investigating Trust and Trust Recovery in Human-Robot Interactions

    As artificial intelligence and robotics continue to advance and be used in increasingly varied functions and situations, it is important to look at how these new technologies will be used. An important factor in how a new resource will be used is how much it is trusted. This experiment was conducted to examine people’s trust in a robotic assistant when completing a task, how mistakes affect this trust, and whether the levels of trust exhibited with a robot assistant differed significantly from those with a human assistant. The task was to watch a computer simulation of the three-cup monte or shell game, where the assistant would give advice and the participant could choose to follow, ignore, or go against the advice. The hypothesis was that participants would have higher levels of trust in the robotic assistant than in the human, but that mistakes would have a larger impact on trust levels. The study found that while there was not a significant difference between the overall levels of trust in the robotic assistant and the human one, mistakes did have a significantly larger impact on short-term trust levels for the robotic assistant versus the human.

    A Need for Trust in Conversational Interface Research

    Across several branches of conversational interaction research, including interactions with social robots, embodied agents, and conversational assistants, users have identified trust as a critical part of those interactions. Nevertheless, there is little agreement on what trust means within these sorts of interactions or how trust can be measured. In this paper, we explore some of the dimensions of trust as it has been understood in previous work, and we outline some of the ways trust has been measured, in the hope of furthering discussion of the concept across the field.

    What a pity, Pepper! How warmth in robot's language impacts reactions to errors during a collaborative task

    Hoffmann L, Derksen M, Kopp S. What a pity, Pepper! How warmth in robot's language impacts reactions to errors during a collaborative task. In: HRI '20 Companion. ACM; 2020.

    Human-Robot Interactions: Insights from Experimental and Evolutionary Social Sciences

    Experimental research in the realm of human-robot interactions has focused on the behavioral and psychological influences affecting human interaction and cooperation with robots. A robot is loosely defined as a device designed to perform agentic tasks autonomously or under remote control, often replicating or assisting human actions. Robots can vary widely in form, ranging from simple assembly line machines performing repetitive actions to advanced systems with no moving parts but with artificial intelligence (AI) capable of learning, problem-solving, communicating, and adapting to diverse environments and human interactions. Applications of experimental human-robot interaction research include the design, development, and implementation of robotic technologies that better align with human preferences, behaviors, and societal needs. As such, a central goal of experimental research on human-robot interactions is to better understand how trust is developed and maintained. A number of studies suggest that humans trust and act toward robots as they do towards humans, applying social norms and inferring agentic intent (Rai and Diermeier, 2015). While many robots are harmless and even helpful, some robots may reduce their human partner’s wages, security, or welfare and should not be trusted (Taddeo, McCutcheon and Floridi, 2019; Acemoglu and Restrepo, 2020; Alekseev, 2020). For example, more than half of all internet traffic is generated by bots, the majority of which are 'bad bots' (Imperva, 2016). Despite the hazards, robotic technologies are already transforming our everyday lives and finding their way into important domains such as healthcare, transportation, manufacturing, customer service, education, and disaster relief (Meyerson et al., 2023).

    The Right to a Fair Trial in Automated Civil Proceedings

    Challenges associated with the use of artificial intelligence (AI) in law are one of the most hotly debated issues today. This paper draws attention to the question of how to safeguard the right to a fair trial in the light of rapidly changing technologies significantly affecting the judiciary and enabling automation of civil procedure. The paper does not intend to comprehensively address all aspects related to the right to a fair trial in the context of the automation of civil proceedings, but rather seeks to analyse some legal concerns from the perspective of Article 6 of the European Convention on Human Rights and the case-law of the European Court of Human Rights. Section 1 discusses the use of artificial intelligence in the justice system and the automation of judicial proceedings. Section 2 is devoted to the judge-supporting system based on artificial intelligence and the psychological requirements of its practical use. Section 3 presents the right to a fair trial in civil cases established by Article 6 of the European Convention on Human Rights, while subsequent sections characterize its elements with respect to the possibility of automating civil proceedings: the right to have a case heard within a reasonable time in section 4, and the right to a reasoned judgment in section 5.

    AN REPRODUCTION APTITUDE TRICK FOR SMART LEARNING METHOD

    Using sensors at both a physical and a semantic level offers the chance to use temporal constraints. This idea is extended by requiring that the robot learn about the user's activities and subsequently be able to exploit this information in later teaching episodes. The notion of co-learning in this context refers to the situation in which a human user and a robot interact to achieve a particular goal. Possible enhancements to such facilities are to use both inductive and predictive mechanisms to improve the reliability of the robot's recognition of user activities. In the present paper, we provide the home resident with an interface for teaching robot behaviors based on previously learnt activities, using Quinlan's C4.5 rule induction system. The participants did not, however, agree as strongly on whether the robot should be completely set up by someone else, with a wider range of responses from the participants. The resulting robot behavior rules are also based on a production rule approach. The sensor system provides a standardized way of encoding information and offers options for connecting semantic sensors with other, typically external, events. Consider that the person has indicated to the robot that he or she is "preparing food" and at some point has also indicated that he or she is "using the toaster." Once the robot learns the set of physical activities associated with these tasks, it should be able to recognize them when they occur later. In these studies the robot operated mainly as a cognitive prosthetic.
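    The abstract's approach of inducing IF-THEN activity rules from sensor data can be illustrated with a minimal sketch in the spirit of Quinlan's C4.5: pick the attribute with the highest information gain and emit one production rule per attribute value. This is not the paper's implementation; the sensor names, activity labels, and single-level split below are hypothetical simplifications (full C4.5 builds a complete pruned tree).

```python
# Minimal information-gain rule induction sketch (C4.5-style attribute
# selection, single split level). All sensor/activity names are invented.
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Reduction in label entropy from splitting on one attribute."""
    n = len(rows)
    split = {}
    for row, label in zip(rows, labels):
        split.setdefault(row[attr], []).append(label)
    remainder = sum(len(s) / n * entropy(s) for s in split.values())
    return entropy(labels) - remainder

def induce_rules(rows, labels):
    """Choose the best attribute and emit one production rule per value."""
    best = max(rows[0].keys(), key=lambda a: information_gain(rows, labels, a))
    by_value = {}
    for row, label in zip(rows, labels):
        by_value.setdefault(row[best], Counter())[label] += 1
    # Map (attribute, value) -> majority activity label for that value.
    return {(best, value): counts.most_common(1)[0][0]
            for value, counts in by_value.items()}

# Hypothetical sensor log: which appliance sensors fired during an activity.
rows = [
    {"toaster": 1, "kettle": 0, "tv": 0},
    {"toaster": 1, "kettle": 1, "tv": 0},
    {"toaster": 0, "kettle": 0, "tv": 1},
    {"toaster": 0, "kettle": 0, "tv": 1},
]
labels = ["preparing food", "preparing food", "relaxing", "relaxing"]

rules = induce_rules(rows, labels)
for (attr, value), activity in rules.items():
    print(f"IF {attr} == {value} THEN activity = {activity}")
```

    On this toy log the toaster sensor perfectly separates the two activities, so the induced rules predict "preparing food" whenever the toaster fires, matching the toaster example in the abstract.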

    Robot Behavior Architecture Based on Smart Resource Service Execution

    Robot behavior definition aims to classify and specify robot task execution. Behavior architecture design is crucial for proper robot operation. Accordingly, this work establishes a robot behavior architecture based on distributed intelligent services: behavior is defined at a high level, delegating task execution to distributed services provided by network abstractions characterized as Smart Resources. To measure the performance of this architecture, an evaluation mechanism based on service performance composition is introduced. To test this proposal, a real use case is designed, implementing the proposed robot behavior architecture in a real navigation task.

    Work supported by the Spanish Science and Innovation Ministry MICINN: CICYT project M2C2: Codiseño de sistemas de control con criticidad mixta basado en misiones TIN2014-56158-C4-4-P and PAID (Polytechnic University of Valencia): UPV-PAID-FPI-2013.

    Munera-Sánchez, E.; Poza-Lujan, J.; Posadas-Yagüe, J.; Simó Ten, JE.; Blanes Noguera, F. (2017). Robot Behavior Architecture Based on Smart Resource Service Execution. International Journal of Soft Computing And Artificial Intelligence (Online). 5(1):55-60. http://hdl.handle.net/10251/152272