
    Beyond human-likeness: Socialness is more influential when attributing mental states to robots

    We sought to replicate and expand previous work showing that the more human-like a robot appears, the more willing people are to attribute mind-like capabilities to it and to engage with it socially. Forty-two participants played games against a human, a humanoid robot, a mechanoid robot, and a computer algorithm while undergoing functional neuroimaging. We confirmed that the more human-like the agent, the more participants attributed a mind to it. However, exploratory analyses revealed that the perceived socialness of an agent appeared to be as important, if not more so, for mind attribution. Our findings suggest that top-down knowledge cues may be equally or even more influential than bottom-up stimulus cues when exploring mind attribution in non-human agents. While further work is now required to test this hypothesis directly, these preliminary findings hold important implications for robot design and for understanding and testing the flexibility of human social cognition when people engage with artificial agents.

    Modeling the Consumer Acceptance of Retail Service Robots

    This study uses the Computers Are Social Actors (CASA) and domestication theories as the underlying framework of an acceptance model of retail service robots (RSRs). The model illustrates the relationships among facilitators, attitudes toward Human-Robot Interaction (HRI), anxiety toward robots, anticipated service quality, and the acceptance of RSRs. Specifically, the researcher investigates the extent to which the facilitators of usefulness, social capability, and the appearance of RSRs, together with attitudes toward HRI, affect acceptance and increase the anticipation of service quality. The researcher also tests the inhibiting role of pre-existing anxiety toward robots on the relationship between these facilitators and attitudes toward HRI. The study uses four methodological strategies: (1) a focus group and personal interviews, (2) video-clip stimuli as the presentation method, (3) empirical data collection with multigroup SEM analyses, and (4) three key product categories for the model's generalization: fashion, technology (mobile phone), and food service (restaurant). The researcher conducts two pretests to check the survey items and to select the video clips, followed by a main test using an online survey of US consumer panelists (n = 1,424) from a marketing agency. The results show that usefulness, social capability, and the appearance of an RSR positively influence attitudes toward HRI. Attitudes toward HRI predict greater anticipated service quality and greater acceptance of RSRs, and the expected service quality in turn tends to enhance acceptance. The relationship between social capability and attitudes toward HRI is weaker when anxiety toward robots is higher; however, when anxiety is higher, the relationship between appearance and attitudes toward HRI is stronger than when anxiety is low. This study contributes to the literature on the CASA and domestication theories and to research on human-computer interaction involving robots or artificial intelligence. By considering social capability, humanness, intelligence, and the appearance of robots, this model of RSR acceptance can provide new insights into the psychological, social, and behavioral principles that guide the commercialization of robots. Further, this acceptance model could help retailers and marketers formulate strategies for effective HRI and RSR adoption in their businesses.
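
    As a rough illustration only (the abstract does not specify the study's actual software or variable names), the sketch below encodes the hypothesized structural paths in lavaan-style syntax using the Python package semopy, with a simple median split on anxiety standing in for the multigroup comparison. All column names (usefulness, social_capability, appearance, att_hri, service_quality, acceptance, anxiety) are assumed placeholders, not the study's measures.

    ```python
    # Illustrative sketch of the hypothesized acceptance model; variable
    # names are placeholders, not the study's actual survey items.
    import pandas as pd
    from semopy import Model

    PATHS = """
    # Facilitators -> attitudes toward HRI
    att_hri ~ usefulness + social_capability + appearance
    # Attitudes -> anticipated service quality and acceptance
    service_quality ~ att_hri
    acceptance ~ att_hri + service_quality
    """

    def fit_by_anxiety_group(df: pd.DataFrame) -> dict:
        """Crude multigroup comparison: fit the same path model separately
        for low- and high-anxiety respondents and return their estimates."""
        results = {}
        for is_high, group in df.groupby(df["anxiety"] > df["anxiety"].median()):
            model = Model(PATHS)
            model.fit(group)
            results["high_anxiety" if is_high else "low_anxiety"] = model.inspect()
        return results
    ```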

    Some Agents Are More Similar than Others: Customer Orientation of Frontline Robots and Employees

    Purpose: The impact of frontline robots (FLRs) on customer orientation perceptions remains unclear. This is remarkable because customers may associate FLRs with standardization and cost-cutting, such that they may not fit firms that aim to be customer oriented. Design/methodology/approach: In four experiments, data are collected from customers interacting with frontline employees (FLEs) and FLRs in different settings. Findings: FLEs are perceived as more customer-oriented than FLRs due to higher competence and warmth evaluations. A relational interaction style attenuates the difference in perceived competence between FLRs and FLEs. These agents are also perceived as more similar in competence and warmth when FLRs participate in the information and negotiation stages of the customer journey. Switching from an FLE to an FLR within the journey harms FLR evaluations. Practical implications: The authors recommend that firms place FLRs only in the negotiation stage or in both the information and negotiation stages of the customer journey. Even then, customers should not transition from employees to robots (the reverse does no harm). Firms should ensure that FLRs use a relational style when interacting with customers for optimal effects. Originality/value: The authors bridge the FLR and sales/marketing literature by drawing on social cognition theory. The authors also identify the product categories for which customers are willing to negotiate with an FLR. Broadly speaking, this study's findings underline that customers perceive robots as having agency (i.e., the mental capacity to act with intentionality) and, just like humans, as capable of being customer-oriented.

    Challenging Robot Morality: An Ethical Debate on Humanoid Companions, Dataveillance, and Algorithms

    In this thesis, I reflect on ethical, moral, and agenthood debates around social and humanoid robots in two ways. I focus on how the technological agency of social robots is understood in ethical canons by shifting from moral concerns in Robot Ethics to data-related ethical concerns in Media and Surveillance Studies. I then move to wider debates on morality, agenthood, and agencies in Machine and Computer Ethics research, so as to highlight that social robots, other robots, machines, and algorithmic structures are often moralised but not understood ethically. In that vein, I distinguish between these two terms to point to a wider critique of the anthropocentric and anthropomorphic tendency in ethical streams to view technology from a morality-aligned standpoint. I undertake a critical survey of current ethical streams and, in doing so, establish a transdisciplinary ethical discussion around social robots and algorithmic agencies. I undertake this research in two steps. First, I look at the use of humanoid social robots in elderly care, as discussed in Robot Ethics, and expand it with a view from Media and Surveillance Studies on data concerns around robots. I examine the social robot and the allocation of its ethical and moral agency as an anthropomorphised humanoid companion, a data-tracking device, and a Posthumanist ethical network of agencies. This is done to amplify the ethical concerns around its pseudo-agenthood and its potential role as a dataveillance device. Next, I move on to streams in the Philosophy of Technology (POT) and Machine/Computer Ethics, where I discuss concepts of machinic moral agency in digital systems. As I pass from the social robot as a humanoid pseudo-agent to moralised algorithmic structures, I lay out wider conflicts in morality research streams. Specifically, I address their epistemological simplification and reduction of moral norms to digital code, as well as the increasing dissolution of accountable agenthood within algorithmic systems. Through this transdisciplinary investigation of techno-ethical and techno-moral canons and their agency models, I argue for a holistic ethics that, first, gives greater focus to human agent accountability and moral concerns in the application of robots and, second, negotiates new moral or social norms around the use of robots and digital media structures. This is aligned with increasing concerns around the growing commodification of health data and the lack of transparency on data ownership and privacy infringement.

    The Role of Accounts and Apologies in Mitigating Blame toward Human and Machine Agents

    Would you trust a machine to make life-or-death decisions about your health and safety? Machines today are capable of achieving much more than they could 30 years ago, and the same will be said of the machines that exist 30 years from now. The rise of intelligence in machines has led humans to entrust them with ever-increasing responsibility. With this has arisen the question of whether machines should be given responsibility equal to that of humans, and whether humans will ever perceive machines as accountable for such responsibility. For example, if an intelligent machine accidentally harms a person, should it be blamed for its mistake? Should it be trusted to continue interacting with humans? Furthermore, how does the assignment of moral blame and trustworthiness to machines compare with such assignment to humans who harm others? I answer these questions by exploring differences in the moral blame and trustworthiness attributed to human and machine agents who make harmful moral mistakes. Additionally, I examine whether knowledge of the reason for the harmful incident, the type of reason given, and an apology affect perceptions of the parties involved. To fill the gaps in understanding between topics in moral psychology, cognitive psychology, and artificial intelligence, valuable information from each of these fields has been combined to guide the research presented herein.

    Artificial Intelligence leadership: how trust and fairness perceptions impact turnover intentions through psychological safety

    The involvement of artificial intelligence agents in decision making in organizational environments has been increasing rapidly. These agents bring advantages to decision making due to their objectivity, efficiency, and superior information-processing capacity, while lacking human weaknesses such as fatigue or self-interest. However, organizational employees may perceive them less optimistically, as artificial intelligence leaders may be seen as less fair and just. This dissertation studies the effects that this new type of leadership has on employees' turnover intentions, an important variable because high levels of voluntary turnover cause companies considerable losses, both through increased costs and through the loss of talented human resources. Additionally, I propose that a decrease in employees' psychological safety mediates this relationship. Finally, I propose a way to overcome this effect by manipulating perceptions of the trust and justice of these leaders, in order to counter the negative effect of non-human leadership. The results revealed a significant effect of the leader agent on employees' turnover intentions as well as on their psychological safety, with psychological safety mediating the former effect. Regarding the moderating role of trust and justice perceptions, the results showed that these testimonials have a direct effect on psychological safety and an indirect effect on turnover intentions through psychological safety.
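
    Purely as an illustrative sketch (not the dissertation's actual analysis, software, or variable names), the mediation logic described above can be expressed as two regressions plus a bootstrapped indirect effect; the column names ai_leader, psych_safety, and turnover_intent are assumed placeholders.

    ```python
    # Hypothetical sketch: leader type (AI vs. human) -> psychological
    # safety -> turnover intention, with a percentile-bootstrap CI for
    # the indirect effect. Column names are illustrative placeholders.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def indirect_effect(df: pd.DataFrame) -> float:
        """a*b indirect effect of leader type on turnover via psychological safety."""
        a = smf.ols("psych_safety ~ ai_leader", data=df).fit().params["ai_leader"]
        b = smf.ols("turnover_intent ~ psych_safety + ai_leader",
                    data=df).fit().params["psych_safety"]
        return a * b

    def bootstrap_ci(df: pd.DataFrame, n_boot: int = 2000, seed: int = 0):
        """Percentile bootstrap 95% confidence interval for the indirect effect."""
        rng = np.random.default_rng(seed)
        effects = [
            indirect_effect(df.sample(len(df), replace=True,
                                      random_state=int(rng.integers(10**9))))
            for _ in range(n_boot)
        ]
        return np.percentile(effects, [2.5, 97.5])
    ```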

    RoboAct: action control for a robotic actor

    The uncanny valley is a hypothesis from robotics which holds that anthropomorphic robots that look excessively similar to a person provoke a sense of rejection and discomfort, whereas a robot that is only partially human-like is handled as a machine, producing the opposite reaction and degrading the interaction with it. This work seeks to address the problem of human-robot interaction with humanoid robots by taking an approach grounded in human psychology and applying its results through robot kinematics algorithms, thereby making the robot's gestures more human-like.

    Human-Machine Communication: Complete Volume. Volume 2

    This is the complete volume of HMC, Volume 2.

    The facilitation of trust in automation: a qualitative study of behaviour and attitudes towards emerging technology in military culture

    Domains of high speciality and criticality characterise the most researched areas in the field of Trust in Automation. Few studies have explored the nuances of the psycho-social environment and organisational culture in the development of appropriate mental models of dispositional trust. To aid the integration of human operators with emergent specialised systems, there is an ambition to introduce Human-Human/Human-System analogies with AI Avatars and 3D representations of environments (Ministry of Defence, 2018). Owing to criticisms in the literature of Human-Human and Human-System teaming analogues, this research explored the personal narratives of civilians and military personnel about technology, adaptability, and how to facilitate beneficial attitudes and behaviours around appropriate trust, reliance, and misuse. A subdivision of the research explores the socio-cultural idiosyncrasies within the different echelons of the military, as variations in authority and kinship provide insight for training targeted to specific domains. The thesis proposes that there are core hindrances to facilitating tacit trust in automation, as cognitive rigidity around individual and group identities shapes socially constructed responses and internal mental models. Furthermore, as automation broaches category boundaries, there may be resistance and discomfort resulting from social contracts in which transactional and relational trust-related power dynamics are unknown or unpredictable.