
    Negative attitudes towards robots vary by the occupation of robots

    The Negative Attitudes towards Robots Scale (NARS) has been widely applied in the field of human-robot interaction. However, the various occupations and roles of robots have not been considered when studying negative attitudes towards them. This study explores whether a robot's occupation influences people's negative attitudes towards it. For the first time, two types of robots that may come into wide use were examined in a NARS-related study. We conducted an online questionnaire covering three separate parts: negative attitudes towards robots in general, towards service robots, and towards security robots. The results, collected from 114 participants (54 female, 60 male), showed that negative attitudes towards service robots differed from negative attitudes towards robots in general and towards security robots: people showed the lowest negative attitudes towards service robots, while there was no significant difference between attitudes towards robots in general and towards security robots. This study supports the hypothesis that people hold different levels of negative attitudes towards robots of different occupations. These results provide a helpful indicator for the study and design of robots in various occupations in the robotics industry.

    Perceptions of healthcare robots as a function of emotion-based coping:The importance of coping appraisals and coping strategies

    The urgent pressure on healthcare increases the need to understand how new technology such as social robots may offer solutions. Many healthcare situations are emotionally charged, which likely affects people's perceptions of robots in healthcare contexts. Thus far, however, little attention has been paid to how people's prior emotions may influence their perceptions of such a robot. Based on emotional appraisal theories and prior research, we assumed that emotional coping appraisals in particular would influence healthcare-robot perceptions. Additionally, we tested the effects of actual coping through the use of emotion-focused and problem-focused coping strategies. Hypotheses were tested in a 2 (sad vs. angry) × 2 (hard-to-cope-with vs. easy-to-cope-with) between-subjects experiment, also including a control group. Results (N = 132; age range 18–36) showed that manipulated coping potential indirectly affected perceptions of a healthcare robot via the appraisal of coping potential. Furthermore, positive emotion-focused coping affected perceptions of a healthcare robot positively. Thus, people's healthcare-robot perceptions were affected by how they cope, or how they think they can cope, with their emotions, rather than by the emotions as such.

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence, and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent, such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Beyond accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Public Opinions of Unmanned Aerial Technologies in 2014 to 2019: A Technical and Descriptive Report

    The primary purpose of this report is to provide a descriptive and technical summary of the results from similar surveys administered in fall 2014 (n = 576), 2015 (n = 301), 2016 (ns = 1946 and 2089), and 2018 (n = 1050), and in summer 2019 (n = 1300). To explore a variety of factors that may affect public perceptions of unmanned aerial technologies (UATs), we conducted survey experiments over time. These experiments randomly varied the terminology used to describe the technology (drone, aerial robot, unmanned aerial vehicle (UAV), unmanned aerial system (UAS)), the purposes of the technology (economic, environmental, or security goals), the actors (public or private) using it, its degree of autonomy (fully autonomous, partially autonomous, no autonomy), and the framing (promotion or prevention) used to describe its purpose. Initially, samples were recruited through Amazon's Mechanical Turk, required to be American, and paid a small amount for participation. In 2016 we also examined nationally representative samples recruited from Qualtrics panels, and after 2016 we used only nationally representative samples from Qualtrics. Major findings are reported along with details regarding the research methods and analyses.

    Preliminary validation of the European Portuguese version of the Robotic Social Attributes Scale (RoSAS)

    Background: People's perception of social robots is essential in determining their responses to and acceptance of this type of agent. Currently, few instruments that measure the perception of social robots have been validated for the European Portuguese population. Method: Our goal was to translate the Robotic Social Attributes Scale (RoSAS) into European Portuguese and to evaluate its psychometric properties. To achieve this goal, we conducted a validation study with a sample of 185 participants. We measured the temporal reliability of the scale (over a two-week interval) and its divergent and convergent validity using the Portuguese Negative Attitudes towards Robots Scale (PNARS) and the Godspeed scales. Results: Our data analysis resulted in a shortened version of the Portuguese RoSAS with 11 items that retains the original three-factor structure. The scale presented poor to acceptable levels of temporal reliability. We found a positive correlation between the warmth and competence dimensions. Further validation studies are needed to investigate the psychometric properties of this scale.

    A Systematic Literature Review of User Experience Evaluation Scales for Human-Robot Collaboration

    In the last decade, the field of Human-Robot Collaboration (HRC) has received much attention from both research institutions and industry. Robot technologies are deployed in many different areas (e.g., industrial processes, people assistance) to support effective collaboration between humans and robots. In this transdisciplinary context, User eXperience (UX) must inevitably be considered to achieve effective HRC, namely to allow robots to better respond to users' needs and thus improve the quality of the interaction. The present paper reviews the evaluation scales used in HRC scenarios, focusing on the application context and the aspects evaluated. In particular, a systematic review was conducted based on the following questions: (RQ1) which evaluation scales are adopted in HRI scenarios with collaborative tasks? and (RQ2) how are UX and user satisfaction assessed? The analysis of the records highlighted that UX aspects are not sufficiently examined in current HRC design practice, particularly in the industrial field, most likely due to a lack of standardized scales. To respond to this recognized need, a set of dimensions to be considered in a new UX evaluation scale is proposed.

    Botsourcing and Outsourcing: Robot, British, Chinese, and German Workers Are for Thinking—Not Feeling—Jobs

    Technological innovations have produced robots capable of jobs that, until recently, only humans could perform. The present research explores the psychology of "botsourcing"—the replacement of human jobs by robots—while examining how understanding botsourcing can inform the psychology of outsourcing—the replacement of jobs in one country by humans from other countries. We test four related hypotheses across six experiments: (1) given people's lay theories about the capacities of robots and humans for cognition and emotion, workers will express more discomfort with botsourcing when they consider losing jobs that require emotion rather than cognition; (2) people will express more comfort with botsourcing when jobs are framed as requiring cognition rather than emotion; (3) people will express more comfort with botsourcing for jobs that do require emotion if robots appear to convey more emotion; and (4) people prefer to outsource cognition-oriented rather than emotion-oriented jobs to other humans who are perceived as more rather than less robotic. These results have theoretical implications for understanding social cognition about both humans and nonhumans, and practical implications for the increasingly botsourced and outsourced economy.

    Robot Mindreading and the Problem of Trust

    This paper raises three questions regarding the attribution of beliefs, desires, and intentions to robots. The first is whether humans in fact engage in robot mindreading. If they do, this raises a second question: does robot mindreading foster trust towards robots? Both of these questions are empirical, and I show that the available evidence is insufficient to answer them. Now, if we assume that the answer to both questions is affirmative, a third and more important question arises: should developers and engineers promote robot mindreading in view of their stated goal of enhancing transparency? My worry here is that by attempting to make robots more mind-readable, they are abandoning the project of understanding automatic decision processes. Features that enhance mind-readability are prone to make the factors that determine automatic decisions even more opaque than they already are, and current strategies to eliminate opacity do not enhance mind-readability. The last part of the paper discusses different ways to analyze this apparent trade-off and suggests that a possible solution must adopt tolerable degrees of opacity that depend on pragmatic factors connected to the level of trust required for the intended uses of the robot.

    Artificial intelligence and human values

    Artificial Intelligence (AI) is increasingly used in daily life. Where decisions and choices were once left to human judgment, technology now plays a much more incisive role. This topic has spawned several divergent and alarming opinions (e.g. those of Elon Musk and Stephen Hawking), owing to the various ethical sensitivities that AI development raises. The present study examines whether human values influence individuals' attitudes towards the evolution of AI. The technology is presented in several different contexts: cases where AI threatens each of the values; the opposite case, where AI benefits them; and the need (or lack thereof) for regulatory agents, and whether their presence changed people's initial decision. With a sample of 205 participants, and using both quantitative (questionnaire) and qualitative (semi-structured interviews) methods, the study concludes that equality, freedom, health, and national security have predictive power for the attitudes that individuals hold towards AI evolution. More specifically, when AI threatens equality, people develop unfavourable attitudes towards its evolution. The same happens for freedom, where people oppose AI evolution whether it benefits or threatens human values. People tend to favour AI evolution when it benefits health, but require the presence of regulatory agents. Lastly, attitudes towards AI evolution are positive when it benefits national security; people still demonstrate generally positive attitudes when this value is threatened by AI, but require the presence of regulatory agents.