    A Multisite Preregistered Paradigmatic Test of the Ego-Depletion Effect

    We conducted a preregistered multilaboratory project (k = 36; N = 3,531) to assess the size and robustness of ego-depletion effects using a novel replication method, termed the paradigmatic replication approach. Each laboratory implemented one of two procedures intended to manipulate self-control and tested performance on a subsequent measure of self-control. Confirmatory tests found a nonsignificant result (d = 0.06). Confirmatory Bayesian meta-analyses using an informed-prior hypothesis (ÎŽ = 0.30, SD = 0.15) found that the data were 4 times more likely under the null than under the alternative hypothesis. Hence, the preregistered analyses did not find evidence for a depletion effect. Exploratory analyses on the full sample (i.e., ignoring exclusion criteria) found a statistically significant effect (d = 0.08); Bayesian analyses showed that the data were about equally likely under the null and informed-prior hypotheses. Exploratory moderator tests suggested that the depletion effect was larger for participants who reported more fatigue but was not moderated by trait self-control, willpower beliefs, or action orientation.
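
    To make the informed-prior Bayes factor concrete, here is a minimal sketch for a single pooled effect estimate. It assumes a normal likelihood around the true effect and uses the Normal(0.30, 0.15) prior named in the abstract; the standard error below is a hypothetical placeholder, and the paper's actual multilevel meta-analysis will not reproduce this exact number.

```python
# Minimal sketch: Bayes factor BF01 comparing H0 (delta = 0) against the
# informed-prior H1 (delta ~ Normal(0.30, 0.15)) for one pooled estimate.
# The standard error is hypothetical; the paper used a multilevel model.
import numpy as np
from scipy.stats import norm

d_obs = 0.06  # pooled effect size reported in the abstract
se = 0.03     # hypothetical standard error of the pooled estimate

# Marginal likelihood under H0: sampling density of d_obs at delta = 0.
m0 = norm.pdf(d_obs, loc=0.0, scale=se)

# Marginal likelihood under H1: integrating a Normal(delta, se) likelihood
# over a Normal(0.30, 0.15) prior yields another normal density.
m1 = norm.pdf(d_obs, loc=0.30, scale=np.sqrt(se**2 + 0.15**2))

print(f"BF01 = {m0 / m1:.2f}")  # values > 1 favor the null
```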

    Ambivalence in attitudes towards robots

    No full text
    Stapels J. Ambivalence in attitudes towards robots. Bielefeld: UniversitÀt Bielefeld; 2022. The current work investigated the under-researched topic of ambivalent attitudes towards robots from an experimental social psychological perspective. While ambivalence has been a research topic for almost a hundred years, it is often overlooked in the context of attitude research: when an attitude towards an attitude object is not clearly positive or negative, it is often interpreted as neutral. Depending on the measurement method, ambivalent attitude objects may appear neutral despite differing in their positive and negative evaluations, their perceived subjective conflict, and the affective, behavioral, and cognitive indicators of such conflict. In this work, we apply a theoretical framework, the ABC of Ambivalence (van Harreveld et al., 2015), to the domain of attitudes towards robots, thereby testing the external validity of the model and enhancing our understanding of attitudes towards robots. In three manuscripts covering five experiments and data from over 600 participants in total, we demonstrated, first, that attitudes towards robots are highly ambivalent. Second, we investigated the evaluation contents and dispositional differences influencing ambivalence towards robots in a mixed-methods design. Third, using implicit and explicit measures, we examined the behavioral and cognitive indicators of ambivalence in attitudes towards robots, providing an updated ABC of Ambivalence, the AB of Robot-related Ambivalence. While self-reported attitudes were consistently highly ambivalent across experiments, the behavioral indicators of such ambivalence seemed to depend on the type of robot. Further, the current research highlighted boundaries concerning the cognitive indicators of ambivalence, which could not be replicated in the domain of social robotics. Further research is required to investigate the specific cognitive and behavioral indicators of ambivalence. The current work demonstrates a novel interpretation of seemingly “neutral” attitudes towards robots, encouraging researchers to reinterpret, and possibly replicate, robot-related attitude research with the proposed methodology, considering attitudinal ambivalence.

    VIVA - Ambivalence towards and potential use of a newly developed social robot

    No full text
    Stapels J, Eyssel F. VIVA - Ambivalence towards and potential use of a newly developed social robot. Presented at the Social Psychology Conference 2021: The Multiple Angles of Well-being, Helsinki. Social robots bear the potential to increase wellbeing by enabling social interaction. However, ambivalent attitudes towards robots may be among the reasons why, to date, social robots are not yet widely accepted. Ambivalence, the simultaneous experience of strong positive and negative evaluations, is an unpleasant state that yields detrimental consequences on the affective, cognitive, and behavioural level. To gain insights into these dynamics, we conducted a preregistered experiment comparing two robots: Pepper and VIVA. Forty participants initially rated the design of both robots. Subsequently, they evaluated the newly developed robot VIVA based on a video featuring use cases for the robot. Moreover, ambivalence towards VIVA, VIVA’s perceived contributions to people’s wellbeing, and the perceived desirability of VIVA’s proposed functions were assessed. Finally, potential user groups were identified. Results showed that VIVA’s design was perceived as adorable, high-quality, friendly, and sympathetic. However, comparing VIVA and Pepper, we found that Pepper was evaluated as more trustworthy, responsible, and intelligent than VIVA. VIVA was perceived as trustworthy but was neither particularly liked nor accepted. As hypothesized, attitudes towards VIVA were ambivalent, implying that participants were torn between strong positive and strong negative evaluations of the robot. Participants did not expect VIVA to increase their wellbeing. Memory-based functions, such as reminder services or learning support, and security functions, such as home surveillance or emergency call placement, were evaluated as desirable. Our findings imply that a user-centred approach which addresses robot-related ambivalence may increase robot acceptance.

    Let’s not be indifferent about robots: Neutral ratings on bipolar measures mask ambivalence in attitudes towards robots

    No full text
    Stapels J, Eyssel F. Let’s not be indifferent about robots: Neutral ratings on bipolar measures mask ambivalence in attitudes towards robots. PLOS ONE. 2021;16(1): e0244697. Ambivalence, the simultaneous experience of both positive and negative feelings about one and the same attitude object, has been investigated in psychological attitude research for decades. Ambivalence is interpreted as an attitudinal conflict with distinct affective, behavioral, and cognitive consequences. Social psychological research has shown that ambivalence is sometimes confused with neutrality because of measures that cannot distinguish between the two. Likewise, in social robotics research users’ attitudes are often characterized as neutral. We assume this is because existing research on attitudes towards robots has lacked measures capable of capturing ambivalence. In the current experiment (N = 45), we show that a neutral and a robot stimulus were evaluated equivalently when using a bipolar item, but evaluations differed greatly in self-reported ambivalence and arousal. This indicates that attitudes towards robots are in fact highly ambivalent, although they might appear neutral depending on the measurement method. To gain valid insights into people’s attitudes towards robots, positive and negative evaluations of robots should be measured separately, providing participants with measures to express evaluative conflict instead of administering bipolar items. Acknowledging the role of ambivalence in robot-focused attitude research has the potential to deepen our understanding of users’ attitudes and their potential evaluative conflicts, and thus to improve predictions of behavior from attitudes towards robots.
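
    To illustrate why separate positive and negative ratings matter, a standard scoring rule in attitude research is the similarity-intensity ambivalence index of Thompson, Zanna, and Griffin (1995), whose title this paper echoes. The sketch below uses hypothetical 7-point ratings; the abstract does not state the exact scale or scoring used in the study.

```python
# Similarity-intensity (Griffin) ambivalence index:
#   ambivalence = (P + N) / 2 - |P - N|
# P and N are separate positive and negative evaluations of the same object.
def ambivalence(p: float, n: float) -> float:
    """Higher values indicate stronger evaluative conflict."""
    return (p + n) / 2 - abs(p - n)

# Hypothetical ratings on 1-7 scales:
print(ambivalence(6, 6))  # strongly torn        ->  6.0 (high ambivalence)
print(ambivalence(1, 1))  # indifferent/neutral  ->  1.0 (low ambivalence)
print(ambivalence(7, 1))  # clearly positive     -> -2.0 (univalent)
```

    On a single bipolar item, the torn (6, 6) respondent and the indifferent (1, 1) respondent can both land at the midpoint and look equally “neutral”, yet the index separates them sharply, which is exactly the masking effect the paper’s title refers to.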

    Robocalypse? Yes, Please! The Role of Robot Autonomy in the Development of Ambivalent Attitudes Towards Robots

    No full text
    Stapels J, Eyssel F. Robocalypse? Yes, Please! The Role of Robot Autonomy in the Development of Ambivalent Attitudes Towards Robots. International Journal of Social Robotics. 2021;14(3):683–697. Attitudes towards robots are not always unequivocally positive or negative: when attitudes encompass both strong positive and strong negative evaluations of an attitude object, people experience an unpleasant state of evaluative conflict, called ambivalence. To shed light on ambivalence towards robots, we conducted a mixed-methods experiment with N = 163 German university students that investigated the influence of robot autonomy on robot-related attitudes. With technological progress, robots become increasingly autonomous. We hypothesized that high levels of robot autonomy would increase both positive and negative robot-related evaluations, resulting in more attitudinal ambivalence. We experimentally manipulated robot autonomy through text vignettes and assessed objective ambivalence (i.e., the amount of reported conflicting thoughts and feelings) and subjective ambivalence (i.e., self-reported experienced conflict) towards the robot ‘VIVA’ using qualitative and quantitative measures. Autonomy did not impact objective ambivalence. However, subjective ambivalence was higher towards the robot with high versus low autonomy. Interestingly, this effect turned non-significant when controlling for individual differences in technology commitment. Qualitative results were categorized by two independent raters into assets (e.g., assistance, companionship) and risks (e.g., privacy/data security, social isolation). Taken together, the present research demonstrated that attitudes towards robots are indeed ambivalent and that this ambivalence might influence behavioral intentions towards robots. Moreover, the findings highlight the important role of technology commitment. Finally, qualitative results shed light on potential users’ concerns and aspirations. This way, these data provide useful insights into factors that facilitate human–robot research.

    Never trust anything that can think for itself, if you can’t control its privacy settings: The influence of a robot’s privacy settings on users’ attitudes and willingness to self-disclose

    No full text
    Stapels JG, Penner A, Diekmann N, Eyssel F. Never trust anything that can think for itself, if you can’t control its privacy settings: The influence of a robot’s privacy settings on users’ attitudes and willingness to self-disclose. International Journal of Social Robotics. 2023. When encountering social robots, potential users often face a dilemma between privacy and utility: high utility often comes at the cost of lenient privacy settings, allowing the robot to store personal data and to connect to the internet permanently, which brings associated data security risks. However, to date it remains unclear how this dilemma affects attitudes and behavioral intentions towards the respective robot. To shed light on the influence of a social robot’s privacy settings on robot-related attitudes and behavioral intentions, we conducted two online experiments with a total sample of N = 320 German university students. For Experiment 1, we hypothesized that strict privacy settings, compared to lenient ones, would result in more favorable attitudes and behavioral intentions towards the robot. For Experiment 2, we expected more favorable attitudes and behavioral intentions when participants chose the robot’s privacy settings themselves rather than evaluating preset settings. However, the two manipulations seemed to influence attitudes towards the robot in diverging domains: while strict privacy settings increased trust, decreased subjective ambivalence, and increased the willingness to self-disclose compared to lenient privacy settings, the choice of privacy settings primarily affected robot likeability, contact intentions, and the depth of potential self-disclosure. Strict privacy settings might reduce the risk associated with robot contact and thereby reduce risk-related attitudes and increase trust-dependent behavioral intentions. However, if allowed to choose, people make the robot ‘their own’ by making a privacy-utility tradeoff. This tradeoff is likely a compromise between full privacy and full utility and thus does not reduce the risks of robot contact as much as strict privacy settings do. Future experiments should replicate these results using real-life human-robot interaction and different scenarios to further investigate the psychological mechanisms underlying such divergences.

    Too close to call: Spatial distance between options influences choice difficulty

    No full text
    Schneider IK, Stapels J, Koole SL, Schwarz N. Too close to call: Spatial distance between options influences choice difficulty. Journal of Experimental Social Psychology. 2020;87: 103939. In language, people often refer to decision difficulty in terms of spatial distance. Specifically, decision difficulty is expressed as proximity, for instance when people say that a decision was “too close to call”. Although these expressions are metaphorical, we argue, in line with research on conceptual metaphor theory, that they reflect how people think about difficult decisions. Thus, we examine here whether close spatial distance can actually make decision-making harder. In six experiments (total N = 672), participants chose between two choice options presented either close together or far apart. As predicted, close (rather than far) choice options led to more difficulty, both in self-report (Experiments 1A–1C) and in behavioral measures (decision time, Experiments 2 and 3). Identifying a boundary condition, we show that close choice options lead to more difficulty only for within-category choices (Experiment 3). The too-close-to-call effect is theoretically and methodologically relevant for a broad array of research where choice options are visually presented, ranging from social cognition, judgment, and decision-making to more applied settings in consumer psychology and marketing.

    To Move or Not to Move? Social Acceptability of Robot Proxemics Behavior Depending on User Emotion

    No full text
    Petrak B, Stapels J, Weitz K, Eyssel F, Andre E. To Move or Not to Move? Social Acceptability of Robot Proxemics Behavior Depending on User Emotion. In: 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN). IEEE; 2021: 975-982. Various works show that proxemics plays an important role in human-robot interaction and that appropriate proxemic behavior depends on many characteristics of humans and robots. However, no prior work has examined how a robot should respond proxemically to an emotional state expressed by a user in a social interaction. In the current experiment (N = 82), we investigate this in an online study examining which proxemic response (i.e., approaching, not moving, moving away) to a person’s expressed emotional state (i.e., anger, fear, disgust, surprise, sadness, joy) is perceived as appropriate. The quantitative and qualitative data collected suggest that approaching was considered appropriate in response to expressed fear, sadness, and joy, whereas moving away was perceived as inappropriate in most scenarios. Further exploratory findings underline the importance of appropriate nonverbal behavior for the perception of the robot.