20 research outputs found

    Estimating the construct validity of Principal Components Analysis

    In many scientific disciplines, the features of interest cannot be observed directly and so must be inferred from observed behaviour. Latent variable analyses are increasingly employed to systematise these inferences, and Principal Components Analysis (PCA) is perhaps the simplest and most popular of these methods. Here, we examine how the assumptions we are prepared to entertain about the latent variable system mediate the likelihood that PCA-derived components will capture the true sources of variance underlying data. As expected, we find that this likelihood is excellent in the best case and robust to empirically reasonable levels of measurement noise. Best-case performance, however, (a) is not robust to violations of the method's more prominent assumptions of linearity and orthogonality, and (b) requires that other, subtler assumptions be made, such as that the latent variables have varying importance and that the weights relating latent variables to observed data have zero mean. Neither variance explained nor replication in independent samples could reliably predict which (if any) PCA-derived components will capture true sources of variance in data. We conclude by describing a procedure to fit these inferences more directly to empirical data, and use it to find that components derived via PCA from two different empirical neuropsychological datasets are less likely to have meaningful referents in the brain than we had hoped. Comment: 26 pages, 3 figures, 3 tables.
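    The recovery question the abstract poses can be made concrete with a small simulation. The sketch below is a minimal illustration, not the authors' actual procedure: it generates latent variables of varying importance, mixes them linearly with zero-mean weights (the favourable assumptions the abstract names), adds measurement noise, and then checks how strongly each PCA-derived component correlates with each true latent source. All parameter values are illustrative assumptions.

```python
# Minimal sketch: can PCA recover known latent sources under favourable assumptions?
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_obs, n_latent, n_features = 1000, 4, 12

# Latent variables with *varying* importance (one of the subtler assumptions).
latent_sd = np.array([4.0, 2.0, 1.0, 0.5])
Z = rng.normal(size=(n_obs, n_latent)) * latent_sd

# Zero-mean weights relating latents to observed data (the other subtle
# assumption), plus a modest amount of measurement noise.
W = rng.normal(loc=0.0, scale=1.0, size=(n_latent, n_features))
X = Z @ W + rng.normal(scale=0.5, size=(n_obs, n_features))

# Fit PCA and ask: does each component track exactly one true latent source?
scores = PCA(n_components=n_latent).fit_transform(X)

# Absolute correlations: rows are components, columns are latent sources.
recovery = np.abs(np.corrcoef(scores.T, Z.T)[:n_latent, n_latent:])
print(np.round(recovery, 2))
```

    Under these favourable settings the recovery matrix should be close to a permutation of the identity; re-running the sketch with correlated latents or equal latent variances is the quickest way to see the fragility the abstract describes.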

    Exploration of big data in education


    The human in the loop Perspectives and challenges for RoboCup 2050

    Robotics researchers have been focusing on developing autonomous and human-like intelligent robots that are able to plan, navigate, manipulate objects, and interact with humans in both static and dynamic environments. These capabilities, however, are usually developed for direct interactions with people in controlled environments, and evaluated primarily in terms of human safety. Consequently, human-robot interaction (HRI) in scenarios with no intervention by technical personnel is under-explored. In the future, however, robots will be deployed in unstructured environments where they will be expected to work unsupervised on tasks that require direct interaction with humans and may not necessarily be collaborative. Developing such robots requires comparing the effectiveness and efficiency of similar design approaches and techniques. Yet issues regarding the reproducibility of results, the comparison of different approaches between research groups, and the creation of challenging milestones to measure performance and development over time make this difficult. Here we discuss the international robotics competition RoboCup as a benchmark for the progress and open challenges in AI and robotics development. The long-term goal of RoboCup is to develop a robot soccer team that can win against the world’s best human soccer team by 2050. We selected RoboCup because it requires robots to play with and against humans in unstructured environments, such as uneven fields and natural lighting conditions, and because it challenges the currently accepted dynamics of HRI. Given the current state of robotics technology, RoboCup’s goal raises several open research questions to be addressed by roboticists. In this paper, we (a) summarise the current challenges in robotics by using RoboCup development as an evaluation metric, (b) discuss state-of-the-art approaches to these challenges and how they currently apply to RoboCup, and (c) present a path for future development in the given areas to meet RoboCup’s goal of having robots play soccer against and with humans by 2050.


    The morality of abusing a robot

    https://www.doi.org/10.1515/pjbr-2020-001

    Robot bullying.

    When robots made their first unsupervised entrance into the public space, their engineers were confronted with an unexpected phenomenon: robot bullying (see, for example, Brscić, Kidokoro, Suehiro, & Kanda, 2015; Salvini et al., 2010). While the phenomenon has continued to manifest itself since, and a few theoretical explanations have been suggested, little empirical work has yet been done to substantiate this theorising. This thesis summarises five pieces of research that explore which psychological factors influence people’s willingness to behave anti-socially towards robots. It is structured around four experiments on human-robot interaction (Chapters 2, 3, 5, and 6) and one analysis of human-chatbot interaction (Chapter 4). In addition, there are general reflections on the methodological and philosophical issues with studying robot bullying (section 7.2), as well as on the role of mind attribution (i.e., attributing the ability to think and feel to another being; section 7.4), which has been a recurring measure of interest throughout the experiments.

    Chapter 1 provides an overview of the motivation for the thesis topic and the research questions. It also includes a general discussion of the relevant literature, focusing on anthropomorphism of nonhuman agents, mind attribution as a factor of anthropomorphism, and how dehumanisation as a facilitator of interhuman aggression may generalise to human-robot interaction as well.

    Chapter 2 describes an experiment that explored whether bullying behaviour is perceived as more morally acceptable if the victim is a robot rather than a human. The results indicated no significant difference in moral acceptability, and suggested that higher levels of mind attribution were related to lower acceptability of abuse.

    Chapter 3 expands on these findings by describing two studies that experimentally manipulated mind attribution. Whereas participants in the experiment from Chapter 2 were passive spectators of a human-robot interaction, one of the experiments in this chapter involved active interaction between a participant and a robot. In two experiments we investigated the influence of a robot’s mind attribution on the perceived acceptability of robot bullying and on people’s willingness to bully a robot. Results indicated that the acceptability of robot bullying can be manipulated both explicitly, by providing people with information on the robot’s mind attribution, and implicitly, through having the robot give off emotional cues. These effects are independent of one another. Interestingly, robot mind attribution was not associated with a lower incidence rate of robot bullying in this experiment.

    In contrast to the studies reported in the other chapters, the study covered in Chapter 4 did not use an experimental design. Almost 300 conversations between users and an online chatbot were harvested and coded for humanlikeness of the chatbot, self-disclosure by the user, and, importantly, the amount of verbal abuse or sexual harassment. Subsequent analyses showed that humanlikeness in the chatbot was associated with more abuse (both sexual harassment and verbal aggression). Self-disclosure in the form of mentioning one’s gender (whether male or female) was associated with less verbal aggression, but more sexual harassment.

    Chapter 5 describes an experiment which investigated whether mind attribution is linked to robot abuse. Mind attribution to the robot was intended to be manipulated by priming participants with a feeling of power, as previous studies on dehumanisation had shown that power reduces mind attribution. In addition, humanlike qualities of the robot were manipulated. The participants’ verbal abuse of a virtual robot was measured as the main outcome of interest; mind attribution to the robot and humanlikeness of the robot were measured as manipulation checks. Contrary to previous findings in human-human interaction, priming participants with power did not result in reduced mind attribution. However, evidence for dehumanisation was still found: the less mind participants attributed to the robot, the more aggressive responses they gave. This effect was moderated by the power prime and the robot humanlikeness manipulation.

    The discussion section of Chapter 5 offers an explanation for these surprising results, which is put to the test in Chapter 6, where an expansion of the experiment from Chapter 5 is presented. Feelings of power, robot embodiment (virtual versus embodied), and feelings of threat were experimentally manipulated. Participants played a learning task with either a virtual or an embodied robot, and were asked to restrict the robot’s energy supply after each wrong answer, which was taken as a measure of aggression. Results indicated that an embodied robot was punished less harshly than a virtual one, except when people had been primed with power and threat. Being primed with power diminished the influence of mind attribution on aggression. Mind attribution increased aggression in the threat condition, but was related to decreased aggression when people had not been reminded of threat. These results suggest that while mind attribution appears to play a role in robot bullying, the relationship is too complicated to be explained by dehumanisation theory alone.

    Finally, Chapter 7 aggregates the results from the studies in this thesis to answer the thesis research questions. In addition, the strengths and limitations of the research are discussed, as are trends in mind attribution to the robots used across the different experiments, and possible directions for future research.

    VR Smoking Craving


    The morality of abusing a robot

    It is not uncommon for humans to exhibit abusive behaviour towards robots. This study compares how abusive behaviour towards a human is perceived in comparison with identical behaviour towards a robot. We showed participants 16 video clips that depicted different levels of violence and abuse. For each video, we asked participants to rate the moral acceptability of the action, the violence depicted, the intention to harm, and how abusive the action was. The results indicate no significant difference in the perceived morality of the actions across the two victim agents. When the agents started to fight back, however, their reactive aggressive behaviour was rated differently: humans fighting back were seen as less immoral than robots fighting back. A mediation analysis showed that this was predominantly due to participants perceiving the robot’s response as more abusive than the human’s response.
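    For readers unfamiliar with the technique, the mediation logic reported here can be sketched with ordinary regressions. The snippet below is a hypothetical illustration, not the authors' analysis pipeline: the file name and variable names are assumptions, and in practice the indirect effect's confidence interval would be bootstrapped rather than read off point estimates.

```python
# Hypothetical mediation sketch: does perceived abusiveness of the response
# (mediator) account for the effect of agent type (human vs robot) on the
# morality rating of fighting back?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("video_ratings.csv")  # hypothetical coded ratings

# Total effect: agent type -> morality rating of the reactive aggression.
total = smf.ols("morality ~ agent_is_robot", data=df).fit()
# Path a: agent type -> perceived abusiveness of the response.
path_a = smf.ols("abusiveness ~ agent_is_robot", data=df).fit()
# Paths b and c': mediator and predictor entered together.
direct = smf.ols("morality ~ agent_is_robot + abusiveness", data=df).fit()

# Indirect effect (a * b); a bootstrap would give its confidence interval.
a = path_a.params["agent_is_robot"]
b = direct.params["abusiveness"]
print("total:", total.params["agent_is_robot"], "indirect:", a * b)
```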

    What's to bullying a bot? Correlates between chatbot humanlikeness and abuse

    Keijsers M, Bartneck C, Eyssel F. What's to bullying a bot? Correlates between chatbot humanlikeness and abuse. Interaction Studies. 2021;22(1):55-80.

    In human-chatbot interaction, users casually and regularly offend and abuse the chatbot they are interacting with. The current paper explores the relationship between chatbot humanlikeness on the one hand and sexual advances and verbal aggression by the user on the other. A total of 283 conversations between the Cleverbot chatbot and its users were harvested and analysed. Our results showed higher counts of user verbal aggression and sexual comments towards Cleverbot when Cleverbot appeared more humanlike in its behaviour. Caution is warranted in interpreting these results, however, as no experimental manipulation was conducted and causality can thus not be inferred. Nonetheless, the findings are relevant both for research on the abuse of conversational agents and for the development of efficient approaches to discourage or prevent verbal aggression by chatbot users.
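    Because the study is correlational, its core analysis can be sketched as a simple association test between coded variables. The snippet below is a hypothetical illustration under assumed data, not the paper's actual pipeline; the file name and column names are invented for the example.

```python
# Hypothetical sketch: association between coded chatbot humanlikeness and
# per-conversation abuse counts, using a rank-based correlation (a reasonable
# default for coded count data).
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("coded_conversations.csv")  # hypothetical coded data

for outcome in ["verbal_aggression", "sexual_comments"]:
    rho, p = spearmanr(df["humanlikeness"], df[outcome])
    print(f"humanlikeness vs {outcome}: rho={rho:.2f}, p={p:.3f}")
```

    As the abstract itself stresses, a nonzero correlation here would not license a causal claim, since humanlikeness was observed rather than manipulated.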