    License to Kill: An Analysis of the Legality of Fully Autonomous Drones in the Context of International Use of Force Law

    We live in a world of constant technological change, and with this change come unknown effects and consequences. This is even truer of weapons and warfare. Indeed, as the means and methods of warfare rapidly modify and transform, the effects and consequences on the laws of war are unknown. This Article addresses one such development in weapons and warfare technology—Fully Autonomous Weapons, or “Killer Robots”—and discusses the inevitable use of these weapons within the current international law framework. Recognizing the inadequacy of the current legal framework, this Article proposes a regulation policy to mitigate the risks associated with Fully Autonomous Weapons. But the debate should not end here; States and the U.N. must work together to adopt a legal framework that keeps pace with the advancement of technology. This Article starts that discussion.

    Visually Perceiving the Intentions of Others

    I argue that we sometimes visually perceive the intentions of others. Just as we can see something as blue or as moving to the left, so too can we see someone as intending to evade detection or as aiming to traverse a physical obstacle. I consider the typical subject presented with the Heider and Simmel movie, a widely studied ‘animacy’ stimulus, and I argue that this subject mentally attributes proximal intentions to some of the objects in the movie. I further argue that these attributions are unrevisable in a certain sense and that this result can be used as part of an argument that these attributions are not post-perceptual thoughts. Finally, I suggest that if these attributions are visual experiences, and more particularly visual illusions, their unrevisability can be satisfyingly explained by appealing to the mechanisms which underlie visual illusions more generally.

    Theoretical, Measured and Subjective Responsibility in Aided Decision Making

    When humans interact with intelligent systems, their causal responsibility for outcomes becomes equivocal. We analyze the descriptive abilities of a newly developed responsibility quantification model (ResQu) to predict actual human responsibility and perceptions of responsibility in the interaction with intelligent systems. In two laboratory experiments, participants performed a classification task. They were aided by classification systems with different capabilities. We compared the predicted theoretical responsibility values to the actual measured responsibility participants took on and to their subjective rankings of responsibility. The model predictions were strongly correlated with both measured and subjective responsibility. A bias existed only when participants with poor classification capabilities relied less than optimally on a system that had superior classification capabilities and assumed higher-than-optimal responsibility. The study implies that when humans interact with advanced intelligent systems, with capabilities that greatly exceed their own, their comparative causal responsibility will be small, even if formally the human is assigned major roles. Simply putting a human into the loop does not assure that the human will meaningfully contribute to the outcomes. The results demonstrate the descriptive value of the ResQu model to predict behavior and perceptions of responsibility by considering the characteristics of the human, the intelligent system, the environment, and some systematic behavioral biases. The ResQu model is a new quantitative method that can be used in system design and can guide policy and legal decisions regarding human responsibility in events involving intelligent systems.
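    The abstract above reports a strong correlation between model-predicted and measured responsibility. As a minimal illustrative sketch only (not the actual ResQu model, whose formulation is not given here, and with entirely hypothetical data), the comparison could look like computing a Pearson correlation between predicted and observed responsibility shares:

    ```python
    # Illustrative sketch: correlating hypothetical model-predicted
    # responsibility values with measured responsibility. The numbers
    # below are invented for demonstration; they are not study data.

    def pearson_r(xs, ys):
        """Pearson correlation coefficient of two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Hypothetical per-participant responsibility shares in [0, 1].
    predicted = [0.10, 0.25, 0.40, 0.60, 0.80]
    measured  = [0.15, 0.20, 0.45, 0.55, 0.85]

    print(round(pearson_r(predicted, measured), 3))
    ```

    With these toy values the correlation is high, mirroring the qualitative claim that predictions track measured responsibility.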

    Shall I trust you? From child-robot interaction to trusting relationships

    Studying trust in the context of human-robot interaction is of great importance given the increasing relevance and presence of robotic agents in various social settings, from educational to clinical. In the present study, we investigated the acquisition, loss and restoration of trust when preschool and school-age children played with either a human or a humanoid robot in vivo. The relationship between trust and the representation of the quality of attachment relationships, Theory of Mind, and executive function skills was also investigated. Additionally, to outline children’s beliefs about the mental competencies of the robot, we further evaluated the attribution of mental states to the interactive agent. In general, no substantial differences were found in children’s trust in the play-partner as a function of agency (human or robot). Nevertheless, 3-year-olds showed a trend toward trusting the human more than the robot, as opposed to 7-year-olds, who displayed the reverse pattern. These findings align with results showing that, for children aged 3 and 7 years, the cognitive ability to switch was significantly associated with trust restoration in the human and the robot, respectively. Additionally, supporting previous findings, a dichotomy was found between attribution of mental states to the human and robot and children’s behavior: while attributing significantly lower mental states to the robot than the human, in the trusting game children behaved similarly when they related to the human and the robot. Altogether, the results of this study highlight that comparable psychological mechanisms are at play when children are to establish a novel trustful relationship with a human and robot partner. Furthermore, the findings shed light on the interplay, during development, between children’s quality of attachment relationships and the development of a Theory of Mind, which act differently on trust dynamics as a function of the children’s age as well as the interactive partner’s nature (human vs. robot).

    EMPLOYEES’ CHALLENGES AND NEEDS FOR RESKILLING WHEN WORKING WITH SOFTWARE ROBOTS

    Software robots are becoming increasingly adopted in different industries. The growing rate of automation will affect more and more people and will result in changes in businesses of all sizes. Impacts can be observed at both the organizational and individual employee levels. A growing number of studies of software robots’ advantages and disadvantages on an organizational or industry-specific level have been carried out. However, there is limited knowledge about employees’ perceptions of the challenges and new skills needed when working with software robots. This study addresses this gap by using open-ended questionnaire responses from employees who have worked with software robots. This study aims to contribute to prior knowledge by identifying comprehensive sets of subcategories for employees’ perceptions of (1) the challenges as well as (2) the new skills needed when working with software robots. As practical implications, our findings can help organizations and individual workers prepare for the implementation and use of software robots by identifying potential challenges, planning for overcoming such challenges via suitable skills, and providing training for employees. According to our findings, many respondents mentioned learning new technical skills as a challenge, and because they have had to work with software robots, they have acquired additional knowledge, such as basic programming skills. Challenges related to reskilling constitute an interesting topic for further research.

    Toward a Probabilistic Approach to Acquiring Information from Human Partners Using Language

    Our goal is to build robots that can robustly interact with humans using natural language. This problem is extremely challenging because human language is filled with ambiguity, and furthermore, the robot's model of the environment might be much more limited than the human partner's. When humans encounter ambiguity in dialog with each other, a key strategy to resolve it is to ask clarifying questions about what they do not understand. This paper describes an approach for enabling robots to take the same approach: asking the human partner clarifying questions about ambiguous commands in order to infer better actions. The robot fuses information from the command, the question, and the answer by creating a joint probabilistic graphical model in the Generalized Grounding Graph framework. We demonstrate that by performing inference using information from the command, question, and answer, the robot is able to infer object groundings and follow commands with higher accuracy than by using the command alone.
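    The core idea in the abstract above, combining evidence from a command and a clarifying answer to disambiguate an object grounding, can be sketched very roughly as follows. This is a toy illustration of evidence fusion under an independence assumption, not the Generalized Grounding Graph model itself; the candidate names and likelihood values are invented:

    ```python
    # Toy sketch: fuse per-candidate likelihoods from multiple evidence
    # sources (command, clarifying answer) by multiplying them into a
    # prior and normalizing. All names and numbers are hypothetical.

    def fuse(prior, likelihoods_list):
        """Return a normalized posterior over candidate groundings."""
        posterior = dict(prior)
        for lik in likelihoods_list:
            for g in posterior:
                posterior[g] *= lik[g]
        z = sum(posterior.values())
        return {g: p / z for g, p in posterior.items()}

    # Hypothetical candidate groundings for an ambiguous "pick up the box".
    prior       = {"red_box": 0.5, "blue_box": 0.5}
    command_lik = {"red_box": 0.6, "blue_box": 0.4}  # command alone: ambiguous
    answer_lik  = {"red_box": 0.9, "blue_box": 0.1}  # answer: "the red one"

    posterior = fuse(prior, [command_lik, answer_lik])
    print(max(posterior, key=posterior.get))  # most probable grounding
    ```

    Even in this toy version, folding in the answer sharpens a near-even posterior into a confident one, which mirrors the paper's claim that command, question, and answer together beat the command alone.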

    Children’s Concept of Animacy: The Humanoid Robot and the Robotic Human

    Advances in robotics and artificial intelligence introduce increasingly capable robots to society. Such skilled robots bring into question the importance of an object’s capabilities in determining its animacy. This is particularly salient among children, who make the most mistakes in distinguishing animate and inanimate objects. It is common for children to give inanimate objects animate traits; in this way, able-bodied robots could become “animate”. The purpose of this study is to address whether or not providing an object with enough behavioral and intellectual capabilities can change the object’s animacy. A total of 90 children (ages 3, 5, and 7) will interact with either a human or a robot that displays different levels of ability. Following each interaction period, children will be asked to attribute biological and psychological characteristics to the person or robot. Because children have been shown to link animacy with certain traits, the addition (or subtraction) of enough of these traits may come to change the overall animacy of the object.
