
    Children's age influences their use of biological and mechanical questions towards a humanoid

    Complex autonomous interactions, biomimetic appearances, and responsive behaviours are increasingly seen in social robots. These features, by design or otherwise, may substantially influence young children’s beliefs about a robot’s animacy. Young children are believed to hold naive theories of animacy and can miscategorise objects as living agents with intentions; however, this develops with age into a biological understanding. Prior research indicates that children frequently categorise a responsive humanoid as a hybrid of person and machine, although with age children tend towards classifying the humanoid as more machine-like. Our current research explores this phenomenon using an unobtrusive method: recording children’s conversational interaction with the humanoid and classifying indications of animacy beliefs in the questions children ask. Our results indicate that established findings are not an artefact of prior research methods: young children tend to converse with the humanoid as if it is more animate than older children do.

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through channels such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Children's perception and interpretation of robots and robot behaviour

    The world of robotics, like that of all technology, is changing rapidly (Melson et al., 2009). As part of an interdisciplinary project investigating the emergence of artificial culture in robot societies, this study set out to examine children’s perception of robots and interpretation of robot behaviour. This thesis is situated in the interdisciplinary field of human–robot interaction, drawing on research from sociology and psychology as well as engineering and ethics. The study was divided into four phases. Phase one involved children from two primary schools drawing a picture and writing a story about their robot. In phase two, children observed e-puck robots interacting and were asked questions regarding the function and purpose of the robots’ actions. Phase three entailed data collection at a public event, the Manchester Science Festival, where three activities formed the focus: ‘XRay Art Under Your Skin’, ‘Swarm Robots’ and ‘Build-a-Bugbot’. In the first activity, children were asked to draw the components of a robot and were then asked questions about their drawings. During the second activity, children’s comments were noted as they watched e-puck robot demonstrations. In the third activity, children were shown images, asked whether each image showed a robot or a ‘no-bot’, and then prompted to explain their answers. Phase four involved children identifying patterns of behaviour amongst e-pucks; this phase was undertaken as a pilot for the ‘open science’ approach to research used by the wider project within which this PhD was nested. Consistent with the existing literature, children endowed robots with both animate and inanimate characteristics, holding multiple understandings of robots simultaneously. The notion of control appeared to be important in children’s conception of animacy: the results indicated that children’s perceptions of the locus of control play an important role in whether they view robots as autonomous agents or controllable entities. The ways in which children perceive and give meaning to robots and robot behaviour will potentially come to characterise a particular generation. Therefore, research should not only concentrate on the impact of these technologies on children but should also capture children’s perceptions and viewpoints, to better understand the impact of the changing technological world on children’s lives.

    Human-centred design methods : developing scenarios for robot assisted play informed by user panels and field trials

    This article describes the user-centred development of play scenarios for robot-assisted play, as part of the multidisciplinary IROMEC project that develops a novel robotic toy for children with special needs. The project investigates how robotic toys can become social mediators, encouraging children with special needs to discover a range of play styles, from solitary to collaborative play (with peers, carers/teachers, parents, etc.). This article explains the developmental process of constructing relevant play scenarios for children with different special needs. Results are presented from consultation with a panel of experts (therapists, teachers, parents) who advised on the play needs of the various target user groups and who helped investigate how robotic toys could be used as a play tool to assist in the children’s development. Examples from experimental investigations are provided which have informed the development of scenarios throughout the design process. We conclude by pointing out the potential benefit of this work to a variety of research projects and applications involving human–robot interaction.

    Humanization of robots: is it really such a good idea?

    The aim of this review was to examine the pros and cons of humanizing social robots from a psychological perspective. As such, we had six goals. First, we defined what social robots are. Second, we clarified the meaning of humanizing social robots. Third, we presented the theoretical backgrounds for promoting humanization. Fourth, we reviewed empirical results on the positive and negative effects of humanization on human–robot interaction (HRI). Fifth, we presented some of the political and ethical problems raised by the humanization of social robots. Lastly, we discussed the overall effects of the humanization of robots in HRI and suggested new avenues of research and development.

    People do not always know best: Preschoolers’ trust in social robots versus humans

    The main goal of my thesis was to investigate how 3- and 5-year-old children learn from robots versus humans using a selective trust paradigm. Children’s conceptualization of robots was also investigated. By using robots, which lack many of the social characteristics human informants possess by default, these studies sought to test young children’s reliance on epistemic characteristics conservatively. In Study 1, a competent humanoid robot, Nao, and an incompetent human, Ina, were presented to children. Both informants labelled familiar objects, such as a ball, with Nao labelling them correctly and Ina labelling them incorrectly. Next, both informants labelled novel items with nonsense labels, and children were asked what the novel item was called. Children were also asked what should go inside robots, something biological or something mechanical. Study 2 followed the same paradigm as Study 1, with the only change being the robot used, now the non-humanoid Cozmo; eliminating the human-like appearance of the robot made for an even more conservative test than in Study 1. Both Studies 1 and 2 found that 3-year-old children learned novel words equally from the robot and the human, regardless of the robot’s morphology. The 3-year-old children were also confused about both robots’ internal properties, attributing mechanical and biological insides to the robots equally. In contrast, the 5-year-olds in both studies preferred to learn from the accurate robot over the inaccurate human. The 5-year-olds also learned from both robots despite understanding that the robots are different from themselves; they attributed mechanical insides to both Nao and Cozmo over biological insides. Study 3 further investigated 3-year-olds’ ambivalence regarding their trust judgements, that is, whom they choose to learn from. Instead of word learning, the robot demonstrated competence through pointing: the robot would accurately point at a toy inside a transparent box, while the human would point at an empty box. Next, both informants pointed at opaque boxes and the child was asked where the toy was located. Neither informant demonstrated the ability to speak, as speech is a salient social characteristic. The 3-year-olds were still at chance, equally endorsing the robot’s and the human’s pointing. This suggests that goal-directedness and autonomous movement may be the most important characteristics used to signal agency for young children. The 3-year-olds were also still unsure about the robot’s biology, whereas they correctly identified the human as biological, suggesting that robots are confusing for children due to their dual nature as animate and yet not alive. This thesis shows that by the age of 5, children are willing and able to learn from a robot. These studies further add to the selective trust literature and have implications for educational settings.

    Children's Age Influences Their Perceptions of a Humanoid Robot as Being Like a Person or Machine

    Models of children’s cognitive development indicate that as children grow, they transition from using behavioral cues to using knowledge of biology to determine a target’s animacy. This paper explores the impact of children’s age and of a humanoid robot’s expressive behavior on their perceptions of the robot, using a simple, low-demand measure. Results indicate that children’s age influences their perception of the robot’s status as a person, a machine, or a composite of the two. Younger children (aged 6) tended to rate the robot as being like a person to a substantially greater extent than older children (aged 7) did. However, additional facially expressive cues from the robot did not substantively affect children’s responses. Implications for future HRI studies are discussed.

    Why Do Humans Imagine Robots?

    This project analyzes why people are intrigued by the thought of robots and why they choose to create them in both reality and fiction. Numerous movies, works of literature, news articles, online journals, surveys, and interviews were used in determining the answer.