    Optimising robot personalities for symbiotic interaction

    The Expressive Agents for Symbiotic Education and Learning (EASEL) project will explore human-robot symbiotic interaction (HRSI) with the aim of developing an understanding of symbiosis over long-term tutoring interactions. The EASEL system will be built upon an established and neurobiologically grounded architecture, Distributed Adaptive Control (DAC). Here we present the design of an initial experiment in which our facially expressive humanoid robot will interact with children at a public exhibition. We discuss the range of measurements we will employ to explore the effects our robot's expressive ability has on interaction with children during HRSI, with the aim of contributing optimal robot personality parameters to the final EASEL model.

    Children's Age Influences Their Perceptions of a Humanoid Robot as Being Like a Person or Machine

    Models of children’s cognitive development indicate that as children grow, they transition from using behavioral cues to using knowledge of biology to determine a target’s animacy. This paper explores the impact of children’s ages and a humanoid robot’s expressive behavior on their perceptions of the robot, using a simple, low-demand measure. Results indicate that children’s ages influence their perceptions of the robot’s status as a person, a machine, or a composite. Younger children (aged 6) tended to rate the robot as being like a person to a substantially greater extent than older children (aged 7) did. However, additional facially expressive cues from the robot did not substantively affect children’s responses. Implications for future HRI studies are discussed.

    SEAI: Social Emotional Artificial Intelligence Based on Damasio's Theory of Mind

    A socially intelligent robot must be capable of extracting meaningful information in real time from the social environment and reacting accordingly with coherent, human-like behaviour. Moreover, it should be able to internalise this information, reason on it at a higher abstract level, build its own opinions independently, and then automatically bias its decision-making according to its unique experience. In recent decades, neuroscience research has highlighted the link between the evolution of such complex behaviour and the evolution of a certain level of consciousness, one that cannot be separated from a body that feels emotions as discriminants and prompters. In order to develop cognitive systems for social robotics with greater human-likeness, we used an "understanding by building" approach to model and implement a well-known theory of mind in the form of an artificial intelligence, and we tested it on a sophisticated robotic platform. The presented system is SEAI (Social Emotional Artificial Intelligence), a cognitive system specifically conceived for social and emotional robots. It is designed as a bio-inspired, highly modular, hybrid system with emotion modelling and high-level reasoning capabilities. It follows the deliberative/reactive paradigm, in which a knowledge-based expert system handles the high-level symbolic reasoning, while a more conventional reactive layer is delegated to the low-level processing and control. The SEAI system is also enriched by a model which simulates Damasio's theory of consciousness and the theory of somatic markers. After a review of similar bio-inspired cognitive systems, we present the scientific foundations of the SEAI framework and their computational formalisation. Then, a deeper technical description of the architecture is provided, underlining the numerous parallels with the human cognitive system. Finally, the influence of artificial emotions and feelings, and their link with the robot's beliefs and decisions, is tested in a physical humanoid involved in Human-Robot Interaction (HRI).
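
    The abstract describes the deliberative/reactive split and the somatic-marker bias only at a high level. The Python fragment below is a minimal, hypothetical sketch of that idea, not the SEAI implementation: a reactive layer turns a stimulus into a symbolic event, a running affective state is updated by a simple appraisal rule, and that state biases the deliberative action choice. Every name in it (SomaticState, ReactiveLayer, DeliberativeLayer, seai_step) is illustrative.

    from dataclasses import dataclass

    @dataclass
    class SomaticState:
        valence: float = 0.0  # running emotional appraisal, kept in [-1, 1]

    class ReactiveLayer:
        # Low-level processing: map a raw stimulus to a symbolic event.
        def perceive(self, stimulus):
            return {"event": stimulus, "salience": 1.0 if stimulus == "smile" else 0.5}

    class DeliberativeLayer:
        # Stand-in for the knowledge-based expert system: reasons over events.
        def __init__(self):
            self.beliefs = []

        def decide(self, event, somatic):
            self.beliefs.append(event)  # internalise the experience
            # Somatic marker biases action selection: negative affect favours withdrawal.
            return "approach" if somatic.valence >= 0 else "withdraw"

    def seai_step(stimulus, somatic, reactive, deliberative):
        event = reactive.perceive(stimulus)
        # Appraisal: pleasant stimuli raise valence, unpleasant ones lower it.
        delta = 0.2 if stimulus == "smile" else -0.2
        somatic.valence = max(-1.0, min(1.0, somatic.valence + delta))
        return deliberative.decide(event, somatic)

    reactive, deliberative, somatic = ReactiveLayer(), DeliberativeLayer(), SomaticState()
    for s in ["smile", "frown", "frown"]:
        print(s, "->", seai_step(s, somatic, reactive, deliberative))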

    Agency in the Internet of Things

    This report summarises and extends the work done for the task force on the IoT, which concluded in 2012. In response to a DG CNECT request, the JRC studied this emergent technology following methodologies from the Science and Technology Studies field. The aim of this document is therefore to present and explore, on the basis of present-day conceptions of relevant values, rights and norms, some of the “ethical issues” arising from the research, development and deployment of the IoT, focusing on agency, autonomy and social justice. We start by exploring the types of imaginaries that seem to be entrenched in and inspiring the development of the IoT, and how they are portrayed in “normal” communication from corporations and promoters to the ordinary citizen (chapter 2). We report the empirical work we have conducted, namely the JRC contribution to the limited public debate initiated by the European Commission via the Your Voice portal during the spring of 2012 (chapter 3) and an empirical exercise involving participants of two IoT conferences (chapter 4). The latter exercise sought to illustrate how our notions of goodness, trust, relationships, agency and autonomy are negotiated through the appropriation of unnoticed ordinary objects; this contributes to the discussion of the ethical issues at stake in the emerging IoT vision beyond the right to privacy, data protection and security. Furthermore, based on a literature review, the report reflects on two of the main ethical issues that arise with the IoT vision, agency (and autonomy) and social justice (chapter 5), before examining governance alternatives for the ethical issues raised (chapter 6). JRC.G.7 - Digital Citizen Security

    Optimising Outcomes of Human-Agent Collaboration using Trust Calibration

    As collaborative agents are implemented within everyday environments and the workforce, user trust in these agents becomes critical to consider. Trust affects user decision making, rendering it an essential component to consider when designing for successful Human-Agent Collaboration (HAC). The purpose of this work is to investigate the relationship between user trust and decision making, with the overall aim of providing a trust calibration methodology to achieve the goals and optimise the outcomes of HAC. Recommender systems are used as a testbed for investigation, offering insight into human collaboration in dyadic decision domains. Four studies are conducted, spanning in-person, online, and simulation experiments. The first study provides evidence of a relationship between user perception of a collaborative agent and trust. Outcomes of the second study demonstrate that initial trust can be used to predict task outcome during HAC, with Signal Detection Theory (SDT) introduced as a method to interpret user decision making in-task. The third study provides evidence that the implementation of different features within a single agent's interface influences user perception and trust, subsequently impacting the outcomes of HAC. Finally, a computational trust calibration methodology harnessing a Partially Observable Markov Decision Process (POMDP) model and SDT is presented and assessed, providing an improved understanding of the mechanisms governing user trust and its relationship with decision making and collaborative task performance during HAC. The contributions of this work address important gaps within the HAC literature. The implications of the proposed methodology and its application to alternative domains are identified and discussed.
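
    The abstract names Signal Detection Theory without giving its formulation. As a sketch, the standard SDT quantities are sensitivity d' = z(H) - z(FA) and criterion c = -(z(H) + z(FA))/2, where H and FA are the hit and false-alarm rates; treating "accepting a good recommendation" as a hit and "accepting a bad one" as a false alarm is our illustrative framing, not the thesis's definition. In Python:

    from statistics import NormalDist

    def sdt_measures(hits, misses, false_alarms, correct_rejections):
        # d' (sensitivity) and c (response bias) from raw decision counts.
        z = NormalDist().inv_cdf
        # Log-linear correction keeps rates away from 0 and 1, where z diverges.
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        d_prime = z(hit_rate) - z(fa_rate)
        criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # > 0 means a conservative responder
        return d_prime, criterion

    # Example: a user who accepts most good recommendations and few bad ones.
    print(sdt_measures(hits=40, misses=10, false_alarms=5, correct_rejections=45))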

    Prototyping Relational Things That Talk: A Discursive Design Strategy for Conversational AI Systems

    This practice-based research inquiry explores the implications of conversational Artificial Intelligence (AI) systems, ‘relational things that talk’, for the way people experience the world. It responds directly to the pervasive lack of ethical design frameworks for commercial AI systems, compounded by limited transparency, ubiquitous authority, embedded bias and the absence of diversity in the development process. The effect produced by relational things that talk upon the feelings, thoughts or intentions of the user is here defined as the ‘perlocutionary effect’ of conversational AI systems. This effect is constituted by these systems’ ‘relationality’ and ‘persuasiveness’, propagated by the system’s embedded bias and ‘hybrid intentions’, relative to a user’s susceptibility. The proposition of the perlocutionary effect frames the central practice of this thesis and its contribution to new knowledge, which manifests as four discursive prototypes developed through a participatory method. Each prototype demonstrates the factors that constitute and propagate the perlocutionary effect. These prototypes also function as instruments which actively engage participants in a counter-narrative as a form of activism. ‘This Is Where We Are’ (TIWWA) explores the persuasiveness and relationality of relational things powered by AI behavioural algorithms and directed by pools of user data. ‘Emoti-OS’ iterates on the findings from TIWWA and analyses the construction of relationality through simulated affect, personality and collective (artificial) emotional intelligence. ‘Women Reclaiming AI’ (WRAI) demonstrates stereotyping and bias in commercial conversational AI developments. The last prototype, ‘The Infinite Guide’, synthesises and tests the findings from the three previous prototypes to substantiate the overall perlocutionary effect of conversational AI systems. In so doing, this inquiry proposes the appropriation of relational things that talk as a discursive design strategy, extended with a participatory method, for new forms of cultural expression and social action, which activate people to demand more ethical AI systems.

    Foundations of Trusted Autonomy

    Trusted Autonomy; Automation Technology; Autonomous Systems; Self-Governance; Trusted Autonomous Systems; Design of Algorithms and Methodologies

    Responsible AI in Africa

    This open access book contributes to the discourse of Responsible Artificial Intelligence (AI) from an African perspective. It is a unique collection that brings together prominent AI scholars to discuss AI ethics from theoretical and practical African perspectives, and it makes a case for African values, interests, expectations and principles to underpin the design, development and deployment (DDD) of AI in Africa. The book is a first in that it pays attention to the socio-cultural contexts of Responsible AI in a way that is sensitive to African cultures and societies. It makes an important contribution to the global AI ethics discourse, which often neglects AI narratives from Africa despite growing evidence of DDD in many domains. Nine original contributions provide useful insights to advance the understanding and implementation of Responsible AI in Africa, including discussions of the epistemic injustice of global AI ethics, opportunities and challenges, an examination of AI co-bots and chatbots in an African workplace, gender and AI, a consideration of African philosophies such as Ubuntu in the application of AI, African AI policy, and a look towards a future of Responsible AI in Africa.