
    Integration of the Alexa assistant as a voice interface for robotics platforms

    Virtual assistants such as Cortana or Google Assistant are becoming familiar devices in everyday environments, where they are used to control real devices through natural language. This paper extends this application scenario and describes the use of Amazon's Alexa assistant, through an Echo Dot device, to drive the behaviour of a robotic platform. The paper focuses on describing the technologies employed to set up such an ecosystem. Significantly, the proposed architecture is based, from the remote server to the on-board controllers, on Low-Energy (LE) hardware and a scalable software platform. This approach will make it easier for programmers to integrate different platforms, e.g. mobile-based applications to control robots or home-made devices. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
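    A minimal sketch of how such an ecosystem could be wired, assuming an Alexa custom skill whose recognized intent is forwarded to the robot's on-board controller over MQTT (the intent name MoveRobotIntent, the topic robot/commands, and the broker hostname are illustrative assumptions; the paper does not publish its implementation details):

```python
# Hypothetical sketch: forward an Alexa custom-skill intent to a robot over MQTT.
# Intent/slot names, topic, and broker hostname are assumptions, not from the paper.
import json
import paho.mqtt.publish as publish

MQTT_BROKER = "robot-gateway.local"   # assumed hostname of the on-board controller
COMMAND_TOPIC = "robot/commands"      # assumed topic consumed by the robot firmware

def handle_alexa_request(event: dict) -> dict:
    """Map a spoken intent (e.g. 'move forward') to a robot command message."""
    request = event["request"]
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "MoveRobotIntent":
        direction = request["intent"]["slots"]["direction"]["value"]
        # Publish the command for the robot controller to execute.
        publish.single(COMMAND_TOPIC,
                       json.dumps({"action": "move", "direction": direction}),
                       hostname=MQTT_BROKER)
        speech = f"Moving {direction}"
    else:
        speech = "Sorry, I did not understand that command."
    # Minimal Alexa custom-skill response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```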

    Conversational affective social robots for ageing and dementia support

    Socially assistive robots (SAR) hold significant potential to assist older adults and people with dementia in human engagement and clinical contexts by supporting mental health and independence at home. While SAR research has recently experienced prolific growth, long-term trust, clinical translation and patient benefit remain immature. Affective human-robot interactions are unresolved and the deployment of robots with conversational abilities is fundamental for robustness and human-robot engagement. In this paper, we review the state of the art within the past two decades, design trends, and current applications of conversational affective SAR for ageing and dementia support. A horizon scanning of AI voice technology for healthcare, including ubiquitous smart speakers, is further introduced to address current gaps inhibiting home use. We discuss the role of user-centred approaches in the design of voice systems, including the capacity to handle communication breakdowns for effective use by target populations. We summarise the state of development in interactions using speech and natural language processing, which forms a baseline for longitudinal health monitoring and cognitive assessment. Drawing from this foundation, we identify open challenges and propose future directions to advance conversational affective social robots for: 1) user engagement, 2) deployment in real-world settings, and 3) clinical translation

    Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making

    The era of AI-based decision-making fast approaches, and anxiety is mounting about when, and why, we should keep “humans in the loop” (“HITL”). Thus far, commentary has focused primarily on two questions: whether, and when, keeping humans involved will improve the results of decision-making (making them safer or more accurate), and whether, and when, non-accuracy-related values—legitimacy, dignity, and so forth—are vindicated by the inclusion of humans in decision-making. Here, we take up a related but distinct question, which has eluded the scholarship thus far: does it matter if humans appear to be in the loop of decision-making, independent from whether they actually are? In other words, what is at stake in the disjunction between whether humans in fact have ultimate authority over decision-making versus whether humans merely seem, from the outside, to have such authority? Our argument proceeds in four parts. First, we build our formal model, enriching the HITL question to include not only whether humans are actually in the loop of decision-making, but also whether they appear to be so. Second, we describe situations in which the actuality and appearance of HITL align: those that seem to involve human judgment and actually do, and those that seem automated and actually are. Third, we explore instances of misalignment: situations in which systems that seem to involve human judgment actually do not, and situations in which systems that hold themselves out as automated actually rely on humans operating “behind the curtain.” Fourth, we examine the normative issues that result from HITL misalignment, arguing that it challenges individual decision-making about automated systems and complicates collective governance of automation

    Design and implementation of an automatic speech recognition interface for a Multipurpose Assistant Robot (MASHI)

    This project focuses on the initial stage of the work and on the study of online services in order to design and implement an automatic speech recognition system for the robotic platform MASHI. The system will be implemented on two Raspberry Pi 3 boards using a master-slave structure. Online resources and services will be used to maintain the wireless connection and control of the platform. As for the desired functionality, the automatic speech recognition system will serve as an efficient interface for interaction between MASHI and people inside public buildings; interaction of the system with other interconnected devices is also considered
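    A rough sketch of the master node under the master-slave split described above, assuming the Python SpeechRecognition library for the online ASR service and a plain TCP link to the slave Raspberry Pi (the slave's address, the port, and the library choice are assumptions, not taken from the project):

```python
# Hypothetical master-node sketch: recognize an utterance via an online ASR
# service and forward the transcript to the slave Raspberry Pi over TCP.
import socket
import speech_recognition as sr

SLAVE_ADDR = ("192.168.1.42", 5050)   # assumed address/port of the slave Raspberry Pi

recognizer = sr.Recognizer()

def listen_and_forward() -> None:
    """Recognize one utterance and send the text to the slave controller."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    try:
        text = recognizer.recognize_google(audio)   # online ASR (Google Web Speech API)
    except sr.UnknownValueError:
        return  # nothing intelligible was said
    with socket.create_connection(SLAVE_ADDR) as conn:
        conn.sendall(text.encode("utf-8"))

if __name__ == "__main__":
    while True:
        listen_and_forward()
```
    The slave node would then listen on that port and translate each received transcript into platform or device commands.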

    Autonomous Exchanges: Human-Machine Autonomy in the Automated Media Economy

    Contemporary discourses and representations of automation stress the impending “autonomy” of automated technologies. From pop culture depictions to corporate white papers, the notion of autonomous technologies tends to enliven dystopic fears about the threat to human autonomy or utopian potentials to help humans experience unrealized forms of autonomy. This project offers a more nuanced perspective, rejecting contemporary notions of automation as inevitably vanquishing or enhancing human autonomy. Through a discursive analysis of industrial “deep texts” that offer considerable insights into the material development of automated media technologies, I argue for contemporary automation to be understood as a field for the exchange of autonomy, a human-machine autonomy in which autonomy is exchanged as cultural and economic value. Human-machine autonomy is a shared condition among humans and intelligent machines shaped by economic, legal, and political paradigms with a stake in the cultural uses of automated media technologies. By understanding human-machine autonomy, this project illuminates complications of autonomy emerging from interactions with automated media technologies across a range of cultural contexts

    Artificial Intelligence Service Agents: Role of Parasocial Relationship

    Increased use of artificial intelligence service agents (AISA) has been associated with improvements in AISA service performance. Whilst there is consensus that unique forms of attachment develop between users and AISA that manifest as parasocial relationships (PSRs), the literature is less clear about the AISA service attributes and how they influence PSR and the users’ subjective well-being. Based on a dataset collected from 408 virtual assistant users from the US, this research develops and tests a model that can explain how AISA-enabled service influences subjective well-being through the mediating effect of PSR. Findings also indicate significant gender and AISA experience differences in the PSR effect on subjective well-being. This study advances current understanding of AISA in service encounters by investigating the mediating role of PSR in AISA’s effect on users’ subjective well-being. We also discuss managerial implications for practitioners who are increasingly using AISA for delivering customer service

    Uncovering Drivers for the Integration of Dark Patterns in Conversational Agents

    Today, organizations increasingly utilize conversational agents (CAs), which are smart technologies that converse in a human-to-human interaction style. CAs are very effective in guiding users through digital environments. However, this makes them natural targets for dark patterns, which are user interface design elements that infringe on user autonomy by fostering uninformed decisions. Integrating dark patterns in CAs has tremendous impacts on supposedly free user choices in the digital space. Thus, we conducted a qualitative study consisting of semi-structured interviews with developers to investigate drivers of dark patterns in CAs. Our findings reveal that six drivers for the implementation of dark patterns exist. The technical drivers include heavy guidance of CAs during the conversation and the CAs' data collection potential. Additionally, organizational drivers are assertive stakeholder dominance and time pressure during the development process. Team drivers incorporate a deficient user understanding and an inexperienced team

    Robotix-Academy Conference for Industrial Robotics (RACIR) 2018

    1st e

    Shopping with Voice Assistants: How Empathy Affects Individual and Family Decision-Making Outcomes

    Artificial intelligence (AI)-enabled voice assistants (VAs) such as Amazon Alexa increasingly assist shopping decisions and exhibit empathic behavior. The advancement of empathic AI raises concerns about machines nudging consumers into purchasing undesired or unnecessary products. Yet, it is unclear how the machine’s empathic behavior affects consumer responses and decision-making outcomes during voice-enabled shopping. This article draws from the service robot acceptance model (sRAM) and social response theory (SRT) and presents an individual-session experiment where families (vs. individuals) complete actual shopping tasks using an ad-hoc Alexa app featuring high (vs. standard) empathic capabilities. We apply the experimental conditions as moderators to the structural model, bridging selected functional, social-emotional, and relational variables. Our framework collocates affective empathy, explicates the bases of consumers’ beliefs, and predicts behavioral outcomes. Findings demonstrate (i) an increase in consumers’ perceptions, beliefs, and adoption intentions with empathic Alexa, (ii) a positive response to empathic Alexa that holds constant in family settings, and (iii) an interaction effect only on the functional model dimensions, whereby families show greater responses to empathic Alexa while individuals respond more to standard Alexa