3,520 research outputs found
Robots as restaurant employees-A double-barrelled detective story
The paper evaluates the perceptions of Turkish restaurant managers and customers towards service robots. The sample includes 26 managers and 32 customers. Data were collected through semi-structured interviews. The findings reveal that robots are suitable for dirty, dull, dangerous and repetitive tasks. Customers have mostly positive attitudes towards robots, while managers' attitudes are mostly negative. However, respondents agree that robots improve service quality. A mixed service delivery system based on human-robot collaboration is perceived as the most appropriate. Customers are willing to pay more for the robotic service experience. Theoretical and managerial implications are discussed as well.
Final report key contents: main results accomplished by the EU-Funded project IM-CLeVeR - Intrinsically Motivated Cumulative Learning Versatile Robots
This document presents the main scientific and technological achievements of the project IM-CLeVeR. The document is organised as follows: 1. Project executive summary: a brief overview of the project vision, objectives and keywords. 2. Beneficiaries of the project and contacts: a list of the project Teams (partners), Team Leaders and contacts. 3. Project context and objectives: the vision of the project and its overall objectives. 4. Overview of work performed and main results achieved: a one-page overview of the main results of the project. 5. Overview of main results per partner: a bullet-point list of the main results per partner. 6. Main achievements in detail, per partner: a thorough explanation of the main results per partner (including collaborative work), with references to the main publications supporting them.
Exploring the Design Space of Extra-Linguistic Expression for Robots
In this paper, we explore the new design space of extra-linguistic cues
inspired by graphical tropes used in graphic novels and animation to enhance
the expressiveness of social robots. To achieve this, we identified a set of
cues that can be used to generate expressions, including smoke/steam/fog, water
droplets, and bubbles. We prototyped devices that can generate these fluid
expressions for a robot and conducted design sessions where eight designers
explored the use and utility of the cues in conveying the robot's internal
states in various design scenarios. Our analysis of the 22 designs, the
associated design justifications, and the interviews with designers revealed
patterns in how each cue was used, how they were combined with nonverbal cues,
and where the participants drew their inspiration from. These findings informed
the design of an integrated module called EmoPack, which can be used to augment
the expressive capabilities of any robot platform.
Do You Feel Me?: Learning Language from Humans with Robot Emotional Displays
In working towards accomplishing a human-level acquisition and understanding of language, a robot must meet two requirements: the ability to learn words from interactions with its physical environment, and the ability to learn language from people in settings for language use, such as spoken dialogue. The second requirement poses a problem: If a robot is capable of asking a human teacher well-formed questions, it will lead the teacher to provide responses that are too advanced for a robot, which requires simple inputs and feedback to build word-level comprehension.
In a live interactive study, we tested the hypothesis that emotional displays are a viable solution to this problem of how to communicate without relying on language the robot doesn't--indeed, cannot--actually know. Emotional displays can relate the robot's state of understanding to its human teacher, and are developmentally appropriate for the most common language acquisition setting: an adult interacting with a child. For our study, we programmed a robot to independently explore the world and elicit relevant word references and feedback from the participants, who were confronted with two robot settings: a setting in which the robot displays emotions, and a second setting where the robot focuses on the task without displaying emotions, which also tests whether emotional displays lead a participant to make incorrect assumptions regarding the robot's understanding. Analyzing the results from the surveys and the Grounded Semantics classifiers, we discovered that the use of emotional displays increases the number of inputs provided to the robot, an effect that's modulated by the ratio of positive to negative emotions that were displayed.
Autonomous behaviour in tangible user interfaces as a design factor
PhD Thesis
This thesis critically explores the design space of autonomous and actuated artefacts, considering
how autonomous behaviours in interactive technologies might shape and influence users’
interactions and behaviours.
Since the invention of gearing and clockwork, mechanical devices have been built that both
fascinate and intrigue people through their mechanical actuation. There seems to be something
magical about moving devices, which draws our attention and piques our interest. Progress in the
development of computational hardware is allowing increasingly complex commercial products
to be available to broad consumer-markets. New technologies emerge very fast, ranging from
personal devices with strong computational power to diverse user interfaces, like multi-touch
surfaces or gestural input devices. Electronic systems are becoming smaller and smarter, as they
comprise sensing, controlling and actuation. From this, new opportunities arise in integrating
more sensors and technology in physical objects.
These trends raise some specific questions around the impacts smarter systems might have
on people and interaction: how do people perceive smart systems that are tangible and what
implications does this perception have for user interface design? Which design opportunities are
opened up through smart systems? There is a tendency in humans to attribute life-like qualities
to inanimate objects, which evokes social behaviour towards technology. Maybe it would be
possible to build user interfaces that utilise such behaviours to motivate people towards frequent
use, or even motivate them to build relationships in which the users care for their devices. The
aim of such interfaces is not to increase efficiency, but to create interfaces that are more
engaging to interact with and excite people to bond with these tangible objects.
This thesis sets out to explore autonomous behaviours in physical interfaces. More specifically, I
am interested in the factors that make a user interpret an interface as autonomous. Through a
review of literature concerned with animated objects, autonomous technology and robots, I have
mapped out a design space exploring the factors that are important in developing autonomous
interfaces. Building on this and utilising workshops conducted with other researchers, I have
developed a framework that identifies key elements for the design of Tangible Autonomous
Interfaces (TAIs). To validate the dimensions of this framework and to further unpack the
impacts on users of interacting with autonomous interfaces I have adopted a ‘research through
design’ approach. I have iteratively designed and realised a series of autonomous, interactive
prototypes, which demonstrate the potential of such interfaces to establish themselves as social
entities. Through two deeper case studies, consisting of an actuated helium balloon and desktop
lamp, I provide insights into how autonomy could be implemented into Tangible User Interfaces.
My studies revealed that through their autonomous behaviour (guided by the framework) these
devices established themselves, in interaction, as social entities. They furthermore turned out to
be acceptable, especially if people were able to find a purpose for them in their lives. This thesis
closes with a discussion of findings and provides specific implications for design of autonomous
behaviour in interfaces.
Exploring the role of trust and expectations in CRI using in-the-wild studies
Studying interactions of children with humanoid robots in familiar spaces in natural contexts has become a key issue for social robotics. To fill this need, we conducted several Child-Robot Interaction (CRI) events with the Pepper robot in Polish and Japanese kindergartens. In this paper, we explore the role of trust and expectations towards the robot in determining the success of CRI. We present several observations from the video recordings of our CRI events and the transcripts of free-format question-answering sessions with the robot using the Wizard-of-Oz (WOZ) methodology. From these observations, we identify children's behaviors that indicate trust (or lack thereof) towards the robot, e.g., challenging the robot or interacting with it physically. We also gather insights into children's expectations, e.g., verifying expectations about causality and agency, or expectations concerning the robot's relationships, preferences, and physical and behavioral capabilities. Based on our experiences, we suggest some guidelines for designing more effective CRI scenarios. Finally, we argue for the effectiveness of in-the-wild methodologies for planning and executing qualitative CRI studies.
Empowerment As Replacement for the Three Laws of Robotics
© 2017 Salge and Polani. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. The greater ubiquity of robots creates a need for generic guidelines for robot behaviour. We focus less on how a robot can technically achieve a predefined goal, and more on what a robot should do in the first place. In particular, we are interested in what a heuristic that motivates the robot's behaviour in interaction with human agents should look like. We make a concrete, operational proposal as to how the information-theoretic concept of empowerment can be used as a generic heuristic to quantify concepts such as self-preservation, protection of the human partner and responding to human actions. While elsewhere we studied involved single-agent scenarios in detail, here we present proof-of-principle scenarios demonstrating how empowerment, interpreted in light of these perspectives, allows one to specify core concepts with a similar aim as Asimov's Three Laws of Robotics in an operational way. Importantly, this route does not depend on having to establish an explicit verbalized understanding of human language and conventions in the robots. It also incorporates the ability to take into account a rich variety of different situations and types of robotic embodiment. Peer reviewed.
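For readers unfamiliar with the concept: empowerment is the channel capacity between an agent's n-step action sequences and the resulting sensor states. In the deterministic case this reduces to the log of the number of distinct reachable states (the stochastic case needs a capacity computation such as Blahut-Arimoto). A minimal sketch of the deterministic reduction, using an invented one-dimensional gridworld for illustration (the world and function names are not from the paper):

```python
from itertools import product
import math

def empowerment(state, step, actions, horizon):
    """n-step empowerment of a deterministic system:
    log2 of the number of distinct states reachable in `horizon` steps."""
    reachable = set()
    for seq in product(actions, repeat=horizon):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return math.log2(len(reachable))

# Toy 1-D world: positions 0..4, moves are clipped at the walls.
def step(pos, a):
    return max(0, min(4, pos + a))

# In the centre every action sequence matters; against a wall some
# sequences collapse onto the same state, so empowerment is lower.
print(empowerment(2, step, (-1, 0, 1), 2))  # log2(5), about 2.32
print(empowerment(0, step, (-1, 0, 1), 2))  # log2(3), about 1.58
```

This illustrates why empowerment can serve as a self-preservation heuristic: states that restrict the agent's future options (here, being pinned against a wall) score lower, so an empowerment-maximising agent avoids them without any explicit rule.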
Studies on user control in ambient intelligent systems
People have a deeply rooted need to experience control and to be effective in interactions with their environments. At present, we are surrounded by intelligent systems that take decisions and perform actions for us. This should make life easier, but there is a risk that users experience less control and reject the system. The central question in this thesis is whether we can design intelligent systems that have a degree of autonomy while users maintain a sense of control. We try to achieve this by giving the intelligent system an 'expressive interface': the part that provides information to the user about the internal state, intentions and actions of the system. We examine this question in both the home and the work environment. We find the notion of a 'system personality' useful as a guiding principle for designing interactions with intelligent systems, for domestic robots as well as in building automation. Although the desired system personality varies per application, in both domains a recognizable system personality can be designed through expressive interfaces using motion, light, sound, and social cues. The various studies show that the level of automation and the expressive interface can influence the perceived system personality, the perceived level of control, and users' satisfaction with the system. This thesis shows the potential of the expressive interface as an instrument to help users understand what is going on inside the system and to experience control, which might be essential for the successful adoption of the intelligent systems of the future.