255 research outputs found
Local social project creators: who are they? The experience of FSE 10B micro-projects in three French regions
Based on the micro-projects approved under the FSE 10B measure in three French regions (Bretagne, Pays de Loire, Poitou-Charentes), we study the socio-demographic profiles of the project initiators, their professional, associative and activist backgrounds, the values they defend, their motivations for starting a project, the partnerships they have developed, and their stance towards the social and solidarity economy. We compare these "social" creators both to "classical" business founders and to the actors (employees, volunteers, board members) of the non-profit sector. Keywords: social and solidarity economy; local development; development projects; European Social Fund
Am I capable yet? ...where I argue we need better evaluation and better representations to improve cognitive architectures for HRI
Position paper on cognitive architectures for human-robot interaction
Toward supervised reinforcement learning with partial states for social HRI
Social interaction is a complex task for which machine learning holds particular promise. However, as no sufficiently accurate simulator of human interactions exists today, social interaction strategies have to be learned online in the real world. Actions executed by the robot impact humans and therefore have to be carefully selected, making it impossible to rely on random exploration. Additionally, no clear reward function exists for social interactions. This implies that traditional Reinforcement Learning approaches cannot be directly applied to learning how to interact with the social world. We therefore argue that robots will profit from human expertise and guidance to learn social interactions. However, as the quantity of input a human can provide is limited, new methods have to be designed to use human input more efficiently. In this paper we describe a setup combining Supervised Progressively Autonomous Robot Competencies (SPARC), a framework that allows safer online learning with Reinforcement Learning, with the use of partial states rather than full states to accelerate generalisation and obtain a usable action policy more quickly
SPARC: an efficient way to combine reinforcement learning and supervised autonomy
Shortcomings of reinforcement learning for robot control include the sparsity of the environmental reward function, the high number of trials required before reaching an efficient action policy, and the reliance on exploration to gather information about the environment, which can result in undesired actions. These limits can be overcome by adding a human in the loop to provide additional information during the learning phase. In this paper, we propose a novel way to combine human input and reinforcement learning by following the Supervised Progressively Autonomous Robot Competencies (SPARC) approach. We compare this method to the principles of Interactive Reinforcement Learning as proposed by Thomaz and Breazeal. Results from a study with 40 participants show that SPARC improves learning performance, reduces the time and number of inputs required for teaching, and leads to fewer errors during the learning process. These results support SPARC as an efficient method for teaching a robot to interact with humans
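The SPARC idea described above can be sketched as a simple supervised loop: the robot proposes the action its current policy prefers, a human supervisor may correct it before execution, and the executed (approved) action becomes the training signal. This is a minimal illustrative sketch, not the authors' implementation; the class, action names, and update rule are assumptions.

```python
from collections import defaultdict

class SparcAgent:
    """Toy sketch of SPARC-style supervised autonomy (hypothetical class):
    the robot proposes an action, the supervisor may override it, and the
    executed action is fed back as a supervised training signal."""

    def __init__(self, actions, learning_rate=0.1):
        self.actions = actions
        self.lr = learning_rate
        # state -> {action: preference score}
        self.q = defaultdict(lambda: {a: 0.0 for a in actions})

    def propose(self, state):
        # Propose the currently preferred action; no random exploration,
        # since avoiding unsafe exploration is the point of SPARC.
        scores = self.q[state]
        return max(scores, key=scores.get)

    def step(self, state, supervisor_correction=None):
        proposed = self.propose(state)
        # The supervisor can correct the proposal; silence counts as approval.
        executed = supervisor_correction if supervisor_correction is not None else proposed
        # Supervised update: move the executed action's score toward 1,
        # all other actions toward 0.
        for a in self.actions:
            target = 1.0 if a == executed else 0.0
            self.q[state][a] += self.lr * (target - self.q[state][a])
        return executed

agent = SparcAgent(["greet", "wait", "point"])
# Early on the supervisor corrects; afterwards the robot proposes correctly.
agent.step("child_arrives", supervisor_correction="greet")
agent.step("child_arrives", supervisor_correction="greet")
print(agent.propose("child_arrives"))  # prints "greet"
```

As the policy improves, corrections become rarer and the robot acts increasingly autonomously, which matches the "progressively autonomous" part of the name.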
Artificial Cognition for Social Human-Robot Interaction: An Implementation
© 2017 The Authors. Human–Robot Interaction challenges Artificial Intelligence in many regards: dynamic, partially unknown environments that were not originally designed for robots; a broad variety of situations with rich semantics to understand and interpret; physical interactions with humans that require fine, low-latency yet socially acceptable control strategies; natural and multi-modal communication which mandates common-sense knowledge and the representation of possibly divergent mental models. This article is an attempt to characterise these challenges and to exhibit a set of key decisional issues that need to be addressed for a cognitive robot to successfully share space and tasks with a human. We first identify the needed individual and collaborative cognitive skills: geometric reasoning and situation assessment based on perspective-taking and affordance analysis; acquisition and representation of knowledge models for multiple agents (humans and robots, with their specificities); situated, natural and multi-modal dialogue; human-aware task planning; human–robot joint task achievement. The article discusses each of these abilities, presents working implementations, and shows how they combine in a coherent and original deliberative architecture for human–robot interaction. Supported by experimental results, we eventually show how explicit knowledge management, both symbolic and geometric, proves to be instrumental to richer and more natural human–robot interactions by pushing for pervasive, human-level semantics within the robot's deliberative system
Generating spatial referring expressions in a social robot: Dynamic vs. non-ambiguous
Generating spatial referring expressions is key to allowing robots to communicate with people in an environment. Most generation algorithms focus on creating a non-ambiguous description, and on how best to deal with the combinatorial explosion this can create in a complex environment. However, this is not how people naturally communicate. Humans tend to give an under-specified description and then rely on a strategy of repair to reduce the number of possible locations or objects until the correct one is identified, what we refer to here as a dynamic description. We present a method for generating these dynamic descriptions for Human-Robot Interaction, using machine learning to generate repair statements. We also present a study with 61 participants in an object-placement task, presented in a 2D environment that favored a non-ambiguous description. In this study we demonstrate that our dynamic method of communication can be more efficient for people to identify a location than a non-ambiguous one
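The describe-then-repair strategy above can be illustrated with a toy example: start with a deliberately under-specified description, then issue spatial repair statements based on the listener's guesses. All names here are hypothetical, and the paper's repair statements are learned, whereas this sketch uses a hand-written spatial rule.

```python
# Toy sketch of a dynamic description with repair (hypothetical names;
# a real system would learn repair statements from data).
def initial_description(target):
    # Deliberately under-specified: only the object type is given.
    return f"the {target['type']}"

def repair(target, guess):
    # Spatial repair relative to the listener's last guess.
    if guess["x"] < target["x"]:
        return "no, further right"
    if guess["x"] > target["x"]:
        return "no, further left"
    return "yes, that one"

scene = [
    {"type": "cup", "x": 1},
    {"type": "cup", "x": 4},
    {"type": "cup", "x": 7},
]
target = scene[2]

print(initial_description(target))  # "the cup" -- ambiguous on purpose
print(repair(target, scene[0]))     # "no, further right"
print(repair(target, scene[2]))     # "yes, that one"
```

The point of the dynamic approach is that each repair is cheap to produce and interpret, whereas a fully non-ambiguous description ("the cup third from the left, behind the plate") grows rapidly with scene complexity.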
Bio-Inspired Grasping Controller for Sensorized 2-DoF Grippers
We present a holistic grasping controller, combining free-space position control and in-contact force control for reliable grasping given uncertain object pose estimates. Employing tactile fingertip sensors, undesired object displacement during grasping is minimized by pausing the finger closing motion for individual joints on first contact until force closure is established. While holding an object, the controller is compliant with external forces to avoid high internal object forces and prevent object damage. Gravity as an external force is explicitly considered and compensated for, thus preventing gravity-induced object drift. We evaluate the controller in two experiments on the TIAGo robot and its parallel-jaw gripper, demonstrating the effectiveness of the approach for robust grasping and minimal object displacement. In a series of ablation studies, we demonstrate the utility of the individual controller components
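The per-finger switching logic described in this abstract (position control in free space, pause and switch to force control on first tactile contact) can be sketched as a small state machine. This is an illustrative sketch only; the class, field names, and gains are assumptions, not the TIAGo controller's actual API.

```python
class GraspController:
    """Toy sketch of the described grasp logic (hypothetical API):
    each finger closes in position mode until its tactile sensor reports
    contact, then holds position and regulates contact force instead,
    so the first finger to touch pauses until force closure is reached."""

    def __init__(self, target_force=2.0, step=0.01):
        self.target_force = target_force  # N, desired grip force (assumed value)
        self.step = step                  # m, per-tick closing increment (assumed)

    def tick(self, finger):
        if finger["force"] == 0.0:
            # Free space: position control, keep closing.
            finger["pos"] += self.step
            finger["mode"] = "position"
        else:
            # In contact: stop closing, regulate force toward the target
            # with a simple integral-style update.
            finger["mode"] = "force"
            error = self.target_force - finger["force"]
            finger["cmd_force"] = finger.get("cmd_force", 0.0) + 0.5 * error
        return finger["mode"]

ctrl = GraspController()
left = {"pos": 0.0, "force": 0.0}   # not yet touching
right = {"pos": 0.0, "force": 1.5}  # fingertip already in contact
print(ctrl.tick(left))   # prints "position" -- left keeps closing
print(ctrl.tick(right))  # prints "force" -- right pauses and regulates force
```

Gravity compensation, as described in the abstract, would enter as an additional feed-forward term on the commanded force while holding the object; it is omitted here to keep the sketch minimal.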
- …