69 research outputs found

    Affordance-Aware Handovers With Human Arm Mobility Constraints

    Reasoning about object handover configurations allows an assistive agent to estimate the appropriateness of a handover for receivers with different arm mobility capacities. While there are existing approaches for estimating the effectiveness of handovers, their findings are limited to users without arm mobility impairments and to specific objects. Therefore, current state-of-the-art approaches are unable to hand over novel objects to receivers with different arm mobility capacities. We propose a method that generalises handover behaviours to previously unseen objects, subject to the constraints of a user's arm mobility level and the task context. We propose a heuristic-guided, hierarchically optimised cost whose optimisation adapts object configurations for receivers with low arm mobility. This also ensures that the robot's grasps consider the context of the user's upcoming task, i.e., the usage of the object. To understand preferences over handover configurations, we report on the findings of an online study, wherein we presented different handover methods, including ours, to 259 users with different levels of arm mobility. We find that people's preferences over handover methods are correlated with their arm mobility capacities. We encapsulate these preferences in a statistical relational model (SRL) that is able to reason about the most suitable handover configuration given a receiver's arm mobility and upcoming task. Using our SRL model, we obtained an average handover accuracy of 90.8% when generalising handovers to novel objects. Comment: Accepted for RA-L 202
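
    The handover cost itself is not given in the abstract; the snippet below is a minimal, hypothetical sketch of a hierarchically weighted cost that prioritises the receiver's arm mobility and then the task context. The candidate fields, mobility levels, and weights are illustrative assumptions, not the paper's formulation.

    from dataclasses import dataclass

    @dataclass
    class HandoverCandidate:
        object_tilt: float         # deviation from upright, in radians
        reach_height: float        # handover height, in metres
        handle_towards_user: bool  # is the usable part offered to the receiver?

    def handover_cost(c, arm_mobility, task):
        """Lower is better; the mobility term dominates, the task term refines."""
        # Primary term: penalise configurations that demand a long reach or a
        # large wrist rotation from a receiver with reduced arm mobility.
        weight = {"low": 3.0, "medium": 1.5, "full": 0.5}[arm_mobility]
        primary = weight * (abs(c.object_tilt) + abs(c.reach_height - 1.0))
        # Secondary term: respect the upcoming task, e.g. keep the handle free
        # when the receiver will use the object immediately.
        secondary = 0.0 if (task == "inspect" or c.handle_towards_user) else 1.0
        # Hierarchical optimisation is approximated here by a large scalar weight.
        return 10.0 * primary + secondary

    candidates = [HandoverCandidate(0.1, 1.0, True), HandoverCandidate(0.8, 1.4, False)]
    print(min(candidates, key=lambda c: handover_cost(c, "low", "pour")))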

    Reasoning and understanding grasp affordances for robot manipulation

    This doctoral research focuses on developing new methods that enable an artificial agent to grasp and manipulate objects autonomously. More specifically, we use the concept of affordances to learn and generalise robot grasping and manipulation techniques. [75] defined affordances as the ability of an agent to perform a certain action with an object in a given environment. In robotics, affordances define the possibility of an agent performing actions with an object. Therefore, by understanding the relation between actions, objects, and the effects of these actions, the agent understands the task at hand, which provides the robot with the potential to bridge perception to action. The significance of affordances in robotics has been studied from varied perspectives, such as psychology and cognitive sciences. Many efforts have been made to pragmatically employ the concept of affordances, as it provides the potential for an artificial agent to perform tasks autonomously. We start by reviewing and finding common ground amongst different strategies that use affordances for robotic tasks. We build on the identified grounds to provide guidance on including the concept of affordances as a medium to boost autonomy for an artificial agent. To this end, we outline common design choices to build an affordance relation, and their implications on the generalisation capabilities of the agent when facing previously unseen scenarios. Based on our exhaustive review, we conclude that prior research on object affordance detection is effective; however, among others, it has the following technical gaps: (i) the methods are limited to a single object ↔ affordance hypothesis, (ii) they cannot guarantee task completion or any level of performance for the manipulation task alone, nor (iii) in collaboration with other agents. In this research thesis, we propose solutions to these technical challenges. In an incremental fashion, we start by addressing the limited generalisation capabilities of the then state-of-the-art methods by strengthening the perception-to-action connection through the construction of a Knowledge Base (KB). We then leverage the information encapsulated in the KB to design and implement a reasoning and understanding method based on a statistical relational learner (SRL) that allows us to cope with uncertainty in testing environments and, thus, improve generalisation capabilities in affordance-aware manipulation tasks. The KB, in conjunction with our SRL, is the basis for our designed solutions that guarantee task completion when the robot performs a task alone as well as in collaboration with other agents. We finally expose and discuss a range of interesting avenues that have the potential to advance the capabilities of a robotic agent through the use of the concept of affordances for manipulation tasks. A summary of the contributions of this thesis can be found at: https://bit.ly/grasp_affordance_reasonin
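
    As a rough illustration of the perception-to-action link described above, the sketch below encodes a tiny Knowledge Base of object attributes and affordances and predicts affordances for a previously unseen object by attribute overlap. The entries, attributes, and voting rule are assumptions made for the example; they are not the thesis's KB schema or SRL model.

    # Tiny, hypothetical KB: object -> perceived attributes and known affordances.
    KB = {
        "mug":   {"attributes": {"concave", "has_handle"}, "affordances": {"pour", "hand_over"}},
        "bowl":  {"attributes": {"concave"},               "affordances": {"pour"}},
        "knife": {"attributes": {"sharp", "elongated"},    "affordances": {"cut", "hand_over"}},
    }

    def predict_affordances(attributes):
        """Vote for the affordances of KB entries sharing attributes with the query."""
        votes = {}
        for entry in KB.values():
            overlap = len(entry["attributes"] & attributes)
            for affordance in entry["affordances"]:
                votes[affordance] = votes.get(affordance, 0) + overlap
        return {a for a, v in votes.items() if v > 0}

    # An unseen object (e.g. a pitcher) described only by its perceived attributes.
    print(predict_affordances({"concave", "has_handle"}))  # {'pour', 'hand_over'}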

    Maximising Coefficiency of Human-Robot Handovers through Reinforcement Learning

    Handing objects to humans is an essential capability for collaborative robots. Previous research on human-robot handovers focuses on facilitating the performance of the human partner and possibly minimising the physical effort needed to grasp the object. However, altruistic robot behaviours may result in protracted and awkward robot motions, causing unpleasant sensations for the human partner and affecting perceived safety and social acceptance. This paper investigates whether transferring the cognitive science principle that “humans act coefficiently as a group” (i.e., simultaneously maximising the benefits of all agents involved) to human-robot cooperative tasks promotes a more seamless and natural interaction. Human-robot coefficiency is first modelled by identifying implicit indicators of human comfort and discomfort as well as calculating the robot's energy consumption in performing the desired trajectory. We then present a reinforcement learning approach that uses the human-robot coefficiency score as reward to adapt and learn online the combination of robot interaction parameters that maximises such coefficiency. Results showed that, by acting coefficiently, the robot could meet the individual preferences of most subjects involved in the experiments, improve the human's perceived comfort, and foster trust in the robotic partner.
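
    The paper's learning setup is not detailed in the abstract; as a hedged sketch, the snippet below shows one way an epsilon-greedy bandit could adapt a single robot interaction parameter online using a coefficiency-style reward that trades off estimated human comfort against robot energy. The parameter set, reward weights, and simulated feedback are assumptions for illustration only.

    import random

    SPEEDS = [0.2, 0.5, 0.8]       # candidate end-effector speeds (m/s)
    value = {s: 0.0 for s in SPEEDS}
    count = {s: 0 for s in SPEEDS}
    EPSILON = 0.1

    def coefficiency_reward(comfort, robot_energy):
        # Higher human comfort and lower robot energy both raise the score.
        return comfort - 0.5 * robot_energy

    def simulated_handover(speed):
        # Stand-in for one real handover trial; values here are synthetic.
        comfort = 1.0 - abs(speed - 0.5) + random.gauss(0.0, 0.05)
        energy = speed ** 2
        return coefficiency_reward(comfort, energy)

    for _ in range(200):
        s = random.choice(SPEEDS) if random.random() < EPSILON else max(value, key=value.get)
        r = simulated_handover(s)
        count[s] += 1
        value[s] += (r - value[s]) / count[s]   # incremental mean update

    print(max(value, key=value.get))            # parameter the robot settles on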

    Addressing joint action challenges in HRI: Insights from psychology and philosophy

    The vast expansion of research in human-robot interactions (HRI) these last decades has been accompanied by the design of increasingly skilled robots for engaging in joint actions with humans. However, these advances have encountered significant challenges to ensure fluent interactions and sustain human motivation through the different steps of joint action. After exploring current literature on joint action in HRI, leading to a more precise definition of these challenges, the present article proposes some perspectives borrowed from psychology and philosophy showing the key role of communication in human interactions. From mutual recognition between individuals to the expression of commitment and social expectations, we argue that communicative cues can facilitate coordination, prediction, and motivation in the context of joint action. The description of several notions thus suggests that some communicative capacities can be implemented in the context of joint action for HRI, leading to an integrated perspective of robotic communication. Funding: French National Research Agency (ANR) ANR-16-CE33-0017, ANR-17-EURE-0017, FrontCog ANR-10-IDEX-0001-02, PSL; Juan de la Cierva-Incorporación grant IJC2019-040199-I; Spanish Government PID2019-108870GB-I00, PID2019-109764RB-I0

    Grounded Semantic Reasoning for Robotic Interaction with Real-World Objects

    Robots are increasingly transitioning from specialized, single-task machines to general-purpose systems that operate in unstructured environments, such as homes, offices, and warehouses. In these real-world domains, robots need to manipulate novel objects while adapting to changes in environments and goals. Semantic knowledge, which concisely describes target domains with symbols, can potentially reveal the meaningful patterns shared between problems and environments. However, existing robots are yet to effectively reason about semantic data encoding complex relational knowledge or jointly reason about symbolic semantic data and multimodal data pertinent to robotic manipulation (e.g., object point clouds, 6-DoF poses, and attributes detected with multimodal sensing). This dissertation develops semantic reasoning frameworks capable of modeling complex semantic knowledge grounded in robot perception and action. We show that grounded semantic reasoning enables robots to more effectively perceive, model, and interact with objects in real-world environments. Specifically, this dissertation makes the following contributions: (1) a survey providing a unified view for the diversity of works in the field by formulating semantic reasoning as the integration of knowledge sources, computational frameworks, and world representations; (2) a method for predicting missing relations in large-scale knowledge graphs by leveraging type hierarchies of entities, effectively avoiding ambiguity while maintaining generalization of multi-hop reasoning patterns; (3) a method for predicting unknown properties of objects in various environmental contexts, outperforming prior knowledge graph and statistical relational learning methods due to the use of n-ary relations for modeling object properties; (4) a method for purposeful robotic grasping that accounts for a broad range of contexts (including object visual affordance, material, state, and task constraint), outperforming existing approaches in novel contexts and for unknown objects; (5) a systematic investigation into the generalization of task-oriented grasping that includes a benchmark dataset of 250k grasps, and a novel graph neural network that incorporates semantic relations into end-to-end learning of 6-DoF grasps; (6) a method for rearranging novel objects into semantically meaningful spatial structures based on high-level language instructions, more effectively capturing multi-object spatial constraints than existing pairwise spatial representations; (7) a novel planning-inspired approach that iteratively optimizes placements of partially observed objects subject to both physical constraints and semantic constraints inferred from language instructions. Ph.D.
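
    None of the models above are specified in the abstract; the following is a small, hypothetical illustration of the idea behind contribution (4), grasping that accounts for semantic context: candidate grasp regions are scored against hand-coded object-part properties and a task constraint. Part names, properties, and the scoring rule are invented for the example; the dissertation's approaches are learned from data rather than hand-coded.

    # Hypothetical object-part semantics (not from the dissertation's datasets).
    OBJECT_PARTS = {
        "knife": {"handle": {"graspable"}, "blade": {"sharp", "functional"}},
        "mug":   {"handle": {"graspable"}, "body": {"contains", "hot_when_full"}},
    }

    def score_grasp(obj, part, task):
        props = OBJECT_PARTS[obj][part]
        score = 1.0 if "graspable" in props else 0.4
        if task == "handover":
            # Leave the graspable part free for the receiver: the robot prefers
            # to pinch the functional part instead (e.g. the knife blade).
            score += 1.0 if "functional" in props else -0.5
        elif "sharp" in props or "hot_when_full" in props:
            score -= 0.8   # avoid risky regions when the robot uses the object itself
        return score

    best = max(OBJECT_PARTS["knife"], key=lambda p: score_grasp("knife", p, "handover"))
    print(best)  # 'blade' -> the handle stays free for the human receiver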