
    On the manipulation of articulated objects in human-robot cooperation scenarios

    Articulated and flexible objects constitute a challenge for robot manipulation tasks, but are present in different real-world settings, including home and industrial environments. Approaches to the manipulation of such objects employ ad hoc strategies to sequence and perform actions on them depending on their physical or geometrical features, and on a priori target object configurations, whereas principled strategies to sequence basic manipulation actions for these objects have not been fully explored in the literature. In this paper, we propose a novel action planning and execution framework for the manipulation of articulated objects, which (i) employs action planning to sequence a set of actions leading to a target articulated object configuration, and (ii) allows humans to collaboratively carry out the plan with the robot, also interrupting its execution if needed. The framework adopts a formally defined representation of articulated objects. A link exists between the way articulated objects are perceived by the robot, how they are formally represented in the action planning and execution framework, and the complexity of the planning process. Results related to planning performance are shown, along with examples of a Baxter dual-arm manipulator operating on articulated objects with humans.
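    The core idea of point (i), planning a sequence of basic actions that drives an articulated object to a target configuration, can be sketched as a search over discrete joint configurations. This is our own minimal illustration, not the paper's framework: joint states, step size, and limits are all assumptions, and the representation is deliberately simplified to a tuple of joint angles.

    ```python
    from collections import deque

    # Hypothetical sketch: an articulated object is a tuple of discrete
    # joint angles, and a plan is a sequence of single-joint rotation
    # actions found by breadth-first search from the perceived
    # configuration to the target one.

    def plan_actions(start, goal, step=90, limits=(0, 360)):
        """Return a list of (joint_index, delta) actions, or None."""
        lo, hi = limits
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            config, actions = frontier.popleft()
            if config == goal:
                return actions
            for i in range(len(config)):
                for delta in (step, -step):
                    angle = config[i] + delta
                    if not lo <= angle <= hi:
                        continue  # respect joint limits
                    nxt = config[:i] + (angle,) + config[i + 1:]
                    if nxt not in visited:
                        visited.add(nxt)
                        frontier.append((nxt, actions + [(i, delta)]))
        return None

    # A three-joint object: rotate joint 0 once and joint 2 twice.
    plan = plan_actions(start=(0, 0, 0), goal=(90, 0, 180))
    ```

    Because the abstract notes that the object representation drives planning complexity, a finer angular discretization here directly enlarges the search space, which mirrors that trade-off.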

    Learning Grasp Affordance Densities

    We address the issue of learning and representing object grasp affordance models. We model grasp affordances with continuous probability density functions (grasp densities) which link object-relative grasp poses to their success probability. The underlying function representation is nonparametric and relies on kernel density estimation to provide a continuous model. Grasp densities are learned and refined from exploration, by letting a robot “play” with an object in a sequence of grasp-and-drop actions: the robot uses visual cues to generate a set of grasp hypotheses, which it then executes, recording their outcomes. When a satisfactory amount of grasp data is available, an importance-sampling algorithm turns it into a grasp density. We evaluate our method in a largely autonomous learning experiment, run on three objects with distinct shapes. The experiment shows how learning increases success rates. It also measures the success rate of grasps chosen to maximize the probability of success, given reaching constraints.
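    The kernel-density idea can be illustrated in a few lines. This is a hedged sketch under simplifying assumptions, not the authors' implementation: grasp poses are reduced to 2D object-relative points (the paper works with full 6D poses), the Gaussian kernel and bandwidth value are our choices, and the simulated exploration loop stands in for real grasp-and-drop trials.

    ```python
    import math
    import random

    # Sketch of a "grasp density": kernel density estimation over the
    # poses of successful grasps, used to rank new grasp hypotheses.

    def grasp_density(successes, bandwidth=0.05):
        """Return a function estimating success density at a 2D pose."""
        norm = 1.0 / (len(successes) * 2 * math.pi * bandwidth ** 2)
        def density(pose):
            x, y = pose
            total = 0.0
            for sx, sy in successes:
                d2 = (x - sx) ** 2 + (y - sy) ** 2
                total += math.exp(-d2 / (2 * bandwidth ** 2))
            return norm * total
        return density

    # Simulated grasp-and-drop exploration: grasps landing near the
    # (hypothetical) handle at (0.2, 0.2) succeed, all others fail.
    random.seed(0)
    trials = [(random.random(), random.random()) for _ in range(200)]
    successes = [p for p in trials if math.dist(p, (0.2, 0.2)) < 0.15]
    density = grasp_density(successes)

    # Choose the hypothesis that maximizes estimated success density.
    hypotheses = [(0.2, 0.2), (0.8, 0.8)]
    best = max(hypotheses, key=density)
    ```

    The nonparametric form means the density needs no assumption about how many graspable regions the object has; each recorded success simply contributes one kernel.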

    Increasing trust in human–robot medical interactions: effects of transparency and adaptability

    In this paper, we examine trust in a human–robot medical interaction. We focus on the influence of transparency and robot adaptability on people’s trust in a human–robot blood pressure measuring scenario. Our results show that increased transparency, i.e., robot explanations of its own actions designed to make the process and the robot’s behaviors and capabilities accessible to the user, has a consistent effect on people’s trust and perceived comfort. In contrast, robot adaptability, i.e., the opportunity to adjust the robot’s position according to users’ needs, has only a marginal influence on users’ evaluations of the robot as trustworthy. Our qualitative analyses indicate that this is because transparency and adaptability are complex factors; the investigation of the interactional dynamics shows that users have very specific needs, and that adaptability may have to be paired with responsivity in order to make people feel in control.