
    PMK: a knowledge processing framework for autonomous robotics perception and manipulation

    Autonomous indoor service robots are expected to accomplish tasks, such as serving a cup, that involve manipulation actions. In particular, complex manipulation tasks subject to geometric constraints require spatial information and rich semantic knowledge about objects, their types, and their functionality, together with the ways in which these objects can be manipulated. Along this line, this paper presents an ontology-based reasoning framework called Perception and Manipulation Knowledge (PMK) that includes: (1) modeling of the environment in a standardized way to provide common vocabularies for information exchange in human-robot or robot-robot collaboration, (2) a sensory module that perceives the objects in the environment and asserts the corresponding ontological knowledge, and (3) an evaluation-based analysis of the situation of the objects in the environment, in order to enhance the planning of manipulation tasks. The paper describes the concepts and the implementation of PMK, and presents an example demonstrating the range of information the framework can provide for autonomous robots.
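The perceive-then-assert loop described above can be sketched in a few lines. This is a minimal illustration with hypothetical names, not PMK's actual API: a tiny in-memory triple store stands in for the ontology, a simulated sensory module asserts perceived objects, and a query retrieves graspable ones to inform manipulation planning.

```python
class KnowledgeBase:
    """A tiny in-memory triple store standing in for an ontology."""

    def __init__(self):
        self.triples = set()

    def assert_triple(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return triples matching the non-None fields (a crude SPARQL stand-in)."""
        return [
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]

def perceive_and_assert(kb, detections):
    """Simulated sensory module: turn detections into ontological assertions."""
    for name, obj_type, graspable in detections:
        kb.assert_triple(name, "rdf:type", obj_type)       # hypothetical vocabulary
        kb.assert_triple(name, "pmk:isGraspable", graspable)

kb = KnowledgeBase()
perceive_and_assert(kb, [("cup1", "Cup", True), ("table1", "Table", False)])
# A planner could now ask which perceived objects are candidates for grasping:
graspable = [s for s, _, _ in kb.query(predicate="pmk:isGraspable", obj=True)]
```

In a real system the triple store would be an OWL ontology queried via SPARQL; the shape of the loop (perceive, assert, query) is the same.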

    A review and comparison of ontology-based approaches to robot autonomy

    Within the next decades, robots will need to execute a large variety of tasks autonomously in a large variety of environments. To reduce the resulting programming effort, a knowledge-enabled approach to robot programming can be adopted, organizing information into reusable knowledge pieces. However, ease of reuse requires agreement on the meaning of terms. A common approach is to represent these terms using ontology languages that conceptualize the respective domain. In this work, we review projects that use ontologies to support robot autonomy. We systematically search for projects that fulfill a set of inclusion criteria and compare them with respect to the scope of their ontology, the types of cognitive capabilities supported by the use of ontologies, and their application domain.

    BE-AWARE: an ontology-based adaptive robotic manipulation framework

    Autonomous service robots are conceived to work in semi-structured and complex human environments performing a wide range of tasks. Hence, one of their main challenges is to adapt the stages of the perceive-plan-execute cycle to perturbations, ranging from small deviations in the poses of objects to large unexpected changes in the environment, as well as to recover from potential failures. To advance in this direction, this paper proposes an ontology-based manipulation framework where reasoning is used to enhance perception with situation awareness, planning with domain awareness, and execution with awareness of the execution structures. The combination of these different types of awareness gives the robot different adaptation capabilities. The conceptual schema of the framework is presented and discussed, and the main future implementation challenges are pointed out.

    FailRecOnt - An ontology-based framework for failure interpretation and recovery in planning and execution

    Autonomous mobile robot manipulators have the potential to act as robot helpers at home, improving quality of life for various user populations such as elderly or handicapped people, or to act as robot co-workers on factory floors, helping in assembly applications where collaborating with other operators may be required. However, robotic systems do not perform robustly when placed in environments that are not tightly controlled. An important cause is that failure handling often consists of scripted responses to foreseen complications, which leaves the robot vulnerable to new situations and ill-equipped to reason about failure and recovery strategies. Instead of libraries of hard-coded reactions that are expensive to develop and maintain, more sophisticated reasoning mechanisms are needed to handle failure. This requires an ontological characterization of what failure is, of the concepts useful to formulate causal explanations of failure, and integration with knowledge of available resources, including the capabilities of the robot as well as those of other potential cooperative agents in the environment, e.g. a human user. We propose the FailRecOnt framework as a step in this direction. We have integrated an ontology for failure interpretation and recovery with a contingency-based task and motion planning framework such that a robot can deal with uncertainty, recover from failures, and handle human-robot interactions. A motivating example justifies this proposal, and the proposal has been tested with a challenging scenario.

    Perception and reasoning for the automatic configuration of task and motion planning problems

    This thesis proposes a framework for configuring task planning problems flexibly and automatically using two main modules: the Perception Module and the Reasoning Module. To automate the overall process, a knowledge layer is first generated manually, in which information about the environment is stored using ontologies, whereas the environmental state where the task takes place is observed with the help of the Perception Module. The knowledge layer is then reasoned over by the Reasoning Module in order to automatically configure task planning problems by filling Planning Domain Definition Language (PDDL) [1] files. During this reasoning process, the information retrieved from the Perception Module is used. Both modules are explained in detail before providing results for each module separately. Then, in addition to the individual results, a scenario is created within a lab environment to test the overall system including both modules. Furthermore, alternative areas where the Reasoning Module implementation can be of benefit are also discussed.
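The "filling PDDL files" step above amounts to rendering the perceived world state as a PDDL problem definition. A minimal sketch under assumed names (a toy "kitchen" domain, not the thesis's actual domain files):

```python
def make_pddl_problem(name, domain, objects, init, goal):
    """Render a PDDL problem file from perceived objects and asserted facts."""
    obj_lines = " ".join(f"{o} - {t}" for o, t in objects)   # typed objects
    init_lines = " ".join(f"({f})" for f in init)            # initial state
    goal_lines = " ".join(f"({f})" for f in goal)            # goal condition
    return (
        f"(define (problem {name}) (:domain {domain})\n"
        f"  (:objects {obj_lines})\n"
        f"  (:init {init_lines})\n"
        f"  (:goal (and {goal_lines})))"
    )

# Hypothetical output of the Perception Module for a "serve the cup" task:
problem = make_pddl_problem(
    "serve-cup", "kitchen",
    objects=[("cup1", "cup"), ("table1", "table")],
    init=["on cup1 table1", "handempty"],
    goal=["holding cup1"],
)
```

The generated string can be written to a `.pddl` file and handed to any off-the-shelf PDDL planner alongside the domain file.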

    Reasoning and state monitoring for the robust execution of robotic manipulation tasks

    The execution of robotic manipulation tasks needs to be robust in the face of failures or changes in the environment. For this purpose, Behavior Trees (BTs) are a good alternative to Finite State Machines: the ability of BTs to be edited at run time, together with the fact that reactive systems can be designed with BTs, makes the BT executor a robust execution manager. However, good monitoring of the system state is required in order to react to errors at either the geometric or the symbolic level, requiring replanning at the motion or the task level, respectively. This paper makes a proposal along this line and, moreover, makes task planning adaptive to the actual situations encountered, by using knowledge-based reasoning procedures to automatically generate the Planning Domain Definition Language (PDDL) files that define the task.
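The robustness argument for BTs can be made concrete with the two standard composite nodes. This is a generic sketch of BT semantics, not the paper's actual executor: a Fallback node tries a recovery branch when the nominal action fails, which is exactly the reactive error handling described above.

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    """Leaf node wrapping a condition/action callable."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def tick(self):
        return SUCCESS if self.fn() else FAILURE

class Sequence:
    """Succeeds only if every child succeeds, ticked left to right."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Returns on the first child that succeeds; fails only if all fail."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

# Toy world state: the nominal grasp fails until a recovery replans it.
grasped = {"ok": False}

def grasp():                # nominal action: fails on the first attempt
    return grasped["ok"]

def replan_and_grasp():     # recovery branch: replanning makes the grasp feasible
    grasped["ok"] = True
    return True

tree = Fallback([Action("grasp", grasp),
                 Sequence([Action("replan", replan_and_grasp),
                           Action("grasp", grasp)])])
result = tree.tick()
```

On this tick the nominal grasp fails, the Fallback drops into the recovery Sequence, and the tree as a whole succeeds; a Finite State Machine would need an explicit error transition wired in for the same behavior.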

    Contingent task and motion planning under uncertainty for human–robot interactions

    Manipulation planning under incomplete information is a highly challenging task for mobile manipulators. Uncertainty can be resolved by robot perception modules or by using human knowledge during the execution process. Human operators can also collaborate with robots in the execution of some difficult actions, or act as helpers by sharing task knowledge. Within this scope, a contingent-based task and motion planner is proposed that takes into account robot uncertainty and human-robot interactions, resulting in a tree-shaped set of geometrically feasible plans. Different sorts of geometric reasoning processes are embedded inside the planner to cope with task constraints, such as detecting occluding objects when a robot needs to grasp an object. The proposal has been evaluated in different challenging scenarios, both in simulation and in a real environment.

    An ontology for failure interpretation in automated planning and execution

    This is a post-peer-review, pre-copyedit version of an article published in ROBOT - Iberian Robotics Conference. The final authenticated version is available online at: http://dx.doi.org/10.1007/978-3-030-35990-4_31. Autonomous indoor robots are expected to accomplish tasks, such as serving a cup, that involve manipulation actions in which the task and motion planning levels are coupled. At both planning levels and in the execution phase, several sources of failure can occur. In this paper, an interpretation ontology covering several sources of failure in automated planning and during execution is introduced, with the purpose of making planning more informed and execution prepared for recovery. The proposed failure-interpretation ontological module covers: (1) geometric failures, which may appear when, e.g., the robot cannot reach to grasp/place an object, there is no collision-free path, or there is no feasible Inverse Kinematics (IK) solution; (2) hardware-related failures, which may appear when, e.g., the robot in a real environment requires re-calibration (gripper or arm), or is sent to a non-reachable configuration; (3) software-agent-related failures, which may appear when, e.g., a software component of the robot fails, such as an algorithm not being able to extract the proper features. The paper describes the concepts and the implementation of the failure-interpretation ontology grounded in foundations such as DUL and SUMO, and presents an example showing different situations in planning, demonstrating the range of information the framework can provide for autonomous robots.
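The three failure categories above lend themselves to a dispatchable taxonomy. The following is an illustrative sketch with hypothetical class names, mirroring the categories described in the abstract rather than the ontology's actual OWL axioms: an executor can select a recovery strategy from the ontological class of the failure instead of from hard-coded scripts.

```python
class Failure(Exception):
    """Root concept of the failure-interpretation taxonomy."""

class GeometricFailure(Failure):
    """E.g., unreachable grasp/place pose, no collision-free path, no IK solution."""

class HardwareFailure(Failure):
    """E.g., gripper/arm needs re-calibration, non-reachable configuration."""

class SoftwareFailure(Failure):
    """E.g., a perception algorithm fails to extract the needed features."""

def interpret(failure):
    """Map a failure instance to a recovery strategy via its class."""
    if isinstance(failure, GeometricFailure):
        return "replan-motion"
    if isinstance(failure, HardwareFailure):
        return "recalibrate"
    if isinstance(failure, SoftwareFailure):
        return "retry-perception"
    return "abort"

strategy = interpret(GeometricFailure("no IK solution for grasp pose"))
```

In the ontology these categories would be OWL classes aligned under DUL/SUMO concepts; the Python class hierarchy only illustrates how an executor consumes that classification.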

    OCRA – An ontology for collaborative robotics and adaptation

    Industrial collaborative robots will be used in unstructured scenarios and for a large variety of tasks in the near future. These robots shall collaborate with humans, who add uncertainty and safety constraints to the execution of industrial robotic tasks. Hence, trustworthy collaborative robots must be able to reason about the requirements of their collaborations (e.g., safety), as well as about the adaptation of their plans to unexpected situations. A common approach to reasoning is to represent the knowledge of interest using logic-based formalisms, such as ontologies. However, no established ontology yet defines notions such as collaboration or adaptation. In this article, we propose an Ontology for Collaborative Robotics and Adaptation (OCRA), built around two main notions: collaboration and plan adaptation. OCRA ensures reliable human-robot collaboration, since robots can formalize and reason about their plan adaptations and collaborations in unstructured collaborative robotic scenarios. Furthermore, our ontology enhances the reusability of the domain's terminology, allowing robots to represent their knowledge about different collaborative and adaptive situations. We validate our formal model, first, by demonstrating that a robot can answer a set of competency questions using OCRA, and second, by studying the formalization's performance in limit cases that include instances with incongruent and incomplete axioms. For both validations, the example use case consists of a human and a robot collaborating on the filling of a tray.
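A competency question is a query the ontology must be able to answer. As a minimal sketch (hypothetical predicate names, not OCRA's actual axioms), the tray-filling use case can be modeled as triples and queried, e.g. "with whom does the robot collaborate?" and "does the current plan require adaptation?":

```python
# Toy knowledge about the tray-filling collaboration (hypothetical vocabulary).
triples = {
    ("robot1", "ocra:collaboratesWith", "human1"),
    ("robot1", "ocra:hasPlan", "fill_tray_plan"),
    ("fill_tray_plan", "ocra:requiresAdaptation", "true"),
}

def ask(predicate, subject=None):
    """Answer a competency question: objects related by `predicate` (optionally from `subject`)."""
    return sorted(o for s, p, o in triples
                  if p == predicate and (subject is None or s == subject))

partners = ask("ocra:collaboratesWith", "robot1")
adaptations = ask("ocra:requiresAdaptation", "fill_tray_plan")
```

In OCRA itself these would be SPARQL queries over OWL individuals; the sketch only shows the shape of the question-answering validation.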