PMK : a knowledge processing framework for autonomous robotics perception and manipulation
Autonomous indoor service robots are expected to accomplish tasks, such as serving a cup, that involve manipulation actions. In particular, complex manipulation tasks subject to geometric constraints require spatial information and rich semantic knowledge about objects, their types, and their functionality, together with the ways in which these objects can be manipulated. Along these lines, this paper presents an ontology-based reasoning framework called Perception and Manipulation Knowledge (PMK) that includes: (1) modeling of the environment in a standardized way to provide common vocabularies for information exchange in human-robot or robot-robot collaboration; (2) a sensory module to perceive the objects in the environment and assert the corresponding ontological knowledge; and (3) an evaluation-based analysis of the situation of the objects in the environment, in order to enhance the planning of manipulation tasks. The paper describes the concepts and implementation of PMK, and presents an example demonstrating the range of information the framework can provide for autonomous robots.
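The perceive-then-assert flow the abstract describes can be illustrated with a toy knowledge base. This is a minimal sketch only: the class `KB` and the predicate names (`type`, `graspableBy`) are invented for illustration and are not PMK's actual vocabulary or API.

```python
# Illustrative sketch of a perceive-then-assert knowledge flow.
# All names here are hypothetical, not PMK's real ontology terms.

class KB:
    """A toy triple store standing in for an ontological knowledge base."""
    def __init__(self):
        self.triples = set()

    def assert_fact(self, s, p, o):
        """Assert a (subject, predicate, object) triple."""
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Return all triples matching the given pattern (None = wildcard)."""
        return [t for t in self.triples
                if all(q is None or q == v for q, v in zip((s, p, o), t))]

kb = KB()
# A sensory module perceives a cup and asserts knowledge about it:
kb.assert_fact("cup1", "type", "Cup")
kb.assert_fact("cup1", "graspableBy", "topGrasp")
print(kb.query(p="type"))  # -> [('cup1', 'type', 'Cup')]
```

A planner could then query such a store for manipulation-relevant facts (e.g. which grasp types an object affords) before committing to a motion plan.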
Fuzzy optimisation based symbolic grounding for service robots
A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy.

Symbolic grounding is a bridge between task-level planning and actual robot sensing and actuation. Uncertainties arising from unstructured environments create a bottleneck for integrating traditional artificial intelligence with service robotics. In this research, a fuzzy optimisation based symbolic grounding approach is presented. This approach can handle uncertainties and helps service robots determine the most comfortable base region for grasping objects in a fetch-and-carry task. Novel techniques are applied to establish the fuzzy objective function, to model the fuzzy constraints, and to perform the fuzzy optimisation. The approach avoids the shortcomings of previous work, and its computation time is dramatically reduced compared with other methods. The advantages of the proposed fuzzy optimisation based approach are evidenced by experiments undertaken on the Care-O-bot 3 (COB 3) and Robot Operating System (ROS) platforms.
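The general pattern of fuzzy optimisation for base placement can be sketched as follows. This is a hedged illustration only: the membership functions (`mu_reach`, `mu_clear`), their parameters, and the min-based (Bellman-Zadeh style) aggregation are assumptions chosen for the example, not the thesis's actual formulation.

```python
# Hypothetical sketch: fuzzy base-region selection for a grasping task.
# Membership functions and parameters are illustrative assumptions.

def mu_reach(dist, ideal=0.6, tol=0.4):
    """Triangular membership: 1.0 at the ideal grasp distance,
    falling linearly to 0.0 at +/- tol metres from it."""
    return max(0.0, 1.0 - abs(dist - ideal) / tol)

def mu_clear(clearance, safe=0.5):
    """Obstacle-clearance membership: linear up to a safe clearance,
    then saturated at 1.0."""
    return min(1.0, max(0.0, clearance / safe))

def best_base(candidates):
    """Pick the candidate base pose maximising the min-aggregated
    satisfaction of the fuzzy objective and fuzzy constraint."""
    return max(candidates, key=lambda c: min(mu_reach(c[0]), mu_clear(c[1])))

# Each candidate: (distance_to_object, obstacle_clearance) in metres.
poses = [(0.6, 0.5), (0.9, 0.6), (0.3, 0.1)]
print(best_base(poses))  # -> (0.6, 0.5)
```

The min-aggregation expresses that a base pose is only as good as its worst-satisfied criterion; a weighted average would instead allow a poor clearance to be traded off against excellent reachability.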
On-the-Fly Workspace Visualization for Redundant Manipulators
This thesis explores the possibilities of online workspace rendering for redundant robotic manipulators via parallelized computation on the graphics card. Several visualization schemes for different workspace types are devised, implemented, and evaluated. Possible applications include visual support for the operation of manipulators, fast workspace analyses in time-critical scenarios, and interactive workspace exploration for the design and comparison of robots and tools.
Learning To Grasp
Providing robots with the ability to grasp objects has, despite decades of research, remained a challenging problem. The problem is approachable in constrained environments where there is ample prior knowledge of the scene and the objects to be manipulated. The challenge lies in building systems that scale beyond specific situational instances and operate gracefully in novel conditions. In the past, heuristic and simple rule-based strategies were used to accomplish tasks such as scene segmentation or reasoning about occlusion. These heuristic strategies work in constrained environments where a roboticist can make simplifying assumptions about everything from the geometries of the objects to be interacted with, to the level of clutter, camera position, lighting, and a myriad of other relevant variables. With these assumptions in place, it becomes tractable for a roboticist to hardcode desired behavior and build a robotic system capable of completing repetitive tasks. These hardcoded behaviors will quickly fail if the assumptions about the environment are invalidated. In this thesis we will demonstrate how a robust grasping system can be built that is capable of operating under a more variable set of conditions without requiring significant engineering of behavior by a roboticist.
This robustness is enabled by a newfound ability to empower novel machine learning techniques with massive amounts of synthetic training data. The ability of simulators to create realistic sensory data enables the generation of massive corpora of labeled training data for various grasping-related tasks. The use of simulation allows for the creation of a wide variety of environments and experiences, exposing the robotic system to a large number of scenarios before it ever operates in the real world. This thesis demonstrates that it is now possible to build systems that work in the real world trained using deep learning on synthetic data. The sheer volume of data that can be produced via simulation enables the use of powerful deep learning techniques whose performance scales with the amount of data available. This thesis will explore how deep learning and other techniques can be used to encode these massive datasets for efficient runtime use. The ability to train and test on synthetic data allows for quick iterative development of new perception, planning, and grasp execution algorithms that work in a large number of environments. Creative applications of machine learning and massive synthetic datasets are allowing robotic systems to learn skills and move beyond repetitive hardcoded tasks.