Sidebar: Programming Commercial Robots
Manual systems require the user/programmer to enter the desired behaviour of the robot directly, usually using a graphical or text-based programming language, as shown in Fig. 1. Text-based systems use either controller-specific languages, generic procedural languages, or behavioural languages, which differ mainly in their flexibility and means of expression. Graphical languages [BKS02, BI01] use a graph-, flowchart-, or diagram-based interface to programming, sacrificing some flexibility and expressiveness for ease of use.
In an automatic programming system, the user/programmer has little or no direct control over the robot code; the system may acquire the program by learning, by programming by demonstration (PbD), or by instruction, as indicated in Fig. 2. Automatic systems are often used "online," with a running robot, although a simulation can also be used.
In this sidebar we focus on the characteristics of commercial programming environments. Simple robots can be programmed directly through their own operating systems, while manufacturers of more sophisticated robots provide SDKs to simplify programming. Programming environments for mobile robots are also contrasted with those for industrial manipulators.
Remote Access to a Prototyping Laboratory
There is a growing global demand for continuing adult higher education particularly in science and engineering subjects. New technologies are emerging which would enable the development of a Remote Access Laboratory for rapid prototyping of Artificial Intelligence, as a learning environment for mechatronic engineering, in which high precision electromechanical devices are designed to exhibit autonomous behaviour.
Secondary research investigated the learning theories for a Remote Access Laboratory and current practice in distance learning, involving groupware in shared-activity 'collaboratories'. Having determined that the laboratory would need a multi-user interactive environment architecture, adaptable to rapid developments, a distributed software architecture was selected. The laboratory design was subsequently argued to be best served by Intelligent Agents in a Multi-Agent System.
The aims of the research were to establish the viability of a Remote Access Laboratory for mechatronic experimentation, and to evaluate the technologies required to implement such a laboratory environment for rapid prototyping. These were achieved by developing a novel user interface, based on a multi-functional screen layout, and a graphical specification facility to provide robotic navigation that is intuitive to use and does not require text-based programming.
The research investigated the prototyping of robotic behaviour, using Programming by Demonstration as an innovative technique to prototype robot navigation. The method of designing behaviours met an anticipated need to allow the robot to interact with an environment and achieve goals under conditions of uncertainty, while requiring a level of abstraction in the behaviour design. The interface structured a composite of the designed behaviours into prototype Artificial Intelligence using a hierarchical behaviour architecture, which complied with the principles of Object-Oriented programming. The result was a new and original programming method to facilitate rapid prototyping of Artificial Intelligence design and structuring.
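The hierarchical, object-oriented composition of behaviours described above might be sketched as follows. This is a minimal illustration of the general pattern only; the class names, the priority-based arbitration scheme, and the action strings are assumptions, not the thesis's actual implementation:

```python
# Illustrative sketch of a hierarchical behaviour architecture:
# primitive behaviours are leaves, composites delegate to children.
class Behaviour:
    """Base class: every behaviour maps a sensed state to an action."""
    def act(self, state):
        raise NotImplementedError

class MoveForward(Behaviour):
    def act(self, state):
        return "forward"            # default locomotion behaviour

class AvoidObstacle(Behaviour):
    def act(self, state):
        # Only has an opinion when an obstacle is sensed.
        return "turn_left" if state.get("obstacle") else None

class CompositeBehaviour(Behaviour):
    """Priority-ordered composite: the first child with an opinion wins."""
    def __init__(self, *children):
        self.children = children

    def act(self, state):
        for child in self.children:
            action = child.act(state)
            if action is not None:
                return action
        return None

# A composite navigation behaviour: avoidance takes priority over cruising.
navigate = CompositeBehaviour(AvoidObstacle(), MoveForward())
```

Because composites are themselves behaviours, they can be nested to arbitrary depth, which is what makes the architecture hierarchical in the object-oriented sense.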
Experimentation involved 20 participants attempting a series of tasks using both the prototyped interface and an existing text-based robot programming system. The participants were profiled by their formal qualifications, knowledge and experience. The experimental data obtained were used to establish a comparative measure of the prototype interface's success against an existing distance-learning home-experiment kit, in the form of a small controllable model vehicle. The data provided strong evidence to support the hypothesis that a Programming by Demonstration based system for rapid prototyping is more flexible and easier to use than the previously existing distance-learning text-based system. The Programming by Demonstration system showed great promise, being quicker for prototyping and more intuitive. The learning interface design pioneered new techniques and technologies for rapid prototyping of Artificial Intelligence in a Mechatronics Remote Access Laboratory.
Intuitive Teleoperation of an Intelligent Robotic System Using Low-Cost 6-DOF Motion Capture
There is currently a wide variety of six degree-of-freedom (6-DOF) motion capture technologies available; however, these systems tend to be prohibitively expensive. A software system was developed to provide 6-DOF motion capture using the Nintendo Wii remote's (wiimote) sensors, an infrared beacon, and a novel hierarchical linear-quaternion Kalman filter. The software is made freely available, and the hardware costs less than one hundred dollars. Using this motion capture software, a robotic control system was developed to teleoperate a 6-DOF robotic manipulator via the operator's natural hand movements.
The teleoperation system requires calibration of the wiimote’s infrared cameras to obtain an estimate of the wiimote’s 6-DOF pose. However, since the raw images from the wiimote’s infrared camera are not available, a novel camera-calibration method was developed to obtain the camera’s intrinsic parameters, which are used to obtain a low-accuracy estimate of the 6-DOF pose. By fusing the low-accuracy estimate of 6-DOF pose with accelerometer and gyroscope measurements, an accurate estimation of 6-DOF pose is obtained for teleoperation.
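The fusion step described above follows the standard Kalman predict-update cycle: propagate the state with inertial measurements, then correct with the low-accuracy camera fix. The sketch below is a deliberately simplified one-dimensional position/velocity analogue of that principle, not the thesis's hierarchical linear-quaternion filter (which also estimates attitude); the noise parameters `q` and `r` are illustrative:

```python
import numpy as np

# Toy 1-D Kalman filter: predict with an accelerometer reading, then
# correct with a low-accuracy camera position fix. State x = [pos, vel].
def kf_step(x, P, accel, z_pos, dt, q=0.01, r=0.05):
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])         # accelerometer input model
    H = np.array([[1.0, 0.0]])              # camera observes position only
    # Predict: propagate state and inflate covariance with process noise q.
    x = F @ x + B * accel
    P = F @ P @ F.T + q * np.eye(2)
    # Update: weigh the camera fix by the innovation covariance S.
    S = H @ P @ H.T + r
    K = P @ H.T / S                          # Kalman gain
    x = x + (K * (z_pos - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

The filtered estimate lands between the inertial prediction and the camera measurement, weighted by their respective uncertainties, which is how a low-accuracy optical fix and high-rate IMU data combine into one accurate pose estimate.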
Preliminary testing suggests that the motion capture system has an accuracy of better than a millimetre in position and better than one degree in attitude. Furthermore, whole-system tests demonstrate that the teleoperation system is capable of controlling the end effector of a robotic manipulator to match the pose of the wiimote. Since this system provides 6-DOF motion capture at a fraction of the cost of traditional methods, it has wide applicability in the field of robotics and as a 6-DOF human input device for controlling 3D virtual computer environments.
Human-Inspired Robot Task Teaching and Learning
Current methods of robot task teaching and learning have several limitations: highly-trained personnel are usually required to teach robots specific tasks; service-robot systems are limited in learning different types of tasks utilizing the same system; and the teacher’s expertise in the task is not well exploited. A human-inspired robot-task teaching and learning method is developed in this research with the aim of allowing general users to teach different object-manipulation tasks to a service robot, which will be able to adapt its learned tasks to new task setups.
The proposed method was developed to be interactive and intuitive to the user. In a closed loop with the robot, the user can intuitively teach the tasks, track the learning states of the robot, direct the robot attention to perceive task-related key state changes, and give timely feedback when the robot is practicing the task, while the robot can reveal its learning progress and refine its knowledge based on the user’s feedback.
The human-inspired method consists of six teaching and learning stages: 1) checking and teaching the needed background knowledge of the robot; 2) introduction of the overall task to be taught to the robot: the hierarchical task structure, and the involved objects and robot hand actions; 3) teaching the task step by step, and directing the robot to perceive important state changes; 4) demonstration of the task in whole, and offering vocal subtask-segmentation cues in subtask transitions; 5) robot learning of the taught task using a flexible vote-based algorithm to segment the demonstrated task trajectories, a probabilistic optimization process to assign obtained task trajectory episodes (segments) to the introduced subtasks, and generalization of the taught task trajectories in different reference frames; and 6) robot practicing of the learned task and refinement of its task knowledge according to the teacher’s timely feedback, where the adaptation of the learned task to new task setups is achieved by blending the task trajectories generated from pertinent frames.
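Stage 5 above segments the demonstrated trajectories with a vote-based algorithm. A minimal sketch of the voting idea is given below; the cue types, the threshold, and the function signature are illustrative assumptions, not the algorithm's actual parameters:

```python
from collections import Counter

# Hypothetical vote-based segmentation: several independent cues (e.g.
# velocity minima, grasp/release events, vocal subtask cues) each
# nominate candidate breakpoint indices along a demonstrated trajectory;
# indices that collect enough votes become subtask boundaries.
def segment(n_samples, cue_votes, threshold=2):
    """cue_votes: one list of candidate breakpoint indices per cue."""
    votes = Counter(i for cue in cue_votes for i in cue)
    breakpoints = sorted(i for i, v in votes.items() if v >= threshold)
    # Cut the trajectory [0, n_samples) into episodes at the breakpoints.
    bounds = [0] + breakpoints + [n_samples]
    return [(bounds[k], bounds[k + 1]) for k in range(len(bounds) - 1)]
```

Requiring agreement between cues makes the segmentation robust to any single noisy cue, which is the motivation usually given for vote-based schemes.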
An agent-based architecture was designed and developed to implement this robot-task teaching and learning method. This system has an interactive human-robot teaching interface subsystem, which is composed of: a) a three-camera stereo vision system to track user hand motion; b) a stereo-camera vision system mounted on the robot end-effector to allow the robot to explore its workspace and identify objects of interest; and c) a speech recognition and text-to-speech system, utilized for the main human-robot interaction.
A user study involving ten human subjects was performed using two tasks to evaluate the system, based on the time spent by the subjects on each teaching stage; efficiency measures of the robot's understanding of users' vocal requests, responses, and feedback; and the subjects' subjective evaluations. Another set of experiments was conducted to analyze the ability of the robot to adapt its previously learned tasks to new task setups, using measures such as object, target and robot starting-point poses; alignments of objects on targets; and actual robot grasp and release poses relative to the related objects and targets. The results indicate that the system enabled the subjects to naturally and effectively teach the tasks to the robot and give timely feedback on the robot's practice performance. The robot was able to learn the tasks as expected and adapt its learned tasks to new task setups, including setups considerably different from those in the demonstration. It properly refined its task knowledge based on the teacher's feedback and successfully applied the refined knowledge in subsequent task practices. The alignments of objects on the target were quite close to those taught, and the executed grasping and releasing poses of the robot relative to objects and targets were almost identical to the taught poses. The robot-task learning ability was affected by limitations of the vision-based human-robot teleoperation interface used in hand-to-hand teaching and by the robot's capacity to sense its workspace. Future work will investigate robot learning of a variety of different tasks and the use of more in-built primitive robot skills.