This thesis describes work on using vision verification within an
object-level language for describing robot assembly (RAPT). The motivation
for the thesis comes from two problems. The first is how to extend a
high-level robot programming language so that it can include vision
commands to locate the workpieces of an assembly. The second is how to
make full use of sensory information to update the robot system's
knowledge of the environment. The work described in this thesis consists
of three parts:
(1) adding vision commands to the RAPT input language so that
the user can specify vision verification tasks;
(2) implementing a symbolic geometrical reasoning system so that
vision data can be reasoned about symbolically at compile time
in order to speed up run-time operations;
(3) providing a framework which enables the RAPT system to make
full use of the sensory information.
The vision commands allow partial information about positions to be
combined with sensory information in a general way, and the symbolic
reasoning system allows much of the reasoning work about vision information
to be done before the actual information is obtained. The framework
combines a verification vision facility with an object-level
language in an intelligent way, so that all ramifications of the effects
of sensory data are taken into account. The heart of the framework is the
modifying factor array. The position of each object is expressed as the
product of two parts: the planned position and the difference between
this and "he actual one. This difference, referred to as the modifying
factor of an object, is stored in the modifying factor array. The planned position is described by the user in the usual way in a RAPT
program and its value is inferred by the RAPT reasoning system. Modifying
factors of objects whose positions are directly verified are defined
at compile time as symbolic expressions containing variables whose values
will become known at run time. The modifying factors of other objects
(not directly verified) may depend upon the positions of objects which
are verified. At compile time the framework reasons about the influence
of the sensory information on the objects which are not verified
directly by the vision system, and establishes connections among modifying
factors of objects in each situation. This framework makes the
representation of the influence of vision information on the robot's
knowledge of the environment compact and simple.
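To make the idea concrete, the sketch below is a minimal illustration
written for exposition, not the RAPT implementation: the class name
ModifyingFactorArray, the object names, the use of post-multiplied 4x4
homogeneous transforms, and the operations verify_directly and depends_on
are all assumptions of this sketch. It shows a planned position combined
with a modifying factor whose value is bound only when vision data arrive
at run time, and an unverified object's factor connected at compile time
to that of a directly verified one.

    import numpy as np

    def transform(x=0.0, y=0.0, theta=0.0):
        # Planar pose expressed as a 4x4 homogeneous transform.
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c,  -s,  0.0,   x],
                         [s,   c,  0.0,   y],
                         [0.0, 0.0, 1.0, 0.0],
                         [0.0, 0.0, 0.0, 1.0]])

    class ModifyingFactorArray:
        # Assumed convention: actual(obj) = planned(obj) @ factor(obj).
        # A directly verified object's factor is a symbolic entry evaluated
        # only when vision data arrive; an unverified object's factor is
        # tied to that of a verified object.
        def __init__(self):
            self.planned = {}   # object name -> planned pose (known at compile time)
            self.factors = {}   # object name -> callable(vision_data) -> 4x4 transform

        def set_planned(self, obj, pose):
            self.planned[obj] = pose

        def verify_directly(self, obj):
            # "Compile time": record a symbolic factor whose variables
            # (dx, dy, dtheta) are measured by the vision system at run time.
            self.factors[obj] = lambda vis: transform(*vis[obj])

        def depends_on(self, obj, base):
            # An unverified object that moves rigidly with `base` (e.g. rests
            # on it): its factor is the base factor conjugated by the planned
            # relative pose, so the connection is fixed now and merely
            # evaluated at run time.
            rel = np.linalg.inv(self.planned[base]) @ self.planned[obj]
            rel_inv = np.linalg.inv(rel)
            self.factors[obj] = lambda vis: rel_inv @ self.factors[base](vis) @ rel

        def actual(self, obj, vision_data):
            factor = self.factors.get(obj, lambda vis: np.eye(4))
            return self.planned[obj] @ factor(vision_data)

    # "Compile time": planned positions and the connections between factors.
    mfa = ModifyingFactorArray()
    mfa.set_planned("block", transform(100.0, 50.0, 0.0))
    mfa.set_planned("peg",   transform(110.0, 50.0, 0.0))  # planned to sit on the block
    mfa.verify_directly("block")
    mfa.depends_on("peg", "block")

    # "Run time": vision reports the block is 3 mm off in x and rotated 0.02 rad;
    # the peg's position is corrected through the stored connection.
    vision_data = {"block": (3.0, 0.0, 0.02)}
    print(mfa.actual("block", vision_data))
    print(mfa.actual("peg", vision_data))

In this sketch the connection between factors is simply the planned
relative pose, so once the block's measured displacement arrives, the
peg's position is corrected without any further geometric reasoning at
run time.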
All of the programming has been completed; it has been tested with
simulated data and works successfully.