Many types of intelligent adaptive system use
vast databases of a priori knowledge during their training
phases. Such systems are then reliant on both the accuracy
and the breadth of that data. It is assumed
during training that the data encompasses the entire
operating window of the system in sufficient detail to
generate an accurate ‘black box’ model of the plant under
control. Under certain unforeseen operating
conditions, or in scenarios where there is little prior
knowledge, the system may be forced to operate outside the
scope of the original a priori knowledge. Finally, the data
gathered into the a priori source may have been
unintentionally corrupted. This paper aims to examine some
of these effects on two common adaptive intelligent tools:
the neural network and the adaptive neuro-fuzzy inference
system (ANFIS).