
A Theoretical Framework for Multiple Neural Network Systems

By Mike W Shields and Matthew C Casey

Abstract

Multiple neural network systems have become popular techniques for tackling complex tasks, often giving improved performance compared to single network systems. For example, modular systems can provide improvements in generalisation through task decomposition, whereas multiple classifier and regressor systems typically improve generalisation through the ensemble combination of redundant networks. Whilst there has been significant focus on understanding the theoretical properties of some of these multi-net systems, particularly ensemble systems, there has been little theoretical work on understanding the properties of the generic combination of networks, important in developing more complex systems, perhaps even those a step closer to their biological counterparts. In this article, we provide a formal framework in which the generic combination of neural networks can be described, and in which the properties of the system can be rigorously analysed. We achieve this by describing multi-net systems in terms of partially ordered sets and state transition systems. By way of example, we explore an abstract version of learning applied to a generic multi-net system that can combine an arbitrary number of networks in sequence and in parallel. By using the framework we show with a constructive proof that, under specific conditions, if it is possible to train the generic system, then training can be achieved by the abstract technique described.
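The abstract describes multi-net systems built by combining an arbitrary number of networks in sequence and in parallel. The following is a minimal illustrative sketch of that idea, not the authors' poset/state-transition formalism: the `Net`, `seq`, and `par` names are hypothetical, and a "network" is simplified to any callable mapping a list of floats to a list of floats.

```python
# Illustrative sketch only (not the paper's formalism): composing
# component networks in sequence and in parallel, with arbitrary nesting.
from typing import Callable, List

# A "network" here is any callable from a vector to a vector.
Net = Callable[[List[float]], List[float]]

def seq(*nets: Net) -> Net:
    """Sequential combination: each network's output feeds the next."""
    def combined(x: List[float]) -> List[float]:
        for net in nets:
            x = net(x)
        return x
    return combined

def par(*nets: Net) -> Net:
    """Parallel combination: all networks receive the same input;
    their outputs are concatenated."""
    def combined(x: List[float]) -> List[float]:
        out: List[float] = []
        for net in nets:
            out.extend(net(x))
        return out
    return combined

# Two trivial components standing in for trained networks.
double: Net = lambda x: [2.0 * v for v in x]
negate: Net = lambda x: [-v for v in x]

# Combinations nest freely, mirroring the generic systems in the paper.
system = seq(par(double, negate), double)
print(system([1.0, 2.0]))  # [4.0, 8.0, -2.0, -4.0]
```

Because `seq` and `par` each return an ordinary `Net`, the two combinators close over the set of networks, which is what allows an arbitrary number of networks to be combined at any depth.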

Year: 2008
DOI identifier: 10.1016/j.neucom.2007.05.008
OAI identifier: oai:epubs.surrey.ac.uk:500
