Theoretical properties of recursive neural networks with linear neurons

Abstract

Recursive neural networks are a powerful tool for processing structured data, thus bridging the gap between connectionism, which is usually associated with poorly structured data, and a great variety of real-world problems in which the information is naturally encoded in the relationships among the basic entities. In this paper, some theoretical results about linear recursive neural networks are presented that allow us to establish conditions on their dynamical properties and on their capability to encode and classify structured information. Many of the limitations of the linear model, which are intrinsically related to recursive processing, are inherited by the general model, thus also delimiting its computational capabilities and range of applicability. As a byproduct of our study, some connections with classical linear system theory are drawn, in which the processing is extended from sequences to graphs.
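
To make the object of study concrete, a linear recursive network computes the state of each node of an input structure as a linear function of its children's states and of the node's label, and reads the output off the root state. The following is a minimal sketch of this state-transition scheme, assuming a positional model with one matrix per child slot; the matrix names A_k, B, C and the toy tree are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def encode(node, A, B):
    """Recursively compute the state of `node` = (label, children):
       x(v) = sum_k A[k] @ x(child_k(v)) + B @ u(v)."""
    label, children = node
    state = B @ label
    for k, child in enumerate(children):
        state += A[k] @ encode(child, A, B)   # linear contribution of the k-th child
    return state

rng = np.random.default_rng(0)
n, m, max_children = 3, 2, 2
A = [rng.standard_normal((n, n)) for _ in range(max_children)]  # one transition matrix per child position
B = rng.standard_normal((n, m))                                 # label-to-state map
C = rng.standard_normal((1, n))                                 # state-to-output map

# Toy tree: a root with two leaves; node labels are 2-d vectors.
leaf1 = (np.array([1.0, 0.0]), [])
leaf2 = (np.array([0.0, 1.0]), [])
root = (np.array([0.5, 0.5]), [leaf1, leaf2])

y = C @ encode(root, A, B)   # scalar encoding of the whole tree
print(y)
```

When the structure degenerates to a chain (each node having a single child), the recursion reduces to the familiar state equation x(t+1) = A x(t) + B u(t) of classical linear system theory, which is the analogy the abstract alludes to.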

Full text

Archivio della Ricerca - Università degli Studi di Siena
