Tree Echo State Networks
In this paper we present the Tree Echo State Network (TreeESN) model, generalizing the paradigm of Reservoir Computing to tree structured data. TreeESNs exploit an untrained generalized recursive reservoir, exhibiting extreme efficiency for learning in structured domains. In addition, we highlight throughout the paper other characteristics of the approach. First, we discuss the Markovian characterization of reservoir dynamics, extended to the case of tree domains, that is implied by the contractive setting of the TreeESN state transition function. Second, we study two types of state mapping functions to map the tree structured state of a TreeESN into a fixed-size feature representation for classification or regression tasks. The critical role of the relation between the choice of the state mapping function and the Markovian characterization of the task is analyzed and experimentally investigated on both artificial and real-world tasks. Finally, experimental results on benchmark and real-world tasks show that the TreeESN approach, in spite of its efficiency, can achieve results comparable to those of state-of-the-art, though more complex, neural and kernel-based models for tree structured data.
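The abstract's two key ideas, an untrained contractive recursive reservoir and a state mapping from tree-structured states to a fixed-size vector, can be sketched as follows. This is a minimal illustrative reconstruction, not the paper's exact formulation: the reservoir size, input scaling, and the 0.5 contraction coefficient are assumptions, and the root/mean mappings only mirror the two state mapping types described above.

```python
import numpy as np

rng = np.random.default_rng(0)
N_U, N_R = 3, 20  # input label size, reservoir size (illustrative)

W_in = rng.uniform(-0.1, 0.1, (N_R, N_U))
W_hat = rng.uniform(-1.0, 1.0, (N_R, N_R))
# Rescale the recurrent weights to a contractive regime, which is what
# induces the Markovian characterization of the reservoir dynamics.
W_hat *= 0.5 / np.linalg.norm(W_hat, 2)

def encode(tree):
    """tree = (label, [children]); returns (root_state, mean_state)."""
    states = []
    def rec(node):
        label, children = node
        # Untrained recursive state transition over the children's states:
        # x(n) = tanh(W_in u(n) + W_hat * sum_k x(child_k))
        child_sum = sum((rec(c) for c in children), np.zeros(N_R))
        x = np.tanh(W_in @ np.asarray(label) + W_hat @ child_sum)
        states.append(x)
        return x
    root = rec(tree)
    # Two state mapping functions: root state vs. mean over all node states.
    return root, np.mean(states, axis=0)

leaf = ([1.0, 0.0, 0.0], [])
tree = ([0.0, 1.0, 0.0], [leaf, ([0.0, 0.0, 1.0], [leaf])])
root_state, mean_state = encode(tree)
print(root_state.shape, mean_state.shape)  # both (20,)
```

Either fixed-size vector can then feed an ordinary trained linear readout for classification or regression, which is where all the learning happens.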
Reservoir Computing for Learning in Structured Domains
The study of learning models for the direct processing of complex data structures has gained
increasing interest within the Machine Learning (ML) community during the last decades.
In this regard, the efficiency, effectiveness, and adaptivity of ML models on large classes
of data structures represent challenging and open research issues.
The paradigm under consideration is Reservoir Computing (RC), a novel and extremely
efficient methodology for modeling Recurrent Neural Networks (RNNs) for adaptive
sequence processing. RC comprises a number of different neural models, among which the
Echo State Network (ESN) is probably the most popular and widely studied one.
Another research area of interest is represented by Recursive Neural Networks (RecNNs),
constituting a class of neural network models recently proposed for dealing with
hierarchical data structures directly.
In this thesis the RC paradigm is investigated and suitably generalized in order to
approach the problems arising from learning in structured domains. The research studies
described in this thesis cover classes of data structures characterized by increasing
complexity, from sequences to tree and graph structures. Accordingly, the research focus
goes progressively from the analysis of standard ESNs for sequence processing to the
development of new models for tree and graph structured domains. The analysis of ESNs
for sequence processing addresses the interesting problem of identifying and
characterizing the relevant factors which influence the reservoir dynamics and the ESN performance.
Promising applications of ESNs in the emerging field of Ambient Assisted Living are also
presented and discussed. Moving towards highly structured data representations, the
ESN model is extended to deal with complex structures directly, resulting in the proposed
TreeESN, which is suitable for domains comprising hierarchical structures, and GraphESN,
which generalizes the approach to a large class of cyclic/acyclic directed/undirected
labeled graphs. TreeESNs and GraphESNs represent both novel RC models for structured
data and extremely efficient approaches for modeling RecNNs, eventually contributing
to the definition of an RC framework for learning in structured domains. The problem
of adaptively exploiting the state space in GraphESNs is also investigated, with specific
regard to tasks in which input graphs are required to be mapped into flat vectorial outputs,
resulting in the GraphESN-wnn and GraphESN-NG models. As a further point, the
generalization performance of the proposed models is evaluated considering both artificial
and complex real-world tasks from different application domains, including Chemistry,
Toxicology, and Document Processing.
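The standard ESN dynamics for sequence processing that the thesis analyzes can be sketched in a few lines. Everything below is an illustrative assumption rather than the thesis's setup: the reservoir size, the 0.9 spectral radius (a common heuristic tied to the echo state property), and the next-step-prediction readout fitted by least squares.

```python
import numpy as np

rng = np.random.default_rng(42)
N_U, N_R = 1, 50  # input size, reservoir size (illustrative)

W_in = rng.uniform(-0.1, 0.1, (N_R, N_U))
W = rng.uniform(-1.0, 1.0, (N_R, N_R))
# Rescale the untrained recurrent matrix to spectral radius 0.9, a common
# heuristic for obtaining contractive reservoir dynamics.
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Drive the reservoir: x(t) = tanh(W_in u(t) + W x(t-1))."""
    x = np.zeros(N_R)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x)
    return np.array(states)

seq = np.sin(np.linspace(0, 4 * np.pi, 100))
X = run_reservoir(seq)  # (100, 50) state matrix

# Only the linear readout is trained, here for next-step prediction.
W_out, *_ = np.linalg.lstsq(X[:-1], seq[1:], rcond=None)
pred = X[:-1] @ W_out
```

The split between a fixed, randomly initialized reservoir and a cheap trained linear readout is exactly what makes the RC approach so efficient compared to fully trained RNNs and RecNNs.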
A Questionnaire integration system based on question classification and short text semantic textual similarity
2018 Fall. Includes bibliographical references. Semantic integration from heterogeneous sources involves a series of NLP tasks. Existing research has focused mainly on measuring two paired sentences. However, to find possibly identical texts between two datasets, the sentences are not paired. To avoid pair-wise comparison, this thesis proposes a semantic similarity measuring system equipped with a precategorization module. It applies a hybrid question classification module, which subdivides all texts into coarse categories. The sentences are then paired from these subcategories. The core task is to detect identical texts between two sentences, which relates to the semantic textual similarity task in the NLP field. We built a short text semantic textual similarity measuring module. It combines conventional NLP techniques, including both semantic and syntactic features, with a Recurrent Convolutional Neural Network to form an ensemble model. We also conducted a set of empirical evaluations. The results show that our system possesses a degree of generalization ability and performs well on heterogeneous sources.
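The precategorize-then-compare idea can be sketched as follows. This is a toy reconstruction, not the thesis's system: the rule-based question classifier, the bag-of-words cosine similarity (standing in for the semantic/syntactic ensemble), and the 0.6 threshold are all illustrative assumptions.

```python
import numpy as np
from collections import Counter

def coarse_category(text):
    # Toy stand-in for the hybrid question classification module:
    # bucket questions by their leading interrogative word.
    head = text.lower().split()[0]
    return {"what": "entity", "how": "manner", "when": "time"}.get(head, "other")

def cosine(a, b):
    """Bag-of-words cosine similarity between two token Counters."""
    vocab = set(a) | set(b)
    va = np.array([a[w] for w in vocab], float)
    vb = np.array([b[w] for w in vocab], float)
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(va @ vb / denom) if denom else 0.0

def match(dataset_a, dataset_b, threshold=0.6):
    # Precategorization: only sentences in the same coarse category are
    # paired, avoiding full pair-wise comparison across the two datasets.
    buckets = {}
    for s in dataset_b:
        buckets.setdefault(coarse_category(s), []).append(s)
    pairs = []
    for s in dataset_a:
        for t in buckets.get(coarse_category(s), []):
            sim = cosine(Counter(s.lower().split()), Counter(t.lower().split()))
            if sim >= threshold:
                pairs.append((s, t, sim))
    return pairs

a = ["What is your age?", "How often do you exercise?"]
b = ["What is your current age?", "When did symptoms start?"]
pairs = match(a, b)
```

In this toy run only the two age questions fall in the same category and clear the threshold; cross-category candidates are never scored, which is the efficiency gain the precategorization module provides.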