Deep Tree Transductions - A Short Survey
The paper surveys recent extensions of Long Short-Term Memory networks to
handle tree structures, from the perspective of learning non-trivial forms of
isomorph structured transductions. It provides a discussion of modern TreeLSTM
models, showing the effect of the bias induced by the direction of tree
processing. An empirical analysis is performed on real-world benchmarks,
highlighting that no single model is adequate for effectively approaching all
transduction problems.
Comment: To appear in the Proceedings of the 2019 INNS Big Data and Deep
Learning (INNSBDDL 2019). arXiv admin note: text overlap with
arXiv:1809.0909
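To make the family of models concrete: the bottom-up TreeLSTMs the survey covers compute a node's state from its children's states rather than from a single predecessor. The following is a minimal NumPy sketch of a Child-Sum-style TreeLSTM node update (randomly initialised weights, illustrative only; it is not the specific architecture of any surveyed model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ChildSumTreeLSTMCell:
    """Sketch of a Child-Sum TreeLSTM node update, applied bottom-up.

    Gates are conditioned on the node input x and the sum of the
    children's hidden states; each child gets its own forget gate.
    """

    def __init__(self, in_dim, mem_dim, seed=0):
        rng = np.random.default_rng(seed)
        # one weight set per gate: input (i), output (o), update (u), forget (f)
        self.W = {g: rng.normal(0, 0.1, (mem_dim, in_dim)) for g in "iouf"}
        self.U = {g: rng.normal(0, 0.1, (mem_dim, mem_dim)) for g in "iouf"}
        self.b = {g: np.zeros(mem_dim) for g in "iouf"}

    def node_forward(self, x, child_h, child_c):
        # child_h, child_c: lists of (mem_dim,) arrays; empty at the leaves
        h_sum = np.sum(child_h, axis=0) if child_h else np.zeros_like(self.b["i"])
        i = sigmoid(self.W["i"] @ x + self.U["i"] @ h_sum + self.b["i"])
        o = sigmoid(self.W["o"] @ x + self.U["o"] @ h_sum + self.b["o"])
        u = np.tanh(self.W["u"] @ x + self.U["u"] @ h_sum + self.b["u"])
        # per-child forget gates, each conditioned on that child's own state
        f = [sigmoid(self.W["f"] @ x + self.U["f"] @ hk + self.b["f"])
             for hk in child_h]
        c = i * u + sum(fk * ck for fk, ck in zip(f, child_c))
        h = o * np.tanh(c)
        return h, c
```

A top-down model would instead propagate state from parent to children, which is exactly the direction-of-processing bias the survey discusses.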
StrAE: Autoencoding for Pre-Trained Embeddings using Explicit Structure
This work presents StrAE: a Structured Autoencoder framework that, through
strict adherence to explicit structure and the use of a novel contrastive
objective over tree-structured representations, enables effective learning of
multi-level representations. Through comparison over different forms of
structure, we verify that our results are directly attributable to the
informativeness of the structure provided as input, and show that this is not
the case for existing tree models. We then further extend StrAE to allow the
model to define its own compositions using a simple localised-merge algorithm.
This variant, called Self-StrAE, outperforms baselines that don't involve
explicit hierarchical compositions, and is comparable to models given
informative structure (e.g. constituency parses). Our experiments are conducted
in a data-constrained (circa 10M tokens) setting to help tease apart the
contribution of the inductive bias to effective learning. However, we find that
this framework can be robust to scale, and when extended to a much larger
dataset (circa 100M tokens), our 430 parameter model performs comparably to a
6-layer RoBERTa many orders of magnitude larger in size. Our findings support
the utility of incorporating explicit composition as an inductive bias for
effective representation learning.
Comment: EMNLP 2023 Mai
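The abstract does not spell out the localised-merge algorithm; one plausible reading is a greedy procedure that repeatedly composes the adjacent pair of nodes with the most similar embeddings until a single binary tree remains. The sketch below illustrates that reading (the names and the mean-based `compose` placeholder are hypothetical, not Self-StrAE's actual learned composition):

```python
import numpy as np

def localized_merge_tree(embeddings, compose):
    """Hypothetical localised-merge sketch: greedily merge the adjacent
    pair of nodes whose embeddings have the highest cosine similarity,
    building a binary tree bottom-up. The real Self-StrAE algorithm
    may differ in both the merge criterion and the composition function.
    """
    nodes = [(i, e) for i, e in enumerate(embeddings)]  # (tree, vector) pairs
    while len(nodes) > 1:
        sims = []
        for k in range(len(nodes) - 1):
            a, b = nodes[k][1], nodes[k + 1][1]
            sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
        k = int(np.argmax(sims))                    # most similar adjacent pair
        (ta, va), (tb, vb) = nodes[k], nodes[k + 1]
        nodes[k:k + 2] = [((ta, tb), compose(va, vb))]  # replace pair by parent
    return nodes[0]

# usage: one-hot token embeddings, mean as a stand-in composition function
tree, root_vec = localized_merge_tree(
    [np.eye(4)[i] for i in [0, 0, 1, 2]], lambda a, b: (a + b) / 2)
```

The point of such a scheme is that the model induces its own hierarchy instead of consuming an externally supplied parse.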
Ensemble Learning for Graph Neural Networks
Graph Neural Networks (GNNs) have shown success in various fields for
learning from graph-structured data. This paper investigates the application of
ensemble learning techniques to improve the performance and robustness of
GNNs. By training multiple GNN models with diverse
initializations or architectures, we create an ensemble model named ELGNN that
captures various aspects of the data and uses the Tree-Structured Parzen
Estimator algorithm to determine the ensemble weights. Combining the
predictions of these models enhances overall accuracy, reduces bias and
variance, and mitigates the impact of noisy data. Our findings demonstrate the
efficacy of ensemble learning in enhancing GNN capabilities for analyzing
complex graph-structured data. The code is publicly available at
https://github.com/wongzhenhao/ELGNN
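Once the per-model weights have been chosen, combining the ensemble reduces to a convex combination of each model's class-probability outputs. A minimal sketch of that combination step (the weight search itself, which ELGNN performs with a Tree-Structured Parzen Estimator, is assumed to have already happened and is not shown):

```python
import numpy as np

def ensemble_predict(prob_list, weights):
    """Weighted average of per-model class probabilities.

    prob_list : list of (n_nodes, n_classes) probability arrays,
                one per trained GNN.
    weights   : per-model weights, e.g. found by a TPE search;
                normalised here to a convex combination.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    stacked = np.stack(prob_list)          # (n_models, n_nodes, n_classes)
    return np.tensordot(w, stacked, axes=1)  # (n_nodes, n_classes)
```

Averaging probabilities rather than hard labels is what lets the ensemble reduce variance: a confidently wrong model is outvoted in proportion to its weight.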
S-TREE: Self-Organizing Trees for Data Clustering and Online Vector Quantization
This paper introduces S-TREE (Self-Organizing Tree), a family of models that use unsupervised learning to construct hierarchical representations of data and online tree-structured vector quantizers. The S-TREE1 model, which features a new tree-building algorithm, can be implemented with various cost functions. An alternative implementation, S-TREE2, which uses a new double-path search procedure, is also developed. S-TREE2 implements an online procedure that approximates an optimal (unstructured) clustering solution while imposing a tree-structure constraint. The performance of the S-TREE algorithms is illustrated with data clustering and vector quantization examples, including a Gauss-Markov source benchmark and an image compression application. S-TREE performance on these tasks is compared with the standard tree-structured vector quantizer (TSVQ) and the generalized Lloyd algorithm (GLA). The image reconstruction quality with S-TREE2 approaches that of GLA while taking less than 10% of the computing time. S-TREE1 and S-TREE2 also compare favorably with the standard TSVQ in both the time needed to create the codebook and the quality of image reconstruction.
Office of Naval Research (N00014-95-10409, N00014-95-0G57
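For context on the baseline: a tree-structured vector quantizer encodes a vector by descending a tree of codewords, at each level following the nearest child, so encoding costs O(depth) instead of a full codebook scan. The sketch below shows that standard greedy single-path TSVQ descent (codewords and tree are made up for illustration); S-TREE2's double-path search differs in that it tracks the two best paths to mitigate the greedy descent's suboptimality:

```python
import numpy as np

class TSVQNode:
    """A node in a tree-structured VQ codebook: a codeword plus children."""
    def __init__(self, codeword, children=()):
        self.codeword = np.asarray(codeword, dtype=float)
        self.children = list(children)

def tsvq_encode(x, root):
    """Standard greedy TSVQ search: at each node, descend into the child
    whose codeword is nearest to x; the leaf codeword is the quantized
    output. This is the single-path baseline, not S-TREE2's double-path
    procedure."""
    node = root
    while node.children:
        node = min(node.children,
                   key=lambda c: np.linalg.norm(x - c.codeword))
    return node.codeword

# usage: a depth-2 codebook over 2-D points
root = TSVQNode([0, 0], [
    TSVQNode([-1, 0], [TSVQNode([-2, 0]), TSVQNode([-0.5, 0])]),
    TSVQNode([1, 0],  [TSVQNode([0.5, 0]), TSVQNode([2, 0])]),
])
```

Because each level commits to one branch, the greedy path can miss the globally nearest leaf; that gap is what double-path search is designed to narrow.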