Linear Self-Attention Approximation via Trainable Feedforward Kernel
In pursuit of faster computation, Efficient Transformers demonstrate an
impressive variety of approaches: models attaining sub-quadratic attention
complexity can exploit sparsity or a low-rank approximation of the inputs
to reduce the number of attended keys; other ways to reduce complexity
include locality-sensitive hashing, key pooling, additional memory to store
information in a compacted form, and hybridization with other architectures,
such as CNNs. Grounded in strong mathematical foundations, kernelized
approaches allow the attention mechanism to be approximated with linear
complexity while retaining high accuracy. In the present paper, we therefore
expand the idea of trainable kernel methods to approximate the self-attention
mechanism of the Transformer architecture.
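To make the kernelized idea concrete, the sketch below replaces the softmax kernel with a trainable feedforward feature map phi and exploits the associativity of matrix products to obtain complexity linear in sequence length. This is a minimal PyTorch sketch under our own assumptions; the module name, the ReLU feature map, and the feature dimension are illustrative choices, not the paper's exact design.

```python
# Minimal sketch of linear attention with a trainable feedforward kernel.
# phi(Q) (phi(K)^T V) is computed instead of softmax(Q K^T) V, so the cost
# is linear, not quadratic, in sequence length.
import torch
import torch.nn as nn

class TrainableKernelAttention(nn.Module):
    def __init__(self, dim: int, feature_dim: int = 64):
        super().__init__()
        # Trainable feedforward feature map phi(.); the ReLU keeps its
        # output nonnegative so the normalizer stays well defined
        # (one common choice, assumed here).
        self.phi = nn.Sequential(nn.Linear(dim, feature_dim), nn.ReLU())

    def forward(self, q, k, v):
        # q, k, v: (batch, seq_len, dim)
        q_f = self.phi(q)  # (B, N, F)
        k_f = self.phi(k)  # (B, N, F)
        # Associativity: contract phi(K) with V first, giving an (F, D)
        # summary instead of an (N, N) attention matrix.
        kv = torch.einsum("bnf,bnd->bfd", k_f, v)  # (B, F, D)
        # Row-wise normalizer, with a small epsilon for stability.
        z = 1.0 / (torch.einsum("bnf,bf->bn", q_f, k_f.sum(dim=1)) + 1e-6)
        return torch.einsum("bnf,bfd,bn->bnd", q_f, kv, z)

# Example: self-attention over a random sequence.
x = torch.randn(2, 128, 32)
attn = TrainableKernelAttention(dim=32, feature_dim=64)
out = attn(x, x, x)  # (2, 128, 32)
```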
Text-to-Ontology Mapping via Natural Language Processing with Application to Search for Relevant Ontologies in Catalysis
The paper presents a machine-learning based approach to text-to-ontology mapping. We explore the possibility of matching texts to the relevant ontologies using a combination of artificial neural networks and classifiers. Ontologies are formal specifications of the shared conceptualizations of application domains. While describing the same domain, different ontologies might be created by different domain experts. To enhance the reasoning over and handling of concepts in scientific papers, the ontology that best describes the concepts contained in a text corpus has to be found. The approach presented in this work attempts to solve this by selecting a representative text paragraph from a set of scientific papers, which serve as the data set. Then, using a pre-trained and fine-tuned Transformer, the paragraph is embedded into a vector space. Finally, the embedding vector is classified with respect to its relevance regarding a selected target ontology. To construct representative embeddings, we experiment with different training pipelines for natural language processing models. These embeddings are in turn used in the task of matching text to an ontology. The result is assessed by compressing and visualizing the latent space and exploring the mappings between text fragments from a database and the set of chosen ontologies. To confirm the differences in behavior of the proposed ontology mapper models, we test five statistical hypotheses about their relative performance on ontology classification. To categorize the output of the Transformer, different classifiers are considered, namely the Support Vector Machine (SVM), k-Nearest Neighbors, Gaussian Process, Random Forest, and Multilayer Perceptron. By applying these classifiers in the domain of scientific texts concerning catalysis research and the respective ontologies, their suitability is evaluated; the best result was achieved by the SVM classifier.
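To make the described pipeline concrete, the sketch below embeds paragraphs with a pre-trained Transformer and compares the five listed classifiers. It is a minimal illustration assuming the sentence-transformers library and the all-MiniLM-L6-v2 model; the paper's exact model, fine-tuning, data, and evaluation protocol are not specified here, and the paragraphs and labels are placeholders.

```python
# Minimal sketch (not the paper's exact setup): embed paragraphs with a
# pre-trained Transformer, then compare the five classifiers on
# ontology-relevance labels. All data below are placeholders.
from sentence_transformers import SentenceTransformer
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Placeholder corpus: representative paragraphs and the index of the
# ontology each one best fits (hypothetical labels).
paragraphs = [
    "Paragraph discussing heterogeneous catalysis ...",
    "Paragraph discussing reaction kinetics ...",
    "Paragraph discussing zeolite materials ...",
    "Paragraph discussing enzyme catalysis ...",
]
labels = [0, 1, 0, 1]

# Embed each paragraph into a vector space (assumed model choice).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(paragraphs)

# The five classifiers considered in the paper, with default-style settings.
classifiers = {
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(n_neighbors=1),
    "Gaussian Process": GaussianProcessClassifier(),
    "Random Forest": RandomForestClassifier(),
    "MLP": MLPClassifier(max_iter=1000),
}
for name, clf in classifiers.items():
    clf.fit(X, labels)
    # With so little data this only reports training accuracy; a held-out
    # split would be used in practice.
    print(f"{name}: {clf.score(X, labels):.2f}")
```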