Grammar-Based Geodesics in Semantic Networks
A geodesic is the shortest path between two vertices in a connected network.
The geodesic is the kernel of various network metrics including radius,
diameter, eccentricity, closeness, and betweenness. These metrics are the
foundation of much network research and thus have been studied extensively in
the domain of single-relational networks (both in their directed and undirected
forms). However, geodesics for single-relational networks do not translate
directly to multi-relational, or semantic networks, where vertices are
connected to one another by any number of edge labels. Here, a more
sophisticated method for calculating a geodesic is necessary. This article
presents a technique for calculating geodesics in semantic networks with a
focus on semantic networks represented according to the Resource Description
Framework (RDF). In this framework, a discrete "walker" utilizes an abstract
path description called a grammar to determine which paths to include in its
geodesic calculation. The grammar-based model forms a general framework for
studying geodesic metrics in semantic networks.
Comment: First draft written in 200
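The grammar-constrained shortest-path idea above can be sketched as a breadth-first search over (vertex, automaton-state) pairs. This is an illustrative reading, not the article's implementation: the "grammar" is modeled here as a deterministic finite automaton over edge labels, and all names and the toy network are hypothetical.

```python
from collections import deque

def grammar_geodesic(graph, source, target, delta, start_state, accepting):
    """Shortest grammar-conforming path length, or None if no such path.

    graph: dict mapping vertex -> list of (edge_label, neighbor) pairs.
    delta: DFA transition table, (state, edge_label) -> state.
    A path counts only if its label sequence drives the DFA from
    start_state to a state in `accepting`.
    """
    queue = deque([(source, start_state, 0)])
    seen = {(source, start_state)}
    while queue:
        vertex, state, dist = queue.popleft()
        if vertex == target and state in accepting:
            return dist
        for label, neighbor in graph.get(vertex, []):
            next_state = delta.get((state, label))
            if next_state is not None and (neighbor, next_state) not in seen:
                seen.add((neighbor, next_state))
                queue.append((neighbor, next_state, dist + 1))
    return None  # no grammar-conforming path exists

# Toy RDF-like network: the grammar admits only "knows"-labeled edges,
# so the direct worksWith edge a->c is ignored by the walker.
g = {"a": [("knows", "b"), ("worksWith", "c")],
     "b": [("knows", "c")]}
delta = {(0, "knows"): 0}  # single-state DFA accepting knows*
print(grammar_geodesic(g, "a", "c", delta, 0, {0}))  # -> 2 (a-knows-b-knows-c)
```

Note that the single-relational geodesic is recovered as the special case where the automaton accepts every label.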
Grammar-Based Random Walkers in Semantic Networks
Semantic networks qualify the meaning of an edge relating any two vertices.
Determining which vertices are most "central" in a semantic network is
difficult because one relationship type may be deemed subjectively more
important than another. For this reason, research into semantic network metrics
has focused primarily on context-based rankings (i.e. user-prescribed
contexts). Moreover, many of the current semantic network metrics rank semantic
associations (i.e. directed paths between two vertices) and not the vertices
themselves. This article presents a framework for calculating semantically
meaningful primary eigenvector-based metrics such as eigenvector centrality and
PageRank in semantic networks using a modified version of the random walker
model of Markov chain analysis. Random walkers, in the context of this article,
are constrained by a grammar, where the grammar is a user-defined data
structure that determines the meaning of the final vertex ranking. The ideas in
this article are presented within the context of the Resource Description
Framework (RDF) of the Semantic Web initiative.
Comment: First draft of manuscript originally written in November 200
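A minimal sketch of the grammar-constrained walker idea, assuming the simplest possible grammar (a fixed set of admissible edge labels) rather than the article's full grammar data structure: restricting a PageRank-style power iteration to grammar-allowed edges yields a context-sensitive stationary ranking. All names below are illustrative.

```python
def grammar_walk_ranking(graph, allowed_labels, damping=0.85, iters=100):
    """PageRank-style ranking where the random walker may only traverse
    edges whose label is in `allowed_labels` (a degenerate 'grammar')."""
    vertices = list(graph)
    rank = {v: 1.0 / len(vertices) for v in vertices}
    for _ in range(iters):
        nxt = {v: (1.0 - damping) / len(vertices) for v in vertices}
        for v in vertices:
            out = [n for (label, n) in graph[v]
                   if label in allowed_labels and n in rank]
            if out:
                share = damping * rank[v] / len(out)
                for n in out:
                    nxt[n] += share
            else:  # dangling vertex under this grammar: teleport uniformly
                for u in vertices:
                    nxt[u] += damping * rank[v] / len(vertices)
        rank = nxt
    return rank

# A 3-cycle of "cites" edges: the grammar-restricted stationary
# distribution is uniform, as expected for a symmetric cycle.
g = {"a": [("cites", "b")],
     "b": [("cites", "c")],
     "c": [("cites", "a")]}
ranking = grammar_walk_ranking(g, {"cites"})
```

Swapping the allowed-label set changes which Markov chain the walker realizes, and therefore the meaning of the resulting eigenvector ranking.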
`The frozen accident' as an evolutionary adaptation: A rate distortion theory perspective on the dynamics and symmetries of genetic coding mechanisms
We survey some interpretations and related issues concerning the frozen accident hypothesis due to F. Crick and how it can be explained in terms of several natural mechanisms involving error-correcting codes, spin glasses, symmetry breaking, and the characteristic robustness of genetic networks. The approach to most of these questions involves using elements of Shannon's rate distortion theory, incorporating a semantic system which is meaningful for the relevant alphabets and vocabulary implemented in transmission of the genetic code. We apply the fundamental homology between information source uncertainty and the free energy density of a thermodynamic system with respect to transcriptional regulators and the communication channels of sequence/structure in proteins. This leads to the suggestion that the frozen accident may have been a type of evolutionary adaptation.
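The "fundamental homology" invoked here is usually stated as a parallel between two large-scale limits (a standard formulation in this literature; the notation below is illustrative rather than quoted from the abstract):

```latex
H[\mathbf{X}] \;=\; \lim_{n \to \infty} \frac{\log N(n)}{n}
\qquad \longleftrightarrow \qquad
F(K) \;=\; \lim_{V \to \infty} \frac{\log Z(K, V)}{V}
```

where $N(n)$ counts the high-probability paths of length $n$ emitted by the information source and $Z(K, V)$ is the partition function of a physical system of volume $V$ at inverse temperature $K$; up to signs and constants, source uncertainty plays the formal role of a free energy density, which is what licenses importing statistical-mechanics machinery (spin glasses, symmetry breaking) into the coding argument.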
The Infinity Mirror Test for Graph Models
Graph models, like other machine learning models, have implicit and explicit
biases built-in, which often impact performance in nontrivial ways. The model's
faithfulness is often measured by comparing the newly generated graph against
the source graph using any number or combination of graph properties.
Differences in the size or topology of the generated graph therefore indicate a
loss in the model. Yet, in many systems, errors encoded in loss functions are
subtle and not well understood. In the present work, we introduce the Infinity
Mirror test for analyzing the robustness of graph models. This straightforward
stress test works by repeatedly fitting a model to its own outputs. A
hypothetically perfect graph model would have no deviation from the source
graph; however, the model's implicit biases and assumptions are exaggerated by
the Infinity Mirror test, exposing potential issues that were previously
obscured. Through an analysis of thousands of experiments on synthetic and
real-world graphs, we show that several conventional graph models degenerate in
exciting and informative ways. We believe that the observed degenerative
patterns are clues to the future development of better graph models.
Comment: This was submitted to IEEE TKDE 2020, 12 pages and 8 figures
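The stress test described above reduces to a short fit-generate loop. The sketch below uses a deliberately simple toy model (an Erdős-Rényi edge-probability estimate), not any model from the paper, just to show the protocol: drift in a tracked property across generations exposes the model's bias.

```python
import random

def fit(n, edges):
    """Toy 'model': estimate a single edge probability from the graph."""
    return len(edges) / (n * (n - 1) / 2)

def generate(n, p, rng):
    """Sample an undirected graph on n vertices with edge probability p."""
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

def infinity_mirror(n, edges, generations, seed=0):
    """Repeatedly refit the model to its own output, recording density."""
    rng = random.Random(seed)
    densities = []
    for _ in range(generations):
        p = fit(n, edges)          # fit model to the current graph
        edges = generate(n, p, rng)  # ...then feed its output back in
        densities.append(len(edges) / (n * (n - 1) / 2))
    return densities

# A sparse seed graph on 20 vertices; watch the density trajectory.
trace = infinity_mirror(20, {(0, 1), (1, 2), (2, 3)}, generations=5)
```

A hypothetically perfect model would keep the trace flat; sampling noise compounded by refitting typically drives this toy model toward the empty graph, which is exactly the kind of degeneration the test is designed to surface.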
Anatomy of word and sentence meaning
Reading and listening involve complex psychological processes that recruit many brain areas. The anatomy of processing English words has been studied by a variety of imaging methods. Although there is widespread agreement on the general anatomical areas involved in comprehending words, there are still disputes about the computations that go on in these areas. Examination of the time relations (circuitry) among these anatomical areas can aid in understanding their computations. In this paper we concentrate on tasks which involve obtaining the meaning of a word in isolation or in relation to a sentence. Our current data support a finding in the literature that frontal semantic areas are active well before posterior areas. We use subjects' attention to amplify relevant brain areas involved either in semantic classification or in judging the relation of the word to a sentence in order to test the hypothesis that frontal areas are concerned with lexical semantics while posterior areas are more involved in comprehension of propositions that involve several words.
Sequence Processing with Quantum Tensor Networks
We introduce complex-valued tensor network models for sequence processing
motivated by correspondence to probabilistic graphical models, interpretability
and resource compression. Inductive bias is introduced to our models via
network architecture, and is motivated by the correlation structure inherent in
the data, as well as any relevant compositional structure, resulting in
tree-like connectivity. Our models are specifically constructed using
parameterised quantum circuits, widely used in quantum machine learning,
effectively using Hilbert space as a feature space. Furthermore, they are
efficiently trainable due to their tree-like structure. We demonstrate
experimental results for the task of binary classification of sequences from
real-world datasets relevant to natural language and bioinformatics,
characterised by long-range correlations and often equipped with syntactic
information. Since our models have a valid operational interpretation as
quantum processes, we also demonstrate their implementation on Quantinuum's
H2-1 trapped-ion quantum processor, demonstrating the possibility of efficient
sequence processing on near-term quantum devices. This work constitutes the
first scalable implementation of near-term quantum language processing,
providing the tools for large-scale experimentation on the role of tensor
structure and syntactic priors. Finally, this work lays the groundwork for
generative sequence modelling in a hybrid pipeline where the training may be
conducted efficiently in simulation, while sampling from learned probability
distributions may be done with polynomial speed-up on quantum devices.
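A classical toy version of the tree-structured contraction described above can be written with NumPy. This is a hedged sketch, not the paper's parameterised quantum circuits or the Quantinuum implementation: complex token vectors are merged pairwise up a binary tree by a shared complex tensor, and the root amplitudes are squared, Born-rule style, to score two classes. All names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # local (feature-space) dimension per token

# Shared complex merge tensor for the tree nodes, normalised once.
M = rng.normal(size=(d, d, d)) + 1j * rng.normal(size=(d, d, d))
M = M / np.linalg.norm(M)
readout = rng.normal(size=(2, d)) + 1j * rng.normal(size=(2, d))

def embed(token_id):
    """One-hot complex embedding of a token into C^d."""
    v = np.zeros(d, dtype=complex)
    v[token_id % d] = 1.0
    return v

def classify(tokens):
    """Tree tensor-network classifier (assumes power-of-two length)."""
    layer = [embed(t) for t in tokens]
    while len(layer) > 1:  # contract adjacent pairs up the binary tree
        layer = [np.einsum("ijk,j,k->i", M, a, b)
                 for a, b in zip(layer[::2], layer[1::2])]
    logits = readout @ layer[0]
    probs = np.abs(logits) ** 2  # squared amplitudes as class scores
    return probs / probs.sum()

scores = classify([1, 2, 3, 0])  # two class probabilities summing to 1
```

The tree connectivity is what makes both the classical contraction and, in the quantum setting, the circuit depth efficient in sequence length.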
Multidimensional analysis of linguistic networks
Network-based approaches play an increasingly important role in the analysis of data, even in systems in which a network representation is not immediately apparent. This is particularly true for linguistic networks, which are typically induced from a linguistic data set for which a network perspective is only one of several options for representation. Here we introduce a multidimensional framework for network construction and analysis with a special focus on linguistic networks. This framework is used to show that the higher the abstraction level of network induction, the harder the interpretation of the topological indicators used in network analysis. Several examples are provided, allowing for the comparison of different linguistic networks as well as of networks in other fields of application of network theory. The computation and the intelligibility of some statistical indicators frequently used in linguistic networks are discussed. This suggests that the field of linguistic networks, by applying statistical tools inspired by network studies in other domains, may, in its current state, have only a limited contribution to the development of linguistic theory.
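As a concrete instance of "network induction" at a low abstraction level, one can build a word co-occurrence network directly from sentences; higher abstraction levels (lemmas, syntactic relations) would replace the tokenisation step and, per the argument above, make the resulting topology harder to interpret. The sketch is purely illustrative.

```python
from collections import defaultdict

def cooccurrence_network(sentences, window=2):
    """Induce a weighted undirected word network: an edge links two
    words that co-occur within `window` tokens of each other."""
    edges = defaultdict(int)
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            for u in words[i + 1:i + window]:
                if u != w:
                    edges[tuple(sorted((w, u)))] += 1
    return edges

net = cooccurrence_network(["the cat sat", "the dog sat"])
# edges: (cat,the), (cat,sat), (dog,the), (dog,sat), each with weight 1
```

Standard topological indicators (degree, clustering, path lengths) can then be read off this edge dictionary; the interpretive question raised in the abstract is what such indicators mean once the induction step is less direct than raw adjacency.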
Applauding with Closed Hands: Neural Signature of Action-Sentence Compatibility Effects
BACKGROUND: Behavioral studies have provided evidence for an action-sentence compatibility effect (ACE) that suggests a coupling of motor mechanisms and action-sentence comprehension. When both processes are concurrent, the action sentence primes the actual movement, and simultaneously, the action affects comprehension. The aim of the present study was to investigate brain markers of bidirectional impact of language comprehension and motor processes. METHODOLOGY/PRINCIPAL FINDINGS: Participants listened to sentences describing an action that involved an open hand, a closed hand, or no manual action. Each participant was asked to press a button to indicate his/her understanding of the sentence. Each participant was assigned a hand-shape, either closed or open, which had to be used to activate the button. There were two groups (depending on the assigned hand-shape) and three categories (compatible, incompatible and neutral) defined according to the compatibility between the response and the sentence. ACEs were found in both groups. Brain markers of semantic processing exhibited an N400-like component around the Cz electrode position. This component distinguishes between compatible and incompatible, with a greater negative deflection for incompatible. Motor response elicited a motor potential (MP) and a re-afferent potential (RAP), which are both enhanced in the compatible condition. CONCLUSIONS/SIGNIFICANCE: The present findings provide the first ACE cortical measurements of semantic processing and the motor response. N400-like effects suggest that incompatibility with motor processes interferes in sentence comprehension in a semantic fashion. Modulation of motor potentials (MP and RAP) revealed a multimodal semantic facilitation of the motor response. Both results provide neural evidence of an action-sentence bidirectional relationship. Our results suggest that ACE is not an epiphenomenal post-sentence comprehension process. 
In contrast, motor-language integration occurring at verb onset supports a genuine and ongoing brain motor-language interaction.