Constrained structure of ancient Chinese poetry facilitates speech content grouping
Ancient Chinese poetry is constituted by structured language that deviates from ordinary language usage [1, 2]; its poetic genres impose unique combinatory constraints on linguistic elements [3]. How does the constrained poetic structure facilitate speech segmentation when common linguistic [4, 5, 6, 7, 8] and statistical cues [5, 9] are unreliable to listeners in poems? We generated artificial Jueju, which arguably has the most constrained structure in ancient Chinese poetry, and presented each poem twice as an isochronous sequence of syllables to native Mandarin speakers while conducting magnetoencephalography (MEG) recording. We found that listeners deployed their prior knowledge of Jueju to build the line structure and to establish the conceptual flow of Jueju. For the first time, we observed a phase precession phenomenon indicating predictive processing of speech segmentation: the neural phase advanced faster after listeners acquired knowledge of the incoming speech. The statistical co-occurrence of monosyllabic words in Jueju negatively correlated with speech segmentation, which provides an alternative perspective on how statistical cues facilitate speech segmentation. Our findings suggest that constrained poetic structures serve as a temporal map for listeners to group speech contents and to predict incoming speech signals. Listeners can parse speech streams by using not only grammatical and statistical cues but also their prior knowledge of the form of language.
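The statistical co-occurrence cue discussed in this abstract can be illustrated with forward transitional probabilities between adjacent syllables, a classic statistical-segmentation measure. This is a hedged, minimal sketch with invented toy syllables, not the study's actual analysis pipeline:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward transitional probability P(next | current) for each adjacent
    syllable pair -- a standard statistical cue for speech segmentation."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Toy stream (hypothetical syllables): "chun mian" always co-occurs,
# while "mian" is followed by varying syllables.
stream = ["chun", "mian", "bu", "jue", "chun", "mian", "xiao",
          "chun", "mian", "bu", "wen"]
tp = transitional_probabilities(stream)
print(tp[("chun", "mian")])  # 1.0 -- strong within-word cohesion
```

High transitional probability suggests a within-word syllable pair; dips in probability are candidate word boundaries.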
Txt2vz: a new tool for generating graph clouds
We present txt2vz (txt2vz.appspot.com), a new tool for automatically generating a visual summary of unstructured text data found in documents or web sites. The main purpose of the tool is to give the user information about the text so that they can quickly get a good idea about the topics covered. Txt2vz is able to identify important concepts from unstructured text data and to reveal relationships between those concepts. We discuss other approaches to generating diagrams from text and highlight the differences between tag clouds, word clouds, tree clouds and graph clouds
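The difference between a word cloud (isolated term weights) and a graph cloud (terms plus relationships) can be sketched with a simple co-occurrence graph. This is a toy illustration under assumed heuristics (frequency for node weight, sentence-level co-occurrence for edges), not txt2vz's actual algorithm:

```python
import itertools
import re
from collections import Counter

def graph_cloud(text, top_n=5):
    """Toy graph-cloud builder: node weight = term frequency,
    edge weight = number of sentences in which two terms co-occur."""
    sentences = [re.findall(r"[a-z]+", s.lower()) for s in re.split(r"[.!?]", text)]
    freq = Counter(w for s in sentences for w in s if len(w) > 3)
    nodes = {w for w, _ in freq.most_common(top_n)}
    edges = Counter()
    for s in sentences:
        for a, b in itertools.combinations(sorted(set(s) & nodes), 2):
            edges[(a, b)] += 1
    return {w: freq[w] for w in nodes}, dict(edges)

nodes, edges = graph_cloud(
    "Graph clouds link concepts. Word clouds show concepts without links. "
    "Graph clouds therefore reveal structure between concepts.")
```

A word cloud would keep only `nodes`; the `edges` dictionary is what lets a graph cloud reveal relationships between concepts.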
Designing a training tool for imaging mental models
The training process can be conceptualized as the student acquiring an evolutionary sequence of classification-problem-solving mental models. For example, a physician learns (1) classification systems for patient symptoms, diagnostic procedures, diseases, and therapeutic interventions and (2) interrelationships among these classifications (e.g., how to use diagnostic procedures to collect data about a patient's symptoms in order to identify the disease so that therapeutic measures can be taken). This project developed functional specifications for a computer-based tool, Mental Link, that allows the evaluative imaging of such mental models. The fundamental design approach underlying this representational medium is traversal of virtual cognition space. Typically intangible cognitive entities and links among them are visible as a three-dimensional web that represents a knowledge structure. The tool has a high degree of flexibility and customizability to allow extension to other types of uses, such as a front-end to an intelligent tutoring system, knowledge base, hypermedia system, or semantic network.
Semantic spaces
Any natural language can be considered a tool for producing large databases (consisting of texts, written or discursive). For its description, this tool in turn requires other large databases (dictionaries, grammars, etc.). Nowadays, the notion of a database is associated with computer processing and computer memory. However, a natural language also resides in human brains and functions in human communication, from interpersonal to intergenerational. In this survey/research paper we discuss mathematical, in particular geometric, constructions that help to bridge these two worlds. In particular, we consider the Vector Space Model of semantics based on frequency matrices, as used in Natural Language Processing. We investigate the underlying geometries, formulated in terms of Grassmannians, projective spaces, and flag varieties. We formulate the relation between vector space models and semantic spaces based on semic axes in terms of projectability of subvarieties in Grassmannians and projective spaces. We interpret Latent Semantics as a geometric flow on Grassmannians. We also discuss how to formulate Gärdenfors's notion of "meeting of minds" in our geometric setting.
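The frequency-matrix Vector Space Model that this survey takes as its starting point can be sketched concretely: each document becomes a column of term counts, and semantic similarity is the cosine of the angle between column vectors. This is a minimal illustration of the standard VSM, not the paper's geometric machinery, and the toy documents are invented:

```python
import math

def term_document_matrix(docs):
    """Rows = terms, columns = documents; entries = raw term frequencies,
    the basic frequency matrix of the Vector Space Model."""
    vocab = sorted({w for d in docs for w in d.split()})
    return vocab, [[d.split().count(t) for d in docs] for t in vocab]

def cosine(u, v):
    """Cosine similarity between two document vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

docs = ["the cat sat", "the cat ran", "stocks fell fast"]
vocab, M = term_document_matrix(docs)
col = lambda j: [row[j] for row in M]  # document vectors are columns
print(round(cosine(col(0), col(1)), 2))  # 0.67 -- shared "the cat"
```

Latent Semantic Analysis, which the paper reinterprets geometrically, proceeds by taking a truncated SVD of exactly this kind of frequency matrix.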
Structural Stability of Lexical Semantic Spaces: Nouns in Chinese and French
Many studies in the neurosciences have dealt with the semantic processing of words or categories, but few have looked into the semantic organization of the lexicon thought of as a system. The present study was designed to move towards this goal, using both electrophysiological and corpus-based data, and to compare two languages from different families: French and Mandarin Chinese. We conducted an EEG-based semantic-decision experiment using 240 words from eight categories (clothing, parts of a house, tools, vehicles, fruits/vegetables, animals, body parts, and people) as the material. A data-analysis method (correspondence analysis) commonly used in computational linguistics was applied to the electrophysiological signals. The present cross-language comparison indicated stability for the following aspects of the languages' lexical semantic organizations: (1) the living/nonliving distinction, which showed up as a main factor for both languages; (2) greater dispersion of the living categories as compared to the nonliving ones; (3) prototypicality of the "animals" category within the living categories, and with respect to the living/nonliving distinction; and (4) the existence of a person-centered reference gradient. Our electrophysiological analysis indicated stability of the networks at play in each of these processes. Stability was also observed in the data taken from word usage in the languages (synonyms and associated words obtained from textual corpora).
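The correspondence analysis mentioned in this abstract starts from the standardized-residual matrix of a contingency table; the SVD of that matrix then yields the CA axes. The standardization step can be sketched in a few lines. This is a generic illustration with an invented toy table, not the study's EEG pipeline:

```python
import math

def ca_residuals(table):
    """Standardized residuals (observed - expected) / sqrt(expected) of a
    contingency table -- the matrix whose SVD yields the axes of
    correspondence analysis. A minimal sketch of the standardization step;
    full CA then decomposes this matrix."""
    total = sum(sum(row) for row in table)
    rows = [sum(row) for row in table]
    cols = [sum(col) for col in zip(*table)]
    return [[(table[i][j] - rows[i] * cols[j] / total)
             / math.sqrt(rows[i] * cols[j] / total)
             for j in range(len(cols))] for i in range(len(rows))]

# Toy 2x2 table (hypothetical counts, e.g. category x response type).
R = ca_residuals([[20, 5], [5, 20]])
```

Large positive residuals mark cells observed more often than independence predicts, which is what lets CA place associated rows and columns near each other.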
Conspiracy in the Time of Corona: Automatic detection of Emerging Covid-19 Conspiracy Theories in Social Media and the News
Rumors and conspiracy theories thrive in environments of low confidence and low trust. Consequently, it is not surprising that ones related to the Covid-19 pandemic are proliferating given the lack of scientific consensus on the virus's spread and containment, or on the long-term social and economic ramifications of the pandemic. Among the stories currently circulating are ones suggesting that the 5G telecommunication network activates the virus, that the pandemic is a hoax perpetrated by a global cabal, that the virus is a bio-weapon released deliberately by the Chinese, or that Bill Gates is using it as cover to launch a broad vaccination program to facilitate a global surveillance regime. While some may be quick to dismiss these stories as having little impact on real-world behavior, recent events, including the destruction of cell phone towers, racially fueled attacks against Asian Americans, demonstrations espousing resistance to public health orders, and wide-scale defiance of scientifically sound public mandates such as those to wear masks and practice social distancing, countermand such conclusions. Inspired by narrative theory, we crawl social media sites and news reports and, through the application of automated machine-learning methods, discover the underlying narrative frameworks supporting the generation of rumors and conspiracy theories. We show how the various narrative frameworks fueling these stories rely on the alignment of otherwise disparate domains of knowledge, and consider how they attach to the broader reporting on the pandemic. These alignments and attachments, which can be monitored in near real-time, may be useful for identifying areas in the news that are particularly vulnerable to reinterpretation by conspiracy theorists. Understanding the dynamics of storytelling on social media and the narrative frameworks that provide the generative basis for these stories may also be helpful for devising methods to disrupt their spread.
Learning Language from a Large (Unannotated) Corpus
A novel approach to the fully automated, unsupervised extraction of dependency grammars and associated syntax-to-semantic-relationship mappings from large text corpora is described. The suggested approach builds on the authors' prior work with the Link Grammar, RelEx and OpenCog systems, as well as on a number of prior papers and approaches from the statistical language learning literature. If successful, this approach would enable the mining of all the information needed to power a natural language comprehension and generation system directly from a large, unannotated corpus.
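Unsupervised grammar induction of the kind this proposal describes typically begins with a statistic over word pairs, such as pointwise mutual information, to decide which words should be linked. This is a hedged sketch of that first step with an invented toy sentence, not the OpenCog/Link Grammar pipeline itself:

```python
import math
from collections import Counter

def pmi_pairs(tokens):
    """Pointwise mutual information of adjacent word pairs: how much more
    often two words co-occur than chance predicts. High-PMI pairs are the
    kind of candidates a grammar learner proposes as dependency links."""
    n = len(tokens)
    uni = Counter(tokens)
    bi = Counter(zip(tokens, tokens[1:]))
    return {p: math.log2((c / (n - 1)) / ((uni[p[0]] / n) * (uni[p[1]] / n)))
            for p, c in bi.items()}

tokens = "the cat sat on the mat".split()
pmi = pmi_pairs(tokens)
```

Pairs with PMI well above zero co-occur more often than their individual frequencies would predict, which is the raw signal a statistical learner turns into grammatical structure.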