Čech Closure Spaces: A Unified Framework for Discrete Homotopy
Motivated by constructions in topological data analysis and algebraic
combinatorics, we study homotopy theory on the category of Čech closure
spaces, the category whose objects are sets endowed with a Čech closure
operator and whose morphisms are the continuous maps between them. We introduce
new classes of Čech closure structures on metric spaces, graphs, and simplicial
complexes, and we show how each of these cases gives rise to an interesting
homotopy theory. In particular, we show that there exists a natural family of
Čech closure structures on metric spaces which produces a non-trivial homotopy
theory for finite metric spaces, i.e. point clouds, the spaces of interest in
topological data analysis. We then give a Čech closure structure to graphs and
simplicial complexes which may be used to construct a new combinatorial (as
opposed to topological) homotopy theory for each skeleton of those spaces. We
further show that there is a Seifert-van Kampen theorem for closure spaces, a
well-defined notion of persistent homotopy, and an associated interleaving
distance. As an illustration of the difference with the topological setting, we
calculate the fundamental group for the circle, `circular graphs', and the
wedge of circles endowed with different closure structures. Finally, we produce
a continuous map from the topological circle to `circular graphs' which, given
the appropriate closure structures, induces an isomorphism on the fundamental
groups.
Comment: Incorporated referee comments, 41 pages.
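Since the abstract does not restate it, the standard definition may help: a Čech closure operator on a set $X$ is a map $c \colon \mathcal{P}(X) \to \mathcal{P}(X)$ satisfying

```latex
c(\emptyset) = \emptyset, \qquad
A \subseteq c(A), \qquad
c(A \cup B) = c(A) \cup c(B),
```

and a map $f \colon (X, c) \to (Y, d)$ is continuous when $f(c(A)) \subseteq d(f(A))$ for all $A \subseteq X$. Unlike a Kuratowski (topological) closure, idempotence $c(c(A)) = c(A)$ is not required; dropping it is what admits the non-trivial closure structures on finite metric spaces and graphs that the abstract describes.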
A Topological Approach to Spectral Clustering
We propose two related unsupervised clustering algorithms which take as input
data assumed to be sampled from a uniform distribution supported on a metric
space, and output a clustering of the data based on the selection of a
topological model for its connected components. Both algorithms work
by selecting a graph on the samples from a natural one-parameter family of
graphs, using a geometric criterion in the first case and an information
theoretic criterion in the second. The estimated connected components are
identified with the kernel of the associated graph Laplacian, which allows
the algorithm to work without requiring the number of expected clusters or
other auxiliary data as input.
Comment: 21 pages.
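The identification at the heart of both algorithms — the number of connected components of a graph equals the dimension of the kernel of its graph Laplacian — can be sketched as follows. The fixed eps-neighborhood graph and the eigenvalue threshold below are illustrative assumptions; the paper instead selects the graph from a one-parameter family by a geometric or information-theoretic criterion.

```python
import numpy as np

def laplacian_kernel_clusters(points, eps):
    """Cluster points via the kernel of the graph Laplacian of an
    eps-neighborhood graph (illustrative sketch, not the paper's
    graph-selection procedure)."""
    n = len(points)
    # Adjacency of the eps-neighborhood graph (no self-loops).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = ((d <= eps) & ~np.eye(n, dtype=bool)).astype(float)
    L = np.diag(A.sum(axis=1)) - A  # unnormalized graph Laplacian
    # dim ker(L) = number of connected components; kernel vectors are
    # constant on each component.
    w, v = np.linalg.eigh(L)
    k = int(np.sum(w < 1e-8))  # multiplicity of eigenvalue 0
    # Rows of the kernel eigenvectors agree exactly within a component,
    # so grouping (rounded) rows recovers the component labels.
    kernel = np.round(v[:, :k], 6)
    _, labels = np.unique(kernel, axis=0, return_inverse=True)
    return k, labels
```

Because the kernel is spanned by the indicator vectors of the components, no cluster count needs to be supplied — mirroring the abstract's point that the algorithms require no expected number of clusters as input.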
Coisotropic Hofer-Zehnder capacities and non-squeezing for relative embeddings
We introduce the notion of a symplectic capacity relative to a coisotropic
submanifold of a symplectic manifold, and we construct two examples of such
capacities through modifications of the Hofer-Zehnder capacity. As a
consequence, we obtain a non-squeezing theorem for symplectic embeddings
relative to coisotropic constraints and existence results for leafwise chords
on energy surfaces.
Comment: 33 pages, 4 figures; further corrections thanks to comments from the referee; accepted for publication in Journal of Symplectic Geometry.
Referenceless Quality Estimation for Natural Language Generation
Traditional automatic evaluation measures for natural language generation
(NLG) use costly human-authored references to estimate the quality of a system
output. In this paper, we propose a referenceless quality estimation (QE)
approach based on recurrent neural networks, which predicts a quality score for
an NLG system output by comparing it to the source meaning representation only.
Our method outperforms traditional metrics and a constant baseline in most
respects; we also show that synthetic data helps to increase correlation
results by 21% compared to the base system. Our results are comparable to
results obtained in similar QE tasks despite the more challenging setting.
Comment: Accepted as a regular paper to the 1st Workshop on Learning to Generate Natural Language (LGNL), Sydney, 10 August 2017.
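To make the referenceless setting concrete — scoring a system output against its meaning representation (MR) alone, with no human-authored reference — here is a deliberately crude stand-in: count how many MR slot values surface verbatim in the output. This is a hypothetical illustration of the task setup, not the paper's recurrent-network model.

```python
def slot_coverage_score(mr_slots, output_text):
    """Toy referenceless quality score: the fraction of MR slot values
    that appear verbatim in the system output. Illustrative only; the
    paper learns this judgement with a recurrent neural network."""
    text = output_text.lower()
    hits = sum(1 for value in mr_slots.values() if str(value).lower() in text)
    return hits / max(len(mr_slots), 1)
```

A learned model can of course reward paraphrases ("Italian cuisine" for `food=Italian`) that this string match misses, which is one motivation for the RNN-based approach.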
Natural Language Generation enhances human decision-making with uncertain information
Decision-making is often dependent on uncertain data, e.g. data associated
with confidence scores or probabilities. We present a comparison of different
information presentations for uncertain data and, for the first time, measure
their effects on human decision-making. We show that the use of Natural
Language Generation (NLG) improves decision-making under uncertainty, compared
to state-of-the-art graphical-based representation methods. In a task-based
study with 442 adults, we found that presentations using NLG lead to 24% better
decision-making on average than the graphical presentations, and to 44% better
decision-making when NLG is combined with graphics. We also show that women
achieve significantly better results when presented with NLG output (an 87%
increase on average compared to graphical presentations).
Comment: 54th Annual Meeting of the Association for Computational Linguistics (ACL), Berlin, 2016.
RankME: Reliable Human Ratings for Natural Language Generation
Human evaluation for natural language generation (NLG) often suffers from
inconsistent user ratings. While previous research tends to attribute this
problem to individual user preferences, we show that the quality of human
judgements can also be improved by experimental design. We present a novel
rank-based magnitude estimation method (RankME), which combines the use of
continuous scales and relative assessments. We show that RankME significantly
improves the reliability and consistency of human ratings compared to
traditional evaluation methods. In addition, we show that it is possible to
evaluate NLG systems according to multiple, distinct criteria, which is
important for error analysis. Finally, we demonstrate that RankME, in
combination with Bayesian estimation of system quality, is a cost-effective
alternative for ranking multiple NLG systems.
Comment: Accepted to NAACL 2018 (The 2018 Conference of the North American Chapter of the Association for Computational Linguistics).
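The two ingredients RankME combines — continuous-scale magnitude estimation and relative (rank-based) assessment — can be illustrated with a minimal sketch. Normalizing each rater's scores by a fixed reference stimulus is generic magnitude-estimation practice assumed here for illustration, not the paper's exact protocol.

```python
def normalize_magnitude_estimates(ratings, reference=100.0):
    """Scale each rater's raw magnitude estimates relative to a fixed
    reference stimulus scored `reference` (generic ME normalization,
    assumed for illustration)."""
    return {rater: [r / reference for r in scores]
            for rater, scores in ratings.items()}

def to_ranks(scores):
    """Convert one rater's scores for competing outputs into ranks
    (1 = best), i.e. the relative assessment RankME builds on."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks
```

Working with ranks rather than raw scores sidesteps differences in how individual raters use the scale, which is the consistency problem the abstract describes.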
Crowd-sourcing NLG Data: Pictures Elicit Better Data
Recent advances in corpus-based Natural Language Generation (NLG) hold the
promise of being easily portable across domains, but require costly training
data, consisting of meaning representations (MRs) paired with Natural Language
(NL) utterances. In this work, we propose a novel framework for crowdsourcing
high quality NLG training data, using automatic quality control measures and
evaluating different MRs with which to elicit data. We show that pictorial MRs
result in better NL data being collected than logic-based MRs: utterances
elicited by pictorial MRs are judged as significantly more natural, more
informative, and better phrased, with a significant increase in average quality
ratings (around 0.5 points on a 6-point scale), compared to using the logical
MRs. As the MR becomes more complex, the benefits of pictorial stimuli
increase. The collected data will be released as part of this submission.
Comment: The 9th International Natural Language Generation Conference (INLG), 2016. 10 pages, 2 figures, 3 tables.
Findings of the E2E NLG Challenge
This paper summarises the experimental setup and results of the first shared
task on end-to-end (E2E) natural language generation (NLG) in spoken dialogue
systems. Recent end-to-end generation systems are promising since they reduce
the need for data annotation. However, they are currently limited to small,
delexicalised datasets. The E2E NLG shared task aims to assess whether these
novel approaches can generate better-quality output by learning from a dataset
containing higher lexical richness, syntactic complexity and diverse discourse
phenomena. We compare 62 systems submitted by 17 institutions, covering a wide
range of approaches, including machine learning architectures -- with the
majority implementing sequence-to-sequence models (seq2seq) -- as well as
systems based on grammatical rules and templates.
Comment: Accepted to INLG 2018.