Augmented Creativity: Leveraging Natural Language Processing for Creative Writing
Recent advances in artificial intelligence have moved natural language processing (NLP) beyond mere grammar and spell-checking. One new application is "machine-in-the-loop" creative writing, in which the system suggests new content to inspire writers. To explore this strategy, this study provides a model for adoption in creative writing courses in higher education. An NLP application was built with Python and spaCy and deployed via Streamlit. The tool let students check whether their grammar aligned with the principles and techniques taught in class, both to deepen their understanding of the grammatical aspects of their writing and to improve their creativity as writers. The study seeks to determine the efficacy of a new proprietary NLP application in improving students' understanding of grammar and their creativity as writers. Participants were assessed through surveys and open-ended questions. Findings indicate that participants agreed the algorithm helped them better understand grammar, but they were less receptive to its assistance with creativity. Notably, the algorithm's suggestions did not necessarily improve the written artifacts submitted in the study. Results indicate that students enjoy using NLP as part of the creative writing process, but largely, as with other language processing tools, to assist with grammar and syntax.
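The study's spaCy pipeline is proprietary and not reproduced here. As a rough illustration of the kind of machine-in-the-loop grammar feedback described, the following pure-Python sketch flags likely passive-voice phrases; the regex heuristic, function name, and sample sentences are all invented for this sketch:

```python
import re

# Hypothetical stand-in for the study's proprietary spaCy pipeline:
# flag likely passive-voice constructions ("was written", "is seen")
# so a writer can compare them against principles taught in class.
PASSIVE = re.compile(r"\b(?:is|are|was|were|been|being)\s+\w+(?:ed|en)\b",
                     re.IGNORECASE)

def passive_hints(text: str) -> list[str]:
    """Return each likely passive-voice phrase found in the text."""
    return PASSIVE.findall(text)

hints = passive_hints("The story was written quickly. She writes daily.")
```

A spaCy-based version would replace the regex with part-of-speech and dependency information from a trained pipeline, which is far more robust than this surface heuristic.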
Almost Every Simply Typed Lambda-Term Has a Long Beta-Reduction Sequence
It is well known that the length of a beta-reduction sequence of a simply
typed lambda-term of order k can be huge; it is as large as k-fold exponential
in the size of the lambda-term in the worst case. We consider the following
relevant question about quantitative properties, instead of the worst case: how
many simply typed lambda-terms have very long reduction sequences? We provide a
partial answer to this question, by showing that asymptotically almost every
simply typed lambda-term of order k has a reduction sequence as long as
(k-1)-fold exponential in the term size, under the assumption that the arity of
functions and the number of variables that may occur in every subterm are
bounded above by a constant. To prove it, we have extended the infinite monkey
theorem for strings to a parametrized one for regular tree languages, which may
be of independent interest. The work has been motivated by quantitative
analysis of the complexity of higher-order model checking.
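To get a feel for the bound, the k-fold exponential tower can be computed directly. This is a hedged illustration of the growth rate only, not of the proof technique; the base 2 and the function name are chosen for illustration:

```python
def k_fold_exp(k: int, n: int) -> int:
    """k-fold exponential in n: k=0 gives n, k=1 gives 2**n, and so on."""
    return n if k == 0 else 2 ** k_fold_exp(k - 1, n)

# Growth of a (k-1)-fold exponential lower bound in the term size n:
# already at k=3, n=2 the value is 2**16 = 65536, and one more level,
# 2**65536, has tens of thousands of decimal digits.
```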
Multi-dimensional Type Theory: Rules, Categories, and Combinators for Syntax and Semantics
We investigate the possibility of modelling the syntax and semantics of
natural language by constraints, or rules, imposed by the multi-dimensional
type theory Nabla. The only multiplicity we explicitly consider is two, namely
one dimension for the syntax and one dimension for the semantics, but the
general perspective is important. For example, issues of pragmatics could be
handled as additional dimensions.
One of the main problems addressed is the rather complicated repertoire of
operations that exists besides the notion of categories in traditional Montague
grammar. For the syntax we use a categorial grammar along the lines of Lambek.
For the semantics we use so-called lexical and logical combinators inspired by
work in natural logic. Nabla provides a concise interpretation and a sequent
calculus as the basis for implementations.
Comment: 20 pages
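Nabla itself is not sketched here, but the syntactic dimension's Lambek-style categorial grammar can be illustrated with a minimal reducer using forward and backward application; the lexicon, category encoding, and function names are all invented for this sketch:

```python
# Categories: a plain string is atomic; ("/", X, Y) encodes X/Y (seeks a Y
# on its right, yields X); ("\\", Y, X) encodes Y\X (seeks a Y on its left).
NP, S = "np", "s"

LEXICON = {
    "John": NP,
    "Mary": NP,
    "likes": ("/", ("\\", NP, S), NP),  # (np\s)/np: a transitive verb
}

def reduce_once(cats):
    """Apply one forward or backward application step, if any is possible."""
    for i in range(len(cats) - 1):
        a, b = cats[i], cats[i + 1]
        if isinstance(a, tuple) and a[0] == "/" and a[2] == b:   # X/Y, Y => X
            return cats[:i] + [a[1]] + cats[i + 2:]
        if isinstance(b, tuple) and b[0] == "\\" and b[1] == a:  # Y, Y\X => X
            return cats[:i] + [b[2]] + cats[i + 2:]
    return None

def derives_s(words):
    """True if the word sequence reduces to the sentence category s."""
    cats = [LEXICON[w] for w in words]
    while len(cats) > 1:
        cats = reduce_once(cats)
        if cats is None:
            return False
    return cats[0] == S
```

"John likes Mary" reduces np, (np\s)/np, np to np, np\s and then to s; a sequence like "John Mary" gets stuck and is rejected.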
Towards a Robuster Interpretive Parsing
The input data to grammar learning algorithms often consist of overt forms that do not contain full structural descriptions. This lack of information may contribute to the failure of learning. Past work on Optimality Theory introduced Robust Interpretive Parsing (RIP) as a partial solution to this problem. We generalize RIP and suggest replacing the winner candidate with a weighted mean violation of the potential winner candidates. A Boltzmann distribution is introduced on the winner set, and the distribution's parameter is gradually decreased. Finally, we show that GRIP, the Generalized Robust Interpretive Parsing Algorithm, significantly improves the learning success rate in a model with standard constraints for metrical stress assignment.
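A sketch of the weighted-mean idea, not the paper's implementation: candidates are weighted by a Boltzmann distribution over their violation scores, and the mean profile replaces the single winner. The energy definition as a plain violation sum and all names here are assumptions. Lowering the temperature T concentrates the weight on the best candidate, recovering a single-winner choice in the limit:

```python
import math

def boltzmann_weights(scores, T):
    """P(c) proportional to exp(-score/T): fewer violations, more weight."""
    ws = [math.exp(-s / T) for s in scores]
    z = sum(ws)
    return [w / z for w in ws]

def mean_violation_profile(candidates, T):
    """Weighted mean violation vector over the potential-winner set."""
    scores = [sum(v) for v in candidates]   # assumed: unweighted violation sum
    probs = boltzmann_weights(scores, T)
    width = len(candidates[0])
    return [sum(p * c[i] for p, c in zip(probs, candidates))
            for i in range(width)]
```

At high T all potential winners contribute equally; as T is gradually decreased, the profile converges to the violations of the lowest-scoring candidate.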
Utterance Selection Model of Language Change
We present a mathematical formulation of a theory of language change. The
theory is evolutionary in nature and has close analogies with theories of
population genetics. The mathematical structure we construct similarly has
correspondences with the Fisher-Wright model of population genetics, but there
are significant differences. The continuous time formulation of the model is
expressed in terms of a Fokker-Planck equation. This equation is exactly
soluble in the case of a single speaker and can be investigated analytically in
the case of multiple speakers who communicate equally with all other speakers
and give their utterances equal weight. Whilst the stationary properties of
this system have much in common with the single-speaker case, time-dependent
properties are richer. In the particular case where linguistic forms can become
extinct, we find that the presence of many speakers causes a two-stage
relaxation, the first being a common marginal distribution that persists for a
long time as a consequence of ultimate extinction being due to rare
fluctuations.
Comment: 21 pages, 17 figures
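The Fokker-Planck formulation mentioned above has, in the single-variable case, the generic form below. This is a standard template only; the paper's specific drift A(x) and diffusion B(x) follow from its utterance selection dynamics and are not reproduced here:

```latex
\frac{\partial P(x,t)}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[A(x)\,P(x,t)\bigr]
  + \frac{1}{2}\frac{\partial^2}{\partial x^2}\bigl[B(x)\,P(x,t)\bigr]
```

where x would stand for the frequency of a linguistic variant and P(x,t) for its probability density at time t.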
Search and Result Presentation in Scientific Workflow Repositories
We study the problem of searching a repository of complex hierarchical
workflows whose component modules, both composite and atomic, have been
annotated with keywords. Since keyword search does not use the graph structure
of a workflow, we develop a model of workflows using context-free bag grammars.
We then give efficient polynomial-time algorithms that, given a workflow and a
keyword query, determine whether some execution of the workflow matches the
query. Based on these algorithms we develop a search and ranking solution that
efficiently retrieves the top-k grammars from a repository. Finally, we propose
a novel result presentation method for grammars matching a keyword query, based
on representative parse-trees. The effectiveness of our approach is validated
through an extensive experimental evaluation.
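The "some execution matches the query" semantics can be illustrated with a brute-force toy; the workflow, keywords, and names below are invented, and unlike the paper's polynomial-time algorithms this sketch is exponential in the query size and ignores ranking:

```python
# Toy model: each module maps to (its own keywords, a list of alternative
# expansions), where an expansion is a bag of submodules and every module
# has at least one expansion (possibly empty). The workflow is acyclic.
WORKFLOW = {
    "pipeline": ({"genomics"}, [["align", "analyze"]]),
    "align":    ({"bwa"}, [[]]),
    "analyze":  ({"variant-calling"}, [["plot"], []]),  # plotting is optional
    "plot":     ({"visualization"}, [[]]),
}

def achievable(module, query, wf=WORKFLOW):
    """All subsets of `query` coverable by some execution of `module`."""
    own, expansions = wf[module]
    base = frozenset(own) & query
    out = set()
    for exp in expansions:
        partials = {base}
        for sub in exp:
            partials = {p | s for p in partials
                        for s in achievable(sub, query, wf)}
        out |= partials
    return out

def some_execution_matches(root, keywords, wf=WORKFLOW):
    """True if SOME execution of `root` covers every query keyword."""
    q = frozenset(keywords)
    return q in achievable(root, q, wf)
```

For instance, one execution of "pipeline" covers both "genomics" and "visualization" (by choosing the expansion of "analyze" that includes "plot"), while no execution covers an absent keyword.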