The Benefits of Label-Description Training for Zero-Shot Text Classification
Pretrained language models have improved zero-shot text classification by
allowing the transfer of semantic knowledge from the training data in order to
classify among specific label sets in downstream tasks. We propose a simple way
to further improve zero-shot accuracies with minimal effort. We curate small
finetuning datasets intended to describe the labels for a task. Unlike typical
finetuning data, which has texts annotated with labels, our data simply
describes the labels in language, e.g., using a few related terms,
dictionary/encyclopedia entries, and short templates. Across a range of topic
and sentiment datasets, our method is more accurate than zero-shot by 17-19%
absolute. It is also more robust to choices required for zero-shot
classification, such as patterns for prompting the model to classify and
mappings from labels to tokens in the model's vocabulary. Furthermore, since
our data merely describes the labels but does not use input texts, finetuning
on it yields a model that performs strongly on multiple text domains for a
given label set, even improving over few-shot out-of-domain classification in
multiple settings.
Comment: Accepted at the EMNLP 2023 main conference (long paper).
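The abstract above describes finetuning on data that merely describes each label, using related terms, dictionary-style entries, and short templates. A minimal sketch of how such a label-description dataset might be assembled is below; the label names, terms, and definitions are invented for illustration and are not the paper's actual data.

```python
# Sketch: turn per-label descriptions into (text, label) finetuning pairs,
# mirroring the three description types the abstract mentions. All labels
# and descriptions here are hypothetical placeholders.

def build_label_description_data(label_info):
    """Assemble (text, label) pairs from related terms, a definition,
    and short templates for each label."""
    examples = []
    for label, info in label_info.items():
        for term in info["related_terms"]:          # a few related terms
            examples.append((term, label))
        examples.append((info["definition"], label))  # dictionary-style entry
        for template in info["templates"]:          # short templates
            examples.append((template.format(label=label), label))
    return examples

label_info = {
    "sports": {
        "related_terms": ["football", "tournament", "athlete"],
        "definition": "An activity involving physical exertion and skill "
                      "in which individuals or teams compete.",
        "templates": ["This text is about {label}."],
    },
    "politics": {
        "related_terms": ["election", "parliament", "policy"],
        "definition": "The activities associated with the governance of "
                      "a country or area.",
        "templates": ["This text is about {label}."],
    },
}

data = build_label_description_data(label_info)
```

Because no input texts are used, the same finetuning set can serve any domain that shares the label set, which is the source of the domain robustness the abstract reports.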
POS Tagging and its Applications for Mathematics
Content analysis of scientific publications is a nontrivial task, but a
useful and important one for scientific information services. In the Gutenberg
era it was a domain of human experts; in the digital age many machine-based
methods, e.g., graph analysis tools and machine-learning techniques, have been
developed for it. Natural Language Processing (NLP) is a powerful
machine-learning approach to semiautomatic speech and language processing,
which is also applicable to mathematics. The well established methods of NLP
have to be adjusted for the special needs of mathematics, in particular for
handling mathematical formulae. We demonstrate a mathematics-aware
part-of-speech tagger and give a short overview of our adaptation of NLP
methods for mathematical publications. We show the use of the tools developed
for key phrase extraction and classification in the database zbMATH.
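One concrete adjustment a mathematics-aware tagger needs is to recognize formulae before word-level tagging, so they do not confuse a model trained on ordinary prose. The sketch below illustrates that idea with a toy rule-based tagger; the "MATH" tag, the regex, and the tiny lexicon are illustrative assumptions, not zbMATH's actual implementation.

```python
import re

# Sketch: detect formula tokens first and give them a dedicated tag,
# then fall back to a (toy) word-level lexicon for everything else.

FORMULA = re.compile(r"^\$.*\$$")  # matches tokens wrapped in $...$
LEXICON = {"the": "DT", "function": "NN", "is": "VBZ", "continuous": "JJ"}

def tag(tokens):
    tagged = []
    for tok in tokens:
        if FORMULA.match(tok):
            tagged.append((tok, "MATH"))  # formula gets its own tag
        else:
            tagged.append((tok, LEXICON.get(tok, "NN")))  # noun fallback
    return tagged

print(tag(["the", "function", "$f(x)=x^2$", "is", "continuous"]))
```

In a real pipeline the fallback would be a statistical tagger rather than a lexicon lookup, but the formula-first dispatch is the essential adaptation.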
The Evolution of Wikipedia's Norm Network
Social norms have traditionally been difficult to quantify. In any particular
society, their sheer number and complex interdependencies often limit a
system-level analysis. One exception is that of the network of norms that
sustain the online Wikipedia community. We study the fifteen-year evolution of
this network using the interconnected set of pages that establish, describe,
and interpret the community's norms. Despite Wikipedia's reputation for
\textit{ad hoc} governance, we find that its normative evolution is highly
conservative. The earliest users create norms that both dominate the network
and persist over time. These core norms govern both content and interpersonal
interactions using abstract principles such as neutrality, verifiability, and
assume good faith. As the network grows, norm neighborhoods decouple
topologically from each other, while increasing in semantic coherence. Taken
together, these results suggest that the evolution of Wikipedia's norm network
is akin to bureaucratic systems that predate the information age.
Comment: 22 pages, 9 figures. Matches published version. Data available at
http://bit.ly/wiki_nor
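The analysis above treats norm pages as nodes and the hyperlinks among them as edges, then asks whether the earliest-created pages come to dominate the network. A toy version of that check is sketched below; the page names, years, and links are invented for illustration and are not the study's data.

```python
# Sketch: a directed link graph over norm pages with creation years.
# We test whether the oldest page also has the highest in-degree,
# i.e., whether early norms "dominate the network" as the abstract finds.

links = {
    "Neutral point of view": ["Verifiability"],
    "Verifiability": ["Neutral point of view"],
    "Assume good faith": ["Neutral point of view", "Civility"],
    "Civility": ["Assume good faith", "Neutral point of view"],
    "Notability": ["Verifiability", "Neutral point of view"],
}
created = {"Neutral point of view": 2001, "Verifiability": 2003,
           "Assume good faith": 2004, "Civility": 2004, "Notability": 2006}

def in_degree(links):
    """Count incoming links for every page in the graph."""
    deg = {page: 0 for page in links}
    for targets in links.values():
        for t in targets:
            deg[t] = deg.get(t, 0) + 1
    return deg

deg = in_degree(links)
oldest = min(created, key=created.get)
```

In this toy graph the oldest page is also the most linked-to, the pattern the study reports at scale.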
Encyclopedia of software components
Intelligent browsing through a collection of reusable software components is facilitated on a computer with a video monitor and a user input interface, such as a keyboard or a mouse, for transmitting user selections. The system presents a picture of encyclopedia volumes with visible labels referring to types of software, following a metaphor in which each volume contains a page listing the general topics under that volume's software type, plus pages listing the software components for each of those topics. In response to an initial user selection of a volume, the picture is altered to open that volume and display its page of general topics. In response to a next selection of a topic, the picture is altered to display the page listing the software components under that topic. Finally, in response to a further selection of a component, the system presents a set of informative plates depicting different types of information about that component.
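The encyclopedia metaphor is essentially a three-level navigation over a nested structure: volumes, topic pages, and component entries with their informative plates. A minimal sketch of that structure and the three selection steps follows; all names are hypothetical placeholders, not the patent's actual catalog.

```python
# Sketch: the encyclopedia metaphor as nested dictionaries.
# volume (software type) -> general topic -> component -> plates

encyclopedia = {
    "Data Structures": {                      # a labeled volume
        "Containers": {                       # a general-topic page
            "RingBuffer": {                   # a software component
                "plates": ["interface", "usage example", "performance"],
            },
        },
    },
}

def open_volume(enc, volume):
    """Initial selection: open a volume, list its general topics."""
    return list(enc[volume].keys())

def open_topic(enc, volume, topic):
    """Next selection: list the components under one general topic."""
    return list(enc[volume][topic].keys())

def show_plates(enc, volume, topic, component):
    """Further selection: the informative plates for one component."""
    return enc[volume][topic][component]["plates"]
```

Each function corresponds to one of the user selections in the abstract, with the picture-altering UI replaced by a return value.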
Comparing SVM and Naive Bayes classifiers for text categorization with Wikitology as knowledge enrichment
The activity of labeling of documents according to their content is known as
text categorization. Many experiments have been carried out to enhance text
categorization by adding background knowledge to the document using knowledge
repositories such as WordNet, the Open Directory Project (ODP), Wikipedia, and
Wikitology. In our previous work, we carried out intensive experiments,
extracting knowledge from Wikitology and evaluating it with a Support
Vector Machine under 10-fold cross-validation. The results clearly indicate
that Wikitology is far better than the other knowledge bases. In this paper we are
comparing Support Vector Machine (SVM) and Na\"ive Bayes (NB) classifiers under
text enrichment through Wikitology. We validated the results with 10-fold
cross-validation and show that NB yields an improvement of +28.78% over the
baseline, whereas SVM yields an improvement of +6.36%. The Na\"ive Bayes
classifier is therefore the better choice when enrichment through an external
knowledge base is used.
Comment: 5 pages
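The evaluation protocol the abstract relies on has two parts: 10-fold cross-validation and an improvement figure relative to a baseline. Both are sketched below; the fold construction is a plain contiguous split (real experiments typically shuffle or stratify), and the accuracy values in the test are placeholders, not the paper's results.

```python
# Sketch: 10-fold cross-validation indices plus one common definition of
# relative improvement over a baseline score.

def k_fold_indices(n, k=10):
    """Split range(n) into k contiguous folds of near-equal size."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def improvement(score, baseline):
    """Relative improvement over the baseline, in percent."""
    return (score - baseline) / baseline * 100.0
```

In each round, one fold serves as the test set and the remaining nine as training data; the reported score is the average over the ten rounds, and the improvement is computed against the baseline's score.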