317 research outputs found
The entropy of Łukasiewicz-languages
The paper presents an elementary approach for the calculation of the entropy
of a class of languages. This approach is based on the consideration of
roots of a real polynomial and is also suitable for calculating the
Bernoulli measure. The class of languages we consider here is a
generalisation of the Łukasiewicz language.
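The abstract does not spell out the computation, but for the classical Łukasiewicz language generated by S → a | bSS (my choice of grammar for illustration; the paper treats a generalisation) the entropy can be estimated directly from word counts, and the limit matches the dominant root of a real polynomial, as a small sketch shows:

```python
import math

def catalan(n):
    """Number of Lukasiewicz words of length 2n + 1 for S -> a | b S S."""
    return math.comb(2 * n, n) // (n + 1)

# Per-symbol entropy estimate: log2(#words of length m) / m.
# The structure equation L(x) = x + x * L(x)^2 has its dominant
# singularity at the root x = 1/2 of the real polynomial 1 - 4x^2,
# so the estimates approach log2(1 / (1/2)) = 1 bit per symbol.
estimates = [math.log2(catalan(n)) / (2 * n + 1) for n in (10, 100, 1000)]
```

The estimates increase toward 1 bit per symbol, illustrating how the entropy falls out of a polynomial root rather than a direct limit computation.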
Nonmonotonic Probabilistic Logics between Model-Theoretic Probabilistic Logic and Probabilistic Logic under Coherence
Recently, it has been shown that probabilistic entailment under coherence is
weaker than model-theoretic probabilistic entailment. Moreover, probabilistic
entailment under coherence is a generalization of default entailment in System
P. In this paper, we continue this line of research by presenting probabilistic
generalizations of more sophisticated notions of classical default entailment
that lie between model-theoretic probabilistic entailment and probabilistic
entailment under coherence. That is, the new formalisms properly generalize
their counterparts in classical default reasoning, they are weaker than
model-theoretic probabilistic entailment, and they are stronger than
probabilistic entailment under coherence. The new formalisms are useful
especially for handling probabilistic inconsistencies related to conditioning
on zero events. They can also be applied for probabilistic belief revision.
More generally, in the same spirit as a similar previous paper, this paper
sheds light on exciting new formalisms for probabilistic reasoning beyond the
well-known standard ones. Comment: 10 pages; in Proceedings of the 9th International Workshop on
Non-Monotonic Reasoning (NMR-2002), Special Session on Uncertainty Frameworks
in Nonmonotonic Reasoning, pages 265-274, Toulouse, France, April 2002
Random-bit optimal uniform sampling for rooted planar trees with given sequence of degrees and Applications
In this paper, we redesign and simplify an algorithm due to Remy et al. for
the generation of rooted planar trees that satisfy a given partition of
degrees. This new version is now optimal in terms of random-bit complexity, up
to a multiplicative constant. We then apply a natural
"simulate-guess-and-prove" process to analyze the height of a random Motzkin
tree as a function of its frequency of unary nodes. When the number of unary
nodes dominates, we prove an unconventional height phenomenon (i.e., outside
the universal square-root behaviour). Comment: 19 pages
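The paper's random-bit-optimal algorithm is not reproduced in the abstract; a hedged sketch of the classical cycle-lemma construction for the same sampling problem (uniform rooted planar trees with a given degree multiset; function name and word representation are my own, and this version spends more random bits than the optimal one) looks like this:

```python
import random

def sample_tree_preorder(degrees):
    """Uniformly sample a rooted planar tree whose multiset of node
    degrees (numbers of children) is `degrees`, returned as the
    preorder degree sequence (a Lukasiewicz word)."""
    seq = list(degrees)
    # a tree with n nodes has n - 1 edges, i.e. sum(d - 1) == -1
    assert sum(seq) == len(seq) - 1
    random.shuffle(seq)
    # Cycle lemma: among the n rotations of seq, exactly one keeps all
    # proper prefix sums of (d - 1) nonnegative; it starts just after
    # the first position where the running sum attains its minimum.
    minimum, cut, total = 0, 0, 0
    for i, d in enumerate(seq):
        total += d - 1
        if total < minimum:
            minimum, cut = total, i + 1
    return seq[cut:] + seq[:cut]
```

Each tree corresponds to exactly one valid rotation of its degree word, which is what makes the output uniform; the shuffle, however, costs Θ(n log n) random bits, which is the inefficiency the paper's redesign addresses.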
Fredkin Gates for Finite-valued Reversible and Conservative Logics
The basic principles and results of Conservative Logic introduced by Fredkin
and Toffoli on the basis of a seminal paper of Landauer are extended to
d-valued logics, with a special attention to three-valued logics. Different
approaches to d-valued logics are examined in order to determine some possible
universal sets of logic primitives. In particular, we consider the typical
connectives of Łukasiewicz and Gödel logics, as well as Chang's MV-algebras. As
a result, some possible three-valued and d-valued universal gates are described
which realize a functionally complete set of fundamental connectives. Comment: 57 pages, 10 figures, 16 tables, 2 diagrams
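As a toy illustration of the two defining properties (not a gate from the paper), one simple three-valued conditional swap can be checked exhaustively for reversibility (it is a bijection on {0,1,2}³) and conservativity (it preserves the multiset of truth values):

```python
from collections import Counter
from itertools import product

def cswap3(c, a, b):
    """Swap a and b iff the control line carries the maximal value 2.
    A hypothetical three-valued analogue of the Fredkin gate."""
    return (c, b, a) if c == 2 else (c, a, b)

inputs = list(product(range(3), repeat=3))
outputs = [cswap3(*t) for t in inputs]
reversible = len(set(outputs)) == len(inputs)     # bijection on {0,1,2}^3
conservative = all(Counter(i) == Counter(o)       # truth values preserved
                   for i, o in zip(inputs, outputs))
```

Conservativity holds trivially here because the gate only permutes its lines; the paper's contribution is finding such gates that are also functionally complete for d-valued logics, which this toy gate is not claimed to be.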
Ontology-Mediated Query Answering over Log-Linear Probabilistic Data: Extended Version
Large-scale knowledge bases are at the heart of modern information systems. Their knowledge is inherently uncertain, and hence they are often materialized as probabilistic databases. However, probabilistic database management systems typically lack the capability to incorporate implicit background knowledge and, consequently, fail to capture some intuitive query answers. Ontology-mediated query answering is a popular paradigm for encoding commonsense knowledge, which can provide more complete answers to user queries. We propose a new data model that integrates the paradigm of ontology-mediated query answering with probabilistic databases, employing a log-linear probability model. We compare our approach to existing proposals, and provide supporting computational results.
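To give the flavour of a log-linear probability model over uncertain data (facts and weights below are invented for illustration; the paper's data model and query semantics are richer), each possible world is weighted by the exponential of the summed weights of the facts it contains:

```python
import math
from itertools import product

# Two uncertain facts with (made-up) log-linear weights:
facts = ["Professor(ada)", "Teaches(ada, logic)"]
weight = {"Professor(ada)": 1.2, "Teaches(ada, logic)": 0.5}

def world_weight(world):
    # a world's weight is exp(sum of weights of the facts it holds)
    return math.exp(sum(weight[f] for f, on in zip(facts, world) if on))

worlds = list(product([0, 1], repeat=len(facts)))
Z = sum(world_weight(w) for w in worlds)            # partition function
# marginal probability that Teaches(ada, logic) holds
prob_teaches = sum(world_weight(w) for w in worlds if w[1]) / Z
```

With no joint features the two facts are independent and the marginal reduces to a sigmoid of the weight; ontological background knowledge would enter as additional (possibly hard) weighted features over worlds, shifting mass toward worlds that respect the ontology.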
Fuzzy expert systems in civil engineering
CoAnnotating: Uncertainty-Guided Work Allocation between Human and Large Language Models for Data Annotation
Annotated data plays a critical role in Natural Language Processing (NLP) in
training models and evaluating their performance. Given recent developments in
Large Language Models (LLMs), models such as ChatGPT demonstrate zero-shot
capability on many text-annotation tasks, comparable with or even exceeding
human annotators. Such LLMs can serve as alternatives for manual annotation,
due to lower costs and higher scalability. However, little work has used LLMs
as complementary annotators or explored how annotation work is best
allocated between humans and LLMs to achieve both quality and cost objectives. We
propose CoAnnotating, a novel paradigm for Human-LLM co-annotation of
unstructured texts at scale. Under this framework, we utilize uncertainty to
estimate LLMs' annotation capability. Our empirical study on several datasets
shows that CoAnnotating allocates work effectively, achieving up to a 21%
performance improvement over a random baseline. For code
implementation, see https://github.com/SALT-NLP/CoAnnotating
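The core idea of routing by uncertainty can be sketched in a few lines (an illustrative policy only: the entropy proxy and threshold below are my simplification, and the paper compares several allocation strategies):

```python
import math
from collections import Counter

def label_entropy(labels):
    """Shannon entropy (bits) of the labels an LLM produced for one
    instance across repeated prompts/samples - an uncertainty proxy."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def allocate(instances, threshold=0.9):
    """Send instances whose LLM label entropy exceeds `threshold` to
    human annotators; keep the rest for the LLM.
    `instances` is a list of (text, sampled_labels) pairs."""
    human, llm = [], []
    for text, labels in instances:
        (human if label_entropy(labels) > threshold else llm).append(text)
    return human, llm
```

Instances on which the LLM answers consistently (entropy near zero) stay cheap and automated, while disagreement across samples flags the cases worth human attention.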