Towards Logical Specification of Statistical Machine Learning
We introduce a logical approach to formalizing statistical properties of
machine learning. Specifically, we propose a formal model for statistical
classification based on a Kripke model, and formalize various notions of
classification performance, robustness, and fairness of classifiers by using
epistemic logic. Then we show some relationships among properties of
classifiers and those between classification performance and robustness, which
suggests robustness-related properties that have not been formalized in the
literature as far as we know. To formalize fairness properties, we define a
notion of counterfactual knowledge and show techniques to formalize conditional
indistinguishability by using counterfactual epistemic operators. As far as we
know, this is the first work that uses logical formulas to express statistical
properties of machine learning, and that provides epistemic (resp.
counterfactually epistemic) views on robustness (resp. fairness) of
classifiers.
Comment: SEFM'19 conference paper (full version with errors corrected)
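To make the epistemic view concrete, here is a minimal sketch of evaluating a "knows" operator over a Kripke model whose worlds are classifier inputs. The threshold classifier, the epsilon-ball accessibility relation, and all names are illustrative assumptions, not the paper's exact formalization:

```python
# Worlds: (input point, true label); the classifier is a toy threshold.
worlds = [((0.1,), 0), ((0.45,), 0), ((0.55,), 1), ((0.9,), 1)]
classify = lambda x: int(x[0] >= 0.5)

# Accessibility: two worlds are indistinguishable when their inputs lie
# within epsilon of each other (an epistemic reading of perturbations).
EPS = 0.15
def accessible(w, v):
    return abs(w[0][0] - v[0][0]) <= EPS

def knows(prop, w):
    """K(prop) holds at w iff prop holds at every world accessible from w."""
    return all(prop(v) for v in worlds if accessible(w, v))

# Robustness at w: the classifier "knows" its own prediction, i.e. the
# prediction is constant across all indistinguishable worlds.
def robust_at(w):
    pred = classify(w[0])
    return knows(lambda v: classify(v[0]) == pred, w)

print([robust_at(w) for w in worlds])  # [True, False, False, True]
```

The two inputs straddling the decision boundary (0.45 and 0.55) are mutually accessible yet classified differently, so robustness fails exactly there.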
kLog: A Language for Logical and Relational Learning with Kernels
We introduce kLog, a novel approach to statistical relational learning.
Unlike standard approaches, kLog does not represent a probability distribution
directly. It is rather a language to perform kernel-based learning on
expressive logical and relational representations. kLog allows users to specify
learning problems declaratively. It builds on simple but powerful concepts:
learning from interpretations, entity/relationship data modeling, logic
programming, and deductive databases. Access by the kernel to the rich
representation is mediated by a technique we call graphicalization: the
relational representation is first transformed into a graph --- in particular,
a grounded entity/relationship diagram. Subsequently, a choice of graph kernel
defines the feature space. kLog supports mixed numerical and symbolic data, as
well as background knowledge in the form of Prolog or Datalog programs as in
inductive logic programming systems. The kLog framework can be applied to
tackle the same range of tasks that has made statistical relational learning so
popular, including classification, regression, multitask learning, and
collective classification. We also report about empirical comparisons, showing
that kLog can be either more accurate, or much faster at the same level of
accuracy, than Tilde and Alchemy. kLog is GPLv3 licensed and is available at
http://klog.dinfo.unifi.it along with tutorials.
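The graphicalization step described above can be sketched in a few lines: ground an entity/relationship interpretation into a labelled graph, then let a graph kernel define the feature space. The kernel below (an inner product of node-label counts) is a deliberately trivial stand-in for the far richer graph kernels kLog supports, and all names are illustrative:

```python
from collections import Counter

def graphicalize(entities, relations):
    """Turn entities {id: label} and relations [(label, id1, id2)] into a
    labelled graph: each relation tuple becomes an extra node linked to the
    entities it connects (a grounded entity/relationship diagram)."""
    nodes = dict(entities)
    edges = []
    for i, (rlabel, a, b) in enumerate(relations):
        rid = f"r{i}:{rlabel}"
        nodes[rid] = rlabel
        edges += [(rid, a), (rid, b)]
    return nodes, edges

def label_kernel(g1, g2):
    """Inner product of node-label count vectors: a very simple graph kernel."""
    c1, c2 = Counter(g1[0].values()), Counter(g2[0].values())
    return sum(c1[l] * c2[l] for l in c1)

# Two tiny interpretations: atoms and bonds of two mock molecules.
g_a = graphicalize({"a1": "C", "a2": "O"}, [("bond", "a1", "a2")])
g_b = graphicalize({"b1": "C", "b2": "C", "b3": "O"},
                   [("bond", "b1", "b2"), ("bond", "b2", "b3")])

print(label_kernel(g_a, g_b))  # C: 1*2 + O: 1*1 + bond: 1*2 = 5
```

Swapping in a different kernel changes the feature space without touching the relational representation, which is the separation of concerns the abstract describes.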
Ontology of core data mining entities
In this article, we present OntoDM-core, an ontology of core data mining
entities. OntoDM-core defines the most essential data mining entities in a three-layered
ontological structure comprising a specification, an implementation and an application
layer. It provides a representational framework for the description of mining
structured data, and in addition provides taxonomies of datasets, data mining tasks,
generalizations, data mining algorithms and constraints, based on the type of data.
OntoDM-core is designed to support a wide range of applications/use cases, such as
semantic annotation of data mining algorithms, datasets and results; annotation of
QSAR studies in the context of drug discovery investigations; and disambiguation of
terms in text mining. The ontology has been thoroughly assessed following the practices
in ontology engineering, is fully interoperable with many domain resources and
is easy to extend.
Research Priorities for Robust and Beneficial Artificial Intelligence
Success in the quest for artificial intelligence has the potential to bring
unprecedented benefits to humanity, and it is therefore worthwhile to
investigate how to maximize these benefits while avoiding potential pitfalls.
This article gives numerous examples (which should by no means be construed as
an exhaustive list) of such worthwhile research aimed at ensuring that AI
remains robust and beneficial.
Comment: This article gives examples of the type of research advocated by the
open letter for robust & beneficial AI at
http://futureoflife.org/ai-open-lette
Semantics, Modelling, and the Problem of Representation of Meaning -- a Brief Survey of Recent Literature
Over the past 50 years many have debated what representation should be used
to capture the meaning of natural language utterances. Recently, research has
raised new requirements for such representations. Here I survey some of the
interesting representations proposed to address these new needs.
Comment: 15 pages, no figures
Modelling and analyzing adaptive self-assembling strategies with Maude
Building adaptive systems with predictable emergent behavior is a challenging task and it is becoming a critical need. The research community has accepted the challenge by introducing approaches of various nature: from software architectures, to programming paradigms, to analysis techniques. We recently proposed a conceptual framework for adaptation centered around the role of control data. In this paper we show that it can be naturally realized in a reflective logical language like Maude by using the Reflective Russian Dolls model. Moreover, we exploit this model to specify, validate and analyse a prominent example of adaptive system: robot swarms equipped with self-assembly strategies. The analysis exploits the statistical model checker PVeStA