Behavior of Analogical Inference w.r.t. Boolean Functions
It has been observed that a particular form of analogical inference, based on analogical proportions, yields competitive results in classification tasks. Using the algebraic normal form of Boolean functions, it has been shown that analogical prediction is always exact iff the labeling function is affine. We point out that affine functions are also meaningful when using another view of analogy. We address the accuracy of analogical inference for arbitrary Boolean functions and show that if a function is Δ-close to an affine function, then the probability of making a wrong prediction is upper bounded by 4Δ. This result is confirmed by an empirical study showing that the upper bound is tight. It highlights the specificity of analogical inference, which is also characterized in terms of the Hamming distance.
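The exactness claim can be checked by brute force. The sketch below is a minimal illustration, not the paper's actual construction: it assumes the arithmetic reading of Boolean proportions (a : b :: c : d holds iff a − b = c − d) and enumerates all quadruples of Boolean tuples; the function names are ours.

```python
from itertools import product

def prop(a, b, c, d):
    # Boolean analogical proportion a : b :: c : d under the arithmetic
    # reading: the change from a to b equals the change from c to d
    return a - b == c - d

def always_exact(f, n):
    # Analogical prediction: for every quadruple (A, B, C, D) of Boolean
    # n-tuples in componentwise proportion, solve f(A) : f(B) :: f(C) : x
    # and compare the solution x (when one exists) to the true label f(D).
    for A, B, C, D in product(product((0, 1), repeat=n), repeat=4):
        if all(prop(*t) for t in zip(A, B, C, D)):
            solutions = [x for x in (0, 1) if prop(f(A), f(B), f(C), x)]
            if solutions and f(D) not in solutions:
                return False  # a wrong prediction exists
    return True

# Affine labeling function: prediction is always exact.
affine = lambda v: v[0] ^ v[1] ^ 1
# Non-affine labeling function (conjunction): wrong predictions occur.
conj = lambda v: v[0] & v[1]
```

Running `always_exact` on these two examples separates the affine case (always exact) from the non-affine one (at least one wrong prediction), in line with the characterization above.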
Relevant Entity Selection: Knowledge Graph Bootstrapping via Zero-Shot Analogical Pruning
Knowledge Graph Construction (KGC) can be seen as an iterative process
starting from a high quality nucleus that is refined by knowledge extraction
approaches in a virtuous loop. Such a nucleus can be obtained from knowledge
existing in an open KG like Wikidata. However, due to the size of such generic
KGs, integrating them as a whole may entail irrelevant content and scalability
issues. We propose an analogy-based approach that starts from seed entities of
interest in a generic KG, and keeps or prunes their neighboring entities. We
evaluate our approach on Wikidata through two manually labeled datasets that
contain either domain-homogeneous or -heterogeneous seed entities. We
empirically show that our analogy-based approach outperforms LSTM, Random
Forest, SVM, and MLP, with a drastically lower number of parameters. We also
evaluate its generalization potential in a transfer learning setting. These
results advocate for the further integration of analogy-based inference in
tasks related to the KG lifecycle.
A Galois Framework for the Study of Analogical Classifiers
In this paper, we survey some recent advances in the study of analogical classifiers, i.e., classifiers that are compatible with the principle of analogical inference. We present a Galois framework induced by the relation between formal models of analogy and the corresponding classes of analogy-preserving functions. The usefulness of these general results is illustrated over Boolean domains, for which we explicitly present the Galois closed sets of analogical classifiers for different pairs of formal models of Boolean analogies.
Free energies of Boltzmann Machines: self-averaging, annealed and replica symmetric approximations in the thermodynamic limit
Restricted Boltzmann machines (RBMs) constitute one of the main models for
machine statistical inference and they are widely employed in Artificial
Intelligence as powerful tools for (deep) learning. However, in contrast with
countless remarkable practical successes, their mathematical formalization has
been largely elusive: from a statistical-mechanics perspective these systems
display the same (random) Gibbs measure of bi-partite spin-glasses, whose
rigorous treatment is notoriously difficult. In this work, beyond providing a
brief review on RBMs from both the learning and the retrieval perspectives, we
aim to contribute to their analytical investigation, by considering two
distinct realizations of their weights (i.e., Boolean and Gaussian) and
studying the properties of their related free energies. More precisely,
focusing on a RBM characterized by digital couplings, we first extend the
Pastur-Shcherbina-Tirozzi method (originally developed for the Hopfield model)
to prove the self-averaging property for the free energy, over its quenched
expectation, in the infinite volume limit, then we explicitly calculate its
simplest approximation, namely its annealed bound. Next, focusing on a RBM
characterized by analogical weights, we extend Guerra's interpolating scheme to
obtain a control of the quenched free-energy under the assumption of replica
symmetry: we get self-consistencies for the order parameters (in full agreement
with the existing Literature) as well as the critical line for ergodicity
breaking that turns out to be the same obtained in AGS theory. As we discuss,
this analogy stems from the slow-noise universality. Finally, glancing beyond
replica symmetry, we analyze the fluctuations of the overlaps for an estimate
of the (slow) noise affecting the retrieval of the signal, and by a stability
analysis we recover the Aizenman-Contucci identities typical of glassy systems.
Comment: 21 pages, 1 figure
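For reference, the annealed bound mentioned above is the standard consequence of Jensen's inequality applied to the quenched average (a textbook relation, not the paper's specific computation): writing $Z_N(J)$ for the partition function at fixed disorder $J$,

```latex
\mathbb{E}_J\!\left[\ln Z_N(J)\right] \;\le\; \ln \mathbb{E}_J\!\left[Z_N(J)\right],
```

so that, with the free-energy density $f_N = -\frac{1}{\beta N}\ln Z_N$, the annealed free energy $-\frac{1}{\beta N}\ln\mathbb{E}_J[Z_N]$ is a lower bound for the quenched free energy $-\frac{1}{\beta N}\mathbb{E}_J[\ln Z_N]$.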
Galois theory for analogical classifiers
Analogical proportions are 4-ary relations that read "A is to B as C is to D". Recent works have highlighted the fact that such relations can support a specific form of inference, called analogical inference. This inference mechanism was empirically proved to be efficient in several reasoning and classification tasks. In the latter case, it relies on the notion of analogy preservation. In this paper, we explore the relation between formal models of analogy and the corresponding classes of analogy-preserving functions, and we establish a Galois theory of analogical classifiers. We illustrate the usefulness of this Galois framework over Boolean domains, and we explicitly determine the closed sets of analogical classifiers, i.e., classifiers that are compatible with analogical inference, for each pair of Boolean analogies.
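The notion of analogy preservation that underpins this Galois correspondence can be illustrated by exhaustive checking. The sketch below is a hypothetical illustration assuming the arithmetic reading of Boolean proportions (a : b :: c : d iff a − b = c − d); the example functions are ours, not taken from the paper.

```python
from itertools import product

def prop(a, b, c, d):
    # Boolean analogical proportion a : b :: c : d (arithmetic reading)
    return a - b == c - d

def preserves_analogy(f, n):
    # f preserves analogy if, whenever four Boolean n-tuples are in
    # componentwise proportion, their images under f are in proportion too
    for A, B, C, D in product(product((0, 1), repeat=n), repeat=4):
        if all(prop(*t) for t in zip(A, B, C, D)):
            if not prop(f(A), f(B), f(C), f(D)):
                return False
    return True

projection = lambda v: v[0]          # preserves analogy
negation = lambda v: 1 - v[0]        # preserves analogy
conjunction = lambda v: v[0] & v[1]  # does not preserve analogy
```

Projections and negation pass the check, while conjunction fails it, which gives a concrete sense of how classes of analogy-preserving functions are carved out.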
Learning predictive categories using lifted relational neural networks
Lifted relational neural networks (LRNNs) are a flexible neural-symbolic framework based on the idea of lifted modelling. In this paper we show how LRNNs can be easily used to declaratively specify and solve learning problems in which latent categories of entities, properties and relations need to be jointly induced.
Solving morphological analogies: from retrieval to generation
Analogical inference is a remarkable capability of human reasoning, and has
been used to solve hard reasoning tasks. Analogy based reasoning (AR) has
gained increasing interest from the artificial intelligence community and has
shown its potential in multiple machine learning tasks such as classification,
decision making and recommendation with competitive results. We propose a deep
learning (DL) framework to address and tackle two key tasks in AR: analogy
detection and solving. The framework is thoroughly tested on the Siganalogies
dataset of morphological analogical proportions (APs) between words, and shown
to outperform symbolic approaches in many languages. Previous work has
explored the behavior of the Analogy Neural Network for classification (ANNc)
on analogy detection and of the Analogy Neural Network for retrieval (ANNr) on
analogy solving by retrieval, as well as the potential of an autoencoder (AE)
for analogy solving by generating the solution word. In this article we
summarize these findings and we extend them by combining ANNr and the AE
embedding model, and evaluating the performance of ANNc as a retrieval method.
The combination of ANNr and AE outperforms the other approaches in almost all
cases, and ANNc as a retrieval method achieves competitive or better
performance than 3CosMul. We conclude with general guidelines on using our
framework to tackle APs with DL.
Comment: Preprint submitted to Springer Special Issue in Annals of Mathematics and Artificial Intelligence: Mathematical Foundations of analogical reasoning and application
Workshop Notes of the Sixth International Workshop "What can FCA do for Artificial Intelligence?"