Checking Trustworthiness of Probabilistic Computations in a Typed Natural Deduction System
In this paper we present the probabilistic typed natural deduction calculus
TPTND, designed to reason about and derive trustworthiness properties of
probabilistic computational processes, like those underlying current AI
applications. Derivability in TPTND is interpreted as the process of extracting
samples of possibly complex outputs with a certain frequency from a given
categorical distribution. We formalize trust for such outputs as a form of
hypothesis testing on the distance between such frequency and the intended
probability. The main advantage of the calculus is that it renders this notion of
trustworthiness checkable. We present a computational semantics for the terms
over which we reason, and then the semantics of TPTND, where logical operators
as well as a Trust operator are defined through introduction and elimination
rules. We illustrate structural and metatheoretical properties, with particular
focus on establishing under which term evolutions and applications of logical
rules the notion of trustworthiness is preserved.
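As an illustrative sketch only (not the TPTND calculus itself), the trust-as-hypothesis-testing idea can be phrased as checking whether the observed sampling frequency of an output lies within a concentration-bound tolerance of the intended probability; the tolerance below is a Hoeffding-style bound chosen for the example.

```python
import math

def trustworthy(successes: int, n: int, p_intended: float,
                confidence: float = 0.95) -> bool:
    """Accept trust when the observed frequency is within a
    Hoeffding-style tolerance of the intended probability."""
    freq = successes / n
    # epsilon such that P(|freq - p| > eps) <= 1 - confidence
    eps = math.sqrt(math.log(2 / (1 - confidence)) / (2 * n))
    return abs(freq - p_intended) <= eps

# 480 "heads" in 1000 samples of an intended fair coin: deviation 0.02
print(trustworthy(480, 1000, 0.5))  # True
print(trustworthy(300, 1000, 0.5))  # False
```

The choice of bound and confidence level is an assumption of this sketch; the paper's actual trust rules are given proof-theoretically.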
An Answer Set Programming-based Implementation of Epistemic Probabilistic Event Calculus
We describe a general procedure for translating Epistemic Probabilistic Event Calculus (EPEC) action language domains into Answer Set Programs (ASP), and show how the Python-driven features of the ASP solver Clingo can be used to provide efficient computation in this probabilistic setting. EPEC supports probabilistic, epistemic reasoning in domains containing narratives that include both an agent’s own action executions and environmentally triggered events. Some of the agent’s actions may be belief-conditioned, and some may be imperfect sensing actions that alter the strengths of previously held beliefs. We show that our ASP implementation can be used to provide query answers that fully correspond to EPEC’s own declarative, Bayesian-inspired semantics.
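The Bayesian-inspired flavour of such query answering can be sketched, very loosely, as summing the weights of possible worlds that satisfy a query. The mini-domain below (fluent names and probabilities are hypothetical, and worlds are assumed independent purely for illustration) is not the EPEC/Clingo implementation, just the underlying idea in plain Python.

```python
from itertools import product

# Hypothetical mini-domain: each fluent is independently true with the
# given probability; query answers are sums of possible-world weights.
fluents = {"alarm_on": 0.9, "door_open": 0.3}

def worlds():
    names = list(fluents)
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        weight = 1.0
        for name, val in world.items():
            p = fluents[name]
            weight *= p if val else 1 - p
        yield world, weight

def query(condition):
    """Probability mass of the worlds satisfying `condition`."""
    return sum(w for world, w in worlds() if condition(world))

print(round(query(lambda w: w["alarm_on"] and not w["door_open"]), 2))  # 0.63
```

EPEC's actual semantics handles narratives, triggered events, and belief-conditioned actions; this sketch only shows the weighted-model-counting shape of the computation.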
Using Inductive Logic Programming to globally approximate Neural Networks for preference learning: challenges and preliminary results
In this paper we explore the use of Answer Set Programming (ASP), and in particular the state-of-the-art Inductive Logic Programming (ILP) system ILASP, as a method to explain black-box models, e.g. Neural Networks (NNs), when they are used to learn user preferences. To this aim, we created a dataset of users’ preferences over a set of recipes, trained a set of NNs on these data, and performed preliminary experiments investigating how ILASP can globally approximate these NNs. Since the computational time required to train ILASP on high-dimensional feature spaces is very high, we focused on making global approximation more scalable. In particular, we experimented with Principal Component Analysis (PCA) to reduce the dimensionality of the dataset while trying to keep our explanations transparent.
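The PCA preprocessing step mentioned above can be sketched in a few lines; the toy matrix below stands in for a recipe-feature dataset and is not the paper's actual data.

```python
import numpy as np

def pca_reduce(X: np.ndarray, k: int) -> np.ndarray:
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                 # centre each feature
    cov = np.cov(Xc, rowvar=False)          # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
    return Xc @ top

# Toy stand-in for a recipe-feature matrix: 5 samples, 4 features -> 2
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
print(pca_reduce(X, 2).shape)  # (5, 2)
```

In the ILP pipeline, the reduced components would replace the original features before learning, trading some interpretability of individual features for tractable training time.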
A Unifying Framework for Learning Argumentation Semantics
Argumentation is a very active research field of Artificial Intelligence
concerned with the representation and evaluation of arguments used in dialogues
between humans and/or artificial agents. Acceptability semantics of formal
argumentation systems define the criteria for the acceptance or rejection of
arguments. Several software systems, known as argumentation solvers, have been
developed to compute the accepted/rejected arguments using such criteria. These
include systems that learn to identify the accepted arguments using
non-interpretable methods. In this paper we present a novel framework, which
uses an Inductive Logic Programming approach to learn the acceptability
semantics for several abstract and structured argumentation frameworks in an
interpretable way. Through an empirical evaluation we show that our framework
outperforms existing argumentation solvers, opening up new research
directions in the area of formal argumentation and human-machine dialogues.
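To make the notion of acceptability semantics concrete, here is a standard (textbook, not the paper's learned) computation of the grounded extension of an abstract argumentation framework, obtained by iterating the characteristic function until a fixed point.

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of the abstract framework (arguments, attacks):
    iterate F(S) = {a : every attacker of a is attacked by S} to a
    fixed point, starting from the empty set."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
    extension = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for b in attackers[a])
        }
        if defended == extension:
            return extension
        extension = defended

# a attacks b, b attacks c: a is unattacked and defends c
print(sorted(grounded_extension({"a", "b", "c"},
                                {("a", "b"), ("b", "c")})))  # ['a', 'c']
```

An argumentation solver computes extensions like this directly; the framework in the paper instead learns such acceptance criteria from examples via ILP, in an interpretable way.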
Dissemination Corner: BRIO (2023)
The inaugural BEWARE workshop and its sequel, BEWARE-23, are discussed in detail, including their focus on themes such as Bias, Risk, Explainability, and the influence of Logic in AI. A novel tool developed by BRIO and Alkemy for post-hoc analysis of AI classifiers, aimed at exposing potential biases, is also discussed. This article emphasizes the need for ongoing scrutiny, evaluation, and ethical consideration in the development and application of AI technologies, underlining BRIO’s commitment to advancing the field in an ethical, fair, and accessible manner.
Probabilistic epistemic reasoning about actions
Modelling agents that are able to reason about actions in an ever-changing environment continues to be a central challenge in Artificial Intelligence, and many technical frameworks that tackle it have been proposed over the past few decades. This thesis deals with this problem in the case in which the environment and its evolution are incompletely known, and agents can seek to gain further information about it and act accordingly. Two languages are proposed, namely PEC+ and EPEC, which extend a standard logical language for reasoning about actions known as the Event Calculus, and use Probability Theory as a measure of the agent’s degree of belief about aspects of the domain. These languages are then shown to satisfy some essential properties. PEC+ is implemented and tested against a number of real world scenarios.
Introducing k-lingo: a k-depth Bounded Version of ASP System Clingo
Depth-Bounded Boolean Logics (DBBL for short) are well-understood frameworks to model rational agents equipped with limited deductive capabilities. These logics use a parameter k>=0 to limit the amount of virtual information, i.e., the information that the agent may temporarily assume throughout the deductive process. This restriction brings several advantageous properties over classical Propositional Logic, including polynomial decision procedures for deducibility and refutability. Inspired by DBBL, we propose a limited-depth version of the popular ASP system clingo, tentatively dubbed k-lingo after the bound k on virtual information. We illustrate the connection between DBBL and ASP through examples involving both proof-theoretical and implementative aspects. The paper concludes with some comments on future work, which include a computational complexity characterization of the system, applications to multi-agent systems and feasible approximations of probability functions.
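The role of the bound k can be illustrated with a toy depth-bounded refutation procedure for clause sets (this is a generic DBBL-style sketch, not the k-lingo system): unit propagation is "free", and each of at most k nested case splits introduces one piece of virtual information.

```python
def propagate(clauses):
    """Exhaustive unit propagation; None signals a derived contradiction."""
    clauses = [set(c) for c in clauses]
    while True:
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if unit is None:
            return clauses
        new = []
        for c in clauses:
            if unit in c:
                continue                  # clause already satisfied
            reduced = c - {-unit}
            if not reduced:
                return None               # empty clause derived
            new.append(reduced)
        clauses = new

def refutable(clauses, k):
    """Is the clause set refutable using at most k nested case splits
    (pieces of virtual information)?"""
    simplified = propagate(clauses)
    if simplified is None:
        return True                       # refuted at depth 0
    if k == 0:
        return False
    atoms = {abs(l) for c in simplified for l in c}
    return any(refutable(simplified + [{p}], k - 1)
               and refutable(simplified + [{-p}], k - 1)
               for p in atoms)

# Unsatisfiable, but with no unit clauses: depth 0 fails, depth 1 succeeds
pigeon = [{1, 2}, {-1, 2}, {1, -2}, {-1, -2}]
print(refutable(pigeon, 0))  # False
print(refutable(pigeon, 1))  # True
```

The example shows the characteristic DBBL phenomenon: the same conclusion is out of reach for a depth-0 reasoner but derivable once a single virtual assumption is allowed.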
Towards a logic-based approach for multi-modal fusion and decision making during motor rehabilitation sessions
We introduce a general approach which aims at combining machine learning and logic-based techniques in order to model the user’s cognitive and motor abilities. In the context of motor rehabilitation, hybrid systems are a convenient option as they allow both for the representation of formal constraints needed to implement a clinically valid exercise, and for the statistical modelling of intrinsically noisy data sources. Moreover, logic-based systems offer a transparent way to look at the decisions taken by an automated system. This is particularly useful when an AI system needs to interact with a therapist in order to assist therapeutic intervention, e.g. by explaining why a given decision is sound. This methodology is currently being developed within the context of the AVATEA project.
Agents Displacement in Arbitrary Geometrical Spaces: An Evolutionary Computation based Approach
In many different social contexts, communication allows a collective intelligence to emerge. However, a correct way of exchanging information usually requires particular topological configurations of the agents involved in the process. Such a configuration should take into account several parameters, e.g. agent positioning, agent proximity and time efficiency of communication. Our aim is to present an algorithm, based on evolutionary programming, which optimizes agent placement on arbitrarily shaped areas. In order to show its ability to deal with arbitrary bi-dimensional topologies, this algorithm has been tested on a set of differently shaped areas that present concavities, convexities and obstacles. This approach can be extended to deal with concrete cases, such as object localization in a delimited area.
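A minimal sketch of evolutionary placement optimization (a toy (1+1)-style scheme on the unit square, not the paper's algorithm, and with spread measured by minimum pairwise distance purely for illustration):

```python
import math
import random

def min_pairwise_dist(points):
    """Smallest distance between any two agents: the spread to maximize."""
    return min(math.dist(a, b)
               for i, a in enumerate(points) for b in points[i + 1:])

def evolve_placement(n_agents=5, generations=200, sigma=0.05, seed=42):
    """(1+1)-style evolutionary search: perturb all positions with
    Gaussian noise (clipped to the unit square) and keep the offspring
    whenever it increases the minimum pairwise distance."""
    rng = random.Random(seed)
    best = [(rng.random(), rng.random()) for _ in range(n_agents)]
    for _ in range(generations):
        child = [(min(1.0, max(0.0, x + rng.gauss(0, sigma))),
                  min(1.0, max(0.0, y + rng.gauss(0, sigma))))
                 for x, y in best]
        if min_pairwise_dist(child) > min_pairwise_dist(best):
            best = child
    return best

placement = evolve_placement()
print(round(min_pairwise_dist(placement), 3))
```

Handling concavities and obstacles, as in the paper, would amount to replacing the unit-square clipping with a membership test for the arbitrarily shaped area and penalizing infeasible positions in the fitness function.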