Machine ethics via logic programming
Machine ethics is an interdisciplinary field of inquiry that emerges from the need to
imbue autonomous agents with the capacity for moral decision-making. While some
approaches provide implementations in Logic Programming (LP) systems, they have not
exploited LP-based reasoning features that appear essential for moral reasoning.
This PhD thesis further investigates the suitability of LP for machine ethics, notably
through a combination of LP-based reasoning features and techniques available in LP
systems. Moral facets, as studied in moral philosophy and psychology, that are
amenable to computational modeling are identified and mapped to appropriate LP
concepts for representing and reasoning about them.
The main contributions of the thesis are twofold.
First, novel approaches are proposed for employing tabling in contextual abduction
and updating, individually and combined, plus an LP approach to counterfactual reasoning; the latter is implemented on top of the aforementioned combined abduction and updating technique with tabling. All of these are important for modeling various issues of the aforementioned moral facets.
Second, a variety of LP-based reasoning features are applied to model the identified
moral facets, through examples taken directly from the morality literature.
These applications include: (1) modeling moral permissibility according to the Doctrines of Double Effect (DDE) and Triple Effect (DTE), demonstrating deontological and utilitarian judgments via integrity constraints (in abduction) and preferences over abductive scenarios; (2) modeling moral reasoning under uncertainty of actions, via abduction and probabilistic LP; (3) modeling moral updating (which allows other, possibly overriding, moral rules to be adopted by an agent on top of those it currently follows) via the integration of tabling in contextual abduction and updating; and (4) modeling moral permissibility and its justification via counterfactuals, where counterfactuals are used for formulating DDE.
Funding: Fundação para a Ciência e a Tecnologia (FCT) grant SFRH/BD/72795/2010; CENTRIA
and DI/FCT/UNL for supplementary funding.
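The interplay of integrity constraints and preferences described in item (1) can be illustrated with a minimal sketch. This is not the thesis's LP implementation: the scenario data, function names, and the trolley-style dilemma below are illustrative assumptions. Candidate actions play the role of abducibles, an integrity constraint encodes the DDE prohibition on harm used as a means, and a preference over the surviving abductive scenarios encodes the utilitarian comparison.

```python
# Illustrative sketch (not the thesis's LP code): abductive scenarios
# for a trolley-style dilemma, filtered by an integrity constraint
# encoding the Doctrine of Double Effect (DDE), then ranked by utility.

# Each abductive scenario: a chosen action plus its moral consequences.
scenarios = [
    {"action": "divert",  "deaths": 1, "harm_used_as_means": False},
    {"action": "push",    "deaths": 1, "harm_used_as_means": True},
    {"action": "refrain", "deaths": 5, "harm_used_as_means": False},
]

def satisfies_dde(s):
    """Integrity constraint: DDE forbids harm that is a means to the good end."""
    return not s["harm_used_as_means"]

def permissible(scenarios):
    # Deontological filter: discard scenarios violating the constraint.
    admissible = [s for s in scenarios if satisfies_dde(s)]
    # Utilitarian preference: among admissible scenarios, minimise deaths.
    best = min(s["deaths"] for s in admissible)
    return [s["action"] for s in admissible if s["deaths"] == best]

print(permissible(scenarios))  # diverting is permissible; pushing is not
```

Note how "push" is excluded before utilities are even compared, mirroring the way integrity constraints in abduction prune scenarios prior to preference reasoning.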
System Design for Digital Experimentation and Explanation Generation
Experimentation increasingly drives everyday decisions in modern life, as it is considered by some to be the gold standard for determining cause and effect within any system. Digital experiments have expanded the scope and frequency of experimentation, ranging in complexity from classic A/B tests to contextual bandit experiments, which share features with reinforcement learning.
Although there exists a large body of prior work on estimating treatment effects using experiments, this prior work did not anticipate the new challenges and opportunities introduced by digital experimentation. Novel errors and threats to validity arise at the intersection of software and experimentation, especially when experimentation is in service of understanding human behavior or autonomous black-box agents.
We present several novel tools for automating aspects of the experimentation-analysis pipeline. We propose new methods for evaluating online field experiments and automatically generating the corresponding analyses of treatment effects. We then draw the connection between software testing and experimental design, applying software testing techniques to a kind of autonomous agent (a deep reinforcement learning agent) to demonstrate the need for novel testing paradigms when a software stack uses learned components that may have emergent behavior. We show how our system may be used to evaluate claims made about the behavior of autonomous agents, and find that some claims do not hold up under test. Finally, we show how to produce explanations of the behavior of black-box software-defined agents interacting with white-box environments via automated experimentation. We show how an automated system can be used for exploratory data analysis, with a human in the loop, to investigate a large space of possible counterfactual explanations.
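The treatment-effect analysis at the core of a classic A/B test can be sketched in a few lines. This is a generic textbook estimator, not the dissertation's system; the data, effect size, and function names are illustrative assumptions.

```python
# Minimal sketch of treatment-effect estimation for a classic A/B test:
# difference in group means with a normal-approximation 95% CI.
# Data and names are illustrative, not taken from the dissertation.
import math
import random

random.seed(0)
control   = [random.gauss(10.0, 2.0) for _ in range(1000)]
treatment = [random.gauss(10.5, 2.0) for _ in range(1000)]  # true lift: 0.5

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def ate_with_ci(treated, control, z=1.96):
    """Difference-in-means estimate of the average treatment effect."""
    ate = mean(treated) - mean(control)
    se = math.sqrt(variance(treated) / len(treated)
                   + variance(control) / len(control))
    return ate, (ate - z * se, ate + z * se)

ate, (lo, hi) = ate_with_ci(treatment, control)
print(f"ATE ~ {ate:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Automating exactly this kind of analysis, and guarding it against the software-induced threats to validity the abstract mentions, is where the thesis's contribution lies beyond the basic estimator shown here.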
Simple low cost causal discovery using mutual information and domain knowledge
This thesis examines causal discovery within datasets, in particular observational datasets where
normal experimental manipulation is not possible. A number of machine learning techniques
are examined in relation to their use of knowledge and the insights they can provide regarding
the situation under study. Their use of prior knowledge and the causal knowledge produced by
the learners are examined. Current causal learning algorithms are discussed in terms of their
strengths and limitations. The main contribution of the thesis is a new causal learner, LUMIN,
which operates with polynomial time complexity in both the number of variables and the number
of records examined. It makes no prior assumptions about the form of the relationships and is
capable of making extensive use of available domain information. This learner is compared to a
number of current learning algorithms and is shown to be competitive with them.
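The combination of mutual information and domain knowledge in the title can be illustrated with a hypothetical sketch. This is not the LUMIN algorithm itself (the abstract does not specify it); the toy data, threshold, and orientation rule below are assumptions: score variable pairs by a plug-in mutual information estimate, keep those above a threshold as candidate links, and orient edges using known domain ordering.

```python
# Hypothetical sketch of MI-based causal discovery (not LUMIN itself):
# score variable pairs by mutual information, keep high-MI pairs as
# candidate causal links, and orient edges with domain knowledge.
import math
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys):
    """Plug-in MI estimate (in bits) from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

# Toy data: rain drives wet_grass; traffic is independent of both.
data = {
    "rain":      [0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1],
    "wet_grass": [0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
    "traffic":   [1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0],
}
precedes = {("rain", "wet_grass")}  # domain knowledge: rain comes first

def discover(data, precedes, threshold=0.3):
    edges = []
    for a, b in combinations(data, 2):
        if mutual_information(data[a], data[b]) >= threshold:
            # Domain knowledge orients the edge; default keeps (a, b).
            edges.append((b, a) if (b, a) in precedes else (a, b))
    return edges

print(discover(data, precedes))  # only rain -> wet_grass survives
```

Each pairwise MI computation is linear in the number of records, and the pair loop is quadratic in the number of variables, which is consistent with the polynomial complexity claimed for the actual learner.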
Synergies between machine learning and reasoning - An introduction by the Kay R. Amel group
This paper proposes a tentative and original survey of meeting points between Knowledge Representation and Reasoning (KRR) and Machine Learning (ML), two areas which have developed quite separately over the last four decades. First, some common concerns are identified and discussed, such as the types of representation used, the roles of knowledge and data, the lack or excess of information, and the need for explanations and causal understanding. Then, the survey is organised into seven sections covering most of the territory where KRR and ML meet. We start with a section dealing with prototypical approaches from the literature on learning and reasoning: Inductive Logic Programming, Statistical Relational Learning, and Neurosymbolic AI, where ideas from rule-based reasoning are combined with ML. Then we focus on the use of various forms of background knowledge in learning, ranging from additional regularisation terms in loss functions, to the problem of aligning symbolic and vector space representations, and the use of knowledge graphs for learning. The next section describes how KRR notions may benefit learning tasks. For instance, constraints can be used, as in declarative data mining, to influence the learned patterns; semantic features can be exploited in low-shot learning to compensate for the lack of data; or advantage can be taken of analogies for learning purposes. Conversely, another section investigates how ML methods may serve KRR goals. For instance, one may learn special kinds of rules such as default rules, fuzzy rules or threshold rules, or special types of information such as constraints or preferences. The section also covers formal concept analysis and rough-set-based methods. Yet another section reviews various interactions between Automated Reasoning and ML, such as the use of ML methods in SAT solving to make reasoning faster.
Then a section deals with works related to model accountability, including explainability and interpretability, fairness and robustness. Finally, a section covers works on handling imperfect or incomplete data, including the problem of learning from uncertain or coarse data, the use of belief functions for regression, a revision-based view of the EM algorithm, the use of possibility theory in statistics, and the learning of imprecise models. This paper thus aims at a better mutual understanding of research in KRR and ML, and of how the two can cooperate. The paper is completed by an abundant bibliography.
Discovering Causal Relations and Equations from Data
Physics is a field of science that has traditionally used the scientific
method to answer questions about why natural phenomena occur and to make
testable models that explain the phenomena. Discovering equations, laws and
principles that are invariant, robust and causal explanations of the world has
been fundamental in physical sciences throughout the centuries. Discoveries
emerge from observing the world and, when possible, performing interventional
studies in the system under study. With the advent of big data and the use of
data-driven methods, causal and equation discovery fields have grown and made
progress in computer science, physics, statistics, philosophy, and many applied
fields. All these domains are intertwined and can be used to discover causal
relations, physical laws, and equations from observational data. This paper
reviews the concepts, methods, and relevant works on causal and equation
discovery in the broad field of Physics and outlines the most important
challenges and promising future lines of research. We also provide a taxonomy
for observational causal and equation discovery, point out connections, and
showcase a complete set of case studies in Earth and climate sciences, fluid
dynamics and mechanics, and the neurosciences. This review demonstrates that
discovering fundamental laws and causal relations by observing natural
phenomena is being revolutionised with the efficient exploitation of
observational data, modern machine learning algorithms and the interaction with
domain knowledge. Exciting times are ahead with many challenges and
opportunities to improve our understanding of complex systems.
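A toy example makes the idea of equation discovery from observational data concrete. The sketch below is far simpler than the methods this review surveys (it is an assumption for illustration, not a technique from the paper): it recovers the exponent of a power law y = c * x^k by least squares in log-log space, using approximate planetary data to rediscover Kepler's third law.

```python
# Toy illustration of equation discovery (much simpler than the methods
# surveyed): recover the exponent of a power law y = c * x^k from
# observational data by least squares in log-log space.
# Example: Kepler's third law, period ~ semi-major-axis^(3/2), i.e. k = 1.5.
import math

# (semi-major axis in AU, orbital period in years), approximate values
observations = [(0.39, 0.24), (0.72, 0.62), (1.00, 1.00),
                (1.52, 1.88), (5.20, 11.86), (9.58, 29.45)]

def fit_power_law(pairs):
    """Least-squares slope and intercept of log y against log x."""
    lx = [math.log(x) for x, _ in pairs]
    ly = [math.log(y) for _, y in pairs]
    n = len(pairs)
    mx, my = sum(lx) / n, sum(ly) / n
    k = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
    c = math.exp(my - k * mx)
    return k, c

k, c = fit_power_law(observations)
print(f"y ~ {c:.2f} * x^{k:.2f}")  # exponent close to 1.5
```

Real equation-discovery methods search far larger spaces of functional forms and must also distinguish causal laws from mere correlations, which is precisely the gap between this sketch and the literature reviewed above.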
Achieving Causal Fairness in Machine Learning
Fairness is a social norm and a legal requirement in today's society. Many laws and regulations (e.g., the Equal Credit Opportunity Act of 1974) have been established to prohibit discrimination and enforce fairness on several grounds, such as gender, age, sexual orientation, race, and religion, referred to as sensitive attributes. Nowadays, machine learning algorithms are extensively applied to make important decisions in many real-world applications, e.g., employment, admission, and loans. Traditional machine learning algorithms aim to maximize predictive performance, e.g., accuracy. Consequently, certain groups may be treated unfairly when those algorithms are applied for decision-making. It is therefore imperative to develop fairness-aware machine learning algorithms, so that the decisions they make are not only accurate but also subject to fairness requirements. In the literature, machine learning researchers have proposed association-based fairness notions, e.g., statistical parity, disparate impact, and equality of opportunity, and have developed corresponding discrimination mitigation approaches. However, these works did not treat fairness as a causal relationship. Although it is well known that association does not imply causation, the gap between association and causation has not received sufficient attention from fairness researchers and stakeholders.
The goal of this dissertation is to study fairness in machine learning, define appropriate fairness notions, and develop novel discrimination mitigation approaches from a causal perspective. Based on Pearl's structural causal model, we propose to formulate discrimination as causal effects of the sensitive attribute on the decision. We consider different types of causal effects to cope with different situations, including the path-specific effect for direct/indirect discrimination, the counterfactual effect for group/individual discrimination, and the path-specific counterfactual effect for general cases. When measuring discrimination, unidentifiable situations pose an inevitable barrier to accurate causal inference. To address this challenge, we propose novel bounding methods to accurately estimate the strength of unidentifiable fairness notions, including path-specific fairness, counterfactual fairness, and path-specific counterfactual fairness. Based on this estimation of fairness, we develop novel and efficient algorithms for learning fair classification models. Beyond classification, we also investigate discrimination issues in other machine learning scenarios, such as ranked data analysis.
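The counterfactual notion above can be illustrated with a hypothetical sketch on a toy Pearl-style structural causal model; this is not the dissertation's method, and the model, variable names, and bias parameter are invented for illustration. A decision is counterfactually fair if flipping the sensitive attribute, while holding a unit's exogenous background fixed, never changes the decision.

```python
# Hypothetical sketch of counterfactual fairness checking on a toy
# structural causal model (not the dissertation's actual method).
# A: sensitive attribute, Q: qualification (driven by noise U), D: decision.
import random

def qualification(u_q):
    return u_q  # Q depends only on exogenous noise, not on A

def decision(a, q, direct_bias=0.2):
    # Unfair model: the decision uses the sensitive attribute directly.
    return q + direct_bias * a

def counterfactually_fair(decide, n=1000, tol=1e-9):
    """D is counterfactually fair if flipping A (holding the exogenous
    noise fixed) never changes the decision for any sampled unit."""
    rng = random.Random(0)
    for _ in range(n):
        u_q = rng.random()          # one unit's exogenous background
        q = qualification(u_q)
        if abs(decide(1, q) - decide(0, q)) > tol:
            return False
    return True

print(counterfactually_fair(decision))                          # False
print(counterfactually_fair(lambda a, q: decision(a, q, 0.0)))  # True
```

In this toy model the exogenous noise is fully known, so the counterfactual is computable exactly; the unidentifiable situations the dissertation targets arise precisely when such noise terms cannot be recovered from data and only bounds on these quantities are available.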
Modular Logic Programming: Full Compositionality and Conflict Handling for Practical Reasoning
With the recent ubiquity of data and the profusion of available knowledge, there is
nowadays a need to reason from multiple sources
of often incomplete and uncertain knowledge. Our goal was to provide a way to
combine declarative knowledge bases – represented as logic programming modules
under the answer set semantics – as well as the individual results one already inferred
from them, without having to recalculate the results for their composition and without
having to explicitly know the original logic programming encodings that produced
such results. This posed many challenges, such as how to deal with fundamental
problems of modular frameworks for logic programming, namely how to define a
general compositional semantics that allows us to compose unrestricted modules.
Building upon existing logic programming approaches, we devised a framework
capable of composing generic logic programming modules while preserving the
crucial property of compositionality, which informally means that combining the
models of individual modules yields the models of the union of those modules. We are also
still able to reason in the presence of knowledge containing incoherencies, which is
informally characterised by a logic program that does not have an answer set due
to a cyclic dependency of an atom on its own default negation. In this thesis we also
discuss how the same approach can be extended to deal with probabilistic knowledge
in a modular and compositional way.
We depart from the Modular Logic Programming approach of Oikarinen &
Janhunen (2008) and Janhunen et al. (2009), which achieved a restricted form of
compositionality of answer set programming modules. We aim at generalising this
framework of modular logic programming, starting by lifting the restrictive conditions
originally imposed, and using alternative ways of combining what we call
Generalised Modular Logic Programs. We then deal with conflicts arising
in generalised modular logic programming and provide modular justifications and
debugging for the generalised modular logic programming setting, where justification
models answer the question "Why is a given interpretation indeed an answer set?"
and debugging models answer the question "Why is a given interpretation not an
answer set?".
In summary, our research deals with the problem of formally devising a generic
modular logic programming framework. We provide operators for combining arbitrary
modular logic programs together with a compositional semantics; we characterise
conflicts that occur when composing access control policies, which generalise to our
setting of generalised modular logic programming, and we offer ways of dealing with
them both syntactically, via a unified account of justification and debugging of logic
programs, and semantically, via a new semantics capable of dealing with incoherencies.
We also provide an extension of modular logic programming to a probabilistic setting.
These goals are already covered by published work. A prototypical tool implementing
the unification of justifications and debugging is
available for download from http://cptkirk.sourceforge.net.
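The compositionality property at the heart of this thesis can be illustrated with a small sketch; this is an assumption-laden toy, not the thesis's framework, and the modules and atom names are invented. Two modules' precomputed answer sets are joined by pairing models that agree on the atoms both modules mention, so the composition's models are obtained without recomputing them from the union of the underlying programs.

```python
# Illustrative sketch (not the thesis's framework): composing the
# precomputed answer sets of two modules by joining the models that
# agree on all shared atoms, instead of re-solving the union program.

def compose(models_a, atoms_a, models_b, atoms_b):
    """Join two modules' model sets; a pair of models is compatible
    when they agree on every atom mentioned by both modules."""
    shared = atoms_a & atoms_b
    joined = []
    for ma in models_a:
        for mb in models_b:
            if ma & shared == mb & shared:  # agreement on shared atoms
                joined.append(ma | mb)
    return joined

# Module A over atoms {p, q}: answer sets {p} and {q} (a choice).
models_a, atoms_a = [{"p"}, {"q"}], {"p", "q"}
# Module B over atoms {q, r}: r holds exactly when input q holds.
models_b, atoms_b = [{"q", "r"}, set()], {"q", "r"}

print(compose(models_a, atoms_a, models_b, atoms_b))  # [{'p'}, {'q', 'r'}]
```

The result, {p} and {q, r}, matches what solving the union of the two programs would give, which is the informal statement of compositionality; the thesis's contribution is making this property hold for unrestricted modules, where naive joins like this one break down.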