Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support
A framework and methodology---termed LogiKEy---for the design and engineering
of ethical reasoners, normative theories and deontic logics is presented. The
overall motivation is the development of suitable means for the control and
governance of intelligent autonomous systems. LogiKEy's unifying formal
framework is based on semantical embeddings of deontic logics, logic
combinations and ethico-legal domain theories in expressive classical
higher-order logic (HOL). This meta-logical approach enables powerful tool
support in LogiKEy: off-the-shelf theorem provers and model finders for HOL
assist the LogiKEy designer of ethical intelligent agents in flexibly
experimenting with underlying logics and their combinations, with ethico-legal
domain theories, and with concrete examples---all at the same time. Continuous
improvements to these off-the-shelf provers directly boost reasoning
performance in LogiKEy, without further integration work. Case studies, in
which the LogiKEy framework and methodology have been applied and tested, give
evidence that HOL's undecidability often does not hinder efficient
experimentation. (Comment: 50 pages; 10 figures.)
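To make the idea of a semantical embedding concrete, here is a minimal Python sketch (not LogiKEy itself, which embeds deontic logics in classical HOL via theorem provers such as Isabelle/HOL): formulas of standard deontic logic (SDL) are modelled as predicates on worlds, and the obligation operator quantifies over deontically ideal successor worlds. All names and the toy model below are illustrative assumptions.

```python
# Sketch of SDL semantics: a formula is a function from worlds to bool;
# O(phi) holds at w iff phi holds at every ideal world accessible from w.

def obligatory(phi, ideal):
    """O(phi): phi holds in all deontically ideal worlds reachable from w."""
    return lambda w: all(phi(v) for v in ideal.get(w, set()))

def implies(phi, psi):
    """Material implication, pointwise on worlds."""
    return lambda w: (not phi(w)) or psi(w)

# Toy model: world 0 is actual; worlds 1 and 2 are its ideal alternatives.
ideal = {0: {1, 2}, 1: {1}, 2: {2}}
p = lambda w: w in {1, 2}   # "the promise is kept"
q = lambda w: w == 1        # some other contingent fact

print(obligatory(p, ideal)(0))              # True: p holds in all ideal worlds
print(obligatory(q, ideal)(0))              # False: q fails at ideal world 2
print(obligatory(implies(q, p), ideal)(0))  # True
```

The embedding pattern mirrors the meta-logical approach in spirit: once the connectives are definitions inside the host language, generic tools of that host (here, plain evaluation; in LogiKEy, HOL provers and model finders) can be applied to the object logic for free.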
A Neutral Temporal Deontic STIT Logic
In this work we answer a long-standing request for temporal embeddings of deontic STIT logics by introducing the multi-agent STIT logic TDS. The logic is based upon an atemporal utilitarian STIT logic. Yet the logic presented here is neutral: instead of committing ourselves to utilitarian theories, we prove the logic TDS sound and complete with respect to relational frames that do not employ any utilitarian function. We demonstrate how these neutral frames can be transformed into utilitarian temporal frames while preserving validity. Lastly, we discuss problems that arise from employing binary utility functions in a temporal setting.
Knowledge and Blameworthiness
Blameworthiness of an agent or a coalition of agents is often defined in
terms of the principle of alternative possibilities: for the coalition to be
responsible for an outcome, the outcome must take place and the coalition
must have had a strategy to prevent it. In this article we argue that in
settings with imperfect information, not only should the coalition have had a
strategy, but it also should have known that it had a strategy, and it should
have known what the strategy was. The main technical result of the article is a
sound and complete bimodal logic that describes the interplay between knowledge
and blameworthiness in strategic games with imperfect information.
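The distinction the abstract draws can be sketched in a few lines of Python (the toy game and all names are illustrative assumptions, not from the paper): under imperfect information, "having a strategy to prevent an outcome" is weaker than "knowing what the strategy is", because knowing it requires a single action that works uniformly across all states the agent cannot distinguish.

```python
# "Has a strategy": some action avoids the bad outcome at the actual state.
def has_strategy(state, actions, outcome, bad):
    return any(outcome[state][a] != bad for a in actions)

# "Knows the strategy": one single action avoids the bad outcome in every
# state indistinguishable from the actual one (a uniform strategy).
def knows_strategy(state, actions, outcome, bad, indist):
    cell = indist[state]
    return any(all(outcome[s][a] != bad for s in cell) for a in actions)

actions = ["left", "right"]
bad = "crash"
outcome = {
    "s1": {"left": "safe", "right": "crash"},
    "s2": {"left": "crash", "right": "safe"},
}
# The agent cannot tell s1 and s2 apart.
indist = {"s1": {"s1", "s2"}, "s2": {"s1", "s2"}}

print(has_strategy("s1", actions, outcome, bad))            # True
print(knows_strategy("s1", actions, outcome, bad, indist))  # False
```

In this toy game the agent has a preventing strategy at each state, yet no uniform action works across the indistinguishable pair, so by the article's criterion the agent is not blameworthy for the crash.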
Grounding power on actions and mental attitudes
The main objective of this work is to develop a logic called IAL (Intentional Agency Logic) in which we can reason about mental states of agents, action occurrences, and agentive and group powers. IAL is exploited for a formal analysis of different forms of power, such as an agent i's power to achieve a certain result and an agent i's power over another agent j (i.e., social power).
A Logic-Based Analysis of Responsibility
This paper presents a logic-based framework to analyze responsibility, which
I refer to as intentional epistemic act-utilitarian stit theory (IEAUST). To be
precise, IEAUST is used to model and syntactically characterize various modes
of responsibility, where by 'modes of responsibility' I mean instances of
Broersen's three categories of responsibility (causal, informational, and
motivational responsibility), cast against the background of particular deontic
contexts. IEAUST is obtained by integrating a modal language to express the
following components of responsibility on stit models: agency, epistemic
notions, intentionality, and different senses of obligation. With such a
language, I characterize the components of responsibility using particular
formulas. Then, adopting a compositional approach -- where complex modalities
are built out of more basic ones -- these characterizations of the components
are used to formalize the aforementioned modes of responsibility. (In
Proceedings TARK 2023, arXiv:2307.0400.)
Deontic Epistemic stit Logic Distinguishing Modes of 'Mens Rea'
Most juridical systems contain the principle that an act is only unlawful
if the agent conducting the act has a 'guilty mind' ('mens rea'). Different
law systems distinguish different modes of mens rea. For instance, American
law distinguishes between 'knowingly' performing a criminal act,
'recklessness', 'strict liability', etc. I will show we can formalize several
of these categories. The formalism I use is a complete stit-logic featuring
operators for stit-actions taking effect in 'next' states, S5-knowledge
operators and SDL-type obligation operators. The different modes of 'mens
rea' correspond to the violation conditions of different types of obligation
definable in the logic.
Some examples formulated in a ‘seeing to it that’ logic: Illustrations, observations, problems
The paper presents a series of small examples and discusses how they might be formulated in a 'seeing to it that' logic. The aim is to identify some of the strengths and weaknesses of this approach to the treatment of action. The examples have a very simple temporal structure. An element of indeterminism is introduced by uncertainty in the environment and by the actions of other agents. The formalism chosen combines a logic of agency with a transition-based account of action: the semantical framework is a labelled transition system extended with a component that picks out the contribution of a particular agent in a given transition. Although this is not a species of the stit logics associated with Nuel Belnap and colleagues, it does have many features in common. Most of the points that arise apply equally to stit logics. They are, in summary: whether explicit names for actions can be avoided, the need for weaker forms of responsibility or 'bringing it about' than are captured by stit and similar logics, some common patterns in which one agent's actions constrain or determine the actions of another, and some comments on the effects that level of detail, or 'granularity', of a representation can have on the properties we wish to examine.