Current and Future Challenges in Knowledge Representation and Reasoning
Knowledge Representation and Reasoning is a central, longstanding, and active
area of Artificial Intelligence. Over the years it has evolved significantly;
more recently it has been challenged and complemented by research in areas such
as machine learning and reasoning under uncertainty. In July 2022 a Dagstuhl
Perspectives workshop was held on Knowledge Representation and Reasoning. The
goal of the workshop was to describe the state of the art in the field,
including its relation with other areas, its shortcomings and strengths,
together with recommendations for future progress. We developed this manifesto
based on the presentations, panels, working groups, and discussions that took
place at the Dagstuhl Workshop. It is a declaration of our views on Knowledge
Representation: its origins, goals, milestones, and current foci; its relation
to other disciplines, especially to Artificial Intelligence; and on its
challenges, along with key priorities for the next decade
Handbook Transdisciplinary Learning
What is transdisciplinarity - and what are its methods? How does a living lab work? What is the purpose of citizen science, student-organized teaching and cooperative education? This handbook unpacks key terms and concepts to describe the range of transdisciplinary learning in the context of academic education. Transdisciplinary learning turns out to be a comprehensive innovation process in response to the major global challenges such as climate change, urbanization or migration. A reference work for students, lecturers, scientists, and anyone wanting to understand the profound changes in higher education
A Non-Ideal Epistemology of Disagreement: Pragmatism and the Need for Democratic Inquiry
The aim of this thesis is to provide a non-ideal epistemic account of disagreement, one which explains how epistemic agents can find a rational resolution to disagreement in actual epistemic practice. To do so, the thesis will compare two non-ideal epistemic accounts of disagreement which have been proposed within the contemporary philosophical literature. The first is the evidentialist response to disagreement given within the recent literature on the analytic epistemology of disagreement. According to the evidentialist response to disagreement, an epistemic agent can rationally respond to disagreement by evaluating other epistemic agents as higher-order evidence, and adjusting their beliefs accordingly. The second is the pragmatist response to disagreement given within the recent literature on the intersection between American pragmatism and democratic theory. According to the pragmatist response to disagreement, a collective group of epistemic agents can come to a rational resolution of disagreement through a process of social inquiry in which epistemic agents cooperatively exchange ideas, reasons, and objections, and collectively form plans of action which settle collective belief. This thesis will critically examine both of these accounts, and explain how the pragmatist response to disagreement provides a better account of both the epistemic challenges which disagreement poses and the method by which epistemic agents can come to rationally resolve disagreement in actual epistemic practice
A Behavioural Decision-Making Framework For Agent-Based Models
In recent decades, computer simulation has become one of the mainstream modelling techniques in many scientific fields. Social simulation with Agent-based Modelling (ABM) allows users to capture higher-level system properties that emerge from the interactions of lower-level subsystems. ABM is itself an area of application of Distributed Artificial Intelligence and Multiagent Systems (MAS). Despite that, researchers using ABM for social science studies do not fully benefit from developments in the field of MAS, mainly because MAS architectures and frameworks are built upon cognitive and computer science foundations and principles, creating a gap in concepts and methodology between the two fields. Building agent frameworks based on behaviour theory is a promising direction for minimising this gap: it can provide a standard practice in interdisciplinary teams and facilitate better use of MAS technological advancements in social research. From our survey, Triandis' Theory of Interpersonal Behaviour (TIB) was chosen due to its broad set of determinants and its inclusion of an additive value function to calculate utility values of different outcomes. As TIB's determinants can be organised in a tree-like structure, we utilise layered architectures to formalise the agent's components. The additive function of TIB is then used to combine the utilities of determinants at different levels. The framework is then applied to create models for case studies from various domains to test its ability to explain the importance of multiple behavioural aspects and environmental properties. The first case study simulates the mobility demand of Swiss households. We propose an experimental method to test and investigate the impact of core determinants in the TIB on the usage of different transportation modes.
The second case study presents a novel solution for simulating trust and reputation by applying subjective logic as a metric to measure an agent's belief about the consequence(s) of an action, which can be updated through feedback. The third case study investigates the possibility of simulating bounded-rationality effects in an agent's decision-making scheme by limiting its capability to perceive information. In the final study, a model is created to simulate migrants' choice of activities in centres by applying our framework in conjunction with Maslow's hierarchy of needs. The experiment can then be used to test the impact of different combinations of core determinants on the migrants' activities. Overall, the design of the different components in our framework enables adaptation to various contexts, including transportation modal choice, buying a vehicle, or daily activities. Most of the work can be done by changing the first-level determinants in the TIB's model based on the phenomena simulated and the available data. Several environmental properties can also be considered by extending the core components or employing other theoretical assumptions and concepts from the social sciences. The framework can then serve the purpose of theoretical exposition and allow users to assess the causal link between the TIB's determinants and behaviour output. This thesis also highlights the importance of data collection and experimental design for better capturing and understanding different aspects of human decision-making
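The additive evaluation at the heart of the framework can be illustrated with a minimal sketch: leaf determinants carry elicited scores, and a weighted sum combines them up the tree of determinants, level by level. All determinant names, weights, and scores below are hypothetical placeholders, not values from the thesis:

```python
# Minimal sketch of a TIB-style additive utility over a tree of
# determinants. Every name, weight, and score here is hypothetical.

def utility(node):
    """Recursively combine child utilities with an additive weighted sum."""
    if "value" in node:          # leaf determinant: elicited score in [0, 1]
        return node["value"]
    return sum(w * utility(child) for w, child in node["children"])

# Hypothetical first-level determinants for one transport-mode option.
cycling = {
    "children": [
        (0.4, {"value": 0.7}),    # attitude toward the behaviour
        (0.3, {"value": 0.5}),    # social factors
        (0.3, {"children": [      # affect, itself decomposed one level down
            (0.6, {"value": 0.9}),
            (0.4, {"value": 0.2}),
        ]}),
    ],
}

print(round(utility(cycling), 3))
```

The recursion mirrors the tree-like organisation of TIB determinants described above: an agent would compute such a utility for each behavioural option and choose among them accordingly.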
Reasoning with Attitude
This book presents and develops inferential expressivism, a novel approach to the study of meaning which combines elements of the expressivist and inferentialist programmes. Expressivists explain the meaning of words in terms of the attitudes that they are used to express; inferentialists explain the meaning of words in terms of the inferences that they are used to draw. The book lays out the philosophical foundations of inferential expressivism by articulating and defending the view that the meaning of an expression is to be explained in terms of the inferences we draw involving the attitudes we express. The book, moreover, lays out the logical foundations of inferential expressivism by showing how to implement the view rigorously by means of novel formal systems which can deal with a variety of speech acts. As the book shows, by joining forces expressivism and inferentialism can meet their key challenges whilst retaining their distinctive insights and advantages. The book goes on to demonstrate the fruitfulness of the inferential expressivist approach to meaning by applying it to a diverse range of linguistic phenomena, including epistemic modals, probability operators, conditionals, moral predicates, the truth predicate, and propositional attitude predicates
Logics of Responsibility
The study of responsibility is a complicated matter. The term is used in different ways in different fields, and it is easy to engage in everyday discussions as to why someone should be considered responsible for something. Typically, the backdrop of these discussions involves social, legal, moral, or philosophical problems. A clear pattern in all these spheres is the intent to issue standards for when---and to what extent---an agent should be held responsible for a state of affairs. This is where Logic lends a hand. The development of expressive logics---to reason about agents' decisions in situations with moral consequences---involves devising unequivocal representations of components of behavior that are highly relevant to systematic responsibility attribution and to systematic blame-or-praise assignment. To put it plainly, expressive syntactic-and-semantic frameworks help us analyze responsibility-related problems in a methodical way. This thesis builds a formal theory of responsibility. The main tool used toward this aim is modal logic and, more specifically, a class of modal logics of action known as stit theory. The underlying motivation is to provide theoretical foundations for using symbolic techniques in the construction of ethical AI. Thus, this work constitutes a contribution to both formal philosophy and symbolic AI. The thesis's methodology consists in the development of stit-theoretic models and languages to explore the interplay between the following components of responsibility: agency, knowledge, beliefs, intentions, and obligations. Said models are integrated into a framework that is rich enough to provide logic-based characterizations for three categories of responsibility: causal, informational, and motivational responsibility. The thesis is structured as follows. Chapter 2 discusses at length stit theory, a logic that formalizes the notion of agency in the world over an indeterministic conception of time known as branching time.
The idea is that agents act by constraining possible futures to definite subsets. On the road to formalizing informational responsibility, Chapter 3 extends stit theory with traditional epistemic notions (knowledge and belief). Thus, the chapter formalizes important aspects of agents' reasoning in the choice and performance of actions. In a context of responsibility attribution and excusability, Chapter 4 extends epistemic stit theory with measures of optimality of actions that underlie obligations. In essence, this chapter formalizes the interplay between agents' knowledge and what they ought to do. On the road to formalizing motivational responsibility, Chapter 5 adds intentions and intentional actions to epistemic stit theory and reasons about the interplay between knowledge and intentionality. Finally, Chapter 6 merges the previous chapters' formalisms into a rich logic that is able to express and model different modes of the aforementioned categories of responsibility. Technically, the most important contributions of this thesis lie in the axiomatizations of all the introduced logics. In particular, the proofs of soundness and completeness results involve long, step-by-step procedures that make use of novel techniques
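For orientation, the stit modality underlying these chapters has a standard truth clause evaluated at a moment/history pair. As a rough sketch in common notation (which may differ from the thesis's own), the Chellas-style stit operator reads:

```latex
% Chellas-stit: [\alpha\,\mathsf{stit}:\varphi] holds at the
% moment/history pair (m, h) iff \varphi holds on every history in the
% choice cell selected by agent \alpha at moment m --- i.e., on all
% futures the agent's current action still leaves open.
\mathcal{M}, \langle m, h \rangle \models [\alpha\,\mathsf{stit}:\varphi]
  \iff
  \forall h' \in \mathrm{Choice}^{m}_{\alpha}(h) :\;
  \mathcal{M}, \langle m, h' \rangle \models \varphi
```

This captures the abstract's slogan that agents act by constraining the set of possible futures: the agent "sees to it that" φ exactly when φ is guaranteed across every history compatible with its chosen action.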
Rethinking inconsistent mathematics
This dissertation has two main goals. The first is to provide a practice-based analysis of the field of inconsistent mathematics: what motivates it? what role does logic have in it? what distinguishes it from classical mathematics? is it alternative or revolutionary? The second goal is to introduce and defend a new conception of inconsistent mathematics - queer incomaths - as a particularly effective answer to feminist critiques of classical logic and mathematics. This sets the stage for a genuine revolution in mathematics, insofar as it suggests the need for a shift in mainstream attitudes about the role of logic and ethics in the practice of mathematics
Intentional dialogues in multi-agent systems based on ontologies and argumentation
Some areas of application, for example, healthcare, are known to resist the replacement of human operators by fully autonomous systems. It is typically not transparent to users how artificial intelligence systems make decisions or obtain information, making it difficult for users to trust them. To address this issue, we investigate how argumentation theory and ontology techniques can be used together with reasoning about intentions to build complex natural language dialogues to support human decision-making. Based on such an investigation, we propose MAIDS, a framework for developing multi-agent intentional dialogue systems, which can be used in different domains. Our framework is modular so that it can be used in its entirety or just the modules that fulfil the requirements of each system to be developed. Our work also includes the formalisation of a novel dialogue-subdialogue structure with which we can address ontological or theory-of-mind issues and later return to the main subject. As a case study, we have developed a multi-agent system using the MAIDS framework to support healthcare professionals in making decisions on hospital bed allocations. Furthermore, we evaluated this multi-agent system with domain experts using real data from a hospital. The specialists who evaluated our system strongly agree or agree that the dialogues in which they participated fulfil Cohen’s desiderata for task-oriented dialogue systems. Our agents have the ability to explain to the user how they arrived at certain conclusions. Moreover, they have semantic representations as well as representations of the mental state of the dialogue participants, allowing the formulation of coherent justifications expressed in natural language, therefore, easy for human participants to understand. This indicates the potential of the framework introduced in this thesis for the practical development of explainable intelligent systems as well as systems supporting hybrid intelligence