151 research outputs found
Logic-based Technologies for Intelligent Systems: State of the Art and Perspectives
Alongside the disruptive development of modern sub-symbolic approaches to artificial intelligence (AI), symbolic approaches of classical AI are regaining momentum, as more and more researchers exploit their potential to make AI more comprehensible, explainable, and therefore trustworthy. Since logic-based approaches lie at the core of symbolic AI, summarizing their state of the art is now more important than ever, in order to identify the trends, benefits, key features, gaps, and limitations of the techniques proposed so far, as well as promising research perspectives. Along this line, this paper provides an overview of logic-based approaches and technologies, sketching their evolution and pointing out their main application areas. It also discusses future perspectives for the exploitation of logic-based technologies, identifying the research fields that deserve more attention, considering both the areas that already exploit logic-based approaches and those that are likely to adopt them in the future.
A Framework for Data-Driven Explainability in Mathematical Optimization
Advancements in mathematical programming have made it possible to efficiently tackle large-scale real-world problems that were deemed intractable just a few decades ago. However, provably optimal solutions may not be accepted due to the perception of optimization software as a black box: although well understood by scientists, the methods are not easily accessible to practitioners. Hence, we advocate for introducing the explainability of a solution as another evaluation criterion, next to its objective value, which enables us to find trade-off solutions between the two criteria. Explainability is attained by comparing against (not necessarily optimal) solutions that were implemented in similar situations in the past; solutions that exhibit similar features are thus preferred. Although we prove that the explainable variant is NP-hard already in simple cases, we characterize relevant polynomially solvable cases such as the explainable shortest-path problem. Our numerical experiments on both artificial and real-world road networks show the resulting Pareto front. It turns out that the cost of enforcing explainability can be very small.
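To make the trade-off concrete, here is a minimal, self-contained sketch (not the authors' implementation): it assumes, purely for illustration, that explainability is measured as the number of edges by which a path deviates from a previously implemented reference path, and it scalarizes the two criteria with a weight lam; sweeping lam traces an approximate Pareto front between cost and similarity to past solutions.

```python
# Illustrative sketch only: "explainability" is approximated by how far a
# path deviates (in edges) from a reference path implemented in the past.
import heapq

def dijkstra(graph, source, target, edge_cost):
    """Shortest path from source to target under a custom edge-cost function."""
    dist, prev, visited = {source: 0.0}, {}, set()
    queue = [(0.0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            break
        for v, w in graph.get(u, []):
            nd = d + edge_cost(u, v, w)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(queue, (nd, v))
    path, node = [], target
    while node != source:  # walk predecessors back to the source
        path.append((prev[node], node))
        node = prev[node]
    return list(reversed(path))

def explainable_path(graph, source, target, reference_edges, lam):
    # Penalize every edge that was not part of the historical solution.
    cost = lambda u, v, w: w + lam * ((u, v) not in reference_edges)
    return dijkstra(graph, source, target, cost)

# Toy network: a cheap direct edge A-D competes with the reference A-B-C-D.
graph = {"A": [("B", 1.0), ("D", 2.5)], "B": [("C", 1.0)], "C": [("D", 1.0)]}
reference = {("A", "B"), ("B", "C"), ("C", "D")}
for lam in (0.0, 0.4, 0.8):
    print(lam, explainable_path(graph, "A", "D", reference, lam))
```

With lam = 0 the cheaper direct edge wins; at lam = 0.8 the familiar route A-B-C-D is returned instead, at an extra cost of only 0.5, echoing the finding that enforcing explainability can be cheap.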
Abstract argumentation for explainable satellite scheduling
Satellite schedules are derived from satellite mission objectives, which are mostly managed manually from the ground. This increases the need to develop autonomous on-board scheduling capabilities and to reduce the requirement for manual management of satellite schedules. Additionally, this unlocks more on-board capabilities for decision-making, leading to an optimal campaign. However, trust issues remain in decisions made by Artificial Intelligence (AI) systems, especially in risk-averse environments such as satellite operations. Thus, an explanation layer is required to assist operators in understanding decisions made, or planned, autonomously on-board. To this aim, a satellite scheduling problem is formulated, utilizing real-world data, where the total number of actions is maximised subject to the environmental constraints that limit observation and down-link capabilities. The formulated optimisation problem is solved with a Constraint Programming (CP) method. The mathematical derivation of an Abstract Argumentation Framework (AAF) for the test case is then provided; this is proposed as the explanation layer for the autonomous decision-making system. The effectiveness of the defined AAF layer is demonstrated on the daily schedule of an Earth Observation (EO) mission monitoring land surfaces, giving a human operator greater capability and flexibility to inspect the machine-provided solution.
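As a rough illustration of what such an explanation layer computes, the sketch below implements Dung-style grounded semantics for an abstract argumentation framework; the scheduling arguments and attacks are invented for illustration, whereas the paper derives its AAF mathematically from real mission data.

```python
# Illustrative sketch of a Dung-style AAF: the grounded extension is the
# least fixpoint of the characteristic function F(S) = {a | S defends a}.
def grounded_extension(arguments, attacks):
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(a, s):
        # Every attacker of a must itself be attacked by some member of s.
        return all(any((d, b) in attacks for d in s) for b in attackers_of[a])

    extension = set()
    while True:
        new = {a for a in arguments if defended(a, extension)}
        if new == extension:
            return extension
        extension = new

# Invented scheduling arguments: acquire an image (acq), an objection due
# to forecast cloud cover (cloud), and a weather update defeating it.
args = {"acq", "cloud", "update"}
atts = {("cloud", "acq"), ("update", "cloud")}
print(grounded_extension(args, atts))  # {'acq', 'update'} (order may vary)
```

An operator reading this extension sees not just that the acquisition is scheduled, but why: the only objection to it is itself defeated.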
Current and Future Challenges in Knowledge Representation and Reasoning
Knowledge Representation and Reasoning is a central, longstanding, and active
area of Artificial Intelligence. Over the years it has evolved significantly;
more recently it has been challenged and complemented by research in areas such
as machine learning and reasoning under uncertainty. In July 2022 a Dagstuhl
Perspectives workshop was held on Knowledge Representation and Reasoning. The
goal of the workshop was to describe the state of the art in the field,
including its relation with other areas, its shortcomings and strengths,
together with recommendations for future progress. We developed this manifesto
based on the presentations, panels, working groups, and discussions that took
place at the Dagstuhl Workshop. It is a declaration of our views on Knowledge
Representation: its origins, goals, milestones, and current foci; its relation
to other disciplines, especially to Artificial Intelligence; and on its
challenges, along with key priorities for the next decade.
Proceedings of the 1st Doctoral Consortium at the European Conference on Artificial Intelligence (DC-ECAI 2020)
1st Doctoral Consortium at the European Conference on Artificial Intelligence (DC-ECAI 2020), 29-30 August 2020, Santiago de Compostela, Spain. The DC-ECAI 2020 provides a unique opportunity for PhD students who are close to finishing their doctoral research to interact with experienced researchers in the field. Senior members of the community are assigned as mentors to each group of students based on the students' research or similarity of research interests. The DC-ECAI 2020, held virtually this year, allows students from all over the world to present their research and discuss their ongoing research and career plans with their mentors, to network with other participants, and to receive training and mentoring on career planning and career options.
Building bridges for better machines: from machine ethics to machine explainability and back
Be it nursing robots in Japan, self-driving buses in Germany, or automated hiring systems in the USA, complex artificial computing systems have become an indispensable part of our everyday lives. Two major challenges arise from this development: machine ethics and machine explainability. Machine ethics deals with behavioral constraints on systems to ensure restricted, morally acceptable behavior; machine explainability affords the means to satisfactorily explain the actions and decisions of systems, so that human users can understand these systems and thus be assured of their socially beneficial effects. Machine ethics and machine explainability prove particularly effective only in symbiosis. In this context, this thesis demonstrates how machine ethics requires machine explainability and how machine explainability includes machine ethics. We develop these two facets using examples from the scenarios above. Based on these examples, we argue for a specific view of machine ethics and suggest how it can be formalized in a theoretical framework. In terms of machine explainability, we outline how our proposed framework, by using an argumentation-based approach for decision making, can provide a foundation for machine explanations. Beyond the framework, we also clarify the notion of machine explainability as a research area, charting its diverse and often confusing literature. To this end, we outline what, exactly, machine explainability research aims to accomplish. Finally, we use all these considerations as a starting point for developing evaluation criteria for good explanations, such as comprehensibility, assessability, and fidelity. Evaluating our framework against these criteria shows that it is a promising approach, likely to outperform many other explainability approaches developed so far. (Funding: DFG CRC 248 "Center for Perspicuous Computing"; VolkswagenStiftung: "Explainable Intelligent Systems".)
Don't Treat the Symptom, Find the Cause! Efficient Artificial-Intelligence Methods for (Interactive) Debugging
In the modern world, we are permanently using, leveraging, interacting with,
and relying upon systems of ever higher sophistication, ranging from our cars,
recommender systems in e-commerce, and networks when we go online, to
integrated circuits when using our PCs and smartphones, the power grid to
ensure our energy supply, security-critical software when accessing our bank
accounts, and spreadsheets for financial planning and decision making. The
complexity of these systems coupled with our high dependency on them implies
both a non-negligible likelihood of system failures, and a high potential that
such failures have significant negative effects on our everyday life. For that
reason, it is a vital requirement to keep the harm of emerging failures to a
minimum, which means minimizing the system downtime as well as the cost of
system repair. This is where model-based diagnosis comes into play.
Model-based diagnosis is a principled, domain-independent approach that can
be generally applied to troubleshoot systems of a wide variety of types,
including all the ones mentioned above, and many more. It exploits and orchestrates, among others, techniques for knowledge representation, automated reasoning, heuristic problem solving, intelligent search, optimization, stochastics, statistics, decision making under uncertainty, and machine learning, as well as calculus, combinatorics, and set theory to detect, localize, and fix faults in abnormally behaving systems.
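For concreteness, a classical route from conflicts to diagnoses (following Reiter's theory of diagnosis, which underlies much of model-based diagnosis) computes the minimal hitting sets of the conflict sets returned by consistency checks against the system model. The sketch below uses invented conflict sets and is not code from the thesis.

```python
# Illustrative sketch: diagnoses as minimal hitting sets of conflict sets
# (Reiter-style). Conflicts are invented; in practice they come from
# consistency checks of observations against a formal system model.
from itertools import chain, combinations

def minimal_hitting_sets(conflicts):
    components = sorted(set(chain.from_iterable(conflicts)))
    hits = []
    # Enumerate candidates by increasing size; keep only subset-minimal ones.
    for size in range(1, len(components) + 1):
        for cand in combinations(components, size):
            s = set(cand)
            if all(s & c for c in conflicts) and not any(h <= s for h in hits):
                hits.append(s)
    return hits

# Two conflicts over components a, b, c: at least one member of each
# conflict must be faulty, so {b} and {a, c} are the candidate diagnoses.
print(minimal_hitting_sets([{"a", "b"}, {"b", "c"}]))  # [{'b'}, {'a', 'c'}]
```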
In this thesis, we will give an introduction to the topic of model-based
diagnosis, point out the major challenges in the field, and discuss a selection
of approaches from our research addressing these issues. (Comment: Habilitation Thesis)
BNAIC 2008: Proceedings of BNAIC 2008, the twentieth Belgian-Dutch Artificial Intelligence Conference