Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops and journals since the dissemination of the fourth volume in 2015, or they are new. The contributions in each part of this volume are ordered chronologically.
The first part of this volume presents theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR based on set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, together with their Matlab codes.
Because more applications of DSmT have emerged since the fourth book appeared in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
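To make the PCR5 rule mentioned above concrete, here is a minimal sketch of PCR5 fusion of two sources of evidence. Basic belief assignments are stored as plain dicts mapping focal elements (frozensets) to masses; the two-element frame and the numbers are purely illustrative, not taken from the volume.

```python
from itertools import product

def pcr5(m1, m2):
    """Combine two basic belief assignments with the PCR5 rule:
    conjunctive consensus on non-empty intersections, plus proportional
    redistribution of each partial conflict back to the two focal
    elements that produced it."""
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        x = a & b
        if x:
            # consensus part of the conjunctive rule
            out[x] = out.get(x, 0.0) + wa * wb
        elif wa + wb > 0:
            # a and b totally conflict: redistribute wa*wb proportionally
            out[a] = out.get(a, 0.0) + wa ** 2 * wb / (wa + wb)
            out[b] = out.get(b, 0.0) + wb ** 2 * wa / (wa + wb)
    return out

# Illustrative two-source example on the frame {A, B}
m1 = {frozenset("A"): 0.6, frozenset("AB"): 0.4}
m2 = {frozenset("B"): 0.3, frozenset("AB"): 0.7}
fused = pcr5(m1, m2)  # masses still sum to 1 after redistribution
```

Unlike Dempster's rule, no mass is lost to normalization: the conflicting product m1(A)·m2(B) is split between A and B in proportion to the masses that generated it.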
On the Perception of Difficulty: Differences between Humans and AI
With the increased adoption of artificial intelligence (AI) in industry and society, effective human-AI interaction systems are becoming increasingly important. A central challenge in the interaction of humans with AI is estimating the difficulty of single task instances for both human and AI agents. These estimates are crucial to evaluating each agent's capabilities and are thus required to facilitate effective collaboration. So far, research in the field of human-AI interaction has estimated the perceived difficulty of humans and of AI independently of each other. However, the effective interaction of human and AI agents depends on metrics that accurately reflect each agent's perceived difficulty in achieving valuable outcomes. Research to date has not adequately examined the differences in the perceived difficulty of humans and AI. This work therefore reviews recent research on perceived difficulty in human-AI interaction and its contributing factors, so as to consistently compare each agent's perceived difficulty, e.g., by creating the same prerequisites. Furthermore, we present an experimental design for thoroughly examining the perceived difficulty of both agents, contributing to a better understanding of the design of such systems.
Beyond Traditional Teaching: The Potential of Large Language Models and Chatbots in Graduate Engineering Education
In the rapidly evolving landscape of education, digital technologies have
repeatedly disrupted traditional pedagogical methods. This paper explores the
latest of these disruptions: the potential integration of large language models
(LLMs) and chatbots into graduate engineering education. We begin by tracing
historical and technological disruptions to provide context and then introduce
key terms such as machine learning and deep learning and the underlying
mechanisms of recent advancements, namely attention/transformer models and
graphics processing units. The heart of our investigation lies in the
application of an LLM-based chatbot in a graduate fluid mechanics course. We
developed a question bank from the course material and assessed the chatbot's
ability to provide accurate, insightful responses. The results are encouraging,
demonstrating not only the bot's ability to effectively answer complex
questions but also the potential advantages of chatbot usage in the classroom,
such as the promotion of self-paced learning, the provision of instantaneous
feedback, and the reduction of instructors' workload. The study also examines
the transformative effect of intelligent prompting on enhancing the chatbot's
performance. Furthermore, we demonstrate how powerful plugins like Wolfram
Alpha for mathematical problem-solving and code interpretation can
significantly extend the chatbot's capabilities, transforming it into a
comprehensive educational tool. While acknowledging the challenges and ethical
implications surrounding the use of such AI models in education, we advocate
for a balanced approach. The use of LLMs and chatbots in graduate education can
be greatly beneficial but requires ongoing evaluation and adaptation to ensure
ethical and efficient use. (Comment: 44 pages, 16 figures, preprint for PLOS ONE)
Computational Complexity of Strong Admissibility for Abstract Dialectical Frameworks
Abstract dialectical frameworks (ADFs) have been introduced as a formalism for modeling and evaluating argumentation, allowing general logical satisfaction conditions. The different criteria used to settle the acceptance of arguments are called semantics. Semantics of ADFs have so far mainly been defined based on the concept of admissibility. Recently, the notion of strong admissibility has been introduced for ADFs. In the current work we study the computational complexity of the following reasoning tasks under strong admissibility semantics. We address 1. the credulous/skeptical decision problem; 2. the verification problem; 3. the strong justification problem; and 4. the problem of finding a smallest witness of strong justification of a queried argument.
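To illustrate the kind of reasoning task whose complexity is studied above, here is a toy sketch of credulous acceptance under (plain) admissibility in a Dung-style abstract argumentation framework, the classic special case that ADFs generalize with per-argument acceptance conditions. The brute-force enumeration over all subsets hints at why such decision problems are computationally hard; the arguments and attacks are invented for the example.

```python
from itertools import combinations

def admissible(args, attacks, s):
    """s is admissible iff it is conflict-free and defends each member:
    every attacker of a member is itself attacked from within s."""
    if any((a, b) in attacks for a in s for b in s):
        return False  # not conflict-free
    for a in s:
        for b in args:
            if (b, a) in attacks and not any((c, b) in attacks for c in s):
                return False  # attacker b of a is not counter-attacked
    return True

def credulous(args, attacks, query):
    """Brute force: does SOME admissible set contain the query argument?
    Exponential in the number of arguments."""
    for r in range(len(args) + 1):
        for s in combinations(sorted(args), r):
            if query in s and admissible(args, attacks, set(s)):
                return True
    return False

# a attacks b, b attacks c: a and c are credulously accepted, b is not
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
```

Here {a, c} is admissible (a counter-attacks c's only attacker b), while no conflict-free set can defend b against a.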
Tracking the Temporal-Evolution of Supernova Bubbles in Numerical Simulations
The study of low-dimensional, noisy manifolds embedded in a higher dimensional space has been extremely useful in many applications, from the chemical analysis of multi-phase flows to simulations of galactic mergers. Building a probabilistic model of the manifolds has helped in describing their essential properties and how they vary in space. However, when the manifold is evolving through time, a joint spatio-temporal modelling is needed in order to fully comprehend its nature. We propose a first-order Markovian process that propagates the spatial probabilistic model of a manifold at a fixed time to its adjacent temporal stages. The proposed methodology is demonstrated using a particle simulation of an interacting dwarf galaxy to describe the evolution of a cavity generated by a Supernova.
Multi-Agent Systems
A multi-agent system (MAS) is a system composed of multiple interacting intelligent agents. Multi-agent systems can be used to solve problems which are difficult or impossible for an individual agent or monolithic system to solve. Agent systems are open and extensible systems that allow for the deployment of autonomous and proactive software components. Multi-agent systems have been developed and used in several application domains.
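The definition above can be illustrated with a minimal sketch: several autonomous agents, each with a different capability, act on a shared environment under a simple round-robin scheduler and together complete a task list that no single agent could finish alone. The agent names, skills, and task structure are invented for the example.

```python
class Agent:
    """A minimal autonomous agent: it inspects the shared environment
    (a task list) and acts on at most one task it is able to solve."""
    def __init__(self, name, skill):
        self.name = name
        self.skill = skill  # the kind of task this agent can handle

    def act(self, tasks):
        for t in tasks:
            if t["kind"] == self.skill and not t["done"]:
                t["done"] = True
                t["by"] = self.name
                return

def run(agents, tasks, max_steps=10):
    """Round-robin scheduler: each agent acts once per step until all
    tasks are done or the step budget is exhausted."""
    for _ in range(max_steps):
        if all(t["done"] for t in tasks):
            break
        for agent in agents:
            agent.act(tasks)
    return tasks

# Two specialized agents jointly clear a mixed task list
tasks = [{"kind": "sense", "done": False},
         {"kind": "plan", "done": False},
         {"kind": "sense", "done": False}]
agents = [Agent("a1", "sense"), Agent("a2", "plan")]
run(agents, tasks)
```

Neither agent alone can complete the list, since each handles only one task kind; the cooperation emerges from the shared environment rather than from direct messaging, which keeps the sketch open to adding further agent types.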