A Review of Verbal and Non-Verbal Human-Robot Interactive Communication
In this paper, an overview of human-robot interactive communication is
presented, covering verbal as well as non-verbal aspects of human-robot
interaction. Following a historical introduction and a motivation for fluid
human-robot communication, ten desiderata are proposed, which provide an
organizational axis for both recent and future research on human-robot
communication. The ten desiderata are then examined in detail, culminating in
a unifying discussion and a forward-looking conclusion.
Flexible Decision Support in Dynamic Interorganizational Networks
An effective Decision Support System (DSS) should help its users improve decision-making in complex, information-rich environments. We present a feature gap analysis showing that current decision support technologies lack important qualities for a new generation of agile business models that require easy, temporary integration across organisational boundaries. We enumerate these qualities as DSS Desiderata, properties that can contribute both effectiveness and flexibility to users in such environments. To address this gap, we describe a new design approach that enables users to compose decision behaviours from separate, configurable components, and allows dynamic construction of analysis and modelling tools from small, single-purpose evaluator services. The result is what we call an "evaluator service network" that can easily be configured to test hypotheses and analyse the impact of various choices for elements of decision processes. We have implemented and tested this design in an interactive version of the MinneTAC trading agent, an agent designed for the Trading Agent Competition for Supply Chain Management.
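The evaluator-service idea above can be sketched in a few lines: small, single-purpose evaluators are chained into a configurable network. All function names, attributes, and the wiring below are illustrative assumptions, not the MinneTAC implementation.

```python
from typing import Callable, Dict

# Each evaluator service reads a shared state and returns an enriched copy.
Evaluator = Callable[[Dict[str, float]], Dict[str, float]]

def demand_forecast(state: Dict[str, float]) -> Dict[str, float]:
    # Single-purpose evaluator: estimate demand from recent orders (hypothetical rule).
    return {**state, "demand": state["recent_orders"] * 1.1}

def price_evaluator(state: Dict[str, float]) -> Dict[str, float]:
    # Single-purpose evaluator: propose a price from demand and inventory (hypothetical rule).
    return {**state, "price": 100.0 * state["demand"] / max(state["inventory"], 1.0)}

def compose(*evaluators: Evaluator) -> Evaluator:
    # Dynamically construct an analysis tool by chaining evaluator services;
    # reconfiguring the network is just passing a different sequence.
    def network(state: Dict[str, float]) -> Dict[str, float]:
        for evaluate in evaluators:
            state = evaluate(state)
        return state
    return network

pricing_network = compose(demand_forecast, price_evaluator)
result = pricing_network({"recent_orders": 50.0, "inventory": 20.0})
```

Because each evaluator is independent and stateless, swapping one out to test a hypothesis does not disturb the rest of the network.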
MANCaLog: A Logic for Multi-Attribute Network Cascades (Technical Report)
The modeling of cascade processes in multi-agent systems in the form of
complex networks has in recent years become an important topic of study due to
its many applications: the adoption of commercial products, spread of disease,
the diffusion of an idea, etc. In this paper, we begin by identifying seven
desiderata that a framework for modeling such processes
should satisfy: the ability to represent attributes of both nodes and edges, an
explicit representation of time, the ability to represent non-Markovian
temporal relationships, representation of uncertain information, the ability to
represent competing cascades, allowance of non-monotonic diffusion, and
computational tractability. We then present the MANCaLog language, a formalism
based on logic programming that satisfies all these desiderata, and focus on
algorithms for finding minimal models (from which the outcome of cascades can
be obtained) as well as how this formalism can be applied in real world
scenarios. We are not aware of any other formalism in the literature that meets
all of the above requirements.
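Several of the desiderata above (attributes on both nodes and edges, competing cascades, non-monotonic diffusion) can be pictured with a toy update step. This is an illustrative sketch only, not the MANCaLog semantics; the attribute names and update rule are assumptions.

```python
# Node attribute: adoption level in [0, 1]; edge attribute: influence weight.
nodes = {"a": 1.0, "b": 0.0, "c": 0.2}
edges = {("a", "b"): 0.5, ("a", "c"): -0.4}  # negative weight: a competing cascade

def step(nodes, edges):
    # One synchronous update: each target node accumulates the weighted
    # influence of its sources. A negative weight can lower adoption,
    # so diffusion here is non-monotonic.
    new = dict(nodes)
    for (src, dst), weight in edges.items():
        new[dst] = min(1.0, max(0.0, new[dst] + weight * nodes[src]))
    return new

after = step(nodes, edges)
```

In this sketch node "b" gains adoption while node "c" is pushed down to zero by the competing cascade, which a purely monotonic diffusion model could not express.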
Exploring Effectiveness of Explanations for Appropriate Trust: Lessons from Cognitive Psychology
The rapid development of Artificial Intelligence (AI) requires developers and
designers of AI systems to focus on the collaboration between humans and
machines. AI explanations of system behavior and reasoning are vital for
effective collaboration by fostering appropriate trust, ensuring understanding,
and addressing issues of fairness and bias. However, various contextual and
subjective factors can influence an AI system explanation's effectiveness. This
work draws inspiration from findings in cognitive psychology to understand how
effective explanations can be designed. We identify four components to which
explanation designers can pay special attention: perception, semantics, intent,
and user & context. We illustrate the use of these four explanation components
with an example of estimating food calories by combining text with visuals,
probabilities with exemplars, and intent communication with both user and
context in mind. We propose that a significant challenge for effective AI
explanations lies in the additional step between explanation generation, by
algorithms that do not themselves produce interpretable explanations, and
explanation communication. We believe this extra step will benefit from careful
consideration of the four explanation components outlined in our work, which can
positively affect the explanation's effectiveness.
Comment: 2022 IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX)
EMIL: Extracting Meaning from Inconsistent Language
Developments in formal and computational theories of argumentation reason with inconsistency. Developments in Computational Linguistics extract arguments from large textual corpora. Both developments head in the direction of automated processing of, and reasoning with, inconsistent linguistic knowledge so as to explain and justify arguments in a humanly accessible form. Yet there is a gap between the coarse-grained, semi-structured knowledge bases of computational theories of argumentation and fine-grained, highly structured inferences from knowledge bases derived from natural language. We identify several subproblems that must be addressed in order to bridge the gap. We provide a direct semantics for argumentation. It has attractive properties in terms of expressivity and complexity, enables reasoning by cases, and can be more highly structured. For language processing, we work with an existing controlled natural language (CNL), which interfaces with our computational theory of argumentation; the tool processes natural language input, translates it into a form for automated inference engines, outputs argument extensions, and then generates natural language statements. The key novel adaptation incorporates the defeasible expression "it is usual that". This is an important, albeit incremental, step towards incorporating linguistic expressions of defeasibility. Overall, the novel contribution of the paper is an integrated, end-to-end argumentation system that bridges between automated defeasible reasoning and a natural language interface. Specific novel contributions are the theory of "direct semantics", motivations for our theory, results with respect to the direct semantics, an implementation, experimental results, the tie between the formalisation and the CNL, the introduction into a CNL of a natural language expression of defeasibility, and an "engineering" approach to fine-grained argument analysis.
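The defeasible expression "it is usual that" described above can be illustrated with a toy default rule and a defeater. This is a generic stand-in (the classic bird/penguin example), not EMIL's direct semantics or its CNL pipeline.

```python
def usually_flies(facts):
    # Default rule: it is usual that birds fly.
    # Defeater: penguins are birds, but the more specific
    # exception overrides the default conclusion.
    if facts.get("penguin"):
        return False
    return bool(facts.get("bird"))

tweety = usually_flies({"bird": True})                  # default applies
opus = usually_flies({"bird": True, "penguin": True})   # default defeated
```

The point of the hedged expression is exactly this retractability: adding information ("penguin") withdraws a conclusion that classical, monotonic inference could never take back.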
Towards the mental health ontology
Much research has been done within the mental health domain, but the exact causes of mental illness are still unknown. Concerningly, the number of people affected by mental conditions is rapidly increasing, and it has been predicted that depression would be the world's leading cause of disability by 2020. Most mental health information is found in electronic form. Application of cutting-edge information technologies within the mental health domain has the potential to greatly increase the value of the available information. Specifically, ontologies form the basis for collaboration between research teams, for the creation of semantic web services and intelligent multi-agent systems, for intelligent information retrieval, and for automatic data analysis such as data mining. In this paper, we present the Mental Health Ontology, which can be used to underpin a variety of automatic tasks and positively transform the way information is managed and used within the mental health domain.
Intentional dialogues in multi-agent systems based on ontologies and argumentation
Some areas of application, for example, healthcare, are known to resist the replacement of human operators by fully autonomous systems. It is typically not transparent to users how artificial intelligence systems make decisions or obtain information, making it difficult for users to trust them. To address this issue, we investigate how argumentation theory and ontology techniques can be used together with reasoning about intentions to build complex natural language dialogues to support human decision-making. Based on such an investigation, we propose MAIDS, a framework for developing multi-agent intentional dialogue systems, which can be used in different domains. Our framework is modular, so it can be used in its entirety or only those modules that fulfil the requirements of each system to be developed. Our work also includes the formalisation of a novel dialogue-subdialogue structure with which we can address ontological or theory-of-mind issues and later return to the main subject. As a case study, we have developed a multi-agent system using the MAIDS framework to support healthcare professionals in making decisions on hospital bed allocations. Furthermore, we evaluated this multi-agent system with domain experts using real data from a hospital. The specialists who evaluated our system strongly agree or agree that the dialogues in which they participated fulfil Cohen's desiderata for task-oriented dialogue systems. Our agents have the ability to explain to the user how they arrived at certain conclusions. Moreover, they have semantic representations as well as representations of the mental state of the dialogue participants, allowing the formulation of coherent justifications expressed in natural language and therefore easy for human participants to understand. This indicates the potential of the framework introduced in this thesis for the practical development of explainable intelligent systems as well as systems supporting hybrid intelligence.
- …