AI for Explaining Decisions in Multi-Agent Environments
Explanation is necessary for humans to understand and accept decisions made
by an AI system when the system's goal is known. It is even more important when
the AI system makes decisions in multi-agent environments where the human does
not know the systems' goals since they may depend on other agents' preferences.
In such situations, explanations should aim to increase user satisfaction,
taking into account the system's decision, the user's and the other agents'
preferences, the environment settings and properties such as fairness, envy and
privacy. Generating explanations that will increase user satisfaction is very
challenging; to this end, we propose a new research direction: xMASE. We then
review the state of the art and discuss research directions towards efficient
methodologies and algorithms for generating explanations that will increase
users' satisfaction with AI systems' decisions in multi-agent environments.

Comment: This paper has been submitted to the Blue Sky Track of the AAAI 2020 conference. At the time of submission, it is under review. The tentative notification date is November 10, 2019. Current version: the name of the first author has been added to the metadata.
HIV/AIDS, Security and Conflict: New Realities, New Responses
Ten years after the HIV/AIDS epidemic was identified as a threat to international peace and security, findings from the three-year AIDS, Security and Conflict Initiative (ASCI)(1) present evidence of the mutually reinforcing dynamics linking HIV/AIDS, conflict and security.
On the Revelation Principle and Reciprocal Mechanisms in Competing Mechanism Games
This paper provides a set of mechanisms that we refer to as "reciprocal mechanisms." These mechanisms have the property that every outcome that can be supported as a Bayesian equilibrium in a competing mechanism game can be supported as an equilibrium in reciprocal mechanisms. In this sense, reciprocal mechanisms play the same role as direct mechanisms do in single principal problems. The advantage of these mechanisms over alternatives like the universal set of mechanisms is that they are conceptually straightforward and no more difficult to deal with than the simple direct mechanisms used in single principal mechanism design.

Keywords: competing mechanisms, revelation principle
The Autism Toolbox : An Autism Resource for Scottish Schools
The Autism Toolbox will draw upon a range of practice experience, literature and research to offer guidance for authorities and schools providing for children and young people with Autism Spectrum Disorders (ASD)
A Co-design Study for Multi-Stakeholder Job Recommender System Explanations
Recent legislation proposals have significantly increased the demand for
eXplainable Artificial Intelligence (XAI) in many businesses, especially in
so-called `high-risk' domains, such as recruitment. Within recruitment, AI has
become commonplace, mainly in the form of job recommender systems (JRSs), which
try to match candidates to vacancies, and vice versa. However, common XAI
techniques often fall short in this domain due to the different levels and
types of expertise of the individuals involved, making explanations difficult
to generalize. To determine the explanation preferences of the different
stakeholder types - candidates, recruiters, and companies - we created and
validated a semi-structured interview guide. Using grounded theory, we
structurally analyzed the results of these interviews and found that different
stakeholder types indeed have strongly differing explanation preferences.
Candidates indicated a preference for brief, textual explanations that allow
them to quickly judge potential matches. On the other hand, hiring managers
preferred visual graph-based explanations that provide a more technical and
comprehensive overview at a glance. Recruiters found more exhaustive textual
explanations preferable, as those provided them with more talking points to
convince both parties of the match. Based on these findings, we describe
guidelines on how to design an explanation interface that fulfills the
requirements of all three stakeholder types. Furthermore, we provide the
validated interview guide, which can assist future research in determining the
explanation preferences of different stakeholder types.
Stigmergic epistemology, stigmergic cognition
To know is to cognize, to cognize is to be a culturally bounded, rationality-bounded and environmentally located agent. Knowledge and cognition are thus dual aspects of human sociality. If social epistemology has the formation, acquisition, mediation, transmission and dissemination of knowledge in complex communities of knowers as its subject matter, then its third-party character is essentially stigmergic. In its most generic formulation, stigmergy is the phenomenon of indirect communication mediated by modifications of the environment. Extending this notion one might conceive of social stigmergy as the extra-cranial analog of an artificial neural network providing epistemic structure. This paper recommends a stigmergic framework for social epistemology to account for the supposed tension between individual action, wants and beliefs and the social corpora. We also propose that the so-called "extended mind" thesis offers the requisite stigmergic cognitive analog to stigmergic knowledge. Stigmergy as a theory of interaction within complex systems theory is illustrated through an example that runs on a particle swarm optimization algorithm.
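The particle swarm optimization example mentioned above can be sketched minimally: particles never exchange messages directly, but each reads and writes a shared global-best position, which acts as the stigmergic trace in the environment. This is a generic hedged illustration of PSO, not the paper's actual implementation; all function names and coefficients below are assumptions.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization.

    Particles interact only indirectly, via the shared global-best
    position gbest -- a stigmergic "trace" left in the environment.
    """
    rng = random.Random(seed)
    # Random initial positions in [-5, 5]^dim, zero initial velocities.
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best-so-far position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # the shared trace

    w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity mixes memory (pbest) with the shared trace (gbest).
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    # Depositing a better trace modifies the shared environment.
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Usage: minimize the sphere function; the swarm converges near the origin.
best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=2)
```

The stigmergic reading is in the update rule: no particle addresses another, yet the whole swarm coordinates through successive modifications of `gbest`.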