Interpretable preference learning: a game theoretic framework for large margin on-line feature and rule learning
A large body of research is currently investigating the connection between
machine learning and game theory. In this work, game theory notions are
injected into a preference learning framework. Specifically, a preference
learning problem is seen as a two-player zero-sum game. An algorithm is
proposed to incrementally include new useful features into the hypothesis. This
can be particularly important when dealing with a very large number of
potential features like, for instance, in relational learning and rule
extraction. A game theoretical analysis is used to demonstrate the convergence
of the algorithm. Furthermore, leveraging the natural analogy between
features and rules, the resulting models can be easily interpreted by humans.
An extensive set of experiments on classification tasks shows the effectiveness
of the proposed method in terms of interpretability and feature selection
quality, with accuracy at the state-of-the-art. Comment: AAAI 201
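The incremental feature-inclusion idea can be illustrated with a minimal sketch. This is not the paper's algorithm: the function name, the column-generation heuristic (add the feature most correlated with currently violated preference margins), and the perceptron-style weight update are all illustrative assumptions, chosen only to show how a hypothesis can grow one feature at a time from a large pool.

```python
import numpy as np

def preference_game_sketch(X, prefs, rounds=10, lr=0.1):
    """Illustrative sketch: grow a feature set incrementally for
    large-margin preference learning (not the paper's actual method).

    X: (n_items, n_features) item representations.
    prefs: list of (i, j) pairs meaning item i is preferred to item j.
    """
    n_feat = X.shape[1]
    active = []                       # indices of features in the hypothesis
    w = np.zeros(n_feat)              # weights (zero for inactive features)
    for _ in range(rounds):
        # difference vectors: one row per preference pair
        D = np.array([X[i] - X[j] for i, j in prefs])
        margins = D @ w
        viol = margins < 1.0          # pairs not yet at margin 1
        if not viol.any():
            break                     # all preferences satisfied with margin
        # column-generation-style step: score each candidate feature by its
        # aggregate correlation with the violated pairs, add the best new one
        score = np.abs(D[viol].sum(axis=0))
        score[active] = -np.inf       # only consider features not yet added
        if np.isfinite(score.max()) and score.max() > 0:
            active.append(int(score.argmax()))
        # perceptron-style update restricted to the active features
        for d, v in zip(D, viol):
            if v:
                w[active] += lr * d[active]
    return w, active
```

On a toy problem where only the first feature determines the preference, the sketch adds that feature alone and drives every pair past margin 1, which mirrors the claimed benefit: only useful features enter the hypothesis, keeping the model small and readable.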
Building Ethically Bounded AI
The more AI agents are deployed in scenarios with possibly unexpected
situations, the more they need to be flexible, adaptive, and creative in
achieving the goal we have given them. Thus, a certain level of freedom to
choose the best path to the goal is inherent in making AI robust and flexible
enough. At the same time, however, the pervasive deployment of AI in our life,
whether AI is autonomous or collaborating with humans, raises several ethical
challenges. AI agents should be aware of and follow appropriate ethical principles
and should thus exhibit properties such as fairness or other virtues. These
ethical principles should define the boundaries of AI's freedom and creativity.
However, it is still a challenge to understand how to specify and reason with
ethical boundaries in AI agents and how to combine them appropriately with
subjective preferences and goal specifications. Some initial attempts employ
either a data-driven example-based approach for both, or a symbolic rule-based
approach for both. We envision a modular approach where any AI technique can be
used for any of these essential ingredients in decision making or decision
support systems, paired with a contextual approach to define their combination
and relative weight. In a world where neither humans nor AI systems work in
isolation, but are tightly interconnected, e.g., the Internet of Things, we
also envision a compositional approach to building ethically bounded AI, where
the ethical properties of each component can be fruitfully exploited to derive
those of the overall system. In this paper we define and motivate the notion of
ethically-bounded AI, we describe two concrete examples, and we outline some
outstanding challenges. Comment: Published at AAAI Blue Sky Track, winner of Blue Sky Award