PREFERENCES: OPTIMIZATION, IMPORTANCE LEARNING AND STRATEGIC BEHAVIORS
Preferences are fundamental to decision making and play an important role in artificial intelligence. Our research focuses on three groups of problems based on the preference formalism Answer Set Optimization (ASO): preference aggregation problems such as computing optimal (or near-optimal) solutions, strategic behaviors in preference representation, and learning ranks (weights) for preferences.
In the first group of problems, the objects of interest are optimal outcomes, that is, outcomes that are optimal with respect to the preorder defined by the preference rules. In this work, we consider computational problems concerning optimal outcomes. We propose, implement and study methods to compute an optimal outcome; to compute another optimal outcome once the first one is found; to compute an optimal outcome that is similar to (or dissimilar from) a given candidate outcome; and to compute a set of optimal answer sets, each significantly different from the others. For the decision versions of several of these problems, we establish their computational complexity.
For the second topic, strategic behaviors such as manipulation and bribery have received much attention from the social choice community. We study these concepts for preference formalisms that identify a set of optimal outcomes rather than a single winning outcome, as is common in social choice. Such preference formalisms are of interest in the context of combinatorial domains, where preference representations are only approximations to true preferences, and seeking a single optimal outcome runs the risk of missing the one that is optimal with respect to the actual preferences. In this work, we assume that preferences may be ranked (differ in importance), and we use the Pareto principle adjusted to the case of ranked preferences as the preference aggregation rule. For two important classes of preferences, representing the extreme ends of the spectrum, we provide characterizations of the situations in which manipulation and bribery are possible, and establish the complexity of the corresponding decision problems.
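One way to read the Pareto principle adjusted to ranked preferences is lexicographically by rank: the most important rank at which two outcomes differ decides the comparison by Pareto dominance. The sketch below makes one such reading concrete; the satisfaction-degree encoding and all names are illustrative assumptions, not the paper's exact ASO formalism:

```python
def ranked_pareto_better(a, b, ranks):
    """Return True if outcome a is strictly preferred to outcome b under a
    rank-lexicographic Pareto rule (an illustrative reading of the ranked
    Pareto principle).

    a, b:  dicts mapping each preference to its satisfaction degree
           (higher is better).
    ranks: groups of preferences, most important group first.
    """
    for group in ranks:
        if all(a[p] == b[p] for p in group):
            continue  # tied at this rank: defer to less important ranks
        # the first rank where they differ decides: a must Pareto-dominate b
        return all(a[p] >= b[p] for p in group) and any(a[p] > b[p] for p in group)
    return False  # identical on every preference

# Two candidate outcomes over preferences p1, p2 (rank 1) and q1 (rank 2)
a = {'p1': 2, 'p2': 2, 'q1': 0}
b = {'p1': 2, 'p2': 1, 'q1': 5}
ranks = [['p1', 'p2'], ['q1']]
print(ranked_pareto_better(a, b, ranks))  # True: a wins at the top rank
print(ranked_pareto_better(b, a, ranks))  # False: b sacrifices p2 at rank 1
```

Note how b's large advantage on the rank-2 preference q1 cannot compensate for its loss on the rank-1 preference p2, which is exactly what ranking preferences by importance is meant to enforce.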
Finally, we study the problem of learning the importance of individual preferences in preference profiles aggregated by the ranked Pareto rule or positional scoring rules. We provide a polynomial-time algorithm that finds a ranking of preferences under which the ranked profile correctly decides all the examples, whenever such a ranking exists. We also show that the problem of learning a ranking that maximizes the number of correctly decided examples is NP-hard. We obtain similar results for the case of weighted profiles.
Mainstream economics and the Austrian school: toward reunification
In this paper, I compare the methodology of the Austrian school to two alternative methodologies from the economic mainstream: the ‘orthodox’ and revealed-preference methodologies. I argue that Austrian school theorists should stop describing themselves as ‘extreme apriorists’ (or writing suggestively to that effect), and should start giving greater acknowledgement to the importance of empirical work within their research program. The motivation for this dialectical shift is threefold: the approach is more faithful to their actual practices, it better illustrates the underlying similarities between the mainstream and Austrian research paradigms, and it provides a philosophical foundation that is much more plausible in itself.
Neural Collaborative Ranking
Recommender systems are aimed at generating a personalized ranked list of
items that an end user might be interested in. With the unprecedented success
of deep learning in computer vision and speech recognition, bridging the gap
between recommender systems and deep neural networks has recently become a hot
topic, and deep learning methods have been shown to achieve state-of-the-art
results on many recommendation tasks. For example, a recent model, NeuMF, first
projects users and items into some shared low-dimensional latent feature space,
and then employs neural nets to model the interaction between the user and item
latent features to obtain state-of-the-art performance on the recommendation
tasks. NeuMF assumes that the non-interacted items are inherently negative and
uses negative sampling to relax this assumption. In this paper, we examine an
alternative approach which does not assume that the non-interacted items are
necessarily negative, just that they are less preferred than interacted items.
Specifically, we develop a new classification strategy based on the widely used
pairwise ranking assumption. We combine our classification strategy with the
recently proposed neural collaborative filtering framework, and propose a
general collaborative ranking framework called Neural Network based
Collaborative Ranking (NCR). We resort to a neural network architecture to
model a user's pairwise preference between items, with the belief that a neural
network will effectively capture the latent structure of the latent factors. The
experimental results on two real-world datasets show the superior performance
of our models in comparison with several state-of-the-art approaches.
Comment: Proceedings of the 2018 ACM Conference on Information and Knowledge Management.
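The pairwise ranking assumption underlying this line of work — an interacted item should score higher than a non-interacted one for the same user — can be illustrated with a minimal sketch. This is a generic matrix-factorization pairwise logistic update, not the authors' neural architecture; all sizes and names are made up:

```python
import numpy as np

# Toy latent factors (sizes are illustrative, not the paper's setup)
rng = np.random.default_rng(0)
n_users, n_items, dim = 5, 10, 4
U = rng.normal(0.0, 0.1, (n_users, dim))   # user latent factors
V = rng.normal(0.0, 0.1, (n_items, dim))   # item latent factors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pairwise_step(u, i, j, lr=0.05):
    """One SGD step on the pairwise logistic loss -log sigmoid(s_ui - s_uj),
    where item i was interacted with and item j was not; the update pushes
    the preferred item's score above the non-interacted item's score."""
    u_vec = U[u].copy()                       # keep pre-update copy for V grads
    g = sigmoid(-(u_vec @ (V[i] - V[j])))     # gradient weight of the loss
    U[u] += lr * g * (V[i] - V[j])
    V[i] += lr * g * u_vec
    V[j] -= lr * g * u_vec

# User 0 interacted with item 1 but not item 2: train on that single pair
for _ in range(200):
    pairwise_step(0, 1, 2)
print(U[0] @ V[1] > U[0] @ V[2])  # True: the interacted item now scores higher
```

The key point the sketch shares with NCR is that item 2 is never treated as an absolute negative; the loss only asks that it be ranked below the interacted item 1.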
Building Ethically Bounded AI
The more AI agents are deployed in scenarios with possibly unexpected
situations, the more they need to be flexible, adaptive, and creative in
achieving the goal we have given them. Thus, a certain level of freedom to
choose the best path to the goal is inherent in making AI robust and flexible
enough. At the same time, however, the pervasive deployment of AI in our life,
whether AI is autonomous or collaborating with humans, raises several ethical
challenges. AI agents should be aware of and follow appropriate ethical
principles, and should thus exhibit properties such as fairness or other virtues. These
ethical principles should define the boundaries of AI's freedom and creativity.
However, it is still a challenge to understand how to specify and reason with
ethical boundaries in AI agents and how to combine them appropriately with
subjective preferences and goal specifications. Some initial attempts employ
either a data-driven example-based approach for both, or a symbolic rule-based
approach for both. We envision a modular approach where any AI technique can be
used for any of these essential ingredients in decision making or decision
support systems, paired with a contextual approach to define their combination
and relative weight. In a world where neither humans nor AI systems work in
isolation, but are tightly interconnected, e.g., the Internet of Things, we
also envision a compositional approach to building ethically bounded AI, where
the ethical properties of each component can be fruitfully exploited to derive
those of the overall system. In this paper we define and motivate the notion of
ethically-bounded AI, we describe two concrete examples, and we outline some
outstanding challenges.
Comment: Published in the AAAI Blue Sky Track; winner of the Blue Sky Award.
Configuring of extero- and interoceptive senses in actions on food
This paper reviews all the published evidence on the theory that the act of selecting a piece of food or drink structurally coordinates quantitative information across several sensory modalities. The existing data show that the momentary disposition to consume the item is strengthened or weakened by learnt configurations of stimuli perceived through both exteroceptive and interoceptive senses. The observed configural structure of performance shows that the multimodal stimuli are interacting perceptually, rather than merely combining quantities of information from the senses into the observed response.
The Evolutionary Stability of Optimism, Pessimism and Complete Ignorance
We provide an evolutionary foundation for evidence that in some situations humans maintain optimistic or pessimistic attitudes towards uncertainty and are ignorant of relevant aspects of the environment. Players in strategic games face Knightian uncertainty about opponents’ actions and individually maximize their Choquet expected utility. Our Choquet expected utility model allows for both an optimistic or pessimistic attitude towards uncertainty as well as ignorance of strategic dependencies. An optimist (resp. pessimist) overweights good (resp. bad) outcomes. A completely ignorant player never reacts to changes in opponents’ actions. With qualifications, we show that optimistic (resp. pessimistic) complete ignorance is evolutionarily stable and yields a strategic advantage in submodular (resp. supermodular) games with aggregate externalities. Moreover, this evolutionarily stable preference leads to Walrasian behavior in these classes of games.
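The mechanics of overweighting good or bad outcomes can be sketched with a discrete Choquet integral. The neo-additive capacity below is one common illustrative choice for mixing optimism/pessimism with an additive measure; it is an assumption, not necessarily the paper's exact model, and the state names are made up:

```python
def choquet(utilities, capacity):
    """Discrete Choquet integral of a utility profile w.r.t. a capacity.
    utilities: dict state -> payoff; capacity: function on frozensets of
    states, monotone with capacity(empty)=0 and capacity(all)=1."""
    states = sorted(utilities, key=utilities.get, reverse=True)
    total, prev = 0.0, 0.0
    cumulative = set()
    for s in states:                       # walk outcomes from best to worst
        cumulative.add(s)
        nu = capacity(frozenset(cumulative))
        total += utilities[s] * (nu - prev)  # weight by the capacity increment
        prev = nu
    return total

def neo_additive(n, delta, alpha):
    """Neo-additive capacity over n states: with weight delta, pure optimism
    (alpha=1) or pessimism (alpha=0); with weight 1-delta, the uniform
    additive measure. Illustrative only."""
    def cap(A):
        if len(A) == 0:
            return 0.0
        if len(A) == n:
            return 1.0
        return delta * alpha + (1 - delta) * len(A) / n
    return cap

u = {'opponent_cooperates': 10.0, 'opponent_defects': 0.0}
optimist  = choquet(u, neo_additive(2, 0.5, 1.0))  # overweights the good outcome
pessimist = choquet(u, neo_additive(2, 0.5, 0.0))  # overweights the bad outcome
print(optimist, pessimist)  # 7.5 2.5
```

Against a uniform expected utility of 5.0, the optimist values the uncertain prospect at 7.5 and the pessimist at 2.5, which is precisely the good-/bad-outcome overweighting the abstract describes.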