Aspiration Dynamics of Multi-player Games in Finite Populations
Studying strategy update rules in the framework of evolutionary game theory,
one can differentiate between imitation processes and aspiration-driven
dynamics. In the former case, individuals imitate the strategy of a more
successful peer. In the latter case, individuals adjust their strategies based
on a comparison of their payoffs from the evolutionary game to a value they
aspire to, called the level of aspiration. Unlike imitation processes of pairwise
comparison, aspiration-driven updates do not require additional information
about the strategic environment and can thus be interpreted as being more
spontaneous. Recent work has mainly focused on understanding how aspiration
dynamics alter the evolutionary outcome in structured populations. However, the
baseline case for understanding strategy selection is the well-mixed
population, which remains insufficiently understood. We explore how
aspiration-driven strategy-update dynamics under imperfect rationality
influence the average abundance of a strategy in multi-player evolutionary
games with two strategies. We analytically derive a condition under which a
strategy is more abundant than the other in the weak-selection limit.
This approach has a long-standing history in evolutionary game theory and is
mostly applied for its mathematical tractability. Hence, we also explore strong
selection numerically, which shows that our weak selection condition is a
robust predictor of the average abundance of a strategy. The condition turns
out to differ from that of a wide class of imitation dynamics, as long as the
game is not dyadic. Therefore, a strategy favored under imitation dynamics can
be disfavored under aspiration dynamics. This does not require any population
structure and thus highlights the intrinsic difference between imitation and
aspiration dynamics.
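The aspiration-driven update described above can be illustrated with a minimal simulation sketch. The abstract does not specify the game or the switching rule, so this sketch assumes a public-goods-style multi-player game and a Fermi-type switching probability to model imperfect rationality; the parameters `r`, `cost`, `aspiration`, and `beta` are all illustrative choices, not values from the paper.

```python
import math
import random

def switch_probability(payoff, aspiration, beta):
    """Fermi-type probability of switching strategy: high when the
    realized payoff falls short of the aspiration level. beta tunes
    rationality (beta -> 0: random switching; large beta: sharp)."""
    return 1.0 / (1.0 + math.exp(-beta * (aspiration - payoff)))

def simulate(n=100, rounds=2000, group_size=5, aspiration=0.5,
             beta=1.0, seed=0):
    """Well-mixed population playing a public-goods-style game with
    two strategies: cooperate (1) or defect (0). Returns the final
    fraction of cooperators."""
    rng = random.Random(seed)
    pop = [rng.randint(0, 1) for _ in range(n)]
    r, cost = 3.0, 1.0  # synergy factor and contribution cost (assumed)
    for _ in range(rounds):
        i = rng.randrange(n)
        others = [j for j in range(n) if j != i]
        group = [i] + rng.sample(others, group_size - 1)
        contrib = sum(pop[j] for j in group)
        share = r * cost * contrib / group_size
        payoff = share - (cost if pop[i] else 0.0)
        # Dissatisfied individuals switch, regardless of peers' strategies:
        # no information about the strategic environment is needed.
        if rng.random() < switch_probability(payoff, aspiration, beta):
            pop[i] = 1 - pop[i]
    return sum(pop) / n
```

Note that the update only compares the focal individual's own payoff to its aspiration level; unlike pairwise-comparison imitation, no peer's payoff or strategy enters the switching decision.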
Semi-supervised Deep Generative Modelling of Incomplete Multi-Modality Emotional Data
There are three challenges in emotion recognition. First, it is difficult
to recognize a human's emotional states considering only a single modality.
Second, it is expensive to manually annotate the emotional data. Third,
emotional data often suffers from missing modalities due to unforeseeable
sensor malfunction or configuration issues. In this paper, we address all these
problems under a novel multi-view deep generative framework. Specifically, we
propose to model the statistical relationships of multi-modality emotional data
using multiple modality-specific generative networks with a shared latent
space. By imposing a Gaussian mixture assumption on the posterior approximation
of the shared latent variables, our framework can learn the joint deep
representation from multiple modalities and evaluate the importance of each
modality simultaneously. To solve the labeled-data-scarcity problem, we extend
our multi-view model to a semi-supervised learning scenario by casting the
semi-supervised classification problem as a specialized missing data imputation
task. To address the missing-modality problem, we further extend our
semi-supervised multi-view model to deal with incomplete data, where a missing
view is treated as a latent variable and integrated out during inference. This
way, the proposed overall framework can utilize all available (both labeled and
unlabeled, as well as both complete and incomplete) data to improve its
generalization ability. The experiments conducted on two real multi-modal
emotion datasets demonstrated the superiority of our framework.

Comment: arXiv admin note: text overlap with arXiv:1704.07548; 2018 ACM
Multimedia Conference (MM'18).
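The architecture sketched in this abstract — modality-specific generative networks sharing a latent space with a Gaussian-mixture structure, and a missing view handled by simply not conditioning on it — can be caricatured with a toy generative sketch. Everything here is an illustrative assumption: linear maps stand in for the generative networks, one mixture component per emotion class, and the dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed, purely illustrative)
latent_dim, dim_a, dim_b, n_components = 4, 8, 6, 3

# Gaussian mixture over the shared latent space: one component per
# hypothetical emotion class, mirroring the GMM assumption on the
# posterior approximation of the shared latent variables.
weights = np.full(n_components, 1.0 / n_components)
means = rng.normal(size=(n_components, latent_dim))

# Modality-specific "generative networks", reduced to linear maps
# for brevity (e.g. modality A = EEG features, modality B = eye
# movements; both names are hypothetical).
W_a = rng.normal(size=(dim_a, latent_dim))
W_b = rng.normal(size=(dim_b, latent_dim))

def sample_views(n, drop_b=False):
    """Draw n samples from the toy model. When drop_b is True the
    second modality is missing; in the full framework a missing view
    is a latent variable integrated out during inference, which in
    this forward sketch amounts to simply not observing it."""
    comps = rng.choice(n_components, size=n, p=weights)
    z = means[comps] + rng.normal(scale=0.1, size=(n, latent_dim))
    x_a = z @ W_a.T
    x_b = None if drop_b else z @ W_b.T
    return x_a, x_b, comps
```

Because both decoders read from the same latent `z`, a sample with only modality A observed still constrains `z`, which is what lets the full framework exploit incomplete as well as unlabeled data.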