
    Constructing Hierarchical Image-tags Bimodal Representations for Word Tags Alternative Choice

    This paper describes our solution to the ICML multi-modal learning challenge. The solution comprises constructing three-level representations in three consecutive stages and then choosing the correct tag words with a data-specific strategy. First, we use standard methods to obtain level-1 representations: each image is represented by MPEG-7 and gist descriptors together with additional features released by the contest organizers, and the corresponding word tags are represented by a bag-of-words model over a dictionary of 4000 words. Second, we learn level-2 representations using two stacked RBMs for each modality. Third, we propose a bimodal auto-encoder that learns the similarities/dissimilarities between paired image-tag inputs as level-3 representations. Finally, during the test phase, based on an observation about the dataset, we devise a data-specific strategy for choosing the correct tag words, which yields a marked improvement in overall performance. Our final average accuracy on the private test set is 100%, which ranks first in this challenge.
    Comment: 6 pages, 1 figure, Presented at the Workshop on Representation Learning, ICML 201
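    As an illustration of the level-3 stage described above, the following is a minimal sketch of a bimodal auto-encoder over precomputed image and tag features; the feature dimensions, layer sizes, and the use of PyTorch are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed dimensions, not the authors' code): a bimodal
# auto-encoder that maps concatenated image/tag features to a shared code
# and reconstructs both modalities from it.
import torch
import torch.nn as nn

class BimodalAutoencoder(nn.Module):
    def __init__(self, img_dim=1024, tag_dim=1024, joint_dim=512):
        super().__init__()
        # Encode the concatenated image and tag features into a joint code.
        self.encoder = nn.Sequential(nn.Linear(img_dim + tag_dim, joint_dim), nn.ReLU())
        # Decode the joint code back to both modalities.
        self.decoder = nn.Linear(joint_dim, img_dim + tag_dim)

    def forward(self, img_feat, tag_feat):
        joint = self.encoder(torch.cat([img_feat, tag_feat], dim=1))
        recon = self.decoder(joint)
        return joint, recon

model = BimodalAutoencoder()
img = torch.randn(8, 1024)   # stand-in for level-2 image features
tag = torch.randn(8, 1024)   # stand-in for level-2 tag features
joint, recon = model(img, tag)
loss = nn.functional.mse_loss(recon, torch.cat([img, tag], dim=1))
loss.backward()
```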

    Role of the effective payoff function in evolutionary game dynamics

    In most studies of evolutionary game dynamics, the effective payoff, the quantity that translates the payoff derived from game interactions into reproductive success, is assumed to be a specific function of the payoff, while the effect of different functional forms of the effective payoff on evolutionary dynamics is usually left unexplored. By introducing a generalized mapping in which an individual's effective payoff is a non-negative function of two variables, the selection intensity and the payoff, we study how different effective payoff functions affect evolutionary dynamics in a symmetric mutation-selection process. For standard two-strategy two-player games, we find that under weak selection the condition for one strategy to dominate the other depends not only on the classical {\sigma}-rule but also on an extra constant determined by the form of the effective payoff function. By changing the sign of this constant, we can reverse the direction of strategy selection. Taking the Moran process and the pairwise comparison process as specific models in well-mixed populations, we find that different fitness or imitation mappings are equivalent under weak selection. Moreover, the sign of the extra constant determines the direction of the one-third law and of risk dominance for sufficiently large populations. This work thus helps to elucidate how the effective payoff function, as another fundamental ingredient of evolution, affects evolutionary dynamics.
    Comment: This paper has been accepted for publication in EP
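    To make the role of the effective payoff mapping concrete, here is a small sketch comparing two common choices, a linear mapping 1 + beta*pi and an exponential mapping exp(beta*pi), via the standard fixation probability of a single mutant in a frequency-dependent Moran process; the payoff matrix, population size, and selection intensity are illustrative assumptions, not values from the paper.

```python
# Sketch (assumed parameters): fixation probability of a single A mutant under
# two different effective-payoff mappings f(beta, pi) in a Moran process.
import numpy as np

a, b, c, d = 3.0, 1.0, 4.0, 2.0   # payoffs: A vs A, A vs B, B vs A, B vs B
N, beta = 100, 0.01               # population size, selection intensity

def payoffs(i):
    """Average payoffs to A and B players when i individuals play A (self excluded)."""
    pi_A = (a * (i - 1) + b * (N - i)) / (N - 1)
    pi_B = (c * i + d * (N - i - 1)) / (N - 1)
    return pi_A, pi_B

def fixation_prob(f):
    """rho_A = 1 / (1 + sum_k prod_{i<=k} T_i^- / T_i^+), with T^-/T^+ = f_B / f_A."""
    ratios = []
    for i in range(1, N):
        pi_A, pi_B = payoffs(i)
        ratios.append(f(beta, pi_B) / f(beta, pi_A))
    return 1.0 / (1.0 + np.sum(np.cumprod(ratios)))

linear = lambda beta, pi: 1.0 + beta * pi
expo   = lambda beta, pi: np.exp(beta * pi)
print(fixation_prob(linear), fixation_prob(expo))  # nearly equal under weak selection
```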

    Influence of initial distributions on robust cooperation in evolutionary Prisoner's Dilemma

    We study the evolutionary Prisoner's Dilemma game on scale-free networks for different initial distributions of strategies. We consider three types of initial distributions of cooperators and defectors: a random distribution with varying frequencies of defectors; an intentional arrangement in which defectors initially occupy the most connected nodes, for varying fractions of defectors; and an intentional arrangement in which cooperators initially occupy the most connected nodes, for varying proportions of defectors. We show that the initial configuration of cooperators and defectors can influence both the stationary level of cooperation and the speed at which cooperation evolves. Among the three initial distributions, placing cooperators on the most highly connected vertices yields the most robust cooperation and drives the evolutionary process to converge fastest to a high stationary level of cooperation. Furthermore, we determine the critical initial frequencies of defectors above which cooperators go extinct for the respective initial distributions, and find that the presence of network loops and clusters of cooperators favors the emergence of cooperation.
    Comment: Submitted to EP
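    The three initial assignments described above can be set up as in the following sketch; the use of networkx, a Barabasi-Albert graph, and the chosen graph parameters and defector fraction are assumptions for illustration, not the authors' exact setup.

```python
# Sketch (assumed graph model and parameters): three ways to seed cooperators
# (C) and defectors (D) on a scale-free network.
import random
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=2, seed=0)
rho_D = 0.3                                   # hypothetical initial defector fraction
n_def = int(rho_D * G.number_of_nodes())
by_degree = sorted(G.nodes, key=G.degree, reverse=True)

def init_random():
    """Random placement of defectors."""
    defectors = set(random.sample(list(G.nodes), n_def))
    return {v: 'D' if v in defectors else 'C' for v in G.nodes}

def init_defector_hubs():
    """Defectors occupy the most connected nodes."""
    defectors = set(by_degree[:n_def])
    return {v: 'D' if v in defectors else 'C' for v in G.nodes}

def init_cooperator_hubs():
    """Cooperators occupy the most connected nodes; defectors fill the rest."""
    cooperators = set(by_degree[:G.number_of_nodes() - n_def])
    return {v: 'C' if v in cooperators else 'D' for v in G.nodes}
```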

    Training Group Orthogonal Neural Networks with Privileged Information

    Learning rich and diverse representations is critical for the performance of deep convolutional neural networks (CNNs). In this paper, we consider how to use privileged information to promote the inherent diversity of a single CNN model so that it can learn better representations and offer stronger generalization ability. To this end, we propose a novel group orthogonal convolutional neural network (GoCNN) that learns untangled representations within each layer by exploiting the provided privileged information, effectively enhancing representation diversity. We take image classification as an example, where image segmentation annotations are used as privileged information during training. Experiments on two benchmark datasets -- ImageNet and PASCAL VOC -- clearly demonstrate the strong generalization ability of the proposed GoCNN model. On ImageNet, GoCNN improves the performance of the state-of-the-art ResNet-152 model by an absolute 1.2% while using privileged information for only 10% of the training images, confirming the effectiveness of GoCNN in exploiting available privileged knowledge to train better CNNs.
    Comment: Proceedings of the IJCAI-1
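    To give a flavor of how segmentation annotations can act as privileged information that untangles feature groups, here is a minimal sketch of a group-wise suppression loss; the two-way group split, the squared-activation penalty, and the PyTorch usage are assumptions for illustration, not the published GoCNN formulation.

```python
# Sketch (assumed loss form, not the GoCNN paper's exact objective): split the
# feature maps into two groups and use a foreground mask to silence the
# "foreground" group on background pixels and vice versa.
import torch

def group_orthogonal_loss(features, fg_mask):
    """features: (B, C, H, W) conv features; fg_mask: (B, 1, H, W) with values in {0, 1}."""
    C = features.shape[1]
    fg_group, bg_group = features[:, :C // 2], features[:, C // 2:]
    loss_fg = (fg_group * (1 - fg_mask)).pow(2).mean()  # foreground group off the background
    loss_bg = (bg_group * fg_mask).pow(2).mean()        # background group off the foreground
    return loss_fg + loss_bg

feats = torch.randn(4, 64, 7, 7, requires_grad=True)    # stand-in conv features
mask = (torch.rand(4, 1, 7, 7) > 0.5).float()           # stand-in segmentation mask
loss = group_orthogonal_loss(feats, mask)
loss.backward()
```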

    Deep Self-Taught Learning for Weakly Supervised Object Localization

    Most existing weakly supervised localization (WSL) approaches learn detectors by finding positive bounding boxes based on features learned with image-level supervision. However, those features do not contain spatial-location information and usually provide poor-quality positive samples for training a detector. To overcome this issue, we propose a deep self-taught learning approach that makes the detector learn object-level features reliable for acquiring tight positive samples and then re-train itself on them. Consequently, the detector progressively improves its detection ability and localizes more informative positive samples. To implement such self-taught learning, we propose a seed-sample acquisition method based on image-to-object transfer and dense subgraph discovery to find reliable positive samples for initializing the detector. An online supportive sample harvesting scheme is further proposed to dynamically select the most confident tight positive samples and train the detector in a mutually boosting way. To prevent the detector from being trapped in poor optima due to overfitting, we propose a new relative improvement of predicted CNN scores for guiding the self-taught learning process. Extensive experiments on PASCAL 2007 and 2012 show that our approach outperforms the state of the art, strongly validating its effectiveness.
    Comment: Accepted as spotlight paper by CVPR 201
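    As a rough illustration of guiding self-taught training by relative score improvement, the following sketch keeps only candidate boxes whose detector scores improved enough between training rounds; the threshold, helper names, and NumPy usage are hypothetical and not taken from the paper.

```python
# Sketch (assumed criterion and threshold): keep candidate boxes whose
# predicted scores show sufficient relative improvement across rounds.
import numpy as np

def relative_improvement(prev_scores, curr_scores, eps=1e-8):
    """Relative change of predicted scores between consecutive training rounds."""
    prev = np.asarray(prev_scores, dtype=float)
    curr = np.asarray(curr_scores, dtype=float)
    return (curr - prev) / (prev + eps)

def select_positives(boxes, prev_scores, curr_scores, min_rel_gain=0.1):
    """Keep candidate boxes whose scores improved by at least min_rel_gain."""
    gain = relative_improvement(prev_scores, curr_scores)
    return [box for box, g in zip(boxes, gain) if g >= min_rel_gain]

boxes = [(10, 10, 60, 80), (5, 40, 90, 120)]          # hypothetical candidate boxes
print(select_positives(boxes, [0.40, 0.55], [0.52, 0.50]))
```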