
    Price-taking Strategy Versus Dynamic Programming in Oligopoly

    In a quantity-setting duopoly, one firm is a naive price-taker (responding only to the last period's price) while the other has full market information and is able to optimize its profit stream (discounted or undiscounted) dynamically over a finite or infinite horizon. In a standard linear economy, we derive algebraically the optimal policies of all periods for the dynamic optimizer. A counter-intuitive phenomenon is then observed: regardless of the planning horizon and the discount factor, there exists a range of initial prices starting from which the price-taker makes a higher profit than the dynamic optimizer. Furthermore, as the planning horizon increases, the price-taker's relative-profitability range widens accordingly and eventually covers the entire economically meaningful range.
    Keywords: economics; dynamic programming; Bellman's optimality principle; applied OR; duopoly
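    The backward-induction computation the abstract alludes to can be sketched numerically. All parameters and functional forms below are illustrative assumptions, not the paper's: linear inverse demand p = a - b(q1+q2), quadratic costs c·q²/2, and a naive price-taker who supplies where last period's price equals its marginal cost. The dynamic optimizer's value function over the last-period price is then computed by Bellman recursion on a grid.

```python
import numpy as np

# Illustrative (assumed) parameters: demand intercept/slope, cost, discount, horizon.
a, b, c, beta, T = 10.0, 1.0, 1.0, 0.9, 5

def price_taker_output(p_last):
    # Naive price-taker: supply where last period's price equals marginal cost c*q.
    return max(p_last, 0.0) / c

p_grid = np.linspace(0.0, a, 201)      # state: last period's price
q_grid = np.linspace(0.0, a / b, 201)  # optimizer's feasible outputs

V = np.zeros(len(p_grid))              # terminal value is zero
for t in range(T):                     # backward induction (Bellman recursion)
    V_new = np.empty_like(V)
    for i, p_last in enumerate(p_grid):
        q1 = price_taker_output(p_last)
        p_next = a - b * (q1 + q_grid)             # resulting price per action
        profit = p_next * q_grid - c * q_grid**2 / 2
        cont = np.interp(p_next, p_grid, V)        # interpolated continuation value
        V_new[i] = np.max(profit + beta * cont)
    V = V_new
```

    A lower initial price leaves the optimizer a less crowded market (the price-taker supplies little), so the value function is largest near the bottom of the price grid.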

    Generative Model with Coordinate Metric Learning for Object Recognition Based on 3D Models

    Given a large number of real photos for training, convolutional neural networks show excellent performance on object recognition tasks. However, collecting such data is tedious and the available backgrounds are limited, which makes it hard to build a comprehensive database. In this paper, our generative model, trained with synthetic images rendered from 3D models, reduces the workload of data collection and the limitation of capture conditions. Our structure is composed of two sub-networks: a semantic foreground-object reconstruction network based on Bayesian inference, and a classification network based on a multi-triplet cost function. The latter avoids over-fitting on monotone surfaces and fully utilizes pose information by establishing a sphere-like distribution of descriptors within each category, which aids recognition of regular photos according to the pose, lighting condition, background, and category information of the rendered images. First, our conjugate structure, called a generative model with metric learning, uses additional foreground-object channels generated by Bayesian rendering as the joint between the two sub-networks; the pose-based multi-triplet cost function performs the metric learning, which makes it possible to train a category classifier purely on synthetic data. Second, we design a coordinate training strategy, with adaptive noise acting as corruption on the input images, to help the two sub-networks benefit from each other and to avoid inharmonious parameter tuning caused by their different convergence speeds. Our structure achieves state-of-the-art accuracy of over 50% on the ShapeNet database despite the data-migration obstacle from synthetic images to real photos. This pipeline makes it feasible to perform recognition on real images based only on 3D models.
    Comment: 14 pages
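    As a rough illustration of the multi-triplet idea, a hinge-style triplet loss with several negatives per anchor can be sketched as follows. This is a simplified stand-in: the paper's pose-based construction of triplets and its sphere-like descriptor distribution are not reproduced, and all names are ours.

```python
import numpy as np

def multi_triplet_loss(anchor, positive, negatives, margin=0.2):
    """Hinge triplet loss averaged over several negatives for one anchor.

    anchor, positive: 1-D embedding vectors; negatives: 2-D array, one
    embedding per row. The loss penalizes any negative that is not at
    least `margin` farther (in squared distance) than the positive.
    """
    d_pos = np.sum((anchor - positive) ** 2)
    d_negs = np.sum((anchor - negatives) ** 2, axis=1)
    return np.maximum(0.0, d_pos - d_negs + margin).mean()
```

    With well-separated negatives the hinge is inactive and the loss is zero; a negative as close as the positive contributes the full margin.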

    Relative Profitability of Dynamic Walrasian Strategies

    The advantage of price-taking behavior for achieving relative profitability in oligopolistic quantity competition has recently received much attention from economic dynamics and evolutionary game theory, respectively. The current research provides a direct economic interpretation and intuitive justification, and further builds a linkage between these different perspectives. In particular, a detailed illustration is presented for an arbitrary oligopoly producing a homogeneous product. So long as the outputs of the other firms are fixed and the residual demand is downward-sloping, for any two identical firms with convex cost functions, their output space can be divided symmetrically into mutually exclusive relative-profitability regimes. Furthermore, there exist infinitely many relative-profitability reactions for each firm in such a "residual" duopoly, all of which intersect at the "residual" Walrasian equilibrium. This suggests that sticking constantly to this dynamical equilibrium output (i.e., the static Walrasian strategy) is a relative-profitability strategy in each period. On the other hand, regardless of what strategies its rival may adopt, a firm following the price-taking strategy, or the more generally defined dynamic Walrasian strategies, achieves relative profitability whenever an intertemporal equilibrium is reached. The methodology adopted and the conclusions reached clarify confusions and misunderstandings arising from different usages of the same terminology under different frameworks, and generalize the previously available results in the literature to a broader context.
    Keywords: price-taking; Walrasian behavior; relative profit; oligopoly; Cournot; dynamic Walrasian strategy
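    The relative-profitability claim can be checked in a toy "residual" duopoly. All functional forms and parameters here are our own illustrative assumptions (linear residual demand p = a - b(q1+q2), identical quadratic costs c·q²/2): firm 1 plays the residual Walrasian output, i.e. the quantity at which the resulting price equals its marginal cost, and its profit never falls below the rival's for any rival output.

```python
# Illustrative (assumed) parameters for the residual market.
a, b, c = 10.0, 1.0, 1.0

def residual_walrasian(q2):
    # Solve a - b*(q1 + q2) = c*q1  (price = marginal cost) for q1.
    return (a - b * q2) / (b + c)

def profit(q_own, q_other):
    p = a - b * (q_own + q_other)
    return p * q_own - c * q_own**2 / 2

# Price-taking firm 1 is relatively profitable against any rival output.
for q2 in [0.0, 1.0, 3.0, 5.0]:
    q1 = residual_walrasian(q2)
    assert profit(q1, q2) >= profit(q2, q1)
```

    Algebraically, with p = c·q1 the profit difference reduces to c(q1 - q2)²/2 ≥ 0, which is why the inequality holds for every rival output in this setup.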

    Energy Confused Adversarial Metric Learning for Zero-Shot Image Retrieval and Clustering

    Deep metric learning has been widely applied in many computer vision tasks and has recently become particularly attractive for zero-shot image retrieval and clustering (ZSRC), where a good embedding is required so that unseen classes can be distinguished well. Most existing works deem this "good" embedding to be merely the discriminative one, and thus race to devise powerful metric objectives or hard-sample mining strategies for learning discriminative embeddings. In this paper, however, we first emphasize that generalization ability is in fact an equally core ingredient of this "good" embedding and largely affects metric performance in zero-shot settings. We then propose the Energy Confused Adversarial Metric Learning (ECAML) framework to explicitly optimize a robust metric. It is achieved mainly by introducing an Energy Confusion regularization term, which departs from the traditional metric-learning focus on devising discriminative objectives and instead seeks to "confuse" the learned model, encouraging generalization by reducing over-fitting on the seen classes. We train this confusion term together with the conventional metric objective in an adversarial manner. Although it may seem strange to "confuse" the network, we show that ECAML serves as an efficient regularization technique for metric learning and is applicable to various conventional metric methods. This paper empirically and experimentally demonstrates the importance of learning embeddings with good generalization, achieving state-of-the-art performance on the popular CUB, CARS, Stanford Online Products, and In-Shop datasets for ZSRC tasks. Code available at http://www.bhchen.cn/.
    Comment: AAAI 2019, Spotlight
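    The overall pattern of pairing a discriminative metric objective with an opposing regularizer can be sketched as follows. Note that the confusion term below is only a placeholder of our own devising (it pulls embeddings toward their mean, discouraging over-separated clusters); the paper's actual Energy Confusion term and its adversarial training schedule are not reproduced here.

```python
import numpy as np

def metric_loss(emb, labels, margin=0.5):
    # Contrastive-style stand-in for the conventional metric objective:
    # same-label pairs are pulled together, different-label pairs pushed
    # beyond the margin (in squared distance).
    loss, n = 0.0, 0
    for i in range(len(emb)):
        for j in range(i + 1, len(emb)):
            d = np.sum((emb[i] - emb[j]) ** 2)
            loss += d if labels[i] == labels[j] else max(0.0, margin - d)
            n += 1
    return loss / n

def confusion_term(emb):
    # Placeholder "confusion" regularizer (NOT the paper's Energy
    # Confusion term): mean squared distance of embeddings to their
    # centroid, opposing the discriminative objective.
    return np.mean(np.sum((emb - emb.mean(axis=0)) ** 2, axis=1))

def total_loss(emb, labels, lam=0.1):
    # Combined objective: discriminative term plus a weighted term that
    # works against over-separation of the seen classes.
    return metric_loss(emb, labels) + lam * confusion_term(emb)
```

    The weight `lam` plays the role of the regularization strength: larger values trade seen-class separability for a more compact, presumably better-generalizing embedding.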