
    Vertical competition between manufacturers and retailers and upstream incentives to innovate and differentiate

    Vertical competition, namely competition between retailers' store brands (or private labels) and manufacturers' brands, has become a crucial driver of change in the competitive environment of several industries, particularly the grocery and food industries. Despite the growing literature on the determinants of the phenomenon, one topic has so far received little attention: the impact of vertical competition on upstream incentives to adopt non-price strategies such as product innovation and horizontal and vertical product differentiation. An idea often put forward is that the increasing bargaining power of retailers and stronger vertical competitive pressures can dampen such incentives by lowering manufacturers' profits. On the other hand, there is significant empirical evidence supporting the view that non-price strategies of product innovation and differentiation continue to play a key role and remain a crucial source of competitive advantage for several manufacturers. In this paper, we present a simple conceptual framework built on two interacting hypotheses that explain why the disincentive effects are not so obvious. The first hypothesis concerns the inverse relationship between the strength of a brand and the retail margin, as suggested by Robert Steiner. Through a two-stage model in which manufacturers do not sell directly to final consumers and the retail industry is not perfectly competitive, Steiner argued persuasively that leading brands in a product category yield lower retail margins than weaker brands: retailers are forced to stock strong brands and therefore have relatively less bargaining power in negotiating wholesale prices.
In addition, price competition among retailers is more intense on strong brands, since consumers use these brands to form their perceptions of stores' price competitiveness and are ready to shift to lower-priced stores if the retail price of these brands is not perceived as competitive. Thus, intense intrabrand competitive pressure disciplines retailers' pricing policy on stronger manufacturer brands much more than on weaker brands. A key prediction of Steiner's two-stage model is that, since manufacturers' non-price strategies have a margin-depressing impact in addition to their direct demand-creating effect, manufacturers face greater incentives to invest in advertising and R&D. The second central hypothesis in our framework is that, in a world of asymmetric brands and intense vertical competition, a further mechanism is at work through retailers' delisting decisions. Since retailers have to make room for their store brands at the point of sale, they must readjust their assortments by delisting some manufacturer brands. Retailers would like to delist strong brands, given that the retail margin on these brands is lower; the problem is that strong brands can withstand vertical pressures better than weaker brands and cannot be delisted. In making shelf-space decisions, rational retailers will recognise that they can delist only the brands whose brand loyalty is lower than their store loyalty; conversely, they cannot delist brands for which brand loyalty is greater than store loyalty. This implies that manufacturer brands operate in a two-region environment. We call these two regions the 'delisting' and 'no-delisting' regions, and show that the demarcation point between them is given by the level of the retailer's store loyalty.
By combining Steiner's hypothesis with the delisting mechanism, we argue that in a competitive environment characterised by vertical competition a threshold effect is at work which increases optimal R&D and advertising expenditures. The intuition is that manufacturers wishing to remain sellers of branded products must keep the brand loyalty of their brands above the retailer's store loyalty, and the only way to pursue this goal and avoid the risk of being delisted is to boost their brands. We also show that vertical competitive pressures are particularly strong on second-tier brands. A brief review of recent patterns and stylised facts in the food industries and grocery channels consistent with these predictions concludes the paper.
Keywords: vertical competition, store brands, delisting, optimal advertising, Industrial Organization
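The delisting rule described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's formal model: brand names and loyalty values are invented, and the threshold is simply the retailer's store loyalty.

```python
# Hypothetical sketch of the delisting rule: a brand can be delisted only if
# its brand loyalty falls below the retailer's store loyalty (the demarcation
# point between the 'delisting' and 'no-delisting' regions).

def partition_brands(brand_loyalty, store_loyalty):
    """Split brands into the 'delisting' and 'no-delisting' regions."""
    delisting = {b: l for b, l in brand_loyalty.items() if l < store_loyalty}
    no_delisting = {b: l for b, l in brand_loyalty.items() if l >= store_loyalty}
    return delisting, no_delisting

# Illustrative values only: a leading brand, a second-tier brand, a minor one.
brands = {"leader": 0.8, "second_tier": 0.45, "minor": 0.2}
store_loyalty = 0.5

at_risk, safe = partition_brands(brands, store_loyalty)
print(sorted(at_risk))  # brands the retailer can delist
print(sorted(safe))     # brands the retailer must keep stocking
```

The threshold effect follows directly: a second-tier brand sitting just below the store-loyalty cutoff has a strong incentive to raise its loyalty above it through R&D and advertising.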

    Looking into the black box of Schumpeterian Growth Theories: an empirical assessment of R&D races

    This paper assesses whether the most important R&D technologies at the roots of second-generation Schumpeterian growth theories are consistent with patenting and innovation statistics. Using US manufacturing industry data, we estimate various systems of simultaneous equations modeling the innovation functions underlying growth frameworks based on variety expansion, diminishing technological opportunities and rent protection activities. Our evidence indicates that innovation functions characterized by the increasing difficulty of R&D activity fit US data better. This finding relaunches the debate on the soundness of the new Schumpeterian strand of endogenous growth literature.
Keywords: R&D, patenting, Schumpeterian growth, US manufacturing

    Transitions involving conical magnetic phases in a model with bilinear and biquadratic interactions

    In a previous work, a model was proposed for the phase transitions of crystals with localized magnetic moments which at low temperature have a "conical" arrangement that at higher T transforms into a more symmetrical structure (depending on the compound) before becoming totally disordered. The model assumes bilinear and biquadratic interactions between magnetic moments up to the fifth neighbours, and for any given T the structure with the least free energy is obtained by a mean-field approximation (MFA). The interaction constants are derived from ab initio energy calculations. In this work we improve upon that model by modifying the MFA in such a way that a continuous (instead of discrete) spectrum of excited states is available to the system. In the previous work, which dealt with LaMn_2Ge_2 and LaMn_2Si_2, we found that transitions to different structures can be obtained for increasing T, in good qualitative agreement with experiment. The critical temperatures, however, were much too high. With the new MFA we obtain essentially the same behaviour concerning the phase transitions, and critical temperatures much closer to the experimental ones.
Comment: 8 pages, no figures

    Using Deep Neural Networks to compute the mass of forming planets

    Computing the mass of planetary envelopes, and the critical mass beyond which planets accrete gas in a runaway fashion, is important when studying planet formation, in particular for planets up to the Neptune mass range. This computation requires in principle solving a set of differential equations, the internal structure equations, for given boundary conditions (pressure and temperature in the protoplanetary disk where a planet forms, core mass, and accretion rate of solids by the planet). Solving these equations in turn proves to be time-consuming and sometimes numerically unstable. We developed a method to approximate the result of integrating the internal structure equations for a variety of boundary conditions. We compute a set of planet internal structures for a very large number (millions) of boundary conditions, considering two opacities (ISM and reduced). This database is then used to train Deep Neural Networks to predict the critical core mass as well as the mass of planetary envelopes as a function of the boundary conditions. We show that our neural networks provide a very good approximation (at the percent level) of the result obtained by solving the interior structure equations, but at a much smaller computational cost. The difference with the true solution is much smaller than that obtained using analytical formulas available in the literature, which at best provide only the correct order of magnitude. We compare the results of the DNN with other popular machine learning methods (Random Forest, Gradient Boost, Support Vector Regression) and show that the DNN outperforms these methods by a factor of at least two. We show that some analytical formulas that can be found in various papers can severely overestimate the mass of planets, therefore predicting the formation of planets in the Jupiter-mass regime instead of the Neptune-mass regime.
Comment: accepted in A&A. Animations visible at http://nccr-planets.ch/research/phase2/domain2/project5/machine-learning-and-advanced-statistical-analysis/ and code available at https://github.com/yalibert/DNN_internal_structur
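The surrogate-model idea can be illustrated on toy data. This sketch is not the paper's actual network, data, or framework: it uses scikit-learn's MLPRegressor for brevity, the four input columns merely stand in for the boundary conditions, and the target is a smooth synthetic function rather than real envelope masses.

```python
# Illustrative surrogate-model sketch (assumed setup, not the paper's):
# fit a small neural network mapping 4 toy "boundary conditions" to a
# toy "envelope mass", replacing an expensive integration with a fast
# learned approximation.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 4))        # stand-ins for P, T, core mass, solid accretion rate
y = X[:, 2] ** 1.5 * np.exp(X[:, 1])   # smooth synthetic target, invented for the demo

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X[:1500], y[:1500])

pred = model.predict(X[1500:])
mae_model = float(np.mean(np.abs(pred - y[1500:])))
mae_baseline = float(np.mean(np.abs(y[:1500].mean() - y[1500:])))
print(f"network MAE {mae_model:.4f} vs constant-baseline MAE {mae_baseline:.4f}")
```

Once trained, a single forward pass replaces the full integration, which is where the speed-up reported in the abstract comes from.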

    Handling Massive N-Gram Datasets Efficiently

    This paper deals with two fundamental problems concerning the handling of large n-gram language models: indexing, that is, compressing the n-gram strings and associated satellite data without compromising their retrieval speed; and estimation, that is, computing the probability distribution of the strings from a large textual source. Regarding the problem of indexing, we describe compressed, exact and lossless data structures that achieve, at the same time, high space reductions and no time degradation with respect to state-of-the-art solutions and related software packages. In particular, we present a compressed trie data structure in which each word following a context of fixed length k, i.e., its preceding k words, is encoded as an integer whose value is proportional to the number of words that follow such a context. Since the number of words following a given context is typically very small in natural languages, we lower the space of representation to compression levels that were never achieved before. Despite the significant savings in space, our technique introduces a negligible penalty at query time. Regarding the problem of estimation, we present a novel algorithm for estimating modified Kneser-Ney language models, which have emerged as the de facto choice for language modeling in both academia and industry thanks to their low perplexity. Estimating such models from large textual sources poses the challenge of devising algorithms that make a parsimonious use of the disk. The state-of-the-art algorithm uses three sorting steps in external memory: we show an improved construction that requires only one sorting step by exploiting the properties of the extracted n-gram strings.
With an extensive experimental analysis performed on billions of n-grams, we show an average improvement of 4.5X on the total running time of the state-of-the-art approach.
Comment: Published in ACM Transactions on Information Systems (TOIS), February 2019, Article No: 2
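The context-based remapping idea can be sketched as follows. This is a hypothetical illustration of the principle only (the paper's actual trie layout and codecs are more involved): a word following a fixed-length context is stored not as its global vocabulary ID but as its rank among the few words observed after that context, producing small integers that compress well.

```python
# Hypothetical sketch: remap each (context, word) pair to the word's rank
# within its context, so stored integers are bounded by the (typically tiny)
# number of distinct words that follow the context.
from collections import defaultdict

def build_context_codes(ngrams):
    """Map each (context, word) pair to the word's rank within that context."""
    followers = defaultdict(list)
    for *context, word in ngrams:
        ctx = tuple(context)
        if word not in followers[ctx]:
            followers[ctx].append(word)
    return {(ctx, w): rank
            for ctx, words in followers.items()
            for rank, w in enumerate(words)}

trigrams = [("the", "cat", "sat"), ("the", "cat", "ran"), ("a", "dog", "sat")]
codes = build_context_codes(trigrams)
print(codes[(("the", "cat"), "ran")])  # → 1: second distinct word after this context
```

Because natural-language contexts are highly selective, these ranks stay far smaller than raw vocabulary IDs, which is what enables the compression levels the abstract describes.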

    On Optimally Partitioning Variable-Byte Codes

    The ubiquitous Variable-Byte encoding is one of the fastest compressed representations for integer sequences. However, its compression ratio is usually not competitive with that of more sophisticated encoders, especially when the integers to be compressed are small, which is the typical case for inverted indexes. This paper shows that the compression ratio of Variable-Byte can be improved by 2x by adopting a partitioned representation of the inverted lists. This makes Variable-Byte surprisingly competitive in space with the best bit-aligned encoders, hence disproving the folklore belief that Variable-Byte is space-inefficient for inverted index compression. Despite the significant space savings, we show that our optimization almost comes for free: we introduce an optimal partitioning algorithm that does not affect indexing time thanks to its linear-time complexity, and we show, through an extensive experimental analysis and comparison with several other state-of-the-art encoders, that the query processing speed of Variable-Byte is preserved.
Comment: Published in IEEE Transactions on Knowledge and Data Engineering (TKDE), 15 April 201
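For readers unfamiliar with the baseline scheme, here is a minimal sketch of classic Variable-Byte coding, under one common convention (7 payload bits per byte, high bit marking the last byte of an integer); the paper's actual contribution, the optimal partitioning of inverted lists, is not reproduced here.

```python
# Minimal Variable-Byte sketch: each byte carries 7 payload bits; the high
# bit is set only on the final byte of an integer (one common convention).

def vbyte_encode(n: int) -> bytes:
    out = bytearray()
    while n >= 128:
        out.append(n & 0x7F)   # 7 low bits, high bit clear: more bytes follow
        n >>= 7
    out.append(n | 0x80)       # high bit set: terminator byte
    return bytes(out)

def vbyte_decode(data: bytes) -> int:
    n, shift = 0, 0
    for b in data:
        n |= (b & 0x7F) << shift
        if b & 0x80:           # terminator byte reached
            break
        shift += 7
    return n

for v in (0, 127, 128, 300, 2**20):
    assert vbyte_decode(vbyte_encode(v)) == v
print("round-trip ok")
```

Small integers cost one byte each regardless of how small they are, which is exactly the overhead that the partitioned representation attacks.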