
    About the Efficiency of Input vs. Output Quotas

    Output quotas are known to be more efficient than input quotas at transferring surplus from consumers to producers. Input quotas, by distorting the shadow prices of inputs, lead to inefficient production and generate larger deadweight losses for a given amount of surplus transferred. Yet input quotas have been a ubiquitous tool in agricultural policy. Practicality considerations, as well as the difficulty of controlling outputs that depend heavily on stochastic weather conditions, help explain why policy makers may favor input quotas over output quotas. In this paper, we offer an additional explanation that rests on efficiency considerations. Assuming that the regulator has only limited knowledge of the market fundamentals (supply and demand elasticities, among others), seeks to transfer at least a given amount of surplus to producers, and is influenced by the industry in its choice of the quota level, we show that an input quota becomes the optimal policy.
    Agricultural and Food Policy, H2, L2, Q1
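    A minimal textbook-style sketch of the output-quota benchmark (illustrative only, not the paper's model; the linear demand and supply parameters $a$, $b$, $c$, $d$ are assumptions made here): with inverse demand $P^D(Q) = a - bQ$ and marginal cost $P^S(Q) = c + dQ$, the competitive quantity is
    \[ Q^* = \frac{a - c}{b + d}. \]
    A binding output quota $\bar{Q} < Q^*$ raises the price to $P^D(\bar{Q})$, transferring the rectangle $\bigl[P^D(\bar{Q}) - P^D(Q^*)\bigr]\,\bar{Q}$ from consumers to producers at a deadweight loss of
    \[ \mathrm{DWL}(\bar{Q}) = \tfrac{1}{2}\bigl[P^D(\bar{Q}) - P^S(\bar{Q})\bigr]\,(Q^* - \bar{Q}). \]
    An input quota achieving the same transfer adds a second distortion, an inefficient input mix, which is the source of the larger deadweight loss described in the abstract.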

    Is There Market Power in the French Comte Cheese Market?

    An NEIO approach is used to measure seller market power in the French Comté cheese market, which is characterised by government-approved supply control. The estimation is performed on quarterly data at the wholesale stage over the period 1985-2005. Three different elasticity shifters are included in the demand specification, and the supply equation accounts for the existence of the European dairy quota policy. The market power estimate is small and statistically insignificant. Monopoly is rejected, as are weak forms of Cournot oligopoly. Results appear to be robust to the choice of functional form, and suggest little effect of the supply control scheme on consumer prices.
    Supply control, NEIO, protected designation of origin, Marketing
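    For orientation, a generic sketch of the standard NEIO conduct-parameter framework (not necessarily the authors' exact specification): firms' first-order conditions imply the markup relation
    \[ \frac{P - MC}{P} = \frac{\theta}{\varepsilon}, \qquad 0 \le \theta \le 1, \]
    where $\varepsilon$ is the absolute value of the price elasticity of demand and $\theta$ is the conduct parameter ($\theta = 0$ under perfect competition, $\theta = 1/n$ under symmetric Cournot with $n$ firms, $\theta = 1$ under monopoly). Demand-side elasticity shifters of the kind mentioned in the abstract let $\varepsilon$ vary across observations, which is what identifies $\theta$; a small and statistically insignificant market power estimate corresponds to $\theta \approx 0$.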

    Will Geographical Indications Supply Excessive Quality?

    Consumer/Household Economics, Marketing

    The Coexistence of GM and non-GM Crops and the Role of Consumer Preferences

    Crop coexistence is now at the core of the debate on GM technology in Europe. New regulations are being designed in the E.U. in order to "correct" potential production externalities and ensure that conventional and organic production will remain a profitable alternative for farmers. We use a simple Mussa-Rosen type model of preferences to capture the effects of introducing a cost-saving GM crop on incumbent crops, explicitly taking into account consumers' distaste for GM food products. Using a two-technology model, we derive necessary and sufficient conditions for coexistence and show that perfectly competitive farmers with rational expectations will adopt the socially efficient level of GM technology. We also solve a three-technology model to study the impacts of the availability of GM technology on conventional and organic production. We formally characterize the entire set of possible outcomes using only three parameters that reflect the technologies' relative performance. We use our model to explore the effects of negative production externalities created by GM technology and of a change in consumers' tastes on coexistence.
    Consumer/Household Economics
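    A minimal Mussa-Rosen-style sketch of the preference structure involved (illustrative notation, not the paper's exact model): consumers indexed by a taste parameter $\theta$ distributed on $[0,1]$ obtain utility $\theta s - p$ from one unit of a good of perceived quality $s$ sold at price $p$, with an outside option of zero. If the GM product is perceived as lower quality, $s_{GM} < s_{nonGM}$, a consumer chooses the non-GM variant whenever
    \[ \theta\,(s_{nonGM} - s_{GM}) \ge p_{nonGM} - p_{GM}, \]
    so the market segments at a threshold taste level. The cost saving of the GM technology lowers $p_{GM}$, shifting that threshold and determining whether both technologies can coexist in equilibrium.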

    Market Power after the Transition

    Agribusiness, Financial Economics

    AJAE Appendix: Optimal Investment in Transportation Infrastructure When Middlemen Have Market Power: A Developing-Country Analysis

    The material contained herein is supplementary to the article named in the title and published in the American Journal of Agricultural Economics.
    Public Economics

    A Fully Calibrated Generalized CES Programming Model of Agricultural Supply

    The use of prior information on supply elasticities to calibrate programming models of agricultural supply has been advocated repeatedly in the recent literature (Heckelei and Britz 2005). Yet, Mérel and Bucaram (2009) have shown that the dual goal of calibrating such models to a reference allocation while replicating an exogenous set of supply elasticities is not always feasible. This article lays out the methodological foundation for exactly calibrating programming models of agricultural supply using generalized CES production functions. We formally derive the necessary and sufficient conditions under which such models can be calibrated to replicate the reference allocation while displaying crop-specific supply responses that are consistent with prior information. When it exists, the solution to the exact calibration problem is unique. From a microeconomic perspective, the generalized CES model is preferable to the quadratic models that have been used extensively in policy analysis since the publication of Howitt's (1995) Positive Mathematical Programming. The two types of specifications are also compared on the basis of their flexibility with respect to calibration, and it is shown that, provided myopic calibration is feasible, the generalized CES model can calibrate larger sets of supply elasticities than its quadratic counterpart. Our calibration criterion is relevant both for calibrated positive mathematical programming models and for "well-posed" models estimated through generalized maximum entropy following Heckelei and Wolff (2003), where it is deemed appropriate to include prior information regarding the value of own-price supply elasticities.
    Positive mathematical programming, generalized CES, supply elasticities, Crop Production/Industries, Production Economics
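    A schematic statement of the calibration problem (notation assumed here, not necessarily the authors'): each activity $i$ is assigned a generalized CES production function with decreasing returns to scale,
    \[ y_i = \alpha_i \Bigl( \sum_j \beta_{ij}\, x_{ij}^{\rho_i} \Bigr)^{\delta_i / \rho_i}, \qquad \delta_i < 1, \]
    and exact calibration asks for parameter values $(\alpha_i, \beta_{ij}, \rho_i, \delta_i)$ such that, at reference prices, the profit-maximizing allocation reproduces the observed reference allocation while the model's implied own-price supply elasticities match the exogenous priors. The article's contribution is the set of necessary and sufficient conditions under which such parameters exist, together with the uniqueness of the solution when they do.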

    Common Limitations of Image Processing Metrics: A Picture Story

    While the importance of automatic image analysis is continuously increasing, recent meta-research has revealed major flaws with respect to algorithm validation. Performance metrics are particularly key for meaningful, objective, and transparent performance assessment and validation of automatic algorithms, but relatively little attention has been given to the practical pitfalls of using specific metrics for a given image analysis task. These are typically related to (1) the disregard of inherent metric properties, such as the behaviour in the presence of class imbalance or small target structures, (2) the disregard of inherent data set properties, such as the non-independence of the test cases, and (3) the disregard of the actual biomedical domain interest that the metrics should reflect. This dynamically updated living document illustrates important limitations of performance metrics commonly applied in the field of image analysis. In this context, it focuses on biomedical image analysis problems that can be phrased as image-level classification, semantic segmentation, instance segmentation, or object detection tasks. The current version is based on a Delphi process on metrics conducted by an international consortium of image analysis experts from more than 60 institutions worldwide.
    Comment: This is a dynamic paper on limitations of commonly used metrics. The current version discusses metrics for image-level classification, semantic segmentation, object detection and instance segmentation. For missing use cases, comments or questions, please contact [email protected] or [email protected]. Substantial contributions to this document will be acknowledged with a co-authorship.
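    As a minimal, hypothetical illustration of the class-imbalance pitfall named above (not an example taken from the document; the toy labels and the scikit-learn calls are assumptions chosen for brevity), plain accuracy can look excellent on an imbalanced image-level classification task even when the rare class is never detected:

        import numpy as np
        from sklearn.metrics import accuracy_score, balanced_accuracy_score, f1_score

        # Hypothetical image-level labels: 1% positive class (e.g., a rare pathology), 99% negative.
        y_true = np.array([1] * 10 + [0] * 990)
        # A degenerate "classifier" that always predicts the majority class.
        y_pred = np.zeros_like(y_true)

        print(accuracy_score(y_true, y_pred))             # 0.99 -- looks excellent
        print(balanced_accuracy_score(y_true, y_pred))    # 0.50 -- no better than chance
        print(f1_score(y_true, y_pred, zero_division=0))  # 0.00 -- positives are never found

    Metrics that account for the class distribution, such as balanced accuracy or the F1 score, expose the failure that plain accuracy hides.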

    Understanding metric-related pitfalls in image analysis validation

    Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: while taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential for transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.
    Comment: Shared first authors: Annika Reinke, Minu D. Tizabi; shared senior authors: Paul F. Jäger, Lena Maier-Hein.
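    A second minimal sketch, again hypothetical rather than taken from the paper, of a segmentation pitfall in the same family (small target structures): the Dice similarity coefficient can swing dramatically on a small structure for a boundary error that barely affects a large one. The toy masks and helper functions below are assumptions for illustration only.

        import numpy as np

        def dice(a: np.ndarray, b: np.ndarray) -> float:
            """Dice similarity coefficient between two binary masks."""
            inter = np.logical_and(a, b).sum()
            return 2.0 * inter / (a.sum() + b.sum())

        def disk(size: int, radius: int) -> np.ndarray:
            """Binary mask containing a centred disk, used as a toy structure."""
            yy, xx = np.mgrid[:size, :size]
            c = size // 2
            return ((yy - c) ** 2 + (xx - c) ** 2) <= radius ** 2

        # The same 2-pixel shift is applied to a small and a large structure.
        small_gt, small_pred = disk(64, 2), np.roll(disk(64, 2), 2, axis=1)
        large_gt, large_pred = disk(64, 20), np.roll(disk(64, 20), 2, axis=1)

        print(round(dice(small_gt, small_pred), 2))  # low: the shift nearly destroys the overlap
        print(round(dice(large_gt, large_pred), 2))  # close to 1: barely noticeable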