356 research outputs found

    Tariff-Rate Quotas: Difficult to model or plain simple?

    The difficulty of reliably and accurately incorporating tariff-rate quotas (TRQs) into trade models has received a lot of attention in recent years. As a result of the Uruguay Round of GATT negotiations, TRQs replaced an assortment of tariff and non-tariff instruments in an effort to standardise trade barriers and facilitate their future liberalisation. Understanding the nuances of TRQs is now particularly crucial for New Zealand because of the preferential access arrangements that New Zealand has for a number of products in highly protected markets such as the European Union, Japan, and the United States. It has been argued that TRQs are complex instruments and are difficult to model because, for any trade flow between two countries, one of three regimes may apply:
    1. The import quota is not binding and the within-quota tariff applies;
    2. The quota is binding, the within-quota tariff applies, and a quota rent is created; or
    3. Trade occurs over and above the quota, in which case an over-quota tariff applies (although, even in this regime, someone still collects the quota rent on within-quota trade).
    But even this characterisation, which many claim is too complex to model, is a major simplification of reality. Bilateral preferences are ubiquitous, and such preferences are usually included in the determination of multilateral market access quotas. It is usual, therefore, for the TRQ instrument to have several tiers to the quota schedule, plus a number of within- and over-quota tariff rates applicable on either a bilateral or a multilateral basis. Further trade liberalisation creates something of a dilemma for New Zealand: any decrease in over-quota tariffs and/or increase in quota levels potentially reduces the value of quota rents, many of which accrue to New Zealand through its bilateral preferences. It is important, therefore, that New Zealand trade negotiators understand how much additional trade is required to offset the loss of New Zealand's quota rents, and modelling trade in the presence of TRQs is the only way to obtain this knowledge. The purpose of this paper is to show that complex TRQs can be modelled easily and precisely; the only catch is that the model must be formulated as a complementarity problem rather than as the more conventional linear or nonlinear optimisation problem. The concept is demonstrated using a simple three-region, single-commodity spatial price equilibrium model of trade.
    Keywords: tariff-rate quota, trade modelling, mathematical programming, complementarity
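
    A minimal numerical sketch of this three-regime logic (assuming a single import market with linear import demand m(p) = a - b*p; all names and parameter values are hypothetical illustrations, not the paper's three-region model):

    ```python
    # Sketch of TRQ regime selection for one market (hypothetical parameters).
    # The three mutually exclusive branches mirror the complementarity
    # conditions: rent = 0 when the quota has slack, and the quota binds
    # whenever the rent lies strictly between 0 and the tariff gap.

    def trq_equilibrium(a, b, world_price, t_in, t_out, quota):
        """Return (imports, domestic_price, rent_per_unit, regime)."""
        # Regime 1: quota not binding -> within-quota tariff, no rent.
        p1 = world_price + t_in
        m1 = max(a - b * p1, 0.0)
        if m1 < quota:
            return m1, p1, 0.0, "regime 1: quota slack"

        # Regime 3: trade exceeds the quota -> over-quota tariff sets the
        # price; within-quota units still earn the tariff gap as rent.
        p3 = world_price + t_out
        m3 = max(a - b * p3, 0.0)
        if m3 > quota:
            return m3, p3, t_out - t_in, "regime 3: over-quota trade"

        # Regime 2: quota exactly binding -> inverse demand at the quota
        # sets the price; the rent is the margin above the in-quota cost.
        p2 = (a - quota) / b
        rent = p2 - world_price - t_in   # lies in [0, t_out - t_in] here
        return quota, p2, rent, "regime 2: quota binding"

    print(trq_equilibrium(a=100, b=2, world_price=10, t_in=5, t_out=25, quota=40))
    # -> (40, 30.0, 15.0, 'regime 2: quota binding')
    ```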

    A Novel Method for Detecting and Computing Univolatility Curves in Ternary Mixtures

    Residue curve maps (RCMs) and univolatility curves are crucial tools for the analysis and design of distillation processes. Even in the case of ternary mixtures, the topology of these maps is highly non-trivial, as shown by Serafimov’s and Zhvanetskii’s classifications. We propose a novel method allowing the detection and computation of univolatility curves in homogeneous ternary mixtures independently of the presence of azeotropes, which is particularly important in the case of zeotropic mixtures. The method is based on analysis of the geometry of the boiling temperature surface constrained by the univolatility or unidistribution condition. It builds on the concepts of generalized univolatility and unidistribution curves in the three-dimensional composition-temperature state space, which lead to a simple, non-iterative, and efficient algorithm for computing univolatility curves. Two peculiar ternary systems, namely diethylamine-chloroform-methanol and hexane-benzene-hexafluorobenzene, are used for illustration. When pressure is varied, tangential azeotropy, bi-ternary azeotropy, a saddle-node ternary azeotrope, and bi-binary azeotropy are found. In both examples, a distinctive crossing shape of the univolatility curve appears as a consequence of the existence of a common tangent point between the three-dimensional univolatility hypersurface and the boiling temperature surface. Moreover, rare univolatility curves starting and ending on the same binary side are found.
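
    The detection idea can be illustrated with a generic sign-change scan over ln(alpha) (a hedged stand-in, not the authors' non-iterative geometric algorithm; the one-parameter Margules activity model and every parameter value below are assumptions chosen only for demonstration):

    ```python
    # Locate univolatility points (alpha_12 = 1) on a binary edge by scanning
    # for sign changes of ln(alpha_12). Ideal vapor phase, one-parameter
    # Margules liquid phase -- placeholder assumptions, not this paper's model.
    import numpy as np

    def alpha_12(x1, A=1.8, p1_sat=1.2, p2_sat=1.0):
        """Relative volatility of component 1 over component 2."""
        x2 = 1.0 - x1
        g1 = np.exp(A * x2 ** 2)   # Margules activity coefficient, comp. 1
        g2 = np.exp(A * x1 ** 2)   # Margules activity coefficient, comp. 2
        return (g1 * p1_sat) / (g2 * p2_sat)

    x = np.linspace(1e-6, 1.0 - 1e-6, 2001)
    f = np.log(alpha_12(x))
    crossings = x[:-1][np.sign(f[:-1]) != np.sign(f[1:])]
    print("univolatility point(s) near x1 =", crossings)   # ~ 0.55
    ```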

    Techniques of replica symmetry breaking and the storage problem of the McCulloch-Pitts neuron

    In this article the framework of Parisi's spontaneous replica symmetry breaking is reviewed and subsequently applied to the statistical mechanical description of the storage properties of a McCulloch-Pitts neuron. The technical details are reviewed extensively, with regard to the wide range of systems where the method may be applied. Parisi's partial differential equation and related differential equations are discussed, and a Green function technique is introduced for the calculation of replica averages, the key to determining the averages of physical quantities. The ensuing graph rules involve only tree graphs, as appropriate for a mean-field-like model. The lowest-order Ward-Takahashi identity is recovered analytically and is shown to lead to the Goldstone modes in continuous replica symmetry breaking phases. The need for a replica symmetry breaking theory in the storage problem of the neuron arose from the thermodynamic instability of previously given solutions. Variational forms for the neuron's free energy are derived in terms of the order parameter function x(q), for different prior distributions of synapses. Analytically in the high-temperature limit and numerically in generic cases, various phases are identified, among them one similar to the Parisi phase in the Sherrington-Kirkpatrick model. Extensive quantities like the error per pattern change only slightly with respect to the known unstable solutions, but there is a significant difference in the distribution of non-extensive quantities like the synaptic overlaps and the pattern storage stability parameter. A simulation result is also reviewed and compared to the prediction of the theory.
    Comment: 103 LaTeX pages (with REVTeX 3.0), including 15 figures (ps, epsi, eepic); accepted for Physics Reports
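
    For orientation, Parisi's partial differential equation in its standard Sherrington-Kirkpatrick form is shown below (in units where the inverse temperature is absorbed; conventions and the boundary condition differ between models, and the neuron storage problem uses its own variant):

    ```latex
    % Parisi's PDE for the local free energy f(q,h), with order parameter
    % function x(q); SK-model form in beta = 1 units (an assumption here).
    \[
      \frac{\partial f}{\partial q}
        = -\frac{1}{2}\left[\frac{\partial^{2} f}{\partial h^{2}}
           + x(q)\left(\frac{\partial f}{\partial h}\right)^{2}\right],
      \qquad
      f(1,h) = \ln\bigl(2\cosh h\bigr).
    \]
    ```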

    Utility Design for Distributed Resource Allocation -- Part I: Characterizing and Optimizing the Exact Price of Anarchy

    Game theory has emerged as a fruitful paradigm for the design of networked multiagent systems. A fundamental component of this approach is the design of agents' utility functions so that their self-interested maximization results in a desirable collective behavior. In this work we focus on a well-studied class of distributed resource allocation problems where each agent is requested to select a subset of resources with the goal of optimizing a given system-level objective. Our core contribution is the development of a novel framework to tightly characterize the worst-case performance of any resulting Nash equilibrium (the price of anarchy) as a function of the chosen agents' utility functions. Leveraging this result, we identify how to design such utilities so as to optimize the price of anarchy through a tractable linear program. This provides a priori performance certificates applicable to any existing learning algorithm capable of driving the system to an equilibrium. Part II of this work specializes these results to submodular and supermodular objectives, discusses the complexity of computing Nash equilibria, and provides multiple illustrations of the theoretical findings.
    Comment: 15 pages, 5 figures
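
    A toy brute-force illustration of why the utility design matters (this is not the paper's linear-programming framework; the equal-share utility rule, the two-resource instance, and all values below are assumptions chosen for demonstration):

    ```python
    # Enumerate pure Nash equilibria of a tiny two-agent resource-selection
    # game under an equal-share utility and compute the resulting price of
    # anarchy (optimal welfare / worst equilibrium welfare).
    import itertools

    values = {"r1": 1.0, "r2": 0.4}        # resource values (assumed)
    actions = ["r1", "r2"]                 # each agent picks one resource

    def welfare(profile):
        return sum(values[r] for r in set(profile))   # each resource counted once

    def utility(i, profile):
        r = profile[i]
        return values[r] / profile.count(r)           # equal-share rule

    profiles = list(itertools.product(actions, repeat=2))
    opt = max(map(welfare, profiles))

    def is_nash(p):
        return all(utility(i, p) >= utility(i, p[:i] + (d,) + p[i + 1:])
                   for i in range(2) for d in actions)

    equilibria = [p for p in profiles if is_nash(p)]
    poa = opt / min(map(welfare, equilibria))
    print(equilibria, poa)   # [('r1', 'r1')] 1.4
    ```

    Swapping the equal-share rule for the marginal-contribution utility makes every equilibrium optimal in this instance; choosing the utility rule that optimizes this worst-case ratio in general is exactly what the paper's linear program does.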

    Sublinear Computation Paradigm

    This open access book gives an overview of cutting-edge work on a new paradigm called the “sublinear computation paradigm,” which was proposed in the large multiyear academic research project “Foundations of Innovative Algorithms for Big Data,” which ran in Japan from October 2014 to March 2020. To handle the unprecedented explosion of big data sets in research, industry, and other areas of society, there is an urgent need to develop novel methods and approaches for big data analysis. To meet this need, innovative changes in algorithm theory for big data are being pursued. For example, polynomial-time algorithms have thus far been regarded as “fast,” but if a quadratic-time algorithm is applied to a petabyte-scale or larger big data set, problems are encountered in terms of computational resources or running time. To deal with this critical computational and algorithmic bottleneck, linear-, sublinear-, and constant-time algorithms are required. The sublinear computation paradigm is proposed here to support innovation in the big data era. A foundation of innovative algorithms has been created by developing computational procedures, data structures, and modelling techniques for big data. The project is organized into three teams that focus on sublinear algorithms, sublinear data structures, and sublinear modelling. The work has provided high-level academic research results of strong computational and algorithmic interest, which are presented in this book. The book consists of five parts: Part I, which consists of a single chapter on the concept of the sublinear computation paradigm; Parts II, III, and IV, which review results on sublinear algorithms, sublinear data structures, and sublinear modelling, respectively; and Part V, which presents application results. The information presented here will inspire researchers who work in the field of modern algorithms.
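
    A standard textbook example of the paradigm (an illustration, not material from the book): a sampling-based estimator whose running time depends only on the accuracy parameters, not on the input size n:

    ```python
    # Constant-time (in n) estimation of the fraction of items satisfying a
    # predicate, via random sampling and a Hoeffding bound.
    import math
    import random

    def estimate_fraction(data, predicate, eps=0.05, delta=0.01):
        """Estimate the fraction to within +/- eps with probability >= 1 - delta,
        using O(log(1/delta) / eps**2) samples -- independent of len(data)."""
        s = math.ceil(math.log(2 / delta) / (2 * eps ** 2))  # Hoeffding bound
        hits = sum(predicate(random.choice(data)) for _ in range(s))
        return hits / s

    data = [i % 3 for i in range(10_000_000)]
    print(estimate_fraction(data, lambda v: v == 0))   # close to 1/3
    ```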

    The 7th Conference of PhD Students in Computer Science
