
    Carbon Leakage with International Technology Spillovers

In this paper we study the effect of international technology spillovers on carbon leakage. We first develop and analyse two simple competing models of carbon leakage. The first model represents the pollution haven hypothesis: it focuses on international competition between firms that produce energy-intensive goods. The second model highlights the role of a globally integrated carbon-energy market. We derive formulas for the leakage rates in both models and, through a meta-analysis, show that the second model best captures the major mechanisms reported in the computable general equilibrium (CGE) literature on carbon leakage. We extend this model with endogenous energy-saving technology and international technology spillovers; this feature is shown to decrease carbon leakage. We then build the endogenous energy-saving technology into a large CGE model and verify that the results from the formal model carry over. Carbon leakage becomes negative for moderate levels of international technology spillover.
    Keywords: Carbon Leakage, Climate Policy, Induced Technological Change, Trade and Environment
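    For reference, the leakage rate discussed in this abstract is conventionally defined as the policy-induced rise in foreign emissions relative to the domestic reduction; a minimal LaTeX statement of this standard definition (the paper's model-specific formulas are not reproduced here):

```latex
% Conventional definition of the carbon leakage rate L: the rise in
% foreign (non-abating) emissions induced by a domestic climate policy,
% divided by the fall in domestic (abating) emissions.
\[
  L \;=\; \frac{\Delta E_{\mathrm{foreign}}}{-\,\Delta E_{\mathrm{domestic}}}
\]
% 0 < L < 1: partial offset of the domestic reduction;
% L < 0: negative leakage, the case the abstract reports under
% moderate international technology spillovers.
```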

    On the Complexity of Spill Everywhere under SSA Form

Compilation for embedded processors can be either aggressive (time-consuming cross-compilation) or just-in-time (embedded and usually dynamic). The heuristics used in dynamic compilation are highly constrained by limited resources, time and memory in particular. Recent results on the SSA (static single assignment) form open promising directions for the design of new register allocation heuristics for embedded systems, and especially for embedded compilation. In particular, heuristics based on tree scan with two separate phases -- one for spilling, then one for coloring/coalescing -- seem good candidates for designing memory-friendly, fast, and competitive register allocators. Still, also because of its side effect on power consumption, minimizing the load and store overhead (the spilling problem) is an important issue. This paper provides an exhaustive study of the complexity of the ``spill everywhere'' problem in the context of the SSA form. Unfortunately, contrary to our initial hopes, many of the questions we raise lead to NP-completeness results. We identify some polynomial cases, but they are impractical in a JIT context. Nevertheless, they can give hints for simplifying formulations in the design of aggressive allocators.
    Comment: 10 pages
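    To make the problem concrete: under a "spill everywhere" discipline, a variable chosen for spilling lives in memory for its whole lifetime, so every use is preceded by a load and every definition followed by a store. A minimal Python sketch of this rewriting (the instruction encoding and helper names are illustrative, not from the paper; the paper's hard question is which variables to spill, not the rewriting itself):

```python
# Minimal "spill everywhere" rewriting: a spilled variable lives in
# memory for its entire lifetime, so every definition is followed by a
# store and every use is preceded by a load into a fresh temporary.

def spill_everywhere(code, spilled):
    """code: list of (defs, uses) pairs, one per instruction.
    spilled: set of variable names chosen for spilling."""
    out, fresh = [], 0
    for defs, uses in code:
        reloaded = []
        for v in uses:
            if v in spilled:
                tmp = f"{v}.{fresh}"
                fresh += 1
                out.append(("load", tmp, v))   # reload just before the use
                reloaded.append(tmp)
            else:
                reloaded.append(v)
        out.append(("op", defs, reloaded))
        for v in defs:
            if v in spilled:
                out.append(("store", v))       # spill just after the def
    return out

# Example: spilling 'a' in  a = ...; b = a + a
for insn in spill_everywhere([(["a"], []), (["b"], ["a", "a"])], {"a"}):
    print(insn)
```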

    On the Federal Excise Tax Exemption for U.S. Gasoline Exports


    The Contemporary Tax Journal Volume 5, No. 1 – Spring/Summer 2015


    The Supersonic Performance of High Bypass Ratio Turbofan Engines with Fixed Conical Spike Inlets

The objective of this study is to understand how to integrate conical-spike external compression inlets with high bypass ratio turbofan engines for application on future supersonic airliners. Many performance problems arise when inlets are matched with engines, as inlets come with a plethora of limitations and losses that greatly affect an engine's ability to operate. These limitations and losses include drag due to inlet spillage, bleed ducts, and bypass doors, as well as the maximum and minimum values of mass flow ratio at each Mach number that define when the engine can no longer function. A collection of tools was developed that allows one to calculate the raw propulsion data of an engine, match the propulsion data with an inlet, calculate the aerodynamic data of an aircraft, and combine the propulsion and aerodynamic data to calculate the installed performance of the entire propulsion system. Several trade studies were performed to test how changing specific engine design parameters affected propulsion performance. These trade studies showed that high bypass turbofan engines can be paired with external compression inlets and retain effective supersonic performance. Several fuel-efficient engines of differing bypass ratios were developed through the trade studies and combined with the aerodynamic data of the Concorde to assess the aircraft-level performance of a supersonic airliner using these engines. It was found that none of the engines tested came close to matching the supersonic performance that the Concorde achieved with its own turbojet engines. The results suggest several possible reasons why these turbofan engines were unable to perform effectively on the Concorde, and they indicate that further tests and trade studies are needed to determine whether high bypass turbofan engines can be developed for effective use with supersonic airliners at all.
    Dissertation/Thesis. Supplementary materials: run and text files from the propulsion simulations performed in NPSS; input and output files used in EDET to generate the aerodynamic data of the Concorde; five-column propulsion data of the tested engines after inlet matching.
    Masters Thesis, Aerospace Engineering, 201
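    For context, the mass flow ratio that bounds the engine's operating range at each Mach number is conventionally defined from the captured stream-tube area; a standard textbook statement (not a formula taken from the thesis):

```latex
% Standard inlet mass flow ratio: captured free-stream tube area A_0
% over the inlet capture (cowl) area A_c. Flow the engine cannot
% swallow spills around the cowl, producing spillage drag.
\[
  \mathrm{MFR} \;=\; \frac{\dot m}{\rho_0 V_0 A_c} \;=\; \frac{A_0}{A_c},
  \qquad
  \mathrm{MFR}_{\min}(M_0) \;\le\; \mathrm{MFR} \;\le\; \mathrm{MFR}_{\max}(M_0).
\]
```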

    Product Market Integration and Income Taxation: Distortions and Gains from Trade

It is widely perceived that globalization is a threat to tax-financed public sector activities. The argument is that public activities (public consumption and transfers) financed by income taxes may distort labour markets and cause higher wages, and thus a loss of competitiveness. If the importance of the latter effect is reinforced by globalization, it is inferred that the marginal costs of public funds increase and a retrenchment of the public sector follows. We consider this issue in a Ricardian trade model in which production and specialization structures are endogenous. Even though income taxation unambiguously worsens wage competitiveness, it does not follow that tax distortions or the marginal costs of public funds increase with product market integration. The reason is that gains from trade tend to reduce both. Moreover, non-cooperative fiscal policies do not have a bias towards retrenchment, due to a positive terms-of-trade effect from taxation.
    Keywords: labour taxation, open economy, policy spill-over, marginal costs of public funds
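    As background, the marginal cost of public funds referenced here is standardly defined as the welfare cost of raising one extra unit of tax revenue; a generic textbook formulation (the paper's model-specific expression is not reproduced):

```latex
% Generic definition: the marginal cost of public funds (MCF) is the
% marginal welfare loss per marginal unit of revenue raised with a
% distortionary tax t; distortions push MCF above 1.
\[
  \mathrm{MCF} \;=\; \frac{-\,\partial W/\partial t}{\partial R/\partial t},
\]
% where W is private welfare and R is tax revenue. The paper's point
% is that gains from trade can hold MCF down even as wage
% competitiveness worsens.
```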

    Bowtie models as preventive models in maritime safety

[Translated from Catalan:] This work arose from a proposal by Dr. Rodrigo de Larrucea, who has just published an ambitious book on Maritime Safety. As he himself says, the subject "far exceeds the capabilities of the author", so in my case this is even more true. One can aspire, however, to make a modest contribution to the study and dissemination of maritime safety culture, which only appears in the news when occasional disasters occur. In any case, the professor proposed that I focus on Bowtie Models, which integrate the tree of causes and the tree of consequences (in English, Fault Tree Analysis, FTA, and Event Tree Analysis, ETA). Certainly, other methodologies and approaches exist (and his book presents several of them in summary form), but their conceptual simplicity and the possibility of generalizing and integrating results made them a good bet. Thus, after a phase of reflection and information gathering, I decided to present a very general bowtie model that accommodates the main causes of accidents (environmental factors, human error and mechanical failure), allowing also for combinations of causes. However, when exploiting this model there is the great difficulty of assigning a probability of occurrence, a number between 0 and 1, to each branch. Occurrence probabilities are usually small and therefore hard to estimate. Every accident is different, great catastrophes are few, and each accident is already studied exhaustively (the more serious, the more exhaustively). Another factor that hinders the estimation of failure probabilities is the constant evolution of the maritime world, from the technical, training and legal points of view, and even generationally, since each generation of seafarers is different. Efforts are therefore focused on increasing safety, though always with an eye on costs. Thus, I have presented a bowtie model for its didactic and graphic value but without going into numerical detail, which, if appropriate, I will refine and internalize in professional practice. In this work I have also tried not to stay entirely on the side of theory (it is well known that if everything is done right, everything turns out perfectly, etc.) but to present in some detail two well-known maritime accident cases: the tanker Exxon Valdez in 1989 and the ferry Estonia in 1994, among others mentioned. These cases are somewhat old, but they contributed to raising the safety culture to the level we enjoy today, at least in Western countries. For safety, as Rodrigo de Larrucea notes, "is an attitude and is never fortuitous; it is always the result of determined will, sincere effort, intelligent direction and careful execution. Without a doubt, it is always the best alternative."

    The work has been inspired in its initial aspects by the book of my tutor Jaime Rodrigo de Larrucea, which presents a state of the art of all maritime aspects related to safety. Evidently, since it covers all the topics, it cannot go deep into every one of them. It was my opportunity to go deeper into the Bowtie Model, but in the end I have also covered a wide variety of topics. Later, when I began to study the subject, I realized that people in the maritime world usually do not understand statistics to a great extent. Everybody is concerned about safety, but few nautical students take a probabilistic approach to accidents. For this it is extremely important to define the population under study: in our case, SOLAS ships. Also, during my time at Riga, I have been very interested in the most diverse accidents, some of them studied during the courses at Barcelona. I have seen that it is difficult to model accidents mathematically, since each one has different characteristics and angles, and surely no two are equal. Finally, it was agreed that I should concentrate on the Bowtie Model, which is not very complex from a statistical point of view: it is simply a fault tree of events combined with a tree of effects. I present some examples in Chapter 2. The difficulty I point out is in trying to estimate the probabilities of occurrence of events that are unusual. We concentrated on major accidents, those that may cause victims or heavy losses. Then, for the sake of generality, in Chapter 4, I have divided the causes into 4 broad classes: natural hazards, human factors, mechanical failure, and attacks (piracy and terrorism). The last perhaps should not be included alongside the others, since acts of terrorism and piracy are not accidents; but since there is an important code dedicated to preventing security threats, the ISPS, it is an example of the design of barriers to prevent an undesired event (although it mainly gives guidelines to be followed by States, port terminals and shipping companies). I have presented a detailed study of the tragedy of the Estonia, showing how a mechanical failure triggered the loss of the ferry, by its nature a delicate ship, though there were other factors such as poor maintenance and heavy seas. In the next chapter, certain characteristics of error chains are analyzed. Finally, conclusions are drawn, offering a fairly optimistic view of the safety (and security) culture in the Western world, one that may not easily permeate the entire world due to the associated costs.
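    To illustrate the quantitative step the author describes as difficult: once each branch of the fault tree carries a probability between 0 and 1, independent basic events combine through the standard AND/OR gate rules. A minimal Python sketch of this standard calculation (the cause classes follow the thesis, but the probability values are invented for illustration):

```python
from math import prod

# Standard fault-tree gate rules for independent basic events:
# AND gate: all inputs must occur -> product of probabilities.
# OR gate: any one input suffices -> complement of none occurring.

def p_and(probs):
    return prod(probs)

def p_or(probs):
    return 1.0 - prod(1.0 - p for p in probs)

# Illustrative (invented) annual probabilities for three of the broad
# cause classes in the thesis bowtie: environmental factors, human
# error, mechanical failure.
p_env, p_human, p_mech = 1e-3, 5e-3, 2e-3

# Top event on the left side of the bowtie: any single cause class
# can trigger the accident.
p_top = p_or([p_env, p_human, p_mech])
print(f"P(top event) ~ {p_top:.2e}")  # about 8.0e-03
```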

    Band alignment at metal/ferroelectric interfaces: insights and artifacts from first principles

Based on recent advances in first-principles theory, we develop a general model of the band offset at metal/ferroelectric interfaces. We show that, depending on the polarization of the film, a pathological regime might occur where the metallic carriers populate the energy bands of the insulator, making it metallic. As the most common approximations of density functional theory are affected by a systematic underestimation of the fundamental band gap of insulators, this scenario is likely to be an artifact of the simulation. We provide a number of rigorous criteria, together with extensive practical examples, to systematically identify this problematic situation in the calculated electronic and structural properties of ferroelectric systems. We discuss our findings in the context of earlier literature studies, where the issues described in this work have often been overlooked. We also discuss formal analogies with the physics of polarity compensation at LaAlO3/SrTiO3 interfaces, and suggest promising avenues for future research.
    Comment: 29 pages, 23 figures
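    As a rough statement of the failure mode described here (our schematic notation, not an equation from the paper): the regime becomes pathological once the polarization-induced band bending pushes the insulator's conduction-band minimum below the metal's Fermi level, and the Kohn-Sham gap underestimation makes this crossing happen artificially early:

```latex
% Schematic criterion (our notation, not from the paper): with n-type
% Schottky barrier \phi_n at the interface and depolarizing field E_d
% across a film of thickness d, electrons populate the ferroelectric's
% conduction band once the band bending exceeds the barrier:
\[
  e\,E_d\,d \;\gtrsim\; \phi_n,
  \qquad
  \phi_n \;=\; E_{\mathrm{CBM}} - E_F .
\]
% DFT's underestimated gap lowers E_CBM and hence \phi_n, so this
% condition can be met in the simulation but not in the real material.
```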

    Adaptable register file organization for vector processors

Today there are two main vector processor design trends. On the one hand, we have vector processors designed for long vector lengths, such as the SX-Aurora TSUBASA, which implements vector lengths of 256 elements (16384 bits). On the other hand, we have vector processors designed for short vectors, such as the Fujitsu A64FX, which implements the 512-bit (8-element) ARM SVE. However, short vector designs are the most widely adopted in modern chips. This is because, to achieve high performance with very high efficiency, applications executed on long vector designs must feature abundant data-level parallelism (DLP), which limits the range of applications. On the contrary, short vector designs are compatible with a larger range of applications. In fact, in the beginning, long vector implementations were focused on the HPC market, while short vector implementations were conceived to improve performance in multimedia tasks. However, those short vector extensions have evolved to better fit the needs of modern applications. In that sense, we believe that this compatibility with a large range of applications featuring high, medium and low DLP is one of the main reasons behind the trend of building parallel machines with short vectors. Short vector designs are area efficient and are "compatible" with applications having long vectors; however, these short vector architectures are not as efficient as longer vector designs when executing high-DLP code. In this thesis, we propose a novel vector architecture that combines the area and resource efficiency characterizing short vector processors with the ability to handle large-DLP applications, as allowed in long vector architectures. In this context, we present AVA, an Adaptable Vector Architecture designed for short vectors (MVL = 16 elements), capable of reconfiguring the MVL when executing applications with abundant DLP, achieving performance comparable to designs for long vectors. The design is based on three complementary concepts. First, a two-stage renaming unit based on a new type of register termed the Virtual Vector Register (VVR), which provides an intermediate mapping between the conventional logical registers and the physical and memory registers. In the first stage, logical registers are renamed to VVRs, while in the second stage, VVRs are renamed to physical registers. Second, a two-level VRF that supports 64 VVRs whose MVL can be configured from 16 to 128 elements. The first level corresponds to the VVRs mapped to the physical registers held in the 8KB Physical Vector Register File (P-VRF), while the second level represents the VVRs mapped to memory registers held in the Memory Vector Register File (M-VRF). While the baseline configuration (MVL = 16 elements) holds all the VVRs in the P-VRF, larger MVL configurations hold a subset of the total VVRs in the P-VRF and map the remaining part to the M-VRF. Third, we propose a novel two-stage vector issue unit. In the first stage, the second level of mapping between the VVRs and physical registers is performed, while instruction issue to the execution units is managed in the second stage. This thesis also presents a set of tools for designing and evaluating vector architectures. First, a parameterizable vector architecture model implemented on the gem5 simulator to evaluate novel ideas in vector architectures. Second, a vector architecture model implemented on the McPAT framework to evaluate power and area metrics.
Finally, the RiVEC benchmark suite, a collection of ten vectorized applications from different domains, focused on benchmarking vector microarchitectures.
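    To make the renaming scheme concrete, here is a minimal Python sketch of the two-stage mapping the abstract describes: logical registers rename to VVRs, and each VVR then resolves either to a slot in the physical register file (P-VRF) or to a memory-backed slot (M-VRF). The class, table sizes and slot-assignment policy are illustrative simplifications, not the thesis design itself:

```python
# Minimal sketch of a two-stage rename with Virtual Vector Registers.
# Stage 1: logical vector register -> VVR.
# Stage 2: VVR -> physical slot in the P-VRF, or spill slot in the M-VRF.

P_VRF_SLOTS = 32   # illustrative number of physical-register slots
NUM_VVRS    = 64   # the design supports 64 VVRs

class RenameUnit:
    def __init__(self):
        self.free_vvrs = list(range(NUM_VVRS))
        self.logical_to_vvr = {}      # stage-1 table
        self.vvr_to_slot = {}         # stage-2 table: ('P'|'M', index)
        self.free_pslots = list(range(P_VRF_SLOTS))

    def rename_def(self, logical_reg):
        vvr = self.free_vvrs.pop(0)   # stage 1: allocate a fresh VVR
        self.logical_to_vvr[logical_reg] = vvr
        if self.free_pslots:          # stage 2: prefer the fast P-VRF
            self.vvr_to_slot[vvr] = ("P", self.free_pslots.pop(0))
        else:                         # overflow lives in the M-VRF
            self.vvr_to_slot[vvr] = ("M", vvr)
        return vvr

    def resolve_use(self, logical_reg):
        vvr = self.logical_to_vvr[logical_reg]
        return vvr, self.vvr_to_slot[vvr]

ru = RenameUnit()
ru.rename_def("v3")
print(ru.resolve_use("v3"))   # e.g. (0, ('P', 0))
```

    With a larger configured MVL, fewer VVRs fit in the fixed-size P-VRF, so more of them resolve to M-VRF slots; this is the mechanism that lets the architecture trade physical storage for longer vectors.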

    Housing in South-Eastern Europe
