
    Environmental policy and time consistency - emissions taxes and emissions trading

    The authors examine policy problems related to the use of emissions taxes and emissions trading, two market-based instruments for controlling pollution by getting regulated firms to adopt cleaner technologies. By attaching an explicit price to emissions, these instruments give firms an incentive to continually reduce their volume of emissions. Command-and-control emissions standards create incentives to adopt cleaner technologies only up to the point where the standards are no longer binding (at which point the shadow price on emissions falls to zero). But the ongoing incentives created by the market-based instruments are not necessarily right, either. Time-consistency constraints on the setting of these instruments limit the regulator's ability to set policies that lead to efficiency in adopting technology options. After examining the time-consistency properties of a Pigouvian emissions tax and of emissions trading, the authors find that: 1) If damage is linear, efficiency in adopting technologies involves either universal adoption of the new technology or universal retention of the old technology, depending on the cost of adoption. The first-best tax policy and the first-best permit-supply policy are both time-consistent under these conditions. 2) If damage is strictly convex, efficiency may require partial adoption of the new technology. In this case, the first-best tax policy is not time-consistent, and the tax rate must be adjusted after adoption has taken place (ratcheting). Ratcheting will induce an efficient equilibrium if there is a large number of firms. If there are relatively few firms, ratcheting creates excessive incentives to adopt the new technology. 3) The first-best permit-supply policy is time-consistent if there is a large number of firms. If there are relatively few firms, it may not be time-consistent, and the regulator must ratchet the supply of permits. With this policy, there are not enough incentives for firms to adopt the new technology. The results do not strongly favor one policy instrument over the other, but if the point of an emissions trading program is to increase technological efficiency, the supply of permits must be continually adjusted in response to technological change, even when damage is linear. This continual adjustment is not needed for an emissions tax when damage is linear, which may give emissions taxes an advantage over emissions trading.

    Equilibrium incentives for adopting cleaner technology under emissions pricing

    Policymakers sometimes presume that adopting a less polluting technology necessarily improves welfare. This view is generally mistaken. Adopting a cleaner technology is costly, and this cost must be weighed against the technology's benefits in reduced pollution and reduced abatement costs. The literature to date has not satisfactorily examined whether emissions pricing properly internalizes this tradeoff between costs and benefits. And if the trend toward greater use of economic instruments in environmental policy continues, as is likely, the properties of those instruments must be understood, especially for dynamic efficiency. The authors examine incentives for adopting cleaner technologies in response to Pigouvian emissions pricing in equilibrium (unlike earlier analyses, which, they contend, have been generally incomplete and at times misleading). Their results indicate that emissions pricing under the standard Pigouvian rule leads to efficient equilibrium adoption of technology under certain circumstances. They show that the equilibrium level of adopting a public innovation is efficient under Pigouvian pricing only if there are enough firms that each firm has a negligible effect on aggregate emissions. When those circumstances are not satisfied, Pigouvian pricing does not induce an efficient (social welfare-maximizing) level of innovation. The potential for inefficiency stems from two problems with the Pigouvian rule. First, the Pigouvian price does not discriminate against each unit of emissions according to its marginal damage. Second, full ratcheting of the emissions price in response to declining marginal damage as firms adopt the cleaner technology is correct ex post but distorts incentives for adopting technology ex ante. The next natural step for research is to examine second-best pricing policies or multiple-instrument policies. The challenge is to design regulatory policies that go some way toward resolving these problems yet are geared to implementation in real regulatory settings. Clearly, such policies must use more instruments than emissions pricing alone. Direct taxes or subsidies for technological change, together with emissions pricing, should give regulators more scope for creating appropriate dynamic incentives. Such instruments are already widely used: investment tax credits (for environmental research and development), accelerated depreciation (for pollution control equipment), and environmental funds (to subsidize the adoption of pollution control equipment). Such direct incentives could be excessive, however, if emissions pricing is already in place. All incentives should be coordinated.
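    The ratcheting distortion described above can be illustrated with a small numeric sketch (not the authors' model): under a Pigouvian rule the tax equals marginal damage, so with strictly convex damage, adoption lowers aggregate emissions and, with it, the ex-post tax; a firm that anticipates the ratchet sees a smaller gain from adopting. All parameter values and the linear marginal-damage schedule are illustrative assumptions.

```python
# Illustrative sketch of Pigouvian ratcheting; parameters are assumptions.
n_firms = 20
e_old, e_new = 1.0, 0.5        # emissions per firm with old / new technology
d = 0.1                        # slope of marginal damage: MD(E) = d * E

def pigouvian_tax(n_adopters):
    """Marginal damage evaluated at aggregate emissions (convex damage)."""
    total = n_adopters * e_new + (n_firms - n_adopters) * e_old
    return d * total

tax_before = pigouvian_tax(0)          # tax set before any adoption
tax_after = pigouvian_tax(n_firms)     # ex-post (ratcheted) tax

# A firm's gain from adopting is its emissions cut times the tax it expects;
# anticipating the ratchet shrinks that gain relative to a fixed tax.
saving_fixed_tax = (e_old - e_new) * tax_before
saving_ratcheted = (e_old - e_new) * tax_after
```

The comparison makes the ex-ante distortion concrete: the ratcheted tax is efficient after adoption, but a forward-looking firm discounts its adoption payoff accordingly.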

    Current data on the globular cluster Palomar 14 are not inconsistent with MOND

    Certain types of globular clusters have the very important property that the predictions for their kinematics in the Newtonian and modified Newtonian dynamics (MOND) contexts diverge. Here, we caution against the recent claim that the stellar kinematics data (using 17 stars) of the globular cluster Palomar 14 are inconsistent with MOND. We compare the observations to the theoretical predictions using a Kolmogorov-Smirnov test, which is appropriate for small samples. We find that, with the currently available data, the MOND prediction for the velocity distribution can only be excluded with a very low confidence level, clearly insufficient to claim that MOND is falsified.
    Comment: Research note accepted for publication in A&A
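    The small-sample comparison described above can be sketched as follows: compute the one-sample Kolmogorov-Smirnov statistic of the observed stellar velocities against a model line-of-sight velocity distribution, here taken to be Gaussian. The velocities, mean, and dispersion below are hypothetical stand-ins, not Palomar 14's actual data.

```python
import math

def gaussian_cdf(x, mu, sigma):
    """CDF of a Gaussian model velocity distribution."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic D = sup |F_n(x) - F(x)|."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # The empirical CDF jumps from i/n to (i+1)/n at x; check both sides.
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

# Hypothetical radial velocities (km/s) for a handful of cluster stars.
velocities = [71.2, 72.0, 72.3, 72.5, 72.6, 72.9, 73.1, 73.4, 73.6, 74.1]
d = ks_statistic(velocities, lambda v: gaussian_cdf(v, 72.8, 1.0))
```

The statistic D is then compared against small-sample critical values to decide whether the model distribution can be rejected.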

    Single nanoparticle measurement techniques

    Various single-particle measuring techniques are briefly reviewed, and the basic concepts of a new micro-SQUID technique are discussed. This technique allows measurements of the magnetization reversal of single nanometer-sized particles at low temperature. The influence of the measuring technique on the system of interest is also discussed.
    Comment: 3 pages, 3 figures, conference proceedings of MMM 1999, San Jose, 15-18 Nov., session number BE-0

    Specialization of the rostral prefrontal cortex for distinct analogy processes

    Analogical reasoning is central to learning and abstract thinking. It involves using a more familiar situation (source) to make inferences about a less familiar situation (target). According to the predominant cognitive models, analogical reasoning includes 1) generation of structured mental representations and 2) mapping based on structural similarities between them. This study used functional magnetic resonance imaging to specify the role of rostral prefrontal cortex (PFC) in these distinct processes. An experimental paradigm was designed that enabled differentiation between these processes by temporal separation of the presentation of the source and the target. Within rostral PFC, a lateral subregion was activated by the analogy task both during study of the source (before the source could be compared with a target) and when the target appeared. This may suggest that this subregion supports fundamental analogy processes such as generating structured representations of stimuli but is not specific to one particular processing stage. By contrast, a dorsomedial subregion of rostral PFC showed an interaction between task (analogy vs. control) and period (more activated when the target appeared). We propose that this region is involved in comparison or mapping processes. These results add to the growing evidence for functional differentiation between rostral PFC subregions.

    The vibrational dynamics of vitreous silica: Classical force fields vs. first-principles

    We compare the vibrational properties of model SiO_2 glasses generated by molecular-dynamics simulations using the effective force field of van Beest et al. (BKS) with those obtained when the BKS structure is relaxed using an ab initio calculation in the framework of density functional theory. We find that this relaxation significantly improves the agreement of the density of states with the experimental result. For frequencies between 14 and 26 THz the nature of the vibrational modes as determined from the BKS model is very different from the one from the ab initio calculation, showing that the interpretation of the vibrational spectra in terms of calculations using effective potentials can be very misleading.
    Comment: 7 pages of LaTeX, 4 figures
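    As a minimal sketch of how such a vibrational density of states is obtained in the harmonic approximation: diagonalize the (mass-weighted) dynamical matrix and histogram the resulting mode frequencies. The matrix below is a random positive semi-definite stand-in, not the BKS or ab initio Hessian of silica.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60                             # 3N degrees of freedom for N = 20 atoms
a = rng.normal(size=(n, n))
dyn = a @ a.T                      # symmetric PSD stand-in "dynamical matrix"

eigvals = np.linalg.eigvalsh(dyn)  # eigenvalues are squared mode frequencies
freqs = np.sqrt(np.clip(eigvals, 0.0, None))

# The vibrational DOS is a normalized histogram of the mode frequencies.
dos, edges = np.histogram(freqs, bins=12, density=True)
```

Comparing two such histograms (force field vs. ab initio) is the kind of mode-by-mode comparison the abstract describes.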

    Cosmological simulations in MOND: the cluster scale halo mass function with light sterile neutrinos

    We use our Modified Newtonian Dynamics (MOND) cosmological particle-mesh N-body code to investigate the feasibility of structure formation in a framework involving MOND and light sterile neutrinos in the mass range 11-300 eV, always assuming that Omega_{nu_s} = 0.225 for H_0 = 72 km/s/Mpc. We run a suite of simulations with variants on the expansion history, cosmological variation of the MOND acceleration constant, different normalisations of the power spectrum of the initial perturbations, and interpolating functions. Using various box sizes, but typically ones of length 256 Mpc/h, we compare our simulated halo mass functions with observed cluster mass functions and show that (i) the sterile neutrino mass must be larger than 30 eV to account for the low mass (M_{200} < 10^{14.6} solar masses) clusters of galaxies in MOND and (ii) regardless of sterile neutrino mass or any of the variations we mentioned above, it is not possible to form the correct number of high mass (M_{200} > 10^{15.1} solar masses) clusters of galaxies: there is always a considerable overproduction. This means that the ansatz of considering the weak-field limit of MOND together with a component of light sterile neutrinos to form structure from z ~ 200 fails. If MOND is the correct description of weak-field gravitational dynamics, it could mean that subtle effects of the additional fields in covariant theories of MOND render the ansatz inaccurate, or that the gravity generated by light sterile neutrinos (or by similar hot dark matter particles) is different from that generated by the baryons.
    Comment: 10 pages, 9 figures, accepted for publication in MNRAS

    Validating Semi-Analytic Models of High-Redshift Galaxy Formation using Radiation Hydrodynamical Simulations

    We use a cosmological hydrodynamic simulation calculated with Enzo and the semi-analytic galaxy formation model (SAM) GAMMA to address the chemical evolution of dwarf galaxies in the early universe. The long-term goal of the project is to better understand the origin of metal-poor stars and the formation of dwarf galaxies and the Milky Way halo by cross-validating these theoretical approaches. We combine GAMMA with the merger tree of the most massive galaxy found in the hydrodynamic simulation and compare the star formation rate, the metallicity distribution function (MDF), and the age-metallicity relationship predicted by the two approaches. We found that the SAM can reproduce the global trends of the hydrodynamic simulation. However, there are degeneracies between the model parameters, and more constraints (e.g., star formation efficiency, gas flows) need to be extracted from the simulation to isolate the correct semi-analytic solution. Stochastic processes such as bursty star formation histories and star formation triggered by supernova explosions cannot be reproduced by the current version of GAMMA. Non-uniform mixing in the galaxy's interstellar medium, coming primarily from self-enrichment by local supernovae, causes a broadening in the MDF that can be emulated in the SAM by convolving its predicted MDF with a Gaussian function having a standard deviation of ~0.2 dex. We found that the most massive galaxy in the simulation retains nearly 100% of its baryonic mass within its virial radius, which is in agreement with what is needed in GAMMA to reproduce the global trends of the simulation.
    Comment: 26 pages, 13 figures, 2 tables, submitted to ApJ (version 2)
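    The smoothing step described above can be sketched as follows: convolve the binned distribution with a unit-area Gaussian kernel of standard deviation 0.2 dex. The toy unsmoothed MDF below is illustrative, not GAMMA's actual prediction.

```python
import numpy as np

bin_width = 0.05                        # dex per [Fe/H] bin
feh = np.arange(-4.0, 0.0, bin_width)   # metallicity grid
mdf = np.exp(-0.5 * ((feh + 1.5) / 0.4) ** 2)   # toy unsmoothed MDF
mdf /= mdf.sum() * bin_width            # normalize to unit area

sigma_dex = 0.2                         # broadening from non-uniform mixing
half = int(4 * sigma_dex / bin_width)   # kernel reach: +/- 4 sigma
x = np.arange(-half, half + 1) * bin_width
kernel = np.exp(-0.5 * (x / sigma_dex) ** 2)
kernel /= kernel.sum()                  # discrete Gaussian kernel, unit sum

smoothed = np.convolve(mdf, kernel, mode="same")
```

The convolution conserves the total stellar mass in the MDF (up to edge effects) while lowering and widening its peak, which is the emulated mixing effect.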