
    The Law of Other States

    The question whether courts should consult the laws of "other states" has produced intense controversy. But in some ways, this practice is entirely routine; within the United States, state courts regularly consult the decisions of other state courts in deciding on the common law, the interpretation of statutory law, and even the meaning of state constitutions. A formal argument in defense of such consultation stems from the Condorcet Jury Theorem, which says that under certain conditions, a widespread belief, accepted by a number of independent actors, is highly likely to be correct. It follows that if a large majority of states make a certain decision based on a shared belief, and the states are well motivated, there is good reason to believe that the decision is correct. For the Jury Theorem to apply, however, three conditions must be met: states must be making judgments based on private information; states must be relevantly similar; and states must be making decisions independently, rather than mimicking one another. An understanding of these conditions offers qualified support for the domestic practice of referring to the laws of other states, while also raising some questions about the Supreme Court's reference to the laws of other nations. It is possible, however, to set out the ingredients of an approach that high courts might follow, at least if we make certain assumptions about the legitimate sources of interpretation. Existing practice, at the domestic and international levels, suggests that many courts are now following an implicit Condorcetian logic.
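    The Jury Theorem's arithmetic can be made concrete. Below is a minimal sketch in Python, assuming an illustrative competence p = 0.6 (no figure of this kind appears in the abstract): it computes the probability that a strict majority of n independent, better-than-chance deciders reaches the correct answer, which climbs toward certainty as n grows.

    from math import comb

    def majority_correct(n, p):
        """Probability that a strict majority of n independent actors,
        each correct with probability p, reaches the correct answer
        (n assumed odd, so no ties)."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    # Illustrative only: competence p = 0.6 for each independent decider.
    for n in (1, 11, 51):
        print(n, round(majority_correct(n, 0.6), 3))
    # 1 -> 0.600, 11 -> ~0.753, 51 -> ~0.93: agreement among many
    # independent, better-than-chance deciders is strong evidence.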

    Climate Change Justice

    Greenhouse gas reductions would cost some nations much more than others and benefit some nations far less than others. Significant reductions would impose especially large costs on the United States, and recent projections suggest that the United States has relatively less to lose from climate change. In these circumstances, what does justice require the United States to do? Many people believe that the United States is required to reduce its greenhouse gas emissions beyond the point that is justified by its own self-interest, simply because the United States is wealthy, and because the nations most at risk from climate change are poor. This argument from distributive justice is complemented by an argument from corrective justice: The existing 'stock' of greenhouse gas emissions owes a great deal to the past actions of the United States, and many people think that the United States should do a great deal to reduce a problem for which it is largely responsible. But there are serious difficulties with both of these arguments. Redistribution from the United States to poor people in poor nations might well be desirable, but if so, expenditures on greenhouse gas reductions are a crude means of producing that redistribution: It would be much better to give cash payments directly to people who are now poor. The argument from corrective justice runs into the standard problems that arise when collectivities, such as nations, are treated as moral agents: Many people who have not acted wrongfully end up being forced to provide a remedy to many people who have not been victimized. The conclusion is that while a suitably designed climate change agreement is in the interest of the world, a widely held view is wrong: Arguments from distributive and corrective justice fail to provide strong justifications for imposing special obligations for greenhouse gas reductions on the United States. These arguments have general implications for thinking about both distributive justice and corrective justice arguments in the context of international law and international agreements.

    Dollars and Death

    Administrative regulations and tort law both impose controls on activities that cause mortality risks, but they do so in puzzlingly different ways. Under a relatively new and still-controversial procedure, administrative regulations rely on a fixed value of a statistical life representing the hedonic loss from death. Under much older law, tort law in most states excludes hedonic loss from the calculation of damages, and instead focuses on loss of income, which regulatory policy ignores. Regulatory policy also disregards losses to dependents; tort law usually allows dependents to recover for loss of support. Regulatory policy generally treats the loss of the life of a child as equivalent to the loss of the life of an adult; tort law usually treats the loss of the life of a child as less valuable. Regulatory policy implicitly values foreigners as equal to Americans; tort law does not. We argue that both areas of law make serious mistakes in valuing life and that each should learn from the other. Regulatory policy properly focuses on hedonic loss from death, and tort law should adopt this approach. But regulatory policy should imitate tort law's individualized approach to valuing the loss from death, including its inclusion of losses to dependents. If these changes were made, tort awards would be more uniform and predictable, and regulations would be less uniform and more stringent. In addition, average tort damages for wrongful death would be at least twice as high as they are today. With respect to dollar judgments for mortality risks, a pervasive issue is how to combine accuracy with administrability and predictability; both bodies of law could do far better on this score.
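    The claimed effect on award sizes is ultimately simple arithmetic. A minimal sketch, with every dollar figure a hypothetical assumption rather than a number from the article: an income-based award discounts a stream of lost earnings, while a hedonic award prices the statistical life directly.

    def present_value(annual, years, rate):
        """Present value of a constant annual income stream discounted at `rate`."""
        return sum(annual / (1 + rate) ** t for t in range(1, years + 1))

    # Hypothetical figures for illustration only; the abstract gives none.
    income_award = present_value(annual=50_000, years=30, rate=0.03)  # ~$0.98M
    vsl_award = 7_000_000  # order of magnitude of a regulatory VSL
    print(f"income-based tort award:   ${income_award:,.0f}")
    print(f"VSL-based (hedonic) award: ${vsl_award:,.0f}")
    # The hedonic figure is several times the income-based one, consistent
    # with the claim that average wrongful-death awards would at least double.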

    The sensitivity of r-process nucleosynthesis to the properties of neutron-rich nuclei

    About half of the heavy elements in the Solar System were created by rapid neutron capture, or r-process, nucleosynthesis. In the r-process, heavy elements are built up via a sequence of neutron captures and beta decays in which an intense neutron flux pushes material out towards the neutron drip line. The nuclear network simulations used to test potential astrophysical scenarios for the r-process therefore require nuclear physics data (masses, beta decay lifetimes, neutron capture rates, fission probabilities) for thousands of nuclei far from stability. Only a small fraction of this data has been experimentally measured. Here we discuss recent sensitivity studies that aim to determine the nuclei whose properties are most crucial for r-process calculations.
    Comment: 8 pages, 4 figures, submitted to the Proceedings of the Fifth International Conference on Fission and Properties of Neutron-Rich Nuclei (ICFN5)
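    The logic of such a sensitivity study can be sketched compactly. In the toy Python code below, `run_network` is a hypothetical stand-in for a full nucleosynthesis network solver (no such function is named in the paper); the sketch perturbs one nuclear property at a time and scores the resulting change in the final abundance pattern.

    # Toy sketch of an r-process sensitivity study: perturb one nuclear
    # property at a time and score the global change in final abundances.
    # `run_network` is a hypothetical stand-in for a full network solver
    # evolving thousands of nuclei; nothing here is the actual study code.

    def sensitivity(run_network, inputs, nucleus, factor=2.0):
        """Summed absolute change (percent) in final mass fractions X(A)
        when one nucleus's property (e.g., a beta-decay rate) is scaled."""
        x_base = run_network(inputs)                   # {A: X(A)} baseline
        perturbed = {**inputs, nucleus: inputs[nucleus] * factor}
        x_pert = run_network(perturbed)
        return 100.0 * sum(abs(x_pert[a] - x_base[a]) for a in x_base)

    # Ranking nuclei by this score singles out the masses, lifetimes, and
    # capture rates whose uncertainties matter most to the calculation.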

    Source-oriented model for air pollutant effects on visibility

    A source-oriented model for air pollutant effects on visibility has been developed that can compute light scattering, light extinction, and estimated visual range directly from data on gas phase and primary particle phase air pollutant emissions from sources. The importance of such a model is that it can be used to compute the effect of emission control proposals on visibility-related parameters in advance of the adoption of such control programs. The model has been assembled by embedding several aerosol process modules within the photochemical trajectory model previously developed for aerosol nitrate concentration predictions by Russell et al. [1983] and Russell and Cass [1986]. These modules describe the size distribution and chemical composition of primary particle emissions, the speciation of organic vapor emissions, atmospheric chemical reactions, transport of condensible material between the gas and the particle phases, fog chemistry, dry deposition, and atmospheric light scattering and light absorption. Model predictions have been compared to observed values using 48-hour trajectories arriving at Claremont, California, at each hour of August 28, 1987, during the Southern California Air Quality Study. The predicted fine particle concentration averages 62 μg m^(−3) compared to an observed value of 61 μg m^(−3), while predicted PM_(10) concentrations average 102 μg m^(−3) compared to an observed average of 97 μg m^(−3). The size distribution and chemical composition predictions for elemental carbon, sulfate, and sodium ion agree with observations to within plus or minus a few micrograms per cubic meter, while ammonium and nitrate concentrations are underpredicted by the base case model by 3 to 7 μg m^(−3) on average. Light-scattering coefficient values are calculated from the predicted aerosol size distribution and refractive index, and the model predictions agree with measured values on average to within 19%. The advantages and limitations of the modeling procedure are discussed.
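    The final step of such a model, turning a predicted aerosol burden into a visual range, follows standard optics. A minimal sketch, assuming illustrative species concentrations and dry mass scattering/absorption efficiencies (none of these numbers come from the paper): extinction is reconstructed as a weighted sum of species concentrations, and visual range follows from the Koschmieder relation V = 3.912 / b_ext.

    def extinction_per_m(conc_ug_m3, eff_m2_g):
        """Extinction coefficient (1/m) reconstructed from species mass
        concentrations (ug/m^3) and mass efficiencies (m^2/g)."""
        return sum(conc_ug_m3[s] * eff_m2_g[s] for s in conc_ug_m3) * 1e-6

    # Illustrative inputs in the spirit of reconstruction formulas.
    conc = {"sulfate": 8.0, "nitrate": 15.0, "organics": 12.0, "EC": 3.0}
    eff = {"sulfate": 3.0, "nitrate": 3.0, "organics": 4.0, "EC": 10.0}

    b_ext = extinction_per_m(conc, eff)       # 1.47e-4 1/m here
    visual_range_km = 3.912 / b_ext / 1000.0  # Koschmieder relation
    print(f"b_ext = {b_ext:.2e} 1/m -> visual range ~ {visual_range_km:.0f} km")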

    Sensitivity studies for r-process nucleosynthesis in three astrophysical scenarios

    In rapid neutron capture, or r-process, nucleosynthesis, heavy elements are built up via a sequence of neutron captures and beta decays that involves thousands of nuclei far from stability. Though we understand the basics of how the r-process proceeds, its astrophysical site is still not conclusively known. The nuclear network simulations we use to test potential astrophysical scenarios require nuclear physics data (masses, beta decay lifetimes, neutron capture rates, fission probabilities) for all of the nuclei on the neutron-rich side of the nuclear chart, from the valley of stability to the neutron drip line. Here we discuss recent sensitivity studies that aim to determine which individual pieces of nuclear data are the most crucial for r-process calculations. We consider three types of astrophysical scenarios: a traditional hot r-process, a cold r-process in which the temperature and density drop rapidly, and a neutron star merger trajectory.
    Comment: 8 pages, 4 figures, submitted to the Proceedings of the International Nuclear Physics Conference (INPC) 201
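    The three scenarios differ chiefly in how temperature and density evolve with time. A minimal parameterized sketch, with time scales and initial values as illustrative assumptions rather than the trajectories actually used in the study: both quantities decay exponentially, an order of magnitude faster in the 'cold' case.

    import math

    def trajectory(t, T9_0, rho_0, tau_T, tau_rho):
        """Illustrative outflow: temperature (GK) and density (g/cm^3)
        decaying exponentially with time t (seconds)."""
        return T9_0 * math.exp(-t / tau_T), rho_0 * math.exp(-t / tau_rho)

    # Illustrative parameters only: a hot outflow stays warm long enough to
    # maintain (n,gamma)-(gamma,n) equilibrium; a cold outflow does not.
    hot = dict(T9_0=1.5, rho_0=1e6, tau_T=0.5, tau_rho=0.2)
    cold = dict(T9_0=1.5, rho_0=1e6, tau_T=0.05, tau_rho=0.02)

    for t in (0.0, 0.1, 0.5):
        print(t, trajectory(t, **hot), trajectory(t, **cold))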

    Modeling the Concentrations of Gas-Phase Toxic Organic Air Pollutants: Direct Emissions and Atmospheric Formation

    An Eulerian photochemical air quality model is described for the prediction of the atmospheric transport and chemical reactions of gas-phase toxic organic air pollutants. Model performance was examined in the Los Angeles, CA, area over the period August 27-28, 1987. The organic compounds were drawn from a list of 189 species selected for control as hazardous air pollutants in the Clean Air Act amendments of 1990. The species considered include benzene, various alkylbenzenes, phenol, cresols, 1,3-butadiene, acrolein, formaldehyde, acetaldehyde, and perchloroethylene among others. It is found that photochemical generation contributes significantly to formaldehyde, acetaldehyde, acetone, and acrolein concentrations for the 2-day period studied. Phenol concentrations are dominated by direct emissions, despite the existence of a pathway for atmospheric formation from benzene oxidation. The finding that photochemical production can be a major contributor to the total concentrations of some toxic organic species implies that control programs for those species must consider more than just direct emissions.
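    The paper's distinction between direct emissions and atmospheric formation can be illustrated with a single-box toy model. In the sketch below, the emission, photochemical production, and loss rates are illustrative assumptions, not values from the study; the directly emitted and photochemically formed contributions to one species are integrated separately so their shares of the total can be compared.

    # Toy single-box model separating a toxic organic's concentration into
    # a directly emitted part and a photochemically formed part.
    # All parameter values are illustrative, not taken from the study.

    dt, hours = 60.0, 48     # 1-min steps over a 2-day episode
    emission = 2.0e-4        # direct source term, ug m^-3 s^-1
    production = 3.0e-4      # photochemical formation, ug m^-3 s^-1
    k_loss = 1.0e-4          # first-order loss (reaction, deposition), s^-1

    c_direct = c_formed = 0.0
    for _ in range(int(hours * 3600 / dt)):
        c_direct += (emission - k_loss * c_direct) * dt
        c_formed += (production - k_loss * c_formed) * dt

    total = c_direct + c_formed
    print(f"direct: {c_direct:.1f}, formed: {c_formed:.1f} ug/m^3 "
          f"({100 * c_formed / total:.0f}% from atmospheric formation)")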

    Acquisition of acid vapor and aerosol concentration data for use in dry deposition studies in the South Coast Air Basin

    An atmospheric monitoring network was operated throughout the South Coast Air Basin in the greater Los Angeles area during the year 1986. The primary objective of this study was to measure the spatial and temporal concentration distributions of atmospheric gas-phase and particulate-phase acids and bases in support of the California Air Resources Board's dry deposition research program. Gaseous pollutants measured include HNO_3, HCl, HF, HBr, formic acid, acetic acid and ammonia. The chemical composition of the airborne particulate matter complex was examined in three size ranges: fine particles (less than 2.2 μm aerodynamic diameter, AD), PM_(10) (less than 10 μm AD) and total particles (no size discrimination). Upwind of the air basin at San Nicolas Island, gas-phase acid concentrations are very low, averaging 0.3 μg m^(-3) (0.1 ppb) for HNO_3, 0.8 μg m^(-3) for HCl, 0.13 μg m^(-3) for HF, and 2.6 μg m^(-3) for formic acid. Annual average HNO_3 concentrations ranged from 3.1 μg m^(-3) (1.2 ppb) near the Southern California coast to 6.9 μg m^(-3) (2.7 ppb) at an inland site in the San Gabriel Mountains. HCl concentrations within the South Coast Air Basin averaged from 0.8 μg m^(-3) to 1.8 μg m^(-3) during the year 1986. Long-term average HF concentrations within the air basin are very low, in the range from 0.14 to 0.22 μg m^(-3) between monitoring sites. Long-term average formic acid concentrations are lowest near the coastline (5.0 μg m^(-3) at Hawthorne), with the highest average concentrations (10.7 μg m^(-3)) observed inland at Upland. Ammonia concentrations at low elevation within the South Coast Air Basin average from 2.1 μg m^(-3) to 4.4 μg m^(-3) at all sites except Rubidoux. Rubidoux is located directly downwind of a large ammonia source created by dairy farming and other agricultural activities in the Chino area. Ammonia concentrations at Rubidoux averaged 30 μg m^(-3) during 1986, a factor of approximately 10 higher than elsewhere in the air basin. Annual average PM_(10) mass concentrations within the South Coast Air Basin ranged from 47.0 μg m^(-3) along the coast to 87.4 μg m^(-3) at Rubidoux, the farthest inland monitoring site. Five major aerosol components (carbonaceous material, NO_3^-, SO_4^(2-), NH_4^+ and soil-related material) accounted for greater than 80% of the annual average PM_(10) mass concentration at all on-land monitoring stations. A peak 24-h average PM_(10) mass concentration of 299 μg m^(-3) was observed at Rubidoux during 1986. That value is a factor of 2 higher than the federal 24-h average PM_(10) concentration standard, and a factor of 6 higher than the State of California PM_(10) standard. More than 40% of the PM_(10) aerosol mass measured at Rubidoux during that peak day event consisted of aerosol nitrates plus ammonium ion. Reaction of gaseous nitric acid to form aerosol nitrates was a major contributor to the high PM_(10) concentrations observed in the Rubidoux area near Riverside, California.
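    The paired units quoted above (e.g., 3.1 μg m^(-3) ≈ 1.2 ppb for HNO_3) follow from the standard conversion at about 25 °C and 1 atm, where one mole of gas occupies roughly 24.45 liters. A quick sketch that reproduces the abstract's numbers:

    MOLAR_VOLUME_L = 24.45  # liters per mole of ideal gas at ~25 C, 1 atm

    def ugm3_to_ppb(ug_per_m3, molar_mass_g):
        """Convert a gas-phase concentration from ug/m^3 to ppb (by volume)."""
        return ug_per_m3 * MOLAR_VOLUME_L / molar_mass_g

    # HNO3 molar mass = 63 g/mol; concentrations from the abstract above.
    for c in (0.3, 3.1, 6.9):
        print(f"{c} ug/m^3 -> {ugm3_to_ppb(c, 63.0):.1f} ppb")
    # 0.3 -> 0.1 ppb, 3.1 -> 1.2 ppb, 6.9 -> 2.7 ppb, matching the text.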