
    Perspectives on subnational carbon and climate footprints: A case study of Southampton, UK

    Sub-national governments are increasingly interested in local-level climate change management. Carbon footprints (CO2 and CH4) and climate footprints (Kyoto Basket GHGs), effectively single-impact-category LCA metrics for global warming potential, provide an opportunity to develop models that facilitate effective mitigation. Three approaches are available for the footprinting of sub-national communities. Territorial approaches, which focus on production emissions within the geo-political boundary, are useful for highlighting local emission sources but do not reflect the transboundary nature of sub-national community infrastructures. Transboundary approaches, which extend territorial footprints by including key cross-boundary flows of materials and energy, are more representative of community structures and processes, but there are concerns regarding comparability between studies. The third option, the consumption-based approach, considers the global GHG emissions that result from final consumption (households, governments, and investment). Using a case study of Southampton, UK, this chapter develops the data and methods required for sub-national territorial, transboundary, and consumption-based carbon and climate footprints. The results and implications of each footprinting perspective are discussed in the context of emerging international standards. The study clearly shows that the carbon footprint (CO2 and CH4 only) offers a low-cost, low-data, universal metric for the measurement and subsequent management of anthropogenic GHG emissions.

    Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992

    Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. An SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer-class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.

    Snow spectral albedo at Summit, Greenland: measurements and numerical simulations based on physical and chemical properties of the snowpack

    The broadband albedo of surface snow is determined both by the near-surface profile of the physical and chemical properties of the snowpack and by the spectral and angular characteristics of the incident solar radiation. Simultaneous measurements of the physical and chemical properties of snow were carried out at Summit Camp, Greenland (72°36´ N, 38°25´ W, 3210 m a.s.l.) in May and June 2011, along with spectral albedo measurements. One of the main objectives of the field campaign was to test our ability to predict snow spectral albedo by comparing the measured albedo to the albedo calculated with a radiative transfer model, using measured snow physical and chemical properties. To achieve this goal, we made daily measurements of the snow spectral albedo in the range 350–2200 nm and recorded snow stratigraphic information down to roughly 80 cm. The snow specific surface area (SSA) was measured using the DUFISSS instrument (DUal Frequency Integrating Sphere for Snow SSA measurement, Gallet et al., 2009). Samples were also collected for chemical analyses including black carbon (BC) and dust, to evaluate the impact of light absorbing particulate matter in snow. This is one of the most comprehensive albedo-related data sets combining chemical analysis, snow physical properties and spectral albedo measurements obtained in a polar environment. The surface albedo was calculated from density, SSA, BC and dust profiles using the DISORT model (DIScrete Ordinate Radiative Transfer, Stamnes et al., 1988) and compared to the measured values. Results indicate that the energy absorbed by the snowpack through the whole spectrum considered can be inferred within 1.10%. This accuracy is only slightly better than that which can be obtained considering pure snow, meaning that the impact of impurities on the snow albedo is small at Summit. 
In the near infrared, minor deviations in albedo of up to 0.014 can be attributed to the accuracy of radiation and SSA measurements and to surface roughness, whereas deviations of up to 0.05 can be explained by the spatial heterogeneity of the snowpack at small scales, the assumption of spherical snow grains made for the DISORT simulations, and the vertical resolution of measurements of surface layer physical properties. At 1430 nm and around 1800 nm the discrepancies are larger and independent of the snow properties; we propose that they are due to errors in the ice refractive index at these wavelengths. This work contributes to the development of physically based albedo schemes in detailed snowpack models, and to the improvement of retrieval algorithms for estimating snow properties from remote sensing data.

    Simultaneous Observations of Comet C/2002 T7 (LINEAR) with the Berkeley-Illinois-Maryland Association and Owens Valley Radio Observatory Interferometers: HCN and CH_3OH

    We present observations of HCN J = 1-0 and CH_3OH J(K_a, K_c) = 3(1, 3)-4(0, 4) A+ emission from comet C/2002 T7 (LINEAR) obtained simultaneously with the Owens Valley Radio Observatory (OVRO) and Berkeley-Illinois-Maryland Association (BIMA) millimeter interferometers. We combined the data from both arrays to increase the (u, v) sampling and signal-to-noise ratio of the detected line emission. We also report the detection of CH_3OH J(K_a, K_c) = 8(0, 8)-7(1, 7) A^+ with OVRO data alone. Using a molecular excitation code that includes the effects of collisions with water and electrons, as well as pumping by Solar infrared photons (for HCN alone), we find a production rate for HCN of 2.9 × 10^(26) s^(–1) and for CH_3OH of 2.2 × 10^(27) s^(–1). Compared to the adopted water production rate of 3 × 10^(29) s^(–1), this corresponds to an HCN/H_2O ratio of 0.1% and a CH_3OH/H_2O ratio of 0.7%. We critically assess the uncertainty of these values due to the noise (~10%), the uncertainties in the adopted comet model (~50%), and the uncertainties in the adopted collisional excitation rates (up to a factor of 2). Pumping by Solar infrared photons is found to be a minor effect for HCN, because our 15" synthesized beam is dominated by the region in the coma where collisions dominate. Since the uncertainties in the derived production rates are at least as large as one-third of the differences found between comets, we conclude that reliable collision rates and an accurate comet model are essential. Because the collisionally dominated region critically depends on the water production rate, using the same approximate method for different comets may introduce biases in the derived production rates. Multiline observations that directly constrain the molecular excitation provide much more reliable production rates.
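    The quoted mixing ratios follow directly from the production rates stated in the abstract; a minimal arithmetic sketch (values taken verbatim from the abstract, variable names are illustrative only):

    ```python
    # Production rates quoted in the abstract, in molecules per second
    Q_H2O = 3e29      # adopted water production rate
    Q_HCN = 2.9e26    # derived HCN production rate
    Q_CH3OH = 2.2e27  # derived CH3OH production rate

    # Mixing ratios relative to water
    hcn_ratio = Q_HCN / Q_H2O      # ~0.001
    ch3oh_ratio = Q_CH3OH / Q_H2O  # ~0.007
    print(f"HCN/H2O = {hcn_ratio:.1%}, CH3OH/H2O = {ch3oh_ratio:.1%}")
    # prints: HCN/H2O = 0.1%, CH3OH/H2O = 0.7%
    ```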

    Coupling groundwater and riparian vegetation models to assess effects of reservoir releases

    Although riparian areas in the arid southwestern United States are critical for maintaining species diversity, their extent and health have been declining since Euro-American settlement. The purpose of this study was to develop a methodology to evaluate the potential for riparian vegetation restoration and groundwater recharge. A numerical groundwater flow model was coupled with a conceptual riparian vegetation model to predict hydrologic conditions favorable to maintaining riparian vegetation downstream of a reservoir. A Geographic Information System (GIS) was used for this one-way coupling. Constant and seasonally varying releases from the dam were simulated using volumes anticipated to be permitted by a regional water supplier. Simulations indicated that seasonally variable releases would produce surface flow 5.4-8.5 km below the dam in a previously dry reach. Using depth-to-groundwater simulations from the numerical flow model together with conceptual models of the depths to water necessary for maintenance of riparian vegetation, the GIS analysis predicted a 5- to 6.5-fold increase in the area capable of sustaining riparian vegetation.

    Predicting hedgehog mortality risks on British roads using habitat suitability modelling

    Road vehicle collisions are likely to be an important contributory factor in the decline of the European hedgehog (Erinaceus europaeus) in Britain. Here, a collaborative roadkill dataset collected from multiple projects across Britain was used to assess when, where and why hedgehog roadkill are more likely to occur. Seasonal trends were assessed using a Generalized Additive Model. There were few casualties in winter, the hibernation season for hedgehogs, with a gradual increase from February that reached a peak in July before declining thereafter. A sequential multi-level Habitat Suitability Modelling (HSM) framework was then used to identify areas showing a high probability of hedgehog roadkill occurrence throughout the entire British road network (~400,000 km) based on multi-scale environmental determinants. The HSM predicted that grassland and urban habitat coverage were important in predicting the probability of roadkill at a national scale. Probabilities peaked at approximately 50% urban cover at a one-km scale and increased linearly with grassland cover (improved and rough grassland). Areas predicted to experience high probabilities of hedgehog roadkill occurrence were therefore in urban and suburban environments, that is, where a mix of urban and grassland habitats occurs. These areas covered 9% of the total British road network. In combination with information on the frequency with which particular locations have hedgehog road casualties, the framework can help to identify priority areas for mitigation measures.

    A two-way photonic interface for linking Sr+ transition at 422 nm to the telecommunications C-band

    We report a single-stage bi-directional interface capable of linking Sr+ trapped-ion qubits in a long-distance quantum network. Our interface converts photons between the Sr+ emission wavelength at 422 nm and the telecoms C-band to enable low-loss transmission over optical fiber. We have achieved both up- and down-conversion at the single-photon level with efficiencies of 9.4% and 1.1%, respectively. Furthermore, we demonstrate noise levels that are low enough to allow for genuine quantum operation in the future.

    Robust Weak-lensing Mass Calibration of Planck Galaxy Clusters

    In light of the tension in cosmological constraints reported by the Planck team between their SZ-selected cluster counts and Cosmic Microwave Background (CMB) temperature anisotropies, we compare the Planck cluster mass estimates with robust, weak-lensing mass measurements from the Weighing the Giants (WtG) project. For the 22 clusters in common between the Planck cosmology sample and WtG, we find an overall mass ratio of ⟨M_Planck/M_WtG⟩ = 0.688 ± 0.072. Extending the sample to clusters not used in the Planck cosmology analysis yields a consistent value of ⟨M_Planck/M_WtG⟩ = 0.698 ± 0.062 from 38 clusters in common. Identifying the weak-lensing masses as proxies for the true cluster mass (on average), these ratios are ~1.6σ lower than the default mass bias of 0.8 assumed in the Planck cluster analysis. Adopting the WtG weak-lensing-based mass calibration would substantially reduce the tension found between the Planck cluster count cosmology results and those from CMB temperature anisotropies, thereby dispensing with the need for "new physics" such as uncomfortably large neutrino masses (in the context of the measured Planck temperature anisotropies and other data). We also find modest evidence (at 95 per cent confidence) for a mass dependence of the calibration ratio and discuss its potential origin in light of systematic uncertainties in the temperature calibration of the X-ray measurements used to calibrate the Planck cluster masses. Our results exemplify the critical role that robust absolute mass calibration plays in cluster cosmology, and the invaluable role of accurate weak-lensing mass measurements in this regard.
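    The quoted ~1.6σ significance of the offset between the measured mass ratio and the assumed bias can be verified directly from the numbers in the abstract; a minimal sketch (values from the abstract, variable names are illustrative only):

    ```python
    # Mass ratios and uncertainties quoted in the abstract
    ratio_cosmo, err_cosmo = 0.688, 0.072  # 22 Planck cosmology-sample clusters
    ratio_all, err_all = 0.698, 0.062      # full overlap sample of 38 clusters
    assumed_bias = 0.8                     # default (1 - b) in the Planck analysis

    # Offset from the assumed bias, in units of the measurement uncertainty
    print((assumed_bias - ratio_cosmo) / err_cosmo)  # ~1.56 sigma
    print((assumed_bias - ratio_all) / err_all)      # ~1.65 sigma
    ```

    Both samples give an offset of roughly 1.6σ, matching the significance stated in the abstract.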