
    Developing the greatest Blue Economy: Water productivity, fresh water depletion, and virtual water trade in the Great Lakes basin

    The Great Lakes basin hosts the world's most abundant surface fresh water reserve. Historically an industrial and natural resource powerhouse, the region has suffered economic stagnation in recent decades. Meanwhile, growing water resource scarcity around the world is creating pressure on water-intensive human activities. This situation creates the potential for the Great Lakes region to sustainably utilize its relative water wealth for economic benefit. We combine economic production and trade datasets with water consumption data and models of surface water depletion in the region. We find that, on average, the current economy does not create significant impacts on surface waters, but there is some risk that unregulated large water uses can create environmental flow impacts if they are developed in the wrong locations. Water uses drawing on deep groundwater or the Great Lakes themselves are unlikely to create a significant depletion, and discharge of groundwater withdrawals to surface waters offsets most surface water depletion. This relative abundance of surface water means that science-based management of large water uses to avoid accidentally creating “hotspots” is likely to be successful in avoiding future impacts, even if water use is significantly increased. Commercial water uses are the most productive, with thermoelectric, mining, and agricultural water uses in the lowest tier of water productivity. Surprisingly for such a water-abundant economy, the region is a net importer of water-derived goods and services. This, combined with the abundance of surface water, suggests that the region's water-based economy has room to grow in the 21st century.
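
    The two headline quantities here, water productivity and net virtual water trade, reduce to simple ratios once sectoral output, water consumption, and trade flows are tabulated. A minimal Python sketch with purely illustrative numbers; none of these figures or sector groupings come from the study:

```python
# Sketch: compute sectoral water productivity and net virtual water trade.
# All numbers and field names are illustrative, not the study's data.

sectors = {
    # sector: (economic_output_usd, water_consumed_m3, exports_usd, imports_usd)
    "commercial":     (5.0e9, 2.0e7, 1.0e9, 0.8e9),
    "thermoelectric": (1.2e9, 6.0e8, 0.2e9, 0.1e9),
    "agriculture":    (0.9e9, 4.0e8, 0.3e9, 0.5e9),
}

net_virtual_water = 0.0  # m3; positive = net virtual water import
for name, (output, water, exports, imports) in sectors.items():
    productivity = output / water              # USD per m3 of consumptive use
    water_intensity = 1.0 / productivity       # m3 per USD
    # Virtual water embedded in trade, valued at the sector's own intensity.
    net_virtual_water += (imports - exports) * water_intensity
    print(f"{name:15s} productivity = {productivity:8.2f} USD/m3")

print(f"net virtual water trade = {net_virtual_water:,.0f} m3 (import if > 0)")
```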

    Heat-Related Deaths in Hot Cities: Estimates of Human Tolerance to High Temperature Thresholds

    In this study we characterized the relationship between temperature and mortality in central Arizona desert cities that have an extremely hot climate. Relationships between daily maximum apparent temperature (ATmax) and mortality for eight condition-specific causes and all-cause deaths were modeled for all residents and separately for males and females ages <65 and ≥65 during the months May–October for years 2000–2008. The most robust relationship was between ATmax on day of death and mortality from direct exposure to high environmental heat. For this condition-specific cause of death, the heat thresholds in all gender and age groups (ATmax = 90–97 °F; 32.2–36.1 °C) were below local median seasonal temperatures in the study period (ATmax = 99.5 °F; 37.5 °C). Heat threshold was defined as the ATmax at which the mortality ratio begins an exponential upward trend. Thresholds were identified in younger and older females for cardiac disease/stroke mortality (ATmax = 106 and 108 °F; 41.1 and 42.2 °C) with a one-day lag. Thresholds were also identified for mortality from respiratory diseases in older people (ATmax = 109 °F; 42.8 °C) and for all-cause mortality in females (ATmax = 107 °F; 41.7 °C) and males <65 years (ATmax = 102 °F; 38.9 °C). Heat-related mortality in a region that has already made some adaptations to predictable periods of extremely high temperatures suggests that more extensive and targeted heat-adaptation plans for climate change are needed in cities worldwide.
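
    The threshold definition above (the ATmax at which the mortality ratio turns upward) can be located with a simple broken-stick fit. A hedged sketch on synthetic data; the breakpoint search below is one generic way to do this, not the study's statistical procedure:

```python
# Sketch of one generic way to locate a heat threshold: fit a hinge
# ("broken-stick") model of log mortality ratio vs. daily maximum apparent
# temperature and keep the breakpoint with the smallest squared error.
# Synthetic data; not the study's method.
import numpy as np

rng = np.random.default_rng(0)
at_max = rng.uniform(85.0, 115.0, 500)                        # deg F
log_mr = 0.03 * np.maximum(at_max - 97.0, 0.0) + rng.normal(0.0, 0.02, 500)

def hinge_sse(breakpoint):
    """Sum of squared residuals of a flat-then-linear fit with the given breakpoint."""
    x = np.maximum(at_max - breakpoint, 0.0)
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, log_mr, rcond=None)[0]
    resid = log_mr - X @ beta
    return float(resid @ resid)

candidates = np.arange(88.0, 112.0, 0.5)
best = min(candidates, key=hinge_sse)
print(f"estimated heat threshold: ATmax = {best:.1f} F ({(best - 32) * 5 / 9:.1f} C)")
```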

    Robust observations of land-to-atmosphere feedbacks using the information flows of FLUXNET

    Feedbacks between atmospheric processes like precipitation and land surface fluxes including evapotranspiration are difficult to observe but critical for understanding the role of the land surface in the Earth system. To quantify global surface-atmosphere feedbacks, we use results of a process network (PN) applied to 251 eddy covariance sites from the LaThuile database to train a neural network across the global terrestrial surface. There is a strong land–atmosphere coupling between latent (LE) and sensible (H) heat fluxes and precipitation (P) during summer months in temperate regions, and between H and P during winter, whereas tropical rainforests show little coupling seasonality. Savanna, shrubland, and other semi-arid ecosystems exhibit strong responses in their coupling behavior based on water availability. Feedback coupling from surface fluxes to P peaks at aridity values (P relative to potential evapotranspiration, P/ETp) near unity, whereas coupling with respect to clouds, inferred from reduced global radiation, increases as P/ETp approaches zero. Spatial patterns in feedback coupling strength are related to climatic zone and biome type. Information flow statistics highlight hotspots of (1) persistent land–atmosphere coupling in sub-Saharan Africa, (2) boreal summer coupling in the central and southwestern US, Brazil, and the Congo basin, and (3) austral summer coupling in the southern Andes, South Africa, and Australia. Our data-driven approach to quantifying land–atmosphere coupling strength, which leverages the global FLUXNET database and information flow statistics, provides a basis for verifying feedback interactions in general circulation models and for predicting locations where land cover change will feed back to climate or weather.
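
    The coupling metric underlying a process network is an information flow such as transfer entropy between paired time series. A rough, self-contained sketch (synthetic series, arbitrary binning and lag; not the paper's PN implementation):

```python
# Sketch of an information-flow (transfer entropy) estimate between two
# discretized time series, in the spirit of a process-network coupling metric.
# The binning, lag, and synthetic series are illustrative assumptions only.
import numpy as np

def transfer_entropy(source, target, bins=8, lag=1):
    """T(source -> target): extra bits the lagged source provides about the
    target beyond the target's own history."""
    s = np.digitize(source, np.quantile(source, np.linspace(0, 1, bins + 1)[1:-1]))
    t = np.digitize(target, np.quantile(target, np.linspace(0, 1, bins + 1)[1:-1]))
    x_past, y_past, y_now = s[:-lag], t[:-lag], t[lag:]

    def entropy(*cols):
        joint, _ = np.histogramdd(np.column_stack(cols), bins=bins)
        p = joint[joint > 0] / joint.sum()
        return -np.sum(p * np.log2(p))

    # T = H(y_now, y_past) + H(y_past, x_past) - H(y_past) - H(y_now, y_past, x_past)
    return (entropy(y_now, y_past) + entropy(y_past, x_past)
            - entropy(y_past) - entropy(y_now, y_past, x_past))

rng = np.random.default_rng(1)
le = rng.normal(size=5000)                            # e.g. latent heat flux
precip = 0.6 * np.roll(le, 1) + rng.normal(size=5000) # precipitation partly driven by lagged LE
print(f"T(LE -> P) = {transfer_entropy(le, precip):.3f} bits")
```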

    A multi-method and multi-scale approach for estimating city-wide anthropogenic heat fluxes

    A multi-method approach for estimating summer waste heat emissions from anthropogenic activities (QF) was applied to a major subtropical city (Phoenix, AZ). The methods included detailed, quality-controlled inventories of city-wide population density and traffic counts to estimate waste heat emissions from population and vehicular sources, respectively, as well as waste heat simulations derived from urban electrical consumption generated by a coupled building energy – regional climate model (WRF-BEM + BEP). These component QF data were then summed and mapped using Geographic Information Systems techniques to enable analysis over local (i.e., census-tract) and regional (i.e., metropolitan-area) scales. Through this approach, local mean daily QF estimates compared reasonably with (1) observed daily surface energy balance residuals from an eddy covariance tower sited within a residential area and (2) estimates from inventory methods employed in a prior study, with improved sensitivity to temperature and precipitation variations. Regional analysis indicates substantial variations in both mean and maximum daily QF, which varied with urban land use type. Average regional daily QF was ~13 W m−2 for the summer period. Temporal analyses also indicated notable differences between this approach and previous estimates of QF in Phoenix over different land uses, with much larger peak fluxes averaging ~50 W m−2 occurring in commercial or industrial areas during late summer afternoons. The spatio-temporal analysis of QF also suggests that it may influence the form and intensity of the Phoenix urban heat island, specifically through additional early evening heat input and by modifying the urban boundary layer structure through increased turbulence.
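
    The core bookkeeping (summing the population, vehicular, and building-energy waste heat components for each census tract and dividing by tract area) is straightforward. A sketch with placeholder magnitudes, loosely consistent with the ~13 and ~50 W m−2 figures above but not drawn from the study's inventory:

```python
# Sketch of summing component anthropogenic heat fluxes (QF) per census tract.
# Component values and tract areas are placeholders, not the study's inventory.
from dataclasses import dataclass

@dataclass
class Tract:
    name: str
    area_m2: float
    qf_population_w: float   # metabolic waste heat, W
    qf_vehicles_w: float     # vehicular waste heat, W
    qf_buildings_w: float    # building-energy waste heat (e.g. from a WRF-BEM+BEP run), W

    def qf_flux(self) -> float:
        """Total anthropogenic heat flux density in W m-2."""
        total_w = self.qf_population_w + self.qf_vehicles_w + self.qf_buildings_w
        return total_w / self.area_m2

tracts = [
    Tract("residential tract", 2.0e6, 1.5e5, 4.0e6, 1.0e7),
    Tract("commercial tract",  1.0e6, 0.5e5, 8.0e6, 4.0e7),
]
for t in tracts:
    print(f"{t.name:18s} QF = {t.qf_flux():5.1f} W m-2")

# Regional mean, area-weighted over tracts
regional = sum(t.qf_flux() * t.area_m2 for t in tracts) / sum(t.area_m2 for t in tracts)
print(f"regional mean QF = {regional:.1f} W m-2")
```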

    Technical note: “Bit by bit”: a practical and general approach for evaluating model computational complexity vs. model performance

    One of the main objectives of the scientific enterprise is the development of well-performing yet parsimonious models for all natural phenomena and systems. In the 21st century, scientists usually represent their models, hypotheses, and experimental observations using digital computers. Measuring performance and parsimony of computer models is therefore a key theoretical and practical challenge for 21st century science. “Performance” here refers to a model's ability to reduce predictive uncertainty about an object of interest. “Parsimony” (or complexity) comprises two aspects: descriptive complexity – the size of the model itself, which can be measured by the disk space it occupies – and computational complexity – the model's effort to provide output. Descriptive complexity is related to inference quality and generality; computational complexity is often a practical and economic concern for limited computing resources. In this context, this paper has two distinct but related goals. The first is to propose a practical method of measuring computational complexity by utility software “Strace”, which counts the total number of memory visits while running a model on a computer. The second goal is to propose the “bit by bit” method, which combines measuring computational complexity by “Strace” and measuring model performance by information loss relative to observations, both in bit. For demonstration, we apply the “bit by bit” method to watershed models representing a wide diversity of modelling strategies (artificial neural network, auto-regressive, process-based, and others). We demonstrate that computational complexity as measured by “Strace” is sensitive to all aspects of a model, such as the size of the model itself, the input data it reads, its numerical scheme, and time stepping. We further demonstrate that for each model, the bit counts for computational complexity exceed those for performance by several orders of magnitude and that the differences among the models for both computational complexity and performance can be explained by their setup and are in accordance with expectations. We conclude that measuring computational complexity by “Strace” is practical, and it is also general in the sense that it can be applied to any model that can be run on a digital computer. We further conclude that the “bit by bit” approach is general in the sense that it measures two key aspects of a model in the single unit of bit. We suggest that it can be enhanced by additionally measuring a model's descriptive complexity – also in bit.

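    The performance half of the “bit by bit” idea, information loss relative to observations measured in bits, can be illustrated with a conditional-entropy stand-in: how many bits of uncertainty about the binned observations remain once the model output is known. This is a simplified analogue of the paper's measure, not its exact procedure; the complexity half would come from tracing the running model with an external tool.

```python
# Sketch: "performance in bits" as the conditional entropy H(obs | model) of
# binned observations given binned model output. Binning choices and the toy
# series are assumptions for illustration only.
import numpy as np

def conditional_entropy_bits(observed, predicted, bins=20):
    """H(obs | model) in bits: the information about the observations that the
    model output fails to supply (lower is better)."""
    joint, _, _ = np.histogram2d(observed, predicted, bins=bins)
    p_joint = joint / joint.sum()
    p_model = p_joint.sum(axis=0)                        # marginal over model bins
    nz = p_joint > 0
    h_joint = -np.sum(p_joint[nz] * np.log2(p_joint[nz]))
    h_model = -np.sum(p_model[p_model > 0] * np.log2(p_model[p_model > 0]))
    return float(h_joint - h_model)

rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 1.0, 5000)                          # e.g. observed streamflow
model_good = obs + rng.normal(0.0, 0.2, 5000)            # tracks the observations
model_weak = rng.gamma(2.0, 1.0, 5000)                   # statistically similar but uninformative
print(f"remaining uncertainty, good model: {conditional_entropy_bits(obs, model_good):.2f} bits")
print(f"remaining uncertainty, weak model: {conditional_entropy_bits(obs, model_weak):.2f} bits")
```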

    Multiscale modeling and evaluation of urban surface energy balance in the Phoenix metropolitan area

    Physical mechanisms of incongruency between observations and Weather Research and Forecasting (WRF) Model predictions are examined. The evaluation is constrained by (i) parameterizations of model physics, (ii) parameterizations of input data, (iii) model resolution, and (iv) flux observation resolution. Observations from a new 22.1-m flux tower situated within a residential neighborhood in Phoenix, Arizona, are utilized to evaluate the ability of the urbanized WRF to resolve finescale surface energy balance (SEB) when using the urban classes derived from the 30-m-resolution National Land Cover Database. Modeled SEB response to a large seasonal variation of net radiation forcing was tested during synoptically quiescent periods of high pressure in winter 2011 and premonsoon summer 2012. Results are presented from simulations employing five nested domains down to 333-m horizontal resolution. A comparative analysis of model cases testing the parameterization of physical processes was performed using four configurations of urban parameterization for the bulk urban scheme versus three representations with the Urban Canopy Model (UCM) scheme, and also for two types of planetary boundary layer parameterization: the local Mellor–Yamada–Janjić scheme and the nonlocal Yonsei University scheme. Diurnal variation in SEB constituent fluxes is examined in relation to surface-layer stability and modeled diagnostic variables. Improvement is found when adapting the UCM for Phoenix, with reduced errors in the SEB components. Finer model resolution is seen to have insignificant (<1 standard deviation) influence on the mean absolute percent difference of 30-min diurnal mean SEB terms.
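
    The closing evaluation statistic, mean absolute percent difference (MAPD) of 30-min diurnal mean SEB terms, reduces to compositing each flux by time of day and comparing model against tower. A sketch on synthetic fluxes; the array layout and the 10% model bias are assumptions, not WRF output or the tower record:

```python
# Sketch of a mean-absolute-percent-difference (MAPD) comparison of modeled vs.
# observed surface energy balance terms, averaged by 30-min time of day.
# The arrays are synthetic placeholders for WRF output and tower fluxes.
import numpy as np

def mapd_diurnal(modeled, observed, steps_per_day=48):
    """MAPD (%) between diurnally averaged 30-min modeled and observed fluxes."""
    mod = modeled.reshape(-1, steps_per_day).mean(axis=0)   # diurnal composite
    obs = observed.reshape(-1, steps_per_day).mean(axis=0)
    return 100.0 * np.mean(np.abs(mod - obs) / np.abs(obs))

rng = np.random.default_rng(3)
days, steps = 30, 48
hod = np.tile(np.arange(steps), days)                       # 30-min step within each day
obs_h = 150 * np.maximum(np.sin((hod - 12) * np.pi / 48), 0.05) + rng.normal(0, 5, days * steps)
mod_h = obs_h * 1.1 + rng.normal(0, 10, days * steps)       # model biased ~10% high
print(f"sensible heat MAPD: {mapd_diurnal(mod_h, obs_h):.1f}%")
```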

    Water-Use Data in the United States: Challenges and Future Directions

    In the United States, greater attention has been given to developing water supplies and quantifying available waters than to determining who uses water, how much they withdraw and consume, and how and where water use occurs. As water supplies are stressed by an increasingly variable climate, changing land use, and growing water needs, greater consideration of the demand side of the water balance equation is essential. Data about the spatial and temporal aspects of water use for different purposes are now critical to long-term water supply planning and resource management. We detail the current state of water-use data, the major stakeholders involved in their collection and applications, and the challenges in obtaining high-quality, nationally consistent data applicable to a range of scales and purposes. Opportunities to improve access, use, and sharing of water-use data are outlined. We cast a vision for a world-class national water-use data product that is accessible, timely, and spatially detailed. Our vision will leverage the strengths of existing local, state, and federal agencies to facilitate rapid and informed decision-making, modeling, and science for water resources. To inform future decision-making regarding water supplies and uses, we must coordinate efforts to substantially improve our capacity to collect, model, and disseminate water-use data.