
    Discounting and the environment: should current impacts be weighted differently than impacts harming future generations?

    Background: In Life-Cycle Assessment (LCA), decision makers are often faced with trade-offs between current and future impacts. A typical example is waste incineration, where immediate emissions to air from the incineration process have to be weighed against future emissions from slag landfills. Long-term impacts are either taken into account completely or disregarded entirely in the case of a temporal cut-off. Temporal cut-offs are a special case of discounting. Objective: In this paper, discounting is defined as valuing damages differently at different points in time using a positive or negative discount rate. Apart from temporal cut-offs, discounting has rarely been applied in LCA so far. The goal of this paper is to discuss the concept of discounting and its applicability in the context of LCA. Methods: For this purpose, we first review the arguments for discounting and its principles in the economic sciences. Discounting in economics can be motivated by pure time preference, the productivity of capital, the diminishing marginal utility of consumption, and uncertainties; the nominal discount rate additionally includes changes in the price level. These arguments and their justification are discussed in the context of environmental impacts harming future generations. Results and Discussion: It is concluded that discounting across generations on the grounds of pure time preference contradicts fundamental ethical values and should therefore not be applied in LCA. It has to be acknowledged, however, that in practice decision makers often use positive discount rates because of pure time preference, either because they might profit from imposing environmental damage on others instead of themselves or because people in the far future are not of immediate concern to them. Discounting because of the productivity of capital assumes a relationship between monetary values and environmental impact; if such a relationship is accepted, discounting could be applied.
    However, future generations should then be compensated for the environmental damage, and they would likely demand a higher compensation if real per capita income increases. As both the compensation and the discount rate are related to economic growth, the overall discount rate might be close to zero. It is shown that the overall discount rate might even be negative, considering that the required compensation could increase (even to infinity) if natural assets remain scarce while the utility of consumption diminishes with increasing income. Uncertainties could justify both positive and negative discount rates. Since the relationship between uncertainties and the magnitude of damage is generally not exponential, we recommend modeling changes in the magnitude of damage in scenario analyses instead of capturing them through discounting (which, for a constant discount rate, implies an exponential function of time). We investigated the influence of discounting in a case study of heavy metal emissions from slag landfills: even small discount rates of less than 1% lead to a significant reduction of the impact score, whereas negative discount rates inflate the results. Conclusions and Recommendations: Discounting is only applicable when temporally differentiated data are available. In some cases, such a temporal differentiation is necessary to make sound decisions, especially when long emission periods are involved; an example is the disposal of nuclear or heavy metal-containing waste. In these cases, the results may depend entirely on the discount rate. This paper helps to structure the arguments and thus supports the decision about whether or not discounting should be applied in an LCA.
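    The effect of the discount rate on a temporally differentiated impact score can be illustrated with a minimal sketch (all numbers hypothetical, not the paper's case-study data): a constant annual rate weights each year's impact exponentially, so a small positive rate strongly shrinks long-term impacts while a negative rate inflates them.

```python
def discounted_impact(impacts, rate):
    """Aggregate a temporally differentiated impact profile.
    impacts[t] is the impact score in year t; a constant discount
    rate weights year t by 1/(1 + rate)**t."""
    return sum(x / (1.0 + rate) ** t for t, x in enumerate(impacts))

# Hypothetical slag-landfill emission profile: 1 impact unit per year
# for 500 years.
profile = [1.0] * 500

undiscounted = discounted_impact(profile, 0.0)     # 500 units
reduced      = discounted_impact(profile, 0.005)   # well below 500
inflated     = discounted_impact(profile, -0.005)  # well above 500
```

    Even a rate of 0.5% cuts the aggregated score by more than half here, which mirrors the sensitivity reported for the case study.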

    Comparison of external bulk video imaging with focused beam reflectance measurement and ultra-violet visible spectroscopy for metastable zone identification in food and pharmaceutical crystallization processes

    The purpose of the paper is twofold: it describes the proof of concept of the newly introduced bulk video imaging (BVI) method, and it presents a comparison with existing process analytical technologies (PAT) such as focused beam reflectance measurement (FBRM) and ultraviolet/visible (UV/Vis) spectroscopy. While the latter two sample the system in small volumes close to the probe, the BVI approach monitors the entire crystallizer volume, or large parts of it. The BVI method is proposed as a complementary, noninvasive PAT tool, and it is shown to detect the boundaries of the metastable zone with comparable or better performance than the FBRM and UV/Vis probes.
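    As a rough illustration of how a metastable zone boundary can be located from any of these bulk signals, the sketch below flags nucleation onset when a monitored signal (e.g. mean image intensity, chord counts, or absorbance; all values invented) first exceeds its baseline by a fixed number of standard deviations. This threshold rule is a generic assumption for illustration, not the paper's algorithm.

```python
import statistics

def onset_index(signal, baseline_window=10, k=5.0):
    """Return the first sample index at which the signal exceeds the
    baseline mean by k baseline standard deviations, or None if it
    never does. Used here as a simple nucleation-onset criterion."""
    base = signal[:baseline_window]
    threshold = statistics.mean(base) + k * statistics.stdev(base)
    for i, value in enumerate(signal):
        if value > threshold:
            return i
    return None

# Synthetic trace: noisy baseline, then a sharp rise at index 20
# as crystals begin to scatter light.
trace = [1.00, 1.01, 0.99, 1.00, 1.02, 0.98, 1.00, 1.01, 0.99, 1.00,
         1.00, 1.01, 1.00, 0.99, 1.00, 1.01, 1.00, 0.99, 1.00, 1.01,
         5.00, 8.00, 12.0]
```

    Repeating such onset detection over a range of cooling profiles yields the nucleation temperatures that bound the metastable zone.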

    Model based control of a liquid swelling constrained batch reactor subject to recipe uncertainties

    This work presents the application of nonlinear model predictive control (NMPC) to a simulated industrial batch reactor subject to a safety constraint due to reactor level swelling, which can occur with relatively fast dynamics. Uncertainties in the implementation of recipes in batch process operation are of significant industrial relevance. The paper describes a novel control-relevant formulation of the excessive liquid-rise problem for a two-phase batch reactor subject to recipe uncertainties. The control simulations are carried out using OptCon, a dedicated NMPC and optimization software toolbox that implements state-of-the-art techniques. The open-loop optimal control problem is discretized using the multiple-shooting technique, and the arising non-linear programming problem is solved with a sequential quadratic programming (SQP) algorithm tailored for large-scale problems, based on the freeware optimization environment HQP. The fast response of the NMPC controller is ensured by the initial-value-embedding and real-time-iteration strategies. It is concluded that the OptCon implementation allows small sampling times and that the controller maintains safe and optimal operating conditions, with good control performance despite significant uncertainties in the implementation of the batch recipe.
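    A receding-horizon NMPC loop of the kind described can be sketched in a few lines. The toy level dynamics, objective, and soft safety constraint below are illustrative assumptions (this is not the OptCon toolbox or the paper's reactor model); each finite-horizon problem is solved with SciPy's SLSQP, an SQP method, and only the first input of each solution is applied.

```python
import numpy as np
from scipy.optimize import minimize

DT, L_MAX = 0.1, 1.0      # step size and safety limit on the swell level

def step(level, u):
    """Toy liquid-level dynamics: the level rises with heat input u
    and relaxes back at a first-order rate (hypothetical model)."""
    return level + DT * (0.8 * u - 0.5 * level)

def nmpc_action(level, horizon=10):
    """Solve the finite-horizon problem and apply only the first
    input -- the receding-horizon principle."""
    def cost(u_seq):
        x, penalty = level, 0.0
        for u in u_seq:
            x = step(x, u)
            penalty += max(0.0, x - L_MAX) ** 2   # soft safety constraint
        # Push the input as high as safety allows (proxy for conversion).
        return -float(np.sum(u_seq)) + 1e4 * penalty
    res = minimize(cost, np.full(horizon, 0.5), method="SLSQP",
                   bounds=[(0.0, 3.0)] * horizon)
    return res.x[0]

# Closed loop: the controller drives the level toward, but not past, L_MAX.
level = 0.0
for _ in range(30):
    level = step(level, nmpc_action(level))
```

    The paper's real-time-iteration and initial-value-embedding strategies exist precisely to make each of these per-sample solves cheap enough for small sampling times.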

    Aquatic occurrence of phytotoxins in small streams triggered by biogeography, vegetation growth stage, and precipitation

    Toxic plant secondary metabolites (PSMs), so-called phytotoxins, occur widely across plant species. Many of these phytotoxins have mobility, persistence, and toxicity properties in the environment similar to those of anthropogenic micropollutants, which increasingly contaminate surface waters. Although recent case studies have shown the aquatic relevance of phytotoxins, the overall exposure remains unknown. We therefore performed a detailed occurrence analysis covering 134 phytotoxins from 27 PSM classes. Water samples from seven small Swiss streams with catchment areas from 1.7 to 23 km² and varying land uses were gathered over several months to investigate seasonal effects; they were complemented with samples from different biogeographical regions to cover variations in vegetation. A broad SPE-LC-HRMS/MS method was applied, with limits of detection below 5 ng/L for over 80% of the 134 included phytotoxins. In total, we confirmed 39 phytotoxins belonging to 13 PSM classes, corresponding to almost 30% of all included phytotoxins. Several alkaloids were regularly detected in the low ng/L range, with average detection frequencies of 21%. This is consistent with the previously estimated persistence and mobility properties, which indicated a high contamination potential. Coumarins had previously been predicted to be unstable; however, their detection frequencies were around 89%, and maximal concentrations of up to 90 ng/L were measured for fraxetin, which is produced by various trees. Overall, rainy weather at full vegetation led to the highest total phytotoxin concentrations, which is potentially most critical for aquatic organisms.
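    The detection frequencies reported above follow directly from censoring each measurement at the limit of detection; a minimal sketch (with made-up concentrations) is:

```python
def detection_frequency(concentrations_ng_per_l, lod_ng_per_l):
    """Fraction of samples in which an analyte is detected, i.e.
    measured at or above the limit of detection (LOD)."""
    hits = sum(1 for c in concentrations_ng_per_l if c >= lod_ng_per_l)
    return hits / len(concentrations_ng_per_l)

# Hypothetical time series for one phytotoxin in one stream (ng/L);
# zeros stand for samples with no detectable signal.
samples = [0.0, 2.1, 7.5, 90.0, 12.3, 0.0, 4.9, 33.0, 6.0]
freq = detection_frequency(samples, lod_ng_per_l=5.0)  # 5 of 9 samples
```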

    Using a pharmacokinetic model to interpret biomonitoring data of PBDEs in the Australian population

    From human biomonitoring data that are increasingly collected in the United States, Australia, and other countries through large-scale field studies, we obtain snapshots of the concentration levels of various persistent organic pollutants (POPs) within a cross section of the population at different times. Not only can we observe the trends within this population over time, but we can also gain information beyond the obvious time trends. By combining the biomonitoring data with pharmacokinetic modeling, we can reconstruct the time-variant exposure to individual POPs, determine their intrinsic elimination half-lives in the human body, and predict future levels of POPs in the population. Different approaches have been employed to extract information from human biomonitoring data: pharmacokinetic (PK) models have been combined with longitudinal data [1], with single [2] or multiple [3] average concentrations from cross-sectional data (CSD), or with multiple CSD with or without empirical exposure data [4]. In the latter study, for the first time, the authors based their modeling outputs on two sets of CSD together with empirical exposure data, so that their model outputs were further constrained by the extensive body of empirical measurements. Here we use a PK model to analyze recent PBDE concentration levels measured in the Australian population. In this study, we are able to base our model results on four sets [5-7] of CSD; we focus on two PBDE congeners that have been shown [3,5,8,9] to differ in intake rates and half-lives, with BDE-47 being associated with high intake rates and a short half-life and BDE-153 with lower intake rates and a longer half-life. By fitting the model to PBDE levels measured in different age groups in different years, we determine the intakes of BDE-47 and BDE-153 as well as the half-lives of these two chemicals in the Australian population.
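    The kind of PK model involved can be sketched as a one-compartment model with constant intake and first-order elimination. The intake rates and half-lives below are hypothetical placeholders chosen only to mimic the qualitative BDE-47/BDE-153 contrast, not the fitted Australian values.

```python
import math

def body_burden(intake_per_year, half_life_years, years, b0=0.0):
    """One-compartment PK model with first-order elimination:
    dB/dt = intake - k*B, with k = ln(2)/half-life.
    Returns the analytical solution at time `years` for constant intake."""
    k = math.log(2.0) / half_life_years
    steady_state = intake_per_year / k
    return steady_state + (b0 - steady_state) * math.exp(-k * years)

# Hypothetical BDE-47-like congener: high intake, short half-life,
# so it reaches its steady-state burden within a few years.
fast = body_burden(intake_per_year=3650.0, half_life_years=1.8, years=30.0)

# Hypothetical BDE-153-like congener: lower intake, longer half-life,
# so after 30 years it is still approaching its steady state.
slow = body_burden(intake_per_year=730.0, half_life_years=7.0, years=30.0)
```

    Fitting such a model to cross-sectional data amounts to adjusting the intake history and k until the predicted burdens match the measured age- and year-resolved concentrations.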

    Global multimedia source-receptor relationships for persistent organic pollutants during use and after phase-out

    Chemicals that are persistent in the atmosphere can be transported long distances and across international boundaries. Therefore, information about the fraction of local versus imported air pollution is required to formulate regulations aimed at controlling pollutant levels. The objective of this work is to illustrate the capabilities of a dynamic global-scale multimedia model to calculate source-receptor relationships for persistent organic pollutants that cycle between air, water, soil, and vegetation in the global environment. As exemplary case studies, we present model calculations of time-evolving source-receptor relationships for PCB28, PCB153, α-HCH, and β-HCH over the duration of their usage, phase-out, and a post-ban period. Our analysis is geographically explicit and elucidates the role of primary versus secondary sources in controlling the levels of air pollution. Our case studies consider source-receptor relationships between the four regions defined by the Convention on Long-range Transboundary Air Pollution Task Force on Hemispheric Transport of Air Pollution, as well as the Arctic as a remote receptor region. We find source-receptor relationships that are highly variable over time and between different regions and chemicals. Air pollution by PCBs in North America and Europe is consistently dominated by local emissions, whereas in East and South Asia extra-regional sources are sometimes major contributors. Emissions of α-HCH peak at different times in the four regions, which leads to a phase of high self-pollution in each region and to periods when pollution enters mainly from outside. Compared to α-HCH, air pollution with the less volatile and more persistent β-HCH is more strongly determined by secondary emissions near source areas throughout its use history. PCB concentrations in Arctic air are dominated by emissions transported from North America and Europe from 1930 to 2080, whereas for the HCHs each of the source regions contributes a high share at some point between 1950 and 2050.
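    The notion of a source-receptor relationship can be made concrete with a small linear sketch: a matrix F whose entry F[i, j] gives the contribution per unit emission from source region j to the air concentration in receptor region i. All numbers below are invented for illustration, not output of the multimedia model.

```python
import numpy as np

regions = ["NA", "EU", "EA", "SA"]

# Hypothetical source-receptor matrix: row i = receptor, column j = source.
F = np.array([
    [0.70, 0.15, 0.10, 0.05],   # North America
    [0.10, 0.75, 0.10, 0.05],   # Europe
    [0.20, 0.15, 0.45, 0.20],   # East Asia
    [0.10, 0.10, 0.25, 0.55],   # South Asia
])
emissions = np.array([100.0, 80.0, 40.0, 20.0])  # hypothetical, per region

concentration = F @ emissions                     # total signal per receptor
local_share = np.diag(F) * emissions / concentration
```

    With these invented numbers, the local share exceeds one half for North America and Europe but stays below it for East Asia, the same qualitative pattern the abstract reports for PCBs. In the dynamic model, both F and the emissions change over time, which is what makes the source-receptor relationships time-variable.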

    Tutorial on the fitting of kinetics models to multivariate spectroscopic measurements with non-linear least-squares regression

    The continuing development of modern instrumentation means that an increasing amount of data is being delivered in less time. As a consequence, it is crucial that research into techniques for the analysis of large data sets continues. Even more crucial, however, is that once developed, these techniques are disseminated to the wider chemical community. This tutorial presents all the steps involved in fitting a chemical model, based on reaction kinetics, to measured multiwavelength spectroscopic data: from postulating the chemical model and deriving the appropriate differential equations, through calculating the concentration profiles, to fitting the rate constants of the model to the measured multiwavelength data by non-linear regression. The benefits of using multiwavelength data are both discussed and demonstrated. Several examples in which the described techniques are applied to real measurements are also given.
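    The core numerical step can be sketched on synthetic data for a first-order reaction A → B measured at three wavelengths (all values invented). For each trial rate constant k, the concentration profiles follow from the kinetic model; the linear spectral parameters are then eliminated by linear least squares, and the remaining sum of squared residuals is minimized over k alone, the usual separation of linear and non-linear parameters in this kind of fit.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Synthetic multiwavelength data for A -> B, first order with k_true = 0.4.
t = np.linspace(0.0, 10.0, 50)
k_true = 0.4
C_true = np.column_stack([np.exp(-k_true * t), 1.0 - np.exp(-k_true * t)])
S_true = np.array([[1.0, 0.2, 0.05],    # invented pure spectrum of A
                   [0.1, 0.8, 0.60]])   # invented pure spectrum of B
Y = C_true @ S_true + np.random.default_rng(0).normal(0.0, 0.01, (t.size, 3))

def residual_ssq(k):
    """For a trial k: concentrations C(k) from the kinetic model (here
    the closed-form solution of dcA/dt = -k*cA), pure spectra S by
    linear least squares, residual = data minus reconstruction."""
    C = np.column_stack([np.exp(-k * t), 1.0 - np.exp(-k * t)])
    S = np.linalg.lstsq(C, Y, rcond=None)[0]
    R = Y - C @ S
    return float(np.sum(R * R))

k_fit = minimize_scalar(residual_ssq, bounds=(0.01, 5.0), method="bounded").x
```

    For more complex mechanisms the closed-form C(k) would be replaced by a numerical ODE integration, but the structure of the fit is unchanged.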