13 research outputs found
Life Cycle Data Interoperability Improvements through Implementation of the Federal LCA Commons Elementary Flow List
As a fundamental component of data for life cycle assessment models, elementary flows have been demonstrated to be a key requirement of life cycle assessment data interoperability. However, existing elementary flow lists lack sufficient structure to enable improved interoperability between life cycle data sources. The Federal Life Cycle Assessment Commons Elementary Flow List provides a novel framework and structure for elementary flows, but the actual improvement this list provides to the interoperability of life cycle data has not been tested. The interoperability of ten elementary flow lists, two life cycle assessment databases, three life cycle impact assessment methods, and five life cycle assessment software sources is assessed with and without use of the Federal Life Cycle Assessment Commons Elementary Flow List as an intermediary in flow mapping. This analysis showed that only 25% of comparisons between these sources resulted in more than 50% of flows matching automatically name-to-name between lists. This indicates a low level of interoperability when sources retain their original elementary flow nomenclature, and elementary flow mapping is required to use these sources in combination. Reviewing the mappings from the Federal Life Cycle Assessment Commons Elementary Flow List to these sources revealed a notable increase in name-to-name matches. Overall, this novel framework is found to increase life cycle data source interoperability.
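The role of an intermediary list in flow mapping can be sketched in miniature. The flow names and mappings below are hypothetical, chosen only to show how two sources with zero direct name-to-name matches can align fully once both are mapped to a common nomenclature:

```python
# Sketch of name-to-name flow matching, directly and via an
# intermediary list (hypothetical flow names for illustration).

def direct_matches(list_a, list_b):
    """Count flows whose names match exactly between two lists."""
    return len(set(list_a) & set(list_b))

def matches_via_intermediary(list_a, list_b, mapping_a, mapping_b):
    """Count matches when both lists are first mapped to a common
    intermediary nomenclature (e.g., a shared reference list)."""
    mapped_a = {mapping_a.get(name, name) for name in list_a}
    mapped_b = {mapping_b.get(name, name) for name in list_b}
    return len(mapped_a & mapped_b)

# Hypothetical source nomenclatures for the same two substances
source_a = ["Carbon dioxide", "Methane, fossil"]
source_b = ["CO2", "Methane"]

# Curated mappings from each source to the shared reference list
to_common_a = {"Carbon dioxide": "Carbon dioxide",
               "Methane, fossil": "Methane"}
to_common_b = {"CO2": "Carbon dioxide", "Methane": "Methane"}

print(direct_matches(source_a, source_b))  # 0 direct matches
print(matches_via_intermediary(source_a, source_b,
                               to_common_a, to_common_b))  # 2 via mapping
```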
Incorporating New Technologies in EEIO Models
We propose a methodology to add new technologies into Environmentally Extended Input–Output (EEIO) models based on a Supply and Use framework. The methodology provides for adding new industries (new technologies) and a new commodity under the assumption that the new commodity will partially substitute for a functionally similar existing commodity in the baseline economy. The level of substitution is controlled by a percentage that serves as a model variable. In the Use table, that percentage of the current use of the existing commodity is transferred to the new commodity. The Supply (or Make) table is modified under the assumption that the new industries are the only producers of the new commodity. We illustrate the method for the USEEIO model with the addition of second-generation biofuels, including naphtha, jet fuel, and diesel fuel. The new industries' inputs, outputs, and value-added components needed to produce the new commodity are drawn from process-based life cycle inventories (LCIs). Process-based LCI inputs and outputs per physical functional unit are transformed to prices and assigned to commodities and environmental flow categories for the EEIO model. This methodology is designed to evaluate the environmental impacts of substituting products in the current US economy with bio-based versions, produced by new technologies, that are intended to reduce negative environmental impacts. However, it can be applied to any new commodity for which the substitution assumption is reasonable.
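The Use-table substitution step described above can be sketched as follows. The table values and the 10% substitution share are illustrative assumptions, not figures from the model:

```python
import numpy as np

# Sketch of the substitution step: a percentage p of the Use-table
# row for an existing commodity is transferred to a new, functionally
# similar commodity appended as a new row. Numbers are hypothetical.

p = 0.10  # substitution share, controlled as a model variable

# Hypothetical Use table: rows = commodities, cols = industries
use = np.array([[50.0, 20.0],   # existing commodity (e.g., fossil jet fuel)
                [30.0, 10.0]])  # some other commodity

new_row = p * use[0]                     # use transferred to the new commodity
use_modified = np.vstack([use, new_row]) # append row for the new commodity
use_modified[0] -= new_row               # remove transferred amount from original

# Total use by each industry is unchanged by the substitution
assert np.allclose(use.sum(axis=0), use_modified.sum(axis=0))
print(use_modified)
```

The conservation check at the end reflects the core assumption: substitution reallocates existing demand between the two commodities rather than changing total industry inputs.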
useeior: An Open-Source R Package for Building and Using US Environmentally-Extended Input–Output Models
useeior is an open-source R package that builds USEEIO models, a family of environmentally-extended input–output models of US goods and services used for life cycle assessment, environmental footprint estimation, and related applications. USEEIO models have gained a wide user base since their initial release in 2017, but users were often challenged to prepare the required input data and to work through a complicated model-building process. useeior was created to address these challenges. In useeior, economic and environmental data are conveniently retrievable for immediate use. Users can build models simply from given or user-specified model configurations and optional hybridization specifications. The assembly of economic and environmental data and the matrix calculations are performed automatically, and users can export model results to desired formats. useeior is a core component of the USEEIO modeling framework: it improves transparency, efficiency, and flexibility in building USEEIO models, and it was used to deliver the recent USEEIO model.
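As a rough illustration of the matrix calculations such a package automates, the generic EEIO result computation (total requirements via the Leontief inverse, scaled by emission factors and final demand) can be sketched with toy numbers. This is the textbook form, not useeior's exact internals, and all values are invented:

```python
import numpy as np

# Generic EEIO calculation with hypothetical 2-sector data:
# environmental result = flows-per-dollar x total requirements x final demand.

A = np.array([[0.1, 0.2],      # direct requirements matrix (sector x sector)
              [0.3, 0.1]])
B = np.array([[0.5, 0.2]])     # one emission, per dollar of sector output
y = np.array([100.0, 50.0])    # final demand vector ($)

L = np.linalg.inv(np.eye(2) - A)   # Leontief total requirements matrix
x = L @ y                          # total industry output induced by demand
m = B @ x                          # total supply-chain emissions
print(m)
```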
FLOWSA: A Python Package Attributing Resource Use, Waste, Emissions, and Other Flows to Industries
Quantifying industry consumption or production of resources, wastes, emissions, and losses—collectively called flows—is a complex and evolving process. The attribution of flows to industries often requires allocating multiple data sources that span spatial and temporal scopes and contain varied levels of aggregation. Once calculated, datasets can quickly become outdated with new releases of source data. The US Environmental Protection Agency (USEPA) developed the open-source Flow Sector Attribution (FLOWSA) Python package to address the challenges of attributing flows to US industrial and final-use sectors. Models capture flows drawn from or released to the environment by sectors, as well as flow transfers between sectors. Data on flow use and generation by source-defined activities are imported from providers and transformed into standardized tables, but are otherwise numerically unchanged in preparation for modeling. FLOWSA sector attribution models allocate primary data sources to industries using secondary data sources and files that map activities to sectors. Users can modify methodological, spatial, and temporal parameters to explore and compare the impact of sector attribution methodological changes on model results. The standardized data outputs from these models serve as the environmental data inputs to the latest version of USEPA's US Environmentally Extended Input–Output (USEEIO) models, life cycle models of US goods and services covering ~400 categories. This communication demonstrates FLOWSA's capabilities by describing how to build models and providing select model results for US industry use of water, land, and employment. FLOWSA is available on GitHub, and many of the data outputs are available on the USEPA's Data Commons.
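The core attribution step can be sketched as follows. The activity, sector codes, and quantities are hypothetical, and this is a simplification of the allocation logic, not FLOWSA's actual API:

```python
# Sketch of flow-to-sector attribution: a total reported for a broad
# source-defined activity is split across mapped sectors in proportion
# to a secondary data source. All names and numbers are hypothetical.

water_withdrawal = {"Irrigation": 1000.0}  # primary source total (e.g., Mgal/d)

# Activity-to-sector crosswalk (one activity maps to several sectors)
activity_to_sectors = {"Irrigation": ["1111", "1112"]}  # NAICS-like codes

# Secondary allocation data, e.g., irrigated acreage by sector
acreage = {"1111": 300.0, "1112": 100.0}

flow_by_sector = {}
for activity, total in water_withdrawal.items():
    sectors = activity_to_sectors[activity]
    basis = sum(acreage[s] for s in sectors)
    for s in sectors:
        flow_by_sector[s] = total * acreage[s] / basis

print(flow_by_sector)  # {'1111': 750.0, '1112': 250.0}
```

Swapping the secondary dataset (acreage, employment, economic output) is exactly the kind of methodological parameter the abstract says users can vary to compare attribution choices.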
Critical review of elementary flows in LCA data
Elementary flows are essential components of data used for life cycle assessment. No standard list is used across all sources; data providers currently manage these flows independently. Elementary flows must be consistent across a life cycle inventory for accurate inventory analysis and must correspond with impact methods for impact assessment. With the goal of achieving a global network of LCA databases, a critical review of elementary flow usage and management in LCA data sources was performed. Flows were collected in a standard template from various life cycle inventory, impact method, and software sources. A typology of elementary flows was created to identify flows by types such as chemicals, minerals, and land flows, to facilitate differential analysis. Twelve criteria were defined to evaluate flows against principles of clarity, consistency, extensibility, translatability, and uniqueness. Over 134,000 elementary flows from six LCI databases, three LCIA methods, and three LCA software tools were collected and evaluated from European, North American, and Asian-Pacific LCA sources. The vast majority were typed as "Element or Compound" or "Group of Chemicals", with less than 10% coming from the other seven types. Many flows lack important identifying information, including context information (environmental compartments), directionality (LCIA methods generally do not provide this information), additional clarifiers such as CAS numbers and synonyms, unique identifiers (such as UUIDs), and supporting metadata. Extensibility of flows is poor: because user-defined nomenclature is used, patterns in flow naming are generally complex and inconsistent. The current shortcomings in flow clarity, consistency, and extensibility are likely to make it more challenging for users to properly select and use elementary flows when creating LCA data; they also make translation and conversion between different reference lists difficult, with likely loss of information.
We recommend applying a typology to flow lists; using unique identifiers and including clarifiers based on external references; setting an exclusive or inclusive nomenclature for flow context information that includes directionality and environmental compartment information; separating flowable names from context and unit information; linking inclusive taxonomies to create limited patterns for flowable names; and using an encoding schema that will prevent technical translation errors.
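A minimal sketch of a flow record following these recommendations (the field names are illustrative, not an actual reference-list schema):

```python
# Sketch of a structured elementary flow record: the flowable name is
# kept separate from context (directionality + compartment) and unit,
# and the flow carries a stable unique identifier. Fields are illustrative.
from dataclasses import dataclass
import uuid

@dataclass(frozen=True)
class ElementaryFlow:
    flowable: str   # substance name only, no context or unit embedded
    context: str    # directionality + compartment, e.g. "emission/air"
    unit: str
    flow_id: str    # unique identifier (e.g., a UUID)

flow = ElementaryFlow(
    flowable="Carbon dioxide",
    context="emission/air",
    unit="kg",
    flow_id=str(uuid.uuid4()),
)
print(flow.flowable, flow.context)
```

Keeping these fields separate is what makes automated mapping between lists tractable: two sources can match on flowable and context independently instead of parsing them out of a single composite name.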
Identifying/Quantifying Environmental Trade-offs Inherent in GHG Reduction Strategies for Coal-Fired Power
Improvements to coal power plant technology and the cofired combustion of biomass promise direct greenhouse gas (GHG) reductions for existing coal-fired power plants. Questions remain as to what the reduction potentials are from a life cycle perspective and whether they will result in unintended increases in impacts to air and water quality and human health. This study provides a unique analysis of the potential environmental impact reductions from upgrading existing subcritical pulverized coal power plants to increase their efficiency, improving environmental controls, cofiring biomass, and exporting steam for industrial use. The climate impacts are examined with both a traditional 100-year GWP method and a time series analysis that accounts for emission and uptake timing over the life of the power plant. Compared to fleet average pulverized coal boilers (33% efficiency), we find that circulating fluidized bed boilers (39% efficiency) may provide GHG reductions of about 13% when using 100% coal and reductions of about 20–37% when cofiring with 30% biomass. Additional greenhouse gas reductions from combined heat and power are minimal if the steam coproduct displaces steam from an efficient natural gas boiler. These upgrades and cofiring biomass can also reduce other life cycle impacts, although there may be increased impacts to water quality (eutrophication) when using biomass from an intensively cultivated source. Climate change impacts are sensitive to the timing of emissions and carbon sequestration as well as the time horizon over which impacts are considered, particularly for long-growth woody biomass.
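Why emission timing matters can be shown with a toy calculation. The quantities are invented and this is far simpler than the study's time series method: combustion CO2 is released at once, while regrowth removes it gradually, so net cumulative emissions depend on the horizon considered.

```python
# Toy illustration of emission/uptake timing (hypothetical quantities):
# a pulse of combustion CO2 in year 0 versus slow removal by regrowth.

emission_year0 = 100.0   # kg CO2 released at combustion
uptake_per_year = 2.0    # kg CO2/yr removed by regrowth of the biomass stock

for horizon in (30, 50):  # assessment horizons in years
    net_cumulative = emission_year0 - uptake_per_year * horizon
    print(horizon, net_cumulative)  # 30 -> 40.0 kg, 50 -> 0.0 kg
```

Under these toy numbers the emission is fully balanced only after 50 years, so a shorter horizon attributes a net climate burden to the biomass, which is the sensitivity the abstract describes for long-growth woody feedstocks.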
Mining Available Data from the United States Environmental Protection Agency to Support Rapid Life Cycle Inventory Modeling of Chemical Manufacturing
Demands for quick and accurate life cycle assessments create a need for methods to rapidly generate reliable life cycle inventories (LCI). Data mining is a suitable tool for this purpose, especially given the large amount of available governmental data. These data are typically applied to LCIs on a case-by-case basis. As linked open data becomes more prevalent, it may be possible to automate LCI using data mining by establishing a reproducible approach for identifying, extracting, and processing the data. This work proposes a method for standardizing and eventually automating the discovery and use of publicly available data at the United States Environmental Protection Agency for chemical-manufacturing LCI. The method is developed using a case study of acetic acid. The data quality and gap analyses for the generated inventory found that the selected data sources can provide information with equal or better reliability and representativeness on air, water, hazardous waste, on-site energy usage, and production volumes, but with key data gaps including material inputs, water usage, purchased electricity, and transportation requirements. A comparison of the generated LCI with existing data revealed that the data-mining inventory is in reasonable agreement with existing data and may provide a more comprehensive inventory of air emissions and water discharges. The case study highlighted challenges for current data management practices that must be overcome to successfully automate the method using semantic technology. A benefit of the method is that the openly available data can be compiled in a standardized and transparent manner that supports potential automation, with flexibility to incorporate new data sources as needed.
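One step of such a pipeline, normalizing mined release records by production volume into a per-kilogram inventory, can be sketched with hypothetical records (facility names, flows, and quantities are all invented):

```python
# Sketch of turning mined facility release records into a gate-to-gate
# LCI: annual releases are summed by flow and divided by production
# volume. Records and quantities are hypothetical.
from collections import defaultdict

records = [
    {"facility": "A", "flow": "Methanol", "kg_per_year": 1200.0},
    {"facility": "B", "flow": "Methanol", "kg_per_year": 800.0},
    {"facility": "A", "flow": "Ammonia",  "kg_per_year": 500.0},
]
production_kg = 1.0e6  # annual production covered by the mined records

per_flow = defaultdict(float)
for rec in records:
    per_flow[rec["flow"]] += rec["kg_per_year"]

# Inventory per kg of product
lci = {flow: total / production_kg for flow, total in per_flow.items()}
print(lci)  # {'Methanol': 0.002, 'Ammonia': 0.0005}
```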