Natural Variability in Projections of Climate Change Impacts on Fine Particulate Matter Pollution
Variations in meteorology associated with climate change can impact fine particulate matter (PM2.5) pollution by affecting natural emissions, atmospheric chemistry, and pollutant transport. However, substantial discrepancies exist among model-based projections of PM2.5 impacts driven by anthropogenic climate change. Natural variability can contribute significantly to the uncertainty in these estimates. Using a large ensemble of climate and atmospheric chemistry simulations, we evaluate the influence of natural variability on projections of climate change impacts on PM2.5 pollution in the United States. We find that natural variability in simulated PM2.5 can be comparable to or larger than reported estimates of anthropogenically induced climate impacts. Relative to mean concentrations, the variability in projected PM2.5 climate impacts can also exceed that of ozone impacts. Based on our projections, we recommend that analyses aiming to isolate the effect of climate change on PM2.5 use 10 years or more of modeling to capture the internal variability in air quality and to increase confidence that the anthropogenically forced effect is differentiated from the noise introduced by natural variability. Projections at a regional scale or under greenhouse gas mitigation scenarios can require additional modeling to attribute impacts to climate change. Adequately considering natural variability can be an important step toward explaining the inconsistencies in estimates of climate-induced impacts on PM2.5. Improved treatment of natural variability, through extended modeling lengths or initial-condition ensembles, can reduce uncertainty in air quality projections and improve assessments of climate policy risks and benefits.
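The 10-year recommendation can be illustrated with a toy Monte Carlo sketch. All numbers below are hypothetical, not the paper's: averaging more simulated years shrinks the noise from internal variability roughly as 1/sqrt(n), making a fixed forced signal easier to separate.

```python
import random
import statistics

random.seed(0)

FORCED_SIGNAL = 0.3  # hypothetical climate-driven PM2.5 change (ug/m3)
NATURAL_SD = 0.8     # hypothetical year-to-year internal variability (ug/m3)

def simulated_change(n_years: int) -> float:
    """One ensemble member: mean PM2.5 change averaged over n_years of simulation."""
    yearly = [FORCED_SIGNAL + random.gauss(0.0, NATURAL_SD) for _ in range(n_years)]
    return statistics.mean(yearly)

def ensemble_spread(n_years: int, members: int = 200) -> float:
    """Spread of the estimated change across an initial-condition ensemble."""
    estimates = [simulated_change(n_years) for _ in range(members)]
    return statistics.stdev(estimates)
```

With these illustrative values, the ensemble spread for a single simulated year is larger than the forced signal itself, while averaging 10 years shrinks the spread roughly threefold, which is the sense in which longer modeling periods raise confidence that the forced effect stands out from the noise.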
Systematic evaluation of software product line architectures
The architecture of a software product line is one of its most important artifacts, as it represents an abstraction of the products that can be generated. It is crucial to evaluate the quality attributes of a product line architecture in order to: increase the productivity of the product line process and the quality of the products; provide a means to understand the potential behavior of the products and, consequently, decrease their time to market; and improve the handling of the product line variability. The evaluation of a product line architecture can serve as a basis for software managers and architects to analyze the managerial and economic value of a product line. Most current research on the evaluation of product line architectures does not take into account metrics directly obtained from UML models and their variabilities; the metrics used instead are difficult to apply in general and to use for quantitative analysis. This paper presents a Systematic Evaluation Method for UML-based Software Product Line Architecture, SystEM-PLA. SystEM-PLA differs from current research as it provides stakeholders with a means to: (i) estimate and analyze potential products; (ii) use predefined basic UML-based metrics to compose quality attribute metrics; (iii) perform feasibility and trade-off analysis of a product line architecture with respect to its quality attributes; and (iv) make the evaluation of a product line architecture more flexible. An example using the SEI's Arcade Game Maker (AGM) product line is presented as a proof of concept, illustrating SystEM-PLA activities. Metrics for the complexity and extensibility quality attributes are defined and used to perform a trade-off analysis.
An automated Model-based Testing Approach in Software Product Lines Using a Variability Language
This paper presents the application of an automated testing approach for Software Product Lines (SPL) driven by state-machine and variability models. Context: Model-based testing provides a technique for the automatic generation of test cases from models. Introducing a variability model into this technique can achieve testing automation in SPL. Method: We use UML and CVL (Common Variability Language) models as input, and JUnit test cases are derived from these models. The approach has been implemented using the UML2 Eclipse Modeling platform and the CVL-Tool. Validation: A model checking tool prototype has been developed and a case study has been performed. Conclusions: Preliminary experiments have shown that our approach can find structural errors in the SPL under test. In future work we will introduce Object Constraint Language (OCL) constraints attached to the input UML models.
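The core idea (resolve the variability model against a product configuration, then derive test cases from the resulting state machine) can be sketched in a language-agnostic way. The paper itself works with UML/CVL models and emits JUnit tests; the state machine, feature name, and events below are invented purely for illustration, and a breadth-first walk generates one event sequence per reachable transition:

```python
# Hypothetical SPL state machine: each transition is
# (source, event, target, required_feature or None for mandatory).
TRANSITIONS = [
    ("Idle",    "insertCoin", "Ready",   None),
    ("Ready",   "play",       "Playing", None),
    ("Playing", "pause",      "Paused",  "pauseFeature"),  # optional feature
    ("Paused",  "play",       "Playing", "pauseFeature"),
    ("Playing", "stop",       "Idle",    None),
]

def resolve(features: set) -> list:
    """Bind the variability: keep only transitions enabled by the configuration."""
    return [t for t in TRANSITIONS if t[3] is None or t[3] in features]

def transition_coverage_tests(features: set, start: str = "Idle") -> list:
    """Derive one event sequence per reachable transition (BFS over the product machine)."""
    machine = resolve(features)
    tests, frontier, seen = [], [(start, [])], set()
    while frontier:
        state, path = frontier.pop(0)
        for (src, event, tgt, _) in machine:
            if src == state and (src, event) not in seen:
                seen.add((src, event))
                tests.append(path + [event])
                frontier.append((tgt, path + [event]))
    return tests
```

A configuration with the optional feature yields test sequences covering the pause/resume transitions; a configuration without it yields only the three mandatory transitions, which is the sense in which the variability model drives test generation.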
Beyond Volume: The Impact of Complex Healthcare Data on the Machine Learning Pipeline
From medical charts to national censuses, healthcare has traditionally operated under a paper-based paradigm. However, the past decade has marked a long and arduous transformation bringing healthcare into the digital age. Ranging from electronic health records, to digitized imaging and laboratory reports, to public health datasets, healthcare today generates an incredible amount of digital information. Such a wealth of data presents an exciting opportunity for integrated machine learning solutions to address problems across multiple facets of healthcare practice and administration. Unfortunately, the ability to derive accurate and informative insights requires more than the ability to execute machine learning models. Rather, a deeper understanding of the data on which the models are run is imperative for their success. While a significant effort has been undertaken to develop models able to process the volume of data obtained during the analysis of millions of digitized patient records, it is important to remember that volume represents only one aspect of the data. In fact, drawing on data from an increasingly diverse set of sources, healthcare data presents an incredibly complex set of attributes that must be accounted for throughout the machine learning pipeline. This chapter focuses on highlighting such challenges, and is broken down into three distinct components, each representing a phase of the pipeline. We begin with attributes of the data accounted for during preprocessing, then move to considerations during model building, and end with challenges to the interpretation of model output. For each component, we present a discussion around data as it relates to the healthcare domain and offer insight into the challenges each may impose on the efficiency of machine learning techniques.
Comment: Healthcare Informatics, Machine Learning, Knowledge Discovery; 20 pages, 1 figure
Detecting and Explaining Conflicts in Attributed Feature Models
Product configuration systems are often based on a variability model. The development of a variability model is a time-consuming and error-prone process. Given the ongoing development of products, the variability model has to be adapted frequently. These changes often lead to mistakes, such that some products can no longer be derived from the model, undesired products become derivable, or contradictions arise in the variability model. In this paper, we propose an approach to discover and explain contradictions in attributed feature models efficiently, in order to assist the developer with the correction of mistakes. We use extended feature models with attributes and arithmetic constraints, translate them into a constraint satisfaction problem, and search it for contradictions. When a contradiction is found, the constraints are searched for a set of contradicting relations by the QuickXplain algorithm.
Comment: In Proceedings FMSPLE 2015, arXiv:1504.0301
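The pipeline this abstract describes (translate an attributed feature model into a constraint satisfaction problem, detect a contradiction, then isolate a set of conflicting relations) can be sketched as follows. The attribute names, constraints, and domain are invented for illustration, and an exhaustive search stands in for both the CSP solver and the QuickXplain algorithm:

```python
from itertools import combinations, product

# Hypothetical attributed feature model: two features, each with an integer
# cost attribute. The three relations below are jointly contradictory.
CONSTRAINTS = {
    "C1: a_cost >= 10":     lambda a, b: a >= 10,
    "C2: b_cost == a_cost": lambda a, b: b == a,
    "C3: b_cost < 5":       lambda a, b: b < 5,
}
DOMAIN = range(0, 21)  # illustrative attribute domain

def satisfiable(names) -> bool:
    """Check whether some attribute assignment satisfies all given constraints."""
    return any(all(CONSTRAINTS[n](a, b) for n in names)
               for a, b in product(DOMAIN, DOMAIN))

def minimal_conflict(names):
    """Smallest contradicting subset (brute-force stand-in for QuickXplain)."""
    for size in range(1, len(names) + 1):
        for subset in combinations(names, size):
            if not satisfiable(subset):
                return set(subset)
    return None  # the model is consistent
```

Here every pair of constraints is satisfiable on its own, so the minimal conflict is all three relations together; reporting that set, rather than just "unsatisfiable", is what makes the explanation useful for correcting the model. QuickXplain reaches the same kind of answer with far fewer consistency checks.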
Comparing Rough Set Theory with Multiple Regression Analysis as Automated Valuation Methodologies
This paper focuses on the problem of applying rough set theory to mass appraisal. The methodology was first introduced by a Polish mathematician and has recently been applied by the author as an automated valuation methodology. The method allows the appraiser to estimate a property's value without specifying an econometric model, although it does not give any quantitative estimation of marginal prices. In a previous paper by the author, data were organized into classes prior to the valuation process, allowing the appropriate if-then "rule" for each property class to be defined. In that work, the relationship between a property and its class of value was said to be dichotomic.
Keywords: mass appraisal; property valuation; rough set theory; valued tolerance relation
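The class-based valuation described here rests on rough set theory's central device: the lower and upper approximations of a decision class, computed from indiscernibility between cases. A minimal sketch with an invented appraisal table (attributes, bands, and classes are all hypothetical):

```python
from collections import defaultdict

# Hypothetical mass-appraisal table: condition attributes -> decision class.
properties = [
    ({"location": "urban", "size": "large"}, "high"),
    ({"location": "urban", "size": "large"}, "high"),
    ({"location": "urban", "size": "small"}, "high"),  # inconsistent with the next row
    ({"location": "urban", "size": "small"}, "low"),
    ({"location": "rural", "size": "small"}, "low"),
]

def approximations(table, target):
    """Rough-set lower/upper approximations of the target decision class."""
    blocks = defaultdict(list)  # indiscernibility classes over condition attributes
    for i, (cond, _) in enumerate(table):
        blocks[tuple(sorted(cond.items()))].append(i)
    target_idx = {i for i, (_, d) in enumerate(table) if d == target}
    lower, upper = set(), set()
    for members in blocks.values():
        m = set(members)
        if m <= target_idx:      # block certainly in the class
            lower |= m
        if m & target_idx:       # block possibly in the class
            upper |= m
    return lower, upper
```

The gap between the two approximations is exactly where the data cannot support a crisp if-then rule: indiscernible properties with conflicting value classes fall in the upper but not the lower approximation, which is the dichotomic property-to-class relationship the abstract alludes to.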
A high resolution spatiotemporal model for in-vehicle black carbon exposure: quantifying the in-vehicle exposure reduction due to the Euro 5 particulate matter standard legislation
Several studies have shown that a significant share of daily air pollution exposure is inhaled during trips. In this study, car drivers assessed their own black carbon exposure under real-life conditions (223 h of data from 2013). The spatiotemporal exposure of the car drivers is modeled using a data science approach referred to as microscopic land-use regression (µLUR). In-vehicle exposure is highly dynamic and strongly related to the local traffic dynamics. An extensive set of potential covariates was used to model the in-vehicle black carbon exposure at a temporal resolution of 10 s. Traffic was retrieved directly from traffic databases and indirectly by attributing the trips through a noise map as an alternative traffic source. Modeling with generalized additive models (GAMs) shows non-linear effects for meteorology and diurnal traffic patterns. A fitted diurnal pattern indirectly explains the complex diurnal variability of the exposure due to the non-linear interaction between traffic density and the distance to the preceding vehicles. Comparing the strength of direct traffic attribution and indirect noise-map-based traffic attribution reveals the potential of noise maps as a proxy for traffic-related air pollution exposure. An external validation, based on a dataset gathered in 2010-2011, quantifies the exposure reduction inside the vehicles at 33% (mean) and 50% (median). The EU Euro 5 PM emission standard (in force since 2009) explains the largest part of the discrepancy between the 2013 measurement campaign and the validation dataset. The µLUR methodology provides a high-resolution, route-sensitive, season- and meteorology-sensitive personal exposure estimate for epidemiologists and policy makers.