
    An Automated Model-Based Testing Approach in Software Product Lines Using a Variability Language

    This paper presents the application of an automated testing approach for Software Product Lines (SPL) driven by their state-machine and variability models. Context: Model-based testing provides a technique for the automatic generation of test cases from models. Introducing a variability model into this technique enables testing automation in SPL. Method: We use UML and CVL (Common Variability Language) models as input, and JUnit test cases are derived from these models. The approach has been implemented using the UML2 Eclipse Modeling platform and the CVL-Tool. Validation: A model checking tool prototype has been developed and a case study has been performed. Conclusions: Preliminary experiments have shown that our approach can find structural errors in the SPL under test. In future work we will introduce Object Constraint Language (OCL) constraints attached to the input UML models.
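    As a rough illustration of the kind of test case such a pipeline could emit, the sketch below shows a hand-written JUnit test for one resolved product variant; the TurnstileProduct class, its states, and the insertCoin event are illustrative assumptions, not artifacts from the paper.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Minimal sketch: a hand-written stand-in for one SPL product (the
// variant with a coin slot selected in the variability model). In the
// paper's approach the product would come from the product line itself
// and the test would be generated from the UML state machine.
public class TurnstileProductTest {

    static class TurnstileProduct {           // illustrative product variant
        private String state = "Locked";      // initial state of the machine
        void insertCoin() { state = "Unlocked"; }
        String currentState() { return state; }
    }

    @Test
    public void insertCoinMovesFromLockedToUnlocked() {
        TurnstileProduct p = new TurnstileProduct();
        assertEquals("Locked", p.currentState());
        p.insertCoin();                       // event on the UML transition
        assertEquals("Unlocked", p.currentState());
    }
}
```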

    Beyond Volume: The Impact of Complex Healthcare Data on the Machine Learning Pipeline

    From medical charts to the national census, healthcare has traditionally operated under a paper-based paradigm. However, the past decade has marked a long and arduous transformation bringing healthcare into the digital age. Ranging from electronic health records, to digitized imaging and laboratory reports, to public health datasets, healthcare today generates an incredible amount of digital information. Such a wealth of data presents an exciting opportunity for integrated machine learning solutions to address problems across multiple facets of healthcare practice and administration. Unfortunately, the ability to derive accurate and informative insights requires more than the ability to execute machine learning models. Rather, a deeper understanding of the data on which the models are run is imperative for their success. While a significant effort has been undertaken to develop models able to process the volume of data obtained during the analysis of millions of digitized patient records, it is important to remember that volume represents only one aspect of the data. In fact, drawing on data from an increasingly diverse set of sources, healthcare data presents an incredibly complex set of attributes that must be accounted for throughout the machine learning pipeline. This chapter focuses on highlighting such challenges, and is broken down into three distinct components, each representing a phase of the pipeline. We begin with attributes of the data accounted for during preprocessing, then move to considerations during model building, and end with challenges to the interpretation of model output. For each component, we present a discussion around data as it relates to the healthcare domain and offer insight into the challenges each may impose on the efficiency of machine learning techniques.
    Comment: Healthcare Informatics, Machine Learning, Knowledge Discovery: 20 pages, 1 figure

    Detecting and Explaining Conflicts in Attributed Feature Models

    Product configuration systems are often based on a variability model. The development of a variability model is a time-consuming and error-prone process. Given the ongoing development of products, the variability model has to be adapted frequently. These changes often lead to mistakes, such that some products can no longer be derived from the model, undesired products become derivable, or contradictions arise in the variability model. In this paper, we propose an approach to discover and explain contradictions in attributed feature models efficiently in order to assist the developer with the correction of mistakes. We use extended feature models with attributes and arithmetic constraints, translate them into a constraint satisfaction problem, and explore it for contradictions. When a contradiction is found, the QuickXplain algorithm searches the constraints for a set of contradicting relations.
    Comment: In Proceedings FMSPLE 2015, arXiv:1504.0301
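    To make the translation idea concrete, the toy sketch below encodes a tiny attributed feature model as a constraint check and explores all selections by brute force. The features, costs, and constraints are invented for illustration; a real pipeline would hand the constraints to a CSP solver and run QuickXplain to extract a minimal conflicting subset rather than enumerate.

```java
// Minimal toy sketch (not the paper's implementation): a tiny attributed
// feature model checked by brute-force enumeration over all selections.
public class FeatureModelConflictDemo {

    // Features: index 0 = Root, 1 = B, 2 = C; COST is the attribute
    // value attached to each feature (illustrative numbers).
    static final int[] COST = {0, 5, 7};

    // Returns true if a feature selection satisfies every model constraint.
    static boolean satisfies(boolean[] sel) {
        if (!sel[0]) return false;               // Root is mandatory
        if (!sel[1]) return false;               // B is mandatory under Root
        if (sel[1] && !sel[2]) return false;     // cross-tree: B requires C
        int cost = 0;
        for (int i = 0; i < sel.length; i++) {
            if (sel[i]) cost += COST[i];
        }
        return cost <= 10;                       // arithmetic constraint on the attribute
    }

    public static void main(String[] args) {
        boolean anyProduct = false;
        for (int mask = 0; mask < (1 << COST.length); mask++) {
            boolean[] sel = {(mask & 1) != 0, (mask & 2) != 0, (mask & 4) != 0};
            if (satisfies(sel)) { anyProduct = true; break; }
        }
        // Here the constraints conflict: Root forces B, B requires C,
        // but Root+B+C costs 12 > 10, so no product is derivable.
        System.out.println(anyProduct
            ? "Consistent: at least one product is derivable"
            : "Contradiction: no product satisfies all constraints");
    }
}
```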

    Comparing Rough Set Theory with Multiple Regression Analysis as Automated Valuation Methodologies

    This paper focuses on the problem of applying rough set theory to mass appraisal. The methodology was first introduced by the Polish mathematician Zdzisław Pawlak and has recently been applied by the author as an automated valuation methodology. The method allows the appraiser to estimate a property's value without specifying an econometric model, although it does not give any quantitative estimate of marginal prices. In a previous paper by the author, data were organized into classes prior to the valuation process, allowing the right if-then "rule" for each property class to be defined. In that work, the relationship between a property and its class of value was said to be dichotomic.
    Keywords: mass appraisal; property valuation; rough set theory; valued tolerance relation
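    A minimal sketch of the rough-set machinery behind such a method, with invented data: properties sharing identical condition attributes fall into the same indiscernibility class, and a value class is then approximated from below (its certain members) and from above (its possible members).

```java
import java.util.*;

// Rough-set lower/upper approximation on a toy property dataset.
// Attribute values and class memberships are illustrative only.
public class RoughSetDemo {

    public static void main(String[] args) {
        // Each row: {zone, rooms} -> condition attributes of one property.
        String[][] attrs = {
            {"centre", "3"}, {"centre", "3"}, {"suburb", "2"},
            {"suburb", "2"}, {"centre", "2"}
        };
        // Target set: properties appraised in the "high" value class.
        Set<Integer> highValue = new HashSet<>(Arrays.asList(0, 1, 2));

        // Indiscernibility classes: objects with identical attribute vectors.
        Map<String, Set<Integer>> classes = new LinkedHashMap<>();
        for (int i = 0; i < attrs.length; i++) {
            classes.computeIfAbsent(String.join("|", attrs[i]),
                                    k -> new HashSet<>()).add(i);
        }

        Set<Integer> lower = new TreeSet<>(), upper = new TreeSet<>();
        for (Set<Integer> eq : classes.values()) {
            if (highValue.containsAll(eq)) lower.addAll(eq);   // certainly high
            for (int o : eq) {
                if (highValue.contains(o)) { upper.addAll(eq); break; } // possibly high
            }
        }
        System.out.println("Lower approximation: " + lower);   // [0, 1]
        System.out.println("Upper approximation: " + upper);   // [0, 1, 2, 3]
    }
}
```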

    A high resolution spatiotemporal model for in-vehicle black carbon exposure: quantifying the in-vehicle exposure reduction due to the Euro 5 particulate matter standard legislation

    Several studies have shown that a significant share of daily air pollution exposure is inhaled during trips. In this study, car drivers assessed their own black carbon exposure under real-life conditions (223 h of data from 2013). The spatiotemporal exposure of the car drivers is modeled using a data science approach, referred to as microscopic land-use regression (µLUR). In-vehicle exposure is highly dynamic and is strongly related to the local traffic dynamics. An extensive set of potential covariates was used to model the in-vehicle black carbon exposure at a temporal resolution of 10 s. Traffic was retrieved directly from traffic databases and indirectly by attributing the trips through a noise map as an alternative traffic source. Modeling with generalized additive models (GAM) shows non-linear effects for meteorology and diurnal traffic patterns. A fitted diurnal pattern indirectly explains the complex diurnal variability of the exposure caused by the non-linear interaction between traffic density and the distance to the preceding vehicles. Comparing the strength of direct traffic attribution and indirect noise-map-based traffic attribution reveals the potential of noise maps as a proxy for traffic-related air pollution exposure. An external validation, based on a dataset gathered in 2010-2011, quantifies the exposure reduction inside the vehicles at 33% (mean) and 50% (median). The EU Euro 5 PM emission standard (in force since 2009) explains the largest part of the discrepancy between the 2013 measurement campaign and the validation dataset. The µLUR methodology provides a high-resolution, route-sensitive, seasonal and meteorology-sensitive personal exposure estimate for epidemiologists and policy makers.
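    As a small illustration of how such a between-campaign reduction could be quantified, the sketch below compares mean and median in-vehicle black carbon levels across two sample sets; the concentrations are invented placeholders, not data from the study.

```java
import java.util.Arrays;

// Compare summary statistics of in-vehicle black carbon samples from
// two campaigns and report the relative reduction. Values are invented.
public class ExposureReductionDemo {

    static double mean(double[] x) {
        return Arrays.stream(x).average().orElse(0);
    }

    static double median(double[] x) {
        double[] s = x.clone();
        Arrays.sort(s);
        int n = s.length;
        return n % 2 == 1 ? s[n / 2] : (s[n / 2 - 1] + s[n / 2]) / 2.0;
    }

    public static void main(String[] args) {
        // In-vehicle BC concentrations (µg/m3), hypothetical samples.
        double[] campaign2010 = {9.0, 12.0, 6.0, 15.0, 8.0};
        double[] campaign2013 = {6.0, 8.0, 4.0, 9.0, 5.0};

        double meanRed = 1 - mean(campaign2013) / mean(campaign2010);
        double medianRed = 1 - median(campaign2013) / median(campaign2010);
        System.out.printf("Mean reduction:   %.0f%%%n", 100 * meanRed);
        System.out.printf("Median reduction: %.0f%%%n", 100 * medianRed);
    }
}
```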