1,071 research outputs found

    Factor analysis modelling for speaker verification with short utterances

    This paper examines combining relevance MAP and subspace speaker adaptation to train GMM speaker models for speaker verification systems, with a particular focus on short utterance lengths. The subspace speaker adaptation method develops a speaker GMM mean supervector as the sum of a speaker-independent prior distribution and a speaker-dependent offset constrained to lie within a low-rank subspace, and has been shown to improve accuracy over ordinary relevance MAP when the amount of training data is limited. Testing on NIST SRE data shows that combining the two processes provides speaker models which lead to modest improvements in verification accuracy in limited-data situations, in addition to improving the performance of the speaker verification system when a larger amount of training data is available.
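
    The combined adaptation described above can be summarised in one equation. The notation below is illustrative rather than taken from the paper: m is the speaker-independent (UBM) mean supervector, V y_s the speaker offset constrained to a low-rank subspace, and D z_s the full-rank relevance-MAP offset for speaker s.

    ```latex
    % Illustrative notation, not the paper's own symbols.
    % \mu_s : adapted mean supervector for speaker s
    % m     : speaker-independent prior (UBM) mean supervector
    % V y_s : low-rank subspace offset (y_s is a low-dimensional speaker factor)
    % D z_s : full-rank relevance-MAP offset (z_s is a per-component residual)
    \mu_s = m + V\,y_s + D\,z_s
    ```

    Verification then scores a test utterance against the GMM whose component means are taken from the adapted supervector \mu_s.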

    Automated analysis of feature models: Quo vadis?

    Feature models have been used since the 1990s to describe software product lines as a way of reusing common parts in a family of software systems. In 2010, a systematic literature review was published summarizing the advances and settling the basis of the area of Automated Analysis of Feature Models (AAFM). Since then, different studies have applied the AAFM in different domains. In this paper, we provide an overview of the evolution of this field since 2010 by performing a systematic mapping study considering 423 primary sources. We found six different variability facets where the AAFM is being applied that define the tendencies: product configuration and derivation; testing and evolution; reverse engineering; multi-model variability analysis; variability modelling; and variability-intensive systems. We also confirmed that there is a lack of industrial evidence in most of the cases. Finally, we present where and when the papers have been published and who are the authors and institutions that are contributing to the field. We observed that the field's maturity is shown by the increase in the number of journal publications over the years as well as by the diversity of conferences and workshops where papers are published. We also suggest synergies with other areas, such as cloud or mobile computing, that can motivate further research in the future. Ministerio de Economía y Competitividad TIN2015-70560-R; Junta de Andalucía TIC-186

    Modelling Local Deep Convolutional Neural Network Features to Improve Fine-Grained Image Classification

    We propose a local modelling approach using deep convolutional neural networks (CNNs) for fine-grained image classification. Recently, deep CNNs trained on large datasets have considerably improved the performance of object recognition. However, to date there has been limited work using these deep CNNs as local feature extractors. This partly stems from CNNs having high-dimensional internal representations, which makes such representations difficult to model using stochastic models. To overcome this issue, we propose to reduce the dimensionality of one of the internal fully connected layers, in conjunction with layer-restricted retraining to avoid retraining the entire network. The distribution of low-dimensional features obtained from the modified layer is then modelled using a Gaussian mixture model. Comparative experiments show that considerable performance improvements can be achieved on the challenging Fish and UEC FOOD-100 datasets. Comment: 5 pages, three figures
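
    As a minimal sketch of the GMM modelling step described above (not the authors' code): low-dimensional local CNN features, here simulated with random vectors so the example is self-contained, are pooled per class and fitted with one Gaussian mixture model each; one plausible way to classify a test image, not necessarily the paper's exact pipeline, is to sum the log-likelihood of its local features under each class GMM.

    ```python
    # Sketch of modelling low-dimensional local CNN features with GMMs.
    # Features are simulated with random vectors; in practice each vector would
    # come from the reduced-dimensionality fully connected layer of the CNN,
    # evaluated on local image patches.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    n_classes, feat_dim, n_train_vectors = 3, 32, 500

    # One GMM per class, fitted on the pooled local features of that class.
    class_gmms = []
    for c in range(n_classes):
        train_feats = rng.normal(loc=c, size=(n_train_vectors, feat_dim))
        gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
        gmm.fit(train_feats)
        class_gmms.append(gmm)

    # Classify a test image by summing the log-likelihood of its local feature
    # vectors under each class GMM and picking the highest-scoring class.
    test_feats = rng.normal(loc=1, size=(40, feat_dim))  # 40 local patches
    scores = [gmm.score_samples(test_feats).sum() for gmm in class_gmms]
    print("predicted class:", int(np.argmax(scores)))
    ```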

    Variability-Modelling Practices in Industrial Software Product Lines: A Qualitative Study

    Many organizations have transitioned from single-system development to product-line development with the goal of increasing productivity and facilitating mass customization. Variability modelling is a key activity in software product-line development that deals with the explicit representation of variability using dedicated models. Variability models specify points of variability and their variants in a product line. Although many variability-modelling notations and tools have been designed by researchers and practitioners, very little is known about their usage, actual benefits, or challenges. Existing studies mostly describe product-line practices in general, with little focus on variability modelling. We address this gap through a qualitative study on variability-modelling practices in medium- and large-scale companies using two empirical methods: surveys and interviews. We investigated companies' variability-modelling practices and experiences with the aim of gathering information on 1) the methods and strategies used to create and manage variability models, 2) the tools and notations used for variability modelling, 3) the perceived values and challenges of variability modelling, and 4) the core characteristics of their variability models. Our results show that variability models are often created by re-engineering existing products into a product line. All of the interviewees and the majority of survey participants indicated that they represent variability using separate variability models rather than annotative approaches. We found that developers use variability models for many purposes, such as the visualization of variabilities, configuration of products, and scoping of products. Although we observed a high degree of heterogeneity in the variability-modelling notations and tools used by organizations, feature-based notations and tools are the most common. We saw large differences in the sizes of variability models and their contents, which indicates that variability models can have different use cases depending on the organization. Most of our study participants reported complexity challenges related mainly to the visualization and evolution of variability models and to dependency management. In addition, reports from the interviews suggest that product-line adoption and variability modelling have forced developers to think in terms of a product-line scenario rather than a product-based scenario.

    Real-world variability, modelling and mitigation of road transport emissions

    Outdoor air pollution is considered the largest single environmental health risk and is estimated to cause 4.2 million deaths every year. Despite the major vehicle emissions reductions achieved over the past two decades, road transport remains a major source of air pollutants such as nitrogen oxides (NOx), contributing 39% of the total EU-28 NOx emissions in 2017. The regular exceedances of the annual mean concentration limit for NO2, particularly in urban areas, have been largely attributed to the discrepancies between type-approval limits and real-world driving emissions, as well as the fitting of defeat devices on diesel vehicles. In order to design effective air quality mitigation strategies, it is therefore crucial to improve our understanding of, and ability to model, real-world driving emissions. Based on the largest recorded Portable Emissions Measurement System (PEMS) dataset, which comprised 287 Euro 5 and Euro 6 diesel and petrol vehicles, this PhD thesis aims to fill these gaps by providing new emissions models based on an extensive dataset. It is demonstrated in this thesis that while physical parameters such as vehicle weight or engine size did not show any correlation with real-world NOx emissions, external parameters, particularly driving dynamicity, are directly correlated with real-world driving emissions. The effect of driving dynamicity on real-world emissions is shown to decrease with the successive regulations (Euro standards), indicating a general improvement of aftertreatment systems. The first model presented is an aggregated emission model, while the second is an instantaneous emission model; both directly account for driving dynamicity, although the way they do so differs significantly. Both models are reliable and accurate, with relative prediction errors smaller than 20%. Additionally, this PhD thesis intends to gain insights into the real-world impact of an air quality mitigation strategy on emissions and local air quality. Application of the developed models to the assessment of the impact of a traffic intervention on air quality demonstrated that although the chosen mitigation strategy had a locally measurable impact on emissions and air quality, this impact was small compared to the variations in pollutant concentrations induced by the meteorological conditions. The ease of use of both models, as well as their wide range of applicability, make them ideal operational tools for policy makers aiming to build emission inventories or evaluate emissions mitigation strategies.
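
    As a hypothetical illustration of how a dynamicity-aware, aggregated emission model of the kind discussed above could be exercised: the speed trace, the choice of relative positive acceleration (RPA) as the dynamicity metric, the model coefficients, and the reference value below are all invented for this sketch and are not the thesis's data or models.

    ```python
    # Toy aggregated emission model driven by a driving-dynamicity metric (RPA).
    # Everything here (trace, coefficients, reference value) is invented.
    import numpy as np

    dt = 1.0  # s, sampling interval of the speed trace
    speed = np.abs(np.sin(np.linspace(0, 10, 600))) * 20.0  # m/s, synthetic trip
    accel = np.gradient(speed, dt)                          # m/s^2

    distance = np.sum(speed) * dt                                   # m
    rpa = np.sum(speed * np.clip(accel, 0, None)) * dt / distance   # m/s^2

    # Toy trip-level model: NOx (g/km) as a linear function of RPA.
    a, b = 0.35, 2.1           # invented coefficients
    nox_pred = a + b * rpa     # g/km

    nox_measured = 0.50        # g/km, invented "PEMS" reference value
    rel_error = abs(nox_pred - nox_measured) / nox_measured
    print(f"RPA = {rpa:.3f} m/s^2, predicted NOx = {nox_pred:.2f} g/km, "
          f"relative error = {rel_error:.1%}")
    ```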

    Evaluation of Variability Concepts for Simulink in the Automotive Domain

    Modeling variability in Matlab/Simulink is becoming more and more important. We evaluated the two variability modeling concepts already included in Matlab/Simulink together with our own concept to find out which is best suited for modeling variability in the automotive domain. We conducted a controlled experiment with developers at Volkswagen AG to decide which concept is preferred by developers and whether their preference aligns with measurable performance factors. We found that all existing concepts are viable approaches and that the delta approach is both the concept preferred by developers and the objectively most efficient one, which makes Delta-Simulink a good solution for modeling variability in the automotive domain. Comment: 10 pages, 7 figures, 6 tables, Proceedings of 48th Hawaii International Conference on System Sciences (HICSS), pp. 5373-5382, Kauai, Hawaii, USA, IEEE Computer Society, 2015