
    Barbauld's Richardson and the canonisation of personal character

    In The Correspondence of Samuel Richardson (1804), Anna Letitia Barbauld set out to assure readers that the novelist’s personal character (as displayed in his letters) corresponded to his authorial moral character (as inferred from his novels) in order to present him as an appropriate father of the modern British novel – a process I call the “canonisation of private character.” To that end, Barbauld’s editorial work presented Richardson as a benevolent patriarchal figure whose moral authority over the domestic life of his extended family guaranteed the morality of his novels and of his personal character. As my case study of Richardson’s correspondence with Sarah Wescomb shows, Barbauld’s interventions accordingly muted challenges to Richardson’s authority on questions of paternal control or filial obedience. Life writing, textual criticism, and literary history were therefore so intimately intertwined in Barbauld’s treatment of Richardson and his writings that they mutually constituted and sustained each other. Her contributions to the elevation and institution of novels as a national literary genre – in the Correspondence as well as in her later prefaces to The British Novelists (1810) – accordingly should be read in conjunction with her biographical elevation and canonisation of Richardson as the first properly moral, modern novelist.

    Using Gaussian process regression for efficient parameter reconstruction

    Optical scatterometry is a method to measure the size and shape of periodic micro- or nanostructures on surfaces. For this purpose, the geometry parameters of the structures are obtained by reproducing experimental measurement results through numerical simulations. We compare the performance of Bayesian optimization to different local minimization algorithms for this numerical optimization problem. Bayesian optimization uses Gaussian-process regression to find promising parameter values. We examine how pre-computed simulation results can be used to train the Gaussian process and to accelerate the optimization.
    Comment: 8 pages, 4 figures
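    The surrogate loop the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' code: the one-dimensional quadratic objective, the kernel length scale, and the lower-confidence-bound acquisition rule are assumptions standing in for the scatterometry misfit and the actual acquisition function.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.3):
    # Squared-exponential kernel between two sets of 1-D points.
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    # Standard Gaussian-process regression posterior mean and variance.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_query)
    Kss = rbf_kernel(x_query, x_query)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mean, np.maximum(var, 0.0)

# Hypothetical 1-D objective standing in for the scatterometry misfit.
f = lambda x: (x - 0.6) ** 2

# "Pre-computed simulations" used to seed the surrogate model.
x_train = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y_train = f(x_train)

# One Bayesian-optimization step: evaluate next the point with the
# lowest lower-confidence bound of the surrogate.
x_grid = np.linspace(0.0, 1.0, 201)
mean, var = gp_posterior(x_train, y_train, x_grid)
x_next = x_grid[np.argmin(mean - 2.0 * np.sqrt(var))]
```

    The selected `x_next` lands near the true minimum at 0.6; in the paper's setting each such evaluation would be one expensive scatterometry simulation.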

    A Super Efficiency Model for Product Evaluation

    This study applies a Super Efficiency Data Envelopment Analysis model to evaluate the efficiency of cars sold on the German market. Efficiency is conceptualized from the customer’s perspective as a ratio of outputs that customers obtain from a product relative to inputs that customers have to invest. The output side is modeled as a set of customer-relevant parameters such as performance attributes but also nonfunctional benefits and brand strength. More than 60% of the cars are efficient, but the analysis shows marked differences in their degree of Super Efficiency. Super Efficiency indicates the extent to which the efficient products exceed the efficient frontier formed by the other efficient units. Based on the parameter weights, segments of cars with a particular mix of characteristics can be identified; cars with a comparative advantage relative to competitors that provide the same mix are characterized as the reference points within a given segment.
    Keywords: Customer Value, Data Envelopment Analysis (DEA), Marketing Efficiency, Product Marketing, Super Efficiency Model
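    As a hedged illustration of the super-efficiency idea (not the study's actual model or data): with a single input and a single output, DEA efficiency reduces to a productivity ratio against the best unit, and the Andersen-Petersen super-efficiency score simply excludes the evaluated unit from its own reference set, so efficient units can score above 1. The car figures below are invented.

```python
import numpy as np

# Hypothetical data: one input (price) and one output (performance score)
# for five cars. Real DEA handles many inputs and outputs via linear
# programming; the single-input/single-output case reduces to ratios.
price = np.array([20.0, 25.0, 30.0, 22.0, 40.0])    # input
score = np.array([80.0, 110.0, 120.0, 70.0, 150.0])  # output

ratio = score / price  # output per unit of input

# Standard efficiency: ratio relative to the best-practice frontier.
efficiency = ratio / ratio.max()

# Super-efficiency: each unit is compared against the frontier formed
# by the *other* units only.
super_eff = np.empty_like(ratio)
for k in range(len(ratio)):
    others = np.delete(ratio, k)
    super_eff[k] = ratio[k] / others.max()
```

    Here the second car is the only efficient unit and its super-efficiency score exceeds 1, quantifying by how much it dominates the frontier formed by its competitors.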

    Generating Multi-Categorical Samples with Generative Adversarial Networks

    We propose a method to train generative adversarial networks on multivariate feature vectors representing multiple categorical values. In contrast to the continuous domain, where GAN-based methods have delivered considerable results, GANs struggle to perform equally well on discrete data. We propose and compare several architectures based on multiple (Gumbel) softmax output layers that take the structure of the data into account. We evaluate the performance of our architectures on datasets with different sparsity, numbers of features, ranges of categorical values, and dependencies among the features. Our proposed architecture and method outperform existing models.
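    A minimal sketch of the (Gumbel) softmax output layer the abstract refers to, written in plain NumPy rather than a deep-learning framework; the two hypothetical categorical features, their logits, and the temperature value are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    # Add Gumbel noise and apply a temperature-controlled softmax,
    # giving a differentiable relaxation of a one-hot categorical sample.
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / tau
    y = y - y.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

# A record with two categorical features: colour (3 values) and size
# (2 values). A multi-categorical generator emits one softmax block per
# feature and concatenates them, respecting the structure of the data.
colour_logits = np.array([1.0, 0.2, -0.5])
size_logits = np.array([0.3, 0.1])
sample = np.concatenate([gumbel_softmax(colour_logits),
                         gumbel_softmax(size_logits)])
```

    Each block of `sample` sums to 1 and approaches a one-hot vector as `tau` decreases, which is what lets the discriminator receive (almost) discrete records while gradients still flow to the generator.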

    Benchmarking the Health Sector in Germany – An Application of Data Envelopment Analysis

    At present, a first round of hospital benchmarking, as required by the German law on health care reform, is taking place. After extensive discussions between hospitals and insurance companies, which are jointly responsible for delivering benchmarking results, a method with some peculiar characteristics was chosen. In this paper it is argued that the deficiencies of said method could be overcome by using Data Envelopment Analysis (DEA). The reasons that make DEA an advisable tool for policy decisions within the context of relative performance evaluation in the health care sector are discussed. In order to illustrate the potential of nonparametric frontier estimation for hospital benchmarking in Germany, a comparison of hospitals which provide the same basic clinical care is carried out. Controlling for differences in the case mix and for possible heterogeneity of the services which hospitals provide, substantial productivity differences can be detected. Beyond simply identifying inefficient providers, DEA leads to additional insight into the reasons for inefficiency and to useful management implications.
    Keywords: health care reform, benchmarking, relative performance evaluation, Data Envelopment Analysis
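    To illustrate the kind of case-mix-controlled comparison described above (a deliberately simplified sketch, not the paper's model): with one input and one case-mix-adjusted output, the DEA frontier comparison again collapses to a ratio against the best-practice hospital. All hospital figures below are invented.

```python
import numpy as np

# Hypothetical hospital data: beds as the single input, treated cases as
# the output, and a case-mix index (average case severity) used to make
# outputs comparable across hospitals.
beds = np.array([200.0, 350.0, 150.0, 500.0])
cases = np.array([8000.0, 12000.0, 7000.0, 15000.0])
case_mix = np.array([1.0, 1.2, 0.9, 1.4])

# Case-mix adjustment: severe cases count for more output.
adj_cases = cases * case_mix

# With one input and one adjusted output, DEA efficiency reduces to a
# productivity ratio against the best-practice hospital.
productivity = adj_cases / beds
efficiency = productivity / productivity.max()
inefficient = np.where(efficiency < 1.0)[0]
```

    A full DEA model would solve one linear program per hospital over several inputs and outputs; this ratio version only conveys the frontier idea and how case-mix control changes the ranking.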

    DEVELOPING AND VALIDATING A QUALITY ASSESSMENT SCALE FOR WEB PORTALS

    The Web portals business model has spread rapidly over the last few years. Despite this, there have been very few scholarly findings about which services and characteristics make a Web site a portal and which dimensions determine the customers’ evaluation of the portal’s quality. Taking the example of financial portals, the authors develop a theoretical framework of the Web portal quality construct by determining the number and nature of its dimensions, which are: security and trust, basic services quality, cross-buying services quality, added values, transaction support, and relationship quality. To measure the six portal quality dimensions, multi-item measurement scales are developed and validated.
    Keywords: Construct Validation, Customer Retention, E-Banking, E-Loyalty, Service Quality, Web Portals
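    Validating multi-item scales of the kind described above typically includes an internal-consistency check. The snippet below computes Cronbach's alpha for a hypothetical three-item scale; the formula is standard, but the ratings are invented and the paper's actual validation procedure may differ.

```python
import numpy as np

def cronbach_alpha(items):
    # items: (respondents, items) matrix of ratings for one scale.
    # Alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point ratings from six respondents on a three-item
# "security and trust" scale.
ratings = np.array([
    [5, 4, 5],
    [4, 4, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
    [1, 2, 2],
], dtype=float)

alpha = cronbach_alpha(ratings)
```

    Values above roughly 0.7 are conventionally read as acceptable internal consistency; here the three items move together closely, so alpha is high.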

    Human in the Loop: Interactive Passive Automata Learning via Evidence-Driven State-Merging Algorithms

    We present an interactive version of an evidence-driven state-merging (EDSM) algorithm for learning variants of finite state automata. Learning these automata often amounts to recovering or reverse engineering the model generating the data, despite noisy, incomplete, or imperfectly sampled data sources, rather than optimizing a purely numeric target function. Domain expertise and human knowledge about the target domain can guide this process, and is typically captured in parameter settings. Often, domain expertise is subconscious and not expressed explicitly. Directly interacting with the learning algorithm makes it easier to utilize this knowledge effectively.
    Comment: 4 pages, presented at the Human in the Loop workshop at ICML 201
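    A minimal sketch of the evidence score at the heart of EDSM, assuming a simplified prefix-tree-acceptor representation (the classes and sample strings below are illustrative, not the authors' implementation): two states may merge only if the accept/reject labels in their subtrees never conflict, and the evidence for a merge is the number of label pairs that agree.

```python
class Node:
    """One state of a prefix tree acceptor (simplified)."""
    def __init__(self, label=None):
        self.label = label     # True/False for accept/reject, None if unknown
        self.children = {}     # symbol -> Node

def build_pta(samples):
    # samples: list of (string, accepted) pairs from the target language.
    root = Node()
    for word, accepted in samples:
        node = root
        for symbol in word:
            node = node.children.setdefault(symbol, Node())
        node.label = accepted
    return root

def merge_score(a, b):
    # Evidence for merging states a and b: count agreeing label pairs in
    # the overlapping subtrees; return -1 if any labels conflict.
    score = 0
    if a.label is not None and b.label is not None:
        if a.label != b.label:
            return -1
        score += 1
    for symbol in set(a.children) & set(b.children):
        sub = merge_score(a.children[symbol], b.children[symbol])
        if sub < 0:
            return -1
        score += sub
    return score

samples = [("aa", True), ("ab", False), ("ba", True), ("bb", False)]
root = build_pta(samples)
# The states reached by "a" and "b" behave identically on this sample,
# so merging them is supported by two pieces of evidence.
evidence = merge_score(root.children["a"], root.children["b"])
```

    An interactive variant would surface candidate merges like this one to a human, who can veto or confirm them instead of encoding their domain knowledge indirectly through parameter settings.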

    Improving Missing Data Imputation with Deep Generative Models

    Datasets with missing values are very common in industry applications, and they can have a negative impact on machine learning models. Recent studies introduced solutions to the problem of imputing missing values based on deep generative models. Previous experiments with Generative Adversarial Networks and Variational Autoencoders showed interesting results in this domain, but it is not clear which method is preferable for different use cases. The goal of this work is twofold: we present a comparison between missing data imputation solutions based on deep generative models, and we propose improvements over those methodologies. We run our experiments using well-known real-life datasets with different characteristics, removing values at random and reconstructing them with several imputation techniques. Our results show that the presence or absence of categorical variables can alter the selection of the best model, and that some models are more stable than others across runs with different random number generator seeds.
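    The evaluation protocol described above (remove values completely at random, impute, and score reconstruction error on the masked entries only) can be sketched with a trivial mean-imputation baseline standing in for the deep generative models; every dataset value and parameter below is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical complete dataset: 200 rows, 3 numeric features, two of
# which are strongly correlated.
n = 200
base = rng.normal(size=n)
data = np.column_stack([base,
                        base + 0.1 * rng.normal(size=n),
                        rng.normal(size=n)])

# Remove 20% of the values completely at random, remembering the ground
# truth so that reconstruction error can be measured afterwards.
mask = rng.uniform(size=data.shape) < 0.2
corrupted = data.copy()
corrupted[mask] = np.nan

# Baseline imputer: per-column mean. A GAN- or VAE-based imputer would
# replace this single step in the same protocol.
col_mean = np.nanmean(corrupted, axis=0)
imputed = np.where(np.isnan(corrupted), col_mean, corrupted)

# RMSE computed on the held-out (masked) entries only.
rmse = np.sqrt(np.mean((imputed[mask] - data[mask]) ** 2))
```

    Repeating this loop over several seeds, as the abstract describes, is what exposes how stable each imputation model is under resampling.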