Barbauld's Richardson and the canonisation of personal character
In The Correspondence of Samuel Richardson (1804), Anna Letitia Barbauld set out to assure readers that the novelist’s personal character (as displayed in his letters) corresponded to his authorial moral character (as inferred through his novels) in order to present him as an appropriate father of the modern British novel – a process I call the “canonisation of personal character.” To that end, Barbauld’s editorial work presented Richardson as a benevolent patriarchal figure whose moral authority over the domestic life of his extended family guaranteed the morality of his novels and of his personal character. As my case study of Richardson’s correspondence with Sarah Wescomb shows, Barbauld’s interventions accordingly muted challenges to Richardson’s authority on questions of paternal control or filial obedience. Life writing, textual criticism, and literary history were therefore so intimately intertwined in Barbauld’s treatment of Richardson and his writings that they mutually constituted and sustained each other. Her contributions to the elevation and institution of novels as a national literary genre – in the Correspondence as well as in her later prefaces to The British Novelists (1810) – should accordingly be read in conjunction with her biographical elevation and canonisation of Richardson as the first properly moral, modern novelist.
Generating Multi-Categorical Samples with Generative Adversarial Networks
We propose a method to train generative adversarial networks on multivariate feature vectors representing multiple categorical values. In contrast to the continuous domain, where GAN-based methods have delivered considerable results, GANs struggle to perform equally well on discrete data. We propose and compare several architectures based on multiple (Gumbel) softmax output layers that take the structure of the data into account. We evaluate the performance of our architectures on datasets with different sparsity, number of features, ranges of categorical values, and dependencies among the features. Our proposed architecture and method outperform existing models.
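A minimal sketch of the multi-head output idea, assuming PyTorch and invented feature sizes (an illustration, not the authors' architecture): the generator body is shared, and each categorical variable gets its own linear head followed by a Gumbel-softmax sample, so gradients flow through the discrete outputs.

```python
# Illustrative sketch only: network widths, feature sizes, and the
# single-hidden-layer body are assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiCategoricalGenerator(nn.Module):
    def __init__(self, noise_dim, category_sizes, hidden_dim=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(noise_dim, hidden_dim), nn.ReLU())
        # One output head per categorical variable.
        self.heads = nn.ModuleList(nn.Linear(hidden_dim, k) for k in category_sizes)

    def forward(self, z, tau=0.5):
        h = self.body(z)
        # Gumbel-softmax gives a differentiable, near-one-hot sample per
        # feature; the per-feature samples are concatenated into one vector.
        return torch.cat([F.gumbel_softmax(head(h), tau=tau) for head in self.heads], dim=-1)

gen = MultiCategoricalGenerator(noise_dim=32, category_sizes=[3, 5, 2])
fake = gen(torch.randn(16, 32))  # shape: (16, 3 + 5 + 2)
```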
Developing and Validating a Quality Assessment Scale for Web Portals
The Web portals business model has spread rapidly over the last few years. Despite this, there have been very few scholarly findings about which services and characteristics make a Web site a portal and which dimensions determine the customers’ evaluation of the portal’s quality. Taking the example of financial portals, the authors develop a theoretical framework of the Web portal quality construct by determining the number and nature of its corresponding dimensions: security and trust, basic services quality, cross-buying services quality, added values, transaction support, and relationship quality. To measure the six portal quality dimensions, multi-item measurement scales are developed and validated.
Keywords: Construct Validation, Customer Retention, E-Banking, E-Loyalty, Service Quality, Web Portals
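One standard step in validating such multi-item scales is an internal-consistency check; the sketch below computes Cronbach's alpha on simulated Likert ratings (the data and the four-item dimension are assumptions for illustration, not the paper's survey).

```python
# Hypothetical example: internal consistency of one multi-item dimension.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of Likert ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.integers(1, 8, size=(200, 1))   # latent attitude per respondent
ratings = np.clip(latent + rng.integers(-1, 2, size=(200, 4)), 1, 7)
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # high, since items share a latent attitude
```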
Human in the Loop: Interactive Passive Automata Learning via Evidence-Driven State-Merging Algorithms
We present an interactive version of an evidence-driven state-merging (EDSM) algorithm for learning variants of finite state automata. Learning these automata often amounts to recovering or reverse engineering the model generating the data despite noisy, incomplete, or imperfectly sampled data sources, rather than optimizing a purely numeric target function. Domain expertise and human knowledge about the target domain can guide this process, and is typically captured in parameter settings. Often, domain expertise is subconscious and not expressed explicitly. Directly interacting with the learning algorithm makes it easier to utilize this knowledge effectively.
Comment: 4 pages, presented at the Human in the Loop workshop at ICML 201
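To make the evidence-driven scoring concrete, here is a toy sketch (not the paper's implementation) of the EDSM idea: a candidate merge of two prefix-tree states scores one point per pair of agreeing accept/reject labels and is rejected outright on a conflict. An interactive version would let the user accept or veto the highest-scoring merges instead of relying on parameter settings alone.

```python
# Toy illustration of EDSM merge scoring; state names and the tree
# encoding are invented for this sketch.
def merge_score(tree, s1, s2):
    """tree: state -> (label, {symbol: child}); label is +1, -1, or None."""
    label1, kids1 = tree[s1]
    label2, kids2 = tree[s2]
    if label1 is not None and label2 is not None:
        if label1 != label2:
            return float("-inf")   # conflicting evidence: merge is invalid
        score = 1                  # two labeled states that agree
    else:
        score = 0
    # Folding s2 into s1 also merges children reached by the same symbol.
    for symbol, child2 in kids2.items():
        if symbol in kids1:
            sub = merge_score(tree, kids1[symbol], child2)
            if sub == float("-inf"):
                return float("-inf")
            score += sub
    return score

tree = {
    "q0": (None, {"a": "q1", "b": "q2"}),
    "q1": (+1, {"a": "q3"}),
    "q2": (+1, {"a": "q4"}),
    "q3": (+1, {}),
    "q4": (+1, {}),
}
print(merge_score(tree, "q1", "q2"))  # 2: both state pairs agree
```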
Improving Missing Data Imputation with Deep Generative Models
Datasets with missing values are very common in industry applications, and they can have a negative impact on machine learning models. Recent studies introduced solutions to the problem of imputing missing values based on deep generative models. Previous experiments with Generative Adversarial Networks and Variational Autoencoders showed interesting results in this domain, but it is not clear which method is preferable for different use cases. The goal of this work is twofold: we present a comparison between missing data imputation solutions based on deep generative models, and we propose improvements over those methodologies. We run our experiments using well-known real-life datasets with different characteristics, removing values at random and reconstructing them with several imputation techniques. Our results show that the presence or absence of categorical variables can alter the selection of the best model, and that some models are more stable than others across similar runs with different random number generator seeds.
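The evaluation protocol described above can be sketched in a few lines; here a simple mean imputer stands in for the deep generative models, and the dataset is synthetic (both are assumptions for illustration).

```python
# Sketch: hide entries completely at random, impute, score on the hidden cells.
import numpy as np
from sklearn.impute import SimpleImputer

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 10))            # toy complete dataset
mask = rng.random(X.shape) < 0.2          # remove ~20% of values at random
X_missing = X.copy()
X_missing[mask] = np.nan

imputer = SimpleImputer(strategy="mean")  # baseline stand-in for GAN/VAE imputers
X_hat = imputer.fit_transform(X_missing)

rmse = np.sqrt(np.mean((X_hat[mask] - X[mask]) ** 2))
print(f"RMSE on held-out entries: {rmse:.3f}")
```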
Using Gaussian process regression for efficient parameter reconstruction
Optical scatterometry is a method to measure the size and shape of periodic micro- or nanostructures on surfaces. For this purpose, the geometry parameters of the structures are obtained by reproducing experimental measurement results through numerical simulations. We compare the performance of Bayesian optimization to different local minimization algorithms for this numerical optimization problem. Bayesian optimization uses Gaussian-process regression to find promising parameter values. We examine how pre-computed simulation results can be used to train the Gaussian process and to accelerate the optimization.
Comment: 8 pages, 4 figures
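A minimal sketch of the approach, with an invented one-dimensional stand-in for the simulation (the kernel choice, acquisition function, and toy objective are assumptions, not the paper's setup): pre-computed results seed the Gaussian process, and expected improvement selects each next simulation.

```python
# Illustrative Bayesian optimization loop; not the authors' solver.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                          # stand-in for the simulation error
    return np.sin(3 * x) + 0.1 * x**2

X = np.array([[0.5], [2.0], [4.0]])        # "pre-computed" simulation results
y = objective(X).ravel()
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

candidates = np.linspace(0, 5, 200).reshape(-1, 1)
for _ in range(10):
    gp.fit(X, y)                           # retrain the surrogate on all data
    mu, sigma = gp.predict(candidates, return_std=True)
    z = (y.min() - mu) / np.maximum(sigma, 1e-9)
    ei = (y.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = candidates[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print(f"best parameter: {X[np.argmin(y), 0]:.3f}, error: {y.min():.3f}")
```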
Analyzing Product Efficiency – A Customer-Oriented Approach
The purpose of this study is to provide a broader, economic perspective on customer value management. By developing an efficiency-based concept of customer value, we aim to contribute to the presently underrepresented research field of marketing economics. The customer value concept is utilized to assess product performance and eventually to determine the competitive market structure and the product-market boundaries. Our analytical approach to product-market structuring based on customer value is developed within a microeconomic framework. We measure customer value as the product efficiency viewed from the customer’s perspective, i.e., as a ratio of outputs (e.g., resale value, reliability, safety, comfort) that customers obtain from a product relative to inputs (price, running costs) that customers have to deliver in exchange. The efficiency value derived can be understood as the return on the customer’s investment. Products offering a maximum customer value relative to all other alternatives in the market are characterized as efficient. Different efficient products may create value in different ways using different strategies (output-input combinations). Each efficient product can be viewed as a benchmark for a distinct sub-market. Jointly, these products form the efficient frontier, which serves as a reference function for the inefficient products. Thus, we define customer value of alternative products as a relative concept. Market partitioning is achieved endogenously by clustering products into one segment that are benchmarked by the same efficient peer(s). This ensures that only products with a similar output-input structure are partitioned into the same sub-market. As a result, a sub-market consists of highly substitutable products. In addition, value-creating strategies (i.e., indications of how to vary inputs and outputs) to improve product performance in order to offer maximum customer value are provided. The impact of each performance parameter on customer value is determined, identifying the value drivers among them. This methodological framework is applied to data from the 1996 German Automobile Club (ADAC) survey.
Keywords: Customer Value, Data Envelopment Analysis (DEA), Efficiency Analysis, Market Partitioning, Product-Market Structuring
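The output/input efficiency ratio described above is what Data Envelopment Analysis computes; a hedged sketch of the standard CCR multiplier model follows, with invented car data standing in for the ADAC survey.

```python
# Sketch of a CCR DEA efficiency score per product; the data are invented.
import numpy as np
from scipy.optimize import linprog

# Rows are products; outputs (e.g., resale value, reliability) and
# inputs (e.g., price, running costs) are hypothetical.
Y = np.array([[8.0, 7.0], [6.0, 9.0], [5.0, 5.0]])
X = np.array([[30.0, 2.0], [25.0, 3.0], [28.0, 2.5]])

def ccr_efficiency(o):
    n_out, n_in = Y.shape[1], X.shape[1]
    c = np.concatenate([-Y[o], np.zeros(n_in)])            # maximize u . y_o
    A_ub = np.hstack([Y, -X])                              # u.y_j - v.x_j <= 0 for all j
    A_eq = np.concatenate([np.zeros(n_out), X[o]])[None]   # v . x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(Y)),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun                                        # efficiency in (0, 1]

for o in range(len(Y)):
    print(f"product {o}: efficiency = {ccr_efficiency(o):.3f}")
```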
A Super Efficiency Model for Product Evaluation
This study applies a Super Efficiency Data Envelopment Analysis model to evaluate the efficiency of cars sold on the German market. Efficiency is conceptualized from the customers’ perspective as a ratio of outputs that customers obtain from a product relative to inputs that customers have to invest. The output side is modeled as a set of customer-relevant parameters such as performance attributes, but also non-functional benefits and brand strength. More than 60% of the cars are efficient, but the analysis shows marked differences regarding their degree of Super Efficiency. Super Efficiency indicates the extent to which efficient products exceed the efficient frontier formed by the other efficient units. Based on the parameter weights, segments of cars with a particular mix of characteristics can be identified; cars with a comparative advantage relative to competitors that provide the same mix are characterized as the reference points within a given segment.
Keywords: Customer Value, Data Envelopment Analysis (DEA), Marketing Efficiency, Product Marketing, Super Efficiency Model
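Super Efficiency can be sketched as a one-line change to the standard DEA program above: the evaluated product is dropped from the comparison set, so efficient products can score above 1. The data below are again hypothetical.

```python
# Super-efficiency variant of the CCR sketch; invented data as before.
import numpy as np
from scipy.optimize import linprog

Y = np.array([[8.0, 7.0], [6.0, 9.0], [5.0, 5.0]])     # outputs per car
X = np.array([[30.0, 2.0], [25.0, 3.0], [28.0, 2.5]])  # inputs per car

def super_efficiency(o):
    others = [j for j in range(len(Y)) if j != o]      # exclude the evaluated car
    c = np.concatenate([-Y[o], np.zeros(X.shape[1])])  # maximize u . y_o
    A_ub = np.hstack([Y[others], -X[others]])          # constraints only for peers
    A_eq = np.concatenate([np.zeros(Y.shape[1]), X[o]])[None]  # v . x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(others)),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun                                    # > 1 signals super efficiency

print([round(super_efficiency(o), 3) for o in range(len(Y))])
```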
