Exploring the impact of advertising on brand equity and shareholder value
The primary objective of this research was to test whether advertising can
contribute directly to brand equity and indirectly to shareholder value and, if it can,
determine how much value advertising can deliver to brands and firms. If advertising can
play a key role in developing and maintaining brand equity and shareholder value, it
should be considered an investment rather than an expense.
Mainstream advertising effectiveness research has traditionally focused on the
relationship between advertising and market performance measures such as sales volume
and market share. Even though this approach has produced interesting findings on how
advertising works or should work, its contributions to our knowledge about the role of
advertising in a competitive, complicated, and ever-changing market environment have
been limited. The present research employed a conceptual framework by Srivastava and his
colleagues (1998) in order to address posited relationships between advertising, R&D,
brand equity, and shareholder value. Using secondary data from various industry and
academic sources during a ten-year time span, simple and multiple regression analyses
were performed in conjunction with path analyses to evaluate the posited relationships.
The findings of the research showed that advertising can work not only to
improve market performance measures but also to develop and maintain brands. R&D
was also found to positively affect brand equity by presumably enhancing a firm’s
intellectual market-based assets. With regard to the relative effectiveness of advertising
and R&D, expenditures on R&D were more effective than expenditures on advertising in
contributing to brand equity when measuring absolute effects of expenditures. When
measuring changes in brand equity, however, changes in advertising were more effective
than changes in R&D. Thus, R&D can be more important than advertising in contributing
to the total value of brand equity, but advertising can be more effective than R&D in
contributing to the marginal value of brand equity.
Membership generation using multilayer neural network
There has been intensive research in neural network applications to pattern recognition problems. In particular, the back-propagation network has attracted many researchers because of its outstanding performance in pattern recognition applications. In this section, we describe a new method to generate membership functions from training data using a multilayer neural network. The basic idea behind the approach is as follows. The output values of a sigmoid activation function of a neuron bear remarkable resemblance to membership values. Therefore, we can regard the sigmoid activation values as the membership values in fuzzy set theory. Thus, in order to generate class membership values, we first train a suitable multilayer network using a training algorithm such as the back-propagation algorithm. After the training procedure converges, the resulting network can be treated as a membership generation network, where the inputs are feature values and the outputs are membership values in the different classes. This method allows fairly complex membership functions to be generated because the network is highly nonlinear in general. Also, it is to be noted that the membership functions are generated from a classification point of view. For pattern recognition applications, this is highly desirable, although the membership values may not be indicative of the degree of typicality of a feature value in a particular class.
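A minimal sketch of the idea on a toy 1-D two-class problem, in NumPy only. The layer sizes, learning rate, squared-error loss, and number of epochs are illustrative choices, not taken from the paper; the point is only that, after back-propagation training converges, the sigmoid outputs can be read as fuzzy membership values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: class 0 for x < 0, class 1 for x > 0 (illustrative).
X = np.concatenate([rng.uniform(-2, -0.2, 20), rng.uniform(0.2, 2, 20)])[:, None]
T = np.zeros((40, 2))
T[:20, 0] = 1.0  # one-hot targets
T[20:, 1] = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer; sizes are arbitrary for this sketch.
W1 = rng.normal(0, 1, (1, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 2)); b2 = np.zeros(2)

lr = 0.5
for _ in range(5000):
    H = sigmoid(X @ W1 + b1)      # hidden activations
    Y = sigmoid(H @ W2 + b2)      # sigmoid outputs, read as memberships
    # Back-propagation for a squared-error loss.
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * (H.T @ dY) / len(X); b2 -= lr * dY.mean(0)
    W1 -= lr * (X.T @ dH) / len(X); b1 -= lr * dH.mean(0)

def membership(x):
    """Treat the trained net's sigmoid outputs as class membership values."""
    h = sigmoid(np.atleast_2d(x) @ W1 + b1)
    return sigmoid(h @ W2 + b2)
```

Note that the two outputs need not sum to one: they are fuzzy memberships shaped by the classification objective, not probabilities.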
Training-free Style Transfer Emerges from h-space in Diffusion models
Diffusion models (DMs) synthesize high-quality images in various domains.
However, controlling their generative process is still hazy because the
intermediate variables in the process are not rigorously studied. Recently,
StyleCLIP-like editing of DMs was found in the bottleneck of the U-Net, named
h-space. In this paper, we discover that DMs inherently have disentangled
representations for the content and style of the resulting images: h-space
contains the content and the skip connections convey the style. Furthermore, we
introduce a principled way to inject the content of one image into another,
considering the progressive nature of the generative process. Briefly, given the
original generative process, 1) the feature of the source content should be
gradually blended, 2) the blended feature should be normalized to preserve the
distribution, 3) the change of skip connections due to content injection should
be calibrated. Then, the resulting image has the source content with the style
of the original image just like image-to-image translation. Interestingly,
injecting content into the styles of unseen domains produces harmonization-like
style transfer. To the best of our knowledge, our method is the first
training-free feed-forward style transfer using only an unconditional pretrained
frozen generative network. The code is available at
https://curryjung.github.io/DiffStyle/
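The three steps above can be sketched as small NumPy helpers. This is only an illustration of the stated recipe, not the authors' implementation: the linear blending schedule, the mean/std matching used for normalization, and the norm-ratio rescaling of skip connections are all assumptions made for this sketch.

```python
import numpy as np

def blend_content(h_orig, h_src, t, t_start, t_end):
    """Step 1: gradually blend the source-content feature into h-space.
    A linear ramp over timesteps (assumed schedule): no injection at
    t_start, full injection at t_end."""
    alpha = np.clip((t_start - t) / (t_start - t_end), 0.0, 1.0)
    return (1 - alpha) * h_orig + alpha * h_src

def normalize_like(h, h_ref):
    """Step 2: normalize the blended feature to preserve the original
    h-space distribution (simple mean/std matching, assumed here)."""
    return (h - h.mean()) / (h.std() + 1e-8) * h_ref.std() + h_ref.mean()

def calibrate_skips(skips, h_before, h_after):
    """Step 3: compensate the skip-connection features for the magnitude
    change caused by content injection (a norm-ratio heuristic, assumed
    for illustration)."""
    scale = np.linalg.norm(h_before) / (np.linalg.norm(h_after) + 1e-8)
    return [s * scale for s in skips]
```

In a real U-Net these helpers would act on the bottleneck tensor and the skip tensors at each denoising step; here they only make the structure of the recipe concrete.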
Diffusion Models already have a Semantic Latent Space
Diffusion models achieve outstanding generative performance in various
domains. Despite their great success, they lack a semantic latent space, which is
essential for controlling the generative process. To address the problem, we
propose asymmetric reverse process (Asyrp) which discovers the semantic latent
space in frozen pretrained diffusion models. Our semantic latent space, named
h-space, has nice properties for accommodating semantic image manipulation:
homogeneity, linearity, robustness, and consistency across timesteps. In
addition, we introduce a principled design of the generative process for
versatile editing and quality boosting by quantifiable measures: editing
strength of an interval and quality deficiency at a timestep. Our method is
applicable to various architectures (DDPM++, iDDPM, and ADM) and datasets
(CelebA-HQ, AFHQ-dog, LSUN-church, LSUN-bedroom, and METFACES). Project page:
https://kwonminki.github.io/Asyrp/
Comment: ICLR2023 (Notable - Top 25%)
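The core asymmetry can be sketched on top of the standard DDIM decomposition, where each reverse step splits into a predicted-x0 term P_t and a direction term D_t. In this sketch (not the authors' code) the P_t term uses the noise estimate from the h-space-shifted forward pass while D_t keeps the unmodified estimate; the noise estimates themselves are treated as given inputs, standing in for the frozen network's outputs:

```python
import numpy as np

def ddim_terms(x_t, eps, a_t, a_prev):
    """DDIM decomposition: P predicts x0 from the noise estimate,
    D is the direction pointing back toward x_t."""
    P = (x_t - np.sqrt(1 - a_t) * eps) / np.sqrt(a_t)
    D = np.sqrt(1 - a_prev) * eps
    return P, D

def asyrp_step(x_t, eps_plain, eps_shifted, a_t, a_prev):
    """Asymmetric reverse step: the x0-prediction term uses the noise
    estimate from the h-space-shifted pass (eps_shifted), while the
    direction term keeps the unmodified estimate (eps_plain). With
    eps_shifted == eps_plain this reduces to a plain DDIM step."""
    P_mod, _ = ddim_terms(x_t, eps_shifted, a_t, a_prev)
    _, D = ddim_terms(x_t, eps_plain, a_t, a_prev)
    return np.sqrt(a_prev) * P_mod + D
```

Keeping D_t unmodified is what lets the edit steer the predicted content without re-noising the trajectory, which is the sense in which the process is "asymmetric".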