Consumers’ Purchasing Intentions for Vegetable Oil in the Presence of Generic or Specific Information on Genetic Modification
Alongside the emergence of genetically modified (GM) food, there have been urgent calls for GM labeling that provides relevant information disclosure. Using data collected in Beijing, China, this study addresses whether different types of information have distinct impacts on consumers' stated purchasing decisions. Three types of information are used: one is generic, and the other two are linked to two important implications of GM technology, namely human health and the environment. Results verify that consumers' purchasing decisions are affected by the different types of information through their attitudes and personal characteristics. This finding has potential implications for designing GM marketing strategies and information campaigns.
Keywords: China, genetically modified (GM) food, labeling information, probit model, vegetable oil, Agribusiness, Food Consumption/Nutrition/Food Safety
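The study's probit specification can be illustrated with a short sketch. The variable names below (buy, info_health, info_env, attitude, female) and the simulated data are hypothetical stand-ins, not the paper's actual survey variables; this is only a minimal sketch of how such a binary-choice model is typically estimated with statsmodels.

```python
# Illustrative sketch only: a probit model of stated purchase intention.
# All variable names and data are hypothetical, not the paper's survey data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "info_health": rng.integers(0, 2, n),  # shown health-related GM information
    "info_env": rng.integers(0, 2, n),     # shown environment-related GM information
    "attitude": rng.normal(0, 1, n),       # general attitude toward GM food
    "female": rng.integers(0, 2, n),
})
# Simulated latent utility; in the study this would come from survey responses.
latent = (0.4 * df["attitude"] - 0.3 * df["info_health"]
          - 0.2 * df["info_env"] + rng.normal(0, 1, n))
df["buy"] = (latent > 0).astype(int)

X = sm.add_constant(df[["info_health", "info_env", "attitude", "female"]])
probit = sm.Probit(df["buy"], X).fit(disp=False)
print(probit.summary())
print(probit.get_margeff().summary())  # average marginal effects
```

Marginal effects (the `get_margeff` call) are usually what such studies report, since raw probit coefficients are not directly interpretable as probability changes.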
Demystifying deep network architectures: from theory to applications
Deep neural networks power much of the success of machine learning and artificial intelligence. Over the past decade, the community has kept designing architectures with deeper layers and more complicated connections, and many works in deep learning theory have tried to understand deep networks from different perspectives, contributing concrete analyses. However, the gap between deep learning theory and application has grown increasingly large. Because our understanding of deep networks is only partial, current deep learning theory is not enough to guide the design of practical neural architectures. Two obstacles are mainly responsible: 1) designing network architectures is very expensive, and 2) practical network architectures are much more complicated than what has been studied in theory. The core question, therefore, is: how do we bridge these gaps between deep learning theory and practical neural architecture design? This dissertation centers on that challenge and tries to bridge the two worlds. First, current deep learning theory can inspire architecture design (Chapters 3 and 4). We propose three theory-inspired indicators that correlate strongly with network performance and can be measured at initialization without any gradient-descent cost. Based on these metrics, we propose a training-free neural architecture design algorithm with extremely low computation and time costs. Second, architecture design can in turn inspire deep learning theory (Chapters 5 and 6). By introducing two principled directions of a network's graph topology, we jointly analyze the impact of the architecture on convergence, expressivity, and generalization, and demonstrate a "no free lunch" behavior in ReLU networks. Finally, we discuss a practical industrial use case (Chapter 7), in which we design and scale up vision foundation models, again without any training cost.
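As a concrete illustration of scoring architectures at initialization without any training, the sketch below computes a NASWOT-style activation-pattern score (Mellor et al.), a well-known zero-cost proxy from the literature. It is not necessarily one of the three indicators proposed in the dissertation, but it exemplifies the same training-free idea: a single forward pass at initialization, no gradient descent.

```python
# A minimal sketch of scoring a network at initialization without training,
# in the spirit of training-free architecture search. This is a NASWOT-style
# score (Mellor et al.), NOT necessarily the dissertation's exact indicators.
import torch
import torch.nn as nn

def naswot_score(model: nn.Module, x: torch.Tensor) -> float:
    """Log-determinant of the kernel of binary ReLU activation patterns."""
    codes = []

    def hook(_, __, out):
        codes.append((out > 0).flatten(1).float())  # record on/off pattern

    handles = [m.register_forward_hook(hook) for m in model.modules()
               if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        model(x)
    for h in handles:
        h.remove()

    c = torch.cat(codes, dim=1)              # (batch, total ReLU units)
    k = c @ c.t() + (1 - c) @ (1 - c.t())    # pattern-agreement kernel
    return torch.logdet(k + 1e-4 * torch.eye(len(x))).item()  # jitter for stability

net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64),
                    nn.ReLU(), nn.Linear(64, 10))
print(naswot_score(net, torch.randn(16, 32)))  # higher tends to predict better nets
```

Scores like this make ranking thousands of candidate architectures nearly free, since each network needs only one untrained forward pass on a mini-batch.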
Influence of Source Credibility on Consumer Acceptance of Genetically Modified Foods in China
This paper examines the reasoning mechanism behind consumer acceptance of genetically modified foods (GMFs) in China and investigates the influence of source credibility on that acceptance. Based on the original Persuasion Model, developed by Carl Hovland, an American psychologist and pioneer in the study of communication and its effect on attitudes and beliefs, we conducted a survey of 1167 urban residents selected by multistage proportional sampling from six cities in three economic regions (south, central, and north) of Jiangsu province, using face-to-face interviews. A mixed-process regression, which can correct for endogeneity, and an ordered probit model were used to test the impact of source credibility on consumers' acceptance of GMFs. Our major finding is that consumer acceptance of GMFs is affected by factors such as information-source credibility, general attitudes, gender, and education level. Trust in biotechnology research institutes, in government offices devoted to the management of GM organisms (GMOs), and in GMO technology experts increased urban consumers' acceptance of GM soybean oil; however, public acceptance can also decrease as faith in environmental organizations grows. We also found that ignoring the endogeneity of the above-mentioned sources significantly undervalues their effect on consumers' acceptance. Moreover, the remaining three sources (non-GMO experts, food companies, and anonymous information found on the Internet) had almost no effect on consumer acceptance. Surprisingly, the more educated respondents in our survey were more skeptical toward GMFs. Our results contribute to the behavioral literature on consumer attitudes toward GMFs by developing a reasoning mechanism that determines consumer acceptance of GMFs. In particular, this paper quantitatively studies the influence of different sources' credibility on consumer acceptance of GMFs, using mixed-process regression to correct for endogeneity in information sources while taking into consideration information asymmetry and specific preferences in the use of information sources.
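For intuition, the sketch below fits an ordered probit of GMF acceptance on source-credibility variables with statsmodels. All variable names and the simulated data are hypothetical stand-ins; the paper's mixed-process regression for correcting endogeneity (in the spirit of Stata's cmp) is not reproduced here.

```python
# Illustrative sketch: ordered probit of GMF acceptance on source credibility.
# Hypothetical variables and simulated data; the endogeneity-correcting
# mixed-process step from the paper is omitted.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 1167  # sample size reported in the abstract
df = pd.DataFrame({
    "trust_institutes": rng.normal(0, 1, n),  # credibility of research institutes
    "trust_env_org": rng.normal(0, 1, n),     # credibility of environmental orgs
    "educ_years": rng.integers(6, 20, n),
    "female": rng.integers(0, 2, n),
})
latent = (0.5 * df["trust_institutes"] - 0.3 * df["trust_env_org"]
          - 0.05 * df["educ_years"] + rng.normal(0, 1, n))
# Discretize the latent scale into 4 ordered acceptance levels (0..3).
df["acceptance"] = pd.cut(latent, bins=[-np.inf, -1, 0, 1, np.inf], labels=False)

model = OrderedModel(
    df["acceptance"],
    df[["trust_institutes", "trust_env_org", "educ_years", "female"]],
    distr="probit",
)
print(model.fit(method="bfgs", disp=False).summary())
```

The ordered probit is the natural choice here because acceptance is measured on an ordinal scale, so only the ordering of response categories, not their spacing, is assumed.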
Automated Synthetic-to-Real Generalization
Models trained on synthetic images often face degraded generalization to real data. As a convention, these models are often initialized with an ImageNet pre-trained representation. Yet the role of ImageNet knowledge is seldom discussed, despite common practices that leverage this knowledge to maintain generalization ability. An example is the careful hand-tuning of early stopping and layer-wise learning rates, which is shown to improve synthetic-to-real generalization but is also laborious and heuristic. In this work, we explicitly encourage the synthetically trained model to maintain representations similar to those of the ImageNet pre-trained model, and propose a learning-to-optimize (L2O) strategy to automate the selection of layer-wise learning rates. We demonstrate that the proposed framework can significantly improve synthetic-to-real generalization performance without seeing or training on real data, while also benefiting downstream tasks such as domain adaptation. Code is available at: https://github.com/NVlabs/ASG. Comment: Accepted to ICML 2020.
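The core idea, keeping representations close to a frozen ImageNet-pretrained reference while training on synthetic data, can be sketched as follows. This is a minimal illustration assuming torchvision's ResNet-18: the hand-set layer-wise learning rates stand in for what ASG's L2O policy would select automatically, and the MSE feature penalty is a simplification, not the paper's exact objective.

```python
# Minimal sketch of the ASG idea: penalize representation drift from a frozen
# ImageNet-pretrained copy while training on synthetic data. Layer-wise
# learning rates are hand-set here as a stand-in for the learned L2O policy.
import torch
import torch.nn.functional as F
from torchvision import models

student = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
teacher = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
for p in teacher.parameters():
    p.requires_grad_(False)  # frozen reference model

# Per-layer learning rates via parameter groups (the choice L2O would automate).
optimizer = torch.optim.SGD([
    {"params": student.layer1.parameters(), "lr": 1e-4},  # early layers: small lr
    {"params": student.layer4.parameters(), "lr": 1e-3},
    {"params": student.fc.parameters(), "lr": 1e-2},      # head: large lr
], lr=1e-3, momentum=0.9)

def features(net, x):
    # Forward through everything but the classifier head.
    x = net.conv1(x); x = net.bn1(x); x = net.relu(x); x = net.maxpool(x)
    x = net.layer1(x); x = net.layer2(x); x = net.layer3(x); x = net.layer4(x)
    return torch.flatten(net.avgpool(x), 1)

x = torch.randn(4, 3, 224, 224)        # placeholder for a synthetic-image batch
y = torch.randint(0, 1000, (4,))       # placeholder labels
optimizer.zero_grad()
task_loss = F.cross_entropy(student(x), y)
retain_loss = F.mse_loss(features(student, x), features(teacher, x))
(task_loss + 0.1 * retain_loss).backward()  # 0.1: assumed trade-off weight
optimizer.step()
```

The retain term is what prevents the network from overfitting to synthetic textures: it anchors intermediate features to a representation already known to transfer well to real images.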
Principled Architecture-aware Scaling of Hyperparameters
Training a high-quality deep neural network requires choosing suitable hyperparameters, which is a non-trivial and expensive process. Current works try to automatically optimize hyperparameters or design principles for them, such that they generalize to diverse unseen scenarios. However, most designs or optimization methods are agnostic to the choice of network structure and thus largely ignore the impact of neural architectures on hyperparameters. In this work, we precisely characterize the dependence of initializations and maximal learning rates on the network architecture, including the network depth, width, convolutional kernel size, and connectivity patterns. By requiring every parameter to be maximally updated with the same mean squared change in pre-activations, we can generalize our initialization and learning rates across MLPs (multi-layer perceptrons) and CNNs (convolutional neural networks) with sophisticated graph topologies. We verify our principles with comprehensive experiments. More importantly, our strategy further sheds light on advancing current benchmarks for architecture design: a fair comparison of AutoML algorithms requires accurate network rankings, yet we demonstrate that network rankings can easily change simply by training the benchmark networks better with our architecture-aware learning rates and initialization.
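As a rough illustration of making hyperparameters architecture-aware, the sketch below scales initialization variance by fan-in and shrinks per-layer learning rates with fan-in and depth. The scaling rule here is a heuristic stand-in under assumptions of my own, not the paper's exact maximal-update derivation; it only shows the mechanics of tying initialization and learning rates to architectural quantities.

```python
# Heuristic sketch: architecture-aware initialization and per-layer learning
# rates for an MLP. The specific scaling rule is an illustrative assumption,
# not the paper's derived formula.
import math
import torch
import torch.nn as nn

def build_mlp(widths):
    layers = []
    for i, (fi, fo) in enumerate(zip(widths[:-1], widths[1:])):
        lin = nn.Linear(fi, fo)
        nn.init.normal_(lin.weight, std=math.sqrt(2.0 / fi))  # fan-in aware init
        nn.init.zeros_(lin.bias)
        layers.append(lin)
        if i < len(widths) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

widths = [784, 512, 512, 10]
net = build_mlp(widths)
depth = len(widths) - 1

# Learning rate per layer shrinks with that layer's fan-in and overall depth,
# so wider/deeper configurations do not blow up pre-activation updates.
linears = [m for m in net if isinstance(m, nn.Linear)]
groups = [{"params": lin.parameters(), "lr": 0.1 * widths[0] / (fi * depth)}
          for lin, fi in zip(linears, widths[:-1])]
optimizer = torch.optim.SGD(groups, lr=0.1, momentum=0.9)
print([round(g["lr"], 4) for g in optimizer.param_groups])
```

The practical payoff claimed in the abstract follows directly from this mechanism: if each benchmark architecture is retrained with learning rates matched to its own shape, previously "bad" architectures can move up the ranking.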