106 research outputs found

    Consumers’ Purchasing Intentions for Vegetable Oil in the Presence of Generic or Specific Information on Genetic Modification

    In combination with the emergence of genetically modified (GM) food, there has been an urgent call for GM labeling to provide relevant information disclosure. Using data collected in Beijing, China, this study attempts to address the issue of whether different types of information may have distinct impacts on consumers’ stated purchasing decisions. Three types of information are used in this study: one is generic and the other two are linked with two important implications of GM technology—human health and the environment. Results verify that consumers’ purchasing decisions are affected by different types of information through their attitudes and personal characteristics. This finding has potential implications for establishing various GM marketing strategies and information campaigns.
    Keywords: China; genetically modified (GM) food; labeling information; probit model; vegetable oil; Agribusiness; Food Consumption/Nutrition/Food Safety
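
The probit specification behind such stated-choice studies can be sketched as follows. The coefficients and covariates below are hypothetical illustrations, not estimates from the paper:

```python
import math

def norm_cdf(z):
    """Standard normal CDF, Phi(z), via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def probit_purchase_prob(beta, x):
    """P(intends to buy | x) = Phi(x'beta) under a probit model."""
    index = sum(b * xi for b, xi in zip(beta, x))
    return norm_cdf(index)

# Hypothetical coefficients: intercept, health-info dummy,
# environment-info dummy, and an attitude score.
beta = [0.2, -0.5, -0.3, 0.8]
x_generic = [1, 0, 0, 0.5]  # respondent shown generic GM information
x_health = [1, 1, 0, 0.5]   # respondent shown health-framed information
p_generic = probit_purchase_prob(beta, x_generic)
p_health = probit_purchase_prob(beta, x_health)
```

With these illustrative coefficients, health-framed information lowers the predicted purchase probability relative to generic information, mirroring the kind of information-type contrast the study estimates.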

    Influence of Source Credibility on Consumer Acceptance of Genetically Modified Foods in China

    This paper examines the reasoning mechanism behind consumer acceptance of genetically modified foods (GMFs) in China and investigates the influence of source credibility on that acceptance. Based on the original Persuasion Model—developed by Carl Hovland, an American psychologist and pioneer in the study of communication and its effect on attitudes and beliefs—we conducted a survey of 1167 urban residents, selected by multistage proportional sampling from six cities in three economic regions (south, central, and north) of Jiangsu province, through face-to-face interviews. A mixed-process regression that corrects for endogeneity and an ordered probit model were used to test the impact of source credibility on consumers’ acceptance of GMFs. Our major finding was that consumer acceptance of GMFs is affected by factors such as information source credibility, general attitudes, gender, and education level. The perceived reliability of biotechnology research institutes, government offices devoted to the management of GM organisms (GMOs), and GMO technology experts increased urban consumers’ acceptance of GM soybean oil. However, public acceptance decreases as trust in environmental organizations grows. We also found that ignoring the endogeneity of the sources mentioned above significantly understates their effect on consumers’ acceptance. Moreover, the remaining three sources (non-GMO experts, food companies, and anonymous information found on the Internet) had almost no effect on consumer acceptance. Surprisingly, the more educated respondents in our survey were more skeptical toward GMFs. Our results contribute to the behavioral literature on consumer attitudes toward GMFs by developing a reasoning mechanism that determines consumer acceptance of GMFs. In particular, this paper quantitatively studies the influence of the credibility of different sources on consumer acceptance of GMFs, using mixed-process regression to correct for endogeneity in information sources while accounting for information asymmetry and specific preferences in the use of information sources.
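
The ordered probit component used to test credibility effects can be sketched as follows. The linear index and cutpoint values are illustrative assumptions, not the paper's estimates:

```python
import math

def norm_cdf(z):
    """Standard normal CDF, Phi(z)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ordered_probit_probs(xb, cutpoints):
    """Probabilities of each ordered acceptance level, given latent
    y* = x'beta + e with e ~ N(0, 1) and increasing thresholds that
    partition y* into len(cutpoints) + 1 ordered categories."""
    edges = [-math.inf] + list(cutpoints) + [math.inf]
    return [norm_cdf(edges[k + 1] - xb) - norm_cdf(edges[k] - xb)
            for k in range(len(edges) - 1)]

# Hypothetical index (e.g. driven by trust in research institutes)
# and three cutpoints defining four acceptance levels.
probs = ordered_probit_probs(0.4, [-0.5, 0.5, 1.2])
```

Raising the index shifts probability mass toward the higher acceptance categories, which is how a credibility coefficient translates into the acceptance effects reported above.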

    Automated Synthetic-to-Real Generalization

    Models trained on synthetic images often face degraded generalization to real data. As a convention, these models are often initialized with an ImageNet pre-trained representation. Yet the role of ImageNet knowledge is seldom discussed, despite common practices that leverage this knowledge to maintain generalization ability. An example is the careful hand-tuning of early stopping and layer-wise learning rates, which is shown to improve synthetic-to-real generalization but is also laborious and heuristic. In this work, we explicitly encourage the synthetically trained model to maintain representations similar to those of the ImageNet pre-trained model, and propose a \textit{learning-to-optimize (L2O)} strategy to automate the selection of layer-wise learning rates. We demonstrate that the proposed framework can significantly improve synthetic-to-real generalization performance without seeing or training on real data, while also benefiting downstream tasks such as domain adaptation. Code is available at: https://github.com/NVlabs/ASG
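
The idea of keeping the synthetically trained model close to its ImageNet-pretrained representation can be sketched as a proximity regularizer. The L2 feature distance and the weight `lam` here are simplifying assumptions for illustration, not the exact loss used in the paper:

```python
def asg_style_loss(task_loss, feats, feats_pretrained, lam=0.1):
    """Total loss = synthetic-task loss + lam * mean squared distance
    between current features and frozen ImageNet-pretrained features."""
    n = len(feats)
    proximity = sum((a - b) ** 2 for a, b in zip(feats, feats_pretrained)) / n
    return task_loss + lam * proximity

# Identical features incur no penalty; drifting features add a term
# that pulls the model back toward the pretrained representation.
loss_same = asg_style_loss(1.0, [0.3, 0.7], [0.3, 0.7])
loss_drift = asg_style_loss(1.0, [1.3, 1.7], [0.3, 0.7])
```

The L2O component would then tune how strongly each layer is updated (its learning rate) rather than hand-picking these trade-offs per layer.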

    Principled Architecture-aware Scaling of Hyperparameters

    Training a high-quality deep neural network requires choosing suitable hyperparameters, which is a non-trivial and expensive process. Current works try to automatically optimize or design principles of hyperparameters so that they generalize to diverse unseen scenarios. However, most designs or optimization methods are agnostic to the choice of network structure, and thus largely ignore the impact of neural architectures on hyperparameters. In this work, we precisely characterize the dependence of initializations and maximal learning rates on the network architecture, including network depth, width, convolutional kernel size, and connectivity patterns. By requiring every parameter to be maximally updated with the same mean squared change in pre-activations, we can generalize our initialization and learning rates across MLPs (multi-layer perceptrons) and CNNs (convolutional neural networks) with sophisticated graph topologies. We verify our principles with comprehensive experiments. More importantly, our strategy sheds further light on advancing current benchmarks for architecture design. A fair comparison of AutoML algorithms requires accurate network rankings; however, we demonstrate that network rankings can easily change simply by training the networks in benchmarks better, using our architecture-aware learning rates and initialization.
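
The principle of equalizing the mean squared change in pre-activations can be illustrated with fan-in-dependent scales. The exact exponents in the paper depend on depth and topology, so the 1/sqrt(fan_in) and 1/fan_in factors below are illustrative assumptions, not the derived rules:

```python
import math

def init_std(fan_in):
    """Initialization scale ~ 1/sqrt(fan_in) keeps pre-activation
    variance O(1) regardless of layer width."""
    return 1.0 / math.sqrt(fan_in)

def layerwise_lr(base_lr, fan_in):
    """Shrink the per-layer learning rate with fan-in so each SGD step
    produces a comparable mean squared change in pre-activations
    across layers of different width."""
    return base_lr / fan_in

widths = [64, 256, 1024]
stds = [init_std(w) for w in widths]
lrs = [layerwise_lr(0.1, w) for w in widths]
```

Wider layers get smaller initial weights and smaller step sizes, which is the width-dependence part of the architecture-aware scaling the abstract describes.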
