244 research outputs found
Influence of Environmental Knowledge and Green Trust on Green Purchase Behaviour
This study investigates the influence of environmental knowledge and green trust on green purchase intentions and behaviour, and the mediating role of purchase intentions. Through questionnaire surveys and statistical analysis, the findings reveal that environmental knowledge and green trust significantly influence both green purchase intentions and actual green purchasing behaviour. These results hold significant implications for the design of environmental education and advocacy strategies, helping to enhance consumers' environmental awareness and promote green purchasing behaviour.
The Application of the “Three Accordance” Principles in the Translation of Foreign Publicity Texts: Taking the translation of Chinese leaders’ epidemic speech as examples
This paper takes the epidemic speeches of Chinese leaders as its research object and analyzes their English translations in detail based on the “Three Accordance” Principles for External Publicity. Three translation techniques for foreign publicity texts are discussed: comparing words with each other, turning images into meanings, and adapting to local customs. These techniques are used to present the strengths of China, avoid prejudice, and better convey Chinese opinions and attitudes.
Towards Visually Explaining Variational Autoencoders
Recent advances in Convolutional Neural Network (CNN) model interpretability
have led to impressive progress in visualizing and understanding model
predictions. In particular, gradient-based visual attention methods have driven
much recent effort in using visual attention maps as a means for visual
explanations. A key problem, however, is that these methods are designed for
classification and categorization tasks, and their extension to explaining
generative models, e.g., variational autoencoders (VAEs), is not trivial. In this
work, we take a step towards bridging this crucial gap, proposing the first
technique to visually explain VAEs by means of gradient-based attention. We
present methods to generate visual attention from the learned latent space, and
also demonstrate that such attention explanations serve more than just explaining
VAE predictions. We show how these attention maps can be used to localize
anomalies in images, demonstrating state-of-the-art performance on the MVTec-AD
dataset. We also show how they can be infused into model training, helping
bootstrap the VAE into learning improved latent space disentanglement,
demonstrated on the dSprites dataset.
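The core idea of generating attention from the learned latent space can be sketched in a few lines. The encoder architecture and dimensions below are illustrative toy assumptions, not the paper's model; the sketch only shows the Grad-CAM-style mechanics of backpropagating a scalar derived from the latent code to a convolutional feature map.

```python
import torch
import torch.nn as nn

class TinyVAEEncoder(nn.Module):
    """Toy convolutional VAE encoder; we only need its last conv feature map."""
    def __init__(self, latent_dim=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(32 * 7 * 7, latent_dim)

    def forward(self, x):
        feats = self.conv(x)               # last conv feature map
        mu = self.fc_mu(feats.flatten(1))  # latent mean
        return feats, mu

def latent_attention(encoder, x):
    """Backprop a scalar built from the latent mean to the conv features,
    then weight each channel by its average gradient (Grad-CAM style)."""
    feats, mu = encoder(x)
    feats.retain_grad()                 # feats is non-leaf, keep its gradient
    mu.sum().backward()                 # scalar derived from the latent code
    weights = feats.grad.mean(dim=(2, 3), keepdim=True)
    cam = torch.relu((weights * feats).sum(dim=1))  # (B, H, W) attention map
    return cam

enc = TinyVAEEncoder()
cam = latent_attention(enc, torch.randn(2, 1, 28, 28))
print(cam.shape)  # torch.Size([2, 7, 7])
```

In the anomaly-localization use case described above, such a map would be thresholded and compared against ground-truth defect masks.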
Learning Similarity Attention
We consider the problem of learning similarity functions. While there has
been substantial progress in learning suitable distance metrics, these
techniques in general lack decision reasoning, i.e., explaining why the input
set of images is similar or dissimilar. In this work, we solve this key problem
by proposing the first method to generate generic visual similarity
explanations with gradient-based attention. We demonstrate that our technique
is agnostic to the specific similarity model type, e.g., we show applicability
to Siamese, triplet, and quadruplet models. Furthermore, we make our proposed
similarity attention a principled part of the learning process, resulting in a
new paradigm for learning similarity functions. We demonstrate that our
learning mechanism results in more generalizable, as well as explainable,
similarity models. Finally, we demonstrate the generality of our framework by
means of experiments on a variety of tasks, including image retrieval, person
re-identification, and low-shot semantic segmentation.
Comment: 10 pages, 7 figures, 4 tables
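A minimal sketch of gradient-based similarity attention, assuming a toy Siamese embedding network and cosine similarity as the score (both are illustrative choices, not the paper's architecture): the similarity score is backpropagated to one branch's spatial features to highlight what drives the (dis)similarity decision.

```python
import torch
import torch.nn as nn

# Toy shared embedding network for a Siamese pair.
embed = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 16, 32),
)

def similarity_attention(x1, x2):
    """Backprop the similarity score to x1's conv features and build a
    Grad-CAM-style map over them."""
    feats = embed[1](embed[0](x1))            # ReLU(conv(x1)): keep spatial map
    feats.retain_grad()
    z1 = embed[4](embed[3](embed[2](feats)))  # finish the embedding for x1
    z2 = embed(x2)
    score = nn.functional.cosine_similarity(z1, z2).sum()
    score.backward()
    w = feats.grad.mean(dim=(2, 3), keepdim=True)   # per-channel weights
    return torch.relu((w * feats).sum(dim=1))       # (B, H, W) attention map

att = similarity_attention(torch.randn(1, 3, 16, 16),
                           torch.randn(1, 3, 16, 16))
print(att.shape)  # torch.Size([1, 16, 16])
```

The same recipe applies unchanged to triplet or quadruplet scores, since only the scalar being backpropagated changes, which is what makes the approach model-type agnostic.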
Graph Meets LLM: A Novel Approach to Collaborative Filtering for Robust Conversational Understanding
Conversational AI systems such as Alexa need to understand defective queries
to ensure robust conversational understanding and reduce user friction. These
defective queries often arise from user ambiguities, mistakes, or errors in
automatic speech recognition (ASR) and natural language understanding (NLU).
Personalized query rewriting is an approach that focuses on reducing defects
in queries by taking into account the user's individual behavior and
preferences. It typically relies on an index of past successful user
interactions with the conversational AI. However, unseen interactions within
the user's history present additional challenges for personalized query
rewriting. This paper presents our "Collaborative Query Rewriting" approach,
which specifically addresses the task of rewriting new user interactions that
have not been previously observed in the user's history. This approach builds a
"User Feedback Interaction Graph" (FIG) of historical user-entity interactions
and leverages multi-hop graph traversal to enrich each user's index to cover
future unseen defective queries. The enriched user index is called a
Collaborative User Index and contains hundreds of additional entries. To
counteract precision degradation from the enlarged index, we add additional
transformer layers to the L1 retrieval model and incorporate graph-based and
guardrail features into the L2 ranking model.
Since the user index can be pre-computed, we further investigate the
utilization of a Large Language Model (LLM) to enhance the FIG for user-entity
link prediction in the Video/Music domains. Specifically, this paper
investigates the Dolly-V2 7B model. We found that the user index augmented by
the fine-tuned Dolly-V2 generation significantly enhanced the coverage of
future unseen user interactions, thereby boosting QR performance on unseen
queries compared with the graph-traversal-only approach.
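The index-enrichment step via multi-hop traversal can be sketched as follows. The interaction data and hop count are illustrative, not from the paper: each hop moves from a user's entities to co-interacting users and collects the entities they touched, approximating collaborative filtering over the user-entity interaction graph.

```python
from collections import defaultdict

# Toy (user, entity) historical interactions standing in for the FIG.
edges = [
    ("u1", "song_a"), ("u2", "song_a"), ("u2", "song_b"),
    ("u3", "song_b"), ("u3", "song_c"),
]
user_to_ent, ent_to_user = defaultdict(set), defaultdict(set)
for u, e in edges:
    user_to_ent[u].add(e)
    ent_to_user[e].add(u)

def enrich_index(user, hops=2):
    """Each hop goes user -> entities -> co-interacting users, collecting
    every entity seen along the way into the enriched index."""
    entities = set(user_to_ent[user])
    for _ in range(hops):
        users = {v for e in entities for v in ent_to_user[e]}
        entities |= {e for v in users for e in user_to_ent[v]}
    return entities

print(sorted(enrich_index("u1")))  # ['song_a', 'song_b', 'song_c']
```

At production scale this enriched index would contain hundreds of entries per user, which is exactly why the abstract pairs it with stronger L1 retrieval and L2 ranking models to recover precision.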
Effects of 1.5°C and 2°C of warming on regional reference evapotranspiration and drying: A case study of the Yellow River Basin, China
Progressive Multi-view Human Mesh Recovery with Self-Supervision
To date, little attention has been given to multi-view 3D human mesh
estimation, despite real-life applicability (e.g., motion capture, sport
analysis) and robustness to single-view ambiguities. Existing solutions
typically suffer from poor generalization performance to new settings, largely
due to the limited diversity of image-mesh pairs in multi-view training data.
To address this shortcoming, prior work has explored the use of synthetic
images. However, besides the usual visual gap between rendered and target data,
synthetic-data-driven multi-view estimators also suffer from overfitting to the
camera viewpoint distribution sampled during training which usually differs
from real-world distributions. Tackling both challenges, we propose a novel
simulation-based training pipeline for multi-view human mesh recovery, which
(a) relies on intermediate 2D representations which are more robust to
synthetic-to-real domain gap; (b) leverages learnable calibration and
triangulation to adapt to more diversified camera setups; and (c) progressively
aggregates multi-view information in a canonical 3D space to remove ambiguities
in 2D representations. Through extensive benchmarking, we demonstrate the
superiority of the proposed solution especially for unseen in-the-wild
scenarios.
Comment: Accepted by AAAI202
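The triangulation step that the abstract's learnable module generalizes is the classical Direct Linear Transform (DLT), sketched below with synthetic cameras (the projection matrices and point are illustrative): each camera contributes two rows to a homogeneous system A X = 0, solved via SVD.

```python
import numpy as np

def triangulate(projections, points_2d):
    """DLT triangulation: build two rows per camera from the 3x4 projection
    matrix P and the observed pixel (u, v), then take the SVD null space."""
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    X = np.linalg.svd(A)[2][-1]   # right singular vector of smallest value
    return X[:3] / X[3]           # de-homogenize

# Two synthetic cameras observing the 3D point (1, 2, 5).
X_true = np.array([1.0, 2.0, 5.0, 1.0])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # shifted
uv = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
point = triangulate([P1, P2], uv)
print(point)  # ≈ [1. 2. 5.]
```

Making calibration and triangulation learnable, as the abstract proposes, replaces the fixed algebra above with modules that can adapt to diverse or imperfectly known camera setups.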
PREF: Predictability Regularized Neural Motion Fields
Knowing the 3D motions in a dynamic scene is essential to many vision
applications. Recent progress is mainly focused on estimating the activity of
some specific elements like humans. In this paper, we leverage a neural motion
field for estimating the motion of all points in a multiview setting. Modeling
the motion from a dynamic scene with multiview data is challenging due to the
ambiguities in points of similar color and points with time-varying color. We
propose to regularize the estimated motion to be predictable. If the motion
from previous frames is known, then the motion in the near future should be
predictable. Therefore, we introduce a predictability regularization by first
conditioning the estimated motion on latent embeddings, then by adopting a
predictor network to enforce predictability on the embeddings. The proposed
framework PREF (Predictability REgularized Fields) achieves on par or better
results than state-of-the-art neural motion field-based dynamic scene
representation methods, while requiring no prior knowledge of the scene.
Comment: Accepted at ECCV 2022 (oral). Paper + supplementary material
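The predictability regularizer can be sketched in a few lines. The predictor architecture, window size, and dimensions below are illustrative assumptions, not the paper's configuration: a small recurrent predictor maps the previous frames' latent embeddings to the next one, and the prediction error becomes an auxiliary loss that pushes the motion field's embeddings to evolve predictably.

```python
import torch
import torch.nn as nn

T, D = 10, 16
embeddings = nn.Parameter(torch.randn(T, D))  # one latent code per frame
predictor = nn.GRU(D, D, batch_first=True)    # toy predictor network

def predictability_loss(window=3):
    """Predict e_t from e_{t-window..t-1} for every valid t and average
    the squared prediction error (the regularization term)."""
    loss = 0.0
    for t in range(window, T):
        past = embeddings[t - window:t].unsqueeze(0)  # (1, window, D)
        _, h = predictor(past)                        # final hidden state
        loss = loss + nn.functional.mse_loss(h[-1, 0], embeddings[t])
    return loss / (T - window)

val = float(predictability_loss())
print(val)  # a non-negative scalar added to the training objective
```

Because both the embeddings and the predictor are trainable, minimizing this term jointly with the reconstruction objective disambiguates motions that photometric evidence alone cannot, such as points of similar or time-varying color.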
- …