The Research on Impact Factors of Perceived Online Review Usefulness
Online reviews have recently become a focus of marketing research, especially their impact on consumers' purchasing decisions. Because prior work has relied largely on questionnaire methods and has paid little attention to the underlying influencing mechanisms, this study examines in detail the factors that affect the perceived usefulness of online reviews. The study uses text mining to collect valid data from Yelp.com, the world's largest online review platform. Results indicate that review depth, review humor as flagged by other users, the reviewer's number of historical reviews, the reviewer's rank, the reviewer's centrality in the social network, and responses from other users all have a significant impact on perceived review usefulness. The review reader's product involvement moderates the effects of both the information content and the information source on perceived review usefulness.
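One common way to test such moderation effects is a regression with interaction terms. The sketch below is purely illustrative: the file name and column names (usefulness, depth, humor_votes, involvement, etc.) are hypothetical assumptions, not the paper's actual variables or data.

```python
# Hypothetical moderation analysis: interaction terms test whether product
# involvement changes how content and source cues affect perceived usefulness.
import pandas as pd
import statsmodels.formula.api as smf

reviews = pd.read_csv("yelp_reviews.csv")  # assumed pre-collected review-level data

model = smf.ols(
    "usefulness ~ depth + humor_votes + n_past_reviews + reviewer_rank"
    " + centrality + n_responses + involvement"
    " + depth:involvement + reviewer_rank:involvement",
    data=reviews,
).fit()
print(model.summary())
```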
Long and Diverse Text Generation with Planning-based Hierarchical Variational Model
Existing neural methods for data-to-text generation are still struggling to
produce long and diverse texts: they are insufficient to model input data
dynamically during generation, to capture inter-sentence coherence, or to
generate diversified expressions. To address these issues, we propose a
Planning-based Hierarchical Variational Model (PHVM). Our model first plans a
sequence of groups (each group is a subset of input items to be covered by a
sentence) and then realizes each sentence conditioned on the planning result
and the previously generated context, thereby decomposing long text generation
into dependent sentence generation sub-tasks. To capture expression diversity,
we devise a hierarchical latent structure where a global planning latent
variable models the diversity of reasonable planning and a sequence of local
latent variables controls sentence realization. Experiments show that our model
outperforms state-of-the-art baselines in long and diverse text generation.
Comment: To appear in EMNLP 2019
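To make the plan-then-realize decomposition concrete, here is a minimal, hypothetical sketch of the control flow with toy stand-ins (not the authors' PHVM implementation): a global plan partitions the input items into groups, and each group is realized as one sentence conditioned on the previously generated context.

```python
import random

def plan_groups(items, rng):
    """Stand-in for sampling a plan from the global planning latent variable:
    split the input items into consecutive groups, one group per sentence."""
    groups, i = [], 0
    while i < len(items):
        size = rng.randint(1, 2)           # toy stand-in for the learned planner
        groups.append(items[i:i + size])
        i += size
    return groups

def realize_sentence(group, context, rng):
    """Stand-in for the decoder conditioned on a local latent variable,
    the planned group, and the previously generated context."""
    joiner = rng.choice(["with", "featuring"])  # toy stand-in for local diversity
    prefix = "It also comes" if context else "This product comes"
    return f"{prefix} {joiner} {', '.join(k + ' ' + v for k, v in group)}."

def generate(items, seed=0):
    rng = random.Random(seed)
    sentences = []
    for group in plan_groups(items, rng):                          # global plan
        sentences.append(realize_sentence(group, sentences, rng))  # local realization
    return " ".join(sentences)

items = [("color", "navy"), ("material", "cotton"), ("fit", "slim")]
print(generate(items))
```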
Optimization on the container loading sequence based on hybrid dynamic programming
Retrieving export containers from a container yard is an important part of the ship loading process, during which arranging the retrieval sequence to enhance port efficiency has become a vital issue. This paper presents a two-phase hybrid dynamic programming algorithm aimed at obtaining an optimized container loading sequence for a crane to retrieve all the containers from the yard to the ship. The optimization goal is to minimize the number of relocation operations, which has a direct impact on container loading efficiency. The two phases of the proposed algorithm are designed as follows: in the first phase, a heuristic algorithm retrieves the subset of containers that need no relocation and may be loaded directly onto the ship; in the second phase, dynamic programming with heuristic rules is applied to solve the loading sequence problem for the remaining containers. Numerical experiments show the effectiveness and practicability of the model and the algorithm through comparisons with the loading proposals of an existing study and with actual operating rules, respectively.
First published online: 14 Jan 201
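The abstract does not give the algorithm's details, so the following is only a toy sketch of the two-phase idea under assumed data structures; a greedy placement heuristic stands in for the paper's dynamic-programming second phase.

```python
# The yard is a list of stacks; each stack lists container IDs bottom-to-top,
# and containers must be retrieved in the order given by loading_order.

def retrieve_all(yard, loading_order):
    yard = [list(s) for s in yard]
    relocations = 0
    for target in loading_order:
        # Phase 1: if the target is already on top of a stack, retrieve it directly.
        src = next(i for i, s in enumerate(yard) if target in s)
        while yard[src][-1] != target:
            # Phase 2 (greedy stand-in for DP): move the blocking container to an
            # empty stack if available, otherwise to the stack whose top container
            # is needed latest, reducing the chance it blocks again.
            blocker = yard[src].pop()
            candidates = [i for i in range(len(yard)) if i != src]
            dest = max(
                candidates,
                key=lambda i: loading_order.index(yard[i][-1]) if yard[i] else len(loading_order),
            )
            yard[dest].append(blocker)
            relocations += 1
        yard[src].pop()  # load the target container onto the ship
    return relocations

yard = [["C1", "C3"], ["C2"], []]                    # bottom-to-top stacks
print(retrieve_all(yard, ["C1", "C2", "C3"]))        # number of relocations (1)
```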
Enhancing Retrieval-Augmented Large Language Models with Iterative Retrieval-Generation Synergy
Large language models are powerful text processors and reasoners, but are
still subject to limitations including outdated knowledge and hallucinations,
which necessitates connecting them to the world. Retrieval-augmented large language models have attracted extensive attention for grounding model generation on external knowledge. However, retrievers struggle to capture relevance,
especially for queries with complex information needs. Recent work has proposed
to improve relevance modeling by having large language models actively involved
in retrieval, i.e., to improve retrieval with generation. In this paper, we
show that strong performance can be achieved by a method we call Iter-RetGen,
which synergizes retrieval and generation in an iterative manner. A model
output shows what might be needed to finish a task, and thus provides an
informative context for retrieving more relevant knowledge which in turn helps
generate a better output in the next iteration. Compared with recent work which
interleaves retrieval with generation when producing an output, Iter-RetGen
processes all retrieved knowledge as a whole and largely preserves the
flexibility in generation without structural constraints. We evaluate
Iter-RetGen on multi-hop question answering, fact verification, and commonsense
reasoning, and show that it can flexibly leverage parametric knowledge and
non-parametric knowledge, and is superior to or competitive with
state-of-the-art retrieval-augmented baselines while incurring lower retrieval and generation overhead. We can further improve performance via generation-augmented retrieval adaptation.
Comment: Preprint
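As a rough illustration of the iteration described above (not the paper's implementation), the sketch below uses a toy word-overlap retriever and a placeholder generate function that are purely hypothetical: each round retrieves with the question plus the previous draft answer, then regenerates.

```python
def retrieve(query, corpus, k=2):
    """Toy lexical retriever: rank passages by word overlap with the query."""
    words = set(query.lower().split())
    return sorted(corpus, key=lambda p: -len(words & set(p.lower().split())))[:k]

def generate(question, passages):
    """Placeholder for an LLM call that answers `question` given `passages`."""
    return f"Answer drafted from: {passages}"

def iter_retgen(question, corpus, iterations=2):
    output = ""
    for _ in range(iterations):
        # Each iteration retrieves with the question plus the previous output,
        # so the draft answer surfaces what else is needed to finish the task.
        passages = retrieve(question + " " + output, corpus)
        output = generate(question, passages)
    return output

corpus = [
    "Passage about the 2008 Olympics host city.",
    "Passage about the population of Beijing.",
]
print(iter_retgen("What is the population of the 2008 Olympics host city?", corpus))
```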
Mitochondrial genomes of two Barklice, Psococerastis albimaculata and Longivalvus hyalospilus (Psocoptera: Psocomorpha): contrasting rates in mitochondrial gene rearrangement between major lineages of Psocodea
The superorder Psocodea has ∼10,000 described species in two orders: Psocoptera (barklice and booklice) and Phthiraptera (parasitic lice). One booklouse, Liposcelis bostrychophila, and six species of parasitic lice have been sequenced for complete mitochondrial (mt) genomes; these seven species have the most rearranged mt genomes seen in insects. The mt genome of a barklouse, Lepidopsocid sp., has also been sequenced and is much less rearranged than those of the booklouse and the parasitic lice. To further understand mt gene rearrangements in the Psocodea, we sequenced the mt genomes of two barklice, Psococerastis albimaculata and Longivalvus hyalospilus, the first representatives from the suborder Psocomorpha, which is the most species-rich suborder of the Psocodea. We found that these two barklice have the least rearranged mt genomes seen in the Psocodea to date: a protein-coding gene (nad3) and five tRNAs (trnN, trnS1, trnE, trnM and trnC) have translocated. Rearrangements of mt genes in these two barklice can be accounted for by two events of tandem duplication followed by random deletions. Phylogenetic analyses of the mt genome sequences support the view that Psocoptera is paraphyletic whereas Phthiraptera is monophyletic. The booklouse, L. bostrychophila (suborder Troctomorpha), is most closely related to the parasitic lice. The barklice (suborders Trogiomorpha and Psocomorpha) are closely related and form a monophyletic group. We conclude that mt gene rearrangement has been substantially faster in the lineage leading to the booklice and the parasitic lice than in the lineage leading to the barklice. Lifestyle change appears to be associated with the contrasting rates in mt gene rearrangements between the two lineages of the Psocodea.
Intrinsic polarization conversion and avoided-mode crossing in X-cut lithium niobate microrings
Compared with well-developed free-space polarization converters, polarization conversion between TE and TM modes in waveguides is generally attributed to shape birefringence arising from, for example, curvature, the morphology of the waveguide cross section, and scattering. Here, we reveal a hidden polarization conversion mechanism in X-cut lithium niobate microrings: the conversion can also be driven by the intrinsic birefringence of the waveguide, which in turn introduces an unavoidable avoided-mode crossing. Experimentally, we find that this mode crossing results in severe suppression of one sideband in local nondegenerate four-wave mixing and disrupts the cascaded four-wave mixing on that side. We also propose, for the first time to the best of our knowledge, a two-dimensional method to simulate the eigenmodes (TE and TM) and the mode-crossing point in X-cut microrings, which avoids the large computational cost of three-dimensional simulation of anisotropic microrings. This work provides a new approach to the design of polarization converters and the simulation of monolithic photonic integrated circuits, and may help explain the missing temporal dissipative soliton formation in X-cut lithium niobate rings.
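For context only (a textbook two-coupled-mode relation, not a result from this paper): when two modes with uncoupled resonance frequencies ω_TE and ω_TM are coupled at rate g, the hybridized eigenfrequencies are

```latex
\omega_{\pm} = \frac{\omega_{\mathrm{TE}} + \omega_{\mathrm{TM}}}{2}
\pm \sqrt{\left(\frac{\omega_{\mathrm{TE}} - \omega_{\mathrm{TM}}}{2}\right)^{2} + g^{2}},
```

so the two branches never intersect, and their minimum separation, reached when ω_TE = ω_TM, is 2g; this is the avoided-mode crossing referred to above.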
Rethinking the Reference-based Distinctive Image Captioning
Distinctive Image Captioning (DIC) -- generating distinctive captions that
describe the unique details of a target image -- has received considerable
attention over the last few years. A recent DIC work proposes to generate
distinctive captions by comparing the target image with a set of semantically similar reference images, i.e., reference-based DIC (Ref-DIC). It aims to ensure that the generated captions can tell the target and reference images apart.
Unfortunately, reference images used by existing Ref-DIC works are easy to
distinguish: these reference images only resemble the target image at
scene-level and have few common objects, such that a Ref-DIC model can
trivially generate distinctive captions even without considering the reference
images. To ensure Ref-DIC models really perceive the unique objects (or
attributes) in target images, we first propose two new Ref-DIC benchmarks.
Specifically, we design a two-stage matching mechanism, which strictly controls
the similarity between the target and reference images at object-/attribute-
level (vs. scene-level). Secondly, to generate distinctive captions, we develop
a strong Transformer-based Ref-DIC baseline, dubbed TransDIC. It not only
extracts visual features from the target image, but also encodes the
differences between objects in the target and reference images. Finally, for
more trustworthy benchmarking, we propose a new evaluation metric named
DisCIDEr for Ref-DIC, which evaluates both the accuracy and distinctiveness of
the generated captions. Experimental results demonstrate that our TransDIC can
generate distinctive captions. Besides, it outperforms several state-of-the-art
models on the two new benchmarks across different metrics.
Comment: ACM MM 2022
CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing
Recent developments in large language models (LLMs) have been impressive.
However, these models sometimes show inconsistencies and problematic behavior,
such as hallucinating facts, generating flawed code, or creating offensive and
toxic content. Unlike these models, humans typically utilize external tools to
cross-check and refine their initial content, like using a search engine for
fact-checking, or a code interpreter for debugging. Inspired by this
observation, we introduce a framework called CRITIC that allows LLMs, which are
essentially "black boxes" to validate and progressively amend their own outputs
in a manner similar to human interaction with tools. More specifically,
starting with an initial output, CRITIC interacts with appropriate tools to
evaluate certain aspects of the text, and then revises the output based on the
feedback obtained during this validation process. Comprehensive evaluations
involving free-form question answering, mathematical program synthesis, and
toxicity reduction demonstrate that CRITIC consistently enhances the
performance of LLMs. Meanwhile, our research highlights the crucial importance
of external feedback in promoting the ongoing self-improvement of LLMs.
Comment: ICLR 2024
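As a rough illustration of this verify-then-correct loop (not the paper's implementation), the sketch below uses a toy arithmetic checker as the external tool and a purely hypothetical placeholder llm function in place of an actual model call.

```python
def llm(prompt):
    """Placeholder LLM: returns a (possibly wrong) arithmetic claim or a fix."""
    if "correct the claim" in prompt:
        return "17 * 24 = 408"
    return "17 * 24 = 406"           # initial output with a mistake

def tool_check(claim):
    """External tool (here, Python itself) verifies the claim and gives feedback."""
    lhs, rhs = claim.split("=")
    ok = eval(lhs) == int(rhs)
    return ok, "" if ok else f"{lhs.strip()} actually equals {eval(lhs)}"

def critic(task, max_rounds=3):
    output = llm(task)
    for _ in range(max_rounds):
        ok, feedback = tool_check(output)
        if ok:
            break
        # Revise the output based on the tool's feedback.
        output = llm(f"correct the claim '{output}' given feedback: {feedback}")
    return output

print(critic("Compute 17 * 24 and state the result as an equation."))
```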