85 research outputs found
Social Group Buying as a Marketing Strategy
Social group buying (SGB) is a novel form of group buying that encourages customers to purchase deeply discounted products together with friends. Over the past few years, SGB has become a popular marketing strategy for online sellers to acquire new customers. Using a dataset from an e-commerce platform, we investigate whether and how SGB affects the sales of sellers. We find that enrolling a few products into SGB has a positive spillover effect on the sales of the sellers' other products, and the effect varies substantially across different types of sellers. Specifically, the positive spillover effect is larger for smaller sellers and more diversified sellers. Moreover, we find that the spillover effect exhibits similar heterogeneity at the brand level, except that it can be negative for large brands and non-diversified brands. This finding suggests that sellers may gain from SGB at the expense of large or non-diversified brands.
AI Assistant in Online Pharmacy
Artificial intelligence (AI) has become increasingly popular for diagnosing diseases and recommending drugs on digital healthcare platforms. Leveraging the introduction of an AI-powered medical assistant to one drug category in an online pharmacy platform, we investigate how the adoption of AI affects users' purchase behaviors using a difference-in-differences design. We find that the adoption of the AI assistant significantly increases users' purchases on the platform, even for drugs not recommended by the AI assistant. Furthermore, we find that the positive effect of AI assistant adoption is stronger for early technology adopters, inexperienced users, and users with higher privacy concerns, likely because these users tend to perceive higher value from AI. Finally, our mediation analysis shows that the AI feature increases users' purchases by increasing their engagement levels on the platform. Our results have important implications for designing and evaluating AI features in online platforms.
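The difference-in-differences design this abstract mentions can be illustrated with a small regression sketch. Everything here (the synthetic data, variable names, and the 0.5 effect size) is invented for illustration; the study's actual estimation is certainly richer than this.

```python
import numpy as np

# Hypothetical difference-in-differences (DiD) sketch: the coefficient on the
# treated*post interaction recovers the treatment effect. All values invented.
rng = np.random.default_rng(0)
n = 4000
treated = rng.integers(0, 2, n)          # user in the drug category with the AI assistant
post = rng.integers(0, 2, n)             # observation after the AI rollout
true_effect = 0.5                        # assumed effect on (log) purchases
y = (1.0 + 0.2 * treated + 0.3 * post
     + true_effect * treated * post
     + rng.normal(0, 0.1, n))            # outcome, e.g. log purchase volume

# OLS with an interaction term; beta[3] is the DiD estimate.
X = np.column_stack([np.ones(n), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
did_estimate = beta[3]
print(f"DiD estimate of the AI effect: {did_estimate:.3f}")  # close to 0.5
```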
Learning Explicit Contact for Implicit Reconstruction of Hand-held Objects from Monocular Images
Reconstructing hand-held objects from monocular RGB images is an appealing
yet challenging task. In this task, contacts between hands and objects provide
important cues for recovering the 3D geometry of the hand-held objects. Though
recent works have employed implicit functions to achieve impressive progress,
they ignore formulating contacts in their frameworks, which results in
producing less realistic object meshes. In this work, we explore how to model
contacts in an explicit way to benefit the implicit reconstruction of hand-held
objects. Our method consists of two components: explicit contact prediction and
implicit shape reconstruction. In the first part, we propose a new subtask of
directly estimating 3D hand-object contacts from a single image. The part-level
and vertex-level graph-based transformers are cascaded and jointly learned in a
coarse-to-fine manner for more accurate contact probabilities. In the second
part, we introduce a novel method to diffuse estimated contact states from the
hand mesh surface to nearby 3D space and leverage diffused contact
probabilities to construct the implicit neural representation for the
manipulated object. Benefiting from estimating the interaction patterns between
the hand and the object, our method can reconstruct more realistic object
meshes, especially for object parts that are in contact with hands. Extensive
experiments on challenging benchmarks show that the proposed method outperforms
the current state of the art by a large margin.
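The paper's idea of diffusing estimated contact states from the hand mesh surface into nearby 3D space can be sketched as a distance-weighted aggregation. The Gaussian kernel, bandwidth, and toy geometry below are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

# Minimal sketch: spread per-vertex contact probabilities from the hand mesh
# into 3D space, so implicit-surface queries near contacting vertices see
# high contact probability. Kernel choice and sigma are assumptions.
def diffuse_contact(query_pts, hand_verts, contact_prob, sigma=0.01):
    """For each 3D query point, aggregate vertex contact probabilities
    with weights that decay with distance to the hand surface."""
    # pairwise distances, shape (num_queries, num_vertices)
    d = np.linalg.norm(query_pts[:, None, :] - hand_verts[None, :, :], axis=-1)
    w = np.exp(-(d ** 2) / (2 * sigma ** 2))          # Gaussian falloff
    w = w / (w.sum(axis=1, keepdims=True) + 1e-9)     # normalize per query
    return w @ contact_prob                           # diffused probabilities

hand_verts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
contact_prob = np.array([1.0, 0.0])   # only the first vertex is in contact
queries = np.array([[0.001, 0.0, 0.0], [0.099, 0.0, 0.0]])
print(diffuse_contact(queries, hand_verts, contact_prob))
```

A query point near the contacting vertex inherits a probability near 1, while one near the non-contacting vertex stays near 0.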
PyMAF-X: Towards Well-aligned Full-body Model Regression from Monocular Images
We present PyMAF-X, a regression-based approach to recovering a full-body
parametric model from a single image. This task is very challenging since minor
parametric deviation may lead to noticeable misalignment between the estimated
mesh and the input image. Moreover, when integrating part-specific estimations
to the full-body model, existing solutions tend to either degrade the alignment
or produce unnatural wrist poses. To address these issues, we propose a
Pyramidal Mesh Alignment Feedback (PyMAF) loop in our regression network for
well-aligned human mesh recovery and extend it as PyMAF-X for the recovery of
expressive full-body models. The core idea of PyMAF is to leverage a feature
pyramid and rectify the predicted parameters explicitly based on the mesh-image
alignment status. Specifically, given the currently predicted parameters,
mesh-aligned evidence will be extracted from finer-resolution features
accordingly and fed back for parameter rectification. To enhance the alignment
perception, an auxiliary dense supervision is employed to provide mesh-image
correspondence guidance, while spatial alignment attention is introduced to
make our network aware of global context. When extending
PyMAF for full-body mesh recovery, an adaptive integration strategy is proposed
in PyMAF-X to produce natural wrist poses while maintaining the well-aligned
performance of the part-specific estimations. The efficacy of our approach is
validated on several benchmark datasets for body-only and full-body mesh
recovery, where PyMAF and PyMAF-X effectively improve the mesh-image alignment
and achieve new state-of-the-art results. The project page with code and video
results can be found at https://www.liuyebin.com/pymaf-x.
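The mesh-alignment feedback loop can be caricatured with a toy numerical analogue: at each iteration, evidence of the current misalignment is extracted and fed back to rectify the predicted parameters. The linear `project` mapping and the fixed-gain rectifier below are invented stand-ins for PyMAF's feature-pyramid sampling and learned regressors.

```python
import numpy as np

# Toy sketch of a mesh-alignment feedback loop. "project" stands in for the
# mesh-to-image mapping; the gain stands in for a learned parameter regressor.
def project(params):
    # hypothetical mapping from parameters to image-plane positions
    return 2.0 * params + 1.0

def feedback_loop(target_2d, init_params, iters=3, gain=0.4):
    params = init_params
    for _ in range(iters):
        residual = target_2d - project(params)   # mesh-aligned evidence
        params = params + gain * residual        # parameter rectification
        print(f"misalignment: {np.abs(residual).max():.4f}")
    return params

target = np.array([3.0, 5.0])        # desired image-plane positions
params = feedback_loop(target, np.zeros(2))
```

Each pass shrinks the misalignment, which mirrors the idea of rectifying parameters from alignment status rather than regressing them in one shot.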
MIMO Is All You Need : A Strong Multi-In-Multi-Out Baseline for Video Prediction
Most existing approaches to video prediction build their models on a
Single-In-Single-Out (SISO) architecture, which takes the current frame as
input to predict the next frame in a recursive manner. This often leads to
severe performance degradation when extrapolating further into the future,
limiting the practical use of the prediction model. Alternatively, a
Multi-In-Multi-Out (MIMO) architecture that outputs all the future frames in
one shot naturally breaks the recursive manner and therefore prevents error
accumulation. However, only a few MIMO models for video prediction have been
proposed to date, and they achieve inferior performance. The real strength of
the MIMO model in this area is not well recognized and is largely
under-explored. Motivated by that, we conduct a comprehensive investigation
in this paper of how far a simple MIMO architecture can go. Surprisingly, our
empirical studies reveal that a simple MIMO model can outperform the
state-of-the-art work by a large margin, especially in dealing with long-term
error accumulation.
After exploring a number of ways and designs, we propose a new MIMO
architecture based on extending the pure Transformer with local spatio-temporal
blocks and a new multi-output decoder, namely MIMO-VP, to establish a new
standard in video prediction. We evaluate our model on four highly competitive
benchmarks (Moving MNIST, Human3.6M, Weather, KITTI). Extensive experiments
show that our model achieves first place on all the benchmarks with remarkable
performance gains and surpasses the best SISO model in all aspects, including
efficiency and both quantitative and qualitative results. We believe our model can serve as a new
baseline to facilitate future research on video prediction. The code will be
released.
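The error-accumulation contrast between the two architectures can be illustrated numerically. The model below (each predictor call adds independent noise of the same magnitude) is an invented simplification, not the paper's analysis.

```python
import numpy as np

# Illustration of why recursive SISO prediction accumulates error while a
# one-shot MIMO prediction does not: model every predictor call as adding
# independent noise, then compare the error at the final step.
rng = np.random.default_rng(42)
horizon, trials, noise = 20, 2000, 0.1

# SISO: each step's prediction feeds the next, so per-step errors compound.
siso_err = np.abs(rng.normal(0, noise, (trials, horizon)).cumsum(axis=1))[:, -1].mean()

# MIMO: every future frame is predicted directly in one shot.
mimo_err = np.abs(rng.normal(0, noise, (trials, horizon)))[:, -1].mean()

print(f"mean |error| at step {horizon}: SISO={siso_err:.3f}, MIMO={mimo_err:.3f}")
```

Under this toy model the SISO error grows roughly with the square root of the horizon, while the MIMO error stays flat.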
Non-destructive monitoring method for leaf area of Brassica napus based on image processing and deep learning
Introduction: Leaves are important organs for photosynthesis in plants, and restricted leaf growth is among the earliest visible effects of abiotic stress such as nutrient deficiency. Rapidly and accurately monitoring plant leaf area is of great importance for understanding plant growth status in modern agricultural production. Method: In this paper, a non-destructive, image processing-based monitoring device for acquiring Brassica napus (rapeseed) leaf area is proposed, comprising an image acquisition device and a deep learning segmentation network. A total of 1,080 rapeseed leaf images from five nutrient amendment treatments were continuously collected using the automatic leaf acquisition device and compared against commonly used area measurement methods (the manual, stretching, and splinting methods). Results: The average error rate of the manual method was 12.12%, that of the stretching method 5.63%, and that of the splinting method 0.65%. The accuracy of the automatic leaf acquisition device improved by 11.47% and 4.98% over the manual and stretching methods, respectively, with the added advantages of speed and automation. Experiments on how the manual, stretching, and splinting methods affect rapeseed growth showed that the leaf growth rate under the stretching treatment was considerably greater than under the normal treatment, while that under the splinting treatment was less than under the normal treatment. Discussion: The mean intersection over union (mIoU) of the UNet-Attention model reached 90%, and the splinting method had higher prediction accuracy with little influence on the rapeseed.
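The mIoU figure reported for the UNet-Attention model is a standard segmentation metric and can be sketched directly. The tiny masks and two-class layout below are invented for illustration.

```python
import numpy as np

# Minimal sketch of mean intersection-over-union (mIoU) for segmentation:
# per-class IoU averaged over the classes present in either mask.
def mean_iou(pred, target, num_classes):
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0], [1, 1]])      # predicted mask (1 = leaf pixel)
target = np.array([[0, 1], [1, 1]])    # ground-truth mask
print(f"mIoU: {mean_iou(pred, target, 2):.3f}")
```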
Universal Information Extraction with Meta-Pretrained Self-Retrieval
Universal Information Extraction (Universal IE) aims to solve different
extraction tasks in a uniform text-to-structure generation manner. Such a
generation procedure tends to struggle when there exist complex information
structures to be extracted. Retrieving knowledge from external knowledge bases
may help models to overcome this problem but it is impossible to construct a
knowledge base suitable for various IE tasks. Inspired by the fact that a
large amount of knowledge is stored in pretrained language models (PLMs) and
can be retrieved explicitly, in this paper we propose MetaRetriever to retrieve
task-specific knowledge from PLMs to enhance universal IE. As different IE
tasks need different knowledge, we further propose a Meta-Pretraining Algorithm
which allows MetaRetriever to quickly achieve maximum task-specific retrieval
performance when fine-tuning on downstream IE tasks. Experimental results show
that MetaRetriever achieves new state-of-the-art results on 4 IE tasks and 12
datasets under fully-supervised, low-resource, and few-shot scenarios.
(Accepted to ACL.)
Asymmetric Somatic Hybridization Affects Synonymous Codon Usage Bias in Wheat
Asymmetric somatic hybridization is an efficient strategy for crop breeding that introduces exogenous chromatin fragments, leading to whole-genomic shock and local chromosomal shock that induce genome-wide genetic variation, including indels (insertions and deletions) and nucleotide substitutions. Nucleotide substitution causes synonymous codon usage bias (SCUB), an indicator of genomic mutation and natural selection. However, how asymmetric somatic hybridization affects SCUB has not been addressed. Here, we explored this issue by comparing expressed sequence tags of a common wheat cultivar and its asymmetric somatic hybrid line. Asymmetric somatic hybridization affected SCUB and promoted the bias toward A- and T-ending synonymous codons (SCs). SCUB frequencies in chromosomes introgressed with exogenous fragments were comparable to those in chromosomes without exogenous fragments, showing that the exogenous fragments had no local chromosomal effect. Asymmetric somatic hybridization affected SCUB frequencies in indel-flanking sequences more strongly than in non-flanking sequences, and this stronger effect was present in chromosomes both with and without exogenous fragments. The SCUB shift in DNA methylation-related SC pairs was more pronounced than in other SC pairs. The SCUB shift was similar among the seven groups of allelic chromosomes as well as the three sub-genomes. Our work demonstrates that the SCUB shift induced by asymmetric somatic hybridization is attributable to whole-genomic shock, and that DNA methylation is a putative driving force of the SCUB shift during asymmetric somatic hybridization. Asymmetric somatic hybridization thus provides a useful method for probing the nature of SCUB shifts and of the genetic variation induced by genomic shock.
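The quantity tracked in a SCUB analysis of this kind, the bias toward A- and T-ending codons, can be sketched as a third-position count. The demo sequence is invented; a real analysis would read coding sequences from ESTs and restrict the count to synonymous codon families rather than all codons, as done here for brevity.

```python
# Simplified sketch (not the paper's pipeline) of the A/T-ending codon share.
# Note: this counts ALL codons; a proper SCUB analysis restricts to
# synonymous sites within each codon family.
def at_ending_fraction(cds):
    """Fraction of codons whose third position is A or T."""
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    if not codons:
        return 0.0
    return sum(c[2] in "AT" for c in codons) / len(codons)

# GCA, GCT, GCC, and GCG all encode alanine; a shift toward A/T-ending
# synonymous codons raises this fraction.
print(at_ending_fraction("GCAGCTGCCGCG"))  # 2 of 4 codons end in A/T -> 0.5
```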
Stability and drug dissolution evaluation of Qingkailing soft/hard capsules based on multi-component quantification and fingerprint pattern statistical analysis
Purpose: To carry out a post-marketing evaluation of the stability and drug dissolution of Qingkailing (QKL) soft/hard capsules. Methods: A high-performance liquid chromatography with diode array detection (HPLC-DAD) method was developed for the determination of three key ingredients (chlorogenic acid, geniposide, and baicalin) and the fingerprints of QKL soft/hard capsules. Stability tests were carried out based on long-term testing. The drug release profiles of QKL soft and hard capsules were studied using semi-bionic incubation experiments. Results: The linearity, precision, stability, repeatability, and recovery of the HPLC and fingerprint methods all met the requirements of the CFDA. Stability data from long-term studies showed that within 6 months the contents of the three key ingredients in both soft and hard capsules remained > 90%. However, fingerprint pattern statistical analysis showed that the soft capsule is more stable than the hard capsule. Furthermore, the key ingredients of the hard capsule dissolved much faster (p < 0.05) than those of the soft capsule; the amount of drug dissolved from the hard capsule was about 4 times that from the soft capsule after a 4-h incubation in gastric lavage fluid. In intestinal lavage fluid, more than 90% of the chlorogenic acid, geniposide, and baicalin in the hard capsule dissolved within 2 h, while the soft capsule displayed a 12-h sustained release. Fingerprint pattern statistical analysis also showed that most of the components of the soft capsule dissolved after 8 h. Conclusion: Compared with the hard capsule, the QKL soft capsule has certain advantages in stability and drug dissolution, which may affect the biopharmaceutics and the clinical effects of the drug. Keywords: Qingkailing capsule, Chlorogenic acid, Geniposide, Baicalin, Fingerprint, Sustained release, Principal component analysis
GujiBERT and GujiGPT: Construction of Intelligent Information Processing Foundation Language Models for Ancient Texts
In the context of the rapid development of large language models, we have
meticulously trained and introduced the GujiBERT and GujiGPT language models,
which are foundational models specifically designed for intelligent information
processing of ancient texts. These models have been trained on an extensive
dataset that encompasses both simplified and traditional Chinese characters,
allowing them to effectively handle various natural language processing tasks
related to ancient books, including but not limited to automatic sentence
segmentation, punctuation, word segmentation, part-of-speech tagging, entity
recognition, and automatic translation. Notably, these models have exhibited
exceptional performance across a range of validation tasks using publicly
available datasets. Our research findings highlight the efficacy of employing
self-supervised methods to further train the models using classical text
corpora, thus enhancing their capability to tackle downstream tasks. Moreover,
it is worth emphasizing that the choice of font, the scale of the corpus, and
the initial model selection all exert significant influence over the ultimate
experimental outcomes. To cater to the diverse text processing preferences of
researchers in digital humanities and linguistics, we have developed three
distinct categories comprising a total of nine model variations. We believe
that by sharing these foundational language models specialized in the domain of
ancient texts, we can facilitate the intelligent processing and scholarly
exploration of ancient literary works and, consequently, contribute to the
global dissemination of China's rich and esteemed traditional culture in this
new era.