Afraid of Niche, Tired of Mass: Atypical Idea Combination on Crowdfunding Platform
A new idea usually follows a stream of similar ideas yet simultaneously combines atypical elements from ideas outside this stream. A successful business idea usually balances familiarity and atypicality. To investigate the relationship between atypical innovation and crowdfunding project performance, we collect data from one of the largest crowdfunding platforms in China and build a similarity network of crowdfunding projects to measure each project's degree of atypical innovation. Using a double machine learning model, we find that the atypical combination of mainstream and niche ideas has a significant positive effect on an individual project's funding: such projects are roughly five times more successful than other projects. We also identify potential reasons for the poor performance of purely niche and purely mainstream projects: donors are conservative toward niche projects because of their high risk, and are driven away by the monotonous repetition of mainstream projects.
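The abstract's effect estimate comes from a double machine learning model. Below is a minimal sketch of the standard partialling-out (double/debiased ML) estimator, assuming a tabular dataset with a continuous funding outcome, an atypicality score as the treatment, and arbitrary project covariates; the column names, learners, and cross-fitting setup are illustrative assumptions, not the paper's exact specification.

```python
# Minimal double machine learning (partialling-out) sketch.
# Assumes a pandas DataFrame `df` with hypothetical columns:
#   "funding"     - project funding amount (outcome)
#   "atypicality" - atypical-combination score from the similarity network (treatment)
#   plus arbitrary covariate columns.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

def dml_effect(df: pd.DataFrame, outcome: str = "funding", treatment: str = "atypicality") -> float:
    X = df.drop(columns=[outcome, treatment]).to_numpy()
    y = df[outcome].to_numpy()
    t = df[treatment].to_numpy()

    # Cross-fitted nuisance predictions: E[y | X] and E[t | X].
    y_hat = cross_val_predict(GradientBoostingRegressor(), X, y, cv=5)
    t_hat = cross_val_predict(GradientBoostingRegressor(), X, t, cv=5)

    # Regress outcome residuals on treatment residuals to obtain the
    # partially linear treatment-effect estimate.
    y_res, t_res = y - y_hat, t - t_hat
    theta = LinearRegression(fit_intercept=False).fit(t_res.reshape(-1, 1), y_res)
    return float(theta.coef_[0])
```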
Spatial-temporal Transformer-guided Diffusion based Data Augmentation for Efficient Skeleton-based Action Recognition
Skeleton-based human action recognition has recently become a popular research topic because the compact representation of human skeletons brings new momentum to this research domain. Researchers have therefore turned to extracting skeleton information from RGB and other sensors to analyze human action. Leveraging the rapid development of deep learning (DL), a significant number of skeleton-based action recognition approaches with carefully designed DL architectures have been presented. However, a well-trained DL model demands high-quality and sufficient data, which are hard to obtain without considerable expense and human labor. In this paper, we introduce a novel data augmentation method for skeleton-based action recognition that effectively generates high-quality and diverse action sequences. To obtain natural and realistic action sequences, we propose denoising diffusion probabilistic models (DDPMs) that generate synthetic action sequences, with the generation process precisely guided by a spatial-temporal transformer (ST-Trans). Experimental results show that our method outperforms state-of-the-art (SOTA) motion generation approaches on various naturalness and diversity metrics, and that the high-quality synthetic data can be effectively used with existing action recognition models for significant performance improvement.
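To make the DDPM component concrete, here is a minimal sketch of the forward-noising step and training loss for skeleton sequences in PyTorch. The spatial-temporal transformer denoiser is stubbed with a plain nn.TransformerEncoder, and the noise schedule, tensor shapes, and hyperparameters are assumptions rather than the paper's configuration.

```python
# Minimal DDPM training-step sketch for skeleton sequences (PyTorch).
# Assumed shape: x0 is (batch, frames, joints * 3) flattened joint coordinates.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000                                         # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)            # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative signal retention

class Denoiser(nn.Module):
    """Stand-in for the paper's spatial-temporal transformer denoiser."""
    def __init__(self, feat_dim: int = 75, d_model: int = 128):
        super().__init__()
        self.inp = nn.Linear(feat_dim, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.out = nn.Linear(d_model, feat_dim)

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # A real model would also embed the timestep t and the spatial joint graph.
        return self.out(self.encoder(self.inp(x_t)))

def ddpm_loss(model: Denoiser, x0: torch.Tensor) -> torch.Tensor:
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(x0)
    a_bar = alphas_bar[t].view(b, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward noising q(x_t | x_0)
    return F.mse_loss(model(x_t, t), noise)                 # model predicts the added noise
```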
Transferring Procedural Knowledge across Commonsense Tasks
Stories about everyday situations are an essential part of human
communication, motivating the need to develop AI agents that can reliably
understand these stories. Despite the long list of supervised methods for story
completion and procedural understanding, current AI has no mechanisms to
automatically track and explain procedures in unseen stories. To bridge this
gap, we study the ability of AI models to transfer procedural knowledge to
novel narrative tasks in a transparent manner. We design LEAP: a comprehensive
framework that integrates state-of-the-art modeling architectures, training
regimes, and augmentation strategies based on both natural and synthetic
stories. To address the lack of densely annotated training data, we devise a
robust automatic labeler based on few-shot prompting to enhance the augmented
data. Our experiments with in- and out-of-domain tasks reveal insights into the
interplay of different architectures, training regimes, and augmentation
strategies. LEAP's labeler has a clear positive impact on out-of-domain
datasets, while the resulting dense annotations provide native explainability.
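The abstract mentions an automatic labeler built on few-shot prompting. The sketch below shows one plausible shape for such a labeler: labeled exemplars are concatenated with an unlabeled sentence and sent to a completion function. The exemplars, label schema, and the `complete` callable are all illustrative assumptions, not LEAP's actual prompt or interface.

```python
# Minimal sketch of a few-shot prompting labeler for augmented stories.
# `complete` stands in for whatever language-model completion function is available.
from typing import Callable, List, Tuple

EXEMPLARS: List[Tuple[str, str]] = [
    ("Tom filled the kettle and put it on the stove.",
     "kettle: location=stove, contains_water=true"),
    ("He poured the boiling water into the mug.",
     "kettle: contains_water=false | mug: contains_water=true"),
]

def build_prompt(sentence: str) -> str:
    """Concatenate labeled exemplars with the unlabeled sentence."""
    parts = ["Annotate the entity states implied by each sentence."]
    for text, label in EXEMPLARS:
        parts.append(f"Sentence: {text}\nStates: {label}")
    parts.append(f"Sentence: {sentence}\nStates:")
    return "\n\n".join(parts)

def label_sentence(sentence: str, complete: Callable[[str], str]) -> str:
    """Run the few-shot prompt through a completion function to obtain dense labels."""
    return complete(build_prompt(sentence)).strip()
```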
Informative Policy Representations in Multi-Agent Reinforcement Learning via Joint-Action Distributions
In multi-agent reinforcement learning, the inherent non-stationarity of the
environment caused by other agents' actions poses significant difficulties for
an agent trying to learn a good policy independently. One way to deal with
non-stationarity is agent modeling, by which the agent takes into consideration
the influence of other agents' policies. Most existing work relies on
predicting other agents' actions or goals, or discriminating between their
policies. However, such modeling fails to capture the similarities and
differences between policies simultaneously and thus cannot provide useful
information when generalizing to unseen policies. To address this, we propose a
general method to learn representations of other agents' policies via the
joint-action distributions sampled in interactions. The similarities and
differences between policies are naturally captured by the policy distance
inferred from the joint-action distributions and deliberately reflected in the
learned representations. Agents conditioned on the policy representations
generalize well to unseen agents. We empirically demonstrate that our method
outperforms existing work in multi-agent tasks when facing unseen agents.
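As a concrete illustration of a policy distance derived from joint-action distributions, here is a minimal sketch that builds empirical joint-action histograms from sampled interactions and compares them with the Jensen-Shannon divergence. The choice of JS divergence and the histogram estimator are assumptions; the paper's actual distance and representation-learning objective may differ.

```python
# Minimal sketch: distance between two opponent policies estimated from
# empirical joint-action distributions (Jensen-Shannon divergence).
import numpy as np

def joint_action_hist(joint_actions: np.ndarray, n_actions: int) -> np.ndarray:
    """Empirical distribution over joint actions.

    joint_actions: (num_samples, num_agents) array of discrete action indices.
    Returns a flat probability vector of length n_actions ** num_agents.
    """
    num_agents = joint_actions.shape[1]
    flat = np.ravel_multi_index(joint_actions.T, (n_actions,) * num_agents)
    counts = np.bincount(flat, minlength=n_actions ** num_agents).astype(float)
    return counts / counts.sum()

def js_policy_distance(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """Jensen-Shannon divergence between two joint-action distributions."""
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * (np.log(a + eps) - np.log(b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```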
- …