Recruitment Market Trend Analysis with Sequential Latent Variable Models
Recruitment market analysis provides valuable insight into industry-specific economic growth and plays an important role for both employers and job seekers. With the rapid development of online recruitment services, massive recruitment data have been accumulated, enabling a new paradigm for recruitment market analysis. However, traditional methods largely rely on the knowledge of domain experts and classic statistical models, which are usually too general to model large-scale dynamic recruitment data and have difficulty capturing fine-grained market trends. To this end, in this paper we propose a new research paradigm for recruitment market analysis that leverages unsupervised learning techniques to automatically discover market trends from large-scale recruitment data. Specifically, we develop a novel sequential latent variable model, named MTLVM, which is designed to capture the sequential dependencies of corporate recruitment states and can automatically learn latent recruitment topics within a Bayesian generative framework. In particular, to capture the variability of recruitment topics over time, we design hierarchical Dirichlet processes for MTLVM, which allow the model to dynamically generate evolving recruitment topics. Finally, we implement a prototype system to empirically evaluate our approach on real-world recruitment data from China. By visualizing the results of MTLVM, we reveal many interesting findings, for example that the popularity of LBS-related jobs peaked in the second half of 2014 and declined in 2015.
Comment: 11 pages, 30 figures, SIGKDD 201
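To make the topic-discovery idea concrete, below is a minimal sketch of fitting a nonparametric hierarchical Dirichlet process topic model to tokenized job postings with gensim. This illustrates the general technique only, not the paper's MTLVM (which additionally chains periods through a sequential latent state); the period grouping and toy postings are assumptions.

```python
# Illustrative sketch only: a plain HDP topic model per half-year period,
# standing in for the HDP component of MTLVM. The toy postings below are
# hypothetical; real inputs would be tokenized job descriptions.
from gensim.corpora import Dictionary
from gensim.models import HdpModel

postings_by_period = {
    "2014H2": [["lbs", "android", "geolocation", "maps"],
               ["ios", "lbs", "navigation", "sdk"],
               ["lbs", "poi", "recommendation", "mobile"]],
    "2015H1": [["backend", "java", "microservice", "api"],
               ["data", "spark", "etl", "warehouse"],
               ["android", "app", "payment", "sdk"]],
}

for period, docs in postings_by_period.items():
    vocab = Dictionary(docs)
    corpus = [vocab.doc2bow(doc) for doc in docs]
    # HDP is nonparametric: the number of topics is inferred from the data.
    hdp = HdpModel(corpus=corpus, id2word=vocab)
    print(period, hdp.print_topics(num_topics=2, num_words=4))
```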
The power index at infinity: Weighted voting in sequential infinite anonymous games
After we describe the waiting queue problem, we identify a partially observable (2n+1)-player voting game with only one pivotal player: the player at order n-1. Given the simplest heterogeneity rule presented in this paper, we show that for any infinite sequential voting game of size 2n+1, a power index of size n is a good approximation of the power index at infinity, and that it is difficult to achieve. Moreover, we show that the collective utility value of a coalition in a partially observable anonymous game with an equal distribution of weights is n²+n. This formula is developed for infinite sequential anonymous games using a stochastic process that yields a utility function in terms of the probability of the coalition's sequence and voting outcome. Evidence from Wikidata editing sequences is presented, and the results are compared for 10 coalitions.
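The abstract does not spell out the underlying utility function, so the sketch below only evaluates the quoted closed form n²+n and the stated pivotal position for a few game sizes; the function name is illustrative.

```python
# Hedged sketch: evaluate the closed form n^2 + n quoted in the abstract for
# a (2n+1)-player sequential anonymous game with equal weights, together with
# the single pivotal order n-1 it identifies. The game's utility function is
# not specified in the abstract, so nothing more is derived here.
def collective_utility(n: int) -> int:
    """Coalition utility n^2 + n stated for equal weights."""
    return n * n + n

for n in (1, 2, 5, 10, 100):
    players = 2 * n + 1          # size of the voting game
    pivotal_order = n - 1        # the single pivotal player's order
    print(f"n={n:>3}: {players} players, pivot at order {pivotal_order}, "
          f"utility {collective_utility(n)}")
```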
Dirichlet belief networks for topic structure learning
Recently, considerable research effort has been devoted to developing deep
architectures for topic models to learn topic structures. Although several deep
models have been proposed to learn better topic proportions of documents, how
to leverage the benefits of deep structures for learning word distributions of
topics has not yet been rigorously studied. Here we propose a new multi-layer
generative process on word distributions of topics, where each layer consists
of a set of topics and each topic is drawn from a mixture of the topics of the
layer above. As the topics in all layers can be directly interpreted by words,
the proposed model is able to discover interpretable topic hierarchies. As a
self-contained module, our model can be flexibly adapted to different kinds of
topic models to improve their modelling accuracy and interpretability.
Extensive experiments on text corpora demonstrate the advantages of the
proposed model.
Comment: accepted in NIPS 201
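As an illustration of the layer-wise construction described above, the following numpy sketch draws each topic's word distribution from a Dirichlet centred on a mixture of the topics in the layer above. The layer sizes, concentration, and vocabulary size are assumed values for illustration, not the authors' settings.

```python
# Minimal sketch of the multi-layer generative process over word
# distributions: each child topic's mean is a mixture of parent topics,
# so topics at every layer remain interpretable as distributions over words.
import numpy as np

rng = np.random.default_rng(0)
V = 1000                     # assumed vocabulary size
layer_sizes = [5, 20, 50]    # topics per layer, most abstract layer first

# Top layer: topics drawn from a symmetric Dirichlet over the vocabulary.
layers = [rng.dirichlet(np.full(V, 0.1), size=layer_sizes[0])]

for K in layer_sizes[1:]:
    parent = layers[-1]
    # Mixture weights connecting each child topic to the parent topics.
    mix = rng.dirichlet(np.ones(parent.shape[0]), size=K)   # (K, K_parent)
    means = mix @ parent                                    # mixtures of parent topics
    conc = 50.0                                             # assumed concentration
    child = np.array([rng.dirichlet(conc * m + 1e-12) for m in means])
    layers.append(child)

print([layer.shape for layer in layers])   # [(5, 1000), (20, 1000), (50, 1000)]
```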
Domain transfer for deep natural language generation from abstract meaning representations
Stochastic natural language generation systems that are trained from labelled datasets are often domain-specific in their annotation and in their mapping from semantic input representations to lexical-syntactic outputs. As a result, learnt models fail to generalize across domains, heavily restricting their usability beyond single applications. In this article, we focus on the problem of domain adaptation for natural language generation. We show how linguistic knowledge from a source domain, for which labelled data is available, can be adapted to a target domain by reusing training data across domains. As a key to this, we propose to employ abstract meaning representations (AMRs) as a common semantic representation across domains. We model natural language generation as a long short-term memory (LSTM) recurrent neural network encoder-decoder, in which one recurrent neural network learns a latent representation of a semantic input and a second recurrent neural network learns to decode it into a sequence of words. We show that the learnt representations can be transferred across domains and leveraged effectively to improve training on new, unseen domains. Experiments in three different domains and with six datasets demonstrate that the lexical-syntactic constructions learnt in one domain can be transferred to new domains, achieving up to 75-100% of the performance of in-domain training. These results are based on objective metrics such as BLEU and semantic error rate, as well as a subjective human rating study. Training a policy with prior knowledge from a different domain is consistently better than pure in-domain training by up to 10%.
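The following PyTorch sketch shows the shape of such an encoder-decoder: one LSTM encodes a linearized AMR token sequence into a latent state, and a second LSTM decodes it into words. Vocabulary sizes, dimensions, and the class name AMRSeq2Seq are hypothetical; this is not the authors' implementation.

```python
# Schematic encoder-decoder sketch (assumed dimensions and toy vocabularies),
# illustrating the architecture described in the abstract.
import torch
import torch.nn as nn

class AMRSeq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=64, hid=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)   # reads AMR tokens
        self.decoder = nn.LSTM(emb, hid, batch_first=True)   # generates words
        self.out = nn.Linear(hid, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the semantic input into a latent state (h, c).
        _, state = self.encoder(self.src_emb(src_ids))
        # Decode conditioned on that latent state (teacher forcing).
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)   # per-step logits over the word vocabulary

# Toy usage with hypothetical vocabulary sizes and random token ids.
model = AMRSeq2Seq(src_vocab=200, tgt_vocab=300)
src = torch.randint(0, 200, (2, 12))   # batch of linearized AMR sequences
tgt = torch.randint(0, 300, (2, 15))   # shifted target word sequences
print(model(src, tgt).shape)           # torch.Size([2, 15, 300])
```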