Trade Coefficients and the Role of Elasticity in a Spatial CGE Model Based on the Armington Assumption
The Armington Assumption in the context of multi-regional CGE models is commonly
interpreted as follows: commodities of the same kind but with different origins are imperfect
substitutes for each other. In this paper, a static spatial CGE model that is compatible with this
assumption and explicitly considers the transport sector and regional price differentials is
formulated. Trade coefficients, which are derived endogenously from the optimization behavior of
firms and households, are shown to take the form of a potential function. To investigate how the
elasticity of substitution affects equilibrium solutions, a simpler version of the model that
incorporates three regions and two sectors (besides the transport sector) is introduced. Results
indicate that (1) if commodities produced in different regions are perfect substitutes, regional
economies will be either autarkic or completely symmetric, and (2) if they are imperfect
substitutes, the impact of the elasticity on the price equilibrium system as well as on the trade
coefficients will be nonlinear and sometimes very sensitive.
Keywords: Armington Assumption, Spatial CGE, Elasticity of substitution, Trade coefficient,
Econometric model
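As a point of reference (not from the paper itself), a minimal CES-type Armington sketch shows
why trade coefficients take a logit-like potential form; the symbols $\sigma$, $p_r$, and
$\tau_{rs}$ are illustrative choices rather than the paper's notation. With elasticity of
substitution $\sigma$, mill price $p_r$ in origin region $r$, and a transport cost factor
$\tau_{rs} \ge 1$ for delivery to destination $s$, expenditure minimization over a CES aggregate
of origins gives the expenditure share of origin $r$ in region $s$:

$$ t_{rs} = \frac{(p_r \tau_{rs})^{1-\sigma}}{\sum_{k} (p_k \tau_{ks})^{1-\sigma}} $$

As $\sigma \to \infty$ (perfect substitutes), demand concentrates on the cheapest delivered
source; for finite $\sigma$ the shares respond smoothly but nonlinearly to prices and to
$\sigma$ itself, which is consistent with the sensitivity discussed in the abstract.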
PTE: Predictive Text Embedding through Large-scale Heterogeneous Text Networks
Unsupervised text embedding methods, such as Skip-gram and Paragraph Vector,
have been attracting increasing attention due to their simplicity, scalability,
and effectiveness. However, compared to sophisticated deep learning
architectures such as convolutional neural networks, these methods usually
yield inferior results when applied to particular machine learning tasks. One
possible reason is that these text embedding methods learn the representation
of text in a fully unsupervised way, without leveraging the labeled information
available for the task. Although the low dimensional representations learned
are applicable to many different tasks, they are not particularly tuned for any
task. In this paper, we fill this gap by proposing a semi-supervised
representation learning method for text data, which we call the
predictive text embedding (PTE). Predictive text embedding utilizes
both labeled and unlabeled data to learn the embedding of text. The labeled
information and different levels of word co-occurrence information are first
represented as a large-scale heterogeneous text network, which is then embedded
into a low dimensional space through a principled and efficient algorithm. This
low dimensional embedding not only preserves the semantic closeness of words
and documents, but also has a strong predictive power for the particular task.
Compared to recent supervised approaches based on convolutional neural
networks, predictive text embedding achieves comparable or better effectiveness,
is much more efficient, and has fewer parameters to tune.
Comment: KDD 2015
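To make the idea concrete, below is a minimal sketch, assuming a skip-gram-style objective with
negative sampling trained jointly over the three bipartite networks (word-word, word-document,
word-label) that form the heterogeneous text network; all array names, sizes, and the random
placeholder edges are illustrative and are not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
n_words, n_docs, n_labels, dim = 1000, 200, 5, 64

# One embedding table per node type, plus shared "context" vectors for words.
emb = {
    "word": rng.normal(0, 0.1, (n_words, dim)),
    "doc": rng.normal(0, 0.1, (n_docs, dim)),
    "label": rng.normal(0, 0.1, (n_labels, dim)),
}
ctx_word = rng.normal(0, 0.1, (n_words, dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def update(u, ctx, label, lr=0.025):
    # One negative-sampling style SGD step on the pair (u, ctx); label is 1
    # for an observed edge and 0 for a sampled negative.
    grad = label - sigmoid(u @ ctx)
    du, dctx = grad * ctx, grad * u
    u += lr * du
    ctx += lr * dctx

def train_edge(src_type, src, word, k_neg=5):
    # Observed edge (src, word) plus k_neg negative context words.
    update(emb[src_type][src], ctx_word[word], 1.0)
    for neg in rng.integers(0, n_words, k_neg):
        update(emb[src_type][src], ctx_word[neg], 0.0)

# Alternate edge samples from the three bipartite networks. The edges here are
# random placeholders; real ones come from word co-occurrence, document
# membership, and labeled documents.
for _ in range(10000):
    train_edge("word", rng.integers(n_words), rng.integers(n_words))    # word-word
    train_edge("doc", rng.integers(n_docs), rng.integers(n_words))      # word-document
    train_edge("label", rng.integers(n_labels), rng.integers(n_words))  # word-label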
LINE: Large-scale Information Network Embedding
This paper studies the problem of embedding very large information networks
into low-dimensional vector spaces, which is useful in many tasks such as
visualization, node classification, and link prediction. Most existing graph
embedding methods do not scale for real world information networks which
usually contain millions of nodes. In this paper, we propose a novel network
embedding method called "LINE", which is suitable for arbitrary types of
information networks: undirected, directed, and/or weighted. The method
optimizes a carefully designed objective function that preserves both the local
and global network structures. An edge-sampling algorithm is proposed that
addresses the limitation of the classical stochastic gradient descent and
improves both the effectiveness and the efficiency of the inference. Empirical
experiments demonstrate the effectiveness of LINE on a variety of real-world
information networks, including language networks, social networks, and
citation networks. The algorithm is very efficient: it is able to learn the
embedding of a network with millions of vertices and billions of edges in a few
hours on a typical single machine. The source code of LINE is available
online.
Comment: WWW 2015
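A minimal sketch of the edge-sampling idea, assuming edges are drawn with probability
proportional to their weights so that each stochastic update acts on an effectively unweighted
edge (avoiding the gradient-magnitude problem of classical SGD on weighted edges), with
second-order proximity modeled by separate vertex and context vectors; the toy graph, names,
and hyperparameters are illustrative and are not the released LINE code.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, k_neg, lr = 500, 32, 5, 0.025

# Weighted, directed toy graph as parallel arrays: source, target, weight.
n_edges = 5000
src = rng.integers(0, n_nodes, n_edges)
dst = rng.integers(0, n_nodes, n_edges)
w = rng.random(n_edges)
edge_prob = w / w.sum()  # sample edges proportionally to weight

vertex = rng.normal(0, 0.1, (n_nodes, dim))   # a node's embedding as itself
context = rng.normal(0, 0.1, (n_nodes, dim))  # a node's embedding as a neighbor

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(20000):
    e = rng.choice(n_edges, p=edge_prob)
    u, v = src[e], dst[e]
    # One positive target plus k_neg negatives (uniform here; LINE draws
    # negatives from a degree-based noise distribution).
    targets = [(v, 1.0)] + [(rng.integers(n_nodes), 0.0) for _ in range(k_neg)]
    for t, label in targets:
        grad = label - sigmoid(vertex[u] @ context[t])
        step_u = lr * grad * context[t]
        step_t = lr * grad * vertex[u]
        vertex[u] += step_u
        context[t] += step_t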
GraphVite: A High-Performance CPU-GPU Hybrid System for Node Embedding
Learning continuous representations of nodes is attracting growing interest
in both academia and industry recently, due to their simplicity and
effectiveness in a variety of applications. Most existing node embedding
algorithms and systems are capable of processing networks with hundreds of
thousands or a few millions of nodes. However, how to scale them to networks
that have tens of millions or even hundreds of millions of nodes remains a
challenging problem. In this paper, we propose GraphVite, a high-performance
CPU-GPU hybrid system for training node embeddings, by co-optimizing the
algorithm and the system. On the CPU end, augmented edge samples are generated
in parallel by random walks in an online fashion on the network, and serve as the
training data. On the GPU end, a novel parallel negative sampling is proposed
to leverage multiple GPUs to train node embeddings simultaneously, without much
data transfer and synchronization. Moreover, an efficient collaboration
strategy is proposed to further reduce the synchronization cost between CPUs
and GPUs. Experiments on multiple real-world networks show that GraphVite is
highly efficient. It takes only about one minute to embed a network with 1 million
nodes and 5 million edges on a single machine with 4 GPUs, and around 20
hours for a network with 66 million nodes and 1.8 billion edges. Compared to
the current fastest system, GraphVite is about 50 times faster without any
sacrifice in performance.
Comment: accepted at WWW 2019
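A toy sketch of the CPU/GPU division of labor described above, assuming a producer thread that
generates samples by online random walks and a consumer loop standing in for the GPU workers;
the multi-GPU parallel negative sampling is not reproduced, negatives are omitted for brevity,
and all names are hypothetical.

import queue
import threading
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, lr = 200, 16, 0.025
adj = [rng.integers(0, n_nodes, 5).tolist() for _ in range(n_nodes)]  # toy graph
samples = queue.Queue(maxsize=1024)

vertex = rng.normal(0, 0.1, (n_nodes, dim))
context = rng.normal(0, 0.1, (n_nodes, dim))

def cpu_sampler(n_walks=2000, walk_len=5):
    # Producer: online random walks emitting (node, context) training pairs.
    for _ in range(n_walks):
        u = int(rng.integers(n_nodes))
        for _ in range(walk_len):
            v = int(rng.choice(adj[u]))
            samples.put((u, v))
            u = v
    samples.put(None)  # sentinel: no more samples

def gpu_stand_in():
    # Consumer: embedding updates on the streamed samples (positive pairs
    # only, to keep the sketch short).
    while (item := samples.get()) is not None:
        u, v = item
        grad = 1.0 - 1.0 / (1.0 + np.exp(-vertex[u] @ context[v]))
        step_u = lr * grad * context[v]
        step_v = lr * grad * vertex[u]
        vertex[u] += step_u
        context[v] += step_v

producer = threading.Thread(target=cpu_sampler)
producer.start()
gpu_stand_in()
producer.join()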
An Attention-based Collaboration Framework for Multi-View Network Representation Learning
Learning distributed node representations in networks has been attracting
increasing attention recently due to its effectiveness in a variety of
applications. Existing approaches usually study networks with a single type of
proximity between nodes, which defines a single view of a network. However, in
reality there usually exist multiple types of proximity between nodes,
yielding networks with multiple views. This paper studies learning node
representations for networks with multiple views, which aims to infer robust
node representations across different views. We propose a multi-view
representation learning approach, which promotes the collaboration of different
views and lets them vote for robust representations. During the voting
process, an attention mechanism is introduced, which enables each node to focus
on the most informative views. Experimental results on real-world networks show
that the proposed approach outperforms existing state-of-the-art approaches for
network representation learning with a single view and other competitive
approaches with multiple views.
Comment: CIKM 2017
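A minimal sketch of the attention step described above, assuming per-node attention logits over
views and a softmax-weighted combination of view-specific embeddings; shapes and names are
illustrative, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_views, dim = 100, 3, 32

view_emb = rng.normal(0, 0.1, (n_views, n_nodes, dim))  # one embedding per view
attn_logits = np.zeros((n_nodes, n_views))              # learnable per-node attention scores

def robust_representation(node):
    # Softmax attention over views, then a weighted "vote" of the views.
    logits = attn_logits[node]
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights @ view_emb[:, node, :]  # (n_views,) @ (n_views, dim) -> (dim,)

combined = np.stack([robust_representation(i) for i in range(n_nodes)])
print(combined.shape)  # (100, 32)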
