318 research outputs found
Scaling Attributed Network Embedding to Massive Graphs
Given a graph G where each node is associated with a set of attributes,
attributed network embedding (ANE) maps each node v in G to a compact vector Xv,
which can be used in downstream machine learning tasks. Ideally, Xv should
capture node v's affinity to each attribute, which considers not only v's own
attribute associations, but also those of its connected nodes along edges in G.
It is challenging to obtain high-utility embeddings that enable accurate
predictions; scaling effective ANE computation to massive graphs with millions
of nodes pushes the difficulty of the problem to a whole new level. Existing
solutions largely fail on such graphs, leading to prohibitive costs,
low-quality embeddings, or both. This paper proposes PANE, an effective and
scalable approach to ANE computation for massive graphs that achieves
state-of-the-art result quality on multiple benchmark datasets, measured by the
accuracy of three common prediction tasks: attribute inference, link
prediction, and node classification. PANE obtains high scalability and
effectiveness through three main algorithmic designs. First, it formulates the
learning objective based on a novel random walk model for attributed networks.
The resulting optimization task is still challenging on large graphs. Second,
PANE includes a highly efficient solver for the above optimization problem,
whose key module is a carefully designed initialization of the embeddings,
which drastically reduces the number of iterations required to converge.
Finally, PANE utilizes multi-core CPUs through non-trivial parallelization of
the above solver, which achieves scalability while retaining the high quality
of the resulting embeddings. Extensive experiments, comparing 10 existing
approaches on 8 real datasets, demonstrate that PANE consistently outperforms
all existing methods in terms of result quality, while being orders of
magnitude faster.
Comment: 16 pages. PVLDB 2021. Volume 14, Issue
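The PANE abstract above describes embeddings whose entries capture a node's affinity to each attribute, propagated along edges via a random walk model. As a minimal illustrative sketch (not PANE's actual objective or solver), the following Python code propagates a toy node-attribute matrix over a random-walk transition matrix with a restart probability; all names and the propagation scheme are assumptions for illustration:

```python
import numpy as np

# Toy attributed graph: 4 nodes, 3 binary attributes.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency matrix
R = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)      # node-attribute associations

# Row-normalize the adjacency matrix into random-walk transition probabilities.
P = A / A.sum(axis=1, keepdims=True)

# Random-walk-with-restart style propagation: each node's attribute affinity
# mixes its own attributes with those reachable through its neighbours.
alpha = 0.15                                 # restart probability (assumed value)
X = R.copy()
for _ in range(50):
    X = alpha * R + (1 - alpha) * P @ X

print(np.round(X, 3))                        # row v = node v's affinity to each attribute
```

After enough iterations X approaches the fixed point of the update, so a node scores highly on an attribute that it or its nearby neighbours carry, which is the intuition the abstract attributes to Xv.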
PyHST2: an hybrid distributed code for high speed tomographic reconstruction with iterative reconstruction and a priori knowledge capabilities
We present the PyHST2 code which is in service at ESRF for phase-contrast and
absorption tomography. This code has been engineered to sustain the high data
flow typical of the third generation synchrotron facilities (10 terabytes per
experiment) by adopting a distributed and pipelined architecture. The code
implements, besides the default filtered backprojection reconstruction,
iterative reconstruction techniques with a priori knowledge. The latter are
used either to improve reconstruction quality or to reduce the required data
volume while still reaching a given quality goal. The implemented a priori
knowledge techniques are based on total variation penalisation and on a
recently introduced convex functional built on overlapping patches.
We give details of the different methods and their implementations; the code is
distributed under a free license.
We provide methods for estimating, in the absence of ground-truth data, the
optimal parameter values for the a priori techniques.
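The abstract mentions iterative reconstruction with total variation (TV) penalisation. A minimal sketch of the idea, assuming a simplified 1-D denoising setting rather than PyHST2's actual tomographic solver: gradient descent on a data-fidelity term plus a smoothed TV penalty, which suppresses noise while preserving sharp edges. All parameter values here are illustrative assumptions:

```python
import numpy as np

# Minimize  f(x) = 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)
# by plain gradient descent. eps smooths the non-differentiable |.| of TV.
rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0, 0.3], 30)              # piecewise-constant signal
y = truth + 0.1 * rng.standard_normal(truth.size)   # noisy measurement

lam, eps, step = 0.3, 1e-2, 0.1                     # assumed hyperparameters
x = y.copy()
for _ in range(300):
    d = np.diff(x)
    w = d / np.sqrt(d * d + eps)                    # derivative of smoothed |d|
    # Gradient of the TV term w.r.t. each sample (boundary terms handled first/last).
    tv_grad = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
    x -= step * ((x - y) + lam * tv_grad)

# Total variation of the denoised signal vs. the noisy input.
print(np.abs(np.diff(x)).sum(), np.abs(np.diff(y)).sum())
```

The same principle carries over to tomography by replacing the identity data term with the projection operator; the penalty weight lam trades data fidelity against smoothness, which is exactly the kind of parameter the abstract proposes to estimate without ground-truth data.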
- …