Neural 3D Morphable Models: Spiral Convolutional Networks for 3D Shape Representation Learning and Generation
Generative models for 3D geometric data arise in many important applications
in 3D computer vision and graphics. In this paper, we focus on 3D deformable
shapes that share a common topological structure, such as human faces and
bodies. Morphable Models and their variants, despite their linear formulation,
have been widely used for shape representation, while most of the recently
proposed nonlinear approaches resort to intermediate representations, such as
3D voxel grids or 2D views. In this work, we introduce a novel graph
convolutional operator, acting directly on the 3D mesh, that explicitly models
the inductive bias of the fixed underlying graph. This is achieved by enforcing
consistent local orderings of the vertices of the graph, through the spiral
operator, thus breaking the permutation-invariance property adopted by all prior work
on Graph Neural Networks. Our operator comes by construction
with desirable properties (anisotropic, topology-aware, lightweight,
easy-to-optimise), and by using it as a building block for traditional deep
generative architectures, we demonstrate state-of-the-art results on a variety
of 3D shape datasets compared to the linear Morphable Model and other graph
convolutional operators.
Comment: to appear at ICCV 2019
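The core idea, gathering each vertex's neighbours in a fixed spiral order and applying one shared linear map, can be sketched as follows (a toy illustration only; the function name, tensor shapes, and the dense weight layout are assumptions, not the authors' implementation):

```python
import numpy as np

def spiral_conv(features, spirals, weight, bias):
    """Toy spiral convolution: for each vertex, gather neighbour features
    along a precomputed spiral index sequence and apply one shared linear
    map. A sketch of the operator described above, not the paper's code."""
    # features: (V, C_in) vertex features
    # spirals:  (V, L) spiral indices per vertex (vertex itself first,
    #           then its neighbours in fixed spiral order)
    # weight:   (L * C_in, C_out); bias: (C_out,)
    V, C_in = features.shape
    L = spirals.shape[1]
    gathered = features[spirals].reshape(V, L * C_in)  # (V, L * C_in)
    return gathered @ weight + bias

# toy mesh: 4 vertices, 2 input channels, spiral length 3
feats = np.arange(8, dtype=float).reshape(4, 2)
spirals = np.array([[0, 1, 2], [1, 2, 3], [2, 3, 0], [3, 0, 1]])
W = np.ones((3 * 2, 5))
b = np.zeros(5)
out = spiral_conv(feats, spirals, W, b)
print(out.shape)  # (4, 5)
```

Because the spiral ordering is fixed by the shared mesh topology, the same weight matrix is reused at every vertex, which is what makes the operator anisotropic yet lightweight.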
Accelerating Generic Graph Neural Networks via Architecture, Compiler, Partition Method Co-Design
Graph neural networks (GNNs) have shown significant accuracy improvements in
a variety of graph learning domains, sparking considerable research interest.
To translate these accuracy improvements into practical applications, it is
essential to develop high-performance and efficient hardware acceleration for
GNN models. However, designing GNN accelerators faces two fundamental
challenges: the high bandwidth requirement of GNN models and the diversity of
GNN models. Previous works have addressed the first challenge by using more
expensive memory interfaces to achieve higher bandwidth. For the second
challenge, existing works either support specific GNN models or have generic
designs with poor hardware utilization.
In this work, we tackle both challenges simultaneously. First, we identify a
new type of partition-level operator fusion, which we utilize to internally
reduce the high bandwidth requirement of GNNs. Next, we introduce
partition-level multi-threading to schedule the concurrent processing of graph
partitions, utilizing different hardware resources. To further reduce the extra
on-chip memory required by multi-threading, we propose fine-grained graph
partitioning to generate denser graph partitions. Importantly, these three
methods make no assumptions about the targeted GNN models, addressing the
challenge of model variety. We implement these methods in a framework called
SwitchBlade, consisting of a compiler, a graph partitioner, and a hardware
accelerator. Our evaluation demonstrates that SwitchBlade achieves substantial
average speedup and energy savings compared to the NVIDIA V100 GPU, while
delivering performance comparable to state-of-the-art specialized accelerators.
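The fine-grained partitioning idea can be illustrated with a toy sketch (illustrative only; the function name and the destination-range strategy are assumptions, not SwitchBlade's actual partitioner):

```python
import numpy as np

def partition_edges(edges, num_parts, num_vertices):
    """Split a directed-edge list by destination-vertex range so each
    partition writes into a bounded slice of the output feature buffer.
    A toy illustration of fine-grained graph partitioning, not the
    SwitchBlade partitioner itself."""
    bounds = np.linspace(0, num_vertices, num_parts + 1).astype(int)
    return [edges[(edges[:, 1] >= lo) & (edges[:, 1] < hi)]
            for lo, hi in zip(bounds[:-1], bounds[1:])]

# toy graph: 5 directed edges over 4 vertices, split into 2 partitions
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0], [0, 2]])
parts = partition_edges(edges, num_parts=2, num_vertices=4)
print([len(p) for p in parts])  # [2, 3]
```

Bounding each partition's write range is what lets independent partitions be scheduled concurrently on different hardware resources without conflicting on-chip buffers.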
The 1/N expansion of tensor models with two symmetric tensors
It is well known that tensor models for a tensor with no symmetry admit a 1/N
expansion dominated by melonic graphs. This result relies crucially on
identifying jackets, which are globally defined ribbon graphs embedded in
the tensor graph. In contrast, no result of this kind has so far been
established for symmetric tensors, because global jackets do not exist.
In this paper we introduce a new approach to the 1/N expansion in tensor
models, adapted to symmetric tensors. In particular, we do not use any global
structure like the jackets. We prove that, for any rank D, a tensor model
with two symmetric tensors and interactions given by the complete graph K_{D+1}
admits a 1/N expansion dominated by melonic graphs.
Comment: misprints corrected, references added
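For orientation, a 1/N expansion of this kind schematically organises the free energy of a rank-D tensor model by the degree ω of its graphs (a standard schematic, not part of this abstract; the normalisation of the degree is model-dependent):

```latex
F(\lambda, N) \;=\; \sum_{\omega \ge 0} N^{\,D - \frac{2}{(D-1)!}\,\omega}\, F_\omega(\lambda),
```

with the melonic graphs sitting at ω = 0 and dominating the large-N limit.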
Asymptotic expansion of the multi-orientable random tensor model
Three-dimensional random tensor models are a natural generalization of the
celebrated matrix models. The associated tensor graphs, or 3D maps, can be
classified with respect to a particular integer or half-integer, the degree of
the respective graph. In this paper we analyze the general term of the
asymptotic expansion in N, the size of the tensor, of a particular random
tensor model, the multi-orientable tensor model. We enumerate these
configurations and establish which of them are dominant at a given degree.
Comment: 27 pages, 24 figures, several minor modifications have been made, one
figure has been added; accepted for publication in the "Electronic Journal of
Combinatorics".
Dimer Models from Mirror Symmetry and Quivering Amoebae
Dimer models are 2-dimensional combinatorial systems that have been shown to
encode the gauge groups, matter content and tree-level superpotential of the
world-volume quiver gauge theories obtained by placing D3-branes at the tip of
a singular toric Calabi-Yau cone. In particular the dimer graph is dual to the
quiver graph. However, the string theoretic explanation of this was unclear. In
this paper we use mirror symmetry to shed light on this: the dimer models live
on a T^2 subspace of the T^3 fiber that is involved in mirror symmetry and is
wrapped by D6-branes. These D6-branes are mirror to the D3-branes at the
singular point, and geometrically encode the same quiver theory on their
world-volume.
Comment: 55 pages, 27 figures, LaTeX2e