Proof-graphs for Minimal Implicational Logic
It is well-known that the size of propositional classical proofs can be huge.
Proof-theoretical studies discovered exponential gaps between normal or cut-free
proofs and their respective non-normal proofs. The aim of this work is to
study how to reduce the weight of propositional deductions. We present the
formalism of proof-graphs for purely implicational logic, which are graphs of a
specific shape that are intended to capture the logical structure of a
deduction. The advantage of this formalism is that formulas can be shared in
the reduced proof.
In the present paper we give a precise definition of proof-graphs for the
minimal implicational logic, together with a normalization procedure for these
proof-graphs. In contrast to standard tree-like formalisms, our normalization
does not increase the number of nodes, when applied to the corresponding
minimal proof-graph representations.
Comment: In Proceedings DCM 2013, arXiv:1403.768
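The sharing idea behind proof-graphs can be illustrated with a toy hash-consing scheme (a minimal sketch; the class and names here are hypothetical and not the paper's formalism): repeated subformulas are interned so that a tree-shaped formula becomes a DAG with shared nodes.

```python
# Minimal sketch of formula sharing: implicational formulas are interned
# ("hash-consed") so each distinct subformula is stored exactly once,
# turning a tree-shaped structure into a DAG.

class FormulaPool:
    """Interns implicational formulas: atoms and A -> B implications."""
    def __init__(self):
        self._pool = {}

    def atom(self, name):
        return self._pool.setdefault(("atom", name), ("atom", name))

    def implies(self, a, b):
        key = ("imp", id(a), id(b))
        return self._pool.setdefault(key, ("imp", a, b))

    def size(self):
        return len(self._pool)

pool = FormulaPool()
p = pool.atom("p")
# Build (p -> p) -> (p -> p): as a tree it has 7 nodes,
# but with sharing, p and p -> p are each stored once.
pp = pool.implies(p, p)
f = pool.implies(pp, pp)
print(pool.size())  # 3 distinct nodes instead of 7 tree nodes
```

This is only the sharing mechanism; the paper's proof-graphs additionally encode the deductive structure and support normalization without node growth.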
Graph Convolutional Matrix Completion
We consider matrix completion for recommender systems from the point of view
of link prediction on graphs. Interaction data such as movie ratings can be
represented by a bipartite user-item graph with labeled edges denoting observed
ratings. Building on recent progress in deep learning on graph-structured data,
we propose a graph auto-encoder framework based on differentiable message
passing on the bipartite interaction graph. Our model shows competitive
performance on standard collaborative filtering benchmarks. In settings where
complementary feature information or structured data such as a social network
is available, our framework outperforms recent state-of-the-art methods.
Comment: 9 pages, 3 figures, updated with additional experimental evaluation
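A single message-passing step on the bipartite user-item graph can be sketched as follows (a simplification for illustration; the shapes, weights, and aggregation here are assumptions, not the paper's exact architecture): for each rating level, messages from rated items are mean-aggregated and passed through a rating-specific linear map.

```python
# Sketch of one graph-convolution step on a bipartite user-item rating
# graph, with one weight matrix per rating level (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 4, 5, 8
R = rng.integers(0, 3, size=(n_users, n_items))   # 0 = unobserved, ratings 1..2

X_items = rng.normal(size=(n_items, d))           # initial item features
W = {r: rng.normal(size=(d, d)) for r in (1, 2)}  # one weight matrix per rating

# User embedding: sum over rating levels of normalized neighbor aggregates.
H_users = np.zeros((n_users, d))
for r, Wr in W.items():
    A_r = (R == r).astype(float)                  # adjacency for rating level r
    deg = np.maximum(A_r.sum(1, keepdims=True), 1.0)
    H_users += (A_r / deg) @ X_items @ Wr         # mean aggregation, linear map
H_users = np.maximum(H_users, 0.0)                # ReLU
print(H_users.shape)
```

A decoder would then score each (user, item) pair from the two embeddings to predict the missing ratings.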
Simple-Current Symmetries, Rank-Level Duality, and Linear Skein Relations for Chern-Simons Graphs
A previously proposed two-step algorithm for calculating the expectation
values of Chern-Simons graphs fails to determine certain crucial signs. The
step which involves calculating tetrahedra by solving certain non-linear
equations is repaired by introducing additional linear equations. As a first
step towards a new algorithm for general graphs we find useful linear equations
for those special graphs which support knots and links. Using the improved set
of equations for tetrahedra we examine the symmetries between tetrahedra
generated by arbitrary simple currents. Along the way we uncover the classical
origin of simple-current charges. The improved skein relations also lead to
exact identities between planar tetrahedra in level-K G(N) and level-N G(K)
CS theories, where G(N) denotes a classical group. These results are
recast as identities for quantum 6j-symbols and WZW braid matrices. We obtain
the transformation properties of arbitrary graphs and links under simple
current symmetries and rank-level duality. For links with knotted components
this requires precise control of the braid eigenvalue permutation signs, which
we obtain from plethysm and an explicit expression for the (multiplicity free)
signs, valid for all compact gauge groups and all fusion products.
Comment: 58 pages, BRX-TH-30
The cavity approach for Steiner trees packing problems
The Belief Propagation approximation, or cavity method, has been recently
applied to several combinatorial optimization problems in its zero-temperature
implementation, the max-sum algorithm. In particular, recent developments to
solve the edge-disjoint paths problem and the prize-collecting Steiner tree
problem on graphs have shown remarkable results for several classes of graphs
and for benchmark instances. Here we propose a generalization of these
techniques for two variants of the Steiner trees packing problem where multiple
"interacting" trees have to be sought within a given graph. Depending on the
interaction among trees we distinguish the vertex-disjoint Steiner trees
problem, where trees cannot share nodes, from the edge-disjoint Steiner trees
problem, where edges cannot be shared by trees but nodes can be members of
multiple trees. Several practical problems of great interest in network design
can be mapped into these two variants, for instance, the physical design of
Very Large Scale Integration (VLSI) chips. The formalism described here relies
on two-component edge variables that allow us to formulate a message-passing
algorithm for the V-DStP and two algorithms for the E-DStP differing in the
scaling of the computational time with respect to some relevant parameters. We
will show that one of the two formalisms used for the edge-disjoint variant
allows us to map the max-sum update equations into a weighted maximum matching
problem over proper bipartite graphs. We developed a heuristic procedure based
on the max-sum equations that shows excellent performance in synthetic networks
(in particular outperforming standard multi-step greedy procedures by large
margins) and on large benchmark instances of VLSI for which the optimal
solution is known, on which the algorithm found the optimum in two cases and
the gap to optimality was never larger than 4%.
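The reduction to weighted maximum matching mentioned above can be sketched in isolation (a hedged illustration: the weights below are random placeholders standing in for actual max-sum messages, and the row/column interpretation is an assumption for this example): once the local max-sum update is phrased as assigning incoming branches to outgoing edges, it is solved exactly by a maximum-weight bipartite matching.

```python
# The local max-sum update, once phrased as a bipartite assignment, can be
# solved with a weighted maximum matching (here via the Hungarian algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
# Rows: incoming tree branches at a node; columns: candidate outgoing edges.
# weights[i, j] stands in for the max-sum message of assigning branch i to edge j.
weights = rng.normal(size=(3, 4))

rows, cols = linear_sum_assignment(weights, maximize=True)
best = weights[rows, cols].sum()
print(list(zip(rows.tolist(), cols.tolist())), best)
```

Each branch is matched to a distinct edge, so the disjointness constraint of the edge-disjoint variant is enforced locally by the matching itself.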
Neural complexity: a graph theoretic interpretation
One of the central challenges facing modern neuroscience is to explain the ability of the nervous system to coherently integrate information across distinct functional modules in the absence of a central executive. To this end, Tononi et al. [Proc. Nat. Acad. Sci. USA 91, 5033 (1994)] proposed a measure of neural complexity that purports to capture this property based on mutual information between complementary subsets of a system. Neural complexity, so defined, is one of a family of information-theoretic metrics developed to measure the balance between the segregation and integration of a system's dynamics. One key question arising for such measures involves understanding how they are influenced by network topology. Sporns et al. [Cereb. Cortex 10, 127 (2000)] employed numerical models in order to determine the dependence of neural complexity on the topological features of a network. However, a complete picture has yet to be established. While De Lucia et al. [Phys. Rev. E 71, 016114 (2005)] made the first attempts at an analytical account of this relationship, their work utilized a formulation of neural complexity that, we argue, did not reflect the intuitions of the original work. In this paper we start by describing weighted connection matrices formed by applying a random continuous weight distribution to binary adjacency matrices. This allows us to derive an approximation for neural complexity in terms of the moments of the weight distribution and elementary graph motifs. In particular, we explicitly establish a dependency of neural complexity on cyclic graph motifs.
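The Tononi-Sporns-Edelman measure referred to above can be computed directly for a Gaussian system, where subset entropies reduce to log-determinants of sub-covariances (a hedged sketch: the 4-node example system and its weights are invented for illustration, and the Gaussian closed form is exact only under Gaussianity).

```python
# Neural complexity C_N = sum_k [ <H(X_k)> - (k/n) * H(X) ], where <H(X_k)>
# averages the entropy over all subsets of size k. For a Gaussian system,
# H = 0.5 * log((2*pi*e)^k * det(Sigma_k)).
import itertools
import numpy as np

def gaussian_entropy(cov):
    k = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** k * np.linalg.det(cov))

def neural_complexity(cov):
    n = cov.shape[0]
    H_total = gaussian_entropy(cov)
    C = 0.0
    for k in range(1, n):
        subs = list(itertools.combinations(range(n), k))
        mean_Hk = np.mean([gaussian_entropy(cov[np.ix_(s, s)]) for s in subs])
        C += mean_Hk - (k / n) * H_total   # subadditivity makes each term >= 0
    return C

# Example: weakly coupled 4-node system (weights purely illustrative);
# stationary covariance of x = A x + noise is (I - A)^-1 (I - A)^-T.
A = 0.2 * np.array([[0, 1, 0, 1], [1, 0, 1, 0],
                    [0, 1, 0, 1], [1, 0, 1, 0]], float)
M = np.linalg.inv(np.eye(4) - A)
cov = M @ M.T
print(round(neural_complexity(cov), 4))
```

For an independent system (identity covariance) the measure vanishes, and it grows as correlations couple the nodes, which is the segregation/integration balance the abstract describes.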
The Epstein-Glaser approach to pQFT: graphs and Hopf algebras
The paper aims at investigating perturbative quantum field theory (pQFT) in
the approach of Epstein and Glaser (EG) and, in particular, its formulation in
the language of graphs and Hopf algebras (HAs). Various HAs are encountered,
each one associated with a special combination of physical concepts such as
normalization, localization, pseudo-unitarity, causality and an associated
regularization, and renormalization. The algebraic structures, representing the
perturbative expansion of the S-matrix, are imposed on the operator-valued
distributions which are equipped with appropriate graph indices. Translation
invariance ensures that the algebras are analytically well-defined, and graded
total symmetry makes it possible to formulate bialgebras. The algebraic results
are presented within the physical framework, which covers the two recent EG versions by
Fredenhagen and Scharf that differ with respect to the concrete recursive
implementation of causality. In addition, the ultraviolet divergences occurring in
Feynman's representation are given a mathematical justification. As a final result, the
change of the renormalization scheme in the EG framework is modeled via a HA
which can be seen as the EG-analog of Kreimer's HA.
Comment: 52 pages, 5 figures
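A toy model conveys the flavor of the Hopf-algebraic structures involved (a minimal sketch, not the paper's graph-indexed HAs): in the "ladder" subalgebra of Kreimer's Hopf algebra of rooted trees, the coproduct of a chain of length n splits it into all prefix/suffix pairs, Delta(t_n) = sum_k t_k (x) t_{n-k}, and coassociativity can be checked mechanically.

```python
# Ladder subalgebra of Kreimer's Hopf algebra of rooted trees: a chain t_n
# has coproduct Delta(t_n) = sum_{k=0}^{n} t_k (x) t_{n-k}. We verify
# coassociativity, (Delta x id) Delta = (id x Delta) Delta, on small chains.

def coproduct(n):
    """Delta(t_n) as a list of (left, right) ladder lengths."""
    return [(k, n - k) for k in range(n + 1)]

def coassoc_lhs(n):
    """(Delta x id) Delta: multiset of (a, b, c) triples with a + b + c = n."""
    return sorted((a, b, r) for l, r in coproduct(n) for a, b in coproduct(l))

def coassoc_rhs(n):
    """(id x Delta) Delta."""
    return sorted((l, a, b) for l, r in coproduct(n) for a, b in coproduct(r))

print(all(coassoc_lhs(n) == coassoc_rhs(n) for n in range(6)))  # True
```

The paper's HAs carry much more structure (graph indices, causality, renormalization-scheme changes), but they obey the same coalgebra axioms exercised here.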