Properties of dense partially random graphs
We study the properties of random graphs where for each vertex a {\it
neighbourhood} has been previously defined. The probability of an edge joining
two vertices depends on whether the vertices are neighbours or not, as happens
in Small World Graphs (SWGs). But we consider the case where the average degree
of each node is of the order of the size of the graph (unlike SWGs, which are
sparse). This allows us to calculate the mean distance and clustering, which are
qualitatively similar to those of SWGs (although over a less dramatic range of
scales). We also obtain analytically the distribution of eigenvalues of the
corresponding adjacency matrices. This distribution is discrete for large
eigenvalues and continuous for small eigenvalues. The continuous part of the
distribution follows a semicircle law, whose width is proportional to the
"disorder" of the graph, whereas the discrete part is simply a rescaling of the
spectrum of the substrate. We apply our results to the calculation of the
mixing rate and the synchronizability threshold.

Comment: 14 pages. To be published in Physical Review
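As an illustrative numerical sketch (not from the paper; the ring substrate, sizes, and edge probabilities below are arbitrary choices), the dense regime and the split between a continuous semicircle-like bulk and discrete outliers can be observed directly:

```python
import numpy as np

rng = np.random.default_rng(0)

N, k = 400, 40          # N vertices; each vertex's neighbourhood = k nearest on a ring
p_in, p_out = 0.9, 0.3  # edge probabilities inside / outside the neighbourhood

# Dense partially random graph: both probabilities are O(1), so the
# average degree is of the order of N (unlike a sparse small-world graph).
A = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1, N):
        d = min(abs(i - j), N - abs(i - j))   # ring distance
        A[i, j] = A[j, i] = rng.random() < (p_in if d <= k // 2 else p_out)

evals = np.sort(np.linalg.eigvalsh(A))
mean_deg = A.sum() / N

# The largest eigenvalue tracks the mean degree and sits far above the
# continuous bulk of the spectrum.
gap_ratio = evals[-1] / abs(evals[-2])
```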
Graph Signal Processing: Overview, Challenges and Applications
Research in Graph Signal Processing (GSP) aims to develop tools for
processing data defined on irregular graph domains. In this paper we first
provide an overview of core ideas in GSP and their connection to conventional
digital signal processing. We then summarize recent advances in basic GSP
tools, including methods for sampling, filtering, and graph learning.
Next, we review progress in several application areas using GSP, including
processing and analysis of sensor network data, biological data, and
applications to image processing and machine learning. We finish by providing a
brief historical perspective to highlight how concepts recently developed in
GSP build on top of prior research in other areas.

Comment: To appear, Proceedings of the IEEE
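For example, filtering in the graph spectral domain, one of the basic GSP tools the overview covers, can be sketched with a Laplacian eigenbasis; the path graph and the ideal low-pass response here are arbitrary illustrative choices, not constructions from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Path graph on n nodes as a simple graph domain.
n = 50
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A        # combinatorial graph Laplacian

# Graph Fourier basis: Laplacian eigenvectors ordered by graph frequency.
lam, U = np.linalg.eigh(L)

# Noisy smooth graph signal, then an ideal low-pass graph filter that
# keeps only the 5 lowest graph frequencies.
x = np.sin(np.linspace(0.0, np.pi, n))
y = x + 0.3 * rng.standard_normal(n)
h = (np.arange(n) < 5).astype(float)  # spectral response of the filter
x_hat = U @ (h * (U.T @ y))           # GFT -> multiply -> inverse GFT

err_noisy = np.linalg.norm(y - x)
err_filt = np.linalg.norm(x_hat - x)
```

Denoising succeeds because the signal's energy is concentrated in the low graph frequencies while the noise spreads over all of them.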
Neural Network Approximation of Continuous Functions in High Dimensions with Applications to Inverse Problems
The remarkable successes of neural networks in a huge variety of inverse
problems have fueled their adoption in disciplines ranging from medical imaging
to seismic analysis over the past decade. However, the high dimensionality of
such inverse problems has simultaneously left current theory, which predicts
that networks should scale exponentially in the dimension of the problem,
unable to explain why the seemingly small networks used in these settings work
as well as they do in practice. To reduce this gap between theory and practice,
we provide a general method for bounding the complexity required for a neural
network to approximate a H\"older (or uniformly) continuous function defined on
a high-dimensional set with a low-complexity structure. The approach is based
on the observation that the existence of a Johnson-Lindenstrauss embedding
$A \in \mathbb{R}^{d \times D}$ of a given high-dimensional set $S \subset \mathbb{R}^D$
into a low-dimensional cube $[-M, M]^d$ implies that for
any H\"older (or uniformly) continuous function $f : S \to \mathbb{R}^p$, there
exists a H\"older (or uniformly) continuous function $g : [-M, M]^d \to \mathbb{R}^p$
such that $g(Ax) = f(x)$ for all $x \in S$. Hence, if
one has a neural network which approximates $g$, then a
layer can be added that implements the JL embedding $A$ to obtain a neural
network that approximates $f$. By pairing JL embedding results
along with results on approximation of H\"older (or uniformly) continuous
functions by neural networks, one then obtains results which bound the
complexity required for a neural network to approximate H\"older (or uniformly)
continuous functions on high dimensional sets. The end result is a general
theoretical framework which can then be used to better explain the observed
empirical successes of smaller networks in a wider variety of inverse problems
than current theory allows.

Comment: 26 pages, 1 figure
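The key observation can be checked numerically. In this sketch (illustrative only; the synthetic curve stands in for a low-complexity high-dimensional set), a random Gaussian matrix plays the role of the JL embedding, and near-unit distortion of pairwise distances is what allows a function on the set to be pushed to a function on the low-dimensional image with a similar modulus of continuity:

```python
import numpy as np

rng = np.random.default_rng(2)

# m points on a smooth curve in R^D: high ambient dimension, low complexity.
D, d, m = 1000, 50, 60
t = np.linspace(0.0, 0.5, m)
S = np.stack([np.cos(2 * np.pi * k * t) / (k + 1) for k in range(D)], axis=1)

# Random Gaussian JL map A : R^D -> R^d, scaled to preserve norms in
# expectation; being linear, it can be implemented as one extra network layer.
A = rng.standard_normal((d, D)) / np.sqrt(d)
S_low = S @ A.T

# Distortion of all pairwise distances; values near 1 mean A is nearly an
# isometry on the set, so g(Ax) = f(x) defines a continuous function g.
i, j = np.triu_indices(m, k=1)
dist_high = np.linalg.norm(S[i] - S[j], axis=1)
dist_low = np.linalg.norm(S_low[i] - S_low[j], axis=1)
distortion = dist_low / dist_high
```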
Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding for Ising MRF Models: Classical and Quantum Topology Machine Learning
The paper introduces the application of information geometry to describe the
ground states of Ising models by utilizing parity-check matrices of cyclic and
quasi-cyclic codes on toric and spherical topologies. The approach establishes
a connection between machine learning and error-correcting coding. This
proposed approach has implications for the development of new embedding methods
based on trapping sets. Statistical physics and number geometry are applied to
optimize error-correcting codes, leading to these embedding and sparse
factorization methods. The paper establishes a direct connection between DNN
architecture and error-correcting coding by demonstrating how state-of-the-art
architectures (ChordMixer, Mega, Mega-chunk, CDIL, ...) from the long-range
arena can be equivalent to block and convolutional LDPC codes (Cage-graph,
Repeat Accumulate). QC codes correspond to certain types of chemical elements,
with the carbon element being represented by the mixed automorphism
Shu-Lin-Fossorier QC-LDPC code. The connections between Belief Propagation and
the Permanent, Bethe-Permanent, Nishimori Temperature, and Bethe-Hessian Matrix
are elaborated upon in detail. The Quantum Approximate Optimization Algorithm
(QAOA) used in the Sherrington-Kirkpatrick Ising model can be seen as analogous
to the back-propagation loss function landscape in training DNNs. This
similarity creates a comparable problem with TS pseudo-codeword, resembling the
belief propagation method. Additionally, the layer depth in QAOA correlates with
the number of decoding belief propagation iterations in the Wiberg decoding
tree. Overall, this work has the potential to advance multiple fields, from
Information Theory, DNN architecture design (sparse and structured prior graph
topology), efficient hardware design for Quantum and Classical DPU/TPU (graph,
quantization, and shift-register architectures) to Materials Science and beyond.

Comment: 71 pages, 42 Figures, 1 Table, 1 Appendix. arXiv admin note: text
overlap with arXiv:2109.08184 by other authors
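As a small illustration of the quasi-cyclic parity-check structure the abstract builds on (the shift table below is an arbitrary example, not the Shu-Lin-Fossorier code mentioned above), a QC parity-check matrix is a block grid of circulant permutations with constant row and column weight; this is the kind of sparse, structured graph topology the paper relates to DNN architecture priors:

```python
import numpy as np

def circulant_permutation(n, shift):
    """n x n identity matrix cyclically shifted by `shift` columns."""
    return np.roll(np.eye(n, dtype=int), shift, axis=1)

# Quasi-cyclic parity-check matrix: a 2 x 3 grid of 7 x 7 shifted identities.
# The shift values are an arbitrary illustrative choice.
n = 7
shifts = [[0, 1, 3],
          [2, 5, 6]]
H = np.block([[circulant_permutation(n, s) for s in row] for row in shifts])

# Every row and column has the same weight: a regular, sparse Tanner graph.
row_wt = H.sum(axis=1)
col_wt = H.sum(axis=0)
```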
Mixed-Weight Open Locating-Dominating Sets
The detection and location of issues in a network is a common problem encompassing a wide variety of research areas. Location-detection problems have been studied for wireless sensor networks and environmental monitoring, microprocessor fault detection, public utility contamination, and finding intruders in buildings. Modeling these systems as a graph, we want to find the smallest subset of nodes that, when sensors are placed at those locations, can detect and locate any anomalies that arise. One type of set that solves this problem is the open locating-dominating set (OLD-set), a set of nodes that forms a unique and nonempty neighborhood with every node in the graph.

For this work, we begin with a study of OLD-sets in circulant graphs. Circulant graphs are a family of regular cyclic graphs that are often used in massively parallel systems. We prove the optimal OLD-set size for two circulant graphs using two proof techniques: the discharging method and Hall's Theorem.

Next we introduce the mixed-weight open locating-dominating set (mixed-weight OLD-set), an extension of the OLD-set. The mixed-weight OLD-set allows nodes in the graph to have different weights, representing systems that use sensors of varying strengths. This is a novel approach to the study of location-detection problems. We show that the decision problem for the minimum mixed-weight OLD-set, for any weights up to a positive integer d, is NP-complete. We find the size of mixed-weight OLD-sets in paths and cycles for weights 1 and 2.

We consider mixed-weight OLD-sets in random graphs, providing probabilistic bounds on the size of the mixed-weight OLD-set and using simulation to reinforce the theoretical results. Finally, we build and study an integer linear program to solve for mixed-weight OLD-sets and use greedy algorithms to generate mixed-weight OLD-set estimates in random geometric graphs. We also extend our results for mixed-weight OLD-sets in random graphs to random geometric graphs by estimating the probabilistic upper bound for the size of the set.
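The OLD-set definition itself is easy to verify by brute force. This sketch (illustrative only; the dissertation's circulant-graph results use the discharging method and Hall's Theorem, not exhaustive search) finds a minimum OLD-set in the 6-cycle:

```python
from itertools import combinations

n = 6
# Open neighbourhoods in the cycle C_6 (a vertex is not its own neighbour).
nbrs = [{(v - 1) % n, (v + 1) % n} for v in range(n)]

def is_old_set(S):
    """S is open locating-dominating iff every code N(v) & S is nonempty
    and the codes are pairwise distinct across all vertices."""
    codes = [frozenset(nbrs[v] & S) for v in range(n)]
    return all(codes) and len(set(codes)) == n

# Brute-force search, smallest subsets first.
best = min((S for k in range(1, n + 1)
            for S in map(frozenset, combinations(range(n), k))
            if is_old_set(S)), key=len)
```

A counting argument confirms the answer: a size-3 set dominates only 6 vertex-slots, forcing all codes to be singletons drawn from 3 elements, which cannot be distinct for 6 vertices; hence 4 is optimal for C_6.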
Modern Problems in Mathematical Signal Processing: Quantized Compressed Sensing and Randomized Neural Networks
We study two problems from mathematical signal processing. First, we consider the problem of approximately recovering signals on a smooth, compact manifold from one-bit linear measurements drawn from either a Gaussian ensemble, partial circulant ensemble, or bounded orthonormal ensemble and quantized using $\Sigma\Delta$ or distributed noise-shaping schemes. We construct a convex optimization algorithm for signal recovery that, given a Geometric Multi-Resolution Analysis approximation of the manifold, guarantees signal recovery with high probability. We prove an upper bound on the recovery error which outperforms prior works that use memoryless scalar quantization, requires a simpler analysis, and extends the class of measurements beyond Gaussians.

Second, we consider the problem of approximating continuous functions on compact domains using neural networks. The learning speed of feed-forward neural networks is notoriously slow and has presented a bottleneck in deep learning applications for several decades. For instance, gradient-based learning algorithms, which are used extensively to train neural networks, tend to work slowly when all of the network parameters must be iteratively tuned. To counter this, both researchers and practitioners have tried introducing randomness to reduce the learning requirement. Based on the original construction of B.~Igelnik and Y.H.~Pao, single-layer neural networks with random input-to-hidden layer weights and biases have seen success in practice, but the necessary theoretical justification is lacking. We begin to fill this theoretical gap by providing a (corrected) rigorous proof that the Igelnik and Pao construction is a universal approximator for continuous functions on compact domains, with $L^2$-error convergence rate inversely proportional to the number of network nodes; we then extend this result to the non-asymptotic setting using a concentration inequality for Monte-Carlo integral approximations. We further adapt this randomized neural network architecture to approximate functions on smooth, compact submanifolds of Euclidean space, providing theoretical guarantees in both the asymptotic and non-asymptotic cases.
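The random-weight construction from the second part can be sketched as follows (a minimal illustration in the Igelnik-Pao spirit with arbitrary weight distributions and an arbitrary target function, not the thesis's corrected construction): the input-to-hidden weights are drawn at random and frozen, and only the output layer is trained, here by least squares.

```python
import numpy as np

rng = np.random.default_rng(3)

# Target: a continuous function on a compact domain [-1, 1].
f = lambda x: np.sin(3 * x) + 0.5 * x
x_train = np.linspace(-1.0, 1.0, 200)[:, None]
y_train = f(x_train).ravel()

# Random-weight single hidden layer: weights and biases are sampled once
# and never updated, sidestepping iterative tuning of all parameters.
n_nodes = 100
W = rng.uniform(-4.0, 4.0, size=(1, n_nodes))
b = rng.uniform(-4.0, 4.0, size=n_nodes)
H = np.tanh(x_train @ W + b)          # hidden activations

# Only the output weights c are fit, via linear least squares.
c, *_ = np.linalg.lstsq(H, y_train, rcond=None)

y_hat = H @ c
rmse = np.sqrt(np.mean((y_hat - y_train) ** 2))
```

Because training reduces to one linear solve, fitting is fast; the theoretical question the thesis addresses is how the approximation error decays as `n_nodes` grows.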