    Markov Chain Methods For Analyzing Complex Transport Networks

    We have developed a steady-state theory of complex transport networks used to model the flow of commodities, information, viruses, opinions, or traffic. Our approach is based on Markov chains defined on the graph representations of transport networks, allowing for effective network design, network performance evaluation, embedding, partitioning, and network fault-tolerance analysis. Random walks embed graphs into a Euclidean space in which distances and angles acquire a clear statistical interpretation. Being defined on the dual graph representations of transport networks, random walks describe the equilibrium configurations of non-random commodity flows on the primary graphs. This theory unifies many network concepts into one framework and can also be elegantly extended to describe networks represented by directed graphs and multiple interacting networks.
    Comment: 26 pages, 4 figures
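
    For orientation, one standard construction of such a random-walk embedding (not necessarily the exact one used in the paper) maps nodes into Euclidean space via the pseudoinverse of the graph Laplacian, so that squared embedded distances equal average commute times of the walk. A minimal sketch in Python, using a hypothetical 4-node adjacency matrix:

        import numpy as np

        # Hypothetical 4-node undirected network (example adjacency matrix).
        A = np.array([[0., 1., 1., 0.],
                      [1., 0., 1., 1.],
                      [1., 1., 0., 1.],
                      [0., 1., 1., 0.]])

        deg = A.sum(axis=1)
        L = np.diag(deg) - A               # combinatorial graph Laplacian
        Lp = np.linalg.pinv(L)             # Moore-Penrose pseudoinverse

        # Embed node i as row X[i]; then ||X[i] - X[j]||^2 equals the
        # average commute time of the random walk between i and j.
        evals, evecs = np.linalg.eigh(Lp)
        X = evecs * np.sqrt(np.clip(evals, 0., None) * deg.sum())

        i, j = 0, 3
        print(np.sum((X[i] - X[j]) ** 2))                        # embedded distance^2
        print(deg.sum() * (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]))  # commute time, equal

    This gives one concrete sense in which "distances acquire a clear statistical interpretation": nearby nodes in the embedding are those a random walker shuttles between quickly.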

    Null twisted geometries

    We define and investigate a quantisation of null hypersurfaces in the context of loop quantum gravity on a fixed graph. The main tool we use is the parametrisation of the theory in terms of twistors, which has already proved useful in discussing the interpretation of spin networks as the quantisation of twisted geometries. The classical formalism can be extended in a natural way to null hypersurfaces, with the Euclidean polyhedra replaced by null polyhedra with space-like faces, and SU(2) by the little group ISO(2). The main difference is that the simplicity constraints present in the formalism are all first class, and the symplectic reduction selects only the helicity subgroup of the little group. As a consequence, information on the shapes of the polyhedra is lost, and the result is a much simpler, abelian geometric picture. It can be described by a Euclidean singular structure on the 2-dimensional space-like surface defined by a foliation of space-time by null hypersurfaces. This geometric structure is naturally decomposed into a conformal metric and scale factors, forming locally conjugate pairs. Proper action-angle variables on the gauge-invariant phase space are described by the eigenvectors of the Laplacian of the dual graph. We also identify the phase-space variables that characterize the extrinsic geometry of the foliation. Finally, we quantise the phase space and its algebra using Dirac's algorithm, obtaining a notion of spin networks for null hypersurfaces. Such spin networks are labelled by SO(2) quantum numbers, and are embedded non-trivially in the unitary, infinite-dimensional irreducible representations of the Lorentz group.
    Comment: 22 pages, 3 figures. v2: minor corrections, improved presentation in section 4, references updated
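
    As a schematic illustration of the conformal/scale split described above (our notation; the paper's precise variables may differ), the induced 2-dimensional space-like metric decomposes as

        \[ q_{ab} = e^{2\lambda}\,\hat{q}_{ab}, \qquad \det \hat{q}_{ab} = 1, \]

    with the conformal class \hat{q}_{ab} and the local scale factor \lambda forming, per the abstract, locally conjugate pairs on the reduced phase space, schematically \{ \lambda(x), \hat{q}(y) \} \propto \delta^{(2)}(x, y).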

    A Proof of Kirchhoff's First Law for Hyperbolic Conservation Laws on Networks

    Networks are essential models in many applications such as information technology, chemistry, power systems, transportation, neuroscience, and the social sciences. In light of such broad applicability, a general theory of dynamical systems on networks may capture shared concepts and provide a setting for deriving abstract properties. To this end, we develop a calculus for networks modeled as abstract metric spaces and derive an analog of Kirchhoff's first law for hyperbolic conservation laws. In dynamical systems on networks, Kirchhoff's first law connects the study of abstract global objects with a computationally beneficial edgewise-Euclidean perspective by establishing their equivalence. In particular, our results show that hyperbolic conservation laws on networks can be stated without explicit Kirchhoff-type boundary conditions.
    Comment: 20 pages, 6 figures
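
    Concretely, in our notation (the paper works in a more abstract metric-space setting), a hyperbolic conservation law on each edge together with Kirchhoff's first law at the vertices reads

        \[ \partial_t u_e + \partial_x f(u_e) = 0 \ \text{on each edge } e, \qquad \sum_{e \in \delta(v)} \sigma_e(v)\, f\big(u_e(v,t)\big) = 0 \ \text{at each vertex } v, \]

    where \delta(v) is the set of edges incident to the vertex v and \sigma_e(v) = \pm 1 records the orientation of e at v: the flux into a node balances the flux out of it.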

    A VLSI-design of the minimum entropy neuron

    One of the most interesting domains of feedforward networks is the processing of sensor signals. Some existing networks extract most of the information by implementing the maximum entropy principle for Gaussian sources. This is done by transforming input patterns to the basis of the eigenvectors of the input autocorrelation matrix with the largest eigenvalues. The basic building block of these networks is the linear neuron, learning with the Oja learning rule. Nevertheless, some researchers in pattern recognition theory claim that for pattern recognition and classification, clustering transformations are needed which reduce the intra-class entropy. This leads to stable, reliable features and is implemented for Gaussian sources by a linear transformation using the eigenvectors with the smallest eigenvalues. In another paper (Brause 1992) it is shown that the basic building block for such a transformation can be implemented by a linear neuron using an Anti-Hebb rule and restricted weights. This paper presents the analog VLSI design for such a building block, using standard modules of multiplication and addition. The most tedious problem in this VLSI application is the design of an analog vector normalization circuitry. It can be shown that the standard approaches of weight summation will not give convergence to the eigenvectors needed for a proper feature transformation. To avoid this problem, our design differs significantly from the standard approaches by computing the true Euclidean norm.
    Keywords: minimum entropy, principal component analysis, VLSI, neural networks, surface approximation, cluster transformation, weight normalization circuit
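
    For orientation, here is a minimal numerical sketch (our own illustration, not the paper's circuit; the Brause (1992) rule may differ in detail) of an anti-Hebbian linear neuron whose weight vector, renormalized to unit Euclidean length after each step, converges to the eigenvector with the smallest eigenvalue:

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical Gaussian source; the minor component is the third axis.
        C = np.diag([4.0, 2.0, 0.5])
        X = rng.multivariate_normal(np.zeros(3), C, size=20000)

        w = rng.normal(size=3)
        w /= np.linalg.norm(w)      # start on the unit sphere
        eta = 0.01                  # learning rate

        for x in X:
            y = w @ x               # linear neuron output
            w -= eta * y * x        # Anti-Hebb update (sign-flipped Hebb rule)
            w /= np.linalg.norm(w)  # explicit Euclidean-norm renormalization

        print(w)  # approaches +/- (0, 0, 1), the smallest-eigenvalue eigenvector

    The explicit renormalization step is the part the paper realizes in analog hardware; replacing it with a simple weight summation would not yield convergence to the correct eigenvector, which is why the design computes the true Euclidean norm.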

    The Error-Pattern-Correcting Turbo Equalizer

    The error-pattern correcting code (EPCC) is incorporated in the design of a turbo equalizer (TE) with the aim of correcting dominant error events of the inter-symbol interference (ISI) channel at the output of its matching Viterbi detector. By targeting the low Hamming-weight interleaved errors of the outer convolutional code, which are responsible for low Euclidean-weight errors in the Viterbi trellis, the turbo equalizer with an error-pattern correcting code (TE-EPCC) exhibits a much lower bit-error rate (BER) floor than the conventional non-precoded TE, especially for high-rate applications. A maximum-likelihood upper bound is developed on the BER floor of the TE-EPCC for a generalized two-tap ISI channel, in order to study the TE-EPCC's signal-to-noise ratio (SNR) gain for various channel conditions and design parameters. In addition, the SNR gain of the TE-EPCC relative to an existing precoded TE is compared to demonstrate the present TE's superiority for short interleaver lengths and high coding rates.
    Comment: This work has been submitted to the special issue of the IEEE Transactions on Information Theory titled "Facets of Coding Theory: from Algorithms to Networks". This work was supported in part by NSF Theoretical Foundation Grant 0728676.
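
    To illustrate the Hamming-weight/Euclidean-weight connection the abstract relies on, here is a small sketch (our own, with a hypothetical channel tap, not the paper's bound) that scores bipolar error events on a generalized two-tap ISI channel h(D) = 1 + aD by their squared Euclidean weight at the Viterbi detector:

        import numpy as np

        # Hypothetical generalized two-tap ISI channel h(D) = 1 + a*D.
        a = 0.8
        h = np.array([1.0, a])

        def euclidean_weight_sq(e):
            """Squared Euclidean weight ||h * e||^2 of a bipolar error
            event e (entries +/-2 for bipolar signalling)."""
            return float(np.sum(np.convolve(h, e) ** 2))

        # A few low Hamming-weight error events; the ones with low
        # Euclidean weight dominate the BER floor that EPCC targets.
        for name, e in [("+", [2.0]),
                        ("+-", [2.0, -2.0]),
                        ("+-+", [2.0, -2.0, 2.0])]:
            print(name, euclidean_weight_sq(np.array(e)))

    Ranking error events by this distance identifies the dominant trellis error patterns for a given channel, which is the set the EPCC is designed to correct.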