
    Learning Graphs from Linear Measurements: Fundamental Trade-offs and Applications

    We consider a specific graph learning task: reconstructing a symmetric matrix that represents an underlying graph using linear measurements. We present a sparsity characterization for distributions of random graphs (that are allowed to contain high-degree nodes), based on which we study fundamental trade-offs between the number of measurements, the complexity of the graph class, and the probability of error. We first derive a necessary condition on the number of measurements. Then, by considering a three-stage recovery scheme, we give a sufficient condition for recovery. Furthermore, assuming the measurements are Gaussian IID, we prove upper and lower bounds on the (worst-case) sample complexity for both noisy and noiseless recovery. In the special cases of the uniform distribution on trees with n nodes and the Erdős-Rényi (n,p) class, the fundamental trade-offs are tight up to multiplicative factors with noiseless measurements. In addition, for practical applications, we design and implement a polynomial-time (in n) algorithm based on the three-stage recovery scheme. Experiments show that the heuristic algorithm outperforms basis pursuit on star graphs. We apply the heuristic algorithm to learn admittance matrices in electric grids. Simulations for several canonical graph classes and IEEE power system test cases demonstrate the effectiveness and robustness of the proposed algorithm for parameter reconstruction.
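
    A minimal sketch of the measurement model (not the paper's three-stage algorithm): the upper triangle of a sparse symmetric matrix is vectorized, observed through Gaussian IID linear measurements, and recovered with an ISTA-based basis-pursuit-style baseline. The dimensions, measurement budget, and regularization weight below are illustrative choices.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of the l1 norm.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def recover_sparse(A, y, lam=0.01, iters=3000):
        """ISTA on 0.5*||Ax - y||^2 + lam*||x||_1 (a basis-pursuit-style baseline)."""
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the gradient of the smooth part
        for _ in range(iters):
            x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
        return x

    rng = np.random.default_rng(0)
    n = 12                                       # nodes
    iu = np.triu_indices(n, k=1)                 # off-diagonal entries define the graph
    d = len(iu[0])                               # 66 unknowns

    x_true = np.zeros(d)                         # tree-like sparsity: n - 1 edges
    support = rng.choice(d, size=n - 1, replace=False)
    x_true[support] = rng.uniform(0.5, 1.5, size=n - 1)

    m = 40                                       # fewer measurements than unknowns
    A = rng.normal(size=(m, d)) / np.sqrt(m)     # Gaussian IID measurement matrix
    y = A @ x_true                               # noiseless linear measurements

    x_hat = recover_sparse(A, y)
    print("max recovery error:", np.abs(x_hat - x_true).max())
    ```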

    Fundamental Limits on Data Acquisition: Trade-offs between Sample Complexity and Query Difficulty

    We consider query-based data acquisition and the corresponding information recovery problem, where the goal is to recover $k$ binary variables (information bits) from parity measurements of those variables. The queries and the corresponding parity measurements are designed using the encoding rule of Fountain codes. By using Fountain codes, we can design a potentially limitless number of queries and corresponding parity measurements, and guarantee that the original $k$ information bits can be recovered with high probability from any sufficiently large set of measurements of size $n$. In the query design, the average number of information bits associated with one parity measurement is called the query difficulty ($\bar{d}$) and the minimum number of measurements required to recover the $k$ information bits for a fixed $\bar{d}$ is called the sample complexity ($n$). We analyze the fundamental trade-offs between the query difficulty and the sample complexity, and show that a sample complexity of $n = c\max\{k, (k\log k)/\bar{d}\}$ for some constant $c > 0$ is necessary and sufficient to recover $k$ information bits with high probability as $k \to \infty$.
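
    Since the encoding rule follows Fountain (LT-style) codes, a toy version is easy to sketch: each query XORs a random subset of the $k$ bits, with the subset size drawn from a degree distribution whose mean plays the role of $\bar{d}$, and a peeling decoder resolves queries one unknown at a time. The degree distribution and oversampling factor below are arbitrary illustrative choices, not the paper's construction.

    ```python
    import random

    def make_queries(k, n, degree_dist, rng):
        """Draw n parity queries; each XORs a random subset of the k information bits."""
        queries = []
        for _ in range(n):
            d = rng.choices(range(1, len(degree_dist) + 1), weights=degree_dist)[0]
            queries.append(frozenset(rng.sample(range(k), d)))
        return queries

    def peel(queries, answers, k):
        """Peeling decoder: repeatedly resolve queries that touch a single unknown bit."""
        eqs = [[set(q), a] for q, a in zip(queries, answers)]
        bits = [None] * k
        changed = True
        while changed:
            changed = False
            for eq in eqs:
                vars_, parity = eq
                known = {v for v in vars_ if bits[v] is not None}
                for v in known:
                    parity ^= bits[v]
                vars_ -= known
                eq[1] = parity
                if len(vars_) == 1:
                    bits[vars_.pop()] = parity
                    changed = True
        return bits

    rng = random.Random(1)
    k = 64
    truth = [rng.randrange(2) for _ in range(k)]
    dist = [0.1, 0.5, 0.2, 0.1, 0.1]      # toy degree distribution; mean ~ query difficulty
    n = 3 * k                             # oversample well beyond the information limit
    qs = make_queries(k, n, dist, rng)
    ans = [sum(truth[i] for i in q) % 2 for q in qs]
    dec = peel(qs, ans, k)
    ok = all(dec[i] == truth[i] for i in range(k) if dec[i] is not None)
    print("recovered", sum(b is not None for b in dec), "of", k, "bits; consistent:", ok)
    ```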

    SPAD: a distributed middleware architecture for QoS enhanced alternate path discovery

    In the next generation Internet, the network will evolve from a plain communication medium into one that provides endless services to the users. These services will be composed of multiple cooperative distributed application elements. We name these services overlay applications. The cooperative application elements within an overlay application will build a dynamic communication mesh, namely an overlay association. The Quality of Service (QoS) perceived by the users of an overlay application greatly depends on the QoS experienced on the communication paths of the corresponding overlay association. In this paper, we present SPAD (Super-Peer Alternate path Discovery), a distributed middleware architecture that aims at providing enhanced QoS between end-points within an overlay association. To achieve this goal, SPAD provides a complete scheme to discover and utilize composite alternate end-to-end paths with better QoS than the path given by the default IP routing mechanisms.
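
    As a rough illustration of composite alternate-path discovery (not SPAD's actual protocol), the sketch below runs Dijkstra over a small overlay mesh whose edges carry a scalar QoS cost, and compares the default direct path with a composite path through hypothetical super-peer relays. All node names and costs are invented.

    ```python
    import heapq

    def best_path(adj, src, dst, banned_edge=None):
        """Dijkstra on an overlay mesh; edge weights are a scalar QoS cost (e.g. latency in ms)."""
        dist, prev = {src: 0.0}, {}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj.get(u, []):
                if (u, v) == banned_edge:
                    continue
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path, node = [], dst
        while node != src:
            path.append(node)
            node = prev[node]
        return [src] + path[::-1], dist[dst]

    # Toy overlay: the default IP path A->D costs 20 ms; super-peer relays offer an alternate.
    overlay = {
        "A": [("D", 20.0), ("SP1", 10.0)],
        "SP1": [("SP2", 12.0)],
        "SP2": [("D", 8.0)],
    }
    print(best_path(overlay, "A", "D"))                          # default direct path wins here
    print(best_path(overlay, "A", "D", banned_edge=("A", "D")))  # composite alternate path
    ```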

    Grid tool integration within the eMinerals Project

    In this article we describe the eMinerals mini grid, which is now running in production mode. This is an integration of both compute and data components, the former built upon Condor, PBS and the functionality of Globus v2, and the latter based on the combined use of the Storage Resource Broker and the CCLRC data portal. We describe how we have integrated the middleware components, and the different facilities provided to the users for submitting jobs within such an environment. We also describe additional functionality we found it necessary to provide ourselves.

    Class of correlated random networks with hidden variables

    We study a class of models of correlated random networks in which vertices are characterized by hidden variables controlling the establishment of edges between pairs of vertices. We find analytical expressions for the main topological properties of these models as a function of the distribution of hidden variables and the probability of connecting vertices. The expressions obtained are checked by means of numerical simulations in a particular example. The general model is extended to describe a practical algorithm for generating random networks with an a priori specified correlation structure. We also present an extension of the class to map non-equilibrium growing networks to networks with hidden variables that represent the time at which each vertex was introduced in the system.
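
    The generation algorithm itself is compact enough to sketch: draw a hidden variable for each vertex, then connect each pair independently with a probability given by a kernel of the two hidden values. The exponential distribution and multiplicative kernel below are one example choice, not the paper's specific construction.

    ```python
    import numpy as np

    def hidden_variable_graph(n, rho, r, rng):
        """Draw h_i ~ rho, then link each pair (i, j) independently with prob r(h_i, h_j)."""
        h = rho(n, rng)
        adj = np.zeros((n, n), dtype=int)
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < r(h[i], h[j]):
                    adj[i, j] = adj[j, i] = 1
        return h, adj

    rng = np.random.default_rng(0)
    n = 500
    rho = lambda n, rng: rng.exponential(scale=1.0, size=n)   # hidden-variable distribution
    mean_deg = 6.0
    r = lambda hi, hj: min(1.0, mean_deg * hi * hj / n)       # multiplicative kernel, <h> = 1

    h, adj = hidden_variable_graph(n, rho, r, rng)
    print("mean degree:", adj.sum(axis=1).mean())             # should be close to mean_deg
    ```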

    Conflict and Computation on Wikipedia: a Finite-State Machine Analysis of Editor Interactions

    What is the boundary between a vigorous argument and a breakdown of relations? What drives a group of individuals across it? Taking Wikipedia as a test case, we use a hidden Markov model to approximate the computational structure and social grammar of more than a decade of cooperation and conflict among its editors. Across a wide range of pages, we discover a bursty war/peace structure where the system can become trapped, sometimes for months, in a computational subspace associated with significantly higher levels of conflict-tracking "revert" actions. Distinct patterns of behavior characterize the lower-conflict subspace, including tit-for-tat reversion. While a fraction of the transitions between these subspaces are associated with top-down actions taken by administrators, the effects are weak. Surprisingly, we find no statistical signal that transitions are associated with the appearance of particularly anti-social users, and only weak association with significant news events outside the system. These findings are consistent with transitions being driven by decentralized processes with no clear locus of control. Models of belief revision in the presence of a common resource for information-sharing predict the existence of two distinct phases: a disordered high-conflict phase, and a frozen phase with spontaneously-broken symmetry. The bistability we observe empirically may be a consequence of editor turn-over, which drives the system to a critical point between them.
    Comment: 23 pages, 3 figures. Matches published version. Code for HMM fitting available at http://bit.ly/sfihmm ; time series and derived finite state machines at bit.ly/wiki_hm
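
    To make the modeling step concrete, here is a minimal two-state HMM decoding sketch over a binary revert/non-revert stream. The paper's fitted models are richer (see the linked code), and all parameters below are invented for illustration.

    ```python
    import numpy as np

    # Toy two-state HMM: a "peace" state emits reverts rarely, a "war" state emits them often.
    A = np.array([[0.98, 0.02],      # state transition matrix (peace, war)
                  [0.05, 0.95]])
    B = np.array([[0.95, 0.05],      # emission probabilities P(symbol | state)
                  [0.50, 0.50]])
    pi = np.array([0.9, 0.1])        # initial state distribution

    def viterbi(obs, A, B, pi):
        """Most likely hidden-state path, computed in log space for numerical stability."""
        T, S = len(obs), len(pi)
        logd = np.log(pi) + np.log(B[:, obs[0]])
        back = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            scores = logd[:, None] + np.log(A)   # scores[i, j]: end in i at t-1, move to j
            back[t] = scores.argmax(axis=0)
            logd = scores.max(axis=0) + np.log(B[:, obs[t]])
        path = [int(logd.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(back[t, path[-1]])
        return path[::-1]

    rng = np.random.default_rng(2)
    # Simulate a bursty stream: quiet stretch, conflict burst, quiet again.
    obs = np.concatenate([rng.random(200) < 0.05,
                          rng.random(60) < 0.5,
                          rng.random(200) < 0.05]).astype(int)
    states = viterbi(obs, A, B, pi)
    print("fraction of time decoded as 'war':", sum(states) / len(states))
    ```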

    Sleep Period Optimization Model For Layered Video Service Delivery Over eMBMS Networks

    Long Term Evolution-Advanced (LTE-A) and the evolved Multimedia Broadcast Multicast System (eMBMS) are the most promising technologies for the delivery of highly bandwidth-demanding applications. In this paper we propose a green resource allocation strategy for the delivery of layered video streams to users with different propagation conditions. The goal of the proposed model is to minimize user energy consumption. That goal is achieved by minimizing the time required by each user to receive the broadcast data via an efficient power transmission allocation model. A key point in our system model is that the reliability of layered video communications is ensured by means of the Random Linear Network Coding (RLNC) approach. Analytical results show that the proposed resource allocation model ensures the desired quality of service constraints, while the user energy footprint is significantly reduced.
    Comment: Proc. of IEEE ICC 2015, Selected Areas in Communications Symposium - Green Communications Track, to appear
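
    Reliability via RLNC can be illustrated with a small GF(2) example (the paper's field size and scheme may differ): each coded packet carries a random linear combination of the k source packets plus its coefficient vector, and a receiver decodes by Gaussian elimination once it holds a full-rank set.

    ```python
    import numpy as np

    def gf2_solve(C, Y):
        """Gaussian elimination over GF(2): solve C X = Y when C has full column rank."""
        C, Y = C.copy() % 2, Y.copy() % 2
        k, row = C.shape[1], 0
        for col in range(k):
            piv = next((r for r in range(row, C.shape[0]) if C[r, col]), None)
            if piv is None:
                return None                        # rank deficit: not yet decodable
            C[[row, piv]], Y[[row, piv]] = C[[piv, row]], Y[[piv, row]]
            for r in range(C.shape[0]):
                if r != row and C[r, col]:
                    C[r] ^= C[row]
                    Y[r] ^= Y[row]
            row += 1
        return Y[:k]

    rng = np.random.default_rng(3)
    k, plen = 8, 16                                # source packets, payload bits per packet
    src = rng.integers(0, 2, size=(k, plen))

    # Encoder: each coded packet is a random GF(2) combination of the k source packets.
    n = k + 2                                      # a little overhead makes full rank likely
    coeffs = rng.integers(0, 2, size=(n, k))
    coded = coeffs @ src % 2

    decoded = gf2_solve(coeffs, coded)
    print("decoded OK:", decoded is not None and np.array_equal(decoded, src))
    ```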

    Exploring the concept of interaction computing through the discrete algebraic analysis of the Belousov–Zhabotinsky reaction

    Interaction computing (IC) aims to map the properties of integrable low-dimensional non-linear dynamical systems to the discrete domain of finite-state automata in an attempt to reproduce in software the self-organizing and dynamically stable properties of sub-cellular biochemical systems. As the work reported in this paper is still at the early stages of theory development, it focuses on the analysis of a particularly simple chemical oscillator, the Belousov-Zhabotinsky (BZ) reaction. After retracing the rationale for IC developed over the past several years from the physical, biological, mathematical, and computer science points of view, the paper presents an elementary discussion of the Krohn-Rhodes decomposition of finite-state automata, including the holonomy decomposition of a simple automaton, and of its interpretation as an abstract positional number system. The method is then applied to the analysis of the algebraic properties of discrete finite-state automata derived from a simplified Petri net model of the BZ reaction. In the simplest possible and symmetrical case, the corresponding automaton is, not surprisingly, found to contain exclusively cyclic groups. In a second, asymmetrical case, the decomposition is much more complex and includes five different simple non-abelian groups whose potential relevance arises from their ability to encode functionally complete algebras. The possible computational relevance of these findings is discussed and possible conclusions are drawn.
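
    The algebraic object underlying the decomposition is the transition monoid of the automaton. A toy closure computation (unrelated to the actual Petri-net-derived BZ automaton) shows how group structure, here a cyclic group, appears inside such a monoid.

    ```python
    def transition_monoid(generators, n):
        """All transformations of {0..n-1} reachable as words in the generator letters."""
        gens = [tuple(g) for g in generators]
        ident = tuple(range(n))
        elems, frontier = {ident}, [ident]
        while frontier:
            nxt = []
            for f in frontier:
                for g in gens:
                    h = tuple(g[f[x]] for x in range(n))   # apply f, then letter g
                    if h not in elems:
                        elems.add(h)
                        nxt.append(h)
            frontier = nxt
        return elems

    # Toy 3-state automaton: one input letter cycles the states, another resets to state 0.
    cycle = (1, 2, 0)
    reset = (0, 0, 0)
    M = transition_monoid([cycle, reset], 3)
    print("monoid size:", len(M))                          # 6: three constant maps + C3
    perms = sorted(f for f in M if len(set(f)) == 3)
    print("group part (the cyclic group C3):", perms)
    ```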