
    Lower Bounds for Structuring Unreliable Radio Networks

    In this paper, we study lower bounds for randomized solutions to the maximal independent set (MIS) and connected dominating set (CDS) problems in the dual graph model of radio networks---a generalization of the standard graph-based model that now includes unreliable links controlled by an adversary. We begin by proving that a natural geographic constraint on the network topology is required to solve these problems efficiently (i.e., in time polylogarithmic in the network size). We then prove the importance of the assumption that nodes are provided advance knowledge of their reliable neighbors (i.e., neighbors connected by reliable links). Combined, these results answer an open question by proving that the efficient MIS and CDS algorithms from [Censor-Hillel, PODC 2011] are optimal with respect to their dual graph model assumptions. They also provide insight into what properties of an unreliable network enable efficient local computation.
    Comment: An extended abstract of this work appears in the 2014 proceedings of the International Symposium on Distributed Computing (DISC).
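
    For context, a minimal sketch of Luby's classic randomized MIS algorithm in the standard, fully reliable graph model is shown below; it is not the dual graph model studied above and not the [Censor-Hillel, PODC 2011] algorithm, and the toy graph is made up for illustration.

```python
import random

def luby_mis(adj):
    """Luby's randomized MIS on an undirected graph {node: set of neighbours}.
    Standard reliable-link model; finishes in O(log n) rounds w.h.p."""
    active, mis = set(adj), set()
    while active:
        r = {v: random.random() for v in active}      # random priorities
        winners = {v for v in active
                   if all(r[v] < r[u] for u in adj[v] if u in active)}
        mis |= winners
        removed = set(winners)
        for v in winners:                             # winners knock out their neighbours
            removed |= adj[v] & active
        active -= removed
    return mis

# Toy 4-cycle; the result is always one of its two maximal independent sets.
print(luby_mis({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}))
```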

    Exact two-terminal reliability of some directed networks

    The calculation of network reliability in a probabilistic context has long been an issue of practical and academic importance. Conventional approaches (determination of bounds, sums of disjoint products algorithms, Monte Carlo evaluations, studies of the reliability polynomials, etc.) only provide approximations when the network's size increases, even when nodes do not fail and all edges have the same reliability p. We consider here a directed, generic graph of arbitrary size mimicking real-life long-haul communication networks, and give the exact, analytical solution for the two-terminal reliability. This solution involves a product of transfer matrices, in which individual reliabilities of edges and nodes are taken into account. The special case of identical edge and node reliabilities (p and ρ, respectively) is addressed. We consider a case study based on a commonly used configuration, and assess the influence of the edges being directed (or not) on various measures of network performance. While the two-terminal reliability, the failure frequency and the failure rate of the connection are quite similar, the locations of complex zeros of the two-terminal reliability polynomials exhibit strong differences, as well as various structural transitions at specific values of ρ. The present work could be extended to provide a catalog of exactly solvable networks in terms of reliability, which could be useful as building blocks for new and improved bounds, as well as benchmarks, in the general case.
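
    As a hedged illustration of the transfer-matrix idea only (a toy directed ladder with perfect nodes and a single edge reliability p, not the paper's long-haul topology or its matrices), the sketch below computes the exact source-to-target reliability by multiplying 4x4 transfer matrices whose state records which of the two rail nodes of the current column is reachable from the source, and cross-checks the result against exhaustive enumeration.

```python
import itertools
import numpy as np

def ladder_reliability(n, p):
    """Exact T0 -> Bn reliability of a directed 2 x (n+1) ladder (rails point
    right, rungs point down) via a product of 4x4 transfer matrices.
    Column state: 0 = both rail nodes reachable, 1 = top only, 2 = bottom only,
    3 = neither (absorbing)."""
    q = 1.0 - p
    M = np.array([
        [2*p*p - p**3, p*q*q, p*q, q*q],   # from "both reachable"
        [p*p,          p*q,   0.0, q  ],   # from "top only"
        [0.0,          0.0,   p,   q  ],   # from "bottom only"
        [0.0,          0.0,   0.0, 1.0],   # from "neither"
    ])
    v = np.array([p, q, 0.0, 0.0])         # column 0: source T0, rung up w.p. p
    for _ in range(n):
        v = v @ M                          # advance one column
    return v[0] + v[2]                     # target Bn reachable

def brute_force(n, p):
    """Same quantity by summing over all 2^m edge states (for checking only)."""
    edges = [(('T', i), ('B', i)) for i in range(n + 1)]        # rungs
    edges += [(('T', i), ('T', i + 1)) for i in range(n)]       # top rail
    edges += [(('B', i), ('B', i + 1)) for i in range(n)]       # bottom rail
    total = 0.0
    for states in itertools.product([True, False], repeat=len(edges)):
        up = [e for e, ok in zip(edges, states) if ok]
        reach, stack = {('T', 0)}, [('T', 0)]
        while stack:
            u = stack.pop()
            for a, b in up:
                if a == u and b not in reach:
                    reach.add(b); stack.append(b)
        if ('B', n) in reach:
            total += p**len(up) * (1 - p)**(len(edges) - len(up))
    return total

print(ladder_reliability(3, 0.9), brute_force(3, 0.9))   # the two values agree
```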

    Exact solutions for the two- and all-terminal reliabilities of the Brecht-Colbourn ladder and the generalized fan

    The two- and all-terminal reliabilities of the Brecht-Colbourn ladder and the generalized fan have been calculated exactly for arbitrary size as well as arbitrary individual edge and node reliabilities, using transfer matrices of dimension four at most. While the all-terminal reliabilities of these graphs are identical, the special case of identical edge (p) and node (ρ) reliabilities shows that their two-terminal reliabilities are quite distinct, as demonstrated by their generating functions and the locations of the zeros of the reliability polynomials, which undergo structural transitions at ρ = 1/2.
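
    The exact transfer matrices and generating functions are in the paper; as a toy analogue of the zero-location analysis (perfect nodes, a single edge reliability p, and a made-up 4-vertex fan with an arbitrary choice of terminals, rather than zeros tracked in the node reliability ρ), the sketch below builds a two-terminal reliability polynomial by enumeration and prints its complex zeros.

```python
import itertools
from numpy.polynomial import Polynomial

def reliability_polynomial(edges, s, t):
    """Two-terminal reliability polynomial R(p) of a small undirected graph with
    perfect nodes and identical edge reliability p, by exhaustive enumeration."""
    R = Polynomial([0.0])
    P, Q = Polynomial([0.0, 1.0]), Polynomial([1.0, -1.0])   # p and 1 - p
    for states in itertools.product([True, False], repeat=len(edges)):
        up = [e for e, ok in zip(edges, states) if ok]
        reach, stack = {s}, [s]                # reachability over working edges
        while stack:
            u = stack.pop()
            for a, b in up:
                for x, y in ((a, b), (b, a)):
                    if x == u and y not in reach:
                        reach.add(y); stack.append(y)
        if t in reach:
            R += P**len(up) * Q**(len(edges) - len(up))
    return R

# Hypothetical 4-vertex fan: hub 0 joined to the path 1-2-3; terminals 1 and 3.
R = reliability_polynomial([(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)], 1, 3)
print(R)           # coefficients of R(p); R(0) = 0 and R(1) = 1 as expected
print(R.roots())   # complex zeros of the reliability polynomial
```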

    Alpha Entanglement Codes: Practical Erasure Codes to Archive Data in Unreliable Environments

    Data centres that use consumer-grade disk drives and distributed peer-to-peer systems are unreliable environments in which to archive data without enough redundancy. Most redundancy schemes are not completely effective for providing high availability, durability and integrity in the long term. We propose alpha entanglement codes, a mechanism that creates a virtual layer of highly interconnected storage devices to propagate redundant information across a large-scale storage system. Our motivation is to design flexible and practical erasure codes with high fault tolerance to improve data durability and availability even in catastrophic scenarios. By flexible and practical, we mean code settings that can be adapted to future requirements and practical implementations with reasonable trade-offs between security, resource usage and performance. The codes have three parameters. Alpha increases storage overhead linearly but increases the possible paths to recover data exponentially. Two other parameters increase fault tolerance even further without the need for additional storage. As a result, an entangled storage system can provide high availability and durability and offer additional integrity: it is more difficult to modify data undetectably. We evaluate how several redundancy schemes perform in unreliable environments and show that alpha entanglement codes are flexible and practical codes. Remarkably, they excel at code locality, hence they reduce repair costs and become less dependent on storage locations with poor availability. Our solution outperforms Reed-Solomon codes in many disaster recovery scenarios.
    Comment: The publication has 12 pages and 13 figures. This work was partially supported by Swiss National Science Foundation SNSF Doc.Mobility 162014; 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN).
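
    The full alpha entanglement construction (several interwoven strands controlled by the three code parameters) is not reproduced here; the hedged toy below shows a single XOR entanglement chain, enough to see how redundant information propagates so that a lost data block can be rebuilt from the two parities surrounding it. The helper names and single-strand setting are illustrative assumptions only.

```python
def entangle_chain(blocks):
    """Toy single-strand entanglement: parity[i] = blocks[i] XOR parity[i-1],
    with parity[-1] taken as all zeros. Blocks must have equal length."""
    parities, prev = [], bytes(len(blocks[0]))
    for blk in blocks:
        prev = bytes(x ^ y for x, y in zip(blk, prev))
        parities.append(prev)
    return parities

def recover_block(i, parities):
    """Rebuild data block i from the two parities adjacent to it in the chain."""
    left = parities[i - 1] if i > 0 else bytes(len(parities[0]))
    return bytes(x ^ y for x, y in zip(parities[i], left))

data = [b"blk0", b"blk1", b"blk2", b"blk3"]
par = entangle_chain(data)
assert recover_block(2, par) == data[2]   # a lost block is restored from the chain
```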

    On finding a minimum vertex cover of a series-parallel graph

    We present a simple linear-time algorithm for finding a minimum vertex cover of a series-parallel graph.
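
    The paper's own linear-time procedure is not reproduced here; as a sketch of one standard dynamic-programming approach (not necessarily the paper's), the code below walks a given series-parallel decomposition tree and keeps, for each subgraph, the best cover size under every combination of "terminal in / out of the cover". Producing the decomposition tree in linear time is assumed to happen elsewhere, and the tuple encoding is a convention made up for the example.

```python
INF = float("inf")

def min_vertex_cover(sp):
    """Minimum vertex cover size of a two-terminal series-parallel graph given
    as a decomposition tree:
      ('e',)       -- a single edge between the two terminals
      ('s', A, B)  -- series composition (A's sink glued to B's source)
      ('p', A, B)  -- parallel composition (terminals identified pairwise)
    solve() returns f[a][b]: best size when source/sink are (a, b) in the cover."""
    def solve(node):
        if node[0] == 'e':
            return [[INF, 1], [1, 2]]       # the lone edge needs a covered endpoint
        fa, fb = solve(node[1]), solve(node[2])
        f = [[INF, INF], [INF, INF]]
        for a in (0, 1):
            for b in (0, 1):
                if node[0] == 's':          # shared middle vertex counted once
                    f[a][b] = min(fa[a][c] + fb[c][b] - c for c in (0, 1))
                else:                       # shared terminals counted once
                    f[a][b] = fa[a][b] + fb[a][b] - a - b
        return f
    return min(min(row) for row in solve(sp))

# A triangle, built as edge s-t in parallel with the series pair s-m, m-t.
triangle = ('p', ('e',), ('s', ('e',), ('e',)))
print(min_vertex_cover(triangle))   # 2: any two of the three vertices
```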

    From Multiview Image Curves to 3D Drawings

    Reconstructing 3D scenes from multiple views has made impressive strides in recent years, chiefly by correlating isolated feature points, intensity patterns, or curvilinear structures. In the general setting - without controlled acquisition, abundant texture, curves and surfaces following specific models or limiting scene complexity - most methods produce unorganized point clouds, meshes, or voxel representations, with some exceptions producing unorganized clouds of 3D curve fragments. Ideally, many applications require structured representations of curves, surfaces and their spatial relationships. This paper presents a step in this direction by formulating an approach that combines 2D image curves into a collection of 3D curves, with topological connectivity between them represented as a 3D graph. This results in a 3D drawing, which is complementary to surface representations in the same sense as a 3D scaffold complements a tent taut over it. We evaluate our results against ground truth on synthetic and real datasets.
    Comment: Expanded ECCV 2016 version with tweaked figures and including an overview of the supplementary material available at multiview-3d-drawing.sourceforge.ne
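
    The curve-drawing pipeline itself is not reproduced here; as one generic building block under assumed inputs (known 3x4 camera projection matrices and already matched 2D curve samples), the sketch below lifts corresponding image points to 3D with standard linear (DLT) triangulation, the kind of step multiview curve reconstruction builds on.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: matched pixel coordinates."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # null vector of A is the homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

def triangulate_curve(P1, P2, curve1, curve2):
    """Lift two matched 2D curves (lists of corresponding samples) to a 3D curve."""
    return np.array([triangulate_point(P1, P2, a, b) for a, b in zip(curve1, curve2)])

# Minimal noise-free check with two synthetic cameras and one known 3D point.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # reference camera
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # translated along x
X = np.array([0.2, 0.1, 4.0, 1.0])
x1 = (P1 @ X)[:2] / (P1 @ X)[2]
x2 = (P2 @ X)[:2] / (P2 @ X)[2]
print(triangulate_point(P1, P2, x1, x2))    # ~[0.2, 0.1, 4.0]
```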