1,706 research outputs found

    Extra Connectivity of Strong Product of Graphs

    Full text link
    The $g$-extra connectivity $\kappa_{g}(G)$ of a connected graph $G$ is the minimum cardinality of a set of vertices, if it exists, whose deletion makes $G$ disconnected and leaves each remaining component with more than $g$ vertices, where $g$ is a non-negative integer. The strong product $G_1 \boxtimes G_2$ of graphs $G_1$ and $G_2$ is the graph with vertex set $V(G_1 \boxtimes G_2)=V(G_1)\times V(G_2)$, where two distinct vertices $(x_{1}, y_{1}),(x_{2}, y_{2}) \in V(G_1)\times V(G_2)$ are adjacent in $G_1 \boxtimes G_2$ if and only if $x_{1}=x_{2}$ and $y_{1}y_{2}\in E(G_2)$, or $y_{1}=y_{2}$ and $x_{1}x_{2}\in E(G_1)$, or $x_{1}x_{2}\in E(G_1)$ and $y_{1}y_{2}\in E(G_2)$. In this paper, we give the $g\ (\leq 3)$-extra connectivity of $G_1\boxtimes G_2$, where $G_i$ is a maximally connected $k_i\ (\geq 2)$-regular graph for $i=1,2$. As a byproduct, we get the $g\ (\leq 3)$-extra conditional fault-diagnosability of $G_1\boxtimes G_2$ under the PMC model.
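
    As a concrete (and hedged) illustration of these definitions, the sketch below uses networkx to build a strong product and checks the g-extra connectivity by brute force; the helper g_extra_connectivity is our own naive routine, not an algorithm from the paper, and is only feasible for very small graphs.

        # Illustrative sketch: strong product via networkx and a brute-force
        # g-extra connectivity check (exponential; small graphs only).
        import itertools
        import networkx as nx

        G1 = nx.cycle_graph(5)         # maximally connected 2-regular graph
        G2 = nx.complete_graph(4)      # 3-regular graph
        P = nx.strong_product(G1, G2)  # vertices are pairs (x, y)
        print(nx.node_connectivity(P)) # classical connectivity kappa(P)

        def g_extra_connectivity(G, g):
            """Minimum |S| such that G - S is disconnected and every
            component of G - S has more than g vertices (brute force)."""
            for k in range(1, G.number_of_nodes()):
                for S in itertools.combinations(G.nodes, k):
                    H = G.copy()
                    H.remove_nodes_from(S)
                    comps = list(nx.connected_components(H))
                    if len(comps) >= 2 and all(len(c) > g for c in comps):
                        return k
            return None  # no such vertex set exists

        # Only try this on very small products.
        small = nx.strong_product(nx.cycle_graph(5), nx.complete_graph(2))
        print(g_extra_connectivity(small, 1))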

    Star Structure Connectivity of Folded hypercubes and Augmented cubes

    Full text link
    The connectivity is an important parameter to evaluate the robustness of a network. As a generalization, structure connectivity and substructure connectivity of graphs were proposed. For connected graphs $G$ and $H$, the $H$-structure connectivity $\kappa(G; H)$ (resp. $H$-substructure connectivity $\kappa^{s}(G; H)$) of $G$ is the minimum cardinality of a set $F$ of subgraphs of $G$, each isomorphic to $H$ (resp. to a connected subgraph of $H$), such that $G-F$ is disconnected or a singleton. As popular variants of hypercubes, the $n$-dimensional folded hypercubes $FQ_{n}$ and augmented cubes $AQ_{n}$ are attractive interconnection network prototypes for multiprocessor systems. In this paper, we obtain that $\kappa(FQ_{n};K_{1,m})=\kappa^{s}(FQ_{n};K_{1,m})=\lceil\frac{n+1}{2}\rceil$ for $2\leqslant m\leqslant n-1$, $n\geqslant 7$, and $\kappa(AQ_{n};K_{1,m})=\kappa^{s}(AQ_{n};K_{1,m})=\lceil\frac{n-1}{2}\rceil$ for $4\leqslant m\leqslant \frac{3n-15}{4}$.
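
    A hedged sketch of the objects involved: FQ_n can be built from the hypercube Q_n by adding an edge between every vertex and its bitwise complement. The construction below uses networkx; the closing arithmetic simply evaluates the paper's formula ceil((n+1)/2), which is consistent with the observation that a single K_{1,m} can meet the (n+1)-vertex neighbourhood of a fixed vertex in at most two vertices.

        # Illustrative sketch: folded hypercube FQ_n and the paper's formula.
        import math
        import networkx as nx

        def folded_hypercube(n):
            G = nx.hypercube_graph(n)                   # vertices are 0/1-tuples
            for v in list(G.nodes):
                G.add_edge(v, tuple(1 - b for b in v))  # complementary edge
            return G

        n = 7
        FQ = folded_hypercube(n)
        print(nx.node_connectivity(FQ))  # FQ_n is (n+1)-connected -> 8
        print(math.ceil((n + 1) / 2))    # kappa(FQ_n; K_{1,m}) from the paper -> 4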

    Alpha Entanglement Codes: Practical Erasure Codes to Archive Data in Unreliable Environments

    Full text link
    Data centres that use consumer-grade disk drives and distributed peer-to-peer systems are unreliable environments in which to archive data without enough redundancy. Most redundancy schemes are not completely effective for providing high availability, durability and integrity in the long term. We propose alpha entanglement codes, a mechanism that creates a virtual layer of highly interconnected storage devices to propagate redundant information across a large-scale storage system. Our motivation is to design flexible and practical erasure codes with high fault tolerance to improve data durability and availability even in catastrophic scenarios. By flexible and practical, we mean code settings that can be adapted to future requirements and practical implementations with reasonable trade-offs between security, resource usage and performance. The codes have three parameters. Alpha increases storage overhead linearly but increases the possible paths to recover data exponentially. Two other parameters increase fault tolerance even further without the need for additional storage. As a result, an entangled storage system can provide high availability and durability and offer additional integrity: it is more difficult to modify data undetectably. We evaluate how several redundancy schemes perform in unreliable environments and show that alpha entanglement codes are flexible and practical codes. Remarkably, they excel at code locality, hence they reduce repair costs and become less dependent on storage locations with poor availability. Our solution outperforms Reed-Solomon codes in many disaster recovery scenarios. Comment: 12 pages, 13 figures. This work was partially supported by Swiss National Science Foundation SNSF Doc.Mobility 162014; 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN).
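
    The abstract describes the mechanism only at a high level, so the following is a toy, single-strand illustration of the general entanglement idea (each parity is the XOR of a new data block with the previous parity), not the actual alpha entanglement construction, which interleaves several strands controlled by alpha and the two other parameters.

        # Toy single-strand XOR "entanglement" chain (illustration only).
        def xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def entangle(blocks, block_size=4):
            parities = [bytes(block_size)]             # p_0 = all zeros
            for d in blocks:
                parities.append(xor(d, parities[-1]))  # p_i = d_i XOR p_{i-1}
            return parities[1:]

        data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
        p = entangle(data)

        # A lost data block is rebuilt from the two parities around it:
        # d_i = p_i XOR p_{i-1}  (and d_0 = p_0).
        recovered = xor(p[2], p[1])
        assert recovered == data[2]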

    Static reliability and resilience in dynamic systems

    Get PDF
    Two systems are modeled in this thesis. First, we consider a multi-component stochastic monotone binary system, or SMBS for short. The reliability of an SMBS is the probability of correct operation. A statistical approximation of the system reliability, inspired by Monte Carlo methods, is provided for these systems. We then focus on the diameter-constrained reliability model (DCR), which was originally developed for delay-sensitive applications over the Internet infrastructure. The computational complexity of the DCR is analyzed. Networks with an efficient (i.e., polynomial-time) DCR computation are offered, termed Weak graphs. Second, we model the effect of a dynamic epidemic propagation. Our first approach is to develop a SIR-based simulation, where unrealistic assumptions of the SIR model (infinite, homogeneous, fully-mixed population) are discarded. Finally, we formalize a stochastic process that counts infected individuals, and further investigate node-immunization strategies subject to a budget constraint. A combinatorial optimization problem, called the Graph Fragmentation Problem, is introduced here. There, the impact of a highly virulent epidemic propagation is analyzed, and we mathematically prove that the greedy heuristic is suboptimal.
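
    A minimal sketch of the crude Monte Carlo idea mentioned above, assuming the common setting where each edge of a network fails independently and the structure function is s-t connectivity; the thesis treats general stochastic monotone binary systems and the diameter-constrained variant, so this is only an illustration.

        # Crude Monte Carlo estimate of two-terminal network reliability.
        import random
        import networkx as nx

        def mc_reliability(G, p, s, t, samples=20_000, rng=random.Random(1)):
            """Estimate P(s and t stay connected) when every edge works
            independently with probability p."""
            hits = 0
            for _ in range(samples):
                H = nx.Graph()
                H.add_nodes_from(G.nodes)
                H.add_edges_from(e for e in G.edges if rng.random() < p)
                hits += nx.has_path(H, s, t)
            return hits / samples

        # For the diameter-constrained model one would instead test
        # nx.shortest_path_length(H, s, t) <= d whenever a path exists.
        print(mc_reliability(nx.petersen_graph(), p=0.9, s=0, t=5))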

    Evaluation of WSN technology for dependable monitoring in water environments

    Get PDF
    Master's thesis, Informatics Engineering (Computer Architecture and Networks), Universidade de Lisboa, Faculdade de Ciências, 2019. A few problems arise when trying to reliably monitor a surrounding environment by the use of sensors and a wireless network to disseminate the gathered information. In the context of an aquatic environment, the undulation and the low predictability of the surrounding environment can cause faults in the transmission of data. The initial motivation for the work developed in this thesis was the Aquamon project. Aquamon is a project whose objective is the deployment of a dependable Wireless Sensor Network (WSN) for water quality monitoring and the study of tidal movements. Therefore, Aquamon, like any other WSN, will have to go through the process of choosing a technology that meets its application requirements as well as the requirements imposed by the deployment environment. WSNs can have constraints when it comes to the Quality of Service and availability they can provide. These networks generally have a set of requirements that need to be satisfied. Thus, there needs to be a selection of one (or multiple) wireless technologies that can satisfy said requirements. This selection process is usually done in an ad-hoc way, weighing the advantages and disadvantages of different possible solutions with respect to some requirements, often using empirical knowledge or simply dictated by the designer's preference for some particular technology. When several functional and non-functional requirements have to be addressed, finding an optimal or close-to-optimal solution may become a hard problem. This thesis proposes a methodology for addressing this optimization problem in an automated way. It considers various application requirements and the characteristics of the available technologies (including Sigfox, LoRa, NB-IoT, ZigBee and ZigBee Pro) and delivers the solution that best satisfies the requirements. It illustrates how the methodology is applied to a specific use case of WSN-based environmental monitoring in Seixal Bay.
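
    The abstract does not spell out the selection algorithm, so the sketch below is only one plausible shape for such an automated step: a feasibility filter over hard requirements followed by a weighted score. The technology names come from the abstract, but the attribute values and weights are illustrative placeholders, not figures from the thesis.

        # Hypothetical requirement-driven technology selection (placeholder data).
        candidates = {
            "Sigfox": {"range_km": 10,  "data_rate_kbps": 0.1, "latency_s": 30},
            "LoRa":   {"range_km": 5,   "data_rate_kbps": 5,   "latency_s": 5},
            "NB-IoT": {"range_km": 10,  "data_rate_kbps": 60,  "latency_s": 2},
            "ZigBee": {"range_km": 0.1, "data_rate_kbps": 250, "latency_s": 0.1},
        }
        minimums = {"range_km": 3, "data_rate_kbps": 1}       # hard requirements
        weights = {"data_rate_kbps": 1.0, "latency_s": -0.5}  # soft preferences

        def feasible(tech):
            return all(tech[k] >= v for k, v in minimums.items())

        def score(tech):
            return sum(w * tech[k] for k, w in weights.items())

        best = max((name for name, t in candidates.items() if feasible(t)),
                   key=lambda name: score(candidates[name]))
        print(best)  # "NB-IoT" under these placeholder numbers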

    Identification of key players in networks using multi-objective optimization and its applications

    Get PDF
    Identification of a set of key players is of interest in many disciplines such as sociology, politics, finance, and economics. Although many algorithms have been proposed to identify a set of key players, each emphasizes a single objective of interest. Consequently, the prevailing deficiency of each of these methods is that they perform well only when we consider their objective of interest as the only characteristic that the set of key players should have. But in complicated real-life applications, we need a set of key players that performs well with respect to multiple objectives of interest. In this dissertation, a new perspective for key player identification is proposed, based on optimizing multiple objectives of interest. The proposed approach is useful in identifying both key nodes and key edges in networks. Experimental results show that the sets of key players which optimize multiple objectives perform better than the key players identified using existing algorithms, in multiple applications such as the eventual influence limitation problem, the immunization problem, and improving the fault tolerance of the smart grid. We utilize multi-objective optimization algorithms to optimize a set of objectives for a particular application. A large number of solutions are obtained when the number of objectives is high and the objectives are uncorrelated, but decision-makers usually require one or two solutions for their applications. In addition, the computational time required for multi-objective optimization increases with the number of objectives. A novel approach to obtain a subset of the Pareto-optimal solutions is proposed and shown to alleviate the aforementioned problems. As the size and the complexity of networks increase, so does the computational effort needed to compute network analysis measures. We show that degree-centrality-based network sampling can be used to reduce running times without compromising the quality of the key nodes obtained.
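
    As a hedged sketch of the multi-objective idea (not the dissertation's actual algorithm), the snippet below scores each node on two centrality objectives and keeps the Pareto-optimal nodes as candidate key players; any multi-objective optimizer could replace the naive Pareto filter.

        # Naive two-objective Pareto filter over node centralities.
        import networkx as nx

        def pareto_front(scores):
            """scores: {node: (obj1, obj2)}; both objectives are maximized."""
            front = []
            for n, s in scores.items():
                dominated = any(all(t[i] >= s[i] for i in range(len(s))) and t != s
                                for m, t in scores.items() if m != n)
                if not dominated:
                    front.append(n)
            return front

        G = nx.karate_club_graph()
        deg = nx.degree_centrality(G)
        btw = nx.betweenness_centrality(G)
        key_players = pareto_front({v: (deg[v], btw[v]) for v in G})
        print(key_players)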