398 research outputs found

    Traffic-driven Epidemic Spreading in Finite-size Scale-Free Networks

    The study of complex networks sheds light on the relation between the structure and function of complex systems. One remarkable result is the absence of an epidemic threshold in infinite-size scale-free networks, which implies that any infection will propagate indefinitely regardless of the spreading rate. The vast majority of current theoretical approaches assume that infections are transmitted as a reaction process from nodes to all their neighbors. Here we adopt a different perspective and show that epidemic incidence is shaped by traffic-flow conditions. Specifically, we consider the scenario in which epidemic pathways are defined and driven by flows. Through extensive numerical simulations and theoretical predictions, we show that the value of the epidemic threshold in scale-free networks depends directly on flow conditions, in particular on the first and second moments of the betweenness distribution for a given routing protocol. We consider scenarios in which the delivery capability of the nodes is either bounded or unbounded. In both cases, the threshold values depend on the traffic and decrease as flow increases. Bounded delivery provokes the emergence of congestion, slowing down the spreading of the disease and setting a limit on the epidemic incidence. Our results provide a general conceptual framework for understanding spreading processes on complex networks. (Comment: final version to be published in Proceedings of the National Academy of Sciences USA.)
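The threshold above depends on the first and second moments of the betweenness distribution under the routing protocol. As a rough, hedged sketch of how one might estimate such a threshold numerically, the snippet below computes shortest-path betweenness with Brandes' algorithm on a crude preferential-attachment graph and evaluates a threshold of the form ⟨b⟩/(λ⟨b²⟩); the exact proportionality, the traffic rate λ, and the toy graph generator are our own assumptions for illustration, not the paper's precise model.

```python
from collections import deque, defaultdict
import random

def betweenness(adj):
    """Brandes' algorithm: unnormalised shortest-path betweenness (directed counts)."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        stack, preds = [], defaultdict(list)
        sigma = dict.fromkeys(adj, 0.0); sigma[s] = 1.0
        dist = dict.fromkeys(adj, -1); dist[s] = 0
        queue = deque([s])
        while queue:                      # BFS phase: count shortest paths
            v = queue.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = dict.fromkeys(adj, 0.0)
        while stack:                      # accumulation phase
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

def pref_attach_graph(n, m, seed=1):
    """Very crude preferential attachment, only meant to produce a heavy-tailed
    degree (and betweenness) distribution for this illustration."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    endpoints = list(range(m))
    for v in range(m, n):
        targets = {rng.choice(endpoints) for _ in range(m)} if v > m else set(range(m))
        for t in targets:
            adj[v].add(t); adj[t].add(v)
            endpoints.extend([v, t])
    return adj

adj = pref_attach_graph(200, 2)
b = list(betweenness(adj).values())
b1 = sum(b) / len(b)                      # first moment  <b>
b2 = sum(x * x for x in b) / len(b)       # second moment <b^2>
lam = 5.0                                 # assumed packet-generation rate (flow level)
beta_c = b1 / (lam * b2)                  # threshold falls as the flow lam grows
```

Doubling `lam` halves this estimate of the threshold, which is the qualitative behaviour the abstract reports (threshold decreasing with traffic).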

    Exploiting the power of multiplicity: a holistic survey of network-layer multipath

    The Internet is inherently a multipath network: an underlying network with only a single path between nodes would be debilitatingly fragile. Unfortunately, traditional Internet technologies have been designed around the restrictive assumption of a single working path between a source and a destination. The lack of native multipath support constrains network performance even when the underlying network is richly connected and has multiple redundant paths. Computer networks can exploit the power of multiplicity, through which a diverse collection of paths is pooled as a single resource, to unlock the inherent redundancy of the Internet. This opens up a new vista of opportunities, promising increased throughput (through concurrent use of multiple paths) and increased reliability and fault tolerance (through the use of multiple paths in backup/redundant arrangements). Many emerging trends in networking signify that the Internet's future will be multipath, including the use of multipath technology in data center computing; the ready availability of multiple heterogeneous radio interfaces (such as Wi-Fi and cellular) in wireless devices; the ubiquity of mobile devices that are multihomed across heterogeneous access networks; and the development and standardization of multipath transport protocols such as Multipath TCP. The aim of this paper is to provide a comprehensive survey of the literature on network-layer multipath solutions. We present a detailed investigation of two important design issues, namely the control-plane problem of how to compute and select routes and the data-plane problem of how to split the flow over the computed paths. The main contribution of this paper is a systematic articulation of the main design issues in network-layer multipath routing, along with a broad-ranging survey of the vast literature on network-layer multipathing. We also highlight open issues and identify directions for future work.
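The data-plane problem named above, splitting a flow over the computed paths, is commonly solved with per-flow rather than per-packet splitting so that a single TCP connection is not reordered. A minimal sketch, with path names and weights that are purely illustrative:

```python
import hashlib

def pick_path(flow_id, paths, weights):
    """Weighted per-flow splitting: hash the flow identifier into the cumulative
    weight range, so every packet of one flow takes the same path (no
    reordering) while traffic overall is spread in proportion to the weights."""
    total = sum(weights)
    bucket = int(hashlib.sha1(flow_id.encode()).hexdigest(), 16) % total
    acc = 0
    for path, w in zip(paths, weights):
        acc += w
        if bucket < acc:
            return path

# illustrative only: two hypothetical paths with a 3:1 load split
paths = ["path_via_A", "path_via_B"]
chosen = pick_path("10.0.0.1:443->10.0.0.2:80/tcp", paths, [3, 1])
```

One design caveat: a plain modulo hash reassigns many flows when a path is added or removed; consistent hashing is the usual remedy.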

    Navigability and synchronization in complex networks: a computational approach

    Complex networks are a powerful tool for studying many real systems, partly thanks to the increasing capacity of computational resources. In this dissertation we computationally address a broad scope of problems framed in two parts. The first part is motivated by the issues posed by the rapid evolution of the Internet. On one side, the exponential growth of the network is compromising its scalability due to dependencies on global routing tables. In Chapter 4 we propose a decentralized routing scheme that exploits the TSVD projection of the mesoscopic structure of the network as a map. The results show that, using only local information, we can achieve good success rates in the routing process. Additionally, in Chapter 3 we evaluate the reliability of this projection when the network topology changes. The results indicate that this map is very robust and does not need continual updates. On the other side, the increasing bandwidth demand is a potential trigger for congestion episodes. In Chapter 5 we extend a dynamic traffic-aware routing scheme to the context of multiplex networks, and we conduct the analysis on synthetic networks with different coupling assortativities. The results show that considering the traffic load in the transmission process delays the onset of congestion. However, the uniform distribution of traffic produces an abrupt phase transition from the free-flow to the congested state. Overall, assortative coupling emerges as the best option for optimal network designs. The second part is motivated by the current global financial crisis. Chapter 6 presents a study of the spreading of economic crises using a simple model of networked integrate-and-fire oscillators, characterizing the synchronization process on the evolving trade network. The results show the emergence of a globalization process that dilutes topological borders and accelerates the spreading of financial crashes.
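The Chapter 4 scheme uses a TSVD projection of the network as a map for local routing. The toy below is our own minimal reading of that idea, not the thesis's implementation: embed nodes via the SVD of the adjacency matrix and forward greedily to the neighbour closest to the destination. The three-node example keeps all SVD components (a real map would truncate to k much smaller than N), and greedy delivery is not guaranteed in general.

```python
import numpy as np

def svd_embedding(A, k):
    """Node coordinates from a rank-k (truncated) SVD of the adjacency matrix."""
    U, s, _ = np.linalg.svd(A)
    return U[:, :k] * s[:k]

def greedy_route(adj, emb, src, dst, max_hops=20):
    """Forward, at each hop, to the neighbour whose embedding is closest to the
    destination's: purely local decisions, no global routing table."""
    path, cur = [src], src
    for _ in range(max_hops):
        if cur == dst:
            return path
        cur = min(adj[cur], key=lambda v: np.linalg.norm(emb[v] - emb[dst]))
        path.append(cur)
    return None  # greedy routing can fail; there is no delivery guarantee

# toy triangle graph; k=3 keeps every component for this tiny example
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
emb = svd_embedding(A, k=3)
route = greedy_route(adj, emb, 0, 2)
```

The appeal of this family of schemes is that each node stores only its neighbours' coordinates, so the per-node state is O(k · degree) rather than a global routing table.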

    Complex Systems: Nonlinearity and Structural Complexity in spatially extended and discrete systems

    Abstract: This doctoral thesis addresses the study of systems of many interacting elements (discrete systems). The phenomenology present in these systems arises from two fundamental ingredients: (i) Dynamical complexity: the equations of motion governing the evolution of the constituents are nonlinear, so analytical solutions can rarely be found. Different types of dynamical trajectories may coexist in the phase space of these systems (multistability), and its topology can vary enormously depending on the parameters used in the equations. The conjunction of nonlinear dynamics and systems with many degrees of freedom (such as those studied here) gives rise to emergent properties such as spatially localized solutions, synchronization, spatio-temporal chaos, pattern formation, etc. (ii) Structural complexity: this refers to a high degree of randomness in the pattern of interactions between the components. In most of the systems studied, this randomness appears in such a way that the influence of the environment on a single element of the system cannot be described by a mean-field approximation. These two ingredients of extended systems are studied separately (Parts I and II of this thesis) and jointly (Part III). While the phenomenology introduced by each source of complexity has been the subject of extensive independent study over recent years, their conjunction opens up an enormously promising field, in which the interdisciplinarity of the application domains demands a broad effort from several scientific communities. In particular, this is the case for the study of dynamics in biological systems, whose analysis is difficult to tackle with techniques exclusive to biochemistry, statistical physics, or mathematical physics. In short, the goal of this thesis is to study separately two sources of complexity inherent to many systems of interest so as, finally, to be in a position to attack with new perspectives problems relevant to the physics of cellular processes, neuroscience, evolutionary dynamics, etc.
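Among the emergent properties mentioned above, synchronization has a standard minimal model: Kuramoto phase oscillators coupled on an interaction graph. The sketch below is our own illustration, not material from the thesis; it shows the order parameter rising toward 1 for identical oscillators under strong all-to-all coupling, the simplest case where a mean-field description still works (random interaction patterns are exactly where it breaks down, as the abstract notes).

```python
import math, random

def kuramoto_step(theta, omega, K, adj, dt=0.05):
    """One Euler step of the Kuramoto model on a graph:
    dtheta_i/dt = omega_i + (K / k_i) * sum over neighbours j of sin(theta_j - theta_i)."""
    out = []
    for i, th in enumerate(theta):
        coupling = sum(math.sin(theta[j] - th) for j in adj[i])
        out.append(th + dt * (omega[i] + K * coupling / max(len(adj[i]), 1)))
    return out

def order_parameter(theta):
    """r = |mean of exp(i * theta)|; r -> 1 means full phase synchrony."""
    n = len(theta)
    return math.hypot(sum(math.cos(t) for t in theta) / n,
                      sum(math.sin(t) for t in theta) / n)

rng = random.Random(42)
N = 20
theta = [rng.uniform(0, 2 * math.pi) for _ in range(N)]
omega = [0.0] * N                                             # identical oscillators
adj = {i: [j for j in range(N) if j != i] for i in range(N)}  # all-to-all coupling
for _ in range(300):
    theta = kuramoto_step(theta, omega, K=5.0, adj=adj)
r_final = order_parameter(theta)   # close to 1: the population has synchronized
```

Replacing `adj` with a random or scale-free interaction graph is precisely the "structural complexity" ingredient, and the synchronization transition then depends on the topology rather than on a single mean field.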

    Self-organisation in ant-based peer-to-peer systems

    Peer-to-peer systems are a highly decentralised form of distributed computing, which has advantages of robustness and redundancy over more centralised systems. When the peer-to-peer system has a stable and static population of nodes, variations and bursts in traffic levels cause momentary congestion in the system, which must be dealt with by routing policies implemented within the peer-to-peer system in order to maintain efficient and effective routes. Peer-to-peer systems, however, are dynamic in nature, as they exhibit churn, i.e. nodes enter and leave the system during its use. This dynamic nature makes it difficult to identify consistent routing policies that ensure a reasonable proportion of traffic in the system is routed successfully to its destination. Studies have shown that churn in peer-to-peer systems is difficult to model and characterise and, further, is difficult to manage. The task of creating and maintaining efficient routes and network topologies in dynamic environments, such as those described above, is one of dynamic optimisation. Complex adaptive systems such as ant colony optimisation and genetic algorithms have been shown to display adaptive properties in dynamic environments. Although complex adaptive systems have been applied to a small number of dynamic optimisation problems, their application to dynamic optimisation problems in general, and to routing in dynamic environments in particular, is new. Further, the problem characteristics and conditions under which these algorithms perform well, and the reasons for doing so, are not yet fully understood. The assessment of how good complex adaptive systems are at creating solutions to the dynamic routing optimisation problem detailed above depends on the metrics used to make the measurements. A contribution of this thesis is the development of a theoretical framework within which we can analyse the behaviours and responses of any peer-to-peer system.
We do this by considering a peer-to-peer system to be a graph-generating algorithm, which has input parameters and outputs that can be measured using topological metrics and statistics that characterise the traffic through the network. Specifically, we consider the behaviour of an ant-based peer-to-peer system, and we have designed and implemented an ant-based peer-to-peer simulator to enable this. Recently, methods for characterising graphs by their scaling properties have been developed, and a small number of distinct categories of graphs have been identified (such as random graphs, lattices, small-world graphs, and scale-free graphs). These graph characterisation methods have also enabled the creation of new metrics for measuring properties of graphs belonging to different categories. We use these graph characterisation techniques and the associated metrics to implement a systematic approach to the analysis of the behaviour of our ant peer-to-peer system. We present the results of a number of simulation runs of our system initiated with a range of values of key parameters. The resulting networks are then analysed from the point of view of both traffic statistics and topological metrics. Three sets of experiments have been designed and conducted using the simulator created during this project. The first set, equilibrium experiments, considers the behaviour of the system when both the number of operational nodes and the demand placed on the system are constant. The second set considers the changes that occur when there are bursts in traffic levels or in the demand placed on the system. The final set considers the effect of churn in the system, where nodes enter and leave the system during its operation.
In crafting the experiments we have been able to identify many of the major control parameters of the ant-based peer-to-peer system. A further contribution of this thesis is the experimental results, which show that under conditions of network congestion the ant peer-to-peer system becomes very brittle. This brittleness is characterised by small average path lengths, a low proportion of ants successfully getting through to their destination node, and a low average degree of the nodes in the network; it is made worse when nodes fail and when the demand applied to the system changes abruptly. A further contribution of this thesis is a method of ranking the topology of a network with respect to a target topology. This method can be used as the basis for topological control (i.e. the distributed self-assembly of network topologies within a peer-to-peer system that have desired topological properties) and for assessing how best to modify a topology in order to move it closer to the desired (or reference) topology. We use this method when measuring the outcome of our experiments to determine how far the resulting graph is from a random graph. In principle, this method could be used to measure the distance of the peer-to-peer network's graph from any reference topology (e.g. a lattice or a tree). A final contribution of this thesis is the definition of a distributed routing policy that uses a measure of confidence that nodes in the system are operational when making onward-routing decisions. The method of implementing the routing algorithm within the ant peer-to-peer system has been specified, although it has not been implemented within this thesis.
It is conjectured that this algorithm would improve the performance of the ant peer-to-peer system under conditions of churn. The main question this thesis is concerned with is how the behaviour of the ant-based peer-to-peer system can best be measured using a simulation-based approach, and how these measurables can be used to control and optimise its performance in conditions of equilibrium and non-equilibrium (specifically, varying levels of bursts in traffic demand and varying rates of nodes entering and leaving the peer-to-peer system).
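The ant-based routing studied in this thesis can be sketched with a pheromone table per node: next hops are chosen probabilistically by pheromone level, delivered ants reinforce their path (more strongly for shorter paths), and evaporation lets the system forget stale routes after churn. All parameter values and the toy topology below are illustrative assumptions of ours, not those of the simulator described here.

```python
import random

class AntRouter:
    """Minimal ant-colony routing: tau[v][w] is the pheromone on edge v -> w."""
    def __init__(self, adj, evaporation=0.05, deposit=1.0):
        self.adj = adj
        self.evap = evaporation
        self.deposit = deposit
        self.tau = {v: {w: 1.0 for w in nbrs} for v, nbrs in adj.items()}

    def next_hop(self, node, rng):
        nbrs = list(self.tau[node])
        return rng.choices(nbrs, weights=[self.tau[node][w] for w in nbrs])[0]

    def send_ant(self, src, dst, rng, ttl=30):
        path, cur = [src], src
        while cur != dst and len(path) <= ttl:
            cur = self.next_hop(cur, rng)
            path.append(cur)
        if cur != dst:
            return None                      # ant expired; nothing reinforced
        for v in self.tau:                   # evaporation: forget stale routes
            for w in self.tau[v]:
                self.tau[v][w] *= 1.0 - self.evap
        if len(path) > 1:                    # reinforce, favouring short paths
            bonus = self.deposit / (len(path) - 1)
            for a, b in zip(path, path[1:]):
                self.tau[a][b] += bonus
        return path

# toy 4-node ring: two routes from 0 to 3 (direct edge vs the long way round)
router = AntRouter({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}})
rng = random.Random(7)
delivered = [p for p in (router.send_ant(0, 3, rng) for _ in range(200)) if p]
```

With enough ants, positive feedback typically concentrates pheromone on the direct edge, though the process is stochastic, and under congestion or churn this same feedback loop is what can make the system brittle.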

    Design Methodologies and CAD Tools for Leakage Power Optimization in FPGAs

    The scaling of CMOS technology has precipitated an exponential increase in both subthreshold and gate leakage currents in modern VLSI designs. Consequently, the contribution of leakage power to total chip power dissipation is increasing rapidly; it is estimated at 40% for current technology generations and is expected to exceed 50% at the 65nm CMOS node. In FPGAs, the power dissipation problem is further aggravated compared to ASIC designs because FPGAs use more transistors per logic function. Consequently, solving the leakage power problem is pivotal to devising power-aware FPGAs in the nanometer regime. This thesis focuses on devising both architectural and CAD techniques for leakage mitigation in FPGAs, proposing several CAD and architectural modifications to reduce the impact of leakage power dissipation on modern FPGAs. Firstly, multi-threshold CMOS (MTCMOS) techniques are introduced to FPGAs to permanently turn OFF their unused resources; FPGAs are characterized by low utilization percentages that can reach 60%. Moreover, such an architecture enables the dynamic shutting down of idle parts of the FPGA, thus reducing standby leakage significantly. Employing the MTCMOS technique in FPGAs requires several changes to the FPGA architecture, including the placement and routing of the sleep signals and the choice of MTCMOS granularity. On the CAD level, the packing and placement stages are modified to allow the idle parts of the FPGA to be turned OFF dynamically. A new activity generation algorithm is proposed and implemented that identifies the logic blocks in a design that exhibit similar idleness periods. Several criteria are used in the activity generation algorithm, including connectivity and logic function, and several versions of the algorithm are implemented to trade power savings against runtime.
A newly developed packing algorithm uses the resulting activities to minimize leakage power dissipation by packing logic blocks with similar or close activities together. By proposing an FPGA architecture that supports MTCMOS and developing a CAD tool that supports the new architecture, an average power saving of 30% is achieved for a 90nm CMOS process while incurring a speed penalty of less than 5%. This technique is further extended to provide a timing-sensitive version of the CAD flow that varies the speed penalty according to the criticality of each logic block. Secondly, a new technique for leakage power reduction in FPGAs based on input dependency is developed. Both subthreshold and gate leakage power are heavily dependent on the input state. In FPGAs, the effect of input dependency is exacerbated by the use of pass-transistor multiplexer logic, which can exhibit up to 50% variation in leakage power across input states. In this thesis, a new algorithm is proposed that uses bit permutation to reduce subthreshold and gate leakage power dissipation in FPGAs. The bit permutation algorithm provides an average leakage power reduction of 40% while having less than 2% impact on performance and no penalty on design area. Thirdly, an accurate probabilistic power model for FPGAs is developed to quantify the savings from the proposed leakage power reduction techniques. The proposed power model accounts for dynamic, short-circuit, and leakage power (including both subthreshold and gate leakage) dissipation in FPGAs. Moreover, the model accounts for power due to glitches, which make up almost 20% of the dynamic power dissipation in FPGAs. The use of probabilities makes the model more computationally efficient than other FPGA power models in the literature, which rely on long input-sequence simulations.
One of the main advantages of the proposed power model is the incorporation of spatial correlation when estimating signal probabilities. Other probabilistic FPGA power models assume spatial independence among the design signals, thus overestimating the power calculations. In the proposed model, a probabilistic formulation captures spatial correlations among the design signals, and a further variation captures most of the spatial correlations with minimal impact on runtime. Furthermore, the proposed power model accounts for the input dependency of subthreshold and gate leakage power dissipation. Compared with HSpice simulation, the estimated power is within 8% and is closer to HSpice results than other probabilistic FPGA power models by an average of 20%.
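The role of spatial correlation in the probabilistic power model can be seen in a toy signal-probability calculation. Under the common independence assumption, gate output probabilities compose as below; reconvergent fanout breaks that assumption, which is the kind of error the thesis's correlation-aware model addresses. The specific gates and numbers are our own illustration, not the thesis's model.

```python
# static signal probabilities under the spatial-independence assumption
def and_p(pa, pb): return pa * pb
def or_p(pa, pb):  return pa + pb - pa * pb
def not_p(pa):     return 1.0 - pa

# zero-delay switching activity of a signal with static probability p
def switching_activity(p): return 2.0 * p * (1.0 - p)

# reconvergent fanout: y = a AND a is logically just a, but independence
# treats the two fan-out branches of a as unrelated signals
pa = 0.5
naive = and_p(pa, pa)   # independence assumption gives 0.25
exact = pa              # the correct signal probability is 0.5
```

Feeding the wrong probability into the activity estimate (0.375 vs 0.5 expected toggles per cycle in this toy) is exactly how an independence assumption distorts power figures; correlation-aware propagation avoids it.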

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the future scope of AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    A complete design path for the layout of flexible macros

    XIV+172 pages; 24 cm.