
    Compact Routing on Internet-Like Graphs

    The Thorup-Zwick (TZ) routing scheme is the first generic stretch-3 routing scheme delivering a nearly optimal local memory upper bound. Using both direct analysis and simulation, we calculate the stretch distribution of this routing scheme on random graphs with power-law node degree distributions, $P_k \sim k^{-\gamma}$. We find that the average stretch is very low and virtually independent of $\gamma$. In particular, for the Internet interdomain graph with $\gamma \sim 2.1$, the average stretch is around 1.1, with up to 70% of paths being shortest. As the network grows, the average stretch slowly decreases. The routing table is very small, too: it is well below its upper bounds, and its size is around 50 records for $10^4$-node networks. Furthermore, we find that both the average shortest path length (i.e. distance) $\bar{d}$ and the width of the distance distribution $\sigma$ observed in the real Internet inter-AS graph have values very close to the minima of the average stretch in the $\bar{d}$- and $\sigma$-directions. This leads us to the discovery of a unique critical quasi-stationary point of the average TZ stretch as a function of $\bar{d}$ and $\sigma$. The Internet distance distribution is located in a close neighborhood of this point. This observation suggests that the analytical structure of the average stretch function may be an indirect indicator of some hidden optimization criteria influencing the Internet's interdomain topology evolution. Comment: 29 pages, 16 figures.
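
    The full scheme is intricate, but the flavor of the experiment is easy to reproduce. The sketch below (Python with networkx assumed available) estimates the average stretch of a simplified landmark-based rule in the spirit of stretch-3 TZ routing: route along the shortest path when the destination lies inside the source's ball, otherwise via the destination's nearest landmark. It is a rough illustration under assumed parameters, not the authors' implementation, and the Barabási-Albert generator is only a stand-in for a $\gamma \sim 2.1$ power-law graph.

```python
import random
import networkx as nx

def tz_like_stretch(G, n_pairs=500, seed=42):
    """Estimate average stretch of a simplified landmark-based routing rule on G."""
    rng = random.Random(seed)
    nodes = list(G.nodes())
    # Landmark set of size ~ sqrt(n), as in the stretch-3 TZ construction.
    landmarks = rng.sample(nodes, max(1, int(len(nodes) ** 0.5)))
    # Shortest-path distances from every landmark to all nodes (one BFS per landmark).
    dist = {l: nx.single_source_shortest_path_length(G, l) for l in landmarks}

    stretches = []
    for _ in range(n_pairs):
        u, v = rng.sample(nodes, 2)
        try:
            d_uv = nx.shortest_path_length(G, u, v)
        except nx.NetworkXNoPath:
            continue
        # Distance from u to its nearest landmark, and v's nearest landmark.
        d_u_A = min(dist[l].get(u, float("inf")) for l in landmarks)
        lv = min(landmarks, key=lambda l: dist[l].get(v, float("inf")))
        # Route directly if v lies inside u's ball, otherwise via v's nearest landmark.
        route = d_uv if d_uv < d_u_A else dist[lv].get(u, float("inf")) + dist[lv].get(v, float("inf"))
        stretches.append(route / d_uv)
    return sum(stretches) / len(stretches)

if __name__ == "__main__":
    # Stand-in scale-free topology (Barabasi-Albert has gamma ~ 3, not 2.1).
    G = nx.barabasi_albert_graph(10_000, 2, seed=1)
    print("average stretch:", tz_like_stretch(G))
```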

    Emergent behaviors in the Internet of things: The ultimate ultra-large-scale system

    To reach its potential, the Internet of Things (IoT) must break down the silos that limit applications' interoperability and hinder their manageability. Doing so leads to the building of ultra-large-scale systems (ULSS) in several areas, including autonomous vehicles, smart cities, and smart grids. The scope of ULSS is both large and complex. Thus, the authors propose Hierarchical Emergent Behaviors (HEB), a paradigm that builds on the concepts of emergent behavior and hierarchical organization. Rather than explicitly programming all possible decisions in the vast space of ULSS scenarios, HEB relies on the emergent behaviors induced by local rules at each level of the hierarchy. The authors discuss the modifications to classical IoT architectures required by HEB, as well as the new challenges. They also illustrate the HEB concepts in reference to autonomous vehicles. This use case paves the way to the discussion of new lines of research. Damian Roca's work was supported by a Doctoral Scholarship provided by Fundación La Caixa. This work has been supported by the Spanish Government (Severo Ochoa grant SEV2015-0493) and by the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P).
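
    As a purely illustrative toy (not code from the paper), the sketch below shows the HEB idea in miniature: vehicles follow a local speed-matching rule, while a platoon-level rule nudges the group toward a corridor target, so the global behavior emerges from layered local rules rather than one global controller. All class and rule names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Vehicle:
    speed: float

    def apply_local_rule(self, neighbors: List["Vehicle"]) -> None:
        # Level-1 rule: move toward the average speed of nearby vehicles.
        if neighbors:
            avg = sum(v.speed for v in neighbors) / len(neighbors)
            self.speed += 0.5 * (avg - self.speed)

@dataclass
class Platoon:
    vehicles: List[Vehicle] = field(default_factory=list)

    def step(self, target_speed: float) -> None:
        # Level-2 rule: the platoon nudges members toward a corridor-wide target,
        # but each vehicle still reacts only to its local neighborhood.
        for i, v in enumerate(self.vehicles):
            neighbors = self.vehicles[max(0, i - 1):i] + self.vehicles[i + 1:i + 2]
            v.apply_local_rule(neighbors)
            v.speed += 0.1 * (target_speed - v.speed)

if __name__ == "__main__":
    platoon = Platoon([Vehicle(speed=s) for s in (20.0, 30.0, 25.0, 35.0)])
    for _ in range(20):
        platoon.step(target_speed=28.0)
    print([round(v.speed, 1) for v in platoon.vehicles])  # speeds converge
```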

    Diffusive capture processes for information search

    We show how effectively diffusive capture processes (DCP) on complex networks can be applied to information search in those networks. Numerical simulations show that our method generates only 2% of the traffic of the most popular flooding-based query-packet-forwarding (FB) algorithm. We find that the average searching time of our model scales better than that of the well-known $n$-random-walker model and is comparable to that of the FB algorithm, both on the real Gnutella network and on scale-free networks with $\gamma = 2.4$. We also discuss the possible relationship between the average searching time and the second moment of the degree distribution of the networks.
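
    For context, the sketch below contrasts the two baselines named in the abstract, TTL-limited flooding and $n$ independent random walkers, by counting the query messages each generates on a stand-in scale-free graph (Python with networkx assumed). It does not implement the DCP model itself; all parameters are illustrative.

```python
import random
import networkx as nx

def flooding_traffic(G, src, target, ttl=6):
    """Messages generated by TTL-limited flooding of a query from src."""
    messages, frontier, seen = 0, {src}, {src}
    for _ in range(ttl):
        nxt = set()
        for u in frontier:
            for v in G.neighbors(u):
                messages += 1                 # every forwarded copy counts as traffic
                if v not in seen:
                    seen.add(v)
                    nxt.add(v)
        if target in seen:
            break
        frontier = nxt
    return messages

def walker_traffic(G, src, target, n_walkers=32, max_hops=10_000, seed=0):
    """Messages generated by n independent random walkers searching for target."""
    rng = random.Random(seed)
    walkers = [src] * n_walkers
    messages = 0
    for _ in range(max_hops):
        for i, u in enumerate(walkers):
            v = rng.choice(list(G.neighbors(u)))
            walkers[i] = v
            messages += 1
            if v == target:
                return messages
    return messages

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(5_000, 3, seed=1)   # stand-in scale-free topology
    src, target = 0, 4_999
    print("flooding traffic:", flooding_traffic(G, src, target))
    print("walker traffic:  ", walker_traffic(G, src, target))
```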

    Towards Simulation and Emulation of Large-Scale Computer Networks

    Developing analytical models that can accurately describe behaviors of Internet-scale networks is difficult. This is due, in part, to the heterogeneous structure, immense size and rapidly changing properties of today's networks. The lack of analytical models makes large-scale network simulation an indispensable tool for studying immense networks. However, large-scale network simulation has not been commonly used to study networks of Internet-scale. This can be attributed to three factors: 1) current large-scale network simulators are geared towards simulation research and not network research, 2) the memory required to execute an Internet-scale model is exorbitant, and 3) large-scale network models are difficult to validate. This dissertation tackles each of these problems. First, this work presents a method for automatically enabling real-time interaction, monitoring, and control of large-scale network models. Network researchers need tools that allow them to focus on creating realistic models and conducting experiments. However, this should not increase the complexity of developing a large-scale network simulator. This work presents a systematic approach to separating the concerns of running large-scale network models on parallel computers and the user-facing concerns of configuring and interacting with large-scale network models. Second, this work deals with reducing memory consumption of network models. As network models become larger, so does the amount of memory needed to simulate them. This work presents a comprehensive approach to exploiting structural duplications in network models to dramatically reduce the memory required to execute large-scale network experiments. Lastly, this work addresses the issue of validating large-scale simulations by integrating real protocols and applications into the simulation. With an emulation extension, a network simulator operating in real-time can run together with real-world distributed applications and services. As such, real-time network simulation not only alleviates the burden of developing separate models for applications in simulation, but as real systems are included in the network model, it also increases the confidence level of network simulation. This work presents a scalable and flexible framework to integrate real-world applications with real-time simulation.
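
    A minimal sketch of the structural-duplication idea, under assumptions of my own rather than the dissertation's data structures: when thousands of subnets share the same internal structure, store that structure once as an immutable template and keep only per-instance state in each copy.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class SubnetTemplate:
    # Immutable, shared description of the duplicated structure.
    links: Tuple[Tuple[int, int], ...]          # intra-subnet links
    link_bandwidth_mbps: float

@dataclass
class SubnetInstance:
    template: SubnetTemplate                     # shared reference, not a copy
    base_address: str                            # per-instance state only
    host_count: int

def build_model(n_copies: int) -> List[SubnetInstance]:
    template = SubnetTemplate(
        links=tuple((i, i + 1) for i in range(63)),  # a 64-node chain, stored once
        link_bandwidth_mbps=1000.0,
    )
    return [
        SubnetInstance(template=template,
                       base_address=f"10.{i // 256}.{i % 256}.0",
                       host_count=64)
        for i in range(n_copies)
    ]

if __name__ == "__main__":
    model = build_model(10_000)
    # All 10,000 instances reference the same template object:
    assert all(inst.template is model[0].template for inst in model)
    print("instances:", len(model), "shared templates:", 1)
```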

    Single-Board-Computer Clusters for Cloudlet Computing in Internet of Things

    The number of connected sensors and devices is expected to increase to billions in the near future. However, centralised cloud-computing data centres present various challenges in meeting the requirements inherent to Internet of Things (IoT) workloads, such as low latency, high throughput and bandwidth constraints. Edge computing is becoming the standard computing paradigm for latency-sensitive, real-time IoT workloads, since it addresses the aforementioned limitations of centralised cloud-computing models. Such a paradigm relies on bringing computation close to the source of data, which presents serious operational challenges for large-scale cloud-computing providers. In this work, we present an architecture composed of low-cost single-board-computer clusters near data sources, together with centralised cloud-computing data centres. The proposed cost-efficient model may be employed as an alternative to fog computing to meet real-time IoT workload requirements while keeping scalability. We include an extensive empirical analysis to assess the suitability of single-board-computer clusters as cost-effective edge-computing micro data centres. Additionally, we compare the proposed architecture with traditional cloudlet and cloud architectures, and evaluate them through extensive simulation. We finally show that acquisition costs can be drastically reduced while keeping performance levels in data-intensive IoT use cases. This work was supported by Ministerio de Economía y Competitividad (TIN2017-82113-C2-1-R, RTI2018-098062-A-I00), the European Union's Horizon 2020 programme (No. 754489), and Science Foundation Ireland (grant 13/RC/209).
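
    As a hedged back-of-envelope illustration (all numbers are assumptions, not the paper's measurements), the sketch below contrasts a nearby SBC cloudlet with a remote cloud using a simple M/M/1-style latency model: the cloudlet wins at low load thanks to its short round-trip time, while the cloud absorbs much higher request rates.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    rtt_ms: float            # network round-trip time from the sensor to this tier
    service_rate_rps: float  # requests the tier can process per second

    def latency_ms(self, offered_load_rps: float) -> float:
        # M/M/1-style sojourn time 1 / (mu - lambda), converted to milliseconds.
        if offered_load_rps >= self.service_rate_rps:
            return float("inf")   # tier is saturated
        processing_ms = 1000.0 / (self.service_rate_rps - offered_load_rps)
        return self.rtt_ms + processing_ms

if __name__ == "__main__":
    cloudlet = Tier("SBC cloudlet", rtt_ms=5.0, service_rate_rps=200.0)
    cloud = Tier("central cloud", rtt_ms=80.0, service_rate_rps=5000.0)
    for load in (50.0, 150.0, 190.0):
        print(f"load={load:5.0f} rps  "
              f"{cloudlet.name}: {cloudlet.latency_ms(load):7.1f} ms  "
              f"{cloud.name}: {cloud.latency_ms(load):7.1f} ms")
```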

    The state of peer-to-peer network simulators

    Networking research often relies on simulation in order to test and evaluate new ideas. An important requirement of this process is that results must be reproducible so that other researchers can replicate, validate and extend existing work. We look at the landscape of simulators for research in peer-to-peer (P2P) networks by conducting a survey of a combined total of over 280 papers from before and after 2007 (the year of the last survey in this area), and comment on the large quantity of research using bespoke, closed-source simulators. We propose a set of criteria that P2P simulators should meet, and poll the P2P research community for their agreement. We aim to drive the community towards performing their experiments on simulators that allow others to validate their results.

    A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing

    The emergence of cloud computing based on virtualization technologies brings huge opportunities to host virtual resources at low cost without the need to own any infrastructure. Virtualization technologies enable users to acquire, configure and be charged on a pay-per-use basis. However, cloud data centers mostly comprise heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially varied specifications and fluctuating resource usage, which may cause imbalanced resource utilization within servers and lead to performance degradation and service level agreement (SLA) violations. To achieve efficient scheduling, these challenges should be addressed with load-balancing strategies, a problem that has been proved NP-hard. From multiple perspectives, this work identifies the challenges and analyzes existing algorithms for allocating VMs to physical machines (PMs) in infrastructure clouds, with a particular focus on load balancing. A detailed classification of load-balancing algorithms for VM placement in cloud data centers is proposed, and the surveyed algorithms are organized according to it. The goal of this paper is to provide a comprehensive and comparative understanding of the existing literature and to aid researchers by offering insight into potential future enhancements. Comment: 22 pages, 4 figures, 4 tables, in press.
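
    To make the problem concrete, here is a minimal greedy load-balancing placement that assigns each VM to the PM whose bottleneck utilization stays lowest, as one simple representative of the algorithm family surveyed; it is not taken from the paper, and all capacities are illustrative.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PM:
    cpu_cap: float
    mem_cap: float
    cpu_used: float = 0.0
    mem_used: float = 0.0
    vms: List[str] = field(default_factory=list)

    def utilization_after(self, cpu: float, mem: float) -> Optional[float]:
        cpu_u = (self.cpu_used + cpu) / self.cpu_cap
        mem_u = (self.mem_used + mem) / self.mem_cap
        if cpu_u > 1.0 or mem_u > 1.0:
            return None                      # VM does not fit on this PM
        return max(cpu_u, mem_u)             # bottleneck-resource utilization

def place(vms, pms):
    """vms: list of (name, cpu, mem); greedily balance the bottleneck utilization."""
    # Crude size heuristic: place the largest VMs first.
    for name, cpu, mem in sorted(vms, key=lambda v: -(v[1] + v[2])):
        candidates = [(pm.utilization_after(cpu, mem), pm) for pm in pms]
        feasible = [(u, pm) for u, pm in candidates if u is not None]
        if not feasible:
            raise RuntimeError(f"no PM can host {name}")
        _, best = min(feasible, key=lambda t: t[0])
        best.cpu_used += cpu
        best.mem_used += mem
        best.vms.append(name)

if __name__ == "__main__":
    pms = [PM(cpu_cap=16, mem_cap=64) for _ in range(3)]
    place([("vm1", 4, 16), ("vm2", 8, 8), ("vm3", 2, 32), ("vm4", 6, 12)], pms)
    for i, pm in enumerate(pms):
        print(f"PM{i}: {pm.vms} cpu={pm.cpu_used}/{pm.cpu_cap} mem={pm.mem_used}/{pm.mem_cap}")
```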

    Decentralized Greedy-Based Algorithm for Smart Energy Management in Plug-in Electric Vehicle Energy Distribution Systems

    Variations in electricity tariffs arising from stochastic demand loads on power grids have stimulated research into optimal charging/discharging scheduling for electric vehicles (EVs). Most current EV scheduling solutions are centralized, suffering from low reliability and high complexity, while existing decentralized solutions do not facilitate the efficient scheduling of on-the-move EVs in large-scale networks with a smart energy distribution system. Motivated by smart-city applications, we consider in this paper the optimal scheduling of EVs in a geographically large-scale smart energy distribution system where EVs have the flexibility to charge or discharge at spatially deployed smart charging stations (CSs) operated by individual aggregators. In such a scenario, we define the social welfare maximization problem as the total profit of both supply and demand sides in the form of a mixed-integer non-linear programming (MINLP) model. Due to its intractability, we then propose an online decentralized algorithm of low complexity that uses effective heuristics to forward each EV to the most profitable CS in a smart manner. Simulations on the IEEE 37-bus distribution network verify that the proposed algorithm improves social welfare by about 30% on average with respect to an alternative scheduling strategy under equal participation of EVs in charging and discharging operations. Considering the best-case performance where only EV profit maximization is concerned, our solution also achieves up to 20% improvement in flattening the final electricity load. Furthermore, the results reveal the existence of an optimal number of CSs and an optimal vehicle-to-grid penetration threshold for which the overall profit can be maximized. Our findings serve as guidelines for V2G system designers in smart-city scenarios to plan a cost-effective strategy for large-scale distributed energy management of EVs.
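
    The sketch below is a toy rendering of the greedy heuristic described above: each arriving EV is forwarded to the charging station with the highest marginal profit given its current congestion. The profit model and every parameter are illustrative assumptions, not the paper's MINLP formulation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ChargingStation:
    name: str
    price_per_kwh: float      # tariff offered by the station's aggregator
    capacity: int             # simultaneous charging slots
    queued: int = 0

    def marginal_profit(self, demand_kwh: float, wholesale_price: float) -> float:
        if self.queued >= self.capacity:
            return float("-inf")                 # no free slot, cannot serve the EV
        congestion_penalty = 0.5 * self.queued   # assumed waiting-cost term
        return (self.price_per_kwh - wholesale_price) * demand_kwh - congestion_penalty

def assign(ev_demands_kwh: List[float], stations: List[ChargingStation],
           wholesale_price: float = 0.10) -> List[str]:
    """Greedy online assignment: send each EV to the currently most profitable CS."""
    decisions = []
    for demand in ev_demands_kwh:
        best = max(stations, key=lambda cs: cs.marginal_profit(demand, wholesale_price))
        best.queued += 1
        decisions.append(best.name)
    return decisions

if __name__ == "__main__":
    stations = [ChargingStation("CS-A", 0.18, capacity=2),
                ChargingStation("CS-B", 0.15, capacity=3)]
    print(assign([30.0, 20.0, 25.0, 40.0], stations))
```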