
    DCCast: Efficient Point to Multipoint Transfers Across Datacenters

    Using multiple datacenters allows for higher availability, load balancing, and reduced latency for customers of cloud services. To distribute multiple copies of data, cloud providers depend on inter-datacenter WANs, which ought to be used efficiently given their limited capacity and the ever-increasing data demands. In this paper, we focus on applications that transfer objects from one datacenter to several datacenters over dedicated inter-datacenter networks. We present DCCast, a centralized Point to Multi-Point (P2MP) algorithm that uses forwarding trees to efficiently deliver an object from a source datacenter to the required destination datacenters. With low computational overhead, DCCast selects forwarding trees that minimize bandwidth usage and balance load across all links. In simulation experiments on Google's GScale network, we show that DCCast can reduce total bandwidth usage and tail Transfer Completion Times (TCT) by up to 50% compared to delivering the same objects via independent point-to-point (P2P) transfers. Comment: 9th USENIX Workshop on Hot Topics in Cloud Computing, https://www.usenix.org/conference/hotcloud17/program/presentation/noormohammadpou
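
    The tree-selection step above lends itself to a short illustration. The following is a minimal Python sketch, not the authors' implementation: it sets link weights from current load, picks an approximate minimum-weight Steiner tree over source and destinations (so lightly loaded links are preferred), and then charges the chosen links with the transfer's demand. The function name pick_forwarding_tree and the load attribute are illustrative assumptions.

        # Hedged sketch of load-aware P2MP tree selection (not DCCast itself).
        import networkx as nx
        from networkx.algorithms.approximation import steiner_tree

        def pick_forwarding_tree(G, src, dests, demand):
            # Edge weight = current load; +1 biases toward trees with fewer edges.
            for u, v, d in G.edges(data=True):
                d["weight"] = d.get("load", 0.0) + 1.0
            tree = steiner_tree(G, [src, *dests], weight="weight")
            # Commit this transfer's demand so later trees balance load.
            for u, v in tree.edges():
                G[u][v]["load"] = G[u][v].get("load", 0.0) + demand
            return tree

        # Toy 4-node inter-datacenter WAN.
        G = nx.Graph()
        G.add_edges_from([("A", "B"), ("B", "C"), ("C", "D"), ("A", "D"), ("B", "D")])
        print(sorted(pick_forwarding_tree(G, "A", ["C", "D"], demand=10.0).edges()))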

    Deployment of a Hybrid Multicast Switch in Energy-Aware Data Center Network: A Case of Fat-Tree Topology

    Recently, energy efficiency, or green IT, has become a hot issue for many IT infrastructures as they attempt to apply energy-efficient strategies in their enterprise IT systems in order to minimize operational costs. Networking devices are shared resources connecting important IT infrastructures; in a data center network in particular, they operate 24/7 and consume a huge amount of energy, and it has been shown that this energy consumption is largely independent of the traffic through the devices. As a result, power consumption in networking devices is becoming a critical problem, of interest to both the research community and the general public. Multicast benefits group communications by saving link bandwidth and improving application throughput, both of which are important for green data centers. In this paper, we study the deployment strategy of multicast switches in hybrid mode in an energy-aware data center network, using the well-known fat-tree topology as a case study. The objective is to find the best location to deploy a multicast switch so as not only to achieve optimal bandwidth utilization but also to minimize power consumption. We show that our proposed algorithm can reduce energy consumption by nearly 50%.
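
    As a rough illustration of the placement question, the sketch below scores each candidate switch as the multicast replication point by the number of links a packet would traverse plus a fixed power penalty for waking an inactive switch. The cost model, the function best_replication_point, and the active node attribute are assumptions for illustration, not the paper's algorithm.

        # Hedged sketch: pick a replication point trading bandwidth vs. power.
        import networkx as nx

        def best_replication_point(G, src, receivers, candidates,
                                   link_cost=1.0, power_cost=3.0):
            dist = dict(nx.all_pairs_shortest_path_length(G))
            def score(c):
                # Links used: source -> candidate, then candidate -> each receiver.
                links = dist[src][c] + sum(dist[c][r] for r in receivers)
                # Pay a power penalty only if the switch is currently powered off.
                power = 0.0 if G.nodes[c].get("active") else power_cost
                return links * link_cost + power
            return min(candidates, key=score)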

    Game and Balance Multicast Architecture Algorithms for Sensor Grid

    We propose a scheme to attain shorter multicast delay and higher efficiency in sensor grid data transfer. Within each cluster, our scheme finds the central node and calculates the space and data weight vectors. We then form a new vector as a linear combination of the two old ones. By requiring the new vector to have equal correlation coefficients with both old vectors, we find the point of game and balance between the space and data factors: we build a simple binary equation, solve for the linear parameters, and generate a least-weight path tree. We handle the issue quantitatively rather than qualitatively. Based on this idea, we consider the scheme from both the space and data factors, build the mathematical model, set up the game-and-balance relationship, and finally solve for the linear indexes, which improves the transmission efficiency of the sensor grid. Extensive simulation results indicate that our scheme attains lower average multicast delay and uses fewer links than other well-known existing schemes.
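
    One plausible reading of the vector-blending step can be sketched numerically: search for the mixing coefficient at which the blended vector has equal correlation coefficients with the space and data vectors. The paper derives this point in closed form from its binary equation; the brute-force search below merely stands in for that derivation.

        # Hedged sketch: numerically locate the "game and balance" blend.
        import numpy as np

        def balanced_blend(space, data, steps=1001):
            best_w, best_gap = None, float("inf")
            for a in np.linspace(0.0, 1.0, steps):
                w = a * space + (1.0 - a) * data
                # Gap between the blend's correlation with each original vector.
                gap = abs(np.corrcoef(w, space)[0, 1] - np.corrcoef(w, data)[0, 1])
                if gap < best_gap:
                    best_w, best_gap = w, gap
            return best_w  # would then drive a least-weight path tree (e.g. Dijkstra)

        w = balanced_blend(np.array([1.0, 2.0, 3.0]), np.array([3.0, 1.0, 2.0]))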

    Recent Trends in Communication Networks

    In recent years there have been many developments in communication technology, which have greatly enhanced the computing power of small, handheld, resource-constrained mobile devices. Different generations of communication technology have evolved. This has led to new research on communicating large volumes of data over different transmission media and on the design of new communication protocols. Another direction of research concerns secure and error-free communication between sender and receiver despite the possible presence of an eavesdropper. To meet the communication requirements of large volumes of multimedia streaming data, much research has been carried out on the design of suitable overlay networks. The book addresses new research techniques that have evolved to handle these challenges.

    GMPLS-OBS interoperability and routing scalability in the Internet

    The popularization of the Internet has turned the telecom world upside down over the last two decades. Network operators, vendors and service providers are being challenged to adapt themselves to Internet requirements in order to properly serve the huge number of demanding users (residential and business). The Internet (a data-oriented network) is supported by an IP packet-switched architecture on top of a circuit-switched, optical-based architecture (a voice-oriented network), which results in a complex and rather costly infrastructure for the transport of IP traffic (the dominant traffic nowadays). A simpler, IP-adapted network architecture is therefore desired. From the transport network perspective, both Generalized Multi-Protocol Label Switching (GMPLS) and Optical Burst Switching (OBS) technologies are part of the set of solutions for progressing towards an IP-over-WDM architecture, providing intelligence in the control and management of resources (GMPLS) as well as good network resource access and usage (OBS). The GMPLS framework is the key enabler for orchestrating unified optical network control and thus reducing network operational expenses (OPEX) while increasing operators' revenues. Simultaneously, OBS is one of the well-positioned switching technologies for realizing the envisioned IP-over-WDM network architecture, leveraging the statistical multiplexing of data-plane resources to enable sub-wavelength switching in optical networks. Despite the GMPLS principle of unified control, little effort has been put into extending it to incorporate the OBS technology, and many open questions remain. From the IP network perspective, the Internet is facing scalability issues, as enormous quantities of service instances and devices must be managed. It is now believed that the current Internet features and mechanisms cannot cope with the size and dynamics of the Future Internet. Compact Routing is one of the main breakthrough paradigms in the design of a routing system that scales with Future Internet requirements. It addresses the fundamental limits of current stretch-1 shortest-path routing in terms of routing table (RT) scalability, aiming at sub-linear growth. Although "static" compact routing works fine, scaling logarithmically in the number of nodes even in scale-free graphs such as the Internet, it does not handle dynamic graphs. Moreover, as multimedia content and services proliferate, multicast is again under the spotlight, since bandwidth efficiency and small RT sizes are desired; however, multicast makes the problem even worse, as more routing entries must be maintained. In a nutshell, the main objective of this thesis is to contribute fully detailed solutions dealing with both i) GMPLS-OBS control interoperability (Part I), fostering unified control over multiple switching domains and reducing redundancy in IP transport. The proposed solution overcomes the technology-specific interoperability issues and offers (absolute) QoS guarantees, addressing OBS performance issues by making use of the GMPLS traffic-engineering (TE) features; key extensions to the GMPLS protocol standards are also addressed. And ii) a new compact routing scheme for multicast scenarios, to overcome the Future Internet inter-domain routing system scalability problem (Part II). To this end, the first known name-independent (i.e. topology-unaware) compact multicast routing algorithm is proposed. In addition, the AnyTraffic Labeled concept is introduced, which saves forwarding entries by sharing a single forwarding entry between unicast and multicast traffic. Exhaustive simulation campaigns are run in both cases to assess the reliability and feasibility of the proposals.
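
    For readers unfamiliar with the stretch metric invoked above, it can be computed directly from its definition: the ratio of a routing scheme's path length to the true shortest-path length, so that stretch-1 means every route is a shortest path. A minimal sketch:

        # Stretch of a route produced by some routing scheme (definition only).
        import networkx as nx

        def stretch(G, route):
            # route is the node sequence the scheme actually forwards along.
            return (len(route) - 1) / nx.shortest_path_length(G, route[0], route[-1])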

    Application of overlay techniques to network monitoring

    Measurement and monitoring are important for the correct and efficient operation of a network, since these activities provide reliable information and accurate analysis for characterizing and troubleshooting a network's performance. The focus of network measurement is to measure the volume and types of traffic on a particular network and to record the raw measurement results. The focus of network monitoring is to initiate measurement tasks, collect raw measurement results, and report aggregated outcomes. Network systems are continuously evolving: besides incremental change to accommodate new devices, more drastic changes occur to accommodate new applications, such as overlay-based content delivery networks. As a consequence, a network can experience significant increases in size and significant levels of long-range, coordinated, distributed activity; furthermore, heterogeneous network technologies, services and applications coexist and interact. Reliance upon traditional, point-to-point, ad hoc measurements to manage such networks is becoming increasingly tenuous. In particular, correlated, simultaneous one-way measurements are needed, as is the ability to access measurement information stored throughout the network of interest. To address these new challenges, this dissertation proposes OverMon, a new paradigm for edge-to-edge network monitoring systems based on overlay techniques. Of particular interest, the problem of the significant network overheads caused by conventional overlay techniques is addressed by constructing overlay networks with topology awareness: the network topology information is derived from interior gateway protocol (IGP) traffic, i.e. OSPF traffic, thus eliminating all overlay-maintenance network overhead. Through a prototype that uses overlays to initiate measurement tasks and to retrieve measurement results, a systematic evaluation has been conducted to demonstrate the feasibility and functionality of OverMon. The measurement results show that OverMon achieves good performance in scalability, flexibility and extensibility, which are important in addressing the new challenges arising from network system evolution. This work therefore contributes an innovative approach of applying overlay techniques to solve realistic network monitoring problems, and provides valuable first-hand experience in building and evaluating such a distributed system.
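
    The topology-aware overlay construction can be sketched as follows, assuming (router, neighbor, cost) adjacencies have already been extracted from observed OSPF traffic; this is an illustration of the idea, not OverMon's code, and overlay_links is a hypothetical helper.

        # Hedged sketch: peer each monitor with its k topologically nearest monitors.
        import networkx as nx

        def overlay_links(lsa_adjacencies, monitors, k=2):
            G = nx.Graph()
            G.add_weighted_edges_from(lsa_adjacencies)  # (router, neighbor, ospf_cost)
            dist = dict(nx.all_pairs_dijkstra_path_length(G))
            links = set()
            for m in monitors:
                nearest = sorted((n for n in monitors if n != m),
                                 key=lambda n: dist[m][n])[:k]
                links.update(frozenset((m, n)) for n in nearest)
            return links

        adjs = [("r1", "r2", 1), ("r2", "r3", 1), ("r3", "r4", 1), ("r1", "r4", 5)]
        print(overlay_links(adjs, ["r1", "r3", "r4"], k=1))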

    Effective bootstrapping of Peer-to-Peer networks over Mobile Ad-hoc Networks

    Mobile Ad-hoc Networks (MANETs) and Peer-to-Peer (P2P) networks are vigorous, revolutionary communication technologies of the 21st century that lead the trend toward decentralization. Decentralization will ultimately win users over from the client/server model, because it gives ordinary network users more control and stimulates their active participation; it is a determinant factor in shaping the future of networking. MANETs and P2P networks are very similar in nature: both are dynamic and distributed, both use multi-hop broadcast or multicast as their major traffic pattern, and both set up connections by self-organizing and maintain them by self-healing. Embodying the slogan "networking without networks", both abandon the traditional client/server model and disclaim pre-existing infrastructure. However, their current levels of real-world adoption diverge widely: P2P networks now account for about 50-70% of Internet traffic, while MANETs are still primarily in the laboratory. This interesting and puzzling contrast has sparked considerable research effort to transplant successful approaches from P2P networks into MANETs. While most research on the synergy of P2P networks and MANETs focuses on routing, the network bootstrapping problem remains indispensable for any such transplantation to be realized. The most pivotal problems in bootstrapping are: (1) automatic configuration of node addresses and IDs, and (2) topology discovery and transformation across different layers and name spaces. In this dissertation research, we have found novel solutions to these problems. The contributions of this dissertation are: (1) a non-IP, flat-address automatic configuration scheme, which integrates lower-layer addresses and P2P IDs in the application layer and makes simple cryptographic assignment possible (a related paper entitled Pastry over Ad-Hoc Networks with Automatic Flat Address Configuration was submitted to Elsevier Journal of Ad Hoc Networks in May); and (2) an effective ring topology construction algorithm which builds a perfect ring in P2P ID space using only the simplest multi-hop unicast or multicast, upon which popular structured P2P networks like Chord and Pastry can be built with great ease (a related paper entitled Chord Bootstrapping on MANETs - All Roads Lead to Rome will be ready for submission after the defense of the dissertation).
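
    The ring-construction idea admits a compact illustration: hash each node's address into the P2P ID space and link every node to its successor, as a Chord-style bootstrap would. The sketch below performs this centrally and omits the multi-hop unicast/multicast message exchange the dissertation's algorithm uses to achieve it in a MANET.

        # Hedged sketch: a perfect successor ring in ID space.
        import hashlib

        def build_ring(addresses, bits=32):
            ids = {a: int(hashlib.sha1(a.encode()).hexdigest(), 16) % (1 << bits)
                   for a in addresses}
            ring = sorted(addresses, key=ids.get)  # order nodes around the ID circle
            return {ring[i]: ring[(i + 1) % len(ring)] for i in range(len(ring))}

        print(build_ring(["10.0.0.1", "10.0.0.2", "10.0.0.3"]))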

    Trade-off among timeliness, messages and accuracy for large-scale information management

    The increasing amount of data and the number of nodes in large-scale environments require new techniques for information management. Examples of such environments are the decentralized infrastructures of Computational Grid and Computational Cloud applications. These large-scale applications need different kinds of aggregated information, such as resource monitoring, resource discovery, or economic information. The challenge of providing timely and accurate information in large-scale environments arises from the distribution of the information. Reasons for delays in distributed information systems are long information transmission times due to distribution, churn, and failures. A problem of large applications such as peer-to-peer (P2P) systems is the increasing retrieval time of information due to the decentralization of the data and the proneness to failure. However, many applications need timely information provision. Another problem is increasing network consumption as the application scales to millions of users and data items. Approximation techniques allow reducing the retrieval time and the network consumption; however, their use decreases the accuracy of the results. Thus, the remaining problem is to offer a trade-off that resolves the conflicting requirements of fast information retrieval, accurate results, and low messaging cost. Our goal is a self-adaptive decision mechanism that offers a trade-off among retrieval time, network consumption, and accuracy of the result. Self-adaptation enables distributed software to modify its behavior based on changes in the operating environment. In large-scale information systems that use hierarchical data aggregation, we apply self-adaptation to control the approximation used for information retrieval, reducing the network consumption and the retrieval time. The hypothesis of the thesis is that approximation techniques can reduce the retrieval time and the network consumption while guaranteeing the accuracy of the results, taking user-defined priorities into account. First, this research addresses the trade-off among timely information retrieval, accurate results, and low messaging cost by proposing a summarization algorithm for resource discovery in P2P content networks. After identifying how summarization can improve the discovery process, we propose an algorithm which uses a precision-recall metric to compare accuracy and to offer a user-driven trade-off. Second, we propose an algorithm that applies self-adaptive decision making on each node. The decision is whether to prune the query and return the result instead of continuing the query. Pruning reduces the retrieval time and the network consumption at the cost of lower accuracy compared with continuing the query. The algorithm uses an analytic hierarchy process to assess the user's priorities and to propose a trade-off that satisfies the accuracy requirements with low message cost and short delay. A quantitative analysis evaluates the presented algorithms with a simulator fed with real data of a network topology and node attributes. Using a simulator instead of the prototype allows evaluation at a scale of several thousand nodes. The content summarization algorithm is evaluated with half a million resources and with different query types. The self-adaptive algorithm is evaluated with a simulator of several thousand nodes created from real data. A qualitative analysis addresses the integration of the simulator's components into existing market frameworks for Computational Grid and Cloud applications; the simulator's implemented functionality (such as the aggregation process and the query language) is verified through prototype integration. The proposed content summarization algorithm reduces the information retrieval time from a logarithmic increase to a constant factor, and the message size is reduced significantly by applying the summarization technique. For the user, a precision-recall metric allows defining the relation between retrieval time and accuracy. The self-adaptive algorithm reduces the number of messages needed from an exponential increase to a constant factor. At the same time, the retrieval time is reduced to a constant factor under an increasing number of nodes. Finally, the algorithm delivers the data with the required accuracy, adjusting the depth of the query according to the network conditions. The introduced algorithms are promising for information aggregation in future large-scale information management systems.
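
    The analytic hierarchy process (AHP) step can be illustrated in a few lines: priority weights for accuracy, delay, and message cost are taken from the principal eigenvector of a pairwise-comparison matrix, and a node prunes a query when the weighted savings outweigh the weighted accuracy loss. The comparison values and the should_prune rule below are illustrative assumptions, not the thesis's calibrated model.

        # Hedged sketch of AHP-weighted prune-or-continue decisions.
        import numpy as np

        def ahp_weights(pairwise):
            vals, vecs = np.linalg.eig(pairwise)
            w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
            return w / w.sum()

        # Rows/cols: accuracy, delay, messages; e.g. accuracy is 3x delay's importance.
        P = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
        w_acc, w_delay, w_msg = ahp_weights(P)

        def should_prune(est_accuracy, delay_saved, msgs_saved):
            # Prune when weighted savings beat the weighted accuracy loss.
            return w_delay * delay_saved + w_msg * msgs_saved > w_acc * (1.0 - est_accuracy)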