
    Resource virtualisation of network routers

    There is now considerable interest in applications that transport time-sensitive data across the best-effort Internet. We present a novel network router architecture which has the potential to improve the Quality of Service (QoS) guarantees provided to such flows. This router architecture makes use of virtual machine techniques to assign an individual virtual routelet to each network flow requiring QoS guarantees. We describe a prototype of this virtual routelet architecture and evaluate its effectiveness. Experimental results on the performance and flow partitioning of this prototype, compared with a standard software router, suggest promise in the virtual routelet architecture.
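    The central mechanism here is per-flow isolation: each flow that requests QoS guarantees is handed its own virtual routelet, while other traffic takes the ordinary best-effort path. The toy Python sketch below illustrates only that dispatch idea; the Routelet class, flow-key tuple, and reserved-bandwidth field are assumptions made for the example, not the paper's design.

```python
# Toy illustration of the per-flow dispatch idea (not the paper's router):
# flows admitted with QoS guarantees get their own virtual routelet, while
# everything else shares the default best-effort path.  The flow key tuple
# and the Routelet class are assumptions made for this example.
class Routelet:
    def __init__(self, flow_key, bandwidth_kbps):
        self.flow_key = flow_key
        self.bandwidth_kbps = bandwidth_kbps   # resources reserved for this flow

    def forward(self, packet):
        return f"routelet {self.flow_key} forwarded {len(packet)} bytes"

class VirtualRouter:
    def __init__(self):
        self.routelets = {}                    # (src, dst, proto, sport, dport) -> Routelet

    def admit(self, flow_key, bandwidth_kbps):
        self.routelets[flow_key] = Routelet(flow_key, bandwidth_kbps)

    def dispatch(self, flow_key, packet):
        routelet = self.routelets.get(flow_key)
        if routelet is not None:
            return routelet.forward(packet)    # isolated, per-flow processing
        return "best-effort path"              # default software-router behaviour
```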

    EGOIST: Overlay Routing Using Selfish Neighbor Selection

    A foundational issue underlying many overlay network applications ranging from routing to P2P file sharing is that of connectivity management, i.e., folding new arrivals into an existing overlay, and re-wiring to cope with changing network conditions. Previous work has considered the problem from two perspectives: devising practical heuristics for specific applications designed to work well in real deployments, and providing abstractions for the underlying problem that are analytically tractable, especially via game-theoretic analysis. In this paper, we unify these two thrusts by using insights gleaned from novel, realistic theoretic models in the design of Egoist, a prototype overlay routing system that we implemented, deployed, and evaluated on PlanetLab. Using measurements on PlanetLab and trace-based simulations, we demonstrate that Egoist's neighbor selection primitives significantly outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, we demonstrate that Egoist is competitive with an optimal, but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overhead. Finally, we discuss some of the potential benefits Egoist may offer to applications. National Science Foundation (CISE/CSR 0720604, ENG/EFRI 0735974, CISE/CNS 0524477, CNS/NeTS 0520166, CNS/ITR 0205294; CISE/EIA RI 0202067; CAREER 04446522); European Commission (RIDS-011923
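    The neighbor-selection primitive at the heart of this work is a node selfishly re-wiring its k overlay links to minimise its own routing cost. As a rough illustration only (not the Egoist implementation), the Python sketch below performs a greedy best-response step: the node adds, one at a time, the neighbor that most reduces its mean shortest-path delay to all other overlay nodes. The node identifiers, pairwise delay matrix, current overlay adjacency, and degree bound k are assumed inputs.

```python
# Hedged sketch of greedy best-response neighbor selection for one overlay
# node (illustration only).  `nodes`, the pairwise `delay` matrix, the current
# overlay adjacency `adj`, and the degree bound `k` are assumed inputs.
import heapq

def shortest_path_costs(adj, src):
    """Dijkstra over the overlay; adj maps node -> {neighbor: link delay}."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def best_response(me, nodes, delay, adj, k):
    """Pick k neighbors for `me` that greedily minimise its mean overlay delay."""
    chosen = {}
    for _ in range(k):
        best = None
        for cand in nodes:
            if cand == me or cand in chosen:
                continue
            trial = dict(chosen)
            trial[cand] = delay[me][cand]
            adj_trial = dict(adj)
            adj_trial[me] = trial               # rewire only my own out-links
            dist = shortest_path_costs(adj_trial, me)
            cost = sum(dist.get(n, float("inf")) for n in nodes if n != me)
            if best is None or cost < best[0]:
                best = (cost, cand)
        if best is None:
            break
        chosen[best[1]] = delay[me][best[1]]
    return chosen                               # neighbor -> direct link delay
```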

    ENORM: A Framework For Edge NOde Resource Management

    Current computing techniques using the cloud as a centralised server will become untenable as billions of devices get connected to the Internet. This raises the need for fog computing, which leverages computing at the edge of the network on nodes, such as routers, base stations and switches, along with the cloud. However, to realise fog computing the challenge of managing edge nodes will need to be addressed. This paper is motivated to address the resource management challenge. We develop the first framework to manage edge nodes, namely the Edge NOde Resource Management (ENORM) framework. Mechanisms for provisioning and auto-scaling edge node resources are proposed. The feasibility of the framework is demonstrated on a Pokémon Go-like online game use-case. The benefits of using ENORM are observed as a reduction in application latency of between 20% and 80%, and a reduction in data transfer and communication frequency between the edge node and the cloud of up to 95%. These results highlight the potential of fog computing for improving the quality of service and experience. Comment: 14 pages; accepted to IEEE Transactions on Services Computing on 12 September 201
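    To make the provisioning and auto-scaling mechanism concrete, here is a minimal sketch of one pass of an edge-node control loop, assuming per-application CPU-share allocations, simple utilisation thresholds, and a fixed edge capacity; it is an illustration, not the ENORM implementation. Workloads that cannot be grown on a saturated edge node are returned for re-provisioning in the cloud.

```python
# Illustrative one-pass auto-scaling loop for an edge node (not the ENORM
# implementation).  The Allocation record, thresholds, step size, and edge
# capacity are assumptions made for the example.
from dataclasses import dataclass

@dataclass
class Allocation:
    app: str
    cpu_shares: int          # shares currently allocated on the edge node
    observed_util: float     # fraction of the allocation actually used (0..1)

EDGE_CPU_CAPACITY = 4096     # total shares available on this edge node

def autoscale(allocations, scale_up=0.8, scale_down=0.3, step=256):
    """Grow busy workloads, shrink idle ones, and report evictions to the cloud."""
    evict = []
    used = sum(a.cpu_shares for a in allocations)
    for a in allocations:
        if a.observed_util > scale_up:
            if used + step <= EDGE_CPU_CAPACITY:
                a.cpu_shares += step        # headroom available: scale up locally
                used += step
            else:
                evict.append(a.app)         # edge node saturated: hand back to cloud
        elif a.observed_util < scale_down and a.cpu_shares > step:
            a.cpu_shares -= step            # over-provisioned: release capacity
            used -= step
    return evict

apps = [Allocation("game-server", 1024, 0.9), Allocation("cache", 512, 0.1)]
print(autoscale(apps), [(a.app, a.cpu_shares) for a in apps])
```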

    Split and Migrate: Resource-Driven Placement and Discovery of Microservices at the Edge

    Microservices architectures combine the use of fine-grained and independently-scalable services with lightweight communication protocols, such as REST calls over HTTP. Microservices bring flexibility to the development and deployment of application back-ends in the cloud. Applications such as collaborative editing tools require frequent interactions between the front-end running on users' machines and a back-end formed of multiple microservices. User-perceived latencies depend on their connection to microservices, but also on the interaction patterns between these services and their databases. Placing services at the edge of the network, closer to the users, is necessary to reduce user-perceived latencies. It is however difficult to decide on the placement of complete stateful microservices at one specific core or edge location without trading a latency reduction for some users against a latency increase for the others. We present how to dynamically deploy microservices on a combination of core and edge resources to systematically reduce user-perceived latencies. Our approach enables the splitting of stateful microservices and the placement of the resulting splits on appropriate core and edge sites. Koala, a decentralized and resource-driven service discovery middleware, enables REST calls to reach and use the appropriate split, with only minimal changes to a legacy microservices application. Locality awareness based on network coordinates further enables service splits to be migrated automatically, following the location of the users. We confirm the effectiveness of our approach with a full prototype and an application to ShareLatex, a microservices-based collaborative editing application.
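    A minimal sketch of the placement decision follows, under the assumption of 2-D Vivaldi-style network coordinates, a fixed set of candidate core and edge sites, and one user group per service split: each split goes to the site with the lowest mean estimated latency to its users. This only illustrates the locality-awareness idea; the actual system also has to weigh state size, migration cost, and discovery through Koala.

```python
# Rough placement sketch assuming 2-D Vivaldi-style network coordinates, a
# fixed set of candidate core/edge sites, and one user group per service split.
import math

def est_latency(a, b):
    """Estimated RTT between two points in the coordinate space."""
    return math.dist(a, b)

def place_split(user_coords, candidate_sites):
    """candidate_sites: {site name: coordinate}; returns the lowest-latency site."""
    best_site, best_cost = None, float("inf")
    for site, coord in candidate_sites.items():
        cost = sum(est_latency(u, coord) for u in user_coords) / len(user_coords)
        if cost < best_cost:
            best_site, best_cost = site, cost
    return best_site, best_cost

users = [(1.0, 2.0), (1.5, 2.2), (0.8, 1.9)]                 # active users of one split
sites = {"core-eu": (10.0, 10.0), "edge-paris": (1.2, 2.1)}  # hypothetical sites
print(place_split(users, sites))                             # -> ('edge-paris', ...)
```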

    funcX: A Federated Function Serving Fabric for Science

    Exploding data volumes and velocities, new computational methods and platforms, and ubiquitous connectivity demand new approaches to computation in the sciences. These new approaches must enable computation to be mobile, so that, for example, it can occur near data, be triggered by events (e.g., arrival of new data), be offloaded to specialized accelerators, or run remotely where resources are available. They also require new design approaches in which monolithic applications can be decomposed into smaller components that may in turn be executed separately and on the most suitable resources. To address these needs we present funcX, a distributed function-as-a-service (FaaS) platform that enables flexible, scalable, and high-performance remote function execution. funcX's endpoint software can transform existing clouds, clusters, and supercomputers into function-serving systems, while funcX's cloud-hosted service provides transparent, secure, and reliable function execution across a federated ecosystem of endpoints. We motivate the need for funcX with several scientific case studies, present our prototype design and implementation, show optimizations that deliver throughput in excess of 1 million functions per second, and demonstrate, via experiments on two supercomputers, that funcX can scale to more than 130,000 concurrent workers. Comment: Accepted to ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC 2020). arXiv admin note: substantial text overlap with arXiv:1908.0490
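    The split between a cloud-hosted service and distributed endpoints can be made concrete with a small, purely hypothetical client pattern: a function is registered once and then dispatched by identifier to any endpoint. The class and method names below are invented for illustration and are not the funcX SDK API.

```python
# Purely hypothetical FaaS client pattern (NOT the funcX SDK): register a
# function once with a cloud-hosted service, then dispatch it by id to any
# registered endpoint.  The local execution below stands in for a real
# fabric that would serialise the call and route it to remote workers.
import uuid

class FabricClient:
    def __init__(self):
        self._functions = {}                 # function_id -> callable

    def register_function(self, fn):
        fid = str(uuid.uuid4())
        self._functions[fid] = fn
        return fid

    def run(self, function_id, endpoint_id, *args, **kwargs):
        # A real service would queue the task on `endpoint_id` and return a
        # task id to poll; here the call is simply executed in-process.
        return self._functions[function_id](*args, **kwargs)

def simulate(n_steps):
    return sum(i * i for i in range(n_steps))

client = FabricClient()
fid = client.register_function(simulate)
print(client.run(fid, endpoint_id="hpc-cluster-01", n_steps=1000))
```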

    Cross-layer signalling and middleware: a survey for inelastic soft real-time applications in MANETs

    This paper provides a review of the different cross-layer design and protocol tuning approaches that may be used to meet a growing need to support inelastic soft real-time streams in MANETs. These streams are characterised by critical timing and throughput requirements and low packet loss tolerance levels. Many cross-layer approaches exist either for the provision of QoS to soft real-time streams in static wireless networks or to improve the performance of real-time and non-real-time transmissions in MANETs. The common ground and lessons learned from these approaches, with a view to the potential provision of much-needed support to real-time applications in MANETs, are therefore discussed.

    Runtime Detection of a Bandwidth Denial Attack from a Rogue Network-on-Chip

    Chips with high computational power are the crux of today's pervasive complex digital systems. Microprocessor circuits are evolving towards many-core designs with the integration of hundreds of processing cores, memory elements and other devices on a single chip to sustain high-performance computing while maintaining low design costs. Two decisive paradigm shifts in the semiconductor industry have made this evolution possible: (a) architectural and (b) organizational. At the heart of the architectural innovation is a scalable high-speed data communication structure, the network-on-chip (NoC). The NoC is an interconnect network for the glueless integration of on-chip components in modern complex communication-centric designs. In recent years, the NoC has replaced the traditional bus-based architecture owing to its structured and modular design, scalability and low design cost. The organizational revolution has resulted in a globalized and collaborative supply chain with pervasive use of third-party intellectual properties to reduce the time-to-market and overall design costs. Despite the advantages of these paradigm shifts, modern system-on-chips pose a plethora of security vulnerabilities. This work explores a threat model arising from a malicious NoC IP embedded with a hardware Trojan affecting the resource availability of on-chip components. A rigorous simulation infrastructure is established to evaluate the feasibility and potency of such an attack. Further, a non-invasive runtime monitoring technique is proposed and thoroughly investigated to ensure the trustworthiness of a third-party NoC IP with low overheads.
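    A bandwidth denial attack from a rogue NoC manifests as sustained inflation of packet latencies for starved flows, so a non-invasive monitor can work purely from observed timing. The sketch below is a generic moving-average detector, assuming a profiled baseline latency, a fixed observation window, and a tolerance factor; it stands in for, but is not, the detection technique proposed in this work.

```python
# Generic moving-average latency detector (an assumption-laden stand-in for
# the proposed runtime monitor): flag a flow when its average observed packet
# latency, in cycles, stays well above a profiled baseline.
from collections import deque

class LatencyMonitor:
    def __init__(self, baseline_cycles, window=64, tolerance=2.0):
        self.baseline = baseline_cycles    # latency profiled under benign traffic
        self.window = deque(maxlen=window) # sliding window of recent observations
        self.tolerance = tolerance         # how much inflation is accepted

    def observe(self, latency_cycles):
        """Record one packet latency; return True if the flow looks starved."""
        self.window.append(latency_cycles)
        if len(self.window) < self.window.maxlen:
            return False                   # not enough samples to judge yet
        avg = sum(self.window) / len(self.window)
        return avg > self.tolerance * self.baseline

mon = LatencyMonitor(baseline_cycles=30)
alerts = [mon.observe(l) for l in [28, 31, 29] + [95] * 70]
print(any(alerts))                         # True once the latency inflation persists
```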

    Multi-domain crankback operation for IP/MPLS & DWDM networks

    Network carriers and operators have built and deployed a very wide range of networking technologies to meet their customers' needs. These include ultra-scalable fibre-optic backbone networks based upon dense wavelength division multiplexing (DWDM) solutions as well as advanced layer 2/3 IP multiprotocol label switching (MPLS) and Ethernet technologies. A range of networking control protocols has also been developed to implement service provisioning and management across these networks. As these infrastructures have been deployed, a range of new challenges has started to emerge. In particular, a major issue is that of provisioning connection services between networks running across different domain boundaries, e.g., administrative, geographic, commercial, etc. As a result, many carriers are keenly interested in the design of multi-domain provisioning solutions and algorithms. Nevertheless, to date most such efforts have only looked at pre-configured, i.e., static, inter-domain route computation or more complex solutions based upon hierarchical routing. As such, there is significant scope for developing more scalable and simplified multi-domain provisioning solutions, and it is here that crankback signaling offers much promise. Crankback makes use of active messaging techniques to compute routes in an iterative manner and avoid problematic resource-deficient links. However, very few multi-domain crankback schemes have been proposed, leaving much room for further investigation. Along these lines, this thesis proposes a crankback signaling solution for multi-domain IP/MPLS and DWDM network operation. The scheme uses a joint intra/inter-domain signaling strategy and is fully compatible with the standardized resource reservation (RSVP-TE) protocol. Furthermore, the proposed solution also implements an advanced next-hop domain selection strategy to drive the overall crankback process. Finally, the whole framework assumes realistic settings in which individual domains have full internal visibility via link-state routing protocols, e.g., open shortest path first traffic engineering (OSPF-TE), but limited 'next-hop' inter-domain visibility, e.g., as provided by inter-area or inter-autonomous system (AS) routing protocols. The performance of the proposed crankback solution is studied using software-based discrete event simulation. First, a range of multi-domain topologies is built and tested. Next, detailed simulation runs are conducted for a range of scenarios. Overall, the findings show that the proposed crankback solution is very competitive with hierarchical routing, in many cases even outperforming full mesh abstraction. Moreover, the scheme maintains acceptable signaling overheads (owing to its dual inter/intra-domain crankback design) and also outperforms existing multi-domain crankback algorithms.
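    The crankback idea itself can be illustrated compactly: attempt a path, and when a resource-deficient link rejects the reservation, exclude it and recompute. The Python sketch below simplifies heavily, cranking back all the way to the source and using a plain Dijkstra computation over assumed link costs and capacities, rather than the RSVP-TE-based, per-domain scheme developed in the thesis.

```python
# Simplified crankback path setup (illustration only, not the thesis's
# RSVP-TE machinery): compute a route, probe each hop's capacity, and on a
# resource-deficient link crank back (here, all the way to the source),
# exclude that link, and recompute.  Topology and retry limit are assumptions.
import heapq

def dijkstra(adj, src, dst, excluded):
    """Shortest path by additive cost, skipping excluded (u, v) links."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, (cost, cap) in adj.get(u, {}).items():
            if (u, v) in excluded:
                continue
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path, n = [dst], dst
    while n != src:
        n = prev[n]
        path.append(n)
    return list(reversed(path))

def setup_with_crankback(adj, src, dst, bw, max_attempts=5):
    excluded = set()
    for _ in range(max_attempts):
        path = dijkstra(adj, src, dst, excluded)
        if path is None:
            return None                         # no feasible route remains
        for u, v in zip(path, path[1:]):
            if adj[u][v][1] < bw:               # link cannot meet the bandwidth ask
                excluded.add((u, v))            # prune it and recompute
                break
        else:
            return path                         # every hop admitted the request
    return None

adj = {                                         # node -> {next hop: (cost, capacity)}
    "A": {"B": (2.0, 100), "C": (1.0, 10)},
    "B": {"D": (2.0, 100)},
    "C": {"D": (1.0, 100)},
    "D": {},
}
print(setup_with_crankback(adj, "A", "D", bw=50))   # ['A', 'B', 'D'] after one crankback
```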