
    Multicast in DKS(N, k, f) Overlay Networks

    Recent developments in the area of peer-to-peer computing show that structured overlay networks implementing distributed hash tables scale well and can serve as infrastructures for Internet-scale applications. We are developing a family of infrastructures, DKS(N, k, f), for the construction of peer-to-peer applications. An instance of DKS(N, k, f) is an overlay network that implements a distributed hash table and has a number of desirable properties: low communication cost, scalability, logarithmic lookup length, fault tolerance, and strong guarantees of locating any data item inserted in the system. In this paper, we show how multicast is achieved in DKS(N, k, f) overlay networks. The design presented here is attractive in three main respects. First, the members of a multicast group self-organize into an instance of DKS(N, k, f) in a way that allows groups of different sizes, degrees of fault tolerance, and maintenance costs to coexist, thereby providing flexibility. Second, any member of a group can multicast, rather than there being a single multicast source. Third, within a group, dissemination of a multicast message is optimal under normal system operation in the sense that there are no redundant messages, despite the presence of outdated routing information.
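    The redundancy-free dissemination described above can be illustrated with an interval-delegating broadcast over a ring of identifiers. This is a minimal sketch of the k = 2 case, assuming a fully populated ring and correct routing tables; DKS generalizes to arbitrary arity k and copes with outdated pointers, which this toy omits:

```python
import math

N = 16  # ring size (a power of two, matching the k = 2 case)

def in_interval(x, a, b):
    """True if identifier x lies strictly between a and b, clockwise on the ring."""
    if a < b:
        return a < x < b
    return x > a or x < b  # the interval wraps past zero

def broadcast(node, limit, delivered):
    """Deliver the message at `node`, then delegate disjoint sub-intervals to fingers."""
    delivered.append(node)
    for i in reversed(range(int(math.log2(N)))):    # largest finger first
        finger = (node + 2 ** i) % N
        if in_interval(finger, node, limit):
            broadcast(finger, limit, delivered)     # finger covers (finger, limit)
            limit = finger                          # shrink the remaining responsibility

received = []
broadcast(0, 0, received)                  # limit == source: cover the whole ring
assert sorted(received) == list(range(N))  # every node receives the message
assert len(received) == N                  # and no node receives it twice
```

    Because each forwarded copy carries a `limit` that excludes intervals already delegated, the delivery intervals partition the ring, which is what makes the message count optimal.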

    EGOIST: Overlay Routing Using Selfish Neighbor Selection

    A foundational issue underlying many overlay network applications, ranging from routing to P2P file sharing, is that of connectivity management, i.e., folding new arrivals into an existing overlay and re-wiring to cope with changing network conditions. Previous work has considered the problem from two perspectives: devising practical heuristics for specific applications designed to work well in real deployments, and providing abstractions for the underlying problem that are analytically tractable, especially via game-theoretic analysis. In this paper, we unify these two thrusts by using insights gleaned from novel, realistic theoretical models in the design of Egoist – a prototype overlay routing system that we implemented, deployed, and evaluated on PlanetLab. Using measurements on PlanetLab and trace-based simulations, we demonstrate that Egoist's neighbor selection primitives significantly outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, we demonstrate that Egoist is competitive with an optimal but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overhead. Finally, we discuss some of the potential benefits Egoist may offer to applications. National Science Foundation (CISE/CSR 0720604, ENG/EFRI 0735974, CISE/CNS 0524477, CNS/NeTS 0520166, CNS/ITR 0205294; CISE/EIA RI 0202067; CAREER 04446522); European Commission (RIDS-011923).
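    The selfish neighbor-selection idea can be sketched as a best-response dynamic: each node repeatedly rewires its k out-links to minimize its own average routing delay, given everyone else's current wiring. The delay matrix and parameters below are illustrative, not taken from the paper:

```python
import heapq, itertools

# Hypothetical pairwise direct delays (ms) between 5 overlay nodes.
D = [
    [0, 10, 30, 40, 25],
    [10, 0, 12, 50, 20],
    [30, 12, 0, 15, 35],
    [40, 50, 15, 0, 9],
    [25, 20, 35, 9, 0],
]
n = len(D)
k = 2  # out-degree budget per node

def avg_delay(u, wiring):
    """Average overlay delay from u to all other nodes under `wiring`
    (dict: node -> set of out-neighbors), computed with Dijkstra."""
    dist, pq = {u: 0}, [(0, u)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float('inf')):
            continue
        for w in wiring[v]:
            nd = d + D[v][w]
            if nd < dist.get(w, float('inf')):
                dist[w] = nd
                heapq.heappush(pq, (nd, w))
    return sum(dist.get(v, float('inf')) for v in range(n) if v != u) / (n - 1)

def best_response(u, wiring):
    """Node u rewires selfishly: the k out-neighbors minimising its own cost."""
    candidates = itertools.combinations([v for v in range(n) if v != u], k)
    return set(min(candidates,
                   key=lambda s: avg_delay(u, {**wiring, u: set(s)})))

# Start from an arbitrary ring wiring and iterate best responses.
wiring = {u: {(u + 1) % n, (u + 2) % n} for u in range(n)}
for _ in range(5):  # a few rounds reach a stable wiring on small inputs
    for u in range(n):
        wiring[u] = best_response(u, wiring)
```

    The exhaustive subset search keeps the sketch short; real systems replace it with scalable primitives, which is part of what Egoist contributes.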

    Online Resource Inference in Network Utility Maximization Problems

    The amount of transmitted data in computer networks is expected to grow considerably in the future, putting more and more pressure on the network infrastructures. In order to guarantee a good service, it then becomes fundamental to use the network resources efficiently. Network Utility Maximization (NUM) provides a framework to optimize the rate allocation when network resources are limited. Unfortunately, in the scenario where the amount of available resources is not known a priori, classical NUM solving methods do not offer a viable solution. To overcome this limitation, we design an overlay rate allocation scheme that attempts to infer the actual amount of available network resources while coordinating the users' rate allocations. Due to the general and complex model assumed for the congestion measurements, passive learning of the available resources would not lead to satisfying performance. The coordination scheme must therefore perform active learning in order to speed up the resource estimation and quickly increase the system performance. By adopting an optimal-learning formulation, we are able to balance the tradeoff between accurate estimation and effective resource exploitation in order to maximize the long-term quality of the service delivered to the users.
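    The NUM framework this abstract builds on can be sketched in the classical known-capacity case that the paper generalizes: users share one link of capacity C, and a dual price coordinates their rates. This is a minimal sketch with illustrative values; the paper's contribution is precisely the setting where C must be inferred online:

```python
# Single-link NUM: maximize sum of log-utilities subject to sum(x) <= C,
# solved by dual (price-based) coordination. C is assumed known here.
n, C = 4, 10.0
step, lam = 0.01, 1.0   # dual step size and initial link price

for _ in range(20000):
    x = [1.0 / lam] * n  # each user maximises log(x) - lam*x  =>  x = 1/lam
    lam = max(1e-6, lam + step * (sum(x) - C))  # price rises under congestion

# The optimum of sum(log(x_i)) s.t. sum(x_i) <= C is the equal split C/n.
assert all(abs(xi - C / n) < 1e-2 for xi in x)
```

    The price update only needs the aggregate load, which is why congestion feedback suffices to coordinate users; the paper's harder problem is that the right-hand side C itself is unknown.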

    Local Calibration of Pavement ME and Performance Evaluation of Pavement Rehabilitation and Preservation Asphalt Overlays in Louisiana

    Asphalt overlays can be categorized as pavement rehabilitation (structural) overlays or pavement preservation overlays in terms of the thickness and timing of maintenance activities. Pavement rehabilitation overlays contribute to the structural enhancement of pavements, while pavement preservation overlays generally provide functional improvement to an existing pavement. In recent years, the pavement design approach employed by the Louisiana Department of Transportation and Development (LADOTD) has been transitioning from the 1993 AASHTO design procedure to a locally calibrated Pavement ME method. The AASHTOWare Pavement ME (Mechanistic-Empirical) design procedure is considered the modern approach to pavement design, and significant updates have been introduced in the Pavement ME design software in the past few years. Therefore, this study focused on conducting a local calibration of the Pavement ME procedure for asphalt overlays in Louisiana. In addition, a performance evaluation of the asphalt overlays was conducted to determine the factors that affect their performance. In total, 37 rehabilitation asphalt overlay projects and 33 preservation asphalt overlay projects were selected across Louisiana with different base types and traffic volumes. These projects consisted of mainly three types of bases: soil cement base, crushed stone base, and Portland cement concrete base. The pavement performance data were collected from the Louisiana Pavement Management System (LA-PMS) database, and the data regarding geographical location, design thickness, and traffic were retrieved from the LADOTD intranet. The field performance of the asphalt overlays was investigated in terms of several design parameters, i.e., base type, traffic level, total Hot Mix Asphalt (HMA) thickness after milling, pavement precondition, and overlay service life.
Manual calibration of Pavement ME was performed for rehabilitation asphalt overlays, and a calibrator assistance tool was adopted for the local calibration of preservation asphalt overlays, to calibrate the performance models for the typical pavement structures, traffic, and weather conditions of Louisiana. The results showed that the existing pavement condition does have an impact on preservation overlay performance, since distress was observed to be higher in pavements with poor precondition. Pavements with good precondition and higher asphalt concrete (AC) thickness exhibited longer service lives. A case study was performed to determine the optimal timing of preservation overlay applications based on cost-benefit analysis. Fatigue cracking was under-predicted and transverse cracking was over-predicted by the Pavement ME national model for rehabilitation overlays. For preservation overlays, both fatigue cracking and transverse cracking were under-predicted. Therefore, the performance models were adjusted through the local calibration to match the field measurements.
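    Local calibration of this kind amounts to adjusting model coefficients so predictions track field measurements. As an illustration only (the data and the single multiplicative factor below are hypothetical; Pavement ME fits several coefficients per distress model):

```python
# Fit one multiplicative calibration factor by least squares so that a
# nationally calibrated model's under-predictions match field data.
predicted = [12.0, 30.0, 18.0, 45.0, 25.0]  # national-model distress predictions
measured  = [20.0, 44.0, 30.0, 70.0, 41.0]  # field-measured distress (higher: under-prediction)

# Closed-form least-squares scaling: beta = sum(m*p) / sum(p*p)
beta = sum(m * p for m, p in zip(measured, predicted)) / sum(p * p for p in predicted)
calibrated = [beta * p for p in predicted]

bias_before = sum(m - p for m, p in zip(measured, predicted)) / len(measured)
bias_after  = sum(m - c for m, c in zip(measured, calibrated)) / len(measured)
```

    With systematic under-prediction the fitted factor exceeds one, and the mean bias shrinks after calibration, which mirrors the adjustment described for the under-predicted cracking models.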

    Energy Efficiency in MIMO Underlay and Overlay Device-to-Device Communications and Cognitive Radio Systems

    This paper addresses the problem of resource allocation for systems in which a primary and a secondary link share the available spectrum by an underlay or overlay approach. After observing that such a scenario models both cognitive radio and D2D communications, we formulate the problem as the maximization of the secondary energy efficiency subject to a minimum rate requirement for the primary user. This leads to challenging non-convex, fractional problems. In the underlay scenario, we obtain the global solution by means of a suitable reformulation. In the overlay scenario, two algorithms are proposed. The first one yields a resource allocation fulfilling the first-order optimality conditions of the resource allocation problem, by solving a sequence of easier fractional problems. The second one enjoys a weaker optimality claim, but an even lower computational complexity. Numerical results demonstrate the merits of the proposed algorithms in terms of both energy-efficient performance and complexity, also showing that the two proposed algorithms for the overlay scenario perform very similarly, despite the different complexity. To appear in IEEE Transactions on Signal Processing.
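    Fractional problems of this rate-over-power form are commonly handled with Dinkelbach's algorithm, which the following toy single-link sketch illustrates. The parameter values are illustrative, and the paper's reformulations address the much harder MIMO, multi-user case:

```python
import math

# Maximize energy efficiency rate(p) / (p + Pc) over transmit power p in [0, Pmax]
# via Dinkelbach's algorithm for fractional programs. Values are illustrative.
g, N0, Pc, Pmax = 2.0, 1.0, 1.0, 10.0  # channel gain, noise, circuit power, budget

def rate(p):
    return math.log2(1 + g * p / N0)

lam = 0.0
for _ in range(50):
    # Inner problem: argmax_p rate(p) - lam*(p + Pc); concave, closed-form solution.
    p = min(Pmax, max(0.0, 1 / (lam * math.log(2)) - N0 / g)) if lam > 0 else Pmax
    lam = rate(p) / (p + Pc)  # Dinkelbach update: lam converges to the optimal EE
```

    Each iteration solves an easier (non-fractional) problem, matching the abstract's description of solving "a sequence of easier fractional problems" in spirit, though the paper's sequence retains fractional structure.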

    Fine Grained Component Engineering of Adaptive Overlays: Experiences and Perspectives

    Recent years have seen significant research carried out into peer-to-peer (P2P) systems. This work has focused on the styles and applications of P2P computing, from grid computation to content distribution; however, little investigation has been performed into how these systems are built. Component-based engineering is an approach that has seen successful deployment in the field of middleware development: functionality is encapsulated in ‘building blocks’ that can be dynamically plugged together to form complete systems. This allows efficient, flexible and adaptable systems to be built with lower overhead and development complexity. This paper presents an investigation into the potential of using component-based engineering in the design and construction of peer-to-peer overlays. It is highlighted that the quality of these properties is dictated by the component architecture used to implement the system. Three reusable decomposition architectures are designed and evaluated using Chord and Pastry case studies. These demonstrate that significant improvements can be made over traditional design approaches, resulting in much more reusable, (re)configurable and extensible systems.
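    The component-based idea can be sketched as routing logic hidden behind a small interface so that implementations can be re-plugged at runtime. The names and interfaces below are illustrative, not the paper's actual framework:

```python
from abc import ABC, abstractmethod

class RoutingComponent(ABC):
    """Pluggable 'building block' for overlay routing decisions."""
    @abstractmethod
    def next_hop(self, node: int, key: int, ring_size: int) -> int: ...

class RingRouter(RoutingComponent):
    """Naive successor routing: always forward to the next node on the ring."""
    def next_hop(self, node, key, ring_size):
        return (node + 1) % ring_size

class FingerRouter(RoutingComponent):
    """Chord-style greedy routing: jump by the largest power of two not past the key."""
    def next_hop(self, node, key, ring_size):
        dist = (key - node) % ring_size
        step = 1
        while step * 2 <= dist:
            step *= 2
        return (node + step) % ring_size

class OverlayNode:
    def __init__(self, ident, router: RoutingComponent):
        self.ident, self.router = ident, router
    def reconfigure(self, router):  # dynamic re-plugging of the component
        self.router = router
    def route(self, key, ring_size=64):
        node, hops = self.ident, 0
        while node != key:
            node = self.router.next_hop(node, key, ring_size)
            hops += 1
        return hops

node = OverlayNode(0, RingRouter())
slow = node.route(37)             # linear hop count with successor routing
node.reconfigure(FingerRouter())
fast = node.route(37)             # logarithmic hop count after swapping the component
```

    The node's behaviour changes without touching its own code, which is the kind of (re)configurability the decomposition architectures in the paper aim to make systematic.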