
    Self-Sustaining Caching Stations: Towards Cost-Effective 5G-Enabled Vehicular Networks

    In this article, we investigate cost-effective 5G-enabled vehicular networks to support emerging vehicular applications such as autonomous driving, in-car infotainment and location-based road services. To this end, self-sustaining caching stations (SCSs) are introduced to liberate on-road base stations from the constraints of power lines and wired backhauls. Specifically, the cache-enabled SCSs are powered by renewable energy and connected to core networks through wireless backhauls, enabling "drop-and-play" deployment, green operation, and low-latency services. With SCSs integrated, a 5G-enabled heterogeneous vehicular networking architecture is further proposed, where SCSs are deployed along the roadside for traffic offloading while conventional macro base stations (MBSs) provide ubiquitous coverage to vehicles. In addition, a hierarchical network management framework is designed to deal with high dynamics in vehicular traffic and renewable energy, where content caching, energy management and traffic steering are jointly investigated to optimize the service capability of SCSs while balancing power demand and supply over different time scales. Case studies are provided to illustrate SCS deployment and operation designs, and some open research issues are also discussed. Comment: IEEE Communications Magazine, to appear.
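
    As a toy illustration of the traffic-steering idea only, the Python sketch below offloads a vehicle's content request to a roadside SCS when the item is cached and the station's harvested energy stays above a reserve, and otherwise falls back to the MBS. The class, energy figures and thresholds are illustrative assumptions, not the paper's design.

```python
# Illustrative sketch (not the paper's algorithm): steer a vehicle's content
# request to a roadside SCS only if the content is cached and the SCS battery
# keeps a safety reserve; otherwise fall back to the macro base station (MBS).
from dataclasses import dataclass, field

@dataclass
class CachingStation:
    cache: set = field(default_factory=set)   # content IDs cached locally
    battery_j: float = 0.0                    # stored renewable energy (J), assumed
    tx_cost_j: float = 5.0                    # assumed energy per served request (J)

    def can_serve(self, content_id: str, reserve_j: float = 50.0) -> bool:
        """Serve locally only if cached and the battery stays above a reserve."""
        return (content_id in self.cache
                and self.battery_j - self.tx_cost_j >= reserve_j)

def steer_request(scs: CachingStation, content_id: str) -> str:
    """Return which tier handles the request: 'SCS' (offload) or 'MBS' (fallback)."""
    if scs.can_serve(content_id):
        scs.battery_j -= scs.tx_cost_j
        return "SCS"
    return "MBS"

# Usage: a station with one cached item and a partly charged battery.
scs = CachingStation(cache={"video42"}, battery_j=80.0)
print(steer_request(scs, "video42"))   # SCS
print(steer_request(scs, "video99"))   # MBS (not cached)
```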

    Modelling network memory servers with parallel processors, break-downs and repairs.

    This paper presents an analytical method for the performability evaluation of a previously reported network memory server attached to a local area network. To increase the performance and availability of the proposed system, an additional server is added. Such systems are prone to failures; with this in mind, a mathematical model has been developed to analyse the performability of the proposed system with break-downs and repairs. Mean queue lengths and the probability of job losses for the LAN feeding the network memory server are calculated and presented.
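
    A minimal numerical sketch of such a performability model is shown below: a small continuous-time Markov chain with two parallel servers subject to breakdowns and repairs and a finite queue, from which the mean queue length and the job-loss probability are computed. The rates, queue capacity and single-repair-facility assumption are illustrative, not the paper's exact model.

```python
# Assumed parameters and structure, not the paper's exact model: two parallel
# servers with breakdowns and repairs, Poisson arrivals, finite queue. Build
# the CTMC generator, solve for the steady state, report performability metrics.
import numpy as np

lam, mu = 8.0, 5.0        # arrival rate, per-server service rate (assumed)
xi, eta = 0.05, 0.5       # breakdown rate per server, repair rate (assumed)
N, S = 50, 2              # queue capacity (truncation), number of servers

states = [(s, n) for s in range(S + 1) for n in range(N + 1)]
idx = {st: i for i, st in enumerate(states)}
Q = np.zeros((len(states), len(states)))

for (s, n), i in idx.items():
    if n < N:                      # job arrival (lost when the queue is full)
        Q[i, idx[(s, n + 1)]] += lam
    if n > 0 and s > 0:            # service completion by an operative server
        Q[i, idx[(s, n - 1)]] += min(n, s) * mu
    if s > 0:                      # one of the operative servers breaks down
        Q[i, idx[(s - 1, n)]] += s * xi
    if s < S:                      # a broken server is repaired (single repair facility)
        Q[i, idx[(s + 1, n)]] += eta
np.fill_diagonal(Q, -Q.sum(axis=1))

# Solve pi Q = 0 with sum(pi) = 1 via an augmented least-squares system.
A = np.vstack([Q.T, np.ones(len(states))])
b = np.zeros(len(states) + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

mean_queue = sum(n * pi[idx[(s, n)]] for s, n in states)
p_loss = sum(pi[idx[(s, N)]] for s in range(S + 1))
print(f"mean queue length ~ {mean_queue:.2f}, job-loss probability ~ {p_loss:.4f}")
```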

    Operating-system support for distributed multimedia

    Multimedia applications place new demands upon processors, networks and operating systems. While some network designers, through ATM for example, have considered revolutionary approaches to supporting multimedia, the same cannot be said for operating systems designers. Most work is evolutionary in nature, attempting to identify additional features that can be added to existing systems to support multimedia. Here we describe the Pegasus project's attempt to build an integrated hardware and operating system environment from the ground up, specifically targeted towards multimedia.

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems to better understand their goals and methodology, which helps evaluate their applicability to similar problems. The taxonomy also provides a "gap analysis" of the area through which researchers can identify new issues for investigation. We hope that the proposed taxonomy and mapping also provide an easy way for new practitioners to understand this complex area of research. Comment: 46 pages, 16 figures, Technical Report.
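
    Purely as an illustration of how such a taxonomy might be encoded, the snippet below represents the taxonomy dimensions named in the abstract as a small lookup structure and validates a system description against them; the category values and the example entry are hypothetical, not taken from the paper.

```python
# Toy encoding of a taxonomy: dimension names follow the abstract, but the
# categories and the example classification below are illustrative assumptions.
from typing import Dict, List

TAXONOMY: Dict[str, List[str]] = {
    "architecture": ["hierarchical", "federated", "hybrid"],
    "data_transportation": ["bulk transfer", "streaming"],
    "data_replication": ["static", "dynamic"],
    "scheduling": ["data-aware", "compute-only"],
}

def classify(system: Dict[str, str]) -> None:
    """Check that each dimension/value pair is a recognised taxonomy category."""
    for dim, value in system.items():
        if value not in TAXONOMY.get(dim, []):
            raise ValueError(f"{value!r} is not a recognised {dim} category")
    print("classification is consistent with the taxonomy")

# Hypothetical example entry (purely illustrative).
classify({"architecture": "hierarchical", "data_replication": "dynamic"})
```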

    A performance model of speculative prefetching in distributed information systems

    Previous studies in speculative prefetching focus on building and evaluating access models for the purpose of access prediction. This paper investigates a complementary area which has been largely ignored, that of performance modelling. We use improvement in access time as the performance metric, for which we derive a formula in terms of resource parameters (time available and time required for prefetching) and speculative parameters (probabilities for the next access). The performance maximization problem is expressed as a stretch knapsack problem. We develop an algorithm to maximize the improvement in access time by solving the stretch knapsack problem, using theoretically proven apparatus to reduce the search space. Integration between speculative prefetching and caching is also investigated, albeit under the assumption of equal item sizes.
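
    To convey the flavour of the trade-off, the sketch below uses a simple greedy heuristic: prefetch the candidates with the best expected saving per unit of prefetch time until the available idle time is spent. This is a hedged illustration under assumed inputs, not the paper's stretch-knapsack algorithm, and the probabilities and timings are made up.

```python
# Greedy prefetch selection (illustrative only, NOT the paper's algorithm):
# each candidate i has a next-access probability p_i and a prefetch time t_i;
# within the available idle time T, pick items by expected saving per unit time.

def greedy_prefetch(candidates, T):
    """candidates: list of (item_id, p_next_access, prefetch_time).
    Returns the items chosen to prefetch within the time budget T."""
    ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    chosen, used = [], 0.0
    for item, p, t in ranked:
        if used + t <= T:
            chosen.append(item)
            used += t
    return chosen

# Example: three candidate objects and 2.0 s of idle time available.
print(greedy_prefetch([("a", 0.5, 1.2), ("b", 0.3, 0.4), ("c", 0.2, 1.0)], 2.0))
# -> ['b', 'a'] under these illustrative numbers
```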