3 research outputs found

    A Demand Based Load Balanced Service Replication Model

    Cloud computing allows service users and providers to access applications, logical resources, and files on any computer with ease. A cloud service has distinct characteristics that differentiate it from traditional hosting: it is sold on demand, typically by the minute or the hour, and it is elastic, allowing capacity to be increased or capabilities added on the fly without investing in new infrastructure, training new personnel, or licensing new software. Cloud computing not only promises reliable services delivered through next-generation data centers built on compute and storage virtualization technologies, but also addresses key issues such as scalability, reliability, fault tolerance, and file load balancing. One way to achieve this is through service replication across different machines coupled with load balancing. Although replication potentially improves fault tolerance, it raises the problem of keeping replicas consistent when a service is updated or modified; on the other hand, fewer replicas decrease concurrency and the level of service availability. A proper balance between the replication mechanism and consistency maintenance not only ensures a highly reliable and fault-tolerant system but also improves system performance significantly. This paper presents a load-balancing-based service replication model that creates a replica on other servers based on the number of service accesses. Simulation results indicate that the proposed model reduces the number of messages exchanged for service replication by 25-55%, improving overall system performance significantly. In the case of CPU-load-based file replication, file access time is observed to reduce by 5.56%-7.65%.
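
    The core mechanism described, creating a replica once a service's observed demand crosses a threshold and placing it on a lightly loaded server, can be sketched as follows. This is a minimal illustration and not the authors' implementation; the threshold value, the CPU-load metric, and all names are assumptions made for the example.

        REPLICATION_THRESHOLD = 100  # assumed access-count trigger, not from the paper

        class Server:
            def __init__(self, name, cpu_load=0.0):
                self.name = name
                self.cpu_load = cpu_load      # assumed load metric in [0.0, 1.0]
                self.services = set()         # services this server currently hosts

        def maybe_replicate(service, access_counts, servers):
            """Replicate 'service' onto the least-loaded server that lacks a copy,
            once its observed demand crosses the threshold."""
            if access_counts.get(service, 0) < REPLICATION_THRESHOLD:
                return None
            candidates = [s for s in servers if service not in s.services]
            if not candidates:
                return None                   # every server already holds a replica
            target = min(candidates, key=lambda s: s.cpu_load)
            target.services.add(service)
            access_counts[service] = 0        # restart demand counting for the new epoch
            return target

        # Example: "auth" has exceeded the demand threshold, so the replica
        # lands on the lightly loaded server s2.
        servers = [Server("s1", 0.7), Server("s2", 0.2)]
        counts = {"auth": 150}
        print(maybe_replicate("auth", counts, servers).name)  # -> s2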

    Technical analysis of content placement algorithms for content delivery network in cloud

    Content placement algorithms are an integral part of a cloud-based content delivery network. They are responsible for selecting precisely which content to place on the surrogate servers distributed over a geographical region. Although various works have already been carried out in this area, most of them have shortcomings that have received little disclosure. It is well known that quality of service, quality of experience, and cost are essential objectives targeted for improvement in existing work, yet various other aspects and underlying reasons are equally important from a design standpoint. This paper therefore reviews existing approaches to content placement algorithms for cloud-based content delivery networks, with the aim of exposing open research issues.
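
    As a concrete reference point for what such algorithms do, one simple family of placement strategies is popularity-greedy placement: rank contents by request popularity and fill each surrogate's remaining capacity with the most requested items. The sketch below is an illustrative baseline under assumed data structures, not any specific algorithm surveyed in the paper.

        def greedy_placement(contents, surrogates):
            """contents: dict name -> (size, popularity); surrogates: dict name -> free capacity.
            Returns a mapping surrogate -> list of placed content names."""
            placement = {s: [] for s in surrogates}
            # Place the most popular content first, onto the surrogate with the
            # most free space; items that do not fit anywhere are simply skipped.
            for name, (size, _pop) in sorted(contents.items(),
                                             key=lambda kv: kv[1][1], reverse=True):
                target = max(surrogates, key=surrogates.get)
                if surrogates[target] >= size:
                    placement[target].append(name)
                    surrogates[target] -= size
            return placement

    Real placement algorithms refine this baseline by also weighing network distance to users, storage and bandwidth cost, and replication degree, which is where the quality-of-service and cost trade-offs discussed above arise.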

    Providing Freshness for Cached Data in Unstructured Peer-to-Peer Systems

    Replication is a popular technique for increasing data availability and improving performance in peer-to-peer systems. Maintaining the freshness of replicated data is challenging due to the high cost of update management. While updates have been studied in structured networks, they have been neglected in unstructured networks. We therefore confront the problem of maintaining fresh replicas of data in unstructured peer-to-peer networks. We propose techniques that leverage path replication to support efficient lazy updates and provide freshness for cached data in these systems using only local knowledge. In addition, we show that locally available information may be used to provide additional guarantees of freshness at an acceptable cost to performance. Through performance simulations based on both synthetic and real-world workloads from big data environments, we demonstrate the effectiveness of our approach.
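
    To make the idea of a lazy, locally decided freshness check concrete: a peer serves its cached copy while a locally tracked age bound holds, and only otherwise refreshes from the neighbor it obtained the replica from along the replication path. The following sketch assumes a simple age bound and hypothetical names; it is not the paper's exact protocol.

        import time

        MAX_AGE = 60.0  # assumed local freshness bound in seconds (illustrative)

        class CacheEntry:
            def __init__(self, value, version):
                self.value = value
                self.version = version
                self.cached_at = time.time()

        class PathParent:
            """Stub for the neighbor this replica was copied from along the
            replication path; in a real deployment this is a remote peer."""
            def __init__(self, store):
                self.store = store            # key -> (value, version)

            def fetch(self, key):
                return self.store[key]

        def lookup(key, cache, parent):
            """Serve the local copy while it is considered fresh; otherwise
            perform a lazy refresh from the path parent. The freshness
            decision uses only local state, so no messages are needed for
            fresh hits."""
            entry = cache.get(key)
            if entry is not None and time.time() - entry.cached_at < MAX_AGE:
                return entry.value            # fresh enough: purely local decision
            value, version = parent.fetch(key)
            cache[key] = CacheEntry(value, version)
            return value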