
    Replicating Web Applications On-Demand

    Many Web-based commercial services deliver their content using Web applications that generate pages dynamically based on user profiles, request parameters, etc. The workload of these applications is often characterized by a large number of unique requests and a significant fraction of data updates. Hosting these applications drives the need for systems that replicate both the application code and its underlying data. We propose the design of such a system based on on-demand replication, where data units are replicated only to servers that access them often. This reduces the consistency overhead, as updates are sent to a smaller number of servers. The proposed system provides complete replication transparency to the application, thereby allowing developers to build applications unaware of the underlying data replication. We show that the proposed techniques can reduce the client response time by a factor of 5 in comparison to existing techniques for a real-world e-commerce application used in the TPC-W benchmark. Furthermore, we evaluate our strategies for a wide range of workloads and show that on-demand replication performs better than centralized and fully replicated systems by reducing the average latency of read/write data accesses as well as the amount of bandwidth used to maintain data consistency.
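    A minimal sketch of the on-demand idea described above, assuming a simple access-count threshold; the threshold, class, and method names are illustrative and not the paper's actual policy. Updates are pushed only to the servers that have earned a replica, which is what keeps the consistency overhead low.

        from collections import defaultdict

        ACCESS_THRESHOLD = 10  # assumed tuning parameter, not from the paper

        class ReplicationManager:
            # Tracks per-server access counts and replicates data units on demand.

            def __init__(self):
                self.access_counts = defaultdict(int)   # (data_unit, server) -> accesses
                self.replicas = defaultdict(set)        # data_unit -> servers holding a copy

            def record_access(self, data_unit, server):
                # Create a replica at a server once it proves to be a frequent reader.
                self.access_counts[(data_unit, server)] += 1
                if (server not in self.replicas[data_unit]
                        and self.access_counts[(data_unit, server)] >= ACCESS_THRESHOLD):
                    self.replicas[data_unit].add(server)

            def servers_to_update(self, data_unit):
                # Propagate an update only to the servers that actually hold a replica.
                return sorted(self.replicas[data_unit])

        if __name__ == "__main__":
            mgr = ReplicationManager()
            for _ in range(10):
                mgr.record_access("catalog:item42", "edge-eu")
            print(mgr.servers_to_update("catalog:item42"))  # ['edge-eu']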

    Data center resilience assessment: storage, networking and security.

    Data centers (DC) are the core of the national cyber infrastructure. With the incredible growth of critical data volumes in financial institutions, government organizations, and global companies, data centers are becoming larger and more distributed, posing more challenges for operational continuity in the presence of experienced cyber attackers and occasional natural disasters. The main objective of this research work is to present a new methodology for data center resilience assessment. This methodology consists of defining data center resilience requirements, devising a high-level metric for data center resilience, and designing and developing a tool to validate the metric. Since computer networks are an important component of the data center architecture, this research work was extended to investigate computer network resilience enhancement opportunities in the areas of routing protocols, redundancy, and server load, with the aim of minimizing network downtime and increasing the time the network can resist attacks. Data center resilience assessment is a complex process, as it involves several aspects such as policies for emergencies, recovery plans, variation in data center operational roles, hosted/processed data types, and data center architectures; in this dissertation, however, storage, networking, and security are emphasized. The need for resilience assessment emerged due to the gap in existing reliability, availability, and serviceability (RAS) measures. Resilience as an evaluation metric leads to a better proactive perspective in system design and management. The proposed Data Center Resilience Assessment Portal (DC-RAP) is designed to easily integrate various operational scenarios. DC-RAP features a user-friendly interface to assess resilience in terms of performance analysis and speed of recovery by collecting the following information: time to detect attacks, time to resist, time to fail, and recovery time. Several sets of experiments were performed. Results obtained from investigating the impact of routing protocols and server load balancing algorithms on network resilience showed that using a particular routing protocol or server load balancing algorithm can enhance the network resilience level in terms of minimizing downtime and ensuring speedy recovery. Experimental results for investigating the use of social network analysis (SNA) for identifying important routers in a computer network also showed that SNA was successful in identifying them; this list of important routers can be used to add redundancy for those routers and so ensure a high level of resilience. Finally, experimental results for testing and validating the data center resilience assessment methodology using DC-RAP showed the ability of the methodology to quantify data center resilience in terms of providing steady performance, minimal recovery time, and maximum attack-resistance time. The main contributions of this work can be summarized as follows: a methodology for evaluating data center resilience has been developed; a Data Center Resilience Assessment Portal (DC-RAP) has been implemented for resilience evaluations; and the use of social network analysis to improve computer network resilience has been investigated.
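    The abstract names the four quantities DC-RAP collects but not how they are combined; the score below is one plausible, purely illustrative combination (higher when the system resists longer, lower when detection and recovery are slow) and is not the dissertation's metric.

        from dataclasses import dataclass

        @dataclass
        class Incident:
            time_to_detect: float   # seconds from attack start to detection
            time_to_resist: float   # seconds the system withstood the attack
            time_to_fail: float     # seconds until service failed or degraded
            recovery_time: float    # seconds to restore steady performance

        def resilience_score(inc: Incident) -> float:
            # Hypothetical score: fraction of the outage window spent resisting,
            # discounted by detection and recovery overhead relative to failure time.
            window = max(inc.time_to_fail, 1e-9)
            resisted_fraction = min(1.0, inc.time_to_resist / window)
            overhead = (inc.time_to_detect + inc.recovery_time) / window
            return resisted_fraction / (1.0 + overhead)

        print(round(resilience_score(Incident(5.0, 40.0, 60.0, 30.0)), 3))  # 0.421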

    Web Replica Hosting Systems


    Generalized Snapshot Isolation and a Prefix-Consistent Implementation

    Generalized snapshot isolation extends snapshot isolation as used in Oracle and other databases in a manner suitable for replicated databases. While (conventional) snapshot isolation requires that transactions observe the "latest" snapshot of the database, generalized snapshot isolation allows the use of "older" snapshots, facilitating a replicated implementation. We show that many of the desirable properties of snapshot isolation remain. In particular, under certain assumptions on the transaction workload the execution is serializable. An implementation of generalized snapshot isolation can choose which past snapshot it uses. An interesting choice for a replicated database is prefix-consistent snapshot isolation, in which the snapshot contains at least all the writes of locally committed transactions. As an instance of generalized snapshot isolation, it inherits all of its properties. In addition, read-only transactions never block, and consecutive transactions submitted in a single workflow on a particular replica observe the updates of their predecessors in the workflow. We present two implementation strategies of prefix-consistent snapshot isolation. We conclude with an analytical performance model of one of the implementations, bringing out the benefits, in particular reduced latency for read-only transactions, and showing that the potential downsides, in particular the change in abort rate of update transactions, are limited.
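    A minimal sketch of the two ideas above, snapshot choice at a replica and first-committer-wins certification of update transactions, assuming a single in-memory replica with invented names; the paper's two implementation strategies are not reproduced here.

        class Replica:
            def __init__(self):
                self.applied_version = 0   # latest update applied at this replica
                self.commit_log = []       # (commit_version, frozenset of written items)

            def begin(self):
                # Prefix-consistent choice: the snapshot is whatever this replica has
                # already applied, so it contains all locally committed writes and
                # read-only transactions never wait for remote state.
                return self.applied_version

            def certify(self, snapshot_version, write_set):
                # First-committer-wins: abort if any transaction that committed after
                # this transaction's snapshot wrote an overlapping item.
                ws = frozenset(write_set)
                for commit_version, other_ws in self.commit_log:
                    if commit_version > snapshot_version and ws & other_ws:
                        return False
                self.applied_version += 1
                self.commit_log.append((self.applied_version, ws))
                return True

        r = Replica()
        snap = r.begin()
        print(r.certify(snap, {"x"}))   # True: first writer of x commits
        print(r.certify(snap, {"x"}))   # False: concurrent writer of x aborts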

    Balancing the Trade-Offs between Query Delay and Data Availability in MANETs


    Workload Interleaving with Performance Guarantees in Data Centers

    In the era of global, large-scale data centers residing in clouds, many applications and users share the same pool of resources for the purposes of reducing energy and operating costs, and of improving availability and reliability. Along with the above benefits, resource sharing also introduces performance challenges: when multiple workloads access the same resources concurrently, contention may occur and introduce delays in the performance of individual workloads. Providing performance isolation to individual workloads needs effective management methodologies. The challenges of deriving effective management methodologies lie in finding accurate, robust, compact metrics and models to drive algorithms that can meet different performance objectives while achieving efficient utilization of resources. This dissertation proposes a set of methodologies aiming at solving the challenging performance isolation problem of workload interleaving in data centers, focusing on both storage components and computing components. At the storage node level, we focus on methodologies for better interleaving user traffic with background workloads, such as tasks for improving reliability, availability, and power savings. More specifically, a scheduling policy for background workloads based on the statistical characteristics of the system busy periods and a methodology that quantitatively estimates the performance impact of power savings are developed. At the storage cluster level, we consider methodologies on how to efficiently conduct work consolidation and schedule asynchronous updates without violating user performance targets. More specifically, we develop a framework that can estimate beforehand the benefits and overheads of each option in order to automate the process of reaching intelligent consolidation decisions while achieving faster eventual consistency. At the computing node level, we focus on improving workload interleaving at off-the-shelf servers, as they are the basic building blocks of large-scale data centers. We develop priority scheduling middleware that employs different policies to schedule background tasks based on the instantaneous resource requirements of the high-priority applications running on the server node. Finally, at the computing cluster level, we investigate popular computing frameworks for large-scale data-intensive distributed processing, such as MapReduce and its Hadoop implementation. We develop a new Hadoop scheduler called DyScale to exploit capabilities offered by heterogeneous cores in order to achieve a variety of performance objectives.
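    A toy sketch of the priority-scheduling idea at a single server node: background tasks are dispatched only when the high-priority foreground workload leaves enough headroom. The headroom threshold, the tick-based probe, and all names are assumptions, not the middleware described in the dissertation.

        import heapq
        import itertools

        CPU_HEADROOM = 0.30   # assumed: run background work only with >= 30% idle CPU

        class BackgroundScheduler:
            def __init__(self):
                self._counter = itertools.count()   # tie-breaker for equal priorities
                self._queue = []                    # (priority, seq, task); lower runs first

            def submit(self, priority, task):
                heapq.heappush(self._queue, (priority, next(self._counter), task))

            def tick(self, foreground_cpu_util):
                # Called periodically: dispatch at most one background task per tick,
                # and only while the foreground workload is light enough.
                if self._queue and (1.0 - foreground_cpu_util) >= CPU_HEADROOM:
                    _, _, task = heapq.heappop(self._queue)
                    task()

        sched = BackgroundScheduler()
        sched.submit(1, lambda: print("running background scrubbing task"))
        sched.tick(foreground_cpu_util=0.90)   # too busy, nothing runs
        sched.tick(foreground_cpu_util=0.50)   # headroom available, task runs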

    Geographically Distributed Database Management at the Cloud's Edge

    Request latency resulting from the geographic separation between clients and remote application servers is a challenge for cloud-hosted web and mobile applications. Numerous studies have shown the importance of low latency to the end user experience. Small response time increases on the order of a few hundred milliseconds directly translate to reduced user satisfaction and loss of revenue that persist even after a low latency environment is restored. One way to address this challenge in geo-distributed settings is to push all or part of the application, along with the data it requires, to the edge of the cloud, closer to application clients. This thesis explores the idea of taking advantage of clients' proximity to the edge of the network in order to reduce request latencies. SpearDB is a prototype replicated distributed database system which operates in a star network topology, with a core site and a large number of edge sites that are close to clients. Clients access the nearest edge, which holds replicas of locally relevant portions of the database. SpearDB's edge sites coordinate through the core to provide a global transactional consistency guarantee (parallel snapshot isolation or PSI), while handling as much work locally as possible. SpearDB provides full general purpose transactional semantics with ACID guarantees. Experiments show that SpearDB is effective at reducing workload latencies for applications whose access patterns are geographically localizable. Many applications fit these criteria: bulletin boards (e.g., Craigslist, Kijiji), local commerce or services (e.g., Groupon, Uber), booking and ticketing (e.g., OpenTable, StubHub), location-based services (mapping, directions, augmented reality), local news outlets, and client-centric services (e-mail, RSS feeds, gaming). SpearDB introduces protocols for executing application transactions in a geo-distributed setting under strong consistency guarantees. These protocols automatically hide the complexity as well as much of the latency introduced by geo-distribution from applications. The effectiveness of SpearDB depends on the placement of primary and secondary replicas at core and edge sites. The secondary replica placement problem is shown to be NP-hard. Several algorithms for automatic data partitioning and replication are presented to provide approximate solutions. These algorithms work in a geo-distributed core-edge setting under partial replication. Their goal is to bring data closer to clients in order to lower request latencies. Experimental comparisons of the resulting placements' latency impact show good results. Surprisingly, however, the placements produced by the simplest of the proposed algorithms are comparable in quality to those produced by more complex approaches.
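    In the spirit of the "simplest of the proposed algorithms" the abstract alludes to, the sketch below greedily places a partition's secondary replicas at the edge sites that access it most often. The replica budget, input format, and names are illustrative, not SpearDB's actual placement algorithm.

        from collections import Counter

        def place_secondaries(access_log, replica_budget=2):
            # access_log: iterable of (partition, edge_site) access events.
            # Returns {partition: [edge sites chosen for secondary replicas]}.
            counts = {}
            for partition, site in access_log:
                counts.setdefault(partition, Counter())[site] += 1
            return {
                partition: [site for site, _ in site_counts.most_common(replica_budget)]
                for partition, site_counts in counts.items()
            }

        log = ([("listings:nyc", "edge-nyc")] * 50
               + [("listings:nyc", "edge-bos")] * 20
               + [("listings:nyc", "edge-sfo")] * 3)
        print(place_secondaries(log))   # {'listings:nyc': ['edge-nyc', 'edge-bos']}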

    Scalable data management for web applications

    Steen, M.R. van [Promotor]; Pierre, G.E.O. [Copromotor]; Chi, C.H. [Copromotor]