    Australia, Turkey, and the US, c.1975-2018: Testing the "Wobbly Cross"

    This MPhil compares Turkish and Australian foreign policy relations with the United States (US) between the mid-1970s and 2018. The comparison investigates whether these bilateral relations resemble the "wavy cross" explored in my doctoral thesis. From the mid-1940s to the mid-1970s, Turkish-American relations followed a fluctuating downward curve, while Australian-American relations followed a fluctuating upward curve: Turkey moved away from the US and Australia moved closer, a pattern like a "wavy cross". This thesis begins where my PhD concludes, c.1980. It will test whether the divergence between the global priorities of a great power and the local/national priorities of two middle powers, together with the imbalance in their bilateral relations, continued to affect the stability of the arms and the tendency of the "wavy cross". Although the metaphorical depiction in my PhD is the "wavy cross", "wobbly" seems more appropriate for illustrating the fluctuations of the arms. Each chapter examines the reasons inducing the wobbles and sustaining the tendency of the Turkish and Australian arms, in order to test the validity of the "wobbly cross". The rationale for comparing Turkey and Australia is threefold. First, both Turkey and Australia are well-known examples of middle powers, yet little comparative research has been conducted on them. Second, my ultimate aim is to combine the PhD and MPhil theses into a book. Third, middle powers' foreign policy behaviour remains an unclear area in the International Relations and Political History literature. This work hopes to cast light on the commonalities and differences in the foreign policies of two significant middle powers. It is a Political History rather than an International Relations project, since it compares foreign policies using primary records of policy makers' statements and actions, which provide a reliable basis for analysis. As I found during my PhD research, International Relations theories and patterns do not explain middle powers' actions over such a long span of history.

    Enhancing Cloud System Runtime to Address Complex Failures

    As the reliance on cloud systems intensifies in our progressively digital world, understanding and reinforcing their reliability becomes more crucial than ever. Despite impressive advancements in augmenting the resilience of cloud systems, the growing incidence of complex failures now poses a substantial challenge to the availability of these systems. As cloud systems continue to scale and increase in complexity, failures not only become more elusive to detect but can also lead to more catastrophic consequences. Such failures call into question the foundational premises of conventional fault-tolerance designs, necessitating novel system designs to counteract them. This dissertation aims to enhance distributed systems' capabilities to detect, localize, and react to complex failures at runtime. To this end, it makes contributions addressing three emerging categories of failures in cloud systems. The first part investigates partial failures, introducing OmegaGen, a tool adept at generating tailored checkers for detecting and localizing such failures. The second part grapples with silent semantic failures prevalent in cloud systems, presenting our study findings and introducing Oathkeeper, a tool that leverages past failures to infer rules and expose these silent issues. The third part explores solutions to slow failures via RESIN, a framework specifically designed to detect, diagnose, and mitigate memory leaks in cloud-scale infrastructures, developed in collaboration with Microsoft Azure. The dissertation concludes by offering insights into future directions for the construction of reliable cloud systems.
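
    A minimal sketch of the watchdog idea behind such partial-failure checkers, assuming a watched module whose critical path is disk I/O: a reduced version of the module's operation runs periodically in a side channel, and a stalled or failing probe is reported as a suspected partial failure. The probe, timeout, and reporting below are illustrative assumptions, not OmegaGen's actual interface.

    ```python
    import concurrent.futures
    import os
    import tempfile
    import time

    def write_probe(data_dir: str) -> None:
        """Reduced version of the watched module's critical operation: a small write + fsync."""
        fd, path = tempfile.mkstemp(dir=data_dir, prefix="probe_")
        try:
            os.write(fd, b"watchdog-probe")
            os.fsync(fd)  # surfaces stalled or failing I/O paths
        finally:
            os.close(fd)
            os.remove(path)

    def run_watchdog(data_dir: str, probes: int = 3,
                     interval_s: float = 1.0, timeout_s: float = 2.0) -> None:
        """Run the probe periodically; report stalls or errors as suspected partial failures."""
        with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
            for _ in range(probes):
                future = pool.submit(write_probe, data_dir)
                try:
                    future.result(timeout=timeout_s)
                    print("probe ok")
                except concurrent.futures.TimeoutError:
                    # A real checker would also localize the stalled operation.
                    print(f"suspected partial failure: probe stalled > {timeout_s}s in {data_dir}")
                except OSError as err:
                    print(f"suspected partial failure: probe error {err!r} in {data_dir}")
                time.sleep(interval_s)

    if __name__ == "__main__":
        run_watchdog(tempfile.gettempdir())
    ```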

    Workload Interleaving with Performance Guarantees in Data Centers

    In the era of global, large-scale data centers residing in clouds, many applications and users share the same pool of resources to reduce energy and operating costs and to improve availability and reliability. Along with these benefits, resource sharing also introduces performance challenges: when multiple workloads access the same resources concurrently, contention may occur and introduce delays in the performance of individual workloads. Providing performance isolation to individual workloads requires effective management methodologies. The challenge in deriving such methodologies lies in finding accurate, robust, compact metrics and models to drive algorithms that can meet different performance objectives while achieving efficient utilization of resources. This dissertation proposes a set of methodologies aimed at solving this performance isolation problem for workload interleaving in data centers, focusing on both storage and computing components. At the storage node level, we focus on methodologies for better interleaving user traffic with background workloads, such as tasks for improving reliability, availability, and power savings. More specifically, we develop a scheduling policy for background workloads based on the statistical characteristics of the system's busy periods and a methodology that quantitatively estimates the performance impact of power savings. At the storage cluster level, we consider methodologies for efficiently conducting work consolidation and scheduling asynchronous updates without violating user performance targets. More specifically, we develop a framework that can estimate beforehand the benefits and overheads of each option in order to automate the process of reaching intelligent consolidation decisions while achieving faster eventual consistency. At the computing node level, we focus on improving workload interleaving on off-the-shelf servers, as they are the basic building blocks of large-scale data centers. We develop priority scheduling middleware that employs different policies to schedule background tasks based on the instantaneous resource requirements of the high-priority applications running on the server node. Finally, at the computing cluster level, we investigate popular computing frameworks for large-scale data-intensive distributed processing, such as MapReduce and its Hadoop implementation. We develop a new Hadoop scheduler called DyScale that exploits the capabilities offered by heterogeneous cores in order to achieve a variety of performance objectives.
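
    A minimal sketch of the priority-scheduling idea at the computing node level, assuming a Unix host: queued background tasks are dispatched only while the node's instantaneous load stays below a threshold, so the high-priority foreground application keeps its performance target. The load probe, threshold, and task queue are illustrative assumptions, not the middleware's actual policy.

    ```python
    import os
    import queue
    import threading
    import time

    def node_is_idle(threshold: float = 0.5) -> bool:
        """Instantaneous load probe (Unix-only): 1-minute load average per core below a threshold."""
        return os.getloadavg()[0] / (os.cpu_count() or 1) < threshold

    def background_worker(tasks: queue.Queue, poll_s: float = 0.5) -> None:
        """Dispatch deferred background tasks only while the node is lightly loaded."""
        while True:
            task = tasks.get()
            while not node_is_idle():   # wait out the foreground's busy period
                time.sleep(poll_s)
            task()                      # run the background task in an idle window
            tasks.task_done()

    if __name__ == "__main__":
        tasks: queue.Queue = queue.Queue()
        for i in range(3):
            tasks.put(lambda i=i: print(f"background task {i} ran in an idle window"))
        threading.Thread(target=background_worker, args=(tasks,), daemon=True).start()
        tasks.join()   # block until all deferred background work has completed
    ```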

    On I/O Performance and Cost Efficiency of Cloud Storage: A Client's Perspective

    Cloud storage has gained increasing popularity in the past few years. In cloud storage, data are stored in the service provider's data centers; users access data via the network and pay fees based on service usage. For such a new storage model, our prior wisdom and optimization schemes for conventional storage may not remain valid or applicable to the emerging cloud storage. In this dissertation, we focus on understanding and optimizing the I/O performance and cost efficiency of cloud storage from a client's perspective. We first conduct a comprehensive study to gain insight into the I/O performance behaviors of cloud storage from the client side. Through extensive experiments, we have obtained several critical findings and useful implications for system optimization. We then design a client cache framework, called Pacaca, to further improve the end-to-end performance of cloud storage. Pacaca seamlessly integrates parallelized prefetching and cost-aware caching by utilizing the parallelism potential and object correlations of cloud storage. In addition to improving system performance, we have also made efforts to reduce the monetary cost of using cloud storage services by proposing a latency- and cost-aware client caching scheme, called GDS-LC, which achieves two optimization goals for using cloud storage services: low access latency and low monetary cost. Our experimental results show that our proposed client-side solutions significantly outperform traditional methods. Our study contributes to inspiring the community to reconsider system optimization methods in the cloud environment, especially for the purpose of integrating cloud storage into the current storage stack as a primary storage layer.
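
    A minimal GreedyDual-Size-style sketch of the latency- and cost-aware caching idea, assuming each object's retrieval penalty combines its access latency and per-request monetary cost: higher-penalty, smaller objects are kept in preference, and a global aging value favors recently useful objects. The priority formula and weighting below are illustrative assumptions, not GDS-LC's exact algorithm.

    ```python
    class CostAwareCache:
        """GreedyDual-Size-style cache: priority = aging value + retrieval penalty / size."""

        def __init__(self, capacity_bytes: int, alpha: float = 1.0):
            self.capacity = capacity_bytes
            self.alpha = alpha        # assumed trade-off between latency and monetary cost
            self.used = 0
            self.aging = 0.0          # global aging value L
            self.entries = {}         # key -> (priority, size)

        def _priority(self, latency_s: float, dollar_cost: float, size: int) -> float:
            penalty = latency_s + self.alpha * dollar_cost
            return self.aging + penalty / size

        def admit(self, key, size: int, latency_s: float, dollar_cost: float) -> None:
            """Insert an object, evicting lowest-priority objects until it fits."""
            while self.entries and self.used + size > self.capacity:
                victim = min(self.entries, key=lambda k: self.entries[k][0])
                self.aging = self.entries[victim][0]   # age the cache up to the evicted priority
                self.used -= self.entries[victim][1]
                del self.entries[victim]
            if size <= self.capacity:
                self.entries[key] = (self._priority(latency_s, dollar_cost, size), size)
                self.used += size

        def lookup(self, key, latency_s: float, dollar_cost: float) -> bool:
            """On a hit, refresh the object's priority, as GreedyDual-Size does."""
            if key not in self.entries:
                return False
            _, size = self.entries[key]
            self.entries[key] = (self._priority(latency_s, dollar_cost, size), size)
            return True

    # Cheap, large objects are evicted before small objects that are expensive to re-fetch.
    cache = CostAwareCache(capacity_bytes=100)
    cache.admit("cheap-large", size=60, latency_s=0.05, dollar_cost=0.0004)
    cache.admit("expensive-small", size=10, latency_s=0.20, dollar_cost=0.004)
    cache.admit("new-object", size=50, latency_s=0.10, dollar_cost=0.001)
    print(sorted(cache.entries))   # ['expensive-small', 'new-object']: lowest priority was evicted
    ```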

    Computer Aided Verification

    The open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers presented together with 13 tool papers and 2 case studies were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems, runtime techniques; dynamical, hybrid, and reactive systems; Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.

    Adaptive Data Storage and Placement in Distributed Database Systems

    Distributed database systems are widely used to provide scalable storage, update, and query facilities for application data. Distributed databases primarily use data replication and data partitioning to spread load across nodes or sites. The presence of hotspots in workloads, however, can result in imbalanced load on the distributed system, resulting in performance degradation. Moreover, updates to partitioned and replicated data can require expensive distributed coordination to ensure that they are applied atomically and consistently. Additionally, data storage formats, such as row and columnar layouts, can significantly impact latencies of mixed transactional and analytical workloads. Consequently, how and where data is stored among the sites in a distributed database can significantly affect system performance, particularly if the workload is not known ahead of time. To address these concerns, this thesis proposes adaptive data placement and storage techniques for distributed database systems. This thesis demonstrates that the performance of distributed database systems can be improved by automatically adapting how and where data is stored by leveraging online workload information. A two-tiered architecture for adaptive distributed database systems is proposed that includes an adaptation advisor that decides at which site(s) and how transactions execute. The adaptation advisor makes these decisions based on submitted transactions. This design is used in three adaptive distributed database systems presented in this thesis: (i) DynaMast, which efficiently transfers data mastership to guarantee single-site transactions while maintaining well-understood and established transactional semantics, (ii) MorphoSys, which selectively and adaptively replicates, partitions, and remasters data based on a learned cost model to improve transaction processing, and (iii) Proteus, which uses learned workload models to predictively and adaptively change storage layouts to support both high transactional throughput and low-latency analytical queries. Collectively, this thesis is a concrete step towards autonomous database systems that allow users to specify only the data to store and the queries to execute, leaving the system to judiciously choose the storage and execution mechanisms to deliver high performance.
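
    A minimal sketch of the two-tiered adaptation idea in the spirit of DynaMast, assuming a simple key-to-master mapping: the advisor routes each transaction to the site that masters most of its keys and remasters a key to the site that accesses it most once the observed skew passes a threshold. The counters, threshold, and remastering rule are illustrative assumptions, not the systems' actual cost models.

    ```python
    from collections import Counter, defaultdict

    class AdaptationAdvisor:
        def __init__(self, sites, remaster_threshold: int = 10):
            self.sites = sites
            self.master = {}                              # key -> current master site
            self.access_counts = defaultdict(Counter)     # key -> Counter of accessing sites
            self.remaster_threshold = remaster_threshold

        def route(self, txn_keys, origin_site):
            """Pick the execution site for a transaction and record its accesses."""
            for k in txn_keys:
                self.master.setdefault(k, origin_site)
                self.access_counts[k][origin_site] += 1
            # execute at the site that masters the most keys touched by this transaction
            site = Counter(self.master[k] for k in txn_keys).most_common(1)[0][0]
            self._maybe_remaster(txn_keys)
            return site

        def _maybe_remaster(self, txn_keys):
            for k in txn_keys:
                hot_site, hits = self.access_counts[k].most_common(1)[0]
                if hot_site != self.master[k] and hits >= self.remaster_threshold:
                    # transfer mastership so future transactions on k become single-site
                    self.master[k] = hot_site

    advisor = AdaptationAdvisor(sites=["site_a", "site_b"])
    print(advisor.route({"x", "y"}, origin_site="site_a"))   # -> site_a
    ```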

    Hardware-Assisted Processor Tracing for Automated Bug Finding and Exploit Prevention

    The proliferation of binary-only program analysis techniques like fuzz testing and symbolic analysis has led to an acceleration in the number of publicly disclosed vulnerabilities. Unfortunately, while bug finding has benefited from recent advances in automation and a decreasing barrier to entry, bug remediation has received less attention. Consequently, analysts are publicly disclosing bugs faster than developers and system administrators can mitigate them. Hardware-supported processor tracing within commodity processors opens new doors to observing low-level behaviors with an efficiency, transparency, and integrity that can close this automation gap. Unfortunately, several trade-offs in its design raise serious technical challenges that have limited widespread adoption. Specifically, modern processor traces only capture control-flow behavior, yield high volumes of data that incur overhead to sift through, and generally introduce a semantic gap between low-level behavior and security-relevant events. To solve the above challenges, I propose control-oriented record and replay, which combines concrete traces with symbolic analysis to uncover vulnerabilities and exploits. To demonstrate the efficacy and versatility of my approach, I first present a system called ARCUS, which is capable of analyzing processor traces flagged by host-based monitors to detect, localize, and provide preliminary patches to developers for memory corruption vulnerabilities. ARCUS has detected 27 previously known vulnerabilities alongside 4 novel cases, leading to the issuance of several advisories and official developer patches. Next, I present MARSARA, a system that protects the integrity of execution unit partitioning in data provenance-based forensic analysis. MARSARA prevents several expertly crafted exploits from corrupting partitioned provenance graphs while incurring little overhead compared to prior work. Finally, I present Bunkerbuster, which extends the ideas from ARCUS and MARSARA into a system capable of proactively hunting for bugs across multiple end-hosts simultaneously, resulting in the discovery and patching of 4 more novel bugs.
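
    A minimal sketch of checking a recorded control-flow trace against a set of allowed transitions, assuming branch source/target pairs recovered from a hardware trace and a toy control-flow graph from static analysis; an edge outside the graph is flagged as a suspected corruption or exploit. Real systems such as ARCUS additionally replay the trace symbolically, which this sketch omits.

    ```python
    from typing import Iterable, Mapping, Set, Tuple

    Edge = Tuple[int, int]   # (branch source address, branch target address)

    def check_trace(trace: Iterable[Edge], allowed: Mapping[int, Set[int]]):
        """Yield every trace edge whose target is not permitted by the control-flow graph."""
        for src, dst in trace:
            if dst not in allowed.get(src, set()):
                yield (src, dst)

    # Toy allowed transitions, assumed to come from static analysis of the binary.
    cfg = {
        0x1000: {0x1010, 0x1040},   # conditional branch with two legal targets
        0x1010: {0x1020},
        0x1040: {0x1020},
    }

    # A recorded trace whose last edge jumps to an unexpected address,
    # e.g. a corrupted return target.
    recorded = [(0x1000, 0x1010), (0x1010, 0x1020), (0x1000, 0xDEADBEEF)]

    for src, dst in check_trace(recorded, cfg):
        print(f"control-flow violation: {hex(src)} -> {hex(dst)}")
    ```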

    Extending the Battery Life of Mobile Devices by Computation Offloading

    Doctor of Philosophy, Computing and Information Sciences, Daniel A. Andresen
    The need for increased performance of mobile devices directly conflicts with the desire for longer battery life. Offloading computation to resourceful servers is an effective method to reduce energy consumption and enhance performance for mobile applications. Today, most mobile devices have fast wireless links such as 4G and Wi-Fi, making computation offloading a reasonable solution to extend the battery life of mobile devices. Android provides mechanisms for creating mobile applications but lacks a native scheduling system for determining where code should be executed. We present Jade, a system that adds sophisticated energy-aware computation offloading capabilities to Android applications. Jade monitors device and application status and automatically decides where code should be executed. Jade dynamically adjusts its offloading strategy by adapting to workload variation, communication costs, and device status. Jade minimizes the burden on developers to build applications with computation offloading ability by providing an easy-to-use Jade API. Evaluation shows that Jade can reduce average power consumption for mobile devices by up to 37% while improving application performance.
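
    A back-of-the-envelope sketch of the energy-aware offloading decision, assuming a task characterized by its CPU cycles and input size: offload when the energy to transmit the input plus the energy spent idling while the server computes is less than the energy of running the task locally. The power and speed figures below are illustrative assumptions, not Jade's measured parameters or API.

    ```python
    def should_offload(cycles: float, input_bytes: float,
                       local_speed_hz: float, server_speed_hz: float,
                       bandwidth_bps: float,
                       p_compute_w: float = 0.9,   # assumed CPU power while computing locally
                       p_tx_w: float = 1.3,        # assumed radio power while transmitting
                       p_idle_w: float = 0.3) -> bool:
        """Return True if remote execution is estimated to cost less energy than local execution."""
        e_local = p_compute_w * (cycles / local_speed_hz)
        t_transfer = (input_bytes * 8) / bandwidth_bps        # time to ship the input
        t_remote = cycles / server_speed_hz                   # time the device waits, idling
        e_offload = p_tx_w * t_transfer + p_idle_w * t_remote
        return e_offload < e_local

    # Example: 2e9 cycles of work, 200 KB of input, 1 GHz phone, 3 GHz server, 10 Mbps link.
    print(should_offload(cycles=2e9, input_bytes=200e3,
                         local_speed_hz=1e9, server_speed_hz=3e9,
                         bandwidth_bps=10e6))   # True: offloading saves energy here
    ```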