
    Scalable and Highly Available Database Systems in the Cloud

    Cloud computing allows users to tap into a massive pool of shared computing resources such as servers, storage, and networking. These resources are provided as a service, allowing users to “plug into the cloud” much as they would into a utility grid. The promise of the cloud is to free users from the tedious and often complex task of managing and provisioning computing resources to run applications. At the same time, the cloud brings several additional benefits: a pay-as-you-go cost model, easier deployment of applications, elastic scalability, high availability, and a more robust and secure infrastructure. One important class of applications that users are increasingly deploying in the cloud is database management systems. Database management systems differ from other types of applications in that they manage large amounts of state that is frequently updated and that must be kept consistent at all scales and in the presence of failures. This makes it difficult to provide scalability and high availability for database systems in the cloud. In this thesis, we show how cloud technologies and relational database systems can be exploited to provide a highly available and scalable database service in the cloud. The first part of the thesis presents RemusDB, a reliable, cost-effective high availability solution implemented as a service provided by the virtualization platform. RemusDB can make any database system highly available with little or no code modification by exploiting the capabilities of virtualization. The second part of the thesis presents two systems that aim to provide elastic scalability for database systems in the cloud using two very different approaches. The three systems presented in this thesis bring us closer to the goal of building a scalable and reliable transactional database service in the cloud.
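    The virtualization mechanism behind this approach, high-frequency whole-VM checkpointing in the style of Remus (which RemusDB's name suggests it builds on), can be pictured with the control loop below. This is a conceptual sketch only: the real mechanism lives inside the hypervisor, and every interface here is a hypothetical placeholder rather than the thesis's actual design.

```java
// Conceptual sketch of a Remus-style whole-VM checkpointing loop.
// All types are hypothetical stand-ins; the real system operates
// inside the hypervisor, not in application-level Java.

interface Vm {
    void pause();
    void resume();
    byte[] copyDirtyState();       // dirty memory pages + device state
}

interface Backup {
    void apply(byte[] checkpoint); // install checkpoint on the standby
}

interface NetworkBuffer {
    void hold();                   // buffer outbound packets
    void release();                // release once the checkpoint is safe
}

class CheckpointLoop {
    void run(Vm primary, Backup backup, NetworkBuffer net,
             long epochMillis) throws InterruptedException {
        while (true) {
            net.hold();                  // buffer output of this epoch
            Thread.sleep(epochMillis);   // let the VM run for one epoch
            primary.pause();             // briefly stop execution
            byte[] delta = primary.copyDirtyState();
            primary.resume();            // VM runs speculatively again
            backup.apply(delta);         // replicate the checkpoint
            net.release();               // checkpoint safe: externalize
        }
    }
}
```

    The key property of this scheme is output commit: clients never observe state that the backup could not reproduce, because outbound packets are held until the corresponding checkpoint is safely applied.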

    A software architecture for consensus based replication

    Advisor: Luiz Eduardo Buzato. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. Abstract: This thesis explores one of the fundamental tools for the construction of distributed systems: the replication of software components. Specifically, we attempted to solve the problem of simplifying the construction of high-performance, high-availability replicated applications. We developed Treplica, a replication library, as the main tool to reach this research objective. Treplica allows the construction of distributed applications that behave as centralized applications, presenting the programmer with a simple interface based on an object-oriented specification for active replication. The conclusion we reach in this thesis is that it is possible to create modular, simple-to-use support for replication that provides high performance, low latency and fast recovery in the presence of failures. We believe our proposed software architecture is applicable to any distributed system, but it is particularly interesting for systems that remain centralized due to the lack of a simple, efficient and reliable replication mechanism. Doctorate in Computer Science (Sistemas de Computação).
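    A rough sketch of what such an object-oriented active-replication interface can look like is given below. The names (Action, Replicator, Counter) are illustrative placeholders, not Treplica's actual API; a real consensus-based library would order each action through a protocol such as Paxos before applying it.

```java
import java.io.Serializable;

// Illustrative sketch of an object-oriented active-replication
// interface in the spirit described by the abstract. All names are
// hypothetical, not Treplica's actual API.

// A deterministic command, applied in the same order on every replica.
interface Action extends Serializable {
    void executeOn(Counter state);
}

// The replicated application state, written like a centralized object.
class Counter {
    private long value = 0;
    void increment() { value++; }
    long get() { return value; }
}

class Increment implements Action {
    public void executeOn(Counter state) { state.increment(); }
}

// The replication layer. In a real consensus-based library, execute()
// would first run the action through consensus (e.g., Paxos) so that
// all replicas agree on a single total order before executing locally.
class Replicator {
    private final Counter state = new Counter();

    synchronized void execute(Action action) {
        // consensus.propose(action) would go here
        action.executeOn(state);
    }

    long read() { return state.get(); }
}
```

    Because actions are deterministic and identically ordered everywhere, every replica's Counter converges to the same value, which is what lets the application keep its centralized semantics.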

    Reliable Server Pooling - Evaluation, Optimization and Extension of a New IETF Architecture

    The Reliable Server Pooling (RSerPool) architecture, currently under standardization by the IETF RSerPool Working Group, is an overlay network framework that provides server replication and session failover capabilities to the applications using it. These functionalities as such are not new, but their combination into one generic, application-independent framework is. The initial goal of this thesis is to gain insight into the complex RSerPool mechanisms by performing experimental and simulative proof-of-concept tests. The further goals are to systematically validate the RSerPool architecture and its protocols, to provide improvements and optimizations where necessary, and to propose extensions where useful. Based on these evaluations, recommendations to implementers and users of RSerPool are provided, giving guidelines for the tuning of system parameters and the appropriate configuration of application scenarios. In particular, it is also a goal to transfer insights, optimizations and extensions of the RSerPool protocols from simulation to reality, and to bring the achievements from research into application by contributing relevant results to the IETF's ongoing RSerPool standardization process. To achieve these goals, a prototype implementation as well as a simulation model are designed and realized first. Using a generic application model and appropriate performance metrics, the performance of RSerPool systems in failure-free and server-failure scenarios is systematically evaluated in order to identify critical parameter ranges and problematic protocol behaviour. Improvements developed as a result of these performance analyses are evaluated and finally contributed to the standardization process of RSerPool.
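    The session-failover pattern that RSerPool offers applications can be sketched as follows: a pool user asks a registrar to resolve a pool handle to the currently registered pool elements, and re-resolves whenever an element fails. The types below are hypothetical stand-ins for illustration, not the actual RSerPool API or its wire protocols.

```java
import java.util.List;
import java.util.Random;

// Conceptual sketch of the RSerPool usage pattern: handle resolution
// plus retry-on-failure gives session failover. Hypothetical types.

interface Registrar {
    List<String> resolve(String poolHandle); // handle resolution
}

class PoolUser {
    private final Registrar registrar;
    private final Random random = new Random();

    PoolUser(Registrar registrar) { this.registrar = registrar; }

    String request(String poolHandle, String payload) {
        while (true) {
            // Pick a pool element; real RSerPool applies a configurable
            // pool policy (round robin, least used, ...) at this step.
            List<String> elements = registrar.resolve(poolHandle);
            String element = elements.get(random.nextInt(elements.size()));
            try {
                return send(element, payload);
            } catch (RuntimeException elementFailure) {
                // Element failed: loop, re-resolve, fail over to another.
            }
        }
    }

    private String send(String element, String payload) {
        // Network I/O omitted from this sketch.
        return "reply from " + element;
    }
}
```

    The application-independent part is exactly what the framework standardizes: registration, handle resolution and pool policies live in the framework, while only send() is application-specific.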

    Experimental Computational Simulation Environments for Algorithmic Trading

    This thesis investigates experimental Computational Simulation Environments for Computational Finance, focusing on Algorithmic Trading (AT) models and their risk. Within Computational Finance, AT combines analytical techniques from statistics, machine learning and economics to create algorithms capable of taking, executing and administering investment decisions with optimal levels of profit and risk. Computational Simulation Environments are crucial for Big Data Analytics and are increasingly used by major financial institutions for researching algorithm models, evaluating their stability, and estimating their optimal parameters and their expected risk and performance profiles. These large-scale Environments are predominantly designed for testing, optimisation and monitoring of algorithms running in virtual or real trading mode. The state-of-the-art Computational Simulation Environment described in this thesis is believed to be the first available for academic research in Computational Finance, specifically Financial Economics and AT. Consequently, the aim of the thesis was: 1) to set the operational expectations of the environment, and 2) to holistically evaluate the prototype software architecture of the system by giving the academic community access to it via a series of trading competitions. Three key studies were conducted as part of this thesis: a) an experiment investigating the design of Electronic Market Simulation Models; b) an experiment investigating the design of a Computational Simulation Environment for researching Algorithmic Trading; and c) an experiment investigating algorithms and the design of a Portfolio Selection System, a key component of AT systems.
    Electronic Market Simulation Models (Experiment 1): this study investigates methods of simulating Electronic Markets (EMs) to enable computational finance experiments in trading. EMs are central hubs for the bilateral exchange of securities in a well-defined, contracted and controlled manner. Such modern markets rely on electronic networks and are designed to replace Open Outcry Exchanges for the advantages of increased speed, reduced transaction costs, and programmatic access. Studying simulation models of EMs is important for testing trading paradigms, as it allows users to tailor the simulation to the needs of a particular paradigm. It is common practice amongst investment institutions to use simulated EMs to fine-tune their algorithms before allowing them to trade with real funds. Simulations of EMs give users the ability to investigate market micro-structure and to participate in a market, receive live data feeds and monitor their behaviour without bearing any of the risks of real-time market trading. Simulated EMs are used by risk managers to test risk characteristics and by quant developers to build and test quantitative financial systems against market behaviour.
    Computational Simulation Environments (Experiment 2): this study investigates the design, implementation and testing of an experimental Environment for Algorithmic Trading able to support a variety of AT strategies. The Environment consists of a set of distributed, multi-threaded, event-driven, real-time Linux services communicating with each other via an asynchronous messaging system. It allows multi-user real and virtual trading and provides a proprietary application programming interface (API) to support research into algorithmic trading models and strategies. It supports advanced trading-signal generation and analysis in near real-time, using statistical and technical analysis as well as data mining methods, and it provides data aggregation functionalities to process and store market data feeds.
    Portfolio Selection System (Experiment 3): this study investigates a key component of Computational Finance systems that discovers exploitable relationships between financial time-series, applicable amongst others to algorithmic trading. The challenge lies in identifying similarities and dissimilarities in the behaviour of elements within variable-size portfolios of tradable and non-tradable securities. Recognising sets of securities characterised by very similar or dissimilar behaviour over time is beneficial for risk management, for recognising statistical arbitrage and hedging opportunities, and for portfolio diversification. Consequently, a large-scale search algorithm that discovers sets of securities with AT domain-specific similarity characteristics can be used to create better portfolio-based strategies, pairs-trading strategies, statistical arbitrage strategies, hedging and mean-reversion strategies (a sketch of this idea follows below).
    This thesis makes the following contributions to science. Electronic Markets Simulation: it identifies key features, modes of operation and the software architecture of an electronic financial exchange for simulated (virtual) trading, as well as key exchange simulation models; these models are crucial for evaluating trading algorithms and systemic risk, and the majority of them are believed to be unique in academia. Computational Simulation Environment: the design, implementation and testing of a prototype experimental Computational Simulation Environment for Computational Finance research, currently supporting the design of trading algorithms and their associated risk, believed to be unique in academia. Portfolio Selection System: what is believed to be a unique software system for portfolio selection, containing a combinatorial framework for discovering subsets of internally cointegrated time-series of financial securities and a graph-guided search algorithm for the combinatorial selection of such time-series subsets.
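    As an illustration of the graph-guided idea in the final contribution, the sketch below builds a similarity graph over securities and grows candidate subsets only along its edges, which prunes the combinatorial search space. It is a minimal, hypothetical rendering: plain correlation stands in for the thesis's cointegration test, and all names and thresholds are placeholders rather than the system's actual design.

```java
import java.util.*;

// Hypothetical sketch of a graph-guided portfolio search: securities
// are nodes, an edge links two securities whose series look related,
// and candidate subsets are grown only along edges.

class GraphGuidedSearch {

    // Pearson correlation: a cheap placeholder for the cointegration
    // test used to decide whether two series are related.
    static double similarity(double[] a, double[] b) {
        double ma = Arrays.stream(a).average().orElse(0);
        double mb = Arrays.stream(b).average().orElse(0);
        double cov = 0, va = 0, vb = 0;
        for (int i = 0; i < a.length; i++) {
            cov += (a[i] - ma) * (b[i] - mb);
            va += (a[i] - ma) * (a[i] - ma);
            vb += (b[i] - mb) * (b[i] - mb);
        }
        return cov / Math.sqrt(va * vb);
    }

    // Build the similarity graph, then grow connected candidate
    // subsets up to maxSize from every seed node, edges only.
    static Set<Set<Integer>> candidates(double[][] series,
                                        double threshold, int maxSize) {
        int n = series.length;
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (Math.abs(similarity(series[i], series[j])) >= threshold) {
                    adj.get(i).add(j);
                    adj.get(j).add(i);
                }
        Set<Set<Integer>> result = new HashSet<>();
        for (int seed = 0; seed < n; seed++)
            grow(new TreeSet<>(Set.of(seed)), adj, maxSize, result);
        return result;
    }

    private static void grow(TreeSet<Integer> subset,
                             List<List<Integer>> adj,
                             int maxSize, Set<Set<Integer>> out) {
        if (subset.size() > 1) out.add(new TreeSet<>(subset));
        if (subset.size() == maxSize) return;
        for (int v : new ArrayList<>(subset))   // snapshot: subset mutates
            for (int w : adj.get(v))
                if (!subset.contains(w)) {
                    subset.add(w);
                    grow(subset, adj, maxSize, out);
                    subset.remove(w);
                }
    }
}
```

    Each emitted subset would then be handed to the combinatorial framework for a proper cointegration test; the graph merely keeps the search away from subsets whose members show no pairwise relationship at all.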

    Managing Smartphone Testbeds with SmartLab

    The explosive number of smartphones with ever-growing sensing and computing capabilities has brought a paradigm shift to many traditional domains of the computing field. Re-programming smartphones and instrumenting them for application testing and data gathering at scale is currently a tedious, time-consuming process that poses significant logistical challenges. In this paper, we make three major contributions. First, we propose a comprehensive architecture, coined SmartLab, for managing a cluster of both real and virtual smartphones that are either wired to a private cloud or connected over a wireless link. Second, we propose and describe a number of Android management optimizations (e.g., command pipelining, screen-capturing, file management), which can be useful to the community for building similar functionality into their systems. Third, we conduct extensive experiments and microbenchmarks to support our design choices, providing qualitative evidence on the expected performance of each module comprising our architecture. This paper also overviews experiences of using SmartLab in a research-oriented setting, as well as ongoing and future development efforts.
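    Of the optimizations listed, command pipelining is the easiest to picture in code: keep one long-lived `adb shell` per device and stream commands through its stdin, instead of paying the adb process start-up cost for every command. The sketch below is a hypothetical illustration of that idea, not SmartLab's actual implementation; output parsing and error handling are omitted.

```java
import java.io.*;

// Minimal sketch of pipelined device commands over a single
// long-lived `adb shell`. Illustrative only, not SmartLab's code.

class PipelinedShell implements Closeable {
    private final Process shell;
    private final BufferedWriter in;

    PipelinedShell(String deviceSerial) throws IOException {
        // One interactive shell per device, reused for all commands.
        shell = new ProcessBuilder("adb", "-s", deviceSerial, "shell")
                .redirectErrorStream(true)
                .start();
        in = new BufferedWriter(
                new OutputStreamWriter(shell.getOutputStream()));
    }

    // Queue a command without waiting for the previous one to finish.
    void send(String command) throws IOException {
        in.write(command);
        in.newLine();
        in.flush();
    }

    @Override
    public void close() throws IOException {
        send("exit");
        shell.destroy();
    }
}
```

    A device manager could then issue, for example, send("screencap -p /sdcard/shot.png") followed immediately by send("pm list packages") over the same channel, amortizing connection setup across many commands.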

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume