7 research outputs found

    Complex Event Processing as a Service in Multi-Cloud Environments (original title: Processamento de eventos complexos como serviço em ambientes multi-nuvem)

    Advisors: Luiz Fernando Bittencourt, Miriam Akemi Manabe Capretz. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. Abstract: The rise of mobile technologies and the Internet of Things, combined with advances in Web technologies, has created a new Big Data world in which the volume and velocity of data generation have reached an unprecedented scale. As a technology created to process continuous streams of data, Complex Event Processing (CEP) has often been related to Big Data and used as a tool to obtain real-time insights. However, despite this recent surge of interest, the CEP market is still dominated by solutions that are either costly and inflexible or too low-level and hard to operate. To address these problems, this research proposes the creation of a CEP system that can be offered as a service and used over the Internet. Such a CEP as a Service (CEPaaS) system would give its users CEP functionalities combined with the advantages of the service model, such as no up-front investment and low maintenance cost. Nevertheless, creating such a service involves challenges that are not addressed by current CEP systems. This research proposes solutions for three open problems that exist in this context. First, to address the problem of understanding and reusing existing CEP management procedures, this research introduces the Attributed Graph Rewriting for Complex Event Processing Management (AGeCEP) formalism as a technology- and language-agnostic representation of queries and their reconfigurations. Second, to address the problem of evaluating CEP query management and processing strategies, this research introduces CEPSim, a simulator of cloud-based CEP systems. Finally, this research also introduces a CEPaaS system based on a multi-cloud architecture, container management systems, and an AGeCEP-based multi-tenant design. To demonstrate its feasibility, AGeCEP was used to design an autonomic manager and a selected set of self-management policies. Moreover, CEPSim was thoroughly evaluated through experiments that showed it can simulate existing systems with accuracy and low execution overhead. Finally, additional experiments validated the CEPaaS system and demonstrated that it achieves the goal of offering CEP functionalities as a scalable and fault-tolerant service. Taken together, these results confirm that this research significantly advances the CEP state of the art and provides novel tools and methodologies that can be applied to CEP research. Doctorate in Computer Science (Doutor em Ciência da Computação). Grant 140920/2012-9, CNPq.
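
    AGeCEP models a CEP query as an attributed graph whose vertices are operators and whose edges are event streams, and expresses management procedures as graph-rewriting rules. As a rough illustration of that idea (the thesis's actual formalism and notation are not reproduced here), the Python sketch below represents a query as an attributed operator graph and applies one hypothetical rewrite rule that fuses consecutive filters; every name in it is invented for illustration.

        # Illustrative attributed-graph query model in the spirit of AGeCEP;
        # the classes and the rewrite rule are hypothetical, not the thesis's API.
        from dataclasses import dataclass, field

        @dataclass
        class Operator:
            op_id: str
            kind: str                                 # e.g. "source", "filter", "sink"
            attrs: dict = field(default_factory=dict)

        @dataclass
        class Query:
            ops: dict = field(default_factory=dict)   # op_id -> Operator
            edges: set = field(default_factory=set)   # (from_id, to_id) streams

            def add(self, *ops):
                for op in ops:
                    self.ops[op.op_id] = op

            def connect(self, a, b):
                self.edges.add((a, b))

        def merge_consecutive_filters(q):
            """Toy rewrite rule: fuse filter->filter chains into one operator
            by conjoining their predicates (a classic query reconfiguration)."""
            for a, b in list(q.edges):
                fa, fb = q.ops.get(a), q.ops.get(b)
                if fa and fb and fa.kind == fb.kind == "filter":
                    fa.attrs["predicate"] = (
                        f"({fa.attrs['predicate']}) and ({fb.attrs['predicate']})")
                    for x, y in list(q.edges):        # redirect fb's outputs to fa
                        if x == b:
                            q.edges.discard((x, y))
                            q.edges.add((a, y))
                    q.edges.discard((a, b))
                    del q.ops[b]

        q = Query()
        q.add(Operator("src", "source"),
              Operator("f1", "filter", {"predicate": "temp > 30"}),
              Operator("f2", "filter", {"predicate": "humidity < 50"}),
              Operator("out", "sink"))
        q.connect("src", "f1"); q.connect("f1", "f2"); q.connect("f2", "out")
        merge_consecutive_filters(q)
        print(sorted(q.edges))    # [('f1', 'out'), ('src', 'f1')]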

    Complex Event Processing as a Service in Multi-Cloud Environments

    The rise of mobile technologies and the Internet of Things, combined with advances in Web technologies, has created a new Big Data world in which the volume and velocity of data generation have reached an unprecedented scale. As a technology created to process continuous streams of data, Complex Event Processing (CEP) has often been related to Big Data and used as a tool to obtain real-time insights. However, despite this recent surge of interest, the CEP market is still dominated by solutions that are either costly and inflexible or too low-level and hard to operate. To address these problems, this research proposes the creation of a CEP system that can be offered as a service and used over the Internet. Such a CEP as a Service (CEPaaS) system would give its users CEP functionalities combined with the advantages of the service model, such as no up-front investment and low maintenance cost. Nevertheless, creating such a service involves challenges that are not addressed by current CEP systems. This research proposes solutions for three open problems that exist in this context. First, to address the problem of understanding and reusing existing CEP management procedures, this research introduces the Attributed Graph Rewriting for Complex Event Processing Management (AGeCEP) formalism as a technology- and language-agnostic representation of queries and their reconfigurations. Second, to address the problem of evaluating CEP query management and processing strategies, this research introduces CEPSim, a simulator of cloud-based CEP systems. Finally, this research also introduces a CEPaaS system based on a multi-cloud architecture, container management systems, and an AGeCEP-based multi-tenant design. To demonstrate its feasibility, AGeCEP was used to design an autonomic manager and a selected set of self-management policies. Moreover, CEPSim was thoroughly evaluated through experiments that showed it can simulate existing systems with accuracy and low execution overhead. Finally, additional experiments validated the CEPaaS system and demonstrated that it achieves the goal of offering CEP functionalities as a scalable and fault-tolerant service. Taken together, these results confirm that this research significantly advances the CEP state of the art and provides novel tools and methodologies that can be applied to CEP research.
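
    CEPSim's purpose, per the abstract above, is to let CEP query management and processing strategies be evaluated without deploying real cloud infrastructure. Its actual model and API are not described in the abstract, so the sketch below is only a toy analogue of the general idea: a discrete simulation in which each operator in a chain consumes a CPU budget (instructions) per event, so that different configurations can be compared cheaply. The cost model and all names are invented.

        # Toy discrete simulation of a CEP operator chain, loosely in the
        # spirit of a CEP simulator; not CEPSim's real model or API.

        def simulate(chain, events_in, instruction_budget):
            """chain: list of (name, instructions_per_event, selectivity).
            Returns (events reaching the sink, instructions consumed)."""
            events, used = events_in, 0
            for name, cost, selectivity in chain:
                needed = events * cost
                if used + needed > instruction_budget:   # budget runs out here
                    events = int((instruction_budget - used) / cost)
                    used = instruction_budget
                else:
                    used += needed
                events = int(events * selectivity)       # filters drop events
            return events, used

        chain = [("filter", 10, 0.4), ("window-agg", 50, 0.01)]
        print(simulate(chain, events_in=100_000, instruction_budget=5_000_000))
        # -> (400, 3000000): 400 events reach the sink for 3M instructions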

    SECURITY CHALLENGES IN CLOUD COMPUTING

    Building the Infrastructure for Cloud Security

    Computer science

    The Evolution of Cloud Data Architectures: Storage, Compute, and Migration

    Data architectures have recently shifted from on-premises deployments to the cloud. However, new challenges emerge as data volumes continue to grow exponentially. My Ph.D. research therefore addresses the following challenges. First, cloud data warehouses such as Snowflake, BigQuery, and Redshift often rely on storage systems such as distributed file systems or object stores to hold massive amounts of data. The growth of data volumes is accompanied by an increase in the number of stored objects and in the amount of metadata such systems must manage. By treating metadata management like data management, we built FileScale, an HDFS-based file system that replaces HDFS metadata management with a three-tiered distributed architecture: a high-throughput, distributed main-memory database system at the lowest layer, with distributed caching and routing functionality above it. FileScale performs comparably to the single-machine architecture at small scale, while scaling linearly as the file system metadata grows. Second, Function as a Service (FaaS) is a new type of cloud computing service that executes code in response to events, without the complex infrastructure typically associated with building and launching microservice applications. FaaS offers cloud functions, billed at millisecond granularity, that scale automatically, independently, and instantaneously as needed. We built Flock, the first practical cloud-native SQL query engine that supports event stream processing on FaaS across heterogeneous hardware (x86 and Arm), with the ability to shuffle and aggregate data without requiring a centralized coordinator or remote storage such as Amazon S3. This architecture is more cost-effective than traditional systems, especially for dynamic workloads and continuous queries. Third, Software as a Service (SaaS) is a method of delivering software products to end users over the internet with pay-as-you-go pricing, in which the software is centrally hosted and managed by the cloud service provider. Continuous Deployment (CD) in SaaS, an aspect of DevOps, is the increasingly popular practice of frequent, automated deployment of software changes. To realize the benefits of CD, it must be straightforward to deploy updates to both front-end code and the database, even when the database's schema has changed. Unfortunately, this is where current practices run into difficulty. So we built BullFrog, a PostgreSQL extension that is the first system to use lazy schema migration to support single-step, online schema evolution without downtime, achieving efficient, exactly-once physical migration of data under contention.
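
    Of the three systems, BullFrog's lazy schema migration lends itself to a compact illustration: the new schema becomes visible immediately, and each row is physically migrated the first time a query touches it, exactly once even under concurrent access. The Python sketch below shows that migrate-on-first-access pattern in miniature; it is not BullFrog's PostgreSQL implementation, and the tables, lock scheme, and transform are invented for illustration.

        # Migrate-on-access sketch of lazy schema migration; a self-contained
        # toy, not the BullFrog extension itself.
        import threading

        old_rows = {1: {"name": "Ada Lovelace"},      # rows in the old schema
                    2: {"name": "Alan Turing"}}
        new_rows = {}                                 # rows already migrated
        locks = {k: threading.Lock() for k in old_rows}

        def transform(row):
            """The schema change: split one 'name' column into first/last."""
            first, last = row["name"].split(" ", 1)
            return {"first": first, "last": last}

        def read(key):
            """Readers see the new schema immediately; the per-row lock and
            re-check make the physical migration exactly-once under contention."""
            if key in new_rows:
                return new_rows[key]
            with locks[key]:
                if key not in new_rows:               # re-check after locking
                    new_rows[key] = transform(old_rows.pop(key))
            return new_rows[key]

        print(read(1))   # {'first': 'Ada', 'last': 'Lovelace'} -- migrated now
        print(read(1))   # served from new_rows; transform runs only once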

    A decision framework to mitigate vendor lock-in risks in cloud (SaaS category) migration.

    Cloud computing offers an innovative business model for consuming and delivering enterprise IT services. However, vendor lock-in is recognised as a major barrier to the adoption of cloud computing, due to a lack of standardisation. So far, solutions and efforts to tackle the vendor lock-in problem have been predominantly technology-oriented, and few studies analyse the complexity of the vendor lock-in problem as it exists in the cloud environment. Consequently, customers are often unaware of the proprietary standards that inhibit the interoperability and portability of applications when they take services from vendors. The complexity of the service offerings makes it imperative for businesses to use a clear and well-understood decision process to procure, migrate, and/or discontinue cloud services. To date, the expertise and technological solutions that simplify such transitions and facilitate sound decision-making about lock-in risks in the cloud are limited. Moreover, little research has been carried out to provide a cloud migration decision framework that helps enterprises avoid lock-in risks when implementing cloud-based Software-as-a-Service (SaaS) solutions within existing environments. Such a framework is important for reducing complexity and variation in implementation patterns on the cloud provider side, while minimising potential switching costs for enterprises by resolving integration issues with existing IT infrastructures. Thus, the purpose of this thesis is to propose a decision framework to mitigate vendor lock-in risks in cloud (SaaS) migration. The framework builds on a systematic literature review and analysis, presenting factual and objective findings and the business requirements for vendor-neutral, interoperable cloud services and for architectural decisions about secure cloud migration and integration. The underlying research procedure is a survey combining qualitative and quantitative approaches, conducted to identify the main risk factors that give rise to cloud lock-in situations. The research design consists of two distinct phases: in phase 1, qualitative data were collected through open-ended interviews with IT practitioners to explore the business-related issues of vendor lock-in that affect cloud adoption; the goal of phase 2 was to identify and evaluate the lock-in risks and opportunities that affect stakeholders' decisions about migrating to cloud-based solutions. In synthesis, the survey analysis and the framework proposed by this research, through its step-by-step approach, provide guidance on how enterprises can avoid being locked in to individual cloud service providers. This reduces the risk of dependence on a single cloud provider for service provision, especially when data portability, the most fundamental aspect, is not enabled. It also ensures appropriate pre-planning and due diligence, so that the cloud service provider(s) with the most acceptable lock-in risks are chosen and the impact on the business is properly understood (upfront), managed (iteratively), and controlled (periodically). Each decision step within the framework prepares the way for the next, helping a company gather the information it needs to make the right decision before proceeding. This approach supports an organisation in planning and adapting the services to suit its business requirements and objectives. Furthermore, several strategies are proposed for avoiding and mitigating lock-in risks when migrating to cloud computing; they concern contracts, the selection of vendors that support standardised formats and protocols for data structures and APIs, the negotiation of cloud service agreements (SLAs), and the development of awareness of commonalities and dependencies among cloud-based solutions. Implementing the proposed strategies and the supporting framework has great potential to reduce the risks of vendor lock-in.
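
    As a loose illustration of the framework's gated, step-by-step character described above (each step must be satisfied before the next begins), the sketch below models a migration decision as a sequence of checks that halts at the first unmet requirement. The step names and checks are paraphrased stand-ins for illustration only; they are not the thesis's actual decision steps.

        # Toy gated decision pipeline for SaaS migration; the steps are
        # illustrative stand-ins, not the framework's real step list.

        STEPS = [
            ("assess data portability", lambda ctx: ctx["data_portable"]),
            ("require standard formats and APIs", lambda ctx: ctx["standard_apis"]),
            ("negotiate SLA and exit terms", lambda ctx: ctx["exit_clause"]),
        ]

        def evaluate(ctx):
            for name, check in STEPS:
                if not check(ctx):     # each step gates the next
                    return f"stop at '{name}': mitigate this risk first"
            return "proceed with migration"

        print(evaluate({"data_portable": True,
                        "standard_apis": True,
                        "exit_clause": False}))
        # -> stop at 'negotiate SLA and exit terms': mitigate this risk first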