
    Surfacing Automation Criteria: A Process Architecture Approach

    This paper describes the outcomes of a case study aimed at surfacing and analyzing the decision-making criteria used to prioritize process automation initiatives, with a view toward engendering a process-centric organization and eventually exposing these processes as services. The study used structured interviews, content analysis, and enterprise process architecture mapping techniques to explore the implicit and explicit logic underlying process automation decisions. The results point to a wide range of decision-making criteria involving technology, business, and industry characteristics. We discuss the implications of our analysis for future, larger-scale research projects and describe the potential implications for organizations interested in moving toward a process-oriented enterprise.

    Innovative techniques for deployment of microservices in cloud-edge environment

    PhD thesis. The evolution of microservice architecture allows complex applications to be structured into independent modular components (microservices), making them easier to develop and manage. Complemented with containers, microservices can be deployed across any cloud and edge environment. Although containerized microservices are becoming popular in industry, comparatively little research is available, especially in the areas of performance characterization and optimized deployment of microservices. Depending on the application type (e.g. web, streaming) and the provided functionalities (e.g. filtering, encryption/decryption, storage), microservices are heterogeneous, with specific functional and Quality of Service (QoS) requirements. Further, cloud and edge environments are themselves complex, with a huge number of cloud providers and edge devices along with their host configurations. Due to these complexities, finding a suitable deployment solution for microservices becomes challenging. To handle the deployment of microservices in cloud and edge environments, this thesis presents multilateral research towards microservice performance characterization, run-time evaluation and system orchestration. Considering a variety of applications, numerous algorithms and policies have been proposed, implemented and prototyped. The main contributions of this thesis are:
    - Characterizes the performance of containerized microservices considering various types of interference in the cloud environment.
    - Proposes and models an orchestrator, SDBO, for benchmarking simple web-application microservices in a multi-cloud environment. SDBO is validated using an e-commerce test web application.
    - Proposes and models an advanced orchestrator, GeoBench, for the deployment of complex web-application microservices in a multi-cloud environment. GeoBench is validated using a geo-distributed test web application.
    - Proposes and models a run-time deployment framework for distributed streaming-application microservices in a hybrid cloud-edge environment. The model is validated using a real-world healthcare analytics use case for human activity recognition.
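
    As a rough illustration of the kind of QoS-aware placement problem the thesis addresses (not its actual SDBO/GeoBench algorithms), the Python sketch below greedily matches heterogeneous microservices to cloud or edge hosts; all host names, capacities and prices are made-up assumptions.

```python
# Hedged sketch (not the thesis's actual algorithms): a minimal greedy placer
# that matches heterogeneous microservices to cloud/edge hosts by QoS needs.
from dataclasses import dataclass
from typing import List

@dataclass
class Host:
    name: str
    location: str          # "cloud" or "edge"
    cpu_cores: float
    latency_ms: float      # latency to end users
    cost_per_hour: float

@dataclass
class Microservice:
    name: str
    cpu_cores: float
    max_latency_ms: float  # QoS requirement
    prefer_edge: bool = False

def place(services: List[Microservice], hosts: List[Host]) -> dict:
    """Greedy placement: keep hosts that satisfy CPU and latency QoS,
    then pick the cheapest (edge preferred when requested)."""
    placement = {}
    for svc in sorted(services, key=lambda s: s.max_latency_ms):
        candidates = [h for h in hosts
                      if h.cpu_cores >= svc.cpu_cores
                      and h.latency_ms <= svc.max_latency_ms]
        if svc.prefer_edge:
            edge = [h for h in candidates if h.location == "edge"]
            candidates = edge or candidates
        if not candidates:
            placement[svc.name] = None          # no feasible host
            continue
        best = min(candidates, key=lambda h: h.cost_per_hour)
        best.cpu_cores -= svc.cpu_cores         # reserve capacity
        placement[svc.name] = best.name
    return placement

if __name__ == "__main__":
    hosts = [Host("edge-1", "edge", 4, 5, 0.12),
             Host("cloud-1", "cloud", 32, 40, 0.08)]
    services = [Microservice("activity-recognition", 2, 10, prefer_edge=True),
                Microservice("storage", 4, 200)]
    print(place(services, hosts))
```

    Latency-critical services are placed first so that scarce low-latency edge capacity is not consumed by services that could run anywhere.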

    HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges

    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show hybrid environments are the natural path to get the best of on-premise and cloud resources---steady (and sensitive) workloads can run on on-premise resources and peak demand can leverage remote resources in a pay-as-you-go manner. Nevertheless, there are plenty of questions to be answered in HPC cloud, ranging from how to extract the best performance from an unknown underlying platform to what services are essential to make its usage easier. Moreover, the discussion on the right pricing and contractual models to fit both small and large users is relevant for the sustainability of HPC clouds. This paper brings a survey and taxonomy of efforts in HPC cloud and a vision of what we believe is ahead of us, including a set of research challenges that, once tackled, can help advance businesses and scientific discoveries. This becomes particularly relevant due to the fast-increasing wave of new HPC applications coming from big data and artificial intelligence.
    Comment: 29 pages, 5 figures. Published in ACM Computing Surveys (CSUR).
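
    The hybrid pattern described above (steady or sensitive workloads on-premise, peak demand bursting to pay-as-you-go cloud resources) can be illustrated with a minimal, hypothetical scheduler; the jobs, capacities and price below are assumptions for illustration only, not methods or results from the survey.

```python
# Hedged illustration of the hybrid "cloud bursting" pattern: steady or
# sensitive jobs stay on-premise, overflow goes to pay-as-you-go cloud nodes.
from dataclasses import dataclass
from typing import List

@dataclass
class Job:
    name: str
    node_hours: float
    sensitive: bool = False   # e.g. regulated data that must stay on-premise

def schedule(jobs: List[Job], free_onprem_node_hours: float,
             cloud_price_per_node_hour: float = 0.50):
    """Assign jobs to on-premise capacity until it runs out, then burst the
    remaining non-sensitive jobs to the cloud and estimate the extra cost."""
    plan, cloud_cost = {}, 0.0
    for job in sorted(jobs, key=lambda j: not j.sensitive):  # sensitive first
        if job.sensitive or job.node_hours <= free_onprem_node_hours:
            plan[job.name] = "on-premise"
            free_onprem_node_hours = max(0.0, free_onprem_node_hours - job.node_hours)
        else:
            plan[job.name] = "cloud (pay-as-you-go)"
            cloud_cost += job.node_hours * cloud_price_per_node_hour
    return plan, cloud_cost

if __name__ == "__main__":
    jobs = [Job("cfd-simulation", 800),
            Job("payroll-analytics", 50, sensitive=True),
            Job("ml-training-burst", 300)]
    print(schedule(jobs, free_onprem_node_hours=900))
```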

    End-to-end network slices: from the extraction of network function profiles to granular SLAs

    Advisor: Christian Rodolfo Esteve Rothenberg. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    In the last ten years, network softwarisation processes have been continuously diversified and gradually incorporated into production, mainly through the paradigms of Software Defined Networks (e.g., programmable network flow rules) and Network Functions Virtualization (e.g., orchestration of virtualized network functions). Based on this process, the concept of a network slice emerges as a way of defining programmable end-to-end network paths, possibly over shared network infrastructures, with strict performance requirements and dedicated to a particular business case. This thesis investigates the hypothesis that the disaggregation of network function performance metrics impacts and composes a network slice footprint, yielding diverse slicing feature options, which once realized should have their Service Level Agreement (SLA) life-cycle management transparently implemented in correspondence to the end-to-end communication business case they fulfil. The validation of this assertion takes place in three aspects: the degrees of freedom by which the performance of virtualized network functions can be expressed; methods for rationalizing the footprint of network slices and their respective criteria; and transparent ways to track and manage network assets among multiple administrative domains. To achieve these goals, this thesis makes several contributions, among them: the construction of a platform for automating performance-testing methodologies for virtualized network functions; a methodology for analyzing the footprint features of network slices, based on a machine-learning classifier and a multi-criteria analysis algorithm; and a prototype using blockchain to carry out smart contracts involving service-level agreements between administrative network domains. Through experiments and analysis we suggest that: performance metrics of virtualized network functions depend on resource allocation, internal configurations and test traffic stimulus; network slices can have their resource allocations consistently classified by different criteria; and agreements between administrative domains can be performed transparently, and at various levels of granularity, through blockchain smart contracts. At the end of this thesis, a broad discussion answers the research questions associated with the investigated hypothesis, so that the hypothesis is evaluated against a wide view of the contributions and future work of this thesis.
    Doctorate, Computer Engineering. Doctor in Electrical Engineering. FUNCAM
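
    To make the multi-criteria angle concrete, the following sketch ranks hypothetical network-slice resource allocations with a plain weighted-sum scoring; it is an illustrative assumption, not the thesis's machine-learning classifier or its specific multi-criteria algorithm, and all criteria, weights and values are invented.

```python
# Hedged sketch: weighted-sum multi-criteria ranking of candidate
# network-slice allocations (criteria, weights and values are assumptions).
from typing import Dict, List

candidates: List[Dict] = [
    {"name": "allocation-A", "throughput_gbps": 8.0, "latency_ms": 12.0, "cost": 120.0},
    {"name": "allocation-B", "throughput_gbps": 5.0, "latency_ms": 6.0,  "cost": 150.0},
    {"name": "allocation-C", "throughput_gbps": 9.5, "latency_ms": 20.0, "cost": 90.0},
]

# Criterion weight and direction (+1 = higher is better, -1 = lower is better).
criteria = {"throughput_gbps": (0.4, +1), "latency_ms": (0.4, -1), "cost": (0.2, -1)}

def normalize(values: List[float]) -> List[float]:
    lo, hi = min(values), max(values)
    return [0.5 if hi == lo else (v - lo) / (hi - lo) for v in values]

def rank(cands: List[Dict]) -> List[tuple]:
    """Score each candidate as a weighted sum of min-max normalized criteria."""
    scores = [0.0] * len(cands)
    for crit, (weight, direction) in criteria.items():
        norm = normalize([c[crit] for c in cands])
        for i, n in enumerate(norm):
            scores[i] += weight * (n if direction > 0 else 1.0 - n)
    return sorted(zip((c["name"] for c in cands), scores),
                  key=lambda t: t[1], reverse=True)

if __name__ == "__main__":
    for name, score in rank(candidates):
        print(f"{name}: {score:.3f}")
```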

    Integration of Event Processing with Service-oriented Architectures and Business Processes

    Data sources like the Internet of Things or Cyber-physical Systems provide enormous amounts of real-time information in the form of streams of events. The use of such event streams enables reactive software components as building blocks in a new generation of systems. Businesses, for example, can benefit from the integration of event streams; new services can be provided to customers, or existing business processes can be improved. The development of reactive systems and their integration with existing application landscapes, however, is challenging. While traditional system components follow a pull-based request/reply interaction style, event-based systems follow a push-based interaction scheme; events arrive continuously and application logic is triggered implicitly. To benefit from push-based and pull-based interactions together, an intuitive software abstraction is necessary to integrate push-based application logic with existing systems. In this work we introduce such an abstraction: we present Event Stream Processing Units (SPUs) - a container model for the encapsulation of event-processing application logic at the technical layer as well as at the business process layer. At the technical layer SPUs provide a service-like abstraction and simplify the development of scalable reactive applications. At the business process layer SPUs make event processing explicitly representable. SPUs have a managed lifecycle and are instantiated implicitly - upon arrival of appropriate events - or explicitly upon request. At the business process layer SPUs encapsulate application logic for event stream processing and enable a seamless transition between process models, executable process representations, and components at the IT layer. Throughout this work, we focus on different aspects of the SPU container model: we first introduce the SPU container model and its execution semantics. Since SPUs rely on a publish/subscribe system for event dissemination, we discuss quality of service requirements in the context of event processing. SPUs rely on input in the form of events; in event-based systems, however, event production is logically decoupled, i.e., event producers are not aware of the event consumers. This influences the system development process and requires an appropriate methodology. For this purpose we present a requirements engineering approach that takes the specifics of event-based applications into account. The integration of events with business processes leads to new business opportunities. SPUs can encapsulate event processing at the abstraction level of business functions and enable a seamless integration with business processes. For this integration, we introduce extensions to the business process modeling notations BPMN and EPCs to model SPUs. We also present a model-to-execute workflow for SPU-containing process models and its implementation with business process modeling software. The SPU container model itself is language-agnostic; thus, we present Eventlets as an SPU implementation based on Java Enterprise technology. Eventlets are executed inside a distributed middleware and follow a lifecycle. They reduce the development effort of scalable event processing applications, as we show in our evaluation. Since the SPU container model introduces an additional layer of abstraction, we analyze its performance overhead and show that Eventlets can compete with traditional event processing approaches in terms of performance.
    SPUs can be used to process sensitive data, e.g., in health care environments. Thus, privacy protection is an important requirement for certain use cases, and we sketch the application of a privacy-preserving event dissemination scheme to protect event consumers and producers from curious brokers. We also quantify, in an evaluation, the overhead introduced by such a privacy-preserving brokering scheme.
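
    The push-based, implicitly instantiated nature of SPUs can be sketched in a few lines; the toy broker and lifecycle below are illustrative assumptions written in Python, whereas the thesis's Eventlets are Java Enterprise components running on a distributed middleware.

```python
# Hedged Python sketch of the SPU idea described above: a minimal pub/sub
# broker that instantiates a processing unit implicitly when the first
# matching event arrives, and drives its managed lifecycle.
from typing import Callable, Dict

class StreamProcessingUnit:
    """Encapsulates event-processing application logic with a lifecycle."""
    def __init__(self, name: str):
        self.name = name
    def start(self):            # called when the unit is (implicitly) instantiated
        print(f"[{self.name}] started")
    def on_event(self, event: Dict):
        print(f"[{self.name}] processing {event}")
    def stop(self):             # called when the unit is retired
        print(f"[{self.name}] stopped")

class Broker:
    """Toy publish/subscribe broker: application logic is triggered by
    event arrival (push-based), not by explicit request/reply calls."""
    def __init__(self):
        self._factories: Dict[str, Callable[[], StreamProcessingUnit]] = {}
        self._instances: Dict[str, StreamProcessingUnit] = {}
    def subscribe(self, topic: str, factory: Callable[[], StreamProcessingUnit]):
        self._factories[topic] = factory
    def publish(self, topic: str, event: Dict):
        if topic in self._factories and topic not in self._instances:
            unit = self._factories[topic]()     # implicit instantiation
            unit.start()
            self._instances[topic] = unit
        if topic in self._instances:
            self._instances[topic].on_event(event)
    def shutdown(self):
        for unit in self._instances.values():
            unit.stop()

if __name__ == "__main__":
    broker = Broker()
    broker.subscribe("heart-rate", lambda: StreamProcessingUnit("hr-monitor"))
    broker.publish("heart-rate", {"bpm": 72})
    broker.publish("heart-rate", {"bpm": 75})
    broker.shutdown()
```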