
    funcX: A Federated Function Serving Fabric for Science

    Exploding data volumes and velocities, new computational methods and platforms, and ubiquitous connectivity demand new approaches to computation in the sciences. These new approaches must enable computation to be mobile, so that, for example, it can occur near data, be triggered by events (e.g., arrival of new data), be offloaded to specialized accelerators, or run remotely where resources are available. They also require new design approaches in which monolithic applications can be decomposed into smaller components that may in turn be executed separately and on the most suitable resources. To address these needs we present funcX, a distributed function-as-a-service (FaaS) platform that enables flexible, scalable, and high-performance remote function execution. funcX's endpoint software can transform existing clouds, clusters, and supercomputers into function-serving systems, while funcX's cloud-hosted service provides transparent, secure, and reliable function execution across a federated ecosystem of endpoints. We motivate the need for funcX with several scientific case studies, present our prototype design and implementation, show optimizations that deliver throughput in excess of 1 million functions per second, and demonstrate, via experiments on two supercomputers, that funcX can scale to more than 130,000 concurrent workers.
    Comment: Accepted to ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC 2020). arXiv admin note: substantial text overlap with arXiv:1908.0490
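    To make the usage model concrete, the sketch below shows how a function might be registered and executed on a remote endpoint through the funcX Python SDK's client interface as it existed around the HPDC 2020 paper. It is a minimal sketch, not code from the paper; the endpoint UUID is a placeholder, and method behaviour (e.g., how pending results are reported) varies across SDK versions.

        from funcx.sdk.client import FuncXClient

        def double(x):
            # Arbitrary example function; funcX serializes it and ships it
            # to a remote endpoint for execution.
            return 2 * x

        fxc = FuncXClient()
        func_id = fxc.register_function(double)      # register once, reuse on any endpoint
        endpoint_id = "<endpoint-uuid>"              # placeholder: UUID of a deployed funcX endpoint
        task_id = fxc.run(21, endpoint_id=endpoint_id, function_id=func_id)
        # Results are fetched asynchronously; depending on the SDK version this call
        # may need to be retried/polled until the task completes.
        print(fxc.get_result(task_id))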

    Microservices-based IoT Applications Scheduling in Edge and Fog Computing: A Taxonomy and Future Directions

    Edge and Fog computing paradigms utilise distributed, heterogeneous and resource-constrained devices at the edge of the network for efficient deployment of latency-critical and bandwidth-hungry IoT application services. Moreover, Microservice Architecture (MSA) is increasingly adopted to keep up with the rapid development and deployment needs of fast-evolving IoT applications. Due to the fine-grained modularity of microservices, along with their independently deployable and scalable nature, MSA exhibits great potential in harnessing both Fog and Cloud resources to meet the diverse QoS requirements of IoT application services, thus giving rise to novel paradigms like Osmotic computing. However, efficient and scalable scheduling algorithms are required to utilise these characteristics of the MSA while overcoming the novel challenges introduced by the architecture. To this end, we present a comprehensive taxonomy of recent literature on microservices-based IoT application scheduling in Edge and Fog computing environments. Furthermore, we organise multiple taxonomies to capture the main aspects of the scheduling problem, analyse and classify related works, identify research gaps within each category, and discuss future research directions.
    Comment: 35 pages, 10 figures, submitted to ACM Computing Surveys
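    The scheduling problem surveyed above can be illustrated with a toy placement heuristic. The sketch below is purely illustrative and is not an algorithm from the survey: it greedily assigns each microservice to a feasible fog node that meets its latency budget and spills over to cloud nodes when edge capacity is exhausted. All class names, fields, and numbers are hypothetical.

        from dataclasses import dataclass

        @dataclass
        class Node:
            name: str
            cpu_free: float      # available CPU capacity (arbitrary units)
            latency_ms: float    # network latency from the IoT data source
            is_cloud: bool = False

        @dataclass
        class Microservice:
            name: str
            cpu_req: float
            latency_budget_ms: float

        def place(services, nodes):
            """Greedy latency-first placement: prefer fog nodes that satisfy the
            latency budget, otherwise fall back to cloud nodes."""
            placement = {}
            for svc in sorted(services, key=lambda s: s.latency_budget_ms):
                candidates = [n for n in nodes
                              if n.cpu_free >= svc.cpu_req
                              and (n.latency_ms <= svc.latency_budget_ms or n.is_cloud)]
                if not candidates:
                    raise RuntimeError(f"no feasible node for {svc.name}")
                # tie-break: fog before cloud, then lowest latency
                best = min(candidates, key=lambda n: (n.is_cloud, n.latency_ms))
                best.cpu_free -= svc.cpu_req
                placement[svc.name] = best.name
            return placement

        if __name__ == "__main__":
            nodes = [Node("fog-1", cpu_free=2.0, latency_ms=5),
                     Node("fog-2", cpu_free=1.0, latency_ms=8),
                     Node("cloud-1", cpu_free=64.0, latency_ms=60, is_cloud=True)]
            services = [Microservice("detector", 1.5, 20),
                        Microservice("aggregator", 1.0, 50),
                        Microservice("dashboard", 2.0, 500)]
            print(place(services, nodes))
            # -> {'detector': 'fog-1', 'aggregator': 'fog-2', 'dashboard': 'cloud-1'}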

    Strategies for including cloud-computing into an engineering modeling workflow

    With the advent of cloud computing, high-end computing, networking, and storage resources are available on demand at a relatively low price point. Internet applications in the consumer and, increasingly, the enterprise space are making use of these resources to upgrade existing applications and build new ones. This is made possible by building decentralized applications that can be integrated with one another through web-enabled application programming interfaces (APIs). However, in the fields of engineering and computational science, cloud computing resources have been utilized primarily to augment existing high-performance computing hardware, and engineering model integrations still occur through software libraries. In this research, a novel approach is proposed in which engineering models are constructed as independent services that publish web-enabled APIs. To enable this, the engineering models are built as stateless microservices that each solve a single computational problem. Composite services are then built from these independent component models, much as in the consumer application space. Interactions between component models are orchestrated by a federation management system. This proposed approach is demonstrated by disaggregating an existing monolithic model for a cookstove into a set of component models. The component models are then reintegrated and compared with the original model for computational accuracy and run time. Additionally, a novel engineering workflow is proposed that reuses computational data by constructing reduced-order models (ROMs). This framework is evaluated empirically for a number of producers and consumers of engineering models, based on computation and data synchronization aspects. The framework is also evaluated by simulating an engineering design workflow with multiple producers and consumers at various stages of the design process. Finally, concepts from the federated system of models and ROMs are combined to propose the concept of a hybrid model (information artefact). The hybrid model is a web-enabled microservice that encapsulates information from multiple engineering models at varying fidelities and responds to queries based on the best available information. Rules for the construction of hybrid models are proposed and evaluated in the context of engineering workflows.
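    As one concrete illustration of the approach described above (not code from the thesis), a component engineering model can be wrapped as a stateless microservice that exposes a single computation behind a web API. The sketch below assumes Flask, which the thesis does not specify; the route, payload fields, and formula are hypothetical placeholders for a real component model.

        from flask import Flask, jsonify, request

        app = Flask(__name__)

        @app.route("/models/heat-loss", methods=["POST"])
        def heat_loss():
            """Stateless component model: one computational problem per service.
            Expects JSON like {"area_m2": 1.2, "delta_t_k": 40.0, "u_value": 3.5}."""
            params = request.get_json(force=True)
            q_watts = params["u_value"] * params["area_m2"] * params["delta_t_k"]
            return jsonify({"heat_loss_w": q_watts})

        if __name__ == "__main__":
            # A federation management layer, as proposed in the thesis, would compose
            # many such single-purpose services into a larger engineering workflow.
            app.run(port=5000)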

    Exploring performance with Apollo Federation

    The growing trend toward cloud-hosted computing and availability supported a shift in software architecture to better take advantage of such technological advancements. As Monolithic Architecture evolved and matured, businesses grew more dependent on software solutions, which motivated the shift to Microservice Architecture. A comparable shift occurred in monolithic GraphQL solutions which, through their growth and evolution, also required a way forward to resolve some of their bottleneck issues. One of the alternatives, already chosen and proven by some enterprises, is GraphQL Federation. Due to its novelty, there is still a lack of knowledge and testing on the performance of the GraphQL Federation architecture and on how techniques such as caching, batching and execution strategies affect it. This thesis aims to address this gap by first contextualizing the different aspects of GraphQL and GraphQL Federation and investigating the available, documented enterprise scenarios to extract best practices and to better understand how to prepare such a performance evaluation. Next, multiple alternatives underwent the Analytic Hierarchy Process to choose the best way to develop a scenario enabling the performance analysis in a standard and structured way. Following this, the candidate base solutions were analysed and compared to determine the best fit for the current thesis. Functional and non-functional requirements were collected, along with the rest of the design exercise, to refine the solution to be tested for performance. Finally, after the required development and implementation work was documented, the solution was tested following the Goal Question Metric methodology, utilizing tools such as JMeter, Prometheus and Grafana to collect and visualize the performance data. It was possible to conclude that different caching, batching and execution strategies do indeed have an impact on a GraphQL Federation solution. These impacts range from positive (improvements in performance) to negative (performance hindered by the strategy) across the different tested strategies.
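    To make the "batching strategy" notion concrete, the sketch below shows the general DataLoader-style pattern commonly used behind GraphQL resolvers: per-entity lookups issued during one execution tick are collected and answered with a single batched fetch. This is a generic Python illustration, not code from the thesis and not tied to any specific Apollo Federation API; the class, method names, and fake data source are hypothetical.

        import asyncio

        FAKE_DB = {1: "Ada", 2: "Grace", 3: "Edsger"}   # hypothetical data source

        class UserLoader:
            """Collects individual load(key) calls and serves them with one batch
            query: the core idea behind batching strategies in GraphQL resolvers."""
            def __init__(self):
                self._pending = {}   # key -> Future awaiting the batched result

            async def load(self, key):
                if key not in self._pending:
                    self._pending[key] = asyncio.get_running_loop().create_future()
                    if len(self._pending) == 1:
                        # first key in this tick: schedule one batch dispatch
                        asyncio.ensure_future(self._dispatch())
                return await self._pending[key]

            async def _dispatch(self):
                await asyncio.sleep(0)   # yield so sibling load() calls can enqueue their keys
                pending, self._pending = self._pending, {}
                rows = {k: FAKE_DB.get(k) for k in pending}   # one batched lookup instead of N
                for key, fut in pending.items():
                    fut.set_result(rows[key])

        async def main():
            loader = UserLoader()
            # Three concurrent resolver-style calls are answered by a single batch.
            print(await asyncio.gather(loader.load(1), loader.load(2), loader.load(3)))

        asyncio.run(main())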