221 research outputs found

    A Case Study In Software Adaptation

    We attach a feedback-control-loop infrastructure to an existing target system to continually monitor and dynamically adapt its activities and performance. (This approach could also be applied to new systems, as an alternative to building in adaptation facilities, but we do not address that here.) Our infrastructure consists of multiple layers with the objectives of (1) probing, measuring, and reporting activity and state during execution of the target system, among its components and connectors; (2) gauging, analyzing, and interpreting the reported events; and (3) whenever necessary, feeding back onto the probes and gauges, to focus them (e.g., to drill deeper), or onto the running target system, to direct its automatic adjustment and reconfiguration. We report on our successful experience using this approach for dynamic adaptation of a large-scale commercial application that requires both coarse- and fine-grained modifications.
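    The layered monitor/analyze/adapt cycle described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual infrastructure: the `Probe`, `Gauge`, and `control_loop` names, and the dict standing in for a live component, are all assumptions made for the sketch.

```python
class Probe:
    """Illustrative probe: samples one metric from a running component."""
    def __init__(self, component, metric):
        self.component = component  # here, a dict standing in for a live component
        self.metric = metric

    def report(self):
        # A real probe would instrument the target system's execution;
        # this sketch just reads a recorded value.
        return self.component[self.metric]


class Gauge:
    """Illustrative gauge: interprets probe reports against a threshold."""
    def __init__(self, probe, threshold):
        self.probe = probe
        self.threshold = threshold
        self.history = []  # retained reports, e.g. for later drill-down

    def out_of_spec(self):
        value = self.probe.report()
        self.history.append(value)
        return value > self.threshold


def control_loop(gauge, adapt, steps):
    """Monitor -> analyze -> adapt: feed back onto the target when needed."""
    adaptations = 0
    for _ in range(steps):
        if gauge.out_of_spec():
            adapt(gauge.probe.component)  # reconfigure the running target
            adaptations += 1
    return adaptations
```

    The key property the abstract describes is that the loop is attached to an existing system: the probe observes, the gauge interprets, and only the `adapt` callback touches the target.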

    Cloud provider independence using DevOps methodologies with Infrastructure-as-Code

    When choosing cloud computing infrastructure for IT needs, there is a risk of becoming dependent on, and locked in to, a specific cloud provider, from which it becomes difficult to switch should an organization later decide to move its infrastructure to a different provider. While there is widespread information available on how to migrate existing infrastructure to the cloud, common cloud solutions and providers offer no clear path or framework to support their tenants in migrating off the cloud to another provider or infrastructure with similar service levels, should they decide to do so. Under these circumstances it becomes difficult to switch cloud providers, not only because of the technical complexity of recreating the entire infrastructure from scratch and moving the related data, but also because of the cost involved. One possible solution is to use languages for defining infrastructure as code ("Infrastructure-as-Code"), combined with DevOps methodologies and technologies, to create a mechanism that streamlines migration between different cloud infrastructures, especially if taken into account from the beginning of a project. A well-structured DevOps methodology combined with Infrastructure-as-Code may allow more integrated control of cloud resources, as those can be defined and controlled with specific languages and submitted to automation processes. Such definitions must take into account what the chosen cloud infrastructure's APIs currently support, always seeking to guarantee the tenant a higher degree of control over its infrastructure and a higher level of preparation for recreating or migrating that infrastructure should the need arise, integrating cloud resources into the development model.
    The objective of this dissertation is to create a conceptual reference framework that identifies different approaches to migrating IT infrastructure while maintaining greater provider independence through such mechanisms, and to identify possible constraints or obstacles to this approach. Such a framework can be consulted from the beginning of a development project if changes of infrastructure or provider are foreseeable, taking into account what the APIs provide in order to make such transitions easier.
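    The provider-independence idea above can be sketched as a provider-neutral infrastructure definition translated into provider-specific API calls by pluggable adapters. Everything here is hypothetical: the resource schema, provider names, and `provision_*` functions stand in for real SDK calls (e.g. a Terraform-style tool would play this role in practice).

```python
# Provider-neutral declaration of the desired infrastructure ("code", not clicks).
INFRASTRUCTURE = [
    {"kind": "vm", "name": "web-1", "cpus": 2, "memory_gb": 4},
    {"kind": "bucket", "name": "assets"},
]

# Stand-ins for calls to each provider's API; a real adapter would use the SDK.
def provision_vm_provider_a(spec):
    return f"a:vm/{spec['name']}"

def provision_bucket_provider_a(spec):
    return f"a:bucket/{spec['name']}"

def provision_vm_provider_b(spec):
    return f"b:compute/{spec['name']}"

def provision_bucket_provider_b(spec):
    return f"b:storage/{spec['name']}"

ADAPTERS = {
    "provider_a": {"vm": provision_vm_provider_a, "bucket": provision_bucket_provider_a},
    "provider_b": {"vm": provision_vm_provider_b, "bucket": provision_bucket_provider_b},
}

def provision(infra, provider):
    """Recreate the same declared infrastructure on any supported provider."""
    adapters = ADAPTERS[provider]
    return [adapters[resource["kind"]](resource) for resource in infra]
```

    Because the declaration is separate from the adapters, migrating means re-running `provision` against a different provider rather than rebuilding the infrastructure by hand, which is the streamlining the dissertation argues for.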

    Middleware for large scale in situ analytics workflows

    The trend toward exascale is causing researchers to rethink the entire computational science stack, as future-generation machines will contain both diverse hardware environments and the runtimes that manage them. Additionally, the science applications themselves are stepping away from the traditional bulk-synchronous model and moving toward a more dynamic and decoupled environment in which analysis routines run in situ alongside the large-scale simulations. This thesis presents CoApps, a middleware that allows in situ science analytics applications to operate in a location-flexible manner. Additionally, CoApps explores methods to extract information from, and issue management operations to, the lower-level runtimes that manage the diverse hardware expected on next-generation exascale machines. This work leverages experience with several extremely scalable applications in materials science and fusion, and has been evaluated on machines ranging from local Linux clusters to the supercomputer Titan.
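    The in situ model contrasted with bulk-synchronous post-processing can be illustrated in miniature: rather than dumping every simulation step to disk for later analysis, the analysis routine consumes each step's data as it is produced. The `simulation` and `in_situ_mean` functions below are illustrative stand-ins, not CoApps APIs.

```python
def simulation(steps):
    """Stand-in for a large-scale simulation emitting per-step field data."""
    field = [0.0] * 8
    for step in range(steps):
        field = [x + step for x in field]  # placeholder physics update
        yield step, field                  # data handed off, never written out

def in_situ_mean(sim):
    """Analysis co-located with the simulation: reduce data before any I/O."""
    means = []
    for step, field in sim:
        means.append(sum(field) / len(field))  # small reduction replaces raw dump
    return means
```

    The generator hand-off stands in for the coupling a middleware provides: the simulation and the analysis advance together, and only the reduced result would ever need to leave the node.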