
    Gathering solutions and providing APIs for their orchestration to implement continuous software delivery

    In traditional IT environments, it is common for software updates and new releases to take weeks or even months to reach end users. Many IT vendors and providers of software products and services therefore face the challenge of delivering updates considerably more frequently, because users, customers, and other stakeholders expect accelerated feedback loops and significantly faster responses to changing demands and emerging issues. Taking this challenge seriously is thus of utmost economic importance for IT organizations that wish to remain competitive. Continuous software delivery is an emerging paradigm adopted by a growing number of organizations to address this challenge. It aims to drastically shorten release cycles while ensuring the delivery of high-quality software. Adopting continuous delivery essentially means making it economical to deliver changes constantly in small batches: infrequent, high-risk releases with many accumulated changes are replaced by a continuous stream of small, low-risk updates. Realizing the benefits of continuous delivery requires a high degree of automation, technically achieved by implementing continuous delivery pipelines consisting of different application-specific stages (build, test, production, etc.) that automate most parts of the application delivery process. Each stage relies on a corresponding application environment, such as a build environment or production environment. This work presents concepts and approaches to implement continuous delivery pipelines based on systematically gathered solutions that are used and orchestrated as building blocks of application environments. Initially, the presented Gather'n'Deliver method is centered around a shared knowledge base that provides the foundation for gathering, utilizing, and orchestrating diverse solutions such as deployment scripts, configuration definitions, and Cloud services. Several classification dimensions and taxonomies are discussed to facilitate a systematic categorization of solutions and to express the application environment requirements those solutions satisfy. The presented GatherBase framework enables the collaborative and automated gathering of solutions through solution repositories. These repositories are the foundation for building diverse knowledge base variants that provide fine-grained query mechanisms to find and retrieve solutions, for example, to be used as building blocks of specific application environments. Combining and integrating diverse solutions at runtime is achieved by orchestrating their APIs. Since some solutions, such as lower-level executable artifacts (deployment scripts, configuration definitions, etc.), do not immediately provide their functionality through APIs, additional APIs need to be supplied. This issue is addressed by different approaches, such as the presented Any2API framework, which is intended to generate individual APIs for such artifacts. An integrated architecture, together with corresponding prototype implementations, aims to demonstrate the technical feasibility of the presented approaches. Finally, various validation scenarios evaluate the approaches within the scope of continuous delivery and application environments and beyond.
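    As a hedged illustration of the API-supplying step, the following minimal Python sketch wraps a hypothetical deployment script (deploy.sh) behind an HTTP endpoint so it can be orchestrated like any other API. This only mimics what a generator such as Any2API automates; it is not the actual Any2API implementation, and the script name and parameter handling are assumptions.

        # Illustrative sketch only, not the Any2API implementation: expose a
        # hypothetical deployment script (deploy.sh) through a minimal HTTP API.
        import json
        import os
        import subprocess
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class DeployHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                # Read JSON parameters from the request body and hand them to
                # the script as environment variables (merged with the parent
                # environment so PATH etc. remain available).
                length = int(self.headers.get("Content-Length", 0))
                params = json.loads(self.rfile.read(length) or b"{}")
                env = {**os.environ, **{k: str(v) for k, v in params.items()}}
                result = subprocess.run(["./deploy.sh"], env=env,
                                        capture_output=True, text=True)
                body = json.dumps({"exit_code": result.returncode,
                                   "stdout": result.stdout}).encode()
                self.send_response(200 if result.returncode == 0 else 500)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            # POST e.g. {"TARGET_HOST": "test.example.org"} to localhost:8080.
            HTTPServer(("localhost", 8080), DeployHandler).serve_forever()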

    Implementation of Survey and Evaluation System

    The goal of this project was to evaluate technologies for a survey solution for the HopeHouseOfColorado website and to implement the best one, thus allowing the website administrators to create surveys and manage the collected data. The implemented solution lets administrators present survey information in an organized, graphical manner. The survey tool improved the survey process at HopeHouse and standardized the data collection format.

    Conference Web Site Redesign

    Sustainable Resources was a non-profit organization based in Boulder, CO that, at the time of this project, hosted yearly conferences to help find solutions to world poverty. Although the previous web site had a professional appearance, it was not meeting their needs with regard to usability, extensibility, and maintainability, as they had been forced to rely heavily on a transient volunteer IT labor base. The aim of this project was to address their functional and maintenance problems. The application's front end was built following the Model-View-Controller (MVC) design pattern, implemented with the Jakarta Struts framework and a JSP interface. Hibernate was used as the database persistence layer, and the entire data model was completely replaced because certain data fields were duplicated in the former system. While giving Sustainable Resources the same functionality they had previously enjoyed, the new application enabled the site's administrators to effectively maintain parts of the application without IT support and supplied them with the necessary documentation to assist future developers with upgrades.

    From Facility to Application Sensor Data: Modular, Continuous and Holistic Monitoring with DCDB

    Today's HPC installations are highly complex systems, and their complexity will only increase as we move to exascale and beyond. At each layer, from facilities to systems and from runtimes to applications, a wide range of tuning decisions must be made to achieve efficient operation. This, however, requires systematic and continuous monitoring of system and user data. While many insular solutions exist, the current HPC ecosystem still lacks a system for holistic, facility-wide monitoring. In this paper we introduce DCDB, a comprehensive monitoring system capable of integrating data from all system levels. It is designed as a modular and highly scalable framework based on a plugin infrastructure. All monitored data is aggregated in a distributed noSQL data store for analysis and cross-system correlation. We demonstrate the performance and scalability of DCDB and describe two use cases in the areas of energy management and characterization.
    Comment: Accepted at The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC) 201
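    To make the plugin idea concrete, here is a minimal Python sketch of a plugin-driven collection loop. It is an illustration only, not DCDB's actual plugin interface; the sensor path, the plugin names, and the in-memory dict standing in for the distributed noSQL store are all assumptions.

        # Illustrative only, not DCDB's plugin API: each plugin samples one
        # sensor; the collector aggregates timestamped readings into a store
        # stand-in (a dict here, a distributed noSQL store in DCDB).
        import time
        from abc import ABC, abstractmethod

        class SensorPlugin(ABC):
            @abstractmethod
            def name(self) -> str: ...
            @abstractmethod
            def read(self) -> float: ...

        class CpuTempPlugin(SensorPlugin):
            # Hypothetical plugin reading a Linux thermal zone.
            def name(self) -> str:
                return "node0.cpu.temperature"
            def read(self) -> float:
                with open("/sys/class/thermal/thermal_zone0/temp") as f:
                    return int(f.read()) / 1000.0  # millidegrees -> degrees C

        def collect(plugins, store, interval_s=10.0, rounds=3):
            # Periodically sample every plugin; key is (sensor name, timestamp).
            for _ in range(rounds):
                ts = time.time()
                for p in plugins:
                    store[(p.name(), ts)] = p.read()
                time.sleep(interval_s)

        store = {}
        collect([CpuTempPlugin()], store)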

    An inter-cloud architecture for future internet infrastructures

    In recent years, the concept of interconnecting clouds to allow common service coordination has gained significant attention, mainly because of the increasing use of cloud resources by Internet users. Efficient common management across different clouds offers essential benefits such as boundless elasticity and scalability. Yet issues related to differing standards have led to interoperability problems. For this reason, the Open Cloud Computing Interface defines a set of open, community-led specifications along with a flexible API for building cloud systems. Today, cloud systems such as OpenStack, OpenNebula, Amazon Web Services, and VMware vCloud expose APIs for inter-cloud communication. In this work we explore an inter-cloud model by creating a new cloud platform service that acts as a mediator among the OpenStack, FI-WARE datacenter resource management, and Amazon Web Services cloud architectures, in order to orchestrate communication between various cloud environments. The model is based on FI-WARE and will be offered as a reusable enabler with an open specification to allow interoperable service coordination.
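    The mediator idea can be sketched as a common interface with one adapter per provider. The following Python sketch illustrates only that structure, not the paper's implementation; every class and method name in it is hypothetical, and the adapters return placeholders where real provider API calls would go.

        # Illustrative sketch of a mediator with per-cloud adapters; all names
        # are hypothetical and no real provider API is called.
        from abc import ABC, abstractmethod

        class CloudAdapter(ABC):
            @abstractmethod
            def create_instance(self, image: str, flavor: str) -> str:
                """Start a VM and return a provider-agnostic instance id."""

        class OpenStackAdapter(CloudAdapter):
            def create_instance(self, image: str, flavor: str) -> str:
                # A real adapter would call the OpenStack Compute (Nova) API.
                return f"openstack:{image}:{flavor}"

        class AwsAdapter(CloudAdapter):
            def create_instance(self, image: str, flavor: str) -> str:
                # A real adapter would call EC2 RunInstances.
                return f"aws:{image}:{flavor}"

        class InterCloudMediator:
            def __init__(self):
                self.adapters = {"openstack": OpenStackAdapter(),
                                 "aws": AwsAdapter()}
            def provision(self, provider: str, image: str, flavor: str) -> str:
                # Route the request to the adapter for the chosen provider.
                return self.adapters[provider].create_instance(image, flavor)

        mediator = InterCloudMediator()
        print(mediator.provision("aws", "ami-12345678", "t3.micro"))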

    Scientific Workflows for Metabolic Flux Analysis

    Metabolic engineering is a highly interdisciplinary research domain that interfaces biology, mathematics, computer science, and engineering. Metabolic flux analysis with carbon tracer experiments (13C-MFA) is a particularly challenging metabolic engineering application that consists of several tightly interwoven building blocks such as modeling, simulation, and experimental design. While several general-purpose workflow solutions have emerged in recent years to support the realization of complex scientific applications, these approaches are only partially transferable to 13C-MFA workflows. While problems in other research fields (e.g., bioinformatics) are primarily centered around scientific data processing, 13C-MFA workflows have more in common with business workflows. For instance, many bioinformatics workflows are designed to identify, compare, and annotate genomic sequences by "pipelining" them through standard tools like BLAST; typically, the next workflow task in the pipeline can be determined automatically from the outcome of the previous step. Five computational challenges have been identified in the endeavor of conducting 13C-MFA studies: organization of heterogeneous data, standardization of processes and the unification of tools and data, interactive workflow steering, distributed computing, and service orientation. The outcome of this thesis is a scientific workflow framework (SWF) that is custom-tailored to the specific requirements of 13C-MFA applications. The proposed approach, namely designing the SWF as a collection of loosely coupled modules glued together with web services, eases the realization of 13C-MFA workflows by offering several features. By design, existing tools are integrated into the SWF using web service interfaces and foreign programming language bindings (e.g., Java or Python). Although the attributes "easy-to-use" and "general-purpose" are rarely associated with distributed computing software, the presented use cases show that the proposed Hadoop MapReduce framework eases the deployment of computationally demanding simulations on cloud and cluster computing resources. An important building block for enabling interactive, researcher-driven workflows is the ability to track all data needed to understand and reproduce a workflow. The standardization of 13C-MFA studies using a folder structure template and the corresponding services and web interfaces improves the exchange of information within a group of researchers. Finally, several auxiliary tools are developed in the course of this work to complement the SWF modules, ranging from simple helper scripts to visualization and data conversion programs. This solution distinguishes itself from other scientific workflow approaches by offering a system of loosely coupled components that are flexibly arranged to match the typical requirements of the metabolic engineering domain. As a modern, service-oriented software framework, it allows new applications to be easily composed by reusing existing components.
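    As an illustration of the distributed computing building block, the sketch below shows a Hadoop Streaming style mapper in Python that runs one toy simulation per input line. The input format and the simulate() stand-in are assumptions, not the thesis code; such a script would typically be launched via Hadoop Streaming, e.g. hadoop jar hadoop-streaming.jar -input params.tsv -output results -mapper mapper.py.

        #!/usr/bin/env python3
        # Illustrative Hadoop Streaming mapper (not the thesis code): each
        # input line carries "parameter_id <TAB> v1 <TAB> v2 ..."; the mapper
        # runs a toy simulation and emits "parameter_id <TAB> objective".
        import sys

        def simulate(params):
            # Stand-in for a computationally demanding 13C-MFA simulation.
            values = [float(x) for x in params]
            return sum(v * v for v in values)

        for line in sys.stdin:
            fields = line.strip().split("\t")
            if len(fields) < 2:
                continue  # skip malformed lines
            param_id, params = fields[0], fields[1:]
            print(f"{param_id}\t{simulate(params)}")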

    ZoomAzores project: implementation of a WebGIS for Nature and Adventure Tourism.

    Dissertation presented as a partial requirement for the degree of Master in Statistics and Information Management.
    Nowadays, the Web offers new ways to make information available to users, creating new tools that can be used to publish tourist information and promote destinations. The main objective of this work is to develop a Web application for the ZoomAzores project. This application uses dynamic maps and user-generated content features focused on making useful information available to tourists and on promoting Nature and Adventure Tourism (NAT) in the archipelago of the Azores. The solutions adopted were always determined by the technologies used and by the point of view of tourists visiting the Azores, who frequently do not know the territory. The ZoomAzores Web application has Geographic Information System (GIS) visualization and navigation capabilities on the Internet, turning it into a WebGIS. It also embraces the principles of Web 2.0, providing functionalities such as the generation of content by users. The link between the use of dynamic maps and Web 2.0 in tourism promotion and travel planning appears to be a solid reality, opening up new business opportunities in tourism. In this work, the development of the ZoomAzores WebGIS is based on the use of Open Standards (OS) and Free and Open Source Software (FOSS). The use of OS is key to developing a WebGIS application able to interoperate with other systems and thus to use and consume Web Services (WS) that other systems offer, concurrently enriching the data sources used. FOSS technologies allow the creation of a lower-cost solution without software licensing costs. This document presents some design aspects of the system's development and describes functional and architectural features of the WebGIS ZoomAzores.
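    As a hedged example of consuming an open-standard web service of the kind a WebGIS like ZoomAzores relies on, the following Python snippet requests a rendered map from a WMS server. The server URL and layer name are hypothetical placeholders; the request parameters follow the standard OGC WMS 1.1.1 GetMap interface.

        # Illustrative sketch: fetch a map image from an OGC WMS endpoint.
        # Server URL and layer name are placeholders, not ZoomAzores services.
        from urllib.parse import urlencode
        from urllib.request import urlretrieve

        params = {
            "SERVICE": "WMS",
            "VERSION": "1.1.1",
            "REQUEST": "GetMap",
            "LAYERS": "azores:trails",        # hypothetical layer name
            "SRS": "EPSG:4326",
            "BBOX": "-31.3,36.9,-24.8,39.8",  # rough Azores extent, lon/lat
            "WIDTH": "800",
            "HEIGHT": "400",
            "FORMAT": "image/png",
        }
        url = "https://example.org/geoserver/wms?" + urlencode(params)
        urlretrieve(url, "azores_trails.png")  # save the rendered map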

    Effective use of open source GIS in rural planning in South Africa

    The purpose of this study is to develop a method for collecting, storing, and visualizing geographic data in an easy and inexpensive way, and to enable a wide audience to reach the information. The focus is mainly on projects taking place in developing countries and in environments where one usually works with limited resources. Many of the organizations working in these environments cannot afford commonly used GIS programs such as ArcGIS to visualize their geographic data. Organizations executing projects in developing countries have access to geographic data, but many of them have no knowledge of how to visualize it well. For a non-governmental organization (NGO), visualizing this geographic data can prove useful for many purposes: such organizations work with limited resources and must be able to show results to their donors, and visualizing geographic data from various projects around the world is one way to inform donors about work progress. This study took place in the Dukuduku forest in KwaZulu-Natal, South Africa. It was funded by a Minor Field Study, a scholarship granted by the Swedish International Development Cooperation Agency (SIDA). We collaborated with a local consulting company named Enhanced Strategies, the project managers within the Dukuduku forest, where around 50 projects funded by the government and NGOs are taking place. The aim has been to create a workflow that helps visualize these projects and the progress in the area to all concerned parties. The developed workflow describes in six steps how to gather, store, and visualize geographic data. It is intended to enable inexpensive products that reach as large a group of people as possible: open source programs are used to reduce cost, and Web GIS is used to reach a wider audience. The results of this study are, in addition to the workflow itself, three examples based on it. The first is a basic map viewer in which the user decides which layers to visualize. The second describes how geographic data can be edited through a web browser, thereby updating a database remotely. The last example describes how the workflow can be used to visualize the progress of one specific project.
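    To make the store step of such a workflow concrete, here is a minimal sketch of inserting a surveyed project site into a PostGIS-enabled PostgreSQL database with psycopg2. The database, table, and column names are hypothetical, and PostGIS itself is an assumption consistent with the open source toolchain described above, not a tool the study names.

        # Illustrative sketch of the "store" step; all names are hypothetical.
        # ST_GeomFromText is a standard PostGIS function.
        import psycopg2

        conn = psycopg2.connect("dbname=projects user=gis password=secret")
        with conn, conn.cursor() as cur:  # commits on success
            cur.execute(
                """
                INSERT INTO project_sites (name, status, geom)
                VALUES (%s, %s, ST_GeomFromText(%s, 4326))
                """,
                # WKT is POINT(lon lat); coordinates are an approximate
                # location near the Dukuduku forest.
                ("Community garden", "in progress", "POINT(32.25 -28.37)"),
            )
        conn.close()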

    Component-aware Orchestration of Cloud-based Enterprise Applications, from TOSCA to Docker and Kubernetes

    Enterprise IT is currently facing the challenge of coordinating the management of complex, multi-component applications across heterogeneous cloud platforms. Containers and container orchestrators provide a valuable solution for deploying multi-component applications over cloud platforms by coupling the lifecycle of each application component to that of its hosting container. We hereby propose a solution for going beyond such coupling, based on the OASIS standard TOSCA and on Docker: a novel approach for deploying multi-component applications on top of existing container orchestrators that allows each component to be managed independently of the container used to run it. We also present prototype tools implementing our approach and show how we effectively exploited them to carry out a concrete case study.
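    A rough sketch of the orchestrator-facing end of such a pipeline, using the official kubernetes Python client rather than the authors' prototype tools: one application component, described independently of any particular container, is turned into a Kubernetes Deployment. The component name and image are hypothetical.

        # Illustrative sketch (not the paper's tooling): map one application
        # component onto a Kubernetes Deployment via the official client.
        from kubernetes import client, config

        def deployment_for_component(name: str, image: str,
                                     replicas: int = 1) -> client.V1Deployment:
            # Kubernetes manages the component's lifecycle independently of
            # any single container instance.
            container = client.V1Container(name=name, image=image)
            template = client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": name}),
                spec=client.V1PodSpec(containers=[container]),
            )
            spec = client.V1DeploymentSpec(
                replicas=replicas,
                selector=client.V1LabelSelector(match_labels={"app": name}),
                template=template,
            )
            return client.V1Deployment(
                metadata=client.V1ObjectMeta(name=name), spec=spec)

        if __name__ == "__main__":
            config.load_kube_config()  # reads ~/.kube/config
            client.AppsV1Api().create_namespaced_deployment(
                namespace="default",
                body=deployment_for_component("web-frontend", "nginx:1.25"),
            )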