APMEC: An Automated Provisioning Framework for Multi-access Edge Computing
Novel use cases and verticals such as connected cars and human-robot
cooperation in the areas of 5G and Tactile Internet can significantly benefit
from the flexibility and reduced latency provided by Network Function
Virtualization (NFV) and Multi-Access Edge Computing (MEC). Existing frameworks
managing and orchestrating MEC and NFV are either tightly coupled or completely
separated. The former design is inflexible and increases the complexity of a
single framework, whereas the latter leads to inefficient use of computation
resources because information is not shared. We introduce APMEC, a dedicated
framework for MEC that enables collaboration with the management and
orchestration (MANO) frameworks for NFV. The new design allows allocated
network services to be reused, thus maximizing resource utilization. Measurement
results show that APMEC can allocate up to 60% more network services. Developed
on top of OpenStack, APMEC is an open-source project, available for
collaboration and for facilitating further research activities.
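The reuse idea above can be sketched in a few lines. This is a toy illustration, not the actual APMEC API: an orchestrator that shares an already-allocated network service among MEC applications requesting the same descriptor, instead of provisioning a duplicate each time. All class and descriptor names are invented.

```python
from dataclasses import dataclass

@dataclass
class NetworkService:
    descriptor: str          # e.g. the NSD name requested by a MEC app
    users: int = 0           # how many MEC apps share this instance

class Orchestrator:
    """Shares NS instances across MEC apps when descriptors match."""
    def __init__(self):
        self.allocated = {}  # descriptor -> NetworkService
        self.created = 0     # count of real allocations performed

    def request(self, descriptor: str) -> NetworkService:
        ns = self.allocated.get(descriptor)
        if ns is None:                     # nothing to reuse: allocate
            ns = NetworkService(descriptor)
            self.allocated[descriptor] = ns
            self.created += 1
        ns.users += 1                      # record the sharing app
        return ns

orch = Orchestrator()
for app in ["cam-feed", "lidar", "hd-map"]:
    orch.request("edge-firewall")          # all three apps share one NS
print(orch.created)                        # 1 allocation serves 3 requests
```

Sharing instead of re-allocating is what lets such a design serve more network services from the same pool of computation resources.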
Quality Assurance of Heterogeneous Applications: The SODALITE Approach
A key focus of the SODALITE project is to assure the quality and performance
of the deployments of applications over heterogeneous Cloud and HPC
environments. It offers a set of tools to detect and correct errors, smells,
and bugs in the deployment models and their provisioning workflows, and a
framework to monitor and refactor deployment model instances at runtime. This
paper presents the objectives, designs, and early results of the quality
assurance framework and the refactoring framework.
Comment: 5 pages. Accepted for publication. 8th European Conference On
Service-Oriented And Cloud Computing (https://esocc-conf.eu/). EU Trac
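To make the notion of detecting "smells" in deployment models concrete, here is a hypothetical illustration; the property keys and rules below are assumptions for the sketch, not SODALITE's actual model schema or rule set.

```python
# Hypothetical deployment-model smell detector. The keys ("user",
# "password", "version") and the three rules are invented examples.
def find_smells(node: dict) -> list:
    smells = []
    props = node.get("properties", {})
    if props.get("user") in ("root", "admin"):
        smells.append("admin-by-default")    # privileged default user
    if "password" in props:
        smells.append("hardcoded-secret")    # secret stored in plain text
    if "version" not in props:
        smells.append("unpinned-version")    # unpinned image/package version
    return smells

model = {"name": "db", "properties": {"user": "root", "password": "x"}}
print(find_smells(model))
# ['admin-by-default', 'hardcoded-secret', 'unpinned-version']
```

A real checker would walk every node of the model and its provisioning workflow, but the per-node rule shape is the same.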
Deployment and Operation of Complex Software in Heterogeneous Execution Environments
This open access book provides an overview of the work developed within the SODALITE project, which aims at facilitating the deployment and operation of distributed software on top of heterogeneous infrastructures, including cloud, HPC and edge resources. The experts participating in the project describe how SODALITE works and how it can be exploited by end users. While multiple languages and tools are available in the literature to support DevOps teams in the automation of deployment and operation steps, still these activities require specific know-how and skills that cannot be found in average teams. The SODALITE framework tackles this problem by offering modelling and smart editing features to allow those we call Application Ops Experts to work without knowing low level details about the adopted, potentially heterogeneous, infrastructures. The framework offers also mechanisms to verify the quality of the defined models, generate the corresponding executable infrastructural code, automatically wrap application components within proper execution containers, orchestrate all activities concerned with deployment and operation of all system components, and support on-the-fly self-adaptation and refactoring
Using Semantic Web Technologies to Query and Manage Information within Federated Cyber-Infrastructures
A standardized descriptive ontology supports efficient querying and manipulation of data from heterogeneous sources across the boundaries of distributed infrastructures, particularly in federated environments. In this article, we present the Open-Multinet (OMN) set of ontologies, which were designed specifically for this purpose, as well as to support the management of the life cycles of infrastructure resources. We present their initial application in Future Internet testbeds, their use for representing and requesting available resources, and our experimental performance evaluation of the ontologies in terms of querying and translation times. Our results highlight the value and applicability of Semantic Web technologies in managing resources of federated cyber-infrastructures.
EC/FP7/318389/EU/Federation for FIRE/Fed4FIRE
EC/FP7/732638/EU/Federation for FIRE Plus/Fed4FIREplu
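The kind of cross-testbed query such ontologies enable can be illustrated with a minimal, self-contained triple-pattern matcher; the resource URIs and the `omn:` property names below are invented examples, and a real deployment would use an RDF store and SPARQL rather than this toy.

```python
# RDF-style (subject, predicate, object) facts about two federated testbeds.
triples = {
    ("testbedA/node1", "rdf:type",        "omn:Node"),
    ("testbedA/node1", "omn:isAvailable", "true"),
    ("testbedB/node7", "rdf:type",        "omn:Node"),
    ("testbedB/node7", "omn:isAvailable", "false"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which resources are currently available?" -- one query, both testbeds:
available = [s for s, _, _ in match(p="omn:isAvailable", o="true")]
print(available)  # ['testbedA/node1']
```

Because both testbeds describe resources with the same vocabulary, a single pattern answers the question across the federation, which is the point of a standardized ontology.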
A UML Profile for the Design, Quality Assessment and Deployment of Data-intensive Applications
Big Data or Data-Intensive Applications (DIAs) seek to mine, manipulate, extract, or otherwise exploit the potential intelligence hidden behind Big Data. However, several practitioner surveys remark that the potential of DIAs is still untapped because their design, quality assessment, and continuous refinement are very difficult and costly. To address this shortcoming, we propose a UML domain-specific modeling language, or profile, specifically tailored to support the design, assessment, and continuous deployment of DIAs. This article illustrates our DIA-specific profile and outlines its usage in the context of DIA performance engineering and deployment. For DIA performance engineering, we rely on the Apache Hadoop technology, while for DIA deployment, we leverage the TOSCA language. We conclude that the proposed profile offers a powerful language for data-intensive software and systems modeling, quality evaluation, and automated deployment of DIAs on private or public clouds.
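A minimal sketch of what mapping a modelled DIA component onto a TOSCA-style node template might look like; the node type string and property names here are illustrative assumptions, not the profile's actual mapping rules.

```python
import json

def to_tosca(component: dict) -> dict:
    """Map a modelled DIA component onto a TOSCA-style node template."""
    return {
        "node_templates": {
            component["name"]: {
                "type": component["type"],        # illustrative node type
                "properties": {"instances": component["instances"]},
            }
        }
    }

# Invented example component taken from a hypothetical profile model:
dia = {"name": "wordcount", "type": "tosca.nodes.hadoop.Job", "instances": 4}
print(json.dumps(to_tosca(dia), indent=2))
```

The value of such a mapping is that the same annotated model drives both performance analysis and the deployment artifact, so the two never drift apart.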
Introducing Development Features for Virtualized Network Services
Network virtualization and the softwarization of network functions are trends
aiming at higher network efficiency, cost reduction, and agility. They are driven by
the evolution in Software Defined Networking (SDN) and Network Function
Virtualization (NFV). This shows that software will play an increasingly
important role within telecommunication services, which were previously
dominated by hardware appliances. Service providers can benefit from this, as
it enables faster introduction of new telecom services, combined with an agile
set of possibilities to optimize and fine-tune their operations. However, the
provided telecom services can only evolve if the adequate software tools are
available. In this article, we explain how the development, deployment and
maintenance of such an SDN/NFV-based telecom service puts specific requirements
on the platform providing it. A Software Development Kit (SDK) is introduced,
allowing service providers to adequately design, test and evaluate services
before they are deployed in production and also update them during their
lifetime. This continuous cycle between development and operations, a concept
known as DevOps, is a well-known strategy in software development. To extend
its context further to SDN/NFV-based services, the functionalities provided by
traditional cloud platforms are not yet sufficient. By giving an overview of
the currently available tools and their limitations, the gaps in DevOps for
SDN/NFV services are highlighted. The benefit of such an SDK is illustrated by
a secure content delivery network service (enhanced with deep packet inspection
and elastic routing capabilities). With this use case, the dynamics between
developing and deploying a service are further illustrated.
Persistence and discovery of reusable cloud application topologies
Due to the benefits introduced by the Cloud computing paradigm and the increase in available Cloud services (VM- and non-VM-oriented), the number of application developers strongly supporting a partial or complete migration of application components to Cloud environments has significantly increased in recent years. For example, it is possible to host the application's database off-premise (e.g., in a DBaaS solution) while keeping the remaining components (presentation or business-logic components) on-premise. However, this application deployment is only one possible distribution alternative, and the existence of further alternatives allows the generation of a wide variety of distribution combinations. In addition, the challenge for application developers of efficiently selecting an optimal application deployment strategy, considering evolving application performance under fluctuating workloads, has grown rapidly. How to select, configure, and deploy an application optimally to satisfy the functional and non-functional requirements of business and operation has been a research area in both academia and industry.
In this Master's thesis, building on the approaches proposed in previous work, we first survey existing approaches and technologies for persisting, retrieving, and building typed graph-based Cloud application topologies, leveraging the benefits introduced by graph databases and graph database technologies. We then develop the core algorithms for persisting and discovering application topologies based on their shared characteristics. The underlying conceptual models capture the structural aspects representing the relationships between application topologies, their performance aspects, and their evolving workloads. As a result of this thesis, a prototypical implementation of a RESTful framework is provided to support discovering and building reusable, viable topologies of Cloud applications with respect to evolving functional and non-functional aspects, e.g., taking into account an application's performance, its corresponding profile, and its evolving workload.
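The discovery step above can be sketched with a toy similarity measure over typed graphs: each topology is a set of typed edges, and Jaccard overlap of those edge sets stands in for the thesis's richer matching algorithms. All component and relationship names are invented for illustration.

```python
def edge_set(topology):
    """A topology is a list of (source, relationship, target) edges."""
    return {(src, rel, dst) for src, rel, dst in topology}

def similarity(a, b):
    """Jaccard similarity of two topologies' typed edge sets."""
    ea, eb = edge_set(a), edge_set(b)
    return len(ea & eb) / len(ea | eb)

# A stored topology vs. a query with one differing hosting alternative:
stored = [("web", "connectsTo", "db"), ("web", "hostedOn", "vm")]
query  = [("web", "connectsTo", "db"), ("web", "hostedOn", "dbaas")]
print(round(similarity(stored, query), 2))  # 0.33
```

A discovery service would rank all persisted topologies by such a score (refined with performance and workload data) and return the closest viable candidates for reuse.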