Cloud Service Broker
MSc in Computer and Telematics Engineering

Throughout the history of computer systems, experts have been reshaping IT
infrastructure to improve the efficiency of organizations by enabling shared
access to computational resources. The advent of cloud computing has
sparked a new paradigm providing better hosting and service delivery over the
Internet. It offers advantages over traditional solutions by providing ubiquitous,
scalable and on-demand access to shared pools of computational resources.
In recent years, we have seen new market players offering
cloud services at competitive prices and under different Service Level Agreements.
With the unprecedented and increasing adoption of cloud computing, cloud
providers are on the lookout for creating and offering new, value-added
services to their customers. Market competitiveness and the abundance of
service options and business models led to gradual entropy. Mismatched
cloud terminology was introduced, and incompatible APIs locked users in to
specific cloud service providers. Billing and charging became fragmented
when consuming cloud services from multiple vendors. An entity recommending
cloud providers and acting as an intermediary between the cloud consumer
and providers would harmonize this interaction.
This dissertation proposes and implements a Cloud Service Broker focusing
on assisting and encouraging developers to run their applications in the
cloud. Developers can easily describe their applications, where an intelligent
algorithm will be able to recommend cloud offerings that better suit application
requirements. In this way, users are aided in deploying, managing, monitoring
and migrating their applications in a cloud of clouds. A single API is required
for orchestrating the whole process in tandem with truly decoupled cloud managers.
Users can also interact with the Cloud Service Broker through a Web
portal, a command-line interface, and client libraries.
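The recommendation step described above can be reduced, in its simplest form, to a feasibility filter over declared application requirements followed by a price ranking. The sketch below illustrates only that idea; the offering names, fields (`cpu`, `ram_gb`, `regions`, `price_per_hour`) and data are invented, and the thesis's actual algorithm is richer than this.

```python
# Hypothetical sketch of a broker recommendation step: keep only cloud
# offerings that satisfy every declared application requirement, then
# rank the feasible ones by price. All data below is illustrative.

def recommend(requirements, offerings):
    """Return offerings meeting every requirement, cheapest first."""
    feasible = [
        o for o in offerings
        if o["cpu"] >= requirements["cpu"]
        and o["ram_gb"] >= requirements["ram_gb"]
        and requirements["region"] in o["regions"]
    ]
    return sorted(feasible, key=lambda o: o["price_per_hour"])

offerings = [
    {"name": "cloud-a.small", "cpu": 2, "ram_gb": 4, "regions": {"eu", "us"}, "price_per_hour": 0.05},
    {"name": "cloud-b.medium", "cpu": 4, "ram_gb": 8, "regions": {"eu"}, "price_per_hour": 0.09},
    {"name": "cloud-c.large", "cpu": 8, "ram_gb": 16, "regions": {"us"}, "price_per_hour": 0.20},
]
app = {"cpu": 4, "ram_gb": 8, "region": "eu"}
print([o["name"] for o in recommend(app, offerings)])  # ['cloud-b.medium']
```

A real broker would also weigh SLA terms and migration costs; the point here is only the filter-then-rank structure.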
Machine Learning-based Orchestration Solutions for Future Slicing-Enabled Mobile Networks
The fifth generation mobile networks (5G) will incorporate novel technologies such as network programmability and virtualization enabled by Software-Defined Networking (SDN) and Network Function Virtualization (NFV) paradigms, which have recently attracted major
interest from both academic and industrial stakeholders.
Building on these concepts, Network Slicing has emerged as the main driver of a novel business model where mobile operators may open, i.e., “slice”, their infrastructure to new business players and offer independent, isolated and self-contained sets of network functions
and physical/virtual resources tailored to specific services requirements. While Network Slicing has the potential to increase the revenue sources of service providers, it involves a number of technical challenges that must be carefully addressed.
End-to-end (E2E) network slices encompass time and spectrum resources in the radio access network (RAN), transport resources on the fronthaul/backhaul links, and computing and storage resources at core and edge data centers. Additionally, the heterogeneity of vertical service requirements (e.g., high throughput, low latency, high reliability) exacerbates the need for novel orchestration solutions able to manage end-to-end network slice resources across different domains, while satisfying stringent service level agreements and specific traffic requirements. An end-to-end network slicing orchestration solution shall i) admit network slice requests
such that the overall system revenues are maximized, ii) provide the required resources across different network domains to fulfill the Service Level Agreements (SLAs), and iii) dynamically adapt the resource allocation based on the real-time traffic load, end-users' mobility and instantaneous wireless channel statistics. Indeed, a mobile network represents a fast-changing scenario characterized by complex
spatio-temporal relationships connecting end-users' traffic demand with social activities and the economy. Legacy models that aim at providing dynamic resource allocation based on traditional traffic demand forecasting techniques fail to capture these important aspects.
To close this gap, machine learning-aided solutions are quickly emerging as promising technologies to sustain, in a scalable manner, the set of operations required by the network slicing context. How to implement such resource allocation schemes among slices, while
making the most efficient use of the networking resources composing the mobile infrastructure, is a key problem underlying the network slicing paradigm, and it is addressed in this thesis.
Integration of the cloud computing paradigm with the operator's network infrastructure
PhD in Informatics Engineering

The proliferation of Internet access allows users to consume services
available directly through the Internet, which translates into a change in
the paradigm of using applications and in the way of communicating,
popularizing in this way the so-called cloud computing paradigm. Cloud
computing brings with it requirements at two different levels: at the cloud level,
usually relying on centralized data centers, where information technology and
network resources must be able to guarantee the demand of such services;
and at the access level, i.e., depending on the service being consumed,
different quality of service is required in the access network, which is a Network
Operator (NO) domain. In summary, there is an obvious network dependency.
However, the network has been playing a relatively minor role, mostly as a
provider of (best-effort) connectivity within the cloud and in the access network.
The work developed in this Thesis enables the effective integration of cloud
and NO domains, providing the network support that the cloud requires. We propose
a framework and a set of associated mechanisms for the integrated
management and control of cloud computing and NO domains to provide end-to-end
services. Moreover, we elaborate a thorough study on the embedding of
virtual resources in this integrated environment. The study focuses on
maximizing the hosting of virtual resources on the physical infrastructure through
optimal embedding strategies (considering the initial allocation of resources as
well as adaptations through time), while at the same time minimizing the costs
associated with energy consumption, in single and multiple domains.
Furthermore, we explore how the NO can take advantage of the integrated
environment to host traditional network functions. In this sense, we study how
virtual network Service Functions (SFs) should be modelled and managed in a
cloud environment and enhance the framework accordingly.
A thorough evaluation of the proposed solutions was performed in the scope of
this Thesis, assessing their benefits. We implemented proof of concepts to
prove the added value, feasibility and easy deployment characteristics of the
proposed framework. Furthermore, the embedding strategies evaluation has
been performed through simulation and Integer Linear Programming (ILP)
solving tools, and it showed that it is possible to reduce the physical
infrastructure's energy consumption without jeopardizing virtual resource
acceptance. This reduction can be further increased by allowing virtual resource
adaptation through time. However, one should keep in mind the costs
associated with adaptation processes. These costs can be minimized, but virtual
resource acceptance may also be reduced. This tradeoff has also been a subject
of the work in this Thesis.
Cloud Broker Based Trust Assessment of Cloud Service Providers
Cloud computing is emerging as the future Internet technology due to advantages such as the sharing of IT resources, virtually unlimited scalability and flexibility, and a high level of automation. Alongside this rapid growth, cloud computing also brings concerns about the security, trust and privacy of the applications and data hosted in the cloud environment. With a large number of cloud service providers available, determining which providers can be trusted for efficient operation of a service deployed in the provider's environment is a key requirement for service consumers.
In this thesis, we provide an approach to assess the trustworthiness of cloud service providers. We propose a trust model that considers real-time cloud transactions to model the trustworthiness of cloud service providers. The trust model represents trust as opinions that explicitly capture uncertainty. The trustworthiness of a cloud service provider is modelled using opinions obtained from three different computations, namely (i) compliance with SLA (Service Level Agreement) parameters, (ii) service provider satisfaction ratings, and (iii) service provider behaviour. In addition, the trust model is extended to encompass the essential cloud characteristics, a credibility function for weighing feedback, and filtering mechanisms to filter out dubious feedback providers. The credibility function and the early filtering mechanisms in the extended trust model are shown to reduce the impact of malicious feedback providers.
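The fusion of the three evidence sources with credibility weighting and feedback filtering might be sketched as follows. The equal weights, the 0.3 credibility threshold and all input values are illustrative assumptions, not the thesis's actual formulas.

```python
# Hedged sketch of combining trust evidence: SLA compliance, credibility-
# weighted satisfaction ratings, and provider behaviour are fused into a
# single score in [0, 1]. Dubious (low-credibility) raters are filtered out.

def trust_score(sla_compliance, ratings, credibility, behaviour, min_credibility=0.3):
    # Early filtering: drop feedback from raters below the credibility threshold.
    kept = [(r, c) for r, c in zip(ratings, credibility) if c >= min_credibility]
    if kept:
        # Credibility-weighted mean of the surviving feedback.
        weighted_rating = sum(r * c for r, c in kept) / sum(c for _, c in kept)
    else:
        weighted_rating = 0.5  # no usable feedback: maximal uncertainty
    # Equal-weight fusion of the three evidence sources (an assumption).
    return (sla_compliance + weighted_rating + behaviour) / 3

score = trust_score(
    sla_compliance=0.9,
    ratings=[0.8, 0.1, 0.85],      # third-party feedback in [0, 1]
    credibility=[0.9, 0.05, 0.8],  # the 0.05-credibility rater is filtered out
    behaviour=0.7,
)
print(round(score, 3))  # 0.808
```

Note how the outlier rating of 0.1 is excluded by the filter, so a single malicious rater barely moves the score.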
Will telecom operators survive in an ever-changing world? Technical considerations leading to disruptive scenarios
The telecommunications industry is going through a difficult phase because of profound technological changes, mainly originated by the development of the Internet. These changes have a major impact on the telecommunications industry as a whole and, consequently, on the future deployment of new networks, platforms and services. The evolution of the Internet has a particularly strong impact on telecommunications operators (Telcos).
In fact, the telecommunications industry is on the verge of major changes due to many factors, such as the gradual commoditization of connectivity, the dominance of web services companies (Webcos), and the growing importance of software-based solutions and the flexibility they introduce (compared to the static systems of telecom operators). This thesis develops, proposes and compares plausible future scenarios based on solutions and approaches that are technologically feasible and viable. The identified scenarios cover a wide range of possibilities: 1) Traditional Telco; 2) Telco as Bit Carrier; 3) Telco as Platform Provider; 4) Telco as Service Provider; 5) Telco Disappearance. For each scenario, a viable platform (from the point of view of telecom operators) is described, highlighting the enabled service portfolio and its potential benefits.
Digital Transformation
The amount of literature on Digital Transformation is staggering—and it keeps growing. Why, then,
come out with yet another such document? Moreover, any text aiming at explaining the Digital
Transformation by presenting a snapshot is going to become obsolete in the blink of an eye, most likely to
be already obsolete at the time it is first published.
The FDC Initiative on Digital Reality felt there was a need to look at the Digital Transformation from the
point of view of a profound change that is pervading the entire society—a change made possible by
technology, one that keeps changing as technology evolution opens new possibilities, but also a
change happening for strong economic reasons. The direction of this change is not easy to
predict because it is steered by a cultural evolution of society, an evolution that is happening in niches
and that may expand rapidly to larger constituencies and as rapidly may fade away. This creation,
selection by experimentation, adoption, and sudden disappearance, is what makes the whole scenario
so unpredictable and continuously changing.
Business Agility and Information Technology in Service Organizations
Service organizations have to deal with highly uncertain events, both in the internal and external environment. In the academic literature and in practice there is not much knowledge about how to deal with this uncertainty. This PhD dissertation investigates the role and impact of information technologies (IT) on business agility in service organizations. Business agility is a relatively new term defined as the capability of organizations to swiftly change businesses and business processes beyond the normal level of flexibility to effectively manage highly uncertain and unexpected, but potentially consequential internal and external events. Empirical research was carried out via surveys and interviews among managers from 35 organizations in four industries and in three governmental sectors. Four in-depth case studies were carried out within one service organization.
The dissertation has six key findings:
1) In many large service organizations business agility is hampered by a lack of IT agility.
2) Organization and alignment of processes and information systems via the cycle of sensing, responding and learning along with the alignment of business and IT are important conditions for improving business agility performance of service organizations.
3) Standardization of IT capabilities and higher levels of data quality support higher levels of business agility of service organizations.
4) Two knowledge management strategies, codification and personalization, are identified that can be used to respond to events with different degrees of uncertainty. A codification knowledge management strategy supports the response to events with low levels of uncertainty by exploiting explicit knowledge from organizational memory. A personalization knowledge management strategy drives the response to events with high levels of uncertainty by exploiting tacit knowledge and social capital.
5) Social capital is an important moderating variable in the relation between IT capabilities and business agility. Social capital can mitigate the lack of IT agility that exists in many service organizations by overcoming information system boundaries and rigidities via human relationships.
6) The combination of sensing, responding and learning capabilities is required to increase all dimensions of business agility performance.
Overall, this research introduces a new approach to analyze and measure business agility. This thesis takes the first steps towards developing theoretical knowledge on the conditions under which IT supports higher levels of business agility and business agility performance.
A Reference Architecture for Service Lifecycle Management – Construction and Application to Designing and Analyzing IT Support
Service-orientation and the underlying concept of service-oriented architectures are a means to successfully address the need for flexibility and interoperability of software applications, which in turn leads to improved IT support of business processes. With a growing level of diffusion, sophistication and maturity, the number of services and interdependencies is gradually rising. This increasingly requires companies to implement a systematic management of services along their entire lifecycle. Service lifecycle management (SLM), i.e., the management of services from the initiating idea to their disposal, is becoming a crucial success factor.
Not surprisingly, the academic and practice communities increasingly postulate comprehensive IT support for SLM to counteract the inherent complexity. The topic is still in its infancy, with no comprehensive models available that help evaluate and design IT support for SLM. This thesis presents a reference architecture for SLM and applies it to the evaluation and design of SLM IT support in companies. The artifact, which largely resulted from consortium research efforts, draws from an extensive analysis of existing SLM applications, case studies, focus group discussions, bilateral interviews and existing literature.
Formal procedure models and a configuration terminology allow adapting and applying the reference architecture to a company's individual setting. Corresponding usage examples prove its applicability and demonstrate the arising benefits within various SLM IT support design and evaluation tasks. A statistical analysis of the knowledge embodied within the reference data leads to novel, highly significant findings. For example, contemporary standard applications do not yet emphasize the lifecycle concept but rather tend to focus on small parts of the lifecycle, especially on service operation. This forces user companies either into a best-of-breed or a custom-development strategy if they are to implement integrated IT support for their SLM activities. SLM software vendors and internal software development units need to undergo a paradigm shift in order to better reflect the numerous interdependencies and increasing intertwining within services’ lifecycles. The SLM architecture is a first step towards achieving this goal.
Content Overview
List of Figures
List of Tables
List of Abbreviations
1 Introduction
2 Foundations
3 Architecture Structure and Strategy Layer
4 Process Layer
5 Information Systems Layer
6 Architecture Application and Extension
7 Results, Evaluation and Outlook
Appendix
References
Curriculum Vitae
Bibliographic Data
Designing Data Spaces
This open access book provides a comprehensive view on data ecosystems and platform economics, from methodical and technological foundations up to reports from practical implementations and applications in various industries. To this end, the book is structured in four parts: Part I “Foundations and Contexts” provides a general overview of building, running, and governing data spaces and an introduction to the IDS and GAIA-X projects. Part II “Data Space Technologies” subsequently details various implementation aspects of IDS and GAIA-X, including, e.g., data usage control, the usage of blockchain technologies, or semantic data integration and interoperability. Next, Part III describes various “Use Cases and Data Ecosystems” from application areas such as agriculture, healthcare, industry, energy, and mobility. Part IV eventually offers an overview of several “Solutions and Applications”, e.g., products and experiences from companies like Google, SAP, Huawei, T-Systems, Innopay and many more. Overall, the book provides professionals in industry with an encompassing overview of the technological and economic aspects of data spaces, based on the International Data Spaces and Gaia-X initiatives. It presents implementations and business cases and gives an outlook to future developments. In doing so, it aims at proliferating the vision of a social data market economy based on data spaces which embrace trust and data sovereignty.
Coastal management and adaptation: an integrated data-driven approach
Coastal regions are some of the most exposed to environmental hazards, yet the coast is the preferred settlement site for a high percentage of the global population, and most major global cities are located on or near the coast. This research adopts a predominantly anthropocentric approach to the analysis of coastal risk and resilience. This centres on the pervasive hazards of coastal flooding and erosion. Coastal management decision-making practices are shown to be reliant on access to current and accurate information. However, constraints have been imposed on information flows between scientists, policy makers and practitioners, due to a lack of awareness and utilisation of available data sources. This research seeks to tackle this issue in evaluating how innovations in the use of data and analytics can be applied to further the application of science within decision-making processes related to coastal risk adaptation. In achieving this aim a range of research methodologies have been employed and the progression of topics covered mark a shift from themes of risk to resilience. The work focuses on a case study region of East Anglia, UK, benefiting from the input of a partner organisation, responsible for the region’s coasts: Coastal Partnership East.
An initial review revealed how data can be utilised effectively within coastal decision-making practices, highlighting scope for the application of advanced Big Data techniques to the analysis of coastal datasets. The process of risk evaluation has been examined in detail, and the range of possibilities afforded by open source coastal datasets was revealed. Subsequently, open source coastal terrain and bathymetric point cloud datasets were identified for 14 sites within the case study area. These were then utilised within a practical application of a geomorphological change detection (GCD) method. This revealed how analysis of high spatial and temporal resolution point cloud data can accurately reveal and quantify physical coastal impacts. Additionally, the research reveals how data innovations can facilitate adaptation through insurance; more specifically, how the use of empirical evidence in the pricing of coastal flood insurance can result in both communication and distribution of risk.
The various strands of knowledge generated throughout this study reveal how an extensive range of data types, sources, and advanced forms of analysis can together allow coastal resilience assessments to be founded on empirical evidence. This research serves to demonstrate how the application of advanced data-driven analytical processes can reduce the levels of uncertainty and subjectivity inherent in current coastal environmental management practices. Adoption of the methods presented within this research could further the possibilities for sustainable and resilient management of the incredibly valuable environmental resource which is the coast.
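The geomorphological change detection (GCD) analysis mentioned above boils down to differencing successive gridded elevation surfaces and discarding changes below a level of detection, so that survey noise is not mistaken for erosion or accretion. A minimal sketch, with invented grids and an invented 0.1 m threshold:

```python
# DEM-of-difference sketch: subtract an older gridded elevation surface
# from a newer one, zeroing out per-cell changes within the level of
# detection (lod). Negative values suggest erosion, positive accretion.

def dem_of_difference(dem_new, dem_old, lod=0.1):
    """Per-cell elevation change; values within ±lod are treated as noise (0)."""
    dod = []
    for row_new, row_old in zip(dem_new, dem_old):
        dod.append([
            round(n - o, 2) if abs(n - o) > lod else 0.0
            for n, o in zip(row_new, row_old)
        ])
    return dod

survey_2019 = [[2.0, 2.1], [1.8, 1.9]]   # elevations in metres
survey_2021 = [[1.6, 2.1], [1.85, 2.4]]  # erosion at [0][0], accretion at [1][1]
dod = dem_of_difference(survey_2021, survey_2019)
print(dod)  # [[-0.4, 0.0], [0.0, 0.5]]
```

Summing positive and negative cells of such a grid (times cell area) yields the accretion and erosion volumes that the study's point cloud analysis quantifies at field scale.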