96 research outputs found
A Demo of Application Lifecycle Management for IoT Collaborative Neighborhood in the Fog
Regarding latency, privacy, resiliency and network scarcity management, only distributed approaches such as the one proposed by the Fog Computing architecture can efficiently address the tremendous growth of the Internet of Things (IoT). IoT applications can be deployed and run hierarchically at different levels of an infrastructure ranging from centralized datacenters to the connected things themselves. Consequently, the software entities composing IoT applications can be executed in many different configurations. The heterogeneity of the equipment and devices of the target infrastructure opens opportunities for the placement of software entities, taking into account their requirements in terms of hardware, cyber-physical interactions and software dependencies. Once the most appropriate place has been found, the software entities have to be deployed and run. Container-based virtualization is considered as a way to overcome the complexity of packaging, deploying and running software entities in a heterogeneous distributed infrastructure in the vicinity of the connected devices. This paper reports a practical experiment, presented as a live demo, that showcases a "Smart Bell in a Collaborative Neighborhood" IoT application in the Fog. Application Lifecycle Management (ALM) has been put in place based on Docker technologies to deploy and run micro-services in the context of Smart Homes operated by Orange
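The placement problem this abstract describes (matching each software entity's hardware and cyber-physical requirements against heterogeneous fog nodes) can be sketched with a minimal greedy matcher. This is an illustrative assumption, not the paper's algorithm; node and entity names are invented.

```python
# Hypothetical sketch of requirement-aware placement of software entities
# onto fog nodes, in the spirit of the placement step the paper describes.
# Names and the greedy first-fit strategy are illustrative assumptions.

def place(entities, nodes):
    """Assign each entity to the first node satisfying its requirements.

    entities: list of (name, {"mem_mb": int, "needs_camera": bool})
    nodes:    list of (name, {"mem_mb": int, "has_camera": bool})
    Returns a dict entity-name -> node-name; raises if an entity cannot fit.
    """
    placement = {}
    for ename, req in entities:
        for nname, cap in nodes:
            if cap["mem_mb"] >= req["mem_mb"] and \
               (not req["needs_camera"] or cap["has_camera"]):
                cap["mem_mb"] -= req["mem_mb"]   # consume node capacity
                placement[ename] = nname
                break
        else:
            raise RuntimeError(f"no node can host {ename}")
    return placement

nodes = [("gateway",  {"mem_mb": 512, "has_camera": False}),
         ("doorbell", {"mem_mb": 128, "has_camera": True})]
entities = [("face-detector", {"mem_mb": 100, "needs_camera": True}),
            ("notifier",      {"mem_mb": 64,  "needs_camera": False})]
print(place(entities, nodes))
```

Once a placement like this is computed, each entity would be packaged as a container image and started on its assigned node, which is the role Docker-based ALM plays in the demo.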
Measuring the Business Value of Cloud Computing
The importance of demonstrating the value achieved from IT investments is long established in the Computer Science (CS) and Information Systems (IS) literature. However, emerging technologies such as the ever-changing, complex area of cloud computing present new challenges and opportunities for demonstrating how IT investments lead to business value. Recent reviews of the extant literature highlight the need for multi-disciplinary research that explores and further develops the conceptualization of value in cloud computing research. In addition, there is a need for research that investigates how IT value manifests itself across the chain of service provision and in inter-organizational scenarios. This open access book reviews the state of the art from IS, Computer Science and Accounting perspectives, introduces and discusses the main techniques for measuring the business value of cloud computing in a variety of scenarios, and illustrates these with mini-case studies
MEC vs MCC: performance analysis of real-time applications
Numerous applications, such as Augmented Reality (AR), Virtual Reality (VR) and real-time online gaming, are resource-intensive and are consequently pushing the computational requirements and energy demands of mobile devices beyond their capabilities. Although the mobile cloud architecture offers practical and functional platforms, these new emerging applications present several challenges regarding latency, energy consumption, context awareness, and privacy enhancement. Mobile Edge Computing (MEC) is a new, resourceful intermediary technology that addresses the performance hurdles faced by Mobile Cloud Computing (MCC) and brings computing and storage closer to the network edge. This work introduces the MEC architecture and some edge computing implementations. It presents the reference architecture of the cloudlet technology and provides a comparison with the architecture model that is under standardization by ETSI. MEC can offload intensive tasks from applications to enhance the computation, responsiveness and battery life of mobile devices. The objective of this work is to study, compare and evaluate the performance of the MEC and MCC architectures for provisioning offloaded tasks from compute-intensive applications. Test scenarios were set up with this kind of application on both MEC and MCC implementations. The test results of this study support the evidence that MEC presents better performance than cloud computing regarding latency and user quality of experience. Moreover, the test results make it possible to quantify the effective benefit of the MEC approach
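The kind of latency comparison this thesis performs can be sketched by summarizing round-trip-time samples collected against an edge endpoint and a cloud endpoint. This is a hedged illustration, not the thesis' test harness; the sample values below are made up.

```python
# Illustrative sketch of a MEC-vs-MCC latency comparison: summarize
# round-trip-time (RTT) samples per deployment. The numbers are
# hypothetical placeholders, not measurements from the thesis.
from statistics import mean, pstdev

def summarize(samples_ms):
    """Return (mean, population std-dev) of RTT samples in milliseconds."""
    return mean(samples_ms), pstdev(samples_ms)

mec_rtts = [12.1, 11.8, 12.5, 12.0]   # hypothetical edge round-trips (ms)
mcc_rtts = [84.3, 86.1, 85.0, 90.2]   # hypothetical cloud round-trips (ms)

mec_mean, mec_sd = summarize(mec_rtts)
mcc_mean, mcc_sd = summarize(mcc_rtts)
print(f"MEC mean RTT {mec_mean:.1f} ms vs MCC {mcc_mean:.1f} ms")
```

In a real experiment the samples would come from timed requests to the two deployments; the summary statistics then quantify the edge's latency advantage, as the thesis does for quality of experience.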
Narrowband IoT: from the end device to the cloud. An experimental end-to-end study
This thesis presents a novel study and experimentation of a Cloud IoT application communicating over an Italian NB-IoT network. So far, no studies have been presented on the interactions between the NB-IoT network and the cloud. This thesis not only fills this gap but also shows the use of Cognitive Services to interact with the IoT application through the human voice. Compared with other types of mobile networks, NB-IoT proved the best choice for this application
The Internet of Things, fog and cloud continuum: Integration and challenges
The Internet of Things' needs for computing power and storage are expected to remain on the rise in the next decade. Consequently, the amount of data generated by devices at the edge of the network will also grow. While cloud computing has been an established and effective way of acquiring computation and storage as a service for many applications, it may not be suitable for handling the myriad of data from IoT devices and fulfilling largely heterogeneous application requirements. Fog computing has been developed to lie between IoT and the cloud, providing a hierarchy of computing power that can collect, aggregate, and process data from/to IoT devices. Combining fog and cloud may reduce data transfers and communication bottlenecks to the cloud and also contribute to reduced latencies, as fog computing resources exist closer to the edge. This paper examines this IoT-Fog-Cloud ecosystem and provides a literature review covering different facets of it: how it can be organized, how management is being addressed, and how applications can benefit from it. Lastly, we present challenging issues yet to be addressed in IoT-Fog-Cloud infrastructures
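The collect-and-aggregate role the survey assigns to fog nodes can be sketched minimally: a fog node batches raw device readings and forwards only summaries upstream, cutting the volume of data sent to the cloud. This is an illustrative assumption about the pattern, not code from the paper.

```python
# Minimal sketch of the collect/aggregate pattern the survey describes:
# a fog node averages batches of raw edge readings and forwards one
# summary value per batch to the cloud. Names and values are illustrative.

def aggregate(readings, batch_size):
    """Average consecutive batches of sensor readings.

    readings: list of numeric samples from edge devices
    Returns one averaged value per (possibly partial) batch.
    """
    summaries = []
    for i in range(0, len(readings), batch_size):
        batch = readings[i:i + batch_size]
        summaries.append(sum(batch) / len(batch))
    return summaries

raw = [21.0, 21.4, 20.8, 21.2, 22.0, 21.6]   # e.g. temperature samples
print(aggregate(raw, batch_size=3))           # two summaries instead of six values
```

With a batch size of three, six raw samples become two upstream messages, which is the transfer reduction the fog tier is meant to provide.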
Towards an Automatic Microservices Manager for Hybrid Cloud Edge Environments
Cloud computing came to make computing resources easier to access, thus enabling faster deployment of applications and services that benefit from the scalability provided by service providers. An exponential growth in the volume of data received by the cloud has been registered. This is due to the fact that almost every device used in everyday life is connected to the internet, sharing information on a global scale (e.g. smartwatches, clocks, cars, industrial equipment). The increase in data volume results in increased latency for client applications, and thus in a degradation of the Quality of Service (QoS).
With these problems, hybrid systems were born, integrating cloud resources with the various edge devices between the cloud and the edge: Fog/Edge computing. These devices are very heterogeneous, with different resource capabilities (such as memory and computational power), and geographically distributed.
Software architectures also evolved, and the microservices architecture emerged to make application development more flexible and to increase scalability. The microservices architecture consists of decomposing monolithic applications into small services, each with a specific functionality, that can be independently developed, deployed and scaled. Due to their small size, microservices are adequate for deployment on hybrid Cloud/Edge infrastructures. However, the heterogeneity of those deployment locations makes microservices' management and monitoring rather complex. Monitoring, in particular, is essential when considering that microservices may be replicated and migrated in the cloud/edge infrastructure.
The main problem this dissertation aims to contribute to is building an automatic system for microservices management that can be deployed on hybrid cloud/fog infrastructures. Such an automatic system will allow edge-enabled applications to adapt their deployment at runtime in response to variations in workloads and available computational resources. Towards this end, this work is a first step in integrating two existing projects that, combined, may support such an automatic system. One project does automatic management of microservices but uses only a heavy monitor, Prometheus, as a cloud monitor. The second project is a light adaptive monitor. This thesis integrates the light
monitor into the automatic manager of microservices
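The adaptive behaviour this dissertation targets (monitor readings driving replication decisions for microservices) can be sketched as a simple threshold rule. The thresholds, names and policy below are assumptions for illustration, not taken from either of the two projects being integrated.

```python
# Hedged sketch of monitor-driven scaling: a light monitor feeds CPU
# readings to a manager that decides when to add or remove replicas of
# a microservice. Thresholds and the policy itself are assumptions.

def decide_replicas(current, cpu_percent, scale_up_at=80.0, scale_down_at=20.0,
                    min_replicas=1, max_replicas=10):
    """Return the new replica count for one microservice, given its load."""
    if cpu_percent > scale_up_at and current < max_replicas:
        return current + 1        # overloaded: replicate
    if cpu_percent < scale_down_at and current > min_replicas:
        return current - 1        # underused: shrink
    return current                # within bounds: keep as is

replicas = 2
for load in (85.0, 90.0, 15.0):   # simulated readings from the light monitor
    replicas = decide_replicas(replicas, load)
print(replicas)                    # prints 3
```

In the integrated system, the light adaptive monitor would supply the readings and the automatic manager would enact the resulting replica counts on the cloud/edge nodes.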