Building Programmable Wireless Networks: An Architectural Survey
In recent times, there have been many efforts to improve the ossified
Internet architecture in a bid to sustain unstinted growth and innovation. A
major reason for the perceived architectural ossification is the inability to
program the network as a system. This situation has resulted partly from
historical decisions in the original Internet design, which emphasized
decentralized network operations through co-located data and control planes on
each network device. The situation for wireless networks is no different,
resulting in considerable complexity and a plethora of largely incompatible
wireless technologies. The emergence of "programmable wireless networks",
which allow greater flexibility, ease of management, and configurability, is a
step in the right direction to overcome these shortcomings of wireless
networks. In this paper, we provide a broad overview of the architectures
proposed in the literature for building programmable wireless networks,
focusing primarily on three popular techniques: software defined networks,
cognitive radio networks, and virtualized networks. This survey is a
self-contained tutorial on these techniques and their applications. We also
discuss the opportunities and challenges in building next-generation
programmable wireless networks and identify open research issues and future
research directions.
Comment: 19 pages
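The control/data-plane split that this survey's first technique, software defined networking, revolves around can be illustrated with a minimal sketch. The class names, rule format, and addresses below are invented for illustration and are not a real SDN API: a logically centralized controller installs match→action rules into switches, which only forward packets.

```python
# Minimal sketch of SDN's control/data-plane separation.
# All names and addresses are illustrative assumptions, not a real API.

class Switch:
    """Data plane: forwards packets according to installed flow rules."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []   # list of (match_fn, action) rules

    def install_rule(self, match_fn, action):
        self.flow_table.append((match_fn, action))

    def forward(self, packet):
        for match_fn, action in self.flow_table:
            if match_fn(packet):
                return action
        return "send_to_controller"   # table miss: defer to the control plane

class Controller:
    """Control plane: decides policy centrally and pushes rules downward."""
    def program(self, switch):
        switch.install_rule(lambda p: p["dst"] == "10.0.0.2", "out_port_1")
        switch.install_rule(lambda p: p["dst"] == "10.0.0.3", "out_port_2")

sw = Switch("s1")
Controller().program(sw)
print(sw.forward({"dst": "10.0.0.2"}))  # out_port_1
print(sw.forward({"dst": "10.0.0.9"}))  # send_to_controller
```

The point of the split is visible in the last line: a packet the data plane cannot match is escalated to the controller, which can then reprogram the network as a system rather than device by device.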
Service Migration from Cloud to Multi-tier Fog Nodes for Multimedia Dissemination with QoE Support
A wide range of multimedia services is expected to be offered to mobile users via various wireless access networks. Even the integration of cloud computing into such networks does not support an adequate Quality of Experience (QoE) in areas with high demand for multimedia content. Fog computing has been conceptualized to facilitate the deployment of new services that cloud computing cannot provide, particularly those demanding QoE guarantees. These services are provided using fog nodes located at the network edge, which are capable of virtualizing their functions/applications. Service migration from the cloud to fog nodes can be triggered by request patterns and timing issues. To the best of our knowledge, existing works on fog computing focus on architecture and fog node deployment issues. In this article, we describe the operational impacts and benefits associated with service migration from the cloud to multi-tier fog computing for video distribution with QoE support. In addition, we evaluate such migration of video services. Finally, we present potential research challenges and trends.
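The request-pattern trigger mentioned in the abstract can be sketched as a simple sliding-window threshold policy. This is a hypothetical illustration, not the article's algorithm; the window length and threshold are assumptions: migrate a video service toward a fog tier when the recent request rate from an edge region exceeds a threshold.

```python
# Hypothetical sketch of a request-pattern migration trigger for a
# cloud-to-fog video service; the policy and its parameters are assumptions,
# not taken from the article.
from collections import deque

class MigrationTrigger:
    """Decide when to migrate a service toward the edge based on the
    request rate observed within a sliding time window."""

    def __init__(self, window_s=60, threshold_rps=50.0):
        self.window_s = window_s            # sliding window length (seconds)
        self.threshold_rps = threshold_rps  # requests/s that triggers migration
        self.requests = deque()             # timestamps of recent requests

    def record_request(self, t):
        self.requests.append(t)
        # Drop requests that fell out of the window.
        while self.requests and t - self.requests[0] > self.window_s:
            self.requests.popleft()

    def should_migrate(self):
        rate = len(self.requests) / self.window_s
        return rate > self.threshold_rps

trigger = MigrationTrigger(window_s=10, threshold_rps=5.0)
for i in range(80):
    trigger.record_request(i * 0.1)   # 10 requests/s sustained for 8 s
print(trigger.should_migrate())        # high demand -> migrate to a fog node
```

A real policy would also weigh the timing issues the article mentions, e.g., migration cost versus how long the demand is expected to persist.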
Do we all really know what a fog node is? Current trends towards an open definition
Fog computing has emerged as a promising technology that can bring cloud applications closer to physical IoT devices at the network edge. While it is widely known what cloud computing is, how data centers can build the cloud infrastructure, and how applications can make use of this infrastructure, there is no common picture of what fog computing, and particularly a fog node as its main building block, really is. One of the first attempts to define a fog node was made by Cisco, which qualified a fog computing system as a “mini-cloud” located at the edge of the network and implemented through a variety of edge devices, interconnected by a variety of, mostly wireless, communication technologies. Thus, a fog node would be the infrastructure implementing the said mini-cloud. Other proposals have their own definitions of what a fog node is, usually in relation to a specific edge device, a specific use case, or an application. In this paper, we first survey the state of the art in technologies for fog computing nodes, paying special attention to contributions that analyze the role edge devices play in the fog node definition. We summarize and compare the concepts and the lessons learned from their implementation, and show how a conceptual framework is emerging towards a unifying fog node definition. We focus on the core functionalities of a fog node, as well as on the accompanying opportunities and challenges towards their practical realization in the near future.
A Study to Optimize Heterogeneous Resources for Open IoT
Recently, IoT technologies have progressed, and many sensors and
actuators are connected to networks. Previously, IoT services were developed
in a vertically integrated style. Now, however, the Open IoT concept, which
achieves various IoT services by integrating horizontally separated devices
and services, has attracted attention. For the Open IoT era, we have proposed
the Tacit Computing technology to discover, on demand, the devices holding the
data users need and to use them dynamically, and we have implemented its
elemental technologies. In this paper, we propose a three-layer optimization
to reduce the operation cost and improve the performance of a Tacit Computing
service, so that devices discovered by Tacit Computing can be offered as a
continuous service. In the optimization process, appropriate function
allocation or offloading of specific functions is calculated on the device,
network, and cloud layers before full-scale operation.
Comment: 3 pages, 1 figure; 2017 Fifth International Symposium on Computing
and Networking (CANDAR 2017), Nov. 2017
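The per-function placement across the device, network, and cloud layers described above can be illustrated with a minimal cost-minimizing assignment. The function names and the cost table below are invented for illustration; the paper does not publish these figures:

```python
# Hypothetical sketch of three-layer function allocation: each function is
# placed on the layer (device, network, or cloud) with the lowest estimated
# operation cost. The cost table is an invented example, not from the paper.

# costs[function][layer] = estimated operation cost (e.g., latency + fee)
costs = {
    "face_detect":  {"device": 9.0, "network": 4.0, "cloud": 2.5},
    "video_encode": {"device": 3.0, "network": 5.0, "cloud": 6.0},
    "aggregate":    {"device": 7.0, "network": 2.0, "cloud": 1.5},
}

def allocate(costs):
    """Pick, for each function, the layer with the lowest cost."""
    plan = {}
    for func, per_layer in costs.items():
        plan[func] = min(per_layer, key=per_layer.get)
    return plan

plan = allocate(costs)
print(plan)   # {'face_detect': 'cloud', 'video_encode': 'device', 'aggregate': 'cloud'}
total = sum(costs[f][layer] for f, layer in plan.items())
print(total)  # 2.5 + 3.0 + 6.0 -> 7.0? No: 2.5 + 3.0 + 1.5 = 7.0
```

Treating each function independently keeps the sketch simple; a full formulation would add coupling terms, e.g., data-transfer cost between functions placed on different layers.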
Virtual sensor networks: collaboration and resource sharing
This thesis contributes to the advancement of Sensing as a Service (Se-aaS),
based on cloud infrastructures, through the development of models and
algorithms that make efficient use of both sensor and cloud resources while
reducing the delay associated with the data flow between the cloud and client
sides, which results in a better quality of experience for users. The first
models and algorithms developed are suitable for the case of mashups managed
at the client side; models and algorithms considering mashups managed at the
cloud were then developed. This requires solving multiple problems:
i) clustering of compatible mashup elements; ii) allocation of devices
to clusters, meaning that a device will serve multiple applications/mashups;
iii) reduction of the amount of data flow between workplaces, and the
associated delay, which depends on clustering, device allocation, and
placement of workplaces. The developed strategies can be adopted by cloud
service providers wishing to improve the performance of their clouds.
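Steps i) and ii) above can be sketched as a greedy grouping followed by a device assignment, so that one physical device ends up serving several applications/mashups. The compatibility criterion here (same sensor type and region) and all names are assumptions made for illustration; the thesis's actual models are optimization-based:

```python
# Hypothetical sketch of steps i)-ii): cluster compatible mashup elements,
# then allocate one device per cluster. "Compatible" is assumed to mean
# same sensor type and region -- an illustrative simplification.

def cluster_elements(elements):
    """Group mashup elements by a (sensor_type, region) compatibility key."""
    clusters = {}
    for elem in elements:
        key = (elem["type"], elem["region"])
        clusters.setdefault(key, []).append(elem["mashup"])
    return clusters

def allocate_devices(clusters, devices):
    """Assign one matching device to each cluster of compatible elements."""
    allocation = {}
    for (sensor_type, region), mashups in clusters.items():
        for dev in devices:
            if dev["type"] == sensor_type and dev["region"] == region:
                allocation[dev["id"]] = mashups   # one device, many mashups
                break
    return allocation

elements = [
    {"mashup": "air-quality-app", "type": "temp",   "region": "north"},
    {"mashup": "hvac-dashboard",  "type": "temp",   "region": "north"},
    {"mashup": "traffic-app",     "type": "camera", "region": "south"},
]
devices = [
    {"id": "dev1", "type": "temp",   "region": "north"},
    {"id": "dev2", "type": "camera", "region": "south"},
]
alloc = allocate_devices(cluster_elements(elements), devices)
print(alloc)  # dev1 serves both temperature mashups; dev2 serves the traffic app
```

Step iii), reducing inter-workplace data flow, would then choose workplace placements given this clustering, which is where the delay term enters the thesis's models.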
Several steps towards an efficient Se-aaS business model were performed.
A mathematical model was developed to assess the impact of resource
allocations on scalability, QoE, and elasticity. Regarding the clustering of
mashup elements, a first mathematical model was developed for the selection
of the best pre-calculated clusters of mashup elements (virtual Things), and
a second model is then proposed for the best virtual Things to be built
(non-pre-calculated clusters). Its evaluation is done through heuristic
algorithms having such a model as a basis. These models and algorithms were
first developed for the case of mashups managed at the client side, and were
later extended to the case of mashups managed at the cloud. To improve these
last results, a mathematical programming optimization model was developed
that allows optimal clustering and resource allocation solutions to be
obtained. Although this is a computationally difficult approach, the added
value of this process is that the problem is rigorously outlined, and such
knowledge is used as a guide in the development of a better heuristic
algorithm.