The Programmable City
The worldwide proliferation of mobile connected devices has brought about a revolution in the way we live, and will inevitably guide the way in which we design the cities of the future. However, designing city-wide systems poses a new set of challenges in terms of scale, manageability and citizen involvement. Solving these challenges is crucial to making sure that the vision of a programmable Internet of Things (IoT) becomes reality. In this article we analyse these issues and present a novel programming approach to designing scalable systems for the Internet of Things, with an emphasis on smart city applications.
Towards Cognitive Self-Management of IoT-Edge-Cloud Continuum based on User Intents
Elasticity of the computing continuum with on-demand availability allows for automated provisioning and release of computing resources as needed. However, this self-management capability is severely limited by the lack of knowledge about historical and timely resource utilisation, and by the absence of means for stakeholders to express their needs in a high-level manner. In this paper, we introduce and discuss a new concept: an intent-based cognitive continuum for sustainable elasticity.
End-to-end slices to orchestrate resources and services in the cloud-to-edge continuum
Fog computing, combined with traditional cloud computing, offers an inherently distributed infrastructure (referred to as the cloud-to-edge continuum) that can be used for the execution of low-latency and location-aware IoT services. The management of such an infrastructure is complex: resources in multiple domains need to be accessed by several tenants, while an adequate level of isolation and performance has to be guaranteed. This paper proposes the dynamic allocation of end-to-end slices to perform the orchestration of resources and services in such a scenario. These end-to-end slices require a unified resource management approach that encompasses both data centre and network resources. Currently, fog orchestration is mainly focussed on the management of compute resources; likewise, the slicing domain is centred solely on the creation of isolated network partitions. A unified resource orchestration strategy, able to integrate the selection, configuration and management of compute and network resources as part of a single abstracted object, is missing. This work aims to minimise the silo effect, and proposes end-to-end slices as the foundation for the comprehensive orchestration of compute resources, network resources, and services in the cloud-to-edge continuum, as well as acting as the basis for a system implementation. The concept of the end-to-end slice is formally described via a graph-based model that allows for dynamic resource discovery, selection and mapping via different algorithms and optimisation goals; and a working system is presented as the way to build slices across multiple domains dynamically, based on that model. These are independently accessible objects that abstract resources of various providers (traded via a Marketplace), with compute slices, allocated using the bare-metal cloud approach, being interconnected to each other via the connectivity of network slices.
Experiments, carried out on a real testbed, demonstrate three features of the end-to-end slices: resources can be selected, allocated and controlled in a softwarised fashion; tenants can instantiate distributed IoT services on those resources transparently; and the performance of a service is not affected by the status of other slices that share the same resource infrastructure.
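The graph-based slice model described in the abstract above could be sketched minimally as follows. All names (`Resource`, `ContinuumGraph`, `allocate_slice`) and the greedy selection strategy are hypothetical illustrations, not the paper's actual algorithms: the idea is only that resources from multiple providers form graph nodes, connectivity forms edges, and a slice is a connected subgraph selected against a demand.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str        # resource identifier
    provider: str    # provider the Marketplace would trade it from
    cpu_cores: int   # single capacity dimension for this sketch

@dataclass
class ContinuumGraph:
    nodes: dict = field(default_factory=dict)   # name -> Resource
    links: set = field(default_factory=set)     # frozenset({a, b}) = network link

    def add_resource(self, r: Resource):
        self.nodes[r.name] = r

    def connect(self, a: str, b: str):
        self.links.add(frozenset((a, b)))

    def allocate_slice(self, cpu_needed: int) -> list:
        """Greedy sketch: pick the largest resources that stay connected to
        the partial slice until the demand is met, else return no slice."""
        chosen, total = [], 0
        for r in sorted(self.nodes.values(), key=lambda r: -r.cpu_cores):
            if total >= cpu_needed:
                break
            if not chosen or any(frozenset((r.name, c.name)) in self.links
                                 for c in chosen):
                chosen.append(r)
                total += r.cpu_cores
        return chosen if total >= cpu_needed else []
```

A real implementation would replace the greedy loop with the paper's discovery, selection and mapping algorithms and add network-slice attributes to the edges; the sketch only shows the shape of the abstraction.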
Architecture for Enabling Edge Inference via Model Transfer from Cloud Domain in a Kubernetes Environment
The current approaches for energy consumption optimisation in buildings are mainly reactive or focus on scheduling of daily/weekly operation modes in heating. Machine Learning (ML)-based advanced control methods have been demonstrated to improve energy efficiency when compared to these traditional methods. However, placing ML-based models close to the buildings is not straightforward. Firstly, edge devices typically have lower capabilities in terms of processing power, memory, and storage, which may limit execution of ML-based inference at the edge. Secondly, associated building information should be kept private. Thirdly, network access may be limited for serving a large number of edge devices. The contribution of this paper is an architecture which enables training of ML-based models for energy consumption prediction in a private cloud domain, and transfer of the models to edge nodes for prediction in a Kubernetes environment. Additionally, predictors at the edge nodes can be automatically updated without interrupting operation. Performance results with sensor-based devices (Raspberry Pi 4 and Jetson Nano) indicated that a satisfactory prediction latency (~7–9 s) can be achieved within the research context. However, model switching led to an increase in prediction latency (~9–13 s). Partial evaluation of a Reference Architecture for edge computing systems, which was used as a starting point for the architecture design, may be considered an additional contribution of the paper.
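The non-interrupting model update mentioned above can be illustrated with a minimal sketch. The class and method names here are hypothetical (the paper's actual mechanism runs inside Kubernetes); the sketch only shows the core idea that inference keeps serving while a newly trained model arriving from the cloud domain is swapped in atomically.

```python
import threading

class EdgePredictor:
    """Serves predictions while allowing the underlying model to be
    replaced without pausing or restarting the predictor."""

    def __init__(self, model):
        self._model = model            # current prediction callable
        self._lock = threading.Lock()  # serialises concurrent updates

    def predict(self, features):
        # Reading the attribute is a single atomic reference read in
        # CPython, so a request never sees a half-installed model.
        return self._model(features)

    def update_model(self, new_model):
        # In the paper's setting this would be triggered when a new model
        # artifact is transferred from the cloud training pipeline.
        with self._lock:
            self._model = new_model
```

In a Kubernetes deployment the same effect is often achieved at a coarser granularity with rolling updates of the serving pod; the in-process swap sketched here avoids the latency cost of restarting containers.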
Orchestration in the Cloud-to-Things Compute Continuum: Taxonomy, Survey and Future Directions
IoT systems are becoming an essential part of our environment. Smart cities, smart manufacturing, augmented reality, and self-driving cars are just some examples of the wide range of domains where the applicability of such systems has been increasing rapidly. These IoT use cases often require simultaneous access to geographically distributed arrays of sensors, and to heterogeneous remote, local and multi-cloud computational resources. This gives birth to the extended Cloud-to-Things computing paradigm. The emergence of this new paradigm has raised the quintessential need to extend the orchestration requirements (i.e., the automated deployment and run-time management of applications) from the centralised cloud-only environment to the entire spectrum of resources in the Cloud-to-Things continuum. To cope with this requirement, in the last few years there has been a lot of attention on the development of orchestration systems in both industry and academia. This paper is an attempt to gather the research conducted in the orchestration landscape for the Cloud-to-Things continuum and to propose a detailed taxonomy, which is then used to critically review existing research work. We finally discuss the key challenges that require further attention and also present a conceptual framework based on the conducted analysis.
Comment: Journal of Cloud Computing, Pages: 2
Towards a Cognitive Compute Continuum: An Architecture for Ad-Hoc Self-Managed Swarms
In this paper we introduce our vision of a Cognitive Computing Continuum to address the changing IT service provisioning towards a distributed, opportunistic, self-managed collaboration between heterogeneous devices outside the traditional data center boundaries. The focal point of this continuum are cognitive devices, which have to make decisions autonomously using their on-board computation and storage capacity, based on information sensed from their environment. Such devices are moving and cannot rely on fixed infrastructure elements; instead they realise on-the-fly networking and thus frequently join and leave temporal swarms. All this creates novel demands on the underlying architecture and resource management, which must bridge the gap from edge to cloud environments while keeping the QoS parameters within required boundaries. The paper presents an initial architecture and a resource management framework for the implementation of this type of IT service provisioning.
Comment: 8 pages, CCGrid 2021 Cloud2Things Workshop
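The frequent joining and leaving of temporal swarms described above can be sketched with a heartbeat-based membership structure. The `Swarm` class and its timeout policy are hypothetical illustrations, not the paper's architecture: the point is only that membership is soft state, so devices that move out of range expire automatically instead of requiring an explicit leave message.

```python
import time

class Swarm:
    """Soft-state swarm membership: a device is a member only while its
    last heartbeat is newer than the timeout."""

    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        self._last_seen = {}  # device id -> timestamp of last heartbeat

    def heartbeat(self, device_id, now=None):
        # Devices announce themselves periodically; a join is simply the
        # first heartbeat, and no explicit leave is needed.
        self._last_seen[device_id] = time.monotonic() if now is None else now

    def members(self, now=None):
        now = time.monotonic() if now is None else now
        return sorted(d for d, t in self._last_seen.items()
                      if now - t <= self.timeout_s)
```

A resource manager built on top of this would schedule work only onto current `members()`, which is one way the architecture could keep QoS within bounds despite churn.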