
    Next Generation Cloud Computing: New Trends and Research Directions

    The landscape of cloud computing has changed significantly over the last decade. Not only have more providers and service offerings crowded the space, but cloud infrastructure that was traditionally limited to single-provider data centers is now evolving. In this paper, we first discuss the changing cloud infrastructure and consider the use of infrastructure from multiple providers and the benefits of decentralising computing away from data centers. These trends have created the need for a variety of new computing architectures that will be offered by future cloud infrastructure. These architectures are anticipated to impact areas such as connecting people and devices, data-intensive computing, the service space, and self-learning systems. Finally, we lay out a roadmap of challenges that must be addressed to realise the potential of next generation cloud systems.
    Comment: Accepted to Future Generation Computer Systems, 07 September 201

    Addressing the Node Discovery Problem in Fog Computing

    In recent years, the Internet of Things (IoT) has gained considerable attention because it connects various sensor devices with the cloud in order to enable smart applications such as smart traffic management, smart houses, and smart grids, among others. With the growing popularity of the IoT, the number of Internet-connected devices has increased significantly. As a result, these devices generate a huge amount of network traffic, which may lead to bottlenecks and ultimately increase communication latency with the cloud. To cope with such issues, a new computing paradigm has emerged: fog computing. Fog computing spans from the cloud to the edge of the network in order to distribute the processing of IoT data and to reduce communication latency. However, fog computing is still in its infancy, and several related problems remain open. In this paper, we focus on the node discovery problem, i.e., how to add new compute nodes to a fog computing system. Moreover, we discuss how addressing this problem can positively affect various aspects of fog computing, such as fault tolerance, resource heterogeneity, proximity awareness, and scalability. Finally, based on experimental results produced by simulating various distributed compute nodes, we show how addressing the node discovery problem can improve the fault tolerance of a fog computing system.
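The link between node discovery and fault tolerance can be illustrated with a minimal sketch (not the paper's actual protocol; all class and method names here are hypothetical): a registry to which new fog nodes announce themselves, so that when a node fails its tasks can be reassigned to the nodes discovered so far.

```python
# Illustrative sketch: a registry where newly discovered fog nodes are added
# at runtime and a failed node's tasks are redistributed to the survivors.

class FogRegistry:
    def __init__(self):
        self.nodes = {}  # node_id -> list of assigned task ids

    def discover(self, node_id):
        """Add a newly announced compute node to the system."""
        self.nodes.setdefault(node_id, [])

    def assign(self, task_id):
        """Place a task on the least-loaded known node."""
        node = min(self.nodes, key=lambda n: len(self.nodes[n]))
        self.nodes[node].append(task_id)
        return node

    def fail(self, node_id):
        """Remove a failed node and redistribute its tasks."""
        orphaned = self.nodes.pop(node_id)
        for task in orphaned:
            self.assign(task)

registry = FogRegistry()
for n in ("edge-1", "edge-2"):
    registry.discover(n)
for t in range(4):
    registry.assign(t)
registry.discover("edge-3")  # node discovery adds capacity at runtime
registry.fail("edge-1")      # its tasks survive on the remaining nodes
print(sum(len(v) for v in registry.nodes.values()))  # prints 4
```

The point of the toy is that discovery and fault tolerance interact: the more nodes the system can discover, the more spare capacity is available when a node drops out.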

    A Decision Framework for Allocation of Constellation-Scale Mission Compute Functionality to Ground and Edge Computing

    This paper explores constellation-scale architectural trades, highlights dominant factors, and presents a decision framework for migrating or sharing mission compute functionality between ground and space segments. Over recent decades, sophisticated logic has been developed for scheduling and tasking of space assets, as well as processing and exploitation of satellite data, and this software has traditionally been hosted in ground computing. Current efforts exist to migrate this software to ground cloud-based services. The option and motivation to host some of this logic "at the edge" within the space segment has arisen as space assets are proliferated, are interlinked via transport networks, and are networked with multi-domain assets. Examples include edge-based Battle Management, Command, Control, and Communications (BMC3) being developed by the Space Development Agency and future onboard computing for commercial constellations. Edge computing pushes workload, computation, and storage closer to data sources and onto devices at the edge of the network. Potential benefits of edge computing include increased speed of response, system reliability, robustness to disrupted networks, and data security. Yet space-based edge nodes have disadvantages, including power and mass limitations, constant physical motion, difficulty of physical access, and potential vulnerability to attacks. This paper presents a structured decision framework with justifying rationale to provide insights and begin to address a key question: what mission compute functionality should be allocated to the space-based edge, and under what mission or architectural conditions, versus to conventional ground-based systems? The challenge is to identify the Pareto-dominant trades and impacts to mission success.
This framework will not exhaustively address all missions, architectures, and CONOPs; rather, it is intended to provide generalized guidelines and heuristics to support architectural decision-making. Via effects-based simulation and analysis, a set of hypotheses about ground- and edge-based architectures is evaluated and summarized along with prior research. Results for a set of key metrics and decision drivers show that edge computing for specific functionality is quantitatively valuable, especially for interoperable, multi-domain, collaborative assets.
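The Pareto-dominance screen such a framework relies on can be sketched in a few lines. The candidate architectures and metric values below are invented for illustration only (the paper's actual metrics and data are not reproduced here); all metrics are oriented so that lower is better.

```python
# Hedged sketch of a Pareto-dominance filter over candidate architectures.
# Names and numbers are illustrative assumptions, not the paper's results.

def dominates(a, b):
    """True if a is at least as good as b on every metric, better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(options):
    """Keep only options that no other option dominates."""
    return {name: m for name, m in options.items()
            if not any(dominates(other, m)
                       for o, other in options.items() if o != name)}

# (latency_s, edge_mass_kg, vulnerability) -- lower is better on all three
options = {
    "ground-only": (2.0, 0.0, 0.2),
    "edge-heavy":  (0.2, 5.0, 0.5),
    "hybrid":      (0.5, 2.0, 0.3),
    "edge-naive":  (0.6, 5.0, 0.5),  # dominated by "edge-heavy"
}
front = pareto_front(options)
print(sorted(front))  # ['edge-heavy', 'ground-only', 'hybrid']
```

Only the dominated option drops out; the surviving front captures the latency-versus-mass-versus-vulnerability trade the text describes, leaving the final allocation to mission-specific weighting.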

    Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"

    According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust and efficient. The dominant concerns are not those of representing and manipulating data efficiently but rather those of handling the co-ordination and interaction, security, reliability, robustness, failure modes, and control of risk of the entities in the system and the overall design, description and performance of the system itself. Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
    • The systems are composed of autonomous computational entities where activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
    • The computational entities are mobile, due to the movement of the physical platforms or by movement of the entity from one platform to another.
    • The configuration varies over time. For instance, the system is open to the introduction of new computational entities and likewise their deletion. The behaviour of the entities may vary over time.
    • The systems operate with incomplete information about the environment. For instance, information becomes rapidly out of date and mobility requires information about the environment to be discovered.
    The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing such systems.
This workshop covers the aspects related to languages and programming environments, as well as the analysis of systems and resources, involving 9 projects (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE) out of the 13 funded under the initiative. After a year from the start of the projects, the goal of the workshop is to take stock of the state of the art on the topics covered by the two clusters related to programming environments and analysis of systems, as well as to devise strategies and new ideas to profitably continue the research effort towards the overall objective of the initiative. We acknowledge the Dipartimento di Informatica and Tlc of the University of Trento, the Comune di Rovereto, and the project DEGAS for partially funding the event, and the Events and Meetings Office of the University of Trento for their valuable collaboration.

    Can cloudlet coordination support cloud computing infrastructure?

    The popularity of mobile applications on smartphones requires mobile devices to perform high-performance processing tasks. The computational resources of these devices are limited by memory, battery life, heat dissipation, and weight. To overcome the limitations of mobile devices, cloud computing is considered the best solution. The major issues faced by cloud computing are expensive roaming charges and the growing demand for radio access, while some major benefits are fast application processing, fast transfer of data, and a substantial reduction in the use of mobile resources. This study evaluated the association between data latency and the distance to cloud servers and cloudlets, with and without coordination. Fast communication in the cloudlet environment is facilitated by coordinated cloudlets, which have a positive influence on the infrastructure of cloud computing. Coordinated cloudlets can be efficiently accessed in different areas, such as vehicular networks, vehicular fog computing, and fog computing.
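Why distance to the serving node matters can be shown with a toy round-trip-time model (this is an illustration under assumed numbers, not the study's data or methodology): with coordination, a request is handled by the nearest cloudlet; without it, the request falls back to a distant cloud data center.

```python
# Toy latency model: RTT = two-way propagation delay + fixed processing time.
# Speeds, distances, and the 5 ms processing figure are assumptions.

FIBER_SPEED_KM_S = 2e5   # roughly 2/3 of c in optical fiber
PROCESSING_MS = 5.0      # assumed per-request processing time

def rtt_ms(distance_km):
    propagation = 2 * distance_km / FIBER_SPEED_KM_S * 1000  # out and back
    return propagation + PROCESSING_MS

cloudlets_km = [10, 40, 80]  # coordinated cloudlets near the user
cloud_km = 2000              # distant cloud data center

coordinated = rtt_ms(min(cloudlets_km))    # nearest cloudlet serves: 5.10 ms
uncoordinated = rtt_ms(cloud_km)           # cloud fallback: 25.00 ms
print(f"coordinated cloudlet: {coordinated:.2f} ms")
print(f"cloud only:           {uncoordinated:.2f} ms")
```

Even with propagation alone, the far data center adds tens of milliseconds per round trip, which is the gap cloudlet coordination is meant to close.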

    QoS-aware service continuity in the virtualized edge

    5G systems are envisioned to support numerous delay-sensitive applications such as the tactile Internet, mobile gaming, and augmented reality. Such applications impose new demands on service providers in terms of the quality of service (QoS) provided to end-users. Meeting these demands in mobile 5G-enabled networks represents a technical and administrative challenge. One of the proposed solutions is to provide cloud computing capabilities at the edge of the network. In this vision, services are cloudified and encapsulated within virtual machines or containers placed in cloud hosts at the network access layer. To enable ultrashort processing times and immediate service response, fast instantiation and migration of service instances between edge nodes are mandatory to cope with the consequences of user mobility. This paper surveys the techniques proposed for service migration at the edge of the network. We focus on QoS-aware service instantiation and migration approaches, comparing the mechanisms followed and emphasizing their advantages and disadvantages. Then, we highlight the open research challenges still left unhandled.

    Exploring a resource allocation security protocol for secure service migration in commercial cloud environments

    Recently, there has been a significant increase in the popularity of cloud computing systems that offer Cloud services such as Networks, Servers, Storage, Applications, and other on-demand or pay-as-you-go resources with different speeds and Qualities of Service. These cloud computing environments share resources through virtualization techniques that enable a single user to access various Cloud Services. Thus, cloud users have access to effectively infinite computing resources, allowing them to increase or decrease their resource consumption capacity as needed. However, an increasing number of Commercial Cloud Services are available in the marketplace from a wide range of Cloud Service Providers (CSPs). As a result, most CSPs must deal with dynamic resource allocation, in which mobile services migrate from one cloud environment to another to provide heterogeneous resources based on user requirements. A new service framework, Sardis, has been proposed to describe how services can be migrated in Cloud Infrastructure. However, it does not address security and privacy issues in the migration process. Furthermore, there is still a lack of heuristic algorithms that can check requested and available resources to allocate and deallocate before the secure migration begins. The advent of virtual machine technology, for example VMware, and container technology, such as Docker, LXD, and Unikernels, has made the migration of services possible. As Cloud services, such as the Vehicular Cloud, are now increasingly offered in highly mobile environments, Y-Comm, a new framework for building future mobile systems, has developed proactive handover to support the mobile user.
Though many mechanisms exist to support mobile services, one way of addressing the challenges arising from these emerging applications is to move computing resources closer to the end-users and to determine how much computing resource should be allocated to meet the performance requirements. This work addresses the above challenges by developing resource allocation security protocols for secure service migration that allow the safe transfer of servers and the monitoring of the capacity of requested resources across different Cloud environments. In this thesis, we propose and analyze a Resource Allocation Security Protocol for secure service migration that allows resources to be allocated efficiently. In our research, we use two different formal modelling and verification techniques to verify an abstract protocol and to validate security properties such as secrecy, authentication, and key exchange for secure service migration. The new protocol has been verified with the AVISPA and ProVerif formal verifiers and is being implemented in a new Service Management Framework Prototype to securely manage and allocate resources in Commercial Cloud Environments. A Capability-Based Secure Service Protocol (SSP) was then developed to ensure secrecy, authentication, and authorization, and to show that the approach can be applied to any service. A basic prototype was developed to test these ideas using a block storage system known as the Network Memory Service, which served as the backend of a FUSE filesystem. The results show that this approach can be safely implemented and should perform well in real environments.
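The general shape of a capability-based check can be sketched as follows. This is a minimal illustration of the idea only, assuming a pre-established shared key; the token format, key handling, and function names are invented here and are not the thesis's actual SSP.

```python
# Illustrative capability sketch: the service issues an HMAC-signed capability
# and verifies both the tag (authentication) and the granted right
# (authorization) before acting. Wire format and key setup are assumptions.

import hashlib
import hmac

SECRET = b"shared-service-key"  # assumed pre-established, e.g. via key exchange

def issue_capability(subject: str, right: str) -> str:
    payload = f"{subject}:{right}"
    tag = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{tag}"

def verify_capability(token: str, required_right: str) -> bool:
    payload, _, tag = token.rpartition(":")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False  # tag forged or tampered: authentication fails
    return required_right in payload.split(":")[1]  # authorization check

cap = issue_capability("vm-42", "migrate")
print(verify_capability(cap, "migrate"))        # True: valid, right granted
print(verify_capability(cap + "0", "migrate"))  # False: tampered tag
```

A real protocol would add freshness (nonces or expiry) and bind the capability to the requesting principal; the sketch only shows why forged or over-scoped tokens are rejected before a migration begins.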

    Towards characterization of edge-cloud continuum

    The Internet of Things and cloud computing are two technological paradigms that have reached widespread adoption in recent years. These paradigms are complementary: IoT applications often rely on the computational resources of the cloud to process the data generated by IoT devices. The highly distributed nature of IoT applications and the vast amounts of data involved have led to significant parts of the computation being moved from the centralized cloud to the edge of the network. This gave rise to new hybrid paradigms, such as edge-cloud computing and fog computing. Recent advances in IoT hardware, combined with the continued increase in the complexity and variability of the edge-cloud environment, have led to the emergence of a new vision, the edge-cloud continuum: the next step of integration between the IoT and the cloud, in which software components can seamlessly move between the levels of the computational hierarchy. However, as this concept is very new, there is still no established view of what exactly it entails. Several views on the future edge-cloud continuum have been proposed, each with its own set of requirements and expected characteristics. To move the discussion of this concept forward, these views need to be put into a coherent picture. In this paper, we provide a review and generalization of the existing literature on the edge-cloud continuum, point out its expected features, and discuss the challenges that must be addressed to bring about this envisioned environment for the next generation of smart distributed applications.

    An architecture for adaptive task planning in support of IoT-based machine learning applications for disaster scenarios

    The proliferation of the Internet of Things (IoT), in conjunction with edge computing, has recently opened up possibilities for many new applications. Typical examples are Unmanned Aerial Vehicles (UAVs) deployed for rapid disaster response, photogrammetry, surveillance, and environmental monitoring. To support the flourishing development of Machine Learning assisted applications across all these networked applications, a common challenge is the provision of a persistent service, i.e., a service capable of consistently maintaining a high level of performance in the face of possible failures. To address these service resilience challenges, we propose APRON, an edge solution for distributed and adaptive task planning management in a network of IoT devices, e.g., drones. Exploiting Jackson's network model, our architecture applies a novel planning strategy to better support control and monitoring operations as the states of the network evolve. To demonstrate the functionality of our architecture, we also implemented a deep-learning-based audio-recognition application using the APRON NorthBound interface to detect human voices in challenged networks. The application's logic uses Transfer Learning to improve audio classification accuracy and the runtime of UAV-based rescue operations.
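The queueing reasoning behind a Jackson-network model of this kind can be sketched briefly. In an open Jackson network, each node behaves as an independent M/M/1 queue once the per-node arrival rates are obtained from the traffic equations λ = γ + Pᵀλ. The node counts, rates, and routing matrix below are invented for illustration and are not APRON's actual parameters.

```python
# Sketch of open-Jackson-network analysis: solve the traffic equations,
# check stability, then read off per-node M/M/1 mean sojourn times.
# All numeric values are illustrative assumptions.

import numpy as np

gamma = np.array([4.0, 0.0, 1.0])   # external arrival rate at each node (tasks/s)
P = np.array([[0.0, 0.5, 0.2],      # P[i, j]: probability a task leaving
              [0.0, 0.0, 0.4],      # node i is routed to node j
              [0.1, 0.0, 0.0]])
mu = np.array([10.0, 8.0, 6.0])     # service rate of each node (tasks/s)

# Traffic equations: lambda = gamma + P^T lambda  =>  (I - P^T) lambda = gamma
lam = np.linalg.solve(np.eye(3) - P.T, gamma)

rho = lam / mu                      # per-node utilization
assert np.all(rho < 1), "network must be stable"
mean_delay = 1 / (mu - lam)         # M/M/1 mean sojourn time at each node
print(np.round(rho, 3))
print(np.round(mean_delay, 3))
```

Delay estimates of this form are what let a planner decide, as the network state evolves, which node should take on additional control or monitoring work.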