347 research outputs found

    Network Function Virtualization over Cloud-Cloud Computing as Business Continuity Solution

    Cloud computing provides resources by using virtualization technology and a pay-as-you-go cost model. Network Functions Virtualization (NFV) is a concept that promises to grant network operators the flexibility to quickly develop and provision new network functions and services, which can be hosted in the cloud. However, cloud computing is subject to failures, which emphasizes the need to address users’ availability requirements. Availability refers to cloud uptime and the cloud’s capability to operate continuously. Providing highly available services in cloud computing is essential for maintaining customer confidence and satisfaction and for preventing revenue losses. Different techniques can be implemented to increase a system’s availability and assure business continuity. This chapter covers cloud computing as a business continuity solution and cloud service availability. It also covers the causes and impact of service unavailability, as well as various ways to achieve the required cloud service availability.
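    The availability figures such a chapter deals with are commonly derived from failure and repair statistics. As an illustration (not taken from the chapter itself), the sketch below computes steady-state availability from MTBF/MTTR and shows how redundancy, one of the standard techniques for assuring business continuity, raises it; the numeric inputs are hypothetical:

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability from mean time between failures and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def parallel(*avails):
    """Availability of redundant replicas: the system is down only if all replicas are down."""
    unavail = 1.0
    for a in avails:
        unavail *= (1.0 - a)
    return 1.0 - unavail

a = availability(2000, 4)            # a single node: fails every ~2000 h, 4 h to repair
print(round(a, 5))                   # 0.998 -- roughly "two nines"
print(round(parallel(a, a), 6))      # 0.999996 -- an active-active pair
```

    Redundancy multiplies *unavailabilities*, which is why even modest replication moves a service from "two nines" toward "five nines".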

    Toward sustainable data centers: a comprehensive energy management strategy

    Data centers are major contributors to the emission of carbon dioxide into the atmosphere, and this contribution is expected to increase in the coming years. This has encouraged the development of techniques to reduce the energy consumption and the environmental footprint of data centers. Whereas some of these techniques have succeeded in reducing the energy consumption of the hardware equipment of data centers (including IT, cooling, and power supply systems), we claim that sustainable data centers will only be possible if the problem is faced by means of a holistic approach that includes not only the aforementioned techniques but also intelligent and unifying solutions that enable a synergistic and energy-aware management of data centers. In this paper, we propose a comprehensive strategy to reduce the carbon footprint of data centers that uses energy as a driver of their management procedures. In addition, we present a holistic management architecture for sustainable data centers that implements the aforementioned strategy, and we propose design guidelines to accomplish each step of the proposed strategy, referring to related achievements and enumerating the main challenges that must still be solved. Peer reviewed. Postprint (author's final draft).
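    As a concrete handle on the "energy as a driver" idea, a common starting point (standard practice, not specific to this paper) is the Power Usage Effectiveness metric together with a grid carbon-intensity estimate; all figures below are hypothetical:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy over IT energy (ideal is 1.0)."""
    return total_facility_kwh / it_equipment_kwh

def carbon_footprint_kg(total_facility_kwh, grid_kg_co2_per_kwh):
    """Operational CO2 emissions given an assumed grid carbon intensity."""
    return total_facility_kwh * grid_kg_co2_per_kwh

total_kwh, it_kwh = 1_800_000, 1_200_000   # invented annual figures
print(pue(total_kwh, it_kwh))              # 1.5
print(carbon_footprint_kg(total_kwh, 0.4)) # 720000.0 kg CO2/year at 0.4 kg/kWh
```

    Lowering PUE attacks the cooling/power-supply overhead, while the holistic management the paper argues for can also shrink the IT term itself (e.g., by consolidating workloads).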

    Towards a comprehensive framework for the multidisciplinary evaluation of organizational maturity on business continuity program management: a systematic literature review

    Organizational dependency on Information and Communication Technology (ICT) drives the preparedness challenge to cope with business process disruptions. Business Continuity Management (BCM) encompasses effective planning to enable business functions to resume to an acceptable state of operation within a defined timeframe. This paper presents a systematic literature review that communicates the strategic guidelines to streamline the organizational processes in the BCM program, culminating in the Business Continuity Plan design, according to the organization’s maturity. The systematic literature review methodology follows the Evidence-Based Software Engineering protocol assisted by the Parsifal tool, using the EbscoHost, ScienceDirect, and Scopus databases, ranging from 2000 to February 2021. International standards and frameworks guide the BCM program implementation; however, there is a gap in communicating metrics and what needs to be measured in the BCM program. The major result of the paper is the confirmation of the identified gap, through the analysis of the studies that, according to the BCM components, report strategic guidelines to streamline the BCM program. The analysis quantifies and discusses the contribution of the studies on each BCM component to design a framework supported by metrics that allows assessing the organization’s preparedness in each BCM component, focusing on Information Systems and ICT strategies. info:eu-repo/semantics/publishedVersion

    Deep Learning for Edge Computing Applications: A State-of-the-Art Survey

    With the booming development of the Internet of Things (IoT) and communication technologies such as 5G, our future world is envisioned as an interconnected entity where billions of devices will provide uninterrupted service to our daily lives and to industry. Meanwhile, these devices will generate massive amounts of valuable data at the network edge, calling for not only instant data processing but also intelligent data analysis in order to fully unleash the potential of edge big data. Neither traditional cloud computing nor on-device computing can sufficiently address this problem, due to high latency and limited computation capacity, respectively. Fortunately, the emerging edge computing paradigm sheds light on the issue by pushing data processing from the remote network core to the local network edge, remarkably reducing latency and improving efficiency. In addition, recent breakthroughs in deep learning have greatly enhanced data processing capacity, enabling a thrilling development of novel applications such as video surveillance and autonomous driving. The convergence of edge computing and deep learning is believed to bring new possibilities to both interdisciplinary research and industrial applications. In this article, we provide a comprehensive survey of the latest efforts on deep-learning-enabled edge computing applications and offer insights on how to leverage deep learning advances to facilitate edge applications in four domains, i.e., smart multimedia, smart transportation, smart city, and smart industry. We also highlight the key research challenges and promising research directions therein. We believe this survey will inspire more research and contributions in this promising field.

    Edge computing infrastructure for 5G networks: a placement optimization solution

    This thesis focuses on how to optimize the placement of the Edge Computing infrastructure for upcoming 5G networks. To this aim, the core contributions of this research are twofold: 1) a novel heuristic called Hybrid Simulated Annealing to tackle the NP-hard nature of the problem and, 2) a framework called EdgeON providing a practical tool for real-life deployment optimization. In more detail, Edge Computing has grown into a key solution to 5G latency, reliability and scalability requirements. By bringing computing, storage and networking resources to the edge of the network, delay-sensitive applications, location-aware systems and upcoming real-time services leverage the benefits of a reduced physical and logical path between the end-user and the data or service host. Nevertheless, the edge node placement problem raises critical concerns regarding deployment and operational expenditures (i.e., mainly due to the number of nodes to be deployed), current backhaul network capabilities and non-technical placement limitations. Common approaches to the placement of edge nodes are based on: Mobile Edge Computing (MEC), where the processing capabilities are deployed at the Radio Access Network nodes and Facility Location Problem variations, where a simplistic cost function is used to determine where to optimally place the infrastructure. However, these methods typically lack the flexibility to be used for edge node placement under the strict technical requirements identified for 5G networks. They fail to place resources at the network edge for 5G ultra-dense networking environments in a network-aware manner. 
This doctoral thesis focuses on rigorously defining the Edge Node Placement Problem (ENPP) for 5G use cases and proposes a novel framework called EdgeON aimed at reducing the overall expenses of deploying and operating an Edge Computing network, taking into account the usage and characteristics of the in-place backhaul network and the strict requirements of a 5G-EC ecosystem. The developed framework implements several placement and optimization strategies, thoroughly assessing their suitability to solve the network-aware ENPP. The core of the framework is an in-house developed heuristic called Hybrid Simulated Annealing (HSA), which seeks to address the high complexity of the ENPP while avoiding the non-convergent behavior of other traditional heuristics (i.e., when applied to similar problems). The findings of this work validate our approach to solving the network-aware ENPP, the effectiveness of the proposed heuristic, and the overall applicability of EdgeON. Thorough performance evaluations were conducted on the core placement solutions implemented, revealing the superiority of HSA when compared to widely used heuristics and common edge placement approaches (i.e., a MEC-based strategy). Furthermore, the practicality of EdgeON was tested through two main case studies placing services and virtual network functions over the previously optimally placed edge nodes. Overall, our proposal is an easy-to-use, effective, and fully extensible tool that can be used by operators seeking to optimize the placement of computing, storage, and networking infrastructure in the users’ vicinity. Therefore, our main contributions not only set strong foundations towards a cost-effective deployment and operation of an Edge Computing network, but also directly impact the feasibility of upcoming 5G services and use cases and the extensive existing research regarding the placement of services and even network service chains at the edge.
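    The thesis' HSA heuristic is not reproduced in this abstract; as a rough illustration of the simulated-annealing family it builds on, the sketch below anneals a toy node-placement instance. The cost function, neighbor move, and all parameters are invented for illustration and are much simpler than the network-aware ENPP:

```python
import math
import random

def anneal(candidate_sites, n_nodes, cost, t0=100.0, cooling=0.95, steps=2000, seed=42):
    """Plain simulated annealing for choosing n_nodes sites (a generic sketch, not HSA).

    candidate_sites: list of site identifiers
    cost: maps a set of chosen sites to a deployment cost (lower is better)
    """
    rng = random.Random(seed)
    current = set(rng.sample(candidate_sites, n_nodes))
    best, best_cost = set(current), cost(current)
    t = t0
    for _ in range(steps):
        # Neighbor move: swap one chosen site for an unused one.
        neighbor = set(current)
        neighbor.remove(rng.choice(sorted(neighbor)))
        neighbor.add(rng.choice([s for s in candidate_sites if s not in neighbor]))
        delta = cost(neighbor) - cost(current)
        # Always accept improvements; accept worsening moves with probability exp(-delta/t).
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current = neighbor
            if cost(current) < best_cost:
                best, best_cost = set(current), cost(current)
        t *= cooling  # geometric cooling schedule
    return best, best_cost

# Toy instance: place 2 edge nodes among 10 sites on a line so that
# users at positions 0..9 are close to their nearest node (a latency proxy).
users = list(range(10))
sites = list(range(10))
def latency_cost(chosen):
    return sum(min(abs(u - s) for s in chosen) for u in users)

placement, c = anneal(sites, 2, latency_cost)
print(sorted(placement), c)
```

    The acceptance of occasional worsening moves is what lets annealing escape local minima; HSA, per the abstract, hybridizes this idea to avoid the non-convergent behavior of plain heuristics.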

    Cost Effective Cloud Storage Interoperability Between Public Cloud Platforms

    With recent advancements in technology, cloud storage has become cheaper, enabling organizations around the world to store more data in the cloud (texts, images, videos, databases, etc.), whether for backup, archiving, or simply storing data streams. New digital laws and regulations (e.g., the General Data Protection Regulation) require these organizations to change the way they process or handle data, which usually results in a change of cloud provider or the adoption of a hybrid or multi-cloud architecture. With the amount of data stored increasing year after year, it becomes difficult for these organizations to change cloud platforms or cloud providers and migrate their data without considering the technical complexity, the time, and the huge cost it may incur. This article discusses data migration and interoperability issues between cloud platforms; the proposed approach provides a simple, cost-effective migration that would help organizations save time and money in this process, based on a hybrid ontology approach for the brokerage of data transfers. Keywords: Cloud Computing; Storage; Security; Data; Migration; Cost Optimization

    Simplifying Internet of Things (IoT) Data Processing Workflow Composition and Orchestration in Edge and Cloud Datacenters

    Ph.D. Thesis. The Internet of Things (IoT) allows the creation of virtually infinite connections into a global array of distributed intelligence. Identifying a suitable configuration of devices, software, and infrastructure in the context of user requirements is fundamental to the success of delivering IoT applications. However, the design, development, and deployment of IoT applications are complex and complicated due to various unwarranted challenges. For instance, addressing the IoT application users' subjective and objective opinions with IoT workflow instances remains a challenge for the design of a more holistic approach. Moreover, the complexity of IoT applications has increased exponentially due to the heterogeneous nature of the Edge/Cloud services utilised to lower latency in data transformation and increase reusability. To address the composition and orchestration of IoT applications in cloud and edge environments, this thesis presents IoT-CANE (Context Aware Recommendation System), a high-level unified IoT resource configuration recommendation system which embodies a unified conceptual model capturing configuration, constraint, and infrastructure features of Edge/Cloud together with IoT devices. Second, I present an IoT workflow composition system (IoTWC) to allow IoT users to pipeline their workflows with proposed IoT workflow activity abstract patterns. IoTWC leverages the analytic hierarchy process (AHP) to compose the multi-level IoT workflow that satisfies the requirements of any IoT application. Besides, users are provided with recommended IoT workflow configurations using an AHP-based multi-level composition framework. The proposed IoTWC is validated on a user case study to evaluate the coverage of IoT workflow activity abstract patterns and on a real-world scenario for smart buildings.
Last, I propose a fault-tolerant automated deployment framework for IoT which captures the IoT workflow plan from IoTWC and deploys it in a multi-cloud edge environment with a fault-tolerance mechanism. The efficiency and effectiveness of the proposed fault-tolerant system are evaluated on a real-time water flooding data monitoring and management application.
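    AHP itself is standard: criteria are compared pairwise on Saaty's 1–9 scale and priority weights are extracted from the comparison matrix. Below is a minimal sketch using the row geometric-mean approximation of the principal eigenvector; the criteria and judgment values are invented for illustration, not taken from the thesis:

```python
def ahp_weights(pairwise):
    """Approximate AHP priority weights from a pairwise comparison matrix
    using the row geometric-mean method (weights sum to 1)."""
    n = len(pairwise)
    geo_means = []
    for row in pairwise:
        prod = 1.0
        for x in row:
            prod *= x
        geo_means.append(prod ** (1.0 / n))
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical criteria for ranking a workflow configuration:
# latency, cost, reliability. matrix[i][j] states how much criterion i
# matters relative to criterion j (reciprocal matrix, Saaty scale).
matrix = [
    [1.0,   3.0, 0.5],
    [1 / 3, 1.0, 0.25],
    [2.0,   4.0, 1.0],
]
weights = ahp_weights(matrix)
print([round(w, 3) for w in weights])  # reliability dominates, then latency, then cost
```

    With weights in hand, candidate workflow configurations can be scored as weighted sums over the criteria, which is the mechanism a multi-level AHP composition relies on at each level.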

    Towards the Internet of Behaviors in Smart Cities through a Fog-To-Cloud Approach

    Recent advances in the Internet of Things (IoT) and the rise of the Internet of Behavior (IoB) have made it possible to develop real-time improved traveler assistance tools for mobile phones, assisted by cloud-based machine learning and using fog computing between the IoT and the Cloud. Within the Horizon 2020-funded mF2C project, an Android app has been developed that exploits the proximity marketing concept and covers the essential path through the airport onto the flight, from the least busy security queue through to the time to walk to the gate, gate changes, and other obstacles that airports tend to entertain travelers with. It gives travelers a chance to discover the facilities of the airport, aided by a recommender system using machine learning that can make recommendations and offer vouchers based on the traveler’s preferences or on similarities to other travelers. The system provides obvious benefits to airport planners: not only tracking people in the shopping area, but also aggregated and anonymized views, such as heat maps that can highlight bottlenecks in the infrastructure or suggest situations that require intervention, such as emergencies. With the emergence of the COVID-19 pandemic, the tool could be adapted to help with social distancing to guarantee safety. The use of the fog-to-cloud platform and the fulfillment of all centricity and privacy requirements of the IoB give evidence of the impact of the solution. DOI: 10.28991/HIJ-2021-02-04-01
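    The abstract does not detail the recommender's internals; one standard way to score "similarity to other travelers" is cosine similarity over preference vectors, sketched below with invented traveler profiles (the categories and scores are purely illustrative):

```python
import math

def cosine(u, v):
    """Cosine similarity between two preference vectors (0 when either is all-zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical interest scores over facility categories:
# (coffee, duty-free, bookshop, lounge)
travelers = {
    "alice": [5, 0, 2, 1],
    "bob":   [4, 1, 2, 0],
    "carol": [0, 5, 0, 4],
}

# Rank other travelers by similarity to "alice"; the most similar
# traveler's visited facilities seed her recommendations and vouchers.
target = travelers["alice"]
ranked = sorted(
    ((name, cosine(target, vec)) for name, vec in travelers.items() if name != "alice"),
    key=lambda t: t[1],
    reverse=True,
)
print(ranked[0][0])  # bob
```

    A production system would of course learn these vectors from anonymized behavior data rather than hand-coded scores, consistent with the privacy requirements the abstract emphasizes.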

    Agent-Based Cloud Resource Management for Secure Cloud Infrastructures

    The cloud offers clear benefits for computation as well as storage across diverse application areas. Security concerns are by far the greatest barrier to the wider uptake of cloud computing, particularly for privacy-sensitive applications. The aim of this article is to propose an approach for establishing trust between users and providers of cloud infrastructures (IaaS model) based on certified trusted agents. Such an approach would remove barriers that prevent security-sensitive applications from being moved to the cloud. The core technology encompasses a secure agent platform providing the execution environment for agents, and a secure attested software base which ensures the integrity of the host platform. In this article we describe the motivation, concept, design, and initial implementation of these technologies.