    Reliability issues related to the usage of Cloud Computing in Critical Infrastructures

    The use of cloud computing is extending to all kinds of systems, including those that are part of Critical Infrastructures, and measuring their reliability is becoming more difficult. Computing is becoming the 5th utility, in part thanks to the use of cloud services. Cloud computing is now used by all types of systems and organizations, including critical infrastructure, creating hidden inter-dependencies on both public and private cloud models. This paper investigates the use of cloud computing by critical infrastructure systems and the reliability and continuity-of-service risks that this use creates. Examples are presented from different critical industries, and although the use of cloud computing by such systems is not yet widespread, the paper identifies the risk it will pose in the future. The concepts of macro and micro dependability, and the model we introduce, are useful for defining inter-dependencies and for analyzing the resilience of systems that depend on other systems, specifically in the cloud model.
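    A minimal Python sketch of the inter-dependency idea described above (the service names and availability figures are hypothetical illustrations, not taken from the paper): a critical-infrastructure service inherits the availability of every cloud service it transitively depends on, so hidden dependencies lower the composite figure.

        # Hypothetical dependency graph: a critical-infrastructure front end that
        # relies on public and private cloud services (illustration only).
        DEPENDS_ON = {
            "scada_frontend": ["public_cloud_storage", "private_cloud_iam"],
            "public_cloud_storage": ["cloud_provider_network"],
            "private_cloud_iam": [],
            "cloud_provider_network": [],
        }

        # Assumed per-service availabilities, chosen for illustration only.
        AVAILABILITY = {
            "scada_frontend": 0.9995,
            "public_cloud_storage": 0.999,
            "private_cloud_iam": 0.9999,
            "cloud_provider_network": 0.999,
        }

        def composite_availability(service, seen=None):
            """Availability of a service including everything it transitively depends on."""
            seen = set() if seen is None else seen
            if service in seen:            # ignore cycles in this toy model
                return 1.0
            seen.add(service)
            result = AVAILABILITY[service]
            for dep in DEPENDS_ON[service]:
                result *= composite_availability(dep, seen)
            return result

        print(composite_availability("scada_frontend"))  # noticeably below the 0.9995 seen in isolation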

    Aligning Your IT Requirements with Resilient Resources

    Continuing developments in cloud computing and server virtualization have created new opportunities for universities to dramatically increase the resilience of their critical information systems. These two technologies represent a powerful new IT trend that also carries risks that are not always obvious. Developing IT strategies that incorporate cloud-based services (such as Email, File Storage, LMS, etc…) can greatly increase the resilience of universities by relocating these services to another geographic region. However, existing regulations and policies for data security and retention are not always compatible with otherwise attractive cloud-based solutions. Knowing the right questions to ask is the critical prerequisite to choosing the right cloud-based solution. Server virtualization is usually adopted for the potential cost savings. Unfortunately, this sometimes results in less reliable systems. The cause is simple: it is much easier and less expensive to deploy server virtualization in a non-redundant fashion, which increases the risk of outages due to single points of failure. The loss of a single physical server that hosts a single application results in the loss of that application; if that same physical server hosts a dozen virtual servers, the loss is greatly multiplied. Redundant virtual infrastructures do not incur this risk. In fact, they increase resilience by removing any application's dependence on a single piece of hardware, while still providing significant cost savings.
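    A short, hedged sketch of the failure-multiplication argument above (the host and application names are invented for illustration): losing one non-redundant physical host takes down every virtual server it carries, while a redundant cluster can restart the same VMs elsewhere.

        # One physical host carrying a dozen virtual servers (hypothetical names).
        HOSTS = {
            "host-01": ["email", "lms", "file-storage", "web", "dns", "erp",
                        "backup", "print", "vpn", "wiki", "calendar", "sso"],
        }

        def apps_lost(failed_host, redundant_cluster):
            """Applications left unavailable after the given physical host fails.

            With shared storage and spare capacity (a redundant cluster), the VMs
            restart on surviving hosts; without it, every VM on the host is down.
            """
            return [] if redundant_cluster else HOSTS[failed_host]

        print(len(apps_lost("host-01", redundant_cluster=False)))  # 12 applications down
        print(len(apps_lost("host-01", redundant_cluster=True)))   # 0 applications down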

    Enhancing Cloud Security and Privacy: Time for a New Approach?

    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rise in importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to development (Dev). However, so far the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as PostDocs or Junior Professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data, to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.

    Software Defined Networks based Smart Grid Communication: A Comprehensive Survey

    The current power grid is no longer a feasible solution due to ever-increasing user demand for electricity, aging infrastructure, and reliability issues, and thus requires transformation into a better grid, a.k.a. the smart grid (SG). The key features that distinguish the SG from the conventional electrical power grid are its capability to perform two-way communication, demand-side management, and real-time pricing. Despite all the advantages that the SG will bring, there are certain issues specific to the SG communication system. For instance, network management in current SG systems is complex, time consuming, and done manually. Moreover, the SG communication (SGC) system is built on vendor-specific devices and protocols; current SG systems are therefore not protocol-independent, which leads to interoperability issues. Software-defined networking (SDN) has been proposed to monitor and manage communication networks globally. This article serves as a comprehensive survey of SDN-based SGC. We first discuss a taxonomy of the advantages of SDN-based SGC. We then discuss SDN-based SGC architectures, along with case studies. The article provides an in-depth discussion of routing schemes for SDN-based SGC and a detailed survey of security and privacy schemes applied to SDN-based SGC. We furthermore present challenges, open issues, and future research directions related to SDN-based SGC.
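    As a hedged illustration of why SDN can make SGC management protocol-independent (the port numbers, names, and actions below are assumptions, not taken from the survey), a controller can express smart grid traffic handling as vendor-neutral match/action flow rules:

        from dataclasses import dataclass

        @dataclass
        class FlowRule:
            match_port: int   # transport port of the SG protocol (0 = wildcard)
            priority: int     # higher value wins when several rules match
            action: str       # e.g. forward, rate-limit, drop

        # Hypothetical rule set: prioritize tele-control, then metering backhaul, then the rest.
        RULES = [
            FlowRule(match_port=2404, priority=100, action="forward-fast-path"),
            FlowRule(match_port=8443, priority=50,  action="forward"),
            FlowRule(match_port=0,    priority=1,   action="rate-limit"),
        ]

        def pick_rule(port):
            """Return the highest-priority rule that matches the packet's port."""
            matching = [r for r in RULES if r.match_port in (port, 0)]
            return max(matching, key=lambda r: r.priority)

        print(pick_rule(2404).action)  # forward-fast-path
        print(pick_rule(9999).action)  # rate-limit (wildcard rule)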