
    On Cyber-Physical Security of Smart Grid: Data Integrity Attacks and Experiment Platform

    A Smart Grid is a digitally enabled electric power grid that integrates computation and communication technologies from the cyber world with sensors and actuators from the physical world. Owing to the system's complexity, and in particular the tight coupling of the communication and power systems, the Smart Grid introduces new and fundamentally different security vulnerabilities and risks. This work addresses two research aspects of the cyber-physical security of the Smart Grid: (i) the construction, impact, and countermeasures of data integrity attacks; and (ii) the design and implementation of a general cyber-physical security experiment platform. For data integrity attacks: based on the system model of the state estimation process in the Smart Grid, a data integrity attack model is first formulated under which attackers can generate financial benefits from real-time electricity market operations. Then, to reduce the knowledge of the targeted power system required to launch an attack, an online attack approach is proposed that lets the attacker construct the desired attacks without the network information of the power system. Furthermore, a network-information attacking strategy is proposed in which the most vulnerable meters can be identified directly and the desired measurement perturbations achieved by strategically manipulating the network information. Alongside the attack strategies, corresponding countermeasures are provided, based respectively on the sparsity of attack vectors and on a robust state estimator. For the experiment platform: ScorePlus, a software-hardware hybrid and federated experiment environment for the Smart Grid, is presented. ScorePlus incorporates both a software emulator and a hardware testbed that follow the same architecture, so the same Smart Grid application program can be tested on either without modification. It provides a federated environment in which multiple software emulators and hardware testbeds at different locations can connect to form a unified Smart Grid system, and the ScorePlus software is encapsulated as a resource plugin in the OpenStack cloud computing platform, so it supports massive deployments with large-scale test cases on cloud infrastructure.
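
    The stealthy data-integrity attack on state estimation described above can be illustrated numerically. The following is a minimal sketch (using a hypothetical random measurement Jacobian, not the paper's power system model) of the well-known result that an attack vector of the form a = Hc shifts the least-squares state estimate by exactly c while leaving the bad-data detection residual untouched:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    H = rng.normal(size=(6, 3))                  # measurement Jacobian: 6 meters, 3 states
    x_true = rng.normal(size=3)                  # true state (e.g., bus phase angles)
    z = H @ x_true + 0.01 * rng.normal(size=6)   # noisy meter readings

    def estimate(z):
        """Least-squares state estimate: x_hat = (H^T H)^-1 H^T z."""
        return np.linalg.solve(H.T @ H, H.T @ z)

    def residual(z):
        """Norm of the measurement residual used by bad-data detection."""
        return np.linalg.norm(z - H @ estimate(z))

    c = np.array([0.5, -0.2, 0.1])               # attacker's desired state shift
    z_attacked = z + H @ c                       # stealthy attack: a = H c

    print(residual(z), residual(z_attacked))     # identical residuals
    print(estimate(z_attacked) - estimate(z))    # estimate shifted by exactly c
    ```

    Because the residual is unchanged, any detector that thresholds it cannot distinguish the attacked measurements from clean ones, which is what motivates the sparsity-based and robust-estimation countermeasures mentioned above.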

    Cloudbus Toolkit for Market-Oriented Cloud Computing

    This keynote paper: (1) presents the 21st-century vision of computing and identifies various IT paradigms that promise to deliver computing as a utility; (2) defines an architecture for creating market-oriented Clouds and computing environments by leveraging technologies such as virtual machines; (3) offers thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; and (4) presents the work carried out as part of our Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform-as-a-Service software system containing an SDK (Software Development Kit) for constructing Cloud applications and deploying them on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for the dynamic creation of federated computing environments for scaling elastic applications; (iii) creation of third-party Cloud brokering services for building content delivery networks and e-Science applications and deploying them on the capabilities of IaaS providers such as Amazon, along with Grid mashups; (iv) CloudSim, which supports modelling and simulation of Clouds for performance studies; (v) energy-efficient resource allocation mechanisms and techniques for the creation and management of Green Clouds; and (vi) pathways for future research.
    Comment: 21 pages, 6 figures, 2 tables, conference paper
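
    As a concrete illustration of the SLA-oriented, market-based allocation idea in point (3), the sketch below (invented offer parameters and function names, not the Aneka or CloudSim API) selects the cheapest Cloud offer whose expected completion time still meets the customer's deadline:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Offer:
        provider: str
        price_per_hour: float   # $ per VM-hour
        mips: float             # million instructions per second per VM

    def schedule(task_mi, deadline_h, offers):
        """Cheapest offer finishing task_mi (million instructions) within deadline_h hours."""
        def hours(o):
            return task_mi / (o.mips * 3600)
        feasible = [o for o in offers if hours(o) <= deadline_h]
        if not feasible:
            return None         # SLA cannot be met: reject or negotiate elsewhere
        return min(feasible, key=lambda o: o.price_per_hour * hours(o))

    offers = [Offer("A", 0.10, 1000), Offer("B", 0.25, 4000)]
    print(schedule(task_mi=2e7, deadline_h=2.0, offers=offers))   # picks B
    ```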

    A proposed case for the cloud software engineering in security

    This paper presents a proposal for Cloud Software Engineering in Security (CSES), which combines the benefits of sound software engineering processes with security. As the existing literature does not yet offer such a proposal for Cloud security, we use Business Process Modeling Notation (BPMN) to illustrate the concept of CSES across its design, implementation, and test phases. BPMN can be used to raise alarms to protect Cloud security in a real-time, real-case scenario. Results from BPMN simulations show that a long execution time, 60 hours, is required to protect the real-time security of 2 petabytes (PB) of data; see the throughput sketch below. When the data is not in use, the simulations show that the execution time for securing all the data falls off rapidly. We demonstrate a proposal for dealing with Cloud security and aim to improve its current performance for Big Data.
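
    To put the quoted figure in perspective, a quick back-of-the-envelope calculation (assuming decimal petabytes, 1 PB = 10^15 bytes) gives the sustained processing rate that protecting 2 PB in 60 hours implies:

    ```python
    data_bytes = 2 * 10**15             # 2 PB, decimal convention
    seconds = 60 * 3600                 # 60 hours
    print(data_bytes / seconds / 1e9)   # ~9.26 GB/s sustained
    ```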

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers to determine the optimal location for hosting application services at reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of the users consuming their services, so load coordination must happen automatically and the distribution of services must change in response to changes in load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and databases) for handling sudden variations in service demand. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor Clouds. We have validated our approach through a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, offering significant performance gains in response time and cost savings under dynamic workload scenarios.
    Comment: 20 pages, 4 figures, 3 tables, conference paper
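
    The load-coordination problem can be made concrete with a toy policy sketch (ours, not the paper's InterCloud brokering protocol): each request goes to the least-loaded data center, and once every site crosses a utilization threshold the broker leases capacity from the federation instead of queueing locally.

    ```python
    CAPACITY = 100          # request slots per data center
    THRESHOLD = 0.8         # utilization above which a site offloads

    load = {"us-east": 0, "eu-west": 0, "ap-south": 0}

    def route(n_requests):
        for _ in range(n_requests):
            site = min(load, key=lambda s: load[s])     # least-loaded site
            if load[site] / CAPACITY >= THRESHOLD:
                print("all sites hot: lease capacity from the federation")
                return
            load[site] += 1

    route(150)
    print(load)   # demand spreads evenly across the federation
    ```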

    Machine Learning-Based Elastic Cloud Resource Provisioning in the Solvency II Framework

    The Solvency II Directive (Directive 2009/138/EC) is a European Directive issued in November 2009 and effective from January 2016, enacted by the European Union to regulate the insurance and reinsurance sector through the discipline of risk management. Solvency II requires European insurance companies to conduct consistent evaluation and continuous monitoring of risks, a process that is computationally complex and extremely resource-intensive. To this end, companies are required to equip themselves with adequate IT infrastructure, facing a significant outlay. In this paper we present the design and development of a machine learning-based approach that transparently deploys the most resource-intensive portion of the Solvency II-related computation on a cloud environment. Our proposal targets DISAR®, a Solvency II-oriented system initially designed to run on a grid of conventional computers. We show how our solution reduces the overall expense of the computation without compromising the privacy of the companies' data (making it suitable for conventional public cloud environments), while meeting the strict temporal requirements imposed by the Directive. Additionally, the system is organized as a self-optimizing loop that uses information gathered from actual (useful) computations, thus requiring a shorter training phase. We present an experimental study conducted on Amazon EC2 to assess the validity and efficiency of our proposal.
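
    The self-optimizing loop lends itself to a short sketch; the Amdahl-style run-time model below and all numbers are our assumptions, not DISAR's actual estimator. Past (useful) runs are fitted to predict run time as a function of instance count, and the smallest fleet that meets the deadline is provisioned:

    ```python
    import numpy as np

    history = [(4, 26.0), (16, 8.0)]   # (instances, observed run time in hours)

    def fit(history):
        """Least-squares fit of t(n) = a/n + b (parallel part + serial part)."""
        X = np.array([[1.0 / n, 1.0] for n, _ in history])
        y = np.array([t for _, t in history])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef

    def predict_runtime(n, coef):
        a, b = coef
        return a / n + b

    def provision(deadline_h, coef, max_n=256):
        """Smallest (hence cheapest) instance count predicted to meet the deadline."""
        for n in range(1, max_n + 1):
            if predict_runtime(n, coef) <= deadline_h:
                return n
        return max_n

    coef = fit(history)
    n = provision(deadline_h=12.0, coef=coef)
    print(n, predict_runtime(n, coef))   # 10 instances, ~11.6 h predicted
    # After each real run, append (n, measured_hours) to history and refit.
    ```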

    Personal Volunteer Computing

    We propose personal volunteer computing, a novel paradigm that encourages technical solutions leveraging personal devices, such as smartphones and laptops, for personal applications that require significant computation, such as animation rendering and image processing. The paradigm requires no investment in additional hardware, relying instead on devices already owned by users and their community, and it favours simple tools that can be implemented part-time by a single developer. We show that a sample of today's personal devices is competitive with a top-of-the-line laptop from two years ago. We also propose new directions for extending the paradigm.
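
    At its core the paradigm is a task farm over heterogeneous personal devices. The toy sketch below abstracts the transport away; threads stand in for devices that would, in practice, pull tasks over the network (e.g., HTTP or WebSockets):

    ```python
    from queue import Queue
    from threading import Thread

    tasks = Queue()
    results = {}

    def render_frame(i):
        return f"frame-{i}.png"          # placeholder for an expensive computation

    def device(name):
        while True:
            i = tasks.get()
            if i is None:                # poison pill: no more work
                break
            results[i] = render_frame(i) # each device pulls at its own pace

    for i in range(100):
        tasks.put(i)

    workers = [Thread(target=device, args=(f"dev{k}",)) for k in range(3)]
    for w in workers:
        w.start()
    for _ in workers:
        tasks.put(None)
    for w in workers:
        w.join()
    print(len(results))                  # 100 frames rendered by volunteer devices
    ```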

    Attributes of Big Data Analytics for Data-Driven Decision Making in Cyber-Physical Power Systems

    Big data analytics is a relatively new term in power system terminology. The concept concerns the way a massive volume of data is acquired, processed, and analyzed to extract insight. In particular, big data analytics refers to applications of artificial intelligence, machine learning, data mining, and time-series forecasting methods. Decision-makers in power systems have long been hampered by the inability of classical methods to handle large-scale practical cases, owing to the presence of thousands or millions of variables, long running times, high computational burden, divergence of results, unjustifiable errors, and poor model accuracy. Big data analytics is an active topic that pinpoints how to extract insight from these large data sets. This article enumerates the applications of big data analytics in future power systems across several layers, from grid scale to local scale. Big data analytics has many applications in smart grid implementation, electricity markets, the execution of collaborative operation schemes, enhancing microgrid operational autonomy, managing electric vehicle operations in smart grids, active distribution network control, district hub system management, multi-agent energy systems, electricity theft detection, stability and security assessment with PMUs, and better exploitation of renewable energy sources. The adoption of big data analytics entails some prerequisites, such as the proliferation of IoT-enabled devices, easily accessible cloud space, blockchain, etc. This paper comprehensively reviews the applications of big data analytics along with the prevailing challenges and solutions.
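
    As a flavour of one application listed above, electricity theft detection, the sketch below flags abnormal drops in smart-meter readings with a simple statistical test (synthetic data; deployed systems use far richer models on far larger data sets):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    daily_kwh = rng.normal(30.0, 3.0, size=365)   # one household's daily usage
    daily_kwh[300:] *= 0.4                        # meter tampered with after day 300

    window, k = 30, 4.0                           # baseline length, sigma threshold
    mu = daily_kwh[:window].mean()
    sigma = daily_kwh[:window].std()

    suspicious = np.where(np.abs(daily_kwh - mu) > k * sigma)[0]
    print(suspicious[:5])   # flagged days cluster after the tampering begins
    ```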