
    New directions for remote data integrity checking of cloud storage

    Cloud storage services allow data owners to outsource their data, and thus reduce their workload and cost in data storage and management. However, most data owners today are still reluctant to outsource their data to cloud storage providers (CSPs), simply because they do not trust the CSPs and have no confidence that the CSPs will secure their valuable data. This dissertation focuses on Remote Data Checking (RDC), a collection of protocols that allow a client (data owner) to check the integrity of data outsourced at an untrusted server, and thus to audit whether the server fulfills its contractual obligations. Robustness has not previously been considered for dynamic RDC schemes in the literature. The R-DPDP scheme designed here is the first RDC scheme that provides robustness and, at the same time, supports dynamic data updates, while requiring only small, constant client storage. The main challenge to overcome is reducing the client-server communication during updates under an adversarial setting. A security analysis for R-DPDP is provided. Single-server RDC schemes are useful to detect server misbehavior, but have no provisions to recover damaged data. Thus, in practice, they should be extended to a distributed setting, in which the data is stored redundantly at multiple servers. The client can use RDC to check each server and, upon detecting a corrupted server, can repair it by retrieving data from healthy servers, so that the reliability level is maintained. Previously, RDC has been investigated for replication-based and erasure coding-based distributed storage systems, but not for network coding-based distributed storage systems that rely on untrusted servers. RDC-NC is the first RDC scheme for network coding-based distributed storage systems, ensuring data remains intact in the face of data corruption, replay, and pollution attacks.
Experimental evaluation shows that RDC-NC is inexpensive for both the clients and the servers. The setting considered so far outsources the storage of the data, but the data owner remains heavily involved in the data management process (especially during the repair of damaged data). A new paradigm is proposed, in which the data owner fully outsources both the storage and the management of the data. In traditional distributed RDC schemes, the repair phase imposes a significant burden on the client, who needs to expend a significant amount of computation and communication; this makes it very difficult to keep the client lightweight. A new self-repairing concept is developed, in which the servers are responsible for repairing the corruption, while the client acts as a lightweight coordinator during repair. To realize this concept, two novel RDC schemes, RDC-SR and ERDC-SR, are designed for replication-based distributed storage systems; they enable Server-side Repair and minimize the load on the client side. Version control systems (VCS) provide the ability to track and control changes made to data over time. The changes are usually stored in a VCS repository which, due to its massive size, is often hosted at an untrusted CSP. RDC can be used to address concerns about the untrusted nature of the VCS server by allowing a data owner to periodically check that the server continues to store the data. The RDC-AVCS scheme relies on RDC to ensure all data versions remain retrievable from the untrusted server over time. The RDC-AVCS prototype built on top of Apache SVN incurs only a modest decrease in performance compared to a regular (non-secure) SVN system.
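The spot-checking idea behind RDC can be illustrated with a deliberately simplified challenge-response sketch. This is not the R-DPDP or RDC-NC construction itself (real RDC schemes use homomorphic tags so the server can prove possession without returning whole blocks); every function and variable name here is hypothetical, chosen only for the example:

```python
import hashlib
import hmac
import os
import random

def tag_blocks(key: bytes, blocks: list) -> list:
    """Client: compute a per-block HMAC tag before outsourcing.
    The block index is bound into the tag to prevent block swapping."""
    return [hmac.new(key, i.to_bytes(8, "big") + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

def server_respond(blocks, tags, challenged):
    """Untrusted server: return the challenged blocks with their stored tags."""
    return [(i, blocks[i], tags[i]) for i in challenged]

def client_verify(key: bytes, response) -> bool:
    """Client: recompute each tag and compare; any mismatch means corruption."""
    return all(
        hmac.compare_digest(
            hmac.new(key, i.to_bytes(8, "big") + b, hashlib.sha256).digest(), t)
        for i, b, t in response)

# Demo: outsource 16 blocks, then spot-check 4 randomly chosen ones.
key = os.urandom(32)
blocks = [os.urandom(64) for _ in range(16)]
tags = tag_blocks(key, blocks)
challenge = random.sample(range(16), 4)
assert client_verify(key, server_respond(blocks, tags, challenge))

# A server that silently corrupts a block fails the check on that block.
blocks[3] = os.urandom(64)
assert not client_verify(key, server_respond(blocks, tags, [3]))
```

Random challenges keep the audit cheap: the client checks only a sample per round, yet a server that has lost any non-trivial fraction of the data is caught with high probability over repeated audits.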

    Qualitative and Quantitative Security Analyses for ZigBee Wireless Sensor Networks


    Computer Aided Verification

    This open access two-volume set LNCS 13371 and 13372 constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers are organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.

    Intellectual Feature Ranking Model with Correlated Feature Set based Malware Detection in Cloud environment using Machine Learning

    Malware detection for cloud systems has been studied extensively, and many different approaches have been developed and implemented in an effort to stay ahead of this ever-evolving threat. Malware refers to any program or defect that is designed to duplicate itself or cause damage to the system's hardware or software. These attacks are designed specifically to cause harm to operational systems, yet they are invisible to the human eye. One of the most exciting developments in data storage and service delivery today is cloud computing. There are significant benefits to be gained over more conventional protection methods by using this fast-evolving technology to protect computer-based systems from cyber-related threats. Assets to be secured may reside in any networked computing environment, including but not limited to Cyber Physical Systems (CPS), critical systems, fixed and portable computers, mobile devices, and the Internet of Things (IoT). Malicious software, or malware, refers to any program that intentionally infiltrates a computer system in order to compromise its security, privacy, or availability. A cloud-based intelligent behavior analysis model for a malware detection system using a feature set is proposed to identify the ever-increasing malware attacks. The suggested system begins by collecting malware samples from several virtual machines, from which unique characteristics can be extracted easily. Then, the malicious and safe samples are separated using the features provided to the learning-based and rule-based detection agents. To generate a relevant feature set for accurate malware detection, this research proposes an Intellectual Feature Ranking Model with Correlated Feature Set (IFR-CFS) using an enhanced logistic regression model for accurate detection of malware in the cloud environment. 
Compared to the traditional feature selection model, the proposed model performs better at generating a feature set for accurate detection of malware.
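As a rough illustration of correlation-based feature ranking (a simplified stand-in for the IFR-CFS model, not its actual algorithm; the toy data, feature semantics, and function names are invented for the example), each feature can be scored by the absolute value of its Pearson correlation with the malware label and ranked accordingly:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient; returns 0.0 for constant inputs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy) if vx and vy else 0.0

def rank_features(samples, labels):
    """Rank feature indices by |correlation with the label|, descending."""
    n_feat = len(samples[0])
    scores = [abs(pearson([s[j] for s in samples], labels))
              for j in range(n_feat)]
    return sorted(range(n_feat), key=lambda j: -scores[j])

# Toy data: feature 0 tracks the malware label, feature 1 is constant
# (uninformative), feature 2 is perfectly anti-correlated with the label.
samples = [(1, 5, 0), (1, 5, 0), (0, 5, 1), (0, 5, 1), (1, 5, 0), (0, 5, 1)]
labels  = [1, 1, 0, 0, 1, 0]
ranking = rank_features(samples, labels)  # constant feature 1 ranks last
```

In a full pipeline, the top-ranked subset would then be fed to the downstream classifier (logistic regression in the abstract's setting) instead of the raw feature vector.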

    Activity Report 2021 : Automatic Control, Lund University


    ANOMALY INFERENCE BASED ON HETEROGENEOUS DATA SOURCES IN AN ELECTRICAL DISTRIBUTION SYSTEM

    Harnessing heterogeneous data sets would improve system observability. While the current metering infrastructure in the distribution network has been utilized operationally to tackle abnormal events, such as weather-related disturbances, the new normal we face today can be of a greater magnitude. Strengthening the inter-dependencies as well as incorporating new crowd-sourced information can enhance operational aspects such as system reconfigurability under extreme conditions. Such resilience is crucial to recovery from any catastrophic event. This dissertation focuses on the anomaly of potential foul play within an electrical distribution system, covering both primary and secondary networks as well as their potential relationships to feeders from other utilities. While distributed generation has been part of the smart grid mission, these additions can be prone to electronic manipulation. This dissertation provides a comprehensive foundation for the emerging platform in which computing resources have become ubiquitous in the electrical distribution network. The topics covered are wide-ranging: the anomaly inference includes load modeling and profile enhancement from other sources to infer topological changes in the primary distribution network. While metering infrastructure has been the technological deployment enabling remote-controlled capability on the disconnectors, this contribution represents critical knowledge of a new paradigm for addressing security-related issues, such as irregularity (tampering by individuals) as well as potential malware (a large-scale form) that can massively manipulate the existing network control variables, resulting in a large impact on the power grid.
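A minimal sketch of the tamper-flagging idea: compare incoming meter readings against a historical load profile and flag large deviations. This is only a z-score heuristic standing in for the dissertation's heterogeneous-data inference, and the data and names are invented for the example:

```python
import statistics

def flag_anomalies(history, readings, threshold=3.0):
    """Flag indices of meter readings more than `threshold` standard
    deviations from the historical load profile (a tamper heuristic)."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)   # sample standard deviation
    return [i for i, r in enumerate(readings)
            if abs(r - mu) > threshold * sigma]

# Hourly kWh history for one feeder; reading index 2 mimics a tampered meter.
history = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1, 10.0]
readings = [10.2, 9.9, 2.0, 10.1]
suspect = flag_anomalies(history, readings)  # -> [2]
```

A real deployment would condition the profile on weather, time of day, and crowd-sourced event reports rather than a single static baseline.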

    Modeling and simulation of hydrokinetic composite turbine system

    The utilization of kinetic energy from rivers is promising as an attractive alternative to other available renewable energy resources. Hydrokinetic turbine systems are advantageous over traditional dam-based hydropower systems due to zero head and mobility. The objective of this study is to design and analyze a hydrokinetic composite turbine system in operation. A fatigue study and structural optimization of composite turbine blades were conducted, and system-level performance of the composite hydrokinetic turbine was evaluated. A fully coupled blade element momentum-finite element method algorithm has been developed to compute the stress response of the turbine blade subjected to hydrodynamic and buoyancy loadings during operation. Loadings on the blade were validated against commercial software simulation results. The reliability-based fatigue life of the designed composite blade was investigated. A particle swarm based structural optimization model was developed to optimize the weight and structural performance of laminated composite hydrokinetic turbine blades. The online iterative optimization process couples the three-dimensional comprehensive finite element model of the blade with real-time particle swarm optimization (PSO). After optimization, the composite blade weighs much less and has better load-carrying capability. Finally, the model developed has been extended to design and evaluate the performance of a three-blade horizontal-axis hydrokinetic composite turbine system. Flow behavior around the blade and the power and power efficiency of the system were characterized by simulation. Laboratory water tunnel testing was performed, and simulation results were validated by the experimental findings. The work performed provides a valuable procedure for the design and analysis of hydrokinetic composite turbine systems --Abstract, page iv
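The PSO loop at the heart of such a structural optimization can be sketched generically. This is a textbook PSO, not the dissertation's coupled BEM-FEM pipeline; the `surrogate` objective (blade thickness and ply angle) is an invented stand-in for the real finite element evaluation:

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box-constrained variables."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[pbest_val.index(min(pbest_val))][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # move, clamped to the design-variable bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest, objective(gbest)

random.seed(1)  # reproducible run for the demo
# Toy surrogate for blade mass/stress: optimum at thickness 2.0, ply angle 45.
surrogate = lambda x: (x[0] - 2.0) ** 2 + ((x[1] - 45.0) / 45.0) ** 2
best, best_val = pso(surrogate, [(0.5, 5.0), (0.0, 90.0)])
```

In the dissertation's setting, each `objective` call would instead trigger a full finite element stress analysis of the candidate laminate, which is why the coupling is described as an online iterative process.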

    The Internet of Everything

    In the era before IoT, the world wide web, internet, web 2.0 and social media made people’s lives comfortable by providing web services and enabling access to personal data irrespective of their location. Further, to save time and improve efficiency, there is a need for machine-to-machine communication, automation, smart computing and ubiquitous access to personal devices. This need gave birth to the phenomenon of the Internet of Things (IoT) and, further, to the concept of the Internet of Everything (IoE).

    Cyber-Physical Security of Power Distribution Systems

    Smart grids have been witnessing continuous, rapid and radical developments in recent years. With the aim of a more sustainable energy system, the share of distributed generation resources is ever-increasing and is transforming the traditional operations of the power grids. Along with these resources, an ensemble of smart measurement devices, multiple communication layers, sophisticated distributed control techniques and interconnection of system equipment represent the pillars that support the modernization of these power networks. This progress has undoubtedly enabled a more efficient and accurate operation of the power networks. At the same time, it has created vulnerability points and challenges that endanger the safety and security of smart grid operation. The cyber-physical security of smart grids has consequently become a priority and a major challenge in ensuring a reliable and safe operation of the power grid. The resiliency of the grid depends on our ability to design smart grids that can withstand threats and mitigate different attack scenarios. Cyber-physical security is currently an active area of research, and threats that target critical operation components have been classified and investigated in the literature. However, many of the research efforts have focused on threats at the transmission level, with the intention of extending the protection, detection and mitigation strategies to the distribution level. Nevertheless, much of the existing analysis is not suitable for Power Distribution Systems (PDS) due to the inherently different characteristics of these systems. This thesis first investigates and addresses stealthy False Data Injection (FDI) attacks on the PDS, which target the Distribution Systems Optimal Power Flow (DSOPF) and are not detectable by traditional Bad Data Detection (BDD) methods.
The attack formulation is based on Branch Current State Estimation (BCSE), which allows separation of the phases, so a full analysis of the unbalanced three-phase system is performed. Specifically, it is shown how an adversary with access to system measurements and topology is able to maximize the system losses. By launching FDI attacks that target the Distribution Systems State Estimation (DSSE), the adversary constructs attack vectors that drive the objective function in the opposite direction of optimality. The effects of the optimal attack strategy are investigated, and the results demonstrate the increase in system losses after corrupting the measurements. Second, a machine learning technique is proposed as a protection measure against these cyber-physical threats, to detect the FDI attacks. Although FDI vectors cannot be detected by conventional BDD techniques, exploiting historical data enables a more thorough analysis and a better ability to detect anomalies in the measurements. A Recurrent Neural Network (RNN) is applied to the stream of data measurements to identify any anomaly, which represents a compromised measurement, by analyzing multiple points across the measurement vector and multiple time steps. The temporal correlation of data points is the basis for identifying attack vectors. The results of the RNN model indicate an overall strong ability to detect the stealthy attacks.
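Why residual-based BDD misses stealthy FDI can be shown with a deliberately tiny toy: a linear measurement model with a single state, far simpler than the three-phase BCSE formulation above. If the attacker injects a = H·c into the measurements, the least-squares residual (the quantity BDD thresholds) is unchanged while the state estimate shifts by exactly c. All values below are made up for the illustration:

```python
def estimate(H, z):
    """Least-squares state estimate for a single-state linear model z ~ H*x."""
    return sum(h * m for h, m in zip(H, z)) / sum(h * h for h in H)

def residual_norm(H, z):
    """Norm of the measurement residual r = z - H*x_hat (what BDD thresholds)."""
    x = estimate(H, z)
    return sum((m - h * x) ** 2 for h, m in zip(H, z)) ** 0.5

H = [1.0, 2.0, 0.5]   # measurement sensitivities (toy topology knowledge)
z = [1.1, 1.9, 0.6]   # honest, slightly noisy measurements

c = 0.4               # attacker's chosen shift of the estimated state
z_attacked = [m + h * c for h, m in zip(H, z)]  # inject a = H*c

r_honest = residual_norm(H, z)
r_attacked = residual_norm(H, z_attacked)       # identical to r_honest
shift = estimate(H, z_attacked) - estimate(H, z)  # equals c
```

Because the injected vector lies in the column space of H, it is absorbed entirely into the estimate and leaves nothing in the residual; this is precisely the gap that the temporal, history-aware RNN detector is meant to close.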