156 research outputs found

    Detection of Malware Attacks on Virtual Machines for a Self-Heal Approach in Cloud Computing using VM Snapshots

    Cloud computing strives to be dynamic as a service-oriented architecture (SOA). The services in this SOA are rendered across private, public, and many other commercial domains. These services are vital to the cloud infrastructure and must therefore be secured. To stay secure and resilient, the cloud must be able to identify not only known threats but also new challenges that target its infrastructure. In this paper, we introduce and discuss a method that detects malware from VM logs and classifies the corresponding VM snapshots into attacked and non-attacked snapshots. Since snapshots are routinely copied to backup servers, especially during night hours, this approach can reduce backup-server overhead while giving VMs in the local cloud infrastructure a self-healing capability. A machine learning approach at the hypervisor level is proposed, with features gathered from the API calls of VM instances at the IaaS level of the cloud service. Our proposed scheme achieves a detection accuracy of about 93% and can classify and detect different types of malware with respect to the VM snapshots. Finally, the paper presents an algorithm that uses snapshots to detect attacks and self-heal particular VM instances through their monitoring components, applied to cloud scenarios. The self-healing approach, combined with machine learning algorithms, can identify new threats given some prior knowledge of their functionality.
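
As a rough illustration of the classification step described above, the sketch below trains an off-the-shelf classifier on API-call frequency features and labels snapshot windows as attacked or non-attacked. It is not the authors' implementation: the feature layout, labelling rule and data are synthetic stand-ins.

```python
# Illustrative sketch: classifying VM snapshots as attacked / non-attacked
# from API-call frequency features, in the spirit of the paper's approach.
# The feature layout and training data here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Each row: counts of selected hypervisor API calls observed for one VM
# snapshot window (e.g. open/read/write/exec/net); label 1 = attacked.
X = rng.poisson(lam=5, size=(200, 5)).astype(float)
y = (X[:, 3] + X[:, 4] > 11).astype(int)   # stand-in labelling rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```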

    Assessing and augmenting SCADA cyber security: a survey of techniques

    SCADA systems monitor and control critical infrastructures of national importance such as power generation and distribution, water supply, transportation networks, and manufacturing facilities. The pervasiveness, miniaturisation, and declining cost of internet connectivity have transformed these systems from strictly isolated to highly interconnected networks. This connectivity provides immense benefits such as reliability, scalability and remote connectivity, but at the same time exposes an otherwise isolated and secure system to global cyber security threats. This inevitable transformation to highly connected systems thus necessitates effective security safeguards, as any compromise or downtime of SCADA systems can have severe economic, safety and security ramifications. One way to ensure vital asset protection is to adopt a viewpoint similar to that of an attacker in order to determine weaknesses and loopholes in defences. Such a mindset helps to identify and fix potential breaches before their exploitation. This paper surveys tools and techniques to uncover SCADA system vulnerabilities. A comprehensive review of the selected approaches is provided along with their applicability.
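
To give a concrete flavour of the attacker-viewpoint probing such tools automate, here is a minimal, hypothetical sketch that checks lab hosts for an open Modbus/TCP port (502), a common first step in SCADA reconnaissance. The addresses are placeholders; probes like this should only ever be run against systems you are authorised to test.

```python
# Minimal sketch of attacker-style reconnaissance that SCADA assessment
# tools automate: probing hosts for an open Modbus/TCP port (502).
# Target addresses are hypothetical; only run against systems you own.
import socket

def modbus_port_open(host: str, port: int = 502, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to the Modbus port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in ("192.168.1.10", "192.168.1.11"):  # lab PLCs (illustrative)
    print(host, "modbus open" if modbus_port_open(host) else "closed/filtered")
```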

    Hardening and Architecture of an Industrial Control System in a Virtualized Environment

    Virtualization is widely used in traditional ICT to share hardware resources between separate software applications while also creating isolation. This makes it possible to utilize hardware resources more efficiently, as isolation does not require running software on separate hardware servers. Virtualization offers features like fault tolerance and the ability to create easily managed test environments. Such features are also desirable in designing and maintaining automation systems. Industrial control systems (ICS) and their requirements differ significantly from traditional ICT, however. Security and reliability are of critical concern in ICS, and the effects of introducing new technology need to be thoroughly considered. Many practices that are well established and trusted in ICT cannot be used directly in ICS, if at all. Industrial automation uses highly specialized solutions, and security measures can hinder or prevent system performance. This thesis presents the main challenges and solutions related to using virtualization in industrial automation, with a focus on security and hardening. The virtualization platform used is VMware's vSphere 6.5, so the practical recommendations are aimed at VMware products; many of the general design and security principles are also applicable in environments using different virtualization software. Automation systems are complex, and maintaining virtualization adds its own operational workload. Available scripting languages and programming interfaces are researched to find ways to decrease this workload by automating some of the maintenance tasks. Automation systems are very heterogeneous, and integrating virtualization needs a lot of additional case-specific consideration and practical work. Still, many of the established ICT solutions addressing virtualization security and hardening problems are found suitable for use in the ICS domain with some special considerations. Using the available VMware APIs and scripting solutions, practical tools automating security checks and hardening of virtual environments were developed.
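
As a sketch of what such automation might look like (not the thesis's actual tool), the snippet below uses the pyVmomi API to connect to a vCenter server and audit one setting from VMware's hardening guidance across all VMs. The hostname, credentials and the choice of setting are illustrative assumptions.

```python
# Illustrative sketch: checking a VMware hardening-guide setting on every VM
# via pyVmomi. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; verify certs in production
si = SmartConnect(host="vcenter.example.local", # placeholder vCenter address
                  user="audit@vsphere.local", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        opts = {o.key: o.value for o in (vm.config.extraConfig or [])}
        # Hardening guidance: guest copy/paste via VMware Tools should be disabled.
        ok = str(opts.get("isolation.tools.copy.disable", "")).lower() == "true"
        print(f"{vm.name}: copy/paste isolation {'OK' if ok else 'NOT hardened'}")
    view.Destroy()
finally:
    Disconnect(si)
```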

    IPCFA: A Methodology for Acquiring Forensically-Sound Digital Evidence in the Realm of IAAS Public Cloud Deployments

    Cybercrimes and digital security breaches are on the rise: savvy businesses and organizations of all sizes must ready themselves for the worst. Cloud computing has become the new normal, opening even more doors for cybercriminals to commit crimes that are not easily traceable. The fast pace of technology adoption exceeds the speed at which the cybersecurity community and law enforcement agencies (LEAs) can invent countermeasures to investigate and prosecute such criminals. While presenting defensible digital evidence in courts of law is already complex, it gets more complicated if the crime is tied to public cloud computing, where storage, network, and computing resources are shared and dispersed over multiple geographical areas. Investigating such crimes involves collecting evidence from the public cloud that is sound enough for court. Digital evidence admissibility in U.S. courts is governed predominantly by the Federal Rules of Evidence and the Federal Rules of Civil Procedure. Evidence authenticity can be challenged by the Daubert test, which evaluates the forensic process that took place to generate the presented evidence. Existing digital forensics models, methodologies, and processes have not adequately addressed crimes that take place in the public cloud. It was only in late 2020 that the Scientific Working Group on Digital Evidence (SWGDE) published a document that shed light on best practices for collecting evidence from cloud providers. Yet SWGDE's publication does not address the gap between the technology and the legal system when it comes to evidence admissibility. The document is high level, with more focus on law enforcement processes such as issuing subpoenas and preservation orders to the cloud provider. This research proposes IaaS Public Cloud Forensic Acquisition (IPCFA), a methodology for acquiring forensically sound evidence from public cloud IaaS deployments. IPCFA focuses on bridging the gap between the legal and technical sides of evidence authenticity to help produce admissible evidence that can withstand scrutiny in U.S. courts. Grounded in design science research (DSR), the research is rigorously evaluated using two hypothetical scenarios for crimes that take place in the public cloud. The first scenario takes place in AWS and is walked through hypothetically. The second scenario demonstrates IPCFA's applicability and effectiveness on Azure Cloud. Both cases are evaluated using a rubric built from the federal and civil digital evidence requirements and the international best practices for digital evidence, showing the effectiveness of IPCFA in generating cloud evidence sound enough to be considered admissible in court.
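
Purely as an illustration of the kind of technical step an IPCFA-style acquisition could involve, the sketch below snapshots a suspect EBS volume with boto3 and writes a hash-chained custody record. The resource IDs and case details are placeholders, and this is not the methodology itself.

```python
# Illustrative sketch of one acquisition step: snapshotting a suspect EC2
# instance's EBS volume and recording a hash-chained chain-of-custody entry.
# All IDs and case details below are placeholders.
import boto3, hashlib, json, datetime

ec2 = boto3.client("ec2", region_name="us-east-1")

volume_id = "vol-0123456789abcdef0"   # suspect volume (placeholder)
snap = ec2.create_snapshot(
    VolumeId=volume_id,
    Description="Forensic acquisition - case 2024-001 (illustrative)")

# Append-only, hash-chained custody record: each entry commits to the previous.
prev_hash = "0" * 64                  # genesis entry
entry = {
    "time": datetime.datetime.utcnow().isoformat() + "Z",
    "action": "create_snapshot",
    "snapshot_id": snap["SnapshotId"],
    "prev": prev_hash,
}
entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
print(json.dumps(entry, indent=2))
```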

    Internet of Things Applications - From Research and Innovation to Market Deployment

    The book aims to provide a broad overview of various topics of the Internet of Things, from research, innovation and development priorities to enabling technologies, nanoelectronics, cyber-physical systems, architecture, interoperability and industrial applications. It is intended to be a standalone book in a series that covers the Internet of Things activities of the IERC – Internet of Things European Research Cluster – from technology to international cooperation and the global "state of play". The book builds on the ideas put forward by the European Research Cluster on the Internet of Things Strategic Research Agenda and presents global views and state-of-the-art results on the challenges facing the research, development and deployment of IoT at the global level. The Internet of Things is creating a revolutionary new paradigm, with opportunities in every industry from health care, pharmaceuticals, food and beverage, agriculture, computers, electronics, telecommunications, automotive, aeronautics, transportation, energy and retail to apply the massive potential of the IoT to achieving real-world solutions. The beneficiaries will also include semiconductor companies, device and product companies, infrastructure software companies, application software companies, consulting companies, and telecommunication and cloud service providers. IoT will create new revenues annually for these stakeholders and potentially cause substantial market-share shakeups due to increased technology competition. The IoT will fuel technology innovation by creating the means for machines to communicate many different types of information with one another, while contributing to the increased value of information created by the number of interconnections among things and the transformation of the processed information into knowledge shared in the Internet of Everything. The success of IoT depends strongly on enabling technology development, market acceptance and standardization, which provide interoperability, compatibility, reliability, and effective operation on a global scale. The connected devices are part of ecosystems connecting people, processes, data, and things that communicate in the cloud using increased storage and computing power and push for standardization of communication and metadata. In this context, security, privacy, safety and trust have to be addressed by product manufacturers throughout the life cycle of their products, from design to support processes. The IoT developments address the whole IoT spectrum – from devices at the edge to cloud and datacentres on the backend and everything in between – through ecosystems created by industry, research and application stakeholders that enable real-world use cases, accelerate the Internet of Things, and establish open interoperability standards and common architectures for IoT solutions. Enabling technologies such as nanoelectronics, sensors/actuators, cyber-physical systems, intelligent device management, smart gateways, telematics, smart network infrastructure, cloud computing and software technologies will create new products, new services and new interfaces by creating smart environments and smart spaces, with applications ranging from smart cities, smart transport, buildings, energy and grid to smart health and life.
Technical topics discussed in the book include:
• Introduction
• Internet of Things Strategic Research and Innovation Agenda
• Internet of Things in the industrial context: time for deployment
• Integration of heterogeneous smart objects, applications and services
• Evolution from device to semantic and business interoperability
• Software definition and virtualization of network resources
• Innovation through interoperability and standardisation when everything is connected anytime at anyplace
• Dynamic, context-aware, scalable and trust-based IoT security and privacy framework
• Federated cloud service management and the Internet of Things
• Internet of Things applications

    Application of service composition mechanisms to Future Networks architectures and Smart Grids

    This thesis revolves around the hypothesis that service composition methodologies and mechanisms can be applied to different fields in order to efficiently orchestrate flexible and context-aware communications and processes. More concretely, it focuses on two fields of application: context-aware media distribution, and Smart Grid services and infrastructure management, working towards the definition of a Software-Defined Utility (SDU), which proposes a new way of managing the Smart Grid through a software-based approach that enables much more flexible operation of the power infrastructure. Hence, it reviews the context, requirements and challenges of these fields, as well as the service composition approaches. It places special emphasis on the combination of service composition with Future Network (FN) architectures, presenting a service-oriented FN proposal for creating context-aware, on-demand communication services. Service composition methodologies and mechanisms are also presented for operating over this architecture, and their usage is subsequently proposed (in conjunction or not with the FN architecture) in the deployment of context-aware media distribution and Smart Grids. Finally, the research and development done in the field of Smart Grids is presented, proposing several parts of the SDU infrastructure, with examples of service composition applied to designing dynamic and flexible security for smart metering, and to the orchestration and management of services and data resources within the utility infrastructure.

    A FORENSICALLY-ENABLED IAAS CLOUD COMPUTING ARCHITECTURE

    Cloud computing has been advancing at an intense pace. It has become one of the most important research topics in computer science and information systems. Cloud computing offers enterprise-scale platforms in a short time frame with little effort. Thus, it delivers significant economic benefits to both commercial and public entities. Despite this, security and the subsequent incident management requirements are major obstacles to adopting the cloud. Current cloud architectures do not support digital forensic investigators, nor do they comply with today's digital forensics procedures, largely due to the fundamentally dynamic nature of the cloud. When an incident has occurred, an organization-based investigation will seek to provide potential digital evidence while minimising the cost of the investigation. Data acquisition is the first and most important process within digital forensics, ensuring data integrity and admissibility. However, access to data and the control of resources in the cloud is still very much provider-dependent and complicated by the very nature of the multi-tenanted operating environment. Thus, investigators have no option but to rely on the Cloud Service Providers (CSPs) to acquire evidence for them. Due to the cost and time involved in acquiring a forensic image, some cloud providers will not provide evidence beyond 1 TB despite a court order served on them. Even assuming they are willing, or required to by law, the evidence collected is still questionable, as there is no way to verify its validity or whether evidence has already been lost. Therefore, dependence on the CSPs is considered one of the most significant challenges when investigators need to acquire evidence in a timely yet forensically sound manner from cloud systems. This thesis proposes a novel architecture to support forensic acquisition and analysis of IaaS cloud-based systems. The approach, known as Cloud Forensic Acquisition and Analysis System (Cloud FAAS), is based on a cluster analysis of non-volatile memory that achieves forensically reliable images at the same level of integrity as the normal "gold standard" computer forensic acquisition procedures, with the additional capability to reconstruct the image at any point in time. Cloud FAAS fundamentally shifts access to the data back to the data owner rather than relying on a third party. In this manner, organisations are free to undertake investigations at will, requiring no intervention or cooperation from the cloud provider. The novel architecture is validated through a proof-of-concept prototype. A series of experiments are undertaken to illustrate and model how Cloud FAAS is capable of providing a richer and more complete set of admissible evidence than current CSPs are able to provide. Using Cloud FAAS, investigators have the ability to obtain a forensic image of the system after, just prior to, or hours before the incident. Therefore, this approach can not only create images that are forensically sound but also provide access to deleted and, more importantly, overwritten files, which current computer forensic practices are unable to achieve. This results in an increased level of visibility for the forensic investigator and removes any limitations that data carving and fragmentation may introduce. In addition, an analysis of the economic overhead of operating Cloud FAAS is performed. This shows that the level of disk change that occurs is well within acceptable limits and is relatively small in comparison to the total volume of memory available. The results show Cloud FAAS has both a technical and an economic basis for solving investigations involving cloud computing.
Saudi Government
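
The point-in-time reconstruction capability can be illustrated with a toy model: a base image plus an ordered chain of integrity-checked incremental deltas lets an investigator rebuild the disk as it existed at any captured instant. The sketch below is a simplification under that assumption, not the Cloud FAAS implementation.

```python
# Toy model of point-in-time image reconstruction: apply SHA-256-verified
# incremental deltas to a base image, stopping at the requested timestamp.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def reconstruct(base: bytes, deltas: list[dict], until: int) -> bytes:
    """Apply verified (offset, data) deltas in time order up to `until`."""
    image = bytearray(base)
    for d in sorted(deltas, key=lambda d: d["ts"]):
        if d["ts"] > until:
            break
        assert sha256(d["data"]) == d["digest"], "delta failed integrity check"
        image[d["offset"]:d["offset"] + len(d["data"])] = d["data"]
    return bytes(image)

base = b"\x00" * 16                                   # toy 16-byte "disk"
deltas = [{"ts": 10, "offset": 4, "data": b"AAAA", "digest": sha256(b"AAAA")},
          {"ts": 20, "offset": 8, "data": b"BBBB", "digest": sha256(b"BBBB")}]
print(reconstruct(base, deltas, until=15))            # state just before ts=20
```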

    Data security in cloud storage services

    Cloud computing is considered to be the next-generation architecture for ICT, moving application software and databases to centralized large data centers. It aims to offer elastic IT services where clients can benefit from the significant cost savings of the pay-per-use model, can easily scale up or down, and do not have to make large investments in new hardware. However, the management of the data and services in this cloud model is under the control of the provider. Consequently, cloud clients have less control over their outsourced data and have to trust the cloud service provider to protect their data and infrastructure from both external and internal attacks. This is especially true with cloud storage services. Nowadays, users rely on cloud storage as it offers cheap and unlimited data storage that is available for use by multiple devices (e.g. smartphones, tablets, notebooks, etc.). Besides famous cloud storage providers, such as Amazon, Google, and Microsoft, more and more third-party cloud storage service providers are emerging. These services are dedicated to offering more accessible and user-friendly storage services to cloud customers. Examples of these services include Dropbox, Box.net, Sparkleshare, UbuntuOne and JungleDisk. These cloud storage services deliver a very simple interface on top of the cloud storage provided by storage service providers. File and folder synchronization between different machines, sharing files and folders with other users, file versioning, and automated backups are the key functionalities of these emerging cloud storage services. Cloud storage services have changed the way users manage and interact with data outsourced to public providers. With these services, multiple subscribers can collaboratively work and share data without concerns about data consistency, availability and reliability. Although these cloud storage services offer attractive features, many customers have not adopted them, since data stored in these services is under the control of service providers, resulting in confidentiality and security concerns and risks. Therefore, using cloud storage services for storing valuable data depends mainly on whether the service provider can offer sufficient security and assurance to meet client requirements. From the way most cloud storage services are constructed, we can notice that these services do not provide users with sufficient levels of security, leading to an inherent risk to users' data from external and internal attacks. These attacks take the form of data exposure (lack of data confidentiality), data tampering (lack of data integrity), and denial of data (lack of data availability) by third parties on the cloud or by the cloud provider itself. Therefore, cloud storage services should ensure data confidentiality both in motion (while transmitting over networks) and at rest (when stored on the provider's disks). To address the above concerns, confidentiality and access controllability of outsourced data should be maintained with strong cryptographic guarantees. To ensure data confidentiality in public cloud storage services, data should be encrypted before it is outsourced to these services. Although users can rely on client-side cloud storage services or software encryption tools for encrypting their data, many of these services fail to achieve data confidentiality.
Box, for example, does not encrypt user files via SSL or within Box servers. Client-side cloud storage services can intentionally or unintentionally disclose user decryption keys to their provider. In addition, some cloud storage services support convergent encryption for encrypting users' data, exposing it to a "confirmation of a file" attack. On the other hand, software encryption tools use full-disk encryption (FDE), which is not feasible for cloud-based file sharing services because it encrypts the data as virtual hard disks. Although encryption can ensure data confidentiality, it fails to achieve fine-grained access control over outsourced data. Since public cloud storage services are managed by an untrusted cloud service provider, secure and efficient fine-grained access control cannot be realized through these services, as access policies are managed by storage services that have full control over the sharing process. Therefore, there is no guarantee that they will provide good means for efficient and secure sharing, and they can also deduce confidential information about the outsourced data and users' personal information. In this work, we improve the currently employed security measures for securing data in cloud storage services. To achieve better data confidentiality for data stored in the cloud without relying on cloud service providers (CSPs) or putting any burden on users, in this thesis we design a secure cloud storage system framework that simultaneously achieves data confidentiality, fine-grained access control on encrypted data, and scalable user revocation. This framework is built on a trusted third-party (TTP) service that can be employed either locally on users' machines or premises, or remotely on top of cloud storage services. This service encrypts users' data before uploading it to the cloud and decrypts it after downloading from the cloud; it therefore removes the burden of storing, managing and maintaining encryption/decryption keys from data owners. In addition, this service only retains users' secret key(s), not data. Moreover, to ensure high security for these keys, it stores them on a hardware device. Furthermore, this service combines multi-authority ciphertext-policy attribute-based encryption (CP-ABE) and attribute-based signatures (ABS) to achieve many-read-many-write fine-grained data access control on storage services. It also efficiently revokes users' privileges without relying on the data owner to re-encrypt massive amounts of data and redistribute new keys to the authorized users: it removes the heavy computation of re-encryption from users and delegates this task to the cloud service provider's (CSP) proxy servers, which achieve flexible and efficient re-encryption without revealing the underlying data to the cloud. In our designed architecture, we address the problem of ensuring data confidentiality against the cloud and against accesses beyond authorized rights. To resolve these issues, we design a trusted third-party (TTP) service that is in charge of storing data in an encrypted format in the cloud. To improve the efficiency of the designed architecture, the service allows users to choose the severity level of the data, and according to this level different encryption algorithms are employed.
To achieve many-read-many-write fine-grained access control, we merge two algorithms: multi-authority ciphertext-policy attribute-based encryption (MA-CP-ABE) and attribute-based signatures (ABS). Moreover, we support two levels of revocation, user and attribute revocation, so that we can comply with the collaborative environment. Last but not least, we validate the effectiveness of our design by carrying out a detailed security analysis. This analysis proves the correctness of our design in terms of data confidentiality at each stage of user interaction with the cloud.
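
To make the encrypt-before-upload idea concrete, the sketch below substitutes off-the-shelf AES-GCM for the thesis's MA-CP-ABE/ABS construction (which requires a pairing-based crypto library) and shows the TTP-side wrap/unwrap. Key handling is deliberately simplified relative to the hardware-backed design described above.

```python
# Minimal sketch of the framework's core idea, encrypt before upload, using
# AES-GCM as a stand-in for the MA-CP-ABE/ABS construction. The key is held
# by the TTP service; only ciphertext ever reaches the storage provider.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def ttp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt on the client/TTP side; only ciphertext reaches the CSP."""
    nonce = os.urandom(12)                      # 96-bit nonce, never reused per key
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def ttp_decrypt(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)       # held by the TTP, not the CSP
blob = ttp_encrypt(b"quarterly-report.xlsx contents", key)
# `blob` is what gets uploaded; the provider never sees the key, so
# confidentiality holds even against an honest-but-curious CSP.
assert ttp_decrypt(blob, key) == b"quarterly-report.xlsx contents"
```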