6 research outputs found

    Progger: an efficient, tamper-evident kernel-space logger for cloud data provenance tracking

    Cloud data provenance, or "what has happened to my data in the cloud", is a critical data security component which addresses pressing data accountability and data governance issues in cloud computing systems. In this paper, we present Progger (Provenance Logger), a kernel-space logger which potentially empowers all cloud stakeholders to trace their data. Logging from the kernel space enables security analysts to collect provenance from the lowest possible atomic data actions, and enables several higher-level tools to be built for effective end-to-end tracking of data provenance. Within the last few years, an increasing number of kernel-space provenance tools have been proposed, but they face several critical data security and integrity problems. Limitations of these prior tools include (1) the inability to provide log tamper-evidence and to prevent fake/manual entries, (2) the lack of accurate and granular timestamp synchronisation across several machines, (3) log space requirements and growth, and (4) the inability to efficiently log root usage of the system. Progger resolves all of these critical issues and, as such, provides high assurance of data security and data activity audit. With this in mind, the paper discusses these elements of high-assurance cloud data provenance, describes the design of Progger and its efficiency, and presents compelling results which pave the way for Progger to become a foundation tool for data activity tracking across cloud systems.
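    Tamper evidence of the kind claimed above is commonly achieved by hash-chaining log entries, so that altering or deleting any record invalidates every later one. The abstract does not specify Progger's actual mechanism, so the Python sketch below illustrates the general technique only; the entry fields are illustrative, not Progger's format.

```python
import hashlib
import json
import time

def append_entry(log, event):
    """Append a log entry whose hash covers the previous entry's hash,
    so any later tampering breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "ts": time.time_ns(),  # high-resolution timestamp
        "event": event,        # e.g. a syscall record (hypothetical shape)
        "prev": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    """Recompute the chain; return index of first bad entry, or -1 if intact."""
    prev_hash = "0" * 64
    for i, entry in enumerate(log):
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev_hash:
            return i
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return i
        prev_hash = entry["hash"]
    return -1

log = []
append_entry(log, {"syscall": "open", "path": "/data/report.csv", "uid": 0})
append_entry(log, {"syscall": "write", "path": "/data/report.csv", "uid": 0})
assert verify(log) == -1  # chain is intact; editing any entry makes this fail
```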

    Scientific Workflow Repeatability through Cloud-Aware Provenance

    The transformations, analyses and interpretations of data in scientific workflows are vital for their repeatability and reliability. Provenance of scientific workflows has been effectively captured in Grid-based scientific workflow systems. However, the recent adoption of Cloud-based scientific workflows presents an opportunity to investigate the suitability of existing approaches, or to propose new approaches, for collecting provenance information from the Cloud and using it for workflow repeatability on Cloud infrastructure. The dynamic nature of the Cloud makes this difficult because, unlike the Grid, resources are provisioned on demand. This paper presents a novel approach that can assist in mitigating this challenge. The approach collects Cloud infrastructure information along with workflow provenance and establishes a mapping between them; this mapping is later used to re-provision resources on the Cloud. Workflow execution is repeated by: (a) capturing the Cloud infrastructure information (virtual machine configuration) along with the workflow provenance, and (b) re-provisioning similar resources on the Cloud and re-executing the workflow on them. The evaluation of an initial prototype suggests that the proposed approach is feasible and can be investigated further.
    Comment: 6 pages; 5 figures; 3 tables. In Proceedings of the Recomputability 2014 workshop of the 7th IEEE/ACM International Conference on Utility and Cloud Computing (UCC 2014), London, December 2014.
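    The mapping described above ties each workflow job to the configuration of the virtual machine that executed it, so that an equivalent machine can be re-provisioned later. The Python sketch below shows one plausible shape for such a record and a matching step; the field names and the provision_similar helper are hypothetical, not the paper's actual design.

```python
from dataclasses import dataclass

@dataclass
class VMConfig:
    """Cloud infrastructure information captured alongside provenance."""
    flavor: str   # e.g. "m1.large"
    vcpus: int
    ram_mb: int
    disk_gb: int
    image: str    # OS image the job ran on

@dataclass
class JobProvenance:
    """Workflow provenance for one job, linked to its infrastructure."""
    job_id: str
    inputs: list[str]
    outputs: list[str]
    vm: VMConfig  # the workflow-to-infrastructure mapping

def provision_similar(vm: VMConfig, available: list[VMConfig]) -> VMConfig | None:
    """Pick the smallest offering that meets or exceeds the recorded
    resources, mimicking 're-provisioning similar resources on the Cloud'."""
    candidates = [c for c in available
                  if c.vcpus >= vm.vcpus and c.ram_mb >= vm.ram_mb
                  and c.disk_gb >= vm.disk_gb and c.image == vm.image]
    return min(candidates, key=lambda c: (c.vcpus, c.ram_mb), default=None)
```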

    Simulating of erosion modeling using ANSYS fluid dynamics

    The micromechanical process of solid particle erosion can be affected by a number of factors, including impact angle, flow geometry, and particle size and shape. Erosion can also be affected by fluid properties, flow conditions, and the material comprising the impact surface. Of these potential factors, the most critical ones for initiating erosion are particle size and material, carrier-phase viscosity, pipe diameter, velocity, and total flow rate of the second phase. Three turbulence models which are heavily dependent on flow velocities and fluid properties are k-epsilon (k-ε), k-omega (k-ω) and the shear stress transport (SST) model. More extreme erosion generally occurs in gas-solid flow for geometries which experience rapid alterations in flow direction (e.g., in valves and tees) because of unstable flow and local turbulence. The present study provides results from computational fluid dynamics (CFD) simulations that feature dilute water-solid flows in complex pipelines, highlighting the dynamic behavior displayed by the flows' entrained solid particles. Specifically, the impact of fluid velocities in relation to erosion location is tested on sand particles measuring 10, 70, 100 and 200 microns. For the CFD analysis, liquid velocities of 20, 25, 30, 35 and 40 m/s are applied. The difference is evident between velocities of 20 m/s and 40 m/s, giving erosion rates of 1.73×10⁻⁴ kg/m²·s and 2.11×10⁻³ kg/m²·s, respectively, when the particle solid is 20
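    Taken at face value, those two figures imply that doubling the inlet velocity from 20 m/s to 40 m/s increases the erosion rate by roughly a factor of twelve. The quick check below also back-solves the implied power-law exponent, a common way to summarise velocity sensitivity in erosion studies; the exponent is our derivation from the quoted rates, not a result stated in the abstract (and the 10⁻⁴ exponent of the first rate is reconstructed from garbled text).

```python
import math

rate_20 = 1.73e-4  # kg/m^2.s at 20 m/s (exponent reconstructed from context)
rate_40 = 2.11e-3  # kg/m^2.s at 40 m/s

ratio = rate_40 / rate_20
# If erosion rate ~ v^n, then n = log(ratio) / log(v2 / v1)
n = math.log(ratio) / math.log(40 / 20)
print(f"rate increases {ratio:.1f}x; implied velocity exponent n ~ {n:.2f}")
# -> rate increases 12.2x; implied velocity exponent n ~ 3.61
```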

    Scientific workflow execution reproducibility using cloud-aware provenance

    Scientific experiments and projects such as CMS and neuGRIDforYou (N4U) produce data on the order of petabytes annually. They adopt scientific workflows to analyse this large amount of data in order to extract meaningful information. These workflows are executed over distributed compute and storage resources, provided by the Grid and, more recently, by the Cloud. The Cloud is becoming the playing field for scientists as it provides scalability and on-demand resource provisioning. Reproducing a workflow execution to verify results is vital for scientists and has proven to be a challenge. As per a study (Belhajjame et al. 2012), around 80% of workflows cannot be reproduced, and for 12% of them the cause is a lack of information about the execution environment. The dynamic and on-demand provisioning capability of the Cloud makes this more challenging. To overcome these challenges, this research investigates how to capture the execution provenance of a scientific workflow along with the resources used to execute it in a Cloud infrastructure. This information then enables a scientist to reproduce workflow-based scientific experiments on the Cloud by re-provisioning similar resources.

    Provenance has been recognised as information that helps in debugging, verifying and reproducing a scientific workflow execution. The recent adoption of Cloud-based scientific workflows presents an opportunity to investigate the suitability of existing approaches, or to propose new approaches, for collecting provenance information from the Cloud and using it for workflow reproducibility on the Cloud. Literature analysis found that existing approaches for the Grid or the Cloud do not provide detailed resource information, nor do they offer an automatic provenance-capturing approach for the Cloud environment. To mitigate these challenges and fill the knowledge gap, a provenance-based approach, ReCAP, is proposed in this thesis. In ReCAP, workflow execution reproducibility is achieved by (a) capturing the Cloud-aware provenance (CAP), (b) re-provisioning similar resources on the Cloud and re-executing the workflow on them, and (c) comparing the provenance graph structure, including the Cloud resource information, and the outputs of the workflows. ReCAP captures the Cloud resource information and links it with the workflow provenance to generate Cloud-aware provenance. The Cloud-aware provenance consists of configuration parameters relating to the hardware and software that describe a resource on the Cloud. Once captured, this information aids in re-provisioning the same execution infrastructure on the Cloud for workflow re-execution. Since resources on the Cloud can be used in a static or dynamic manner (i.e. destroyed when a task is finished), this presents a challenge for the devised provenance-capturing approach. To deal with these scenarios, different capturing and mapping approaches are presented in this thesis. These mapping approaches work outside the virtual machine and collect resource information from the Cloud middleware, so they do not affect job performance. The impact of the collected Cloud resource information on the job, as well as on the workflow execution, is evaluated through various experiments in this thesis. In ReCAP, workflow reproducibility is verified by comparing the provenance graph structure, infrastructure details and the output produced by the workflows.

    To compare the provenance graphs, the captured provenance information, including infrastructure details, is translated to a graph model. The graphs of the original execution and the reproduced execution are then compared to analyse their similarity. In this regard, two comparison approaches are presented that can produce a qualitative as well as a quantitative analysis of the graph structure. The ReCAP framework and its constituent components are evaluated using different scientific workflows, such as ReconAll and Montage, from the domains of neuroscience (i.e. N4U) and astronomy respectively. The results show that ReCAP is able to capture the Cloud-aware provenance and demonstrate workflow execution reproducibility by re-provisioning the same resources on the Cloud. The results also demonstrate that the provenance comparison approaches can determine the similarity between two given provenance graphs. The results of the workflow output comparison show that this approach is suitable for comparing the outputs of scientific workflows, especially deterministic workflows.
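    As a rough illustration of the quantitative side of such a comparison, the sketch below checks structural equivalence of two provenance graphs and computes a simple node/edge overlap score with networkx; this is a stand-in of ours, not ReCAP's actual comparison algorithm.

```python
import networkx as nx

def overlap_score(g1: nx.DiGraph, g2: nx.DiGraph) -> float:
    """Jaccard-style similarity over node and edge sets (quantitative)."""
    nodes = len(set(g1.nodes) & set(g2.nodes)) / len(set(g1.nodes) | set(g2.nodes))
    edges = len(set(g1.edges) & set(g2.edges)) / len(set(g1.edges) | set(g2.edges))
    return (nodes + edges) / 2

# Provenance graphs: nodes are jobs/files, edges are used/generated relations.
original = nx.DiGraph([("input.dat", "job1"), ("job1", "out1.dat")])
reproduced = nx.DiGraph([("input.dat", "job1"), ("job1", "out1_rerun.dat")])

print(nx.is_isomorphic(original, reproduced))  # qualitative: same shape -> True
print(overlap_score(original, reproduced))     # quantitative: shared labels -> 0.417
```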

    Trust enhanced security in SaaS cloud computing

    The trust problem in Software as a Service (SaaS) cloud computing covers a broad range of a Data Owner's concerns about data in the Cloud. These concerns arise from the way the data is handled in locations and on machines that are unknown to the Data Owner.

    Tracking of data leaving the cloud

    Data leakages out of cloud computing environments are a fundamental cloud security concern for both end-users and cloud service providers. A literature survey of existing technologies revealed their inadequacies and the need for a new methodology. This position paper discusses the requirements and proposes a novel auditing methodology that enables tracking of data transferred out of Clouds. Initial results from our prototypes are reported. This research is aligned with our vision that, by providing transparency, accountability and audit trails for all data events within and out of the Cloud, trust and confidence can be instilled in the industry, as users will know exactly what is happening to their data in and out of the Cloud.
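    The abstract leaves the methodology at a high level, but the core artefact it argues for, an audit trail of data events crossing the Cloud boundary, can be pictured roughly as follows; the record layout is purely illustrative and not taken from the paper.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class EgressEvent:
    """One audit-trail record for data leaving the Cloud (illustrative)."""
    when: datetime
    data_id: str      # identifier of the tracked data object
    owner: str        # the end-user the data belongs to
    destination: str  # external endpoint the data was sent to
    channel: str      # e.g. "https", "email", "usb"

audit_trail: list[EgressEvent] = []

def record_egress(data_id: str, owner: str, destination: str, channel: str) -> EgressEvent:
    """Append an immutable egress record; owners can later audit the trail."""
    event = EgressEvent(datetime.now(timezone.utc), data_id, owner, destination, channel)
    audit_trail.append(event)
    return event

record_egress("doc-42", "alice", "203.0.113.7:443", "https")
```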
