
    Insight from a Docker Container Introspection

    The large-scale adoption of virtual containers has raised concerns among practitioners and academics about the viability and reliability of data acquisition, given the shrinking window in which relevant data points can be gathered. These concerns prompted the idea that introspection tools, which acquire data from a system while it is running, can serve both as an early warning system to protect that system and as a capture system that collects data of value from a digital forensic perspective. An exploratory case study was conducted using a Docker engine with Prometheus as the introspection tool. The contribution of this research is two-fold. First, it provides empirical support for the idea that introspection tools can be used to ascertain differences between pristine and infected containers. Second, it lays the groundwork for future research analysing large-scale containerized applications in a virtual cloud.
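
    As a rough illustration of the kind of introspection data involved, the sketch below polls a Prometheus server for per-container CPU rates and compares them against a pristine baseline. The endpoint URL, the cAdvisor-style metric name, and the deviation threshold are assumptions for illustration, not details taken from the study.

        # Sketch: query Prometheus for per-container CPU usage and flag
        # containers deviating from a known-pristine baseline.
        import requests

        PROM_URL = "http://localhost:9090/api/v1/query"  # hypothetical endpoint

        def container_cpu_rates(window="5m"):
            """Return {container_name: CPU rate} from a cAdvisor-style metric."""
            query = f"rate(container_cpu_usage_seconds_total[{window}])"
            resp = requests.get(PROM_URL, params={"query": query}, timeout=10)
            resp.raise_for_status()
            results = resp.json()["data"]["result"]
            return {r["metric"].get("name", "?"): float(r["value"][1])
                    for r in results}

        def flag_deviations(baseline, observed, factor=3.0):
            """Flag containers whose CPU rate exceeds the baseline by a factor."""
            return [name for name, rate in observed.items()
                    if rate > factor * baseline.get(name, float("inf"))]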

    Container and VM Visualization for Rapid Forensic Analysis

    Cloud-hosted software such as virtual machines and containers is notoriously difficult to access, observe, and inspect during ongoing security events. This research describes a new, out-of-band forensic tool for rapidly analyzing cloud-based software. The proposed tool renders two-dimensional visualizations of container contents and virtual machine disk images. These visualizations can be used to identify container/VM contents, pinpoint instances of embedded malware, and find modified code. The proposed tool is compared against other forensic tools in a double-blind experiment, and the results confirm its utility. Implications and future research directions are also described.
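
    The abstract does not say how the visualizations are rendered; one common technique for making container or disk-image contents visually comparable is a grayscale byte plot, sketched below with a hypothetical input path. It illustrates the general idea, not the authors' renderer.

        # Sketch: render a raw disk image (or container layer) as a 2-D
        # grayscale byte plot; anomalous regions often stand out visually.
        import numpy as np
        from PIL import Image

        def byte_plot(path, width=512):
            data = np.fromfile(path, dtype=np.uint8)
            height = len(data) // width
            grid = data[: height * width].reshape(height, width)  # drop tail bytes
            return Image.fromarray(grid, mode="L")

        byte_plot("disk.img").save("disk_visualization.png")  # hypothetical file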

    Performance modelling and optimization for video-analytic algorithms in a cloud-like environment using machine learning

    CCTV cameras produce a large amount of video surveillance data per day, and analysing it requires significant computing resources that often need to be scalable. The emergence of the Hadoop distributed processing framework has had a significant impact on various data-intensive applications, as its distributed processing increases the processing capability of the applications it serves. Hadoop is an open-source implementation of the MapReduce programming model. It automates the creation of tasks for each function, distributes data, parallelizes execution, and handles machine failures, relieving users of the complexity of managing the underlying processing so that they can focus on building their applications. In a practical deployment, however, the challenge of a Hadoop-based architecture is that it requires several scalable machines for effective processing, which in turn adds hardware investment cost to the infrastructure. Although a cloud infrastructure offers scalable and elastic utilization of resources, where users can scale the number of Virtual Machines (VMs) up or down as required, a user such as a CCTV system operator intending to use a public cloud would wish to know what cloud resources (i.e. number of VMs) need to be deployed so that the processing can be done in the fastest (or within a known time constraint) and most cost-effective manner. Often such resources will also have to satisfy practical, procedural and legal requirements. The capability to model a distributed processing architecture in which the resource requirements can be effectively and optimally predicted would thus be a useful tool, if available. The literature offers no clear and comprehensive modelling framework that provides proactive resource allocation mechanisms to satisfy a user's target requirements, especially for a processing-intensive application such as video analytics. In this thesis, with the aim of closing the above research gap, the research begins by examining the current legal practices and requirements of implementing a video surveillance system within a distributed processing and data storage environment, since the legal validity of data gathered or processed within such a system is vital to the system's applicability in these domains. Subsequently, the thesis presents a comprehensive framework for the performance modelling and optimization of resource allocation when deploying a scalable distributed video analytics application in a Hadoop-based framework running on a virtualized cluster of machines. The proposed modelling framework investigates the use of several machine learning algorithms, such as decision trees (M5P, RepTree), Linear Regression, the Multi-Layer Perceptron (MLP) and the ensemble Bagging classifier, to model and predict the execution time of video analytics jobs based on infrastructure-level as well as job-level parameters. Further, to allocate resources under constraints and obtain optimal performance in terms of job execution time, a Genetic Algorithm (GA) based optimization technique is proposed. Experimental results demonstrate the framework's capability to predict the job execution time of a given video analytics task from infrastructure- and input-data-related parameters, and its ability to determine the minimum job execution time given constraints on these parameters. Given the above, the thesis contributes to the state of the art in distributed video analytics design, implementation, performance analysis and optimisation.
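
    A minimal sketch of the predict-then-optimize loop described above, using a scikit-learn bagged decision tree as a stand-in for the thesis's M5P/RepTree/Bagging models and a toy genetic search over the VM count. The feature columns, training values, and GA settings are invented for illustration.

        # Sketch: learn job execution time from infrastructure/job parameters,
        # then search for the cheapest VM count that meets a deadline.
        import random
        import numpy as np
        from sklearn.ensemble import BaggingRegressor
        from sklearn.tree import DecisionTreeRegressor  # stand-in for M5P/RepTree

        # Columns: [num_vms, input_gb, frame_rate] -> execution time (seconds)
        X = np.array([[2, 10, 25], [4, 10, 25], [8, 20, 25], [16, 40, 25]])
        y = np.array([900.0, 520.0, 610.0, 700.0])  # toy measurements
        model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=20).fit(X, y)

        def ga_min_vms(input_gb, deadline_s, pop=20, gens=30):
            """Tiny GA over VM count: favour meeting the deadline cheaply."""
            population = [random.randint(1, 32) for _ in range(pop)]
            def fitness(v):
                t = model.predict([[v, input_gb, 25]])[0]
                return -(v + (1000 if t > deadline_s else 0))  # penalise misses
            for _ in range(gens):
                population.sort(key=fitness, reverse=True)
                parents = population[: pop // 2]
                children = [max(1, p + random.choice([-2, -1, 1, 2]))
                            for p in parents]  # mutate survivors
                population = parents + children
            return max(population, key=fitness)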

    Digital forensic readiness in operational cloud leveraging ISO/IEC 27043 guidelines on security monitoring

    An increase in the use of cloud computing technologies by organizations has led to cybercriminals targeting cloud environments to orchestrate malicious attacks. This, in turn, has created the need for proactive approaches through the use of digital forensic readiness (DFR). Existing studies have attempted to develop proactive prototypes using diverse agent-based solutions capable of extracting forensically sound potential digital evidence (PDE), but such prototypes have generally not been evaluated in an operational platform. To address this limitation, and to further evaluate the degree of PDE relevance in an operational platform, this study developed a prototype in an operational cloud environment to achieve DFR in the cloud. The prototype is deployed and executed in cloud instances hosted on OpenStack, the operational cloud environment. The experiments performed in this study show that it is viable to attain DFR in an operational cloud platform. Further observations show that the prototype is capable of harvesting digital data from cloud instances and storing the data in a forensically sound database. The prototype also prepares the operational cloud environment to be forensically ready for digital forensic investigations, without altering the functionality of the OpenStack cloud architecture, by leveraging the ISO/IEC 27043 guidelines on security monitoring.
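
    A minimal sketch of the harvesting step, assuming an openstacksdk connection (credentials read from the usual OS_* environment variables) and a simple SQLite store with an integrity hash per record; the prototype's actual agents, schema, and evidence handling are not described at this level of detail.

        # Sketch: collect instance metadata from OpenStack and store each
        # record with a SHA-256 digest so later tampering can be detected.
        import hashlib, json, sqlite3
        import openstack  # openstacksdk

        conn = openstack.connect(cloud="envvars")  # credentials from OS_* vars
        db = sqlite3.connect("pde_store.db")       # hypothetical PDE database
        db.execute("CREATE TABLE IF NOT EXISTS pde (record TEXT, sha256 TEXT)")

        for server in conn.compute.servers():
            record = json.dumps({"id": server.id, "name": server.name,
                                 "status": server.status}, sort_keys=True)
            digest = hashlib.sha256(record.encode()).hexdigest()
            db.execute("INSERT INTO pde VALUES (?, ?)", (record, digest))
        db.commit()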

    Frameup: An Incriminatory Attack on Storj: A Peer to Peer Blockchain Enabled Distributed Storage System

    In this work we present a primary account of frameup, an incriminatory attack made possible by existing implementations of distributed peer-to-peer storage. The frameup attack shows that an adversary has the ability to store unencrypted data on the hard drives of people renting out their hard drive space. This is important to forensic examiners as it opens the door to possibly framing an innocent victim. Our work employs Storj as an example technology, due to its popularity and market size. Storj is a blockchain-enabled system that allows people to rent out their hard drive space to other users around the world, employing a cryptocurrency token that is used to pay for the services rendered. It uses blockchain features like a transaction ledger, public/private key encryption, and cryptographic hash functions, but this work is not centered around blockchain. We discuss two frameup attacks, a preliminary and an optimized attack, both of which take advantage of Storj's implementation. Results illustrate that Storj allows a potential adversary to store incriminating unencrypted files, or parts of files, that are viewable on people's systems when they rent out their unused hard drive space. We offer potential solutions to mitigate the discovered attacks, a tool to review whether a person has been a victim of a frameup attack, and a mechanism for showing that the files were stored on a hard drive without the renter's knowledge. Our hope is that this work will inspire future security and forensics research directions in the exploration of distributed peer-to-peer storage systems that embrace blockchain and cryptocurrency tokens.
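
    The victim-review tool itself is not detailed in the abstract. As a hedged illustration of how such a check might work, the sketch below flags stored shards that begin with well-known plaintext file signatures, on the reasoning that properly encrypted shards should be indistinguishable from random bytes; the shard path and signature list are assumptions.

        # Sketch: scan a rented-storage directory for shards that look like
        # unencrypted files rather than ciphertext.
        import os

        SIGNATURES = {b"\xff\xd8\xff": "JPEG", b"\x89PNG": "PNG",
                      b"%PDF": "PDF", b"PK\x03\x04": "ZIP"}

        def scan_shards(shard_dir):
            """Yield (path, format) for shards starting with a known signature."""
            for name in os.listdir(shard_dir):
                path = os.path.join(shard_dir, name)
                if not os.path.isfile(path):
                    continue
                with open(path, "rb") as f:
                    head = f.read(8)
                for magic, fmt in SIGNATURES.items():
                    if head.startswith(magic):
                        yield path, fmt

        for path, fmt in scan_shards("/srv/storj/shards"):  # hypothetical path
            print(f"possible unencrypted {fmt} content: {path}")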

    Provenance Analysis in Virtualized Environments

    With the unprecedented need for remote working and virtual retail, there has been a worldwide surge in the adoption of cloud and edge computing. At the same time, the significant reliance on virtual services has rendered the underlying virtualized environments supporting those services an attractive target for cybercriminals. Provenance-based solutions exist for identifying the root causes of security incidents and for preventing threats by tracing the relationships between events at lower abstraction levels (e.g., the system calls of an operating system). However, the sheer scale of virtualized environments means that such solutions generate provenance graphs that are impractically large and complex for human analysts to interpret, especially in environments with tens of thousands of users and inter-connected resources. Moreover, most intended user actions (e.g., creating a virtual function) generate a large number of events at lower abstraction levels, and it is typically challenging to associate those triggered operations with the users' intended actions, which further hinders understanding of the provenance graphs. Finally, most works rely on human analysts to interpret provenance graphs into human-readable forensic reports. The main focus of this thesis is therefore to facilitate the investigation and prevention of security incidents through practical provenance-based solutions in virtualized environments such as clouds and network functions virtualization (NFV). First, we propose a cloud management-level provenance model that facilitates forensic investigations by capturing the dependencies between cloud management operations instead of low-level system calls. Based on this model, we design a framework to construct management-level provenance graphs and prune operations that are irrelevant to detected security incidents. Second, we propose an approach to preventing security incidents in clouds based on the management-level provenance graph. Third, we propose the first multi-level provenance system for NFV, built to capture the relationships between management operations across different levels of the NFV stack and to increase the interpretability of the logged information by leveraging the inherent cross-level dependencies. Fourth, we propose a solution that bridges the gap between human understanding of natural language and data provenance by automatically generating forensic reports explaining the root causes of security incidents based on the provenance graphs.
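
    A minimal sketch of the management-level pruning idea, using networkx and invented operation names: only the causal ancestors of a detected incident are kept, so analysts see the relevant chain of management operations rather than the full graph.

        # Sketch: prune a management-level provenance graph to the causal
        # path leading to a detected incident.
        import networkx as nx

        g = nx.DiGraph()
        g.add_edges_from([
            ("create_user", "create_vm"),
            ("create_vm", "attach_volume"),
            ("attach_volume", "incident:data_leak"),
            ("create_vm", "create_snapshot"),  # unrelated branch, pruned below
        ])

        def prune_to_incident(graph, incident):
            """Keep only the incident node and its causal ancestors."""
            relevant = nx.ancestors(graph, incident) | {incident}
            return graph.subgraph(relevant).copy()

        print(prune_to_incident(g, "incident:data_leak").nodes())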

    A cloud-based remote sensing data production system

    The data processing capability of existing remote sensing systems has not kept pace with the amount of data typically received and needing to be processed. Nor are existing product services capable of providing users with a variety of remote sensing data sources to select from. Therefore, in this paper, we present a product generation programme using multisource remote sensing data across distributed data centers in a cloud environment, to compensate for the low production efficiency, limited product types and simple services of existing systems. The programme adopts a “master–slave” architecture. Specifically, the master center is mainly responsible for receiving and parsing production orders, task and data scheduling, feedback of results, and so on; the slave centers are the distributed remote sensing data centers, which store one or more types of remote sensing data and are mainly responsible for executing production tasks. In general, each production task runs on only one data center, and data scheduling among centers adopts a “minimum data transferring” strategy. The logical workflow of each production task is organized based on a knowledge base and then turned into the actual executed workflow by Kepler. In addition, the scheduling strategy for each production task depends mainly on Ganglia monitoring results, so computing resources can be allocated or expanded adaptively. Finally, we evaluated the proposed programme with test experiments performed at global, regional and local scales; the results showed that the proposed cloud-based remote sensing production system can handle massive remote sensing data and generate different products, as well as provide on-demand remote sensing computing and information services.
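
    A minimal sketch of what a “minimum data transferring” placement decision could look like: each task runs at the slave centre that already holds the largest volume of its inputs. The centre names, dataset sizes, and cost function are assumptions, not the paper's actual scheduler.

        # Sketch: choose the data centre minimising inbound data transfer.
        def pick_centre(task_inputs, holdings):
            """task_inputs: {dataset: size_gb}; holdings: {centre: set(datasets)}."""
            def transfer_cost(centre):
                return sum(size for ds, size in task_inputs.items()
                           if ds not in holdings[centre])
            return min(holdings, key=transfer_cost)

        holdings = {"centre_a": {"landsat8"}, "centre_b": {"landsat8", "modis"}}
        print(pick_centre({"landsat8": 40, "modis": 25}, holdings))  # -> centre_b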

    Digital forensics cloud log unification: Implementing CADF in Apache CloudStack

    Cloud computing is an important step in our era, delivering many advantages in business and our daily lives. However, as with every new technology, various challenges come to light, one of them being the misuse of cloud computing environments for criminal activities. Cloud service providers therefore have to establish adequate forensic capabilities in order to support forensic investigations in the event of illegal activities in the cloud. To help such investigations, this paper deals with log format unification in cloud platforms using the Distributed Management Task Force's (DMTF) Cloud Auditing Data Federation (CADF) standard. CADF event logging is already utilised in the widely used OpenStack, and we have modified the Apache CloudStack platform to become forensically sound. Furthermore, we evaluated the existing CloudStack platform, along with the proposed CADF event model implementation, against the principles of the Association of Chief Police Officers (ACPO) on handling digital evidence. The results are provided in this paper, together with an automated parsing tool/CADF event consumer named C.Lo.D, which is freely available and can be downloaded from GitHub.
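
    For readers unfamiliar with CADF, the sketch below shows the general shape of a CADF activity event (the field names follow the DMTF specification) together with a toy consumer in the spirit of C.Lo.D; the field values and the one-line summary format are illustrative assumptions.

        # Sketch: a CADF activity event and a minimal consumer that reduces
        # it to a human-readable audit line.
        import json

        event_json = """{
          "typeURI": "http://schemas.dmtf.org/cloud/audit/1.0/event",
          "eventType": "activity",
          "eventTime": "2024-01-01T12:00:00.000000+0000",
          "action": "create",
          "outcome": "success",
          "initiator": {"id": "user-42", "typeURI": "service/security/account/user"},
          "target": {"id": "vm-7", "typeURI": "service/compute/servers"}
        }"""

        def summarise(raw):
            e = json.loads(raw)
            return (f'{e["eventTime"]}: {e["initiator"]["id"]} '
                    f'{e["action"]} {e["target"]["id"]} -> {e["outcome"]}')

        print(summarise(event_json))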

    A forensic acquisition and analysis system for IaaS

    Cloud computing is a promising next-generation computing paradigm that offers significant economic benefits to both commercial and public entities. It also provides accessibility, simplicity, and portability for its customers. Due to the unique combination of characteristics that cloud computing introduces (including on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service), digital investigations face various technical, legal, and organizational challenges in keeping up with current developments in the field. A wide variety of issues need to be resolved in order to perform a proper digital investigation in the cloud environment. This paper examines the challenges in cloud forensics identified in the current research literature, explores the existing proposals and technical solutions addressed in the respective research, and highlights the open problems that need further effort. The analysis of the literature finds that it would be difficult, if not impossible, to perform an investigation and discovery in the cloud environment without relying on cloud service providers (CSPs). Dependence on the CSP is therefore ranked as the greatest challenge when investigators need to acquire evidence from cloud systems in a timely yet forensically sound manner. Thus, a fully independent model that requires no intervention or cooperation from the cloud provider is proposed. This model provides a different approach to a forensic acquisition and analysis system (FAAS) in an Infrastructure as a Service model. FAAS seeks to provide a richer and more complete set of admissible evidence than current CSPs provide, with no requirement for CSP involvement or modification of the CSP's underlying architecture.
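
    A minimal sketch of one CSP-independent acquisition step, assuming access from inside the guest: stream a block device to an image file while computing a SHA-256 digest so the image's integrity can later be demonstrated. The device path and output file are assumptions; FAAS's actual pipeline is more involved than this.

        # Sketch: acquire a block device image and record its digest for
        # chain-of-custody purposes.
        import hashlib

        def acquire(device, out_path, chunk=1024 * 1024):
            h = hashlib.sha256()
            with open(device, "rb") as src, open(out_path, "wb") as dst:
                while True:
                    block = src.read(chunk)
                    if not block:
                        break
                    dst.write(block)
                    h.update(block)
            return h.hexdigest()

        # print(acquire("/dev/vda", "evidence.img"))  # requires root in the VM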