1,653 research outputs found

    Calm before the storm: the challenges of cloud computing in digital forensics

    Get PDF
    Cloud computing is a rapidly evolving information technology (IT) phenomenon. Rather than procure, deploy and manage a physical IT infrastructure to host their software applications, organizations are increasingly deploying their infrastructure into remote, virtualized environments, often hosted and managed by third parties. This development has significant implications for digital forensic investigators, equipment vendors, law enforcement, as well as corporate compliance and audit departments (among others). Much of digital forensic practice assumes careful control and management of IT assets (particularly data storage) during the conduct of an investigation. This paper summarises the key aspects of cloud computing and analyses how established digital forensic procedures will be invalidated in this new environment. Several new research challenges addressing this changing context are also identified and discussed

    A Revised Forensic Process for Aligning the Investigation Process with the Design of Forensic-Enabled Cloud Services

    Get PDF
    © Springer Nature Switzerland AG 2020. Designing and implementing cloud services without taking the forensic requirements and the investigation process into consideration makes the acquisition and examination of data complex and demanding, and the evidence gathered from the cloud may not be acceptable and admissible in court. A gap exists in the literature when it comes to supporting software engineers in eliciting and modelling forensic-related requirements; to fill it, software engineers should develop cloud services in a forensically sound manner. In this paper, a brief description of the cloud forensic-enabled framework is presented (adding some new elements) so as to clarify the role that the design of forensic-enabled cloud services plays in a cloud forensic investigation. A validation of the forensic requirements is also produced by aligning the stages of the cloud forensic investigation process with the framework’s forensic requirements. In this way, on the one hand, a strong relationship is built between these two elements and emphasis is given to the role of the forensic requirements and their necessity in supporting the investigation process. On the other hand, the alignment assists in identifying the degree of forensic readiness of a cloud service against a forensic investigation
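
    The alignment the paper describes lends itself to a simple mapping from investigation stages to the forensic requirements a cloud service must satisfy. The sketch below is a minimal illustration of that idea in Python; the stage and requirement names are assumptions chosen for demonstration, not taken from the paper's actual framework.

```python
# Hypothetical mapping of cloud forensic investigation stages to the
# forensic requirements a cloud service must satisfy. All names here
# are illustrative assumptions, not the paper's framework.
STAGE_REQUIREMENTS = {
    "identification": {"traceability", "internal_procedures"},
    "collection":     {"isolation", "data_provenance", "integrity"},
    "examination":    {"integrity", "confidentiality"},
    "presentation":   {"legal_matters", "documentation"},
}

def forensic_readiness(supported: set) -> float:
    """Fraction of investigation stages whose requirements are all
    covered by the forensic requirements a service supports."""
    covered = sum(reqs <= supported for reqs in STAGE_REQUIREMENTS.values())
    return covered / len(STAGE_REQUIREMENTS)

service = {"traceability", "isolation", "data_provenance",
           "integrity", "confidentiality", "internal_procedures"}
print(f"readiness: {forensic_readiness(service):.0%}")  # 3 of 4 stages covered
```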

    An extensive research survey on data integrity and deduplication towards privacy in cloud storage

    Get PDF
    Owing to the highly distributed nature of the cloud storage system, incorporating a higher degree of security for vulnerable data is a challenging task. Apart from various security concerns, data privacy is still one of the unsolved problems in this regard. The prime reason is that existing approaches to data privacy do not offer data integrity and a secure data deduplication process at the same time, both of which are highly essential to ensure a higher degree of resistance against all forms of dynamic threats over cloud and internet systems. Data integrity and data deduplication are therefore associated phenomena that influence data privacy. This manuscript discusses the explicit research contributions toward data integrity, data privacy, and data deduplication. The manuscript also contributes towards highlighting the potential open research issues, followed by a discussion of possible future directions of work towards addressing the existing problems
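
    A recurring building block in the literature on secure deduplication is convergent encryption, where the encryption key is derived from the content itself so that identical files deduplicate even though they are stored encrypted. The Python sketch below illustrates the general technique (not any specific scheme from the survey) paired with a hash-based integrity check; it assumes the `cryptography` package is available.

```python
import base64
import hashlib
from cryptography.fernet import Fernet

def convergent_encrypt(plaintext: bytes):
    """Encrypt with a key derived from the content (convergent encryption)."""
    key_material = hashlib.sha256(plaintext).digest()   # content-derived key
    key = base64.urlsafe_b64encode(key_material)        # Fernet expects base64
    token = Fernet(key).encrypt(plaintext)
    tag = hashlib.sha256(key_material).hexdigest()      # dedup/lookup tag
    return key, token, tag

def verify_and_decrypt(key: bytes, token: bytes, expected_tag: str) -> bytes:
    plaintext = Fernet(key).decrypt(token)
    # Integrity check: recompute the tag from the recovered plaintext.
    recovered = hashlib.sha256(hashlib.sha256(plaintext).digest()).hexdigest()
    if recovered != expected_tag:
        raise ValueError("integrity check failed")
    return plaintext

# Two clients uploading the same file produce the same tag, so the storage
# server can keep a single encrypted copy without ever seeing the plaintext.
k1, t1, tag1 = convergent_encrypt(b"quarterly report")
k2, t2, tag2 = convergent_encrypt(b"quarterly report")
assert tag1 == tag2
assert verify_and_decrypt(k1, t1, tag1) == b"quarterly report"
```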

    KFREAIN: Design of A Kernel-Level Forensic Layer for Improving Real-Time Evidence Analysis Performance in IoT Networks

    Get PDF
    An exponential increase in the number of attacks on IoT networks makes it essential to formulate attack-level mitigation strategies. This paper proposes the design of a scalable kernel-level forensic layer that improves real-time evidence analysis performance and assists in efficient pattern analysis of the collected data samples. It has an inbuilt Temporal Blockchain Cache (TBC), which is refreshed after the analysis of every set of evidence. The model uses a multidomain feature extraction engine that combines lightweight Fourier, Wavelet, Convolutional, Gabor, and Cosine feature sets, selected by a stochastic Bacterial Foraging Optimizer (BFO) to identify high-variance features. The selected features are processed by an ensemble learning (EL) classifier that uses low-complexity classifiers, reducing the energy consumption during analysis by 8.3% when compared with application-level forensic models. The model also showcased 3.5% higher accuracy, 4.9% higher precision, and 4.3% higher recall in attack-event identification when compared with standard forensic techniques. Due to kernel-level integration, the model also reduces the delay needed for forensic analysis on different network types by 9.5%, making it useful for real-time and heterogeneous network scenarios
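
    As a rough illustration of the pipeline the abstract describes, the Python sketch below extracts lightweight Fourier and Cosine features from traffic windows, selects high-variance features, and classifies with an ensemble of low-complexity learners. It is an assumption-laden stand-in: the Wavelet, Convolutional, and Gabor feature sets are omitted for brevity, a simple variance threshold replaces the paper's Bacterial Foraging Optimizer, and the data is synthetic.

```python
import numpy as np
from scipy.fft import fft, dct
from sklearn.ensemble import VotingClassifier
from sklearn.feature_selection import VarianceThreshold
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def multidomain_features(window: np.ndarray) -> np.ndarray:
    """Concatenate lightweight spectral features from one traffic window."""
    spectrum = np.abs(fft(window))[: len(window) // 2]  # Fourier magnitudes
    cosine = dct(window, norm="ortho")[:16]             # leading DCT coefficients
    stats = np.array([window.mean(), window.std(), window.min(), window.max()])
    return np.concatenate([spectrum[:16], cosine, stats])

# Synthetic training data: one row of packet-size samples per capture window.
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 64))
labels = rng.integers(0, 2, size=200)                   # 1 = attack event

X = np.vstack([multidomain_features(w) for w in windows])
X = VarianceThreshold(threshold=0.1).fit_transform(X)   # stand-in for BFO selection

# Ensemble of low-complexity classifiers, echoing the abstract's EL stage.
ensemble = VotingClassifier([
    ("tree", DecisionTreeClassifier(max_depth=4)),
    ("nb", GaussianNB()),
    ("lr", LogisticRegression(max_iter=500)),
])
ensemble.fit(X, labels)
print(ensemble.predict(X[:5]))
```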

    Methodology for the Automated Metadata-Based Classification of Incriminating Digital Forensic Artefacts

    Full text link
    The ever-increasing volume of data in digital forensic investigation is one of the most discussed challenges in the field. Usually, most of the file artefacts on seized devices are not pertinent to the investigation, and manually retrieving suspicious files relevant to the investigation is akin to finding a needle in a haystack. In this paper, a methodology for the automatic prioritisation of suspicious file artefacts (i.e., file artefacts that are pertinent to the investigation) is proposed to reduce the manual analysis effort required. This methodology is designed to work in a human-in-the-loop fashion; in other words, it predicts/recommends that an artefact is likely to be suspicious rather than giving the final analysis result. A supervised machine learning approach is employed, which leverages the recorded results of previously processed cases. The processes of feature extraction, dataset generation, training, and evaluation are presented in this paper. In addition, a toolkit for data extraction from disk images is outlined, which enables this method to be integrated with the conventional investigation process and to work in an automated fashion
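
    A minimal sketch of the kind of human-in-the-loop prioritisation the paper proposes is shown below; the metadata features, labels, and model choice are illustrative assumptions rather than the paper's actual design. The key point is that the classifier's probability output ranks artefacts for review instead of issuing a final verdict.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical metadata features per file artefact:
# [size_bytes, mtime_hour, is_hidden, depth_in_tree, extension_id]
X_train = np.array([
    [120_000, 3, 1, 7, 4],
    [2_048,  14, 0, 2, 1],
    [98_304,  2, 1, 9, 4],
    [512,    11, 0, 3, 2],
])
y_train = np.array([1, 0, 1, 0])  # labels from previously processed cases

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Human-in-the-loop: rank new artefacts by predicted suspiciousness
# rather than emitting a final analysis result.
X_new = np.array([[150_000, 4, 1, 8, 4], [1_024, 10, 0, 1, 1]])
scores = model.predict_proba(X_new)[:, 1]
for rank, idx in enumerate(np.argsort(scores)[::-1], start=1):
    print(f"rank {rank}: artefact {idx}, suspicion score {scores[idx]:.2f}")
```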

    An Automated Approach for Digital Forensic Analysis of Heterogeneous Big Data

    Get PDF
    The major challenges with big data examination and analysis are volume, complex interdependence across content, and heterogeneity. The examination and analysis phases are considered essential to a digital forensics process. However, traditional techniques for forensic investigation use one or more forensic tools to examine and analyse each resource, and when multiple resources are included in one case, the inability to cross-correlate findings often leads to inefficiencies in processing and identifying evidence. Furthermore, most current forensics tools cannot cope with large volumes of data. This paper develops a novel framework for digital forensic analysis of heterogeneous big data. The framework focuses upon three core issues: data volume, heterogeneous data, and the investigator's cognitive load in understanding the relationships between artefacts. The proposed approach uses metadata to address the data volume problem, semantic web ontologies to handle heterogeneous data sources, and artificial intelligence models to support the automated identification and correlation of artefacts, reducing the burden placed upon the investigator to understand the nature and relationship of the artefacts
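
    To make the ontology-based correlation idea concrete, the sketch below (using rdflib) maps records from hypothetical heterogeneous sources onto a shared minimal vocabulary and then cross-correlates artefacts that share an account. The namespace, predicates, and records are invented for illustration and are not from the paper's ontology.

```python
from rdflib import Graph, Literal, Namespace

# Minimal shared vocabulary: every artefact, whatever its source tool,
# is mapped to the same predicates. All names here are hypothetical.
EX = Namespace("http://example.org/case#")
g = Graph()
g.bind("ex", EX)

heterogeneous_records = [
    {"id": "browser-17", "account": "alice@mail.test", "tool": "chrome_history"},
    {"id": "chat-203",   "account": "alice@mail.test", "tool": "skype_db"},
    {"id": "doc-88",     "account": "bob@mail.test",   "tool": "office_metadata"},
]
for rec in heterogeneous_records:
    node = EX[rec["id"]]
    g.add((node, EX.account, Literal(rec["account"])))
    g.add((node, EX.sourceTool, Literal(rec["tool"])))

# Cross-correlate artefacts from different tools that share an account.
q = """
SELECT ?a ?b WHERE {
    ?a ex:account ?acct . ?b ex:account ?acct .
    FILTER(STR(?a) < STR(?b))
}
"""
for a, b in g.query(q, initNs={"ex": EX}):
    print(f"correlated: {a} <-> {b}")
```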

    CeFF: A Framework for Forensics-Enabled Cloud Investigation

    Get PDF
    Today, cloud computing has become a transformative model for organizations, businesses, and governments, one that brings huge potential and has grown popular for its pay-as-you-go pricing, on-demand services, scalability, and efficiency. However, cloud computing raises concerns for forensic data because the architecture of the cloud system is not appropriately taken into account. Due to the distributed nature of the cloud system, many aspects of a forensic investigation, such as data collection, data storage, identifying the crime target, and establishing data violations, are difficult to achieve. Investigating incidents in the cloud environment is a challenging task because the forensic investigator still needs to rely on third parties, such as the cloud service provider, to perform investigation tasks. This makes the overall forensic process difficult to complete within a set duration and to present to the court. Recently, some cloud forensics studies have addressed challenges such as evidence collection, data acquisition, and identifying incidents. However, there is still a research gap in terms of the consistency of analysing forensic evidence from a distributed environment and of methodologies for analysing forensic data in the cloud. This thesis contributes towards addressing these research gaps. In particular, this work proposes a forensic investigation framework, CeFF: a framework for forensics-enabled cloud investigation, to investigate evidence in the cloud computing environment. The framework includes a set of concepts from organisational, technical, and legal perspectives, which gives a holistic view of analysing cybercrime from the organisational context where the crime has occurred, through the technical context, to the legal impact. CeFF also includes a systematic process that uses these concepts to perform the investigation. The cloud-enabled forensics framework meets all forensics-related requirements, such as collecting data, examining it, presenting the report, and identifying the potential risks to consider while investigating evidence in the cloud computing environment. Finally, the proposed CeFF is applied to a real-life example to validate its applicability. The result shows that CeFF supports analysing forensic data for a crime that occurred in a cloud-based system in a systematic way
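
    Purely as an illustration of the three perspectives the thesis names, the sketch below models a cloud forensic case as organisational, technical, and legal contexts; every field name here is a hypothetical simplification, not CeFF's actual concept model.

```python
from dataclasses import dataclass, field

# Hypothetical simplification of CeFF's three perspectives; the field
# names are illustrative assumptions, not the thesis's concept model.
@dataclass
class OrganisationalContext:
    actors: list[str]
    affected_assets: list[str]

@dataclass
class TechnicalContext:
    evidence_sources: list[str]   # e.g. provider logs, VM snapshots
    acquisition_method: str

@dataclass
class LegalContext:
    jurisdiction: str
    chain_of_custody: list[str] = field(default_factory=list)

@dataclass
class CloudForensicCase:
    organisational: OrganisationalContext
    technical: TechnicalContext
    legal: LegalContext

case = CloudForensicCase(
    OrganisationalContext(["tenant admin"], ["customer database"]),
    TechnicalContext(["provider audit logs", "VM snapshot"], "API export"),
    LegalContext("EU", ["collected 2024-02-01 by investigator A"]),
)
print(case.legal.jurisdiction)
```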

    A Novel User Oriented Network Forensic Analysis Tool

    Get PDF
    In the event of a cybercrime, it is necessary to examine the suspect's digital device(s) in a forensic fashion so that the culprit can be presented in court along with the extracted evidence. However, factors such as the existence and availability of anti-forensic tools/techniques and the increasing replacement of hard disk drives with solid-state disks can eradicate critical evidence and/or ruin its integrity. Therefore, having an alternative source of evidence with a lesser chance of being tampered with can be beneficial for the investigation. Organisational network traffic can fit this role, as it is an independent source of evidence and contains a copy of all online user activities. The limitations of prevailing network traffic analysis techniques – packet based and flow based – are reflected as certain challenges in the investigation: the enormous volume and increasingly encrypted nature of traffic, the dynamic nature of the IP addresses of users' devices, and the difficulty in extracting meaningful information from raw traffic are among them. Furthermore, current network forensic tools, unlike sophisticated computer forensic tools, are limited in their capability to offer functionalities such as collaborative working, visualisation, reporting, and extracting meaningful user-level information. These factors increase the complexity of the analysis and the time and effort required from the investigator. The research goal was to design a system that can assist in the investigation by minimising the effects of the aforementioned challenges, thereby reducing the cognitive load on the investigator, which, the researcher thinks, can take the investigator one step closer to the culprit. The novelty of this system comes from a newly proposed interaction-based analysis approach, which extracts online user activities from raw network metadata. The practicality of the novel interaction-based approach was tested with an experimental methodology involving an initial phase in which the researcher sought to identify unique signatures for activities performed on popular Internet applications (BBC, Dropbox, Facebook, Hotmail, Google Docs, Google Search, Skype, Twitter, Wikipedia, and YouTube) from the researcher's own network metadata. With signatures obtained, the project moved to the second phase of the experiment, in which a much larger dataset (network traffic collected from 27 users over 2 months) was analysed. Results showed that it is possible to extract unique signatures of online user activities from raw network metadata, although, due to the complexities of the applications, signatures were not found for some activities. The interaction-based approach was able to reduce the data volume by eliminating noise (machine-to-machine communication packets) and to work around the encryption issue by using only the network metadata. A set of system requirements was generated, based on which a web-based, client-server architecture for the proposed system (i.e. the User-Oriented Network Forensic Analysis Tool) was designed. The system functions on a case-management premise while minimising the challenges identified earlier. The system architecture led to the development of a functional prototype, and an evaluation of the system by academic experts from the field acted as a feedback mechanism. While the evaluators were satisfied with the system's capability to assist in the investigation and to meet the requirements, drawbacks such as the inability to analyse real-time traffic and to meet HCI standards were pointed out. The future work of the project will involve automated signature extraction, real-time processing, and the facilitation of integrated visualisation
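
    To give a flavour of the interaction-based approach, the sketch below matches flow-level metadata (TLS server name, port, client byte count) against hand-written activity signatures; the signature values and record format are invented for illustration and are far simpler than the signatures derived in the thesis.

```python
import re
from dataclasses import dataclass

# Hypothetical activity signatures over flow metadata only (no payloads),
# in the spirit of the thesis's interaction-based analysis.
@dataclass
class Signature:
    activity: str
    sni_pattern: str   # TLS server-name pattern
    port: int
    min_bytes: int     # lower bound on bytes sent by the client

SIGNATURES = [
    Signature("Facebook: post status", r".*facebook\.com", 443, 1_500),
    Signature("Dropbox: file upload",  r".*dropbox\.com",  443, 50_000),
    Signature("Google: search query",  r".*google\.com",   443, 600),
]

def classify_flow(sni: str, port: int, bytes_out: int) -> list:
    """Return the user activities whose signature this flow record matches."""
    return [s.activity for s in SIGNATURES
            if s.port == port
            and re.fullmatch(s.sni_pattern, sni)
            and bytes_out >= s.min_bytes]

# Example flow-metadata records: (server name, port, client bytes).
flows = [("www.dropbox.com", 443, 120_000), ("www.google.com", 443, 800)]
for sni, port, size in flows:
    print(sni, "->", classify_flow(sni, port, size) or ["no signature match"])
```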