
    CloudMe forensics : a case of big-data investigation

    The significant increase in the volume, variety and velocity of data complicates cloud forensic efforts, as such big data will, at some point, become too computationally expensive to fully extract and analyze in a timely manner. Thus, it is important for a digital forensic practitioner to have well-rounded knowledge of the most relevant data artefacts that could be forensically recovered from the cloud product under investigation. In this paper, CloudMe, a popular cloud storage service, is studied. The types and locations of the artefacts relating to the installation and uninstallation of the client application, logging in and out, and file synchronization events from the computer desktop and mobile clients are described. Findings from this research will pave the way towards the development of tools and techniques (e.g. data mining techniques) for cloud-enabled big data endpoint forensic investigations.
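
    To illustrate the kind of endpoint artifact triage such studies motivate, the following is a minimal Python sketch that walks candidate client-artifact directories and records hashes and timestamps for later review. The directory paths and the triage helper below are hypothetical placeholders, not locations reported in the paper.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical candidate locations for cloud-client artefacts; the actual
# paths reported in the paper may differ by OS and client version.
CANDIDATE_PATHS = [
    Path.home() / "AppData/Local/CloudMe",                 # assumed Windows location
    Path.home() / "Library/Application Support/CloudMe",   # assumed macOS location
    Path.home() / ".config/CloudMe",                       # assumed Linux location
]

def triage(paths=CANDIDATE_PATHS):
    """Walk candidate directories and record size, timestamp and hash metadata."""
    records = []
    for base in paths:
        if not base.exists():
            continue
        for item in base.rglob("*"):
            if not item.is_file():
                continue
            stat = item.stat()
            records.append({
                "path": str(item),
                "size": stat.st_size,
                "modified": datetime.fromtimestamp(stat.st_mtime, timezone.utc).isoformat(),
                "sha256": hashlib.sha256(item.read_bytes()).hexdigest(),
            })
    return records

if __name__ == "__main__":
    print(json.dumps(triage(), indent=2))
```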

    Experience Constructing the Artifact Genome Project (AGP): Managing the Domain's Knowledge One Artifact at a Time

    While various tools have been created to assist the digital forensics community with acquiring, processing, and organizing evidence and indicating the existence of artifacts, very few attempts have been made to establish a centralized system for archiving artifacts. The Artifact Genome Project (AGP) aims to create the largest vetted and freely available digital forensics repository for Curated Forensic Artifacts (CuFAs). This paper details the experience of building, implementing, and maintaining such a system by sharing design decisions, lessons learned, and future work. We also discuss the impact of AGP in both the professional and academic realms of digital forensics. Our work shows promise within the digital forensics academic community in championing the effort to curate digital forensic artifacts by integrating AGP into courses, research endeavors, and collaborative projects.

    Greening cloud-enabled big data storage forensics : Syncany as a case study

    The pervasive nature of cloud-enabled big data storage solutions introduces new challenges in the identification, collection, analysis, preservation and archiving of digital evidence. Investigation of such complex platforms to locate and recover traces of criminal activities is a time-consuming process. Hence, cyber forensics researchers are moving towards streamlining the investigation process by locating and documenting residual artefacts (evidence) of forensic value from users’ activities on cloud-enabled big data platforms, in order to reduce the investigation time and resources involved in a real-world investigation. In this paper, we seek to determine the data remnants of forensic value from the Syncany private cloud storage service, a popular storage engine for big data platforms. We demonstrate the types and the locations of the artefacts that can be forensically recovered. Findings from this research contribute to an in-depth understanding of cloud-enabled big data storage forensics, which can result in reduced time and resources spent on real-world investigations involving Syncany-based cloud platforms.

    Forensic investigation of cooperative storage cloud service: Symform as a case study

    Researchers envisioned Storage as a Service (StaaS) as an effective solution to the distributed management of digital data. Cooperative storage cloud forensics is a relatively new and under-explored area of research. Using Symform as a case study, we seek to determine the data remnants from the use of cooperative cloud storage services. In particular, we consider both mobile devices and personal computers running various popular operating systems, namely Windows 8.1, Mac OS X Mavericks 10.9.5, Ubuntu 14.04.1 LTS, iOS 7.1.2, and Android KitKat 4.4.4. Potential artefacts recovered during the research include data relating to the installation and uninstallation of the cloud applications, logging in to and out of a Symform account using the client application, and file synchronization, as well as the associated timestamp information. This research contributes to an in-depth understanding of the types of terrestrial artifacts that are likely to remain after the use of cooperative storage cloud services on client devices.

    API-Based Acquisition of Evidence from Cloud Storage Providers

    Cloud computing and cloud storage services, in particular, pose a new challenge to digital forensic investigations. Currently, evidence acquisition for such services still follows the traditional approach of collecting artifacts on a client device. In this work, we show that such an approach not only requires substantial upfront investment in reverse engineering each service, but is also inherently incomplete, as it misses prior versions of the artifacts, as well as cloud-only artifacts that do not have standard serialized representations on the client. We introduce the concept of API-based evidence acquisition for cloud services, which addresses these concerns by utilizing the officially supported API of the service. To demonstrate the utility of this approach, we present a proof-of-concept acquisition tool, kumodd, which can acquire evidence from four major cloud storage providers: Google Drive, Microsoft OneDrive, Dropbox, and Box. The implementation provides both command-line and web user interfaces, and can be readily incorporated into established forensic processes.
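
    As an illustration of API-based acquisition (a sketch of the general idea, not kumodd's actual implementation), the snippet below enumerates files and their revision metadata through the public Google Drive v3 REST endpoints, assuming an OAuth bearer token has already been obtained; the list_files_with_revisions helper name is ours.

```python
import requests

DRIVE_API = "https://www.googleapis.com/drive/v3"

def list_files_with_revisions(token: str):
    """Enumerate Drive files and their revision metadata via the public REST API."""
    headers = {"Authorization": f"Bearer {token}"}
    files, page_token = [], None
    while True:
        params = {"fields": "nextPageToken,files(id,name,mimeType,modifiedTime)"}
        if page_token:
            params["pageToken"] = page_token
        resp = requests.get(f"{DRIVE_API}/files", headers=headers, params=params)
        resp.raise_for_status()
        payload = resp.json()
        for f in payload.get("files", []):
            # Prior versions are a cloud-only artifact: fetch the revision list too.
            rev = requests.get(
                f"{DRIVE_API}/files/{f['id']}/revisions",
                headers=headers,
                params={"fields": "revisions(id,modifiedTime,originalFilename)"},
            )
            f["revisions"] = rev.json().get("revisions", []) if rev.ok else []
            files.append(f)
        page_token = payload.get("nextPageToken")
        if not page_token:
            break
    return files
```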

    Forensic investigation of P2P cloud storage services and backbone for IoT networks : BitTorrent Sync as a case study

    Cloud computing has been regarded as the technology enabler for the Internet of Things (IoT). To ensure the most effective collection of IoT-based evidence, it is vital for forensic practitioners to possess a contemporary understanding of the artefacts left by different cloud services. In this paper, we seek to determine the data remnants from the use of BitTorrent Sync version 2.0. Findings from our research using mobile and computer devices running Windows 8.1, Mac OS X Mavericks 10.9.5, Ubuntu 14.04.1 LTS, iOS 7.1.2, and Android KitKat 4.4.4 suggest that artefacts relating to installation, uninstallation, log-in, log-off, and file synchronisation can be recovered, and these are potential sources of evidence in IoT forensics. We also present a forensically sound investigation methodology for BitTorrent Sync.

    Forensic Analysis of G Suite Collaborative Protocols

    Widespread adoption of cloud services is fundamentally changing the way IT services are delivered and how data is stored. Current forensic tools and techniques have been slow to adapt to the new challenges and demands of collecting and analyzing cloud artifacts. Traditional methods focusing only on client data collection are incomplete, as the client may hold only a (partial) snapshot and miss cloud-native artifacts that may contain valuable historical information. In this work, we demonstrate the importance of recovering and analyzing cloud-native artifacts using G Suite as a case study. We develop a tool that extracts and processes the history of Google Documents and Google Slides by reverse engineering the web application's private protocol. Combined with previous work that has focused on API-based acquisition of cloud drives, this presents a more complete solution to cloud forensics, and it generalizes to any cloud service that maintains a detailed log of revisions.
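
    The forensic value of a revision log is that a document can be reconstructed at any point in time by replaying its edit operations. The sketch below replays a hypothetical stream of insert/delete operations; the operation schema is illustrative only and is not the private protocol recovered in the paper.

```python
from typing import Iterable

def replay(ops: Iterable[dict]) -> str:
    """Rebuild document text by replaying an ordered list of edit operations.

    Each op uses a hypothetical schema:
      {"type": "insert", "index": 0, "text": "hello"}
      {"type": "delete", "start": 0, "end": 2}
    Indices are 0-based character offsets into the current document state.
    """
    doc = ""
    for op in ops:
        if op["type"] == "insert":
            i = op["index"]
            doc = doc[:i] + op["text"] + doc[i:]
        elif op["type"] == "delete":
            doc = doc[:op["start"]] + doc[op["end"]:]
    return doc

# Example: reconstructing earlier states from a prefix of the history.
history = [
    {"type": "insert", "index": 0, "text": "meet at noon"},
    {"type": "delete", "start": 8, "end": 12},
    {"type": "insert", "index": 8, "text": "midnight"},
]
print(replay(history))      # full history  -> "meet at midnight"
print(replay(history[:1]))  # earlier state -> "meet at noon"
```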

    Enhanced forensic process model in cloud environment

    Digital forensics practitioners have used conventional digital forensics process models to investigate cloud security incidents. Presently, there is no agreed-upon standard process model in cloud forensics. Moreover, the literature has shown that there is an explicit need for consumers to collect evidence for due-diligence or legal reasons, yet a consumer-oriented cloud forensics process model is still absent from the literature. This has created a lack of consumer preparedness for cloud incident investigations and a dependency on providers for evidence collection. This research addressed these limitations by developing a cloud forensic process model. A design science research methodology was employed to develop the model. A set of requirements, believed to be solutions to the challenges reported in three survey papers, was applied in this research. These requirements were mapped to existing cloud forensic process models to further expose their weaknesses. A set of process models suitable for the extraction of the necessary processes was selected based on the requirements, and the selected models formed the basis of the cloud forensic process model. The processes were consolidated, and the model was proposed to alleviate the problem of dependency on the provider. The model considers three types of digital forensics: forensic readiness, live forensics and post-mortem forensic investigation. In addition, a Cloud-Forensic-as-a-Service model that produces evidence trusted by both consumers and providers through a conflict resolution protocol was also designed. To evaluate the utility and usability of the model, a plausible case scenario was investigated. For validation purposes, the cloud forensic process model, together with its implementation in the case scenario and the set of requirements, was presented to a group of experts for evaluation. The experts rated the effectiveness of the requirements positively. The findings of the research indicate that the model can be used for cloud investigations and is easy for consumers to use and adopt.

    Integrated examination and analysis model for improving mobile cloud forensic investigation

    Advanced forensic techniques have become indispensable for investigating malicious activities in Cloud-based Mobile Applications (CMA). It is challenging to analyse case-specific evidential artifacts from the Mobile Cloud Computing (MCC) environment under forensically sound conditions. Mobile Cloud Investigation (MCI) encounters many research issues in tracing and fine-tuning the relevant evidential artifacts from the MCC environment. This research proposes an integrated Examination and Analysis (EA) model for a generalised application architecture of CMA deployable on the public cloud to trace case-specific evidential artifacts. The proposed model effectively validates MCI and enhances the accuracy and speed of the investigation. In this context, the proposed Forensic Examination and analysis methodology using Data mining (FED) and Forensic Examination and analysis methodology using Data mining and Optimization (FEDO) models address these issues. The FED incorporates key sub-phases such as timeline analysis, hash filtering, data carving, and data transformation to filter out case-specific artifacts. A Long Short-Term Memory (LSTM) assisted forensic methodology decides the amount of potential information to be retained for further investigation and categorizes the forensic evidential artifacts according to their relevance to the crime event. Finally, the FED model constructs the forensic evidence taxonomy and maintains precision and recall above 85% for effective decision-making. FEDO facilitates cloud evidence handling by examining key features and indexing the evidence. The FEDO incorporates several sub-phases to precisely handle the evidence, such as evidence indexing, cross-referencing, and keyword searching. It analyses temporal and geographic information and performs cross-referencing to fine-tune the evidence towards the case-specific evidence. FEDO applies a Linearly Decreasing Weight (LDW) strategy based Particle Swarm Optimization (PSO) algorithm to the case-specific evidence to improve the searching capability of the investigation across the massive MCC environment. FEDO delivers an evidence tracing rate of 90%, and thus the integrated EA model ensures improved MCI performance.
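
    To make the LDW-based PSO component concrete, here is a minimal, generic Python sketch of particle swarm optimization with a linearly decreasing inertia weight; the objective function, bounds, and weight range are placeholder assumptions, not the parameters used by FEDO.

```python
import random

def ldw_pso(objective, dim, bounds, n_particles=20, iters=100,
            w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """Generic PSO with a linearly decreasing inertia weight (LDW)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for t in range(iters):
        # Inertia weight decreases linearly from w_max to w_min over the run.
        w = w_max - (w_max - w_min) * t / max(iters - 1, 1)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize a simple sphere function as a stand-in objective.
best, best_val = ldw_pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
print(best, best_val)
```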