3 research outputs found

    Integrated examination and analysis model for improving mobile cloud forensic investigation

    Advanced forensic techniques have become indispensable for investigating malicious activity in Cloud-based Mobile Applications (CMA). It is challenging to analyse case-specific evidential artifacts from the Mobile Cloud Computing (MCC) environment under forensically sound conditions. Mobile Cloud Investigation (MCI) faces many research issues in tracing and fine-tuning the relevant evidential artifacts from the MCC environment. This research proposes an integrated Examination and Analysis (EA) model for a generalised application architecture of CMA deployable on the public cloud to trace the case-specific evidential artifacts. The proposed model effectively validates MCI and enhances the accuracy and speed of the investigation. In this context, the proposed Forensic Examination and analysis methodology using Data mining (FED) and Forensic Examination and analysis methodology using Data mining and Optimization (FEDO) models address these issues. The FED model incorporates key sub-phases such as timeline analysis, hash filtering, data carving, and data transformation to filter out case-specific artifacts. A Long Short-Term Memory (LSTM) assisted forensic methodology decides how much potential information to retain for further investigation and categorizes the forensic evidential artifacts by their relevance to the crime event. Finally, the FED model constructs the forensic evidence taxonomy and maintains precision and recall above 85% for effective decision-making. FEDO facilitates cloud evidence handling by examining key features and indexing the evidence. The FEDO model incorporates several sub-phases to handle the evidence precisely, such as evidence indexing, cross-referencing, and keyword searching. It analyses temporal and geographic information and performs cross-referencing to fine-tune the evidence towards the case-specific evidence. FEDO applies a Particle Swarm Optimization (PSO) algorithm with a Linearly Decreasing Weight (LDW) strategy to the case-specific evidence to improve the search capability of the investigation across the massive MCC environment. FEDO delivers an evidence-tracing rate of 90%, and thus the integrated EA model ensures improved MCI performance.
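    The abstract names the optimiser only as an LDW-strategy PSO, so the Python sketch below illustrates just that standard technique: an inertia weight that decays linearly from w_max to w_min over the run. The function name, toy fitness, and constants are illustrative assumptions, not the authors' FEDO implementation.

        # Minimal sketch of Particle Swarm Optimization with a Linearly
        # Decreasing Weight (LDW) inertia schedule; all names and constants
        # are illustrative assumptions, not the FEDO implementation.
        import numpy as np

        def ldw_pso(fitness, dim=2, n_particles=30, iters=100,
                    w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, bounds=(-5.0, 5.0)):
            lo, hi = bounds
            rng = np.random.default_rng(0)
            pos = rng.uniform(lo, hi, (n_particles, dim))
            vel = np.zeros((n_particles, dim))
            pbest = pos.copy()
            pbest_val = np.apply_along_axis(fitness, 1, pos)
            gbest = pbest[pbest_val.argmin()].copy()
            for t in range(iters):
                # LDW strategy: inertia weight falls linearly from w_max to
                # w_min as the iteration counter t advances.
                w = w_max - (w_max - w_min) * t / iters
                r1, r2 = rng.random((2, n_particles, dim))
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, lo, hi)
                vals = np.apply_along_axis(fitness, 1, pos)
                better = vals < pbest_val
                pbest[better], pbest_val[better] = pos[better], vals[better]
                gbest = pbest[pbest_val.argmin()].copy()
            return gbest, float(pbest_val.min())

        # Toy usage: minimise the sphere function as a stand-in for an
        # evidence-relevance objective over the search space.
        best, score = ldw_pso(lambda x: float(np.sum(x ** 2)))

    The linear decay shifts the swarm from global exploration (large inertia) to local refinement (small inertia), which is the usual rationale for the LDW strategy.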

    HESML: A scalable ontology-based semantic similarity measures library with a set of reproducible experiments and a replication dataset

    This work is a detailed companion reproducibility paper of the methods and experiments proposed by Lastra-Díaz and García-Serrano in (2015, 2016) [56–58], which introduces the following contributions: (1) a new and efficient representation model for taxonomies, called PosetHERep, which is an adaptation of the half-edge data structure commonly used to represent discrete manifolds and planar graphs; (2) a new Java software library called the Half-Edge Semantic Measures Library (HESML) based on PosetHERep, which implements most ontology-based semantic similarity measures and Information Content (IC) models reported in the literature; (3) a set of reproducible experiments on word similarity based on HESML and ReproZip with the aim of exactly reproducing the experimental surveys in the three aforementioned works; (4) a replication framework and dataset, called WNSimRep v1, whose aim is to assist the exact replication of most methods reported in the literature; and finally, (5) a set of scalability and performance benchmarks for semantic measures libraries. PosetHERep and HESML are motivated by several drawbacks of the current semantic measures libraries, especially their performance and scalability, as well as the difficulty of evaluating new methods and replicating most previous ones. The reproducible experiments introduced herein are motivated by the lack of a set of large, self-contained and easily reproducible experiments with the aim of replicating and confirming previously reported results. Likewise, the WNSimRep v1 dataset is motivated by the discovery of several contradictory results and difficulties in reproducing previously reported methods and experiments. PosetHERep proposes a memory-efficient representation for taxonomies which scales linearly with the size of the taxonomy and provides an efficient implementation of most taxonomy-based algorithms used by the semantic measures and IC models, whilst HESML provides an open framework to aid research in the area by providing a simpler and more efficient software architecture than the current software libraries. Finally, we show that HESML outperforms the state-of-the-art libraries, and that their performance and scalability can be significantly improved without caching by using PosetHERep.
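    PosetHERep and the HESML API are described here only at a high level. As a rough, hypothetical illustration of the general idea of computing an ontology-based similarity measure over an explicit taxonomy representation, the Python sketch below implements the classic Wu-Palmer measure on a toy taxonomy; the class and method names are assumptions and do not reflect HESML's actual Java API or the half-edge structure itself.

        # Rough sketch of an ontology-based similarity measure over a
        # taxonomy. NOT the HESML/PosetHERep implementation; names are
        # illustrative only.

        class Taxonomy:
            def __init__(self):
                self.parent = {}          # child concept -> parent concept

            def add(self, child, parent=None):
                self.parent[child] = parent

            def ancestors(self, node):
                """Return node and all its ancestors up to the root."""
                chain = []
                while node is not None:
                    chain.append(node)
                    node = self.parent[node]
                return chain

            def depth(self, node):
                return len(self.ancestors(node)) - 1   # root has depth 0

            def lcs(self, a, b):
                """Lowest common subsumer: deepest shared ancestor."""
                anc_a = set(self.ancestors(a))
                for node in self.ancestors(b):   # walks b upward, deepest first
                    if node in anc_a:
                        return node
                return None

            def wu_palmer(self, a, b):
                """Wu-Palmer: 2*depth(lcs) / (depth(a) + depth(b))."""
                c = self.lcs(a, b)
                return 2.0 * self.depth(c) / (self.depth(a) + self.depth(b))

        # Toy WordNet-like fragment.
        t = Taxonomy()
        t.add("entity")
        t.add("animal", "entity")
        t.add("dog", "animal")
        t.add("cat", "animal")
        print(t.wu_palmer("dog", "cat"))   # 2*1 / (2+2) = 0.5

    Wu-Palmer scores two concepts by the depth of their lowest common subsumer relative to their own depths, so siblings under a deep common ancestor score higher than concepts that only meet near the root.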

    Prioritisation in digital forensics: a case study of Abu Dhabi Police

    The main goal of this research is to investigate the prioritisation process in digital forensics departments in law enforcement organisations. The research is motivated by the fact that case prioritisation plays a crucial role in achieving efficient operations in digital forensics departments. Recent years have witnessed the widespread use of digital devices in every aspect of human life around the globe. One of these aspects is crime. These devices have become an essential part of almost every investigation handled by police. Their importance lies in their ability to store huge amounts of data that investigators can use to solve the cases under consideration. Thus, involving digital forensics departments, though they are often over-burdened and under-resourced, is becoming compulsory for successful investigations. Increasing the effectiveness of these departments requires improving their processes, including case prioritisation. The existing literature focuses on prioritisation within the context of crime scene triage, where the main research problem is prioritising the digital devices found at a crime scene in a way that leads to successful digital forensics. The research problem in this thesis, by contrast, concerns the prioritisation of cases rather than of the digital devices belonging to a specific case. Digital forensics cases are normally prioritised based on several factors, among which the influence of the officers handling the case plays one of the most important roles. Therefore, this research investigates how the perceptions of different individuals in a law enforcement organisation may affect case prioritisation for the digital forensics department. To address this prioritisation problem, the research proposes the use of maturity models and machine learning. A questionnaire was developed and distributed among officers in Abu Dhabi Police to measure their perception of digital forensics. The subjects' responses were divided into two sets: the first set contains the responses of subjects who are experts in digital forensics, while the other set contains the responses of the remaining subjects. Responses in the first set were averaged to produce a benchmark of the optimal questionnaire answers. A reliability measure is then proposed to summarise each subject's perception. Data obtained from the reliability measurement were used in machine learning models so that the process is automated. The results of the data analysis confirmed the severity of the problem, and the proposed prioritisation process can be a very effective solution, as shown by the results provided in this thesis.
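    The thesis's exact reliability measure and models are not given in the abstract. The Python sketch below only illustrates the pipeline as described, under loudly stated assumptions: expert answers are averaged into a benchmark, each remaining subject is scored by closeness to that benchmark, and the scores feed a simple classifier. The data are simulated, and the closeness formula, labels, and model choice are all assumptions for illustration.

        # Illustrative sketch of the described pipeline, not the thesis's
        # actual measure: average expert answers into a benchmark, score
        # each subject's closeness to it, then train a simple classifier.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)

        # Simulated 5-point Likert answers: 10 experts, 40 other subjects,
        # 12 questionnaire items.
        experts = rng.integers(3, 6, size=(10, 12)).astype(float)
        others = rng.integers(1, 6, size=(40, 12)).astype(float)

        benchmark = experts.mean(axis=0)       # "optimal" answer profile

        def reliability(responses, benchmark):
            """Assumed measure: per-subject closeness to the benchmark,
            scaled to [0, 1] (1 = closest to expert consensus)."""
            dist = np.abs(responses - benchmark).mean(axis=1)
            return 1.0 - dist / dist.max()

        scores = reliability(others, benchmark)

        # Hypothetical labels (e.g. whether a subject's past requests
        # warranted high priority); here a simple median split so the
        # automated step has something to learn.
        labels = (scores > np.median(scores)).astype(int)

        # Automate the prioritisation decision from the reliability scores.
        clf = LogisticRegression().fit(scores.reshape(-1, 1), labels)
        print(clf.predict(scores[:5].reshape(-1, 1)))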