
    An Automated Approach for Digital Forensic Analysis of Heterogeneous Big Data

    The major challenges in big data examination and analysis are volume, complex interdependence across content, and heterogeneity. The examination and analysis phases are considered essential to a digital forensics process. However, traditional forensic investigation techniques use one or more forensic tools to examine and analyse each resource. In addition, when multiple resources are included in one case, findings cannot be cross-correlated, which often leads to inefficiencies in processing and identifying evidence. Furthermore, most current forensics tools cannot cope with large volumes of data. This paper develops a novel framework for digital forensic analysis of heterogeneous big data. The framework focuses upon three core issues: data volume, heterogeneous data, and the investigator's cognitive load in understanding the relationships between artefacts. The proposed approach uses metadata to address the data volume problem, semantic web ontologies to address heterogeneous data sources, and artificial intelligence models to support the automated identification and correlation of artefacts, reducing the burden placed upon the investigator to understand the nature and relationships of the artefacts.
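    The abstract does not include implementation detail, but the metadata idea can be sketched. Below is a minimal Python illustration, assuming a locally mounted evidence tree at a hypothetical path: rather than ingesting full content, only lightweight per-file metadata is recorded for later correlation.

    ```python
    import csv
    from pathlib import Path

    def collect_metadata(root, out_csv):
        """Record lightweight metadata (path, size, MAC times) for every
        file under root, rather than ingesting full file content."""
        with open(out_csv, "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(["path", "size", "mtime", "atime", "ctime"])
            for path in Path(root).rglob("*"):
                if not path.is_file():
                    continue
                try:
                    st = path.stat()
                except OSError:
                    continue  # skip unreadable entries rather than aborting
                writer.writerow([str(path), st.st_size,
                                 st.st_mtime, st.st_atime, st.st_ctime])

    # "/mnt/evidence" is a hypothetical mount point for an acquired image.
    collect_metadata("/mnt/evidence", "metadata_subset.csv")
    ```

    A per-file metadata row is a few hundred bytes regardless of file size, which is what makes this kind of representation tractable at big-data scale.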

    Dealing with temporal inconsistency in automated computer forensic profiling

    Computer profiling is the automated forensic examination of a computer system in order to provide a human investigator with a characterisation of the activities that have taken place on that system. As part of this process, the logical components of the computer system – users, files and applications – are enumerated and the relationships between them discovered and reported. This information is enriched with traces of historical activity drawn from system logs and from evidence of events found in the computer file system. A potential problem with such information is that some of it may be inconsistent and contradictory, compromising its value. This work examines the impact of temporal inconsistency in such information and discusses two types that may arise – inconsistency arising from the normal errant behaviour of a computer system, and inconsistency arising from deliberate tampering by a suspect – together with techniques for dealing with inconsistencies of the latter kind. We examine the impact of deliberate tampering through experiments conducted with prototype computer profiling software, and based on the results we discuss techniques which can be employed in computer profiling to deal with such temporal inconsistencies.
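    One concrete form of temporal inconsistency is a file that appears to have been modified before it was created. The sketch below is illustrative only (hypothetical records, not the paper's prototype) and shows how such contradictions might be flagged.

    ```python
    from datetime import datetime

    # Hypothetical per-file timestamps recovered during profiling.
    records = {
        "report.doc": {"created": datetime(2023, 5, 2, 10, 15),
                       "modified": datetime(2023, 5, 1, 9, 0)},   # suspicious
        "notes.txt":  {"created": datetime(2023, 4, 1, 8, 0),
                       "modified": datetime(2023, 4, 3, 12, 30)},  # consistent
    }

    def flag_temporal_inconsistencies(records):
        """A file modified before it was created contradicts the expected
        ordering and may indicate clock error or deliberate tampering."""
        flagged = []
        for name, ts in records.items():
            if ts["modified"] < ts["created"]:
                flagged.append(name)
        return flagged

    print(flag_temporal_inconsistencies(records))  # ['report.doc']
    ```

    A check like this cannot by itself distinguish errant system behaviour from tampering; that distinction is exactly what the paper's experiments investigate.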

    Distributed Digital Forensics on Pre-existing Internal Networks

    Today's large datasets are a major hindrance to digital investigations and have led to a substantial backlog of media that must be examined. While this media sits idle, the associated investigation also sits idle, inducing investigative time lag. This study created a client/server application architecture that operated on an existing pool of internally networked Windows 7 machines. This distributed digital forensic approach helps to address the scalability concerns of other approaches while also being financially feasible. Text search runtimes and match counts were evaluated using several scenarios, including a 100 GB image with prefabricated data. When compared to FTK 4.1, a 125-times speed-up was achieved in the best case and a three-times speed-up in the worst case. These rapid search times call into question the need for long indexing processes when analysing digital evidence, allowing for faster digital investigations.
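    The paper's client/server architecture is not reproduced here, but the core idea of partitioning a search across workers can be sketched. In this minimal Python illustration, local processes stand in for networked client machines; the image path and keyword are hypothetical.

    ```python
    import os
    from concurrent.futures import ProcessPoolExecutor

    CHUNK = 64 * 1024 * 1024  # bytes of the image assigned to each worker
    OVERLAP = 1024            # extra bytes read so boundary-straddling hits aren't lost

    def search_range(args):
        """Search one byte range of the image for the keyword."""
        path, start, length, needle = args
        hits = []
        with open(path, "rb") as fh:
            fh.seek(start)
            data = fh.read(length + OVERLAP)
        pos = data.find(needle)
        while pos != -1:
            if pos < length:  # hits starting in the overlap belong to the next chunk
                hits.append(start + pos)
            pos = data.find(needle, pos + 1)
        return hits

    def distributed_search(path, needle):
        size = os.path.getsize(path)
        tasks = [(path, off, CHUNK, needle) for off in range(0, size, CHUNK)]
        with ProcessPoolExecutor() as pool:
            return sorted(h for hits in pool.map(search_range, tasks) for h in hits)

    if __name__ == "__main__":
        # "evidence.dd" is a hypothetical raw image; the keyword is illustrative.
        print(distributed_search("evidence.dd", b"password"))
    ```

    Because each byte range is independent, the same partitioning scheme distributes naturally over networked machines instead of local processes.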

    Data reduction and data mining framework for digital forensic evidence: storage, intelligence, review and archive

    The volume of digital forensic evidence is rapidly increasing, leading to large backlogs. This paper proposes a Digital Forensic Data Reduction and Data Mining Framework that incorporates a process of reducing data volume by focusing on a subset of information. Initial research with sample data from the South Australia Police Electronic Crime Section and Digital Corpora forensic images using the proposed framework resulted in a significant reduction in storage requirements: the reduced subset is only 0.196 percent and 0.75 percent, respectively, of the original data volume. The framework is not suggested as a replacement for full analysis, but serves to provide a rapid triage, collection, intelligence analysis, review and storage methodology to support the various stages of digital forensic examinations. Agencies that can undertake rapid assessment of seized data can more effectively target specific criminal matters. The framework may also provide a greater potential intelligence gain from timely analysis of current and historical data, and the ability to research trends over time.
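    As a hedged illustration of the subsetting idea (the paper's actual selection criteria are richer than a file-type filter), the sketch below copies only files of targeted types into a reduced review set and reports the fraction of the original volume retained. The paths and extension list are hypothetical.

    ```python
    import shutil
    from pathlib import Path

    # Hypothetical selection criteria: file types that commonly carry
    # evidential or intelligence value.
    INTERESTING = {".doc", ".docx", ".pdf", ".jpg", ".png", ".eml", ".db"}

    def reduce_to_subset(source, dest):
        """Copy only targeted files into a reduced review/archive set,
        preserving relative paths for later cross-referencing."""
        source, dest = Path(source), Path(dest)
        kept = total = 0
        for f in source.rglob("*"):
            if f.is_file():
                size = f.stat().st_size
                total += size
                if f.suffix.lower() in INTERESTING:
                    kept += size
                    target = dest / f.relative_to(source)
                    target.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(f, target)
        print(f"kept {kept} of {total} bytes ({100 * kept / max(total, 1):.3f}%)")

    reduce_to_subset("/mnt/evidence", "/cases/1234/reduced")  # hypothetical paths
    ```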

    Rethinking Digital Forensics

    In the modern socially-driven, knowledge-based virtual computing environment in which organisations are operating, current digital forensics tools and practices can no longer meet the need for scientific rigour. There has been an exponential increase in the complexity of networks with the rise of the Internet of Things, cloud technologies and fog computing altering business operations and models. Adding to the problem are the increased capacity of storage devices and the increased diversity of devices attached to networks, operating autonomously. We argue that the laws and standards that have been written, and the processes, procedures and tools in common use, are increasingly incapable of ensuring the requirement for scientific integrity. This paper looks at a number of issues with current practice and discusses measures that can be taken to improve the potential for achieving scientific rigour in digital forensics in the current and developing landscape.

    A Hybrid Methodology Approach for Fraud Detection Using Event Correlation Approach

    To effectively investigate masses of event-oriented data, automated methods for extracting event records and then classifying events and patterns of events into higher-level terminology and vocabulary are necessary. Semantically rich representation models, and automated methods of correlating event information expressed in such models, are becoming a necessity. The Event Correlation for Forensics (ECF) framework was developed with the strategic objective “to develop a means by which a consolidated repository of event information can be constituted and then queried in order to provide an investigator with post hoc event correlation”.
    Keywords: Semantics, Correlation, Digital Forensics
    DOI: 10.17762/ijritcc2321-8169.15083
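    A consolidated, queryable event repository can be sketched with SQLite; the schema, sample events and time-window query below are an illustrative stand-in, not the ECF framework's actual schema or semantic model.

    ```python
    import sqlite3
    from datetime import datetime, timedelta

    # A toy consolidated repository: events from heterogeneous sources
    # normalised into one schema with ISO-8601 timestamps.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE events (source TEXT, actor TEXT, action TEXT, ts TEXT)")
    con.executemany("INSERT INTO events VALUES (?, ?, ?, ?)", [
        ("browser",    "alice", "visited bank-login page",   "2023-05-02T10:01:00"),
        ("filesystem", "alice", "created invoice.pdf",       "2023-05-02T10:03:30"),
        ("mail",       "alice", "sent mail with attachment", "2023-05-02T10:05:10"),
        ("syslog",     "bob",   "logged in",                 "2023-05-03T09:00:00"),
    ])

    def correlate(ts, window_minutes=10):
        """Post hoc correlation: all events within a window of a focal time."""
        t = datetime.fromisoformat(ts)
        lo = (t - timedelta(minutes=window_minutes)).isoformat()
        hi = (t + timedelta(minutes=window_minutes)).isoformat()
        return con.execute(
            "SELECT source, actor, action, ts FROM events "
            "WHERE ts BETWEEN ? AND ? ORDER BY ts", (lo, hi)).fetchall()

    for row in correlate("2023-05-02T10:03:30"):
        print(row)  # the three alice events cluster; bob's login does not
    ```

    Normalising timestamps to one sortable format is what makes cross-source correlation a simple range query rather than per-tool parsing.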

    Big data techniques for wind turbine condition monitoring

    The continual development of sensor and storage technology has led to a dramatic increase in the volume of data being captured for condition monitoring and machine health assessment. Beyond wind energy, many sectors are dealing with the same issue, and these large, complex data sets have been termed ‘Big Data’. Big Data may be defined as having three dimensions: volume, velocity, and variety. This paper discusses the application of Big Data practices to wind turbine condition monitoring, with reference to a deployed system capturing 2 TB of data per month.

    Toward music-based data discrimination for cybercrime investigations

    In this paper we describe an approach to data interpretation in which ‘raw’ data is analysed quantitatively in terms of textual content and the results of this analysis ‘converted’ to music. The purpose of this work is to investigate the viability of projecting complex text-based data, via textual analysis, to a musical rendering as a means for discriminating data sets ‘by ear’. This has the potential to allow non-domain experts to make distinctions between sets of data based upon their listening skills. We present this work as a research agenda, since it is based upon earlier exploration of the underlying concept of mapping textual analyses to music, and we explore possible areas of application in the domains of information security and digital forensics.
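    The mapping itself can be illustrated with a toy sketch: here letter frequencies from a textual analysis drive note selection from a scale, so texts with different distributions yield different note sequences. This is only a hedged illustration of the concept; real sonification would emit MIDI or audio, and the paper's actual mapping is not reproduced here.

    ```python
    from collections import Counter

    SCALE = ["C", "D", "E", "F", "G", "A", "B"]  # one octave of C major

    def text_to_notes(text):
        """Rank letters by frequency, map each rank onto the scale,
        then render the text as the resulting note sequence."""
        counts = Counter(c for c in text.lower() if c.isalpha())
        ranked = [c for c, _ in counts.most_common()]
        note_of = {c: SCALE[i % len(SCALE)] for i, c in enumerate(ranked)}
        return [note_of[c] for c in text.lower() if c in note_of]

    print("".join(text_to_notes("suspicious log entry")))
    ```

    Two data sets with markedly different character statistics produce audibly different contours, which is the discrimination-by-ear effect the agenda proposes to study.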

    Remote sensing information sciences research group

    Research conducted under this grant was used to extend and expand existing remote sensing activities at the University of California, Santa Barbara in the areas of georeferenced information systems, machine-assisted information extraction from image data and large spatial databases, artificial intelligence, and vegetation analysis and modeling. The research thrusts of the past year are summarized, and the projects are discussed in some detail.