
    The Potential for cross-drive analysis using automated digital forensic timelines

    Cross-Drive Analysis (CDA) is a technique designed to allow an investigator to “simultaneously consider information from across a corpus of many data sources”. Existing approaches include multi-drive correlation using text searching, e.g. email addresses, message IDs, credit card numbers or social security numbers. Such techniques have the potential to identify drives of interest from a large set, provide additional information about events that occurred on a single disk, and potentially determine social network membership. Another analysis technique that has significantly advanced in recent years is the use of timelines. Tools currently exist that can extract dates and times from file system metadata (i.e. MACE times) and also examine the content of certain file types and extract metadata from within. This approach provides a great deal of data that can assist with an investigation, but also compounds the problem of having too much data to examine. A recent paper adds an additional timeline analysis capability by automatically producing a high-level summary of the activity on a computer system, combining sets of low-level events into high-level events, for example reducing a setupapi event and several events from the Windows Registry to a single event of ‘a USB stick was connected’. This paper provides an investigation into the extent to which events in such a high-level timeline have the properties suitable to assist with Cross-Drive Analysis. The paper provides several examples that use timelines generated from multiple disk images, including USB stick connections, Skype calls, and access to files on a memory card.
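    The aggregation step described above — collapsing several correlated low-level events into one high-level event — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the event fields, the correlation key (a shared device ID), and the 60-second matching window are all assumptions made for the example.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Event:
        timestamp: float      # seconds since epoch
        source: str           # e.g. "setupapi", "registry"
        device_id: str        # correlating key, e.g. a USB serial number

    def summarise_usb_connections(events, window=60.0):
        """Collapse a setupapi event plus the registry events that share its
        device ID and fall within `window` seconds into a single high-level
        'USB stick connected' event (illustrative sketch only)."""
        high_level = []
        for s in (e for e in events if e.source == "setupapi"):
            related = [e for e in events
                       if e.source == "registry"
                       and e.device_id == s.device_id
                       and abs(e.timestamp - s.timestamp) <= window]
            if related:
                high_level.append({
                    "summary": "USB stick connected",
                    "device_id": s.device_id,
                    "timestamp": s.timestamp,
                    "evidence": [s] + related,  # low-level events retained for provenance
                })
        return high_level
    ```

    Keeping the contributing low-level events in the `evidence` field preserves the audit trail an investigator needs when a high-level event is later correlated across drives.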

    EviPlant: An efficient digital forensic challenge creation, manipulation and distribution solution

    Education and training in digital forensics requires a variety of suitable challenge corpora containing realistic features including regular wear-and-tear, background noise, and the actual digital traces to be discovered during investigation. Typically, the creation of these challenges requires arduous effort on the part of the educator to ensure their viability. Once created, the challenge image needs to be stored and distributed to a class for practical training. This storage and distribution step requires significant time and resources and may not even be possible in an online/distance learning scenario due to the data sizes involved. As part of this paper, we introduce a more capable methodology and system as an alternative to current approaches. EviPlant is a system designed for the efficient creation, manipulation, storage and distribution of challenges for digital forensics education and training. The system relies on the initial distribution of base disk images, i.e., images containing solely base operating systems. In order to create challenges for students, educators can boot the base system, emulate the desired activity and perform a "diffing" of the resultant image and the base image. This diffing process extracts the modified artefacts and associated metadata and stores them in an "evidence package". Evidence packages can be created for different personae, different wear-and-tear, different emulated crimes, etc., and multiple evidence packages can be distributed to students and integrated into the base images. A number of additional applications of EviPlant in digital forensic challenge creation, tool testing and validation, proficiency testing, and malware analysis are also discussed.
    Comment: Digital Forensic Research Workshop Europe 201
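    The diff-then-integrate workflow the abstract describes can be sketched at a high level. This is an illustration under a strong simplification — each disk image is modelled as a mapping from file path to content hash — and is not EviPlant's actual artefact extraction, which operates on real disk images and metadata.

    ```python
    def diff_images(base, modified):
        """Produce an 'evidence package': the artefacts added, changed or
        deleted in the modified image relative to the base image. Images are
        simplified here to dicts mapping file paths to content hashes."""
        package = {"added": {}, "changed": {}, "deleted": []}
        for path, digest in modified.items():
            if path not in base:
                package["added"][path] = digest
            elif base[path] != digest:
                package["changed"][path] = digest
        package["deleted"] = [p for p in base if p not in modified]
        return package

    def apply_package(base, package):
        """Integrate an evidence package into a copy of the base image,
        reconstructing the challenge image on the student's machine."""
        image = dict(base)
        for path in package["deleted"]:
            image.pop(path, None)
        image.update(package["added"])
        image.update(package["changed"])
        return image
    ```

    The efficiency claim follows from this shape: only the (small) evidence package travels to students, while the (large) base image is distributed once and reused across many challenges.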

    A user-oriented network forensic analyser: the design of a high-level protocol analyser

    Network forensics is becoming an increasingly important tool in the investigation of cyber and computer-assisted crimes. Unfortunately, whilst much effort has been undertaken in developing computer forensic file system analysers (e.g. Encase and FTK), such focus has not been given to Network Forensic Analysis Tools (NFATs). The single biggest barrier to effective NFATs is the handling of large volumes of low-level traffic and being able to extract and interpret forensic artefacts and their context – for example, being able to extract and render application-level objects (such as emails, web pages and documents) from the low-level TCP/IP traffic, but also to understand how these applications/artefacts are being used. Whilst some studies and tools are beginning to achieve object extraction, results to date are limited to basic objects. No research has focused upon analysing network traffic to understand the nature of its use – not simply looking at the fact that a person requested a webpage, but how long they spent on the application and what interactions they had whilst using the service (e.g. posting an image, or engaging in an instant message chat). This additional layer of information can provide an investigator with a far richer and more complete understanding of a suspect’s activities. To this end, this paper presents an investigation into the ability to derive high-level application usage characteristics from low-level network traffic metadata. The paper presents three application scenarios – web surfing, communications and social networking – and demonstrates that it is possible to derive the user interactions (e.g. page loading, chatting and file sharing) within these systems. The paper continues to present a framework that builds upon this capability to provide a robust, flexible and user-friendly NFAT that provides access to a greater range of forensic information in a far easier format.
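    One way to derive an interaction such as "page load" from low-level traffic metadata is to group request timestamps into bursts: a page and its embedded resources arrive close together, and the gap until the next burst approximates the dwell time on the page. The sketch below illustrates that idea only; the 5-second gap threshold and the use of bare timestamps (rather than the richer protocol metadata the paper analyses) are assumptions for the example.

    ```python
    def derive_interactions(request_times, gap=5.0):
        """Group timestamped HTTP requests into 'page load' interactions:
        requests less than `gap` seconds apart form one burst (a page plus
        its embedded resources). Dwell time is estimated as the interval
        from a burst's start to the next burst's start (None for the last)."""
        if not request_times:
            return []
        times = sorted(request_times)
        bursts, current = [], [times[0]]
        for t in times[1:]:
            if t - current[-1] <= gap:
                current.append(t)          # same page load
            else:
                bursts.append(current)     # gap exceeded: new interaction
                current = [t]
        bursts.append(current)
        interactions = []
        for i, burst in enumerate(bursts):
            dwell = bursts[i + 1][0] - burst[0] if i + 1 < len(bursts) else None
            interactions.append({
                "start": burst[0],
                "requests": len(burst),
                "dwell": dwell,
            })
        return interactions
    ```

    The same burst-and-gap pattern extends to other interaction types (e.g. message exchanges in a chat session) by adding direction and size features to each record.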

    Rethinking Digital Forensics

    © IAER 2019. In the modern socially-driven, knowledge-based virtual computing environment in which organisations are operating, the current digital forensics tools and practices can no longer meet the need for scientific rigour. There has been an exponential increase in the complexity of networks with the rise of the Internet of Things, cloud technologies and fog computing altering business operations and models. Adding to the problem are the increased capacity of storage devices and the increased diversity of devices that are attached to networks, operating autonomously. We argue that the laws and standards that have been written, and the processes, procedures and tools that are in common use, are increasingly incapable of ensuring the required scientific integrity. This paper looks at a number of issues with current practice and discusses measures that can be taken to improve the potential of achieving scientific rigour for digital forensics in the current and developing landscape.
    Peer reviewed