
    A Comprehensive Analysis of the Role of Artificial Intelligence and Machine Learning in Modern Digital Forensics and Incident Response

    In the dynamic landscape of digital forensics, the integration of Artificial Intelligence (AI) and Machine Learning (ML) stands as a transformative technology, poised to amplify the efficiency and precision of digital forensic investigations. However, the use of ML and AI in digital forensics is still in its nascent stages. This paper therefore provides a thorough analysis that goes beyond a simple survey and review, examining closely how AI and ML techniques are used in digital forensics and incident response. It explores cutting-edge research initiatives spanning domains such as data collection and recovery, the reconstruction of cybercrime timelines, robust big data analysis, pattern recognition, safeguarding the chain of custody, and orchestrating responses to hacking incidents, and it examines in detail how AI-driven methodologies are shaping these crucial facets of digital forensics practice. While the promise of AI in digital forensics is evident, the challenges arising from increasing database sizes and evolving criminal tactics necessitate ongoing collaborative research and refinement within the digital forensics profession. This study examines the contributions, limitations, and gaps in the existing research, shedding light on both the potential and the constraints of AI and ML techniques. By exploring these research areas, we highlight the critical need for strategic planning and continual research and development to unlock AI's full potential in digital forensics and incident response. Ultimately, this paper underscores the significance of AI and ML integration in digital forensics, offering insights into their benefits, drawbacks, and broader implications for tackling modern cyber threats.
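    To make the survey's pattern-recognition use case concrete, the following is a minimal sketch of ML-assisted file triage in Python. It is not taken from the paper: the feature set, labels, and classifier choice are illustrative assumptions, and a real deployment would train on validated, labelled corpora.

    # Minimal sketch of ML-assisted forensic triage: rank files for examiner
    # review from simple metadata features. Features and labels below are
    # illustrative assumptions, not data from the surveyed literature.
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical features per file: [size_kb, entropy, is_executable, hours_since_mtime]
    X = [
        [120.0, 7.9, 1, 2],     # high-entropy executable, recently modified
        [4.2, 3.1, 0, 900],     # small, old, low-entropy text file
        [2048.0, 7.2, 0, 5],    # large high-entropy archive
        [15.0, 4.4, 0, 4000],   # ordinary document
    ]
    y = [1, 0, 1, 0]  # 1 = flag for examiner review, 0 = deprioritise

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(clf.predict([[300.0, 7.8, 1, 1]]))  # recent high-entropy executable -> likely flagged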

    Automated Identification and Reconstruction of YouTube Video Access

    YouTube is one of the most popular video-sharing websites on the Internet, allowing users to upload, view, and share videos with other users all over the world. YouTube hosts many different types of video, from homemade sketches to instructional and educational tutorials, and therefore attracts a wide variety of users with different interests. The majority of YouTube visits are perfectly innocent, but there may be circumstances where YouTube video access is relevant to a digital investigation, e.g. viewing instructional videos on how to perform potentially unlawful actions or how to make unlawful articles. When a user accesses a YouTube video through their browser, digital artefacts relating to that video access may be left on their system in a number of different locations. However, very little research has been published in the area of YouTube video artefacts. This paper discusses the identification of some of the artefacts left by the Internet Explorer web browser on a Windows system after accessing a YouTube video. The information that can be recovered from these artefacts includes the video ID, the video name, and possibly a cached copy of the video itself. In addition to identifying the artefacts, the paper also investigates how they can be brought together and analysed to infer specifics about the user's interaction with the YouTube website, for example whether a video was searched for or visited as a result of a suggestion after viewing a previous video. The result of this research is a Python-based prototype that analyses a mounted disk image, automatically extracts the artefacts related to YouTube visits, and produces a report summarising the YouTube video accesses on a system.
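    As an illustration of one step the paper's prototype performs, the sketch below recovers YouTube video IDs from browser-history URLs. The URL list is made up, and the actual tool goes much further, parsing Internet Explorer cache structures on a mounted disk image; none of that is reproduced here.

    # Minimal sketch: extract 11-character YouTube video IDs from history URLs.
    import re

    WATCH_RE = re.compile(r"(?:[?&]v=|youtu\.be/)([A-Za-z0-9_-]{11})")

    def extract_video_ids(urls):
        """Return the set of YouTube video IDs found in the given URLs."""
        ids = set()
        for url in urls:
            match = WATCH_RE.search(url)
            if match:
                ids.add(match.group(1))
        return ids

    history = [
        "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
        "https://youtu.be/dQw4w9WgXcQ?t=42",
        "https://www.youtube.com/results?search_query=some+topic",  # a search, not a watch
    ]
    print(extract_video_ids(history))  # {'dQw4w9WgXcQ'}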

    Trends and concerns in digital cartography

    CISRG discussion paper

    Advanced Techniques for Improving the Efficacy of Digital Forensics Investigations

    Digital forensics is the science concerned with discovering, preserving, and analyzing evidence on digital devices. The intent is to determine what events have taken place, when they occurred, who performed them, and how they were performed. For an investigation to be effective, it must exhibit several characteristics. The results produced must be reliable, or else the theory of events based on those results will be flawed. The investigation must be comprehensive, meaning that it must analyze all targets which may contain evidence of forensic interest. Since any investigation must be performed within the constraints of available time, storage, manpower, and computation, investigative techniques must be efficient. Finally, an investigation must provide a coherent view of the events under question using the evidence gathered. Unfortunately, the set of currently available tools and techniques used in digital forensic investigations does a poor job of supporting these characteristics. Many tools contain bugs which generate inaccurate results; there are many types of devices and data for which no analysis techniques exist; most existing tools are woefully inefficient, failing to take advantage of modern hardware; and the task of aggregating data into a coherent picture of events is largely left for the investigator to perform manually. To remedy this situation, we developed a set of techniques to facilitate more effective investigations. To improve reliability, we developed the Forensic Discovery Auditing Module, a mechanism for auditing and enforcing controls on accesses to evidence. To improve comprehensiveness, we developed ramparser, a tool for deep parsing of Linux RAM images, which provides previously inaccessible data on the live state of a machine. To improve efficiency, we developed a set of performance optimizations and applied them to the Scalpel file carver, creating order-of-magnitude improvements in processing speed and storage requirements. Lastly, to facilitate more coherent investigations, we developed the Forensic Automated Coherence Engine, which generates a high-level view of a system from the data generated by low-level forensics tools. Together, these techniques significantly improve the effectiveness of digital forensic investigations conducted using them.
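    To show the kind of processing a file carver such as Scalpel performs, here is a minimal header/footer carving sketch in Python for JPEG files. It is an illustration only: the path "disk.img" is a placeholder, the size bound is an assumption, and none of the dissertation's performance optimizations (streaming, parallelism, reduced storage) appear here.

    # Minimal header/footer carving sketch: scan a raw image for JPEG
    # start/end markers and carve the bytes between them. Real carvers
    # stream the input and handle fragmentation; this one does not.
    JPEG_HEADER = b"\xff\xd8\xff"
    JPEG_FOOTER = b"\xff\xd9"
    MAX_CARVE = 10 * 1024 * 1024  # assumed upper bound on carved file size

    def carve_jpegs(image_path):
        data = open(image_path, "rb").read()  # acceptable for small test images
        carved = []
        start = data.find(JPEG_HEADER)
        while start != -1:
            end = data.find(JPEG_FOOTER, start, start + MAX_CARVE)
            if end != -1:
                carved.append(data[start:end + len(JPEG_FOOTER)])
            start = data.find(JPEG_HEADER, start + 1)
        return carved

    for i, blob in enumerate(carve_jpegs("disk.img")):  # placeholder image path
        with open(f"carved_{i}.jpg", "wb") as out:
            out.write(blob)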

    An Investigation into the identification, reconstruction, and evidential value of thumbnail cache file fragments in unallocated space

    © Cranfield University. This thesis establishes the evidential value of thumbnail cache file fragments identified in unallocated space. A set of criteria for evaluating the evidential value of thumbnail cache artefacts was created by researching the evidential constraints present in forensic computing. The criteria were used to evaluate the evidential value of live-system thumbnail caches and of thumbnail cache file fragments identified in unallocated space. Thumbnail caches can contain visual thumbnails and associated metadata which may be useful to an analyst during an investigation; the information stored in the cache may provide information on the contents of files and on any user or system behaviour which interacted with the file. There is a standard definition of the purpose of a thumbnail cache, but not of its structure or implementation; this research has shown that this has led to some thumbnail caches storing a variety of other artefacts, such as network place names. The growing interest in privacy and security has led to an increase in users attempting to remove evidence of their activities; information removed by the user may still be available in unallocated space. This research adapted popular methods for the identification of contiguous files to enable the identification of single cluster-sized fragments in Windows 7, Ubuntu, and Kubuntu. Of the four methods tested, none was able to identify each of the classifications without false positive results; this led to the creation of a new approach which improved the identification of thumbnail cache file fragments. After the identification phase, further research was conducted into the reassembly of file fragments; this reassembly was based solely on the potential thumbnail cache file fragments and on structural and syntactical information. In both the identification and reassembly phases of this research, image-only file fragments proved the most challenging, marking out a potential area of continued future research. Finally, this research compared the evidential value of live-system thumbnail caches with that of identified and reassembled fragments; it was determined that both types of thumbnail cache artefact can provide unique information which may assist with a digital investigation. This research has produced a set of criteria for determining the evidential value of thumbnail cache artefacts; it has also identified the structure and related user and system behaviour of popular operating system thumbnail cache implementations. It has further adapted contiguous file identification techniques to single-fragment identification and has developed an improved method for thumbnail cache file fragment identification. Finally, this research has produced a proof-of-concept software tool for the automated identification and reassembly of thumbnail cache file fragments.
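    As a simplified illustration of the identification phase, the sketch below scans cluster-sized chunks of an unallocated-space dump for the "CMMM" signature that marks Windows Vista/7 thumbcache entries. The cluster size and input path are assumptions, and the thesis's method is considerably more capable, since it must also classify fragments that carry no signature at all.

    # Minimal sketch: flag clusters that begin with a thumbcache entry
    # signature. Signature-less middle fragments, which the thesis also
    # addresses, are not handled here.
    CLUSTER_SIZE = 4096        # assumed NTFS cluster size
    ENTRY_SIGNATURE = b"CMMM"  # Windows Vista/7 thumbcache entry marker

    def candidate_clusters(dump_path):
        """Yield byte offsets of clusters starting with the entry signature."""
        with open(dump_path, "rb") as f:
            offset = 0
            while chunk := f.read(CLUSTER_SIZE):
                if chunk.startswith(ENTRY_SIGNATURE):
                    yield offset
                offset += CLUSTER_SIZE

    for off in candidate_clusters("unallocated.bin"):  # placeholder dump path
        print(f"possible thumbcache fragment at offset {off:#x}")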

    The Advanced Framework for Evaluating Remote Agents (AFERA): A Framework for Digital Forensic Practitioners

    Digital forensics experts need a dependable method for evaluating evidence-gathering tools. Limited research and resources challenge this process, and the lack of multi-endpoint data validation hinders reliability in distributed digital forensics. A framework was designed to evaluate distributed agent-based forensic tools while enabling practitioners to self-evaluate and demonstrate evidence reliability as required by the courts. Grounded in Design Science, the framework features guidelines, data, criteria, and checklists; expert review enhances its quality and practicality.

    Web browsing automation for applications quality control

    Context: Quality control comprises the set of activities aimed at evaluating whether software meets its specification and delivers the functionality expected by its consumers. These activities are often omitted from the development process and, as a result, the final software product often lacks quality. Objective: We propose a set of techniques to automate the quality control of web applications from the client side, guiding the process by functional and non-functional requirements (performance, security, compatibility, usability, and accessibility). Method: The first step towards automation is to define the structure of the web navigation; existing software artifacts from the analysis and design phases are reused. Then the independent navigation paths are found, and each path is traversed automatically using real browsers while different kinds of assessments are carried out. Results: The processes and methods proposed in this paper have been implemented by means of a reference architecture and open-source tools. A laboratory experiment and an industrial case study have been performed to validate the proposal. Conclusion: The definition of navigation paths is a rich approach to modelling web applications. Grey-box (black-box plus white-box) methods have proved very valuable for web assessment. The Chinese Postman Problem (CPP) is an optimal way to find the independent paths in a web navigation modelled as a directed graph.
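    To illustrate the graph side of the method, the sketch below models a toy navigation as a directed graph and computes a closed walk covering every transition. It assumes the graph is already Eulerian (every node has equal in- and out-degree); the general directed Chinese Postman Problem first balances node degrees, e.g. via minimum-cost flow, which is omitted here. In the paper's setting, each edge of the resulting route would be exercised by driving a real browser while the assessments run.

    # Minimal sketch: an edge-covering route over a navigation graph,
    # using networkx. Works only because this toy graph is Eulerian;
    # the general directed CPP needs a degree-balancing step first.
    import networkx as nx

    nav = nx.DiGraph()
    nav.add_edges_from([
        ("home", "login"), ("login", "dashboard"),
        ("dashboard", "report"), ("report", "dashboard"),
        ("dashboard", "home"),
    ])

    assert nx.is_eulerian(nav)  # holds for this toy graph by construction
    route = list(nx.eulerian_circuit(nav, source="home"))
    for src, dst in route:
        print(f"{src} -> {dst}")  # each edge = one browser transition to exercise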