
    Correlating Orphaned Windows Registry Data Structures

    Recently, it has been shown that deleted entries of the Microsoft Windows registry (keys) may still reside in the system files once the entries have been deleted from the active database. Investigating the complete keys in context may be extremely important from both a forensic investigation point of view and a legal point of view, where a lack of context can bring doubt to an argument. In this paper we formalise the registry behaviour and show how a retrieved value may not maintain a relation to the part of the registry it belonged to and hence lose that context. We define registry orphans and elaborate on how they can be created inadvertently during software uninstallation and other system processes. We analyse the orphans and attempt to reconstruct them automatically. We adopt a data mining approach and introduce a set of attributes that can be applied by the forensic investigator to match values to their parents. The heuristics are encoded in a Decision Tree that can discriminate between keys and select those which most likely owned a particular orphan value. Keywords: Windows Registry, Data Structures, Retrieval, Orphans, Correlation
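The attribute-matching idea can be sketched as a cascade of threshold tests, in the spirit of a decision tree. The attributes and thresholds below (cell-offset proximity, timestamp closeness, value-slot count) are illustrative assumptions for this sketch, not the paper's actual feature set.

```python
# Hypothetical sketch: score candidate parent keys for an orphaned
# registry value using hand-picked attributes, mimicking a decision
# tree's cascade of threshold tests. Attribute names and thresholds
# are assumptions, not the paper's real feature set.
from dataclasses import dataclass

@dataclass
class Key:
    name: str
    offset: int        # cell offset within the hive file
    timestamp: int     # last-write time (epoch seconds)
    value_slots: int   # number of values the key claims to hold

@dataclass
class OrphanValue:
    name: str
    offset: int
    recovered_at: int  # timestamp recovered alongside the value

def score(key: Key, orphan: OrphanValue) -> int:
    """Each passed test adds evidence that `key` once owned `orphan`."""
    s = 0
    if abs(key.offset - orphan.offset) < 4096:          # same hive region
        s += 2
    if abs(key.timestamp - orphan.recovered_at) < 60:   # written together
        s += 2
    if key.value_slots > 0:                             # key could hold values
        s += 1
    return s

def best_parent(keys, orphan):
    """Select the candidate key most likely to have owned the orphan."""
    return max(keys, key=lambda k: score(k, orphan))
```

A real classifier would learn these splits from labelled data; the fixed thresholds here only illustrate how per-attribute evidence is combined to rank candidate parents.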

    Automated Digital Forensic Triage: Rapid Detection of Anti-Forensic Tools

    We live in the information age. Our world is interconnected by digital devices and electronic communication. As such, criminals are finding opportunities to exploit our information-rich electronic data. In 2014, the estimated annual cost from computer-related crime was more than 800 billion dollars. Examples include the theft of intellectual property, electronic fraud, identity theft and the distribution of illicit material. Digital forensics grew out of necessity to combat computer crime and involves the investigation and analysis of electronic data after a suspected criminal act. Challenges in digital forensics exist due to constant changes in technology. Investigation challenges include exponential growth in the number of cases and the size of targets; for example, forensic practitioners must analyse multi-terabyte cases comprising numerous digital devices. A variety of applied challenges also exist, due to continual technological advancements; for example, anti-forensic tools, including the malicious use of encryption or data wiping tools, hinder digital investigations by hiding or removing the availability of evidence. In response, the objective of the research reported here was to automate the effective and efficient detection of anti-forensic tools. A design science research methodology was selected as it provides an applied research method to design, implement and evaluate an innovative Information Technology (IT) artifact to solve a specified problem. The research objective required that a system be designed and implemented to perform automated detection of digital artifacts (e.g., data files and Windows Registry entries) on a target data set. The goal of the system is to automatically determine if an anti-forensic tool is present, or absent, in order to prioritise additional in-depth investigation. 
The system performs rapid forensic triage, suitable for execution against multiple investigation targets, providing an analyst with high-level information regarding potential malicious anti-forensic tool usage. The system is divided into two main stages: 1) Design and implementation of a solution to automate creation of an application profile (application software reference set) of known unique digital artifacts; and 2) Digital artifact matching between the created reference set and a target data set. Two tools were designed and implemented: 1) A live differential analysis tool, named LiveDiff, to reverse engineer application software with a specific emphasis on digital forensic requirements; 2) A digital artifact matching framework, named Vestigium, to correlate digital artifact metadata and detect anti-forensic tool presence. In addition, a forensic data abstraction, named Application Profile XML (APXML), was designed to store and distribute digital artifact metadata. An associated Application Programming Interface (API), named apxml.py, was authored to provide automated processing of APXML documents. Together, the tools provided an automated triage system to detect anti-forensic tool presence on an investigation target. A two-phase approach was employed in order to assess the research products. The first phase of experimental testing involved demonstration in a controlled laboratory environment. First, the LiveDiff tool was used to create application profiles for three anti-forensic tools. The automated data collection and comparison procedure was more effective and efficient than previous approaches. Two data reduction techniques were tested to remove irrelevant operating system noise: application profile intersection and dynamic blacklisting were found to be effective in this regard. Second, the profiles were used as input to Vestigium and automated digital artifact matching was performed against authored known data sets. 
The results established the desired system functionality and demonstration then led to refinements of the system, as per the cyclical nature of design science. The second phase of experimental testing involved evaluation using two additional data sets to establish effectiveness and efficiency in a real-world investigation scenario. First, a public data set was subjected to testing to provide research reproducibility, as well as to evaluate system effectiveness in a variety of complex detection scenarios. Results showed the ability to detect anti-forensic tools using a different version than that included in the application profile and on a different Windows operating system version. Both are scenarios where traditional hash set analysis fails. Furthermore, Vestigium was able to detect residual and deleted information, even after a tool had been uninstalled by the user. The efficiency of the system was determined and refinements made, resulting in an implementation that can meet forensic triage requirements. Second, a real-world data set was constructed using a collection of second-hand hard drives. The goal was to test the system using unpredictable and diverse data to provide more robust findings in an uncontrolled environment. The system detected one anti-forensic tool on the data set and processed all input data successfully without error, further validating system design and implementation. The key outcome of this research is the design and implementation of an automated system to detect anti-forensic tool presence on a target data set. Evaluation suggested the solution was both effective and efficient, adhering to forensic triage requirements. 
Furthermore, techniques not previously utilised in forensic analysis were designed and applied throughout the research: dynamic blacklisting and profile intersection removed irrelevant operating system noise from application profiles; metadata matching methods resulted in efficient digital artifact detection; and path normalisation aided full path correlation in complex matching scenarios. The system was subjected to rigorous experimental testing on three data sets that comprised more than 10 terabytes of data. The ultimate outcome is a practically implemented solution that has been executed on hundreds of forensic disk images, thousands of Windows Registry hives, more than 10 million data files, and approximately 50 million Registry entries. The research has resulted in the design of a scalable triage system implemented as a set of computer forensic tools.
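The two noise-reduction ideas described in this abstract, profile intersection and dynamic blacklisting, plus the triage-style matching step, can be sketched with plain set operations. Artifact identifiers are simplified to strings here; the actual system (Vestigium) correlates richer metadata such as paths, hashes and Registry entries, and the threshold below is an assumption for illustration.

```python
# Sketch of application-profile noise reduction and triage matching.
# Artifacts are simplified to opaque strings; the threshold and the
# function names are assumptions, not the thesis's actual API.
def build_profile(runs, baseline):
    """Profile intersection: keep only artifacts observed in every
    run of the application, then blacklist (subtract) artifacts that
    also appear in a baseline run with no application activity."""
    profile = set.intersection(*map(set, runs))
    return profile - set(baseline)

def detect(profile, target, threshold=0.5):
    """Triage matching: flag the tool as present if a sufficient
    fraction of its profile artifacts appear in the target data set."""
    if not profile:
        return False
    hits = profile & set(target)
    return len(hits) / len(profile) >= threshold
```

A fractional threshold rather than an exact-match requirement is what lets this style of matching tolerate version differences and partial (residual or deleted) artifact sets, the scenarios where plain hash-set analysis fails.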

    Keystroke dynamics as a biometric

    Modern computer systems rely heavily on methods of authentication and identity verification to protect sensitive data. One of the most robust protective techniques involves adding a layer of biometric analysis to other security mechanisms, as a means of establishing the identity of an individual beyond reasonable doubt. In the search for a biometric technique which is both low-cost and transparent to the end user, researchers have considered analysing the typing patterns of keyboard users to determine their characteristic timing signatures. Previous research into keystroke analysis has either required fixed performance of known keyboard input or relied on artificial tests involving the improvisation of a block of text for analysis. It is proposed that this is insufficient to determine the nature of unconstrained typing in a live computing environment. In an attempt to assess the utility of typing analysis for improving intrusion detection on computer systems, we present the notion of ‘genuinely free text’ (GFT). Through the course of this thesis, we discuss the nature of GFT and attempt to address whether it is feasible to produce a lightweight software platform for monitoring GFT keystroke biometrics, while protecting the privacy of users. The thesis documents in depth the design, development and deployment of the multigraph-based BAKER software platform, a system for collecting statistical GFT data from live environments. This software platform has enabled the collection of an extensive set of keystroke biometric data for a group of participating computer users, the analysis of which we also present here. Several supervised learning techniques were used to demonstrate that the richness of keystroke information gathered from BAKER is indeed sufficient to recommend multigraph keystroke analysis as a means of augmenting computer security. In addition, we present a discussion of the feasibility of applying data obtained from GFT profiles in circumventing traditional static and free text analysis biometrics.
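The core of multigraph keystroke analysis is extracting timing features for sequences of consecutive keys and comparing them between profiles. The sketch below does this for digraphs (two-key sequences) with a simple mean-latency feature and a per-digraph distance; the feature choice and distance metric are illustrative assumptions, not BAKER's actual design.

```python
# Hypothetical sketch of digraph timing features for keystroke
# biometrics. Events are (key, press_time_ms) pairs in order; the
# feature (mean key-down to key-down latency per digraph) and the
# distance metric are assumptions for illustration.
from collections import defaultdict

def digraph_latencies(events):
    """Map each observed digraph to its mean inter-key latency."""
    feats = defaultdict(list)
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        feats[(k1, k2)].append(t2 - t1)
    return {dg: sum(v) / len(v) for dg, v in feats.items()}

def profile_distance(p, q):
    """Mean absolute latency difference over digraphs shared by both
    profiles; infinite when the profiles share no digraphs."""
    shared = p.keys() & q.keys()
    if not shared:
        return float("inf")
    return sum(abs(p[dg] - q[dg]) for dg in shared) / len(shared)
```

A supervised learner, as used in the thesis, would consume such per-multigraph features as input vectors; the distance function here only illustrates how two typing profiles can be compared directly.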

    Introductory Computer Forensics

    INTERPOL (the International Criminal Police Organization) built cybercrime programs to keep up with emerging cyber threats, and aims to coordinate and assist international operations for fighting crimes involving computers. Although significant international efforts are being made in dealing with cybercrime and cyber-terrorism, finding effective, cooperative, and collaborative ways to deal with complicated cases that span multiple jurisdictions has proven difficult in practice.

    An Evaluation of Forensic Tools for Linux : Emphasizing EnCase and PyFlag

    This master's thesis provides an evaluation and comparison of several computer forensic tools, with a particular focus on two specific tools. The first, EnCase Forensics, is a commercially available tool used by police and government agencies in several parts of the world. The second, PyFlag, is an open-source alternative that was used in the winning submission to the Digital Forensics Research Workshop (DFRWS) challenge in 2008. Although the tools are evaluated in their entirety, the main focus is on important search functionality. Considering that most research in the field is based on the Microsoft Windows platform, while less research has been carried out on the analysis of Linux systems, we examine these tools primarily in a Linux environment. With these tools we perform forensic acquisition and analysis of realistic data. In addition, a tool named dd is used to acquire data from Linux. This thesis includes specified test procedures, problems encountered during the testing itself, and the final results.