47 research outputs found

    Forensic Artifact Finder (ForensicAF): An Approach & Tool for Leveraging Crowd-Sourced Curated Forensic Artifacts

    Current methods for artifact analysis and understanding depend on investigator expertise. Experienced and technically savvy examiners spend considerable time reverse engineering applications while attempting to find the traces they leave behind on systems. This takes valuable time away from the investigative process and slows down forensic examination. Furthermore, when specific artifact knowledge is gained, it stays within the respective forensic units. To combat these challenges, we present ForensicAF, an approach for leveraging curated, crowd-sourced artifacts from the Artifact Genome Project (AGP). The approach has the overarching goal of uncovering forensically relevant artifacts from storage media. We explain our approach and implement it as an Autopsy Ingest Module covering both File and Registry artifacts. We evaluated ForensicAF using systematic and random sampling experiments. While ForensicAF showed consistent results with Registry artifacts across all experiments, it also revealed that deeper folder traversal yields more File artifacts during data source ingestion. When experiments were conducted on case-scenario disk images without a priori knowledge, ForensicAF uncovered artifacts of forensic relevance that help solve those scenarios. We contend that ForensicAF is a promising approach for artifact extraction from storage media, and its utility will grow as more artifacts are crowd-sourced by AGP.
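
    As a rough illustration of the matching step, the sketch below checks observed paths against a small curated catalog. The catalog entries, field names, and patterns are invented for illustration; they do not reflect the actual AGP schema or the Autopsy ingest module API.

```python
# Hypothetical sketch of catalog-driven artifact matching. Catalog format,
# field names, and patterns are invented, not the real AGP schema.
import fnmatch

ARTIFACT_CATALOG = [
    {"name": "Chrome History", "type": "file",
     "pattern": "*/AppData/Local/Google/Chrome/User Data/*/History"},
    # Registry keys are stored with forward slashes here so one matcher
    # handles both artifact types after normalization.
    {"name": "Startup Run Key", "type": "registry",
     "pattern": "HKLM/SOFTWARE/Microsoft/Windows/CurrentVersion/Run*"},
]

def match_artifacts(paths, catalog=ARTIFACT_CATALOG):
    """Yield (path, artifact name) for every path matching a catalog pattern."""
    for path in paths:
        normalized = path.replace("\\", "/")
        for artifact in catalog:
            if fnmatch.fnmatch(normalized, artifact["pattern"]):
                yield path, artifact["name"]

hits = match_artifacts(
    [r"C:\Users\alice\AppData\Local\Google\Chrome\User Data\Default\History"])
for path, name in hits:
    print(f"{path} -> {name}")
```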

    Whitelisting System State in Windows Forensic Memory Visualizations

    Examiners in the field of digital forensics regularly encounter enormous amounts of data and must identify the few artifacts of evidentiary value. One challenge these examiners face is the manual reconstruction of complex datasets with both hierarchical and associative relationships. The complexity of this data requires significant knowledge, training, and experience to examine correctly and efficiently. Current methods provide text-based representations or low-level visualizations but leave the task of maintaining global context of system state to the examiner. This research presents a visualization tool that improves analysis by simultaneously representing the hierarchical and associative relationships alongside local detailed data within a single-page application. A novel whitelisting feature further improves analysis by eliminating items of less interest from view. Results from a pilot study demonstrate that the visualization tool can help examiners identify artifacts of interest more accurately and quickly.

    Whitelisting System State In Windows Forensic Memory Visualizations

    Examiners in the field of digital forensics regularly encounter enormous amounts of data and must identify the few artifacts of evidentiary value. The most pressing challenge these examiners face is the manual reconstruction of complex datasets with both hierarchical and associative relationships. The complexity of this data requires significant knowledge, training, and experience to examine correctly and efficiently. Current methods provide primarily text-based representations or low-level visualizations but leave the task of maintaining global context of system state to the examiner. This research presents a visualization tool that improves analysis by simultaneously representing the hierarchical and associative relationships alongside local detailed data within a single-page application. A novel whitelisting feature further improves analysis by eliminating items of little interest from view, allowing examiners to identify artifacts more quickly and accurately. Results from two pilot studies demonstrate that the visualization tool can help examiners identify artifacts of interest more accurately and quickly.
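
    A minimal sketch of the whitelisting idea, assuming a simple dictionary-based process tree rather than the tool's actual data model: known-benign nodes are pruned from view, but a whitelisted parent is kept whenever any of its descendants survives, preserving the hierarchical context around suspicious items.

```python
# Hedged sketch: prune known-benign items from a hierarchical system-state
# snapshot before rendering. Node structure and whitelist are illustrative.
WHITELIST = {"smss.exe", "csrss.exe", "services.exe", "svchost.exe"}

def prune(node, whitelist=WHITELIST):
    """Return a copy of the tree with whitelisted leaves removed.
    A whitelisted parent survives if any descendant survives pruning."""
    children = [c for c in (prune(ch, whitelist)
                            for ch in node.get("children", [])) if c]
    if node["name"] in whitelist and not children:
        return None
    return {"name": node["name"], "children": children}

tree = {"name": "services.exe", "children": [
    {"name": "svchost.exe", "children": []},
    {"name": "evil.exe", "children": []},
]}
# Keeps services.exe -> evil.exe, drops the benign svchost.exe leaf.
print(prune(tree))
```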

    Methodology for the Automated Metadata-Based Classification of Incriminating Digital Forensic Artefacts

    The ever-increasing volume of data in digital forensic investigations is one of the most discussed challenges in the field. Usually, most of the file artefacts on seized devices are not pertinent to the investigation, and manually retrieving the suspicious files that are is akin to finding a needle in a haystack. In this paper, a methodology for the automatic prioritisation of suspicious file artefacts (i.e., file artefacts pertinent to the investigation) is proposed to reduce the manual analysis effort required. The methodology is designed to work in a human-in-the-loop fashion: it predicts/recommends that an artefact is likely to be suspicious rather than giving a final analysis result. A supervised machine learning approach is employed, which leverages the recorded results of previously processed cases. The processes of feature extraction, dataset generation, training, and evaluation are presented in this paper. In addition, a toolkit for data extraction from disk images is outlined, which enables this method to be integrated with the conventional investigation process and work in an automated fashion.
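
    The sketch below illustrates the general shape of such a human-in-the-loop pipeline; the metadata features are invented, and scikit-learn's random forest stands in for whatever model the paper actually uses.

```python
# Illustrative pipeline: extract simple features from file metadata and
# train on labelled results of previously processed cases. Feature choice
# and model are assumptions, not the paper's actual design.
from sklearn.ensemble import RandomForestClassifier

def features(meta):
    """meta: dict with hypothetical keys 'size', 'extension', 'path'."""
    return [
        meta["size"],
        int(meta["extension"] in {".jpg", ".png", ".mp4"}),  # media file?
        meta["path"].lower().count("/"),                     # directory depth
        int("cache" in meta["path"].lower()),                # cache-dir hint
    ]

train = [
    ({"size": 2048, "extension": ".jpg", "path": "/Users/x/Pictures/a.jpg"}, 1),
    ({"size": 512, "extension": ".dll", "path": "/Windows/System32/b.dll"}, 0),
]
X = [features(m) for m, _ in train]
y = [label for _, label in train]
clf = RandomForestClassifier(n_estimators=10).fit(X, y)

# Recommend rather than decide: rank unseen artefacts by predicted suspicion
# and leave the final call to the examiner.
unseen = {"size": 4096, "extension": ".png", "path": "/Users/x/cache/c.png"}
print(clf.predict_proba([features(unseen)]))
```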

    Statistic Whitelisting for Enterprise Network Incident Response

    This research addresses the need for rapid evaluation of enterprise network hosts to identify items of significance, through the introduction of a statistic whitelist based on the behavior of the processes on each host. By taking advantage of the repetition of processes and the resources they access, a whitelist can be generated from large quantities of host machines. For each process, the loaded Modules and the TCP and UDP Connections are compared to identify which resources are most commonly accessed by that process. Results show 47% of processes receiving a whitelist score of 75% or greater on the five hosts identified as having the worst overall scores, rising to 60% of processes when the hosts more closely match those used to build the whitelist.
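
    One plausible reading of the scoring scheme, sketched below with invented data: count how often each (process, resource) pair occurs across many hosts, then score a process on a new host by the share of its resources that are common fleet-wide.

```python
# Hedged sketch of a statistic whitelist; thresholds and data are invented.
from collections import Counter

def build_whitelist(host_observations, min_fraction=0.5):
    """host_observations: one {process: set(resources)} dict per host.
    Keep (process, resource) pairs seen on at least min_fraction of hosts."""
    counts, hosts = Counter(), len(host_observations)
    for obs in host_observations:
        for proc, resources in obs.items():
            for r in resources:
                counts[(proc, r)] += 1
    return {pair for pair, n in counts.items() if n / hosts >= min_fraction}

def score(proc, resources, whitelist):
    """Fraction of this process's resources that are fleet-common."""
    if not resources:
        return 1.0
    return sum((proc, r) in whitelist for r in resources) / len(resources)

fleet = [{"svchost.exe": {"ws2_32.dll", "10.0.0.5:443"}},
         {"svchost.exe": {"ws2_32.dll"}}]
wl = build_whitelist(fleet)
# The unfamiliar outbound connection drags the score down to 0.5.
print(score("svchost.exe", {"ws2_32.dll", "198.51.100.7:4444"}, wl))
```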

    File integrity checking

    This thesis looks at file execution as an attack vector that leads to the execution of unauthorized code. File integrity checking is examined as a means of removing this attack vector, and the design, implementation, and evaluation of a best-of-breed file integrity checker for the Linux operating system are undertaken. We conclude that the resulting file integrity checker succeeds in removing file execution as an attack vector, does so at negligible computational cost, and offers innovative and useful features not currently found in any other Linux file integrity checker.
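
    A user-space sketch of the core check: hash the binary and compare it against a trusted baseline before allowing execution. An in-kernel checker like the one described would hook the exec path instead, and the baseline entry here is a hypothetical placeholder.

```python
# Minimal sketch of pre-execution file integrity checking.
import hashlib
import sys

# Trusted baseline of known-good digests. The value is a placeholder, not a
# real digest; a production checker would also protect this store itself.
BASELINE = {
    "/usr/bin/ls": "placeholder-known-good-sha256-hex",
}

def sha256_of(path: str) -> str:
    """Stream the file in chunks so large binaries hash in constant memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path: str) -> bool:
    expected = BASELINE.get(path)
    return expected is not None and sha256_of(path) == expected

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "/usr/bin/ls"
    if may_execute(target):
        print(f"{target}: integrity verified, execution allowed")
    else:
        print(f"{target}: not in baseline or hash mismatch, execution denied")
```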

    2. GI FG SIDAR Graduierten-Workshop über Reaktive Sicherheit (SPRING)

    SPRING is a scientific event in the field of reactive security that offers young researchers the opportunity to present the results of their own work and to establish contacts beyond their own university. After its premiere in Berlin, SPRING took place in Dortmund in 2007. The talks covered a broad spectrum, ranging from still-ongoing projects, some presented to a wider audience for the first time, to completed research that has been or will soon be presented at conferences or that forms a focal point of the author's Diplom thesis or doctoral dissertation. The accompanying abstracts are collected in this technical report and have been published electronically through the Universitätsbibliothek Dortmund in citable and searchable form. This issue contains contributions on the following topics: incident handling and web services, dynamic environments and vulnerabilities, anomaly and protocol detection, as well as various aspects of IT early warning: malware detection and analysis, sensors, and cooperation.

    On the malware detection problem : challenges and novel approaches

    Advisor: André Ricardo Abed Grégio. Co-advisor: Paulo Lício de Geus. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática; defended in Curitiba. Includes references. Concentration area: Computer Science.
    Abstract: Malware is a major threat to most current computer systems, causing image damage and financial losses to individuals and corporations, and thus requiring the development of detection solutions that prevent malware from causing harm and allow computers to be used safely. Many initiatives and solutions to detect malware have been proposed over time, from AntiViruses (AVs) to sandboxes, but effective and efficient malware detection remains an open problem. In this work, I therefore examine some malware detection challenges, pitfalls, and consequences in order to help increase the detection capabilities of security solutions. More specifically, I propose a new approach for conducting malware research experiments in a practical yet still scientific manner, and I leverage this approach to investigate four issues: (i) the need for understanding context to allow proper detection of localized threats; (ii) the need for developing better metrics for AV solution evaluation; (iii) the feasibility of leveraging hardware-software collaboration for efficient AV implementation; and (iv) the need for predicting future threats to allow faster incident response.

    Similarity Digest Search: A Survey and Comparative Analysis of Strategies to Perform Known File Filtering Using Approximate Matching

    Digital forensics is a branch of computer science that investigates and analyzes electronic devices in search of crime evidence. There are several ways to perform this search. Known File Filtering (KFF) is one of them: a list of objects of interest is used to reduce or separate the data under analysis. Holding a database of hashes of such objects, the examiner performs lookups for matches against the target device. However, because traditional hash functions cannot detect similar objects, new methods, collectively called approximate matching, have been designed. These functions have interesting characteristics for KFF investigations but suffer mainly from high cost when dealing with huge data sets, as the search is usually done by brute force. To mitigate this problem, strategies have been developed to perform lookups more efficiently. In this paper, we present the state of the art of similarity digest search strategies, along with a detailed comparison involving several aspects, such as time complexity, memory requirements, and search precision. Our results show that none of the approaches addresses all of these main aspects at once. Finally, we discuss future directions and present requirements for a new strategy aimed at overcoming current limitations.
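
    The brute-force baseline that the surveyed strategies try to improve on looks roughly as follows; Jaccard similarity over byte 4-grams is a toy stand-in for a real approximate-matching function such as ssdeep or sdhash.

```python
# Brute-force KFF lookup: compare a target against every reference digest.
# The O(n) full scan is exactly the cost that indexing strategies reduce.
def ngrams(data: bytes, n: int = 4) -> set:
    """All byte n-grams of the input; a crude feature set for similarity."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: set, b: set) -> float:
    """Jaccard similarity; 1.0 when both sets are empty."""
    return len(a & b) / len(a | b) if a or b else 1.0

def kff_lookup(target: bytes, reference_db: dict, threshold: float = 0.3):
    """Scan the whole database and return every match above the threshold."""
    t = ngrams(target)
    return [(name, s) for name, d in reference_db.items()
            if (s := similarity(t, ngrams(d))) >= threshold]

db = {"known_bad.doc": b"confidential payload v1"}
print(kff_lookup(b"confidential payload v2", db))
```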

    A System for Identification of Potentially Sensitive Data in the Cloud

    Due to the increasing number of digital services that handle sensitive data such as personal, health, or financial information, data breaches are becoming more common and more costly: the average data breach in 2021 cost USD 4.24 million. Data loss prevention (DLP) solutions present a new and developing security paradigm that focuses on preventing data breaches. Unlike traditional security mechanisms, which analyze metadata and access rights, DLP solutions focus on analyzing content. Depending on the type of data being analyzed, a wide range of analysis methods can be used, such as regular expressions, fingerprinting methods, or statistical methods. While many DLP solutions offer novel approaches along the dimension of data analysis methods, they do not focus on the usability of defining data protection policies. In this thesis we explore a solution that supports data protection policy definition using an interpreted domain-specific language (DSL). Our solution aims to let users define data protection policies in an easily readable format based on the core concepts of the DLP paradigm. However, interpreting the DSL incurs some performance overhead compared to native execution. We therefore suggest how the solution can be further improved to reach minimal or no performance overhead while still providing users with a new approach to defining data protection policies.
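
    To make the idea concrete, here is a toy interpreter for an invented two-keyword policy DSL; the thesis's actual grammar and engine are not reproduced here.

```python
# Invented policy DSL for illustration. One rule per line:
#   BLOCK <label> MATCHING <regex>
import re

def parse_policy(text):
    """Parse policy text into (label, compiled regex) rules."""
    rules = []
    for line in text.strip().splitlines():
        _, label, _, pattern = line.split(maxsplit=3)
        rules.append((label, re.compile(pattern)))
    return rules

def scan(document, rules):
    """Return the labels of every rule whose pattern occurs in the document."""
    return [label for label, rx in rules if rx.search(document)]

policy = r"""
BLOCK credit-card MATCHING \b(?:\d[ -]?){13,16}\b
BLOCK us-ssn MATCHING \b\d{3}-\d{2}-\d{4}\b
"""
print(scan("card: 4111 1111 1111 1111", parse_policy(policy)))
```

    Interpreting rules like these at scan time is what introduces the overhead discussed above; compiling the policy ahead of time to native matching code is one way a solution could reduce it.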