
    Bytewise Approximate Matching: The Good, The Bad, and The Unknown

    Hash functions are established and well known in digital forensics, where they are commonly used for proving integrity and for file identification (i.e., hash all files on a seized device and compare the fingerprints against a reference database). However, with respect to the latter operation, an active adversary can easily overcome this approach because traditional hashes are designed to be sensitive to any alteration of the input: the output changes significantly if even a single bit is flipped. Therefore, researchers developed approximate matching, a newer and less prominent area conceived as a more robust counterpart to traditional hashing. Since its conception, the community has constructed numerous algorithms, extensions, and additional applications for this technology, and continues to work on novel concepts to improve the status quo. In this survey article, we conduct a high-level review of the existing literature from a non-technical perspective and summarize the existing body of knowledge in approximate matching, with a special focus on bytewise algorithms. Our contribution gives researchers and practitioners an overview of the state of the art of approximate matching so that they may understand the capabilities and challenges of the field. In short, we present the terminology, use cases, classification, requirements, testing methods, algorithms, applications, and a list of primary and secondary literature.
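
    As a quick illustration of the sensitivity described above, the following Python sketch (not taken from the paper) hashes two inputs that differ in a single bit and shows that their SHA-256 digests share almost nothing. This is exactly the property that approximate matching algorithms such as ssdeep or sdhash relax in favour of a similarity score.

```python
# Minimal sketch: flipping one bit of the input yields a completely different
# SHA-256 digest, which is why traditional hashes cannot detect similar files.
import hashlib

original = bytearray(b"The quick brown fox jumps over the lazy dog")
modified = bytearray(original)
modified[0] ^= 0x01  # flip a single bit in the first byte

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(modified).hexdigest()

print("original :", h1)
print("modified :", h2)

# Count how many hex digits differ; for a cryptographic hash roughly 94% of
# positions differ on average, even though the inputs differ by only one bit.
diff = sum(a != b for a, b in zip(h1, h2))
print(f"{diff}/{len(h1)} hex digits differ")
```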

    A Cyber Forensics Needs Analysis Survey: Revisiting the Domain's Needs a Decade Later

    The number of successful cyber attacks continues to increase, threatening financial and personal security worldwide. Cyber/digital forensics is undergoing a paradigm shift in which evidence is frequently massive in size, demands live acquisition, and may be insufficient to convict a criminal residing in another legal jurisdiction. This paper presents the findings of the first broad needs analysis survey in cyber forensics in nearly a decade, aimed at obtaining an updated consensus of professional attitudes in order to optimize resource allocation and to prioritize problems and possible solutions more efficiently. Results from the 99 respondents gave compelling testimony that the following will be necessary in the future: 1) better education/training/certification (opportunities, standardization, and skill-sets); 2) support for cloud and mobile forensics; 3) backing for and improvement of open-source tools; 4) research on encryption, malware, and trail obfuscation; 5) revised laws (specific, up-to-date, and protective of user privacy); 6) better communication, especially between/with law enforcement (including establishing new frameworks to mitigate problematic communication); and 7) more personnel and funding.

    CuFA: A More Formal Definition for Digital Forensic Artifacts

    The term “artifact” currently does not have a formal definition within the domain of cyber/digital forensics, resulting in a lack of standardized reporting, linguistic understanding between professionals, and efficiency. In this paper we propose a new definition based on a survey we conducted, literature usage, prior definitions of the word itself, and similarities with archival science. This definition includes required fields that all artifacts must have and encompasses the notion of curation. Thus, we propose using a new term, curated forensic artifact (CuFA), to address items which have been cleared for entry into a CuFA database (one implementation, the Artifact Genome Project, abbreviated as AGP, is under development and briefly outlined). An ontological model encapsulates these required fields while utilizing a lower-level taxonomic schema. We use the Cyber Observable eXpression (CybOX) project due to its rising popularity and rigorous classifications of forensic objects. Additionally, we suggest some improvements on its integration into our model and identify higher-level location categories to illustrate tracing an object from creation through investigative leads. Finally, a step-wise procedure for researching and logging CuFAs is devised to accompany the model.
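
    To make the notions of required fields and curation more concrete, here is a hypothetical Python sketch of what a CuFA-style record could look like. The field names and values below are illustrative assumptions only; they do not reproduce the required fields or the CybOX mapping defined in the paper.

```python
# Hypothetical sketch of a curated forensic artifact (CuFA) record.
# Field names are assumptions for illustration, not the paper's definition.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class CuFA:
    """Illustrative CuFA record; all fields are assumed for this example."""
    name: str                   # human-readable label for the artifact
    location: str               # where the object was found on the system
    source_tool: str            # tool or procedure that surfaced it
    curated_by: str             # analyst or process that vetted the entry
    curated_at: datetime        # when it was cleared for the CuFA database
    cybox_object_type: str      # lower-level classification borrowed from CybOX
    investigative_leads: List[str] = field(default_factory=list)

entry = CuFA(
    name="Example browser history record",
    location="C:/Users/example/AppData/Local/Google/Chrome/User Data/Default/History",
    source_tool="hypothetical_parser",
    curated_by="analyst-01",
    curated_at=datetime(2016, 8, 1),
    cybox_object_type="FileObject",
)
print(entry.name, entry.cybox_object_type)
```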