Windows Driver Memory Analysis: A Reverse Engineering Methodology
In a digital forensics examination, the capture and analysis of volatile data provides significant information on the state of the computer at the time of seizure. Memory analysis is a premier method of discovering volatile digital forensic information. While much work has been done in extracting forensic artifacts from Windows kernel structures, less focus has been paid to extracting information from Windows drivers. There are two reasons for this: (1) source code for one version of the Windows kernel (but not its associated drivers) is available for educational use, and (2) drivers are generally called asynchronously and contain no exported functions. Therefore, finding the handful of driver functions of interest among the thousands of candidates makes reverse code engineering problematic at best. Developing a methodology to minimize the effort of analyzing these drivers, finding the functions of interest, and extracting the data structures of interest is highly desirable. This paper provides two contributions. First, it describes a general methodology for reverse code engineering of Windows driver memory structures. Second, it applies the methodology to tcpip.sys, the Windows driver that controls network connectivity. The result is the extraction from tcpip.sys of the data structures needed to determine current network connections and listeners from the 32- and 64-bit versions of Windows Vista and Windows 7. Keywords: Direct Kernel Object Manipulation (DKOM), tcpip.sys, Windows 7, Windows Vista. 2000 MSC: 60, 490.
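The structure extraction the paper describes is, in spirit, what memory-forensics tools such as Volatility's netscan do: locate tcpip.sys allocations in a raw memory image by their pool tags, then parse the version-specific structures that follow. A minimal Python sketch of the first step, pool-tag scanning, assuming the Vista/7-era tags TcpL (listeners) and TcpE (TCP endpoints); the function name and alignment check are illustrative, not the paper's implementation:

```python
# Hedged sketch: find candidate tcpip.sys allocations in a raw memory dump
# by scanning for the pool tags that precede them on Windows Vista/7
# ("TcpL" for listeners, "TcpE" for TCP endpoints). Parsing the structures
# that follow each hit is version-specific and is what the paper's
# methodology recovers.
POOL_TAGS = {b"TcpL": "listener", b"TcpE": "connection"}

def scan_pool_tags(memory: bytes, alignment: int = 8) -> list:
    """Return sorted (offset, kind) pairs for aligned pool-tag hits."""
    hits = []
    for tag, kind in POOL_TAGS.items():
        start = 0
        while (idx := memory.find(tag, start)) != -1:
            if idx % alignment == 0:  # pool allocations are 8-byte aligned
                hits.append((idx, kind))
            start = idx + 1
    return sorted(hits)
```

A real scanner would additionally validate the surrounding _POOL_HEADER fields (block size, pool type) to weed out false positives before parsing the listener or endpoint structure behind each hit.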
Forensics Acquisition — Analysis and Circumvention of Samsung Secure Boot enforced Common Criteria Mode
Sharpening Your Tools: Updating bulk_extractor for the 2020s
Bulk_extractor is a high-performance digital forensics tool written in C++. Between 2018 and 2022 we updated the program from C++98 to C++17, performed a complete code refactoring, and adopted a unit test framework. The new version typically runs with 75% more throughput than the previous version, which we attribute to improved multithreading. We provide lessons and recommendations for other digital forensics tool maintainers.
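The multithreading the authors credit for the throughput gain follows the pattern bulk_extractor has long used: split the image into chunks and run feature scanners over the chunks in parallel. A simplified Python illustration of that fan-out (bulk_extractor itself is C++, and the email regex here is a toy stand-in for its real scanners):

```python
from concurrent.futures import ThreadPoolExecutor
import re

# Toy stand-in for one bulk_extractor scanner: an email-like byte pattern.
EMAIL = re.compile(rb"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")

def scan_chunk(chunk: bytes) -> list:
    """Extract email-like features from one buffer, as a scanner would."""
    return [m.group() for m in EMAIL.finditer(chunk)]

def scan_image(chunks, workers: int = 4) -> list:
    """Fan chunks out to a worker pool and merge the per-chunk features."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(scan_chunk, chunks)  # preserves chunk order
    return [feature for feats in results for feature in feats]
```

Note that in CPython the GIL limits how much regex-bound threads actually overlap; the C++ original gets true parallelism, which is the point of the pattern.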
The Amorphous Nature of Hackers: An Exploratory Study
In this work, we aim to better understand outsider perspectives of the hacker community through a series of situation-based survey questions. By doing this, we hope to gain insight into the overall reputation of hackers among participants from a wide range of technical and non-technical backgrounds. This is important to digital forensics since convicted hackers will be tried by people, each with their own perception of who hackers are. Do cyber crimes and national security issues negatively affect people's perceptions of hackers? Do hacktivism and information warfare positively affect people's perceptions of hackers? Do individual personality factors affect one's perception of hackers? To answer these questions in a systematic manner, we created two hypotheses. The first hypothesis tested participants' responses in 9 scenarios, whereas the second tested participants' responses based on their scores on the Neuroticism-Extraversion-Openness Inventory (NEO) personality subscale. In brief, our results were indicative of how personality traits could influence perceptions of hackers and hacktivism. Possibilities for future research and implications for legal and criminal justice policy are discussed.
Robust PDF Files Forensics Using Coding Style
Identifying how a file has been created is often of interest in security: attackers can exploit this information to tune their attacks, and defenders can understand how a malicious file was created after an incident. In this work, we want to identify how a PDF file has been created. This problem is important because PDF files are extremely popular: many organizations publish PDF files online, and malicious PDF files are commonly used by attackers. Our approach to detecting which software has been used to produce a PDF file is based on coding style: patterns that are only created by certain PDF producers. We have analyzed the coding style of 900 PDF files produced using 11 PDF producers on 3 different operating systems. We have obtained a set of 192 rules which can be used to identify these 11 PDF producers. We have tested our detection tool on 508,836 PDF files published on scientific preprint servers. Our tool is able to detect certain producers with an accuracy of 100%. Its overall detection rate is still high (74%). We were able to apply our tool to identify how online PDF services work and to spot inconsistencies.
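The rule-based idea is straightforward to sketch: associate each producer with byte patterns that only it emits, and attribute a file to a producer when all of that producer's patterns match. The rules below are simplified, hypothetical stand-ins for illustration, not the paper's actual 192 rules:

```python
import re

# Illustrative rules only: real fingerprints include object ordering, xref
# formatting, and dictionary key order. These simplified patterns are
# assumptions for the sketch, not the paper's rule set.
RULES = {
    "pdflatex": [rb"/Producer \(pdfTeX", rb"/PTEX\."],
    "LibreOffice": [rb"/Producer \(LibreOffice"],
    "Microsoft Word": [rb"/Creator \(Microsoft.{0,10}Word"],
}

def identify_producer(pdf_bytes: bytes) -> list:
    """Return producers whose every pattern matches the raw PDF bytes."""
    return [name for name, patterns in RULES.items()
            if all(re.search(p, pdf_bytes) for p in patterns)]
```

Requiring every pattern to match keeps precision high at the cost of recall, which mirrors the paper's result: some producers are detected with 100% accuracy while overall detection sits lower.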
Identifying Authorship Style in Malicious Binaries: Techniques, Challenges & Datasets
Attributing a piece of malware to its creator typically requires threat intelligence. Binary attribution increases the level of difficulty, as it mostly relies upon the ability to disassemble binaries to identify authorship style. Our survey explores malicious authorship style and the adversarial techniques authors use to remain anonymous. We examine the adversarial impact on state-of-the-art methods, identify key findings, and explore the open research challenges. To mitigate the lack of ground-truth datasets in this domain, we publish alongside this survey the largest and most diverse meta-information dataset of 15,660 malware samples labeled to 164 threat actor groups.
Analysis of digital evidence in identity theft investigations
Identity Theft is a significant problem in the modern internet-driven era. This type of computer crime can be achieved in a number of different ways, and various statistical figures suggest it is on the increase. It threatens individual privacy and self-assurance, while efforts at increased security and protection measures appear inadequate to prevent it. A forensic analysis of the digital evidence should be able to provide precise findings after the investigation of Identity Theft incidents. At present, the investigation of internet-based Identity Theft is performed on an ad hoc and unstructured basis with respect to the digital evidence. This research work aims to construct a formalised and structured approach to digital Identity Theft investigations that would improve current computer forensic investigative practice. The research hypothesis is to create an analytical framework to facilitate the investigation of internet Identity Theft cases and the processing of the related digital evidence.
This research work makes two key contributions to the subject: a) proposing an approach to examining different computer crimes using a process based specifically on their nature, and b) differentiating the examination procedure between the victim's and the fraudster's side, depending on the ownership of the digital media. The background research on the existing investigation methods supports the need to move towards a dedicated framework that supports Identity Theft investigations. The presented investigation framework is designed based on the structure of existing computer forensic frameworks. It is a flexible, conceptual tool that will assist the investigator's work and the analysis of incidents related to this type of crime. The research outcome is presented in detail, with supporting material relevant to the investigator. The intention is to offer a coherent tool that can be used by computer forensics investigators. Therefore, the research outcome is not only evaluated through a laboratory experiment, but also strengthened and improved based on evaluation feedback from law enforcement experts.
While personal identities are increasingly being stored and shared on digital media, the threat of personal and private information being used fraudulently cannot be eliminated. However, when such incidents are precisely examined, the nature of the problem can be more clearly understood.
Secure migration of WebAssembly-based mobile agents between secure enclaves
Cryptography and security protocols are today commonly used to protect data at-rest and in-transit. In contrast, protecting data in-use has seen only limited adoption. Secure data transfer methods employed today rarely provide guarantees regarding the trustworthiness of the software and hardware at the communication endpoints.
The field of study that addresses these issues is called Trusted or Confidential Computing and relies on the use of hardware-based techniques. These techniques aim to isolate critical data and its processing from the rest of the system: applications run inside hardware-isolated Secure Execution Environments (SEEs) where they cannot be tampered with during operation. Over the past few decades, several implementations of SEEs have been introduced, each based on a different hardware architecture. Lately, however, the trend has been to move towards architecture-independent SEEs.
As part of this trend, a Huawei research project is developing a secure enclave framework that enables secure execution and migration of applications (mobile agents), regardless of the underlying architecture. This thesis contributes to the development of the framework by participating in the design and implementation of a secure migration scheme for the mobile agents. The goal is a scheme in which the mobile agent can be transferred without compromising the security guarantees provided by SEEs. Further, the thesis provides performance measurements of the migration scheme implemented in a proof of concept of the framework.