
    Extraction of User Activity through Comparison of Windows Restore Points

    The extraction of past user activity is one of the main goals in the analysis of digital evidence. In this paper we present a methodology for extracting this activity by comparing multiple Restore Points found in the Windows XP operating system. We concentrate on comparing the copies of the registry hives found within these points. The registry copies represent a snapshot in time of the state of the system, and differences between them can reveal user activity from one instant to another. This approach is implemented as a tool that is able to compare any set of offline hive files and present the results to the user. Investigative techniques for using the software as efficiently as possible are also presented. These range from general analysis, in which areas of high user activity are pinpointed, to targeted techniques, in which activity relating to specific files and file types is recovered.
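    A minimal sketch of the comparison idea described above, assuming each registry hive snapshot has already been parsed into nested dictionaries by an offline hive parser; the data layout and the snapshot names in the usage comment are placeholders, not the paper's tool:

        # Sketch: diff two registry-hive snapshots parsed into nested dicts of the
        # form {"values": {name: data}, "subkeys": {name: key_dict}}.
        # The parsing step itself (an offline hive parser) is assumed.

        def diff_keys(old, new, path=""):
            """Yield (change, key_path, detail) tuples describing differences."""
            old_vals = old.get("values", {})
            new_vals = new.get("values", {})
            for name in new_vals.keys() - old_vals.keys():
                yield ("value added", path, name)
            for name in old_vals.keys() - new_vals.keys():
                yield ("value deleted", path, name)
            for name in old_vals.keys() & new_vals.keys():
                if old_vals[name] != new_vals[name]:
                    yield ("value modified", path, name)

            old_subs = old.get("subkeys", {})
            new_subs = new.get("subkeys", {})
            for name in new_subs.keys() - old_subs.keys():
                yield ("key added", f"{path}\\{name}", None)
            for name in old_subs.keys() - new_subs.keys():
                yield ("key deleted", f"{path}\\{name}", None)
            for name in old_subs.keys() & new_subs.keys():
                yield from diff_keys(old_subs[name], new_subs[name], f"{path}\\{name}")

        # Hypothetical usage: snapshot_rp12 and snapshot_rp14 would come from two
        # Restore Points; every reported difference is a candidate trace of activity.
        # for change, key_path, detail in diff_keys(snapshot_rp12, snapshot_rp14):
        #     print(change, key_path, detail or "")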

    Extraction and Categorisation of User Activity from Windows Restore Points

    The extraction of user activity is one of the main goals in the analysis of digital evidence. In this paper we present a methodology for extracting this activity by comparing multiple Restore Points found in the Windows XP operating system. We compare the copies of the registry hives found within these points; each copy represents a snapshot of the state of the system at a certain point in time, and differences between copies can reveal user activity from one instant to another. The algorithms for comparing the hives and interpreting the results are computationally complex. We develop an approach that takes into account the nature of the investigation and the characteristics of the hives to reduce the complexity of the comparison and result-interpretation processes. The approach concentrates on hives that show higher activity and highlights only those differences that are relevant to the investigation. It is implemented as a software tool that is able to compare any set of offline hives and categorise the results according to the user's needs. Categorising the results in terms of activity helps the investigator interpret them. We present a general concept of result categorisation and demonstrate its efficiency on Windows XP, but the approach can be adapted to other Windows versions, including the most recent ones.
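    A hedged sketch of how the categorisation step could look, assuming the comparison stage reports changed registry key paths; the category names and path prefixes below are illustrative examples, not the categories defined in the paper:

        # Sketch: assign each changed registry key path to an activity category so
        # the investigator can filter the comparison output. Prefixes are examples.
        ACTIVITY_CATEGORIES = {
            "USB devices":        [r"\ControlSet001\Enum\USBSTOR"],
            "Recent documents":   [r"\Software\Microsoft\Windows\CurrentVersion\Explorer\RecentDocs"],
            "Run history":        [r"\Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU"],
            "Installed software": [r"\Software\Microsoft\Windows\CurrentVersion\Uninstall"],
        }

        def categorise(changed_paths):
            """Group changed key paths by category; unmatched paths go to 'Other'."""
            grouped = {name: [] for name in ACTIVITY_CATEGORIES}
            grouped["Other"] = []
            for path in changed_paths:
                lowered = path.lower()
                for name, prefixes in ACTIVITY_CATEGORIES.items():
                    if any(p.lower() in lowered for p in prefixes):
                        grouped[name].append(path)
                        break
                else:
                    grouped["Other"].append(path)
            return grouped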

    Detection of Steganography-Producing Software Artifacts on Crime-Related Seized Computers

    Steganography is the art and science of hiding information within information so that an observer does not know that communication is taking place. Bad actors passing information using steganography are of concern to the national security establishment and law enforcement. An attempt was made to determine if steganography was being used by criminals to communicate information. Web crawling technology was used to download images from Web sites that were considered likely candidates for containing information hidden using steganographic techniques, and a detection tool was used to analyze these images. The research failed to demonstrate that steganography was prevalent on the public Internet. The probable reasons included the growth and availability of a large number of steganography-producing tools and the limited capacity of the detection tools to cope with them. Thus, a redirection was introduced in the methodology: the detection focus was shifted from the analysis of the ‘product’ of the steganography-producing software (i.e., the images) to the ‘artifacts’ left by the steganography-producing software while it is being used to generate steganographic images. This approach was based on the concept of a ‘Stego-Usage Timeline’. As a proof of concept, a sample set of crime-related seized computers was scanned for the remnants of steganography-producing software. The results demonstrated that the problem of ‘the detection of the usage of steganography’ could be addressed by the approach adopted after the research redirection, and that certain steganographic software was popular among the criminals. Thus, the contribution of the research lies in demonstrating that the limitations of tools based on the signature detection of steganographically altered images can be overcome by focusing the detection effort on the artifacts of the steganography-producing tools. Keywords: steganography, signature detection, file artifact detection
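    A minimal sketch of the artifact-scanning idea described above, assuming a signature list of file names and hashes left behind by steganography-producing tools; the tool names and signature values shown are placeholders, not the study's actual signature set:

        # Sketch: walk a mounted (read-only) file system copy and flag files whose
        # name or SHA-256 hash matches a known stego-tool artifact signature.
        import hashlib
        import os

        # Placeholder signature sets: {artifact_name_or_hash: tool_label}
        KNOWN_NAMES = {"stegotool.exe": "ExampleStegoTool"}
        KNOWN_HASHES = {"d2f0...placeholder...": "ExampleStegoTool"}

        def sha256(path, chunk=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while block := f.read(chunk):
                    h.update(block)
            return h.hexdigest()

        def scan(mount_point):
            """Yield (path, tool) for every file matching a known artifact."""
            for root, _dirs, files in os.walk(mount_point):
                for name in files:
                    full = os.path.join(root, name)
                    if name.lower() in KNOWN_NAMES:
                        yield full, KNOWN_NAMES[name.lower()]
                        continue
                    try:
                        digest = sha256(full)
                    except OSError:
                        continue  # unreadable file; skip rather than abort the scan
                    if digest in KNOWN_HASHES:
                        yield full, KNOWN_HASHES[digest]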

    Automated Digital Forensic Triage: Rapid Detection of Anti-Forensic Tools

    We live in the information age. Our world is interconnected by digital devices and electronic communication. As such, criminals are finding opportunities to exploit our information-rich electronic data. In 2014, the estimated annual cost of computer-related crime was more than 800 billion dollars. Examples include the theft of intellectual property, electronic fraud, identity theft and the distribution of illicit material. Digital forensics grew out of the necessity to combat computer crime and involves the investigation and analysis of electronic data after a suspected criminal act. Challenges in digital forensics exist due to constant changes in technology. Investigation challenges include exponential growth in the number of cases and the size of targets; for example, forensic practitioners must analyse multi-terabyte cases comprising numerous digital devices. A variety of applied challenges also exist due to continual technological advancement; for example, anti-forensic tools, including the malicious use of encryption or data wiping tools, hinder digital investigations by hiding or removing evidence. In response, the objective of the research reported here was to automate the effective and efficient detection of anti-forensic tools. A design science research methodology was selected as it provides an applied research method to design, implement and evaluate an innovative Information Technology (IT) artifact to solve a specified problem. The research objective required that a system be designed and implemented to perform automated detection of digital artifacts (e.g., data files and Windows Registry entries) on a target data set. The goal of the system is to automatically determine whether an anti-forensic tool is present or absent, in order to prioritise additional in-depth investigation. The system performs rapid forensic triage, suitable for execution against multiple investigation targets, providing an analyst with high-level information regarding potential malicious anti-forensic tool usage. The system is divided into two main stages: 1) design and implementation of a solution to automate creation of an application profile (application software reference set) of known unique digital artifacts; and 2) digital artifact matching between the created reference set and a target data set. Two tools were designed and implemented: 1) a live differential analysis tool, named LiveDiff, to reverse engineer application software with a specific emphasis on digital forensic requirements; and 2) a digital artifact matching framework, named Vestigium, to correlate digital artifact metadata and detect anti-forensic tool presence. In addition, a forensic data abstraction, named Application Profile XML (APXML), was designed to store and distribute digital artifact metadata. An associated Application Programming Interface (API), named apxml.py, was authored to provide automated processing of APXML documents. Together, the tools provided an automated triage system to detect anti-forensic tool presence on an investigation target. A two-phase approach was employed to assess the research products. The first phase of experimental testing involved demonstration in a controlled laboratory environment. First, the LiveDiff tool was used to create application profiles for three anti-forensic tools. The automated data collection and comparison procedure was more effective and efficient than previous approaches.
Two data reduction techniques were tested to remove irrelevant operating system noise: application profile intersection and dynamic blacklisting were both found to be effective in this regard. Second, the profiles were used as input to Vestigium and automated digital artifact matching was performed against authored known data sets. The results established the desired system functionality, and the demonstration then led to refinements of the system, in keeping with the cyclical nature of design science. The second phase of experimental testing involved evaluation using two additional data sets to establish effectiveness and efficiency in a real-world investigation scenario. First, a public data set was subjected to testing to provide research reproducibility, as well as to evaluate system effectiveness in a variety of complex detection scenarios. Results showed the ability to detect anti-forensic tools using a different version from that included in the application profile and on a different Windows operating system version, both scenarios in which traditional hash set analysis fails. Furthermore, Vestigium was able to detect residual and deleted information, even after a tool had been uninstalled by the user. The efficiency of the system was determined and refinements made, resulting in an implementation that can meet forensic triage requirements. Second, a real-world data set was constructed using a collection of second-hand hard drives. The goal was to test the system using unpredictable and diverse data to provide more robust findings in an uncontrolled environment. The system detected one anti-forensic tool on the data set and processed all input data successfully without error, further validating the system design and implementation. The key outcome of this research is the design and implementation of an automated system to detect anti-forensic tool presence on a target data set. Evaluation suggested the solution was both effective and efficient, adhering to forensic triage requirements. Furthermore, techniques not previously utilised in forensic analysis were designed and applied throughout the research: dynamic blacklisting and profile intersection removed irrelevant operating system noise from application profiles; metadata matching methods resulted in efficient digital artifact detection; and path normalisation aided full path correlation in complex matching scenarios. The system was subjected to rigorous experimental testing on three data sets comprising more than 10 terabytes of data. The ultimate outcome is a practically implemented solution that has been executed on hundreds of forensic disk images, thousands of Windows Registry hives, more than 10 million data files, and approximately 50 million Registry entries. The research has resulted in the design of a scalable triage system implemented as a set of computer forensic tools.
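    A simplified sketch of two of the ideas described above, profile intersection to remove operating-system noise and artifact matching against a target; the data structures and the normalisation rules are placeholders and do not reproduce the APXML format or the Vestigium implementation:

        # Sketch: each profile run is a set of (normalised_path, sha1) tuples.
        # Intersecting several runs of the same application keeps only artifacts
        # that appear every time, discarding incidental operating-system noise.

        def build_reference_set(profile_runs):
            """profile_runs: list of sets of (path, sha1) tuples from repeated runs."""
            reference = set(profile_runs[0])
            for run in profile_runs[1:]:
                reference &= run
            return reference

        def normalise_path(path):
            """Fold drive- and user-specific components so paths compare across systems."""
            lowered = path.lower().replace("/", "\\")
            if lowered[1:3] == ":\\":          # strip the drive letter
                lowered = lowered[2:]
            parts = lowered.split("\\")
            if len(parts) > 2 and parts[1] == "users":
                parts[2] = "%user%"            # mask the user profile name
            return "\\".join(parts)

        def match(reference, target_files):
            """target_files: iterable of (path, sha1) tuples from the target data set."""
            ref_paths = {normalise_path(p) for p, _ in reference}
            return [(p, h) for p, h in target_files if normalise_path(p) in ref_paths]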

    A Framework for Identifying Host-based Artifacts in Dark Web Investigations

    The dark web is the hidden part of the internet that is not indexed by search engines and is only accessible with a specific browser like The Onion Router (Tor). Tor was originally developed as a means of secure communication and is still used worldwide by individuals seeking privacy or wanting to circumvent restrictive regimes. The dark web has become synonymous with nefarious and illicit content, which manifests itself in underground marketplaces containing illegal goods such as drugs, stolen credit cards, stolen user credentials, child pornography, and more (Kohen, 2017). Dark web marketplaces contribute to both illegal drug usage and child pornography. Given the fundamental goal of privacy and anonymity, there are limited techniques for finding forensic artifacts and evidence files when investigating misuse and criminal activity on the dark web. Previous studies of digital forensics frameworks reveal a common theme of collection, examination, analysis, and reporting. The existence and frequency of proposed frameworks demonstrate the acceptance and utility of such frameworks in the field of digital forensics. Previous studies of dark web forensics have focused on network forensics rather than host-based forensics. macOS is the second most popular operating system after Windows (Net Marketshare, n.d.); however, previous research has focused on the Windows operating system, with little attention given to macOS forensics. This research uses design science methodology to develop a framework for identifying host-based artifacts during a digital forensic investigation involving suspected dark web use. Both the Windows operating system and macOS are included, with the expected result being a reusable, comprehensive framework that is easy to follow and assists investigators in finding artifacts that are designed to be hidden or are otherwise hard to find. The contribution of this framework lies in assisting investigators in identifying evidence in cases where the user is suspected of accessing the dark web with criminal intent and little or no other evidence of a crime is present. The artifact produced for this research, The Dark Web Artifact Framework, was evaluated using three different methods to ensure that it met the stated goals of being easy to follow, considering both the Windows and macOS operating systems, considering multiple ways of accessing the dark web, and being adaptable to future platforms. The methods of evaluation included an experimental evaluation conducted using a simulation of the framework, a comparison of a previously worked dark web case against the created framework, and the expert opinion of members of the South Dakota Internet Crimes Against Children task force (ICAC) and the Division of Criminal Investigation (DCI). A digital component can be found in nearly every crime committed today. The Dark Web Artifact Framework is a reusable, paperless, comprehensive framework that provides investigators with a map to follow to locate the artifacts necessary to determine whether the system being investigated has been used to access the dark web for the purpose of committing a crime. In creating this framework, a process was also created that will contribute to future work. The yes/no, if/then structure of the framework is adaptable to fit workflows in any area that would benefit from a recurring process.
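    A hedged sketch of how the yes/no, if/then structure described above could be represented in code; the questions and investigative actions shown are illustrative placeholders, not the framework's actual decision points:

        # Sketch: the framework as a binary decision tree. Each node asks a yes/no
        # question; leaf nodes tell the examiner which artifacts to collect next.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Node:
            question: str                      # e.g. "Is the operating system Windows?"
            yes: Optional["Node"] = None
            no: Optional["Node"] = None
            action: Optional[str] = None       # set only on leaf nodes

        # Illustrative fragment of a framework walk (placeholder content).
        tree = Node(
            question="Is the operating system Windows?",
            yes=Node(
                question="Is a Tor Browser installation folder present?",
                yes=Node(question="", action="Examine browser state and preference files"),
                no=Node(question="", action="Check execution traces for evidence of Tor use"),
            ),
            no=Node(
                question="Is the operating system macOS?",
                yes=Node(question="", action="Check /Applications and the user Library for Tor Browser data"),
                no=Node(question="", action="Out of scope for this fragment"),
            ),
        )

        def walk(node, answer_fn):
            """answer_fn(question) -> bool; follow the tree until an action is reached."""
            while node.action is None:
                node = node.yes if answer_fn(node.question) else node.no
            return node.action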

    Information leakage and steganography: detecting and blocking covert channels

    This PhD thesis explores the threat of information theft perpetrated by malicious insiders. As opposed to outsiders, insiders have access to information assets belonging to the organization, know the organization's infrastructure and, more importantly, know the value of the different assets the organization holds. The risk created by malicious insiders has led both the research community and commercial providers to devote effort to creating mechanisms and solutions to reduce it. However, the lack of certain controls in current proposals may lead security administrators to a false sense of security that could actually ease information theft attempts. As a first step of this dissertation, a study of current state-of-the-art proposals regarding information leakage protection was performed. This study identified the main weaknesses of current proposals, which are chiefly the lack of control over the usage of steganographic algorithms, the lack of control over modern mobile devices, and the lack of control over the actions insiders perform inside the trusted applications they commonly use. Each of these drawbacks has been explored during this dissertation. Regarding the usage of steganographic algorithms, two different steganographic systems have been proposed. First, a steganographic algorithm that transforms source code into innocuous text is presented. This system uses context-free grammars to parse the source code to be hidden and produce an innocuous text. It could be used to extract valuable source code from software development environments, where security restrictions are usually relaxed. Second, a steganographic application for iOS devices is also presented. This application, called “Hide It In”, allows the user to embed images into other, innocuous images and send the result through the device's email account. The application includes a covert mode that allows pictures to be taken without showing that fact on the device's screen. Applications of this kind can be used in most environments that handle sensitive information, as such environments rarely incorporate mechanisms to control the usage of advanced mobile devices. The application, which is already available on the Apple App Store, has been downloaded more than 5,000 times. To protect organizations against the malicious usage of steganography, several techniques can be implemented; this thesis presents two different approaches. First, steganographic detectors could be deployed throughout the organization to detect possible transmissions of stego-objects outside the organization's perimeter. In this regard, a proposal to detect hidden information inside executable files is presented. The proposed detector, which measures the assembler instruction selection made by compilers, is able to correctly identify stego-objects created with the tool Hydan. Second, steganographic sanitizers could be deployed over the organization's infrastructure to reduce the capacity of covert channels that can transmit information outside the organization. In this regard, a framework to prevent the usage of steganography over the HTTP protocol is proposed. The presented framework disassembles HTTP messages, overwrites the possible carriers of hidden information with random noise, and reassembles the HTTP messages. The results obtained show that it is possible to greatly reduce the capacity of covert channels created through HTTP; however, the system introduces a considerable delay in communications.
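    A minimal sketch of the HTTP sanitisation idea described above, assuming only a small set of header fields is treated as a potential carrier; the carrier lists and rewriting rules below are illustrative assumptions, not the thesis's framework:

        # Sketch: canonicalise outgoing HTTP headers so that covert channels hidden
        # in header order, header case or optional fields are destroyed before the
        # message leaves the organisation.
        import random
        import string

        # Headers whose exact value is not needed for the request to work are treated
        # as potential carriers and overwritten with random noise (illustrative set).
        NOISE_HEADERS = {"x-request-id", "etag-hint"}
        DROP_HEADERS = {"x-custom-padding"}

        def sanitise(headers):
            """headers: list of (name, value) tuples; returns a canonicalised copy."""
            cleaned = []
            for name, value in headers:
                key = name.strip().lower()
                if key in DROP_HEADERS:
                    continue                          # remove the carrier entirely
                if key in NOISE_HEADERS:
                    value = "".join(random.choices(string.ascii_letters, k=16))
                cleaned.append((key.title(), value.strip()))
            # Re-emit headers in a fixed order so ordering cannot carry information.
            return sorted(cleaned)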
Besides steganography, this thesis also addresses the usage of trusted applications to extract information from organizations. Although application execution inside an organization can be restricted, the trusted applications used to perform daily tasks are generally executed without any restrictions. However, the complexity of such applications can be exploited by an insider to transform information in such a way that the deployed information protection solutions are not able to detect the transformed information as sensitive. This thesis presents a method to encrypt sensitive information using trusted applications. Once the information has been encrypted, it is possible to extract it from the organization without raising any alarms in the deployed security systems. This technique has been successfully evaluated against a state-of-the-art commercial data leakage protection solution. Besides the presented evasion technique, several improvements to enhance the security of current DLP solutions are presented; these focus specifically on avoiding information leakage through the usage of trusted applications. The contributions of this dissertation show that current information leakage protection mechanisms do not fully address all the possible attacks a malicious insider can carry out to steal sensitive information. However, it has been shown that it is possible to implement mechanisms to prevent the extraction of sensitive information by malicious insiders. Obviously, preventing such attacks does not mean that all possible threats created by malicious insiders are addressed. It is therefore necessary to continue studying the threats that malicious insiders pose to the confidentiality of information assets and the possible mechanisms to mitigate them.