
    Insight from a Docker Container Introspection

    Large-scale adoption of virtual containers has raised concerns among practitioners and academics about the viability and reliability of data acquisition, given the shrinking window in which relevant data points can be gathered. These concerns prompted the idea that introspection tools, which can acquire data from a system as it is running, can serve both as an early warning system that protects the system and as a capture system that collects data of value from a digital forensic perspective. An exploratory case study was conducted using a Docker engine and Prometheus as the introspection tool. The contribution of this research is two-fold. First, it provides empirical support for the idea that introspection tools can be used to ascertain differences between pristine and infected containers. Second, it lays the groundwork for future research analysing large-scale containerized applications in a virtual cloud.
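    To make the approach concrete, the following is a minimal sketch of comparing a running container's metrics against a recorded pristine baseline through the Prometheus HTTP API. The Prometheus URL, the cAdvisor-style metric name, the container label and the tolerance are illustrative assumptions, not details taken from the study.

    # Compare live container metrics against a recorded "pristine" baseline
    # via the Prometheus HTTP API. Metric names, labels and thresholds are
    # placeholders; adapt them to the exporters actually deployed.
    import json
    import urllib.parse
    import urllib.request

    PROMETHEUS_URL = "http://localhost:9090"  # assumed Prometheus endpoint

    def instant_query(promql: str) -> float:
        """Run an instant query and return the first sample value (0.0 if empty)."""
        url = PROMETHEUS_URL + "/api/v1/query?" + urllib.parse.urlencode({"query": promql})
        with urllib.request.urlopen(url) as resp:
            payload = json.load(resp)
        result = payload.get("data", {}).get("result", [])
        return float(result[0]["value"][1]) if result else 0.0

    def deviates(container: str, baseline_cpu: float, tolerance: float = 0.5) -> bool:
        """Flag the container if its CPU rate strays too far from the pristine baseline."""
        # cAdvisor-style metric, assumed to be scraped by Prometheus.
        promql = f'rate(container_cpu_usage_seconds_total{{name="{container}"}}[5m])'
        live_cpu = instant_query(promql)
        return abs(live_cpu - baseline_cpu) > tolerance * max(baseline_cpu, 1e-9)

    if __name__ == "__main__":
        if deviates("web-frontend", baseline_cpu=0.12):
            print("container metrics deviate from the pristine baseline; investigate")

    The same pattern extends to memory, network and filesystem metrics; in practice one would compare a vector of metrics rather than a single CPU rate.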

    Análise de malware com suporte de hardware (Malware analysis with hardware support)

    Advisors: Paulo Lício de Geus and André Ricardo Abed Grégio. Master's dissertation (mestrado), Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Today's world is driven by the usage of computer systems, which are present in all aspects of everyday life. Therefore, the correct working of these systems is essential to ensure the maintenance of the possibilities brought about by technological developments. However, ensuring the correct working of such systems is not an easy task, as many people attempt to subvert them for their own benefit. The most common kind of subversion against computer systems is the malware attack, which can give an attacker complete control over a machine. The fight against this kind of threat is based on the analysis of collected malicious artifacts, enabling incident response and the development of future countermeasures. However, attackers have specialized in circumventing analysis systems and thus keeping their operations active. For this purpose, they employ a series of anti-analysis techniques, which prevent the direct inspection of their malicious code. Among these, I highlight evasion of the analysis procedure, that is, the use of samples able to detect the presence of an analysis solution and then hide their malicious behavior. Evasive samples have become increasingly common in attacks, and their impact on systems security is considerable, since analyses that were previously automatic now require human supervision to look for evasion signs, which significantly raises the cost of maintaining a protected system. The most common ways of detecting an analysis environment are: (i) detection of injected code, which analysts use to inspect applications; (ii) detection of virtual machines, which analysis environments use for scalability reasons; and (iii) detection of execution side effects, usually caused by the emulators that analysts also rely on. To handle evasive malware, analysts have turned to so-called transparent techniques, that is, those which neither require code injection nor cause execution side effects. One way to achieve transparency in an analysis process is to rely on hardware support. Accordingly, this work covers the application of hardware support to the analysis of evasive threats. In the course of this text, I present an assessment of existing hardware support technologies, including hardware virtual machines, BIOS support, performance monitors and PCI cards. My critical evaluation of these technologies provides a basis for comparing different use cases. In addition, I pinpoint development gaps that currently exist, and I fill one of them by proposing to expand the use of performance monitors for malware monitoring. More specifically, I propose using the BTS (Branch Trace Store) monitor to build a tracer and a debugger. The proposed framework is also able to deal with ROP attacks, one of the techniques most commonly used for vulnerability exploitation. The framework evaluation shows that no side effects are introduced, thus allowing transparent analysis. Making use of this capability, I demonstrate how protected applications can be inspected and how evasion techniques can be identified.
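    The framework described above is native tooling built on the branch records that BTS produces, but the ROP detection idea can be illustrated independently. Below is a minimal Python sketch of one common heuristic over an already collected branch trace: return targets that do not follow a call are counted as ROP indicators. The trace, the set of call-preceded addresses and the threshold are illustrative assumptions, not the actual implementation.

    # Call-preceded-return heuristic over (source, target) pairs, such as the
    # branch records produced by Intel BTS. The trace is assumed to be
    # pre-filtered to RET branches; all addresses below are made up.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Branch:
        source: int  # address the return was taken from
        target: int  # address the return landed on

    def looks_like_rop(trace: list[Branch], call_preceded: set[int],
                       max_violations: int = 2) -> bool:
        """Return True if too many return targets do not follow a CALL.

        In benign code a RET lands just after a CALL instruction; ROP chains
        routinely violate this. `call_preceded` holds the addresses immediately
        following known CALL sites, assumed to come from a prior disassembly pass.
        """
        violations = sum(1 for b in trace if b.target not in call_preceded)
        return violations > max_violations

    # Illustrative use:
    call_preceded = {0x401005, 0x40103a}
    trace = [Branch(0x7ffc1000, 0x401210), Branch(0x7ffc1008, 0x401377),
             Branch(0x7ffc1010, 0x4013c2)]
    print(looks_like_rop(trace, call_preceded))  # True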

    Improving Memory Forensics Through Emulation and Program Analysis

    Memory forensics is an important tool in the hands of investigators. However, determining whether a computer is infected with malicious software is time consuming, even for experts. Tasks that require manual reverse engineering of code or data structures create a significant bottleneck in the investigative workflow. Through the application of emulation software and symbolic execution, these bottlenecks have been greatly lessened, allowing for faster and more thorough investigation. Furthermore, these efforts have lowered the barrier to forensic investigation, so that reasonable conclusions can be drawn even by non-expert investigators. While Volatility previously allowed detection of malicious hooks and injected code only with a prohibitively high false positive rate, the techniques presented in this work achieve a much lower false positive rate automatically and yield more detailed information when manual analysis is required. The second contribution of this work is to improve the reliability of memory forensic tools. As it currently stands, if some component of the operating system or language runtime has been updated, verifying that the changes do not affect the correctness of investigative tools requires a large reverse engineering effort and significant domain knowledge on the part of whoever maintains the tool. Through modifications of the techniques used in the hook analysis, this burden can be lessened or eliminated by comparing the last known functionality to the new functionality. This allows the tool to be updated quickly and effectively, so that investigations can proceed without issue.
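    As a point of reference for the kind of check being improved, here is a minimal sketch of the classic "pointer outside its expected module" scan behind many hook detectors. The module layout and API pointers are illustrative placeholders; the work above refines results like these with emulation and symbolic execution to cut false positives.

    # Flag API pointers that do not resolve into the module expected to own them.
    from dataclasses import dataclass

    @dataclass
    class Module:
        name: str
        base: int
        size: int

        def contains(self, addr: int) -> bool:
            return self.base <= addr < self.base + self.size

    def suspicious_hooks(pointers: dict[str, int], expected: dict[str, str],
                         loaded: list[Module]) -> list[str]:
        """Report pointers landing in no module (e.g., injected code) or the wrong one.

        A pointer in the wrong module may still be a benign redirection, which is
        where emulating the target to find where it finally lands reduces noise.
        """
        findings = []
        for api, addr in pointers.items():
            owner = next((m for m in loaded if m.contains(addr)), None)
            if owner is None or owner.name != expected[api]:
                where = owner.name if owner else "unmapped memory"
                findings.append(f"{api} -> {hex(addr)} ({where})")
        return findings

    # Illustrative use:
    loaded = [Module("ntdll.dll", 0x7ff800000000, 0x200000)]
    print(suspicious_hooks({"NtCreateFile": 0x1a2b3c4d},
                           {"NtCreateFile": "ntdll.dll"}, loaded))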

    Evidence-based Accountability Audits for Cloud Computing

    Cloud computing is known for its on-demand service provisioning and has now become mainstream. Many businesses as well as individuals use cloud services on a daily basis. There is a wide variety of services, ranging from the provision of computing resources to services such as productivity suites and social networks. The nature of these services varies heavily in terms of what kind of information is being outsourced to the cloud provider. Often, that data is sensitive, for instance when PII is shared by an individual. Also, businesses that move (parts of) their processes to the cloud are actively participating in a major paradigm shift from keeping data on-premise to transferring it to a third-party provider. However, many new challenges come along with this trend, and they are closely tied to the loss of control over data. When moving to the cloud, customers give up direct control over the geographical storage location of their data, over who has access to it, and over how it is shared and processed. Because of this loss of control, cloud customers have to trust cloud providers to treat their data in an appropriate and responsible way. Cloud audits can be used to check how data has been processed in the cloud (i.e., by whom and for what purpose) and whether or not this happened in compliance with agreed-upon privacy and data storage, usage and maintenance (i.e., data handling) policies. This way, a cloud customer can regain some of the control given up by moving to the cloud. In this thesis, accountability audits are presented as a way to strengthen trust in cloud computing by providing assurance that data is processed in the cloud according to data handling and privacy policies. In cloud accountability audits, various distributed evidence sources need to be considered. The research presented in this thesis discusses the use of heterogeneous evidence sources on all cloud layers. This way, a complete picture of the actual data handling practices, based on hard facts, can be presented to the cloud consumer. Furthermore, this strengthens the transparency of data processing in the cloud, which can lead to improved trust in cloud providers, if they choose to adopt these mechanisms in order to assure their customers that their data is being handled according to their expectations. The system presented in this thesis enables continuous auditing of a cloud provider's adherence to data handling policies in an automated way that shortens audit intervals and is based on evidence produced by cloud subsystems. An important aspect of many cloud offerings is the combination of multiple distinct cloud services that are offered by independent providers. Data is thereby frequently exchanged between the cloud providers. This also includes trans-border flows of data, where one provider may be required to adhere to stricter data protection requirements than the others. The system presented in this thesis addresses such scenarios by enabling the collection of evidence at the providers and evaluating it during audits. Securing evidence quickly becomes a challenge in the system design when information needed for the audit is deemed sensitive or confidential. This means that securing the evidence at rest as well as in transit is of utmost importance, in order not to introduce a new liability by building an insecure data heap. This research presents the identification of security and privacy protection requirements alongside proposed solutions that enable the development of an architecture for secure, automated, policy-driven and evidence-based accountability audits.
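    The evidence-handling idea can be illustrated with a minimal sketch: an evidence record is sealed with an HMAC so tampering at rest or in transit is detectable, and a simple data handling check (allowed storage regions) is evaluated during the audit. The field names, key handling and policy format are illustrative assumptions, not the thesis' actual schema.

    # Seal evidence records with an HMAC and evaluate a storage-location policy.
    import hashlib
    import hmac
    import json
    import time

    AUDIT_KEY = b"shared-secret-with-auditor"  # assumed to be securely provisioned

    def seal_evidence(record: dict) -> dict:
        """Attach a collection timestamp and an HMAC over the canonical JSON form."""
        record = dict(record, collected_at=time.time())
        digest = hmac.new(AUDIT_KEY, json.dumps(record, sort_keys=True).encode(),
                          hashlib.sha256).hexdigest()
        return {"record": record, "hmac": digest}

    def verify_and_check_location(evidence: dict, allowed_regions: set[str]) -> bool:
        """Verify integrity first, then check the data handling (location) policy."""
        record = evidence["record"]
        expected = hmac.new(AUDIT_KEY, json.dumps(record, sort_keys=True).encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, evidence["hmac"]):
            return False  # evidence was tampered with or the key does not match
        return record.get("storage_region") in allowed_regions

    sealed = seal_evidence({"object_id": "vm-042-disk", "storage_region": "eu-west-1"})
    print(verify_and_check_location(sealed, {"eu-west-1", "eu-central-1"}))  # True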

    Framework for Security Transparency in Cloud Computing

    The migration of sensitive data and applications from the on-premise data centre to a cloud environment increases cyber risks to users, mainly because the cloud environment is managed and maintained by a third party. In particular, the partial surrender of sensitive data and applications to a cloud environment creates numerous concerns related to a lack of security transparency. Security transparency involves the disclosure of information by cloud service providers about the security measures put in place to protect assets and meet the expectations of customers. It establishes trust in the service relationship between cloud service providers and customers, and without evidence of continuous transparency, trust and confidence are affected and are likely to hinder extensive usage of cloud services. Also, insufficient security transparency is considered an added level of risk and increases the difficulty of demonstrating conformance to customer requirements and of ensuring that cloud service providers adequately implement security obligations. The research community has acknowledged the pressing need to address security transparency concerns, and although technical aspects of ensuring security and privacy have been researched widely, the focus on security transparency is still scarce. The relatively few existing studies mostly approach the issue of security transparency from the cloud providers' perspective, while other works have contributed feasible techniques for the comparison and selection of cloud service providers using metrics such as transparency and trustworthiness. However, there is still a shortage of research that focuses on improving security transparency from the cloud users' point of view. In particular, there is still a gap in the literature for work that (i) dissects security transparency, from conceptual knowledge up to implementation, from organizational and technical perspectives, and (ii) supports continuous transparency by enabling the vetting and probing of cloud service providers' conformity to specific customer requirements. The significant growth in moving business to the cloud, due to its scalability and perceived effectiveness, underlines the dire need for research in this area. This thesis presents a framework that comprises the core conceptual elements that constitute security transparency in cloud computing. It contributes to the knowledge domain of security transparency in cloud computing as follows. Firstly, the research analyses the basics of cloud security transparency by exploring the notion and foundational concepts that constitute it. Secondly, it proposes a framework that integrates various concepts from the requirements engineering domain, together with an accompanying process that can be followed to implement the framework. The framework and its process provide an essential set of conceptual ideas, activities and steps that can be followed at an organizational level to attain security transparency, based on the principles of industry standards and best practices. Thirdly, to ensure continuous transparency, the thesis proposes an essential tool that supports the collection and assessment of evidence from cloud providers, including the establishment of remedial actions for redressing deficiencies in cloud provider practices. The tool serves as a supplementary component of the proposed framework that enables continuous inspection of how predefined customer requirements are being satisfied. The thesis also validates the proposed security transparency framework and tool in terms of validity, applicability, adaptability and acceptability using two different case studies. Feedback is collected from stakeholders and analysed using essential criteria such as ease of use, relevance and usability. The results of the analysis illustrate the validity and acceptability of both the framework and the tool in enhancing security transparency in a real-world environment.
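    The continuous-inspection idea can be illustrated with a minimal sketch that vets provider-disclosed controls against predefined customer requirements and emits remedial actions for the gaps. The requirement keys, disclosed values and report format are illustrative placeholders, not the tool's actual schema.

    # Compare customer requirements against provider disclosures and report gaps.
    def assess_transparency(requirements: dict[str, str],
                            disclosures: dict[str, str]) -> dict[str, list[str]]:
        """Return satisfied requirements plus remedial actions for unmet ones."""
        satisfied, remediation = [], []
        for control, expected in requirements.items():
            actual = disclosures.get(control)
            if actual == expected:
                satisfied.append(control)
            else:
                remediation.append(
                    f"{control}: expected '{expected}', provider disclosed '{actual}'")
        return {"satisfied": satisfied, "remediation": remediation}

    report = assess_transparency(
        {"encryption_at_rest": "AES-256", "audit_log_retention": "365d"},
        {"encryption_at_rest": "AES-256", "audit_log_retention": "90d"},
    )
    print(report["remediation"])  # audit_log_retention gap, with remedial detail

    Run on a schedule against each provider's evidence feed, checks like this shorten the interval between audits and make deviations visible soon after they occur.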