
    Análise de malware com suporte de hardware

    Advisors: Paulo Lício de Geus, André Ricardo Abed Grégio. Dissertação (mestrado), Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Today's world is driven by the use of computer systems, which are present in all aspects of everyday life. The correct working of these systems is therefore essential to preserve the possibilities brought about by technological developments. However, ensuring that they work correctly is not an easy task, as many people attempt to subvert systems for their own benefit. The most common kind of subversion against computer systems are malware attacks, which can give an attacker complete control over a machine. The fight against this kind of threat is based on analyzing the collected malicious artifacts, enabling incident response and the development of future countermeasures. Attackers, however, have specialized in circumventing analysis systems and thus keeping their operations active. For this purpose, they employ a series of techniques called anti-analysis, able to prevent the inspection of their malicious code. Among these techniques, I highlight evasion of the analysis procedure, that is, the use of samples able to detect the presence of an analysis solution and then hide their malicious behavior. Evasive samples have become increasingly common, and their impact on systems security is considerable, since analyses that used to be fully automatic now require human supervision to look for evasion signs, which significantly raises the cost of maintaining a protected system. The most common ways of detecting an analysis environment are: (i) detection of injected code, which analysts use to inspect applications; (ii) detection of virtual machines, which are used in analysis environments for scalability reasons; (iii) detection of execution side effects, usually caused by emulators, which analysts also use. To handle evasive malware, analysts have relied on so-called transparent techniques, that is, techniques that neither require code injection nor cause execution side effects. One way to achieve transparency in an analysis process is to rely on hardware support. Accordingly, this work covers the application of hardware support to the analysis of evasive threats. In the course of this text, I present an assessment of existing hardware support technologies, including hardware virtual machines, BIOS support, performance monitors, and PCI cards. My critical evaluation of these technologies provides a basis for comparing different usage scenarios. In addition, I pinpoint development gaps that currently exist, and I fill one of these gaps by proposing to expand the use of performance monitors for malware monitoring. More specifically, I propose using the BTS monitor to build a tracer and a debugger. The proposed framework is also able to deal with ROP attacks, one of the techniques most commonly used for remote vulnerability exploitation. The framework evaluation shows that no side effects are introduced, thus allowing transparent analysis. Making use of this capability, I demonstrate how protected applications can be inspected and how evasion techniques can be identified. (Mestrado em Ciência da Computação.)
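
    The BTS-based tracer described above ultimately consumes hardware-captured branch records. As a rough illustration (not the thesis framework itself), the sketch below scans from/to branch records for returns that do not land just after a call site, a common ROP heuristic; the record layout, addresses, and the precomputed set of call-preceded targets are all assumptions made for the example.

```cpp
// Toy sketch: scan BTS-style branch records (from/to address pairs) for
// returns whose target is not "call-preceded". The record layout and the
// precomputed set of valid return targets are illustrative assumptions.
#include <cstdint>
#include <iostream>
#include <set>
#include <vector>

struct BranchRecord {       // one hardware-captured branch
    uint64_t from;          // address of the branch instruction
    uint64_t to;            // branch target
    bool     is_return;     // true if the branch was a RET
};

// A real tracer would disassemble the bytes just before `addr`; here we
// assume the set of addresses that directly follow a CALL is known.
bool is_call_preceded(uint64_t addr, const std::set<uint64_t>& after_call) {
    return after_call.count(addr) != 0;
}

std::vector<uint64_t> find_rop_suspects(const std::vector<BranchRecord>& trace,
                                        const std::set<uint64_t>& after_call) {
    std::vector<uint64_t> suspects;
    for (const auto& br : trace) {
        // A legitimate RET should land on the instruction right after a CALL.
        if (br.is_return && !is_call_preceded(br.to, after_call))
            suspects.push_back(br.to);
    }
    return suspects;
}

int main() {
    std::set<uint64_t> after_call = {0x401005};   // hypothetical address following a CALL
    std::vector<BranchRecord> trace = {
        {0x401200, 0x401005, true},   // return to a call-preceded address: fine
        {0x401300, 0x402abc, true},   // return into the middle of a gadget: flagged
    };
    for (uint64_t a : find_rop_suspects(trace, after_call))
        std::cout << "suspicious return target: 0x" << std::hex << a << "\n";
}
```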

    A Survey and Evaluation of Android-Based Malware Evasion Techniques and Detection Frameworks

    Android platform security is an active area of research in which malware detection techniques continuously evolve to identify novel malware and to improve the timely and accurate detection of existing malware. Adversaries, in turn, constantly employ innovative techniques to evade or delay detection. Past studies have shown that malware detection systems are susceptible to evasion attacks, where adversaries successfully bypass existing security defenses and deliver malware to the target system without being detected. Building evasion-resistant detection systems remains an open research problem. This paper presents a detailed taxonomy and evaluation of Android-based malware evasion techniques deployed to circumvent malware detection. The study characterizes such evasion techniques into two broad categories, polymorphism and metamorphism, and analyses techniques used to detect stealthy malware based on its unique characteristics. Furthermore, the article presents a qualitative and systematic comparison of evasion detection frameworks and their detection methodologies for Android-based malware. Finally, the survey discusses open questions and potential future directions for continued research in mobile malware detection.
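
    As a toy illustration of why exact signatures struggle against the polymorphic variants discussed above, the sketch below compares two invented instruction sequences directly and after normalizing them to an opcode skeleton (dropping junk NOPs and operands); the mnemonics are made up and do not correspond to real Android bytecode.

```cpp
// Toy illustration of polymorphism: two "variants" differ byte-for-byte,
// but normalizing to an opcode skeleton makes them compare equal.
// The mnemonics below are invented, not real Dalvik/ARM instructions.
#include <iostream>
#include <string>
#include <vector>

std::string normalize(const std::vector<std::string>& insns) {
    std::string skeleton;
    for (const auto& insn : insns) {
        if (insn == "nop") continue;                 // strip junk padding
        skeleton += insn.substr(0, insn.find(' '));  // keep mnemonic, drop operands
        skeleton += ';';
    }
    return skeleton;
}

int main() {
    std::vector<std::string> variant_a = {"mov r0 r1", "nop", "call decrypt", "jmp payload"};
    std::vector<std::string> variant_b = {"nop", "mov r2 r3", "call decrypt", "nop", "jmp payload"};

    std::cout << "raw equal:        " << (variant_a == variant_b) << "\n";                        // 0
    std::cout << "normalized equal: " << (normalize(variant_a) == normalize(variant_b)) << "\n";  // 1
}
```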

    Record and replay based virtual-machine introspection for system security

    Hardware security features need to strike a careful balance between design intrusiveness and completeness of methods. Securing against attacks like Return Oriented Programming (ROP) requires frequent and expensive checks; complete security defenses have been proposed, yet modern systems are still vulnerable to ROP attacks. We provide complete security by decomposing the solution into two stages. The first stage raises alarms based on an imprecise, low-cost hardware detector. The second stage applies complete methods in order to accurately distinguish real attacks from false alarms. This decomposition is enabled by Record and Deterministic Replay: the original execution is recorded and subjected to replay analysis as alarms are raised, so the replay infrastructure can compensate for the occasional hardware imprecision. We demonstrate this approach by applying it to thwart ROP attacks on the Linux kernel. We call the design RnR-ROPSafe. It reuses a simple Return Address Stack (RAS) as the hardware detector. The RAS is slightly modified to prevent corruption due to multithreading and non-procedural returns, improving its performance as a ROP detector. Rare false positives due to underflows are eliminated via replay instead of hardware over-design. RnR-ROPSafe relies on two on-the-fly replayers: an always-on, fast Checkpointing replayer that periodically creates checkpoints, and a detailed-analysis Alarm replayer that is triggered when there is a threat alarm. We find that the former runs at a speed comparable to that of the recorder and can therefore replay all the time, while the latter only has to handle the very few false positives.
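
    A minimal sketch of the shadow return-address-stack idea is given below: calls push the expected return address, returns pop and compare, and a mismatch or underflow merely raises an alarm that, in a design like RnR-ROPSafe, would be vetted by the replay stage rather than trusted blindly. The class and addresses are illustrative assumptions, not the paper's hardware design.

```cpp
// Shadow RAS sketch: push expected return addresses on call, compare on
// return. Alarms are advisory; a second, precise stage decides their fate.
#include <cstdint>
#include <iostream>
#include <stack>

class ShadowRAS {
    std::stack<uint64_t> ras_;
public:
    void on_call(uint64_t return_addr) { ras_.push(return_addr); }

    // Returns true if the return looks suspicious and should raise an alarm.
    bool on_return(uint64_t target) {
        if (ras_.empty())              // underflow: possible false positive
            return true;               // (e.g. calls made before tracking began)
        uint64_t expected = ras_.top();
        ras_.pop();
        return target != expected;     // mismatch: possible ROP pivot
    }
};

int main() {
    ShadowRAS ras;
    ras.on_call(0x401005);                                                     // expect return to 0x401005
    std::cout << "benign return alarm:   " << ras.on_return(0x401005) << "\n"; // 0
    ras.on_call(0x401005);
    std::cout << "hijacked return alarm: " << ras.on_return(0x402abc) << "\n"; // 1
}
```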

    Doctor of Philosophy

    A modern software system is a composition of parts that are themselves highly complex: operating systems, middleware, libraries, servers, and so on. In principle, compositionality of interfaces means that we can understand any given module independently of the internal workings of other parts. In practice, however, abstractions are leaky, and with every generation modern software systems grow in complexity. Traditional ways of understanding failures, explaining anomalous executions, and analyzing performance are reaching their limits in the face of emergent behavior, unrepeatability, cross-component execution, software aging, and adversarial changes to the system at run time. Deterministic systems analysis has the potential to change the way we analyze and debug software systems. Recorded once, the execution of the system becomes an independent artifact that can be analyzed offline. The availability of the complete system state, the guaranteed behavior of re-execution, and the absence of limitations on the run-time complexity of analysis collectively enable deep, iterative, and automatic exploration of the dynamic properties of the system. This work creates a foundation for making deterministic replay a ubiquitous system analysis tool. It defines design and engineering principles for building fast and practical replay machines capable of capturing the complete execution of the entire operating system with an overhead of a few percent, on a realistic workload, and with minimal installation costs. To provide an intuitive interface for constructing replay analysis tools, this work implements a powerful virtual machine introspection layer that lets an analysis algorithm be programmed against the state of the recorded system in the familiar terms of source-level variable and type names. To support performance analysis, the replay engine provides a faithful performance model of the original execution during replay.
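
    The core record/replay idea can be pictured with a deliberately tiny sketch: nondeterministic inputs are logged during recording and fed back verbatim during replay, so re-execution matches the original. Whole-system replay must additionally capture interrupts, DMA, and timing, which the example below (with its invented ReplayLog helper) omits.

```cpp
// Minimal record/replay sketch: log nondeterministic values once, then
// replay them so the analyzed run is deterministic and repeatable.
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <vector>

enum class Mode { Record, Replay };

struct ReplayLog {
    Mode mode;
    std::vector<int> values;        // logged nondeterministic values
    size_t cursor = 0;

    int nondet_input() {
        if (mode == Mode::Record) {
            int v = std::rand();    // stand-in for any nondeterministic source
            values.push_back(v);
            return v;
        }
        return values.at(cursor++); // replay: return exactly what was recorded
    }
};

int run(ReplayLog& log) {           // the "system" under analysis
    int sum = 0;
    for (int i = 0; i < 4; ++i) sum += log.nondet_input() % 100;
    return sum;
}

int main() {
    std::srand(static_cast<unsigned>(std::time(nullptr)));
    ReplayLog log{Mode::Record, {}};
    int original = run(log);        // record once...

    log.mode = Mode::Replay;
    log.cursor = 0;
    int replayed = run(log);        // ...then analyze offline as often as needed
    std::cout << original << " == " << replayed << "\n";
}
```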

    Scalable Emulation of Heterogeneous Systems

    The breakdown of Dennard's transistor scaling has driven computing systems toward application-specific accelerators, which can provide orders-of-magnitude improvements in performance and energy efficiency over general-purpose processors. To enable the radical departures from conventional approaches that heterogeneous systems entail, research infrastructure must be able to model processors, memory, and accelerators, as well as system-level changes, such as operating system or instruction set architecture (ISA) innovations, that might be needed to realize the accelerators' potential. Unfortunately, existing simulation tools that can support such system-level research are limited by the lack of fast, scalable machine emulators to drive execution. To fill this need, in this dissertation we first present a novel machine emulator design based on dynamic binary translation that makes the following improvements over the state of the art: it scales on multicore hosts while remaining memory efficient, correctly handles cross-ISA differences in atomic instruction semantics, leverages the host floating point (FP) unit to speed up FP emulation without sacrificing correctness, and can be efficiently instrumented to, among other possible uses, drive the execution of a full-system, cross-ISA simulator with support for accelerators. We then demonstrate the utility of machine emulation for studying heterogeneous systems by leveraging it to make two additional contributions. First, we quantify the trade-offs in different coupling models for on-chip accelerators. Second, we present a technique to reuse the private memories of on-chip accelerators when they are otherwise inactive to expand the system's last-level cache, thereby reducing the opportunity cost of the accelerators' integration.
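
    The dispatch loop at the heart of a dynamic binary translator can be sketched as follows: guest code is translated one block at a time, cached by guest program counter, and re-entered from the cache on later visits. In the sketch a "translated block" is just a C++ closure over a toy guest, whereas a real emulator would emit host machine code; the addresses and the translate() helper are assumptions for illustration.

```cpp
// DBT dispatch-loop sketch: translate on cache miss, reuse on cache hit.
#include <cstdint>
#include <functional>
#include <iostream>
#include <unordered_map>

using GuestAddr = uint64_t;
using TranslatedBlock = std::function<GuestAddr()>;   // runs the block, returns next PC

// Stand-in translator: a real one would decode guest instructions and emit
// host code. Our toy guest just counts a register down to zero.
TranslatedBlock translate(GuestAddr pc, int64_t& reg) {
    std::cout << "translating block at 0x" << std::hex << pc << std::dec << "\n";
    return [pc, &reg]() -> GuestAddr {
        if (pc == 0x1000) { reg -= 1; return reg > 0 ? 0x1000 : 0x2000; }  // loop block
        return 0;                                                          // 0x2000: halt
    };
}

int main() {
    std::unordered_map<GuestAddr, TranslatedBlock> cache;   // translation block cache
    int64_t reg = 3;
    GuestAddr pc = 0x1000;
    while (pc != 0) {
        auto it = cache.find(pc);
        if (it == cache.end())                               // miss: translate once
            it = cache.emplace(pc, translate(pc, reg)).first;
        pc = it->second();                                   // hit: run cached block
    }
    std::cout << "guest halted, reg = " << reg << "\n";
}
```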

    Standard-konformes Snapshotting für SystemC Virtuelle Plattformen

    The steady increase in complexity of high-end embedded systems goes along with an increasingly complex design process. We are currently still in a transition phase from Hardware Description Language (HDL) based design towards virtual-platform-based design of embedded systems. As design complexity rises faster than developer productivity, a gap forms. Restoring productivity while managing the increased design complexity can be achieved by focusing on the development of new tools and design methodologies. In most application areas, high-level modelling languages such as SystemC are used in the early design phases. In modern software development, Continuous Integration (CI) is used to automatically test whether a submitted piece of code breaks functionality. Applying the CI concept to embedded system design and testing requires fast build and test execution times from the virtual platform framework, and for this use case the ability to save a specific state of a virtual platform becomes necessary. Saving and restoring specific states of a simulation requires the ability to serialize all data structures within the simulation models. Improving the frameworks and establishing better methods will only help to narrow the design gap if these changes are introduced with the needs of the engineers and developers in mind; ultimately, it is their productivity that shall be improved. The ability to save the state of a virtual platform enables developers to run longer test campaigns that can even contain randomized test stimuli, and if the saved states are modifiable the developers can inject faulty states into the simulation models. This work contributes an extension to the SoCRocket virtual platform framework to enable snapshotting. The snapshotting extension can be considered a reference implementation, as its use of current SystemC/TLM standards makes it compatible with other frameworks. Furthermore, integrating the UVM SystemC library into the framework enables test-driven development and fast validation of SystemC/TLM models using snapshots. These extensions narrow the design gap by helping designers, testers, and developers work more efficiently.
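
    The serialization requirement described above can be pictured with a plain C++ sketch (deliberately not the SystemC/TLM or SoCRocket API): each model exposes save and restore of its internal state, which is what makes snapshotting, rewinding, and fault injection possible. The TimerModel and its byte-wise serialization are illustrative assumptions only.

```cpp
// Snapshot sketch: a model serializes its state so a checkpoint can be
// saved, restored, or modified before resuming the simulation.
#include <cstdint>
#include <cstring>
#include <iostream>
#include <vector>

using Snapshot = std::vector<uint8_t>;

struct TimerModel {                 // stand-in for a simulation model's state
    uint64_t counter = 0;
    bool     irq_pending = false;

    void tick() { if (++counter % 100 == 0) irq_pending = true; }

    void save(Snapshot& s) const {  // append raw state; real code would tag and version fields
        const uint8_t* p = reinterpret_cast<const uint8_t*>(this);
        s.insert(s.end(), p, p + sizeof(*this));
    }
    void restore(const Snapshot& s, size_t offset) {
        std::memcpy(this, s.data() + offset, sizeof(*this));  // OK: trivially copyable
    }
};

int main() {
    TimerModel timer;
    for (int i = 0; i < 250; ++i) timer.tick();

    Snapshot snap;
    timer.save(snap);                       // checkpoint mid-simulation

    for (int i = 0; i < 1000; ++i) timer.tick();
    timer.restore(snap, 0);                 // rewind to the checkpoint
    std::cout << "counter=" << timer.counter
              << " irq=" << timer.irq_pending << "\n";   // counter=250 irq=1
}
```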

    Virtual Machine Flow Analysis Using Host Kernel Tracing

    Cloud computing has gained popularity as it offers services at lower cost, with a pay-per-use model, unlimited storage backed by distributed storage systems, and flexible computational power with direct hardware access. Virtualization technology allows a physical server to be shared among several isolated virtualized environments by deploying a hypervisor layer on top of the hardware. As a result, each isolated environment can run its own OS and applications without mutual interference. With the growth of cloud usage and of virtualization, performance understanding and debugging are becoming a serious challenge for cloud providers, since good QoS and high availability are expected to be salient features of cloud computing. The possible reasons behind performance degradation in a VM are numerous: (a) heavy load of an application inside the VM; (b) contention with other applications inside the same VM; (c) contention with other co-located VMs; (d) cloud platform failures. The first two cases can be managed by the VM owner, while the other cases need to be solved by the infrastructure provider. One key requirement for such a complex environment, with different virtualization layers, is a precise, low-overhead analysis tool. In this thesis, we present a host-based, precise method to recover the execution flow of virtualized environments, regardless of the level of nested virtualization. To avoid security issues, ease deployment, and reduce execution overhead, our method limits its data collection to the hypervisor level. To analyse the behavior of each VM, we use a lightweight tracing tool called the Linux Trace Toolkit Next Generation (LTTng) [1]. LTTng is optimised for high-throughput tracing with low overhead, thanks to the lock-free synchronization mechanisms used to update the trace buffer content.
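
    The idea of recovering VM activity purely from host-level tracing can be illustrated with a toy sketch: given scheduler-switch events as a tracer such as LTTng would record them, CPU time is attributed to each VM by recognizing its vCPU threads. The event layout, thread IDs, and vcpu_owner mapping below are invented for the example and do not use the LTTng API.

```cpp
// Attribute CPU time to VMs from host scheduler events only.
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct SchedEvent {        // "sched_switch"-style event captured on the host
    uint64_t timestamp_ns;
    int      cpu;
    int      next_tid;     // thread scheduled in at this instant
};

int main() {
    std::map<int, std::string> vcpu_owner = {  // host TID -> VM name (from hypervisor metadata)
        {1201, "vmA"}, {1202, "vmA"}, {1305, "vmB"},
    };

    std::vector<SchedEvent> trace = {          // one host CPU, toy timeline
        {1000, 0, 1201}, {4000, 0, 1305}, {9000, 0, 77 /* host task */}, {9500, 0, 1202},
        {12000, 0, 0},
    };

    std::map<std::string, uint64_t> vm_ns;     // accumulated per-VM CPU time
    for (size_t i = 0; i + 1 < trace.size(); ++i) {
        auto it = vcpu_owner.find(trace[i].next_tid);
        if (it != vcpu_owner.end())            // this interval belongs to a vCPU thread
            vm_ns[it->second] += trace[i + 1].timestamp_ns - trace[i].timestamp_ns;
    }
    for (const auto& [vm, ns] : vm_ns)
        std::cout << vm << " ran for " << ns << " ns on cpu0\n";   // vmA: 5500, vmB: 5000
}
```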

    Continuous monitoring methods to achieve resiliency for virtual machines

    This dissertation describes monitoring methods to achieve both security and reliability in virtualized computer systems. Our key contribution is showing how we can perform continuous monitoring and leverage information across different layers of a virtualized computer system to detect malicious attacks and accidental failures. For monitoring software running inside a virtual machine, we introduce HyperTap and Hprobes, which are out-of-VM monitoring frameworks that facilitate detection of security and reliability incidents occurring inside a VM. For monitoring the hypervisor, we introduce hShield, a Control-Flow Integrity (CFI) enforcement method to detect VM-escape attacks. HyperTap, Hprobes, and hShield create a complete chain-of-trust for the entire virtualization software stack.
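
    As a rough sketch of the kind of Control-Flow Integrity check mentioned above (an illustration, not hShield's actual mechanism), the example below validates indirect-transfer targets against a set of entry points that would normally be extracted from the program's control-flow graph; the addresses and the CfiPolicy class are assumptions.

```cpp
// Coarse-grained CFI sketch: indirect control transfers may only land on
// targets known from the control-flow graph; anything else is rejected.
#include <cstdint>
#include <iostream>
#include <unordered_set>

class CfiPolicy {
    std::unordered_set<uint64_t> valid_targets_;   // extracted offline from the binary's CFG
public:
    explicit CfiPolicy(std::unordered_set<uint64_t> targets)
        : valid_targets_(std::move(targets)) {}

    // Conceptually invoked before every indirect call/jump being protected.
    bool check(uint64_t target) const { return valid_targets_.count(target) != 0; }
};

int main() {
    CfiPolicy policy({0xffffffff81000000ULL, 0xffffffff81000400ULL});

    uint64_t legit  = 0xffffffff81000400ULL;   // known handler entry point
    uint64_t hijack = 0xffffffff81deadbeULL;   // attacker-chosen gadget address

    std::cout << "legit call allowed:  " << policy.check(legit)  << "\n";  // 1
    std::cout << "hijack call allowed: " << policy.check(hijack) << "\n";  // 0
}
```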