
    TKRD: trusted kernel rootkit detection for cybersecurity of VMs based on machine learning and memory forensic analysis

    The spread of cloud computing has made the virtual machine (VM) an increasingly attractive target for malware attacks, such as those by kernel rootkits. Memory forensics, which looks for malicious traces in memory, is a useful approach for malware detection. In this paper, we propose TKRD, a novel method that automatically detects kernel rootkits in VMs in a private cloud by combining VM memory forensic analysis with machine learning. Malicious features are extracted from memory dumps of the VM through memory forensic analysis. Based on these features, various machine learning classifiers are trained, including decision trees, rule-based classifiers, Bayesian classifiers, and support vector machines (SVM). The experimental results show that the Random Forest classifier performs best: it effectively detects unknown kernel rootkits with an accuracy of 0.986 and an AUC (area under the receiver operating characteristic curve) of 0.998.
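
    As an illustration of the classifier-comparison step described above, the sketch below trains and scores a Random Forest with scikit-learn. The feature matrix is synthetic stand-in data; the paper's actual memory-forensic feature set is not reproduced here, and the feature examples in the comments are hypothetical.

```python
# Sketch of a TKRD-style classifier evaluation using scikit-learn.
# The feature matrix is synthetic; in the paper, features come from
# memory-forensic analysis of VM memory dumps (hypothetical stand-ins below).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_features = 1000, 30          # e.g., counts of hidden processes, hooked syscalls, ...
X = rng.random((n_samples, n_features))   # stand-in for memory-forensic features
y = rng.integers(0, 2, n_samples)         # 1 = rootkit-infected dump, 0 = clean

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print("Accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```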

    TxT: Real-time Transaction Encapsulation for Ethereum Smart Contracts

    Ethereum is a permissionless blockchain ecosystem that supports execution of smart contracts, the key enablers of decentralized finance (DeFi) and non-fungible tokens (NFT). However, the expressiveness of Ethereum smart contracts is a double-edged sword: while it enables blockchain programmability, it also introduces security vulnerabilities, i.e., exploitable discrepancies between the expected and actual behaviors of the contract code. To address these discrepancies and increase vulnerability coverage, we propose a new smart contract security testing approach called transaction encapsulation. The core idea lies in the local execution of transactions on a fully-synchronized yet isolated Ethereum node, which creates a preview of the outcomes of transaction sequences on the current state of the blockchain. This approach poses a critical technical challenge -- the well-known time-of-check/time-of-use (TOCTOU) problem, i.e., assuring that the final transactions will exhibit the same execution paths as the encapsulated test transactions. In this work, we determine the exact conditions for guaranteed execution-path replicability of the tested transactions, and implement a transaction testing tool, TxT, which reveals the actual outcomes of Ethereum transactions. To ensure the correctness of testing, TxT deterministically verifies whether a given sequence of transactions yields an identical execution path on the current state of the blockchain. We analyze over 1.3 billion Ethereum transactions and determine that 96.5% of them can be verified by TxT. We further show that TxT successfully reveals the suspicious behaviors associated with 31 out of 37 vulnerabilities (83.8% coverage) in the smart contract weakness classification (SWC) registry. In comparison, the vulnerability coverage of all existing defense approaches combined reaches only 40.5%. Comment: To appear in IEEE Transactions on Information Forensics and Security.
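
    The core idea of previewing a transaction's outcome on the current chain state can be approximated with a standard JSON-RPC call, as in the sketch below using web3.py's eth_call against a synced node. The endpoint, addresses, and calldata are placeholders, and this is not TxT itself, which additionally verifies execution-path replicability.

```python
# Rough approximation of transaction encapsulation: execute a transaction
# locally against the node's view of the latest state, without broadcasting it.
# NOT the TxT tool; the endpoint and addresses below are placeholders.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # fully synced local node

tx = {
    "from": Web3.to_checksum_address("0x" + "11" * 20),  # hypothetical sender
    "to":   Web3.to_checksum_address("0x" + "22" * 20),  # hypothetical contract
    "data": "0x",                                        # calldata of the tested transaction
    "value": 0,
}

# eth_call runs the transaction in the EVM on the current state and returns
# its output; a revert raises an exception, previewing the real outcome.
try:
    result = w3.eth.call(tx, "latest")
    print("would succeed, returned:", result.hex())
except Exception as exc:
    print("would revert:", exc)
```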

    A Roadmap for Benchmarking in Wireless Networks

    Experimentation is evolving into a viable and realistic performance-analysis approach in wireless networking research. Realism is provided by deploying real software (network stack, drivers, OS) and hardware (wireless cards, network equipment, etc.) in the actual physical environment. However, the experimenter is likely to be dogged by tricky issues caused by calibration problems and bugs in the software/hardware tools. This, coupled with the difficulty of dealing with a multitude of hardware/software parameters and the unpredictable characteristics of the wireless channel in the wild, poses significant challenges to experiment repeatability and reproducibility. Furthermore, experimentation has been impeded by the lack of standard definitions, measurement methodologies, and full-disclosure reports, which are particularly important for understanding the suitability of protocols and services to emerging wireless application scenarios. The lack of tools to manage large numbers of experiment runs, deal with huge amounts of measurement data, and facilitate peer-verifiable analysis further complicates the process. In this paper, we present a holistic view of benchmarking in wireless networks and formulate a procedure, complemented by a step-by-step case study, to help drive future efforts on benchmarking wireless network applications and protocols.
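
    A small example of the kind of full-disclosure repeatability statistic such a methodology calls for: the mean and a 95% confidence interval of a metric over repeated experiment runs. The throughput values below are invented placeholders for real testbed measurements.

```python
# Sketch of a repeatability check across wireless experiment runs: report the
# mean and a 95% confidence interval for a measured metric (here, throughput).
import math
import statistics

throughput_mbps = [54.2, 53.8, 55.1, 52.9, 54.6, 53.5, 54.0, 54.9]  # one value per run

n = len(throughput_mbps)
mean = statistics.mean(throughput_mbps)
stdev = statistics.stdev(throughput_mbps)
ci95 = 1.96 * stdev / math.sqrt(n)        # normal approximation, fine for a demo

print(f"{mean:.2f} Mbit/s +/- {ci95:.2f} (95% CI over {n} runs)")
```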

    Whole-System Worst-Case Energy-Consumption Analysis for Energy-Constrained Real-Time Systems

    Although internal devices (e.g., memory, timers) and external devices (e.g., transceivers, sensors) contribute significantly to the energy consumption of an embedded real-time system, their impact on the worst-case response energy consumption (WCRE) of tasks is usually not adequately taken into account. Most WCRE analysis techniques, for example, focus only on the processor and therefore do not consider the energy consumption of other hardware units. Apart from that, the typical approach for dealing with devices is to assume that all of them are always activated, which leads to large WCRE overestimations in the general case, where a system switches off currently unneeded devices in order to minimize energy consumption. In this paper, we present SysWCEC, an approach that addresses these problems by enabling static WCRE analysis for entire real-time systems, including internal as well as external devices. For this purpose, SysWCEC introduces a novel abstraction, the power-state-transition graph, which contains information about the worst-case energy consumption of all possible execution paths. To construct the graph, SysWCEC decomposes the analyzed real-time system into blocks during which the set of active devices does not change, and is consequently able to precisely handle devices being dynamically activated or deactivated.
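
    To make the power-state-transition abstraction concrete, the toy sketch below models blocks with a fixed set of active devices as graph nodes and takes the maximum-cost path through the graph as the worst-case energy bound. The node names and energy figures are invented for illustration, not SysWCEC's.

```python
# Toy power-state-transition graph: nodes are system blocks during which the
# set of active devices is fixed; edge weights are worst-case energy costs
# (invented numbers). The WCRE bound is the maximum-cost entry-to-exit path.
import functools

graph = {
    "entry":      [("cpu_only", 5), ("cpu_radio", 9)],
    "cpu_only":   [("cpu_sensor", 4), ("exit", 2)],
    "cpu_radio":  [("cpu_sensor", 7), ("exit", 3)],
    "cpu_sensor": [("exit", 6)],
    "exit":       [],
}

@functools.lru_cache(maxsize=None)
def worst_case_energy(node: str) -> int:
    """Maximum energy from `node` to `exit` (the graph is a DAG, so this terminates)."""
    if not graph[node]:
        return 0
    return max(cost + worst_case_energy(succ) for succ, cost in graph[node])

print("WCRE bound:", worst_case_energy("entry"))  # -> 9 + 7 + 6 = 22
```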

    Interconnected Services for Time-Series Data Management in Smart Manufacturing Scenarios

    The rise of Smart Manufacturing, together with strategic initiatives carried out worldwide, has promoted its adoption among manufacturers, who are increasingly interested in data-driven applications for purposes such as product quality control and predictive maintenance of equipment. However, the adoption of these approaches faces diverse technological challenges regarding the data-related technologies that support the manufacturing data life-cycle. The main contributions of this dissertation address two specific challenges in the early stages of the manufacturing data life-cycle: optimized storage of the massive amounts of data captured during production processes, and efficient pre-processing of those data. The first contribution is the design and development of a system that facilitates the pre-processing of captured time-series data through an automated approach that helps select the most adequate pre-processing techniques for each data type. The second contribution is the design and development of a three-level hierarchical architecture for time-series data storage in cloud environments that helps manage and reduce the required data storage resources (and consequently their associated costs). Moreover, with regard to the later stages, a third contribution is proposed that leverages advanced data analytics to build an alarm prediction system, enabling predictive maintenance of equipment by anticipating the activation of the different types of alarms that can occur in a real Smart Manufacturing scenario.
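
    A minimal sketch of one way a three-level hierarchical time-series store can place data by age; the tier names and age thresholds are illustrative assumptions, not the architecture proposed in the dissertation.

```python
# Minimal sketch of three-level time-series tiering by age: recent data stays
# in a fast "hot" store, older data moves to cheaper "warm" and "cold" tiers.
# Tier names and thresholds are illustrative assumptions.
from datetime import datetime, timedelta, timezone

def tier_for(sample_time: datetime, now: datetime) -> str:
    age = now - sample_time
    if age <= timedelta(days=7):
        return "hot"     # e.g., in-memory / SSD time-series database
    if age <= timedelta(days=90):
        return "warm"    # e.g., compressed columnar storage
    return "cold"        # e.g., cheap object storage

now = datetime.now(timezone.utc)
for days_old in (1, 30, 365):
    t = now - timedelta(days=days_old)
    print(f"{days_old:>3} days old -> {tier_for(t, now)}")
```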

    Applications and Experiences of Quality Control

    The rich palette of topics set out in this book provides a sufficiently broad overview of developments in the field of quality control. By providing detailed information on various aspects of quality control, this book can serve as a basis for starting interdisciplinary cooperation, which has increasingly become an integral part of scientific and applied research.

    Análise de malware com suporte de hardware

    Advisors: Paulo Lício de Geus, André Ricardo Abed Grégio. Dissertation (Master's), Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Today's world is driven by the use of computer systems, which are present in all aspects of everyday life. Their correct working is therefore essential to preserve the possibilities brought about by technological development. Ensuring this is not an easy task, however, as malicious actors constantly attempt to subvert systems for their own benefit or that of third parties. The most common kind of subversion is the malware attack, which can give an attacker complete control of a machine. The fight against this threat is based on analyzing collected malicious artifacts, enabling incident response and the development of future countermeasures. Attackers, however, have specialized in circumventing analysis systems in order to keep their operations active. For this purpose, they employ a series of so-called anti-analysis techniques that prevent direct inspection of their malicious code. Chief among these is evasion of the analysis process: samples detect the presence of an analysis system and then hide their malicious behavior. Evasive samples are increasingly used in attacks, and their impact on systems security is considerable, since analyses that were once fully automatic now require supervision by human analysts looking for signs of evasion, which significantly raises the cost of keeping a system protected. The most common ways of detecting an analysis environment are detection of: (i) injected code, used by analysts to inspect applications; (ii) virtual machines, used in analysis environments for scalability reasons; and (iii) execution side effects, usually caused by emulators, which analysts also use. To handle evasive malware, analysts rely on so-called transparent techniques, i.e., techniques that require no code injection and cause no execution side effects. One way to achieve transparency in an analysis process is to rely on hardware support; accordingly, this work covers the application of hardware support to the analysis of evasive threats. I present an assessment of existing hardware-support technologies, including hardware virtual machines, BIOS support, performance monitors, and PCI cards; this critical evaluation provides a basis for comparing different use cases, and I pinpoint development gaps that currently exist. I fill one of these gaps by proposing to expand the use of performance monitors for malware-monitoring purposes; more specifically, I propose using the BTS monitor to build a tracer and a debugger. The proposed framework is also capable of dealing with ROP attacks, one of the most commonly used techniques for vulnerability exploitation. The framework evaluation shows that no side effects are introduced, thus allowing transparent analysis. Making use of this capability, I demonstrate how protected applications can be inspected and how evasion techniques can be identified. (Master's degree in Computer Science; funded by CAPES)
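
    The ROP-detection idea mentioned above can be illustrated offline with a common heuristic over a branch trace: a return whose target is not an address immediately following a call is suspicious. The branch records and the simplified call length below are made up; a real tracer would decode records from the BTS buffer.

```python
# Offline illustration of ROP detection over a BTS-style branch trace:
# a RET whose target was never set up by a CALL (i.e., is not a legitimate
# return site) is flagged as a potential ROP gadget chain.
branch_trace = [
    ("call", 0x401000, 0x402000),  # (kind, source, target)
    ("ret",  0x402010, 0x401005),  # returns just after the call site: OK
    ("ret",  0x403000, 0x404242),  # no matching call site: suspicious
]

# Legitimate return targets are the instructions following each CALL source.
CALL_LEN = 5  # typical x86 near-call length (a simplification for this sketch)
return_sites = {src + CALL_LEN for kind, src, _ in branch_trace if kind == "call"}

for kind, src, dst in branch_trace:
    if kind == "ret" and dst not in return_sites:
        print(f"possible ROP: ret at {src:#x} to non-return-site {dst:#x}")
```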

    Assessing performance overhead of Virtual Machine Introspection and its suitability for malware analysis

    Virtual Machine Introspection (VMI) is the process of introspecting a guest VM's memory and reconstructing the state of the guest operating system. Due to its isolation, stealth, and full visibility of the monitored target, VMI lends itself well to security monitoring and malware analysis. The topics covered in this thesis include operating system and hypervisor concepts, the semantic gap issue, VMI techniques and implementations, applying VMI to malware analysis, and analysis of the performance overhead. The behaviour and magnitude of the performance overhead associated with virtual machine introspection is analysed with five different empirical test cases. The intention of the tests is to estimate the cost of a single trapped event, determine the feasibility of various monitoring sensors from a usability and stealth perspective, and analyse the behaviour of the performance overhead. Various VMI-based tools were considered for the measurements, but DRAKVUF was chosen as the most advanced tool available. The test cases proceed as follows: the chosen load is first executed without any monitoring to determine the baseline execution time; then a DRAKVUF monitoring plugin is turned on and the load is executed again. The difference between the two execution times is the time spent executing monitoring code, and the execution overhead is determined by dividing this difference by the baseline execution time. The disk consumption and execution overhead of a sensor that captures removed files are small enough for it to be deployed as a monitoring solution. The performance overhead of the system-call monitoring sensor depends on the number of issued system calls: loads that issue large numbers of system calls cause high performance overhead, which can be limited by monitoring only a subset of all system calls.
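
    The overhead computation described above reduces to a one-line formula; the sketch below spells it out with placeholder timings (not actual DRAKVUF measurements).

```python
# Execution-overhead computation exactly as described above:
# overhead = (monitored_time - baseline_time) / baseline_time.
# The two timings below are placeholder values, not DRAKVUF measurements.
baseline_s  = 12.4   # load executed with no monitoring
monitored_s = 15.1   # same load with a DRAKVUF monitoring plugin enabled

overhead = (monitored_s - baseline_s) / baseline_s
print(f"monitoring overhead: {overhead:.1%}")   # -> 21.8%
```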