
    Transparent and Precise Malware Analysis Using Virtualization: From Theory to Practice

    Dynamic analysis is an important technique used in malware analysis and is complementary to static analysis. Thus far, virtualization has been widely adopted for building fine-grained dynamic analysis tools, and this trend is expected to continue. Unlike user/kernel-space malware analysis platforms, which essentially co-exist with malware, virtualization-based platforms benefit from isolation and fine-grained instrumentation support. Isolation makes it more difficult for malware samples to disrupt analysis, and fine-grained instrumentation provides analysts with low-level details, such as those at the machine instruction level. This in turn supports the development of advanced analysis tools such as dynamic taint analysis and symbolic execution for automatic path exploration.

    The major disadvantage of virtualization-based malware analysis is the loss of semantic information, also known as the semantic gap problem. To put it differently, since analysis takes place at the virtual machine monitor, where only the raw system state (e.g., CPU and memory) is visible, higher-level constructs such as processes and files must be reconstructed from the low-level information. The collection of techniques used to bridge semantic gaps is known as Virtual Machine Introspection. Virtualization-based analysis platforms can be further separated into emulation and hardware virtualization. Emulators have the advantages of flexibility for analysis tool development and efficiency for fine-grained analysis; however, they suffer from the transparency problem: malware can employ methods to determine whether it is executing in an emulated environment rather than on real hardware, and cease operations to disrupt analysis if the machine is emulated. In brief, emulation-based dynamic analysis has advantages over user/kernel-space and hardware virtualization-based techniques, but it suffers from semantic gap and transparency problems. These problems have been exacerbated by recent discoveries of anti-emulation malware that detects emulators and of Android malware that spans two semantic gaps, Java and native. It is also foreseeable that malware authors will respond similarly to taint analysis: once taint analysis becomes widely used to understand how malware operates, authors will create new malware that attacks the imprecisions in taint analysis implementations and induces false positives and false negatives in an effort to frustrate analysts.

    This dissertation addresses these problems by presenting concepts, methods and techniques that can be used to transparently and precisely analyze both desktop and mobile malware using virtualization. This is achieved in three parts. First, precise heterogeneous record and replay is presented as a means to help emulators benefit from the transparency characteristics of hardware virtualization. This technique is implemented in a tool called V2E that uses KVM for recording and TEMU for replaying and analysis. It was successfully used to analyze real-world anti-emulation malware that evaded analysis using TEMU alone. Second, the design of an emulation-based Android malware analysis platform is presented, which uses virtual machine introspection to bridge both the Java and native level semantic gaps and seamlessly binds the two views together into a single view. The core introspection and instrumentation techniques were implemented in a new analysis platform called DroidScope that is based on the Android emulator.
    It was successfully used to analyze two real-world Android malware samples that have cooperating Java and native level components. Taint analysis was also used to study their information exfiltration behaviors. Third, formal methods are presented for studying the sources of false positives and false negatives in dynamic taint analysis designs and for verifying the correctness of manually defined taint propagation rules. These definitions and methods were successfully used to analyze and compare previously published taint analysis platforms in terms of false positives and false negatives.
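
    To make the third contribution concrete, the following is a minimal Python sketch (with invented register/memory names, not the dissertation's formal model) of byte-level taint propagation: shadow state maps locations to taint labels, per-instruction rules decide how labels flow, and an imprecise rule for a zeroing idiom such as XOR illustrates how false positives arise.

        # Minimal sketch of dynamic taint propagation (illustrative only).
        shadow = {}  # location (register/address name) -> set of taint labels

        def taint(loc, label):
            shadow.setdefault(loc, set()).add(label)

        def propagate_mov(dst, src):
            # MOV dst, src: destination taint becomes a copy of the source taint.
            shadow[dst] = set(shadow.get(src, set()))

        def propagate_add(dst, src):
            # ADD dst, src: destination taint is the union of both operands.
            shadow[dst] = shadow.get(dst, set()) | shadow.get(src, set())

        def propagate_xor(dst, src):
            # XOR dst, dst always yields zero, so a precise rule clears taint.
            # An imprecise rule that unions the operands instead would keep
            # stale labels and report a false positive here.
            if dst == src:
                shadow[dst] = set()
            else:
                shadow[dst] = shadow.get(dst, set()) | shadow.get(src, set())

        taint("mem:0x1000", "network_input")
        propagate_mov("eax", "mem:0x1000")   # eax now carries 'network_input'
        propagate_xor("eax", "eax")          # precise rule: taint cleared
        print(shadow["eax"])                 # -> set()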

    Vulnerability detection in device drivers

    Doctoral thesis, Informatics (Computer Science), Universidade de Lisboa, Faculdade de Ciências, 2017. The constant evolution of electronics means that new equipment/devices are regularly made available on the market, which has led to a situation where common operating systems (OS) include many device drivers (DD) produced by very diverse manufacturers. Experience has shown that the development of DD is error prone, as a majority of OS crashes can be attributed to flaws in their implementation. This thesis addresses the challenge of designing methodologies and tools to facilitate the detection of flaws in DD, contributing to a decrease in the errors in this kind of software, in their impact on OS stability, and in the security threats they cause. This is especially relevant because it can help developers to improve the quality of drivers during their implementation or when they are integrated into a system. The thesis work started by assessing how DD flaws can impact the correct execution of the Windows OS. The employed approach used a statistical analysis to obtain the list of kernel functions most used by DD, and then automatically generated synthetic drivers that introduce parameter errors when calling a kernel function, thus mimicking a faulty interaction. The experimental results showed that most targeted functions were ineffective at defending against incorrect parameters. A reasonable number of crashes and a small number of hangs were observed, suggesting a poor error containment capability of these OS functions. We then produced an architecture and a tool that support the automatic injection of network attacks on mobile equipment (e.g., phones), with the objective of finding security flaws (or vulnerabilities) in Wi-Fi drivers. These DD were selected because they are easily accessible to an external adversary, who simply needs to create malicious traffic to exploit them, and therefore flaws in their implementation can have an important impact. Experiments with the tool uncovered a previously unknown vulnerability that causes OS hangs when a specific value is assigned to the TIM element in the Beacon frame. The experiments also revealed a potential implementation problem in the TCP/IP stack, related to the use of disassociation frames when the target device was associated and authenticated with a Wi-Fi access point. Next, we developed a tool capable of registering and instrumenting the interactions between a DD and the OS. The solution used a wrapper DD around the binary of the driver under test, enabling full control over the function calls and parameters involved in the OS-DD interface. This tool can support very diverse testing operations, including logging system activity and reverse engineering the driver's behaviour. Some experiments were performed with the tool, recording insights into the behaviour of the interactions between the DD and the OS, including parameter values and return values. Results also showed the ability to identify bugs in drivers by executing tests based on the knowledge obtained from the driver's dynamics. Our final contribution is a methodology and framework for the discovery of errors and vulnerabilities in Windows DD by resorting to the execution of the drivers in a fully emulated environment. This approach is capable of testing drivers without requiring access to the associated hardware or the DD source code, and has granular control over each machine instruction.
    Experiments performed with off-the-shelf DD confirmed a high dependency on the correctness of the parameters passed by the OS, and identified the precise location and cause of memory leaks, as well as the existence of dormant and vulnerable code.
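
    The fault-injection idea behind the synthetic drivers can be sketched as follows (a hedged Python illustration with hypothetical stand-in functions, not the thesis tooling): call a frequently used kernel function with systematically corrupted parameters and record whether the callee contains the error.

        # Illustrative parameter fault-injection harness (not the thesis tool).
        CORRUPTIONS = [
            lambda v: 0,             # NULL / zero value
            lambda v: 0xFFFFFFFF,    # out-of-range value
            lambda v: v ^ 0x1,       # single-bit flip
        ]

        def inject(call, good_args):
            """Invoke `call` once per corrupted variant of each parameter."""
            outcomes = []
            for i, arg in enumerate(good_args):
                for corrupt in CORRUPTIONS:
                    args = list(good_args)
                    args[i] = corrupt(arg)
                    try:
                        call(*args)
                        outcomes.append(("no_error", i))
                    except Exception as e:  # a real harness watches for crashes/hangs
                        outcomes.append((type(e).__name__, i))
            return outcomes

        def fake_kernel_alloc(pool_type, size):
            # Hypothetical stand-in that validates only one of its parameters,
            # mimicking the weak error containment observed in the experiments.
            assert 0 < size < 2**20, "size out of range"

        print(inject(fake_kernel_alloc, [1, 4096]))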

    Analysis and Mitigation of Remote Side-Channel and Fault Attacks on the Electrical Level

    The ongoing miniaturization of integrated circuits is approaching physical limits, with single-atom transistors representing one possible lower bound for feature sizes. Moreover, manufacturing the latest generations of microchips is nowadays financially feasible only for large, multinational companies. As a result of this development, miniaturization is no longer the driving force for further increasing the performance of electronic components. Instead, classical computer architectures with generic processors are evolving into heterogeneous systems with high parallelism and specialized accelerators. In these heterogeneous systems, however, protecting private data against attackers is also becoming increasingly difficult. New kinds of hardware components, new kinds of applications, and a generally increased complexity are some of the factors that make security in such systems a challenge. Cryptographic algorithms are often truly secure only under certain assumptions about the attacker. For example, it is often assumed that the attacker can access only the inputs and outputs of a module, while internal signals and intermediate values remain hidden. In real implementations, however, side-channel and fault attacks demonstrate the limits of this so-called black-box model. While side-channel attacks exploit data-dependent measurable quantities such as power consumption or electromagnetic emanation, fault attacks actively interfere with computations and use the faulty outputs to recover the secret data. This class of implementation attacks was originally considered only in the context of a local attacker with access to the target device. However, attacks based on measuring the time of particular memory accesses have already shown that the threat also exists from attackers with remote access. This thesis addresses the threat of remote side-channel and fault attacks, which is closely tied to the trend towards more heterogeneous systems. One example of novel hardware in heterogeneous computing is the Field-Programmable Gate Array (FPGA), with which almost arbitrary circuits can be realized in programmable logic. These logic chips are already deployed as accelerators both in the cloud and in end-user devices. However, it has been shown how the flexibility of these accelerators can be exploited to implement sensors that estimate the supply voltage. Furthermore, by activating large amounts of logic in a specific way, computations in other circuits can be disturbed for fault attacks. This thesis analyzes this threat further, for instance by extending existing attacks, and develops strategies for defending against it.
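
    Side-channel attacks of the kind analyzed here exploit the data dependence of measurements such as power consumption. As a generic illustration of that principle (a sketch of standard correlation power analysis, not the FPGA-based remote attack developed in the thesis), a key byte can be recovered by correlating a leakage model against measured traces. For brevity the model is the Hamming weight of plaintext XOR key; a real attack would target a nonlinear step such as the AES S-box output.

        # Generic correlation power analysis (CPA) sketch; illustrative only.
        import numpy as np

        def hamming_weight(x):
            return np.unpackbits(np.asarray(x, dtype=np.uint8)[..., None], axis=-1).sum(-1)

        def cpa_recover_byte(traces, plaintexts):
            """traces: (n, samples) power measurements; plaintexts: (n,) uint8.
            Assumes leakage increases with Hamming weight (signed correlation)."""
            best_guess, best_corr = 0, -1.0
            for guess in range(256):
                model = hamming_weight(plaintexts ^ guess).astype(float)
                m = model - model.mean()
                t = traces - traces.mean(axis=0)
                corr = (m @ t) / (np.linalg.norm(m) * np.linalg.norm(t, axis=0) + 1e-12)
                if corr.max() > best_corr:
                    best_guess, best_corr = guess, corr.max()
            return best_guess

        # Synthetic demo: noisy traces leaking HW(plaintext ^ 0x3C).
        rng = np.random.default_rng(1)
        pts = rng.integers(0, 256, 500, dtype=np.uint8)
        leak = hamming_weight(pts ^ 0x3C).astype(float)
        traces = leak[:, None] + rng.normal(0, 1, (500, 4))
        print(cpa_recover_byte(traces, pts))  # -> 60 (0x3C)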

    Dynamic security mechanisms for softwarized and virtualized networks

    The relationship between attackers and defenders has traditionally been asymmetric, with attackers holding the upper hand of time to devise an exploit that compromises the defender. The push towards the Cloudification of the world makes matters more challenging, as it lowers the cost of an attack through a de facto standardization on a set of protocols. The discovery of a vulnerability now has a broader impact across verticals (business use cases), whereas previously many systems ran segregated protocol stacks that required independent vulnerability research. Furthermore, defining a perimeter within a cloudified system is non-trivial, whereas before, dedicated equipment already created a perimeter. This work takes the newer technologies of network softwarization and virtualization, both Cloud enablers, to create new dynamic security mechanisms that address this asymmetric relationship using novel Moving Target Defense (MTD) approaches. The effective use of the exploration space, combined with the reconfiguration capabilities of frameworks like Network Function Virtualization (NFV) and Management and Orchestration (MANO), should allow defense levels to be adjusted dynamically to achieve the required security as defined by the currently acceptable risk. The optimization and integration tasks of this thesis explore these concepts. Furthermore, the proposed novel mechanisms were evaluated in real-world use cases, such as 5G networks and other Network Slicing enabled infrastructures.
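
    As an illustration of the MTD idea (a minimal Python sketch with hypothetical configuration knobs, not the thesis framework), a defense loop can periodically draw a fresh configuration from the exploration space and hand it to an NFV/MANO-style orchestrator, with the reconfiguration rate tied to the currently acceptable risk.

        # Moving-target-defense reconfiguration loop (illustrative only).
        import random
        import time

        EXPLORATION_SPACE = {
            "service_port": range(20000, 30000),
            "placement_node": ["edge-a", "edge-b", "core-1"],
            "ip_suffix": range(10, 250),
        }

        def next_configuration():
            return {k: random.choice(list(v)) for k, v in EXPLORATION_SPACE.items()}

        def mtd_loop(apply_config, risk_level, rounds=3):
            # Higher perceived risk -> shorter exposure window per configuration.
            period = 60.0 / max(risk_level, 1)
            for _ in range(rounds):
                apply_config(next_configuration())
                time.sleep(period)

        # Stand-in for a MANO reconfiguration call:
        mtd_loop(lambda cfg: print("reconfigure:", cfg), risk_level=120)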

    System Abstractions for Scalable Application Development at the Edge

    Recent years have witnessed an explosive growth of Internet of Things (IoT) devices, which collect or generate huge amounts of data. Given diverse device capabilities and application requirements, data processing takes place across a range of settings, from on-device to a nearby edge server/cloud and the remote cloud. Consequently, edge-cloud coordination has been studied extensively from the perspectives of job placement, scheduling and joint optimization. Typical approaches focus on performance optimization for individual applications. This often requires domain knowledge of the applications, but also leads to application-specific solutions. Application development and deployment over diverse scenarios thus incur repetitive manual effort. There are two overarching challenges to providing system-level support for application development at the edge. First, there is inherent heterogeneity at the device hardware level. The execution settings may range from a small cluster serving as an edge cloud to on-device inference on embedded devices, differing in hardware capability and programming environment. Further, application performance requirements vary significantly, making it even more difficult to map different applications onto already heterogeneous hardware. Second, there are trends towards incorporating the edge and cloud and multi-modal data. Together, these add further dimensions to the design space and increase the complexity significantly. In this thesis, we propose a novel framework to simplify application development and deployment over a continuum from edge to cloud. Our framework provides key connections between different dimensions of design considerations, corresponding to the application abstraction, data abstraction and resource management abstraction respectively. First, our framework masks hardware heterogeneity with abstract resource types through containerization, and abstracts the application processing pipelines into generic flow graphs. It also supports a notion of degradable computing for application scenarios at the edge that are driven by multimodal sensory input. Next, as video analytics is the killer application of edge computing, we include a generic data management service between video query systems and a video store to organize video data at the edge. We propose a video data unit abstraction based on a notion of distance between objects in the video, quantifying the semantic similarity among video data. Last, considering concurrent application execution, our framework supports multi-application offloading with device-centric control, through a userspace scheduler service that wraps over the operating system scheduler.
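
    As a rough illustration of the application abstraction and degradable computing described above (a Python sketch with invented names, not the thesis framework), an application can be expressed as a flow graph of stages, some of which are marked degradable and may be skipped when the execution setting cannot afford them.

        # Flow-graph pipeline with degradable stages (illustrative only).
        from dataclasses import dataclass, field
        from typing import Any, Callable

        @dataclass
        class Stage:
            name: str
            run: Callable[[Any], Any]
            degradable: bool = False  # may be skipped under resource pressure

        @dataclass
        class FlowGraph:
            stages: list = field(default_factory=list)

            def execute(self, data, budget):
                # Skip degradable stages when the budget is exhausted; a real
                # scheduler would use measured per-device cost models instead.
                for stage in self.stages:
                    if budget <= 0 and stage.degradable:
                        continue
                    data = stage.run(data)
                    budget -= 1
                return data

        pipeline = FlowGraph([
            Stage("decode", lambda d: d),
            Stage("detect_objects", lambda d: d, degradable=True),
            Stage("aggregate", lambda d: d),
        ])
        print(pipeline.execute("frame", budget=1))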

    Native Simulation of Multiprocessor Systems-on-Chip Using Hardware-Assisted Virtualization

    Integration of multiple heterogeneous processors into a single System-on-Chip (SoC) is a clear trend in embedded systems. Designing and verifying these systems requires high-speed and easy-to-build simulation platforms. Among software simulation approaches, native simulation is a good candidate, since the embedded software is executed natively on the host machine, resulting in high-speed simulation without requiring instruction set simulator development effort. However, existing native simulation techniques execute the simulated software in a memory space shared between the modeled hardware and the host operating system. This results in many problems, including address space conflicts and overlaps, as well as the use of host machine addresses instead of those of the target hardware platform. This makes it practically impossible to natively simulate legacy code running on the target platform. To overcome these issues, we propose the addition of a transparent address space translation layer to separate the target address space from that of the host simulator. We exploit Hardware-Assisted Virtualization (HAV) technology for this purpose, which is now readily available on almost all general-purpose processors. Experiments show that this solution does not degrade the native simulation speed, while keeping the ability to accomplish software performance evaluation. The proposed solution is scalable as well as flexible, and we provide the necessary evidence to support our claims with multiprocessor and hybrid simulation solutions. We also address the simulation of cross-compiled Very Long Instruction Word (VLIW) executables, using a Static Binary Translation (SBT) technique to generate native code that requires no run-time translation or interpretation support. This approach is interesting in situations where either the source code is not available or the target platform is not supported by any retargetable compilation framework, which is usually the case for VLIW processors. The generated simulators execute on top of our HAV-based platform and model the Texas Instruments (TI) C6x series processors. Simulation results for VLIW binaries show a speed-up of around two orders of magnitude compared to cycle-accurate simulators.
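
    As a minimal illustration of the address-space translation idea (a Python sketch, not the HAV-based implementation), target addresses can be mapped page by page onto host memory, so that simulated software keeps using target-platform addresses rather than host ones.

        # Transparent target-to-host address translation (illustrative only).
        PAGE_SIZE = 4096

        class AddressTranslationLayer:
            def __init__(self):
                self.page_table = {}  # target page number -> host page (bytearray)

            def _page(self, target_addr):
                page_no = target_addr // PAGE_SIZE
                if page_no not in self.page_table:
                    self.page_table[page_no] = bytearray(PAGE_SIZE)  # lazy allocation
                return self.page_table[page_no], target_addr % PAGE_SIZE

            def write(self, target_addr, data):
                # Single-page accesses only, for brevity.
                page, off = self._page(target_addr)
                page[off:off + len(data)] = data

            def read(self, target_addr, size):
                page, off = self._page(target_addr)
                return bytes(page[off:off + size])

        mmu = AddressTranslationLayer()
        mmu.write(0x8000_0000, b"\xde\xad\xbe\xef")  # a target address, not a host one
        print(mmu.read(0x8000_0000, 4).hex())        # -> 'deadbeef'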

    Intrusion detection based on behavioral rules for the bytes of the headers of the data units in IP networks

    Nowadays, communications through computer networks are of utmost importance for the normal functioning of organizations, worldwide transactions and content delivery. These networks are threatened by all kinds of attacks, leading to traffic anomalies that will eventually disrupt the normal behaviour of the networks, exploiting specific breaches in a system component or exhausting network resources. Automatic detection of these network anomalies is one of the most important resources for network administration, and Intrusion Detection Systems (IDSs) are amongst the systems responsible for this automatic detection. This dissertation starts from the assumption that it is possible to use machine learning to, consistently and automatically, produce rules for an intrusion detector based on statistics for the first 64 bytes of the headers of Internet Protocol (IP) packets. The survey of the state of the art in related works and currently available IDSs shows that the specific approach taken here is worth exploring. The decision tree learning algorithm known as C4.5 is identified as a suitable means to produce the aforementioned rules, due to the similarity between the rules' syntax and the tree structure. Several rules were then devised using the machine learning approach for several attacks. The attacks were the same used in a previous work, in which the rules were devised manually. Both rule sets are then compared to show that it is indeed possible to construct rules using the approach taken herein, and that the rules created using the C4.5 algorithm are superior to the ones devised after thorough human analysis of several statistics calculated for the bytes of the packet headers. To compare them, each rule set was used to detect intrusions in third-party traces containing attacks and in live traffic during simulation of attacks. Most of the attacks producing a noticeable impact on the headers were detected by both rule sets, but the results for the third-party traces were better in the case of the machine-learning-devised rules, providing clear evidence for the aforementioned assumption.
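
    The learning step can be sketched as follows (a Python illustration using scikit-learn's CART-based DecisionTreeClassifier as a stand-in for C4.5, and synthetic packet data in place of the real traces): each sample is the first 64 header bytes treated as 64 integer features, and the learned tree is exported as human-readable rules.

        # Decision-tree rules from the first 64 header bytes (illustrative only).
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(0)

        # Stand-in data: 200 "normal" packets and 200 "attack" packets in which
        # byte 9 (the IPv4 protocol field) carries an anomalous value.
        normal = rng.integers(0, 256, size=(200, 64))
        attack = rng.integers(0, 256, size=(200, 64))
        attack[:, 9] = 255
        X = np.vstack([normal, attack])
        y = np.array([0] * 200 + [1] * 200)

        tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
        print(export_text(tree, feature_names=[f"byte_{i}" for i in range(64)]))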

    Building the Hyperconnected Society: Internet of Things Research and Innovation Value Chains, Ecosystems and Markets

    This book aims to provide a broad overview of various topics of the Internet of Things (IoT), ranging from research, innovation and development priorities to enabling technologies, nanoelectronics, cyber-physical systems, architecture, interoperability and industrial applications. All this is happening in a global context, building towards intelligent, interconnected decision making as an essential driver for new growth and co-competition across a wider set of markets. It is intended to be a standalone book in a series that covers the Internet of Things activities of the IERC (Internet of Things European Research Cluster), from research to technological innovation, validation and deployment. The book builds on the ideas put forward by the IERC Strategic Research and Innovation Agenda, and presents global views and state-of-the-art results on the challenges facing the research, innovation, development and deployment of IoT in future years. The concept of IoT could disrupt consumer and industrial product markets, generating new revenues and serving as a growth driver for semiconductor, networking equipment, and service provider end-markets globally. This will create new application and product end-markets and change the value chain of the companies that create IoT technology and deploy it in various end sectors, while impacting the business models of semiconductor, software, device, communication and service provider stakeholders. The proliferation of intelligent devices at the edge of the network, with the introduction of embedded software and app-driven hardware into manufactured devices, and the ability, through embedded software/hardware developments, to monetize those device functions and features by offering novel solutions, could generate completely new types of revenue streams. Intelligent and IoT devices leverage software, software licensing, entitlement management, and Internet connectivity in ways that address many of the societal challenges that we will face in the next decade.

    Secure and safe virtualization-based framework for embedded systems development

    Doctoral Thesis - Doctoral Program in Electronics and Computer Engineering (PDEEC). The Internet of Things (IoT) is here. Billions of smart, connected devices are proliferating at a rapid pace in our key infrastructures, generating, processing and exchanging vast amounts of security-critical and privacy-sensitive data. This strong connectivity of IoT environments demands a holistic, end-to-end security approach, addressing security and privacy risks across different abstraction levels: device, communications, cloud, and lifecycle management. Security at the device level is often misconstrued as the addition of features at a late stage of system development. Several software-based approaches, such as microkernels and virtualization, have been used, but on their own they have proven unable to provide the desired security level. As a step towards the correct operation of these devices, it is imperative to extend them with new security-oriented technologies that guarantee security from the outset. This thesis aims to conceive and design a novel security and safety architecture for virtualized systems by 1) evaluating which technologies are key enablers for scalable and secure virtualization, 2) designing and implementing a fully-featured virtualization environment providing hardware isolation, 3) investigating which "hard entities" can extend virtualization to guarantee the security requirements dictated by confidentiality, integrity, and availability, and 4) simplifying system configurability and integration through a design ecosystem supported by a domain-specific language. The developed artefacts demonstrate: 1) why ARM TrustZone is nowadays a reference technology for security, 2) how TrustZone can be adequately exploited for virtualization in different use cases, 3) why the secure boot process, trusted execution environment and other hardware trust anchors are essential to establish and guarantee a complete root and chain of trust, and 4) how a domain-specific language enables easy design, integration and customization of a secure virtualized system assisted by the above-mentioned building blocks.
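
    The chain-of-trust concept underlying the secure boot process can be illustrated with a minimal Python sketch (made-up image names and in-memory reference hashes, not the thesis artefacts): each boot stage is measured against a reference value anchored in hardware before control is transferred.

        # Secure-boot chain of trust (illustrative only).
        import hashlib

        # Hypothetical reference measurements, assumed to be kept in tamper-proof
        # storage (e.g., fused into the SoC or sealed by the TrustZone secure world).
        TRUSTED_HASHES = {
            "bootloader": hashlib.sha256(b"bootloader-image-v1").hexdigest(),
            "hypervisor": hashlib.sha256(b"hypervisor-image-v1").hexdigest(),
            "guest_os":   hashlib.sha256(b"guest-os-image-v1").hexdigest(),
        }

        def verify_and_boot(stages):
            """stages: list of (name, image_bytes), in boot order."""
            for name, image in stages:
                measured = hashlib.sha256(image).hexdigest()
                if measured != TRUSTED_HASHES[name]:
                    raise RuntimeError(f"chain of trust broken at {name}")
                print(f"{name}: measurement ok, transferring control")

        verify_and_boot([
            ("bootloader", b"bootloader-image-v1"),
            ("hypervisor", b"hypervisor-image-v1"),
            ("guest_os",   b"guest-os-image-v1"),
        ])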