
    Vulnerability detection in device drivers

    Doctoral thesis, Informatics (Computer Science), Universidade de Lisboa, Faculdade de Ciências, 2017. The constant evolution of electronics means that new equipment/devices regularly become available on the market, which has led to a situation where common operating systems (OS) include many device drivers (DD) produced by very diverse manufacturers. Experience has shown that the development of DD is error prone, as a majority of OS crashes can be attributed to flaws in their implementation. This thesis addresses the challenge of designing methodologies and tools to facilitate the detection of flaws in DD, contributing to a decrease in the errors in this kind of software, their impact on OS stability, and the security threats they cause. This is especially relevant because it can help developers to improve the quality of drivers during their implementation or when they are integrated into a system. The thesis work started by assessing how DD flaws can impact the correct execution of the Windows OS. The employed approach used a statistical analysis to obtain the list of kernel functions most used by the DD, and then automatically generated synthetic drivers that introduce parameter errors when calling a kernel function, thus mimicking a faulty interaction. The experimental results showed that most of the targeted functions were ineffective in defending against the incorrect parameters. A reasonable number of crashes and a small number of hangs were observed, suggesting a poor error containment capability in these OS functions. Then, we produced an architecture and a tool that support the automatic injection of network attacks into mobile equipment (e.g., phones), with the objective of finding security flaws (or vulnerabilities) in Wi-Fi drivers. These DD were selected because they are easily accessible to an external adversary, who simply needs to create malicious traffic to exploit them, and therefore flaws in their implementation can have an important impact. Experiments with the tool uncovered a previously unknown vulnerability that causes OS hangs when a specific value is assigned to the TIM element in the Beacon frame. The experiments also revealed a potential implementation problem in the TCP/IP stack, triggered by disassociation frames while the target device was associated and authenticated with a Wi-Fi access point. Next, we developed a tool capable of registering and instrumenting the interactions between a DD and the OS. The solution uses a wrapper DD around the binary of the driver under test, enabling full control over the function calls and parameters involved in the OS-DD interface. This tool can support very diverse testing operations, including logging system activity and reverse engineering the driver's behaviour. Some experiments were performed with the tool, allowing us to record insights into the interactions between the DD and the OS, including parameter values and return values. Results also showed the ability to identify bugs in drivers by executing tests based on the knowledge obtained from the driver's dynamics. Our final contribution is a methodology and framework for the discovery of errors and vulnerabilities in Windows DD by executing the drivers in a fully emulated environment. This approach is capable of testing the drivers without requiring access to the associated hardware or the DD source code, and has granular control over each machine instruction. Experiments performed with off-the-shelf DD confirmed a high dependency on the correctness of the parameters passed by the OS, identified the precise location and cause of memory leaks, and revealed the existence of dormant and vulnerable code.
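
    A minimal sketch of the parameter fault-injection idea described above, in C. The callee kernel_api_under_test and the corruption patterns are illustrative assumptions; the thesis targets real Windows kernel functions selected by the statistical analysis, and the synthetic drivers run in kernel mode rather than as a user-space simulation.

        /* Sketch of parameter fault injection: a synthetic "driver" calls a
         * kernel-style API with deliberately corrupted arguments to observe
         * how well the callee contains the error. The callee here is a
         * hypothetical stand-in, not a real Windows kernel export. */
        #include <stdio.h>
        #include <stddef.h>

        /* Hypothetical stand-in for a frequently used kernel function. */
        static int kernel_api_under_test(void *buffer, size_t length)
        {
            if (buffer == NULL || length == 0 || length > 4096)
                return -1;          /* robust callee: rejects bad input */
            /* ... normal processing would happen here ... */
            return 0;
        }

        int main(void)
        {
            char buf[64];
            struct { void *p; size_t len; const char *desc; } faults[] = {
                { NULL,        sizeof buf, "NULL pointer"    },
                { buf,         0,          "zero length"     },
                { buf,         (size_t)-1, "huge length"     },
                { (void *)0x1, sizeof buf, "invalid pointer" },
            };

            for (size_t i = 0; i < sizeof faults / sizeof faults[0]; i++) {
                int rc = kernel_api_under_test(faults[i].p, faults[i].len);
                printf("fault '%s' -> %s\n", faults[i].desc,
                       rc == 0 ? "accepted (weak containment)" : "rejected");
            }
            return 0;
        }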

    Link-time smart card code hardening

    This paper presents a feasibility study on protecting smart card software against fault-injection attacks by means of link-time code rewriting. This approach avoids the drawbacks of source code hardening, avoids the need for manual assembly writing, and is applicable in conjunction with closed third-party compilers. We implemented a range of cookbook code hardening recipes in a prototype link-time rewriter and evaluated their coverage and associated overhead, concluding that this approach is promising. We demonstrate that the overhead of using an automated link-time approach is not significantly higher than what can be obtained with compile-time hardening or with manual hardening of compiler-generated assembly code.
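
    As a hedged illustration of the kind of cookbook recipe mentioned above, the C sketch below duplicates a security-critical comparison with a complementary re-check, so that a single instruction-skip fault or corrupted branch cannot silently bypass it. The names and the abort() reaction are assumptions; the paper's recipes are applied automatically by the link-time rewriter to compiled binary code, which also sidesteps the risk of a compiler optimising the redundancy away (the volatile copies serve that purpose in this hand-written version).

        /* One classic fault-injection hardening recipe (sketch): verify a
         * security-critical condition twice, the second time inverted. */
        #include <stdio.h>
        #include <stdlib.h>

        /* Placeholder reaction; a real card would mute itself or erase secrets. */
        #define FAULT_DETECTED() abort()

        int pin_ok(unsigned entered, unsigned expected)
        {
            volatile unsigned a = entered, b = expected;

            if (a == b) {
                if (a != b)            /* complementary re-check */
                    FAULT_DETECTED();
                return 1;
            }
            if (a == b)                /* re-check on the failing path too */
                FAULT_DETECTED();
            return 0;
        }

        int main(void)
        {
            printf("correct PIN accepted: %d\n", pin_ok(1234, 1234));
            printf("wrong PIN accepted:   %d\n", pin_ok(1111, 1234));
            return 0;
        }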

    Intrusion tolerant routing with data consensus in wireless sensor networks

    Dissertation for the degree of Master in Computer Engineering (Engenharia Informática). Wireless sensor networks (WSNs) are rapidly emerging and growing as an important new area in computing and wireless networking research. Applications of WSNs are numerous, growing, and range from small-scale indoor deployment scenarios in homes and buildings to large-scale outdoor deployments in natural, industrial, military and embedded environments. In a WSN, the sensor nodes collect data to monitor physical conditions or to measure and pre-process physical phenomena, and forward that data to special computing nodes called Syncnodes or Base Stations (BSs). These nodes are eventually interconnected, as gateways, to other processing systems running applications. In large-scale settings, WSNs operate with a large number of sensors – from hundreds to thousands of sensor nodes – organised as ad-hoc multi-hop or mesh networks, working without human supervision. Sensor nodes are very limited in computation, storage, communication and energy resources. These limitations impose particular challenges in designing large-scale, reliable and secure WSN services and applications. However, because sensors are very limited in their resources, they also tend to be very cheap. Resilient solutions based on a large number of nodes with replicated capabilities are therefore a possible approach to address dependability concerns, namely reliability and security requirements and fault- or intrusion-tolerant network services. This thesis proposes, implements and tests an intrusion-tolerant routing service for large-scale dependable WSNs. The service is based on a tree-structured multi-path routing algorithm, establishing multi-hop and multiple disjoint routes between sensors and a group of BSs. The BS nodes work as an overlay, performing intrusion-tolerant data consensus over the routed data. In the proposed solution the multiple routes are discovered, selected and established by a self-organisation process. The solution allows the WSN nodes to collect and route data through multiple disjoint routes to the different BSs, with a preventive intrusion tolerance approach, while handling possible Byzantine attacks and failures in sensors and BSs with a pro-active recovery strategy supported by intrusion- and fault-tolerant data-consensus algorithms performed by the group of Base Stations.
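
    The sketch below illustrates, under assumed names and a fixed route count, the data-consensus step performed by the Base Stations: a reading delivered over multiple disjoint routes is accepted only if a majority of the copies agree, so a minority of compromised routes cannot forge it. It is a simplified illustration of the principle, not the thesis' actual consensus protocol.

        /* Majority vote over copies of one sensor reading, each delivered
         * over a different disjoint route to the Base Station overlay. */
        #include <stdio.h>

        #define NUM_ROUTES 5   /* illustrative number of disjoint routes */

        /* Returns 1 and stores the agreed value if > NUM_ROUTES/2 copies match. */
        int consensus(const int reading[NUM_ROUTES], int *agreed)
        {
            for (int i = 0; i < NUM_ROUTES; i++) {
                int votes = 0;
                for (int j = 0; j < NUM_ROUTES; j++)
                    if (reading[j] == reading[i])
                        votes++;
                if (votes > NUM_ROUTES / 2) {
                    *agreed = reading[i];
                    return 1;
                }
            }
            return 0;   /* no majority: reading is discarded as suspect */
        }

        int main(void)
        {
            int copies[NUM_ROUTES] = { 21, 21, 99, 21, 21 };   /* one forged copy */
            int value;
            if (consensus(copies, &value))
                printf("accepted reading: %d\n", value);
            else
                printf("no consensus, reading rejected\n");
            return 0;
        }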

    Securing Arm Platform: From Software-Based To Hardware-Based Approaches

    With the rapid proliferation of the ARM architecture on smart mobile phones and Internet of Things (IoT) devices, the security of the ARM platform has become an emerging problem. In recent years, the number of malware samples identified on ARM platforms, especially on Android, has grown explosively, and these samples use evasion techniques to escape detection by existing analysis systems. In our research, we first present a software-based mechanism to increase the accuracy of existing static analysis tools by reassembleable bytecode extraction. Our solution collects bytecode and data at runtime and then reassembles them offline to help static analysis tools reveal the hidden behavior in an application. Further, we implement a hardware-based transparent malware analysis framework for general ARM platforms to defend against traditional evasion techniques. Our framework leverages hardware debugging features and the Trusted Execution Environment (TEE) to achieve transparent tracing and debugging with reasonable overhead. To understand the security of the hardware debugging features involved, we perform a comprehensive study of the ARM debugging features and summarize the security implications. Based on these implications, we design a novel attack scenario that achieves privilege escalation by misusing the debugging features in the inter-processor debugging model. The attack raises concerns about the security of TEEs and cyber-physical systems (CPS). For a better understanding of the security of TEEs, we investigate the security of various TEEs on different architectures and platforms, and state the security challenges. A study of deploying TEEs on edge platforms is also presented. For the security of CPS, we conduct an analysis of real-world traffic signal infrastructure and summarize the security problems.
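
    One concrete artefact of the debugging-feature study mentioned above is the set of ARMv8 debug authentication signals. The fragment below is an assumption-laden sketch, not code from the framework: it reads DBGAUTHSTATUS_EL1 to see which kinds of invasive/non-invasive and secure/non-secure debugging a device currently permits. The register is only accessible from EL1 or higher, so this would have to live in kernel code on an AArch64 target.

        /* Read the ARMv8 debug authentication status (kernel/EL1 context only). */
        #include <stdint.h>
        #include <stdio.h>

        static inline uint64_t read_dbgauthstatus(void)
        {
            uint64_t v;
            __asm__ volatile("mrs %0, dbgauthstatus_el1" : "=r"(v));
            return v;
        }

        /* Each 2-bit field reads 0b10 (implemented, disabled) or
         * 0b11 (implemented, enabled). */
        #define NSID(v)   (((v) >> 0) & 0x3)  /* non-secure invasive debug     */
        #define NSNID(v)  (((v) >> 2) & 0x3)  /* non-secure non-invasive debug */
        #define SID(v)    (((v) >> 4) & 0x3)  /* secure invasive debug         */
        #define SNID(v)   (((v) >> 6) & 0x3)  /* secure non-invasive debug     */

        void report_debug_auth(void)
        {
            uint64_t v = read_dbgauthstatus();
            printf("NSID=%llu NSNID=%llu SID=%llu SNID=%llu\n",
                   (unsigned long long)NSID(v), (unsigned long long)NSNID(v),
                   (unsigned long long)SID(v),  (unsigned long long)SNID(v));
        }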

    FINE-GRAINED ACCESS CONTROL ON ANDROID COMPONENT

    The pervasiveness of Android devices in today’s interconnected world emphasizes the importance of mobile security in protecting user privacy and digital assets. Android’s current security model primarily enforces application-level mechanisms, which fail to address component-level (e.g., Activity, Service, and Content Provider) security concerns. Consequently, third-party code may exploit an application’s permissions, and security features like MDM or BYOD face limitations in their implementation. To address these concerns, we propose a novel Android component context-aware access control mechanism that enforces layered security at multiple Exception Levels (ELs), including EL0, EL1, and EL3. This approach effectively restricts component privileges and controls resource access as needed. Our solution comprises Flasa at EL0, extending SELinux policies for inter-component interactions and SQLite content control; Compac, spanning EL0 and EL1, which enforces component-level permission controls through Android runtime and kernel modifications; and TzNfc, leveraging TrustZone technologies to secure third-party services and limit system privileges via the Trusted Execution Environment (TEE). Our evaluations demonstrate the effectiveness of the proposed solution in containing component privileges, controlling inter-component interactions and protecting component-level resource access. This enhanced solution, complementing Android’s existing security architecture, provides a more comprehensive approach to Android security, benefiting users, developers, and the broader mobile ecosystem.
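
    The sketch below is a highly simplified, hypothetical illustration of the component-level permission masking that Compac is described as enforcing: a permission check consults which component is currently executing and intersects the request with that component's grants, instead of giving every thread the application's full permission set. All names, the policy table and the prefix matching are assumptions for illustration only.

        /* Hypothetical component-level permission masking. */
        #include <stdio.h>
        #include <string.h>

        #define PERM_INTERNET   (1u << 0)
        #define PERM_LOCATION   (1u << 1)
        #define PERM_CONTACTS   (1u << 2)

        struct component_policy {
            const char *component;   /* e.g. class or package prefix */
            unsigned    granted;     /* subset of the app's permissions */
        };

        static const struct component_policy policy[] = {
            { "com.example.app",      PERM_INTERNET | PERM_LOCATION | PERM_CONTACTS },
            { "com.adlib.thirdparty", PERM_INTERNET },   /* ad library: network only */
        };

        static int check_permission(const char *component, unsigned perm)
        {
            for (size_t i = 0; i < sizeof policy / sizeof policy[0]; i++)
                if (strncmp(component, policy[i].component,
                            strlen(policy[i].component)) == 0)
                    return (policy[i].granted & perm) == perm;
            return 0;   /* unknown component: deny */
        }

        int main(void)
        {
            printf("ad lib location access: %s\n",
                   check_permission("com.adlib.thirdparty.Tracker", PERM_LOCATION)
                       ? "allowed" : "denied");
            return 0;
        }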

    Near Field Communication: From theory to practice

    This book provides the technical essentials, state-of-the-art knowledge, business ecosystem and standards of Near Field Communication (NFC), by the NFC Lab – Istanbul research centre, which conducts intense research on NFC technology. In this book, the authors present contemporary research on all aspects of NFC, addressing related security aspects as well as information on various business models. In addition, the book provides the comprehensive information a designer needs to design an NFC project, an analyst needs to analyze the requirements of a new NFC-based system, and a programmer needs to implement an application. Furthermore, the authors introduce the technical and administrative issues related to NFC technology, standards, and global stakeholders. The book also offers use-case studies for each NFC operating mode, to thoroughly convey the usage idea behind each mode. Examples of NFC application development are provided using Java technology, and security considerations are discussed in detail. Key features: offers a complete understanding of NFC technology, including standards, technical essentials, operating modes, application development with Java, security and privacy, and business ecosystem analysis; provides analysis, design and development guidance for professionals from administrative and technical perspectives; discusses methods, techniques and modelling support, including UML, demonstrated with real cases; contains case studies such as payment, ticketing, social networking and remote shopping. This book will be an invaluable guide for business and ecosystem analysts, project managers, mobile commerce consultants, system and application developers, mobile developers and practitioners. It will also be of interest to researchers, software engineers, computer scientists and information technology specialists, including students and graduates.

    Cyber-security protection techniques to mitigate memory errors exploitation

    Thesis by compendium of publications. Practical experience in software engineering has demonstrated that the goal of building totally fault-free software systems, although desirable, is impossible to achieve. Therefore, it is necessary to incorporate mitigation techniques in the deployed software in order to reduce the impact of latent faults. This thesis makes contributions to three memory corruption mitigation techniques: the stack smashing protector (SSP), address space layout randomisation (ASLR) and automatic software diversification. The SSP is a very effective protection technique against stack buffer overflows, but it is prone to brute force attacks, particularly the dangerous byte-for-byte attack. A novel modification, named RenewSSP, has been proposed which eliminates brute force attacks, can be used in a completely transparent way with existing software and has negligible overheads. There are two kinds of application for which RenewSSP is especially beneficial: networking servers (tested on Apache) and application launchers (tested on Android). ASLR is a generic concept with multiple designs and implementations. In this thesis, the two most relevant ASLR implementations of Linux have been analysed (Vanilla Linux and the PaX patch), and several weaknesses have been found. Taking into account technological improvements in execution support (compilers and libraries), a new ASLR design has been proposed, named ASLR-NG, which maximises entropy, effectively addresses the fragmentation issue and removes a number of the identified weaknesses. Furthermore, ASLR-NG is transparent to applications, in that it preserves binary code compatibility and does not add overheads. ASLR-NG has been implemented as a patch to the Linux kernel 4.1. Software diversification is a technique that covers a wide range of faults, including memory errors. The main problem is how to create variants, i.e. programs which have identical behaviours on normal inputs but where faults manifest differently. A novel form of automatic variant generation has been proposed, using multiple cross-compiler suites and processor emulators. One of the main goals of this thesis is to create applicable results. Therefore, I have placed particular emphasis on the development of real prototypes in parallel with the theoretical study. The results of this thesis are directly applicable to real systems; in fact, some of the results have already been included in real-world products.
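
    To make the byte-for-byte weakness of the classic SSP concrete, the sketch below simulates the attack against a forking server whose children all inherit the same canary: the attacker overflows one byte at a time and uses the crash/no-crash signal as an oracle, recovering an 8-byte canary in at most 8 x 256 probes instead of 2^64 guesses. The canary value and the oracle function are stand-ins, not an exploit against real software; RenewSSP defeats exactly this strategy by renewing the canary so that already-learned bytes become worthless.

        /* Local simulation of the byte-for-byte stack-canary attack. */
        #include <stdio.h>
        #include <string.h>
        #include <stdint.h>

        static const uint8_t secret_canary[8] =
            { 0x00, 0x9b, 0x4e, 0x21, 0xd3, 0x77, 0x5a, 0xc6 };

        /* Returns 1 ("child survived") iff the first n guessed bytes match. */
        static int oracle(const uint8_t *guess, size_t n)
        {
            return memcmp(guess, secret_canary, n) == 0;
        }

        int main(void)
        {
            uint8_t recovered[8] = { 0 };
            int probes = 0;

            for (size_t i = 0; i < sizeof recovered; i++) {
                for (int b = 0; b < 256; b++) {
                    recovered[i] = (uint8_t)b;
                    probes++;
                    if (oracle(recovered, i + 1))
                        break;        /* byte confirmed, move to the next */
                }
            }
            printf("canary recovered in %d probes\n", probes);
            return 0;
        }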
    Marco Gisbert, H. (2015). Cyber-security protection techniques to mitigate memory errors exploitation [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/57806

    Doctor of Philosophy

    A modern software system is a composition of parts that are themselves highly complex: operating systems, middleware, libraries, servers, and so on. In principle, compositionality of interfaces means that we can understand any given module independently of the internal workings of other parts. In practice, however, abstractions are leaky, and with every generation, modern software systems grow in complexity. Traditional ways of understanding failures, explaining anomalous executions, and analyzing performance are reaching their limits in the face of emergent behavior, unrepeatability, cross-component execution, software aging, and adversarial changes to the system at run time. Deterministic systems analysis has the potential to change the way we analyze and debug software systems. Recorded once, the execution of the system becomes an independent artifact, which can be analyzed offline. The availability of the complete system state, the guaranteed behavior of re-execution, and the absence of limitations on the run-time complexity of analysis collectively enable the deep, iterative, and automatic exploration of the dynamic properties of the system. This work creates a foundation for making deterministic replay a ubiquitous system analysis tool. It defines design and engineering principles for building fast and practical replay machines capable of capturing the complete execution of the entire operating system with an overhead of a few percent on a realistic workload and with minimal installation costs. To provide an intuitive interface for constructing replay analysis tools, this work implements a powerful virtual machine introspection layer that enables an analysis algorithm to be programmed against the state of the recorded system in the familiar terms of source-level variable and type names. To support performance analysis, the replay engine provides a faithful performance model of the original execution during replay.
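
    The sketch below illustrates the record-once/replay-offline principle on a single nondeterministic input (the system time): in record mode its values are logged, in replay mode they are fed back from the log, so the computation that consumes them is reproduced exactly. The file name and the derived computation are assumptions; the dissertation applies this idea to complete operating-system executions inside a virtual machine monitor, not at this user level.

        /* Minimal record/replay of one nondeterministic input. */
        #include <stdio.h>
        #include <time.h>

        static FILE *logf;
        static int replaying;

        static long observe_time(void)
        {
            long t;
            if (replaying) {
                if (fscanf(logf, "%ld", &t) != 1)
                    return -1;                 /* log exhausted */
            } else {
                t = (long)time(NULL);          /* nondeterministic source */
                fprintf(logf, "%ld\n", t);     /* record it */
            }
            return t;
        }

        int main(int argc, char **argv)
        {
            (void)argv;
            replaying = (argc > 1);                      /* any argument => replay */
            logf = fopen("events.log", replaying ? "r" : "w");
            if (!logf)
                return 1;

            long seed = observe_time();
            unsigned long derived = (unsigned long)seed * 1103515245UL + 12345UL;
            printf("%s: derived value = %lu\n",
                   replaying ? "replay" : "record", derived);
            fclose(logf);
            return 0;
        }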

    Certifications of Critical Systems – The CECRIS Experience

    In recent years, a considerable amount of effort has been devoted, both in industry and academia, to the development, validation and verification of critical systems, i.e. those systems whose malfunctions or failures reach a critical level both in terms of risks to human life and in terms of their large economic impact. Certifications of Critical Systems – The CECRIS Experience documents the main insights on cost-effective verification and validation processes that were gained during work in the European research project CECRIS (acronym for Certification of Critical Systems). The objective of the research was to tackle the challenges of certification by focusing on those aspects that turn out to be more difficult and more important for the current and future critical-systems industry: the effective use of methodologies, processes and tools. The CECRIS project took a step forward in the growing field of development, verification, validation and certification of critical systems. It focused on the more difficult and important aspects of the critical-system development, verification, validation and certification process. Starting from both the scientific and the industrial state-of-the-art methodologies for system development and the impact of their usage on the verification, validation and certification of critical systems, the project aimed at developing strategies and techniques, supported by automatic or semi-automatic tools and methods, for these activities, setting guidelines to support engineers during the planning of the verification and validation phases.