24 research outputs found

    Information Leakage Attacks and Countermeasures

    The scientific community has been consistently working on the pervasive problem of information leakage, uncovering numerous attack vectors and proposing various countermeasures. Despite these efforts, leakage incidents remain prevalent, as the complexity of systems and protocols increases and sophisticated modeling methods become more accessible to adversaries. This work studies how information leakages manifest in and impact interconnected systems and their users. We first focus on online communications and investigate leakages in the Transport Layer Security (TLS) protocol. Using modern machine learning models, we show that an eavesdropping adversary can efficiently exploit meta-information (e.g., packet size) not protected by TLS encryption to launch fingerprinting attacks at an unprecedented scale, even under non-optimal conditions. We then turn our attention to ultrasonic communications and discuss their security shortcomings and how adversaries could exploit them to compromise users of anonymity networks (even though such networks aim to offer a greater level of privacy than TLS). Building on these findings, we delve into physical-layer leakages that concern a wide array of (networked) systems such as servers, embedded nodes, Tor relays, and hardware cryptocurrency wallets. We revisit location-based side-channel attacks and develop an exploitation neural network. Our model demonstrates the capabilities of a modern adversary but also provides an inexpensive tool that auditors can use to detect such leakages early in the development cycle. Subsequently, we investigate techniques that further minimize the impact of leakages found in production components. Our proposed system design distributes both the custody of secrets and the execution of cryptographic operations across several components, thus making the exploitation of leaks difficult.
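    By way of illustration, the sketch below shows the general shape of such a metadata-based fingerprinting attack: a classifier is trained on sequences of direction-signed packet sizes. This is a minimal sketch under assumed data, not the thesis' actual pipeline; the trace format, the featurize helper, and the feature choices are illustrative assumptions.

```python
# Minimal sketch, assuming pre-captured traces: each trace is a list of
# signed packet sizes (sign = direction) labeled with the visited service.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def featurize(trace, max_len=200):
    """Pad/truncate the packet-size sequence and add simple aggregates."""
    seq = np.zeros(max_len)
    seq[:min(len(trace), max_len)] = trace[:max_len]
    sizes = np.asarray(trace, dtype=float)
    aggregates = [sizes.sum(), sizes.mean(), sizes.std(),
                  (sizes > 0).sum(), (sizes < 0).sum()]
    return np.concatenate([seq, aggregates])

def fit_fingerprinter(traces, labels):
    # traces: list of packet-size sequences; labels: site identifiers
    X = np.stack([featurize(t) for t in traces])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    print("closed-world accuracy:", clf.score(X_te, y_te))
    return clf
```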

    Runtime Monitoring for Dependable Hardware Design

    As technology scaling advances and the production of integrated circuits becomes globalized, a wealth of vulnerabilities emerges regarding the dependability of computer hardware. Due to manufacturing variations, every microchip is born with a unique character, which evolves in an individual way through its operating conditions, workload, and environment. Deterministic models that predict dependability at design time are therefore no longer sufficient to meaningfully describe integrated circuits built in nanometer technologies. The need for runtime analysis of a chip's state grows, and with it the measures required to preserve reliability. Transistors are susceptible to workload-dependent aging, which increases circuit delay and with it the possibility of miscomputation. In addition, specific execution patterns can accelerate chip aging and thereby reduce its reliable lifetime. Furthermore, radiation-induced runtime faults (soft errors) can cause abnormal behavior in critical systems. Both the propagation and the masking of these faults in turn depend on the workload of the system. Malicious circuits, so-called hardware Trojans, can also be deliberately added to fabricated chips during production, compromising the security of the chip. Since this kind of manipulation is hardly observable before activation, detecting Trojans on a chip directly after production is extremely difficult. The complexity of these dependability problems makes simple reliability modeling and countermeasures inefficient. It arises from multiple sources, including design parameters (technology, device, circuit, and architecture), manufacturing parameters, runtime workload, and the operating environment. This motivates the exploration of machine learning and runtime methods that can potentially cope with this complexity. In this work, we present solutions capable of guaranteeing the dependable execution of computer hardware under varying runtime behavior and operating conditions. We developed machine learning techniques to model, monitor, and compensate for different reliability effects. Various learning methods are used to find suitable monitoring points for observing the workload. Together with reliability metrics built on resilience and general security attributes, these are used to construct prediction models. Furthermore, we present a cost-optimized hardware monitor circuit that evaluates the monitoring points at runtime. In contrast to the state of the art, which exploits microarchitectural monitoring points, we evaluate the potential of workload characteristics at the logic level of the underlying hardware. We identify improved logic-level features to enable fine-grained runtime monitoring. This logic-level analysis in turn offers several knobs for optimizing toward higher accuracy or lower overhead.
We explored the philosophy of identifying logic-level monitoring points with the help of learning methods and implementing inexpensive monitors, in order to provide adaptive prevention against static aging, dynamic aging, and radiation-induced soft errors, and additionally to detect the activation of hardware Trojans. To this end, we designed a prediction model that tracks the workload's influence on aging-induced chip degradation and can be used at runtime to dynamically apply preventive techniques such as task mitigation and voltage and frequency scaling. This prediction model was implemented in software that ranks different workloads by their aging impact. To ensure resilience against accelerated aging, we present monitoring hardware that supervises a subset of the critical flip-flops, watches for accelerated aging, and warns when a timing-critical path is under heavy aging stress. We provide the implementation of a technique for reducing the stress on timing-critical paths caused by executing specific subroutines. In addition, we propose a technique for online estimation of the soft-error vulnerability of memory arrays and logic cores, based on monitoring a small group of the design's flip-flops. Furthermore, we developed an anomaly-detection-based method to recognize the workload signatures of hardware Trojans during their activation at runtime, thus forming a last line of defense. Based on these experiments, this work demonstrates the potential of advanced logic-level feature extraction and learning-based prediction from runtime data to improve the dependability of hardware designs.
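To illustrate the learning-based monitoring idea in concrete terms, the sketch below (a minimal illustration on synthetic data, not the thesis' implementation) fits a regressor that maps logic-level activity features of a workload window to a predicted delay degradation, which a runtime controller could compare against an assumed timing guard band before applying mitigations such as frequency scaling.

```python
# Minimal sketch, assuming synthetic data: map logic-level activity
# features of a workload window to a predicted delay degradation (ps),
# then flag windows that would erode an assumed timing guard band.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_windows, n_monitors = 1000, 16
activity = rng.random((n_windows, n_monitors))   # toggle rates at monitor points
degradation = 5.0 * activity[:, :4].mean(axis=1) + rng.normal(0, 0.1, n_windows)

model = GradientBoostingRegressor().fit(activity[:800], degradation[:800])

GUARD_BAND_PS = 4.0                              # assumed timing margin
pred = model.predict(activity[800:])
risky = pred > GUARD_BAND_PS
print(f"{risky.sum()} of {len(pred)} windows would trigger mitigation "
      "(e.g., frequency scaling)")
```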

    A Survey on Industrial Control System Testbeds and Datasets for Security Research

    The increasing digitization and interconnection of legacy Industrial Control Systems (ICSs) open new vulnerability surfaces, exposing such systems to malicious attackers. Furthermore, since ICSs are often employed in critical infrastructures (e.g., nuclear plants) and manufacturing companies (e.g., chemical industries), attacks can lead to devastating physical damage. To address this security requirement, the research community focuses on developing new security mechanisms such as Intrusion Detection Systems (IDSs), facilitated by modern machine learning techniques. However, these algorithms require a testing platform and a considerable amount of data to be trained and tested accurately. To satisfy this prerequisite, academia, industry, and government are increasingly proposing testbeds (i.e., scaled-down versions of ICSs or simulations) to test the performance of IDSs. Furthermore, to enable researchers to cross-validate security systems (e.g., security-by-design concepts or anomaly detectors), several datasets have been collected from testbeds and shared with the community. In this paper, we provide a deep and comprehensive overview of ICSs, presenting the architecture design, the employed devices, and the implemented security protocols. We then collect, compare, and describe the testbeds and datasets in the literature, highlighting key challenges and design guidelines to keep in mind during the design phases. Furthermore, we enrich our work by reporting the best-performing IDS algorithms tested on every dataset, to create a baseline for the state of the art in this field. Finally, driven by the knowledge accumulated during this survey's development, we report advice and good practices on the development, choice, and utilization of testbeds, datasets, and IDSs.
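    As a concrete example of the kind of IDS baseline such datasets enable, the following minimal sketch trains an anomaly detector on a normal-operation split of an ICS dataset and flags suspect samples in a mixed split. The file names, column layout, and contamination setting are assumptions, not tied to any specific dataset in the survey.

```python
# Hypothetical baseline: fit an anomaly detector on normal-operation
# sensor readings, then raise alarms on a mixed normal/attack split.
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

normal = pd.read_csv("ics_normal.csv")     # assumed training split
test = pd.read_csv("ics_mixed.csv")        # assumed normal + attack split
sensors = [c for c in normal.columns if c != "label"]

scaler = StandardScaler().fit(normal[sensors])
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(scaler.transform(normal[sensors]))

# -1 marks samples the detector considers anomalous (candidate attacks)
test["alarm"] = detector.predict(scaler.transform(test[sensors])) == -1
print(test["alarm"].mean(), "fraction of samples flagged")
```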

    Towards trustworthy computing on untrustworthy hardware

    Historically, hardware was thought to be inherently secure and trusted due to its obscurity and the isolated nature of its design and manufacturing. In the last two decades, however, hardware trust and security have emerged as pressing issues. Modern-day hardware is surrounded by threats manifested mainly in undesired modifications by untrusted parties in its supply chain, unauthorized and pirated selling, injected faults, and system- and microarchitectural-level attacks. These threats, if realized, are expected to push hardware into abnormal and unexpected behaviour, causing real-life damage and significantly undermining our trust in the electronic and computing systems we use in our daily lives and in safety-critical applications. A large number of detective and preventive countermeasures have been proposed in the literature. It is a fact, however, that our knowledge of the potential consequences of real-life threats to hardware trust is lacking, given the limited number of real-life reports and the plethora of ways in which hardware trust could be undermined. With this in mind, run-time monitoring of hardware combined with active mitigation of attacks, referred to as trustworthy computing on untrustworthy hardware, is proposed as the last line of defence. This last line of defence allows us to face the issue of live hardware mistrust rather than turning a blind eye to it or being helpless once it occurs. This thesis proposes three different frameworks towards trustworthy computing on untrustworthy hardware. The presented frameworks are adaptable to different applications, independent of the design of the monitored elements, based on autonomous security elements, and computationally lightweight. The first framework is concerned with explicit violations and breaches of trust at run-time, with an untrustworthy on-chip communication interconnect presented as a potential offender. The framework is based on the guiding principles of component guarding, data tagging, and event verification. The second framework targets hardware elements with inherently variable and unpredictable operational latency and proposes a machine-learning-based characterization of these latencies to infer undesired latency extensions or denial-of-service attacks. The framework is implemented on a DDR3 DRAM after showing its vulnerability to obscured latency-extension attacks. The third framework studies the possibility of the deployment of untrustworthy hardware elements in the analog front end, and the consequent integrity issues that might arise at the analog-digital boundary of systems on chip. The framework uses machine learning methods and the unique temporal and arithmetic features of signals at this boundary to monitor their integrity and assess their trust level.
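    A rough illustration of the second framework's idea, under stated assumptions (stand-in latency data, illustrative window statistics) rather than the thesis' implementation: characterize trusted-baseline memory-access latencies with a one-class model, then flag windows whose latency profile suggests an obscured latency-extension attack.

```python
# Sketch: fit a one-class model on per-request DRAM latency windows
# measured on a trusted baseline, then audit later measurements.
import numpy as np
from sklearn.svm import OneClassSVM

def window_features(latencies, win=256):
    w = latencies[:len(latencies) // win * win].reshape(-1, win)
    return np.column_stack([w.mean(1), w.std(1),
                            np.percentile(w, 99, axis=1)])

baseline = np.random.default_rng(1).gamma(2.0, 10.0, 100_000)  # stand-in data
model = OneClassSVM(nu=0.01).fit(window_features(baseline))

def audit(latencies):
    # True = window deviates from the trusted latency characterization
    return model.predict(window_features(latencies)) == -1
```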

    Detection of Anomalous Behavior of Wireless Devices using Power Signal and Changepoint Detection Theory

    Anomaly detection has been applied in different fields of science and engineering over many years to recognize inconsistent behavior that can affect the regular operation of devices, machines, and even organisms. The main goal of the research described in this thesis is to extract the meaningful features of an object's characteristics that allow researchers to recognize such malicious behavior. Specifically, this work focuses on identifying malicious behavior in Android smartphones caused by code running on them. In general, extraneous activities can affect various parameters of such devices, such as network traffic, CPU usage, and hardware and software resources. Therefore, it is possible to use these parameters to unveil malicious activities. Using only one parameter cannot guarantee an accurate model, since a single parameter may be manipulated by cybercriminals so that a malicious application acts like a benign one. In contrast, using many parameters can produce excessive usage of the smartphone's resources and/or affect the detection time of a proposed methodology. Considering that malicious activities are injected through the software applications that manage the usage of all hardware components, a smartphone's overall power consumption is a better choice for detecting malicious behavior. This metric is considered critical for anomaly analysis because it summarizes the impact of all hardware components' power consumption. Using this single metric yields an efficient and accurate methodology for detecting malware on Android smartphones. This thesis analyzes the accuracy of two methodologies that are evaluated with emulated and real malware. It is necessary to highlight that the detection of real malware can be a challenging task, because malicious activities may be triggered only if a user executes the correct combination of actions in the application. For this reason, in the present work, this drawback is solved by automating the user inputs with Android Debug Bridge (ADB) commands and Droidbot. With this automation, it is highly likely that malicious behavior will act, leaving a fingerprint in the power consumption. It should be noted that power consumption consists of time-series data that can be considered non-stationary signals, due to changes in statistical parameters such as mean and variance over time. Therefore, the present work approaches the problem by analyzing each signal as a stochastic process, using changepoint detection theory to extract features from the time series. Finally, these features become the input of different machine learning classifiers used to differentiate non-malicious from malicious applications. Furthermore, the efficiency of each methodology is assessed in terms of detection time.
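    A hedged sketch of this pipeline is shown below: a power trace is segmented with changepoint detection (here PELT from the ruptures package, one of several possible choices), each segment is summarized by simple statistics, and the resulting feature vectors feed a classifier. The penalty value, segment statistics, and padding scheme are illustrative assumptions.

```python
# Sketch: changepoint-based features from a 1-D power trace, then a
# classifier separating benign (0) from malicious (1) applications.
import numpy as np
import ruptures as rpt
from sklearn.ensemble import RandomForestClassifier

def changepoint_features(power_trace, pen=10, k=8):
    # PELT returns breakpoint indices; the last one is len(power_trace)
    breakpoints = rpt.Pelt(model="rbf").fit(power_trace).predict(pen=pen)
    segments = np.split(power_trace, breakpoints[:-1])
    stats = [(s.mean(), s.std(), len(s)) for s in segments[:k]]
    stats += [(0.0, 0.0, 0)] * (k - len(stats))   # pad to fixed length
    return np.array(stats).ravel()

def train(traces, labels):
    # traces: list of 1-D numpy power-consumption arrays
    X = np.stack([changepoint_features(t) for t in traces])
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
```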

    Contextual Bandit Modeling for Dynamic Runtime Control in Computer Systems

    Modern operating systems and microarchitectures provide a myriad of mechanisms for monitoring and affecting system operation and resource utilization at runtime. Dynamic runtime control of these mechanisms can tailor system operation to the characteristics and behavior of the current workload, resulting in improved performance. However, developing effective models for system control can be challenging. Existing methods often require extensive manual effort, computation time, and domain knowledge to identify relevant low-level performance metrics, relate low-level performance metrics and high-level control decisions to workload performance, and evaluate the resulting control models. This dissertation develops a general framework, based on the contextual bandit, for describing and learning effective models for runtime system control. Random profiling is used to characterize the relationship between workload behavior, system configuration, and performance. The framework is evaluated in the context of two applications of increasing complexity: first, the selection of paging modes (Shadow Paging, Hardware-Assisted Paging) in the Xen virtual machine memory manager; second, the utilization of hardware memory prefetching for multi-core, multi-tenant workloads with cross-core contention for shared memory resources, such as the last-level cache and memory bandwidth. The resulting models for both applications are competitive with existing runtime control approaches. For paging mode selection, the resulting model provides performance equivalent to the state of the art while substantially reducing the computation requirements of profiling. For hardware memory prefetcher utilization, the resulting models are the first to provide dynamic control of hardware prefetchers using workload statistics. Finally, a correlation-based feature selection method is evaluated for identifying relevant low-level performance metrics related to hardware memory prefetching.
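    The following minimal sketch conveys the contextual-bandit framing. It is an epsilon-greedy illustration under assumed inputs, not the dissertation's implementation: one reward model per action (e.g., per paging mode) is learned online over a context of low-level performance counters.

```python
# Epsilon-greedy contextual bandit: one online reward model per action.
import numpy as np
from sklearn.linear_model import SGDRegressor

class EpsilonGreedyBandit:
    def __init__(self, n_actions, epsilon=0.1):
        self.epsilon = epsilon
        self.models = [SGDRegressor() for _ in range(n_actions)]
        self.seen = [False] * n_actions   # which models have been fit

    def choose(self, context):
        if np.random.rand() < self.epsilon or not all(self.seen):
            return np.random.randint(len(self.models))      # explore
        preds = [m.predict(context[None, :])[0] for m in self.models]
        return int(np.argmax(preds))                        # exploit

    def update(self, action, context, reward):
        self.models[action].partial_fit(context[None, :], [reward])
        self.seen[action] = True

# usage: context = performance-counter vector, reward = measured performance
bandit = EpsilonGreedyBandit(n_actions=2)   # e.g., shadow vs. hardware-assisted paging
```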

    Detection of Anomalous Behavior of IoT/CPS Devices Using Their Power Signals

    Embedded computing devices, in the Internet of Things (IoT) or Cyber-Physical Systems (CPS), are becoming pervasive in many domains around the world. Their wide deployment in simple applications (e.g., smart buildings, fleet management, and smart agriculture) or in more critical operations (e.g., industrial control, smart power grids, and self-driving cars) creates significant market potential ($4-11 trillion in annual revenue expected by 2025). A main requirement for the success of such systems and applications is the capacity to ensure the performance of these devices. This task includes equipping them to be resilient against security threats and failures. Globally, several critical infrastructure applications have been the target of cyber attacks. These recent incidents, as well as the rich applicable literature, confirm that more research is needed to overcome such challenges. Consequently, the need for robust approaches that detect anomalously behaving devices in security- and safety-critical applications has become paramount. Solving such a problem minimizes different kinds of losses (e.g., confidential data theft, financial loss, service access restriction, or even casualties). In light of the aforementioned motivation and discussion, this thesis focuses on the problem of detecting the anomalous behavior of IoT/CPS devices by considering their side-channel information. Solving such a problem is extremely important in maintaining the security and dependability of critical systems and applications. Although several side-channel-based approaches are found in the literature, there are still important research gaps that need to be addressed. First, the intrusive nature of the monitoring in some of the proposed techniques results in resource overhead and requires instrumentation of the internal components of a device, which makes them impractical. It also raises data-integrity concerns. Second, the lack of realistic experimental power consumption datasets that reflect the normal and anomalous behaviors of IoT and CPS devices has prevented fair and coherent comparisons with the state of the art in this domain. Finally, most of the research to date has concentrated on the accuracy of detection and not on the novelty of detecting new anomalies. Such a direction relies on: (i) the availability of labeled datasets; (ii) the complexity of the extracted features; and (iii) the available compute resources. These assumptions and requirements are usually unrealistic and unrepresentative. This research aims to bridge these gaps as follows. First, this study extends the state of the art that adopts the idea of leveraging the power consumption of devices as a signal, and the concept of decoupling the monitoring system from the devices to be monitored, to detect and classify the "operational health" of the devices. Second, this thesis provides and builds power-consumption-based datasets that can be utilized by the AI as well as security research communities to validate newly developed detection techniques. The collected datasets cover a wide range of anomalous device behavior due to the main aspects of device security (i.e., confidentiality, integrity, and availability) and partial system failures.
The extensive experiments include: a wide spectrum of emulated malware scenarios; five real malware applications taken from the well-known Drebin dataset; distributed denial-of-service (DDoS) attacks, where an IoT device is treated as (1) the victim of a DDoS attack and (2) the source of a DDoS attack; cryptomining malware, where the resources of an IoT device are hijacked for the attacker's benefit; and faulty CPU cores. This level of extensive validation has not yet been reported in any study in the literature. Third, this research presents a novel supervised technique to detect anomalous device behavior based on transforming the problem into an image classification problem. The main aim of this methodology is to improve the detection performance. In order to achieve the goals of this study, the methodology combines two powerful computer vision tools, namely Histograms of Oriented Gradients (HOG) and a Convolutional Neural Network (CNN). Such a detection technique is not only useful in the present case but can contribute to most time-series classification (TSC) problems. Finally, this thesis proposes a novel unsupervised detection technique that requires only the normal behavior of a device in the training phase. Therefore, this methodology aims at detecting new/unseen anomalous behavior. The methodology leverages the power consumption of a device and Restricted Boltzmann Machine (RBM) autoencoders (AEs) to build a model that makes the monitored systems more robust to the presence of security threats. The methodology makes use of stacked RBM AEs and Principal Component Analysis (PCA) to extract a feature vector based on the AE's reconstruction errors. A One-Class Support Vector Machine (OC-SVM) classifier is then trained to perform the detection task. Across 18 different datasets, both of our proposed detection techniques demonstrated high detection performance, with at least ~88% accuracy and 85% F-score on average. The empirical results indicate the effectiveness of the proposed techniques and demonstrate a detection performance gain of 9-17% over results reported for other methods.
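To make the image-classification idea concrete, the sketch below reshapes a fixed-length power window into a 2-D image and extracts Histogram-of-Oriented-Gradients features. The thesis pairs HOG with a CNN; a linear SVM stands in here to keep the sketch short, and the window shape and data are assumptions.

```python
# Sketch: power trace -> 2-D image -> HOG features -> classifier.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def trace_to_hog(trace, side=64):
    img = np.resize(np.asarray(trace, dtype=float), (side, side))
    img = (img - img.min()) / (np.ptp(img) + 1e-9)   # normalize to [0, 1]
    return hog(img, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train(traces, labels):
    # traces: list of power windows; labels: 0 = normal, 1 = anomalous
    X = np.stack([trace_to_hog(t) for t in traces])
    return LinearSVC().fit(X, labels)
```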

    Trusted Artificial Intelligence in Manufacturing

    The successful deployment of AI solutions in manufacturing environments hinges on their security, safety, and reliability, which becomes more challenging in settings where multiple AI systems (e.g., industrial robots, robotic cells, Deep Neural Networks (DNNs)) interact as atomic systems and with humans. To guarantee the safe and reliable operation of AI systems on the shop floor, many challenges must be addressed in the scope of complex, heterogeneous, dynamic, and unpredictable environments. Specifically, data reliability, human-machine interaction, security, transparency, and explainability challenges need to be addressed at the same time. Recent advances in AI research (e.g., in deep neural network security and explainable AI (XAI) systems), coupled with novel research outcomes in the formal specification and verification of AI systems, provide a sound basis for safe and reliable AI deployments in production lines. Moreover, the legal and regulatory dimension of safe and reliable AI solutions in production lines must be considered as well. To address some of the challenges listed above, fifteen European organizations collaborate in the scope of the STAR project, a research initiative funded by the European Commission under its H2020 program (Grant Agreement Number: 956573). STAR researches, develops, and validates novel technologies that enable AI systems to acquire knowledge in order to take timely and safe decisions in dynamic and unpredictable environments. Moreover, the project researches and delivers approaches that enable AI systems to confront sophisticated adversaries and to remain robust against security attacks. This book is co-authored by the STAR consortium members and provides a review of technologies, techniques, and systems for trusted, ethical, and secure AI in manufacturing. The different chapters of the book cover systems and technologies for industrial data reliability, responsible and transparent artificial intelligence systems, human-centered manufacturing systems such as human-centred digital twins, cyber-defence in AI systems, simulated reality systems, human-robot collaboration systems, as well as automated mobile robots for manufacturing environments. A variety of cutting-edge AI technologies are employed by these systems, including deep neural networks, reinforcement learning systems, and explainable artificial intelligence systems. Furthermore, relevant standards and applicable regulations are discussed. Beyond reviewing state-of-the-art standards and technologies, the book illustrates how the STAR research goes beyond the state of the art, towards enabling and showcasing human-centred technologies in production lines. Emphasis is put on dynamic human-in-the-loop scenarios, where ethical, transparent, and trusted AI systems co-exist with human workers. The book is made available as an open-access publication, making it broadly and freely available to the AI and smart manufacturing communities.

    Challenges and Open Questions of Machine Learning in Computer Security

    This habilitation thesis presents advancements in machine learning for computer security, arising from problems in network intrusion detection and steganography. The thesis puts an emphasis on explaining the traits shared by steganalysis, network intrusion detection, and other security domains, which make these domains different from computer vision, speech recognition, and other fields where machine learning is typically studied. The thesis then presents methods developed to at least partially solve the identified problems, with the overall goal of making machine-learning-based intrusion detection systems viable. Most of them are general in the sense that they can be used outside intrusion detection and steganalysis on problems with similar constraints. A common feature of all the methods is that they are generally simple, yet surprisingly effective. According to large-scale experiments, they almost always improve on the prior art, which is likely because they are tailored to security problems and designed for large volumes of data. Specifically, the thesis addresses the following problems: anomaly detection with low computational and memory complexity, so that efficient processing of large data is possible; multiple-instance anomaly detection, improving the signal-to-noise ratio by classifying larger groups of samples; supervised classification of tree-structured data, simplifying its encoding in neural networks; clustering of structured data; supervised training with an emphasis on precision in the top p% of returned data; and finally, explanation of anomalies to help humans understand their nature and speed up decision-making. Many of the algorithms and methods presented in this thesis are deployed in a real intrusion detection system protecting millions of computers around the globe.
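    The multiple-instance idea can be illustrated briefly: rather than scoring single samples, score a whole bag (e.g., all flows of one host) by aggregating per-sample anomaly scores, which suppresses isolated noise. The detector and aggregation below are illustrative stand-ins, not the thesis' methods.

```python
# Sketch: bag-level anomaly scoring on top of a per-sample detector.
import numpy as np
from sklearn.ensemble import IsolationForest

def bag_scores(bags, detector):
    """bags: list of (n_i, d) arrays; returns one anomaly score per bag."""
    # Higher = more anomalous; averaging over a bag suppresses noise from
    # isolated outlier samples while preserving consistent deviations.
    return np.array([-detector.score_samples(b).mean() for b in bags])

rng = np.random.default_rng(0)
train = rng.normal(size=(5000, 8))                 # benign reference data
det = IsolationForest(random_state=0).fit(train)

benign_bag = rng.normal(size=(50, 8))
shifted_bag = rng.normal(loc=0.8, size=(50, 8))    # consistently deviating bag
print(bag_scores([benign_bag, shifted_bag], det))  # second should score higher
```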

    TOWARD ASSURANCE AND TRUST FOR THE INTERNET OF THINGS

    Kevin Ashton first used the term Internet of Things (IoT) in 1999 to describe a system in which objects in the physical world could be connected to the Internet by sensors. Since the inception of the term, the total number of Internet-connected devices has skyrocketed, resulting in their integration into every sector of society. Along with the convenience and functionality IoT devices introduce comes serious concern regarding security, and the IoT security market has been slow to address fundamental security gaps. This dissertation explores some of these challenges in detail and proposes solutions that could make the IoT more secure. Because the challenges in IoT are broad, this work takes a broad view of securing the IoT. Each chapter in this dissertation explores particular aspects of the security and privacy of the IoT and introduces approaches to address them. We outline security threats related to the IoT. We outline trends in the IoT market and explore opportunities to apply machine learning to protect the IoT. We developed an IoT testbed to support IoT machine learning research. We propose a Connected Home Automated Security Monitor (CHASM) system that prevents devices from becoming invisible and uses machine learning to improve the security of the connected home and other connected domains. We extend the machine learning algorithms in CHASM to the network perimeter via a novel IoT edge sensor device. We assess the ways in which cybersecurity analytics will need to evolve and identify the potential role of government in promoting the changes needed due to IoT adoption. We applied supervised learning and deep learning classifiers to an IoT network connection-log dataset to effectively identify varied botnet activity. We propose a methodology, based on trust metrics and the Delphi and Analytic Hierarchy Processes, to identify vulnerabilities in a supply chain and better quantify risk. Finally, we built a voice assistant for cyber operations in response to the increased rigor and associated cognitive load needed to maintain and protect IoT networks.
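    As a brief illustration of the botnet-identification experiment, the sketch below trains a supervised classifier on features derived from IoT connection logs. The file name, feature columns, and label column are assumptions, not the dissertation's dataset.

```python
# Sketch: supervised botnet detection from IoT connection-log features.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

logs = pd.read_csv("iot_connections.csv")   # hypothetical connection-log export
features = ["duration", "orig_bytes", "resp_bytes", "orig_pkts", "resp_pkts"]
X, y = logs[features].fillna(0), logs["is_botnet"]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```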