On Information-centric Resiliency and System-level Security in Constrained, Wireless Communication
The Internet of Things (IoT) interconnects many heterogeneous embedded devices, either locally with each other or globally with the Internet. These things are resource-constrained, e.g., powered by batteries, and typically communicate via low-power and lossy wireless links. Communication needs to be secured and relies on cryptographic operations that are often resource-intensive and in conflict with the device constraints. These challenging operating conditions on the cheapest possible hardware, the unreliable wireless transmission, and the need for protection against common Internet threats pose severe challenges for IoT networks. In this thesis, we advance the current state of the art in two dimensions.
Part I assesses Information-Centric Networking (ICN) for the IoT, a network paradigm that promises enhanced reliability for data retrieval in constrained edge networks. ICN lacks a lower-layer definition, which is, however, the key to enabling device sleep cycles and exclusive wireless media access. This part of the thesis designs and evaluates an effective media access strategy for ICN that reduces energy consumption and wireless interference on constrained IoT nodes.
Part II examines the performance of hardware and software crypto-operations executed on off-the-shelf IoT platforms. A novel system design makes crypto-hardware accessible and auto-configurable through an operating system. One main focus is the generation of random numbers in the IoT. This part of the thesis further designs and evaluates Physical Unclonable Functions (PUFs) as novel randomness sources that generate highly unpredictable secrets on low-cost devices that lack hardware-based security features.
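The PUF idea in Part II can be illustrated with a toy model: an SRAM cell's power-up value is device-unique but slightly noisy, so repeated readouts combined by majority vote yield a stable, device-specific fingerprint. A minimal sketch (the bit width, noise rate, and readout count are illustrative assumptions, not values from the thesis):

```python
import random

def read_sram_puf(bias, noise_p, rng):
    """Simulate one power-up readout of an SRAM PUF: each cell has a
    fixed preferred value (bias) that flips with probability noise_p."""
    return [b if rng.random() > noise_p else 1 - b for b in bias]

def enroll(n_bits=64, n_readouts=15, noise_p=0.05, seed=1):
    """Derive a stable fingerprint by majority vote over repeated readouts.
    All parameters here are illustrative, not taken from the thesis."""
    rng = random.Random(seed)
    bias = [rng.randint(0, 1) for _ in range(n_bits)]  # device-unique startup pattern
    votes = [0] * n_bits
    for _ in range(n_readouts):
        for i, b in enumerate(read_sram_puf(bias, noise_p, rng)):
            votes[i] += b
    fingerprint = [1 if v > n_readouts // 2 else 0 for v in votes]
    return bias, fingerprint

bias, fp = enroll()
# With low noise, the majority vote almost always recovers the
# device-unique pattern bit for bit.
```

Real designs additionally use fuzzy extractors and helper data to correct residual bit errors; this sketch only shows why repetition tames readout noise.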
This thesis takes a practical view of the constrained IoT and is accompanied by real-world implementations and measurements. We contribute open source software, automation tools, a simulator, and reproducible measurement results from real IoT deployments using off-the-shelf hardware. The large-scale experiments in an open access testbed provide a direct starting point for future research.
Methodology for Testing RFID Applications
Radio Frequency Identification (RFID) is a promising technology for process automation and, beyond that, capable of identifying objects without the need for a line of sight. However, the trend towards automatic identification of objects also increases the demand for high-quality RFID applications. Therefore, research on testing RFID systems and methodical approaches for testing are needed. This thesis presents a novel methodology for the system-level test of RFID applications. The approach, called ITERA, allows for the automatic generation of tests, defines a semantic model of the RFID system, and provides a test environment for RFID applications. The method can be used to gradually transform use cases into a semi-formal test specification. Test cases are then systematically generated in order to execute them in the test environment. The approach applies the principle of model-based testing from a black-box perspective, combined with a virtual environment for automatic test execution. The presence of RFID tags in an area monitored by an RFID reader can be modelled by time-based sets using set theory and discrete events. Furthermore, the proposed description and semantics can be used to specify RFID systems and their applications, and might also serve purposes other than testing. The approach uses the Unified Modelling Language to model the characteristics of the system under test. Based on the ITERA meta-model, test execution paths are extracted directly from activity diagrams and RFID-specific test cases are generated. The approach introduced in this thesis reduces the effort of RFID application testing through systematic test case generation and automatic test execution. In combination with the meta-model, and by considering additional parameters such as unreliability factors, it not only satisfies functional testing aspects but also increases confidence in the robustness of the tested application.
Combined with the instantly available virtual readers, it has the potential to speed up the development process and decrease costs, even during the early development phases. ITERA can be used for highly automated, reproducible tests and, because of the instantly available virtual readers, even before the real environment is deployed. Furthermore, total control over the RFID environment makes it possible to test applications that would be difficult to test manually. This thesis explains the motivation and objectives of this new RFID application test methodology. Based on an analysis of RFID systems, it proposes a practical solution to the identified issues. Further, it gives a literature review of testing fundamentals, model-based test case generation, the typical components of an RFID system, and the RFID standards used in industry.
Integrative Test Methodology for RFID Applications (ITERA) - Project: Eurostars!5516 ITERA, FKZ 01QE1105
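The time-based set model mentioned above can be sketched directly: each tag's presence in a reader's monitored area is a set of time intervals, and the set of visible tags at any instant follows from simple membership tests. A minimal illustration (the tag IDs and intervals are hypothetical, not from ITERA itself):

```python
def tags_present(observations, t):
    """Return the set of tag IDs present at time t, given per-tag presence
    intervals of the form {tag_id: [(t_enter, t_leave), ...]}."""
    return {tag for tag, intervals in observations.items()
            if any(a <= t < b for a, b in intervals)}

# Hypothetical reader log: tag1 enters the monitored area twice.
obs = {
    "tag1": [(0, 5), (9, 12)],
    "tag2": [(3, 8)],
}
assert tags_present(obs, 4) == {"tag1", "tag2"}   # both in range
assert tags_present(obs, 6) == {"tag2"}           # tag1 has left
assert tags_present(obs, 10) == {"tag1"}          # tag1 re-entered
```

Discrete enter/leave events fall out of the same representation as the interval endpoints, which is what makes such a model convenient for driving virtual readers in a test environment.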
Statistical Population Genomics
This open access volume presents state-of-the-art inference methods in population genomics, focusing on data analysis based on rigorous statistical techniques. After introducing general concepts related to the biology of genomes and their evolution, the book covers state-of-the-art methods for the analysis of genomes in populations, including demography inference, population structure analysis, and detection of selection, using both model-based inference and simulation procedures. Last but not least, it offers an overview of the current knowledge acquired by applying such methods to a large variety of eukaryotic organisms. Written in the highly successful Methods in Molecular Biology series format, chapters include introductions to their respective topics, pointers to the relevant literature, step-by-step, readily reproducible laboratory protocols, and tips on troubleshooting and avoiding known pitfalls. Authoritative and cutting-edge, Statistical Population Genomics aims to promote and ensure successful applications of population genomic methods to an increasing number of model systems and biological questions.
Leveraging the heterogeneity of the internet of things devices to improve the security of smart environments
The growing number of devices that are being incorporated into the Internet of Things (IoT) environments leads to a wider presence of a variety of sensors, making these environments heterogeneous. However, the lack of standard input interfaces in such ecosystems poses a challenge in securing them. Among other existing vulnerabilities, the most prevalent are the lack of adequate access control mechanisms and the exploitation of cross-channel interactions between smart devices.
In order to tackle the first challenge, I propose a novel behavioral biometric system based on naturally occurring interactions with objects in smart environments. This system is designed to reduce the reliance on existing app-based authentication mechanisms of current smart home platforms and it leverages existing heterogeneous IoT devices to both identify and authenticate users without requiring any hardware modifications of existing smart home devices.
To be able to collect the data and evaluate this system, I introduce an end-to-end framework for remote experiments. Such experiments play an important role across multiple fields of study, from medical science to engineering, as they allow for better representation of human participants and more realistic experimental environments, and ensure research continuity in exceptional circumstances, such as nationwide lockdowns. Yet cyber security has few standards for conducting experiments with human participants, let alone in a remote setting. This framework systematizes design and deployment practices while preserving realistic, reproducible data collection and the safety and privacy of participants.
Using this methodology, I conduct two experiments. The first one is a multi-user study taking place in six households composed of 25 participants. The second experiment involves 13 participants in a company environment and is used to study mimicry attacks on the biometric system proposed in this thesis. I demonstrate that this system can identify users in multi-user environments with an accuracy of at least 98% for a single object interaction without requiring any sensors on the object itself. I also show that it can provide seamless and unobtrusive authentication while remaining highly resistant to zero-effort, video, and in-person observation-based mimicry attacks. Even when at most 1% of the strongest type of mimicry attacks are successful, this system does not require the user to take out their phone to approve legitimate transactions in more than 80% of cases for a single interaction. This increases to 92% of transactions when interactions with more objects are considered.
To mitigate the second vulnerability, where an attacker exploits multiple heterogeneous devices in a chain such that each one triggers the next, I propose a novel approach that uses only dynamic analysis to examine such interactions in smart ecosystems. I use real-time device data to generate a knowledge graph that models the interactions between devices and enables the system to identify attack chains and vulnerable automations. I evaluate this approach in a smart home environment with 8 devices and 10 automations, with and without the presence of an active user. I demonstrate that such a system can accurately detect 10 cross-channel interactions that lead to 30 different cross-channel interaction chains in the unoccupied environment, and 6 such interactions that result in 13 interaction chains in the occupied environment.
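The chain-detection idea can be sketched as a reachability search over a directed graph whose edges record observed "device A triggers device B" interactions. A minimal sketch (the devices and automations are hypothetical, and this generic depth-first enumeration stands in for, rather than reproduces, the system's actual algorithm):

```python
from collections import defaultdict

def find_chains(triggers):
    """Enumerate all simple cross-channel interaction chains in a directed
    graph of observed 'src triggers dst' edges."""
    graph = defaultdict(list)
    for src, dst in triggers:
        graph[src].append(dst)

    chains = []
    def dfs(path):
        for nxt in graph[path[-1]]:
            if nxt not in path:          # simple chains only: no cycles
                chains.append(path + [nxt])
                dfs(path + [nxt])
    for start in list(graph):
        dfs([start])
    return chains

# Hypothetical automations: a voice assistant turns on a heater, the
# temperature rise opens a window, and the open window disarms the alarm.
edges = [("assistant", "heater"), ("heater", "window"), ("window", "alarm_off")]
for chain in find_chains(edges):
    print(" -> ".join(chain))
```

On this toy graph the search yields six chains, the longest being the full assistant-to-alarm path, which is exactly the kind of multi-hop interaction an attacker could abuse.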
Applying Collaborative Online Active Learning in Vehicular Networks for Future Connected and Autonomous Vehicles
The main objective of this thesis is to provide a framework for, and proof of concept of, collaborative online active learning in vehicular networks. Another objective is to advance the state of the art in simulation-based evaluation and validation of connected intelligent vehicle applications. With advancements in machine learning and artificial intelligence, connected autonomous vehicles (CAVs) have begun to migrate from laboratory development and testing conditions to driving on public roads. Their deployment offers potential decreases in road accidents and traffic congestion, as well as improved mobility in overcrowded cities. Although common driving scenarios can be solved relatively easily with classic perception, path planning, and motion control methods, the remaining unsolved scenarios are corner cases in which traditional methods fail. These unsolved cases are the key to deploying CAVs safely on the road, but they require an enormous amount of data collection and high-quality human annotation, which is very cost-ineffective considering ever-changing real-world scenarios and highly diverse road and weather conditions. Additionally, evaluating and testing applications for CAVs in real testbeds is extremely expensive, as obvious failures like crashes tend to be rare events and can hardly be captured through predefined test scenarios. Realistic simulation tools, with the benefits of lower cost and reproducible experiment results, are therefore needed to complement real testbeds in validating applications for CAVs. In this thesis, we address these challenges and establish the fundamentals of a collaborative online active learning framework in vehicular networks for future connected and autonomous vehicles.
Resource Allocation for the Internet of Everything: From Energy Harvesting Tags to Cellular Networks
In the near future, objects equipped with heterogeneous devices such as sensors, actuators, and tags will be able to interact with each other and cooperate to achieve common goals. These networks are termed the Internet of Things (IoT) and have applications in healthcare, smart buildings, assisted living, manufacturing, supply chain management, and intelligent transportation. The IoT vision is enabled by ubiquitous wireless communications, and there are numerous resource allocation challenges in efficiently connecting each device to the network. In this thesis, we study wireless resource allocation problems that arise in the IoT, namely in the areas of energy harvesting tags, termed the Internet of Tags (IoTags), and cellular networks (mobile and cognitive).
First, we present our experience designing and developing Energy Harvesting Active Networked Tags (EnHANTs). The prototypes harvest indoor light energy using custom organic solar cells, communicate and form multihop networks using ultra-low-power Ultra-Wideband Impulse Radio (UWB-IR) transceivers, and dynamically adapt their communication and networking patterns to the energy harvesting and battery states. Using our custom-designed small-scale testbed, we evaluate energy-adaptive networking algorithms spanning the protocol stack (link, network, and flow control). Throughout the experimental evaluation, we highlight numerous phenomena that are typically difficult to capture in simulations and nearly impossible to model in analytical work. We believe that these lessons will be useful to the designers of many different types of energy harvesters and energy harvesting adaptive networks.
Based on the lessons learned from EnHANTs, we present Power Aware Neighbor Discovery Asynchronously (Panda), a Neighbor Discovery (ND) protocol optimized for networks of energy harvesting nodes. To enable object tracking and monitoring applications for IoTags, Panda is designed to efficiently identify nodes that are within wireless communication range of one another. By accounting for numerous hardware constraints that are typically ignored (e.g., energy costs for transmission and reception, and transceiver state switching times and costs), we formulate a power budget that guarantees perpetual ND. Finally, via testbed evaluation using Commercial Off-The-Shelf (COTS) energy harvesting nodes, we demonstrate experimentally that Panda outperforms existing protocols by a factor of 2-3x.
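The notion of a power budget for perpetual neighbor discovery can be illustrated with back-of-the-envelope arithmetic: average consumption (listening plus beaconing plus state switching) must not exceed the harvested power. A toy calculation (all numbers are illustrative assumptions, not Panda's actual parameters or model):

```python
def max_beacon_rate(p_harvest_mw, p_listen_mw, listen_duty,
                    e_tx_mj, e_switch_mj):
    """Highest sustainable beacon rate (beacons/s) such that average power
    draw never exceeds harvested power, i.e. the node runs perpetually.
    Accounts for the often-ignored costs: listening duty cycle and
    transceiver state-switching energy. Illustrative model only."""
    p_left = p_harvest_mw - listen_duty * p_listen_mw  # power left after listening
    if p_left <= 0:
        return 0.0                                     # cannot even afford to listen
    return p_left / (e_tx_mj + e_switch_mj)            # mW / mJ = 1/s

# Hypothetical indoor-light harvester: 0.2 mW in, 2 mW radio listen power
# at a 5% duty cycle, 0.05 mJ per beacon plus 0.01 mJ per state switch.
rate = max_beacon_rate(p_harvest_mw=0.2, p_listen_mw=2.0, listen_duty=0.05,
                       e_tx_mj=0.05, e_switch_mj=0.01)
```

The point of the sketch is the structure of the constraint, not the numbers: ignoring the switching term would overestimate the sustainable rate, which is precisely the kind of hardware detail the thesis argues must be modeled.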
We then consider Proportional Fair (PF) cellular scheduling algorithms for mobile users. These users experience slow-fading wireless channels while traversing roads, train tracks, bus routes, etc. We leverage the predictable mobility on these routes and present the Predictive Finite-horizon PF Scheduling ((PF)2S) Framework. We collect extensive channel measurements from a 3G network and characterize mobility-induced channel state trends. We show that a user's channel state is highly reproducible and leverage this to develop a data rate prediction mechanism. Our trace-based simulations of the (PF)2S Framework indicate that it can increase throughput by 15%-55% compared to traditional PF schedulers, while improving fairness.
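The proportional-fair rule that such schedulers build on picks, in each slot, the user with the highest ratio of instantaneous rate to exponentially averaged throughput. A minimal generic sketch (the rate traces and EWMA constant are illustrative; this is the textbook PF rule, not the (PF)2S framework itself):

```python
def pf_schedule(rates, n_slots, ewma=0.1):
    """Classic proportional-fair scheduler: each slot, serve the user
    maximizing instantaneous_rate / average_throughput, then update the
    exponentially weighted average throughput of every user."""
    n_users = len(rates[0])
    avg = [1e-6] * n_users            # tiny positive init avoids div-by-zero
    served = []
    for t in range(n_slots):
        metric = [rates[t][u] / avg[u] for u in range(n_users)]
        pick = metric.index(max(metric))
        served.append(pick)
        for u in range(n_users):
            r = rates[t][u] if u == pick else 0.0
            avg[u] = (1 - ewma) * avg[u] + ewma * r
    return served

# Two users with constant rates 1.0 and 2.0 Mb/s: PF first serves the
# faster user, then alternates as the laggard's metric overtakes.
served = pf_schedule([[1.0, 2.0]] * 3, n_slots=3)
```

Predictive variants like the one in this thesis replace the myopic per-slot metric with one informed by forecast future rates along the user's route; the base rule above is only the starting point.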
Finally, we study fragmentation within a probability model of combinatorial structures. Our model does not refer to any particular application, yet it is applicable to dynamic spectrum access networks, which can be used as the wireless access technology for numerous IoT applications. In dynamic spectrum access networks, users share the wireless resource and compete to transmit and receive data, and accordingly have specific bandwidth and residence-time requirements. We prove that the spectrum tends towards states of complete fragmentation. That is, for every request size j > 1, nearly all size-j requests are allocated j mutually disjoint sub-channels. In a suite of four theorems, we show how this result specializes for certain classes of request-size distributions. We also show that the delays in reaching the inefficient states of complete fragmentation can be surprisingly long. The results of this chapter provide insights into the fragmentation process and, in turn, into those circumstances where defragmentation is worth the cost it incurs.
Analysis of IT Applications by Means of Temporal Variation (Analyse von IT-Anwendungen mittels Zeitvariation)
Performance problems are common in practice in IT applications, despite increasing hardware performance and the various approaches to developing performant software across the software life cycle. Model-based performance analyses make it possible to prevent performance problems early, based on design artifacts. For existing or partially implemented IT applications, performance problems are typically addressed by hardware scaling or code optimization. Both approaches have drawbacks: model-based approaches are not widely used because of the high level of expertise they require, and after-the-fact optimization is an unsystematic and uncoordinated process.
This thesis proposes a new approach to performance analysis for subsequent optimization. Performance interdependencies in the IT application are identified by means of an experiment. The basis of the experiment, the analysis instrument, is a targeted temporal variation of the start time, end time, or duration of processes in the IT application. This approach can be automated and applied in a structured way within the software development process, without a steep learning curve. Using the Turing machine, it is proven that the temporal variation applied by the analysis instrument preserves the correctness of sequential computations. This result is extended to concurrent systems using a parallel register machine and discussed. With this practice-oriented machine model, it is shown that the causal relationships uncovered by the analysis instrument identify optimization candidates. A special experimentation environment, in which the processes of a system consisting of software and hardware can be varied programmatically, is realized by means of a virtualization solution. Techniques for applying the analysis instrument through instrumentation are given.
A method for determining the minimum hardware requirements of IT applications is presented and exemplified in the experimentation environment using two scenarios and the Android operating system. Various techniques for deriving the system's optimization candidates from the experimental observations are presented, classified, and evaluated. The identification of optimization candidates and optimization potential is demonstrated in practice on illustrative scenarios and several large IT applications. As a consistent extension, a test method based on the analysis instrument is given for validating a system against non-deterministically reproducible errors that arise from insufficient synchronization mechanisms (e.g., races) or from timing behavior (e.g., Heisenbugs, aging-related faults).
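The core idea of temporal variation, deliberately stretching the timing of operations to widen otherwise tiny race windows, can be illustrated with a toy lost-update bug (a generic Python sketch, not code from the thesis):

```python
import threading
import time

def unsafe_increment(counter, delay):
    """An unsynchronized read-modify-write. The injected delay is the
    'temporal variation': it widens the window between read and write."""
    value = counter["n"]          # read
    time.sleep(delay)             # instrumentation point: vary timing here
    counter["n"] = value + 1      # write back a possibly stale value

def run(delay):
    counter = {"n": 0}
    threads = [threading.Thread(target=unsafe_increment, args=(counter, delay))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter["n"]

# With no injected delay the race window is tiny and the bug rarely
# manifests; stretching the window makes the lost update reproducible:
assert run(0.05) == 1   # both threads read 0, so one increment is lost
```

This mirrors the thesis's argument: varying start times and durations at instrumentation points turns non-deterministically reproducible errors (races, Heisenbugs) into deterministically observable ones, and the same variations expose timing interdependencies that mark optimization candidates.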