
    A State-Machine Model for Reliability Eliciting over Wireless Sensor and Actuator Networks

    Advances in communications and embedded systems have led to the proliferation of wireless sensor and actuator networks (WSANs) in a wide variety of application domains. One key requirement of many such WSAN applications is the need to meet non-functional requirements (e.g., lifetime, reliability, timing guarantees) as well as functional ones (e.g., monitoring, actuation). Some application domains even require that sensor nodes be deployed in harsh environments (e.g., refineries), where they can fail due to communication interference, power problems or other issues. Unfortunately, node failures can be catastrophic for critical or safety-related systems. State machines offer a promising approach to separating the two concerns, functional and non-functional, bringing forth the handling of reliability exception conditions by means of fault-handling states. We develop an approach that allows users to define and program typical applications in their platform language, while adding state-machine logic to design, view and explicitly handle other concerns such as reliability. The experimental section shows a working deployment of this concept in an industrial refinery setting.
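The fault-handling idea can be illustrated with a minimal sketch, in which functional states carry the application logic while a dedicated fault-handling state captures the reliability concern. All state and event names here are hypothetical, not taken from the paper:

```python
from enum import Enum, auto

class NodeState(Enum):
    SENSING = auto()          # functional concern: normal operation
    ACTUATING = auto()        # functional concern: driving an actuator
    FAULT_HANDLING = auto()   # non-functional concern: reliability
    RECOVERED = auto()

# Hypothetical transition table: (state, event) -> next state
TRANSITIONS = {
    (NodeState.SENSING, "actuate"): NodeState.ACTUATING,
    (NodeState.ACTUATING, "done"): NodeState.SENSING,
    (NodeState.SENSING, "comm_failure"): NodeState.FAULT_HANDLING,
    (NodeState.ACTUATING, "power_fault"): NodeState.FAULT_HANDLING,
    (NodeState.FAULT_HANDLING, "retry_ok"): NodeState.RECOVERED,
    (NodeState.RECOVERED, "resume"): NodeState.SENSING,
}

def step(state, event):
    """Advance the machine; unknown events keep the current state."""
    return TRANSITIONS.get((state, event), state)

s = NodeState.SENSING
for ev in ["comm_failure", "retry_ok", "resume"]:
    s = step(s, ev)
# after fault handling and recovery, s is back to SENSING
```

The point of the separation is that the application programmer writes only the SENSING/ACTUATING logic, while the fault-handling states are designed and viewed separately.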


    Ensuring the resilience of wireless sensor networks to malicious data injections through measurements inspection

    Malicious data injections pose a severe threat to systems based on Wireless Sensor Networks (WSNs), since they give the attacker control over the measurements and, in turn, over the system's status and response. Malicious measurements are particularly threatening when used to spoof or mask events of interest, thus eliciting or preventing desirable responses. Spoofing and masking attacks are particularly difficult to detect since they depict plausible behaviours, especially if multiple sensors have been compromised and collude to inject a coherent set of malicious measurements. Previous work has tackled the problem through measurements inspection, which analyses the inter-measurement correlations induced by the physical phenomena. However, these techniques consider simplistic attacks and are not robust to collusion. Moreover, they assume highly predictable patterns in the measurements distribution, which are invalidated by the unpredictability of events. We design a set of techniques that effectively detect malicious data injections in the presence of sophisticated collusion strategies, when one or more events manifest. Moreover, we build a methodology to characterise the likely compromised sensors, and we design diagnosis criteria that allow us to distinguish between anomalies arising from malicious interference and those arising from faults. In contrast with previous work, we test the robustness of our methodology with automated and sophisticated attacks in which the attacker aims to evade detection, and conclude that our approach outperforms the state of the art. Moreover, we quantitatively estimate the WSN's degree of resilience and provide a methodology to give a WSN owner an assured degree of resilience by automatically designing the WSN deployment.
    To deal with the extreme scenario in which the attacker has compromised most of the WSN, we propose a combination with software attestation techniques, which are more reliable when malicious data originates from compromised software, but also more expensive; the combination achieves an excellent trade-off between cost and resilience.
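As a generic illustration of measurements inspection (not the paper's actual detector), one simple way to exploit inter-measurement correlations is to compare each sensor against a robust statistic of the group, which a small colluding minority cannot shift:

```python
import statistics

def flag_suspects(readings, threshold=3.0):
    """Flag sensors whose reading deviates from the group median by
    more than `threshold` robust deviations (median absolute deviation).
    Median-based estimates remain valid while fewer than half of the
    sensors collude; this is a toy sketch, not the paper's method."""
    values = list(readings.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9
    return {sensor_id for sensor_id, v in readings.items()
            if abs(v - med) / mad > threshold}

readings = {"s1": 20.1, "s2": 19.8, "s3": 20.3, "s4": 55.0}  # s4 injected
flag_suspects(readings)  # -> {"s4"}
```

A real deployment would also model the spatial correlation induced by the physical phenomenon, since the abstract notes that simple distributional assumptions break down when genuine events occur.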

    Determining Resilience Gains from Anomaly Detection for Event Integrity in Wireless Sensor Networks

    Measurements collected in a wireless sensor network (WSN) can be maliciously compromised through several attacks, but anomaly detection algorithms may provide resilience by detecting inconsistencies in the data. Anomaly detection can identify severe threats to WSN applications, provided that there is a sufficient amount of genuine information. This article presents a novel method to calculate an assurance measure for the network by estimating the maximum number of malicious measurements that can be tolerated. In previous work, the resilience of anomaly detection to malicious measurements has been tested only against arbitrary attacks, which are not necessarily sophisticated. The novel method presented here is based on an optimization algorithm that maximizes the attack's chance of staying undetected while causing damage to the application, thus seeking the worst-case scenario for the anomaly detection algorithm. The algorithm is tested on a wildfire-monitoring WSN to estimate the benefits of anomaly detection for the system's resilience. The algorithm also returns the measurements that the attacker needs to synthesize, which are studied to highlight the weak spots of anomaly detection. Finally, this article presents a novel methodology that takes as input the required degree of resilience and automatically designs a deployment that satisfies this requirement.
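The worst-case search can be pictured as a simple hill climb: perturb the compromised measurements toward the attacker's goal, accepting only changes that keep the detector silent. This is a toy sketch under assumed interfaces, not the article's optimization algorithm:

```python
import random

def worst_case_attack(genuine, detector, target, steps=2000, rng=None):
    """Hill-climb toward the attacker's goal: perturb measurements to
    move their mean toward `target` while the anomaly `detector` still
    reports the data as clean. Hypothetical interface for illustration."""
    rng = rng or random.Random(0)
    attack = list(genuine)
    for _ in range(steps):
        i = rng.randrange(len(attack))
        candidate = list(attack)
        candidate[i] += rng.uniform(-1.0, 1.0)
        closer = abs(sum(candidate) / len(candidate) - target) < \
                 abs(sum(attack) / len(attack) - target)
        if closer and not detector(candidate):   # stay undetected
            attack = candidate
    return attack

# Toy detector: flags data whose mean drifts more than 2.0 from 20.0
detector = lambda xs: abs(sum(xs) / len(xs) - 20.0) > 2.0
crafted = worst_case_attack([20.0] * 5, detector, target=100.0)
# crafted stays undetected yet drifts toward the attacker's target
```

The crafted measurements then show how far the application can be damaged without triggering detection, which is the quantity the assurance measure bounds.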

    Safety, security and privacy in machine learning based Internet of Things

    Recent developments in communication and information technologies, especially in the Internet of Things (IoT), have greatly changed and improved the human lifestyle. Due to the easy access to, and increasing demand for, smart devices, IoT systems face new cyber-physical security and privacy attacks, such as denial of service, spoofing, phishing, obfuscation, jamming, eavesdropping, intrusions, and other unforeseen cyber threats. Traditional tools and techniques are not efficient enough to prevent and protect against these new cyber-physical security challenges; robust, dynamic, and up-to-date security measures are required to secure IoT systems. Machine learning (ML) is considered the most advanced and promising technique and has opened up many research directions for addressing new security challenges in cyber-physical systems (CPS). This research survey presents the architecture of IoT systems, investigates different attacks on IoT systems, and reviews the latest research directions for solving the safety and security of IoT systems with machine-learning techniques. Moreover, it discusses potential future research challenges in employing security methods in IoT systems.

    Joint safety and security modelling for risk assessment in cyber-physical systems

    Cyber-physical systems (CPS) are systems that embed programmable components in order to control a physical process or infrastructure. CPS are now widely used in industries such as energy, aeronautics, automotive, medical and chemical manufacturing. Among the variety of existing CPS are SCADA (Supervisory Control And Data Acquisition) systems, which offer the means to control and supervise critical infrastructures; their failure or malfunction can have adverse consequences for the system and its environment. SCADA systems used to be isolated and based on simple components and proprietary standards. Nowadays they increasingly integrate information and communication technologies (ICT) in order to facilitate supervision and control of the industrial process and to reduce operating costs. This trend makes SCADA systems more complex and exposes them to cyber-attacks that exploit vulnerabilities already present in the ICT components. Such attacks can reach critical components within the system and alter its functioning, causing safety harm. Throughout this dissertation we associate safety with accidental risks originating from the system, and security with malicious risks, with a focus on cyber-attacks. In this context of industrial systems supervised by modern SCADA systems, safety and security requirements and risks converge and can interact. A joint risk analysis covering both safety and security aspects is necessary to identify these interactions and optimize risk management. In this thesis, we first give a comprehensive survey of existing approaches that consider both safety and security issues for industrial systems, and highlight their shortcomings according to the following four criteria, which we believe essential for a good model-based approach: formal; automatic; qualitative and quantitative; and robust (i.e., able to easily integrate changes to the system into the model). Next, we propose a new model-based approach for joint safety and security risk analysis, S-cube (SCADA Safety and Security modeling), which satisfies all of the above criteria. The S-cube approach enables formal modelling of CPS and yields the associated qualitative and quantitative risk analysis. Thanks to graphical modeling, S-cube allows the user to input the system architecture and to easily consider different hypotheses about it. It can then automatically generate the safety and security risk scenarios that may occur on this architecture and lead to a given undesirable event, together with an estimate of their probabilities. The S-cube approach is based on a knowledge base that describes the typical components of industrial architectures, encompassing the information, process control and instrumentation levels. This knowledge base has been built upon a taxonomy of attacks and failure modes and a hierarchical top-down reasoning mechanism, and has been implemented using the Figaro modeling language and its associated tools. To build the model of a system, the user only has to describe graphically the physical and functional (in terms of software and data flows) architectures of the system. The association of the knowledge base and the system architecture produces a dynamic state-based model: a continuous-time Markov chain (CTMC). Because of the combinatorial explosion of states, this CTMC cannot be built exhaustively, but it can be explored in two ways: by searching for sequences leading to an undesirable event, or by Monte Carlo simulation. This yields both qualitative and quantitative results. We finally illustrate the S-cube approach on a realistic case study, a pumped-storage hydroelectric plant, in order to show its ability to yield a holistic analysis encompassing the safety and security risks of such a system.
    We investigate the results obtained in order to identify potential safety and security interactions and give recommendations.
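The Monte Carlo exploration of a CTMC that is too large to build exhaustively can be sketched as follows; the model, rates and state names are illustrative, not taken from the thesis:

```python
import random

def simulate_ctmc(rates, start, bad, horizon, runs=10000, rng=None):
    """Monte Carlo estimate of the probability that a CTMC, started in
    `start`, reaches the undesirable state `bad` within `horizon` time
    units. `rates[s]` maps a state to {successor: transition rate}.
    Toy illustration of the exploration strategy, not S-cube itself."""
    rng = rng or random.Random(0)
    hits = 0
    for _ in range(runs):
        s, t = start, 0.0
        while s != bad and rates.get(s):
            total = sum(rates[s].values())
            t += rng.expovariate(total)          # exponential sojourn time
            if t > horizon:
                break
            u, acc = rng.random() * total, 0.0   # pick the next state
            for nxt, r in rates[s].items():
                acc += r
                if u <= acc:
                    s = nxt
                    break
        if s == bad and t <= horizon:
            hits += 1
    return hits / runs

# Toy model: nominal -> degraded (failure), degraded -> damage (undesirable)
rates = {"nominal": {"degraded": 0.1},
         "degraded": {"nominal": 0.5, "damage": 0.05}}
p = simulate_ctmc(rates, "nominal", "damage", horizon=100.0)
```

The same state space can also be explored by a search for event sequences leading to the undesirable state, which gives the qualitative scenarios the thesis mentions.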

    Connecting the Brain to Itself through an Emulation.

    Pilot clinical trials of human patients implanted with devices that can chronically record and stimulate ensembles of hundreds to thousands of individual neurons offer the possibility of expanding the substrate of cognition. Parallel trains of firing-rate activity can be delivered in real time to an array of intermediate external modules that in turn can trigger parallel trains of stimulation back into the brain. These modules may be built in software, VLSI firmware, or biological tissue, as in vitro culture preparations or in vivo ectopic construct organoids. Arrays of modules can be constructed as early-stage whole-brain emulators, following canonical intra- and inter-regional circuits. By using machine learning algorithms and classic tasks known to activate quasi-orthogonal functional connectivity patterns, bedside testing can rapidly identify ensemble tuning properties and in turn cycle through a sequence of external module architectures to explore which can causatively alter perception and behavior. Whole-brain emulation both (1) serves to augment human neural function, compensating for disease and injury as an auxiliary parallel system, and (2) has its independent operation bootstrapped by a human-in-the-loop to identify optimal micro- and macro-architectures, update synaptic weights, and entrain behaviors. In this manner, closed-loop brain-computer interface pilot clinical trials can advance strong artificial intelligence development and forge new therapies to restore independence in children and adults with neurological conditions.

    Progress in ambient assisted systems for independent living by the elderly

    One of the challenges of the ageing population in many countries is the efficient delivery of health and care services, which is further complicated by the increase in neurological conditions among the elderly due to rising life expectancy. Personal care of the elderly is of concern to their relatives in case they are alone in their homes and unforeseen circumstances occur that affect their wellbeing. The alternative, i.e. care in nursing homes or hospitals, is costly, and the cost increases further if specialized care is mobilized to the patient's place of residence. Enabling technologies for independent living by the elderly, such as ambient assisted living systems (AALS), are seen as essential to enhancing care in a cost-effective manner. In light of significant advances in telecommunication, computing and sensor miniaturization, as well as the ubiquity of mobile and connected devices embodying the concept of the Internet of Things (IoT), end-to-end solutions for ambient assisted living have become a reality. The premise of such applications is the continuous, and most often real-time, monitoring of the environment and occupant behaviour using an event-driven intelligent system, thereby providing a facility for monitoring and assessment and triggering assistance as and when needed. As a growing area of research, it is essential to investigate the approaches to developing AALS in the literature in order to identify current practices and directions for future research. This paper is therefore aimed at a comprehensive and critical review of the frameworks and sensor systems used in various ambient assisted living systems, as well as their objectives and relationships with care and clinical systems. Findings from our work suggest that most frameworks focus on activity monitoring for assessing immediate risks, while the opportunities for integrating environmental factors for analytics and decision-making, in particular for long-term care, are often overlooked.
    The potential of wearable devices and sensors, as well as distributed storage and access (e.g. cloud), is yet to be fully appreciated. There is a distinct lack of strong supporting clinical evidence from the implemented technologies. Socio-cultural aspects, such as divergence among groups and the acceptability and usability of AALS, were also overlooked. Future systems need to look into the issues of privacy and cyber security.
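The event-driven pattern described above can be sketched minimally; the rules, event fields and action strings are hypothetical, not drawn from any surveyed system:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """An assistance rule: when `condition` holds on the latest
    sensor event, `action` is triggered."""
    condition: Callable[[dict], bool]
    action: Callable[[dict], str]

def process_event(event, rules):
    """Run the event through every rule; collect triggered actions."""
    return [r.action(event) for r in rules if r.condition(event)]

rules = [
    Rule(lambda e: e.get("type") == "fall_detected",
         lambda e: f"alert caregiver: fall in {e['room']}"),
    Rule(lambda e: e.get("type") == "no_motion" and e.get("hours", 0) >= 12,
         lambda e: "schedule wellbeing check"),
]

process_event({"type": "fall_detected", "room": "kitchen"}, rules)
# -> ["alert caregiver: fall in kitchen"]
```

The review's criticism maps onto this sketch directly: most surveyed frameworks stop at immediate-risk rules like the first one, while long-term analytics over environmental factors would require accumulating events rather than reacting to each in isolation.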

    Pervasive computing reference architecture from a software engineering perspective (PervCompRA-SE)

    Pervasive computing (PervComp) is one of the most challenging research topics nowadays. Its complexity exceeds that of the outdated mainframe and client-server computation models. Its systems are highly volatile, mobile, and resource-limited, and they stream a lot of data from different sensors. In spite of these challenges, pervasive computing entails, by default, a lengthy list of desired quality features such as context sensitivity, adaptable behavior, concurrency, service omnipresence, and invisibility. Fortunately, device manufacturers have improved the enabling technologies, such as sensors, network bandwidth, and batteries, to pave the road for pervasive systems with high capabilities. This domain has gained an enormous amount of attention from researchers ever since it was first introduced in the early 1990s, yet pervasive systems are still classified as visionary systems that are expected to be woven into people's daily lives. At present, PervComp systems still have no unified architecture and a limited scope of context-sensitivity and adaptability, and many essential quality features are insufficiently addressed in PervComp architectures. The reference architecture (RA) that we call PervCompRA-SE in this research provides solutions for these problems through a comprehensive and innovative pair of business and technical architectural reference models. Both models were based on deep analytical activities and were evaluated using different qualitative and quantitative methods. In this thesis we surveyed a wide range of research projects in various PervComp subdomains to specify our methodological approach and identify the quality features most commonly found in these areas. It presents a novel approach that utilizes theories from sociology, psychology, and process engineering.
    The thesis analyzes the business and architectural problems in two separate chapters covering the business reference architecture (BRA) and the technical reference architecture (TRA), in which the solutions to these problems are also introduced. We devised an associated comprehensive ontology with semantic meanings and measurement scales. Both the BRA and TRA were validated throughout the course of the research work and evaluated as a whole using traceability, benchmark, survey, and simulation methods. The thesis introduces a new reference architecture in the PervComp domain, developed using a novel requirements-engineering method, and a novel statistical method for trade-off analysis and conflict resolution between requirements. The adaptation of activity theory, human perception theory and process re-engineering methods to develop the BRA and the TRA proved very successful, and our approach of reusing the ontological dictionary to monitor system performance was also innovative. Finally, the thesis evaluation methods represent a role model for researchers on how to use both qualitative and quantitative methods to evaluate a reference architecture. Our results show that the requirements-engineering process, along with the trade-off analysis, was very important in delivering PervCompRA-SE. We discovered that the invisibility feature, one of the envisioned quality features of PervComp, is demolished, and that the qualitative evaluation methods were just as important as the quantitative ones for recognizing the overall quality of the RA, by machines as well as by human beings.

    Time Synchronization in Multimodal Wireless Cyber-Physical Systems: A Wearable Biopotential Acquisition and Collaborative Brain-Computer Interface Paradigm

    Brain-computer interface (BCI) research has seen tremendous technological advances over the last three decades, not only in human-controlled robotics, prosthesis control, word spelling, interaction with virtual-reality environments and gaming, but also in cognitive neuroscience. Patients suffering from severe motoric dysfunction (e.g. the late stage of Amyotrophic Lateral Sclerosis) may utilise such a BCI system as an alternative medium of communication driven by mental activity. Recent studies have shown that using such a BCI in a group experiment can help to improve human decision making; this is a new field of BCI, namely collaborative BCI. On the one hand, performing such group experiments requires wireless, high-density, EEG-based BCI systems that are low-cost and wearable and provide long-term monitoring of good-quality EEG data. On the other hand, time synchronization must be established among a group of BCI systems if they are to be employed for such group experiments. These challenges set the foundation of this thesis. In this work, a novel non-invasive, modular biopotential measurement system was designed that can acquire wideband (0.15 Hz–200 Hz) biopotential signals comprising electromyography (EMG), electrocardiography (ECG) and electroencephalography (EEG), together called ExG; the system is hereafter referred to as the ExG-system. The modularity of the ExG-system allows it to be configured from 8 up to 256 channels, depending on whether it is encapsulated in a textile sleeve for recording EMG signals, a textile vest for recording ECG signals, or a textile cap for recording EEG signals. The assembly of the ExG-system into a cap was also developed within the scope of this work.
    The final iteration of the ExG-system exhibits a low input noise of 7 µV peak-to-peak and requires 41 mW per channel during active data recording. A WiFi module was embedded into the ExG-system for wireless data transmission to a remote PC. To enable BCI applications with the developed system, a steady-state visually/auditory evoked potential stimulator (SSVEP/AEP stimulator) was built, incorporating a Raspberry Pi as the main computer and a Bash-based player script that plays media data (video, pictures, sound) as defined in a lookup table on its Linux operating system. Within the scope of the work, time synchronization among a group of such ExG-systems was further realized with the help of an embedded hardware/software solution. The hardware part consists of two different PCB sync modules, each incorporating a crystal oscillator, a microcontroller and a radio module (in this case Bluetooth 4.0). One of them, the sync-addon, is attached to each measurement system (e.g. the ExG-system) to be synchronized; the other, the sync-center, is attached to the remote PC. On the software side, a wireless time synchronization protocol exchanging timing information between the sync-center and the sync-addons must establish tight time synchronization among the ExG-systems. Within the framework of this work, a novel, energy-efficient pairwise broadcast synchronization protocol (PBS), previously proposed only theoretically and never evaluated on real hardware, was experimentally evaluated with the developed sync modules. Moreover, it was benchmarked against other state-of-the-art time synchronization protocols using results from the same hardware platform. In the final iteration of the sync modules, an average synchronization error of 2 ms was achieved at the 95% confidence interval. Since collaborative BCI relies on the P300, an event-related potential that occurs 300–500 ms after the stimulus, the achieved synchronization accuracy is sufficient to conduct such experiments.
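The timestamp arithmetic underlying such synchronization protocols can be illustrated with the classic two-way exchange used by sender-receiver schemes; PBS additionally lets neighbouring nodes overhear the same exchange to synchronize at no extra messaging cost. The numbers below are illustrative only:

```python
def estimate_offset(t1, t2, t3, t4):
    """Classic two-way timestamp exchange.
    t1: request sent (sender clock)    t2: request received (receiver clock)
    t3: reply sent (receiver clock)    t4: reply received (sender clock)
    Assumes a symmetric link delay."""
    offset = ((t2 - t1) - (t4 - t3)) / 2   # receiver clock minus sender clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way propagation delay
    return offset, delay

# Receiver clock runs 5 ms ahead; symmetric 2 ms link delay
offset, delay = estimate_offset(t1=100.0, t2=107.0, t3=110.0, t4=107.0)
# offset == 5.0, delay == 2.0
```

The sender then corrects its clock by `offset`; repeating the exchange and averaging reduces the jitter that produced the 2 ms mean error reported above.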