138 research outputs found

    Formal Modelling of Intrusion Detection Systems

    The cybersecurity ecosystem continuously evolves in the number, diversity, and complexity of cyber attacks; as a result, detection tools become ineffective against some attacks. Intrusion Detection Systems (IDS) generally fall into three types: anomaly-based detection, signature-based detection, and hybrid detection. Anomaly-based detection characterizes the usual behavior of the system, typically in a statistical manner. It can detect both known and unknown attacks, but it also generates a very large number of false positives. Signature-based detection detects known attacks by defining rules that describe known attacker behavior; this requires good knowledge of that behavior. Hybrid detection relies on several detection methods, including the previous ones, and has the advantage of being more precise during detection. Tools like Snort and Zeek offer low-level languages for expressing attack-detection rules. Because the number of potential attacks is very large, these rule bases quickly become hard to manage and maintain. Moreover, expressing stateful rules that recognize a sequence of events is particularly arduous. In this thesis, we propose a stateful approach based on algebraic state-transition diagrams (ASTDs) to identify complex attacks. ASTDs allow a graphical and modular representation of a specification, which facilitates the maintenance and understanding of rules. We extend the ASTD notation with new features to represent complex attacks. Next, we specify several attacks with the extended notation and run the resulting specifications on event streams using an interpreter to identify attacks. We also evaluate the performance of the interpreter against industrial tools such as Snort and Zeek. Then, we build a compiler that generates executable code from an ASTD specification, able to efficiently identify sequences of events.
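The core difficulty the abstract names, stateful recognition of an event sequence, can be illustrated with a minimal sketch. The event names, the three-step attack pattern, and the per-source state table below are hypothetical illustrations of the general idea, not the ASTD notation or the thesis's interpreter.

```python
# Minimal sketch of stateful sequence detection: advance through an
# expected sequence of event types, tracked per source, and fire an
# alert when the full sequence has been observed from one source.

class SequenceDetector:
    def __init__(self, pattern):
        self.pattern = pattern  # e.g. ["scan", "exploit", "exfiltrate"]
        self.state = {}         # source key -> index into pattern

    def feed(self, source, event_type):
        """Consume one event; return True when the full sequence is seen.
        Non-matching events leave the state unchanged (no reset)."""
        idx = self.state.get(source, 0)
        if event_type == self.pattern[idx]:
            idx += 1
            if idx == len(self.pattern):
                self.state[source] = 0   # reset after a full match
                return True
            self.state[source] = idx
        return False

detector = SequenceDetector(["scan", "exploit", "exfiltrate"])
events = [("10.0.0.5", "scan"), ("10.0.0.5", "exploit"),
          ("10.0.0.9", "scan"), ("10.0.0.5", "exfiltrate")]
alerts = [src for src, ev in events if detector.feed(src, ev)]
```

Only the source that completes the whole sequence raises an alert; interleaved events from other sources do not disturb its state, which is exactly what flat, stateless signature rules struggle to express.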

    Wireless Sensor Data Transport, Aggregation and Security

    Wireless sensor networks (WSNs), and the communication and security therein, have recently gained prominence in the tech industry with the emergence of the so-called Internet of Things (IoT). The steps from acquiring data to making a reactive decision based on the acquired sensor measurements are complex and require careful execution. In many of these steps there are still technological gaps to fill, because several primitives that are desirable in a sensor network environment are bolted onto the networks as application-layer functionalities rather than built into them. For several important functionalities at the core of IoT architectures, we have developed solutions that are analyzed and discussed in the following chapters. The chain of steps from the acquisition of sensor samples until these samples reach a control center or the cloud, where the data analytics are performed, starts with acquiring the sensor measurements at the correct time and, importantly, synchronously among all deployed sensors. This synchronization has to be network-wide, including both the wired core network and the wireless edge devices. This thesis studies a decentralized and lightweight solution to synchronize and schedule IoT devices over wireless and wired networks adaptively, with very simple local signaling. Furthermore, measurement results have to be transported and aggregated over the same interface, requiring clever coordination among all nodes, as network resources are shared, keeping scalability and fail-safe operation in mind. Ensuring the integrity of measurements is also a complicated task. On the one hand, cryptography can shield the network from outside attackers and is therefore the first step to take, but due to the volume of sensors it must rely on an automated key-distribution mechanism. On the other hand, cryptography does not protect against exposed keys or inside attackers.
    One can, however, exploit statistical properties to detect and identify nodes that send false information and exclude these attacker nodes from the network to avoid data manipulation. Furthermore, if data is supplied by a third party, one can apply an automated trust metric to each individual data source to define which data to accept and consider for the mentioned statistical tests in the first place. Monitoring the cyber and physical activities of an IoT infrastructure in concert is another topic investigated in this thesis.
    Doctoral Dissertation, Electrical Engineering, 201
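The statistical idea mentioned above, flagging nodes whose reports disagree with the group consensus, can be sketched briefly. The median-absolute-deviation test, the threshold, and the node names below are assumptions for illustration, not the specific tests developed in the thesis.

```python
import statistics

def suspect_nodes(readings, threshold=3.0):
    """readings: dict node_id -> reported value for one measurement round.
    Flags nodes whose value lies more than `threshold` scaled deviations
    from the median of all reports in the round."""
    values = list(readings.values())
    med = statistics.median(values)
    # median absolute deviation; guard against an all-identical round
    mad = statistics.median(abs(v - med) for v in values) or 1e-9
    return {node for node, v in readings.items()
            if abs(v - med) / mad > threshold}

# One round of temperature reports; n4 injects a false value.
round1 = {"n1": 20.1, "n2": 19.8, "n3": 20.3, "n4": 35.0, "n5": 20.0}
flagged = suspect_nodes(round1)
```

Using the median rather than the mean keeps the consensus estimate robust even when a minority of nodes lie, which is the premise of excluding attacker nodes from the network.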

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a final publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to give a better discernment of the domain at hand, their representation becomes increasingly demanding in computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Applying Big Data Analytics to Study the Spatio-Temporal Dynamics of Human Populations

    With the rapid growth of the volume of available data related to human dynamics, it has become more challenging to research topics that could reveal novel knowledge in the area. Nowadays people tend to live mostly in large cities, where knowledge about human dynamics, habits, and behaviour could lead to better city organisation, energy efficiency, transport organisation, and an overall better quality of life and more sustainable living. Human dynamics can be reasoned about from many different aspects, but all of them have three elements in common: time, space, and data volume. Human activity and interaction cannot be inspected without the space and time components, because everything happens somewhere at some time. Also, with huge smartphone adoption, terabytes of data related to human dynamics are now available. Although the data is sensitive because of personal information, the true owner of the data is the telecom operator, the social media company, or any other company that provides the applications used on the mobile phone. If such data is to be opened to the public or the scientific community for research, it first needs to be anonymized. Another challenge of user-generated data is data set size. The data is usually very large in size (Volume), it comes from different sources and in different formats (Variety), and it is generated in real time and evolves very fast (Velocity). These are the three V's of Big Data, and such data sets need to be approached with specially designed Big Data technologies. In the research presented in this thesis we combined Big Data technologies, graph theory, and space-time-dependent human dynamics data.

    Integrating Blockchain and Fog Computing Technologies for Efficient Privacy-preserving Systems

    This PhD dissertation concludes a three-year research journey on the integration of Fog Computing and Blockchain technologies. The main aim of this integration is to address the challenges of each technology by combining it with the other. Blockchain technology (BC) is a distributed ledger technology in the form of a distributed transactional database, secured by cryptography and governed by a consensus mechanism. It was initially proposed for decentralized cryptocurrency applications, with robustness proven in practice. Fog Computing (FC) is a geographically distributed computing architecture in which various heterogeneous devices at the edge of the network are ubiquitously connected to collaboratively provide elastic computation services. FC provides enhanced services closer to end-users in terms of time, energy, and network load. Integrating FC with BC can result in more efficient services, in terms of latency and privacy, as mostly required by Internet of Things systems.

    Contributions to Edge Computing

    Efforts related to the Internet of Things (IoT), Cyber-Physical Systems (CPS), Machine-to-Machine (M2M) technologies, the Industrial Internet, and Smart Cities aim to improve society through the coordination of distributed devices and the analysis of the resulting data. By the year 2020 there will be an estimated 50 billion network-connected devices globally and 43 trillion gigabytes of electronic data. Current practices of moving data directly from end-devices to remote and potentially distant cloud computing services will not be sufficient to manage future device and data growth. Edge Computing is the migration of computational functionality to the sources of data generation. The importance of edge computing increases with the size and complexity of devices and the resulting data. In addition, the coordination of global edge-to-edge communications, shared resources, high-level application scheduling, monitoring, measurement, and Quality of Service (QoS) enforcement will be critical to addressing the rapid growth of connected devices and associated data. We present a new distributed agent-based framework designed to address the challenges of edge computing. This actor-model framework implementation is designed to manage large numbers of geographically distributed services, composed of heterogeneous resources and communication protocols, in support of low-latency real-time streaming applications. As part of this framework, an application description language was developed and implemented. Using this language, a number of high-order management modules were implemented, including solutions for resource and workload comparison, performance observation, scheduling, and provisioning. A number of hypothetical and real-world use cases are described to support the framework implementation.
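The actor model the abstract invokes can be illustrated with a minimal sketch: each actor owns a mailbox and processes its messages sequentially, so distributed services coordinate through message passing alone. The actor, the message shape, and the monitoring role below are hypothetical illustrations, not the thesis's framework or its description language.

```python
import queue
import threading

# Minimal actor: a private mailbox drained by a dedicated thread, so the
# handler never runs concurrently with itself and needs no locks.
class Actor:
    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, message):
        self.mailbox.put(message)        # non-blocking, thread-safe

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:              # poison pill stops the actor
                break
            self.handler(msg)

# A toy "performance observation" actor collecting latency reports.
results = []
monitor = Actor(lambda m: results.append(("observed", m)))
monitor.send({"service": "stream-1", "latency_ms": 12})
monitor.send(None)
monitor.thread.join()
```

Because each actor serializes its own work, a framework can scale by adding actors rather than by sharing state, which suits geographically distributed, heterogeneous services.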

    Security and Privacy for Modern Wireless Communication Systems

    This reprint focuses on the latest protocol research, software/hardware development and implementation, and system architecture design addressing emerging security and privacy issues for modern wireless communication networks. Relevant topics include, but are not limited to, the following: deep-learning-based security and privacy design; covert communications; information-theoretic foundations for advanced security and privacy techniques; lightweight cryptography for power-constrained networks; physical-layer key generation; prototypes and testbeds for security and privacy solutions; encryption and decryption algorithms for low-latency constrained networks; security protocols for modern wireless communication networks; network intrusion detection; physical-layer design with security considerations; anonymity in data transmission; vulnerabilities in security and privacy in modern wireless communication networks; challenges of security and privacy in node–edge–cloud computation; security and privacy design for low-power wide-area IoT networks; security and privacy design for vehicle networks; and security and privacy design for underwater communication networks.