
    A Survey on Industrial Control System Testbeds and Datasets for Security Research

    The increasing digitization and interconnection of legacy Industrial Control Systems (ICSs) open new vulnerability surfaces, exposing such systems to malicious attackers. Furthermore, since ICSs are often employed in critical infrastructures (e.g., nuclear plants) and manufacturing companies (e.g., chemical industries), attacks can lead to devastating physical damage. To address this security requirement, the research community has focused on developing new security mechanisms such as Intrusion Detection Systems (IDSs), increasingly built on modern machine learning techniques. However, these algorithms require a testing platform and a considerable amount of data to be trained and tested accurately. To satisfy this prerequisite, academia, industry, and government are increasingly proposing testbeds (i.e., scaled-down versions of ICSs, or simulations) on which to test the performance of IDSs. Furthermore, to enable researchers to cross-validate security systems (e.g., security-by-design concepts or anomaly detectors), several datasets have been collected from testbeds and shared with the community. In this paper, we provide a deep and comprehensive overview of ICSs, presenting the architecture design, the devices employed, and the security protocols implemented. We then collect, compare, and describe the testbeds and datasets in the literature, highlighting key challenges and design guidelines to keep in mind during the design phase. Furthermore, we enrich our work by reporting the best-performing IDS algorithms tested on each dataset, creating a state-of-the-art baseline for the field. Finally, drawing on the knowledge accumulated while developing this survey, we offer advice and good practices on the development, choice, and use of testbeds, datasets, and IDSs.
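    To make the train-and-test pipeline the survey alludes to concrete, here is a minimal, hypothetical sketch of an unsupervised anomaly-detection IDS fitted on normal process data and evaluated on traffic containing injected anomalies. The feature layout, the scikit-learn detector choice, and all values are illustrative assumptions, not the survey's own benchmark setup.

```python
# Illustrative sketch: an anomaly-detection IDS baseline on synthetic ICS
# sensor readings. Columns stand in for e.g. tank level, flow rate, valve state.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# "Normal" process data, as might be collected from a testbed (assumed values).
X_train = rng.normal(loc=[50.0, 2.0, 1.0], scale=[1.0, 0.1, 0.05], size=(5000, 3))

# Test data with a few manipulated sensor values appended (attack-like rows).
X_test = np.vstack([
    rng.normal(loc=[50.0, 2.0, 1.0], scale=[1.0, 0.1, 0.05], size=(990, 3)),
    rng.normal(loc=[80.0, 0.0, 1.0], scale=[1.0, 0.1, 0.05], size=(10, 3)),
])

# Unsupervised detector: fit on normal data only, flag outliers at test time.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(X_train)

pred = detector.predict(X_test)  # +1 = normal, -1 = anomaly
print(f"flagged {np.sum(pred == -1)} of {len(X_test)} samples as anomalous")
```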

    Trust and integrity in distributed systems

    In recent decades, we have witnessed explosive growth of the Internet. The massive adoption of distributed systems on the Internet allows users to offload their compute-intensive work to remote servers, e.g., the cloud. In this context, distributed systems are used pervasively in a number of different scenarios, such as web-based services that receive and process data, cloud nodes where company data and processes are hosted, and softwarised networks that process packets. In these systems, all the computing entities need to trust each other and cooperate in order to work properly. While the communication channels can be well protected by protocols like TLS or IPsec, the problem lies in the expected behaviour of the remote computing platforms, because they are not under the direct control of end users and offer no guarantee that they will behave as agreed. For example, the remote party may use non-legitimate services for its own convenience (e.g., illegally storing received data and routed packets), or the remote system may misbehave due to an attack (e.g., altering deployed services). This is especially important because most of these computing entities must expose interfaces towards the Internet, which makes them easier to attack. Hence, software-based security solutions alone are insufficient for the current landscape of distributed systems; they must be coupled with stronger means such as hardware-assisted protection. For the nodes in a distributed system to trust each other, their integrity must be presented and assessed so that their behaviour can be predicted. The remote attestation technique of trusted computing was proposed specifically to deal with the integrity of remote entities, e.g., whether a platform has been compromised by bootkit attacks or by a cracked kernel or services. This technique relies on a hardware chip called the Trusted Platform Module (TPM), which is available in most business-class laptops, desktops, and servers. The TPM serves as the hardware root of trust, providing a special set of capabilities that allows a physical platform to present its integrity state. With a TPM on the motherboard, remote attestation is the procedure by which a physical node provides hardware-based proof of the software components loaded on the platform, which other entities can evaluate to conclude its integrity state. Thanks to the hardware TPM, the remote attestation procedure is resistant to software attacks. However, even though the availability of this chip is high, its actual usage is low. The major reason is that trusted computing offers very little flexibility, since its goal is to provide strong integrity guarantees. For instance, a remote attestation result is positive if and only if the software components loaded on the platform are the expected ones, loaded in a specific order, which limits its applicability in real-world scenarios. For such reasons, this technique is especially hard to apply to software services running in the application layer, which are loaded in arbitrary order and constantly updated. Because of this, current remote attestation techniques provide an incomplete solution: they focus only on the boot phase of physical platforms, not on the services, let alone services running in virtual instances.
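    As a concrete illustration of the integrity evidence described above, the sketch below replays a measurement log against a PCR-style register using the standard TPM extend rule (new PCR = SHA-256(old PCR || measurement)). The log format and component names are assumptions, and the signature check a real TPM quote would carry is omitted.

```python
# Minimal sketch of verifying a measurement log against a quoted PCR value.
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = SHA-256(old PCR || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# --- Attester side (simulated): components measured as they load ----------
loaded = [b"bootloader", b"kernel", b"service-a"]      # hypothetical names
log = [hashlib.sha256(c).digest() for c in loaded]     # the measurement log
quoted_pcr = bytes(32)                                 # PCRs start all-zero
for m in log:
    quoted_pcr = pcr_extend(quoted_pcr, m)             # stand-in for a TPM quote

# --- Verifier side: replay the log and check against known-good hashes ----
whitelist = {hashlib.sha256(c).digest()
             for c in (b"bootloader", b"kernel", b"service-a")}
pcr = bytes(32)
for m in log:
    if m not in whitelist:
        raise SystemExit("unknown component measured: platform untrusted")
    pcr = pcr_extend(pcr, m)

# The evidence is accepted only if the replayed log reproduces the quoted PCR.
print("log replays to quoted PCR:", pcr == quoted_pcr)
```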
    This work first proposes a new remote attestation framework capable of presenting and evaluating the integrity state not only of the boot phase of physical platforms but also of software services at load time, e.g., whether the software is legitimate. The framework lets users know and understand the integrity state of the whole life cycle of the services they interact with, so they can make an informed decision about whether to send their data or trust the received results. Second, building on the remote attestation framework, this thesis proposes a method to bind the identity of a secure channel endpoint to a specific physical platform and its integrity state. Secure channels are extensively adopted in distributed systems to protect data transmitted from one platform to another. However, they convey no information about the integrity state of the platform or the service that generates and receives this data, which leaves ample space for various attacks. By binding the secure channel endpoint to the hardware TPM, users are protected from relay attacks (via the hardware-based identity) and from malicious or compromised platforms and software (via remote attestation). Third, with the help of the remote attestation framework, this thesis introduces a new method to include the integrity state of software services running in virtual containers in the evidence generated by the hardware TPM. This solution is especially important for softwarised network environments. Softwarised networks were proposed to make network deployment, an increasingly complex task, dynamic and flexible. The main idea is to replace hardware appliances with softwarised network functions running inside virtual instances, which are full-fledged computational systems accessible from the Internet, so their integrity is at stake. Unfortunately, current remote attestation work cannot provide hardware-based integrity evidence for software services running inside virtual instances, because the direct link between the internals of virtual instances and the hardware root of trust is missing. With the solution proposed in this thesis, the integrity state of softwarised network functions running in virtual containers can be presented and evaluated with hardware-based evidence, implying the integrity of the whole softwarised network. The proposed remote attestation framework, trusted channel, and trusted softwarised network are implemented in separate working prototypes. Their performance was evaluated and shown to be excellent, allowing them to be applied in real-world scenarios. Moreover, the implementation exposes various APIs to simplify future integration with management platforms such as OpenStack and OpenMANO.
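    A minimal sketch of the trusted-channel binding idea, under simplifying assumptions: the attester folds a hash of its TLS certificate and a verifier-supplied nonce into the data covered by the TPM quote, so the verifier can tie the channel endpoint it sees to the attested platform and detect relayed evidence. A real TPM signs quotes with an asymmetric attestation key; the shared-key MAC below is purely illustrative, as are the certificate and nonce values.

```python
# Binding a secure-channel endpoint to attestation evidence (simplified).
import hashlib, hmac, os

TPM_ATTESTATION_KEY = os.urandom(32)   # stand-in for a TPM-held signing key

def quote(pcr: bytes, extra_data: bytes) -> bytes:
    """Stand-in for a TPM quote: MAC over PCR state plus caller-supplied data."""
    return hmac.new(TPM_ATTESTATION_KEY, pcr + extra_data, hashlib.sha256).digest()

# Attester: quote over hash(TLS certificate) + verifier nonce.
tls_certificate = b"-----BEGIN CERTIFICATE----- ..."   # the endpoint's cert
nonce = os.urandom(16)                                  # freshness from verifier
bound_data = hashlib.sha256(tls_certificate).digest() + nonce
pcr_state = bytes(32)                                   # platform integrity state
evidence = quote(pcr_state, bound_data)

# Verifier: recompute the expected binding from the certificate actually seen
# on the TLS handshake; a quote relayed from another platform would not match.
seen_cert = tls_certificate
expected = quote(pcr_state, hashlib.sha256(seen_cert).digest() + nonce)
print("endpoint bound to attested platform:",
      hmac.compare_digest(evidence, expected))
```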

    Evaluation of Efficiency of Cybersecurity

    The purpose of the thesis is to research how effectively cybersecurity has succeeded in its mission. The thesis used multiple research methods to obtain the best possible answer, and the literature review was systematic. However, the conclusion of the research is that the study is unable to either confirm or reject the main working hypothesis. It is unable to do so because of the lack of proper theories describing the phenomena in security and cybersecurity, and the lack of proper metrics for giving valid and sound conclusions about the effectiveness of cybersecurity and how well cybersecurity has succeeded in its mission to effectively prevent and mitigate cybercrime. Therefore, the science of security and the science of cybersecurity remain underdeveloped as of 2018. The research has made basic discoveries about the development of cybersecurity and security. A direction for further basic research is to establish a general theory of security that describes threat variables (their intention, resources, competence, and progress) and axioms under which threat variables can be measured reliably; such a theory would describe which measures are effective for prevention and mitigation and which are not, and would finally establish proper metrics to measure the efficiency of security and cybersecurity with reliability and validity.

    Detecting cloud virtual network isolation security for data leakage

    This thesis considers information leakage in virtually isolated cloud networks. Virtual Network (VN) isolation is a core element of cloud security, yet the research literature shows that no experimental work, to date, has been conducted to test, discover, and evaluate VN isolation data leakage. Consequently, this research focussed on that gap. Deep Dives of the cloud infrastructures were performed, followed by (Kali) penetration tests to detect any leakage. The resulting data was compared against the information gathered in the Deep Dive to determine how much of the cloud network infrastructure was exposed. As a major contribution to research, this is the first empirical work to apply a Deep Dive approach and a penetration testing methodology to both CloudStack and OpenStack to demonstrate cloud network isolation vulnerabilities. The outcomes indicate that cloud manufacturers need to test their isolation mechanisms more thoroughly and enhance them with available solutions. However, this field needs more industrial data to confirm whether the issues found also apply to non-open-source cloud technologies. If the problems revealed are widespread, then this is a major issue for cloud security. Due to time constraints, only two cloud testbeds were built and analysed, but many potential future works are listed: analysing more complicated VNs, analysing leveraged VN plugins, and testing whether system complexity causes more leakage or protects the VN. This research is one of the first empirical building blocks in the field and gives future researchers a basis for building on the presented methodology and results and for proposing more effective solutions.
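    To give a flavour of the kind of isolation probe such penetration tests rely on, here is a small hypothetical example (not the thesis's actual test suite): from a VM in one tenant network, ARP-scan an address range assumed to belong to another tenant; any reply indicates layer-2 isolation leakage. The subnet is made up, and the script should only be run with root privileges inside a lab testbed.

```python
# Cross-tenant ARP probe: replies from another tenant's subnet suggest
# broken L2 isolation. Requires scapy (pip install scapy) and root.
from scapy.all import ARP, Ether, srp

OTHER_TENANT_SUBNET = "10.0.2.0/24"  # assumed to belong to a different tenant

# Broadcast an ARP who-has for every address in the other tenant's range.
probe = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=OTHER_TENANT_SUBNET)
answered, _ = srp(probe, timeout=2, verbose=False)

# Any answer means a host outside our virtual network is reachable at L2.
for _, reply in answered:
    print(f"isolation leak candidate: {reply.psrc} ({reply.hwsrc})")
if not answered:
    print("no cross-tenant ARP replies observed")
```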