
    Teollisuusautomaatiojärjestelmien tunnistus ja luokittelu IP-verkoissa

    Industrial Control Systems (ICS) are an essential part of the critical infrastructure of society and are becoming increasingly vulnerable to cyber attacks performed over computer networks. The introduction of remote access connections, combined with mistakes in automation system configurations, exposes ICSs to attacks coming from the public Internet. Insufficient IT security policies and weaknesses in the security features of automation systems considerably increase the risk of a successful cyber attack. In recent years the number of observed cyber attacks has risen constantly, signaling the need for new methods of finding and protecting vulnerable automation systems. So far, search engines for Internet-connected devices, such as Shodan, have been a great asset in mapping the scale of the problem. In this thesis, methods are presented to identify and classify industrial control systems over IP-based networking protocols. A large portion of the protocols used in automation networks contain specific diagnostic requests for pulling identification information from a device. Port scanning methods combined with more elaborate service-scan probes can be used to extract identifying data fields from an automation device. A model for the automated discovery and reporting of vulnerable ICS devices is also presented. A prototype software was created and tested with real ICS devices to demonstrate the viability of the model. The target set was gathered from Finnish devices directly connected to the public Internet. Initial results were promising, as devices or systems were identified with a 99% success ratio. A specially crafted identification ruleset and detection database was compiled to work with the prototype. However, a more comprehensive detection library of ICS device types is needed before the prototype is ready to be used in different environments. Other features that help to further assess device purpose and system criticality would be key improvements for future versions of the prototype.
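    The diagnostic-request idea described above can be sketched as follows. This is a minimal illustration using the Modbus/TCP "Read Device Identification" request (function 0x2B, MEI type 0x0E), whose frame layout is public; the vendor strings and the canned response bytes are made up for illustration, and a real probe would send the request over TCP port 502 to a device.

```python
import struct

def build_device_id_request(txn_id: int = 1, unit_id: int = 1) -> bytes:
    """Modbus/TCP 'Read Device Identification' request (function 0x2B, MEI 0x0E).
    Asks for the basic identification block (VendorName, ProductCode, Revision)."""
    pdu = bytes([0x2B, 0x0E, 0x01, 0x00])  # function, MEI type, ReadDevId=basic, object id 0
    # MBAP header: transaction id, protocol id (0), length (unit id + PDU), unit id
    mbap = struct.pack(">HHHB", txn_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

def parse_device_id_response(frame: bytes) -> dict:
    """Extract the identification objects (object id -> ASCII string) from a response."""
    pdu = frame[7:]                # skip the 7-byte MBAP header
    count = pdu[6]                 # number of identification objects that follow
    objects, pos = {}, 7
    for _ in range(count):
        obj_id, length = pdu[pos], pdu[pos + 1]
        objects[obj_id] = pdu[pos + 2: pos + 2 + length].decode("ascii")
        pos += 2 + length
    return objects

request = build_device_id_request()
# Hand-crafted, illustrative response instead of contacting a real device:
response = (struct.pack(">HHHB", 1, 0, 0, 1)                 # MBAP header
            + bytes([0x2B, 0x0E, 0x01, 0x01, 0x00, 0x00, 0x03])  # 3 objects follow
            + bytes([0x00, 0x04]) + b"ACME"                  # object 0: VendorName
            + bytes([0x01, 0x05]) + b"PLC-1"                 # object 1: ProductCode
            + bytes([0x02, 0x04]) + b"v1.0")                 # object 2: Revision
ids = parse_device_id_response(response)
```

A scanner would match fields such as the vendor name and product code against an identification ruleset of the kind the thesis describes.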

    On Security and Privacy for Networked Information Society : Observations and Solutions for Security Engineering and Trust Building in Advanced Societal Processes

    Our society has developed into a networked information society, in which all aspects of human life are interconnected via the Internet — the backbone through which a significant part of communications traffic is routed. This makes the Internet arguably the most important piece of critical infrastructure in the world. Securing Internet communications for everyone using it is extremely important, as the continuing growth of the networked information society relies upon fast, reliable and secure communications. A prominent threat to the security and privacy of Internet users is mass surveillance of Internet communications. The methods and tools used to implement mass surveillance capabilities on the Internet pose a danger to the security of all communications, not just the intended targets. When we continue to build the networked information society upon the unreliable foundation of the Internet, we encounter increasingly complex problems, which are the main focus of this dissertation. As the reliance on communication technology grows in a society, so does the importance of information security. At this stage, information security issues become separated from the purely technological domain and begin to affect everyone in society. The approach taken in this thesis is therefore both technical and socio-technical. The research presented in this PhD thesis builds security into the networked information society and provides parameters for the further development of a safe and secure networked information society. This is achieved by proposing improvements on a multitude of layers. In the technical domain we present an efficient design flow for secure embedded devices that use cryptographic primitives in a resource-constrained environment, examine and analyze threats to biometric passport and electronic voting systems, observe techniques used to conduct mass Internet surveillance, and analyze the security of Finnish web user passwords. In the socio-technical domain we examine surveillance and how it affects the citizens of a networked information society, study methods for delivering efficient security education, examine what constitutes essential security knowledge for citizens, advocate mastery by the targeted citizens over the surveillance data collected about them, and examine the concept of forced trust that permeates all topics examined in this work.

    The future of Cybersecurity in Italy: Strategic focus area

    This volume has been created as a continuation of the previous one, with the aim of outlining a set of focus areas and actions that the Italian national research community considers essential. The book touches many aspects of cyber security, ranging from the definition of the infrastructure and controls needed to organize cyber defence to the actions and technologies to be developed to achieve better protection, and from the identification of the main technologies to be defended to the proposal of a set of horizontal actions for training, awareness raising, and risk management.

    Characterizing, managing and monitoring the networks for the ATLAS data acquisition system

    Particle physics studies the constituents of matter and the interactions between them. Many of the elementary particles do not exist under normal circumstances in nature. However, they can be created and detected during energetic collisions of other particles, as is done in particle accelerators. The Large Hadron Collider (LHC) being built at CERN will be the world's largest circular particle accelerator, colliding protons at energies of 14 TeV. Only a very small fraction of the interactions will give rise to interesting phenomena. The collisions produced inside the accelerator are studied using particle detectors. ATLAS is one of the detectors built around the LHC accelerator ring. During its operation, it will generate a data stream of 64 Terabytes/s. A Trigger and Data Acquisition System (TDAQ) is connected to ATLAS -- its function is to acquire digitized data from the detector and apply trigger algorithms to identify the interesting events. Achieving this requires the power of over 2000 computers plus an interconnecting network capable of sustaining a throughput of over 150 Gbit/s with minimal loss and delay. The implementation of this network required a detailed study of the available switching technologies to a high degree of precision in order to choose the appropriate components. We developed an FPGA-based platform (the GETB) for testing network devices. The GETB system proved to be flexible enough to be used as the basis of three different network-related projects. An analysis of the traffic pattern that is generated by the ATLAS data-taking applications was also possible thanks to the GETB. Then, while the network was being assembled, parts of the ATLAS detector started commissioning -- this task relied on a functional network. Thus it was imperative to be able to continuously identify existing and usable infrastructure and manage its operations.
In addition, monitoring was required to detect any overload conditions with an indication of where the excess demand was being generated. We developed tools to ease the maintenance of the network and to automatically produce inventory reports. We created a system that discovers the network topology, which permitted us to verify the installation and to track its progress. A real-time traffic visualization system has been built, allowing us to see at a glance which network segments are heavily utilized. Later, as the network achieves production status, it will be necessary to extend the monitoring to identify individual applications' use of the available bandwidth. We studied a traffic monitoring technology that will allow us to have a better understanding of how the network is used. This technology, based on packet sampling, gives the possibility of having a complete view of the network: not only its total capacity utilization, but also how this capacity is divided among users and software applications. This thesis describes the establishment of a set of tools designed to characterize, monitor and manage complex, large-scale, high-performance networks. We describe in detail how these tools were designed, calibrated, deployed and exploited. The work that led to the development of this thesis spans more than four years and closely follows the development phases of the ATLAS network: its design, its installation and finally, its current and future operation.
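    The packet-sampling idea — estimating per-application bandwidth from a 1-in-N subset of packets — can be sketched in a few lines. The application names, sampling rate and packet sizes below are illustrative assumptions, not the ATLAS deployment.

```python
import random
from collections import Counter

def estimate_per_app_bytes(packets, sample_rate):
    """1-in-N packet sampling: count sampled bytes per application,
    then scale by N to estimate each application's total traffic."""
    sampled = Counter()
    for i, (app, size) in enumerate(packets):
        if i % sample_rate == 0:          # deterministic 1-in-N sampling
            sampled[app] += size
    return {app: b * sample_rate for app, b in sampled.items()}

random.seed(7)
apps = ["event-builder", "monitoring", "control"]           # hypothetical app labels
traffic = [(random.choice(apps), random.randint(64, 1500)) for _ in range(100_000)]

true_totals = Counter()
for app, size in traffic:
    true_totals[app] += size

estimates = estimate_per_app_bytes(traffic, sample_rate=100)
# Relative error of the sampled estimate per application:
errors = {app: abs(estimates[app] - true_totals[app]) / true_totals[app] for app in apps}
```

With enough sampled packets the scaled estimates track the true per-application totals closely, which is why sampling makes a complete view of a 150 Gbit/s network tractable.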

    Simulation of Dissemination Strategies on Temporal Networks

    In distributed environments, such as distributed ledger technologies and other peer-to-peer architectures, communication represents a crucial topic. The ability to efficiently disseminate contents is strongly influenced by the type of system architecture, the protocol used to spread such contents over the network and the actual dynamicity of the communication links (i.e. static vs. temporal networks). In particular, dissemination strategies focus either on achieving optimal coverage, on minimizing network traffic or on providing assurances of anonymity (a fundamental requirement of many cryptocurrencies). In this work, the behaviour of multiple dissemination protocols is discussed and studied through simulation. The performance evaluation has been carried out on temporal networks with the help of LUNES-temporal, a discrete event simulator that allows testing algorithms running in a distributed environment. The experiments show that some gossip protocols make it possible either to save a considerable number of messages or to provide better anonymity guarantees, at the cost of slightly lower coverage and/or a slight increase in delivery time.
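    The coverage-versus-traffic trade-off described above can be reproduced with a toy push-gossip simulation. This is a sketch under simplifying assumptions (a static random graph, synchronous rounds, illustrative parameters), not the LUNES-temporal protocols themselves.

```python
import random

def push_gossip(n, degree, fanout, rounds, seed=42):
    """Simulate push gossip on a random directed n-node graph: every informed
    node forwards the message to `fanout` random neighbours per round.
    Returns (coverage fraction, total messages sent)."""
    rng = random.Random(seed)
    neighbours = [rng.sample([j for j in range(n) if j != i], degree) for i in range(n)]
    informed = {0}                       # node 0 originates the message
    messages = 0
    for _ in range(rounds):
        newly_informed = set()
        for node in informed:
            for peer in rng.sample(neighbours[node], min(fanout, degree)):
                messages += 1
                if peer not in informed:
                    newly_informed.add(peer)
        informed |= newly_informed
    return len(informed) / n, messages

full = push_gossip(n=500, degree=8, fanout=8, rounds=10)  # flood-like: high coverage, many messages
lean = push_gossip(n=500, degree=8, fanout=2, rounds=10)  # lower fanout: fewer messages
```

Lowering the fanout cuts the message count sharply while giving up only some coverage and delivery speed, which mirrors the qualitative result reported in the abstract.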

    ReCooPla: a DSL for coordination-based reconfiguration of software architectures

    In production environments where change is the rule rather than the exception, adaptation of software plays an important role. Such adaptations presuppose dynamic reconfiguration of the system architecture; however, it is in the static setting (design phase) that such reconfigurations must be designed and analysed, to preclude erroneous evolutions. Modern software systems, which are built from the coordinated composition of loosely coupled software components, are naturally adaptable; and the coordination specification is, usually, the main reference point for inserting changes in these systems. In this paper, a domain-specific language—referred to as ReCooPLa—is proposed to design reconfigurations that change the coordination structures, so that they can be analysed before being applied at run time. Moreover, a reconfiguration engine is introduced that takes conveniently translated ReCooPLa specifications and applies them to coordination structures.

    Hybrid post-quantum cryptography in network protocols

    Tese (doutorado) - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação, Florianópolis, 2023. Network security is essential for today's communications. Protocols such as Transport Layer Security (TLS) and Automatic Certificate Management Environment (ACME) enable secure communications for various applications. TLS provides secure channels with peer authentication, given that the peer already has a digital certificate to prove its identity. ACME contributes to TLS adoption with facilities for issuing and managing digital certificates. Both protocols depend on Public-Key Cryptography for authentication and the Key Exchange (KEX) of symmetric key material. However, the advent of a Cryptographically Relevant Quantum Computer (CRQC) weakens KEX protocols and digital certificates built with today's classical cryptography (such as RSA and Diffie-Hellman). Given the widespread adoption of TLS and ACME, such a threat reaches a global scale. In this context, this thesis addresses the challenges of adopting Post-Quantum Cryptography (PQC) in TLS and ACME, focusing on the recommended approach called hybrid PQC (or hybrid mode). PQC is built on different mathematical assumptions for which no efficient solution is known on either classical or quantum computers. Hybrids ease the PQC transition by combining it with classical cryptography. This thesis defends hybrid mode adoption through the following contributions: a secondary study classifying hybrid modes and showing their efficiency and security; a tool for users to check their TLS connections for quantum-safe guarantees; a study and an optimized approach for the issuance of PQC digital certificates in ACME; a design and implementation of a hybrid approach for the TLS alternative called KEMTLS; and a novel hybrid concept (and implementation) for authentication using wrapped digital certificates. In most of the proposed hybrid mode evaluations, the performance penalty was not significant when compared to PQC-only deployment. The novel concept for hybrid authentication also enables a contingency plan for hybrids, contributing to PQC adoption. By proposing and evaluating different scenarios, approaches and protocols, this thesis adds to the efforts towards using hybrid PQC to mitigate the worrisome effects of the quantum threat to cryptography.
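    The core of the hybrid mode is a key combiner: both shared secrets feed one key derivation, so the session key stays safe as long as either the classical or the post-quantum scheme holds. The sketch below shows a concatenate-and-KDF combiner of the kind used in hybrid key-exchange proposals; the two input secrets are stand-in values, not real X25519 or Kyber outputs, and the thesis's own construction may differ in detail.

```python
import hashlib
import hmac

def hkdf_sha256(secret: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) over SHA-256: extract, then one expand block."""
    prk = hmac.new(salt, secret, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

def hybrid_combine(classical_ss: bytes, pq_ss: bytes, transcript: bytes) -> bytes:
    """Concatenate both shared secrets and derive a single session key.
    Breaking the key requires breaking BOTH input secrets."""
    return hkdf_sha256(classical_ss + pq_ss, salt=b"\x00" * 32, info=transcript)

# Stand-ins for the two KEX outputs (illustrative only):
classical = hashlib.sha256(b"classical-shared-secret").digest()
post_quantum = hashlib.sha256(b"pq-shared-secret").digest()
key = hybrid_combine(classical, post_quantum, transcript=b"handshake-hash")
```

Binding the handshake transcript into the derivation ties the resulting key to this particular exchange, a standard hygiene measure in TLS-style protocols.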

    ZeroComm: Decentralized, Secure and Trustful Group Communication

    In the context of computer networks, decentralization is a network architecture that distributes both the workload and the control of a system among a set of coequal participants. Applications based on such networks enhance the trust involved in communication by eliminating external authorities with self-interests, including governments and tech companies. The decentralized model delegates the ownership of data to individual users and thus mitigates undesirable behaviours such as the harvesting of personal information by external organizations. Consequently, decentralization has been adopted as the key feature of the next generation of the Internet model, known as Web 3.0. DIDComm is a set of abstract protocols which enables secure messaging with decentralization and thus serves the realization of Web 3.0 networks. It standardizes and transforms existing network applications to enforce secure, trustful and decentralized communication. Prior work on DIDComm has been restricted to pairwise communication, hence a feasible strategy is needed for adapting Web 3.0 concepts to group-oriented networks. Inspired by the demand for a group communication model in Web 3.0, this study presents ZeroComm, which preserves decentralization, security and trust throughout the fundamental operations of a group, such as messaging and membership management. ZeroComm is built atop the publisher-subscriber pattern, which serves as a messaging architecture enabling communication among multiple members based on the subjects of their interests. This is realized in our implementation through ZeroMQ, a low-level network library that facilitates the construction of advanced and distributed messaging patterns. The proposed solution leverages DIDComm protocols to deliver safe communication among group members at the expense of performance and efficiency.
ZeroComm offers two different modes of group communication based on the organization of relationships among members, with a compromise between performance and security. Our quantitative analysis shows that the proposed model performs efficiently for the messaging operation, whereas joining a group is a relatively costly procedure due to the establishment of secure and decentralized relationships among members. ZeroComm primarily serves as a low-level messaging framework but can be extended with advanced features such as message ordering, crash recovery of members and secure routing of messages.
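    The subject-based fan-out at the heart of the publisher-subscriber pattern can be illustrated in-process. In ZeroComm this dispatch happens over ZeroMQ sockets with DIDComm-secured payloads; the toy broker and topic names below are illustrative stand-ins for that machinery.

```python
from collections import defaultdict

class PubSubBus:
    """In-process stand-in for a topic-based publish-subscribe broker:
    subscribers register callbacks per subject; publish fans out to them."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        # Deliver only to the subscribers of this subject
        for callback in self._subs[topic]:
            callback(message)

bus = PubSubBus()
received = []
bus.subscribe("group/alerts", lambda m: received.append(("alice", m)))
bus.subscribe("group/alerts", lambda m: received.append(("bob", m)))
bus.subscribe("group/chat", lambda m: received.append(("carol", m)))
bus.publish("group/alerts", "node joined")   # reaches alice and bob, not carol
```

Delivering by subject rather than by peer address is what lets a group grow without every member maintaining a pairwise channel to every other member.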

    UAV Cloud Platform for Precision Farming

    New applications for Unmanned Aerial Vehicles come to light daily to solve some of modern society's problems. One of them is the opportunity to optimize agricultural processes. Because of this, a new and constantly progressing area called Precision Farming arose in the last years of the twentieth century. Nowadays, a growing division of this field concerns Unmanned Aerial Vehicle applications. Most traditional methods employed by farmers are ineffective and do not aid in the progression and solution of these issues. However, some fields have the potential to enhance many agricultural methods, namely Cyber-Physical Systems and Cloud Computing. Given their capabilities, such as aerial surveillance and mapping, Cyber-Physical Systems like Unmanned Aerial Vehicles are being used to monitor vast crops and to gather insightful data that would take much more time if collected by hand. However, these systems typically lack computing power and storage capacity, meaning that much of the gathered data cannot be stored and further analyzed locally. That is the obstacle that Cloud Computing can solve. With the possibility of offloading computation by sending the collected data to a cloud, it is possible to leverage the enormous computing power and storage capabilities of remote data centers to gather and analyze these datasets. This dissertation proposes an architecture for this use case, leveraging the advantages of Cloud Computing to address the obstacles of Unmanned Aerial Vehicles. Moreover, this dissertation is a collaboration with an ongoing Horizon 2020 European project that deals with precision farming and agriculture enhanced by Cyber-Physical Systems.