
    Safe security scanning of a production state automation system

    The number of cybersecurity threats against industrial automation systems, OT and ICS environments, and critical infrastructure is growing at a rapid pace. Cyberattacks against such systems can cause significant economic, physical, or reputational damage to the target organization. Because of the seemingly never-ending dangers these systems face, detection methods and tools against such threats are also continuously being developed. The purpose of this thesis is to study the current possibilities for security scanning of production state automation systems, such as industrial systems and critical infrastructure. The common scanning methods can be divided into active scanning and passive detection. Because each method has its own shortcomings, the best practice has conventionally been to use them side by side, but more innovative practices have also been proposed and tested. As a theoretical background for the study, it is relevant to define the pros and cons of the so-called conventional solutions and the most common tools. It is also important to study the basic characteristics of the automation systems being scanned, and the effects the studied security scanning solutions have on them. After the preliminary study, existing commercial products, such as Tenable Active Querying and Nozomi Smart Polling, as well as emerging technologies and proposed solutions, such as delay-based scanning and UDP-based scans, are studied and analysed in appropriate depth to determine possible improvements to the methods commonly used in the area. The thesis finally presents a proposal for the optimal utilization of current and emerging technologies and solutions for security scanning of production state automation systems, based on the capabilities of current commercial products as well as prior studies and the most recent developments in the area.
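    As a hedged illustration of the delay-based scanning idea mentioned in the abstract, the following Python sketch throttles plain TCP connect probes so that a fragile production device sees at most one probe per configurable interval. The host addresses, ports, and delay value are hypothetical placeholders for this example, not parameters taken from the thesis.

```python
import socket
import time

def gentle_scan(hosts, ports, delay_s=2.0, timeout_s=1.0):
    """Probe each (host, port) with a plain TCP connect, pausing
    delay_s between probes to avoid overloading fragile OT devices."""
    results = {}
    for host in hosts:
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout_s)
                try:
                    s.connect((host, port))
                    results[(host, port)] = "open"
                except OSError:  # timeout, refused, unreachable, ...
                    results[(host, port)] = "closed/filtered"
            time.sleep(delay_s)  # the "delay-based" part: spread load over time
    return results

if __name__ == "__main__":
    # Hypothetical lab address and common ICS ports (S7 102, Modbus/TCP 502).
    print(gentle_scan(["192.0.2.10"], [102, 502]))
```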

    A study of EU data protection regulation and appropriate security for digital services and platforms

    A law often has more than one purpose, more than one intention, and more than one interpretation. A meticulously formulated and context-agnostic law text will still, when faced with a field propelled by intense innovation, eventually become obsolete. The European Data Protection Directive is a good example of such legislation. It may be argued that the technological modifications brought on by the EU General Data Protection Regulation (GDPR) are nominal in comparison to the previous Directive, but from a business perspective the changes are significant and important. The Directive’s lack of direct economic incentive for companies to protect personal data has changed with the Regulation, as companies may now have to pay severe fines for violating the legislation. The objective of the thesis is to establish the notion of trust as a key design goal for information systems handling personal data. This includes interpreting the EU legislation on data protection and using the interpretation as a foundation for further investigation. This interpretation is connected to the areas of analytics, security, and privacy concerns for intelligent service development. Finally, the centralised platform business model and its challenges are examined, and three main resolution themes for regulating platform privacy are proposed. The aims of the proposed resolutions are to create a more trustful relationship between providers and data subjects, while also improving the conditions for competition and thus providing data subjects with service alternatives. The thesis contributes new insights into the evolving privacy practices in the digital society at an important time of transition from service-driven business models to platform business models. Firstly, privacy-related regulation and state-of-the-art analytics development are examined to understand their implications for intelligent services that are based on automated processing and profiling. The ability to choose between providers of intelligent services is identified as the core challenge. Secondly, the thesis examines what is meant by appropriate security for systems that handle personal data, something the GDPR requires organisations to use without, however, specifying what can be considered appropriate. We propose a method for active network security in web software that is developed through the use of analytics for detection and by inserting data generators into a software installation. The active network security method is proposed as a framework for achieving compliance with the GDPR requirements for services and platforms to use appropriate security. Thirdly, the platform business model is considered from the privacy point of view, together with the implications of “processing silos” for intelligent services. The centralised platform model is considered problematic from both the data subject and the competition standpoints. A resolution is offered for enabling user-initiated open data flow to counter the centralised “processing silos”, and thereby to facilitate the introduction of decentralised platforms. The thesis provides an interdisciplinary analysis considering the law as it stands (lex lata), while the proposed resolution (de lege ferenda), defined through argumentativist legal dogmatics, describes how the legal framework ought to be adapted to fit the described environment. User-friendly Legal Science is applied as a theory framework to provide a holistic approach to answering the research questions. The User-friendly Legal Science theory has its roots in design science and offers a way towards achieving interdisciplinary research in the fields of information systems and legal science.
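    The "data generators" mentioned above are described only at a conceptual level in this abstract; the sketch below illustrates one hypothetical way such synthetic records could be planted in a service and later matched against outbound logs to flag unauthorised processing. All names and fields are illustrative assumptions, not the method from the thesis.

```python
import secrets

def plant_canary(user_table):
    """Insert a synthetic, uniquely identifiable personal-data record
    (a 'data generator' in the sense of a canary token)."""
    token = secrets.token_hex(8)
    record = {"name": f"canary-{token}", "email": f"{token}@example.invalid"}
    user_table.append(record)
    return token

def detect_leak(token, outbound_log_lines):
    """Flag any outbound traffic or analytics export containing the canary."""
    return [line for line in outbound_log_lines if token in line]

users = []
tok = plant_canary(users)
print(detect_leak(tok, [f"POST /export body={tok}@example.invalid"]))
```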

    Cross-VM network attacks & their countermeasures within cloud computing environments

    Cloud computing is a contemporary model in which computing resources, hosted within large-scale multi-tenant systems, are dynamically scaled up and down for customers. These resources are delivered to customers in an improved, cost-effective form and are available on request. As one of the main trends in the IT industry today, cloud computing has gained momentum and started to transform the way enterprises build and offer IT solutions. The primary motivation for using the cloud computing model is cost-effectiveness. This motivation can compel Information and Communication Technologies (ICT) organizations to shift their sensitive data and critical infrastructure to cloud environments. Because of the complex nature of the underlying cloud infrastructure, cloud environments face a large number of challenges, such as misconfigurations, cyber-attacks, rootkits, and malware instances, which manifest themselves as serious threats. These threats noticeably reduce the general trustworthiness, reliability, and availability of the cloud. Security is the primary concern of a cloud service model. However, a number of significant challenges have revealed that cloud environments are not as secure as one would expect. There is also limited understanding of how to offer secure services in a cloud model that can counter such challenges. This highlights the importance of understanding what constitutes a threat in the cloud model. One of the main threats in a cloud model stems from its cost-effectiveness: cloud providers normally reduce cost by sharing infrastructure between multiple untrusted VMs. This sharing has also led to several problems, including co-location attacks. Cloud providers mitigate co-location attacks by introducing the concept of isolation. Because of this, a guest VM cannot interfere with its host machine or with other guest VMs running on the same system. Such isolation is one of the prime foundations of cloud security for major public providers. However, such logical boundaries are not impenetrable. A myriad of previous studies have demonstrated how co-resident VMs can be vulnerable to attacks through shared file systems, cache side-channels, or the compromise of the hypervisor layer using rootkits. Thus, cross-VM attacks remain possible, because an attacker can use one VM to control or access other VMs on the same hypervisor. Hence, multiple methods have been devised for strategic VM placement in order to exploit co-residency. Despite the clear potential for co-location attacks abusing shared memory and disk, fine-grained cross-VM network-channel attacks have not yet been demonstrated. Current network-based attacks exploit existing vulnerabilities in networking technologies, such as ARP spoofing and DNS poisoning, which are difficult to use for VM-targeted attacks. The most commonly discussed network-based challenges focus on the fact that cloud providers place more layers of isolation between co-resident VMs than in non-virtualized settings, because the attacker and victim are often assigned to separate segments of virtual networks. However, it has been demonstrated that this is not necessarily sufficient to prevent manipulation of a victim VM’s traffic. This thesis presents a comprehensive method and empirical analysis of the advancement of co-location attacks, in which a malicious VM can negatively affect the security and privacy of other co-located VMs by breaching the security perimeter of the cloud model. In such a scenario, it is imperative for a cloud provider to be able to secure access to data appropriately, so that it reaches the appropriate destination. The primary contribution of the work presented in this thesis is the introduction of two innovative attack models against leading cloud models, impersonation and privilege escalation, that successfully breach the security perimeter of cloud models, together with proposed countermeasures that block such attacks. The attack model revealed in this thesis is a combination of impersonation and mirroring. This experimental setting exploits the network channel of the cloud model and successfully redirects the network traffic of other co-located VMs. The main contribution of this attack model is to find a gap in the contemporary cloud network architecture that an attacker can exploit. Prior research has also exploited the network channel using ARP poisoning and spoofing, but all such attack schemes have been countered, as modern cloud providers place more layers of security features than in preceding settings. Impersonation relies on already existing, regular network devices in order to mislead the security perimeter of the cloud model. The other contribution presented in this thesis is a privilege escalation attack, in which a non-root user can escalate their privilege level by using a return-oriented programming (ROP) technique on the network channel and take control of the management domain, through which the attacker can control other co-located VMs that they are not authorized to access. Finally, a countermeasure has been proposed, implemented by directly modifying the open-source code of the cloud model, that can inhibit all such attacks.
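    The countermeasure itself is described as a modification to the cloud platform's source code; as a loose, hypothetical illustration of the general idea of pinning expected bindings, the sketch below checks observed IP-to-MAC mappings on a virtual switch against an allow-list and flags mismatches of the kind that would accompany ARP-style impersonation or mirroring. The addresses and data structures are invented for this example.

```python
# Expected IP -> MAC bindings for co-located VMs, as known to the cloud controller.
EXPECTED_BINDINGS = {
    "10.0.0.11": "fa:16:3e:aa:bb:01",
    "10.0.0.12": "fa:16:3e:aa:bb:02",
}

def check_arp_table(observed):
    """Return alerts for any observed binding that contradicts the
    controller's view, a symptom of ARP-based impersonation."""
    alerts = []
    for ip, mac in observed.items():
        expected = EXPECTED_BINDINGS.get(ip)
        if expected is not None and mac.lower() != expected:
            alerts.append(f"{ip} claims {mac}, expected {expected}")
    return alerts

print(check_arp_table({"10.0.0.11": "fa:16:3e:de:ad:99"}))
```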

    Will SDN be part of 5G?

    For many, this is no longer a valid question, and the case is considered settled, with SDN/NFV (Software Defined Networking/Network Function Virtualization) providing the inevitable innovation enablers that solve many outstanding management issues regarding 5G. However, given the monumental task of softwarization of the radio access network (RAN) while 5G is just around the corner, and given that some companies have already started unveiling their 5G equipment, the concern is very realistic that we may only see some point solutions involving SDN technology instead of a fully SDN-enabled RAN. This survey paper identifies the important obstacles in the way and reviews the state of the art of the relevant solutions. This survey differs from previous surveys on SDN-based RAN in that it focuses on the salient problems and discusses solutions proposed both within and outside the SDN literature. Our main focus is on fronthaul, backward compatibility, the supposedly disruptive nature of SDN deployment, business cases and monetization of SDN-related upgrades, the latency of general-purpose processors (GPP), and the additional security vulnerabilities that softwarization brings to the RAN. We also provide a summary of the architectural developments in the SDN-based RAN landscape, as not all work can be covered under the focused issues. This paper provides a comprehensive survey of the state of the art of SDN-based RAN and clearly points out the gaps in the technology.

    Analyzing audit trails in a distributed and hybrid intrusion detection platform

    Efforts have been made over the last decades to design and perfect Intrusion Detection Systems (IDS). In addition to the widespread use of Intrusion Prevention Systems (IPS) as perimeter defense devices in systems and networks, various IDS solutions are used together as elements of holistic approaches to cyber security incident detection and prevention, including Network Intrusion Detection Systems (NIDS) and Host Intrusion Detection Systems (HIDS). Nevertheless, specific IDS and IPS technologies face several effectiveness challenges in responding to the increasing scale and complexity of information systems and the sophistication of attacks. The use of isolated IDS components, focused on one-dimensional approaches, strongly limits a common analysis based on evidence correlation. Today, most organizations’ cyber-security operations centers still rely on conventional SIEM (Security Information and Event Management) technology. However, SIEM platforms also have significant drawbacks in dealing with heterogeneous and specialized security event sources, lacking support for flexible and uniform multi-level analysis of security audit trails involving distributed and heterogeneous systems. In this thesis, we propose an auditing solution that leverages different intrusion detection components and synergistically combines them in a Distributed and Hybrid IDS (DHIDS) platform, taking advantage of their benefits while overcoming the effectiveness drawbacks of each. In this approach, security events are detected by multiple probes forming a pervasive, heterogeneous and distributed monitoring environment spread over the network, integrating NIDS, HIDS and specialized honeypot probing systems. Events from these heterogeneous sources are converted to a canonical representation format, and then conveyed through a publish-subscribe middleware to a dedicated logging and auditing system, built on top of an elastic and scalable document-oriented storage system. The aggregated events can then be queried and matched against suspicious attack-signature patterns, by means of a proposed declarative query language that provides event-correlation semantics.
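    As a hedged sketch of the canonical-representation step described above, the snippet below maps two differently shaped probe events (a NIDS alert and a HIDS log entry) onto one common schema before they would be handed to the publish-subscribe layer. The field names and the in-memory "bus" and "store" are stand-ins for whatever format, middleware, and document store the DHIDS platform actually uses.

```python
from datetime import datetime, timezone

def canonical_event(source, host, signature, severity, raw):
    """Common schema shared by all probes before publication."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": source,          # e.g. "nids", "hids", "honeypot"
        "host": host,
        "signature": signature,
        "severity": severity,
        "raw": raw,
    }

class Bus:
    """Stand-in for the publish-subscribe middleware."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, fn):
        self.subscribers.append(fn)
    def publish(self, event):
        for fn in self.subscribers:
            fn(event)

bus = Bus()
store = []                      # stand-in for the document-oriented store
bus.subscribe(store.append)

bus.publish(canonical_event("nids", "10.0.0.5", "ET SCAN Nmap", 3,
                            "alert tcp any any -> ..."))
bus.publish(canonical_event("hids", "web01", "failed sudo", 2,
                            "pam_unix(sudo:auth): authentication failure"))
print(len(store), "events aggregated")
```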

    Computational Resource Abuse in Web Applications

    Internet browsers include Application Programming Interfaces (APIs) to support Web applications that require complex functionality, e.g., to let end users watch videos, make phone calls, and play video games. Meanwhile, many Web applications employ the browser APIs to rely on the user's hardware to execute intensive computation, access the Graphics Processing Unit (GPU), use persistent storage, and establish network connections. However, providing access to the system's computational resources, i.e., processing, storage, and networking, through the browser creates an opportunity for attackers to abuse these resources. Principally, the problem occurs when an attacker compromises a Web site and includes malicious code to abuse its visitors' computational resources. For example, an attacker can abuse the user's system networking capabilities to perform a Denial of Service (DoS) attack against third parties. What is more, computational resource abuse has not received widespread attention from the Web security community because most of the current specifications focus on content and session properties such as isolation, confidentiality, and integrity. Our primary goal is to study computational resource abuse and to advance the state of the art by providing a general attacker model, multiple case studies, a thorough analysis of available security mechanisms, and a new detection mechanism. To this end, we implemented and evaluated three scenarios where attackers use multiple browser APIs to abuse networking, local storage, and computation. Depending on the scenario, an attacker can use browsers to perform Denial of Service against third-party Web sites, create a network of browsers to store and distribute arbitrary data, or use browsers to establish anonymous connections similar to The Onion Router (Tor). Our analysis also includes a real-life resource abuse case found in the wild, i.e., CryptoJacking, where thousands of Web sites forced their visitors to perform crypto-currency mining without their consent. In the general case, the attacks presented in this thesis share the attacker model and two key characteristics: 1) the browser's end user remains oblivious to the attack, and 2) the attacker has to invest few resources in comparison to the resources they obtain. In addition to the analysis of the attacks, we present how existing and upcoming security enforcement mechanisms from Web security can hinder an attacker, and discuss their drawbacks. Moreover, we propose a novel detection approach based on browser API usage patterns. Finally, we evaluate the accuracy of our detection model, after training it with the real-life crypto-mining scenario, through a large-scale analysis of the most popular Web sites.
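    The detection approach is described above only as being based on browser API usage patterns; a minimal, hypothetical sketch of that idea is shown below, scoring a page visit by how heavily it uses APIs associated with background computation, storage, and networking, and flagging it when the score crosses an assumed threshold. The API names are real browser APIs, but the weights and threshold are invented for illustration, not the model trained in the thesis.

```python
# Hypothetical weights for browser APIs commonly involved in resource abuse.
API_WEIGHTS = {
    "WebWorker": 2.0,         # background computation
    "WebAssembly": 2.5,       # fast local computation (e.g. mining kernels)
    "WebSocket": 1.5,         # long-lived network channels
    "RTCPeerConnection": 1.5,
    "IndexedDB": 1.0,         # persistent storage
}

def abuse_score(api_call_counts):
    """Weighted sum of observed API call counts for one page visit."""
    return sum(API_WEIGHTS.get(api, 0.0) * n for api, n in api_call_counts.items())

def is_suspicious(api_call_counts, threshold=50.0):
    return abuse_score(api_call_counts) >= threshold

observed = {"WebAssembly": 12, "WebWorker": 8, "WebSocket": 3}
print(abuse_score(observed), is_suspicious(observed))
```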

    Technologies for digital twin applications in construction

    The construction industry is facing enormous pressure to adopt digital solutions to solve the industry's inherent problems. The digital twin has emerged as a solution that can update a BIM model with real-time data to achieve cyber-physical integration, enabling real-time monitoring of assets and activities and improving decision-making. The application of digital twins in the construction industry is still in its nascent stages but has been steadily growing over the past few years. A wide variety of emerging technologies are being used in the development of digital twins for diverse applications in construction, but it is not immediately clear from the literature which ones are key to their successful development, necessitating a systematic literature review with a focus on technologies. This paper aims to identify the key technologies used in the development of digital twins in construction in the existing literature, the research gaps, and potential areas for future research. This is achieved by conducting a systematic review of studies with demonstrative case studies and experimental setups in construction. Based on the observed research gaps, prominent future research directions are suggested, focusing on technologies for data transmission, interoperability and data integration, and data processing and visualisation.
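    As a minimal, hedged illustration of the cyber-physical loop described above, the sketch below feeds a stream of sensor readings into an in-memory "twin" keyed by BIM element ID and raises a flag when a monitored value leaves its design range. The element IDs, properties, and limits are assumptions made solely for this example.

```python
class DigitalTwin:
    """Toy digital twin: BIM element IDs mapped to their latest live state."""
    def __init__(self, design_limits):
        self.design_limits = design_limits   # element_id -> (min, max)
        self.state = {}

    def ingest(self, element_id, prop, value):
        """Update the twin with one sensor reading and check design limits."""
        self.state.setdefault(element_id, {})[prop] = value
        lo, hi = self.design_limits.get(element_id, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            print(f"ALERT: {element_id}.{prop} = {value} outside design range")

# Hypothetical concrete-curing temperature limits for one slab element.
twin = DigitalTwin({"slab_L2_A03": (5.0, 35.0)})
for reading in [("slab_L2_A03", "temperature_c", 22.4),
                ("slab_L2_A03", "temperature_c", 38.1)]:
    twin.ingest(*reading)
```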

    Simulated penetration testing and mitigation analysis

    As corporate networks and Internet services are becoming increasingly complex, it is hard to keep an overview of all deployed software, their potential vulnerabilities, and all existing security protocols. Simulated penetration testing was proposed to extend regular penetration testing by transferring gathered information about a network into a formal model and simulating an attacker in this model. Having a formal model of a network enables us to add a defender who tries to mitigate the capabilities of the attacker with their own actions. We call this two-player planning task Stackelberg planning. The goal is to help administrators, penetration testing consultants, and the management level find the weak spots of large computer infrastructure and to suggest cost-effective mitigations to lower the security risk. In this thesis, we first lay the formal and algorithmic foundations for Stackelberg planning tasks. By building on a classical planning framework, we can benefit from well-studied heuristics, pruning techniques, and other approaches to speed up the search, for example symbolic search. Second, we design a theory of privilege escalation and demonstrate the applicability of our framework to local computer networks. Third, we apply our framework to Internet-wide scenarios by investigating the robustness of both the email infrastructure and the web. Fourth, we make our findings and our toolchain easily accessible via web-based user interfaces.
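    Stackelberg planning is defined formally in the thesis; the brute-force sketch below only illustrates the leader-follower structure of the idea: the defender commits to a set of mitigations within a budget, the attacker then responds with whatever remains reachable on the unblocked attack edges, and the defender keeps the commitment that minimises the attacker's value. The toy attack graph, costs, and rewards are assumptions for this example, not data from the thesis.

```python
from itertools import combinations

# Toy attack graph: edge = (from, to, exploit); the attacker starts at "internet".
EDGES = [("internet", "web", "cve_web"), ("web", "db", "weak_creds"),
         ("web", "admin", "cve_ssh"), ("admin", "db", "trust")]
REWARD = {"db": 100, "admin": 40, "web": 10}
MITIGATION_COST = {"cve_web": 3, "weak_creds": 1, "cve_ssh": 2, "trust": 2}

def attacker_value(blocked):
    """Follower: best response = total reward of everything still reachable."""
    reached, frontier = {"internet"}, ["internet"]
    while frontier:
        node = frontier.pop()
        for src, dst, exploit in EDGES:
            if src == node and exploit not in blocked and dst not in reached:
                reached.add(dst)
                frontier.append(dst)
    return sum(REWARD.get(n, 0) for n in reached)

def best_defense(budget):
    """Leader: enumerate mitigation sets within budget, minimise attacker value."""
    mitigations = list(MITIGATION_COST)
    best = (attacker_value(set()), 0, set())
    for r in range(1, len(mitigations) + 1):
        for combo in combinations(mitigations, r):
            cost = sum(MITIGATION_COST[m] for m in combo)
            value = attacker_value(set(combo))
            if cost <= budget and (value, cost) < (best[0], best[1]):
                best = (value, cost, set(combo))
    return best

print(best_defense(budget=3))
```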

    Automatic Algorithm Selection for Complex Simulation Problems

    To select the most suitable simulation algorithm for a given task is often difficult. This is due to intricate interactions between model features, implementation details, and the runtime environment, which may strongly affect overall performance. The thesis consists of three parts. The first part surveys existing approaches to solving the algorithm selection problem and discusses techniques to analyze simulation algorithm performance. The second part introduces a software framework for automatic simulation algorithm selection, which is evaluated in the third part.
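    The thesis addresses algorithm selection through a general software framework; the sketch below only illustrates the simplest empirical flavour of the idea: time each candidate simulation algorithm on a short trial run of the concrete model and commit to the fastest one for the full run. The candidate "simulators" and the trial workload are stand-ins invented for this example.

```python
import time

def select_algorithm(candidates, model, trial_steps=1_000):
    """Empirical algorithm selection: run each candidate briefly on the
    given model and return the one with the lowest measured runtime."""
    timings = {}
    for name, simulate in candidates.items():
        start = time.perf_counter()
        simulate(model, trial_steps)
        timings[name] = time.perf_counter() - start
    best = min(timings, key=timings.get)
    return best, timings

# Stand-in "simulators": a dense and a sparse event-processing loop.
def sim_dense(model, steps):
    return sum(i * model["rate"] for i in range(steps))

def sim_sparse(model, steps):
    return sum(i * model["rate"] for i in range(0, steps, 10))

print(select_algorithm({"dense": sim_dense, "sparse": sim_sparse},
                       {"rate": 0.3}))
```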