
    Identifying and Detecting Attacks in Industrial Control Systems

The integrity of industrial control systems (ICS) found in utilities, oil and natural gas pipelines, manufacturing plants and transportation is critical to national wellbeing and security. Such systems depend on hundreds of field devices to manage and monitor a physical process. Previously, these devices were specific to ICS, but they are now being replaced by general-purpose computing technologies and, increasingly, augmented with Internet of Things (IoT) nodes. Whilst this approach brings benefits in cost and flexibility, it has attracted a wider community of adversaries. These include attackers with significant domain knowledge, such as those responsible for the attacks on Iran's nuclear facilities, a steel mill in Germany, and Ukraine's power grid; however, non-specialist attackers are becoming increasingly interested in the physical damage they can cause. At the same time, the approach increases the number and range of vulnerabilities to which ICS are subject; regrettably, conventional techniques for analysing such a large attack space are inadequate, a cause of major national concern. In this thesis we introduce a generalisable approach based on evolutionary multiobjective algorithms to assist in identifying vulnerabilities in complex, heterogeneous ICS, a challenging area in which research is currently lacking. Our approach has been to review the security of currently deployed ICS, and then to use an internationally recognised ICS simulation testbed for experiments, assuming that the attacking community largely lacks specific ICS knowledge. Using the simulator, we identified vulnerabilities in individual components and then used these to generate attacks. A defence against these attacks, in the form of novel intrusion detection systems based on a range of machine learning models, was developed. Finally, this defence was itself subjected to attacks created using the evolutionary multiobjective algorithms, demonstrating, for the first time, the feasibility of creating sophisticated attacks against a well-protected adversary using automated mechanisms.
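To make the search strategy concrete, below is a minimal sketch of a Pareto-based evolutionary multiobjective search of the kind the abstract describes. The attack encoding, the two objectives (process damage and stealth), and the stand-in fitness function are all invented for illustration; the thesis's actual objectives would be computed by running candidates against the ICS simulation testbed and its intrusion detection defences.

```python
import random

# Hypothetical attack encoding: each gene perturbs one field-device
# setpoint in a simulated process (everything here is illustrative).
N_GENES, POP_SIZE, GENERATIONS = 8, 40, 100

def evaluate(attack):
    """Return (damage, stealth) for a candidate attack; both are maximized.

    Stand-in objectives: a real study would run the candidate against an
    ICS simulator and an intrusion detection system. Here damage grows
    with total perturbation while stealth shrinks with it, plus noise,
    giving the optimizer a genuine trade-off to explore.
    """
    magnitude = sum(abs(g) for g in attack)
    return magnitude, -magnitude + random.gauss(0, 0.2)

def dominates(a, b):
    """Pareto dominance: a is at least as good in every objective and
    strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(pop, scores):
    """Members of pop whose scores are not dominated by any other member."""
    return [p for i, (p, s) in enumerate(zip(pop, scores))
            if not any(dominates(scores[j], s) for j in range(len(pop)) if j != i)]

population = [[random.uniform(-1, 1) for _ in range(N_GENES)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    offspring = [[g + random.gauss(0, 0.2) for g in random.choice(population)]
                 for _ in range(POP_SIZE)]
    combined = population + offspring
    scores = [evaluate(a) for a in combined]
    # Elitist survival: keep the non-dominated set, fill up randomly.
    population = (pareto_front(combined, scores)
                  + random.sample(combined, POP_SIZE))[:POP_SIZE]

final = pareto_front(population, [evaluate(a) for a in population])
print(f"{len(final)} non-dominated attack candidates")
```

The non-dominated set that survives is the interesting output: each member is an attack that cannot be improved in one objective without sacrificing the other, which is what makes the approach useful for probing a defended system.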

    An Integrated Cybersecurity Risk Management (I-CSRM) Framework for Critical Infrastructure Protection

Risk management plays a vital role in tackling cyber threats within Cyber-Physical Systems (CPS) and in achieving overall system resilience. It enables identifying critical assets, vulnerabilities, and threats, and determining suitable proactive control measures to tackle the risks. However, due to the increased complexity of CPS, cyber-attacks nowadays are more sophisticated and less predictable, which makes the risk management task more challenging. This research aims to provide an effective Cyber Security Risk Management (CSRM) practice using asset criticality, prediction of risk types, and evaluation of the effectiveness of existing controls. We follow a number of techniques for the proposed unified approach, including fuzzy set theory for asset criticality, machine learning classifiers for risk prediction, and the Comprehensive Assessment Model (CAM) for evaluating the effectiveness of existing controls. The proposed approach considers relevant CSRM concepts such as threat actor attack patterns; Tactics, Techniques and Procedures (TTPs); controls; and assets, and maps these concepts to the VERIS Community Database (VCDB) features for the purpose of risk prediction. An accompanying tool (i-CSRMT) serves as an additional component of the proposed framework, enabling asset criticality, risk and control effectiveness calculation for continuous risk assessment. Lastly, the thesis employs a case study to validate the proposed i-CSRM framework and i-CSRMT in terms of applicability. Stakeholder feedback is collected and evaluated using criteria such as ease of use, relevance, and usability. The analysis results illustrate the validity and acceptability of both the framework and the tool for effective risk management practice within a real-world environment. The experimental results reveal that using fuzzy set theory to assess asset criticality supports stakeholders in effective risk management practice. Furthermore, the results demonstrate that the machine learning classifiers show exemplary performance in predicting different risk types, including denial of service, cyber espionage, and crimeware. Accurate prediction can help organisations model uncertainty, detect frequent cyber-attacks, identify affected assets and risk types, and employ the necessary corrective actions for their mitigation. Lastly, to evaluate the effectiveness of the existing controls, the CAM approach is used; the results show that controls such as network intrusion detection, authentication, and anti-virus exhibit high efficacy in controlling or reducing risks. Evaluating control effectiveness helps organisations to know how effective their controls are in reducing or preventing any form of risk before an attack occurs, and allows new controls to be implemented earlier. The main advantage of the CAM approach is that the parameters used are objective, consistent and applicable to CPS.
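As a sketch of how fuzzy set theory can rank asset criticality, the toy example below fuzzifies averaged CIA impact ratings with triangular membership functions and defuzzifies with a weighted centroid. The membership boundaries, representative values, and the averaging step are assumptions made for illustration, not the thesis's actual model.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising over [a, b], falling over [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative fuzzy sets over a 0-10 impact scale; all boundaries are
# assumptions made for this sketch.
def mu_low(x):    return triangular(x, -1, 0, 5)
def mu_medium(x): return triangular(x, 2, 5, 8)
def mu_high(x):   return triangular(x, 5, 10, 11)

def asset_criticality(confidentiality, integrity, availability):
    """Average the CIA impact ratings, fuzzify the result, then
    defuzzify with a weighted centroid over representative values."""
    x = (confidentiality + integrity + availability) / 3
    memberships = {"low": mu_low(x), "medium": mu_medium(x), "high": mu_high(x)}
    representative = {"low": 2.5, "medium": 5.0, "high": 8.5}
    total = sum(memberships.values()) or 1.0
    return sum(representative[k] * m for k, m in memberships.items()) / total

# A SCADA historian rated low on confidentiality impact but high on
# integrity and availability impact:
print(f"criticality = {asset_criticality(4, 9, 8):.2f} on a 0-10 scale")
```

The appeal of the fuzzy formulation is that stakeholders can express impact in rough linguistic terms (low, medium, high) while the framework still produces a single comparable criticality score per asset.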

    On Collaborative Intrusion Detection

Cyber-attacks have nowadays become more frightening than ever before. The growing dependency of our society on networked systems aggravates these threats; from interconnected corporate networks and Industrial Control Systems (ICSs) to smart households, the attack surface available to adversaries is increasing. At the same time, it is becoming evident that the classic fields of security research alone, e.g., cryptography, or isolated traditional defense mechanisms, e.g., firewalls and Intrusion Detection Systems (IDSs), are not enough to cope with the imminent security challenges. To move beyond monolithic approaches and concepts that follow a "cat and mouse" paradigm between the defender and the attacker, cyber-security research requires novel schemes. One such promising approach is collaborative intrusion detection. Driven by the lessons learned from cyber-security research over the years, this notion attempts to connect two instinctive questions: "if we acknowledge the fact that no security mechanism can detect all attacks, can we beneficially combine multiple approaches to operate together?" and "as the adversaries increasingly collaborate (e.g., Distributed Denial of Service (DDoS) attacks from ever larger botnets) to achieve their goals, can the defenders beneficially collude too?". Collaborative intrusion detection attempts to address the emerging security challenges by providing methods for IDSs and other security mechanisms (e.g., firewalls and honeypots) to combine their knowledge towards generating a more holistic view of the monitored network. This thesis improves the state of the art in collaborative intrusion detection in several areas. In particular, the dissertation proposes methods for the detection of complex attacks and the generation of the corresponding intrusion detection signatures. Moreover, a novel approach for the generation of alert datasets is given, which can assist researchers in evaluating intrusion detection algorithms and systems. Furthermore, a method for the construction of communities of collaborative monitoring sensors is given, along with a domain-awareness approach that incorporates an efficient data correlation mechanism. With regard to attacks and countermeasures, a detailed methodology is presented that focuses on sensor-disclosure attacks in the context of collaborative intrusion detection. The scientific contributions can be structured into the following categories:

Alert data generation: This thesis deals with the topic of alert data generation in a twofold manner: first, it presents novel approaches for detecting complex attacks towards generating alert signatures for IDSs; second, a method for the synthetic generation of alert data is proposed. In particular, a novel security mechanism for mobile devices is proposed that is able to support users in assessing the security status of their networks. The system can detect sophisticated attacks and generate signatures to be utilized by IDSs. The dissertation also touches on the topic of synthetic, yet realistic, dataset generation for the evaluation of intrusion detection algorithms and systems; it proposes a novel dynamic dataset generation concept that overcomes the shortcomings of the related work.

Collaborative intrusion detection: As a first step, the thesis proposes a novel taxonomy for collaborative intrusion detection, accompanied by building blocks for Collaborative IDSs (CIDSs). Moreover, the dissertation deals with the topics of (alert) data correlation and aggregation in the context of CIDSs. For this, a number of novel methods are proposed that aim at improving the clustering of monitoring sensors that exhibit similar traffic patterns. Furthermore, a novel alert correlation approach is presented that can minimize the messaging overhead of a CIDS.

Attacks on CIDSs: It is common for research on cyber-defense to switch its perspective, taking on the viewpoint of attackers, trying to anticipate their remedies against novel defense approaches. The thesis follows such an approach by focusing on a certain class of attacks on CIDSs that aim at identifying the network location of the monitoring sensors. In particular, the state of the art is advanced by proposing a novel scheme for the improvement of such attacks. Furthermore, the dissertation proposes novel mitigation techniques to overcome both the state-of-the-art and the proposed improved attacks.

Evaluation: All the proposals and methods introduced in the dissertation were evaluated qualitatively, quantitatively and empirically. A comprehensive study of the state of the art in collaborative intrusion detection was conducted via a qualitative approach, identifying research gaps and surveying the related work. To study the effectiveness of the proposed algorithms and systems, extensive simulations were utilized. Moreover, the applicability and usability of some of the contributions in the area of alert data generation were additionally supported via Proofs of Concept (PoCs) and prototypes. The majority of the contributions were published in peer-reviewed journal articles, in book chapters, and in the proceedings of international conferences and workshops.
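One of the building blocks above, clustering monitoring sensors that exhibit similar traffic patterns, can be illustrated with a short sketch. The greedy single-pass scheme and the cosine-similarity threshold below are assumptions for illustration; the dissertation's own correlation and clustering methods are more elaborate.

```python
import math

def cosine(u, v):
    """Cosine similarity between two traffic-profile vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def cluster_sensors(profiles, threshold=0.9):
    """Greedy single-pass clustering: a sensor joins the first community
    whose representative profile it matches above `threshold`."""
    communities = []  # list of (representative_profile, [sensor_ids])
    for sensor_id, profile in profiles.items():
        for rep, members in communities:
            if cosine(rep, profile) >= threshold:
                members.append(sensor_id)
                break
        else:
            communities.append((profile, [sensor_id]))
    return [members for _, members in communities]

# Per-sensor traffic profiles, e.g. normalised counts of
# (SSH, HTTP, Modbus, DNS) alerts observed in a time window.
profiles = {
    "ids-dmz-1": [0.7, 0.2, 0.0, 0.1],
    "ids-dmz-2": [0.6, 0.3, 0.0, 0.1],
    "ids-plant": [0.0, 0.1, 0.9, 0.0],
}
print(cluster_sensors(profiles))  # [['ids-dmz-1', 'ids-dmz-2'], ['ids-plant']]
```

Grouping sensors this way means alerts only need to be exchanged within, or summarised across, communities of like sensors, which is one route to the reduced messaging overhead the abstract mentions.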

    Evaluating the Efficacy of Implicit Authentication Under Realistic Operating Scenarios

Smartphones contain a wealth of personal and corporate data. Several surveys have reported that about half of smartphone owners do not configure primary authentication mechanisms (such as PINs, passwords, and fingerprint- or facial-recognition systems) on their devices to protect data, owing to usability concerns. In addition, primary authentication mechanisms have been subject to operating system flaws, smudge attacks, and shoulder surfing attacks. These limitations have prompted researchers to develop implicit authentication (IA), which authenticates a user by using distinctive, measurable patterns of device use that are gathered from device users without requiring deliberate actions. Researchers have claimed that IA has desirable security and usability properties, making it a promising candidate to mitigate the security and usability issues of primary authentication mechanisms. Our observation is that the existing evaluations of IA have a preoccupation with accuracy numbers and neglect the deployment, usability and security issues that are critical for its adoption. Furthermore, the existing evaluations have followed an ad hoc approach based on synthetic datasets and weak adversarial models. To confirm our observations, we first identify a comprehensive set of evaluation criteria for IA schemes. We gather real-world datasets and evaluate diverse and prominent IA schemes to question the efficacy of existing IA schemes and to gain insight into the pitfalls of the contemporary evaluation approach to IA. Our evaluation confirms that under realistic operating conditions, several prominent IA schemes perform poorly across key evaluation metrics and thereby fail to provide adequate security. We then examine the usability and security properties of IA by carefully evaluating promising IA schemes. Our usability evaluation shows that users like the convenience offered by IA. However, it uncovers issues due to IA's transparent operation and false rejects, which are both inherent to IA. It also suggests that detection delay and false accepts are concerns to several users. In terms of security, our evaluation based on a realistic, stronger adversarial model shows the susceptibility of highly accurate, touch-input-based IA schemes to shoulder surfing attacks and to attacks that train an attacker by leveraging raw touch data of victims. These findings exemplify the significance of realistic adversarial models. These critical security and usability challenges remained unidentified by previous research efforts due to the passive involvement of human subjects (only as behavioural data sources), which emphasizes the need for rapid prototyping and deployment of IA so that human subjects can be actively involved in IA research. To this end, we design, implement, evaluate and release in open source a framework which reduces the re-engineering effort in IA research and enables deployment of IA on off-the-shelf Android devices. The existing authentication schemes available on contemporary smartphones fail to provide both usability and security. Authenticating users based on their behaviour, as suggested by the literature on IA, is a promising idea. However, this thesis concludes that several results reported in the existing IA literature are misleading due to unrealistic evaluation conditions, and that several critical challenges in the IA domain remain to be resolved. This thesis identifies these challenges and provides the necessary tools and design guidelines to establish the future viability of IA.
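Since the abstract's argument turns on how evaluation conditions change reported error rates, the sketch below shows the standard equal error rate (EER) computation such evaluations rely on, applied to synthetic score distributions standing in for a naive versus a "trained" impostor. The distributions are invented; only the EER procedure itself is standard.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER by sweeping the decision threshold over all
    observed scores; higher score = more likely the legitimate user."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        frr = np.mean(genuine < t)    # legitimate user rejected
        far = np.mean(impostor >= t)  # attacker accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

rng = np.random.default_rng(0)
# Synthetic classifier scores: an optimistic lab setting versus a harder
# setting where a trained attacker's scores creep towards the genuine user's.
genuine = rng.normal(0.8, 0.1, 1000)
naive_impostor = rng.normal(0.3, 0.1, 1000)
trained_impostor = rng.normal(0.6, 0.1, 1000)

print(f"EER vs. naive impostors:   {equal_error_rate(genuine, naive_impostor):.3f}")
print(f"EER vs. trained impostors: {equal_error_rate(genuine, trained_impostor):.3f}")
```

Run as written, the trained-impostor EER comes out far higher than the naive one, which mirrors the abstract's point that stronger adversarial models inflate the error rates of schemes that look highly accurate in lab conditions.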

    Modeling Deception for Cyber Security

In the era of software-intensive, smart and connected systems, the growing power and sophistication of cyber attacks poses increasing challenges to software security. The reactive posture of traditional security mechanisms, such as anti-virus and intrusion detection systems, has not been sufficient to combat the wide range of advanced persistent threats that currently jeopardize systems' operation. To mitigate these threats, more active defensive approaches are necessary. Such approaches rely on the concept of actively hindering and deceiving attackers. Deceptive techniques allow for additional defense by thwarting attackers' advances through the manipulation of their perceptions. Manipulation is achieved through the use of deceitful responses, feints, misdirection, and other falsehoods in a system. Of course, such deception mechanisms may result in side effects that must be handled. Current methods for planning deception chiefly attempt to bridge military deception to cyber deception, providing only high-level instructions that largely ignore deception as part of the software security development life cycle. Consequently, little practical guidance is provided on how to engineer deception-based techniques for defense. This PhD thesis contributes a systematic approach to specify and design cyber deception requirements, tactics, and strategies. This deception approach consists of (i) multi-paradigm modeling for representing deception requirements, tactics, and strategies, (ii) a reference architecture to support the integration of deception strategies into system operation, and (iii) a method to guide engineers in deception modeling. A tool prototype, a case study, and an experimental evaluation show encouraging results for the application of the approach in practice. Finally, a conceptual coverage mapping was developed to assess the expressivity of the deception modeling language created.
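As a rough illustration of deception as an operational tactic, the sketch below models a deceptive response policy that serves fabricated banners for probes against unused services while logging every interaction (the side effects that must be handled). Service ports, banners, and the policy interface are all invented; this is not the thesis's reference architecture.

```python
import random

# Fabricated banners a decoy might serve; contents are invented.
DECOY_BANNERS = {
    21:  "220 ProFTPD 1.3.5 Server ready.",
    23:  "Ubuntu 14.04 LTS\nlogin: ",
    502: b"\x00\x01\x00\x00\x00\x05\x01\x83\x02".hex(),  # fake Modbus error frame
}

class DeceptionPolicy:
    """Serve feints for probes against unused ports; log every interaction
    so defenders can observe the attacker's advance."""

    def __init__(self, real_ports):
        self.real_ports = set(real_ports)
        self.interactions = []  # recorded side effects of deceiving

    def respond(self, src_ip, port):
        if port in self.real_ports:
            return None  # pass through to the real service, no deception
        self.interactions.append((src_ip, port))
        # Feint: a consistent decoy banner, or a random plausible one.
        return DECOY_BANNERS.get(port, random.choice(list(DECOY_BANNERS.values())))

policy = DeceptionPolicy(real_ports={80, 443})
print(policy.respond("198.51.100.7", 23))   # attacker probe -> deceptive banner
print(policy.respond("203.0.113.9", 443))   # legitimate port -> None
print(policy.interactions)
```

Even this toy version surfaces the engineering questions the thesis addresses: which responses to fabricate, how to keep them consistent over time, and how to manage the side effects deception introduces for legitimate operation.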

    Exploitation of Unintentional Information Leakage from Integrated Circuits

Unintentional electromagnetic emissions are used to recognize or verify the identity of a unique integrated circuit (IC) based on fabrication process-induced variations, in a manner analogous to biometric human identification. The effectiveness of the technique is demonstrated through an extensive empirical study, with results indicating correct device identification success rates of greater than 99.5%, and average verification equal error rates (EERs) of less than 0.05% for 40 near-identical devices. The proposed approach is suitable for security applications involving commodity commercial ICs, with substantial cost and scalability advantages over existing approaches. A systematic leakage mapping methodology is also proposed to comprehensively assess the information leakage of arbitrary block cipher implementations, and to quantitatively bound an arbitrary implementation's resistance to the general class of differential side-channel analysis techniques. The framework is demonstrated using the well-known Hamming Weight and Hamming Distance leakage models, and its effectiveness is demonstrated through the empirical assessment of two typical unprotected implementations of the Advanced Encryption Standard. The assessment results are empirically validated against correlation-based differential power and electromagnetic analysis attacks.
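The correlation-based attack used for validation can be sketched compactly. The example below applies the Hamming Weight leakage model and Pearson correlation to simulated traces; for self-containment it targets a simplified intermediate (plaintext XOR key byte) rather than the AES S-box output a real attack would use, and the noise model is an assumption.

```python
import numpy as np

HW = np.array([bin(x).count("1") for x in range(256)])  # Hamming weight table

rng = np.random.default_rng(1)
SECRET_KEY_BYTE = 0x3C
N_TRACES = 2000

# Simulated measurements: each "trace" leaks the Hamming weight of the
# intermediate value plus Gaussian noise. A real attack would use measured
# power or EM traces of an AES S-box computation.
plaintexts = rng.integers(0, 256, N_TRACES)
traces = HW[plaintexts ^ SECRET_KEY_BYTE] + rng.normal(0, 1.0, N_TRACES)

def correlation(guess):
    """Pearson correlation between hypothesised leakage and measurements."""
    hypothesis = HW[plaintexts ^ guess]
    return np.corrcoef(hypothesis, traces)[0, 1]

# Signed correlation is used here because the simulated leakage polarity is
# known; with the linear XOR target, taking |corr| would tie the true key
# with its bitwise complement, a symmetry the AES S-box removes in practice.
scores = np.array([correlation(g) for g in range(256)])
best = int(np.argmax(scores))
print(f"recovered key byte: {best:#04x} (corr={scores[best]:.3f})")
```

The leakage mapping methodology in the abstract essentially runs this kind of hypothesis-versus-measurement correlation systematically across intermediates and time samples to bound how much exploitable information an implementation leaks.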

    Analytics of Sequential Time Data from Physical Assets

Data analysis has become a necessity for industry. Working with inherited expertise only has become insufficient, expensive, not easily transferable, and mostly unavailable for new industries and facilities. Data analysis can provide decision-makers with more insight into how to manage their production, maintenance and personnel. Data collection requires the acquisition and storage of observational information about the state of the different production assets, and usually takes place in a timely manner, resulting in time series of observations. Depending on the type of data records available, the type of possible analyses will differ. Data labeled with previous human experience, in terms of identifiable faults or fatigue, can be used to build models that perform the expert's task in the future by means of supervised learning. Otherwise, if no human labeling is available, data analysis can provide insights about similar observations or visualize these similarities through unsupervised learning. Both are challenging types of analyses. The challenges are twofold: the first originates from the data and its adequacy, and the other is selecting the type of analysis, a decision made by the analyst. Data challenges are due to the substantial number of unknown sources of variation inherited in the collected data, which may sometimes include human errors. Deciding upon the type of modelling is another issue, as each model has its own assumptions, parameters to tune, and limitations. This thesis proposes four new types of time-series analysis, two of which are supervised, requiring data labelled by certain events such as failures, and two of which are unsupervised, requiring no such labelling. These analysis techniques are tested and applied on various industrial applications, namely road maintenance, bearing outer-race failure detection, cutting tool failure prediction, and turbo engine failure prediction. These techniques target minimizing the burden of choice laid on the analyst working with industrial data by providing reliable analysis tools that require fewer choices to be made. This in turn allows different industries to easily make use of their data without requiring much expertise. For prognostic purposes, a proposed modification to the binary Logical Analysis of Data (LAD) classifier is used to adaptively stratify survival curves into long-survivor and short-life sets. This model requires no parameters to be chosen and relies entirely on empirical estimations.
The proposed Logical Analysis of Survival Curves (LASC) shows a 27% improvement in prediction accuracy, in terms of mean absolute error, over the results obtained by well-known machine learning techniques. The other prognostic model is a new bidirectional Long Short-Term Memory (LSTM) neural network termed the Bidirectional Handshaking LSTM (BHLSTM). This model makes better use of short sequences by making a round pass through the given data. Moreover, the network is trained using a new safety-oriented objective function which forces the network to make safer predictions. Finally, since LSTM is a supervised technique, a novel approach for generating the target Remaining Useful Life (RUL) is proposed, requiring fewer assumptions compared to previous approaches. The proposed network architecture shows an average 18.75% decrease in the mean absolute error of predictions on the NASA turbo engine dataset. For unsupervised diagnostic purposes, a new technique for interpretable clustering is proposed, named Interpretable Clustering for Rule Extraction and Anomaly Detection (IC-READ). Interpretability means that the resulting clusters are formulated using simple conditional logic. This is very important when providing results to non-specialists, especially those in management, and eases any hardware implementation if required. The proposed technique is also non-parametric, meaning no tuning is required, and shows an average 20% improvement in cluster purity over other clustering techniques applied on 11 benchmark datasets. The resulting clusters can also be used to build an anomaly detector. The last proposed technique is a whole multivariate, variable-length time-series clustering approach using a modified Dynamic Time Warping (DTW) distance. The modified DTW gives higher matches for time series that have similar trends and magnitudes, rather than focusing on either property alone. This technique is also non-parametric and uses hierarchical clustering to group time series in an unsupervised fashion. This can be specifically useful for management when deciding on maintenance scheduling. It is also shown that it can be used along with Kernel Principal Component Analysis (KPCA) for visualizing variable-length sequences in two-dimensional plots. The unsupervised techniques can help, in cases where there is substantial variation within certain classes, to ease the supervised learning task by breaking it into smaller problems of the same nature.
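The modified DTW distance can be sketched as follows. The exact modification is not specified in the abstract, so the blend below, which mixes a magnitude term with a first-difference (trend) term via an assumed weight alpha, is one plausible reading rather than the thesis's actual formulation.

```python
import numpy as np

def modified_dtw(x, y, alpha=0.5):
    """DTW where the local cost blends the magnitude difference with a
    first-difference (trend) term, so alignments reward series that agree
    in both shape and scale. The alpha weighting is an assumption."""
    dx = np.diff(x, prepend=x[0])  # local trend of each series
    dy = np.diff(y, prepend=y[0])
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (alpha * abs(x[i - 1] - y[j - 1])
                    + (1 - alpha) * abs(dx[i - 1] - dy[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 60)
rising  = np.sin(t) + 0.05 * t
shifted = np.sin(t + 0.3) + 0.05 * t   # similar trend and magnitude
scaled  = 3 * np.sin(t)                # similar shape, different magnitude

print(f"similar series: {modified_dtw(rising, shifted):.2f}")
print(f"scaled series:  {modified_dtw(rising, scaled):.2f}")
```

Because the distance works on whole variable-length sequences without parameters to tune, its output can feed directly into hierarchical clustering, matching the unsupervised pipeline the abstract describes.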