94 research outputs found

    Mitigating Insider Threat in Relational Database Systems

    The dissertation addresses the factors and capabilities that enable insiders to violate system security. It models the knowledge that insiders accumulate through legitimate accesses, and it analyzes the dependencies and constraints among data items, representing them with graph-based methods. The dissertation proposes new types of Knowledge Graphs (KGs) to represent insiders' knowledgebases. Furthermore, it introduces the Neural Dependency and Inference Graph (NDIG) and the Constraints and Dependencies Graph (CDG) to capture the dependencies and constraints among data items. The dissertation discusses in detail how insiders combine knowledgebases with these dependencies and constraints to obtain unauthorized knowledge, and it proposes new approaches to predict and prevent this threat. The proposed models use KGs, NDIG, and CDG to analyze the threat status, and they leverage the effect of updates on the lifetimes of data items in insiders' knowledgebases to prevent the threat without affecting the availability of data items. Furthermore, the dissertation uses this idea to order the operations of concurrent tasks so that write operations updating risky data items in knowledgebases execute before those items can be used in unauthorized inferences. In addition to unauthorized knowledge, the dissertation discusses how insiders can make unauthorized modifications to sensitive data items. It introduces new approaches to build Modification Graphs that capture the authorized and unauthorized data items insiders are able to update. To prevent this threat, the dissertation provides two methods: hiding sensitive dependencies and denying risky write requests. In addition to traditional RDBMSs, the dissertation investigates insider threat in cloud relational database systems (cloud RDBMS). 
It discusses the vulnerabilities in the cloud computing structure that may enable insiders to launch attacks. To prevent such threats, the dissertation suggests three models and addresses the advantages and limitations of each. To demonstrate the correctness and effectiveness of the proposed approaches, the dissertation uses well-stated algorithms, theorems, proofs, and simulations. The simulations were executed with various parameters representing the different conditions and environments in which tasks execute.
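The inference-channel idea described above can be sketched as a small graph computation: given the items an insider already knows, expand the knowledgebase until no new item is inferable, and deny a read that would expose a sensitive item. This is a minimal illustration; the item names, the graph, and the rule "an item is inferable once all of its dependencies are known" are assumptions for exposition, not the dissertation's exact CDG/NDIG semantics.

```python
# dependencies[x] = set of items that together allow inferring x
# (illustrative data, not the dissertation's model)
dependencies = {
    "salary": {"grade", "pay_scale"},
    "grade": {"title"},
}

def inference_closure(known):
    """Expand the insider's knowledgebase until no new item is inferable."""
    known = set(known)
    changed = True
    while changed:
        changed = False
        for item, deps in dependencies.items():
            if item not in known and deps <= known:
                known.add(item)
                changed = True
    return known

def is_risky(known, requested, sensitive):
    """Would granting `requested` let the insider infer a sensitive item?"""
    before = inference_closure(known)
    after = inference_closure(set(known) | {requested})
    return bool((after - before) & set(sensitive))
```

For example, an insider who already knows `pay_scale` and requests `title` can chain `title -> grade -> salary`, so the request would be flagged as risky, while the same request from an empty knowledgebase would not.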

    Security and Privacy for Modern Wireless Communication Systems

    This reprint focuses on the latest protocol research, software/hardware development and implementation, and system architecture design addressing emerging security and privacy issues for modern wireless communication networks. Relevant topics include, but are not limited to, the following: deep-learning-based security and privacy design; covert communications; information-theoretical foundations for advanced security and privacy techniques; lightweight cryptography for power-constrained networks; physical layer key generation; prototypes and testbeds for security and privacy solutions; encryption and decryption algorithms for low-latency constrained networks; security protocols for modern wireless communication networks; network intrusion detection; physical layer design with security considerations; anonymity in data transmission; vulnerabilities in security and privacy in modern wireless communication networks; challenges of security and privacy in node–edge–cloud computation; security and privacy design for low-power wide-area IoT networks; security and privacy design for vehicle networks; security and privacy design for underwater communications networks.

    Feature Space Modeling for Accurate and Efficient Learning From Non-Stationary Data

    A non-stationary dataset is one whose statistical properties, such as the mean, variance, correlation, and probability distribution, change over a specific interval of time. A stationary dataset, by contrast, is one whose statistical properties remain constant over time. Beyond its volatile statistical properties, non-stationary data poses further challenges of time and memory management: computational resources are limited, while recent advances in data collection technologies generate a variety of data at a rapid pace and in high volume. Additionally, when the collected data is complex, managing that complexity, which arises from dimensionality and heterogeneity, poses another challenge for effective computational learning. The problem is to enable accurate and efficient learning from non-stationary data in a continuous fashion over time while simultaneously managing the critical challenges of time, memory, concept change, and complexity. Feature space modeling is one of the most effective solutions to this problem. For non-stationary data, selecting relevant features is even more critical than for stationary data, because reducing the feature dimension ensures the best use of computational resources, yielding higher accuracy and efficiency from data mining algorithms. In this dissertation, we investigated a variety of feature space modeling techniques to improve the overall performance of data mining algorithms. In particular, we built a Relief-based feature sub-selection method in combination with data complexity analysis to improve classification performance using ovarian cancer image data collected in a non-stationary batch mode. We also collected time-series health sensor data in a streaming environment and deployed feature space transformation using Singular Value Decomposition (SVD). 
This led to a reduced feature-space dimensionality, resulting in better accuracy and efficiency from the Density Ratio Estimation method in identifying potential change points in data over time. We also built an unsupervised feature space model using matrix factorization and Lasso Regression, which was successfully deployed in conjunction with Relative Density Ratio Estimation to address botnet attacks in a non-stationary environment. The Relief-based feature model improved the accuracy of the Fuzzy Forest classifier by 16%. For the change-detection framework, we observed a 9% improvement in accuracy with PCA feature transformation. With the unsupervised feature selection model, at 2% and 5% malicious traffic ratios the proposed botnet detection framework exhibited, on average, 20% better accuracy than a One-Class Support Vector Machine (OSVM) and 25% better accuracy than an autoencoder. All these results demonstrate the effectiveness of these feature space models. The fundamental theme repeated throughout this dissertation is modeling an efficient feature space to improve both the accuracy and the efficiency of the selected data mining models. Every contribution has subsequently been employed successfully to capitalize on those advantages and solve real-world problems. Our work bridges concepts from multiple disciplines in effective and surprising ways, leading to new insights, new frameworks, and ultimately to cross-fertilization among diverse fields such as mathematics, statistics, and data mining.
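The SVD-based feature-space transformation described above can be sketched in a few lines: compute a truncated SVD on a window of sensor data, then project new streaming samples into the reduced space using the same basis. The window size, rank, and synthetic data are illustrative assumptions, not the dissertation's experimental setup.

```python
import numpy as np

# Synthetic "sensor window": 50 samples, 8 features (illustrative only).
rng = np.random.default_rng(0)
window = rng.normal(size=(50, 8))
mean = window.mean(axis=0)

# Truncated SVD: keep the top-k singular directions of the centred window.
k = 3
U, s, Vt = np.linalg.svd(window - mean, full_matrices=False)
reduced = U[:, :k] * s[:k]          # the window in the k-dim feature space

# New samples arriving from the stream are projected with the same basis,
# so downstream change-point detection works in k dimensions, not 8.
new_sample = rng.normal(size=8)
projected = (new_sample - mean) @ Vt[:k].T
```

The design point this illustrates is that the basis `Vt[:k]` is fitted once per window, so each incoming sample costs only one small matrix-vector product.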

    Performance Evaluation of Network Anomaly Detection Systems

    Nowadays, there is a huge and growing concern about security in information and communication technology (ICT) among the scientific community, because any attack or anomaly in the network can greatly affect many domains such as national security, private data storage, social welfare, and economic issues. Anomaly detection is therefore a broad research area, and many different techniques and approaches for this purpose have emerged through the years. Attacks, problems, and internal failures, when not detected early, may badly harm an entire network system. Thus, this thesis presents an autonomous profile-based anomaly detection system based on the statistical method Principal Component Analysis (PCADS-AD). This approach creates a network profile called the Digital Signature of Network Segment using Flow Analysis (DSNSF) that denotes the predicted normal behavior of network traffic activity through historical data analysis. That digital signature is used as a threshold for volume anomaly detection, identifying disparities from the normal traffic trend. The proposed system uses seven traffic-flow attributes: bits, packets, and number of flows to detect problems, and source and destination IP addresses and ports to provide the network administrator with the information necessary to solve them. Via evaluation techniques, the addition of a different anomaly detection approach, and comparisons with other methods using real network traffic data, the results showed good traffic prediction by the DSNSF and encouraging false-alarm rates and detection accuracy for the detection scheme. 
The observed results seek to advance the state of the art in anomaly detection methods and strategies, aiming to surpass challenges that emerge from the constant growth in complexity, speed, and size of today's large-scale networks. Moreover, the low complexity and agility of the proposed system allow it to be applied to detection in real time.
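The profile-as-threshold idea above can be sketched with a PCA reconstruction-error check: fit a principal subspace to historical traffic volumes, take a high percentile of historical reconstruction errors as the "normal" threshold, and flag intervals that exceed it. The synthetic data, rank, and percentile are illustrative assumptions, not the thesis's DSNSF construction.

```python
import numpy as np

# Historical "normal" traffic: 200 intervals x (bits, packets, flows).
rng = np.random.default_rng(1)
history = rng.normal(100.0, 5.0, size=(200, 3))

mean = history.mean(axis=0)
_, _, Vt = np.linalg.svd(history - mean, full_matrices=False)
basis = Vt[:1]                      # principal subspace (rank 1, illustrative)

def reconstruction_error(x):
    """Distance from the interval to the learned normal subspace."""
    centred = x - mean
    return float(np.linalg.norm(centred - (centred @ basis.T) @ basis))

# Profile threshold: 99th percentile of historical errors.
errors = [reconstruction_error(x) for x in history]
threshold = float(np.percentile(errors, 99))

anomalous_interval = np.array([500.0, 400.0, 300.0])  # sudden volume spike
```

An interval matching the historical profile reconstructs almost exactly, while the spike lands far from the subspace and exceeds the threshold.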

    Formal modelling of intrusion detection systems

    The cybersecurity ecosystem continuously evolves in the number, diversity, and complexity of cyber attacks. There are generally three types of Intrusion Detection System (IDS): anomaly-based detection, signature-based detection, and hybrid detection. Anomaly-based detection characterizes the usual behavior of the system, typically statistically. It can detect known or unknown attacks, but it also generates a very large number of false positives. Signature-based detection detects known attacks by defining rules that describe known attacker behavior; it requires good knowledge of that behavior. Hybrid detection relies on several detection methods, including the previous ones, and has the advantage of being more precise during detection. Tools such as Snort and Zeek offer low-level languages for expressing attack-recognition rules. Because the number of potential attacks is very large, these rule bases quickly become hard to manage and maintain. Moreover, expressing stateful rules that recognize a sequence of events is particularly arduous. In this thesis, we propose a stateful approach based on algebraic state-transition diagrams (ASTDs) to identify complex attacks. ASTDs allow a graphical and modular representation of a specification, which facilitates the maintenance and understanding of rules. We extend the ASTD notation with new features to represent complex attacks. Next, we specify several attacks with the extended notation and run the resulting specifications on event streams using an interpreter to identify attacks. We also evaluate the performance of the interpreter against industrial tools such as Snort and Zeek. 
Then, we build a compiler to generate executable code from an ASTD specification that can efficiently identify sequences of events.
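The kind of stateful, sequence-recognizing rule discussed above can be illustrated with a plain state machine: recognize several failed logins followed by a success from the same source. This is only a toy analogue of what a stateful rule tracks, not the ASTD notation itself; the event fields and the rule are assumptions for illustration.

```python
from collections import defaultdict

def detect_bruteforce(events, max_failures=3):
    """Return sources whose stream matches: >= max_failures failures,
    then a success, i.e. a simple stateful sequence rule."""
    failures = defaultdict(int)   # per-source state carried across events
    alerts = []
    for ev in events:
        src, kind = ev["src"], ev["kind"]
        if kind == "login_failure":
            failures[src] += 1
        elif kind == "login_success":
            if failures[src] >= max_failures:
                alerts.append(src)
            failures[src] = 0     # reset state after a success
    return alerts

stream = (
    [{"src": "10.0.0.5", "kind": "login_failure"}] * 3
    + [{"src": "10.0.0.5", "kind": "login_success"},
       {"src": "10.0.0.9", "kind": "login_success"}]
)
```

Even this tiny rule shows why stateless pattern matching is not enough: the decision at the `login_success` event depends on state accumulated from earlier events.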

    Immunology Inspired Detection of Data Theft from Autonomous Network Activity

    The threat of data theft posed by self-propagating, remotely controlled bot malware is increasing. Cyber criminals are motivated to steal sensitive data, such as user names, passwords, account numbers, and credit card numbers, because these items can be parlayed into cash. For anonymity and economy of scale, bot networks have become the cyber criminal’s weapon of choice. In 2010 a single botnet included over one million compromised host computers, and one of the largest botnets in 2011 was specifically designed to harvest financial data from its victims. Unfortunately, current intrusion detection methods are unable to effectively detect data extraction techniques employed by bot malware. The research described in this dissertation addresses that problem. This work builds on a foundation of research regarding artificial immune systems (AIS) and botnet activity detection. It is the first to isolate and assess features derived from human-computer interaction in the detection of data theft by bot malware, and the first to report on a novel use of the HTTP protocol by a contemporary variant of the Zeus bot.
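The human-computer-interaction feature idea above can be sketched as a simple check: flag outbound transmissions that no keyboard or mouse activity preceded within a short window, suggesting autonomous rather than user-driven activity. The field layout and the 5-second window are illustrative assumptions, not the dissertation's actual feature set.

```python
WINDOW = 5.0  # seconds of user activity considered "recent" (assumption)

def autonomous_transmissions(interactions, transmissions):
    """Return transmission timestamps with no user input in the window
    immediately before them, i.e. candidates for bot-driven exfiltration."""
    flagged = []
    for t in transmissions:
        if not any(0 <= t - i <= WINDOW for i in interactions):
            flagged.append(t)
    return flagged

user_input = [10.0, 11.2, 30.5]   # keystroke/mouse timestamps
outbound = [12.0, 20.0, 31.0]     # outbound HTTP POST timestamps
```

Here the transmissions at 12.0 and 31.0 follow recent user input, while the one at 20.0 does not, so only it is flagged.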

    Anomaly diagnosis in industrial control systems for digital forensics

    Over several decades, Industrial Control Systems (ICS) have become more interconnected and highly programmable. An increasing number of sophisticated cyber-attacks have targeted ICS with the aim of causing tangible damage. Despite the stringent functional safety requirements mandated within ICS environments, critical national infrastructure (CNI) sectors and ICS vendors have been slow to address the growing cyber threat. In contrast with the design of information technology (IT) systems, security has not typically been an intrinsic design principle for ICS components, such as Programmable Logic Controllers (PLCs). These factors have motivated substantial research addressing anomaly detection in the context of ICS. However, detecting incidents alone does not assist with the response and recovery activities that are necessary for ICS operators to resume normal service. Understanding the provenance of anomalies has the potential to enable the proactive implementation of security controls and reduce the risk of future attacks. Digital forensics provides solutions by dissecting and reconstructing evidence from an incident. However, it has typically been positioned from a post-incident perspective, which inhibits rapid triaging and effective response and recovery, an essential requirement in critical ICS. This thesis focuses on anomaly diagnosis, which involves the analysis of and discrimination between different types of anomalous event, positioned at the intersection of anomaly detection and digital forensics. An anomaly diagnosis framework is proposed that includes mechanisms to aid ICS operators in anomaly triaging and incident response. PLCs are a fundamental focus within this thesis due to their critical role and ubiquitous application in ICS. An examination of generalisable PLC data artefacts produced a taxonomy of artefact data types that focus on the device data generated and stored in PLC memory. 
Using the artefacts defined in this first stage, an anomaly contextualisation model is presented that differentiates between cyber-attack and system-fault anomalies. Subsequently, an attack fingerprinting approach (PLCPrint) generates near real-time compositions of memory fingerprints within 200 ms by correlating the static and dynamic behaviour of PLC registers. This establishes attack type and technique provenance and maintains the chain of evidence for digital forensic investigations. To evaluate the efficacy of the framework, a physical ICS testbed modelled on a water treatment system is implemented. Multiple PLC models are evaluated to demonstrate the vendor neutrality of the framework. Furthermore, several generalised attack scenarios are conducted based on techniques identified from real PLC malware. The results indicate that PLC device artefacts are particularly powerful at detecting and contextualising an anomaly. In general, we achieve high F1 scores of at least 0.98 and 0.97 for anomaly detection and contextualisation, respectively, which are highly competitive with the existing state-of-the-art literature. The performance of PLCPrint emphasises how PLC memory snapshots can precisely and rapidly provide provenance, classifying cyber-attacks with an accuracy of 0.97 in less than 400 ms. The proposed framework offers a much-needed novel approach through which ICS components can be rapidly triaged for effective response.
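The memory-fingerprinting idea above can be sketched as hashing the static region of a PLC register snapshot and comparing the digest against a known-good set: a logic-modification attack changes a static value, so the fingerprint no longer matches. The register names, the static/dynamic split, and the hash composition here are illustrative assumptions, not PLCPrint's actual method.

```python
import hashlib

def fingerprint(registers, static_keys):
    """Hash the static portion of a register snapshot into a fingerprint."""
    payload = ",".join(f"{k}={registers[k]}" for k in sorted(static_keys))
    return hashlib.sha256(payload.encode()).hexdigest()

# Known-good snapshot of a (hypothetical) PLC's memory.
baseline = {"program_crc": 0x1A2B, "mode": "RUN", "level_setpoint": 750}
static_keys = ["program_crc", "mode"]     # expected to stay constant
known_good = {fingerprint(baseline, static_keys)}

# A logic-modification attack alters the program, changing its CRC,
# so the snapshot's fingerprint falls outside the known-good set.
tampered = dict(baseline, program_crc=0x9F00)
```

Note that `level_setpoint` is treated as dynamic and excluded from the digest, so routine process changes do not raise alarms; only the registers expected to be static affect the fingerprint.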

    A Quantitative Research Study on Probability Risk Assessments in Critical Infrastructure and Homeland Security

    This dissertation encompassed quantitative research on probabilistic risk assessment (PRA) elements in homeland security and their impact on critical infrastructure and key resources. There are 16 critical infrastructure sectors in homeland security, representing assets, system networks, virtual and physical environments, roads and bridges, transportation, and air travel. The design incorporated Bayes' theorem, which PRAs use when determining potential or probable events, causes, outcomes, and risks. The goal is to mitigate the effects of domestic terrorism and of natural and man-made disasters, respond to events related to critical infrastructure that can impact the United States, and help protect and secure natural gas pipelines and electrical grid systems. This study provides data from current risk assessment trends in PRAs that can be applied in elements of homeland security and the criminal justice system to help protect critical infrastructures. The dissertation highlights aspects of the U.S. Department of Homeland Security National Infrastructure Protection Plan (NIPP). In addition, this framework was employed to examine the criminal justice triangle, explore crime problems and emergency-preparedness solutions to protect critical infrastructures, and analyze data relevant to risk assessment procedures for each critical infrastructure identified. Finally, the study addressed the drivers and gaps in research related to protecting and securing natural gas pipelines and electrical grid systems.
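The role of Bayes' theorem in a PRA can be made concrete with a worked example: the posterior probability that an attack is under way given a sensor alert. The probabilities below are illustrative assumptions, not figures from the study.

```python
# Bayes' theorem: P(attack | alert) =
#   P(alert | attack) * P(attack) / P(alert)
p_attack = 0.001          # prior probability of an attack (assumed)
p_alert_attack = 0.95     # P(alert | attack): sensor sensitivity (assumed)
p_alert_benign = 0.02     # P(alert | no attack): false-alarm rate (assumed)

# Total probability of seeing an alert at all.
p_alert = p_alert_attack * p_attack + p_alert_benign * (1 - p_attack)

posterior = p_alert_attack * p_attack / p_alert
```

Even with a sensitive sensor, the posterior here is only about 4.5%, because benign false alarms dominate when the prior is low; this base-rate effect is exactly why PRAs combine likelihoods with explicit priors rather than reading alerts at face value.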

    User-centred visualisation of security events

    Managing the vast quantities of data generated in the context of information system security becomes more difficult every day. Visualisation tools are one solution to this challenge: they represent large quantities of data in a synthetic and often aesthetic way to help understand and manipulate them. In this document, we first present a classification of security visualisation tools according to their objectives. These can be one of three: monitoring (following events in real time to identify attacks as early as possible), analysis (a posteriori exploration and manipulation of a large quantity of data to discover important events), or reporting (a posteriori representation of known information in a clear and synthetic fashion to aid communication and transmission). We then present ELVis, a tool capable of coherently representing security events from various sources. ELVis automatically proposes appropriate representations according to the type of information (time, IP address, port, data volume, etc.). In addition, ELVis can be extended to accept new sources of data. Lastly, we present CORGI, a successor to ELVis that allows the simultaneous manipulation of multiple data sources in order to correlate them. With the help of CORGI, it is possible to filter security events from one data source by criteria derived from the analysis of security events from another, which facilitates following events on the information system under analysis.
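The idea of automatically proposing a representation from the data type can be sketched as a simple dispatch table. The mapping below is an illustrative assumption for exposition, not ELVis's actual rules.

```python
def propose_representation(field_type):
    """Map a field's data type to a suggested visualisation
    (hypothetical mapping, in the spirit of ELVis's automatic proposals)."""
    mapping = {
        "time": "timeline",
        "ip_address": "treemap",     # hierarchical address space
        "port": "histogram",
        "volume": "stacked_area",
    }
    return mapping.get(field_type, "table")  # safe fallback for new types
```

The fallback to a plain table is what makes such a scheme extensible: a new data source with unrecognised field types still gets a usable, if generic, representation.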

    Smart Urban Water Networks

    This book presents the paper form of the Special Issue (SI) on Smart Urban Water Networks. The number and topics of the papers in the SI confirm the growing interest of operators and researchers in the new paradigm of smart networks, as part of the more general smart city. The SI showed that digital information and communication technology (ICT), with the implementation of smart meters and other digital devices, can significantly improve the modelling and management of urban water networks, contributing to a radical transformation of the traditional paradigm of water utilities. The paper collection in this SI covers crucial topics such as the reliability, resilience, and performance of water networks, innovative demand management, and the novel challenge of real-time control and operation, along with their implications for cyber-security. The SI collected fourteen papers that provide a wide perspective on solutions, trends, and challenges in the context of smart urban water networks. Some solutions have already been implemented in pilot sites (i.e., for water network partitioning, cyber-security, and water demand disaggregation and forecasting), while further investigation is required for other methods, e.g., data-driven approaches for real-time control. In all cases, a new deal between academia, industry, and governments must be embraced to start the new era of smart urban water systems.