
    Impact and key challenges of insider threats on organizations and critical businesses

    The insider threat has consistently been identified as a key threat to organizations and governments. Understanding the nature of insider threats and the related threat landscape can help in forming mitigation strategies, including non-technical means. In this paper, we survey and highlight challenges associated with the identification and detection of insider threats in both public and private sector organizations, especially those that are part of a nation's critical infrastructure. We explore the utility of the cyber kill chain for understanding insider threats, as well as the underpinning human behavior and psychological factors. Existing defense techniques are discussed and critically analyzed, and improvements are suggested in line with current state-of-the-art cyber security requirements. Finally, open problems related to the insider threat are identified and future research directions are discussed.

    High Performance Data Mining Techniques For Intrusion Detection

    The rapid growth of computing transformed the way in which information and data are stored. With this new paradigm of data access comes the threat of this information being exposed to unauthorized and unintended users. Many systems have been developed which scrutinize data for deviations from the normal behavior of a user or system, or search for a known signature within the data. These systems are termed Intrusion Detection Systems (IDS). They employ techniques varying from statistical methods to machine learning algorithms. Intrusion detection systems use audit data generated by operating systems, application software, or network devices. These sources produce huge datasets with tens of millions of records. To analyze this data, data mining is used: the process of extracting useful patterns from a large bulk of information. A major obstacle is that traditional data mining and learning algorithms are overwhelmed by the volume and complexity of the available data, making them impractical for time-critical tasks like intrusion detection because of their long execution times. Our approach to this issue uses high performance data mining techniques to expedite the process by exploiting the parallelism in existing data mining algorithms and the underlying hardware. We show how high performance and parallel computing can be used to scale data mining algorithms to handle large datasets, allowing the data mining component to search a much larger set of patterns and models than traditional computational platforms and algorithms would allow. We develop parallel data mining algorithms by parallelizing existing machine learning techniques using cluster computing. These algorithms include parallel backpropagation and parallel fuzzy ARTMAP neural networks.
We evaluate the performance of the developed models in terms of speedup over the traditional algorithms, prediction rate, and false alarm rate. Our results show that the traditional backpropagation and fuzzy ARTMAP algorithms can benefit from high performance computing techniques, which makes them well suited for time-critical tasks like intrusion detection.
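The data-parallel idea behind the parallel algorithms above can be sketched in miniature: partition the training data across workers, have each worker compute a partial gradient over its shard, then average the partials and update the shared weights. This is an illustrative sketch only (a single logistic unit trained with a thread pool), not the thesis's cluster-based backpropagation or fuzzy ARTMAP implementations.

```python
# Illustrative data-parallel gradient descent: each worker computes the
# gradient of a logistic unit over its own data shard; the master averages
# the partial gradients and updates the shared weights.  This mirrors in
# spirit how backpropagation is parallelised by partitioning training data.
import math
from concurrent.futures import ThreadPoolExecutor

def shard_gradient(shard, w, b):
    """Gradient of the logistic loss over one data shard."""
    gw, gb = [0.0] * len(w), 0.0
    for x, y in shard:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-z))      # sigmoid activation
        err = p - y
        for i, xi in enumerate(x):
            gw[i] += err * xi
        gb += err
    return gw, gb, len(shard)

def parallel_train(data, n_workers=4, epochs=200, lr=0.5):
    w, b = [0.0, 0.0], 0.0
    shards = [data[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for _ in range(epochs):
            # Workers process their shards concurrently.
            parts = list(pool.map(lambda s: shard_gradient(s, w, b), shards))
            n = sum(cnt for _, _, cnt in parts)
            for i in range(len(w)):
                w[i] -= lr * sum(gw[i] for gw, _, _ in parts) / n
            b -= lr * sum(gb for _, gb, _ in parts) / n
    return w, b

# Toy "audit record" data: label 1 when both features are high.
data = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.2, 0.1), 0), ((0.8, 0.9), 1)]
w, b = parallel_train(data)
```

With real cluster computing the shards would live on separate nodes and only the partial gradients would cross the network, which is what makes the approach scale to very large audit datasets.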

    Detecting insider threat within institutions using CERT dataset and different ML techniques

    A country's development rests on its industrial and commercial enterprises, and its security depends on its security institutions: the trustworthiness of their employees, the confidentiality of their information, of information about targets, and of the forensic evidence concerning those targets. One of the most critical problems in such institutions is discovering an insider threat that causes loss, damage, or theft of information to hostile or competing parties. This threat comes from a person who is one of the institution's own employees and whose goal is to steal or destroy information for the benefit of another institution. The difficulty in detecting this type of threat stems from the difficulty of analyzing the behavior of people within the organization according to their psychological characteristics. In this research, the CERT dataset produced by Carnegie Mellon University is used to detect insider threats. The dataset was preprocessed, and five effective features were selected, to which three ML techniques were applied: Random Forest, Naïve Bayes, and 1-Nearest Neighbor. The accuracies obtained were 89.76%, 91.97%, and 94.68%, respectively, with error rates of 10.24%, 8.03%, and 5.32%.
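As an illustration of the best-performing technique above, a minimal 1-nearest-neighbour classifier fits in a few lines. The per-user feature names below are hypothetical stand-ins, not the five features actually selected from the CERT dataset.

```python
# Minimal 1-NN classifier: label a query with the label of the closest
# training point in Euclidean feature space.
import math

def one_nn(train, query):
    """Return the label of the training point nearest to `query`."""
    label, _ = min(((y, math.dist(x, query)) for x, y in train),
                   key=lambda t: t[1])
    return label

# (after_hours_logons, usb_events, emails_out) -- hypothetical features.
train = [
    ((0, 1, 5), "benign"),
    ((1, 0, 8), "benign"),
    ((9, 7, 40), "insider"),
    ((8, 6, 35), "insider"),
]
print(one_nn(train, (7, 5, 30)))  # nearest neighbour is an "insider" profile
```

In practice one would standardise the features first, since 1-NN distances are dominated by whichever feature has the largest numeric range.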

    Document flow tracking within corporate networks

    Master's thesis, Segurança Informática, Universidade de Lisboa, Faculdade de Ciências, 2009. News about sensitive documents being leaked to the Internet is becoming commonplace in today's headlines. In October 2009, the United Kingdom Ministry of Defence Manual of Security, with 2389 pages, which fully describes the United Kingdom military protocol for all security and counter-intelligence operations, was inadvertently made public. This is only one case, but there are examples of information leaks in almost any area, from medical to financial. These leaks can have serious consequences for those affected by them, such as the exposure of business secrets, brand damage, or large fines from regulators. An information leak can have multiple causes, one being employees who inadvertently expose sensitive documents outside the company. In this work, we propose a solution capable of tracking files within a corporate network and detecting situations that can lead to a sensitive document being leaked. We resort to an agent installed on the hosts to be monitored that detects and logs the use of files in potentially dangerous operations, such as copying them to a removable drive or sending them by e-mail as attachments. Those operations are logged and collected to a central repository, where a correlation engine finds relationships between different copies of the same file. Finally, we have developed and evaluated a prototype that implements the proposed solution, proving that it can indeed be used to detect information leaks.
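One way to realise the correlation step is to key each logged operation by a content digest, so events involving the same bytes can be linked across hosts and paths. The event schema and the use of SHA-256 here are assumptions for illustration, not the thesis's actual agent or log design.

```python
# Sketch of digest-based correlation: the agent logs each file operation
# with a hash of the content; the central engine groups events whose
# digests match, linking different copies of the same document.
import hashlib
from collections import defaultdict

def digest(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def correlate(events):
    """Group logged events by content digest; keep multi-event groups."""
    groups = defaultdict(list)
    for ev in events:
        groups[ev["digest"]].append(ev)
    # A digest seen in more than one event means the same document has
    # been copied, attached, or otherwise moved around.
    return {d: evs for d, evs in groups.items() if len(evs) > 1}

doc = b"quarterly figures (confidential)"
events = [
    {"host": "pc-01", "path": "C:/docs/q3.xlsx", "op": "open",     "digest": digest(doc)},
    {"host": "pc-01", "path": "E:/q3.xlsx",      "op": "usb-copy", "digest": digest(doc)},
    {"host": "pc-02", "path": "mail-attachment", "op": "email",    "digest": digest(b"unrelated")},
]
linked = correlate(events)
```

A real deployment would also have to handle edited copies (where hashes no longer match), which is exactly where a richer correlation engine earns its keep.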

    Cloud Computing Security, An Intrusion Detection System for Cloud Computing Systems

    Cloud computing is widely considered an attractive service model because it minimizes investment, since its costs are in direct relation to usage and demand. However, the distributed nature of cloud computing environments, their massive resource aggregation, wide user access, and efficient and automated sharing of resources enable intruders to exploit clouds to their advantage. To combat intruders, several security solutions for cloud environments adopt Intrusion Detection Systems. However, most IDS solutions are not suitable for cloud environments because of problems such as a single point of failure, centralized load, high false positive alarm rates, insufficient attack coverage, and inflexible design. The thesis defines a framework for a cloud-based IDS to address the deficiencies of current IDS technology. This framework deals with threats that exploit vulnerabilities to attack the various service models of a cloud system. The framework integrates behaviour-based and knowledge-based techniques to detect masquerade, host, and network attacks, and provides efficient deployments to detect DDoS attacks. The thesis has three main contributions. The first is a Cloud Intrusion Detection Dataset (CIDD) to train and test an IDS. The second is the Data-Driven Semi-Global Alignment (DDSGA) approach and three behavior-based strategies to detect masquerades in cloud systems. The third and final contribution is signature-based detection, with two deployments, one distributed and one centralized, to detect host, network, and DDoS attacks. Furthermore, we discuss the integration and correlation of alerts from any component to build a summarized attack report. The thesis describes in detail and experimentally evaluates the proposed IDS and the alternative deployments.
    Acknowledgment: This Ph.D. was achieved through an international joint program in collaboration between the University of Pisa in Italy (Department of Computer Science, Galileo Galilei Ph.D. School) and the University of Arizona in the USA (College of Electrical and Computer Engineering). The Ph.D. topic is categorized under both Computer Engineering and Information Engineering. The thesis author is also known as "Hisham A. Kholidy".
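The alignment idea behind DDSGA can be illustrated with a toy "fitting" (semi-global) alignment that scores how well a session's command sequence fits anywhere inside a user's historical signature; a low best score suggests a masquerader. The scoring values below are illustrative, not DDSGA's data-driven ones.

```python
# Toy semi-global ("fitting") alignment: gaps at the ends of the
# signature are free, so the session may match any window of it.
def fit_score(signature, session, match=2, mismatch=-1, gap=-1):
    """Best score for aligning `session` inside `signature`."""
    m = len(session)
    prev = [j * gap for j in range(m + 1)]   # row before any signature token
    best = prev[m]
    for tok in signature:
        cur = [0]                            # free leading signature gaps
        for j in range(1, m + 1):
            s = match if tok == session[j - 1] else mismatch
            cur.append(max(prev[j - 1] + s,   # align tok with session[j-1]
                           prev[j] + gap,     # gap in the session
                           cur[j - 1] + gap)) # gap in the signature
        best = max(best, cur[m])             # free trailing signature gaps
        prev = cur
    return best

signature = "ls cd vim make ls git".split()  # user's historical commands
normal = ["cd", "vim", "make"]               # fits the signature exactly
odd = ["wget", "chmod", "exec"]              # never seen before
scores = (fit_score(signature, normal), fit_score(signature, odd))
```

A detector would compare such scores against a per-user threshold learned from that user's own past sessions.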

    A Correlation Framework for Continuous User Authentication Using Data Mining

    The increasing security breaches revealed in recent surveys and the security threats reported in the media reaffirm the inadequacy of current security measures in IT systems. While most reported work in this area has focussed on enhancing the initial login stage to counteract unauthorised access, there is still a problem in detecting when an intruder has compromised the front-line controls. This could pose a serious threat, since any subsequent indicator of an intrusion in progress could be quite subtle and may remain hidden to the casual observer. Having passed the front-line controls and holding the appropriate access privileges, the intruder may be in a position to do virtually anything without further challenge. This has caused interest in the concept of continuous authentication, which inevitably involves the analysis of vast amounts of data. The primary objective of the research is to develop and evaluate a suitable correlation engine to automate the processes involved in authenticating and monitoring users in a networked system environment. The aim is to further develop the Anomaly Detection module previously illustrated in a PhD thesis [1] as part of the conceptual architecture of an Intrusion Monitoring System (IMS) framework.

    Performance Metrics for Network Intrusion Systems

    Intrusion systems have been the subject of considerable research during the past 33 years, since the original work of Anderson. Much has been published attempting to improve their performance using advanced data processing techniques, including neural nets, statistical pattern recognition and genetic algorithms. Whilst some significant improvements have been achieved, they are often the result of assumptions that are difficult to justify, and comparing performance between different research groups is difficult. The thesis develops a new approach to defining performance, focussed on comparing intrusion systems and technologies. A new taxonomy is proposed in which the type of output and the data scale over which an intrusion system operates are used for classification. The inconsistencies and inadequacies of existing definitions of detection are examined, and five new intrusion levels are proposed by analogy with other detection-based technologies. These levels are known as detection, recognition, identification, confirmation and prosecution, each representing an increase in the information output from, and functionality of, the intrusion system. These levels are contrasted over four physical data scales, from application/host through to enterprise networks, introducing and developing the concept of a footprint as a pictorial representation of the scope of an intrusion system. An intrusion is now defined as “an activity that leads to the violation of the security policy of a computer system”. Five different intrusion technologies are illustrated using the footprint, with current challenges also shown to stimulate further research. Integrity in the presence of mixed-trust data streams at the highest intrusion level is identified as particularly challenging. Two metrics new to intrusion systems are defined to quantify performance and further aid comparison.
Sensitivity is introduced to define the basic detectability of an attack in terms of a single parameter, rather than the usual four currently in use. Selectivity is used to describe the ability of an intrusion system to discriminate between attack types. These metrics are quantified experimentally for network intrusion using the DARPA 1999 dataset and SNORT. Only nine of the 58 attack types present were detected with sensitivities in excess of 12 dB, indicating that detection performance on the attack types present in this dataset remains a challenge. The measured selectivity was also poor, indicating that only three of the attack types could be confidently distinguished. The highest value of selectivity was 3.52, significantly lower than the theoretical limit of 5.83 for the evaluated system. Options for improving selectivity and sensitivity through additional measurements are examined.
    Stochastic Systems Lt
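The abstract quotes sensitivities in decibels but does not give the defining formula. Purely as an assumption for illustration, if sensitivity were read as a detection-to-false-alarm ratio expressed in dB, it could be computed as below; the 12 dB threshold quoted above would then correspond to a ratio of roughly 15.8:1.

```python
# Hypothetical reading of "sensitivity in dB": ten times the base-10 log
# of the ratio of true-positive rate to false-positive rate.  This is an
# assumption for illustration, not the thesis's actual definition.
import math

def sensitivity_db(tpr: float, fpr: float) -> float:
    return 10 * math.log10(tpr / fpr)

ratio_at_12db = 10 ** (12 / 10)   # the ratio implied by a 12 dB threshold
```

Whatever the exact definition, the appeal of a decibel scale is that it collapses a wide dynamic range of detector quality onto a single comparable number.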

    A Novel Cooperative Intrusion Detection System for Mobile Ad Hoc Networks

    Mobile ad hoc networks (MANETs) have experienced rapid growth in their use in various military, medical, and commercial scenarios. This is due to their dynamic nature, which enables the deployment of such networks in any target environment without the need for a pre-existing infrastructure. On the other hand, the unique characteristics of MANETs, such as the lack of central networking points, limited wireless range, and constrained resources, have made the quest to secure such networks a challenging task. A large number of studies have focused on intrusion detection systems (IDSs) as a solid line of defense against various attacks targeting the vulnerable nature of MANETs. Since cooperation between nodes is mandatory for detecting complex attacks in real time, various solutions have been proposed to provide cooperative IDSs (CIDSs) in an effort to improve detection efficiency. However, these solutions suffer from high rates of false alarms, and they violate the constrained-bandwidth nature of MANETs. To overcome these two problems, this research presents a novel CIDS utilizing the concept of social communities and the Dempster-Shafer theory (DST) of evidence. The concept of social communities is intended to establish reliable cooperative detection reporting while consuming minimal bandwidth, while DST targets decreasing false accusations by honoring partial or lacking evidence obtained solely from reliable sources. Experimental evaluation of the proposed CIDS resulted in consistently high detection rates, low false alarm rates, and low bandwidth consumption. The results demonstrate the viability of applying the social communities concept combined with DST to achieve high detection accuracy and minimal bandwidth consumption throughout the detection process.
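Dempster's rule of combination, the core of DST, fuses mass functions from independent sources while allowing belief to remain uncommitted (assigned to the whole frame), which is how partial or lacking evidence is honored. The two-hypothesis frame and mass values below are illustrative, not taken from the evaluated CIDS.

```python
# Dempster's rule of combination over a frame of discernment, with
# hypotheses represented as frozensets so intersections are well defined.
def combine(m1, m2):
    """Combine two mass functions (dicts: frozenset hypothesis -> mass)."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass on contradictory pairs
    norm = 1.0 - conflict                    # renormalise away the conflict
    return {h: v / norm for h, v in combined.items()}

ATTACK, BENIGN = frozenset({"attack"}), frozenset({"benign"})
THETA = ATTACK | BENIGN                      # total uncertainty

# Two neighbouring nodes each weakly suspect an attack but keep some
# mass uncommitted; combining strengthens the shared "attack" belief.
m1 = {ATTACK: 0.6, THETA: 0.4}
m2 = {ATTACK: 0.7, THETA: 0.3}
fused = combine(m1, m2)
```

Note how agreement between sources raises the fused belief above either input, while a node that reports only uncertainty (all mass on THETA) leaves the other node's belief unchanged, which is the behaviour the CIDS relies on to avoid false accusations.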