200 research outputs found

    A log mining approach for process monitoring in SCADA

    SCADA (Supervisory Control and Data Acquisition) systems are used for controlling and monitoring industrial processes. We propose a methodology to systematically identify potential process-related threats in SCADA. Process-related threats occur when an attacker gains user access rights and performs actions that look legitimate but are intended to disrupt the SCADA process. To detect such threats, we propose a semi-automated log-processing approach. We conduct experiments on a real-life water treatment facility. A preliminary case study suggests that our approach is effective in detecting anomalous events that might alter the regular process workflow.
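    The abstract does not detail the log-processing step; purely as a hedged illustration of flagging process-related anomalies in audit logs (not the paper's actual method), a minimal pass might mark actions whose (user, action) combination is rare in the historical record. The field names and the support threshold below are assumptions.

```python
from collections import Counter

def flag_rare_actions(history, new_events, min_support=5):
    """Flag events whose (user, action) pair is rare in the historical log.

    history and new_events are iterables of (user, action) tuples; both the
    fields and the min_support threshold are illustrative assumptions.
    """
    counts = Counter(history)
    return [event for event in new_events if counts[event] < min_support]

# Example: an operator issuing a setpoint change never seen in the history.
history = [("op1", "read_sensor")] * 100 + [("op1", "open_valve")] * 20
suspicious = flag_rare_actions(history, [("op1", "change_setpoint"), ("op1", "read_sensor")])
print(suspicious)  # [('op1', 'change_setpoint')]
```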

    Privacy Violation and Detection Using Pattern Mining Techniques

    Privacy, its violation, and techniques to detect and prevent such violations have taken centre stage in both academia and industry in recent months. Corporations worldwide have become conscious of the implications of privacy violations and their impact on themselves and on other stakeholders. Moreover, nations across the world are introducing privacy-protection legislation to prevent data privacy violations. Such legislation, however, exposes organizations to the consequences of intentional or unintentional violation of privacy data. A violation by either malicious external hackers or internal employees can expose an organization to costly litigation. In this paper, we propose PRIVDAM, a data-mining-based architecture for a Privacy Violation Detection and Monitoring system whose purpose is to detect possible privacy violations and to prevent them in the future. Experimental evaluations show that our approach is scalable and robust, and that it can detect privacy violations, or the likelihood of violations, quite accurately. Please contact the author for full text at [email protected]
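    PRIVDAM's architecture is not detailed in the abstract; purely as a hedged illustration of pattern-mining-based detection (not the system's actual design), the sketch below treats frequently occurring access patterns as presumably legitimate and reports accesses that match none of them. The attribute triple and the support threshold are assumptions.

```python
from collections import Counter

def mine_frequent_patterns(access_log, min_support=10):
    """Treat (role, data_category, purpose) triples seen often as legitimate patterns.

    access_log is a list of such triples; the attributes and the threshold are
    illustrative assumptions, not PRIVDAM's actual schema.
    """
    counts = Counter(access_log)
    return {pattern for pattern, count in counts.items() if count >= min_support}

def detect_violations(frequent_patterns, new_accesses):
    """Report accesses that match no frequent (presumed legitimate) pattern."""
    return [access for access in new_accesses if access not in frequent_patterns]

legit = mine_frequent_patterns([("nurse", "medical_record", "treatment")] * 50)
print(detect_violations(legit, [("nurse", "billing_record", "marketing")]))
```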

    Cloud Computing Security, An Intrusion Detection System for Cloud Computing Systems

    Cloud computing is widely considered an attractive service model because it minimizes investment, since its costs are in direct relation to usage and demand. However, the distributed nature of cloud computing environments, their massive resource aggregation, wide user access, and efficient and automated sharing of resources enable intruders to exploit clouds to their advantage. To combat intruders, several security solutions for cloud environments adopt Intrusion Detection Systems. However, most IDS solutions are not suitable for cloud environments because of problems such as a single point of failure, centralized load, high false positive alarm rates, insufficient attack coverage, and inflexible design. The thesis defines a framework for a cloud-based IDS that addresses the deficiencies of current IDS technology. This framework deals with threats that exploit vulnerabilities to attack the various service models of a cloud system. The framework integrates behavior-based and knowledge-based techniques to detect masquerade, host, and network attacks, and provides efficient deployments to detect DDoS attacks. This thesis has three main contributions. The first is a Cloud Intrusion Detection Dataset (CIDD) to train and test an IDS. The second is the Data-Driven Semi-Global Alignment (DDSGA) approach and three behavior-based strategies to detect masquerades in cloud systems. The third and final contribution is signature-based detection: we introduce two deployments, a distributed and a centralized one, to detect host, network, and DDoS attacks. Furthermore, we discuss the integration and correlation of alerts from any component to build a summarized attack report. The thesis describes in detail and experimentally evaluates the proposed IDS and the alternative deployments.
    Acknowledgment:
    • This Ph.D. was achieved through an international joint program with a collaboration between the University of Pisa in Italy (Department of Computer Science, Galileo Galilei Ph.D. School) and the University of Arizona in the USA (College of Electrical and Computer Engineering).
    • The Ph.D. topic is categorized under both Computer Engineering and Information Engineering.
    • The thesis author is also known as "Hisham A. Kholidy"
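    The DDSGA scoring is data-driven and paired with several detection strategies in the thesis; purely as a hedged sketch of the underlying semi-global alignment idea, the code below aligns a block of session commands against a user's historical signature sequence, leaving gaps at the ends of the signature unpenalized so the block may match anywhere inside it. The fixed match/mismatch/gap scores and the command names are illustrative assumptions.

```python
def semi_global_align(signature, test_block, match=2, mismatch=-1, gap=-1):
    """Semi-global alignment score of a test command block against a user's
    signature sequence: gaps before/after the aligned region of the signature
    are free, so the block may match anywhere inside the longer signature.
    The fixed scores are placeholder assumptions; DDSGA derives its scoring
    parameters from the user's own data.
    """
    n, m = len(signature), len(test_block)
    # dp[i][j]: best score aligning signature[:i] with test_block[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap          # gaps in the test block are penalized
    for i in range(1, n + 1):
        dp[i][0] = 0                           # leading gaps in the signature are free
        for j in range(1, m + 1):
            s = match if signature[i - 1] == test_block[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # match / mismatch
                           dp[i - 1][j] + gap,     # gap in the test block
                           dp[i][j - 1] + gap)     # internal gap in the signature
    return max(dp[i][m] for i in range(n + 1))     # trailing signature gaps are free

# A block matching part of the user's usual command stream scores high;
# an unrelated block scores low and would be flagged as a possible masquerade.
sig = ["ls", "cd", "vim", "make", "ls", "cd", "vim", "make"]
print(semi_global_align(sig, ["cd", "vim", "make"]))    # high score
print(semi_global_align(sig, ["nc", "wget", "chmod"]))  # low score
```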

    Diagnosis and Treatment of Chronic Myelomonocytic Leukemias in Adults: Recommendations From the European Hematology Association and the European LeukemiaNet

    Chronic myelomonocytic leukemia (CMML) is a disease of the elderly and by far the most frequent overlap myelodysplastic/myeloproliferative neoplasm in adults. Aside from the chronic monocytosis that remains the cornerstone of its diagnosis, the clinical presentation of CMML includes dysplastic features, cytopenias, excess of blasts, or myeloproliferative features such as a high white blood cell count or splenomegaly. Prognosis is variable, with several prognostic scoring systems reported in recent years, and treatment is poorly defined, with options ranging from watchful waiting to allogeneic stem cell transplantation, which remains the only curative therapy for CMML. Here we present, on behalf of the European Hematology Association and the European LeukemiaNet, evidence- and consensus-based guidelines established by an international group of experts from Europe and the United States for standardized diagnostic and prognostic procedures and for an appropriate choice of therapeutic interventions in adult patients with CMML.

    Enhanced Prediction of Network Attacks Using Incomplete Data

    For years, intrusion detection has been considered a key component of many organizations’ network defense capabilities. Although a number of approaches to intrusion detection have been tried, few have been capable of providing security personnel responsible for the protection of a network with sufficient information to make adjustments and respond to attacks in real time. Because intrusion detection systems rarely have complete information, false negatives and false positives are extremely common, and thus valuable resources are wasted responding to irrelevant events. In order to provide better actionable information for security personnel, a mechanism for quantifying the confidence level in predictions is needed. This work presents an approach that seeks to combine a primary prediction model with a novel secondary confidence-level model that provides a measure of the confidence in a given attack prediction. The ability to accurately identify an attack and quantify the confidence in that prediction could serve as the basis for a new generation of intrusion detection devices: devices that provide earlier and better alerts for administrators and allow a more proactive response to events as they occur.
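    The abstract does not name the models involved; purely as a hedged sketch of the two-model idea, the code below trains a primary attack classifier and then a secondary model, on held-out data, to estimate the probability that the primary's prediction is correct. The scikit-learn estimators and the synthetic features are assumptions, not the approach's actual components.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic flow features and noisy attack labels (placeholders for real,
# possibly incomplete network data).
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=2000) > 0).astype(int)

X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.5, random_state=0)

# Primary model: predicts attack / no attack.
primary = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Secondary model: trained on held-out data to predict whether the primary
# model's prediction is correct, i.e. a confidence estimate for each alert.
correct = (primary.predict(X_hold) == y_hold).astype(int)
confidence_model = LogisticRegression(max_iter=1000).fit(X_hold, correct)

x_new = X_hold[:1]
print("prediction:", primary.predict(x_new)[0],
      "confidence:", confidence_model.predict_proba(x_new)[0, 1])
```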

    Performance Evaluation of Network Anomaly Detection Systems

    Nowadays, there is a huge and growing concern about security in information and communication technology (ICT) among the scientific community, because any attack or anomaly in the network can greatly affect many domains such as national security, private data storage, social welfare, and economic issues. The anomaly detection domain is therefore a broad research area, and many different techniques and approaches for this purpose have emerged over the years. Attacks, problems, and internal failures, when not detected early, may badly harm an entire network system. Thus, this thesis presents an autonomous profile-based anomaly detection system based on the statistical method Principal Component Analysis (PCADS-AD). This approach creates a network profile, called the Digital Signature of Network Segment using Flow Analysis (DSNSF), that denotes the predicted normal behavior of network traffic activity derived from historical data analysis. That digital signature is used as a threshold for volume anomaly detection, to detect disparities from the normal traffic trend. The proposed system uses seven traffic flow attributes: bits, packets, and number of flows to detect problems, and source and destination IP addresses and ports to provide the network administrator with the information needed to solve them. Via evaluation techniques, the addition of a different anomaly detection approach, and comparisons to other methods performed in this thesis using real network traffic data, the results showed good traffic prediction by the DSNSF and encouraging false alarm and detection accuracy rates for the detection scheme. The observed results seek to contribute to the advance of the state of the art in methods and strategies for anomaly detection, aiming to surpass some of the challenges that emerge from the constant growth in complexity, speed, and size of today’s large-scale networks, while also providing high-value results for better detection in real time. Furthermore, the low complexity and agility of the proposed system allow it to be applied to real-time detection.
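    The abstract describes the DSNSF only at a high level; as a hedged sketch of the general idea rather than the thesis's actual construction, the code below compresses historical daily profiles of one flow attribute into a few principal-component scores, predicts the next day's "normal" profile from the recent scores, and flags bins whose volume deviates from that prediction. The simple recent-average score forecast and the 5-sigma threshold are assumptions.

```python
import numpy as np

def learn_profile_model(history, k=2):
    """PCA model of daily traffic profiles: history is a (days x bins) matrix
    of one flow attribute (e.g. bits per 5-minute bin)."""
    mean = history.mean(axis=0)
    _, _, vt = np.linalg.svd(history - mean, full_matrices=False)
    components = vt[:k]                               # (k x bins)
    scores = (history - mean) @ components.T          # (days x k)
    residual_std = (history - mean - scores @ components).std(axis=0)
    return mean, components, scores, residual_std

def predict_signature(mean, components, scores, recent=7):
    """Predicted 'normal day' profile: the mean profile plus components weighted
    by the average scores of the most recent historical days (an assumed,
    deliberately simple forecaster standing in for the DSNSF)."""
    return mean + scores[-recent:].mean(axis=0) @ components

def flag_volume_anomalies(today, signature, residual_std, k_sigma=5.0):
    """Flag bins deviating from the signature by more than k_sigma historical
    residual standard deviations (the threshold rule is an assumption)."""
    return np.where(np.abs(today - signature) > k_sigma * residual_std)[0]

# Synthetic example: 30 historical days sharing a diurnal shape with a slowly
# growing amplitude, plus a new day carrying a volume spike at bins 100-104.
rng = np.random.default_rng(1)
bins = np.arange(288)                                 # 5-minute bins in a day
base = 1e6 * (1.2 + np.sin(2 * np.pi * bins / 288))
amplitudes = np.linspace(0.8, 1.2, 30)[:, None]
history = amplitudes * base + rng.normal(scale=5e4, size=(30, 288))
today = 1.15 * base + rng.normal(scale=5e4, size=288)
today[100:105] += 1.5e6                               # injected volume anomaly

mean, comps, scores, res_std = learn_profile_model(history)
signature = predict_signature(mean, comps, scores)
print(flag_volume_anomalies(today, signature, res_std))   # expect bins 100..104
```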

    Detecting Anomalies in VoIP traffic using Principal Components Analysis

    The idea of using a method based on Principal Components Analysis to detect anomalies in network traffic was first introduced by A. Lakhina, M. Crovella and C. Diot in an article published in 2004, "Diagnosing Network-Wide Traffic Anomalies" [1]. They proposed a general method to diagnose traffic anomalies, using PCA to effectively separate the high-dimensional space occupied by a set of network traffic measurements into disjoint subspaces corresponding to normal and anomalous network conditions. This algorithm was tested in subsequent works that considered different characteristics of IP traffic over a network (such as byte counts, packet counts, IP-flow counts, etc.) [2]. The proposal of using entropy as a summarization tool inside the algorithm led to significant advances in the ability to analyze massive data sources [3], but this type of anomaly detection (AD) method still lacked the ability to recognize the users responsible for the detected anomalies. This last step was obtained by randomly aggregating the IP flows by means of sketches [4], leading to better detection performance and to the possibility of identifying the responsible IP flows. This version of the algorithm was implemented by C. Callegari and L. Gazzarini at the Università di Pisa in an AD software tool, described in [5], for analyzing IP traffic traces and detecting anomalies in them. Our work consisted in adapting this software (designed to work with IP traffic traces) to VoIP Call Data Records, in order to test its applicability as an anomaly detection system for voice traffic. We then used our modified version of the software to scan a real VoIP traffic trace, obtained from a telephone operator, in order to analyze the software's performance in a real environment. We applied two different types of analysis to the same traffic trace, in order to understand the software's features and limits, as well as its applicability to anomaly detection problems. As we discovered that the software's performance depends heavily on the input parameters used in the analysis, we concluded with several tests performed using artificially created anomalies, in order to understand the relationship between each input parameter's value and the software's capability of detecting different types of anomalies. In the end, the different analyses led us to some considerations on the possibility of applying this PCA-based software as an anomaly detector in VoIP environments.
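    As a hedged illustration of the subspace separation just described (a generic sketch, not the software from [5]), the code below splits a matrix of per-time-bin traffic measurements into a normal subspace spanned by the top principal components and a residual subspace, and flags time bins whose squared prediction error is unusually large. The number of components and the empirical-quantile threshold (used here in place of the Q-statistic) are simplifying assumptions.

```python
import numpy as np

def subspace_anomaly_detection(X, k=3, quantile=0.999):
    """Lakhina-style PCA subspace method (illustrative sketch).

    X: (time_bins, features) matrix of traffic measurements, e.g. per-link
    byte counts or per-bin entropy values. The top-k principal components
    span the "normal" subspace; the squared prediction error (SPE) of the
    projection onto the residual subspace is thresholded to flag anomalies.
    """
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    P = vt[:k].T                           # normal-subspace basis (features x k)
    residual = Xc - Xc @ P @ P.T           # projection onto the residual subspace
    spe = np.sum(residual ** 2, axis=1)    # squared prediction error per time bin
    threshold = np.quantile(spe, quantile) # simplifying stand-in for the Q-statistic
    return np.where(spe > threshold)[0], spe

# Synthetic example: correlated traffic over 20 features, with one time bin
# carrying an injected anomaly that breaks the usual correlation structure.
rng = np.random.default_rng(2)
latent = rng.normal(size=(500, 3))
X = latent @ rng.normal(size=(3, 20)) + 0.1 * rng.normal(size=(500, 20))
X[250] += 5 * rng.normal(size=20)          # injected anomaly
anomalous_bins, _ = subspace_anomaly_detection(X)
print(anomalous_bins)                      # should include bin 250
```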
    To the best of our knowledge, this is the first time a technique based on Principal Components Analysis has been used to detect anomalous users in VoIP traffic. In more detail, our contribution consisted in:
    • Creating a version of a PCA-based AD software tool that can be used on VoIP traffic traces
    • Testing the software's performance on a real traffic trace obtained from a telephone operator
    • From the first tests, analyzing the parameter values that allowed us to obtain results useful for detecting anomalous users in a VoIP environment
    • Observing the types of users detected by the software on this trace and classifying them according to their behavior over the whole duration of the trace
    • Analyzing how the choice of parameters impacts the type of detections obtained from the analysis, and testing which choices are best for detecting each type of anomalous user
    • Proposing a new kind of application of the software that avoids the biggest limitation of the first type of analysis (namely, the impossibility of detecting more than one anomalous user per time bin)
    • Testing the software's performance with this new type of analysis, and observing how this different application affects the results' dependence on the input parameters
    • Comparing the software's ability to detect anomalous users with that of another AD tool that works on the same type of trace (VoIP SEAL)
    • Modifying the trace in order to obtain, from the real trace, a version cleaned of all detectable anomalies, so that artificial anomalies can be added to it
    • Testing the software's performance in detecting different types of artificial anomalies
    • Analyzing in more detail the software's sensitivity to the input parameters when used to detect artificially created anomalies
    • Comparing the results and observations obtained from these different types of analysis to derive a global assessment of the characteristics of an anomaly detector based on Principal Components Analysis, its strengths and its limitations when applied to a VoIP trace
    The structure of our work is the following:
    1. We start by reviewing PCA theory, describing the structure of the algorithm used in our software, its features, and the type of data it needs in order to be used as an anomaly detection system for VoIP traffic.
    2. Then, after briefly describing the trace used to test our software, we introduce the first type of analysis performed, the single-round analysis, pointing out the results obtained and their dependence on the parameter values.
    3. In the following section we focus on a different type of analysis, the multiple-round analysis, which we introduced to test the software's performance while removing its biggest limitation (the impossibility of detecting more than one user per time bin); we describe the results obtained, compare them with those of the single-round analysis, check their dependence on the parameters, and compare the performance with that obtained using another AD tool (VoIP SEAL) on the same trace.
    4. We then consider the results and observations obtained by testing our software with artificial anomalies added to a "cleaned" version of the original trace (from which we removed all anomalous users detectable with our software), comparing the software's performance in detecting different types of anomalies and analyzing in detail their dependence on the parameter values.
    5. Finally, we present our conclusions, drawn from the observations obtained with the different types of analysis, about the applicability of a PCA-based software tool as an anomaly detector in a VoIP environment.