12 research outputs found

    Performance Evaluation of Network Anomaly Detection Systems

    Get PDF
    Nowadays, there is a huge and growing concern about security in information and communication technology (ICT) among the scientific community, because any attack or anomaly in the network can greatly affect many domains, such as national security, private data storage, social welfare, and economic issues. The anomaly detection domain is therefore a broad research area, and many different techniques and approaches for this purpose have emerged over the years. Attacks, problems, and internal failures, when not detected early, may badly harm an entire network system. Thus, this thesis presents an autonomous profile-based anomaly detection system based on the statistical method Principal Component Analysis (PCADS-AD). This approach creates a network profile, called Digital Signature of Network Segment using Flow Analysis (DSNSF), that denotes the predicted normal behavior of network traffic activity through historical data analysis. This digital signature is used as a threshold for volume anomaly detection, to detect disparities in the normal traffic trend. The proposed system uses seven traffic flow attributes: Bits, Packets, and Number of Flows, to detect problems, and Source and Destination IP addresses and Ports, to provide the network administrator with the information necessary to solve them. Through evaluation techniques, the addition of a distinct anomaly detection approach, and comparisons with other methods performed in this thesis using real network traffic data, the results showed good traffic prediction by the DSNSF and encouraging false-alarm rates and detection accuracy for the detection scheme. 
The observed results seek to contribute to advancing the state of the art in methods and strategies for anomaly detection, aiming to overcome some of the challenges that emerge from the constant growth in complexity, speed, and size of today’s large-scale networks, while also providing high-value results for better detection in real time. The low complexity and agility of the proposed system also make it suitable for real-time detection. 
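The profile-based thresholding idea behind the DSNSF can be illustrated with a minimal sketch. The thesis builds its signature with PCA; here, as a simplified stand-in, a per-interval mean/standard-deviation baseline is learned from historical days and used as the detection threshold (the function names and the 3-sigma rule are illustrative assumptions, not the thesis's exact method):

```python
from statistics import mean, stdev

def build_signature(history):
    """Build a per-interval baseline (a DSNSF-like 'digital signature')
    from several days of historical measurements of one flow attribute.

    history: list of equal-length day vectors, e.g. bits per 5-minute bin.
    Returns (means, stds), one entry per time-of-day bin.
    """
    bins = list(zip(*history))      # group the same time-of-day bin across days
    return [mean(b) for b in bins], [stdev(b) for b in bins]

def detect_anomalies(today, means, stds, k=3.0):
    """Flag bins where today's traffic deviates more than k standard
    deviations from the predicted normal behavior."""
    return [i for i, v in enumerate(today) if abs(v - means[i]) > k * stds[i]]
```

A volume anomaly then surfaces as a bin index: a day measuring [105, 500, 295] against a baseline built from days near [100, 200, 300] flags only the middle bin.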

    Secure VoIP Performance Measurement

    Get PDF
    This project presents a mechanism for instrumentation of secure VoIP calls. The experiments were run under different network conditions and security systems, with VoIP services such as Google Talk, Express Talk, and Skype under test. The project allowed analysis of the voice quality of the VoIP services based on the Mean Opinion Score (MOS) values generated by Perceptual Evaluation of Speech Quality (PESQ). The audio streams produced were subjected to end-to-end delay, jitter, packet loss, and extra processing in the networking hardware and end devices due to Internetworking Layer security or Transport Layer security implementations. The MOS values were mapped to Perceptual Evaluation of Speech Quality for wideband (PESQ-WB) scores. From these PESQ-WB scores, graphs of the mean of 10 runs and box-and-whisker plots for each parameter were drawn, and analysis of the graphs was performed in order to deduce the quality of each VoIP service. The E-model was used to predict network readiness, and the Common Vulnerability Scoring System (CVSS) was used to predict network vulnerabilities. The project also provided the mechanism to measure the throughput for each test case. The overall performance of each VoIP service was determined by PESQ-WB scores, CVSS scores, and throughput. The experiment demonstrated the relationship among VoIP performance, VoIP security, and VoIP service type. It also suggested that, when compared to an unsecured IPIP tunnel, Internetworking Layer security like IPsec ESP or Transport Layer security like OpenVPN TLS would improve VoIP security by reducing the vulnerabilities of the media part of the VoIP signal. Moreover, adding a security layer has little impact on VoIP voice quality.
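The E-model mentioned above rates a call with a scalar R factor, which has a standard mapping to an estimated MOS (ITU-T G.107). The sketch below implements that published narrowband mapping for illustration; it is not code from the project:

```python
def r_to_mos(r):
    """Map an E-model rating factor R (ITU-T G.107) to an estimated
    narrowband Mean Opinion Score (MOS)."""
    if r < 0:
        return 1.0          # unusable quality floor
    if r > 100:
        return 4.5          # narrowband MOS ceiling
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6
```

With the E-model's default transmission parameters, R is about 93.2, which this mapping places near a MOS of 4.4, i.e. close to the ceiling of toll-quality narrowband speech; impairments such as delay and packet loss subtract from R before the mapping is applied.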

    On the Generation of Cyber Threat Intelligence: Malware and Network Traffic Analyses

    Get PDF
    In recent years, malware authors have drastically changed their course on the subject of threat design and implementation. Malware authors, namely hackers or cyber-terrorists, perpetrate new forms of cyber-crime involving ever more innovative hacking techniques. Motivated by financial or political reasons, attackers target computer systems ranging from personal computers to organizations’ networks, to collect and steal sensitive data as well as to blackmail, scam people, or scupper IT infrastructures. Accordingly, IT security experts face new challenges, as they need to counter cyber-threats proactively. The challenge takes on the continuous character of a fight, in which cyber-criminals are obsessed with the idea of outsmarting security defenses; as such, security experts have to elaborate an effective strategy to counter cyber-criminals. The generation of cyber-threat intelligence is of paramount importance, as stated in the following quote: “the field is owned by who owns the intelligence”. In this thesis, we address the problem of generating timely and relevant cyber-threat intelligence for the purpose of detection, prevention, and mitigation of cyber-attacks. To do so, we initiate a research effort that falls into four parts. First, we analyze prominent cyber-crime toolkits to grasp the inner secrets and workings of advanced threats. We dissect prominent malware like the Zeus and Mariposa botnets to uncover the underlying techniques used to build a networked army of infected machines. Second, we investigate cyber-crime infrastructures, where we elaborate on the generation of cyber-threat intelligence for situational awareness. We adopt a graph-theoretic approach to study infrastructures used by malware to perpetrate malicious activities. We build a scoring mechanism based on a page-ranking algorithm to measure the badness of infrastructures’ elements, i.e., domains, IPs, domain owners, etc. 
In addition, we use the min-hashing technique to evaluate the level of sharing among cyber-threat infrastructures over a period of one year. Third, we use machine learning techniques to fingerprint malicious IP traffic; by fingerprinting, we mean detecting malicious network flows and attributing them to malware families. This research effort relies on ground truth collected from the dynamic analysis of malware samples. Finally, we investigate the generation of cyber-threat intelligence from passive DNS streams. To this end, we design and implement a system that generates anomalies from passive DNS traffic. Due to the tremendous volume of DNS data, we build the system on top of a cluster computing framework, namely Apache Spark [70]. The integrated analytic system has the ability to detect anomalies observed in DNS records that are potentially generated by widespread cyber-threats.
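The min-hashing step above estimates how much two infrastructures share (domains, IPs, name servers) without comparing the full element sets. A minimal sketch of the technique, with illustrative element names; the hash family and signature length are arbitrary choices, not those of the thesis:

```python
import hashlib

def _h(seed, item):
    # One member of a seeded family of hash functions.
    return int(hashlib.md5(f"{seed}:{item}".encode()).hexdigest(), 16)

def minhash_signature(elements, num_hashes=64):
    """MinHash signature of a set: for each hash function, keep the
    minimum hash value over the set's elements."""
    return [min(_h(seed, e) for e in elements) for seed in range(num_hashes)]

def estimated_jaccard(sig_a, sig_b):
    """The fraction of matching signature positions is an unbiased
    estimate of the Jaccard similarity of the underlying sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Two infrastructures that reuse the same hosting elements produce signatures that agree in many positions, so their level of sharing can be tracked over time from compact signatures alone.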

    An investigation into the usability and acceptability of multi-channel authentication to online banking users in Oman

    Get PDF
    Authentication mechanisms provide the cornerstone of security for many distributed systems, especially for increasingly popular online applications. For decades, widely used traditional authentication methods included passwords and PINs, which are now inadequate to protect online users and organizations from ever more sophisticated attacks. This study proposes an improvement to traditional authentication mechanisms. The solution introduced here includes a one-time password (OTP) and incorporates the concept of multiple levels and multiple channels – features that are much more successful than traditional authentication mechanisms in protecting users' online accounts from being compromised. This research study reviews and evaluates current authentication classes and mechanisms and proposes an authentication mechanism that uses a variety of techniques, including multiple channels, to resist attacks more effectively than most commonly used mechanisms. Three aspects of the mechanism were evaluated: (1) the security of multi-channel authentication (MCA) was evaluated in theoretical terms, using a widely accepted methodology; (2) the usability was evaluated by carrying out a user study; and (3) the acceptability was evaluated by asking the participants in the user study specific questions aligned with the technology acceptance model (TAM). The study’s analysis of the data, gathered from online questionnaires and application log tables, showed that most participants found the MCA mechanism superior to other available authentication mechanisms and clearly supported the proposed MCA mechanism and the benefits it provides. The research presents guidelines on how to implement the proposed mechanism, provides a detailed analysis of its effectiveness in protecting users' online accounts against specific, commonly deployed attacks, and reports on its usability and acceptability. 
It represents a significant step forward in the evolution of authentication mechanisms, meeting the security needs of online users while maintaining usability.
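The OTP component described above is commonly built on the HMAC-based one-time password of RFC 4226; the sketch below shows that standard construction for illustration (the thesis's exact OTP scheme may differ):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226): HMAC-SHA-1 over the
    8-byte big-endian counter, dynamically truncated to a short
    decimal code."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                             # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Delivering such a code over a second channel (e.g. SMS or a mobile app) while the password travels over the first is what makes a mechanism multi-channel: an attacker must compromise both channels to hijack a login.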

    Improving the accuracy of spoofed traffic inference in inter-domain traffic

    Get PDF
    Ascertaining that a network will forward spoofed traffic usually requires an active probing vantage point in that network, effectively preventing a comprehensive view of this global Internet vulnerability. We argue that broader visibility into the spoofing problem may lie in the capability to infer lack of Source Address Validation (SAV) compliance from large, heavily aggregated Internet traffic data, such as traffic observable at Internet Exchange Points (IXPs). The key idea is to use IXPs as observatories to detect spoofed packets, by leveraging Autonomous System (AS) topology knowledge extracted from Border Gateway Protocol (BGP) data to infer which source addresses should legitimately appear across parts of the IXP switch fabric. In this thesis, we demonstrate that the existing literature does not capture several fundamental challenges to this approach, including noise in BGP data sources, heuristic AS relationship inference, and idiosyncrasies in IXP interconnectivity fabrics. We propose Spoofer-IX, a novel methodology to navigate these challenges, leveraging Customer Cone semantics of AS relationships to guide precise classification of inter-domain traffic as In-cone, Out-of-cone (spoofed), Unverifiable, Bogon, and Unassigned. We apply our methodology in an extensive analysis of real traffic data from two distinct IXPs in Brazil, a mid-size and a large-size infrastructure. In the mid-size IXP, with more than 200 members, we find an upper bound on the volume of Out-of-cone traffic more than an order of magnitude less than the previous method inferred on the same data, revealing the practical importance of Customer Cone semantics in such analysis. We also found no significant improvement in the deployment of SAV in networks using the mid-size IXP between 2017 and 2019. 
In the hope that our methods and tools generalize to use by other IXPs that want to prevent their infrastructure from being used to launch spoofed-source DoS attacks, we explore the feasibility of scaling the system to larger and more diverse IXP infrastructures. To promote this goal, and broad replicability of our results, we make the source code of Spoofer-IX publicly available. This thesis illustrates the subtleties of scientific assessments of operational Internet infrastructure, and the need for a community focus on reproducing and repeating previous methods.
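The classification at the heart of Spoofer-IX can be sketched as follows. This is a simplification under stated assumptions: it works at AS granularity rather than on address prefixes, omits the Unassigned class, and the ASNs and dictionary layout are illustrative:

```python
def classify_flow(src_asn, member_asn, customer_cone, bogon_asns=frozenset()):
    """Classify a flow observed at an IXP port by whether its source AS
    lies inside the ingress member's Customer Cone.

    customer_cone: dict mapping a member ASN to the set of ASNs in its
    cone (the member itself plus its direct and indirect customers).
    """
    if src_asn in bogon_asns:
        return "Bogon"
    cone = customer_cone.get(member_asn)
    if cone is None:
        return "Unverifiable"           # no cone data for this member
    return "In-cone" if src_asn in cone else "Out-of-cone"
```

Out-of-cone traffic, carrying a source the member could not legitimately announce, is what provides the upper bound on spoofed traffic discussed above.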

    Changing the way the world thinks about computer security.

    Get PDF
    Small changes in an established system can result in larger changes in the overall system (e.g., network effects, emergence, criticality, broken windows theory). However, in an immature discipline, such as computer security, such changes can be difficult to envision and even more difficult to implement, as the immature discipline is likely to lack the scientific framework that would allow for the introduction of even minute changes. Cairns and Thimbleby (2003) describe three of the signs of an immature discipline as postulated by Kuhn (1970): (a) squabbles over what are legitimate tools for research; (b) disagreement over which phenomena are legitimate to study; and (c) inability to scope the domain of study. The research presented in this document demonstrates how the computer security field, at the time this research began, was the embodiment of these characteristics. It presents a cohesive analysis of the intentional introduction of a series of small changes chosen to aid in the maturation of the discipline. In summary, it builds upon existing theory, exploring the combined effect of coordinated and strategic changes in an immature system and establishing a scientific framework by which the impact of the changes can be quantified. By critically examining the nature of the computer security system overall, this work establishes the need for both increased scientific rigor and a multidisciplinary approach to the global computer security problem. In order for these changes to take place, many common assumptions related to computer security had to be questioned. As the discipline was immature and controlled by relatively few entities, questioning the status quo was not without difficulties. 
For the discipline to mature, more feedback into the overall computer security (and in particular, the computer malware/virus) system was needed, requiring a shift from a mostly closed system to one forced to undergo greater scrutiny from various other communities. The input from these communities resulted in long-term changes and increased maturation of the system. Figure 1 illustrates the specific areas in which the research presented herein addressed these needs, provides an overview of the research context, and outlines the specific impact of the research, specifically the development of new and significant scientific paradigms within the discipline.

    Detección de intrusiones en redes de datos con captura distribuida y procesamiento estadístico

    Get PDF
    This study focuses on the analysis and development of technologies based on statistical research, neural networks, and autonomous systems, applied to the problem of intrusion detection in data networks. Throughout its development, the aim is to consolidate better methods for detecting such attacks, selecting the most appropriate criteria to make the defense methods effective and optimal. The specific objectives of this work are summarized as follows: - Propose a realistic and well-structured architecture for the defense methods, so that they can be implemented anywhere. - Demonstrate and verify, step by step, the hypotheses and theoretical proposals through the analysis of real-world data. - Demonstrate command of the knowledge of computer security and of IDSs, so that it constitutes the intelligent element in the choice of appropriate algorithms, avoiding tying a problem to any particular algorithm. - Implement a prototype of the proposed algorithms.
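As a minimal illustration of the statistical-processing direction (not the thesis's actual algorithms), an online detector can keep an exponentially weighted estimate of a traffic metric reported by each capture point and flag observations that stray far from it; all parameters here are illustrative assumptions:

```python
class EwmaDetector:
    """Online anomaly indicator for a traffic metric (e.g. packets/s):
    maintains an exponentially weighted moving mean and variance and
    flags samples deviating more than k smoothed standard deviations."""

    def __init__(self, alpha=0.1, k=3.0, warmup=10):
        self.alpha, self.k, self.warmup = alpha, k, warmup
        self.mean = None
        self.var = 0.0
        self.n = 0

    def update(self, x):
        self.n += 1
        if self.mean is None:           # first sample seeds the baseline
            self.mean = float(x)
            return False
        dev = x - self.mean
        anomalous = (self.n > self.warmup and self.var > 0
                     and abs(dev) > self.k * self.var ** 0.5)
        self.mean += self.alpha * dev   # update baseline after the test
        self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
        return anomalous
```

Because the state is a handful of numbers per sensor, such a detector suits distributed capture: each probe maintains its own baseline and only anomalies need to be forwarded for central processing.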

    Engineering Model-Based Adaptive Software Systems

    Get PDF
    Adaptive software systems are able to cope with changes in the environment by self-adjusting their structure and behavior. Robustness refers to the ability of such systems to deal with uncertainty, i.e., perturbations (e.g., Denial of Service attacks) or unmodeled system dynamics (e.g., independent cloud applications hosted on the same physical machine) that can affect the quality of the adaptation. To build robust adaptive systems, we need models that accurately describe the managed system and methods for reacting to different types of change. In this thesis, we introduce techniques that help an engineer design adaptive systems for web applications. We describe methods to accurately model web applications deployed in the cloud, in a way that accounts for cloud variability, and to keep the model synchronized with the actual system at runtime. Using the model, we present methods to optimize the deployed architecture at design time and runtime, uncover bottlenecks and the workloads that saturate them, and maintain the service level objective by changing the quantity of available resources (under regular operating conditions or during a Denial of Service attack). We validate the proposed contributions with experiments performed on Amazon EC2 and in simulators. The types of applications that benefit the most from our contributions are web-based information systems deployed in the cloud.
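One concrete ingredient of such runtime adaptation is the resource-planning step: compare the observed service level against the objective and adjust capacity. The sketch below is an illustrative policy, not the thesis's controller; the thresholds and proportional rule are assumptions:

```python
def plan_replicas(current, latency_ms, slo_ms, min_replicas=1, max_replicas=20):
    """One iteration of a feedback loop sizing a web tier to hold a
    latency SLO: scale out proportionally to the violation, scale in
    cautiously when there is ample headroom."""
    if latency_ms > slo_ms:
        target = int(current * latency_ms / slo_ms + 0.5)   # proportional scale-out
        target = max(target, current + 1)                   # always add at least one
    elif latency_ms < 0.5 * slo_ms and current > min_replicas:
        target = current - 1            # gentle scale-in to avoid oscillation
    else:
        target = current                # within band: hold steady
    return max(min_replicas, min(max_replicas, target))
```

Running this loop on model-predicted rather than raw observed latency is what lets a runtime model, kept synchronized with the system, absorb cloud variability before it triggers spurious scaling.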
