
    A Defense Framework Against Denial-of-Service in Computer Networks

    Denial-of-Service (DoS) is a computer security problem that poses a serious challenge to the trustworthiness of services deployed over computer networks. The aim of DoS attacks is to make services unavailable to legitimate users, and current network architectures allow easy-to-launch, hard-to-stop DoS attacks. Particularly challenging are service-level DoS attacks, whereby the victim service is flooded with legitimate-looking requests, and the jamming attack, in which wireless communication is blocked by malicious radio interference. These attacks can overwhelm even massively resourced services, so effective and efficient defenses are sorely needed. This work contributes a novel defense framework, which I call dodging, against service-level DoS and wireless jamming. Dodging has two components: (1) the careful assignment of servers to clients to achieve accurate and quick identification of service-level DoS attackers, and (2) the continuous, unpredictable-to-attackers reconfiguration of the client-server assignment and the radio-channel mapping to withstand service-level and jamming DoS attacks. Dodging creates hard-to-evade baits, or traps, and dilutes the attack "fire power". The traps identify attackers when they violate the mapping function, and even when they attack while correctly following it. Moreover, dodging keeps attackers "in the dark", trying to follow the unpredictably changing mapping. They may hit a few times, but they lose "precious" time before they are identified and stopped. Three dodging-based DoS defense algorithms are developed in this work. They are more resource-efficient than state-of-the-art DoS detection and mitigation techniques. Honeybees combines channel hopping and error-correcting codes to achieve bandwidth-efficient and energy-efficient mitigation of jamming in multi-radio networks.
In roaming honeypots, dodging enables the camouflaging of honeypots, or trap machines, as real servers, making it hard for attackers to locate and avoid the traps. Furthermore, shuffling requests over servers opens up windows of opportunity during which legitimate requests are serviced. Live baiting efficiently identifies service-level DoS attackers by employing results from group-testing theory, which discovers defective members of a population using a minimum number of tests. The cost and benefit of the dodging algorithms are analyzed theoretically, in simulation, and using prototype experiments.
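The group-testing idea behind live baiting can be sketched in a few lines. This is a minimal illustration with invented function names, not the thesis's actual construction (which uses stronger, d-disjunct assignments to handle multiple simultaneous attackers): each client is mapped to a set of server "buckets" via its id bits and their complements, so with a single attacker the set of alarmed buckets pinpoints it exactly.

```python
def buckets_for(client_id, num_bits):
    # Client joins bucket 2j when bit j of its id is 1, bucket 2j+1 when it is 0.
    return {2 * j + (0 if (client_id >> j) & 1 else 1) for j in range(num_bits)}

def identify_attackers(num_clients, attackers, num_bits):
    # A bucket "alarms" exactly when it serves at least one attacker.
    alarmed = set()
    for a in attackers:
        alarmed |= buckets_for(a, num_bits)
    # Accuse a client only if *all* of its buckets alarmed; with a single
    # attacker, every honest client sits in at least one quiet bucket.
    return [c for c in range(num_clients)
            if buckets_for(c, num_bits) <= alarmed]
```

With several attackers this naive bit/complement code can over-accuse, which is precisely why the thesis turns to group-testing theory for attacker-tolerant assignments.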

    Extraction of patterns in selected network traffic for a precise and efficient intrusion detection approach

    This thesis investigates a precise and efficient pattern-based intrusion detection approach that extracts patterns from sequences of adversarial commands. As organisations place more assets within the cyber domain, mitigating the potential exposure of these assets becomes increasingly imperative. Machine learning is the application of learning algorithms to extract knowledge from data, determining patterns between data points and making predictions. Here, machine learning algorithms are used to extract patterns from sequences of commands in order to precisely and efficiently detect adversaries using the Secure Shell (SSH) protocol. As SSH is one of the most predominant methods of accessing systems, it is also a prime target for cybercriminal activity. For this study, deep packet inspection was applied to data acquired from three medium-interaction honeypots emulating the SSH service. Feature selection was used to enhance the performance of the selected machine learning algorithms. A pre-processing procedure was developed to organise the acquired datasets so as to present the sequences of adversary commands per unique SSH session. The pre-processing phase also generated a reduced version of each dataset that evenly and coherently represents the respective full dataset. The study focused on whether the machine learning algorithms can extract more precise patterns, more efficiently, from the reduced sequence-of-commands datasets than from the respective full datasets, since a reduced dataset requires less storage space than the full one. The machine learning algorithms selected for this study were the Naïve Bayes, Markov chain, Apriori and Eclat algorithms. The results show that the algorithms applied to the reduced datasets could extract additional patterns that are more precise, compared to the respective full datasets.
It was also determined that the Naïve Bayes and Markov chain algorithms are more efficient at processing the reduced datasets than the respective full datasets. The best-performing algorithm at extracting more precise patterns efficiently from the reduced datasets was the Markov chain algorithm. The greatest improvement in processing a reduced dataset was 97.711%. This study contributes to the domain of pattern-based intrusion detection an approach that can precisely and efficiently detect adversaries utilising SSH communications to gain unauthorised access to a system.
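The Markov-chain scoring of command sequences can be sketched as follows. This is a hedged illustration under assumed details (a first-order chain, add-alpha smoothing, an invented command vocabulary), not the thesis's implementation: sessions whose command-to-command transitions were rarely seen in training receive low average log-likelihood and can be flagged.

```python
import math
from collections import defaultdict

def train_transitions(sessions):
    """Count command-to-command transitions over training sessions."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for a, b in zip(session, session[1:]):
            counts[a][b] += 1
    return counts

def session_score(session, counts, vocab=1000, alpha=1.0):
    """Average smoothed log-likelihood; low scores suggest unusual sequences."""
    ll = 0.0
    for a, b in zip(session, session[1:]):
        total = sum(counts[a].values())
        ll += math.log((counts[a][b] + alpha) / (total + alpha * vocab))
    return ll / max(1, len(session) - 1)
```

A detector would threshold `session_score` per SSH session; the reduced-dataset gains reported above come from training on fewer, representative sequences.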

    Preemptive mobile code protection using spy agents

    This thesis introduces 'spy agents' as a new security paradigm for evaluating trust in remote hosts in mobile code scenarios. In this security paradigm, a spy agent, i.e. a mobile agent which circulates amongst a number of remote hosts, can employ a variety of techniques in order to both appear 'normal' and suggest to a malicious host that it can 'misuse' the agent's data or code without being held accountable. A framework for the operation and deployment of such spy agents is described. Subsequently, a number of aspects of the operation of such agents within this framework are analysed in greater detail. The set of spy agent routes needs to be constructed in a manner that enables hosts to be identified from a set of detectable agent-specific outcomes. The construction of route sets that both reduce the probability of spy agent detection and support identification of the origin of a malicious act is analysed in the context of combinatorial group testing theory. Solutions to the route set design problem are proposed. A number of spy agent application scenarios are introduced and analysed, including: a) the implementation of a mobile code email honeypot system for identifying email privacy infringers, b) the design of sets of agent routes that enable malicious host detection even when hosts collude, and c) the evaluation of the credibility of host classification results in the presence of inconsistent host behaviour. Spy agents can be used in a wide range of applications, and it appears that each application creates challenging new research problems, notably in the design of appropriate agent route sets
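The route-set identification problem can be illustrated with a small sketch (function names are mine). Assuming a single, non-colluding malicious host that misuses every spy agent routed through it, the culprit must lie on every route whose agent-specific bait fired and on no route that stayed silent:

```python
def candidate_culprits(routes, triggered):
    """routes: list of sets of hosts each agent visits.
    triggered: indices of routes whose bait fired (misuse observed).
    Returns the hosts consistent with a single non-colluding culprit."""
    candidates = set().union(*routes)
    for i, route in enumerate(routes):
        if i in triggered:
            candidates &= route   # culprit was on this route
        else:
            candidates -= route   # a silent route exonerates its hosts
    return candidates
```

Designing the route sets so this intersection always shrinks to one host, even under collusion and partial detection, is exactly the combinatorial group-testing question the thesis analyses.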

    Advanced persistent threats

    Master's thesis, Segurança Informática, Universidade de Lisboa, Faculdade de Ciências, 2015.
Computer systems have become an important part of our society; besides being intrinsically tied to them, most of the information we use in our daily lives exists in digital form. Unlike a physical document, a digital document is exposed to a far wider variety of threats, especially if it is in any way reachable from the Internet. Information is power, so it is no surprise that someone, somewhere, is trying to steal it; adversaries already operate in this new world. Thieves, terrorists and even organised crime have started using the Internet as a means to their ends. Cybersecurity tries to protect information and systems against these and other kinds of threats, using anti-virus software, firewalls and intrusion detectors, among others. Unfortunately the headlines keep coming: millions of euros stolen from banks electronically, companies stripped of their intellectual property, and governments embarrassed by having their secrets exposed to the world. The question arises: why are security systems failing? How are adversaries getting past them? The truth today is that attackers have not only acquired advanced skills but also have access to extremely sophisticated tools, and they will use them to achieve their objectives, be that information theft, the most common objective and therefore the one most discussed in this work, or attacks on critical infrastructure. Advanced Persistent Threat (APT) is a term used to characterise sophisticated, organised attackers with the resources to carry out their attacks. Coined by the United States Air Force in 2006, the term was a way to discuss computer intrusions with non-military personnel.
In its original meaning, Threat indicates that the adversary is not a piece of automated code: the adversary is human, and it is this human who controls part of the attack and contributes to its success; Advanced, because this human is trained and specialised in using the full computing spectrum to best achieve the objective; and Persistent, because that objective is formally defined, i.e. the attack is only over when the target has been fully reached. Unfortunately, the term came to be used to describe any computer attack, and took on a strongly commercial connotation owing to the anti-APT products that flooded the market shortly after the attack suffered by Google in 2010. In this work we examine these assumptions and explain the true meaning of the term, together with a more scientific framing that is clearly more useful from an engineering standpoint. In particular, we suggest a broader view of the attack campaign, not focusing only on the software used by the adversary but trying to look at the campaign as a whole: teams, organisation, maintenance and budget, among others. We also show why these attacks are different in their tactics, techniques and procedures, and why they deserve to be distinguished with their own designation and their own life cycle. Besides identifying several life cycles associated with APTs, the life cycle most commonly used to characterise these campaigns is analysed in detail, from the first reconnaissance steps to the completion of the objectives. We also discuss the essence of each step and why it is, or is not, important. We then analyse the kind of attacker behind these campaigns: who they are, their histories and their objectives. We also assess why traditional defence mechanisms keep being bypassed and cannot keep up with the attackers' fast pace.
This happens mainly because of the reliance on lists of what is known to be malicious and the blocking only of what appears on those lists, known as blacklisting. Although work has been done on anomaly detection, we also show why those systems are still not sufficient, namely because they define their base assumptions wrongly. While carrying out this work we noticed the lack of statistics that could answer certain questions, so a study was made of the publicly available reports on this kind of attack, and the results are presented in a simple, organised and summarised form. This study helped identify the main objectives of these attacks, namely espionage and the theft of confidential information; the main attack vectors (e-mail being the clear winner, given how easy it is to exploit the human factor); the targeted applications; and whether unknown vulnerabilities are used. We hope this collection of information proves useful for future work and for anyone interested in the topic. Only after this study was it possible to think of ways to contribute to solving the problem posed by APTs. One distinction became clear: there is not only the need to detect APTs but also the criticality of preventing them. The best way not to fall victim to infection is to apply good security practices and, in this case, to train all personnel on their role in the overall security of the organisation. We also address the importance of preparation: security is not only about protecting oneself from attackers, but above all about knowing how to recover.
Regarding detection, work was carried out on two fronts. First, since this work was done in a corporate environment, a plan was drawn up for a system capable of detecting attack campaigns that use the e-mail infection vector, making use of systems already developed by AnubisNetworks which, being a computer security company with strong ties to e-mail, had the knowledge and tools needed to build it. The system relies on a characterisation of people, called people mapping, which aims to identify the main targets inside the company and those who exhibit the riskiest behaviour. This characterisation makes it possible to build a list of priority personnel whose e-mail (when it contains attachments or links) would be analysed in a sandbox environment. This system was ultimately not built; only its blueprint is left here, and the challenge of building it remains open. To contribute not only to the company but also to the security research community, detection work was then carried out at several points of a generic network, following the four main steps in the execution of an APT campaign. We adopted a life cycle composed of four stages: reconnaissance, initial infection, command and control, and information theft. In this model, we looked for possible systems to detect APT-related events at the three main points of any network: the Internet, the intranet and the client machine. By analysing each phase at each point of the network, it was possible to see which are the main areas of study and development for better detecting APTs. More concretely, we concluded that the Internet is the ideal point for detecting the reconnaissance phases, the intranet for detecting command and control and information theft, and the client machine for detecting the initial infection.
The work concludes by presenting our view of the future: who will make use of the tactics employed in APT campaigns, given how extremely successful they are, how attackers will adapt to new defence mechanisms, and what the possible new infection vectors are.
Computer systems have become a very important part of our society, most of the information we use in our everyday lives is in digital form, and since information is power it only makes sense that someone, somewhere will try to steal it. Attackers are adapting and now have access to highly sophisticated tools and expertise to conduct highly targeted and very complex attack campaigns. Advanced Persistent Threat, or APT, is a term coined by the United States Air Force around 2006 as a way to talk about classified intrusions with uncleared personnel. It wrongly and quickly became the standard acronym to describe every sort of attack. This work tries to demystify the problem of APTs: why they are called as such, and what their most common tactics, techniques and procedures are. It also discusses previously proposed life cycles, profiles the most common adversaries, and looks at why traditional defences will not stop them. A big problem encountered while developing this work was the lack of statistics regarding APT attacks. One of the main contributions here consists of the search for publicly available reports, their analysis, and the presentation of the relevant information gathered in a summarised fashion. From the most targeted applications to the most typical infection vector, insight is given into how and why the adversaries conduct these attacks. Only once a clear understanding of the problem was reached were prevention and detection schemes discussed. Specifically, blueprints are presented for a system to be used by AnubisNetworks, capable of detecting these attacks at the e-mail level.
It is based on sandboxing and on people mapping, a way to better understand people, one of the weakest links in security. The work concludes by trying to understand how the threat landscape will shape itself in the upcoming years.
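The people-mapping idea can be illustrated with a toy risk-scoring routine. Everything here, field names and weights alike, is hypothetical; the thesis leaves the system as a blueprint, so this only shows the shape of "rank personnel, sandbox the riskiest mailboxes first":

```python
def risk_score(person, weights=None):
    # Illustrative weights; no scoring formula is published in the thesis.
    weights = weights or {"public_exposure": 0.40,   # how findable the person is
                          "privilege_level": 0.35,   # what their account can reach
                          "risky_behaviour": 0.25}   # history of risky clicks
    return sum(w * person.get(k, 0.0) for k, w in weights.items())

def sandbox_priority(people, top=3):
    """Highest-risk personnel first; their e-mail attachments and links
    would be detonated in a sandbox before delivery."""
    return sorted(people, key=risk_score, reverse=True)[:top]
```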

    A Survey on Security and Privacy of 5G Technologies: Potential Solutions, Recent Advancements, and Future Directions

    Security has become a primary concern in many telecommunications industries today, as risks can have severe consequences. In particular, as core and enabling technologies are built into 5G networks, confidential information will move through all layers of future wireless systems. Several incidents have revealed that the hazard encountered by an infected wireless network not only affects security and privacy, but also impedes the complex dynamics of the communications ecosystem. Consequently, the complexity and strength of security attacks have increased in the recent past, making the detection and prevention of sabotage a global challenge. From the security and privacy perspectives, this paper presents comprehensive detail on the core and enabling technologies used to build the 5G security model: network softwarization security, PHY (physical) layer security, and 5G privacy concerns, among others. Additionally, the paper discusses security monitoring and management of 5G networks. It also evaluates the related security measures and standards of core 5G technologies by resorting to different standardization bodies, and provides a brief overview of the 5G security standardization efforts. Furthermore, key projects of international significance, in line with the security concerns of 5G and beyond, are presented. Finally, a future directions and open challenges section is included to encourage future research.
Funding: European Commission; National Research Tomsk Polytechnic University.

    Resilience-Building Technologies: State of Knowledge -- ReSIST NoE Deliverable D12

    This document is the first product of work package WP2, "Resilience-building and -scaling technologies", in the programme of jointly executed research (JER) of the ReSIST Network of Excellence.

    Modélisation formelle des systèmes de détection d'intrusions

    The cybersecurity ecosystem evolves continuously in the number, diversity, and complexity of attacks, and detection tools have become ineffective against some of them. Three types of intrusion detection system (IDS) are generally distinguished: anomaly-based detection, signature-based detection, and hybrid detection. Anomaly-based detection characterises the usual behaviour of the system, typically statistically; it can detect both known and unknown attacks, but also generates a very large number of false positives. Signature-based detection detects known attacks by defining rules that describe known attacker behaviour, which requires good knowledge of that behaviour. Hybrid detection combines several detection methods, including the previous two, and has the advantage of being more precise. Tools such as Snort and Zeek offer low-level languages for expressing attack-recognition rules. Because the number of potential attacks is very large, these rule bases quickly become difficult to manage and maintain. Moreover, expressing stateful rules that recognise a sequence of events is particularly arduous. In this thesis, we propose a stateful approach based on algebraic state-transition diagrams (ASTDs) to identify complex attacks. ASTDs allow a graphical and modular representation of a specification, which facilitates the maintenance and understanding of rules. We extend the ASTD notation with new features to represent complex attacks. We then specify several attacks with the extended notation and run the resulting specifications on event streams using an interpreter to identify attacks. We also evaluate the performance of the interpreter against industrial tools such as Snort and Zeek. Finally, we build a compiler that generates executable code from an ASTD specification, capable of efficiently identifying sequences of events.
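The kind of stateful rule that is awkward to express in signature languages, and that ASTDs target, can be illustrated with a toy matcher. This is not ASTD notation, just a minimal hand-written stand-in for one stateful rule (repeated login failures followed by a success from the same source):

```python
class BruteForceMatcher:
    """Toy stateful rule: alert when `threshold` failed logins from one IP
    are followed by a successful login from that same IP. An ASTD would
    express this declaratively; here the state is kept by hand."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = {}          # source IP -> consecutive failure count

    def feed(self, event):
        ip, kind = event["ip"], event["kind"]
        if kind == "login_failure":
            self.failures[ip] = self.failures.get(ip, 0) + 1
        elif kind == "login_success":
            n = self.failures.pop(ip, 0)
            if n >= self.threshold:
                return f"alert: brute force then success from {ip}"
        return None
```

The point of the ASTD approach is that rules like this, and far more complex event sequences, are written as composable diagrams instead of per-rule bookkeeping code.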

    Compilation of thesis abstracts, December 2006

    NPS Class of December 2006. This quarter's Compilation of Abstracts summarizes cutting-edge, security-related research conducted by NPS students and presented as theses, dissertations, and capstone reports. Each expands knowledge in its field.
http://archive.org/details/compilationofsis109452750

    Exploring Cyberterrorism, Topic Models and Social Networks of Jihadists Dark Web Forums: A Computational Social Science Approach

    This three-article dissertation focuses on cyber-related topics concerning terrorist groups, specifically Jihadists' use of technology, the application of natural language processing, and social networks in analyzing text data derived from terrorists' Dark Web forums. The first article explores cybercrime and cyberterrorism. As technology progresses, it facilitates new forms of behavior, including tech-related crimes known as cybercrime and cyberterrorism. In this article, I analyze the problems of cybercrime and cyberterrorism within the field of criminology by reviewing existing literature focusing on (a) the issues in defining terrorism, cybercrime, and cyberterrorism, (b) ways that cybercriminals commit crime in cyberspace, and (c) ways that cyberterrorists attack critical infrastructure, including computer systems, data, websites, and servers. The second article is a methodological study examining the application of natural language processing techniques, specifically latent Dirichlet allocation (LDA) topic models and topic network analysis of text data. I demonstrate the potential of topic models by inductively analyzing large-scale textual data of Jihadist groups and supporters from three Dark Web forums to uncover underlying topics. The Dark Web forums are dedicated to discussions of Islam and the Islamic world, and some members sympathize with and support terrorist organizations. Results indicate that topic modeling can be applied to analyze text data automatically; the most prevalent topic in all forums was religion. Forum members also discussed terrorism and terrorist attacks, supporting the Mujahideen fighters. A few of the discussions concerned relationships and marriage, advice, seeking help, health, food, selling electronics, and identity cards. LDA topic modeling is significant for finding topics in large corpora such as the Dark Web forums.
Implications for counterterrorism include the use of topic modeling in real-time classification and removal of online terrorist content, and the monitoring of religious forums, as terrorist groups use religion to justify their goals and to recruit supporters in such forums. The third article builds on the second, exploring the network structures of terrorist groups on the Dark Web forums. Interaction networks were created for two of the forums, and network properties were measured using social network analysis. A member is considered connected to, and interacting with, other forum members when they post in the same threads, forming an interaction network. Results reveal that the network structure is decentralized, sparse, and divided according to topics (religion, terrorism, current events, and relationships) and the members' interest in participating in the threads. As participation in forums is an active process, users tend to select platforms most compatible with their views, forming subgroups or communities. However, some members are essential and influential in the flow of information and resources within the networks. These key members frequently posted about religion, terrorism, and relationships in multiple threads. Identifying key members is significant for counterterrorism, as mapping network structures and key users is essential for removing and destabilizing terrorist networks. Taken together, this dissertation applies a computational social science approach to the analysis of cyberterrorism and the use of Dark Web forums by jihadists.
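The interaction-network construction described above can be sketched in a few lines. This is a hedged illustration (names are mine) using simple degree counts as the "key member" criterion, rather than the fuller centrality measures a social network analysis would employ:

```python
from collections import defaultdict
from itertools import combinations

def build_interaction_network(threads):
    """threads: mapping thread_id -> list of member ids who posted in it.
    Two members are linked when they posted in the same thread."""
    neighbors = defaultdict(set)
    for members in threads.values():
        for a, b in combinations(set(members), 2):
            neighbors[a].add(b)
            neighbors[b].add(a)
    return neighbors

def key_members(threads, top=2):
    """Rank members by degree, i.e. how many distinct co-posters they have."""
    net = build_interaction_network(threads)
    return sorted(net, key=lambda m: len(net[m]), reverse=True)[:top]
```

Members who co-post across many threads end up with high degree, matching the dissertation's observation that a few influential users bridge the otherwise sparse, topic-divided communities.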