21 research outputs found

    Routing in anonymous networks as a means to prevent traffic analysis

    Traditionally, traffic analysis has been used to measure and track a network's state: congestion, networking hardware failures, and so on. However, largely driven by commercial interests such as targeted advertising, traffic analysis techniques can also be used to identify and track a single user's movements within the Internet. To counteract this breach of privacy and anonymity, several countermeasures have been developed over time, e.g. proxies that obfuscate the true source of traffic, making it harder for others to pinpoint your location. Another approach has been the development of so-called anonymous overlay networks: application-level virtual networks running on top of the physical IP network. The core concept is that, by way of encryption and obfuscation of traffic patterns, the users of such anonymous networks gain anonymity and protection against traffic analysis techniques. In this master's thesis we examine how message forwarding and packet routing in IP networks function, and how they are exploited by different analysis techniques to single out a visitor to a website, or anyone whose messages are forwarded through a network device used for traffic analysis. We then discuss some examples of anonymous overlay networks, how well they protect their users from traffic analysis, and how their respective models hold up against traffic analysis attacks by a malicious entity. Finally, we present a case study of the Tor network's popularity, in which we run a Tor relay node and gather information on how much data the relay transmits and where the traffic originates. CCS concepts: Security and privacy ~ Privacy protections; Networks ~ Overlay and other logical network structures; Information systems ~ Traffic analysis
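    Anonymous overlays like Tor rely on layered (onion) encryption: the sender wraps the message in one encryption layer per relay, and each relay peels exactly one layer, learning only the next hop. A minimal sketch of that peeling, using a hash-based XOR keystream as a stand-in for real cryptography (the 16-byte header layout and all names are illustrative, not Tor's actual format):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Hash-based stand-in for a real stream cipher (illustration only).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def wrap(message: bytes, route: list) -> tuple:
    # route: [(relay_address, relay_key), ...] in forwarding order.
    # Build layers inside-out so each relay sees only the next hop.
    packet, next_addr = message, "EXIT"
    for addr, key in reversed(route):
        packet = xor(next_addr.encode().ljust(16, b"\0") + packet, key)
        next_addr = addr
    return next_addr, packet  # (first relay, onion-wrapped packet)

def peel(packet: bytes, key: bytes) -> tuple:
    # One relay's step: strip one layer, read the 16-byte next-hop header.
    plain = xor(packet, key)
    return plain[:16].rstrip(b"\0").decode(), plain[16:]
```

    No single relay sees both the sender and the final payload, which is exactly the property that frustrates the traffic analysis techniques discussed in the thesis.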

    Self-sovereign identity: decentralized identifiers, claims and credentials using non-decentralized-ledger technology

    Master's dissertation in Informatics Engineering. Current identity management systems rely on centralized databases to store users' personal data, which poses great risks for data security, as these infrastructures create a critical point of failure for the whole system. Besides that, service providers have to bear huge maintenance costs and comply with strict data protection regulations. Self-sovereign identity (SSI) is a new identity management paradigm that tries to answer some of these problems by providing a decentralized, user-centric identity management system that gives users full control over their personal data. Some of its underlying concepts include Decentralized Identifiers (DIDs), Verifiable Claims and Credentials. This approach does not rely on any central authority to enforce trust, as it often uses Blockchain or other Decentralized Ledger Technologies (DLT) as the trust anchor of the system, although other decentralized networks or databases could also be used for the same purpose. This thesis focuses on finding alternative solutions to DLT in the context of SSI. Despite being the most widely used solution, some DLTs are known to lack scalability and performance, and since a global identity management system relies heavily on these two requirements, DLT might not be the best solution to the problem. This document provides an overview of the state of the art and main standards of SSI, and then focuses on a non-DLT approach to SSI, referencing non-DLT implementations and alternative decentralized infrastructures that can replace DLTs in SSI. It highlights some of the limitations associated with using DLTs for identity management and presents an SSI framework based on decentralized name systems and networks.
    This framework bundles all the main functionalities needed to create different SSI agents, which were showcased in a proof-of-concept application.
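    The issue/verify flow behind verifiable credentials can be sketched as follows. This is a hypothetical illustration, not the framework from the thesis: the HMAC stands in for a real digital signature (e.g. Ed25519) whose public key would be resolved through the issuer's DID document, and the field names are simplified from the W3C data model:

```python
import hashlib
import hmac
import json

def canonical(obj) -> bytes:
    # Deterministic serialization so signer and verifier hash the same bytes.
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

def issue_credential(issuer_did: str, issuer_key: bytes, subject_did: str, claims: dict) -> dict:
    cred = {
        "issuer": issuer_did,
        "credentialSubject": {"id": subject_did, **claims},
    }
    # Stand-in for a real signature keyed to the issuer's DID.
    cred["proof"] = hmac.new(issuer_key, canonical(cred), hashlib.sha256).hexdigest()
    return cred

def verify_credential(cred: dict, issuer_key: bytes) -> bool:
    # Recompute the proof over everything except the proof itself.
    body = {k: v for k, v in cred.items() if k != "proof"}
    expected = hmac.new(issuer_key, canonical(body), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["proof"])
```

    The point of the SSI design is that this verification needs no central registry: the verifier only needs the issuer's key material, which a decentralized name system can provide in place of a DLT.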

    Analyzing and Enhancing Routing Protocols for Friend-to-Friend Overlays

    The threat of surveillance by governmental and industrial parties is more acute than ever. As communication moves into the digital domain, advances in the automatic assessment and interpretation of enormous amounts of data enable the tracking of millions of people, recording and monitoring their private lives with unprecedented accuracy. The knowledge of such an all-encompassing loss of privacy affects the behavior of individuals, inducing various degrees of (self-)censorship and anxiety. Furthermore, the monopoly of a few large-scale organizations on digital communication enables global censorship and manipulation of public opinion. Thus, the current situation undermines freedom of speech to a detrimental degree and threatens the foundations of modern society. Anonymous and censorship-resistant communication systems are hence of utmost importance to circumvent constant surveillance. However, existing systems are highly vulnerable to infiltration and sabotage. In particular, Sybil attacks, i.e., powerful parties inserting a large number of fake identities into the system, enable malicious parties to observe and possibly manipulate a large fraction of the communication within the system. Friend-to-friend (F2F) overlays, which restrict direct communication to parties sharing a real-world trust relationship, are a promising countermeasure to Sybil attacks, since the requirement of establishing real-world trust drastically increases the cost of infiltration. Yet, existing F2F overlays suffer from low performance, are vulnerable to denial-of-service attacks, or fail to provide anonymity. Our first contribution in this thesis is an in-depth analysis of the concepts underlying the design of state-of-the-art F2F overlays. In the course of this analysis, we first extend the existing evaluation methods considerably, providing tools for both our own and future research on F2F overlays and distributed systems in general.
    Based on this novel methodology, we prove that existing approaches are inherently unable to offer acceptable delays without either requiring exhaustive maintenance costs or enabling denial-of-service attacks and de-anonymization. Consequently, our second contribution lies in the design and evaluation of a novel concept for F2F overlays based on the insights of the prior in-depth analysis. Our analysis revealed that greedy embeddings allow highly efficient communication in arbitrary connectivity-restricted overlays by addressing participants through coordinates and adapting these coordinates to the overlay structure. However, greedy embeddings in their original form reveal the identity of the communicating parties and fail to provide the necessary resilience in the presence of dynamic and possibly malicious users. Therefore, we present a privacy-preserving communication protocol for greedy embeddings based on anonymous return addresses rather than identifying node coordinates. Furthermore, we enhance the communication's robustness and attack resistance by using multiple parallel embeddings and alternative algorithms for message delivery. We show that our approach achieves a low communication complexity. By replacing the coordinates with anonymous addresses, we furthermore provably achieve anonymity in the form of plausible deniability against an internal local adversary. Complementarily, our simulation study on real-world data indicates that our approach is highly efficient and effectively mitigates the impact of failures as well as powerful denial-of-service attacks. These fundamental results open new possibilities for anonymous and censorship-resistant applications.
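    The greedy embeddings discussed above assign each node a coordinate (its path from the root of a spanning tree) and forward a message to whichever neighbour is closest to the destination coordinate under the tree distance. A minimal sketch of that routing step (data layout illustrative; the thesis's actual protocol replaces destination coordinates with anonymous return addresses):

```python
def tree_distance(a: tuple, b: tuple) -> int:
    # Distance in the spanning tree: hops up to the common ancestor and down.
    cpl = 0
    for x, y in zip(a, b):
        if x != y:
            break
        cpl += 1
    return len(a) + len(b) - 2 * cpl

def greedy_route(coords: dict, neighbours: dict, src, dst):
    # Forward greedily; in a tree embedding this always reaches dst,
    # since some neighbour is strictly closer at every step.
    path, current = [src], src
    while current != dst:
        nxt = min(neighbours[current],
                  key=lambda n: tree_distance(coords[n], coords[dst]))
        if tree_distance(coords[nxt], coords[dst]) >= tree_distance(coords[current], coords[dst]):
            return None  # stuck: no neighbour makes progress
        current = nxt
        path.append(current)
    return path
```

    Because routing decisions depend only on local coordinates, no routing tables are needed, which is what makes the approach attractive for connectivity-restricted F2F overlays.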

    An Accountability Architecture for the Internet

    In the current Internet, senders are not accountable for the packets they send. As a result, malicious users send unwanted traffic that wastes shared resources and degrades network performance. Stopping such attacks requires identifying the responsible principal and filtering any unwanted traffic it sends. However, senders can obscure their identity: a packet identifies its sender only by the source address, but the Internet Protocol does not enforce that this address be correct. Additionally, affected destinations have no way to prevent the sender from continuing to cause harm. An accountable network binds sender identities to packets they send for the purpose of holding senders responsible for their traffic. In this dissertation, I present an accountable network-level architecture that strongly binds senders to packets and gives receivers control over who can send traffic to them. Holding senders accountable for their actions would prevent many of the attacks that disrupt the Internet today. Previous work in attack prevention proposes methods of binding packets to senders, giving receivers control over who sends what to them, or both. However, they all require trusted elements on the forwarding path, to either assist in identifying the sender or to filter unwanted packets. These elements are often not under the control of the receiver and may become corrupt. This dissertation shows that the Internet architecture can be extended to allow receivers to block traffic from unwanted senders, even in the presence of malicious devices in the forwarding path. This dissertation validates this thesis with three contributions. The first contribution is DNA, a network architecture that strongly binds packets to their sender, allowing routers to reject unaccountable traffic and recipients to block traffic from unwanted senders. Unlike prior work, which trusts on-path devices to behave correctly, the only trusted component in DNA is an identity certification authority. 
    All other entities may misbehave and are either blocked or evicted from the network. The second contribution is NeighborhoodWatch, a secure, distributed, scalable object store that is capable of withstanding misbehavior by its constituent nodes. DNA uses NeighborhoodWatch to store receiver-specific requests to block individual senders. The third contribution is VanGuard, an accountable capability architecture. Capabilities are small, receiver-generated tokens that grant the sender permission to send traffic to the receiver. Existing capability architectures are not accountable, assume a protected channel for obtaining capabilities, and allow on-path devices to steal capabilities. VanGuard builds a capability architecture on top of DNA, preventing capability theft and protecting the capability request channel by allowing receivers to block senders that flood the channel. Once a sender obtains capabilities, it no longer needs to sign traffic, thus allowing greater efficiency than DNA alone. The DNA architecture demonstrates that it is possible to create an accountable network architecture in which none of the devices on the forwarding path must be trusted. DNA holds senders responsible for their traffic by allowing receivers to block senders; to store this blocking state, DNA relies on the NeighborhoodWatch DHT. VanGuard extends DNA and reduces its overhead by incorporating capabilities, which gives destinations further control over the traffic that sources send to them.
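    The capability mechanism described above can be illustrated with a short sketch: the receiver mints a token that binds a sender identity to an expiry time under a secret only the receiver knows, and later verifies presented tokens statelessly. Field layout and lifetimes here are hypothetical, not VanGuard's wire format:

```python
import hashlib
import hmac
import time

def mint_capability(receiver_secret: bytes, sender_id: str, lifetime_s: int = 60) -> dict:
    # Receiver-generated token authorizing sender_id until expiry.
    expiry = int(time.time()) + lifetime_s
    msg = f"{sender_id}|{expiry}".encode()
    tag = hmac.new(receiver_secret, msg, hashlib.sha256).hexdigest()
    return {"sender": sender_id, "expiry": expiry, "tag": tag}

def check_capability(receiver_secret: bytes, cap: dict, now: int = None) -> bool:
    # Stateless check: recompute the tag; no per-sender state needed.
    now = int(time.time()) if now is None else now
    if now > cap["expiry"]:
        return False  # expired: sender must request a fresh capability
    msg = f"{cap['sender']}|{cap['expiry']}".encode()
    expected = hmac.new(receiver_secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cap["tag"])
```

    Because the tag covers the sender identity, a token stolen by an on-path device is useless to anyone else, and the receiver can stop a sender simply by declining to renew its capability.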

    Network Infrastructures in the Dark Web

    With the appearance of the Internet, opened to everyone in 1991, criminals saw a big opportunity in moving their organisations to the World Wide Web, taking advantage of these infrastructures for their higher mobility and scalability. Later, in the year 2000, the first system of this kind appeared, creating what is known today as the Dark Web. This layer of the World Wide Web quickly became the go-to option for criminals wanting to sell and deliver content such as match-fixing, child pornography, drug markets, gun markets, etc. This obscure side of the Dark Web makes it a relevant topic to study in order to tackle this huge network and help identify these malicious activities and actors. In this master's thesis we show, through the study of two datasets from the Dark Web, that we are surrounded by capable technologies that can be applied to these types of problems to increase our knowledge of the data and reveal interesting characteristics in an interactive and useful way. One dataset has 10 000 relations between domains living on the Dark Web, and the other has thousands of records from just 11 specific Dark Web domains. We reveal detailed information about each dataset by applying different analyses and data mining algorithms. For the first dataset, we studied domain availability patterns with temporal analysis, categorised domains with machine learning neural networks, and revealed the network topology and node relevance with social network analysis and a core-periphery model. For the second dataset, we created a web graph that cross-matches information and applied a named-entity recognition algorithm, which resulted in a tool for identifying entities within Dark Web domains.
    All of these approaches culminated in an interactive web application where we publicly display not only the entire research but also the tools developed along with the project (https://darkor.org).
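    As a rough illustration of the core-periphery analysis mentioned above, a crude degree-based split over a domain-link edge list looks like this (the thesis presumably fitted a proper core-periphery model; the function name and `core_fraction` parameter are illustrative):

```python
from collections import defaultdict

def core_periphery(edges, core_fraction: float = 0.1):
    # Count each node's degree over the undirected edge list.
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    # Take the top fraction of nodes by degree as the densely connected core;
    # everything else falls into the sparsely connected periphery.
    ranked = sorted(degree, key=degree.get, reverse=True)
    k = max(1, int(len(ranked) * core_fraction))
    return set(ranked[:k]), set(ranked[k:])
```

    On the Dark Web domain-relation dataset, such a split would separate hub domains (link lists, markets) from the long tail of rarely referenced sites.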

    Traffic Monitoring and Analysis for Source Identification

    Ph.D. (Doctor of Philosophy)

    Study of Peer-to-Peer Network Based Cybercrime Investigation: Application on Botnet Technologies

    The scalable, low-overhead attributes of Peer-to-Peer (P2P) Internet protocols and networks lend themselves well to being exploited by criminals to execute a wide range of cybercrimes. The types of crime aided by P2P technology include copyright infringement, sharing of illicit images of children, fraud, hacking/cracking, denial-of-service attacks, and virus/malware propagation through the use of a variety of worms, botnets, malware, viruses and P2P file sharing. This project is focused on the study of active P2P nodes along with the analysis of the undocumented communication methods employed in many of these large unstructured networks. This is achieved through the design and implementation of an efficient P2P monitoring and crawling toolset. The requirement for investigating P2P-based systems is not limited to the more obvious cybercrimes listed above, as many legitimate P2P-based applications may also be pertinent to a digital forensic investigation, e.g., voice over IP, instant messaging, etc. Investigating these networks has become increasingly difficult due to the broad range of network topologies and the ever increasing and evolving range of P2P-based applications. In this work we introduce the Universal P2P Network Investigation Framework (UP2PNIF), a framework which enables significantly faster and less labour-intensive investigation of newly discovered P2P networks by exploiting the commonalities in P2P network functionality. In combination with a reference database of known network characteristics, it is envisioned that any known P2P network can be instantly investigated using the framework, which can intelligently determine the best investigation methodology and greatly expedite the evidence gathering process.
    A proof-of-concept tool was developed for conducting investigations on the BitTorrent network. Comment: This is a thesis submitted in fulfilment of a PhD in Digital Forensics and Cybercrime Investigation in the School of Computer Science, University College Dublin in October 201
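    As an illustration of the kind of building block such a P2P investigation toolset needs: BitTorrent peers and trackers exchange bencoded messages, so a crawler typically starts with a bencode decoder. A minimal one covering the four bencode types (integers, byte strings, lists, dictionaries):

```python
def bdecode(data: bytes, i: int = 0):
    """Decode one bencoded value starting at offset i; return (value, next_i)."""
    c = data[i:i + 1]
    if c == b"i":                      # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if c == b"l":                      # list: l<items>e
        i += 1
        items = []
        while data[i:i + 1] != b"e":
            item, i = bdecode(data, i)
            items.append(item)
        return items, i + 1
    if c == b"d":                      # dictionary: d<key value pairs>e
        i += 1
        d = {}
        while data[i:i + 1] != b"e":
            key, i = bdecode(data, i)
            val, i = bdecode(data, i)
            d[key] = val
        return d, i + 1
    colon = data.index(b":", i)        # byte string: <length>:<bytes>
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length
```

    A monitoring tool built on such a decoder can then parse tracker responses and DHT messages to enumerate active peers, which is the starting point for the network-wide evidence gathering the framework automates.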