
    Routing in anonymous networks as a means to prevent traffic analysis

    Traditionally, traffic analysis has been used to measure and track a network's condition with respect to congestion, networking hardware failures, and so on. However, largely driven by commercial interests such as targeted advertising, traffic analysis techniques can also be used to identify and track a single user's movements on the Internet. To counteract this breach of privacy and anonymity, several countermeasures have been developed over time, e.g. proxies that obfuscate the true source of traffic, making it harder for others to pinpoint a user's location. Another approach has been the development of so-called anonymous overlay networks: application-level virtual networks running on top of the physical IP network. The core concept is that, by way of encryption and obfuscation of traffic patterns, the users of such anonymous networks gain anonymity and protection against traffic analysis techniques. In this master's thesis we look at how message forwarding, or packet routing, functions in IP networks and how it is exploited by different analysis techniques to single out a visitor to a website, or anyone whose messages pass through a network device used for traffic analysis. We then discuss some examples of anonymous overlay networks, examine how well they protect their users from traffic analysis, and consider how their respective models hold up against traffic analysis attacks by a malicious entity. Finally, we present a case study of the Tor network's popularity, carried out by running a Tor relay node and gathering information on how much data the relay transmits and where the traffic originates. CCS concepts: Security and privacy ~ Privacy protections; Networks ~ Overlay and other logical network structures; Information systems ~ Traffic analysis
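
    The relay case study reduces to periodically reading traffic counters from a running relay. A minimal sketch of how such data could be gathered, assuming a local relay with its ControlPort on 9051 and the Stem controller library (the port, polling interval, and authentication setup are illustrative assumptions, not details taken from the thesis):

        import time
        from stem.control import Controller

        # Connect to the relay's control port (cookie or password
        # authentication is assumed to be configured in torrc).
        with Controller.from_port(port=9051) as controller:
            controller.authenticate()
            while True:
                # Cumulative bytes the relay has read and written since start.
                read = int(controller.get_info("traffic/read"))
                written = int(controller.get_info("traffic/written"))
                print(f"read={read} B, written={written} B")
                time.sleep(60)  # sample once a minute

    Determining where the traffic originates would additionally require per-country statistics such as GeoIP-based summaries, which this sketch does not cover.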

    A Survey on Routing in Anonymous Communication Protocols

    The Internet has undergone dramatic changes in the past 15 years, and now forms a global communication platform that billions of users rely on for their daily activities. While this transformation has brought tremendous benefits to society, it has also created new threats to online privacy, ranging from the profiling of users for the monetization of personal information to nearly omnipotent governmental surveillance. As a result, public interest in systems for anonymous communication has drastically increased. Several such systems have been proposed in the literature, each of which offers anonymity guarantees in different scenarios and under different assumptions, reflecting the plurality of approaches for how messages can be anonymously routed to their destination. Understanding this space of competing approaches, with their different guarantees and assumptions, is vital for users to grasp the consequences of different design options. In this work, we survey previous research on designing, developing, and deploying systems for anonymous communication. To this end, we provide a taxonomy for clustering all prevalently considered approaches (including Mixnets, DC-nets, onion routing, and DHT-based protocols) with respect to their unique routing characteristics, deployability, and performance. This, in particular, encompasses the topological structure of the underlying network; the routing information that has to be made available to the initiator of the conversation; the underlying communication model; and performance-related indicators such as latency and communication layer. Our taxonomy and comparative assessment provide important insights into the differences between the existing classes of anonymous communication protocols, and they also help to clarify the relationship between the routing characteristics of these protocols and their performance and scalability.
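
    As a rough illustration of how such a taxonomy can be made concrete, the sketch below models a few of the surveyed dimensions as a record type. The field names and the two example entries are our own simplification, based on widely known properties of Tor and classic mixnets, rather than the survey's actual schema:

        from dataclasses import dataclass

        @dataclass
        class AnonymityProtocol:
            name: str
            protocol_class: str   # e.g. Mixnet, DC-net, onion routing, DHT-based
            topology: str         # topological structure of the underlying network
            routing_info: str     # what the initiator must know in advance
            latency: str          # performance-related indicator: "low" or "high"

        # Illustrative entries only, based on widely known properties.
        protocols = [
            AnonymityProtocol("Tor", "onion routing", "free-route circuits",
                              "relay descriptors from directory servers", "low"),
            AnonymityProtocol("classic mixnet", "Mixnet", "mix cascade",
                              "public keys of the mixes on the path", "high"),
        ]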

    Traffic Analysis Resistant Infrastructure

    Network traffic analysis uses metadata to infer information from traffic flows. A network traffic flow is the tuple of source IP, source port, destination IP, and destination port. Additional information is derived from packet length, flow size, interpacket delay, JA3 signature, and IP header options. Even connections using TLS leak the server name and cipher suite to observers. This metadata can profile groups of users or individual behaviors, and statistical properties yield even more information: a hidden Markov model can track the state of a protocol where each state transition results in an observation. Format Transforming Encryption (FTE) encodes data as the payload of another protocol, called the host protocol. Observation-based FTE is a particular case of FTE that uses real observations from the host protocol for the transformation. By communicating using a shared dictionary according to the predefined protocol, the covert traffic becomes difficult to detect as anomalous. Combining observation-based FTEs with hidden Markov models (HMMs) emulates every aspect of a host protocol. Ideal host protocols would cause significant collateral damage if blocked (protected) and do not contain dynamic handshakes or states (static). We use protected, static protocols with the Protocol Proxy, a proxy that defines the syntax of a protocol using an observation-based FTE and transforms data into payloads with actual field values. The Protocol Proxy shapes the outgoing packets' interpacket delay to match the host protocol using an HMM, which ensures the outgoing traffic is statistically equivalent to the host protocol. The Protocol Proxy is thus a covert channel, a method of communication with a low probability of detection (LPD); such covert channels trade throughput for LPD. The multipath TCP (mpTCP) Linux kernel module splits a TCP stream across multiple interfaces. Two potential architectures involve splitting a covert channel across several interfaces (multipath) or splitting a single TCP stream across multiple covert channels (multisession). Splitting a covert channel across multiple interfaces leads to higher throughput, but the result is classified as mpTCP traffic. Splitting a TCP flow across multiple covert channels is not as performant, but it provides added obfuscation and resiliency: each covert channel is independent of the others, and a channel failure is recoverable. The multipath and multisession frameworks independently address the issues associated with covert channels, and each tool addresses a different challenge. The Protocol Proxy provides anonymity in a setting where detection could have critical consequences. The mpTCP kernel module offers an architecture that increases throughput despite the channel's low-bandwidth restrictions. Fusing these architectures improves the goodput of the Protocol Proxy without sacrificing the low probability of detection.
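
    The timing side of this approach can be illustrated with a toy sketch: a two-state hidden Markov model whose states emit interpacket delays is sampled to pace outgoing packets like the host protocol. The states, transition probabilities, and mean delays below are invented placeholders, not parameters from this work:

        import random

        # Toy HMM for pacing packets: each state has a transition row and a
        # mean interpacket delay in seconds (all values are illustrative).
        TRANSITIONS = {"idle": [("idle", 0.7), ("burst", 0.3)],
                       "burst": [("idle", 0.4), ("burst", 0.6)]}
        MEAN_DELAY = {"idle": 1.0, "burst": 0.05}

        def next_state(state):
            # Sample the next hidden state from the transition row.
            r, acc = random.random(), 0.0
            for s, p in TRANSITIONS[state]:
                acc += p
                if r < acc:
                    return s
            return state

        def sample_delays(n, state="idle"):
            # Emit n delays whose statistics follow the hidden state sequence.
            for _ in range(n):
                state = next_state(state)
                yield random.expovariate(1.0 / MEAN_DELAY[state])

        print(list(sample_delays(5)))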

    Code Reuse in Open Source Software

    Code reuse is a form of knowledge reuse in software development that is fundamental to innovation in many fields. However, to date there has been no systematic investigation of code reuse in open source software projects. This study uses quantitative and qualitative data gathered from a sample of six open source software projects to explore two sets of research questions derived from the literature on software reuse in firms and on open source software development. We find that code reuse is extensive across the sample and that open source software developers, much like developers in firms, apply tools that lower their search costs for knowledge and code, assess the quality of software components, and have incentives to reuse code. Open source software developers reuse code because they want to integrate functionality quickly, because they want to write their preferred code, because they operate under limited resources in terms of time and skills, and because they can mitigate development costs through code reuse.

    Private and censorship-resistant communication over public networks

    Society’s increasing reliance on digital communication networks is creating unprecedented opportunities for wholesale surveillance and censorship. This thesis investigates the use of public networks such as the Internet to build robust, private communication systems that can resist monitoring and attacks by powerful adversaries such as national governments. We sketch the design of a censorship-resistant communication system based on peer-to-peer Internet overlays in which the participants only communicate directly with people they know and trust. This ‘friend-to-friend’ approach protects the participants’ privacy, but it also presents two significant challenges. The first is that, as with any peer-to-peer overlay, the users of the system must collectively provide the resources necessary for its operation; some users might prefer to use the system without contributing resources equal to those they consume, and if many users do so, the system may not be able to survive. To address this challenge we present a new game-theoretic model of the problem of encouraging cooperation between selfish actors under conditions of scarcity, and develop a strategy for the game that provides rational incentives for cooperation under a wide range of conditions. The second challenge is that the structure of a friend-to-friend overlay may reveal the users’ social relationships to an adversary monitoring the underlying network. To conceal their sensitive relationships from the adversary, the users must be able to communicate indirectly across the overlay in a way that resists monitoring and attacks by other participants. We address this second challenge by developing two new routing protocols that robustly deliver messages across networks with unknown topologies, without revealing the identities of the communication endpoints to intermediate nodes or vice versa. The protocols make use of a novel unforgeable acknowledgement mechanism that proves that a message has been delivered without identifying the source or destination of the message or the path by which it was delivered. One of the routing protocols is shown to be robust to attacks by malicious participants, while the other provides rational incentives for selfish participants to cooperate in forwarding messages.
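
    One plausible construction for such an unforgeable acknowledgement is sketched below, under the assumption that the sender and the intended recipient share a symmetric key; this captures the general idea of a hash-based delivery proof rather than the exact protocol developed in the thesis:

        import hashlib, hmac, os

        def make_message(shared_key: bytes, payload: bytes):
            # The sender attaches H(ack), where ack is a MAC that only the
            # recipient (who holds the shared key) can compute.
            ack = hmac.new(shared_key, payload, hashlib.sha256).digest()
            return payload, hashlib.sha256(ack).digest()

        def acknowledge(shared_key: bytes, payload: bytes) -> bytes:
            # The recipient returns the MAC as proof of delivery.
            return hmac.new(shared_key, payload, hashlib.sha256).digest()

        def verify(ack_hash: bytes, ack: bytes) -> bool:
            # Any forwarding node can check that the acknowledgement matches
            # the hash it saw earlier, without learning the source, the
            # destination, or the path taken.
            return hashlib.sha256(ack).digest() == ack_hash

        key = os.urandom(32)
        payload, ack_hash = make_message(key, b"hello")
        assert verify(ack_hash, acknowledge(key, payload))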

    Contributions to security and privacy protection in recommendation systems

    A recommender system is an automatic system that, given a customer model and a set of available documents, is able to select and offer those documents that are most interesting to the customer. The desirable characteristics of such a system are that it be fast, distributed, and secure: a fast recommender improves the customer's shopping experience, since a recommendation is of no use if it arrives too late; a distributed recommender avoids centralized databases of sensitive information and improves the availability of documents; and a secure recommender protects all participants in the system: users, content providers, recommenders, and intermediate nodes. From the point of view of security, there are two main issues that recommender systems must face: protection of the users' privacy and protection of the other participants in the recommendation process. Recommenders issue personalized recommendations taking into account not only the profiles of the documents, but also the private information that customers send to the recommender. Hence, user profiles include personal and highly sensitive information, such as the users' likes and dislikes. In order to have a really useful recommender system and to improve its efficiency, we believe that users should not be afraid of stating their preferences; the personal information contained in their profiles must therefore be protected and their privacy guaranteed. The second security challenge involves protection against a new kind of attack. Since technical measures have not been effective in preventing the illegal distribution of copyrighted documents, copyright holders have shifted their targets to attack the document providers and any other participant that aids in the process of distributing documents, even unknowingly. In addition, legislative trends such as ACTA, the SOPA bill in the US, or the "Sinde-Wert" law in Spain show the interest of states all over the world in controlling and prosecuting these intermediate nodes, and recent court cases such as MegaUpload, The Pirate Bay, or the case against Mr. Pablo Soto in Spain show that these threats are real. We propose the following contributions:
    1. A social model that captures users' interests in user profiles, and a metric function that calculates the similarity between users, queries, and documents. This model represents profiles as vectors in a social space. Document profiles are created by inspecting the contents of the document; user profiles are then calculated as an aggregation of the profiles of the documents that the user owns; finally, queries are a constrained view of a user profile. In this way, all profiles are contained in the same social space, and the similarity metric can be applied to any pair of them.
    2. Two mechanisms to protect the personal information that user profiles contain. The first takes advantage of the Johnson-Lindenstrauss lemma and the undecomposability of random matrices to project profiles into social spaces of fewer dimensions. Even though the information about the user is reduced in the projected social space, under certain circumstances the distances between the original profiles are preserved. The second uses a zero-knowledge protocol to answer the question of whether or not two profiles are affine, without leaking any information in case they are not.
    3. A distributed system on a cloud, which we call DocCloud, that protects merchants, customers, and indexers against legal attacks by providing plausible deniability and oblivious routing to all participants of the system. DocCloud organizes databases in a tree-shaped structure over a cloud system and provides a private information retrieval protocol so that no participant or observer of the process can identify the recommender. In this way, customers, intermediate nodes, and even the databases themselves are not aware of which specific database answered the query.
    4. A social P2P network where users link together according to their similarity and provide recommendations to other users in their neighborhood. We define an epidemic protocol where links are established based on neighbor similarity, clustering, and randomness. Additionally, we propose mechanisms such as the use of SoftDHT to aid in the identification of affine users and to speed up the creation of clusters of similar users.
    5. A document distribution system that provides the recommended documents at the end of the process. In our view of a recommender system, the recommendation is a complete process that ends when the customer receives the recommended document. We propose SCFS, a distributed and secure filesystem in which merchants, documents, and users are protected.
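
    The first profile-protection mechanism lends itself to a small numerical sketch in the spirit of the Johnson-Lindenstrauss lemma; the dimensions, seed, and number of profiles below are arbitrary illustrative choices, not parameters from the thesis:

        import numpy as np

        rng = np.random.default_rng(0)
        d, k, n = 1000, 100, 5          # original dim, reduced dim, profiles

        # Gaussian random projection; scaling by 1/sqrt(k) approximately
        # preserves pairwise Euclidean distances.
        R = rng.normal(size=(k, d)) / np.sqrt(k)

        profiles = rng.normal(size=(n, d))   # user profiles in the social space
        projected = profiles @ R.T           # what the recommender actually sees

        # Distances in the reduced space stay close to the originals.
        orig = np.linalg.norm(profiles[0] - profiles[1])
        proj = np.linalg.norm(projected[0] - projected[1])
        print(f"original distance {orig:.2f}, projected distance {proj:.2f}")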

    Understanding the Usage of Anonymous Onion Services: Empirical Experiments to Study Criminal Activities in the Tor Network

    Technology is the new host of life, and with each passing year, developments in digitalization make it easier to destroy our understanding of authenticity. A man is more than his distorted shadow on a Facebook wall; another essential shadow dwells under anonymity. The aim of this thesis is to understand the usage of onion services in the Tor anonymity network: more precisely, to discover and measure human activities on Tor and on anonymous onion websites. We establish novel facts about the anonymous online environment, solve technical problems such as web crawling and scraping to gather data, and present new findings on how onion services hide illegal activities. The results are merged with the wider range of anonymous onion service usage. We chose to cast light on the criminal dark side of the Tor network, mainly black marketplaces and hacking. This is a somewhat artificial selection from the wide range of Tor use; however, an archetypal villain is found in nearly every story, so naturally, for the sake of being interesting, we selected criminal phenomena to study. To be clear, the Tor network is developed and utilised for legitimate online privacy and in several other essential ways. The first finding is that as the Tor network becomes more popular, illegal activities also become widespread. Tor and virtual currencies are already transforming the drug trade, and anonymous high-class marketplaces are difficult for law enforcement to disrupt. On the other hand, illegal activities are now paradoxically more public than ever: everyone can access these onion sites and browse the product listings, so the illegal trade is transparent and can be followed. For example, by means of web crawling and scraping, we produced a nearly real-time picture of the trade in Finland by following one of the marketplaces on Tor. As a result, the statistics shed light on substance consumption habits: the second study estimates that sales between Finnish buyers and sellers totalled over two million euros. Due to the network's anonymity and the nature of illegal sales, reputation systems have replaced the rule of law: a buyer trusts the seller's reputation because the law does not guarantee delivery. The only available information is the seller's reputation and capacity, both of which we show to be associated with drug sales. Finally, we identify the limits of online anonymity, ranging from technical limitations to operation security dangers. Technology is merely a communication channel, and major criminal activities still happen in the physical world. For instance, a drug trade requires the seller to send the products through the postal service to the buyer's address; before that, the seller has acquired large amounts of illegal drugs, and the buyer has to give away his address to a seller who could later be placed under arrest with a list of customers' addresses. Furthermore, we show, case by case, how criminals reveal and leak critical identity information. Law enforcement agencies are experienced in investigating all of these aspects, even though the Tor network itself is secure.
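
    Crawling onion services in this fashion typically means routing HTTP requests through a local Tor client's SOCKS proxy. A minimal sketch, assuming Tor listening on port 9050, the requests library installed with SOCKS support (requests[socks]), and a hypothetical placeholder onion address standing in for a real site:

        import requests

        # Tor's SOCKS proxy; the "socks5h" scheme makes Tor itself resolve
        # .onion names instead of the local DNS resolver.
        PROXIES = {"http": "socks5h://127.0.0.1:9050",
                   "https": "socks5h://127.0.0.1:9050"}

        # Hypothetical placeholder address, not a real marketplace.
        url = "http://exampleonionaddressxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.onion/"

        resp = requests.get(url, proxies=PROXIES, timeout=60)
        print(resp.status_code, len(resp.text))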