322 research outputs found

    An Empirical Study of the I2P Anonymity Network and its Censorship Resistance

    Full text link
    Tor and I2P are well-known anonymity networks used by many individuals to protect their online privacy and anonymity. Tor's centralized directory services facilitate understanding of the Tor network, as well as the measurement and visualization of its structure through the Tor Metrics project. In contrast, I2P does not rely on centralized directory servers, so obtaining a complete view of the network is challenging. In this work, we conduct an empirical study of the I2P network, measuring properties including population, churn rate, router type, and the geographic distribution of I2P peers. We find that there are currently around 32K active I2P peers in the network on a daily basis. Of these peers, 14K are located behind NATs or firewalls. Using the collected network data, we examine the blocking resistance of I2P against a censor that wants to prevent access to I2P using address-based blocking techniques. Despite the decentralized character of I2P, we discover that a censor can block more than 95% of the peer IP addresses known by a stable I2P client by operating only 10 routers in the network. This amounts to severe network impairment: a blocking rate of more than 70% is enough to cause significant latency in web browsing, while blocking more than 90% of peer IP addresses can make the network unusable. Finally, we discuss the security consequences of the network being blocked, and directions for potential approaches to make I2P more resistant to blocking. Comment: 14 pages, to appear in the 2018 Internet Measurement Conference (IMC'18).
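    The address-harvesting attack the abstract describes can be illustrated with a toy simulation. This is purely illustrative: the peer count, the 60% per-router visibility, and the coverage numbers are invented assumptions, not the paper's measurements.

```python
import random

def blocking_coverage(peer_ips, router_views, k):
    """Fraction of a client's known peers whose IPs a censor could
    block after harvesting addresses from k of its own routers."""
    blocklist = set()
    for view in router_views[:k]:
        blocklist.update(view)
    return len(blocklist & peer_ips) / len(peer_ips)

# Toy setup: 1000 peers; each censor-run router observes a random
# 60% sample of them (an invented visibility rate).
random.seed(7)
peers = {f"10.0.{i // 256}.{i % 256}" for i in range(1000)}
views = [set(random.sample(sorted(peers), 600)) for _ in range(10)]
print(blocking_coverage(peers, views, 1))   # one router: exactly its own 60% view
print(blocking_coverage(peers, views, 10))  # ten routers: near-complete coverage
```

    The point of the sketch is that union coverage grows quickly with the number of observing routers, which is why a handful of censor-run routers suffices.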

    An investigation into darknets and the content available via anonymous peer-to-peer file sharing

    Get PDF
    Media sites, both technical and non-technical, make references to Darknets as havens for clandestine file sharing. They are often given an aura of mystique, where content of any type is just a mouse click away. However, can Darknets really be easily accessed, and do they provide access to material that would otherwise be difficult to obtain? This paper investigates which Darknets are easily discovered, the technical designs and methods used to hide content on the networks, the tools needed to join them, and ultimately what types and quantities of files can be found on anonymous peer-to-peer file sharing networks. This information was gathered by conducting weekly searches for specific file extensions on each Darknet over a 4-week period. It was found that connectivity to Darknets was easy to establish, and installing peer-to-peer file sharing applications was a simple process. The quantity of content found on Darknet peer-to-peer file sharing networks indicates that file sharing is rampant. Of particular concern was what appears to be a large quantity of child pornography made available.

    Machine learning approach for detection of nonTor traffic

    Get PDF
    Intrusion detection has attracted considerable interest from researchers and industry. After many years of research, the community still faces the problem of building reliable and efficient intrusion detection systems (IDS) capable of handling large quantities of data with changing patterns in real-time situations. The Tor network is popular for providing privacy and security to end users by anonymizing the identity of Internet users connecting through a series of tunnels and nodes. This work addresses two problems: classification of Tor and nonTor traffic, to expose activities within Tor traffic that undermine the protection of its users, using the UNB-CIC Tor Network Traffic dataset; and classification of Tor traffic flows in the network. This paper proposes a hybrid classifier: an Artificial Neural Network in conjunction with a correlation-based feature selection algorithm for dimensionality reduction and improved classification performance. The reliability and efficiency of the proposed hybrid classifier are compared with Support Vector Machine and naïve Bayes classifiers in detecting nonTor traffic in the UNB-CIC Tor Network Traffic dataset. Experimental results show that the hybrid classifier, ANN-CFS, is a better classifier for detecting nonTor traffic and classifying Tor traffic flows in the UNB-CIC Tor Network Traffic dataset.

    Identification of video applications in protected channels using machine learning

    Get PDF
    As encrypted traffic becomes the standard and traffic obfuscation techniques become more accessible and common, companies are struggling to enforce their network usage policies and ensure optimal operational network performance. Users are more technologically knowledgeable and able to circumvent web content filtering tools by using protected tunnels such as VPNs. Consequently, techniques such as DPI, already considered outdated due to their impracticality, become even more ineffective. Furthermore, the continuing regulations established by governments and international unions regarding citizens' privacy rights make network monitoring increasingly challenging. This work presents a scalable and easily deployable network-based framework for application identification in a corporate environment, focusing on video applications. The framework is intended to be effective regardless of the environment and network setup, with the objective of being a useful tool in the network monitoring process. The proposed framework offers a compromise between allowing network supervision and assuring workers' privacy. The evaluation indicates that web services running over a protected channel can be identified with an accuracy of 95%, using low-level packet information that does not jeopardize sensitive worker data. (Master's dissertation in Computer and Telematics Engineering.)
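    One example of the kind of low-level, content-free packet information such a framework might rely on is a coarse packet-length histogram per flow. This is an illustrative sketch only; the bin edges and the idea that this is the framework's feature set are assumptions, not taken from the dissertation.

```python
def length_histogram(pkt_lens, bins=(0, 100, 600, 1200, 1500)):
    """Bucket packet lengths into a coarse, normalized histogram.
    Uses only per-packet sizes, never payload bytes."""
    counts = [0] * len(bins)
    for length in pkt_lens:
        # index of the largest bin edge that is <= length
        i = max(j for j, edge in enumerate(bins) if length >= edge)
        counts[i] += 1
    total = len(pkt_lens)
    return [c / total for c in counts]

# Toy flow: a mix of small ACK-sized and large video-sized packets.
print(length_histogram([40, 60, 1500, 1350, 90, 700]))
```

    A classifier fed only such histograms sees traffic shape, not content, which is the privacy compromise the abstract describes.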

    Time Series Analysis for Encrypted Traffic Classification: A Deep Learning Approach

    Full text link
    © 2018 IEEE. We develop a novel time series feature extraction technique to address the encrypted traffic/application classification problem. The proposed method consists of two main steps. First, we propose a feature engineering technique to extract significant attributes of encrypted network traffic behavior by analyzing the time series of received packets. In the second step, we develop a deep learning-based technique to exploit the correlation of time series data samples of the encrypted network applications. To evaluate the efficiency of the proposed solution on the encrypted traffic classification problem, we carry out intensive experiments on a raw network traffic dataset, namely VPN-nonVPN, with three conventional classification metrics: Precision, Recall, and F1 score. The experimental results demonstrate that our proposed approach can significantly improve the performance of identifying encrypted application traffic in terms of accuracy and computational efficiency.
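    The first step, extracting attributes from the time series of received packets, might look like the following sketch. The specific statistics chosen here (inter-arrival mean, variance, min, max) are a common generic choice and an assumption, not the paper's exact feature set.

```python
def interarrival_features(timestamps):
    """Summary statistics of packet inter-arrival times, the kind of
    time-series attribute a feature-engineering stage might extract."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    n = len(gaps)
    mean = sum(gaps) / n
    var = sum((g - mean) ** 2 for g in gaps) / n
    return {"mean": mean, "var": var, "min": min(gaps), "max": max(gaps)}

# Toy arrival times (seconds) for one flow: a burst, a pause, a burst.
print(interarrival_features([0.00, 0.05, 0.11, 0.50, 0.52]))
```

    Fixed-length vectors like this one are what a downstream deep model would consume in place of raw, variable-length packet traces.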

    A Review on Features’ Robustness in High Diversity Mobile Traffic Classifications

    Get PDF
    Mobile traffic is becoming more dominant due to the growing usage of mobile devices and the proliferation of IoT. The influx of mobile traffic introduces new challenges in traffic classification, namely diversity complexity and behavioral dynamism complexity. Existing traffic classification methods are designed for classifying standard protocols and user applications with more deterministic behaviors and small diversity. Currently, flow statistics, payload signatures, and heuristic traffic attributes are some of the most effective features used to discriminate traffic classes. In this paper, we investigate the correlation of these features with less-deterministic user application traffic classes based on the corresponding classification accuracy. We then evaluate the impact of large-scale classification on feature robustness based on signs of diminishing accuracy. Our experimental results consolidate the need for unsupervised feature learning to address the dynamism of mobile application behavioral traits for accurate classification of rapidly growing mobile traffic.
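    Payload-signature matching, one of the feature families named above, amounts to a byte-prefix lookup on the first packet of a flow. The sketch below is a simplified illustration; the two TLS prefixes are the real record-layer bytes (0x16 = handshake, 0x17 = application data), but the tiny signature table is otherwise an invented stand-in for a production ruleset.

```python
# Byte-prefix signatures: first payload bytes -> protocol label.
SIGNATURES = {
    b"\x16\x03": "TLS-handshake",  # TLS record type 0x16, version 3.x
    b"\x17\x03": "TLS-appdata",    # TLS record type 0x17, version 3.x
    b"GET ":     "HTTP",           # plaintext HTTP request line
}

def classify_payload(payload):
    """Return the label of the first matching signature, else 'unknown'."""
    for sig, label in SIGNATURES.items():
        if payload.startswith(sig):
            return label
    return "unknown"

print(classify_payload(b"\x16\x03\x01\x02\x00..."))  # → TLS-handshake
```

    The abstract's point is that exactly this kind of deterministic rule degrades as application behavior diversifies, motivating learned features instead.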

    Adaptive Traffic Fingerprinting for Darknet Threat Intelligence

    Full text link
    Darknet technology such as Tor has been used by various threat actors for organising illegal activities and data exfiltration. As such, there is a case for organisations to block such traffic, or to try to identify when it is used and for what purposes. However, anonymity in cyberspace has always been a domain of conflicting interests. While it gives enough power to nefarious actors to masquerade their illegal activities, it is also the cornerstone of freedom of speech and privacy. We present a proof of concept for a novel algorithm that could form the fundamental pillar of a darknet-capable Cyber Threat Intelligence platform. The solution can reduce the anonymity of Tor users, and considers the existing visibility of network traffic before optionally initiating targeted or widespread BGP interception. In combination with server HTTP response manipulation, the algorithm attempts to reduce the candidate data set by eliminating client-side traffic that is most unlikely to be responsible for server-side connections of interest. Our test results show that MITM-manipulated server responses lead to the expected changes received by the Tor client. Using simulation data generated by Shadow, we show that the detection scheme is effective, with a false positive rate of 0.001, while sensitivity in detecting non-targets was 0.016 ± 0.127. Our algorithm could assist collaborating organisations willing to share their threat intelligence or cooperate during investigations. Comment: 26 pages.
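    The reported false positive rate and sensitivity are standard confusion-matrix rates. For reference, they can be computed as below; the counts are hypothetical, chosen only so that the false positive rate matches the 0.001 figure above.

```python
def rates(tp, fp, tn, fn):
    """False positive rate and sensitivity (true positive rate)
    from confusion-matrix counts."""
    fpr = fp / (fp + tn)  # fraction of negatives wrongly flagged
    tpr = tp / (tp + fn)  # fraction of positives correctly flagged
    return fpr, tpr

# Hypothetical counts: 1 false alarm among 1000 negatives.
fpr, tpr = rates(tp=80, fp=1, tn=999, fn=20)
print(fpr, tpr)  # → 0.001 0.8
```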

    Deep pockets, packets, and harbours

    Get PDF
    Deep Packet Inspection (DPI) is a set of methodologies used for the analysis of data flows over the Internet. It is the intention of this paper to describe the technical details of this issue and to show that by using DPI technologies it is possible to understand the content of Transmission Control Protocol/Internet Protocol communications. These communications can carry publicly available content, private user information, and legitimate copyrighted works, as well as infringing copyrighted works. Legislation in many jurisdictions regarding Internet service providers’ liability, or more generally the liability of communication intermediaries, usually contains “safe harbour” provisions. The World Intellectual Property Organization Copyright Treaty of 1996 has a short but significant provision excluding liability for suppliers of physical facilities. The provision is aimed at communication to the public and the facilitation of physical means. Its extensive interpretation to cases of contributory or vicarious liability, in the absence of specific national implementation, can prove problematic. Two of the most relevant legislative interventions in the field, the Digital Millennium Copyright Act and the European Directive on Electronic Commerce, regulate extensively the field of intermediary liability. This paper looks at the relationship between existing packet inspection technologies, especially the ‘deep’ version, and the international and national legal and regulatory interventions connected with intellectual property protection and the correlated liability exemptions. In analyzing the two main statutes referred to, we take a comparative look at similar interventions in Australia and Canada that offer some interesting elements of reflection.