
    Command & Control: Understanding, Denying and Detecting - A review of malware C2 techniques, detection and defences

    In this survey, we first briefly review the current state of cyber attacks, highlighting significant recent changes in how and why such attacks are performed. We then investigate the mechanics of malware command and control (C2) establishment: we provide a comprehensive review of the techniques used by attackers to set up such a channel and to hide its presence from the attacked parties and the security tools they use. We then switch to the defensive side of the problem, and review approaches that have been proposed for the detection and disruption of C2 channels. We also map such techniques to widely-adopted security controls, emphasizing gaps or limitations (and success stories) in current best practices. Comment: Work commissioned by CPNI, available at c2report.org. 38 pages. Listing abstract compressed from version appearing in report.
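
    The detection approaches such surveys review often include traffic-based heuristics. As a purely illustrative sketch (not taken from the report), the snippet below flags hosts whose connections to a single destination recur at suspiciously regular intervals, a common sign of C2 beaconing; the thresholds and input layout are assumptions.

```python
# Hypothetical sketch: flag (source, destination) pairs whose connection
# start times are spaced at near-constant intervals (beaconing).
# Thresholds are illustrative only.
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, min_events=10, max_cv=0.1):
    """Return True if inter-arrival times are unusually regular.

    timestamps: sorted connection start times (seconds) for one
    (source host, destination) pair.
    """
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    if avg == 0:
        return False
    # Coefficient of variation: low values mean near-constant spacing.
    return pstdev(gaps) / avg < max_cv

# Example: a host checking in every ~300 s with small jitter.
jitter = [0, 2, -1, 3, 1, 0, -2, 2, 1, 0, 1, -1]
times = [i * 300 + j for i, j in enumerate(jitter)]
print(looks_like_beaconing(times))  # True
```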

    Advanced Network Inference Techniques Based on Network Protocol Stack Information Leaks

    Side channels are channels of implicit information flow that can be used to find out information that is not allowed to flow through explicit channels. This thesis focuses on network side channels, where information flow occurs in the TCP/IP network stack implementations of operating systems. I will describe three new types of idle scans: a SYN backlog idle scan, a RST rate-limit idle scan, and a hybrid idle scan. Idle scans are special types of side channels that are designed to help someone performing a network measurement (typically an attacker or a researcher) infer something about the network that they are not otherwise able to see from their vantage point. The thesis that this dissertation tests is this: because modern network stacks have shared resources, there is a wealth of information that can be inferred off-path by both attackers and Internet measurement researchers. With respect to attackers, no matter how carefully the security model is designed, the non-interference property is unlikely to hold, i.e., an attacker can easily find side channels of information flow to learn about the network from the perspective of the system remotely. One suggestion is that trust relationships for using resources be made explicit all the way down to the IP layer, with the goal of dividing resources and removing sharedness to prevent advanced network reconnaissance. With respect to Internet measurement researchers, in this dissertation I show that the information flow is rich enough to test connectivity between two arbitrary hosts on the Internet and even infer in which direction any blocking is occurring. To explore this thesis, I present three research efforts. First, I modeled a typical TCP/IP network stack. This modeling effort led to the discovery of two new idle scans: a SYN backlog idle scan and a RST rate-limited idle scan. The SYN backlog scan is particularly interesting because it does not require whoever is performing the measurements (i.e., the attacker or researcher) to send any packets to the victim (or target) at all. Second, I developed a hybrid idle scan that combines elements of the SYN backlog idle scan with Antirez's original IPID-based idle scan. This scan enables researchers to test whether two arbitrary machines in the world are able to communicate via TCP/IP and, if not, in which direction the communication is being prevented. To test the efficacy of the hybrid idle scan, I tested three different kinds of servers (Tor bridges, Tor directory servers, and normal web servers) both inside and outside China. The results were congruent with published understandings of global Internet censorship, demonstrating that the hybrid idle scan is effective. Third, I applied the hybrid idle scan to the difficult problem of characterizing inconsistencies in the Great Firewall of China (GFW), which is the largest firewall in the world. This effort resolved many open questions about the GFW. The result of my dissertation work is an effective method for measuring Internet censorship around the world, without requiring any kind of distributed measurement platform or access to any of the machines that connectivity is tested to or from.
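
    The hybrid scan described above builds on Antirez's IPID-based idle scan. The following is a minimal, illustrative sketch of that classic scan using the Scapy packet library; the zombie, target, and port values are placeholders, the scan requires raw-socket privileges, and real measurements need far more care with noise and retransmissions.

```python
# Minimal sketch of the classic IPID-based ("Antirez") idle scan.
# Requires root for raw sockets and a "zombie" host whose IP ID counter
# increments globally; host addresses below are placeholders.
from scapy.all import IP, TCP, sr1, send

def probe_ipid(zombie):
    """Elicit a response from the zombie and return its current IP ID."""
    resp = sr1(IP(dst=zombie) / TCP(dport=80, flags="SA"), timeout=2, verbose=0)
    return resp[IP].id if resp else None

def idle_scan_port(zombie, target, port):
    before = probe_ipid(zombie)
    # Spoofed SYN: if the target port is open, the target SYN/ACKs the zombie,
    # which replies with a RST and consumes one extra IP ID increment.
    send(IP(src=zombie, dst=target) / TCP(dport=port, flags="S"), verbose=0)
    after = probe_ipid(zombie)
    if before is None or after is None:
        return "unknown"
    delta = (after - before) % 65536
    return "open" if delta >= 2 else "closed|filtered"

# Placeholder addresses from documentation ranges:
# print(idle_scan_port("203.0.113.5", "198.51.100.7", 22))
```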

    ICLab: A Global, Longitudinal Internet Censorship Measurement Platform

    Researchers have studied Internet censorship for nearly as long as attempts to censor content have taken place. Most studies have, however, been limited to a short period of time and/or a few countries; the few exceptions have traded off detail for breadth of coverage. Collecting enough data for a comprehensive, global, longitudinal perspective remains challenging. In this work, we present ICLab, an Internet measurement platform specialized for censorship research. It achieves a new balance between breadth of coverage and detail of measurements by using commercial VPNs as vantage points distributed around the world. ICLab has been operated continuously since late 2016. It can currently detect DNS manipulation and TCP packet injection, and overt "block pages" however they are delivered. ICLab records and archives raw observations in detail, making retrospective analysis with new techniques possible. At every stage of processing, ICLab seeks to minimize false positives and manual validation. Within 53,906,532 measurements of individual web pages, collected by ICLab in 2017 and 2018, we observe blocking of 3,602 unique URLs in 60 countries. Using this data, we compare how different blocking techniques are deployed in different regions and/or against different types of content. Our longitudinal monitoring pinpoints changes in censorship in India and Turkey concurrent with political shifts, and our clustering techniques discover 48 previously unknown block pages. ICLab's broad and detailed measurements also expose other forms of network interference, such as surveillance and malware injection. Comment: To appear in Proceedings of the 41st IEEE Symposium on Security and Privacy (Oakland 2020). San Francisco, CA. May 2020.
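
    As a rough illustration of one signal such a platform can use (not ICLab's actual pipeline), the sketch below compares A records observed through the vantage point's default resolver against a trusted control resolver and flags disjoint answer sets; it assumes the dnspython package, and CDN geo-targeting makes this a noisy indicator on its own.

```python
# Illustrative DNS-manipulation check, not ICLab's implementation.
# Assumes the dnspython package (dns.resolver).
import dns.resolver

def a_records(domain, nameserver=None):
    """Return the set of A-record addresses for a domain."""
    res = dns.resolver.Resolver()
    if nameserver:
        res.nameservers = [nameserver]
    try:
        return {r.address for r in res.resolve(domain, "A")}
    except Exception:
        return set()

def dns_manipulation_suspected(domain, control_ns="8.8.8.8"):
    local = a_records(domain)                 # answers at the vantage point
    control = a_records(domain, control_ns)   # answers from a control resolver
    # Disjoint answer sets are a (noisy) indicator; CDNs cause false positives,
    # so real pipelines add AS-level and TLS-based corroboration.
    return bool(local) and bool(control) and local.isdisjoint(control)

# print(dns_manipulation_suspected("example.com"))
```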

    The Internet vs. the Nation-State: Prevention and Prosecution Challenges on the Internet in the Republic of Türkiye

    Social, economic, and technological developments are widely accepted as powerful forces that affect the role, power, and functions of nation-states. Being one of the most influential technological developments of recent decades, the internet has come into prominence in this regard. With the use of the Internet, the monopoly of media and information controlled by official ideologies, capitalist barons, or elites is seriously challenged. Consequently, the power balance between individuals and authorities in mass media and communication has been transformed in a significant way. Though their reliability may sometimes be questionable, the number and type of information resources have increased dramatically, and accessing information has become substantially easier. People are more interconnected today than ever before. They can easily find, join, or construct their personal, social, or political networks. With a number of internet applications and social media, collective reactions, social movements, and activities are more organized and effective today than ever before. That is why we have seen so much social fluctuation, unrest, protest, and political activism all over the world in the last few years. Moreover, new terms and phenomena like cyber-crime, cyber warfare, and cyber-attacks have urged nation-states to be more careful about the internet and to increase their efforts to control it. This level of social chaos in different states and increasing cyber-crime lead us to question the effectiveness of nation-states' controlling measures. Focusing on one state, the Republic of Türkiye, this study analyzes two important dimensions of state control efforts: prevention and prosecution. On the prevention side, I explore the effectiveness of internet access blocking. On the prosecution side, I analyze the effectiveness of prosecution in internet child pornography cases. The results of testing the effectiveness of Internet website blocking reveal significant gaps, complications, and dilemmas in these policies. A similar situation is also seen in the investigations of internet child pornography. Analysis of the operational investigation files reveals that in most of the files, suspects could not be identified, traced, or brought before judicial authorities. As seen in these two fields, state policing of the Internet in a country is not absolute, and the Internet can be a vulnerable space in which local or foreign actors or agents such as criminals, opposition groups, or terrorists can create problems for nation-states.

    TorKameleon: Improving Tor's Censorship Resistance with K-Anonymization Media Morphing Covert Input Channels

    Anonymity networks such as Tor and other related tools are powerful means of increasing the anonymity and privacy of Internet users' communications. Tor is currently the most widely used solution by whistleblowers to disclose confidential information and denounce censorship measures, including violations of civil rights, freedom of expression, or guarantees of free access to information. However, recent research studies have shown that Tor is vulnerable to so-called powerful correlation attacks carried out by global adversaries or collaborative Internet censorship parties. In the Tor "arms race" scenario, we can see that as new censorship, surveillance, and deep correlation tools have been researched, new, improved solutions for preserving anonymity have also emerged. In recent research proposals, unobservable encapsulation of IP packets in covert media channels is one of the most promising defenses against such threat models. These proposals leverage WebRTC-based covert channels as a robust and practical approach against powerful traffic correlation analysis. At the same time, these solutions are difficult to combat through the traffic-blocking measures commonly used by censorship authorities. In this dissertation, we propose TorKameleon, a censorship evasion solution designed to protect Tor users with increased censorship resistance against powerful traffic correlation attacks executed by global adversaries. The system is based on flexible K-anonymization input circuits that can support TLS tunneling and WebRTC-based covert channels before forwarding users' original input traffic to the Tor network. Our goal is to protect users from machine and deep learning correlation attacks between incoming user traffic and observed traffic at different Tor network relays, such as middle and egress relays. TorKameleon is the first system to implement a Tor pluggable transport based on parameterizable TLS tunneling and WebRTC-based covert channels. We have implemented the TorKameleon prototype and performed extensive validations to observe the correctness and experimental performance of the proposed solution in the Tor environment. With these evaluations, we analyze the necessary tradeoffs between the performance of the standard Tor network and the achieved effectiveness and performance of TorKameleon, capable of preserving the required unobservability properties.
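
    As a conceptual sketch only (not the TorKameleon implementation), the snippet below illustrates the K-anonymization circuit idea from the abstract: a payload is wrapped in one encryption layer per hop of a randomly chosen K-node circuit before being forwarded onward; the Fernet keys and hop identifiers are placeholders.

```python
# Toy model of layered encryption over a K-node input circuit.
# Assumes the cryptography package; keys and hop names are placeholders.
import random
from cryptography.fernet import Fernet

def build_circuit(available_hops, k=3):
    """Pick K distinct hops; each hop is (hop_id, Fernet key) in this toy model."""
    return random.sample(available_hops, k)

def wrap_for_circuit(payload: bytes, circuit):
    """Apply one encryption layer per hop, innermost layer for the last hop."""
    for hop_id, key in reversed(circuit):
        payload = Fernet(key).encrypt(payload)
    return payload

hops = [(f"hop{i}", Fernet.generate_key()) for i in range(10)]
circuit = build_circuit(hops, k=3)
wrapped = wrap_for_circuit(b"user traffic destined for the Tor entry", circuit)
print(len(wrapped))
```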

    A survey on web tracking: mechanisms, implications, and defenses

    Privacy seems to be the Achilles' heel of today's web. Most web services make continuous efforts to track their users and to obtain as much personal information as they can from the things they search, the sites they visit, the people they contact, and the products they buy. This information is mostly used for commercial purposes, which go far beyond targeted advertising. Although many users are already aware of the privacy risks involved in the use of internet services, the particular methods and technologies used for tracking them are much less known. In this survey, we review the existing literature on the methods used by web services to track users online, as well as their purposes, implications, and possible user defenses. We present five main groups of methods used for user tracking, based on sessions, client storage, client cache, fingerprinting, and other approaches. A special focus is placed on mechanisms that use web caches, operational caches, and fingerprinting, as they are usually very rich in terms of using various creative methodologies. We also show how users can be identified on the web and associated with their real names, e-mail addresses, phone numbers, or even street addresses. We show why tracking is being used and its possible implications for users. For each of the tracking methods, we present possible defenses. Some of them are specific to a particular tracking approach, while others are more universal (blocking more than one threat). Finally, we present future trends in user tracking and show that they can potentially pose significant threats to users' privacy. Peer reviewed. Postprint (author's final draft).
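
    To make one of the surveyed tracking categories concrete, the following toy sketch shows passive, server-side fingerprinting: stable request headers are hashed into an identifier that can re-identify a browser without cookies. The attribute set is illustrative only and far smaller than what real trackers combine.

```python
# Toy passive-fingerprinting example: hash stable HTTP request attributes
# into a short identifier. Header names are standard; the attribute set is
# illustrative, not a real tracker's feature list.
import hashlib

def passive_fingerprint(headers: dict) -> str:
    attributes = [
        headers.get("User-Agent", ""),
        headers.get("Accept", ""),
        headers.get("Accept-Language", ""),
        headers.get("Accept-Encoding", ""),
    ]
    return hashlib.sha256("|".join(attributes).encode()).hexdigest()[:16]

print(passive_fingerprint({
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "Accept-Language": "en-US,en;q=0.9",
    "Accept-Encoding": "gzip, deflate, br",
}))
```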

    Static Web content distribution and request routing in a P2P overlay

    The significance of collaboration over the Internet has become a cornerstone of modern computing, as the essence of information processing and content management has shifted to networked and Web-based systems. As a result, effective and reliable access to networked resources has become a critical commodity in any modern infrastructure. In order to cope with the limitations introduced by the traditional client-server networking model, most popular Web-based services have employed separate Content Delivery Networks (CDNs) to distribute the server-side resource consumption. Since Web applications are often latency-critical, CDNs are additionally being adopted to optimize the content delivery latencies perceived by Web clients. Because of the prevalent connection model, Web content delivery has grown into a notable industry. The rapid growth in the number of mobile devices further contributes to the amount of resources required from the originating server, as the content is also accessible on the go. While the Web has become one of the most utilized sources of information and digital content, the openness of the Internet is simultaneously being reduced by organizations and governments preventing access to any undesired resources. Access to information may be regulated or altered to suit political interests or organizational benefits, thus conflicting with the initial design principle of an unrestricted and independent information network. This thesis contributes to the development of a more efficient and open Internet by combining a feasibility study and a preliminary design of a peer-to-peer based Web content distribution and request routing mechanism. The suggested design addresses both the challenges related to the effectiveness of the current client-server networking model and the openness of information distributed over the Internet. Based on the properties of existing peer-to-peer implementations, the suggested overlay design is intended to provide low-latency access to any Web content without sacrificing end-user privacy. The overlay is additionally designed to increase the cost of censorship by forcing a successful blockade to isolate the censored network from the rest of the Internet.
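
    One common request-routing primitive for such overlays is consistent hashing, sketched below purely as a generic illustration (not the specific mechanism designed in the thesis): each URL maps to the peer whose position on a hash ring follows the URL's hash, so responsibility shifts minimally as peers join or leave.

```python
# Generic consistent-hashing sketch for P2P request routing; peer names
# and the single-node-per-peer ring are simplifications.
import bisect
import hashlib

def ring_position(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, peers):
        self.ring = sorted((ring_position(p), p) for p in peers)

    def route(self, url: str) -> str:
        """Return the peer responsible for serving this URL."""
        pos = ring_position(url)
        idx = bisect.bisect(self.ring, (pos, ""))
        return self.ring[idx % len(self.ring)][1]

ring = HashRing(["peer-a.example", "peer-b.example", "peer-c.example"])
print(ring.route("https://example.org/static/logo.png"))
```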

    Detecting malware and cyber attacks using ISP data
