63 research outputs found

    DDoS-Capable IoT Malwares: Comparative Analysis and Mirai Investigation

    The Internet of Things (IoT) revolution has not only carried the astonishing promise to interconnect a whole generation of traditionally “dumb” devices, but also brought to the Internet the menace of billions of badly protected and easily hackable objects. Not surprisingly, this sudden flood of fresh and insecure devices fueled older threats, such as Distributed Denial of Service (DDoS) attacks. In this paper, we first propose an updated and comprehensive taxonomy of DDoS attacks, together with a number of examples of how this classification maps to real-world attacks. Then, we outline the current situation of DDoS-enabled malwares in IoT networks, highlighting how recent data support our concerns about the growing popularity of these malwares. Finally, we give a detailed analysis of the general framework and the operating principles of Mirai, the most disruptive DDoS-capable IoT malware seen so far.
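    As a purely illustrative aside (not the paper's own classification), DDoS taxonomies commonly start from a three-way split into volumetric, protocol, and application-layer attacks. The Python sketch below encodes that generic split, with well-known example vectors for each category; all names here are ours, chosen for illustration.

    # Illustrative only: a generic three-way DDoS taxonomy, not the
    # classification proposed in the paper above.
    from dataclasses import dataclass

    @dataclass
    class AttackClass:
        name: str        # taxonomy category
        target: str      # resource the attack exhausts
        examples: tuple  # well-known attack vectors in this category

    TAXONOMY = [
        AttackClass("volumetric", "network bandwidth",
                    ("UDP flood", "DNS amplification", "NTP amplification")),
        AttackClass("protocol", "connection-state tables",
                    ("SYN flood", "fragmented-packet attacks")),
        AttackClass("application-layer", "server resources",
                    ("HTTP flood", "Slowloris")),
    ]

    for cls in TAXONOMY:
        print(f"{cls.name}: exhausts {cls.target}; e.g. {', '.join(cls.examples)}")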

    Monitoring security of enterprise hosts via DNS data analysis

    Enterprise networks are growing in scale and complexity, with heterogeneous connected assets needing to be secured in different ways. Nevertheless, virtually all connected assets use the Domain Name System (DNS) for address resolution, and DNS has thus become a convenient vehicle for attackers to covertly perform Command and Control (C&C) communication, data theft, and service disruption across a wide range of assets. Enterprise security appliances that monitor network traffic typically allow all DNS traffic through, as it is vital for accessing any web service; at best they match against a database of known malicious patterns, and are therefore ineffective against zero-day attacks. This thesis focuses on three high-impact cyber-attacks that leverage DNS: data exfiltration, malware C&C communication, and service disruption. Using a large volume of DNS network traffic (over 10B packets) collected from a University campus and a Government research organization over six months, we illustrate the anatomy of these attacks, train machines to detect them automatically, and evaluate their efficacy in the field. The contributions of this thesis are three-fold.

    Our first contribution tackles data exfiltration using DNS. We analyze outgoing DNS queries to extract stateless attributes such as the number of characters, the number of labels, and the entropy of the domain name, which distinguish malicious data exfiltration queries from legitimate ones. We train our machines using ground truth obtained from a public list of the top 10K legitimate domains, and empirically validate and tune our models to achieve over 98% accuracy in correctly distinguishing legitimate DNS queries from malicious ones, the latter drawn from known malware domains as well as queries synthetically generated with popular DNS exfiltration tools.

    Our second contribution tackles malware C&C communication using DNS. We analyze outgoing DNS queries to identify more than twenty families of DGA (Domain Generation Algorithm)-enabled malware communicating with their C&C servers. We identify attributes of the network traffic that commences following the resolution of a DGA-based DNS query. We train three protocol-specific one-class classifier models, for HTTP, HTTPS, and UDP flows, using public packet traces of known malware. We develop a monitoring system that uses reactive rules to automatically and selectively mirror TCP/UDP flows (between internal hosts and malware servers) pertinent to DGA queries for diagnosis by the trained models. We deploy our system in the field and evaluate its performance, showing that it flags more than 2000 internal assets as potentially infected and generates more than a million suspicious flows, of which more than 97% are verified to be malicious by an off-the-shelf intrusion detection system.

    Our third contribution studies the use of DNS for service disruption. We analyze incoming DNS messages, with a specific focus on non-existent domain (NXD) responses, to distinguish benign from malicious NXDs. We highlight two attack scenarios based on the requested domain names. Using NXD behavioral attributes of internal hosts, we develop multi-staged iForest classification models to detect internal hosts launching service disruption attacks. We show how our models can detect infected hosts that generate high-volume and low-volume distributed NXD-based attacks on public resolvers and/or authoritative name servers, with an accuracy of over 99% in correctly classifying legitimate hosts.

    Our work shines a light on a critical vector in enterprise security and equips the enterprise network operator with the means to detect and block sophisticated attackers who use DNS as a vehicle for malware C&C communication, data exfiltration, and service disruption.
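    As a rough illustration of the stateless attributes named in the first contribution (character count, label count, and entropy of the queried name), the Python sketch below computes them for a single DNS query name. The helper names and example domains are invented for illustration and are not taken from the thesis.

    import math
    from collections import Counter

    def shannon_entropy(s: str) -> float:
        # Shannon entropy (bits per character) of a string.
        if not s:
            return 0.0
        n = len(s)
        return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

    def dns_features(qname: str) -> dict:
        # Stateless attributes of a single DNS query name.
        name = qname.rstrip(".")
        labels = name.split(".")
        return {
            "total_chars": len(name),
            "num_labels": len(labels),
            "max_label_len": max(len(label) for label in labels),
            "entropy": shannon_entropy(name.replace(".", "")),
        }

    # A benign-looking name vs. a high-entropy, exfiltration-style name.
    print(dns_features("www.example.com"))
    print(dns_features("aGVsbG8gZXhmaWw.4a7f9c02.tunnel.example.net"))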

    Detection of Distributed Denial-of-Service Attacks at the Source (original title: Deteção de ataques de negação de serviços distribuídos na origem)

    From year to year, new records for the amount of traffic in an attack are set, demonstrating not only the constant presence of distributed denial-of-service attacks but also their evolution, which sets them apart from other network threats. The importance of resource availability keeps growing, as does the debate on the security of network devices and infrastructures, given their central role in both home and corporate domains. Faced with this constant threat, recent network security systems have been applying pattern recognition techniques to infer, detect, and react more quickly and assertively. This dissertation proposes methodologies for inferring patterns of network activity from traffic: whether it follows behavior previously defined as normal, or whether there are deviations that raise suspicion about the legitimacy of an action on the network. Everything indicates that the future of network defense systems will continue in this direction, leveraging not only the growing volume of traffic but also the diversity of actions, services, and entities that exhibit distinct patterns, all of which contributes to the detection of anomalous activity on the network. The methodologies propose collecting metadata, up to the transport layer of the OSI model, which is then processed by machine learning algorithms to classify the underlying action. Intending the contribution to reach beyond denial-of-service attacks and the network domain, the methodologies are described generically so that they can be applied to other scenarios of greater or lesser complexity. The third chapter presents a proof of concept with attack vectors that made history, together with evaluation metrics that allow the different classifiers to be compared by success rate across the various network activities and their inherent dynamics. The tests demonstrate the flexibility, speed, and accuracy of the various classification algorithms, with success rates between 90 and 99 percent.
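    As a hedged sketch of the methodology described above (classifying network activity from per-flow metadata collected up to the transport layer), the Python example below trains and compares two scikit-learn classifiers on synthetic flow features. The feature set and the generated data are assumptions made purely for illustration; the dissertation evaluates real traffic and reports success rates between 90 and 99 percent.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical transport-layer flow metadata: packets/s, bytes/s,
    # mean packet size, SYN ratio. Label 1 marks simulated attack flows.
    n = 2000
    benign = rng.normal([50, 4e4, 800, 0.05], [20, 2e4, 200, 0.02], (n, 4))
    attack = rng.normal([900, 6e4, 80, 0.90], [300, 3e4, 30, 0.05], (n, 4))
    X = np.vstack([benign, attack])
    y = np.array([0] * n + [1] * n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)

    for model in (RandomForestClassifier(random_state=0),
                  LogisticRegression(max_iter=1000)):
        model.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, model.predict(X_te))
        print(f"{type(model).__name__}: accuracy = {acc:.3f}")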

    A Global Panopticon - The Changing Role of International Organizations in the Information Age

    The outbreaks of Severe Acute Respiratory Syndrome (SARS) in 2002-2003 and Swine Flu (H1N1) in 2009 captured a great deal of global attention. The swift spread of these diseases wreaked havoc, generated public hysteria, disrupted global trade and travel, and inflicted severe economic losses on countries, corporations, and individuals. Although affected states were required to report to the World Health Organization (WHO) events that may have constituted a public health emergency, many failed to do so. The WHO and the rest of the international community were therefore desperate for accurate, up-to-date information as to the nature of the pandemics, their spread in different countries, and treatment possibilities. The solution came from a somewhat surprising source: the internet. The first signs of both diseases were discovered by automated web crawlers that screened local media sources in multiple languages, looking for specific keywords. In the case of SARS, a web crawler alerted the WHO to the early signs of the disease more than three months before the international community became aware of it. In the case of Swine Flu, a web crawler was similarly responsible for unearthing early reports on the disease and triggering further inquiry by the WHO. Information that flowed from the internet impelled the WHO to approach local health agencies and demand that they conduct thorough investigations into the outbreaks. The role played by the internet expanded even further after the initial discovery of the diseases. The worldwide spread of SARS and, in particular, Swine Flu was closely monitored online by global networks of scientists and volunteers who shared their experiences and tagged relevant data on interactive maps. As the Director-General of the WHO declared, "[f]or the first time in history, the international community could watch a pandemic unfold, and chart its evolution, in real time." This Article argues that these technological developments are not just helpful for better disease detection and surveillance; rather, they reflect a deeper, broader conceptual shift in state compliance with international law. Information technologies allow international organizations (IOs) to play an unprecedented, and so far overlooked, role in this respect. In particular, they transform one of the core functions of IOs in international relations: compliance monitoring.

    Access Denied

    A study of Internet blocking and filtering around the world: analyses by leading researchers and survey results that document filtering practices in dozens of countries.

    Many countries around the world block or filter Internet content, denying access to information that they deem too sensitive for ordinary citizens, most often about politics but sometimes relating to sexuality, culture, or religion. Access Denied documents and analyzes Internet filtering practices in more than three dozen countries, offering the first rigorously conducted study of an accelerating trend. Internet filtering takes place in more than three dozen states worldwide, including many countries in Asia, the Middle East, and North Africa. Related Internet content-control mechanisms are also in place in Canada, the United States, and a cluster of countries in Europe. Drawing on a just-completed survey of global Internet filtering undertaken by the OpenNet Initiative (a collaboration of the Berkman Center for Internet and Society at Harvard Law School, the Citizen Lab at the University of Toronto, the Oxford Internet Institute at Oxford University, and the University of Cambridge) and relying on work by regional experts and an extensive network of researchers, Access Denied examines the political, legal, social, and cultural contexts of Internet filtering in these states from a variety of perspectives. Chapters discuss the mechanisms and politics of Internet filtering, the strengths and limitations of the technology that powers it, the relevance of international law, ethical considerations for corporations that supply states with the tools for blocking and filtering, and the implications of Internet filtering for activist communities that increasingly rely on Internet technologies for communicating their missions. Reports on Internet content regulation in forty different countries follow, with each two-page country profile outlining the types of content blocked by category and documenting key findings.

    Contributors: Ross Anderson, Malcolm Birdling, Ronald Deibert, Robert Faris, Vesselina Haralampieva, Steven Murdoch, Helmi Noman, John Palfrey, Rafal Rohozinski, Mary Rundle, Nart Villeneuve, Stephanie Wang, Jonathan Zittrain

    Cyber Law and Espionage Law as Communicating Vessels

    Professor Lubin's contribution is Cyber Law and Espionage Law as Communicating Vessels, pp. 203-225. Existing legal literature would have us assume that espionage operations and “below-the-threshold” cyber operations are doctrinally distinct. Whereas one is subject to the scant, amorphous, and under-developed legal framework of espionage law, the other is subject to an emerging, ever-evolving body of legal rules, known cumulatively as cyber law. This dichotomy, however, is erroneous and misleading. In practice, espionage and cyber law function as communicating vessels, and so are better conceived as two elements of a complex system, Information Warfare (IW). This paper therefore first draws attention to the similarities between the practices: the fact that the actors, technologies, and targets are interchangeable, as are the knee-jerk legal reactions of the international community. In light of the convergence between peacetime Low-Intensity Cyber Operations (LICOs) and peacetime Espionage Operations (EOs), the two should be subjected to a single regulatory framework, one which recognizes the role intelligence plays in our public world order and which adopts a contextual and consequential method of inquiry. The paper proceeds in the following order: Part 2 provides a descriptive account of the unique symbiotic relationship between espionage and cyber law, and further explains the reasons for this dynamic. Part 3 places the discussion surrounding this relationship within the broader discourse on IW, making the claim that the convergence between EOs and LICOs, as described in Part 2, could further be explained by an even larger convergence across all the various elements of the informational environment. Parts 2 and 3 then serve as the backdrop for Part 4, which details the attempt of the drafters of the Tallinn Manual 2.0 to compartmentalize espionage law and cyber law, and the deficits of their approach. The paper concludes by proposing an alternative holistic understanding of espionage law, grounded in general principles of law, which is more practically transferable to the cyber realm.

    Cyber Peace

    Cyberspace is increasingly vital to the future of humanity, and managing it peacefully and sustainably is critical to both security and prosperity in the twenty-first century. These chapters and essays unpack the field of cyber peace by investigating historical and contemporary analogies in a wide-ranging and accessible Open Access publication.