    Hacking into International Humanitarian Law: The Principles of Distinction and Neutrality in the Age of Cyber Warfare

    Cyber warfare is an emerging form of warfare not explicitly addressed by existing international law. While most agree that legal restrictions should apply to cyber warfare, the international community has yet to reach consensus on how international humanitarian law (IHL) applies to this new form of conflict. After providing an overview of the global Internet structure and outlining several cyber warfare scenarios, this Note argues that violations of the traditional principles of distinction and neutrality are more likely to occur in cyber warfare than in conventional warfare. States have strong incentives to engage in prohibited cyber attacks despite the risk of war crimes accusations, so belligerents will violate the principle of distinction more frequently than in conventional conflicts, and many cyber attacks will unavoidably violate neutrality law. Rather than condemn all uses of cyber weapons, this Note argues that IHL should evolve to encourage the use of cyber warfare in some situations and to give states better guidance on the conduct of these attacks.

    Judging traffic differentiation as network neutrality violation according to internet regulation

    Network Neutrality (NN) is a principle establishing that traffic generated by Internet applications should be treated equally and should not be affected by arbitrary interference, degradation, or interruption. Despite this common ground, NN has multiple definitions spread across the academic literature, which differ primarily on what level of equality is required for a network to be considered neutral. NN definitions may also be included in regulations that control activities on the Internet. However, regulations are set by regulators whose acts are valid only within a geographical area, called a jurisdiction. Thus, both academia and regulations provide multiple, heterogeneous NN definitions.
    In this thesis, regulations are used as the guideline for detecting NN violations, which under this approach are the adoption of traffic management practices prohibited by regulators. Detection solutions can then provide helpful information for users to support claims against illegal traffic management practices. However, state-of-the-art solutions either adopt strict academic definitions (e.g., all traffic must be treated equally), which is not realistic, or adopt the regulatory definitions of a single jurisdiction, which ignores that multiple jurisdictions may be traversed along an end-to-end network path. An impact analysis showed that, under certain circumstances, from 39% to 48% of detected Traffic Differentiations (TDs) are not NN violations once regulations are considered, showing that the regulatory aspect must not be ignored. This thesis therefore proposes a Regulation Assessment step to be performed after TD detection. This step considers all NN definitions that may apply along an end-to-end network path and flags an NN violation only when one of them is violated. A service is proposed to perform this step on behalf of TD detection solutions, given the unfeasibility of every solution implementing the required functionality. A Proof-of-Concept (PoC) prototype was developed based on the requirements identified in the impact analysis and evaluated using information about TDs detected by a state-of-the-art solution. The verdicts were inconclusive (whether the TD is an NN violation or not) for a quarter of the scenarios, due to a lack of information about the traversed network paths and the occurrence zones (where in the network path the TD is suspected of being deployed). However, the literature already proposes approaches to obtain such information, and these results should encourage the proponents of TD detection solutions to collect this data and submit it for Regulation Assessment.
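    To make the Regulation Assessment step concrete, the sketch below shows how a detected TD could be checked against per-jurisdiction rules along the traversed path, returning an inconclusive verdict when path or occurrence-zone information is missing. It is a minimal illustration in Python; the rule table, record fields, and function names are assumptions for this example, not the thesis's actual service.

```python
# Minimal sketch of a Regulation Assessment verdict, assuming hypothetical
# jurisdiction rules and detected-TD records (not the thesis's actual service).
from dataclasses import dataclass
from typing import List

# Hypothetical per-jurisdiction rules: which TD practices are permitted.
RULES = {
    "JUR-A": {"paid_prioritization": True, "blocking": False},
    "JUR-B": {"paid_prioritization": False, "blocking": False},
}

@dataclass
class DetectedTD:
    practice: str                  # e.g. "paid_prioritization", "blocking"
    path_jurisdictions: List[str]  # jurisdictions traversed end to end, if known
    occurrence_zone: List[str]     # jurisdictions where the TD is suspected, if known

def assess(td: DetectedTD) -> str:
    """Return 'violation', 'no_violation', or 'inconclusive'."""
    if not td.path_jurisdictions or not td.occurrence_zone:
        return "inconclusive"      # missing path or occurrence-zone information
    for jurisdiction in td.occurrence_zone:
        rules = RULES.get(jurisdiction)
        if rules is None:
            return "inconclusive"  # no known regulation for this jurisdiction
        if not rules.get(td.practice, False):
            return "violation"     # practice prohibited where the TD is suspected
    return "no_violation"

if __name__ == "__main__":
    td = DetectedTD("paid_prioritization", ["JUR-A", "JUR-B"], ["JUR-B"])
    print(assess(td))              # 'violation': prohibited in JUR-B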

    Network Neutrality Inference

    When can we reason about the neutrality of a network based on external observations? We prove conditions under which it is possible to (a) detect neutrality violations and (b) localize them to specific links, based on external observations. Our insight is that, when we make external observations from different vantage points, these will most likely be inconsistent with each other if the network is not neutral. Where existing tomographic techniques try to form solvable systems of equations to infer network properties, we try to form unsolvable systems that reveal neutrality violations. We present an algorithm that relies on this idea to identify sets of non-neutral links based on external observations, and we show, through network emulation, that it achieves good accuracy for a variety of network conditions.
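    A rough way to picture the inference idea, assuming a toy topology and synthetic per-class delay measurements (not the paper's algorithm or data): stack one path-delay equation per traffic class and check whether a single per-link delay assignment can explain them all; a large least-squares residual signals an inconsistency, i.e., evidence of a neutrality violation.

```python
# Toy "unsolvable system" check, assuming a 3-link topology and synthetic
# per-class path delays (not the paper's algorithm or measurement data).
import numpy as np

# Rows are (path, traffic-class) observations; columns are links. Under a
# neutral network a single delay per link should explain every class.
A = np.array([
    [1.0, 1.0, 0.0],  # path 1 (links 0,1), class A
    [1.0, 1.0, 0.0],  # path 1, class B
    [0.0, 1.0, 1.0],  # path 2 (links 1,2), class A
    [0.0, 1.0, 1.0],  # path 2, class B
])

# Measured path delays in ms; class B is throttled somewhere in this toy data.
y = np.array([20.0, 35.0, 25.0, 40.0])

x, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = float(np.linalg.norm(A @ x - y))

# A large residual means no per-link delay assignment is consistent with all
# classes at once, which suggests some link treats the classes differently.
print(f"residual = {residual:.1f} ms -> non-neutral? {residual > 1.0}")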

    Effective techniques for detecting and locating traffic differentiation in the internet

    Advisor: Elias P. Duarte Jr. Co-advisor: Luis C. E. Bona. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 24/09/2019. Includes references: p. 115-126. Concentration area: Computer Science.
    Network Neutrality is becoming increasingly important as the global debate intensifies and governments worldwide implement and withdraw regulations. According to this principle, all traffic must be processed without differentiation, regardless of origin, destination, and/or content. Traffic Differentiation (TD) practices should be transparent, regardless of regulations, since they can significantly affect end users. It is thus essential to monitor TD in the Internet. Several solutions have been proposed to detect TD; they are based on network measurements and statistical inference, but open challenges remain. This thesis has three main objectives: (i) to consolidate the state of the art on the problem of detecting TD; (ii) to investigate TD in contexts not yet explored, in particular the Internet of Things (IoT); and (iii) to propose new TD detection solutions that address open challenges, in particular locating the source of TD. We first describe the current state of the art, including multiple solutions for detecting TD, propose a taxonomy of the different types of TD and of detection, and identify open challenges. Then, we evaluate the impact of TD on IoT by simulating TD on different IoT traffic patterns. Results show that even a small prioritization may have a significant impact on the performance of IoT devices. Next, we propose a solution for detecting TD in the Internet that relies on a new strategy of combining several metrics to detect different types of TD; simulation results show that this strategy is capable of detecting TD under several conditions. We then propose a general model for continuously monitoring TD on the Internet, which aims at unifying current and future TD detection solutions while taking advantage of current and emerging technologies. In this context, a new solution for locating the source of TD in the Internet is proposed. The goal of this proposal is both to enable the implementation of our general model and to address the problem of locating TD. The proposal takes advantage of Internet routing properties to identify in which Autonomous System (AS) TD occurs: probes from multiple vantage points are combined, and the source of TD is inferred from the AS-level routes between the measurement points. To evaluate this proposal, we first ran experiments confirming that Internet routes do present the required properties, and then performed several simulations to assess the efficiency of the proposal for locating TD. The results show that, in several different scenarios, issuing probes from a few end hosts in core Internet ASes achieves results similar to issuing probes from numerous end hosts at the edge. Keywords: Network Neutrality, Traffic Differentiation, Network Measurement.
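    The localization idea can be sketched as follows, with hypothetical AS-level paths and detection outcomes standing in for real measurements; the set-intersection logic below only illustrates the intuition of combining vantage points, not the thesis's exact inference procedure.

```python
# Simplified sketch of locating the source of TD from AS-level paths, assuming
# hypothetical per-vantage-point measurements (not the thesis's exact method).

def locate_td(measurements):
    """measurements: list of (as_path, td_detected) tuples.
    Returns the set of ASes consistent with every observation."""
    suspects = None
    cleared = set()
    for as_path, td_detected in measurements:
        if td_detected:
            # The culprit must lie on every path where TD was observed.
            suspects = set(as_path) if suspects is None else suspects & set(as_path)
        else:
            # ASes on differentiation-free paths are unlikely culprits.
            cleared |= set(as_path)
    return (suspects or set()) - cleared

if __name__ == "__main__":
    obs = [
        (["AS100", "AS200", "AS300"], True),   # vantage point 1: TD detected
        (["AS400", "AS200", "AS300"], True),   # vantage point 2: TD detected
        (["AS100", "AS500", "AS300"], False),  # vantage point 3: no TD
    ]
    print(locate_td(obs))  # {'AS200'}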

    Unauthorized Access

    Going beyond current books on privacy and security, this book proposes specific solutions to public policy issues pertaining to online privacy and security. Requiring no technical or legal expertise, it provides a practical framework to address ethical and legal issues. The authors explore the well-established connection between social norms, privacy, security, and technological structure. They also discuss how rapid technological developments have created novel situations that lack relevant norms and present ways to develop these norms for protecting informational privacy and ensuring sufficient information security.

    Secure and Differentially Private Detection of Net Neutrality Violations by Means of Crowdsourced Measurements

    Evaluating Network Neutrality requires comparing the quality of service experienced by multiple users served by different Internet Service Providers. Consequently, the issue of guaranteeing privacy-friendly network measurements has recently gained increasing interest. In this paper we propose a system which gathers throughput measurements from users of various applications and Internet services and stores them in a crowdsourced database, which can be queried by the users themselves to verify whether their submitted measurements are compliant with the hypothesis of a neutral network. Since the crowdsourced data may disclose sensitive information about users and their habits, thus leading to potential privacy leakages, we adopt a privacy-preserving method based on randomized sampling and suppression of small clusters. Numerical results show that the proposed solution ensures a good trade-off between usefulness of the system, in terms of precision and recall of discriminated users, and privacy, in terms of differential privacy.
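    A toy sketch of the release step described above, combining randomized sampling with suppression of small clusters; the sampling probability, minimum cluster size, and record layout are illustrative assumptions rather than the parameters used in the paper.

```python
# Toy sketch of a privacy-preserving release step: randomized sampling followed
# by suppression of small clusters. Parameters and data layout are assumptions.
import random
from collections import defaultdict

SAMPLE_P = 0.5        # each measurement is kept independently with this probability
MIN_CLUSTER_SIZE = 3  # clusters smaller than this are suppressed before release

def privatize(measurements, seed=42):
    """measurements: list of (isp, application, throughput_mbps) tuples."""
    rng = random.Random(seed)
    # Randomized sampling: sub-sample the raw crowdsourced records.
    sampled = [m for m in measurements if rng.random() < SAMPLE_P]
    # Group by (ISP, application) and suppress clusters that are too small.
    clusters = defaultdict(list)
    for isp, app, tput in sampled:
        clusters[(isp, app)].append(tput)
    return {key: vals for key, vals in clusters.items()
            if len(vals) >= MIN_CLUSTER_SIZE}

if __name__ == "__main__":
    raw = [("ISP-A", "video", 10 + i % 5) for i in range(20)] + \
          [("ISP-A", "p2p", 2.0), ("ISP-B", "video", 9.0)]
    released = privatize(raw)
    # Small clusters (e.g. the single p2p record) are dropped from the output.
    print({k: len(v) for k, v in released.items()})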

    Making broadband access networks transparent to researchers, developers, and users

    Broadband networks are used by hundreds of millions of users to connect to the Internet today. However, most ISPs are hesitant to reveal details about their network deployments, and as a result the characteristics of broadband networks are often not known to users, developers, and researchers. In this thesis, we make progress towards mitigating this lack of transparency in broadband access networks in two ways. First, using novel measurement tools we performed the first large-scale study of the characteristics of broadband networks. We found that broadband networks have very different characteristics than academic networks. We also developed Glasnost, a system that enables users to test their Internet access links for traffic differentiation. Glasnost has been used by more than 350,000 users worldwide and allowed us to study ISPs' traffic management practices. We found that ISPs increasingly throttle or even block traffic from popular applications such as BitTorrent. Second, we developed two new approaches to enable realistic evaluation of networked systems in broadband networks. We developed Monarch, a tool that enables researchers to study and compare the performance of new and existing transport protocols at large scale in broadband environments. Furthermore, we designed SatelliteLab, a novel testbed that can easily add arbitrary end nodes, including broadband nodes and even smartphones, to existing testbeds like PlanetLab.
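    The core of a Glasnost-style test is a back-to-back comparison between an application-like flow and a neutral control flow of the same size; the snippet below illustrates that comparison with made-up throughput numbers and a simple threshold, not Glasnost's actual statistical test.

```python
# Illustration of the flow-comparison idea behind tests like Glasnost: run an
# application-like flow and a control flow back to back and compare throughput.
# The threshold and statistics here are simplifications for illustration only.
from statistics import median

def looks_differentiated(app_mbps, control_mbps, threshold=0.8):
    """Flag differentiation if the application flow's median throughput falls
    well below the control flow's (same size, neutral payload)."""
    return median(app_mbps) < threshold * median(control_mbps)

if __name__ == "__main__":
    bittorrent_runs = [1.1, 0.9, 1.0, 1.2]  # Mbit/s, application-like payload
    control_runs    = [4.8, 5.1, 5.0, 4.9]  # Mbit/s, neutral payload
    print(looks_differentiated(bittorrent_runs, control_runs))  # True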

    Effective Wide-Area Network Performance Monitoring and Diagnosis from End Systems.

    The quality of the network application services running on today's Internet heavily depends on the performance assurance offered by Internet Service Providers (ISPs). Large network providers in the core of the Internet are instrumental in determining the network properties of their transit services due to their wide-area coverage, especially given the increasing deployment of real-time, latency-sensitive network applications. The end-to-end performance of distributed applications and network services is susceptible to network disruptions in ISP networks. Given the scale and complexity of the Internet, failures and performance problems can occur in different ISP networks, and it is important to identify and proactively respond to potential problems efficiently to prevent large damage. Existing work on monitoring and diagnosing network disruptions is ISP-centric, relying on each ISP to set up monitors and perform diagnosis within its own network. This approach is limited because ISPs are unwilling to reveal such data to the public. My dissertation research developed a lightweight active monitoring system to monitor, diagnose, and react to network disruptions purely from end hosts, which can help customers assess the compliance of their service-level agreements (SLAs). This thesis studies research problems from three indispensable aspects: efficient monitoring, accurate diagnosis, and effective mitigation. This is an essential step towards accountability and fairness on the Internet. To fully understand the limitations of relying on ISP data, the thesis first studies and demonstrates the great impact of monitor selection on monitoring quality and on the interpretation of results. Motivated by the limitations of the ISP-centric approach, it then demonstrates two techniques to diagnose two types of fine-grained causes accurately and scalably, by exploring information across the routing and data planes and by sharing information collaboratively among multiple locations. Finally, we demonstrate the usefulness of the monitoring and diagnosis results with two mitigation applications. The first is short-term prevention: avoiding problematic routes by exploiting the predictability of their history. The second is to scalably compare multiple ISPs across four important performance metrics, namely reachability, loss rate, latency, and path diversity, entirely from end systems and without any ISP cooperation.
    Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/64770/1/wingying_1.pd
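    The ISP-comparison application can be pictured as a simple aggregation of end-host probe records per ISP across the four metrics named above; the record format and aggregation below are illustrative assumptions, not the dissertation's implementation.

```python
# Hedged sketch of comparing ISPs from end-host probes across reachability,
# loss rate, latency, and path diversity; probe records are illustrative only.
from collections import defaultdict

def compare_isps(probes):
    """probes: list of dicts with keys isp, reachable, lost, rtt_ms, as_path."""
    per_isp = defaultdict(list)
    for p in probes:
        per_isp[p["isp"]].append(p)
    report = {}
    for isp, recs in per_isp.items():
        ok = [r for r in recs if r["reachable"]]
        report[isp] = {
            "reachability": len(ok) / len(recs),
            "loss_rate": sum(r["lost"] for r in recs) / len(recs),
            "avg_latency_ms": sum(r["rtt_ms"] for r in ok) / max(len(ok), 1),
            "path_diversity": len({tuple(r["as_path"]) for r in recs}),
        }
    return report

if __name__ == "__main__":
    probes = [
        {"isp": "ISP-A", "reachable": True,  "lost": 0.01, "rtt_ms": 30, "as_path": [1, 2, 3]},
        {"isp": "ISP-A", "reachable": True,  "lost": 0.02, "rtt_ms": 35, "as_path": [1, 4, 3]},
        {"isp": "ISP-B", "reachable": False, "lost": 1.00, "rtt_ms": 0,  "as_path": [5, 6]},
    ]
    print(compare_isps(probes))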