13 research outputs found

    Towards Protection Against Low-Rate Distributed Denial of Service Attacks in Platform-as-a-Service Cloud Services

    Get PDF
    Nowadays, an abundant variety of technologies supports daily tasks, and businesses and people alike benefit from this diversity. As technology evolves it becomes more useful, but it also becomes a target for malicious users. Cloud Computing is one of the technologies that has been adopted by companies worldwide over the years; its popularity is essentially due to its characteristics and the way it delivers its services. This expansion of the Cloud also means that malicious users may try to exploit it, as the research studies presented throughout this work reveal. According to these studies, the Denial of Service (DoS) attack is a type of threat that constantly tries to take advantage of Cloud Computing services. Several companies have moved or are moving their services to hosted environments provided by Cloud Service Providers and are using several applications based on those services. The literature on the subject points out that, because of this expansion in Cloud adoption, the use of applications has increased. DoS threats are therefore increasingly aimed at the Application Layer, and more advanced variations such as Low-Rate Distributed Denial of Service (DDoS) attacks are being used. Research is being conducted specifically on the detection and mitigation of this kind of threat, and the significant problem found with this DDoS variant is the difficulty of differentiating malicious traffic from legitimate user traffic. The main goal of this attack is to exploit the way the HTTP protocol communicates, sending legitimate-looking traffic with small changes that slowly fill a server's request queue, nearly blocking real users' access to the server's resources for the duration of the attack. This kind of attack usually has a small time window, but to be more efficient it is launched from infected computers that form a network of attackers, turning it into a distributed attack. In this work, the idea for combating Low-Rate DDoS attacks is to integrate different technologies into a Hybrid Application whose main goal is to identify and separate malicious traffic from legitimate traffic. First, a study is done to observe the behavior of each type of Low-Rate attack, in order to gather information about its characteristics while the attack executes in real time. Then, using Tshark filters, that packet information is collected. The next step is to build combinations of specific fields obtained from the packet filtering and compare them. Finally, each packet is analyzed against these combination patterns. A log file stores the data gathered after the Entropy calculation in a friendly format. To test the efficiency of the application, a virtual Cloud infrastructure was built using OpenNebula Sandbox and the Apache Web Server. Two tests were run against this infrastructure: the first verified the effectiveness of the tool in proportion to the Cloud environment created, and based on its results a second test was proposed to demonstrate how the Hybrid Application behaves against the attacks performed. The tests showed how disruptive the types of Slow-Rate DDoS can be, and also exhibited promising results for the Hybrid Application's performance against Low-Rate DDoS attacks. The Hybrid Application successfully identified each type of Low-Rate DDoS, separated the traffic and generated few false positives in the process. The results are displayed in the form of parameters and graphs.
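    The entropy step above lends itself to a short illustration. The sketch below, a minimal Python example and not the thesis's actual tool, computes the Shannon entropy of the source-IP distribution over fixed-size packet windows and flags low-entropy windows; the window size, threshold and tshark field names are assumptions made purely for illustration.

```python
# Illustrative sketch only: entropy-based screening of HTTP request traffic,
# in the spirit of the Hybrid Application described above. Field names,
# window size and threshold are hypothetical, not the thesis's values.
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (bits) of the empirical distribution of `values`."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def flag_windows(packets, window=100, threshold=2.0):
    """Yield (window_index, entropy, suspicious) for each full window.

    `packets` is an iterable of (src_ip, http_field) tuples, e.g. parsed
    from `tshark -T fields -e ip.src -e http.request.uri` output. A low
    source-IP entropy means traffic is concentrated on few senders, one
    possible signature of an attack window; a trailing partial window is
    ignored in this sketch.
    """
    batch = []
    for i, pkt in enumerate(packets, 1):
        batch.append(pkt)
        if i % window == 0:
            h = shannon_entropy(src for src, _ in batch)
            yield (i // window, h, h < threshold)
            batch = []
```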

    QoE-Centric Control and Management of Multimedia Services in Software Defined and Virtualized Networks

    Get PDF
    Multimedia services consumption has increased tremendously since the deployment of 4G/LTE networks. Mobile video services (e.g., YouTube and Mobile TV) on smart devices are expected to continue to grow with the emergence and evolution of future networks such as 5G. End users' demand for better-quality services has pushed service providers towards Quality of Experience (QoE)-centric network management through efficient utilization of network resources. However, existing network technologies are either unable to adapt to diverse, changing network conditions or limited in available resources. This poses challenges to service providers for the provisioning of QoE-centric multimedia services. New networking solutions such as Software Defined Networking (SDN) and Network Function Virtualization (NFV) can provide better solutions for QoE control and management of multimedia services in emerging and future networks. The features of SDN, such as adaptability, programmability and cost-effectiveness, make it suitable for bandwidth-intensive multimedia applications such as live video streaming, 3D/HD video and video gaming. However, the delivery of multimedia services over SDN/NFV networks with optimized QoE, and overall QoE-centric network resource management, remain open questions, especially with the advent of future softwarized networks. The work in this thesis investigates, designs and develops novel approaches for QoE-centric control and management of multimedia services (with a focus on video streaming services) over software defined and virtualized networks. First, a video quality management scheme based on traffic intensity is developed for Dynamic Adaptive Streaming over HTTP (DASH) using SDN. The proposed scheme can mitigate virtual port queue congestion, which may cause buffering or stalling events during video streaming and thus reduce video quality. A QoE-driven resource allocation mechanism is then designed and developed to improve the end user's QoE for video streaming services. The aim of this approach is to find the best combination of network node functions that can provide an optimized QoE level to end users through network node cooperation. Furthermore, a novel QoE-centric management scheme is proposed and developed, which utilizes Multipath TCP (MPTCP) and Segment Routing (SR) to enhance QoE for video streaming over SDN/NFV-based networks. The goal of this strategy is to enable service providers to route network traffic through multiple disjoint bandwidth-satisfying paths and meet specific QoE guarantees for end users. Extensive experiments demonstrate that the proposed schemes improve video quality significantly compared with state-of-the-art approaches. The thesis further proposes a path protection and link-failure-free MPTCP/SR-based architecture that increases the survivability, resilience, availability and robustness of future networks. The proposed path protection and dynamic link recovery scheme recovers from a failed link in minimum time and avoids link congestion in softwarized networks.
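    As a rough illustration of the multi-path idea, the following Python sketch greedily picks edge-disjoint paths that satisfy a per-link bandwidth floor. It is a simplified stand-in for the thesis's SDN/SR path computation, not its actual algorithm: the graph format, capacities and the greedy strategy are all assumptions.

```python
# Illustrative sketch: choosing edge-disjoint, bandwidth-satisfying paths for
# MPTCP subflows, a simplified stand-in for the SDN/SR path logic above.

def bandwidth_paths(graph, src, dst, min_bw, max_paths=2):
    """Greedily pick up to `max_paths` edge-disjoint paths whose every link
    offers at least `min_bw`. `graph` maps node -> {neighbor: capacity}."""
    found = []
    used = set()  # directed edges already claimed by an earlier subflow
    for _ in range(max_paths):
        path = _dfs(graph, src, dst, min_bw, used, [src])
        if path is None:
            break
        used.update(zip(path, path[1:]))
        found.append(path)
    return found

def _dfs(graph, node, dst, min_bw, used, path):
    """Depth-first search avoiding claimed edges and links below `min_bw`."""
    if node == dst:
        return list(path)
    for nxt, cap in graph[node].items():
        if cap >= min_bw and (node, nxt) not in used and nxt not in path:
            path.append(nxt)
            result = _dfs(graph, nxt, dst, min_bw, used, path)
            if result:
                return result
            path.pop()
    return None

# Example: two disjoint 20-Mb/s paths between h1 and h2 (hypothetical topology).
g = {"h1": {"s1": 50, "s2": 30}, "s1": {"h2": 40}, "s2": {"h2": 25}, "h2": {}}
print(bandwidth_paths(g, "h1", "h2", min_bw=20))
```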

    Dissecting HTTP/2 and QUIC : measurement, evaluation and optimization

    Get PDF
    Thesis under joint supervision (cotutelle) between the Universitat Politècnica de Catalunya and the Université catholique de Louvain. The Internet is evolving from the perspective of both usage and connectivity. The meteoric rise of smartphones has not only facilitated connectivity for the masses, it has also increased their appetite for more responsive applications. The widespread availability of wireless networks has caused a paradigm shift in the way we access the Internet. This shift has resulted in a new trend where traditional applications are being migrated to the cloud, e.g., Microsoft Office 365 and Google Apps. As a result, modern web content has become extremely complex and requires efficient web delivery protocols to maintain the user experience regardless of the technology used to connect to the Internet, and despite variations in the quality of users' Internet connectivity. To achieve this goal, efforts have been put into optimizing existing web and transport protocols, designing new low-latency transport protocols and introducing enhancements in the WiFi MAC layer. In recent years, several improvements have been introduced in the HTTP protocol, resulting in the HTTP/2 standard, which allows more efficient use of network resources and a reduced perception of latency. The QUIC transport protocol is another example of these ambitious efforts. Initially developed by Google as an experiment, the protocol has already made phenomenal strides, thanks to its support in Google's servers and the Chrome browser. However, there is a lack of sufficient understanding and evaluation of these new protocols across a range of environments, which opens new opportunities for research in this direction. This thesis provides a comprehensive study of the behavior, usage and performance of HTTP/2 and QUIC, and advances them by implementing several optimizations. First, in order to understand the behavior of HTTP/1 and HTTP/2 traffic, we analyze datasets of passive measurements collected in various operational networks and discover that they have very different characteristics. This calls for a reappraisal of traffic models, as well as of HTTP traffic simulation and benchmarking approaches that were built on an understanding of HTTP/1 traffic only and may no longer be valid for modern web traffic. We develop a machine learning-based method, compatible with existing flow monitoring systems, for classifying encrypted web traffic into the appropriate HTTP versions. This will enable network administrators to identify HTTP/1 and HTTP/2 flows for network management tasks such as traffic shaping or prioritization. We also investigate the behavior of HTTP/2 stream multiplexing in the wild. We devise a methodology for the analysis of large datasets of network traffic, comprising over 200 million flows, to quantify the usage of HTTP/2 multiplexing in the wild and to understand its implications for network infrastructure. Next, we show with the help of emulations that HTTP/2 exhibits poor performance in adverse scenarios such as high packet loss or network congestion. We confirm that the use of a single connection sometimes impairs the application performance of HTTP/2, and we implement an optimization in the Chromium browser to make it more robust in such scenarios. Finally, we collect and analyze QUIC and TCP traffic in a production wireless mesh network. Our results show that while QUIC outperforms TCP in fixed networks, it exhibits significantly lower performance than TCP when there are wireless links in the end-to-end path. To see why this is the case, we carefully examine how the delay variations that are common in wireless networks impact QUIC's congestion control and loss detection algorithms. We also explore the interaction of QUIC transport with advanced link-layer features of WiFi such as frame aggregation. We fine-tune QUIC based on our findings and show a notable increase in performance.
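    The flow-classification step can be pictured with a short, hedged sketch: a standard classifier trained on per-flow statistics to label flows as HTTP/1 or HTTP/2. The feature set, file names and model choice below are assumptions for illustration, not the thesis's exact design.

```python
# Illustrative sketch: flow-level classification of encrypted traffic into
# HTTP versions, in the spirit of the method described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row holds per-flow statistics obtainable from a flow monitor, e.g.
# [duration_s, bytes_up, bytes_down, packets_up, packets_down, mean_pkt_size].
X = np.load("flow_features.npy")   # hypothetical exported feature matrix
y = np.load("flow_labels.npy")     # hypothetical labels: 1 = HTTP/1, 2 = HTTP/2

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```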

    Opportunistic Routing with Network Coding in Powerline Communications

    Get PDF
    Opportunistic Routing (OR) can be used as an alternative to legacy routing (LR) protocols in networks with a broadcast lossy channel and the possibility of overhearing the signal. The power line medium creates such an environment. OR can exploit the channel better than LR because it allows the cooperation of all nodes that receive any data; with LR, only a chain of nodes is selected for communication, and the other nodes drop the information they receive. We investigate OR for the one-source one-destination scenario with one traffic flow. First, we evaluate the upper bound on the achievable data rate and advocate a decentralized algorithm for its calculation. This knowledge is used in the design of the Basic Routing Rules (BRR). They use a link quality metric that equals the upper bound on the achievable data rate between the given node and the destination, which we call the node priority. It accounts for the possibility of multi-path communication and for packet loss correlation. BRR achieves the optimal data rate under certain theoretical assumptions; the Extended BRR (BRR-E) is free of them. The major difference between BRR and BRR-E lies in the use of Network Coding (NC) to predict the feedback, which severely reduces the protocol overhead. We also study an Automatic Repeat-reQuest (ARQ) mechanism applicable with OR. It differs from ARQ with LR in that each sender has several sinks, and none of the sinks except the destination requires full recovery of the original message. Using BRR-E, ARQ and other services such as network initialization and link state control, we design the Advanced Network Coding based Opportunistic Routing protocol (ANChOR). With analytic and simulation results we demonstrate the near-optimum performance of ANChOR: for the triangular topology, the achievable data rate is just 2% below the theoretical maximum and up to 90% higher than what is possible with LR. Using the G.hn standard, we also show full protocol stack simulation results (including IP/UDP and a realistic channel model). These simulations revealed that the gain of OR over LR can be increased even further by reducing the head-of-line problem in ARQ. Even accounting for the ANChOR overhead of additional headers and feedback, it outperforms the original G.hn setup by up to 40% in data rate and up to 60% in latency.

    Contents:
    1 Introduction
      1.1 Intra-flow Network Coding
      1.2 Random Linear Network Coding (RLNC)
    2 Performance Limits of Routing Protocols in PowerLine Communications (PLC)
      2.1 System model
      2.2 Channel model
      2.3 Upper bound on the achievable data rate
      2.4 Achieving the upper bound data rate
      2.5 Potential gain of Opportunistic Routing Protocol (ORP) over Common Single-path Routing Protocol (CSPR)
      2.6 Evaluation of ORP potential
    3 Opportunistic Routing: Realizations and Challenges
      3.1 Vertex priority and cooperation group
      3.2 Transmission policy in idealized network
        3.2.1 Basic Routing Rules (BRR)
      3.3 Transmission policy in real network
        3.3.1 Purpose of Network Coding (NC) in ORP
        3.3.2 Extended Basic Routing Rules (BRR-E)
      3.4 Automatic Repeat-reQuest (ARQ)
        3.4.1 Retransmission request message contents
        3.4.2 Retransmission Request (RR) origination and forwarding
        3.4.3 Retransmission response
      3.5 Congestion control
        3.5.1 Congestion control in our work
      3.6 Network initialization
      3.7 Formation of the cooperation groups (coalitions)
      3.8 Advanced Network Coding based Opportunistic Routing protocol (ANChOR) header
      3.9 Communication of protocol information
      3.10 ANChOR simulation
        3.10.1 ANChOR information in real time
        3.10.2 Selection of the coding rate
        3.10.3 Routing Protocol Information (RPI) broadcasting frequency
        3.10.4 RR contents
        3.10.5 Selection of RR forwarder
        3.10.6 ANChOR stability
      3.11 Summary
    4 ANChOR in the Gigabit Home Network (G.hn) Protocol
      4.1 Compatibility with the PLC protocol stack
      4.2 Channel and noise model
        4.2.1 In-home scenario
        4.2.2 Access network scenario
      4.3 Physical layer (PHY) implementation
        4.3.1 Bit Allocation Algorithm (BAA)
      4.4 Multiple Access Control (MAC) layer
      4.5 Logical Link Control (LLC) layer
        4.5.1 Reference Automatic Repeat reQuest (ARQ)
        4.5.2 Hybrid Automatic Repeat reQuest (HARQ) in ANChOR
        4.5.3 Modeling Protocol Data Unit (PDU) erasures on LLC
      4.6 Summary
    5 Study of G.hn with ANChOR
      5.1 ARQ analysis
      5.2 Medium and PHY requirements for "good" cooperation
      5.3 Access network scenario
      5.4 In-home scenario
        5.4.1 Modeling packet erasures
        5.4.2 Linear Dependence Ratio (LDR)
        5.4.3 Worst case scenario
        5.4.4 Analysis of in-home topologies
    6 Conclusions
    A Proof of the necessity of the exclusion rule
    B Gain of ORPs to CSPRs
    C Broadcasting rule
    D Proof of optimality of BRR for triangular topology
    E Reducing the retransmission probability
    F Calculation of Expected Average number of transmissions (EAX) for topologies with bi-directional links
    G Feedback overhead of full coding matrices
    H Block diagram of G.hn physical layer in ns-3 model
    I PER to BER mapping
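    For readers unfamiliar with RLNC (Section 1.2 in the contents above), the following minimal Python sketch shows the core encoding operation over GF(2). It is an assumption-laden illustration, not the thesis's implementation: production systems typically use a larger field such as GF(2^8), and the generation and packet sizes here are arbitrary.

```python
# Illustrative sketch: Random Linear Network Coding over GF(2), the coding
# family (RLNC) the thesis builds on.
import random

def rlnc_encode(packets):
    """Return (coefficient_vector, coded_packet): a random GF(2) combination
    of equal-length source packets, where addition is bytewise XOR."""
    coeffs = [random.randint(0, 1) for _ in packets]
    if not any(coeffs):  # avoid the useless all-zero combination
        coeffs[random.randrange(len(coeffs))] = 1
    coded = bytes(len(packets[0]))
    for c, pkt in zip(coeffs, packets):
        if c:
            coded = bytes(a ^ b for a, b in zip(coded, pkt))
    return coeffs, coded

# A generation of 4 source packets; a relay can forward fresh random
# combinations without knowing which packets the next hop already holds.
generation = [bytes([i] * 8) for i in range(4)]
vec, pkt = rlnc_encode(generation)
print(vec, pkt.hex())
```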

    Enabling Dynamic Spectrum Allocation in Cognitive Radio Networks

    Get PDF
    The last decade has witnessed the proliferation of innovative wireless technologies, such as Wi-Fi and wireless mesh networks, operating in unlicensed bands. Due to the increasing popularity and wide deployment of these technologies, the unlicensed bands have become overcrowded. The wireless devices operating in these bands interfere with each other and hurt overall performance. To support the fast growth of wireless technologies, more spectrum is required. However, as most "prime" spectrum has already been allocated, there is no spectrum available to expand these innovative wireless services. Despite the general perception of an actual spectral shortage, recent measurement results released by the FCC (Federal Communications Commission) show that on average only 5% of the spectrum from 30 MHz to 30 GHz is used in the US. This indicates that inefficient spectrum usage is the root cause of the spectral shortage problem. This dissertation therefore focuses on improving spectrum utilization and efficiency to tackle the spectral shortage problem and support ever-growing user demand for wireless applications. It proposes a novel concept of dynamic spectrum allocation, which adaptively divides the available spectrum into non-overlapping frequency segments of different bandwidths, considering the number of potentially interfering transmissions and the distribution of traffic load in a local environment. The goals are (1) to maximize spectrum efficiency by increasing parallel transmissions and reducing co-channel interference, and (2) to improve fairness across a network by balancing spectrum assignments. Since existing radio systems offer very limited flexibility, cognitive radios, which can sense and adapt to their radio environment, are exploited to support such a dynamic concept. We explore two directions for improving spectrum efficiency with the proposed dynamic allocation concept. First, we build a cognitive wireless system called KNOWS to exploit unoccupied frequencies in the licensed TV bands. KNOWS is a hardware-software platform that includes new radio hardware, a spectrum-aware MAC (medium access control) protocol and an algorithm implementing the dynamic spectrum allocation. We show that KNOWS accomplishes a remarkable 200% throughput gain over systems based on fixed allocations in common cases. Second, we enhance wireless LANs (WLANs), the most popular network setting in unlicensed bands, by proposing a dynamic channelization structure and a scalable MAC design. Through analysis and extensive simulations, we show that the new channelization structure and the scalable MAC design improve not only network capacity but also per-client fairness, by allocating channels of variable width to the access points in a WLAN. In conclusion, we believe that our proposed concept of dynamic spectrum allocation lays a solid foundation for building systems that efficiently use the invaluable spectrum resource.
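    The allocation concept can be sketched in a few lines: split a band into contiguous, non-overlapping segments whose widths are proportional to each access point's offered load. The Python below is an illustrative simplification, not the KNOWS algorithm; the band edges and loads are made up, and a real allocator must also respect minimum channel widths and guard bands.

```python
# Illustrative sketch: load-proportional, non-overlapping spectrum segments,
# the core idea of the dynamic allocation concept described above.

def allocate_segments(band_mhz, loads):
    """Split the (start, end) band in MHz into contiguous segments sized by
    load share. `loads` maps AP name -> offered load (any consistent unit).
    Returns AP -> (seg_start, seg_end)."""
    start, end = band_mhz
    total = sum(loads.values())
    segments, cursor = {}, start
    for ap, load in sorted(loads.items()):
        width = (end - start) * load / total
        segments[ap] = (round(cursor, 2), round(cursor + width, 2))
        cursor += width
    return segments

# Three APs sharing a hypothetical 40 MHz band (2400-2440 MHz).
print(allocate_segments((2400, 2440), {"ap1": 10, "ap2": 30, "ap3": 20}))
```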

    17th SC@RUG 2020 proceedings 2019-2020

    Get PDF
