
    Control plane optimization in Software Defined Networking and task allocation for Fog Computing

    As the next-generation mobile wireless standard, the fifth generation (5G) of cellular/wireless networks has drawn worldwide attention during the past few years. Owing to its promise of higher performance over the legacy 4G network, an increasing number of IT companies and institutes have started to form partnerships and create 5G products. Emerging techniques such as Software Defined Networking and Mobile Edge Computing are also envisioned as key enabling technologies to augment 5G capabilities. However, popular and promising as it is, 5G technology still faces several intrinsic challenges, such as (i) strict requirements on end-to-end delays, (ii) the required reliability of the control plane and (iii) the minimization of energy consumption. To cope with these issues, we provide the following main contributions. As a first contribution, we address the problem of the optimal placement of SDN controllers. Specifically, we give a detailed analysis of the impact that controller placement has on the reactivity of the SDN control plane, due to the consistency protocols adopted to manage the data structures shared across different controllers. We compute the Pareto frontier, showing all the tradeoffs achievable between inter-controller delays and switch-to-controller latencies. We define two data-ownership models and formulate the controller placement problem with the goal of minimizing the reaction time of the control plane, as perceived by a switch. We propose two evolutionary algorithms, namely Evo-Place and Best-Reactivity, to compute the Pareto frontier and the controller placement minimizing the reaction time, respectively. Experimental results show that Evo-Place outperforms its random counterpart, and that Best-Reactivity achieves a relative error of <= 30% with respect to the optimal algorithm while sampling less than 10% of the whole solution space.
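The Pareto frontier between the two placement objectives can be illustrated with a minimal non-dominated filter. The candidate placements and their objective values below are hypothetical, and the filtering is a generic dominance check, not the Evo-Place algorithm itself:

```python
def pareto_frontier(points):
    """Keep only placements not dominated in both objectives.

    Each point is (inter-controller delay, switch-to-controller latency);
    q dominates p if q is <= p in both objectives and differs in at least one.
    """
    return sorted({
        p for p in points
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
    })

# Hypothetical candidate placements evaluated on the two objectives (ms).
candidates = [(10, 4), (8, 6), (12, 3), (9, 5), (15, 2), (11, 5)]
front = pareto_frontier(candidates)  # (11, 5) is dominated by (10, 4)
```

Every point on the resulting frontier trades lower inter-controller delay against higher switch-to-controller latency, which is exactly the tradeoff the abstract describes.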
As a second contribution, we propose a stateful SDN approach to improve the scalability of traffic classification in SDN networks. In particular, we leverage the OpenState extension to OpenFlow to deploy state machines inside the switch and minimize the number of packets redirected to the traffic classifier. We experimentally compare two approaches, namely Simple Count-Down (SCD) and Compact Count-Down (CCD), to scale the traffic classifier and minimize flow-table occupancy. As a third contribution, we propose an approach to improve the reliability of SDN controllers. We implement BeCheck, a software framework to detect ``misbehaving'' controllers. BeCheck resides transparently between the control plane and the data plane and monitors the exchanged OpenFlow messages. We implement three policies to detect misbehaving controllers and forward the intercepted messages. BeCheck, along with the different policies, is validated in a real test-bed. As a fourth contribution, we investigate a mobile gaming scenario in the context of fog computing, denoted as the Integrated Mobile Gaming (IMG) scenario. We partition mobile games into individual tasks and cognitively offload them either to the cloud or to neighboring mobile devices, so as to achieve minimal energy consumption. We formulate the IMG model as an ILP problem and propose a heuristic named Task Allocation with Minimal Energy cost (TAME). Experimental results show that TAME approaches the optimal solutions while outperforming two other state-of-the-art task offloading algorithms.
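The task-offloading decision underlying the IMG formulation can be sketched with a toy greedy assignment. All names and energy figures are hypothetical, and this per-task minimum ignores the coupling constraints of the actual ILP (task dependencies, device capacity); it is illustrative only and is not the TAME heuristic:

```python
# Hypothetical energy cost (mJ) to run each game task locally, on a
# neighboring device, or in the cloud (including transmission energy).
tasks = {
    "render":  {"local": 120, "neighbor": 70, "cloud": 40},
    "physics": {"local": 60,  "neighbor": 80, "cloud": 90},
    "ai":      {"local": 100, "neighbor": 50, "cloud": 65},
}

def greedy_allocate(tasks):
    """Assign each task to whichever target minimizes its energy cost."""
    return {name: min(costs, key=costs.get) for name, costs in tasks.items()}

plan = greedy_allocate(tasks)
total = sum(tasks[t][target] for t, target in plan.items())
```

A real solver would minimize the same objective over all assignments jointly, subject to the ILP's constraints, which is what makes the problem hard and a heuristic like TAME necessary.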

    Modelling and Analysis of Wi-Fi and LAA Coexistence with Priority Classes

    Licensed Assisted Access (LAA) has been put forward as a necessary technology to avoid overcrowding of the licensed bands by the growing cellular traffic. Proposed by 3GPP, LAA uses a Listen Before Talk (LBT) and backoff mechanism similar to Wi-Fi's. While many mathematical models have been proposed to study the coexistence of LAA and Wi-Fi systems, few have tackled the problem of QoS provisioning, and in particular analysed the behaviour of the various priority classes available in Wi-Fi and LAA. This paper presents a new mathematical model to investigate the performance of the different priority classes in coexisting Wi-Fi and LAA networks. Using Discrete Time Markov Chains, we model the saturation throughput of all eight priority classes used by Wi-Fi and LAA. The numerical results show that, with the 3GPP-proposed parameters, fair coexistence between Wi-Fi and LAA cannot be achieved. Wi-Fi users in particular suffer a significant degradation of performance caused by collisions with LAA transmissions, which last longer than Wi-Fi transmissions.
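The kind of saturation-throughput analysis described above builds on Bianchi-style Markov models of the backoff process. A minimal sketch of the single-class fixed point is below; the paper's model covers eight priority classes with distinct contention parameters, which this simplification does not capture:

```python
def bianchi_fixed_point(n, W=16, m=6, iters=500):
    """Iterate Bianchi's saturation model for n stations of one access class.

    tau: probability a station transmits in a randomly chosen slot;
    p:   conditional collision probability seen by a transmitting station.
    W is the minimum contention window, m the maximum backoff stage.
    """
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        new_tau = (2.0 * (1.0 - 2.0 * p)) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)
        )
        tau = 0.5 * tau + 0.5 * new_tau  # damping for stable convergence
    return tau, p

tau10, p10 = bianchi_fixed_point(10)
tau50, p50 = bianchi_fixed_point(50)
```

As the number of contenders grows, the collision probability rises and each station's per-slot transmission probability falls; a multi-class model additionally couples the classes through their different AIFS/contention-window settings.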

    Efficient aggregate computations in large-scale dense wireless sensor networks

    Doctoral thesis in Informatics. Assuming a world where we can be surrounded by hundreds or even thousands of inexpensive computing nodes, densely deployed and each equipped with sensing and wireless communication capabilities, the problem of efficiently dealing with the enormous amount of information generated by those nodes emerges as a major challenge. The research in this dissertation addresses that challenge. It proves that it is possible to obtain aggregate quantities with a time complexity that is independent of the number of nodes, or that grows very slowly as the number of nodes increases. This is achieved by co-designing the distributed algorithms for obtaining aggregate quantities and the underlying communication system. This work describes (i) the design and implementation of a prioritized medium access control (MAC) protocol that enforces strict priorities over wireless channels and (ii) the algorithms that exploit this MAC protocol to obtain the minimum (MIN), maximum (MAX) and interpolation of sensor values with a time complexity that is independent of the number of nodes deployed, whereas other state-of-the-art approaches have a time complexity that depends on the number of nodes. These techniques also enable efficient estimates of the number of nodes (COUNT) and of the median of the sensor values (MEDIAN). The novel approach proposed for efficiently obtaining aggregate quantities in large-scale, dense wireless sensor networks (WSN) is based on adapting to wireless media a MAC protocol, known as dominance/binary countdown, which previously existed only for wired media, and on designing algorithms that exploit this MAC protocol for efficient data aggregation. Designing and implementing such a MAC protocol for wireless media is not trivial.
For this reason, a substantial part of this work focuses on the development and implementation of WiDom (short for Wireless Dominance), a wireless MAC protocol that enables efficient data aggregation in large-scale, dense WSN. An implementation of WiDom is first proposed under the assumption of a fully connected network (a network with a single broadcast domain). This implementation can be exploited to efficiently obtain aggregate quantities. WiDom can also implement static-priority scheduling over wireless media, and therefore a schedulability analysis for WiDom is also proposed. WiDom is then extended to operate in sensor networks where a single transmission cannot reach all nodes, i.e., in networks with multiple broadcast domains. These results are significant because networks of nodes that take sensor readings are often designed to be large-scale, dense networks, and it is exactly in such scenarios that the proposed distributed algorithms for obtaining aggregate quantities excel. The implementation and test of these distributed algorithms on a purpose-built hardware platform show that aggregate quantities in large-scale, dense wireless sensor systems can be obtained efficiently.
This research was partially developed at the Real-Time Computing System Research Centre (CISTER), from the School of Engineering of the Polytechnic of Porto (ISEP/IPP).
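The dominance/binary-countdown mechanism behind WiDom can be illustrated with an idealized simulation (perfect channel, synchronized bit slots). Each node transmits its value most-significant bit first; a 0 bit is dominant and a 1 bit recessive, so a node that sends a recessive bit but observes a dominant one withdraws. After a fixed number of bit slots the channel has carried the network-wide minimum:

```python
def dominance_min(values, bits=16):
    """Simulate one dominance/binary-countdown round over a shared medium.

    A 0 bit is dominant (carrier present); a 1 bit is recessive (silence).
    A node whose recessive bit loses to a dominant bit withdraws.  The round
    takes exactly `bits` slots, independent of the number of contending nodes,
    which is what gives the node-count-independent time complexity.
    """
    contenders = list(values)
    result = 0
    for i in reversed(range(bits)):
        # Channel carries a dominant 0 if any contender has a 0 in this slot.
        channel_bit = 0 if any((v >> i) & 1 == 0 for v in contenders) else 1
        result = (result << 1) | channel_bit
        # Nodes whose bit differs from the channel bit withdraw.
        contenders = [v for v in contenders if (v >> i) & 1 == channel_bit]
    return result
```

MAX can be obtained the same way by having each node contend with the bitwise complement of its value, and the thesis's MIN/MAX-based aggregates build on exactly this primitive.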