23 research outputs found

    Mutation Based Hybrid Routing Algorithm for Mobile Ad-hoc Networks

    Get PDF
    Mobile Ad-hoc NETworks (MANETs) typically face challenges such as a highly dynamic topology due to node mobility, repeated route rediscovery, and packet loss. These lead to low throughput, high energy consumption, long delays and a low packet delivery ratio. To avoid rediscovering routes over and over, multipath routing protocols such as Ad hoc On-demand Multipath Distance Vector (AOMDV) are used to exploit alternate routes. However, nodes with low residual energy can die, compounding the problems of network disconnection and route rediscovery. This paper proposes a multipath routing algorithm based on AOMDV and genetic mutation. It takes residual energy, hop count, congestion and received signal strength into account for primary route selection; for secondary path selection it uses the same metrics together with mutation. Simulation results show that the proposed algorithm outperforms AOMDV by 11% in residual energy, 45% in throughput and 3% in packet delivery ratio, with 63% less delay.
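The abstract's two-stage selection can be illustrated with a small sketch: a weighted fitness over the four metrics picks the primary path, and a genetic-style mutation occasionally diverts the secondary choice to a lower-ranked candidate. The weights, the normalisation and the mutation rate below are illustrative assumptions, not the paper's actual parameters.

```python
import random

# Hypothetical weights; the paper does not specify its weighting scheme here.
# Positive weights favour a metric, negative weights penalise it.
WEIGHTS = {"residual_energy": 0.4, "rss": 0.3, "congestion": -0.2, "hop_count": -0.1}

def fitness(route):
    """Score a candidate route; higher is better. Metrics assumed pre-normalised to [0, 1]."""
    return sum(WEIGHTS[k] * route[k] for k in WEIGHTS)

def select_primary(routes):
    """Primary path: the best-fitness route among those discovered."""
    return max(routes, key=fitness)

def mutate_select_secondary(routes, primary, rate=0.2, rng=random):
    """Secondary path: rank remaining routes by fitness, then occasionally
    'mutate' the choice to a lower-ranked alternative to keep path diversity."""
    candidates = sorted((r for r in routes if r is not primary), key=fitness, reverse=True)
    if not candidates:
        return None
    idx = 0
    if len(candidates) > 1 and rng.random() < rate:
        idx = rng.randrange(1, len(candidates))  # mutation: jump to another candidate
    return candidates[idx]
```

With `rate=0.0` the secondary choice degenerates to plain second-best selection, which makes the mutation's contribution (occasional exploration of alternates) easy to isolate in simulation.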

    Transport Layer solution for bulk data transfers over Heterogeneous Long Fat Networks in Next Generation Networks

    Get PDF
    This compendium thesis focuses its contributions on the learning and innovation of Next Generation Networks (NGNs). Different contributions are therefore proposed in different areas (Smart Cities, Smart Grids, Smart Campus, Smart Learning, Media, eHealth, Industry 4.0, among others) through the application and combination of different disciplines (Internet of Things, Building Information Modeling, Cloud Storage, Cybersecurity, Big Data, Future Internet, Digital Transformation).
Specifically, the sustainable comfort monitoring in the Smart Campus is detailed, which can be considered my most representative contribution within the conceptualization of Next Generation Networks. Within this innovative monitoring concept, different disciplines are integrated in order to offer information on people's comfort levels. This research demonstrates the long journey that remains in the digital transformation of traditional sectors and NGNs. During this long learning process about NGNs across the different investigations, a problem was observed that affected the different application fields of NGNs in a transversal way and that, depending on the service and its requirements, could have a critical impact on any of these sectors. The problem is poor performance when exchanging large volumes of data over networks with high bandwidth capacity whose endpoints are geographically far apart, also known as elephant networks or Long Fat Networks (LFNs). Specifically, this critically affects the Cloud Data Sharing use case. That use case and the different alternatives at the transport-protocol level were therefore studied: the performance and operational problems suffered by layer-4 protocols are examined, and it is shown why these traditional protocols are not capable of achieving optimal performance. Given this situation, it is hypothesized that introducing mechanisms that analyze network metrics and efficiently exploit the network's capacity improves the performance of Transport Layer protocols over Heterogeneous Long Fat Networks during bulk data transfers. First, the Adaptive and Aggressive Transport Protocol (AATP) is designed: an adaptive and efficient transport protocol aimed at maximizing performance over this type of elephant network.
The AATP protocol is implemented and tested in a network simulator and on a testbed under different situations and conditions for its validation. Once AATP had been designed, implemented and tested successfully, it was decided to improve the protocol itself, as Enhanced-AATP, to raise its performance over heterogeneous elephant networks; to this end, a mechanism based on the Jitter Ratio is designed that allows this differentiation. In addition, in order to improve the behavior of the protocol, its fairness system is upgraded for the fair distribution of resources among Enhanced-AATP flows. Finally, this evolution is implemented in the network simulator and a set of tests is carried out. This thesis concludes that Next Generation Networks have a long way to go and many things to improve due to the digital transformation of society and the appearance of brand-new disruptive technology. Furthermore, it is confirmed that the introduction of specific mechanisms in the conception and operation of transport protocols improves their performance on Heterogeneous Long Fat Networks.
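As a rough illustration of the kind of mechanism the thesis describes for AATP/Enhanced-AATP, the sketch below adapts a sending rate from a jitter-derived ratio of RTT samples: low jitter lets the sender ramp up aggressively towards the estimated capacity, while rising jitter triggers a back-off. The ratio definition, thresholds and multipliers are assumptions for illustration, not the protocol's actual design.

```python
def jitter_ratio(rtt_samples):
    """Jitter ratio: mean absolute RTT variation relative to the minimum RTT.
    (Illustrative definition; the thesis' exact formula may differ.)"""
    base = min(rtt_samples)
    diffs = [abs(b - a) for a, b in zip(rtt_samples, rtt_samples[1:])]
    return (sum(diffs) / len(diffs)) / base

def adapt_rate(current_rate, ratio, capacity, threshold=0.2):
    """Exploit spare capacity aggressively when jitter is low; back off when
    jitter suggests queuing somewhere along a heterogeneous path."""
    if ratio < threshold:
        return min(capacity, current_rate * 1.25)    # aggressive ramp-up
    return max(capacity * 0.1, current_rate * 0.75)  # congestion suspected
```

Clamping the ramp-up at the estimated capacity is what distinguishes this style of "aggressive but informed" control from loss-driven TCP behaviour on long fat networks.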

    Network attack analysis of an indoor power line communication network

    Get PDF
    Abstract: The use of network security mechanisms within communication networks should be prioritized and given more consideration in a small office/home office (SOHO) network setup such as a power line communication (PLC) network. In PLC networks, attacks such as denial of service (DoS), phishing and man-in-the-middle attacks are among the network security issues yet to be critically researched for SOHO network setups. This paper therefore describes and analyzes the possibility of various network attacks on the network and data-link layers of a PLC network setup. To achieve this, the PLC network setup will be assessed for vulnerabilities which, if detected, will be exploited using various attack techniques. Graphical charts will be plotted to represent the possibility and effect of the attacks on the PLC network setup. Finally, network security solutions will be provided to mitigate some of the recorded possible attacks. The observations and solutions presented in this paper are for educational purposes; they will be helpful to subsequent network security researchers and will help improve security within an indoor PLC network setup.
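One of the data-link-layer attacks the paper analyzes, the man-in-the-middle attack, is commonly mounted via ARP spoofing on a LAN or PLC segment; a minimal detector simply watches for an IP address whose claimed MAC changes between ARP replies. The sketch below is illustrative and independent of the paper's actual tooling:

```python
def detect_arp_spoofing(observations):
    """Flag IPs whose claimed MAC address changes across observed ARP replies,
    a common indicator of a man-in-the-middle attempt on a LAN/PLC segment.
    `observations` is an iterable of (ip, mac) pairs, e.g. parsed from a
    packet capture."""
    table = {}   # last MAC seen for each IP
    alerts = []  # (ip, old_mac, new_mac) triples
    for ip, mac in observations:
        if ip in table and table[ip] != mac:
            alerts.append((ip, table[ip], mac))
        table[ip] = mac
    return alerts
```

In practice the `(ip, mac)` stream would come from a sniffer; this pure-Python core keeps the detection logic testable on its own.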

    Defence against Denial of Service (DoS) attacks using Identifier-Locator Network Protocol (ILNP) and Domain Name System (DNS)

    Get PDF
    This research considered a novel approach to network security that combines a new networking architecture based on the Identifier-Locator Network Protocol (ILNP) with the existing Domain Name System (DNS). Specifically, the investigations considered the mitigation of network-level and transport-level Denial of Service (DoS) attacks. The solutions presented are applicable to securing servers that are visible externally from an enterprise network. DoS was chosen as an area of concern because in recent years it has become one of the most common and hardest-to-defend-against attacks. The novelty of this approach was to consider how DNS and ILNP can work together, transparently to the application, within an enterprise scenario. This was achieved by introducing a new application-level access control function - the Capability Management System (CMS) - which applies configuration at the application level (DNS data) and the network level (ILNP namespaces). CMS provides dynamic, ephemeral identity and location information to clients and servers in order to effectively partition legitimate traffic from attack traffic. This was achieved without modifying existing network components such as switches and routers, making standard use of existing functions such as access control lists and DNS servers, all within a single trust domain under the control of the enterprise. The prime objectives of this research were:
    • to defend against DoS attacks with the use of naming and DNS within an enterprise scenario;
    • to increase the attacker's effort in launching a successful DoS attack;
    • to reduce the visibility of vulnerabilities that an attacker can discover through active probing;
    • to practically demonstrate the effectiveness of ILNP and DNS working together to provide a solution for DoS mitigation.
The solution methodology is based on the use of network- and transport-level capabilities, dynamic changes to DNS data, and a Moving Target Defence (MTD) paradigm. Three solutions are presented which use ILNP namespaces, referred to as the identifier-based, locator-based, and combined identifier-locator solutions. ILNP node identity values were used to provide transport-level per-client server capabilities, giving per-client isolation of traffic. ILNP locator values were used to provide network-level traffic separation for externally accessible enterprise services. The identifier and locator solutions were then combined, showing that services can be protected with per-client traffic control and topological traffic-path separation. All solutions were site-based and required neither modification of the core/external network nor the active cooperation of an ISP, thereby limiting the trust domain to the enterprise itself. Experiments evaluated all the solutions on a testbed consisting of off-the-shelf hardware, open-source software and an implementation of the CMS written in C, all running on Linux. The discussion considers the efficacy of the solutions, comparisons with existing methods, the performance of each solution, and a critical analysis highlighting future improvements.
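As a toy illustration of the capability idea (not the thesis' C implementation), the sketch below keeps a per-client table of ephemeral identifiers with a TTL, the way short-lived DNS answers could hand out ILNP identifier/locator values and let a server drop traffic lacking a currently valid capability. All names and the structure here are assumptions:

```python
import secrets
import time

class CapabilityManager:
    """Toy CMS-style table: per-client ephemeral identifiers with a TTL.
    Mirrors how short-lived DNS records could distribute ILNP identifier/
    locator values that partition legitimate traffic from attack traffic.
    (Illustrative sketch, not the thesis' implementation.)"""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self._grants = {}  # client -> (identifier, expiry time)

    def grant(self, client, now=None):
        """Issue a fresh ephemeral identifier to a client (MTD: it rotates)."""
        now = time.time() if now is None else now
        ident = secrets.token_hex(8)  # stand-in for an ILNP node identifier
        self._grants[client] = (ident, now + self.ttl)
        return ident

    def is_valid(self, client, ident, now=None):
        """A packet is admitted only with a current, unexpired capability."""
        now = time.time() if now is None else now
        entry = self._grants.get(client)
        return entry is not None and entry[0] == ident and now < entry[1]
```

Because every grant expires, an attacker who learns one identifier by probing gains only a short-lived, single-client foothold, which is the moving-target property the thesis exploits.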

    Planning broadband infrastructure - a reference model

    Get PDF

    On the cyclostationarity of orthogonal frequency division multiplexing and single carrier linear digital modulations: theoretical developments and applications

    Get PDF
    In recent years, new technologies for wireless communications have emerged. The wireless industry has shown great interest in orthogonal frequency division multiplexing (OFDM) technology, due to the efficiency of OFDM schemes in conveying information over a frequency-selective fading channel without requiring complex equalizers. On the other hand, emerging OFDM wireless communication technology raises new challenges for the designers of intelligent radios, such as discriminating between OFDM and single-carrier modulations. To achieve this objective we study the cyclostationarity of OFDM and single carrier linear digital (SCLD) modulated signals.
In this thesis, we first investigate the nth-order cyclostationarity of OFDM and SCLD modulated signals embedded in additive white Gaussian noise (AWGN) and subject to phase, frequency and timing offsets. We derive analytical closed-form expressions for the nth-order (q-conjugate) cyclic cumulants (CCs) and cycle frequencies (CFs), and the nth-order (q-conjugate) cyclic cumulant polyspectra (CCPs), of OFDM signals, and obtain a necessary and sufficient condition on the oversampling factor (per subcarrier) to avoid cycle aliasing. As an application of signal cyclostationarity to the modulation recognition problem, an algorithm based on a second-order CC is proposed to recognize OFDM against SCLD modulations in an AWGN channel.
We further study the nth-order cyclostationarity of OFDM and SCLD modulated signals affected by a time-dispersive channel, AWGN, carrier phase, and frequency and timing offsets. Analytical closed-form expressions for the nth-order (q-conjugate) CCs and CFs and the nth-order (q-conjugate) CCPs of such signals are derived, and a necessary and sufficient condition on the oversampling factor (per subcarrier) is obtained to eliminate cycle aliasing for both OFDM and SCLD signals.
We extend the applicability of the algorithm proposed for the AWGN channel to time-dispersive channels, to recognize OFDM against SCLD modulations. The proposed algorithm obviates preprocessing tasks such as symbol timing, carrier and waveform recovery, and signal and noise power estimation. This is of practical significance, as algorithms that rely less on preprocessing are of crucial interest for receivers that operate with no prior information in a non-cooperative environment. It is shown that the recognition performance of the proposed algorithm in a time-dispersive channel is close to that in an AWGN channel. In addition, we have observed that the recognition performance does not depend on the modulation format used on each subcarrier (for OFDM) or on the underlying constellation (for SCLD signals).
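A concrete, much-simplified instance of a second-order cyclostationary feature for OFDM recognition is the correlation induced by the cyclic prefix: the samples the prefix repeats lie exactly one FFT length apart, so a lag-N correlation estimate is large for CP-OFDM and near zero for a signal without that structure. The sketch below (NumPy, ideal synchronisation, a white signal standing in for SCLD) only illustrates the flavour of such a statistic; the thesis derives full nth-order cyclic cumulants and handles offsets and dispersive channels:

```python
import numpy as np

def cp_correlation_metric(x, n_fft):
    """Second-order statistic exploiting cyclic-prefix-induced structure:
    samples repeated by the CP lie n_fft apart, so |E[x[n] x*[n+n_fft]]|
    (normalised by signal power) is large for CP-OFDM and near zero for
    signals without that repetition."""
    x = np.asarray(x)
    prod = x[:-n_fft] * np.conj(x[n_fft:])
    return np.abs(prod.mean()) / np.mean(np.abs(x) ** 2)

def make_cp_ofdm(n_sym, n_fft=64, n_cp=16, rng=None):
    """Generate a baseband CP-OFDM burst with QPSK subcarriers (test signal)."""
    rng = np.random.default_rng(0) if rng is None else rng
    bits = rng.integers(0, 4, size=(n_sym, n_fft))
    syms = np.exp(1j * (np.pi / 2) * bits + 1j * np.pi / 4)  # QPSK points
    t = np.fft.ifft(syms, axis=1)
    with_cp = np.concatenate([t[:, -n_cp:], t], axis=1)      # prepend CP
    return with_cp.ravel()
```

With a 16-sample CP on 64-point symbols, roughly 16/80 of all lag-64 products are coherent, so the metric sits near 0.2 for OFDM versus O(1/sqrt(N)) for an uncorrelated signal.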

    Location inaccuracies in WSAN placement algorithms

    Get PDF
    The random deployment of Wireless Sensor and Actuator Network (WSAN) nodes in often-inaccessible areas results in so-called coverage holes - areas of the network that are not adequately covered by nodes to meet the requirements of the network. Various coverage protocol algorithms have been designed to reduce or eliminate coverage holes within WSANs by indicating how to move the nodes. The effectiveness of such coverage protocols can be jeopardised by inaccuracy in the initial node location data broadcast by the respective nodes. This study examines the effects of location inaccuracies on five sensor deployment and reconfiguration algorithms: two that assume mobile nodes are deployed (the VEC and VOR algorithms); two that assume static nodes are deployed (the CPNSS and OGDC algorithms); and one (based on a bidding protocol) that assumes a hybrid scenario in which both static and mobile nodes are deployed. Two variations of this latter algorithm are studied. A location simulation tool was built using the GE Smallworld GIS application and the Magik programming language, and the simulation results are based on the three deployment scenarios above: mobile, hybrid and static. The results suggest the VOR algorithm is reasonably robust if the location inaccuracies are somewhat smaller than the sensing distance and if a high degree of inaccuracy is limited to a relatively small percentage of the nodes. The VEC algorithm is considerably less robust, but prevents nodes from drifting beyond the boundaries in the case of large inaccuracies. The bidding protocol used by the hybrid algorithm appears to be robust only when the static nodes are accurate and there is a low degree of inaccuracy within the mobile nodes.
Finally, the static algorithms are shown to be the most robust: the CPNSS algorithm appears to be immune to location inaccuracies, whilst the OGDC algorithm reduces the number of active nodes in the network to a better extent than the CPNSS algorithm. Dissertation (MSc), University of Pretoria, 2010.
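To give a feel for how location error enters such algorithms, the sketch below implements one round of a simplified VEC-style virtual-force move in which neighbours closer than the desired spacing repel each other; each node computes its force from the positions the neighbours broadcast, so Gaussian error on those broadcasts directly distorts the movement. This is a simplified stand-in, not the studied algorithms themselves:

```python
import math
import random

def vec_step(nodes, comm_dist, step=0.5, noise=0.0, rng=None):
    """One round of a VEC-style virtual-force move: neighbours closer than
    the desired spacing push each other apart. `noise` is the std-dev of
    Gaussian error added to each broadcast position, modelling the location
    inaccuracy the study investigates. (Simplified sketch; the studied
    algorithms include additional rules.)"""
    rng = rng or random.Random(0)
    # What each node *broadcasts* may differ from where it actually is.
    reported = [(x + rng.gauss(0, noise), y + rng.gauss(0, noise)) for x, y in nodes]
    moved = []
    for i, (x, y) in enumerate(nodes):
        fx = fy = 0.0
        for j, (rx, ry) in enumerate(reported):
            if i == j:
                continue
            dx, dy = x - rx, y - ry
            d = math.hypot(dx, dy)
            if 0 < d < comm_dist:  # too close: repel, scaled by the deficit
                fx += (dx / d) * (comm_dist - d)
                fy += (dy / d) * (comm_dist - d)
        moved.append((x + step * fx, y + step * fy))
    return moved
```

Running the same round with increasing `noise` shows the failure mode the dissertation measures: forces computed from wrong neighbour positions push nodes in wrong directions, widening rather than closing coverage holes.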

    Performance Evaluation of PQC in TLS 1.3 under Varying Network Characteristics

    Full text link
    Quantum computers could break currently used asymmetric cryptographic schemes within a few years using Shor's algorithm. Such schemes are used in numerous protocols and applications to secure authenticity as well as key agreement, so quantum-safe alternatives are urgently needed, and NIST has therefore initiated a standardization process. This requires intensive evaluation, including with regard to performance and integrability. Integration into TLS 1.3 plays an important role here, since it is used for some 90% of all Internet connections. In the present work, algorithms for quantum-safe key exchange during the TLS 1.3 handshake were reviewed. The focus is on the influence of dedicated network parameters such as transmission rate or packet loss, in order to gain insights into the suitability of the algorithms under the corresponding network conditions. For the implementation, a framework by Paquin et al. was extended to emulate network scenarios and capture the handshake duration for selected algorithms. It is shown that the evaluated candidates Kyber, Saber and NTRU, as well as the alternative NTRU Prime, have very good overall performance and in some cases undercut the handshake duration of classical ECDH. The choice of a higher security level or of hybrid variants makes no significant difference here. This is not the case with alternatives such as FrodoKEM, SIKE, HQC or BIKE, which have individual disadvantages and whose performance varies greatly depending on the security level and hybrid implementation; this is especially true of the data-intensive algorithm FrodoKEM. In general, the prevailing network characteristics should be taken into account when choosing a scheme and variant. It also becomes clear that handshake performance is influenced by external factors such as TCP mechanisms or the MTU, which, if configured appropriately, could compensate for possible disadvantages due to PQC. Comment: Master's thesis, 160 pages, in German.
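Part of the size effect discussed above can be reproduced with simple arithmetic: a PQC key share that exceeds the MTU forces a TLS 1.3 handshake flight across several packets, so packet loss and transmission rate start to dominate the handshake duration. The byte sizes below are approximate public figures for the named parameter sets and should be checked against current specifications:

```python
# Approximate sizes in bytes (roughly comparable security levels); exact
# figures vary with the parameter set and scheme revision - treat as
# illustrative, not authoritative.
KEY_SHARE_SIZES = {
    "X25519 (classical ECDH)": (32, 32),  # client share, server share
    "Kyber768": (1184, 1088),             # public key, ciphertext
    "FrodoKEM-640": (9616, 9720),         # public key, ciphertext
}

def extra_packets(scheme, mtu_payload=1400):
    """Estimate how many MTU-sized packets each direction of the key
    exchange needs; multi-packet flights are where loss and transmission
    rate begin to dominate the TLS 1.3 handshake duration."""
    pk, ct = KEY_SHARE_SIZES[scheme]
    ceil_div = lambda n: -(-n // mtu_payload)
    return ceil_div(pk), ceil_div(ct)
```

Under these assumed sizes, Kyber768 still fits each direction in one packet like classical ECDH, while FrodoKEM-640 needs around seven per direction, matching the thesis' observation that data-intensive schemes suffer most under lossy or slow links.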

    DDoS Capability and Readiness - Evidence from Australian Organisations

    Get PDF
    A common perception of cyber defence is that it should protect systems and data from malicious attacks, ideally keeping attackers outside of secure perimeters and preventing entry. Much of the effort in traditional cyber security defence is focused on removing gaps in security design and preventing those with legitimate permissions from becoming a gateway or resource for those seeking illegitimate access. By contrast, Distributed Denial of Service (DDoS) attacks do not use application backdoors or software vulnerabilities to create their impact. They instead utilise legitimate entry points and knowledge of system processes for illegitimate purposes. DDoS seeks to overwhelm system and infrastructure resources so that legitimate requests are prevented from reaching their intended destination. For this thesis, a literature review was performed using sources from two perspectives: reviews of industry and academic literature were combined to build a balanced view of knowledge in this area. That literature revealed that DDoS is outpacing internet growth, with vandalism, criminal and ideological motivations rising to prominence. From a defence perspective, the human factor remains a weak link in cyber security due to proneness to mistakes and oversights and the variance in approach and methods expressed by differing cultures. How cyber security is perceived, approached and applied can have a critical effect on the overall outcome achieved, even when similar technologies are implemented. In addition, variance in the technical capabilities of those responsible for the implementation may create further gaps and vulnerabilities. While discussing technical challenges and theoretical concepts, the existing literature failed to cover the experiences of the victim organisations or the thoughts and feelings of their personnel.
This thesis addresses these identified gaps through exploratory research using a mix of descriptive and qualitative analysis to develop results and conclusions. The websites of 60 Australian organisations were analysed to uncover the level and quality of cyber security information they were willing to share and the methods and processes they used to engage with their audience. In addition, semi-structured interviews were conducted with 30 employees from around half of the organisations whose websites were analysed; these interviews were analysed using NVivo12 qualitative analysis software. The difficulty experienced in attracting willing participants reflected the level of comfort organisations showed with sharing cyber security information and experiences. However, themes found within the results show that, while DDoS is considered a valid threat, without encouragement to collaborate and to standardise minimum security levels, firms may be missing out on valuable strategies to improve their cyber security postures. Further, this reluctance to share leads organisations to rely on their own internal skill and expertise, failing to realise the benefits of established frameworks and of increased diversity in the workforce. Along with the size of the participant pool, other limitations included the diversity of participants and the impact of COVID-19, which may have influenced participants' thoughts and reflections. These limitations, however, present an opportunity for future studies using greater participant numbers or a narrower target focus. Either option would benefit the recommendations of this study, which were made from practical, social, theoretical and policy perspectives. On a practical and social level, organisational capabilities suffer due to the lack of information sharing, and this extends to the community when similar restrictions prevent collaboration.
Sharing knowledge and experiences while protecting sensitive information is a worthy goal, and it can lead to improved defence. However, while improved understanding is one way to reduce the impact of cyber-attacks, the introduction of minimum cyber security standards for products could reduce the ease with which devices can be used to facilitate attacks, but only if policy and effective governance ensure product compliance with legislation. One positive side to COVID-19's push to remote working was an increase in digital literacy. As more roles were temporarily removed from their traditional physical workplace, many employees needed to rapidly accelerate their digital competency to continue their employment. To assist this transition, organisations acted to implement technology solutions that eased the ability for these roles to be undertaken remotely and, as a consequence, opened these roles to a greater pool of available candidates. Many of these roles are no longer limited by the geographical location of potential employees or traditional hours of availability; they can be accessed from almost anywhere at any time, which has had a positive effect on organisational capability and digital sustainability.