
    Failure Resilience and Traffic Engineering for Multi-Controller Software Defined Networking

    This thesis explores and proposes solutions to address the challenges faced by Multi-Controller SDN (MCSDN) systems when deploying TE optimisation on WANs. Despite interest from the research community, existing MCSDN systems have limitations. For example, TE optimisation systems are computationally complex, have high consistency requirements, and need network-wide state to operate. Because of these requirements, MCSDN systems can encounter performance overheads and state-consistency problems when implementing TE. Moreover, performance and consistency problems are more prominent when deploying the system on WANs, as these networks have higher inter-device latency, delaying state propagation. Unlike the existing literature, this thesis presents several design choices that address all four challenges affecting MCSDN systems (scalability, consistency, resilience, and coordination). We use the presented design choices to build Helix, a hierarchical MCSDN system. Helix provides better scalability, performance and failure resilience than existing MCSDN systems by sharing minimal state between controllers, offloading operations closer to the data plane, and deploying lightweight tasks. A challenge we faced when building Helix was that existing TE algorithms did not meet Helix's design choices. This thesis presents a new CSPF-based TE algorithm that needs minimal state to operate and supports offloading inter-area TE to local controllers, fulfilling Helix's requirements. Helix's TE algorithm provides better performance and forwarding stability, addressing 1.6x more congestion while performing up to 29x fewer path modifications than the other algorithms evaluated in our experiments. While the MCSDN literature has explored evaluating different aspects of system performance, there is a lack of readily available tools and concrete testing methodologies. To this end, this thesis provides concrete testing methodologies and tools, readily available to the MCSDN community, for evaluating the data plane failure resilience, control plane failure resilience, and TE optimisation performance of MCSDN systems.
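
    The thesis's own TE algorithm is not reproduced here, but the core CSPF (Constrained Shortest Path First) idea it builds on is simple: prune links whose residual capacity cannot carry the new flow, then run a shortest-path search on what remains. The Python sketch below illustrates that general idea under assumed inputs; the graph format and all names are illustrative, not Helix's implementation.

        # Minimal CSPF sketch: drop infeasible links, then run Dijkstra.
        import heapq

        def cspf(graph, src, dst, demand):
            """graph: {node: [(neighbour, cost, residual_bw), ...]}"""
            dist, prev = {src: 0}, {}
            pq = [(0, src)]
            while pq:
                d, u = heapq.heappop(pq)
                if u == dst:
                    break
                if d > dist[u]:
                    continue  # stale queue entry
                for v, cost, bw in graph.get(u, []):
                    if bw < demand:
                        continue  # constraint: link cannot carry the flow
                    nd = d + cost
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(pq, (nd, v))
            if dst not in dist:
                return None  # no feasible path for this demand
            path, node = [dst], dst
            while node != src:
                node = prev[node]
                path.append(node)
            return list(reversed(path))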

    Towards a wireless local area network security control framework for small, medium and micro enterprises in South Africa

    There is little literature available that is specific to the use of wireless local area network (WLAN) security among small, medium and micro enterprises (SMMEs) in South Africa. This research study developed a framework which may be used by SMMEs for the purposes of securing their WLANs. In view of the fact that the aim of the study was to develop a system for improving information technology security, the study followed a design science approach. A literature review was conducted on security control framework standards and WLAN technologies. The needs of SMMEs regarding WLANs were also established. The result of this process was an artefact in the form of a WLAN Security Control Framework for securing the WLANs of SMMEs in South Africa. The suitability of the framework was validated by means of a focus group.

    Cloud and mobile infrastructure monitoring for latency and bandwidth sensitive applications

    This PhD thesis studies cloud computing infrastructures from the networking perspective to assess the feasibility of applications that have gained increasing popularity in recent years, including multimedia and telemedicine applications, which demand low, bounded latency and sufficient bandwidth. I also focus on the case of telemedicine, where remote imaging applications (for example, telepathology or telesurgery) need to achieve low and stable latency for the remote transmission of images and for the remote control of the equipment. Another important telemedicine use case is remote computation, which involves offloading image processing to help diagnosis; in this case too, bandwidth and latency requirements should be enforced to ensure timely results, although they are less strict than in the previous scenario. Nowadays, the capability of gaining access to IT resources in a rapid and on-demand fashion, according to a pay-as-you-go model, has made cloud computing a key enabler for innovative multimedia and telemedicine services. However, the partial obscurity of cloud performance, as well as security concerns, still hinders the adoption of cloud infrastructure. To ensure that the requirements of applications running on the cloud are satisfied, proper methodologies need to be designed and evaluated for each metric of interest. Moreover, some kinds of applications have specific requirements that cannot be satisfied by the current cloud infrastructure. In particular, since cloud computing involves communication with remote servers, two problems arise: first, the core network infrastructure can be overloaded, given the massive amount of data that has to flow through it for clients to reach the datacenters; second, the latency resulting from this remote interaction between clients and servers is increased. For these and many other cases, also beyond the field of telemedicine, the Edge and Fog computing paradigms were introduced. In these new paradigms, IT resources are deployed not only in the core cloud datacenters but also at the edge of the network, either in the telecom operator's access network or even on other users' devices. The proximity of resources to end users alleviates the burden on the core network and at the same time reduces latency towards users. Indeed, the latency from users to remote cloud datacenters encompasses delays from the access and core networks, as well as the intra-datacenter delay. This latency is therefore expected to be higher than that required to interconnect users to edge servers, which in the envisioned paradigm are deployed in the access network, near the final users. The edge latency is thus expected to be only a fraction of the overall cloud delay. Moreover, edge and central resources can be used in conjunction, so attention to core cloud monitoring remains of paramount importance even once edge architectures achieve widespread adoption, which is not yet the case. While a lot of research has been presented on monitoring several network-related metrics, such as bandwidth, latency, jitter and packet loss, less attention has been given to monitoring latency in cloud and edge cloud infrastructures. In particular, while some works target cloud latency monitoring, their evaluations lack a fine-grained analysis of latency across spatial and temporal trends.
    Furthermore, the widespread adoption of mobile devices and the Internet of Things paradigm further accelerates the shift towards the cloud paradigm for the additional benefits it can provide in this context, allowing energy savings and augmenting the computation capabilities of these devices, creating a new scenario denoted as the mobile cloud. This scenario poses additional challenges because of its bandwidth constraints, accentuating the need for tailored methodologies that can verify whether the crucial requirements of the aforementioned applications can be met by the current infrastructure. In this sense, there is still a shortage of work monitoring bandwidth-related metrics in mobile networks, especially in-the-wild assessments targeting actual mobile networks and operators. Moreover, even the few works testing real scenarios typically consider only one provider in one country for a limited period of time, lacking an in-depth assessment of bandwidth variability over space and time. In this thesis, I therefore consider monitoring methodologies for challenging scenarios, focusing on the latency perceived by customers of public cloud providers and on bandwidth in mobile broadband networks. Indeed, as described, achieving low latency is a critical requirement for core cloud infrastructures, while providing enough bandwidth is still challenging in mobile networks compared to wired settings, even with the adoption of 4G mobile broadband networks; this issue is expected to be overcome only with the widespread availability of 5G connections (with half of total traffic expected to come from 5G networks by 2026). Therefore, in the research activities carried out during my PhD, I focused on monitoring latency and bandwidth on cloud and mobile infrastructures, assessing to what extent the current public cloud infrastructure and mobile networks make multimedia and telemedicine applications (as well as others with similar requirements) feasible.
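
    As a concrete illustration of active latency monitoring, the sketch below periodically measures TCP connection-establishment time to a set of endpoints and records timestamped samples that can later be analysed for the spatial and temporal trends discussed above. This is a minimal sketch, not the thesis's methodology; the endpoint name, probing period, and the use of TCP connect time as an RTT proxy (ICMP requires raw sockets) are all assumptions.

        import socket, time

        def tcp_rtt(host, port=443, timeout=2.0):
            """Return TCP connect time in seconds, or None on failure."""
            start = time.monotonic()
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return time.monotonic() - start
            except OSError:
                return None  # unreachable or timed out

        def probe(endpoints, period_s=60, rounds=5):
            samples = []  # (unix_timestamp, endpoint, rtt_seconds)
            for _ in range(rounds):
                for ep in endpoints:
                    samples.append((time.time(), ep, tcp_rtt(ep)))
                time.sleep(period_s)
            return samples

        # e.g. probe(["example-datacenter-endpoint.net"], rounds=3)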

    Using real options to select stable Middleware-induced software architectures

    The requirements that force decisions towards building distributed system architectures are usually of a non-functional nature. Scalability, openness, heterogeneity, and fault-tolerance are examples of such non-functional requirements. The current trend is to build distributed systems with middleware, which provides the application developer with primitives for managing the complexity of distribution and system resources, and for realising many of the non-functional requirements. As non-functional requirements evolve, the 'coupling' between the middleware and the architecture becomes the focal point for understanding the stability of the distributed software system architecture in the face of change. It is hypothesised that the choice of a stable distributed software architecture depends on the choice of the underlying middleware and its flexibility in responding to future changes in non-functional requirements. Drawing on a case study that adequately represents a medium-size component-based distributed architecture, it is reported how a likely future change in scalability could impact the architectural structure of two versions, each induced with a distinct middleware: one with CORBA and the other with J2EE. An option-based model is derived to value the flexibility of the induced architectures and to guide the selection. The hypothesis is verified to be true for the given change. The paper concludes with some observations that could stimulate future research in the area of relating requirements to software architectures.
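
    The paper's option-based model is not reproduced here, but the underlying real-options intuition can be shown with a one-step binomial valuation, a standard textbook construction rather than the authors' model. All numbers below are invented for illustration; "exercise cost" stands for the cost of adapting the architecture when the scalability change arrives.

        def real_option_value(v_up, v_down, exercise_cost, r, p):
            """One-step binomial value of the option to adapt: in each
            future state the payoff is max(value - cost, 0), discounted
            at rate r under risk-neutral probability p."""
            payoff_up = max(v_up - exercise_cost, 0.0)
            payoff_down = max(v_down - exercise_cost, 0.0)
            return (p * payoff_up + (1 - p) * payoff_down) / (1 + r)

        # An architecture worth 120 after a favourable outcome of the
        # change and 70 after an unfavourable one, costing 80 to adapt,
        # with r = 5% and p = 0.6, yields an option value of about 22.9:
        # real_option_value(120, 70, 80, 0.05, 0.6) -> 22.857...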

    Implementing network security at Layer 2 and Layer 3 OSI model

    This thesis investigated the features of security devices suitable for implementation in medium to large enterprise networks at a global scale. The thesis covers both open-standard and proprietary security features. The open-standard security features discussed in the report are those developed by the Internet Engineering Task Force (IETF) and described in its Requests for Comments (RFCs). The proprietary features discussed are from Cisco Systems and are implemented in Cisco Systems equipment. The author first describes common vulnerabilities, threats and attacks, and then uses a comparative and quantitative methodology to analyse the security features and their mitigations. The features of Cisco security devices operating at Layers 2 and 3 of the OSI model, as the most commonly used equipment worldwide for securing entire computer networks, are then analysed in detail. Based on their features and technical specifications, it is shown that the Cisco IOS Firewall feature set and the Cisco Adaptive Security Appliance features are suitable for medium to large networks with staff who have advanced knowledge of network security risks. Network security is the process by which digital information assets are protected. The goals of security are to protect confidentiality, maintain integrity, and assure availability. With this in mind, it is imperative that all networks be protected from threats and vulnerabilities for a business to achieve its fullest potential. Typically, these threats are persistent due to vulnerabilities, which can arise from misconfigured hardware or software, poor network design, inherent technology weaknesses, or end-user carelessness. With the help of the Packet Tracer simulation software, different security features and their implementations are tested. Using Packet Tracer, the author created a configuration script for every case used in the designed topology. The Appendixes section at the end of the thesis introduces the operation of Packet Tracer and the configuration topology used throughout the report for testing purposes.
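
    To make the Layer 3 filtering model concrete, the sketch below evaluates a packet's source address against an ordered rule list with first-match semantics and an implicit final deny, which is how Cisco-style access control lists behave. The rules and addresses are illustrative assumptions; real ACLs also match protocols, ports, and wildcard masks.

        import ipaddress

        RULES = [
            ("permit", ipaddress.ip_network("10.0.0.0/8")),
            ("deny",   ipaddress.ip_network("192.168.1.0/24")),
            ("permit", ipaddress.ip_network("192.168.0.0/16")),
        ]

        def evaluate(src_ip):
            addr = ipaddress.ip_address(src_ip)
            for action, net in RULES:
                if addr in net:   # first matching rule wins
                    return action
            return "deny"         # implicit deny, as in Cisco ACLs

        # evaluate("10.1.2.3")    -> "permit"
        # evaluate("192.168.1.5") -> "deny"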

    Software-Driven and Virtualized Architectures for Scalable 5G Networks

    In this dissertation, we argue that it is essential to rearchitect 4G cellular core networks, sitting between the Internet and the radio access network, to meet the scalability, performance, and flexibility requirements of 5G networks. Today, there is a growing consensus among operators and the research community that software-defined networking (SDN), network function virtualization (NFV), and mobile edge computing (MEC) paradigms will be the key ingredients of next-generation cellular networks. Motivated by these trends, we design and optimize three core network architectures, SoftMoW, SoftBox, and SkyCore, for different network scales, objectives, and conditions. SoftMoW provides global control over nationwide core networks with the ultimate goal of enabling new routing and mobility optimizations. SoftBox attempts to enhance policy enforcement in statewide core networks to enable low-latency, signaling-efficient, and customized services for mobile devices. SkyCore is aimed at realizing a compact core network for citywide UAV-based radio networks that are going to serve first responders in the future. Network slicing techniques make it possible to deploy these solutions on the same infrastructure in parallel. To better support mobility and provide verifiable security, these architectures can use an addressing scheme that separates network locations and identities with self-certifying, flat and non-aggregatable address components. To benefit the proposed architectures, we designed a high-speed and memory-efficient router, called Caesar, for this type of addressing scheme.
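
    The self-certifying addressing idea can be sketched briefly: the identifier is derived by hashing the host's public key, so the binding between an address and its owner can be verified by presenting the key itself, without a PKI, and the resulting identifier is flat, carrying no topological structure to aggregate on. This is a generic illustration under assumed parameters, not the dissertation's actual scheme or Caesar's data structures.

        import hashlib, os

        def make_address(public_key: bytes) -> str:
            # Flat, non-aggregatable: the hash has no topological structure.
            return hashlib.sha256(public_key).hexdigest()[:40]  # 160-bit id

        def verify(address: str, presented_key: bytes) -> bool:
            # Anyone can check the address/key binding without a PKI.
            return make_address(presented_key) == address

        pubkey = os.urandom(32)  # stand-in for a real public key
        addr = make_address(pubkey)
        assert verify(addr, pubkey)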

    Smart Metering Technology and Services

    The global energy context has become increasingly complex in recent decades: rising fuel prices, economic crisis, and new international environmental and energy policies are putting pressure on companies. Nowadays, as we confront global warming and climate change, smart metering technology is in effective use and is crucial for reaching the 2020 energy efficiency and renewable energy targets and for the future of smart grids. Environmental targets will reshape the electricity sector over the coming century. Smart technologies and demand-side management are the key features of the future electricity sector. The central challenges are coupling innovative smart metering services with smart meter technologies, and enabling consumers' behaviour to interact with new technologies and policies. The book examines the future of electricity demand and the challenges posed by climate change through smart meter technologies and smart meter services. The book is written by leaders from academia and industry experts who work on smart meter technologies, infrastructure, protocols, economics, policies and regulations. It provides a promising view of the future of electricity demand. This book is intended for academics and engineers working in universities, research institutes, utilities and industry who wish to develop their ideas and obtain new information about smart meters.

    Fault injection testing method of software implemented fault tolerance mechanisms of web service systems

    Testing Web Service applications and their fault tolerance mechanisms (FTMs) is crucial for the development of today's applications. The performance and FTMs of composed service systems are hard to measure at design time because service instability is often caused by the nature of the network. Testing in a real Internet environment is difficult to set up and control. However, the adequacy of FTMs and the performance of Web Service applications can be tested efficiently by injecting faults and observing how the target system performs under faulty conditions. This thesis investigates what is involved in testing the software-implemented fault tolerance mechanisms of Web Service systems through fault injection. We have developed a fault injection toolkit that emulates a WAN within a LAN environment between composed service components and offers full control over the emulated environment, in addition to the ability to inject communication faults and specific software faults. The tool also generates background workloads on the tested system to produce more realistic results. The testing method requires that the target system be constructed as a collection of Web Service applications interacting via messages. This enables the insertion of faults into the target system to emulate the incorrect behaviour of faulty conditions by injecting communication faults and manipulating messages. This approach allows the injection of faults without requiring any significant changes to the target system. The testing method injects two classes of faults, mainly communication and interface faults, because of their large impact on Web Service system dependability. The method differs from previous work not only by injecting communication faults based on a Wide Area Network emulator, but also in its ability to inject a combination of communication and interface faults, which can cause what are called Byzantine (arbitrary) faults at the application level. The proposed fault injection method has been applied to test a Web Service system deploying a so-called WS-Mediator for improving system reliability. The WS-Mediator claims to offer comprehensive off-the-shelf fault tolerance mechanisms to cope with various kinds of typical Web Service application scenarios. We chose to use the N-version programming mechanism offered by the WS-Mediator, which we tested with our tool. The testing demonstrated the usefulness of the method and its capacity to test the target system under different circumstances and faulty conditions.
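
    The toolkit itself is not reproduced here, but the message-level fault injection it describes can be sketched as a small interceptor that randomly drops, delays, or corrupts payloads in transit between services, emulating communication faults and WAN latency. The probabilities and the byte-flip corruption strategy below are arbitrary assumptions for illustration.

        import random, time

        def inject(message: bytes, p_drop=0.05, p_delay=0.2,
                   p_corrupt=0.05, max_delay_s=1.0):
            """Return the possibly faulty message, or None if dropped."""
            if random.random() < p_drop:
                return None  # communication fault: message loss
            if random.random() < p_delay:
                time.sleep(random.uniform(0, max_delay_s))  # WAN-like delay
            if random.random() < p_corrupt and message:
                i = random.randrange(len(message))  # interface fault:
                message = (message[:i]              # flip one byte
                           + bytes([message[i] ^ 0xFF])
                           + message[i + 1:])
            return message

        # A proxy would call inject() on every message it relays between
        # the composed services, leaving the services themselves unchanged.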

    ACUTA Journal of Telecommunications in Higher Education

    In This Issue: President's Message; From the Editor; IT-Style Alphabet Soup; Software-Defined WAN (SD-WAN): Moving Beyond MPLS; IoT: The Internet of Things; Is the LPWAN in Your Future?; Ingredient for Wireless Success: DAS; Hot Issues in Communications Technology Law; Institutional Excellence Award: CSU Fullerton's Shared Cloud Services; DIDs for ELINs?; ISE... ERP... KnowBe

    A Generic Network and System Management Framework

    Networks and distributed systems have formed the basis of an ongoing communications revolution that has led to the genesis of a wide variety of services. The constantly increasing size and complexity of these systems does not come without problems. In some organisations, the deployment of Information Technology has reached a state where the benefits from downsizing and rightsizing by adding new services are undermined by the effort required to keep the system running. Management of networks and distributed systems in general has a straightforward goal: to provide a productive environment in which work can be performed effectively. The work required for management should be a small fraction of the total effort. Most IT systems are still managed in an ad hoc style without any carefully elaborated plan. In such an environment, the success of management decisions depends entirely on the qualifications and knowledge of the administrator. The thesis provides an analysis of the state of the art in the area of Network and System Management and identifies the key requirements that must be addressed for the provisioning of Integrated Management Services. These include the integration of the different management-related aspects (i.e. integration of heterogeneous Network, System and Service Management). The thesis then proposes a new framework, INSMware, for the provision of Management Services. It provides a fundamental basis for the realisation of a new approach to Network and System Management. It is argued that Management Systems can be derived from a set of pre-fabricated and reusable Building Blocks that break up the required functionality into a number of separate entities, rather than being developed from scratch. It proposes a high-level logical model, usable as a reference model, that accommodates the range of requirements and environments applicable to Integrated Network and System Management. A development methodology is introduced that reflects the principles of the proposed approach and provides guidelines to structure the analysis, design and implementation phases of a management system. The INSMware approach can further be combined with the componentware paradigm for the implementation of the management system. Based on these principles, a prototype for the management of SNMP systems has been implemented using industry-standard middleware technologies. It is argued that the development of a management system based on componentware principles can offer a number of benefits: INSMware Components may be re-used, and system solutions will become more modular and thereby easier to construct and maintain.
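
    As a rough illustration of the building-block idea (not INSMware's actual design; the class and method names are invented), the sketch below assembles a management system from reusable components that share one small interface, so new management functionality is plugged in rather than developed from scratch.

        from abc import ABC, abstractmethod

        class ManagementComponent(ABC):
            """Common interface shared by all building blocks."""
            @abstractmethod
            def poll(self) -> dict:
                """Collect management data from the managed resource."""

        class SnmpPoller(ManagementComponent):
            def __init__(self, host: str, oids: list):
                self.host, self.oids = host, oids
            def poll(self) -> dict:
                # Placeholder: a real component would issue SNMP GET
                # requests here via an SNMP library.
                return {oid: None for oid in self.oids}

        class ManagementSystem:
            """Assembled from pre-fabricated components rather than
            written from scratch."""
            def __init__(self, components):
                self.components = components
            def run_once(self) -> dict:
                return {type(c).__name__: c.poll() for c in self.components}

        system = ManagementSystem([SnmpPoller("router1", ["1.3.6.1.2.1.1.1.0"])])
        print(system.run_once())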