
    Will SDN be part of 5G?

    For many, this is no longer a valid question, and the case is considered settled: SDN/NFV (Software Defined Networking/Network Function Virtualization) provides the inevitable innovation enablers that solve many outstanding management issues in 5G. However, given the monumental task of softwarizing the radio access network (RAN) while 5G is just around the corner, and with some companies already unveiling their 5G equipment, there is a realistic concern that we may only see point solutions involving SDN technology instead of a fully SDN-enabled RAN. This survey paper identifies the important obstacles in the way and reviews the state of the art of the relevant solutions. It differs from previous surveys on SDN-based RAN in that it focuses on the salient problems and discusses solutions proposed both within and outside the SDN literature. Our main focus is on fronthaul, backward compatibility, the supposedly disruptive nature of SDN deployment, business cases and monetization of SDN-related upgrades, the latency of general purpose processors (GPPs), and the additional security vulnerabilities that softwarization brings to the RAN. We also provide a summary of architectural developments in the SDN-based RAN landscape, as not all work can be covered under the focused issues. This paper provides a comprehensive survey of the state of the art of SDN-based RAN and clearly points out the gaps in the technology. Comment: 33 pages, 10 figures

    Policy-based Information Sharing using Software-Defined Networking in Cloud Systems

    Cloud computing is rapidly becoming a ubiquitous technology. It enables an escalation in computing capacity, storage, and performance without the need to invest in new infrastructure or bear the maintenance expenses that follow. Security is among the major concerns of organizations that are still reluctant to adopt this technology: the cloud is dynamic, and with so many different parameters involved, it is a difficult task to regulate it. With an approach that blends usage management and statistical learning, this research yielded a novel way to mitigate some of the issues arising from questionable security and to regulate performance (utilization of resources). This research also explored how to enforce the policies related to the resources inside a virtual machine (VM), beyond providing initial access control. As well, this research compared various encryption schemes and observed their behavior in the cloud. We considered various components in the cloud to deduce a multi-cost function, which in turn helps to regulate the cloud. While guaranteeing security policies in the cloud, it is essential to add security to the network, because the virtual cloud and SDN tie together. Enforcing network-wide policies has always been a challenging task in the domain of communication networks. Software-defined networking (SDN) enables the use of a central controller to define policies and the use of each network switch to enforce them. While this presents an attractive operational model, it exposes a very low-level framework and is not suitable for directly implementing high-level policies. Therefore, we present a new framework for defining policies and easily compiling them from a user interface directly into OpenFlow actions and usage management system processes. This demonstrated capability allows cloud administrators to enforce both network and usage policies on the cloud. A minimal compilation sketch follows the abstract below.
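    The compilation step described above can be pictured with a small sketch. The following Python fragment is an illustration only, not the paper's actual framework: the Policy fields, the authorize/compile helper, and the flow-entry layout are hypothetical, loosely modeled on OpenFlow match/action semantics, and show how a high-level sharing policy might be lowered into a switch-level rule.

```python
# Minimal sketch (assumed, not the paper's framework): lowering a high-level
# information-sharing policy into an OpenFlow-style match/action entry.
from dataclasses import dataclass


@dataclass
class Policy:
    subject_ip: str   # tenant VM allowed (or not) to share data
    object_ip: str    # destination that would receive the data
    allow: bool       # permit or deny the flow


def compile_to_flow(policy: Policy) -> dict:
    """Turn a high-level policy into a switch-level flow entry (illustrative)."""
    flow = {
        "match": {"ipv4_src": policy.subject_ip,
                  "ipv4_dst": policy.object_ip},
        "priority": 100,
    }
    # Permit policies forward traffic normally; deny policies install a drop
    # rule (an empty action list drops matching packets in OpenFlow semantics).
    flow["actions"] = ["NORMAL"] if policy.allow else []
    return flow


if __name__ == "__main__":
    p = Policy(subject_ip="10.0.0.5", object_ip="10.0.1.7", allow=False)
    print(compile_to_flow(p))  # a drop rule for the disallowed pair
```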

    An access control and authorization model with Open stack cloud for Smart Grid

    Compared to authentication, which establishes the identity of a user and associates it with its tasks and processes within the system, authorization in access control is concerned with confirming that a user, and its tasks in the form of system processes, is granted access to the assets of a particular domain only when proven compliant with the identified policies. Access control and authorization have been an area of interest for researchers seeking to enhance the security of critical assets for many decades. Our primary focus is on access control models based on attribute-based access control (ABAC), and in this paper we integrate ABAC with the OpenStack cloud to achieve a finer level of granularity in access policies for a domain such as the smart grid. The technical advancement of the current era demands that critical infrastructure such as the traditional electrical grid open up to modern information and communication technology (ICT) to gain efficiency, scalability, accessibility, and transparency, and thereby adapt better to the real world. Incorporating ICT into the electric grid makes greater bi-directional interaction possible among stakeholders such as customers, generation units, distribution units, and administrations, and this has led international organizations to contribute to the standardization of smart grid concepts and technology so that the smart grid becomes a reality. The smart grid is by nature a very large-scale distributed system and needs to integrate existing legacy systems alongside its own security requirements. Cloud computing has proven to be an efficient approach for these requirements, and we have identified OpenStack as our cloud platform. We integrate the ABAC approach with the default RBAC approach of OpenStack and provide a framework that supports and integrates multiple access control policies in making authorization decisions. The smart grid domain, which requires support for multiple access policies (RBAC, ABAC, DAC, etc.), is considered as a case study for our access control and authorization model; a combined role/attribute check is sketched below.
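    As a rough illustration of how an attribute condition can be layered on top of a default role check, the sketch below combines the two decisions. The role names, attribute names, and the authorize helper are assumptions chosen for exposition; they are not the paper's implementation and not any OpenStack API.

```python
# Minimal sketch (assumed) of combining a role check (RBAC) with attribute
# conditions (ABAC) when authorizing an action on a smart-grid asset.
def authorize(user: dict, resource: dict, action: str) -> bool:
    """Grant access only if both the role rule and the attribute rule hold."""
    # RBAC part: e.g. only a 'grid_operator' may issue control actions.
    rbac_ok = ("grid_operator" in user.get("roles", [])) if action == "control" else True
    # ABAC part: e.g. the user's utility domain must match the meter's domain.
    abac_ok = user.get("domain") == resource.get("domain")
    return rbac_ok and abac_ok


user = {"roles": ["grid_operator"], "domain": "distribution-west"}
meter = {"type": "smart_meter", "domain": "distribution-west"}
print(authorize(user, meter, "control"))  # True: role and attributes both match
```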

    Open Source Solutions for Building IaaS Clouds

    Cloud computing is not only a pool of resources and services offered through the internet, but also a technology solution that allows optimization of resource use, cost minimization, and energy consumption reduction. Enterprises moving towards cloud technologies have to choose between public cloud services, such as Amazon Web Services, Microsoft Cloud, and Google Cloud services, or private self-built clouds. While the former are offered with affordable fees, the latter provide more privacy and control. In this context, many open source software projects approach the building of private, public, or hybrid clouds, depending on the users' needs and on the available capabilities. To choose among the different open source solutions, an analysis is necessary in order to select the most suitable one according to the enterprise's goals and requirements. In this paper, we present an in-depth study and comparison of five open source frameworks that have recently been gaining attention and growing fast: CloudStack, OpenStack, Eucalyptus, OpenNebula, and Nimbus. We present their architectures and discuss their properties, features, useful information, and our own insights on these frameworks.

    Agile management and interoperability testing of SDN/NFV-enriched 5G core networks

    In the fifth generation (5G) era, radio internet protocol capacity is expected to reach 20 Gb/s per sector, and ultra-large content traffic will travel across faster wireless/wireline access networks and the packet core network. Moreover, the massive and mission-critical Internet of Things is the main differentiator of 5G services. These types of real-time and large-bandwidth-consuming services require a radio latency of less than 1 ms and an end-to-end latency of less than a few milliseconds. By distributing 5G core nodes closer to cell sites, the backhaul traffic volume and latency can be significantly reduced, since mobile devices can download content directly from a nearby content server. In this paper, we propose a novel solution based on software-defined networking and network function virtualization technologies to achieve agile management of 5G core network functionalities, with a proof-of-concept implementation targeted for the PyeongChang Winter Olympics, and describe the results of interoperability testing between two core networks. The back-of-envelope calculation below illustrates the latency benefit of moving core nodes toward the edge.
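    A back-of-envelope calculation makes the latency argument concrete. The figures used here are assumptions for illustration (roughly 5 microseconds per km of propagation delay in fiber, and hypothetical 300 km vs. 20 km core distances); they are not measurements from the paper.

```python
# Rough sketch (assumed figures): why placing core/content nodes near cell
# sites cuts round-trip propagation latency. Light in fiber travels at about
# two-thirds of c, i.e. roughly 5 microseconds per km one way.
US_PER_KM = 5.0


def fiber_rtt_ms(distance_km: float) -> float:
    """Round-trip fiber propagation delay in ms, ignoring queuing/processing."""
    return 2 * distance_km * US_PER_KM / 1000.0


print(f"central core (300 km): {fiber_rtt_ms(300):.1f} ms RTT")   # ~3.0 ms
print(f"edge core     (20 km): {fiber_rtt_ms(20):.2f} ms RTT")    # ~0.20 ms
```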

    Virtual Cluster Management for Analysis of Geographically Distributed and Immovable Data

    Thesis (Ph.D.) - Indiana University, Informatics and Computing, 2015
    Scenarios exist in the era of Big Data where computational analysis needs to utilize widely distributed and remote compute clusters, especially when the data sources are sensitive or extremely large and thus unable to move. A large dataset in Malaysia could be ecologically sensitive, for instance, and unable to be moved outside the country's boundaries. Controlling an analysis experiment in this virtual cluster setting can be difficult on multiple levels: with setup and control, with managing the behavior of the virtual cluster, and with interoperability issues across the compute clusters. Further, datasets can be distributed among clusters, or even across data centers, so it becomes critical to utilize data locality information to optimize the performance of data-intensive jobs. Finally, datasets are increasingly sensitive and tied to certain administrative boundaries, though once the data has been processed, the aggregated or statistical results can be shared across those boundaries. This dissertation addresses management and control of a widely distributed virtual cluster holding sensitive or otherwise immovable data sets through a controller. The Virtual Cluster Controller (VCC) gives control back to the researcher. It creates virtual clusters across multiple cloud platforms. In recognition of sensitive data, it can establish a single network overlay over widely distributed clusters. We define a novel class of data, namely immovable data that we call "pinned data", where the data is treated as a first-class citizen instead of being moved to where it is needed. We draw from our earlier work on a hierarchical data processing model, Hierarchical MapReduce (HMR), to process geographically distributed data, some of which are pinned data. Applications implemented in HMR use an extended MapReduce model in which computations are expressed as three functions: Map, Reduce, and GlobalReduce (sketched below). Further, by facilitating information sharing among resources, applications, and data, overall performance is improved. Experimental results show that the overhead of VCC is minimal and that HMR outperforms the traditional MapReduce model for a particular class of applications. The evaluations also show that information sharing between resources and applications through the VCC shortens the hierarchical data processing time, while satisfying the constraints on the pinned data.
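    The three-function model can be sketched compactly. The code below is an illustrative toy, not the dissertation's HMR implementation: the function names and the word-count workload are assumptions, chosen to show that only small local aggregates, never the pinned records themselves, cross cluster boundaries.

```python
# Toy sketch (assumed) of the Map / Reduce / GlobalReduce model: each cluster
# runs Map and a local Reduce over its pinned data; only the per-cluster
# aggregates are shipped to a GlobalReduce step.
from collections import defaultdict


def map_fn(record):                   # per-record key/value extraction
    word = record.strip().lower()
    return [(word, 1)] if word else []


def reduce_fn(key, values):           # local (per-cluster) aggregation
    return key, sum(values)


def global_reduce_fn(key, partials):  # aggregation across clusters
    return key, sum(partials)


def local_cluster_run(records):
    """Map + Reduce executed where the pinned data lives."""
    groups = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            groups[key].append(value)
    return dict(reduce_fn(k, vs) for k, vs in groups.items())


def global_run(per_cluster_results):
    """GlobalReduce over the small, shareable per-cluster aggregates."""
    groups = defaultdict(list)
    for result in per_cluster_results:
        for key, value in result.items():
            groups[key].append(value)
    return dict(global_reduce_fn(k, vs) for k, vs in groups.items())


cluster_a = ["alpha", "beta", "alpha"]   # stays in data center A
cluster_b = ["beta", "beta", "gamma"]    # stays in data center B
print(global_run([local_cluster_run(cluster_a), local_cluster_run(cluster_b)]))
# {'alpha': 2, 'beta': 3, 'gamma': 1}
```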