
    SDN/NFV-enabled satellite communications networks: opportunities, scenarios and challenges

    In the context of next-generation 5G networks, the satellite industry is clearly committed to revisiting and revamping the role of satellite communications. As major drivers in the evolution of (terrestrial) fixed and mobile networks, Software Defined Networking (SDN) and Network Function Virtualisation (NFV) technologies are also being positioned as central enablers of an improved and more flexible integration of the satellite and terrestrial segments, offering satellite networks further service innovation and business agility through advanced network resource management techniques. Through the analysis of scenarios and use cases, this paper describes the benefits that SDN/NFV technologies can bring to satellite communications towards 5G. Three scenarios are presented and analysed to delineate different potential improvement areas pursued through the introduction of SDN/NFV technologies in the satellite ground segment domain. Within each scenario, a number of use cases are developed to gain further insight into specific capabilities and to identify the technical challenges stemming from them.
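
    The paper analyses scenarios and use cases rather than prescribing code, but the kind of SDN-driven resource management it points to can be pictured with a toy example. The Python sketch below is illustrative only: all names (Link, Flow, pick_link) and figures are hypothetical, and the logic is a generic centralised placement decision, not a mechanism from the paper.

    # Hypothetical sketch: a centralised SDN-style controller steering flows
    # between a terrestrial and a satellite link. Not from the paper.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Link:
        name: str
        latency_ms: float      # one-way propagation latency
        free_mbps: float       # currently unallocated capacity

    @dataclass
    class Flow:
        flow_id: str
        demand_mbps: float
        max_latency_ms: float  # QoS bound the flow must respect

    def pick_link(flow: Flow, links: list) -> Optional[Link]:
        """Prefer the lowest-latency link that satisfies both the latency
        bound and the bandwidth demand; reject if none qualifies."""
        feasible = [l for l in links
                    if l.latency_ms <= flow.max_latency_ms
                    and l.free_mbps >= flow.demand_mbps]
        if not feasible:
            return None        # admission control: reject, don't degrade
        best = min(feasible, key=lambda l: l.latency_ms)
        best.free_mbps -= flow.demand_mbps  # reserve the capacity
        return best

    links = [Link("terrestrial", 10.0, 50.0), Link("GEO-satellite", 270.0, 500.0)]
    video = Flow("video-1", demand_mbps=80.0, max_latency_ms=400.0)
    print(pick_link(video, links))  # demand exceeds terrestrial capacity, so GEO wins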

    To Investigate Data Center Performance and Quality of Service in IaaS Cloud Computing Systems.

    Cloud data center management is a key problem due to the numerous and heterogeneous strategies that can be applied, ranging from VM placement to federation with other clouds. Performance evaluation of cloud computing infrastructures is required to predict and quantify the cost-benefit of a strategy portfolio and the corresponding Quality of Service (QoS) experienced by users. Such analyses are not feasible by simulation or field experimentation, due to the great number of parameters that have to be investigated. In this paper, we present an analytical model, based on Stochastic Reward Nets (SRNs), that is both scalable, to model systems composed of thousands of resources, and flexible, to represent different policies and cloud-specific strategies. Several performance metrics are defined and evaluated to analyze the behavior of a cloud data center: utilization, availability, waiting time, and responsiveness. A resiliency analysis is also provided to take load bursts into account. Finally, a general approach is presented that, starting from the concept of system capacity, can help system managers appropriately set the data center parameters under different working conditions.
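
    Two of the metrics listed above can be illustrated with a far simpler closed-form model than the SRNs the paper develops. The sketch below is a minimal stand-in under stated assumptions: an M/M/c queue with invented parameters, evaluated with the Erlang-C formula, showing how utilization and mean waiting time fall out of the arrival rate, the service rate, and the number of resources.

    # Minimal sketch, not the paper's SRN model: M/M/c estimates of
    # utilization and mean waiting time via the Erlang-C formula.
    import math

    def erlang_c(c: int, lam: float, mu: float) -> float:
        """Probability that an arriving request must wait."""
        a = lam / mu                       # offered load in Erlangs
        rho = a / c                        # per-server utilization
        assert rho < 1, "system must be stable"
        s = sum(a**k / math.factorial(k) for k in range(c))
        top = a**c / (math.factorial(c) * (1 - rho))
        return top / (s + top)

    c, lam, mu = 20, 18.0, 1.0             # servers, arrivals/s, service rate (invented)
    pw = erlang_c(c, lam, mu)
    wait = pw / (c * mu - lam)             # mean time in queue for M/M/c
    print(f"utilization={lam / (c * mu):.2f}  P(wait)={pw:.4f}  E[W]={wait:.4f}s")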

    An Embryonics Inspired Architecture for Resilient Decentralised Cloud Service Delivery

    Data-driven artificial intelligence applications arising from Internet of Things technologies can have profound, wide-reaching societal benefits at the intersection of the cyber and physical domains. Use cases are expanding rapidly. For example, smart homes and smart buildings provide intelligent monitoring, resource optimisation, safety, and security for their inhabitants; smart cities can manage transport, waste, energy, and crime on large scales; and smart manufacturing can autonomously produce goods through the self-management of factories and logistics. As these use cases expand further, the requirement to ensure data is processed accurately and in a timely manner becomes ever more crucial, as many of these applications are safety critical, and loss of life and economic damage are likely possibilities in the event of system failure. While the typical service delivery paradigm, cloud computing, is strong because it operates upon economies of scale, its physical distance from these applications creates network latency that is incompatible with safety-critical applications. To complicate matters further, the environments they operate in are becoming increasingly hostile, with resource-constrained and mobile wireless networking commonplace. These issues drive the need for new service delivery architectures which operate closer to, or even upon, the network devices, sensors, and actuators which compose these IoT applications at the network edge.

    These hostile and resource-constrained environments require the adaptation of traditional cloud service delivery models to decentralised mobile and wireless environments. Such architectures need to provide persistent service delivery in the face of a variety of internal and external changes: resilient decentralised cloud service delivery. While the current state of the art proposes numerous techniques to enhance the resilience of services in this manner, none provides an architecture capable of delivering data processing services in a cloud manner that is inherently resilient. Adopting techniques from autonomic computing, whose characteristics are resilient by nature, this thesis presents a biologically inspired platform modelled on embryonics. Embryonic systems have an ability to self-heal and self-organise whilst showing the capacity to support decentralised data processing.

    An initial model for embryonics-inspired resilient decentralised cloud service delivery is derived according to both the decentralised cloud and resilience requirements given for this work. Next, this model is simulated using cellular automata, which illustrate the embryonic concept's ability to provide self-healing service delivery under varying system component loss. This highlights optimisation techniques, including application complexity bounds, differentiation optimisation, self-healing aggression, and varying system starting conditions, all of which can be adjusted to vary the resilience performance of the system depending upon different resource capabilities and environmental hostilities. Next, a proof-of-concept implementation is developed and validated which illustrates the efficacy of the solution. This proof-of-concept is evaluated on a larger scale, where batches of tests highlighted the different performance criteria and constraints of the system. One key finding was the considerable quantity of redundant messages produced under successful scenarios, which were helpful in enabling resilience yet could increase network contention; balancing these attributes is therefore important for each use case. Finally, graph-based resilience algorithms were executed across all tests to understand the structural resilience of the system and whether this enabled suitable measurement or prediction of the application's resilience. Interestingly, this study highlighted that although the system was not considered structurally resilient, applications were still executed in the face of many continued component failures, showing that the autonomic embryonic functionality developed was succeeding in executing applications resiliently, and that structural and application resilience do not necessarily coincide. Additionally, one graph metric, assortativity, was highlighted as being predictive of application resilience, although not of structural resilience.
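
    To make the self-healing idea concrete, the following Python sketch assumes the standard embryonics premise described above: every cell stores the entire application "genome" and differentiates by position, so any live spare can re-express the task of a failed cell. The names and semantics are illustrative, not the thesis platform.

    # Illustrative sketch of embryonic self-healing, not the thesis code.
    import random

    GENOME = ["ingest", "filter", "aggregate", "publish"]  # application tasks

    class Cell:
        def __init__(self, idx: int):
            self.alive = True
            # Differentiation by position: early cells run tasks, the rest are spares.
            self.task = GENOME[idx] if idx < len(GENOME) else None

    def self_heal(cells):
        """Re-express tasks lost to failed cells on available spares."""
        running = {c.task for c in cells if c.alive and c.task}
        spares = [c for c in cells if c.alive and c.task is None]
        for task in GENOME:
            if task not in running and spares:
                spares.pop(0).task = task   # a spare re-differentiates

    cells = [Cell(i) for i in range(8)]     # 4 workers + 4 spares
    random.choice([c for c in cells if c.task]).alive = False  # one worker fails
    self_heal(cells)
    assert {c.task for c in cells if c.alive} >= set(GENOME)   # app still whole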

    Development of a simulator of networks of evolutionary processors (NEPs) in the cloud (Spark)

    Nature-inspired computing has become one of the most frequently used techniques to handle complex problems such as NP-hard optimization problems. This kind of computing has several advantages over traditional computing, including resiliency, parallel data processing, and low power consumption. One of the active research areas in nature-inspired algorithms is Networks of Evolutionary Processors (NEPs). A NEP consists of several cells attached together in a graph: the cells represent the nodes, while the edges of the graph transfer data between the nodes of the system. In this thesis we construct a NEP system implemented on top of the Apache Spark environment. The use of the Spark platform is essential in this work due to the capabilities it supplies: it is a suitable environment for solving complicated problems, and a natural choice for designing the NEP system, since it can execute the system in a distributed manner. For this reason, this thesis details how to install, design, and operate the NEP system on Apache Spark. A NEP simulation is delivered in this work, together with an analysis of the system's parameters for performance evaluation, examining each individual factor affecting the performance of the NEPs. After testing the system, it became clear that using NEPs on a decentralized cloud ecosystem can be regarded as an effective method to handle data of different formats and to solve optimization problems such as the Adleman, 3-colorability, and Massive-NEP problems. Moreover, the scheme is robust and can be adapted to handle data that scales up to big data, characterized by its volume and heterogeneity; in this context, heterogeneity refers to collecting data from different sources. The utilization of Spark as a platform to operate the NEP system also has its benefits: the environment is characterized by fast task handling, and the underlying Hadoop architecture on which Spark builds is based mainly on the map and reduce functions. Distributing the NEP system's tasks over this cloud-based environment made it possible to obtain sound results in all three of the examples investigated and examined with this method.
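
    A single NEP computation step can be sketched compactly. The Python below assumes textbook NEP semantics rather than this thesis's implementation: an evolutionary step applies each cell's substitution rule to its words, and a communication step moves words that pass output/input filters across a complete graph (so a word may re-enter its sender). On Spark, each cell's word set would become an RDD and these loops would turn into map/filter stages. All names are invented.

    # Illustrative NEP step in plain Python; not the thesis simulator.
    from dataclasses import dataclass, field

    @dataclass
    class Cell:
        rule: tuple                        # substitution (a, b): rewrite a -> b
        out_filter: set                    # symbols a word needs to leave the cell
        in_filter: set                     # symbols a word needs to enter the cell
        words: set = field(default_factory=set)

    def evolve(cell):
        a, b = cell.rule
        cell.words |= {w.replace(a, b, 1) for w in cell.words if a in w}

    def communicate(cells):
        leaving = [{w for w in c.words if c.out_filter <= set(w)} for c in cells]
        for c, out in zip(cells, leaving):
            c.words -= out
        pool = set().union(*leaving)       # words in transit on the (complete) graph
        for c in cells:
            c.words |= {w for w in pool if c.in_filter <= set(w)}

    cells = [Cell(("a", "b"), {"b"}, set(), {"aa"}),
             Cell(("b", "c"), {"c"}, {"b"})]
    evolve(cells[0])
    communicate(cells)
    print([c.words for c in cells])  # "ba" reaches cell 1; both input filters accept it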

    ASiMOV: Microservices-based verifiable control logic with estimable detection delay against cyber-attacks to cyber-physical systems

    Automatic control in Cyber-Physical Systems brings advantages but also increased risks due to cyber-attacks. This Ph.D. thesis proposes a novel reference architecture for distributed control applications that increases security against cyber-attacks on the control logic. The core idea is to replicate each instance of a control application and to detect attacks by verifying their outputs. The verification logic has at its disposal an exact model of the control logic, although the two logics are decoupled on two different devices. The verification is asynchronous to the feedback control loop, to avoid introducing delay between the controller(s) and system(s). The time required to detect a successful attack is analytically estimable, which enables control-theoretic techniques to prevent damage through appropriate planning decisions. The proposed architecture for a controller and an Intrusion Detection System is composed of event-driven autonomous components (microservices), which can be deployed as separate virtual machines (e.g., containers) on cloud platforms. Under the proposed architecture, orchestration techniques enable dynamic re-deployment, acting as a mitigation or prevention mechanism defined at the level of the computer architecture. The proposal, which we call ASiMOV (Asynchronous Modular Verification), is based on a model that separates the state of a controller from the state of its execution environment. We provide details of the model and a microservices implementation. Through the analysis of the delay introduced in both the control loop and the detection of attacks, we provide guidelines to determine which control systems are suitable for adopting ASiMOV. Simulations show the behavior of ASiMOV both in the absence and in the presence of cyber-attacks.
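
    The asynchronous-verification idea can be pictured with a small sketch. The Python below illustrates the concept under assumed semantics rather than reproducing the ASiMOV implementation: the verifier owns an exact replica of a toy control law and replays logged events off the control path, so detection lags behind the loop but never delays it.

    # Conceptual sketch, not the ASiMOV codebase; names are invented.
    import queue
    import threading

    def control_logic(state, measurement):
        """Toy integral-style controller; returns (new_state, actuation)."""
        error = 1.0 - measurement          # setpoint fixed at 1.0
        return state + error, 0.5 * error

    events = queue.Queue()                 # log of (measurement, reported output)

    def verifier():
        # Off the control path: detection delay = queue lag + replay time.
        state = 0.0                        # exact replica of the controller state
        while True:
            measurement, reported = events.get()
            state, expected = control_logic(state, measurement)
            if expected != reported:       # tolerance omitted for brevity
                print("ALERT: controller output diverges from the model")
            events.task_done()

    threading.Thread(target=verifier, daemon=True).start()

    state = 0.0
    for measurement in (0.2, 0.4, 0.9):    # the real control loop, never blocked
        state, out = control_logic(state, measurement)
        events.put((measurement, out))     # verification happens asynchronously
    events.join()                          # wait so the demo finishes before exit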